What Is Apache Kafka?

Originally developed at LinkedIn as a publish-subscribe messaging system to handle massive amounts of data, Apache Kafka® is today an open-source distributed event streaming platform used by over 80% of the Fortune 100.

This beginner’s Kafka tutorial will help you learn Kafka, its benefits and use cases, and how to get started from the ground up. It includes a look at Kafka architecture, core concepts, and the connector ecosystem.

Introduction

Apache Kafka is an event streaming platform used to collect, process, store, and integrate data at scale. It has numerous use cases including distributed streaming, stream processing, data integration, and pub/sub messaging.

To make complete sense of what Kafka does, we'll look at what an event streaming platform is and how it works. So before getting into Kafka architecture or its core components, let's discuss what an event is. This will help explain how Kafka stores events, how to get events in and out of the system, and how to analyze event streams.


What Is an Event?

An event is any type of action, incident, or change that's identified or recorded by software or applications: for example, a payment, a website click, or a temperature reading, along with a description of what happened.

In other words, an event is a combination of notification—the element of when-ness that can be used to trigger some other activity—and state. That state is usually fairly small, say less than a megabyte or so, and is normally represented in some structured format, say in JSON or an object serialized with Apache Avro™ or Protocol Buffers.

Kafka and Events – Key/Value Pairs

Kafka is based on the abstraction of a distributed commit log. By splitting a log into partitions, Kafka is able to scale systems out. As such, Kafka models events as key/value pairs. Internally, keys and values are just sequences of bytes, but externally in your programming language of choice, they are often structured objects represented in your language’s type system. Kafka famously calls the translation between language types and internal bytes serialization and deserialization. The serialized format is usually JSON, JSON Schema, Avro, or Protobuf.

Values are typically the serialized representation of an application domain object or some form of raw message input, like the output of a sensor.

Keys can also be complex domain objects but are often primitive types like strings or integers. The key part of a Kafka event is not necessarily a unique identifier for the event, like the primary key of a row in a relational database would be. It is more likely the identifier of some entity in the system, like a user, order, or a particular connected device.

This may not sound so significant now, but we’ll see later on that keys are crucial for how Kafka deals with things like parallelization and data locality.

Why Kafka? Benefits and Use Cases

Kafka is used by over 100,000 organizations across the world and is backed by a thriving community of professional developers who are constantly advancing the state of the art in stream processing together. Due to Kafka's high throughput, fault tolerance, resilience, and scalability, there are numerous use cases across almost every industry, from banking and fraud detection to transportation and IoT. We typically see Kafka used for purposes like those below.

Data Integration

Kafka can connect to nearly any other data source in traditional enterprise information systems, modern databases, or in the cloud. It forms an efficient point of integration with built-in data connectors, without hiding logic or routing inside brittle, centralized infrastructure.

Metrics and Monitoring

Kafka is often used for monitoring operational data. This involves aggregating statistics from distributed applications to produce centralized feeds with real-time metrics.

Log Aggregation

A modern system is typically a distributed system, and logging data must be centralized from the various components of the system to one place. Kafka often serves as a single source of truth by centralizing data across all sources, regardless of form or volume.

Stream Processing

Performing real-time computations on event streams is a core competency of Kafka. From real-time data processing to dataflow programming, Kafka ingests, stores, and processes streams of data as it's being generated, at any scale.

Publish-Subscribe Messaging

As a distributed pub/sub messaging system, Kafka works well as a modernized version of the traditional message broker. Any time a process that generates events must be decoupled from the process or from processes receiving the events, Kafka is a scalable and flexible way to get the job done.

Kafka Architecture – Fundamental Concepts

Kafka Topics

Events have a tendency to proliferate—just think of the events that happened to you this morning—so we’ll need a system for organizing them. Kafka’s most fundamental unit of organization is the topic, which is something like a table in a relational database. As a developer using Kafka, the topic is the abstraction you probably think the most about. You create different topics to hold different kinds of events and different topics to hold filtered and transformed versions of the same kind of event.

A topic is a log of events. Logs are easy to understand, because they are simple data structures with well-known semantics. First, they are append only: When you write a new message into a log, it always goes on the end. Second, they can only be read by seeking an arbitrary offset in the log, then by scanning sequential log entries. Third, events in the log are immutable—once something has happened, it is exceedingly difficult to make it un-happen. The simple semantics of a log make it feasible for Kafka to deliver high levels of sustained throughput in and out of topics, and also make it easier to reason about the replication of topics, which we’ll cover more later.

Logs are also fundamentally durable things. Traditional enterprise messaging systems have topics and queues, which store messages temporarily to buffer them between source and destination.

Since Kafka topics are logs, there is nothing inherently temporary about the data in them. Every topic can be configured to expire data after it has reached a certain age (or the topic overall has reached a certain size), from as short as seconds to as long as years, or even to retain messages indefinitely. The logs that underlie Kafka topics are files stored on disk. When you write an event to a topic, it is as durable as it would be if you had written it to any database you ever trusted.

The simplicity of the log and the immutability of the contents in it are key to Kafka’s success as a critical component in modern data infrastructure—but they are only the beginning.

Kafka Partitioning

If a topic were constrained to live entirely on one machine, that would place a pretty radical limit on the ability of Kafka to scale. It could manage many topics across many machines—Kafka is a distributed system, after all—but no one topic could ever get too big or aspire to accommodate too many reads and writes. Fortunately, Kafka does not leave us without options here: It gives us the ability to partition topics.

Partitioning takes the single topic log and breaks it into multiple logs, each of which can live on a separate node in the Kafka cluster. This way, the work of storing messages, writing new messages, and processing existing messages can be split among many nodes in the cluster.

How Kafka Partitioning Works

Having broken a topic up into partitions, we need a way of deciding which messages to write to which partitions. Typically, if a message has no key, subsequent messages will be distributed round-robin among all the topic’s partitions. In this case, all partitions get an even share of the data, but we don’t preserve any kind of ordering of the input messages. If the message does have a key, then the destination partition will be computed from a hash of the key. This allows Kafka to guarantee that messages having the same key always land in the same partition, and therefore are always in order.

For example, if you are producing events that are all associated with the same customer, using the customer ID as the key guarantees that all of the events from a given customer will always arrive in order. This creates the possibility that a very active key will create a larger and more active partition, but this risk is small in practice and is manageable when it presents itself. It is often worth it in order to preserve the ordering of keys.
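As a rough illustration of that idea (not Kafka's actual implementation, which hashes the serialized key with murmur2), the mapping from key to partition looks something like this:

    // Illustrative sketch only: the same key bytes always map to the same partition.
    static int partitionFor(byte[] keyBytes, int numPartitions) {
        return (java.util.Arrays.hashCode(keyBytes) & 0x7fffffff) % numPartitions;
    }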

Kafka Brokers

So far we have talked about events, topics, and partitions, but as of yet, we have not been too explicit about the actual computers in the picture. From a physical infrastructure standpoint, Kafka is composed of a network of machines called brokers. In a contemporary deployment, these may not be separate physical servers but containers running on pods running on virtualized servers running on actual processors in a physical datacenter somewhere. However they are deployed, they are independent machines each running the Kafka broker process. Each broker hosts some set of partitions and handles incoming requests to write new events to those partitions or read events from them. Brokers also handle replication of partitions between each other.

Replication

It would not do if we stored each partition on only one broker. Whether brokers are bare metal servers or managed containers, they and their underlying storage are susceptible to failure, so we need to copy partition data to several other brokers to keep it safe. Those copies are called follower replicas, whereas the main partition is called the leader replica. When you produce data to the leader—in general, reading and writing are done to the leader—the leader and the followers work together to replicate those new writes to the follower replicas.

This happens automatically, and while you can tune some settings in the producer to achieve varying levels of durability guarantees, this is not usually a process you have to think about as a developer building systems on Kafka. All you really need to know as a developer is that your data is safe, and that if one node in the cluster dies, another will take over its role.

Client Applications

Now let’s get outside of the Kafka cluster itself to the applications that use Kafka: the producers and consumers. These are client applications that contain your code, putting messages into topics and reading messages from topics. Every component of the Kafka platform that is not a Kafka broker is, at bottom, either a producer or a consumer or both. Producing and consuming are how you interface with a cluster.

Kafka Producers

The API surface of the producer library is fairly lightweight: In Java, there is a class called KafkaProducer that you use to connect to the cluster. You give this class a map of configuration parameters, including the address of some brokers in the cluster, any appropriate security configuration, and other settings that determine the network behavior of the producer. There is another class called ProducerRecord that you use to hold the key-value pair you want to send to the cluster.
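A minimal sketch of that in Java (the broker addresses, topic name, and key/value shown here are placeholders):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    Properties props = new Properties();
    props.put("bootstrap.servers", "broker1:9092,broker2:9092");   // assumed broker addresses
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

    try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
        // The key identifies an entity (here, a customer); the value is the event payload
        producer.send(new ProducerRecord<>("payments", "customer-42", "{\"amount\": 9.99}"));
    }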

To a first-order approximation, this is all the API surface area there is to producing messages. Under the covers, the library is managing connection pools, network buffering, waiting for brokers to acknowledge messages, retransmitting messages when necessary, and a host of other details no application programmer need concern herself with.

Kafka Consumers

Using the consumer API is similar in principle to the producer. You use a class called KafkaConsumer to connect to the cluster (passing a configuration map to specify the address of the cluster, security, and other parameters). Then you use that connection to subscribe to one or more topics. When messages are available on those topics, they come back in a collection called ConsumerRecords, which contains individual instances of messages in the form of ConsumerRecord objects. A ConsumerRecord object represents the key/value pair of a single Kafka message.
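A minimal sketch of that loop in Java (the broker address, group id, and topic name are placeholders):

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    Properties props = new Properties();
    props.put("bootstrap.servers", "broker1:9092");   // assumed broker address
    props.put("group.id", "payments-processor");      // assumed consumer group
    props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

    try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
        consumer.subscribe(List.of("payments"));       // assumed topic
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("key=%s value=%s%n", record.key(), record.value());
            }
        }
    }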

KafkaConsumer manages connection pooling and the network protocol just like KafkaProducer does, but there is a much bigger story on the read side than just the network plumbing. First of all, Kafka is different from legacy message queues in that reading a message does not destroy it; it is still there to be read by any other consumer that might be interested in it. In fact, it’s perfectly normal in Kafka for many consumers to read from one topic. This one small fact has a positively disproportionate impact on the kinds of software architectures that emerge around Kafka, which is a topic covered very well elsewhere.

Also, consumers need to be able to handle the scenario in which the rate of message consumption from a topic combined with the computational cost of processing a single message are together too high for a single instance of the application to keep up. That is, consumers need to scale. In Kafka, scaling consumer groups is more or less automatic.

Kafka Components and Ecosystem

If all you had were brokers managing partitioned, replicated topics with an ever-growing collection of producers and consumers writing and reading events, you would actually have a pretty useful system. However, the experience of the Kafka community is that certain patterns will emerge that will encourage you and your fellow developers to build the same bits of functionality over and over again around core Kafka.

You will end up building common layers of application functionality to repeat certain undifferentiated tasks. This is code that does important work but is not tied in any way to the business you’re actually in. It doesn’t contribute value directly to your customers. It’s infrastructure, and it should be provided by the community or by an infrastructure vendor.

It can be tempting to write this code yourself, but you should not. Kafka Connect, the Confluent Schema Registry, Kafka Streams, and ksqlDB are examples of this kind of infrastructure code. We’ll take a look at each of them in turn.

Kafka Connect


In the world of information storage and retrieval, some systems are not Kafka. Sometimes you would like the data in those other systems to get into Kafka topics, and sometimes you would like data in Kafka topics to get into those systems. This is exactly what Kafka Connect, Apache Kafka's integration API, does.

What Does Kafka Connect Do?

On the one hand, Kafka Connect is an ecosystem of pluggable connectors, and on the other, a client application. As a client application, Connect is a server process that runs on hardware independent of the Kafka brokers themselves. It is scalable and fault tolerant, meaning you can run not just one single Connect worker but a cluster of Connect workers that share the load of moving data in and out of Kafka from and to external systems. Kafka Connect also abstracts that code away from the user and instead requires only JSON configuration to run. For example, here’s how you’d stream data from Kafka to Elasticsearch:
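The original configuration isn't reproduced here, but a sketch along these lines is typical; the connector name, topic, and Elasticsearch URL are placeholders, and the connector class is Confluent's Elasticsearch sink:

    {
      "name": "orders-to-elasticsearch",
      "config": {
        "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
        "topics": "orders",
        "connection.url": "http://elasticsearch:9200",
        "key.ignore": "true",
        "schema.ignore": "true"
      }
    }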

Benefits of Kafka Connect

One of the primary advantages of Kafka Connect is its large ecosystem of connectors. Writing the code that moves data to a cloud blob store, or writes to Elasticsearch, or inserts records into a relational database is code that is unlikely to vary from one business to the next. Likewise, reading from a relational database, Salesforce, or a legacy HDFS filesystem is the same operation no matter what sort of application does it. You can definitely write this code, but spending your time doing that doesn’t add any kind of unique value to your customers or make your business more uniquely competitive.

All of these are examples of Kafka connectors available on Confluent Hub, a curated collection of connectors of all sorts and, most importantly, all licenses and levels of support. Some are commercially licensed and some can be used for free. Confluent Hub lets you search for source and sink connectors of all kinds and clearly shows the license of each connector. Of course, connectors need not come from the Hub and can be found on GitHub or elsewhere in the marketplace. And if after all that you still can’t find a connector that does what you need, you can write your own using a fairly simple API.

Now, it might seem straightforward to build this kind of functionality on your own: If an external source system is easy to read from, it would be easy enough to read from it and produce to a destination topic. If an external sink system is easy to write to, it would again be easy enough to consume from a topic and write to that system. But any number of complexities arise, including how to handle failover, horizontally scale, manage commonplace transformation operations on inbound or outbound data, distribute common connector code, configure and operate this through a standard interface, and more.

Connect seems deceptively simple on its surface, but it is in fact a complex distributed system and plugin ecosystem in its own right. And if that plugin ecosystem happens not to have what you need, the open-source Connect framework makes it simple to build your own connector and inherit all the scalability and fault tolerance properties Connect offers.

Schema Registry


Once applications are busily producing messages to Kafka and consuming messages from it, two things will happen. First, new consumers of existing topics will emerge. These are brand new applications—perhaps written by the team that wrote the original producer of the messages, perhaps by another team—and will need to understand the format of the messages in the topic. Second, the format of those messages will evolve as the business evolves. Order objects gain a new status field, usernames split into first and last name from full name, and so on. The schema of our domain objects is a constantly moving target, and we must have a way of agreeing on the schema of messages in any given topic.

Confluent Schema Registry exists to solve this problem.

What Is Schema Registry?

Schema Registry is a standalone server process that runs on a machine external to the Kafka brokers. Its job is to maintain a database of all of the schemas that have been written into topics in the cluster for which it is responsible. That “database” is persisted in an internal Kafka topic and cached in the Schema Registry for low-latency access. Schema Registry can be run in a redundant, high-availability configuration, so it remains up if one instance fails.

Schema Registry is also an API that allows producers and consumers to predict whether the message they are about to produce or consume is compatible with previous versions. When a producer is configured to use the Schema Registry, it calls an API at the Schema Registry REST endpoint and presents the schema of the new message. If it is the same as the last message produced, then the produce may succeed. If it is different from the last message but matches the compatibility rules defined for the topic, the produce may still succeed. But if it is different in a way that violates the compatibility rules, the produce will fail in a way that the application code can detect.
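In practice, hooking a producer up to Schema Registry is mostly configuration. A minimal sketch, assuming Avro serialization and a Schema Registry running at a placeholder URL:

    // Added to the producer's configuration map; the URL is a placeholder
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    props.put("value.serializer", "io.confluent.kafka.serializers.KafkaAvroSerializer");
    props.put("schema.registry.url", "http://schema-registry:8081");
    // On send, the serializer registers the value's schema with Schema Registry;
    // if the schema violates the topic's compatibility rules, the produce fails.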

Likewise on the consume side, if a consumer reads a message that has an incompatible schema from the version the consumer code expects, Schema Registry will tell it not to consume the message. Schema Registry doesn’t fully automate the problem of schema evolution—that is a challenge in any system regardless of the tooling—but it does make a difficult problem much easier by keeping runtime failures from happening when possible.

Looking at what we’ve covered so far, we’ve got a system for storing events durably, the ability to write and read those events, a data integration framework, and even a tool for managing evolving schemas. What remains is the purely computational side of stream processing.

Kafka Streams

In a growing Kafka-based application, consumers tend to grow in complexity. What might have started as a simple stateless transformation (e.g., masking out personally identifying information or changing the format of a message to conform with internal schema requirements) soon evolves into complex aggregation, enrichment, and more. If you recall the consumer code we looked at up above, there isn’t a lot of support in that API for operations like those: You’re going to have to build a lot of framework code to handle time windows, late-arriving messages, lookup tables, aggregation by key, and more. And once you’ve got that, recall that operations like aggregation and enrichment are typically stateful.

That “state” is going to be memory in your program’s heap, which means it’s a fault tolerance liability. If your stream processing application goes down, its state goes with it, unless you’ve devised a scheme to persist that state somewhere. That sort of thing is fiendishly complex to write and debug at scale and really does nothing to directly make your users’ lives better. This is why Apache Kafka provides a stream processing API. This is why we have Kafka Streams.

What Is Kafka Streams?

Kafka Streams is a Java API that gives you easy access to all of the computational primitives of stream processing: filtering, grouping, aggregating, joining, and more, keeping you from having to write framework code on top of the consumer API to do all those things. It also provides support for the potentially large amounts of state that result from stream processing computations. If you’re grouping events in a high-throughput topic by a field with many unique values and then computing a rollup over that group every hour, you might need to use a lot of memory.

Indeed, for high-volume topics and complex stream processing topologies, it’s not at all difficult to imagine that you’d need to deploy a cluster of machines sharing the stream processing workload like a regular consumer group would. The Streams API solves both problems by handling all of the distributed state problems for you: It persists state to local disk and to internal topics in the Kafka cluster, and it automatically reassigns state between nodes in a stream processing cluster when stream processing nodes are added to or removed from the cluster.
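To make that concrete, here is a sketch of an hourly rollup in the Streams DSL. The topic names, application id, and broker address are placeholders, and the window API shown (TimeWindows.ofSizeWithNoGrace) assumes a recent Kafka Streams version:

    import java.time.Duration;
    import java.util.Properties;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.kstream.KStream;
    import org.apache.kafka.streams.kstream.Produced;
    import org.apache.kafka.streams.kstream.TimeWindows;

    Properties props = new Properties();
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "page-view-counter");   // assumed application id
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");     // assumed broker address
    props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
    props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

    StreamsBuilder builder = new StreamsBuilder();
    KStream<String, String> views = builder.stream("page-views");          // assumed input topic

    views.groupByKey()
         .windowedBy(TimeWindows.ofSizeWithNoGrace(Duration.ofHours(1)))   // hourly rollup
         .count()
         .toStream((window, count) -> window.key() + "@" + window.window().startTime())
         .to("page-view-counts", Produced.with(Serdes.String(), Serdes.Long()));  // assumed output topic

    new KafkaStreams(builder.build(), props).start();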

In a typical microservice, stream processing is a thing the application does in addition to other functions. For example, a shipment notification service might combine shipment events with events in a customer information changelog containing customer records to produce shipment notification objects, which other services might turn into emails and text messages. But that shipment notification service might also be obligated to expose a REST API for synchronous key lookups by the mobile app or web front end when rendering views that show the status of a given shipment.

The service is reacting to events—and in this case, joining three streams together, and perhaps doing other windowed computations on the joined result—but it is also servicing HTTP requests against its REST endpoint, perhaps using the Spring Framework or Micronaut or some other Java API in common use. Because Kafka Streams is a Java library and not a set of dedicated infrastructure components that do stream processing and only stream processing, it’s trivial to stand up services that use other frameworks to accomplish other ends (like REST endpoints) and sophisticated, scalable, fault-tolerant stream processing.

Learn About Kafka with More Free Courses and Tutorials

  • Learn stream processing in Kafka with the Kafka Streams course
  • Get started with Kafka Connectors in the Kafka Connect course
  • Check out Michael Noll’s four-part series on Streams and Tables in Apache Kafka
  • Listen to the podcast about Knative 101: Kubernetes and Serverless Explained with Jacques Chester


Introduction to Apache Kafka

Apache Kafka® is a distributed event streaming platform that is used for building real-time data pipelines and streaming applications. Kafka is designed to handle large volumes of data in a scalable and fault-tolerant manner, making it ideal for use cases such as real-time analytics, data ingestion, and event-driven architectures.

At its core, Kafka is a distributed publish-subscribe messaging system. Data is written to Kafka topics by producers and consumed from those topics by consumers. Kafka topics can be partitioned, enabling the parallel processing of data, and topics can be replicated across multiple brokers for fault tolerance.

With Kafka you get command-line tools for management and administration tasks, and Java and Scala APIs to build an event streaming solution for your scenarios.

Ready to get started?

  • Sign up for Confluent Cloud, the fully managed cloud-native service for Apache Kafka®, and get started for free using the Cloud quick start.
  • Download Confluent Platform, the self-managed, enterprise-grade distribution of Apache Kafka, and get started using the Confluent Platform quick start.

Events and event streaming

To understand distributed event streaming in more detail, you should first understand that an event is a record that “something happened” in the world or in your business. For example, in a ride-share system, you might see the following event:

  • Event key: “Alice”
  • Event value: “Trip requested at work location”
  • Event timestamp: “Jun. 25, 2020 at 2:06 p.m.”

The event data describes what happened, when, and who was involved. Event streaming is the practice of capturing events like this example in real time from sources like databases, sensors, mobile devices, cloud services, and software applications.


An event streaming platform captures events in order, and these streams of events are stored durably so they can be processed, manipulated, and responded to in real time, or retrieved later. In addition, event streams can be routed to different destination technologies as needed. Event streaming ensures a continuous flow and interpretation of data so that the right information is at the right place, at the right time.

To accomplish this, Kafka is run as a cluster on one or more servers that can span multiple datacenters, and it provides its functionality in a distributed, highly scalable, elastic, fault-tolerant, and secure manner. In addition, Kafka can be deployed on bare-metal hardware, virtual machines, and containers, both on-premises and in the cloud.

Use cases

Event streaming is applied to a wide variety of use cases across a large number of industries and organizations. For example:

  • As a messaging system. For example, Kafka can be used to process payments and financial transactions in real time, such as in stock exchanges, banks, and insurance companies.
  • Activity tracking. For example, Kafka can be used to track and monitor cars, trucks, fleets, and shipments in real time, such as for taxi services, in logistics, and in the automotive industry.
  • To gather metrics data. For example, Kafka can be used to continuously capture and analyze sensor data from IoT devices or other equipment, such as in factories and wind parks.
  • For stream processing. For example, use Kafka to collect and react to customer interactions and orders, such as in retail, the hotel and travel industry, and mobile applications.
  • To decouple a system. For example, use Kafka to connect, store, and make available data produced by different divisions of a company.
  • To integrate with other big data technologies such as Hadoop.

Terminology

Kafka is a distributed system consisting of different kinds of servers and clients that communicate events via a high-performance TCP network protocol. These servers and clients are all designed to work together. The following are some key terms that you should be familiar with:

A broker refers to a server in the Kafka storage layer that stores event streams from one or more sources. A Kafka cluster is typically comprised of several brokers. Every broker in a cluster is also a bootstrap server, meaning if you can connect to one broker in a cluster, you can connect to every broker.

The Kafka cluster organizes and durably stores streams of events in categories called topics, which are Kafka’s most fundamental unit of organization. A topic is a log of events, similar to a folder in a filesystem, where events are the files in that folder.

A topic has the following characteristics:

  • A topic is append only: When a new event message is written to a topic, the message is appended to the end of the log.
  • Events in the topic are immutable, meaning they cannot be modified after they are written.
  • A consumer reads a log by looking for an offset and then reading log entries that follow sequentially.
  • Topics in Kafka are always multi-producer and multi-subscriber: a topic can have zero, one, or many producers that write events to it, as well as zero, one, or many consumers that subscribe to these events.

Topics cannot be queried; however, events in a topic can be read as often as needed, and unlike other messaging systems, events are not deleted after they are consumed. Instead, topics can be configured to expire data after it has reached a certain age or when the topic has reached a certain size. Kafka’s performance is effectively constant with respect to data size, so storing data for a long time should have a nominal effect on performance.

Kafka provides several CLI tools that you can use to manage clusters, brokers, and topics, as well as an Admin Client API so that you can implement your own admin tools.
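For example, a minimal Admin Client sketch that creates a topic; the broker address, topic name, and partition/replica counts are placeholders:

    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.NewTopic;

    Properties props = new Properties();
    props.put("bootstrap.servers", "broker1:9092");   // assumed broker address

    try (Admin admin = Admin.create(props)) {
        // Create "orders" with 3 partitions and a replication factor of 3
        admin.createTopics(List.of(new NewTopic("orders", 3, (short) 3)));
    }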

Confluent Tip

See the Confluent Cloud Quick Start to easily get started with Kafka for free.

Producers

Producers are clients that write events to Kafka. A producer specifies the topics it will write to and controls how events are assigned to partitions within a topic. This can be done in a round-robin fashion for load balancing, or it can be done according to some semantic partition function, such as by the event key.

Kafka provides the Java Producer API to enable applications to send streams of events to a Kafka cluster.

For details on producer design and how messages are exchanged between producers, brokers, and consumers, see Kafka Producer Design and Message Delivery Guarantees.


Consumers

Consumers are clients that read events from Kafka.

The only metadata retained on a per-consumer basis is the offset or position of that consumer in a topic. This offset is controlled by the consumer. Normally a consumer will advance its offset linearly as it reads records; however, because the position is controlled by the consumer, it can consume records in any order. For example, a consumer can reset to an older offset to reprocess data from the past, or skip ahead to the most recent record and start consuming from “now”.
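For example, assuming consumer is a KafkaConsumer that has already polled and therefore has its partition assignment, rewinding looks like this (the topic, partition, and offset shown are placeholders):

    // Reprocess everything from the start of the assigned partitions
    consumer.seekToBeginning(consumer.assignment());

    // ...or jump to a specific offset in one partition
    consumer.seek(new org.apache.kafka.common.TopicPartition("orders", 0), 42L);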

This combination of features means that Kafka consumers can come and go without much impact on the cluster or on other consumers.

Kafka provides the Java Consumer API to enable applications to read streams of events from a Kafka cluster.

For details on consumer design and how messages are exchanged between producers, brokers, and consumers, see Kafka Consumer Design: Consumers, Consumer Groups, and Offsets and Message Delivery Guarantees.


Find more documentation, tutorials and sample code for creating Kafka producer and consumer clients in several languages in the Clients section of the Confluent documentation.

Partitions

Topics are broken up into partitions, meaning a single topic log is broken into multiple logs located on different Kafka brokers. This way, the work of storing messages, writing new messages, and processing existing messages can be split among many nodes in the cluster. This distributed placement of your data is very important for scalability because it allows client applications to both read and write the data from/to many brokers at the same time.

When a new event is published to a topic, it is actually appended to one of the topic’s partitions. Events with the same event key such as the same customer identifier or vehicle ID are written to the same partition, and Kafka guarantees that any consumer of a given topic partition will always read that partition’s events in exactly the same order as they were written.

[Image: an example topic with four partitions, P1–P4]

This example topic in the image has four partitions P1–P4. Two different producer clients are publishing new events to the topic, independently from each other, by writing events over the network to the topic’s partitions. Events with the same key, which are shown with different colors in the image, are written to the same partition. Note that both producers can write to the same partition if appropriate.

Replication

Replication is an important part of keeping your data highly available and fault tolerant. Every topic can be replicated, even across geo-regions or datacenters. This means that there are always multiple brokers that have a copy of the data in case things go wrong, when you want to do maintenance on the brokers, and so on. A common production setting is a replication factor of 3, meaning there will always be three copies of your data. This replication is performed at the topic partition level.

For an in-depth discussion of replication in Kafka, see Kafka Replication and Committed Messages .

Components

In addition to brokers and client producers and consumers, there are other key components of Kafka that you should be familiar with:

Kafka Connect

Kafka Connect is a component of Kafka that provides data integration between databases, key-value stores, search indexes, file systems, and Kafka brokers. Kafka Connect provides a common framework for you to define connectors, which do the work of moving data in and out of Kafka.

There are two different types of connectors:

  • Source connectors that act as producers for Kafka
  • Sink connectors that act as consumers for Kafka

You can use one of the many connectors provided by the Kafka community, or use the Connect API to build and run your own custom data import/export connectors that consume (read) or produce (write) streams of events from and to external systems and applications.

For a list of connectors you can use with Confluent Platform, see Connectors for Confluent Platform. For a list of connectors you can use with Confluent Cloud, see Connectors for Confluent Cloud.

Kafka Streams

In Kafka, a stream processor is anything that takes continual streams of data from input topics, performs some processing on this input, and produces continual streams of data to output topics. For example, a ride-share application might take in input streams of drivers and customers, and output a stream of rides currently taking place.

You can do simple processing directly using the producer and consumer APIs. However, for more complex transformations, Kafka provides Kafka Streams.

Kafka Streams provides a client library for building mission-critical real-time applications and microservices, where the input and/or output data is stored in Kafka clusters. You can build applications with Kafka Streams that do non-trivial processing tasks, such as computing aggregations off of streams or joining streams together.

Streams helps solve problems such as handling out-of-order data, reprocessing input as code changes, and performing stateful computations.

Streams builds on the core Kafka primitives; specifically, it uses:

  • The producer and consumer APIs for input and output, and Kafka itself for stateful storage
  • The same group mechanism for fault tolerance among the stream processor instances

For more information, see the Kafka Streams API.

Learn more

  • For a series of videos that introduce Kafka and the concepts in this topic, see Kafka 101.
  • For a deep-dive into the design decisions and features of Kafka, see Kafka Design Overview.



1. Overview

In this tutorial, we’ll learn the basics of Kafka – the use cases and core concepts anyone should know. We can then find and understand more detailed articles about Kafka.

2. What Is Kafka?

Kafka is an open-source stream processing platform developed by the Apache Software Foundation. We can use it as a messaging system to decouple message producers and consumers, but in comparison to “classical” messaging systems like ActiveMQ, it is designed to handle real-time data streams and provides a distributed, fault-tolerant, and highly scalable architecture for processing and storing data.

Therefore, we can use it in various use cases:

  • Real-time data processing and analytics
  • Log and event data aggregation
  • Monitoring and metrics collection
  • Clickstream data analysis
  • Fraud detection
  • Stream processing in big data pipelines

3. Set Up a Local Environment

If we're dealing with Kafka for the first time, we might like to have a local installation to experience its features. We can get this quickly with the help of Docker.

3.1. Install Kafka

We download an existing image and run a container instance with this command:
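The exact command isn't reproduced here; a minimal sketch, assuming the official apache/kafka image (a single-node KRaft broker listening on port 9092):

    # Assumed image; the original article may use a different one
    docker run -d --name kafka -p 9092:9092 apache/kafka:latest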

This will make the so-called Kafka broker available on the host system at port 9092. Now, we would like to connect to the broker using a Kafka client. There are multiple clients that we can use.

3.2. Use Kafka CLI

The Kafka CLI is part of the installation and is available within the Docker container. We can use it by connecting to the container’s bash.

First, we need to find out the container’s name with this command:
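For example, with the Docker CLI:

    docker ps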

In this sample, the name is awesome_aryabhata. We then connect to the bash using:
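For example (the container name comes from the docker ps output above):

    docker exec -it awesome_aryabhata /bin/bash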

Now we can, for example, create a topic (we'll clarify this term later) and list all existing topics with these commands:
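A sketch of these commands, assuming the standard kafka-topics.sh script is on the PATH inside the container (the topic name is a placeholder):

    # Create a topic, then list all existing topics
    kafka-topics.sh --bootstrap-server localhost:9092 --create --topic my-first-topic
    kafka-topics.sh --bootstrap-server localhost:9092 --list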

3.3. Use Offset Explorer

The Offset Explorer (formerly: Kafka Tool) is a GUI application for managing Kafka. We can download and install it quickly. Then, we create a connection and specify the host and port of the Kafka broker:

Screenshot of creating a connection with Offset Explorer

Then, we can explore the architecture:

Kafka Architecture shown in Offset Explorer

3.4. Use UI for Apache Kafka (Kafka UI)

The UI for Apache Kafka (Kafka UI) is a web UI, implemented with Spring Boot and React, and provided as a Docker container for a simple installation with the following command:
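A possible command, assuming the provectuslabs/kafka-ui image:

    docker run -d --name kafka-ui -p 8080:8080 -e DYNAMIC_CONFIG_ENABLED=true provectuslabs/kafka-ui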

We can then open the UI in the browser using http://localhost:8080 and define a cluster, as this picture shows:

Screenshot from Kafka UI

Because the Kafka broker runs in a different container than the Kafka UI’s backend, it will not have access to localhost:9092. We could instead address the host system using host.docker.internal:9092, but this is just the bootstrapping URL.

Unfortunately, Kafka itself will return a response that leads to a redirection to localhost:9092 again, which won’t work. If we do not want to reconfigure Kafka (because this would break the other clients), we need to create a port forwarding from the Kafka UI container’s port 9092 to the host system’s port 9092. The following sketch illustrates the connections:

Sketch showing Kafka UI Port Forwarding

We can set up this container-internal port forwarding, e.g., using socat. We have to install it within the container (Alpine Linux), so we need to connect to the container’s bash with root permissions. So we need these commands, beginning within the host system’s command line:
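A sketch of those steps (the container name is a placeholder; apk is Alpine's package manager):

    # On the host: open a root shell inside the Kafka UI container
    docker exec -it --user=root kafka-ui sh

    # Inside the container: install socat and forward local port 9092 to the host
    apk add socat
    socat TCP-LISTEN:9092,fork TCP:host.docker.internal:9092 &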

Unfortunately, we need to run socat each time we start the container. Another possibility would be to provide an extension to the Dockerfile.

Now, we can specify localhost:9092 as the bootstrap server within the Kafka UI and should be able to view and create topics, as shown below:

Topics displayed by Kafka UI

3.5. Use Kafka Java Client

We have to add the following Maven dependency to our project:
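The dependency is the standard Kafka clients artifact (the version shown is only an example; use a current release):

    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka-clients</artifactId>
        <version>3.7.0</version>
    </dependency>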

We can then connect to Kafka and consume the messages we produced before:
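A minimal sketch, mirroring the consumer example earlier on this page (the group id is a placeholder; the topic matches the one created above):

    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092");
    props.put("group.id", "baeldung-demo");            // assumed group id
    props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

    try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
        consumer.subscribe(List.of("my-first-topic"));
        consumer.poll(Duration.ofSeconds(5))
                .forEach(record -> System.out.println(record.value()));
    }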

Of course, there is an integration for the Kafka client in Spring.

4. Basic Concepts

4.1. Producers & Consumers

We can differentiate Kafka clients into consumers and producers. Producers send messages to Kafka, while consumers receive messages from Kafka. Consumers only receive messages by actively polling from Kafka; Kafka itself acts in a passive way. This allows each consumer to consume at its own pace without blocking Kafka.

Of course, there can be multiple producers and multiple consumers at the same time. And, of course, one application can contain both producers and consumers.

Consumers are part of a consumer group, which Kafka identifies by a simple name. Only one consumer of a consumer group will receive a given message. This allows scaling out consumers with the guarantee that each message is delivered only once within the group.

The following picture shows multiple producers and consumers working together with Kafka:

Producers and Consumers

4.2. Messages

A message (we can also call it a “record” or an “event”, depending on the use case) is the fundamental unit of data that Kafka processes. Its payload can be of any binary format as well as text formats like plain text, Avro, XML, or JSON.

Each producer has to specify a serializer to transform the message object into the binary payload format. Each consumer has to specify a corresponding deserializer to transform the payload format back into an object within its JVM. These components are called SerDes for short. There are built-in SerDes, but we can implement custom SerDes too.

The following picture shows the payload serialization and deserialization process:

Payload SerDes Process

Additionally, a message can have the following optional attributes:

  • A key, which can also be of any binary format. If we use keys, we also need SerDes. Kafka uses keys for partitioning (we’ll discuss this in more detail in the next chapter).
  • A timestamp, which indicates when the message was produced. Kafka uses timestamps for ordering messages and to implement retention policies.
  • Headers, which associate metadata with the payload. For example, Spring adds type headers by default for serialization and deserialization.

4.3. Topics & Partitions

A topic is a logical channel or category to which producers publish messages. Consumers subscribe to a topic to receive messages from it in the context of their consumer group.

By default, the retention policy of a topic is 7 days, i.e., after 7 days, Kafka deletes the messages automatically, regardless of whether they have been delivered to consumers. We can configure this if necessary.

Topics consist of partitions (at least one). To be exact, messages are stored in one partition of the topic. Within one partition, messages get an order number (offset). This can ensure that messages are delivered to the consumer in the same order as they were stored in the partition. And, by storing the offsets that a consumer group has already received, Kafka guarantees only-once delivery.

By using multiple partitions, Kafka can provide both ordering guarantees and load balancing over a pool of consumer processes.

One consumer will be assigned to one partition when it subscribes to the topic, e.g., with the Java Kafka client API, as we have already seen.

However, for a consumer, it is possible to choose the partition(s) it wants to poll messages from:
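For example, using assign() instead of subscribe() (the topic and partition numbers are placeholders):

    import org.apache.kafka.common.TopicPartition;

    consumer.assign(List.of(
        new TopicPartition("my-first-topic", 0),
        new TopicPartition("my-first-topic", 1)));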

The disadvantage of this variant is that all consumers in the group have to use it, because automatically assigning partitions to group consumers won’t work in combination with individual consumers that connect to a specific partition. Also, rebalancing is not possible in the case of architectural changes, such as adding further consumers to the group.

Ideally, we have as many consumers as partitions, so that every consumer can be assigned to exactly one of the partitions, as shown below:

ideal count of consumers

If we have more consumers than partitions, the excess consumers won’t receive messages from any partition:

Too many consumers

If we have fewer consumers than partitions, consumers will receive messages from multiple partitions, which conflicts with optimal load balancing:

Too few consumers

Producers do not necessarily send messages to only one partition. Every produced message is assigned to one partition automatically, following these rules:

  • Producers can specify a partition as part of the message. If done so, this has the highest priority
  • If the message has a key, partitioning is done by calculating the hash of the key. Keys with the same hash will be stored in the same partition. Ideally, we have at least as many hashes as partitions
  • Otherwise, the Sticky Partitioner distributes the messages to partitions

Again, storing messages in the same partition preserves message ordering, while storing messages in different partitions gives up ordering across partitions in exchange for parallel processing.

If the default partitioning does not match our expectations, we can implement a custom partitioner. To do this, we implement the Partitioner interface and register it during the initialization of the producer:
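A sketch of such a partitioner (the routing rule here is purely illustrative):

    import java.util.Arrays;
    import java.util.Map;
    import org.apache.kafka.clients.producer.Partitioner;
    import org.apache.kafka.common.Cluster;

    public class CustomPartitioner implements Partitioner {

        @Override
        public int partition(String topic, Object key, byte[] keyBytes,
                             Object value, byte[] valueBytes, Cluster cluster) {
            int numPartitions = cluster.partitionsForTopic(topic).size();
            // Illustrative rule: messages without a key go to partition 0,
            // everything else is hashed over the available partitions
            return keyBytes == null ? 0 : (Arrays.hashCode(keyBytes) & 0x7fffffff) % numPartitions;
        }

        @Override
        public void configure(Map<String, ?> configs) { }

        @Override
        public void close() { }
    }

    // Registered during producer initialization:
    // props.put(ProducerConfig.PARTITIONER_CLASS_CONFIG, CustomPartitioner.class);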

The following picture shows producers and consumers and their connections to the partitions:

producers and partitioners

Each producer has its own partitioner, so if we want to ensure that messages are partitioned consistently within the topic, we have to ensure that the partitioners of all producers work the same way, or we should only work with a single producer.

Partitions store messages in the order they arrive at the Kafka broker. Typically, a producer does not send each message as a single request, but it will send multiple messages within a batch. If we need to ensure the order of the messages and only-once delivery within one partition, we need transaction-aware producers and consumers.

4.4. Clusters and Partition Replicas

As we have found out, Kafka uses topic partitions to allow parallel message delivery and load balancing of consumers. But Kafka itself must be scalable and fault-tolerant. So we typically do not use a single Kafka broker, but a cluster of multiple brokers. These brokers do not all behave exactly the same; each of them is assigned special tasks that the rest of the cluster can then absorb if that broker fails.

To understand this, we need to expand our understanding of topics. When creating a topic, we not only specify the number of partitions but also the number of brokers that jointly manage the partitions using synchronization. We call this the Replication Factor . For example, using the Kafka CLI, we could create a topic with 6 partitions, each of them synchronized on 3 brokers:
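A sketch of that command (the topic name and bootstrap address are placeholders):

    kafka-topics.sh --bootstrap-server localhost:9092 --create \
      --topic my-replicated-topic --partitions 6 --replication-factor 3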

For example, a replication factor of three means that the cluster is resilient to up to two replica failures (N-1 resiliency). We have to ensure that we have at least as many brokers as we specify as the replication factor. Otherwise, Kafka does not create the topic until the count of brokers increases.

For better efficiency, replication of a partition only occurs in one direction. Kafka achieves this by declaring one of the brokers as the Partition Leader. Producers only send messages to the partition leader, and the leader then synchronizes with the other brokers. Consumers will also poll from the partition leader, because the consumer group’s advancing offset has to be synchronized too.

Partition leading is distributed to multiple brokers. Kafka tries to find different brokers for different partitions. Let’s see an example with four brokers and two partitions with a replication factor of three:

Image displaying a Kafka topology with 2 partitions and a replication factor of 3

Broker 1 is the leader of Partition 1, and Broker 4 is the leader of Partition 2. So each client will connect to those brokers when sending or polling messages from these partitions. To get information about the partition leaders and the other available brokers (metadata), there is a special bootstrapping mechanism. In summary, every broker can provide the cluster’s metadata, so a client can initialize its connection with any of these brokers and will then be redirected to the partition leaders. That’s why we can specify multiple brokers as bootstrapping servers.

If one partition-leading broker fails, Kafka will declare one of the still-working brokers as the new partition leader. Then, all clients have to connect to the new leader. In our example, if Broker 1 fails, Broker 2 becomes the new leader of Partition 1. Then, the clients that were connected to Broker 1 have to switch to Broker 2.

Kafka Topology after Broker Failure

Kafka uses KRaft (in earlier versions, ZooKeeper) for the orchestration of all brokers within the cluster.

4.5. Putting It All Together

If we put producers and consumers together with a cluster of three brokers that manage a single topic with three partitions and a replication factor of 3, we’ll get this architecture:

Topics, Partitions and Replicas

5. Ecosystem

We already know that multiple clients are available to connect with Kafka, such as a CLI, a Java-based client with integration into Spring applications, and multiple GUI tools. Of course, there are further client APIs for other programming languages (e.g., C/C++, Python, or JavaScript), but those are not part of the Kafka project.

Built on top of these clients, there are further APIs for special purposes.

5.1. Kafka Connect API

Kafka Connect is an API for exchanging data with third-party systems. There are existing connectors, e.g., for AWS S3, JDBC, or even for exchanging data between different Kafka clusters. And, of course, we can write custom connectors too.

5.2. Kafka Streams API

Kafka Streams is an API for implementing stream processing applications that get their input from a Kafka topic and store the result in another Kafka topic.
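
As a rough sketch, a minimal Streams topology in Java could read from one topic, transform each value, and write the result to another topic; the application id, broker address, and topic names are placeholders:

import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class UppercaseStreamExample {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "uppercase-app");      // placeholder id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // placeholder address
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> input = builder.stream("input-topic");
        input.mapValues(value -> value.toUpperCase())   // transform each value
             .to("output-topic");                       // write the result to another topic

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}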

5.3. KSQL

KSQL is an SQL-like interface built on top of Kafka Streams. It does not require us to develop Java code; instead, we declare SQL-like statements to define stream processing of the messages exchanged with Kafka. For this, we use ksqlDB, which connects to the Kafka cluster. We can access ksqlDB with a CLI or with a Java client application.
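
For illustration only, a stream over an existing topic could be declared and queried with statements along these lines (the topic, stream, and column names are made up):

CREATE STREAM orders (id VARCHAR, amount DOUBLE)
  WITH (KAFKA_TOPIC='orders', VALUE_FORMAT='JSON');

SELECT id, amount
  FROM orders
  WHERE amount > 100
  EMIT CHANGES;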

5.4. Kafka REST Proxy

The Kafka REST proxy provides a RESTful interface to a Kafka cluster. This way, we do not need any Kafka clients and avoid using the native Kafka protocol. It allows web frontends to connect with Kafka and makes it possible to use network components like API gateways or firewalls.
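
As a rough sketch, assuming a REST Proxy listening on localhost:8082 and its v2 produce endpoint, a record could be sent with Java's built-in HTTP client like this (the address, topic name, and payload are placeholders):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RestProxyProduceExample {

    public static void main(String[] args) throws Exception {
        // One record with a JSON value, wrapped in the envelope the v2 API expects
        String body = "{\"records\":[{\"value\":{\"greeting\":\"hello\"}}]}";

        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("http://localhost:8082/topics/my-topic")) // placeholder proxy address and topic
            .header("Content-Type", "application/vnd.kafka.json.v2+json")
            .POST(HttpRequest.BodyPublishers.ofString(body))
            .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}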

5.5. Kafka Operators for Kubernetes (Strimzi)

Strimzi is an open-source project that provides a way to run Kafka on Kubernetes and OpenShift platforms. It introduces custom Kubernetes resources, making it easier to declare and manage Kafka-related resources in a Kubernetes-native way. It follows the Operator Pattern: operators automate tasks like provisioning, scaling, rolling updates, and monitoring of Kafka clusters.

6. Conclusion

In this article, we have learned that Kafka is designed for high scalability and fault tolerance. Producers collect messages and send them in batches, topics are divided into partitions to allow parallel message delivery and load balancing of consumers, and replication is done over multiple brokers to ensure fault tolerance.

As usual, all the code implementations are available over on GitHub.
