How to set up a Kafka environment?

In this blog, we’ll explore how to set up an Apache Kafka environment on your system. But first, let’s take a look at the different Kafka distributions available.

1. Apache Kafka
The open-source, community-driven version.
Best suited for developers who want complete control and are comfortable managing their own ecosystem.
Requires significant setup and operational overhead.
Lacks advanced features such as REST proxies or a native GUI.

2. Confluent Kafka
An enterprise-ready Kafka distribution by Confluent (licensing costs apply for enterprise features).
Ideal for businesses needing robust tools for monitoring, integration, and streaming analytics.
Built on Apache Kafka with added tools like Schema Registry, REST Proxy, ksqlDB, and Control Center.
Offers fully managed Kafka services (Confluent Cloud).

3. Amazon Managed Streaming for Apache Kafka (MSK)
AWS handles the provisioning, configuration, maintenance, and scaling of Kafka clusters.

4. Strimzi
An open-source project that simplifies the deployment, management, and operation of Apache Kafka on Kubernetes and OpenShift.

5. Redpanda
A Kafka-compatible alternative optimized for performance and simplicity, written in C++ (no ZooKeeper required).

AKHQ (previously known as KafkaHQ) is a web-based UI for managing and monitoring Kafka clusters.
It is not tied to a specific Kafka distribution and can be configured with any Kafka setup.

How to set up Apache Kafka?

1. Prerequisites: download and install Java
2. Download and extract Apache Kafka from the official Apache Kafka downloads page
3. Start ZooKeeper (run all of the commands below from Kafka's bin\windows directory)
zookeeper-server-start.bat ../../config/zookeeper.properties
4. Start Kafka Broker
kafka-server-start.bat ../../config/server.properties
5. Create a Topic
kafka-topics.bat --create --topic Notification-Topic --bootstrap-server localhost:9092 --partitions 1 --replication-factor 1
6. Send Messages Using Producer
kafka-console-producer.bat --topic Notification-Topic --bootstrap-server localhost:9092
7. Consume Messages Using Consumer (add --from-beginning if you also want to read messages sent before the consumer started)
kafka-console-consumer.bat --topic Notification-Topic --bootstrap-server localhost:9092

This is a simple setup with one broker, one topic, one producer, and one consumer.


Let's configure AKHQ (UI) for our Kafka cluster.

It is easy: download the AKHQ jar file from the AKHQ GitHub releases page (current version: akhq-0.25.1-all.jar) and prepare a configuration file (for example, application.yml).
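A minimal application.yml might look like the sketch below, assuming a single local broker on localhost:9092 (the connection name local-kafka is an arbitrary label of our choosing):

```yaml
akhq:
  connections:
    local-kafka:
      properties:
        bootstrap.servers: "localhost:9092"
```

Additional clusters can be added as further entries under connections.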

Then execute the command below:
java -Dmicronaut.config.files=application.yml -jar akhq-0.25.1-all.jar

The server will start on http://localhost:8080. Open this URL in a browser, and we can see all topics and messages there.

From this UI, we can create topics, publish messages, monitor consumer groups, and check for any consumption lag.

In this blog, we learned how to set up Kafka locally, create a topic, and produce and consume messages using Kafka's built-in tools, kafka-console-producer and kafka-console-consumer.
In the next blog, let us try to understand how to produce and consume Kafka messages using a Spring Boot application.


[Optional Section]: In this section, let’s add another broker to the cluster and understand what happens inside it.

Initially, we might try running the same command:
kafka-server-start.bat ../../config/server.properties
However, this will throw an error. Why?
The reason is that the same configuration file (server.properties) is being used to start a second Kafka broker. Each Kafka broker requires a unique broker ID, a unique data directory, and a unique port number.
To resolve this, we need to create a new configuration file (e.g., server2.properties) by copying the content of server.properties and updating the following properties:
broker.id=1
listeners=PLAINTEXT://:9093
log.dirs=/tmp/broker2

Now, start the new broker using the updated configuration file:
kafka-server-start.bat ../../config/server2.properties

Let’s create a topic with 2 partitions and a replication factor of 2:
kafka-topics.bat --create --topic Order-Creation-Topic --bootstrap-server localhost:9092 --partitions 2 --replication-factor 2

Each Kafka broker has its own dedicated data directory, and each partition is stored in a separate directory within that.
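As a quick check, we can list each broker's data directory and see one folder per partition replica, named <topic>-<partition>. The paths below assume the default log.dirs for broker 1 and the log.dirs value we set for broker 2; adjust them to match your configuration:

```
:: Broker 1 data directory (default log.dirs in server.properties)
dir \tmp\kafka-logs

:: Broker 2 data directory (log.dirs=/tmp/broker2 from server2.properties)
dir \tmp\broker2
```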

In this setup:

  • We created a topic with 2 partitions and a replication factor of 2, resulting in 4 partition replicas distributed across the two brokers.

  • Broker 1 will host one leader partition and one follower partition.

  • Similarly, Broker 2 will also host one leader partition and one follower partition.
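To verify this assignment, we can describe the topic (run from bin\windows; either broker's address works as the bootstrap server):

```
kafka-topics.bat --describe --topic Order-Creation-Topic --bootstrap-server localhost:9092
```

For each partition, the output shows the leader broker ID, the full replica list, and the in-sync replicas (Isr).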

Previous: Kafka Architecture
Next: Spring-Boot Kafka Example


Written by

Sarat Chandra Bharadwaj