Kafka Producer Overview

There are many reasons an application might need to write messages to Kafka: recording user activities for auditing or analysis, recording metrics, storing log messages, recording information from smart appliances, communicating asynchronously with other applications, buffering information before writing to a database, and much more.

Those diverse use cases also imply diverse requirements: is every message critical, or can we tolerate loss of messages? Are we OK with accidentally duplicating messages? Are there any strict latency or throughput requirements we need to support?

In the credit card transaction processing example we introduced earlier, it is critical never to lose or duplicate a single message. Latency should be low, but latencies up to 500 ms can be tolerated, and throughput should be very high: we expect to process up to a million messages a second.

A different use case might be storing click information from a website. In that case, some message loss or a few duplicates can be tolerated; latency can be high as long as there is no impact on the user experience. In other words, we don't mind if it takes a few seconds for a message to arrive at Kafka, as long as the next page loads immediately after the user clicks a link. Throughput will depend on the level of activity we anticipate on the website.

The different requirements will influence the way you use the producer API to write messages to Kafka and the configuration you use.
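
To make that concrete, here is a minimal sketch using the Java client of how those requirements can map to producer configuration. The broker address is an assumption for illustration, and the class name is made up; the configuration keys themselves (`acks`, `retries`, `linger.ms`, `batch.size`) are standard producer settings.

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

public class ProducerSetupSketch {
    static KafkaProducer<String, String> buildProducer() {
        Properties props = new Properties();
        // Assumed broker address for illustration.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        // Reliability-leaning choices (the payments case): wait for all
        // in-sync replicas to acknowledge, and retry transient failures.
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        props.put(ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE);

        // Throughput-leaning choices (the clickstream case): linger briefly
        // so the producer can pack more records into each batch.
        // props.put(ProducerConfig.LINGER_MS_CONFIG, 20);
        // props.put(ProducerConfig.BATCH_SIZE_CONFIG, 64 * 1024);

        return new KafkaProducer<>(props);
    }
}
```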

While the producer APIs are very simple, there is a bit more that goes on under the hood of the producer when we send data. Figure 3-1 shows the main steps involved in sending data to Kafka.

We start producing messages to Kafka by creating a ProducerRecord, which must include the topic we want to send the record to and a value. Optionally, we can also specify a key and/or a partition. Once we send the ProducerRecord, the first thing the producer will do is serialize the key and value objects to byte arrays so they can be sent over the network.
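
As a sketch of what that looks like in the Java client, the ProducerRecord constructors mirror the options just described. The topic and key names here are made up for illustration, and `producer` refers to the one built in the configuration sketch above.

```java
import org.apache.kafka.clients.producer.ProducerRecord;

// Topic and value only: the partitioner will pick a partition for us.
ProducerRecord<String, String> valueOnly =
        new ProducerRecord<>("transactions", "{\"amount\": 42.0}");

// Topic, key, and value: records sharing a key land in the same partition.
ProducerRecord<String, String> keyed =
        new ProducerRecord<>("transactions", "card-1234", "{\"amount\": 42.0}");

// Topic, explicit partition, key, and value: the partitioner is bypassed.
ProducerRecord<String, String> pinned =
        new ProducerRecord<>("transactions", 0, "card-1234", "{\"amount\": 42.0}");

// The configured key/value serializers turn the objects into byte arrays here.
producer.send(keyed);
```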

Next, the data is sent to a partitioner. If we specified a partition in the ProducerRecord, the partitioner doesn't do anything and simply returns the partition we specified. If we didn't, the partitioner will choose a partition for us, usually based on the ProducerRecord key. Once a partition is selected, the producer knows which topic and partition the record will go to. It then adds the record to a batch of records that will also be sent to the same topic and partition. A separate thread is responsible for sending those batches of records to the appropriate Kafka brokers.
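
The partitioner is also pluggable. As an illustration of the contract (not of what the built-in default does; the default hashes the record key), here is a hypothetical custom partitioner, which could be registered through the `partitioner.class` configuration:

```java
import java.util.Map;

import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;

// Hypothetical partitioner for illustration only; the built-in default
// instead hashes the key and spreads keyless records across partitions.
public class ByteSumPartitioner implements Partitioner {
    @Override
    public int partition(String topic, Object key, byte[] keyBytes,
                         Object value, byte[] valueBytes, Cluster cluster) {
        int numPartitions = cluster.partitionsForTopic(topic).size();
        if (keyBytes == null) {
            return 0; // sketch only: send all keyless records to partition 0
        }
        int sum = 0;
        for (byte b : keyBytes) {
            sum += b & 0xFF; // treat each byte as unsigned
        }
        return sum % numPartitions;
    }

    @Override
    public void configure(Map<String, ?> configs) {}

    @Override
    public void close() {}
}
```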

When the broker receives the messages, it sends back a response. If the messages were successfully written to Kafka, it will return a RecordMetadata object with the topic, partition, and offset of the record within the partition. If the broker failed to write the messages, it will return an error. When the producer receives an error, it may retry sending the message a few more times before giving up and returning an error.
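
In the Java client, a common way to observe that response is to pass a callback to send(). A sketch, reusing the `keyed` record and `producer` from the examples above:

```java
producer.send(keyed, (metadata, exception) -> {
    if (exception != null) {
        // By the time this fires, the producer has already exhausted its
        // retries or hit a non-retriable error.
        System.err.println("Send failed: " + exception.getMessage());
    } else {
        // On success, the RecordMetadata tells us where the record landed.
        System.out.printf("Written to %s-%d at offset %d%n",
                metadata.topic(), metadata.partition(), metadata.offset());
    }
});
```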

Source: Kafka: The Definitive Guide (Confluent)


Written by

OpenDev

I am a developer from Vietnam.