Overview
In this article, I will use Go as an example to show how to produce and consume messages with Kafka through the Sarama SDK. For consumption, I will demonstrate two modes: automatically committing consumed offsets and actively acknowledging consumption. These correspond to different message-processing semantics (at most once and at least once). I will not demonstrate exactly-once consumption here, because it is not something a plain message queue can provide on its own; it needs additional support from the business side, so I will only touch on it briefly rather than go into depth.
Producer
A producer can send messages to Kafka in either synchronous or asynchronous mode. Synchronous mode is easy to understand: after sending a message, we wait for the result (whether the message queue successfully received and stored it). Asynchronous mode is a bit more involved: we cannot wait for Kafka's acknowledgment at the point where the message is produced; instead, we listen on a channel for the result (success or failure) and use identifiers carried in that result to determine whether our message was processed successfully.
Both modes have their pros and cons:
Mode | Synchronous | Asynchronous |
---|---|---|
Advantages | - High reliability: the business gets immediate feedback on whether an event was processed, making it easy to react (retry / return an error) | - Higher performance: business code is not blocked by communication with Kafka or by Kafka's processing latency, so it can immediately move on to other work |
Disadvantages | - Poorer performance: the business can be blocked by latency between the application and Kafka, or by problems between Kafka brokers | - Lower maintainability: asynchronous logic is less straightforward and often needs a cache or another third-party dependency to track the state of in-flight messages; - Lower reliability: if the producer fails, the business code may never receive the event telling it whether a message was processed successfully |
Since each mode has its pros and cons, in practice we can choose according to our needs. Below I use both modes as examples and walk through simple implementations. The complete code has been uploaded to GitHub: Golang Kafka Producer.
Synchronous message production
[root@liqiang.io]# cat producer/standalone_sync.go
// Create configuration
config := sarama.NewConfig()
config.Producer.RequiredAcks = sarama.WaitForAll          // Wait for all in-sync replicas to acknowledge before returning
config.Producer.Partitioner = sarama.NewRandomPartitioner // Random partitioning strategy
config.Producer.Return.Successes = true                   // Required for SyncProducer: successful deliveries are returned on the success channel

// Create a producer with the given broker addresses and configuration
producer, err := sarama.NewSyncProducer(strings.Split(brokers, ","), config)
if err != nil {
    log.Fatalf("failed to create producer: %v", err)
}
defer producer.Close()

// Build the message to be sent
msg := &sarama.ProducerMessage{
    Topic: topic,
    Key:   sarama.StringEncoder("hello-world"),
    Value: sarama.StringEncoder("Hello, Kafka!"),
}

// Send the message and wait for the result
partition, offset, err := producer.SendMessage(msg)
if err != nil {
    log.Fatalf("failed to send message: %v", err)
}
fmt.Printf("Message %s sent to partition %d at offset %d\n", msg.Key, partition, offset)
Asynchronous message production
[root@liqiang.io]# cat producer/standalone_async.go
// Create configuration
config := sarama.NewConfig()
config.Producer.Return.Successes = true // Deliver success notifications on the Successes() channel
config.Producer.Return.Errors = true    // Deliver error notifications on the Errors() channel

// Create a producer
producer, err := sarama.NewAsyncProducer(strings.Split(brokers, ","), config)
if err != nil {
    log.Fatalf("failed to create producer: %v", err)
}
defer producer.AsyncClose()

// Consume delivery results asynchronously
go func() {
    for {
        select {
        case msg := <-producer.Successes():
            fmt.Printf("Produced message to topic %s partition %d at offset %d\n", msg.Topic, msg.Partition, msg.Offset)
        case err := <-producer.Errors():
            log.Printf("Error producing message: %v\n", err)
        }
    }
}()

// Send a message asynchronously
producer.Input() <- &sarama.ProducerMessage{
    Topic: "test-topic",
    Key:   sarama.StringEncoder("hello-world"),
    Value: sarama.StringEncoder("Hello, Kafka!"),
}
Consumer
On the consumer side there are also different modes, distinguished by how we tell the message queue that a message has been successfully consumed. The first scenario is to acknowledge a message as soon as we receive it, before the processing function has finished. The problem is that the processing function may still fail (for example, the program crashes during processing, or the handler cannot process the message at that moment), yet the message has already been marked as consumed in the message queue and will not be delivered again; this is the at most once scenario.
The other scenario is to hand the message to the processing function and commit the offset to Kafka only after the function returns successfully; this is the at least once scenario. The complete example code has been submitted to GitHub: Golang Kafka Consumer.
Automatic commit mode (at most once)
Strictly speaking, Sarama's automatic commit mode is not exactly at most once semantics: Sarama does not commit the offset at the moment it hands a message to the processing function, but commits it periodically after an interval. So it is possible that some messages have already been handed to the processing function while their offsets have not yet been committed to Kafka; if the application crashes at that point, those messages will be consumed again after a restart.
Nevertheless, this automatic commit mode is generally regarded as at most once semantics. The example code is as follows:
[root@liqiang.io]# cat consumer/at_most_once.go
// Create configuration
config := sarama.NewConfig()
config.Consumer.Return.Errors = true
config.Consumer.Offsets.AutoCommit.Enable = true // Enable automatic offset commit (default is true)
config.Version = sarama.V3_6_0_0                 // Set Kafka version

// Create the consumer
consumer, err := sarama.NewConsumer(strings.Split(brokers, ","), config)
if err != nil {
    log.Fatalf("failed to create consumer: %v", err)
}
defer consumer.Close()

// Consume every partition of the topic, starting from the oldest offset
partitions, err := consumer.Partitions(topic)
if err != nil {
    log.Fatalf("failed to list partitions: %v", err)
}
for _, partition := range partitions {
    partitionConsumer, err := consumer.ConsumePartition(topic, partition, sarama.OffsetOldest)
    if err != nil {
        log.Fatalf("failed to consume partition %d: %v", partition, err)
    }
    go func(pc sarama.PartitionConsumer) {
        // Exit when the Messages channel is closed
        for msg := range pc.Messages() {
            fmt.Printf("Partition: %d, Offset: %d, Key: %s, Value: %s\n", msg.Partition, msg.Offset, string(msg.Key), string(msg.Value))
        }
    }(partitionConsumer)
}
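The window during which already-handled messages can be re-delivered is bounded by the auto-commit interval. As a minimal sketch (assuming the default of 1s does not suit your workload; the value below is just an example), the interval can be tuned in the same config:
config.Consumer.Offsets.AutoCommit.Interval = 500 * time.Millisecond // default is 1s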
Manual commit mode (at least once)
In manual commit mode, we need to customize a ConsumerGroupHandler, which has three method interfaces:
type ConsumerGroupHandler interface {
    Setup(ConsumerGroupSession) error
    Cleanup(ConsumerGroupSession) error
    ConsumeClaim(ConsumerGroupSession, ConsumerGroupClaim) error
}
Their calling timing and contracts are as follows:
- Setup: called before the consumer actually starts consuming; it is used to set up the consumer, and once it returns, actual consumption begins.
- ConsumeClaim: must run a loop internally that consumes messages from ConsumerGroupClaim; if that channel is closed, the function should exit the loop and return.
- Cleanup: called after all ConsumeClaim goroutines have returned and before the offsets are committed for the last time.
Below is an example using this custom Handler:
[root@liqiang.io]# cat consumer/at_least_once.go
// Create configuration
config := sarama.NewConfig()
config.Version = sarama.V3_6_0_0
config.Consumer.Group.Rebalance.GroupStrategies = []sarama.BalanceStrategy{
    sarama.NewBalanceStrategyRoundRobin(),
}
config.Consumer.Offsets.Initial = sarama.OffsetNewest

consumer := CustomCommitConsumer{
    ready: make(chan bool),
}
client, err := sarama.NewConsumerGroup(strings.Split(brokers, ","), group, config)
if err != nil {
    log.Fatalf("failed to create consumer group client: %v", err)
}
defer client.Close()

go func() {
    for {
        // When a rebalance happens, Consume returns; call it again to re-join the group
        if err := client.Consume(ctx, strings.Split(topic, ","), &consumer); err != nil {
            log.Printf("error from consumer: %v", err)
            return
        }
        // Stop the loop once the context has been cancelled
        if ctx.Err() != nil {
            return
        }
    }
}()
<-sigterm // Block until a termination signal is received
type CustomCommitConsumer struct {
    ready chan bool
}

func (c *CustomCommitConsumer) Setup(sarama.ConsumerGroupSession) error {
    // Mark the consumer as ready
    close(c.ready)
    return nil
}

func (c *CustomCommitConsumer) Cleanup(sarama.ConsumerGroupSession) error {
    return nil
}

func (c *CustomCommitConsumer) ConsumeClaim(session sarama.ConsumerGroupSession, claim sarama.ConsumerGroupClaim) error {
    // Exit when the Messages channel is closed (e.g. on rebalance or shutdown)
    for message := range claim.Messages() {
        log.Printf("Message claimed: value = %s, timestamp = %v, topic = %s", string(message.Value), message.Timestamp, message.Topic)
        session.MarkMessage(message, "") // Mark the message as consumed only after it has been processed
    }
    return nil
}
Stream API
In the Java SDK there is a Streams API, which lets an application define a processing function for messages and specify input and output topics; the SDK then takes messages from the input topic, processes them, and writes the results to the output topic. That is only a superficial description, though: it also supports stateless operations such as map and filter as well as stateful aggregate, join, and other operations, and it has to deal with the processing thread model and fault tolerance. I have only skimmed it and have not tried or studied it in depth; if you are interested, see the Kafka official documentation: STREAMS DSL.
The Sarama SDK for Go does not support Stream semantics, so I will not go into it here beyond the hand-rolled sketch below.
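That said, the basic consume-process-produce pattern behind the Streams idea can be written by hand with Sarama. Below is a minimal sketch under assumed names (pipe, outputTopic, and the uppercase transform are placeholders I made up), with no state stores, transactions, or fault tolerance, so it is not equivalent to Kafka Streams:
// Hand-rolled consume-process-produce loop: read from an input topic's
// message channel, transform each value, and write the result to an output topic.
func pipe(messages <-chan *sarama.ConsumerMessage, producer sarama.SyncProducer, outputTopic string) error {
    for msg := range messages {
        transformed := strings.ToUpper(string(msg.Value)) // placeholder "processing" step
        if _, _, err := producer.SendMessage(&sarama.ProducerMessage{
            Topic: outputTopic,
            Key:   sarama.ByteEncoder(msg.Key),
            Value: sarama.StringEncoder(transformed),
        }); err != nil {
            return err
        }
    }
    return nil
}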
Admin API
In a previous article about installing and operating Kafka (Kafka Installation and Operation), I introduced many Kafka operational commands. If we want to build on top of Kafka, or need a platform to manage it, we should check whether our SDK supports similar functionality, so that we can read or control Kafka's status and configuration through an API.
Unlike the Stream API, Sarama does support the Admin API; we just need to initialize a ClusterAdmin object. For example, here is an example that retrieves all topics from Kafka:
[root@liqiang.io]# cat admin/topics.go
// Create Kafka configuration
config := sarama.NewConfig()
config.Version = sarama.V3_6_0_0

// Create the Admin client
admin, err := sarama.NewClusterAdmin(strings.Split(brokers, ","), config)
if err != nil {
    log.Fatalf("failed to create cluster admin: %v", err)
}
defer admin.Close()

// Get the list of topics
topics, err := admin.ListTopics()
if err != nil {
    log.Fatalf("failed to list topics: %v", err)
}

// Print the topic names
for name := range topics {
    fmt.Println(name)
}
Of course, ClusterAdmin does not only support viewing and operating on topics; it also supports viewing and operating on other information such as ACLs, consumer groups, partitions, and so on. I won't demonstrate them one by one; it is enough to know that these capabilities are supported.
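For reference, here is a minimal sketch of one such operation, creating a topic through the same ClusterAdmin (the topic name, partition count, and replication factor below are just placeholder values):
// Create a topic with 3 partitions and a replication factor of 2 (placeholder values)
err = admin.CreateTopic("example-topic", &sarama.TopicDetail{
    NumPartitions:     3,
    ReplicationFactor: 2,
}, false)
if err != nil {
    log.Printf("failed to create topic: %v", err)
}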
Additional Topics
Exactly Once
When using a message queue, we care about the message-processing semantics. We have already covered at most once and at least once, but sometimes we need exactly once: a message is processed once, no more and no less. This is a more complex problem, involving several aspects:
- Producer: to achieve exactly once, the first step is to make sure the producer produces each message only once. Kafka uses sequence numbers to deduplicate on the producer side: if multiple messages arrive with the same sequence number, Kafka keeps only one (see the sketch after this list).
  - To learn: how exactly is it implemented, and what should be paid attention to?
- Message queue: after a message has been accepted by the message queue, we also need to make sure that failures inside the queue itself do not lose data. For example, if a Kafka node goes down, the replication mechanism ensures other nodes still hold replicas, so the data is not lost. (If all nodes holding replicas go down at the same time and cannot be recovered, there is nothing we can do but lose messages.)
- Consumer: the consumer needs support from the business side to guarantee that messages are neither lost nor processed twice, so it is usually built on top of the at least once mode with further refinements:
  - Kafka itself cannot know when a message counts as "processed", so this has to be defined by the business;
  - the consumer can detect duplicates when a message is re-consumed, for example by reusing Kafka's sequence number (can it be obtained?) or a business-level ID.
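As a rough illustration of the producer side, Sarama exposes Kafka's idempotent producer through its config. This is a minimal sketch, not a full exactly-once setup (transactions and consumer-side deduplication are still up to the business):
config := sarama.NewConfig()
config.Version = sarama.V3_6_0_0
config.Producer.Idempotent = true                // Broker deduplicates by producer ID + sequence number
config.Producer.RequiredAcks = sarama.WaitForAll // Required when idempotence is enabled
config.Producer.Retry.Max = 5                    // Allow retries; resends reuse the same sequence number
config.Net.MaxOpenRequests = 1                   // Sarama requires a single in-flight request for idempotence
config.Producer.Return.Successes = true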
Passing Meta Information
When using a message queue, we may also need to pass some information, such as the commonly used Tracing information. When there are two services communicating using Kafka in the middle of the entire call chain, we certainly do not want the Tracing information between them to be interrupted. Therefore, a natural idea is whether Kafka can directly pass our Tracing information.
Comparing Sarama's ProducerMessage and ConsumerMessage:
type ProducerMessage struct {
    Topic string // The Kafka topic for this message.
    Key   Encoder
    Value Encoder
    // The headers are key-value pairs that are transparently passed
    // by Kafka between producers and consumers.
    Headers   []RecordHeader
    Metadata  interface{}
    Offset    int64
    Partition int32
    ...
}

type ConsumerMessage struct {
    Headers        []*RecordHeader // only set if kafka is version 0.11+
    Timestamp      time.Time       // only set if kafka is version 0.10+, inner message timestamp
    BlockTimestamp time.Time       // only set if kafka is version 0.10+, outer (compressed) block timestamp
    Key, Value []byte
    Topic      string
    Partition  int32
    Offset     int64
}
We can see that Kafka provides a Headers field on the message for passing metadata. (Note that ProducerMessage also has a Metadata field, but it is not used to pass data from producer to consumer; it only exists on the producer side.)
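As a minimal sketch of this idea (the "trace-id" header key and the traceID value are placeholders, not a real tracing integration), the producer attaches the information as a record header and the consumer reads it back:
// Producer side: attach a trace ID as a record header
msg := &sarama.ProducerMessage{
    Topic: topic,
    Value: sarama.StringEncoder("Hello, Kafka!"),
    Headers: []sarama.RecordHeader{
        {Key: []byte("trace-id"), Value: []byte(traceID)},
    },
}

// Consumer side: read the header back from the consumed message
for _, h := range consumedMsg.Headers {
    if string(h.Key) == "trace-id" {
        log.Printf("received message with trace id %s", string(h.Value))
    }
}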
Disadvantages of Sarama Library
When looking for information, you can find many people saying that the Sarama client has the following problems when sending and receiving messages:
- The Sarama client cannot detect partition changes: when the number of partitions of a topic increases, the client has to be restarted before it can consume messages normally.
- The Sarama client's maximum message processing time (MaxProcessingTime) defaults to 100ms; exceeding it may leave the consumer unable to consume messages.
- When the offset reset strategy is set to Oldest (earliest), a client restart may reset the offset and re-consume all messages from the smallest offset.
- When a consumer subscribes to several topics at the same time, some partitions may not get consumed.
The suggested solution is usually to switch to Confluent-Kafka-go as the Kafka client library (😂), and it is easy to find comparisons of the common Golang SDKs.
However, while studying Sarama I found that maintenance of this SDK has moved from Shopify to IBM. We used to import it as github.com/Shopify/sarama, but as you can see in my sample code, it is now github.com/IBM/sarama. So, regarding these issues:
- The Sarama client cannot detect partition changes; when the number of partitions of a topic increases, the client has to be restarted to consume messages normally.
  - This has been resolved. From the manual offset commit example, we can see that when the server rebalances, Consume returns and is called again in a loop, so the consumer re-joins the group and picks up the new partition assignment.
- The Sarama client's maximum message processing time (MaxProcessingTime) defaults to 100ms; exceeding it may leave the consumer unable to consume messages.
  - I am not sure about this and need to learn more; the setting itself is at least adjustable (see the sketch after this list).
- When the offset reset strategy is set to Oldest (earliest), a client restart may reset the offset and re-consume all messages from the smallest offset.
  - I am not sure about this, I need to learn more about it.
- When a consumer subscribes to several topics at the same time, some partitions may not get consumed.
  - I am not sure about this, I need to learn more about it.
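For the MaxProcessingTime point, here is a minimal sketch of raising the limit (assuming your handler genuinely needs more than the default 100ms; the value is just an example):
config := sarama.NewConfig()
config.Consumer.MaxProcessingTime = 500 * time.Millisecond // default is 100ms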