Reading Time: 4 minutes

Kafka’s exactly-once semantics were introduced in version 0.11, enabling a message to be delivered exactly once to the end consumer even if the producer retries sending it.

This major release raised many eyebrows in the community, as people believed that exactly-once delivery is not mathematically possible in distributed systems. Jay Kreps, co-founder of Confluent and co-creator of Apache Kafka, explained its possibility and how it is achieved in Kafka in his post.

In this blog, we will discuss how one can take advantage of the exactly-once message semantics provided by Kafka.

Overview of different message delivery semantics provided by Apache Kafka

“At most once—Messages may be lost but are never redelivered.”

In this case, the producer does not retry sending the message when an ack times out or returns an error. Thus the message might never be written to the Kafka topic, and hence never delivered to the consumer.

“At least once—Messages are never lost but may be redelivered.“

In this case, the producer retries sending the message if the ack times out or receives an error, assuming that the message was not written to the Kafka topic.

“Exactly once—this is what people actually want, each message is delivered once and only once.“

In this case, even if a producer retries sending a message, the message is delivered exactly once to the end consumer.

Exactly-once semantics is the most desirable guarantee and requires cooperation between the messaging system itself and the application producing and consuming the messages.

For instance, if after consuming a message successfully you rewind your Kafka consumer to a previous offset, you will receive all the messages from that offset to the latest one, all over again. This shows why the messaging system and the client application must cooperate to make exactly-once semantics happen.

Why do we need the exactly-once semantics of Kafka?

We know that at-least-once semantics guarantee that every message will be persisted at least once, without any data loss, but they may introduce duplicates into the stream.

For example, if the broker fails right before it sends the ack but after the message was successfully written to the Kafka topic, the producer's retry leads to the message being written twice and hence delivered more than once to the end consumer.

With the new exactly-once semantics, Kafka's guarantee of delivering each message exactly once to the end consumer has been strengthened by introducing:

Idempotent producer

Atomic transactions

Idempotent Producer

An idempotent operation is one that can be performed many times without causing a different effect than if it were performed only once.
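As a quick illustration in plain Java (unrelated to Kafka's internals), compare a non-idempotent increment with an idempotent keyed write; the key name `order-42` is just a made-up example:

```java
import java.util.HashMap;
import java.util.Map;

public class IdempotencyDemo {
    public static void main(String[] args) {
        // A plain counter increment is NOT idempotent: each call changes state.
        int counter = 0;
        counter++;
        counter++;
        System.out.println(counter); // prints 2 -- applying it twice differs from once

        // A keyed "upsert" IS idempotent: repeating it leaves the state unchanged.
        Map<String, String> store = new HashMap<>();
        store.put("order-42", "shipped");
        store.put("order-42", "shipped"); // retrying has no additional effect
        System.out.println(store.size()); // prints 1
    }
}
```

Kafka's idempotent producer applies the same idea to the send operation: a retried write must leave the partition log exactly as a single write would have.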

In Kafka, the producer send operation can now be made idempotent: if an error causes a producer retry, a message that the producer sends multiple times will be written only once to the log maintained on the Kafka broker.

The idempotent producer ensures that messages are delivered exactly once to a particular topic partition during the lifetime of a single producer.

To turn on this feature and get exactly-once semantics per partition—meaning no duplicates, no data loss, and in-order semantics—configure your producer with the following property:

enable.idempotence=true
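A minimal producer configuration sketch might look like this; the broker address is a hypothetical placeholder, and the `acks`/`retries` values are stated explicitly only to document what idempotence implies:

```java
import java.util.Properties;

public class IdempotentProducerConfig {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // hypothetical broker address
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        // Turns on the idempotent producer (PID + per-message sequence numbers).
        props.put("enable.idempotence", "true");

        // Idempotence requires acks=all and retries enabled; setting them
        // explicitly documents the intent.
        props.put("acks", "all");
        props.put("retries", Integer.toString(Integer.MAX_VALUE));

        // With kafka-clients on the classpath you would now create:
        // KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        System.out.println(props.getProperty("enable.idempotence")); // prints true
    }
}
```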

With this feature turned on, each producer gets a unique ID (PID), and each message is sent together with a sequence number. When the broker or the connection fails and the producer retries sending a message, the message will only be accepted if its sequence number is exactly one more than the last one seen.

Note, however, that if the producer fails and restarts, it gets a new PID. Hence, idempotency is guaranteed only for a single producer session.

Atomic Transactions

Kafka now supports atomic writes across multiple partitions through the new transactions API. This allows a producer to send a batch of messages to multiple partitions such that either all messages in the batch are visible to all the consumers or none are ever visible to any consumer.

It allows you to commit your consumer offsets in the same transaction along with the data you have processed, thereby allowing end-to-end exactly-once semantics.

Below is an example snippet that shows how you can send messages atomically to a set of topic partitions using the new Producer API:

producer.initTransactions();
try {
    producer.beginTransaction();
    producer.send(record0);
    producer.send(record1);
    // Commit the consumed offsets as part of the same transaction.
    producer.sendOffsetsToTransaction(…);
    producer.commitTransaction();
} catch (ProducerFencedException e) {
    // Another producer with the same transactional.id took over; this instance must close.
    producer.close();
} catch (KafkaException e) {
    producer.abortTransaction();
}

Consumers

To use transactions, you need to configure the consumer to use the right isolation.level and use the new Producer APIs. There are two new isolation levels in the Kafka consumer:

read_committed: Read both messages that are not part of a transaction and transactional messages, the latter only after their transaction is committed.

read_uncommitted: Read all messages in offset order without waiting for transactions to be committed. This option is similar to the current semantics of a Kafka consumer.

Also, the transactional.id property must be set to a unique ID in the producer config. This unique ID is needed to provide continuity of transactional state across application restarts.
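Putting these settings together, a minimal configuration sketch for a transactional pipeline might look as follows; the broker address, group ID, and transactional.id are hypothetical placeholders:

```java
import java.util.Properties;

public class TransactionalConfigs {
    public static void main(String[] args) {
        // Consumer side: only read messages from committed transactions.
        Properties consumerProps = new Properties();
        consumerProps.put("bootstrap.servers", "localhost:9092"); // hypothetical broker address
        consumerProps.put("group.id", "txn-demo-group");          // hypothetical group id
        consumerProps.put("isolation.level", "read_committed");
        // Offsets are committed via sendOffsetsToTransaction, not auto-commit.
        consumerProps.put("enable.auto.commit", "false");

        // Producer side: a stable, unique transactional.id lets the broker fence
        // zombie instances and recover transactional state across restarts.
        Properties producerProps = new Properties();
        producerProps.put("bootstrap.servers", "localhost:9092");
        producerProps.put("transactional.id", "order-processor-1"); // hypothetical id
        producerProps.put("enable.idempotence", "true"); // transactions require idempotence

        System.out.println(consumerProps.getProperty("isolation.level")); // prints read_committed
    }
}
```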

For an example of how a transactional producer and consumer work, you can refer to this giter8 template.
