This uses the RabbitMQ source connector plugin for Kafka Connect, which is already installed in the Docker container. You can also install it yourself from Confluent Hub.
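For reference, installing a plugin from Confluent Hub onto your own Connect worker is typically done with the confluent-hub CLI. The exact plugin coordinate and version below are an assumption; check Confluent Hub for the current name:

```shell
# Hypothetical install command -- verify the plugin coordinate on Confluent Hub.
# This downloads the connector into the Connect plugin path; restart the
# Connect worker afterwards so that it picks up the new plugin.
confluent-hub install --no-prompt confluentinc/kafka-connect-rabbitmq:latest
```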

curl -i -X PUT -H "Content-Type:application/json" \
    http://localhost:8083/connectors/source-rabbitmq-00/config \
    -d '{
        "connector.class"                    : "io.confluent.connect.rabbitmq.RabbitMQSourceConnector",
        "kafka.topic"                        : "rabbit-test-00",
        "rabbitmq.queue"                     : "test-queue-01",
        "rabbitmq.username"                  : "guest",
        "rabbitmq.password"                  : "guest",
        "rabbitmq.host"                      : "rabbitmq",
        "rabbitmq.port"                      : "5672",
        "rabbitmq.virtual.host"              : "/",
        "confluent.license"                  : "",
        "confluent.topic.bootstrap.servers"  : "kafka:29092",
        "confluent.topic.replication.factor" : 1,
        "value.converter"                    : "org.apache.kafka.connect.converters.ByteArrayConverter",
        "key.converter"                      : "org.apache.kafka.connect.storage.StringConverter"
    }'

With the connector created, we can check that it's running:

curl -s "http://localhost:8083/connectors?expand=info&expand=status" | \
    jq '. | to_entries[] | [.value.info.type, .key, .value.status.connector.state, .value.status.tasks[].state, .value.info.config."connector.class"] | join(":|:")' | \
    column -s : -t | sed 's/\"//g' | sort

source | source-rabbitmq-00 | RUNNING | RUNNING | io.confluent.connect.rabbitmq.RabbitMQSourceConnector
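If you're curious what the tail of that pipeline is doing, here is the same column/sed/sort formatting applied to a hand-written sample line in the shape that jq emits (the input string below is illustrative, not captured output):

```shell
# jq emits one quoted string per connector, with fields joined by ":|:".
# column splits on ':' and aligns the fields into columns (the '|' becomes
# its own column), sed strips the quotes, and sort orders the rows.
echo '"source:|:source-rabbitmq-00:|:RUNNING:|:RUNNING:|:io.confluent.connect.rabbitmq.RabbitMQSourceConnector"' | \
    column -s : -t | sed 's/\"//g' | sort
```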

And then we can check the topic that's being written to. Here I'm using kafkacat, but you can use any Kafka consumer:

docker exec kafkacat \
    kafkacat -b kafka:29092 \
             -t rabbit-test-00 -C -u

The message we sent to RabbitMQ shows up in Kafka:

{"transaction": "PAYMENT", "amount": "$125.0", "timestamp": "Wed 8 Jan 2020 10:41:45 GMT"}

If you open another window and use the same curl statement as before to send more messages to RabbitMQ, you'll see them appear in the Kafka topic straight away.

One of the important things to note in the configuration of the connector is that we're using the ByteArrayConverter for the value of the message, which simply takes whatever bytes are in the RabbitMQ message and writes them to the Kafka message. At first glance it looks as though we have a JSON message on RabbitMQ and should therefore use the JsonConverter, but this is not the case. If we do, the converter will serialise the raw bytes as a base64-encoded JSON string, and we'll end up with this:

"eyJ0cmFuc2FjdGlvbiI6ICJQQVlNRU5UIiwgImFtb3VudCI6ICIkNDcuMyIsICJ0aW1lc3RhbXAiOiAiV2VkIDggSmFuIDIwMjAgMTM6MDE6MjEgR01UIiB9"
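You can confirm that this is just the original message, base64-encoded, by decoding it:

```shell
# Decode the base64 string that the JsonConverter produced; it is simply the
# raw RabbitMQ message bytes wrapped in a JSON string.
echo 'eyJ0cmFuc2FjdGlvbiI6ICJQQVlNRU5UIiwgImFtb3VudCI6ICIkNDcuMyIsICJ0aW1lc3RhbXAiOiAiV2VkIDggSmFuIDIwMjAgMTM6MDE6MjEgR01UIiB9' | base64 --decode
# {"transaction": "PAYMENT", "amount": "$47.3", "timestamp": "Wed 8 Jan 2020 13:01:21 GMT" }
```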

To understand more about converters and serialisation, see this article: Kafka Connect Deep Dive – Converters and Serialization Explained

We can dig into the payload further with kafkacat to examine the headers and other metadata:

docker exec kafkacat \
    kafkacat -b kafka:29092 -t rabbit-test-00 -C -u -q \
             -f 'Topic %t / Partition %p / Offset: %o / Timestamp: %T
Headers: %h
Key (%K bytes): %k
Payload (%S bytes): %s
--
'

The output looks like this: