Add a key to data ingested through Kafka Connect

Question:

How can you stream data from a source system (such as a database) into Kafka using Kafka Connect, and add a key to the data as part of the ingestion?

Example use case:

Kafka Connect is the integration API for Apache Kafka. It enables you to stream data from source systems (such as databases, message queues, SaaS platforms, and flat files) into Kafka, and from Kafka to target systems. When you stream data into Kafka, you often need to set the key correctly for partitioning and application logic reasons. In this example, we have a database containing data about cities, and we want to key the resulting Kafka messages by the city_id field. This tutorial will show you different ways of setting the key correctly. It will also cover how to declare the schema and use Kafka Streams to process the data using SpecificAvro.

Hands-on code example:

Run it

Prerequisites

This tutorial installs Confluent Platform using Docker. Before proceeding:

  • Install Docker Desktop (version 4.0.0 or later) or Docker Engine (version 19.03.0 or later) if you don’t already have it.

  • Install the Docker Compose plugin if you don’t already have it. This isn’t necessary if you have Docker Desktop, since it includes Docker Compose.

  • Start Docker if it’s not already running, either by starting Docker Desktop or, if you manage Docker Engine with systemd, via systemctl.

  • Verify that Docker is set up properly by ensuring that no errors are output when you run docker info and docker compose version on the command line, as shown below.
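Both verification commands are quick to run; each should print information rather than an error:

docker info
docker compose version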

Initialize the project

To get started, make a new directory anywhere you’d like for this project:

mkdir connect-add-key-to-source && cd connect-add-key-to-source

Prepare the source data

Create a file cities.sql with commands to pre-populate the database table with city information:

DROP TABLE IF EXISTS cities;
CREATE TABLE cities (city_id INTEGER PRIMARY KEY NOT NULL, name VARCHAR(255), state VARCHAR(255));
INSERT INTO cities (city_id, name, state) VALUES (1, 'Raleigh', 'NC');
INSERT INTO cities (city_id, name, state) VALUES (2, 'Mountain View', 'CA');
INSERT INTO cities (city_id, name, state) VALUES (3, 'Knoxville', 'TN');
INSERT INTO cities (city_id, name, state) VALUES (4, 'Houston', 'TX');
INSERT INTO cities (city_id, name, state) VALUES (5, 'Olympia', 'WA');
INSERT INTO cities (city_id, name, state) VALUES (6, 'Bismarck', 'ND');

Get Confluent Platform

Create a Dockerfile called Dockerfile-connect that builds a custom Kafka Connect image bundled with the free and open source JDBC connector, installed from Confluent Hub.

FROM confluentinc/cp-kafka-connect-base:7.3.0

ENV CONNECT_PLUGIN_PATH="/usr/share/java,/usr/share/confluent-hub-components"

RUN confluent-hub install --no-prompt confluentinc/kafka-connect-jdbc:10.0.2
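Docker Compose will build this image automatically in a later step, but if you’d like to verify the build in isolation first, you can run it yourself. The image tag below matches the one referenced in the docker-compose.yml that follows:

# Optional: pre-build the custom Kafka Connect image on its own
docker build -f Dockerfile-connect -t localimage/kafka-connect-jdbc:latest .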

Next, create the following docker-compose.yml file to obtain Confluent Platform (for Kafka in the cloud, see Confluent Cloud). Make sure that you create this file in the same place as the cities.sql file that you created above.

version: '2'
services:
  broker:
    image: confluentinc/cp-kafka:7.4.1
    hostname: broker
    container_name: broker
    ports:
    - 29092:29092
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT,CONTROLLER:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:9092,PLAINTEXT_HOST://localhost:29092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_PROCESS_ROLES: broker,controller
      KAFKA_NODE_ID: 1
      KAFKA_CONTROLLER_QUORUM_VOTERS: 1@broker:29093
      KAFKA_LISTENERS: PLAINTEXT://broker:9092,CONTROLLER://broker:29093,PLAINTEXT_HOST://0.0.0.0:29092
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
      KAFKA_LOG_DIRS: /tmp/kraft-combined-logs
      CLUSTER_ID: MkU3OEVBNTcwNTJENDM2Qk
  schema-registry:
    image: confluentinc/cp-schema-registry:7.3.0
    hostname: schema-registry
    container_name: schema-registry
    depends_on:
    - broker
    ports:
    - 8081:8081
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: broker:9092
  connect:
    image: localimage/kafka-connect-jdbc:latest
    build:
      context: .
      dockerfile: Dockerfile-connect
    container_name: connect
    depends_on:
    - broker
    - schema-registry
    ports:
    - 8083:8083
    environment:
      CONNECT_BOOTSTRAP_SERVERS: broker:9092
      CONNECT_REST_ADVERTISED_HOST_NAME: connect
      CONNECT_GROUP_ID: kafka-connect
      CONNECT_CONFIG_STORAGE_TOPIC: _kafka-connect-configs
      CONNECT_OFFSET_STORAGE_TOPIC: _kafka-connect-offsets
      CONNECT_STATUS_STORAGE_TOPIC: _kafka-connect-status
      CONNECT_KEY_CONVERTER: io.confluent.connect.avro.AvroConverter
      CONNECT_KEY_CONVERTER_SCHEMA_REGISTRY_URL: http://schema-registry:8081
      CONNECT_VALUE_CONVERTER: io.confluent.connect.avro.AvroConverter
      CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: http://schema-registry:8081
      CONNECT_LOG4J_ROOT_LOGLEVEL: INFO
      CONNECT_LOG4J_LOGGERS: org.apache.kafka.connect.runtime.rest=WARN,org.reflections=ERROR
      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: '1'
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: '1'
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: '1'
  kcat:
    image: edenhill/kcat:1.7.1
    container_name: kcat
    links:
    - broker
    entrypoint:
    - /bin/sh
    - -c
    - "apk add jq; \nwhile [ 1 -eq 1 ];do sleep 60;done\n"
  postgres:
    image: postgres:11
    container_name: postgres
    environment:
    - POSTGRES_USER=postgres
    - POSTGRES_PASSWORD=postgres
    volumes:
    - ./cities.sql:/docker-entrypoint-initdb.d/cities.sql

Now launch Confluent Platform by running the following command. Note the --build argument, which automatically builds the Docker image for Kafka Connect with the bundled kafka-connect-jdbc connector.

docker compose up -d --build
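The services can take a minute or so to come up. Although not one of the tutorial’s own steps, a quick way to confirm that all five containers are running is:

docker compose ps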

Check the source data

Check the data in the source database. Observe the city_id primary key:

echo 'SELECT * FROM cities;' | docker exec -i postgres bash -c 'psql -U $POSTGRES_USER $POSTGRES_DB'
 city_id |     name      | state
---------+---------------+-------
       1 | Raleigh       | NC
       2 | Mountain View | CA
       3 | Knoxville     | TN
       4 | Houston       | TX
       5 | Olympia       | WA
       6 | Bismarck      | ND
(6 rows)

Create the connector

Create the JDBC source connector. Note the transforms stanza, which is responsible for setting the key to the value of the city_id field: ValueToKey copies the city_id field from the message value into the key (as a single-field struct), and ExtractField$Key then extracts the field itself so that the key is the bare integer rather than a struct.

curl -i -X PUT http://localhost:8083/connectors/jdbc_source_postgres_01/config \
     -H "Content-Type: application/json" \
     -d '{
            "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
            "connection.url": "jdbc:postgresql://postgres:5432/postgres",
            "connection.user": "postgres",
            "connection.password": "postgres",
            "mode":"incrementing",
            "incrementing.column.name":"city_id",
            "topic.prefix":"postgres_",
            "transforms":"copyFieldToKey,extractKeyFromStruct",
            "transforms.copyFieldToKey.type":"org.apache.kafka.connect.transforms.ValueToKey",
            "transforms.copyFieldToKey.fields":"city_id",
            "transforms.extractKeyFromStruct.type":"org.apache.kafka.connect.transforms.ExtractField$Key",
            "transforms.extractKeyFromStruct.field":"city_id"
        }'

If you run this before Kafka Connect has finished starting up, you’ll get the error curl: (52) Empty reply from server; in that case, wait a few seconds and rerun the above command.

Check that the connector is running:

curl -s http://localhost:8083/connectors/jdbc_source_postgres_01/status

You should see that the state is RUNNING for both the connector and tasks elements:

{"name":"jdbc_source_postgres_01","connector":{"state":"RUNNING","worker_id":"connect:8083"},"tasks":[{"id":0,"state":"RUNNING","worker_id":"connect:8083"}],"type":"source"}

If you get the message {"error_code":404,"message":"No status found for connector jdbc_source_postgres_01"} then check that the step above in which you created the connector actually succeeded.
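As an extra sanity check (not part of the tutorial steps themselves), you can ask kcat for the metadata of the output topic; once the connector is producing, postgres_cities should be listed along with its partition:

docker exec kcat kcat -b broker:9092 -L -t postgres_cities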

Consume events from the output topic

With the connector running, let’s now inspect the data on the Kafka topic. Here we’ll use kcat because of its rich capabilities for inspecting and displaying the details of Kafka messages:

docker exec -i kcat kcat -b broker:9092 -t postgres_cities \
            -C -s avro -r http://schema-registry:8081 -e \
            -f 'Key     (%K bytes):\t%k\nPayload (%S bytes):\t%s\n--\n'
Key     (6 bytes):      1
Payload (19 bytes):     {"city_id": 1, "name": {"string": "Raleigh"}, "state": {"string": "NC"}}
--
Key     (6 bytes):      2
Payload (25 bytes):     {"city_id": 2, "name": {"string": "Mountain View"}, "state": {"string": "CA"}}
--
Key     (6 bytes):      3
Payload (21 bytes):     {"city_id": 3, "name": {"string": "Knoxville"}, "state": {"string": "TN"}}
--
Key     (6 bytes):      4
Payload (19 bytes):     {"city_id": 4, "name": {"string": "Houston"}, "state": {"string": "TX"}}
--
Key     (6 bytes):      5
Payload (19 bytes):     {"city_id": 5, "name": {"string": "Olympia"}, "state": {"string": "WA"}}
--
Key     (6 bytes):      6
Payload (20 bytes):     {"city_id": 6, "name": {"string": "Bismarck"}, "state": {"string": "ND"}}
--
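Because the connector runs in incrementing mode on the city_id column, it will also pick up rows added after it started. As a quick test (the seventh city below is purely hypothetical), insert a new row and re-run the kcat command above; after the connector’s next poll, a message keyed 7 should appear:

echo "INSERT INTO cities (city_id, name, state) VALUES (7, 'Boston', 'MA');" | docker exec -i postgres bash -c 'psql -U $POSTGRES_USER $POSTGRES_DB'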

Clean up

Shut down the stack by running:

docker compose down

Deploy on Confluent Cloud

Run your app with Confluent Cloud

Instead of running a local Kafka cluster, you may use Confluent Cloud, a fully managed Apache Kafka service.

  1. Sign up for Confluent Cloud.

  2. After you log in to the Confluent Cloud Console, click Environments in the left-hand navigation, click Add cloud environment, and name the environment learn-kafka. Using a new environment keeps your learning resources separate from your other Confluent Cloud resources.

  3. From the Billing & payment section in the menu, apply the promo code CC100KTS to receive an additional $100 of free usage on Confluent Cloud. To avoid having to enter a credit card, also apply the promo code CONFLUENTDEV1; with this code, you will not have to enter a credit card for 30 days or until your credits run out.

  4. Click on LEARN and follow the instructions to launch a Kafka cluster and enable Schema Registry.

Next, from the Confluent Cloud Console, click on Clients to get the cluster-specific configurations, e.g., Kafka cluster bootstrap servers and credentials, Confluent Cloud Schema Registry and credentials, etc., and set the appropriate parameters in your client application.
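As a rough sketch, those parameters typically end up in a Java client properties file like the one below (every placeholder value is an assumption; substitute the cluster-specific values shown in the Console):

# Kafka cluster connection settings (from Confluent Cloud Console > Clients)
bootstrap.servers={{ BOOTSTRAP_SERVERS }}
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username='{{ CLUSTER_API_KEY }}' password='{{ CLUSTER_API_SECRET }}';

# Confluent Cloud Schema Registry settings
schema.registry.url={{ SCHEMA_REGISTRY_URL }}
basic.auth.credentials.source=USER_INFO
basic.auth.user.info={{ SR_API_KEY }}:{{ SR_API_SECRET }}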

Now you’re all set to run your streaming application locally, backed by a Kafka cluster fully managed by Confluent Cloud.