How to rekey a stream with a value

Problem:

You have a Kafka topic and want to change the key of its messages. Perhaps the messages have no key and you want to set one, or they have a key and you want to replace it with a field from the record value.

Example use case:

Suppose you have an unkeyed stream of movie ratings from movie-goers. Because the stream is not keyed, ratings for the same movie aren't guaranteed to be placed in the same partition. In this tutorial, we'll write a program that creates a new topic keyed by the movie's ID. Once the key is consistent, it becomes possible to process these ratings at scale and in parallel.

Try it

1. Initialize the project

To get started, make a new directory anywhere you’d like for this project:

mkdir rekey-a-stream && cd rekey-a-stream

Then make the following directories to set up its structure:

mkdir src test

2. Get Confluent Platform

Next, create the following docker-compose.yml file to obtain Confluent Platform:

---
version: '2'

services:
  zookeeper:
    image: confluentinc/cp-zookeeper:5.3.0
    hostname: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000

  broker:
    image: confluentinc/cp-enterprise-kafka:5.3.0
    hostname: broker
    container_name: broker
    depends_on:
      - zookeeper
    ports:
      - "29092:29092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:9092,PLAINTEXT_HOST://localhost:29092
      KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: broker:9092
      CONFLUENT_METRICS_REPORTER_ZOOKEEPER_CONNECT: zookeeper:2181
      CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1
      CONFLUENT_METRICS_ENABLE: 'true'
      CONFLUENT_SUPPORT_CUSTOMER_ID: 'anonymous'

  schema-registry:
    image: confluentinc/cp-schema-registry:5.3.0
    hostname: schema-registry
    container_name: schema-registry
    depends_on:
      - zookeeper
      - broker
    ports:
      - "8081:8081"
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL: 'zookeeper:2181'

  ksql-server:
    image: confluentinc/cp-ksql-server:5.3.0
    hostname: ksql-server
    container_name: ksql-server
    depends_on:
      - broker
      - schema-registry
    ports:
      - "8088:8088"
    environment:
      KSQL_CONFIG_DIR: "/etc/ksql"
      KSQL_LOG4J_OPTS: "-Dlog4j.configuration=file:/etc/ksql/log4j-rolling.properties"
      KSQL_BOOTSTRAP_SERVERS: "broker:9092"
      KSQL_HOST_NAME: ksql-server
      KSQL_APPLICATION_ID: "cp-all-in-one"
      KSQL_LISTENERS: "http://0.0.0.0:8088"
      KSQL_CACHE_MAX_BYTES_BUFFERING: 0
      KSQL_KSQL_SCHEMA_REGISTRY_URL: "http://schema-registry:8081"
      KSQL_PRODUCER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor"
      KSQL_CONSUMER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor"

  ksql-cli:
    image: confluentinc/cp-ksql-cli:5.3.0
    container_name: ksql-cli
    depends_on:
      - broker
      - ksql-server
    entrypoint: /bin/sh
    tty: true
    volumes:
      # Mount the local src and test directories so the statements and
      # test files created later are visible inside the CLI container.
      - ./src:/opt/app/src
      - ./test:/opt/app/test

And launch it by running:

docker-compose up
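
If you'd rather keep your terminal free, the standard docker-compose flags below run the stack in the background instead; this is optional:

docker-compose up -d   # start all containers detached
docker-compose ps      # confirm each service is Up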

3. Write the program interactively using the CLI

To begin developing interactively, open up the KSQL CLI:

docker exec -it ksql-cli ksql http://ksql-server:8088

First, you’ll need to create a Kafka topic and stream to represent the movie ratings data. The following statement creates both in one shot. Notice that the stream has two partitions and no key set.

CREATE STREAM ratings (id INT, rating DOUBLE)
    WITH (kafka_topic='ratings',
          partitions=2,
          value_format='avro');
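
Before inserting data, you can sanity-check what was created. These are standard KSQL CLI commands, shown here as an optional verification:

SHOW TOPICS;
DESCRIBE RATINGS;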

Then insert the ratings data. Because the stream has no key, the records will be distributed across the partitions in an approximately round-robin manner.

INSERT INTO ratings (id, rating) VALUES (294, 8.2);
INSERT INTO ratings (id, rating) VALUES (294, 8.5);
INSERT INTO ratings (id, rating) VALUES (354, 9.9);
INSERT INTO ratings (id, rating) VALUES (354, 9.7);
INSERT INTO ratings (id, rating) VALUES (782, 7.8);
INSERT INTO ratings (id, rating) VALUES (782, 7.7);
INSERT INTO ratings (id, rating) VALUES (128, 8.7);
INSERT INTO ratings (id, rating) VALUES (128, 8.4);
INSERT INTO ratings (id, rating) VALUES (780, 2.1);

Now that you have a stream, let’s examine what key Kafka used for the messages by default. First, we tell KSQL to query data from the beginning of the topic:

SET 'auto.offset.reset' = 'earliest';
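
If you want to confirm that the setting took effect, listing the session properties is an optional check:

SHOW PROPERTIES;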

We can view the existing key of the messages using the ROWKEY column, which KSQL provides:

SELECT ROWKEY, ID, RATING
FROM RATINGS
LIMIT 9;

This should yield roughly the following output. The order will differ depending on how the records were actually inserted:

null | 780 | 2.1
null | 354 | 9.7
null | 128 | 8.7
null | 294 | 8.2
null | 782 | 7.8
null | 782 | 7.7
null | 128 | 8.4
null | 354 | 9.9
null | 294 | 8.5
Limit Reached
Query terminated

Note that the key is null for every message. This means that ratings data for the same movie could be spread across multiple partitions. This is generally not good for scalability when you care about having the same "kind" of data in a single partition.

Let’s fix that. Using KSQL’s appropriately named PARTITION BY clause, we can apply a key to the messages and write the result to a new stream. Here we’ll use the movie identifier, ID. Issue the following statement to create a new stream that is continuously populated by its query:

CREATE STREAM RATINGS_REKEYED
    WITH (KAFKA_TOPIC='ratings_keyed_by_id') AS
    SELECT *
    FROM RATINGS
    PARTITION BY ID;
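
As a side note, the WITH clause of a CREATE STREAM ... AS SELECT can also set properties of the output topic. The sketch below is illustrative (the stream and topic names are made up, and it assumes your KSQL version supports the PARTITIONS property here); it rekeys the same way while giving the output topic four partitions:

CREATE STREAM RATINGS_REKEYED_4P
    WITH (KAFKA_TOPIC='ratings_keyed_by_id_4p',
          PARTITIONS=4) AS
    SELECT *
    FROM RATINGS
    PARTITION BY ID;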

To verify that it’s working, use ROWKEY as before to confirm that the key matches the ID field:

SELECT ROWKEY, ID, RATING
FROM RATINGS_REKEYED
LIMIT 9;

This should yield roughly the following output. The order might vary from what you see here, but the data has been repartitioned so that all ratings for the same movie ID are now in exactly one partition:

780 | 780 | 2.1
354 | 354 | 9.7
128 | 128 | 8.7
294 | 294 | 8.2
782 | 782 | 7.8
782 | 782 | 7.7
128 | 128 | 8.4
354 | 354 | 9.9
294 | 294 | 8.5
Limit Reached
Query terminated
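
If you want to see the key-to-partition mapping directly, you can read the underlying topic with a console tool. Here is a minimal sketch using kafkacat, assuming it is installed on your host (it is not part of the Docker stack above). The values are Avro-encoded and not human-readable, so we print only the partition and key:

kafkacat -b localhost:29092 -t ratings_keyed_by_id -C -e -q \
  -f 'partition %p  key %k\n'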

We can also print out the contents of the output stream’s underlying topic, noting that the second field shown is the message key, and that it matches the ID field in the message value:

PRINT 'ratings_keyed_by_id' FROM BEGINNING LIMIT 9;

This should yield roughly the following output. PRINT pulls from all partitions of a topic.

Format:AVRO
7/25/19 3:17:10 PM UTC, 780, {"ID": 780, "RATING": 2.1}
7/25/19 3:17:10 PM UTC, 354, {"ID": 354, "RATING": 9.7}
7/25/19 3:17:10 PM UTC, 128, {"ID": 128, "RATING": 8.7}
7/25/19 3:17:10 PM UTC, 294, {"ID": 294, "RATING": 8.2}
7/25/19 3:17:10 PM UTC, 782, {"ID": 782, "RATING": 7.8}
7/25/19 3:17:10 PM UTC, 782, {"ID": 782, "RATING": 7.7}
7/25/19 3:17:10 PM UTC, 128, {"ID": 128, "RATING": 8.4}
7/25/19 3:17:10 PM UTC, 354, {"ID": 354, "RATING": 9.9}
7/25/19 3:17:10 PM UTC, 294, {"ID": 294, "RATING": 8.5}

4. Write your statements to a file

Now that you have a series of statements that does the right thing, the last step is to put them into a file so that they can be used outside the CLI session. Note that the INSERT statements are omitted here; the test harness in the next section supplies the input data. Create a file at src/statements.sql with the following content:

CREATE STREAM ratings (id INT, rating DOUBLE)
    WITH (kafka_topic='ratings',
          partitions=2,
          value_format='avro');

CREATE STREAM RATINGS_REKEYED
  WITH (KAFKA_TOPIC='ratings_keyed_by_id') AS
    SELECT *
    FROM RATINGS
    PARTITION BY ID;

Test it

1. Create the test data

Create a file at test/input.json with the inputs for testing:

{
  "inputs": [
    {
      "topic": "ratings",
      "value": {
        "id": 294,
        "rating": 8.2
      }
    },
    {
      "topic": "ratings",
      "value": {
        "id": 294,
        "rating": 8.5
      }
    },
    {
      "topic": "ratings",
      "value": {
        "id": 354,
        "rating": 9.9
      }
    },
    {
      "topic": "ratings",
      "value": {
        "id": 354,
        "rating": 9.7
      }
    },
    {
      "topic": "ratings",
      "value": {
        "id": 782,
        "rating": 7.8
      }
    },
    {
      "topic": "ratings",
      "value": {
        "id": 782,
        "rating": 7.7
      }
    },
    {
      "topic": "ratings",
      "value": {
        "id": 128,
        "rating": 8.7
      }
    },
    {
      "topic": "ratings",
      "value": {
        "id": 128,
        "rating": 8.4
      }
    },
    {
      "topic": "ratings",
      "value": {
        "id": 780,
        "rating": 2.1
      }
    }
  ]
}

Similarly, create a file at test/output.json with the expected outputs:

{
  "outputs": [
    {
      "topic": "ratings_keyed_by_id",
      "key": 294,
      "value": {
        "id": 294,
        "rating": 8.2
      }
    },
    {
      "topic": "ratings_keyed_by_id",
      "key": 294,
      "value": {
        "id": 294,
        "rating": 8.5
      }
    },
    {
      "topic": "ratings_keyed_by_id",
      "key": 354,
      "value": {
        "id": 354,
        "rating": 9.9
      }
    },
    {
      "topic": "ratings_keyed_by_id",
      "key": 354,
      "value": {
        "id": 354,
        "rating": 9.7
      }
    },
    {
      "topic": "ratings_keyed_by_id",
      "key": 782,
      "value": {
        "id": 782,
        "rating": 7.8
      }
    },
    {
      "topic": "ratings_keyed_by_id",
      "key": 782,
      "value": {
        "id": 782,
        "rating": 7.7
      }
    },
    {
      "topic": "ratings_keyed_by_id",
      "key": 128,
      "value": {
        "id": 128,
        "rating": 8.7
      }
    },
    {
      "topic": "ratings_keyed_by_id",
      "key": 128,
      "value": {
        "id": 128,
        "rating": 8.4
      }
    },
    {
      "topic": "ratings_keyed_by_id",
      "key": 780,
      "value": {
        "id": 780,
        "rating": 2.1
      }
    }
  ]
}

2. Invoke the tests

Lastly, invoke the tests using the test runner and the statements file that you created earlier:

docker exec ksql-cli ksql-test-runner -i /opt/app/test/input.json -s /opt/app/src/statements.sql -o /opt/app/test/output.json

The tests should pass:

	 >>> Test passed!

Take it to production

1. Send the statements to the REST API

Launch your statements into production by sending them to the REST API with the following command:

statements=$(< src/statements.sql) && \
    echo '{"ksql":"'$statements'", "streamsProperties": {}}' | \
        curl -X "POST" "http://localhost:8088/ksql" \
             -H "Content-Type: application/vnd.ksql.v1+json; charset=utf-8" \
             -d @- | \
        jq
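
To confirm the statements were accepted, you can ask the server what it is now running. The same REST endpoint accepts any KSQL statement, so this optional check lists the streams and running queries:

echo '{"ksql":"SHOW STREAMS; SHOW QUERIES;", "streamsProperties": {}}' | \
    curl -X "POST" "http://localhost:8088/ksql" \
         -H "Content-Type: application/vnd.ksql.v1+json; charset=utf-8" \
         -d @- | \
    jq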