Working with heterogeneous JSON records

Question:

How do I select fields from a stream of records with different structures and possibly different values?

Example use case:

Suppose you have a topic with JSON-formatted records, but not all of the records have the same structure or value types. You want to write a query that can handle the different structures and pull out specific fields.

Short Answer

Create a stream, defining each outermost element of the JSON structures as VARCHAR:

CREATE STREAM DATA_STREAM (
  JSONType1 VARCHAR,
  JSONType2 VARCHAR,
  JSONType3 VARCHAR
) WITH (KAFKA_TOPIC='source_data',
        VALUE_FORMAT='JSON',
        PARTITIONS=1);

Then you can access fields in the JSON structures using the EXTRACTJSONFIELD function:

CREATE STREAM SUMMARY_REPORTS AS
   SELECT
    EXTRACTJSONFIELD(JSONType1, '$.oneOnlyField') AS SPECIAL_INFO,
    CAST(EXTRACTJSONFIELD(JSONType2, '$.numberField') AS DOUBLE) AS RUNFLD,
    EXTRACTJSONFIELD(JSONType3, '$.fieldD') AS DESCRIPTION
FROM
    DATA_STREAM;

Try it

1
Initialize the project

To get started, make a new directory anywhere you’d like for this project:

mkdir ksql-heterogeneous-json && cd ksql-heterogeneous-json

Then make the following directories to set up its structure:

mkdir src test

2
Get Confluent Platform

Next, create the following docker-compose.yml file to obtain Confluent Platform:

---
version: '2'

services:
  zookeeper:
    image: confluentinc/cp-zookeeper:6.1.0
    hostname: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000

  broker:
    image: confluentinc/cp-kafka:6.1.0
    hostname: broker
    container_name: broker
    depends_on:
      - zookeeper
    ports:
      - "29092:29092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:9092,PLAINTEXT_HOST://localhost:29092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0

  schema-registry:
    image: confluentinc/cp-schema-registry:6.1.0
    hostname: schema-registry
    container_name: schema-registry
    depends_on:
      - broker
    ports:
      - "8081:8081"
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: 'broker:9092'

  ksqldb-server:
    image: confluentinc/ksqldb-server:0.17.0
    hostname: ksqldb-server
    container_name: ksqldb-server
    depends_on:
      - broker
      - schema-registry
    ports:
      - "8088:8088"
    environment:
      KSQL_CONFIG_DIR: "/etc/ksqldb"
      KSQL_LOG4J_OPTS: "-Dlog4j.configuration=file:/etc/ksqldb/log4j.properties"
      KSQL_BOOTSTRAP_SERVERS: "broker:9092"
      KSQL_HOST_NAME: ksqldb-server
      KSQL_LISTENERS: "http://0.0.0.0:8088"
      KSQL_CACHE_MAX_BYTES_BUFFERING: 0
      KSQL_KSQL_SCHEMA_REGISTRY_URL: "http://schema-registry:8081"
      KSQL_KSQL_STREAMS_AUTO_OFFSET_RESET: "earliest"

  ksqldb-cli:
    image: confluentinc/ksqldb-cli:0.17.0
    container_name: ksqldb-cli
    depends_on:
      - broker
      - ksqldb-server
    entrypoint: /bin/sh
    tty: true
    environment:
      KSQL_CONFIG_DIR: "/etc/ksqldb"
    volumes:
      - ./src:/opt/app/src
      - ./test:/opt/app/test

And launch it by running:

docker-compose up -d

3
Problem description

Let’s say you have a Kafka topic source_data that contains JSON-formatted data, but each nested JSON object has a different structure, and within each object the values are a mix of types.

Each object has a field that you want to pull out in a query, and you don’t care about the rest of the structure of the individual JSON objects:

  "JSONType1": {
    "fieldA": "some data",
    "numberField": 1.001,
    "oneOnlyField": "more data", (1)
    "randomField": "random data"
  }
  "JSONType2": {
    "fieldA": "data",
    "fieldB": "b-data",
    "numberField": 98.6   (2)
  }
  "JSONType3": {
    "fieldA": "data",
    "fieldB": "b-data",
    "numberField": 98.6,
    "fieldC": "data",
    "fieldD": "D-data"    (3)
  }
1 The field you want from JSONType1
2 The field you want from JSONType2
3 The field you want from JSONType3

The key to approaching this problem is having some way to generically model each structure, without having to know any details beyond the name of the field you want to extract. Since there is a varying number of fields, you can’t use a ksqlDB STRUCT, and because the values are a mix of types, a ksqlDB MAP isn’t an option either.
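
For comparison, modeling even one of these objects with a STRUCT would mean spelling out every typed field up front, and each of the three object types would need its own declaration. Here’s a hypothetical sketch for JSONType1 alone (the stream name STRUCT_ATTEMPT is made up for illustration):

CREATE STREAM STRUCT_ATTEMPT (
  JSONType1 STRUCT<fieldA VARCHAR,
                   numberField DOUBLE,
                   oneOnlyField VARCHAR,
                   randomField VARCHAR>
) WITH (KAFKA_TOPIC='source_data',
        VALUE_FORMAT='JSON',
        PARTITIONS=1);

Every new or renamed field would force a schema change, which is exactly the coupling we want to avoid.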

4
Create the ksqlDB stream interactively using the CLI

To begin developing interactively, open up the ksqlDB CLI:

docker exec -it ksqldb-cli ksql http://ksqldb-server:8088

The first thing we do is create a stream DATA_STREAM based on the topic source_data. Within the CREATE STREAM statement, you’ll declare each outermost element of the JSON types as VARCHAR.

CREATE STREAM DATA_STREAM (
  JSONType1 VARCHAR,          (1)
  JSONType2 VARCHAR,          (2)
  JSONType3 VARCHAR           (3)
) WITH (KAFKA_TOPIC='source_data',
        VALUE_FORMAT='JSON',
        PARTITIONS=1);
1 Defining outer JSON element of type one as VARCHAR
2 Defining outer JSON element of type two as VARCHAR
3 Defining outer JSON element of type three as VARCHAR

Go ahead and create the stream now by pasting this statement into the ksqlDB window you opened at the beginning of this step.
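
If you’d like to double-check what ksqlDB registered, you can describe the new stream:

DESCRIBE DATA_STREAM;

Each of the three columns should be listed as VARCHAR(STRING). After that, quit the ksqlDB CLI for now by typing exit.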

By defining the outermost elements of the different JSON objects as VARCHAR, we’re setting ourselves up to extract arbitrary fields from the different JSON records, as we’ll see in an upcoming section. But first we need to add some records to the source_data topic, which we’ll do in the next step.

5
Produce events to the input topic

Now let’s produce some records for the DATA_STREAM stream:

docker exec -i broker /usr/bin/kafka-console-producer --bootstrap-server broker:9092 --topic source_data

After starting the console producer, it will wait for your input. To send the records, paste the following into the terminal and press enter:

{ "JSONType1": { "fieldA": "some data", "numberField": 1.001, "oneOnlyField": "more data", "randomField": "random data" }, "JSONType2": { "fieldA": "data", "fieldB": "b-data", "numberField": 98.6 }, "JSONType3": { "fieldA": "data", "fieldB": "b-data", "numberField": 98.6, "fieldC": "data", "fieldD": "D-data" }}
{ "JSONType1": { "fieldA": "some data", "numberField": 2.001, "oneOnlyField": "more data", "randomField": "random data" }, "JSONType2": { "fieldA": "data", "fieldB": "b-data", "numberField": 99.6 }, "JSONType3": { "fieldA": "data", "fieldB": "b-data", "numberField": 98.6, "fieldC": "data", "fieldD": "D-data-2" }}
{ "JSONType1": { "fieldA": "some data", "numberField": 3.001, "oneOnlyField": "more data", "randomField": "random data" }, "JSONType2": { "fieldA": "data", "fieldB": "b-data", "numberField": 100.6 }, "JSONType3": { "fieldA": "data", "fieldB": "b-data", "numberField": 98.6, "fieldC": "data", "fieldD": "D-data-3" }}
{ "JSONType1": { "fieldA": "some data", "numberField": 4.001, "oneOnlyField": "more data", "randomField": "random data" }, "JSONType2": { "fieldA": "data", "fieldB": "b-data", "numberField": 101.6 }, "JSONType3": { "fieldA": "data", "fieldB": "b-data", "numberField": 98.6, "fieldC": "data", "fieldD": "D-data-4" }}

After you’ve sent the records above, you can close the console producer by entering CTRL+C.

6
Run the streaming report interactively with the ksqldb-cli

To begin developing interactively, open up the ksqlDB CLI:

docker exec -it ksqldb-cli ksql http://ksqldb-server:8088

Set ksqlDB to process data from the beginning of each Kafka topic.

SET 'auto.offset.reset' = 'earliest';
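
Before building the extraction query, you can optionally confirm that the raw records arrived by printing the topic directly:

PRINT 'source_data' FROM BEGINNING LIMIT 4;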

Then let’s adjust the column width so we can easily see the results of the query:

SET CLI COLUMN-WIDTH 15;

We need a query that extracts the fields we want from the input records. Since we have defined the top element of each JSON object as a string using the VARCHAR keyword, we can use the ksqlDB EXTRACTJSONFIELD function to extract values at a specified JSONPath. If the requested JSONPath doesn’t exist, the EXTRACTJSONFIELD function returns NULL.
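
For example, none of our records contain a field named missingField (a made-up name for illustration), so querying that path returns NULL rather than failing:

SELECT EXTRACTJSONFIELD(JSONType1, '$.missingField') AS NOT_THERE
FROM DATA_STREAM
EMIT CHANGES
LIMIT 1;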

The result of the EXTRACTJSONFIELD function is always a STRING. To convert the result to another type, you’ll need to use the CAST operator, as we do for numberField in the query below. Run the following query:

SELECT
    EXTRACTJSONFIELD(JSONType1, '$.oneOnlyField') AS SPECIAL_INFO,
    CAST(EXTRACTJSONFIELD(JSONType2, '$.numberField') AS DOUBLE) AS RUNFLD,
    EXTRACTJSONFIELD(JSONType3, '$.fieldD') AS DESCRIPTION
FROM
    DATA_STREAM
EMIT CHANGES
LIMIT 4;

This query should produce the following output:

+---------------+---------------+---------------+
|SPECIAL_INFO   |RUNFLD         |DESCRIPTION    |
+---------------+---------------+---------------+
|more data      |98.6           |D-data         |
|more data      |99.6           |D-data-2       |
|more data      |100.6          |D-data-3       |
|more data      |101.6          |D-data-4       |
Limit Reached
Query terminated

Now that the reporting query works, let’s update it to create a continuous query for your reporting scenario:

CREATE STREAM SUMMARY_REPORTS AS
   SELECT
    EXTRACTJSONFIELD(JSONType1, '$.oneOnlyField') AS SPECIAL_INFO,
    CAST(EXTRACTJSONFIELD(JSONType2, '$.numberField') AS DOUBLE) AS RUNFLD,
    EXTRACTJSONFIELD(JSONType3, '$.fieldD') AS DESCRIPTION
FROM
    DATA_STREAM;
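
If you’d like to confirm that the new stream is populated, you can query it before leaving the CLI:

SELECT * FROM SUMMARY_REPORTS EMIT CHANGES LIMIT 4;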

We’re done with the ksqlDB CLI for now, so go ahead and type exit to quit.

7
Write your statements to a file

Now that you have a series of statements that extract the fields you care about, the last step is to put them into a file so that they can be used outside the CLI session. Create a file at src/statements.sql with the following content:

CREATE STREAM DATA_STREAM (
  JSONType1 VARCHAR,
  JSONType2 VARCHAR,
  JSONType3 VARCHAR
) WITH (KAFKA_TOPIC='source_data',
        VALUE_FORMAT='JSON',
        PARTITIONS=1);

CREATE STREAM SUMMARY_REPORTS AS
   SELECT
    EXTRACTJSONFIELD(JSONType1, '$.oneOnlyField') AS SPECIAL_INFO,
    CAST(EXTRACTJSONFIELD(JSONType2, '$.numberField') AS DOUBLE) AS RUNFLD,
    EXTRACTJSONFIELD(JSONType3, '$.fieldD') AS DESCRIPTION
FROM
    DATA_STREAM;

Test it

1
Create the test data

Create a file at test/input.json with the inputs for testing:

{
  "inputs": [
    {
      "topic" : "source_data",
      "value" :
        { "JSONType1": { "fieldA": "some data", "numberField": 1.001, "oneOnlyField": "more data", "randomField": "random data" },
          "JSONType2": { "fieldA": "data", "fieldB": "b-data", "numberField": 98.6 },
          "JSONType3": { "fieldA": "data", "fieldB": "b-data", "numberField": 98.6, "fieldC": "data", "fieldD": "D-data" }
        }
    },

    {
      "topic" : "source_data",
      "value" :
        { "JSONType1": { "fieldA": "some data", "numberField": 2.001, "oneOnlyField": "more data", "randomField": "random data" },
          "JSONType2": { "fieldA": "data", "fieldB": "b-data", "numberField": 99.6 },
          "JSONType3": { "fieldA": "data", "fieldB": "b-data", "numberField": 98.6, "fieldC": "data", "fieldD": "D-data-2" }
        }
    },

    {
      "topic" : "source_data",
      "value" :
        { "JSONType1": { "fieldA": "some data", "numberField": 3.001, "oneOnlyField": "more data", "randomField": "random data" },
          "JSONType2": { "fieldA": "data", "fieldB": "b-data", "numberField": 100.6 },
          "JSONType3": { "fieldA": "data", "fieldB": "b-data", "numberField": 98.6, "fieldC": "data", "fieldD": "D-data-3" }
        }
    },

    {
      "topic" : "source_data",
      "value" :
        { "JSONType1": { "fieldA": "some data", "numberField": 4.001, "oneOnlyField": "more data", "randomField": "random data" },
          "JSONType2": { "fieldA": "data", "fieldB": "b-data", "numberField": 101.6 },
          "JSONType3": { "fieldA": "data", "fieldB": "b-data", "numberField": 98.6, "fieldC": "data", "fieldD": "D-data-4" }
        }
    }
  ]
}

Create a file at test/output.json with the expected outputs:

{
  "outputs": [
    {
      "topic": "SUMMARY_REPORTS",
      "value": {
        "SPECIAL_INFO" : "more data",
        "RUNFLD": 98.6,
        "DESCRIPTION" : "D-data"
      }
    },
    {
      "topic": "SUMMARY_REPORTS",
      "value": {
         "SPECIAL_INFO" : "more data" ,
         "RUNFLD": 99.6,
         "DESCRIPTION" : "D-data-2"
      }
    },
    {
      "topic": "SUMMARY_REPORTS",
      "value": {
         "SPECIAL_INFO" : "more data" ,
         "RUNFLD": 100.6,
         "DESCRIPTION" : "D-data-3"
      }
    },
    {
      "topic": "SUMMARY_REPORTS",
      "value": {
        "SPECIAL_INFO" : "more data" ,
        "RUNFLD": 101.6,
        "DESCRIPTION" : "D-data-4"
      }
    }
  ]
}

2
Invoke the tests

Invoke the tests using the ksqlDB test runner and the statements file that you created earlier:

docker exec ksqldb-cli ksql-test-runner -i /opt/app/test/input.json -s /opt/app/src/statements.sql -o /opt/app/test/output.json

Which should pass:

	 >>> Test passed!

Take it to production

1
Send the statements to the REST API

Create a file at src/statements.sql with the following content, representing the statements we tested above without the test data:

CREATE STREAM DATA_STREAM (
  JSONType1 VARCHAR,
  JSONType2 VARCHAR,
  JSONType3 VARCHAR
) WITH (KAFKA_TOPIC='source_data',
        VALUE_FORMAT='JSON',
        PARTITIONS=1);

CREATE STREAM SUMMARY_REPORTS AS
   SELECT
    EXTRACTJSONFIELD(JSONType1, '$.oneOnlyField') AS SPECIAL_INFO,
    CAST(EXTRACTJSONFIELD(JSONType2, '$.numberField') AS DOUBLE) AS RUNFLD,
    EXTRACTJSONFIELD(JSONType3, '$.fieldD') AS DESCRIPTION
FROM
    DATA_STREAM;

Launch your statements into production by sending them to the REST API with the following command:

tr '\n' ' ' < src/statements.sql | \
sed 's/;/;\'$'\n''/g' | \
while read stmt; do
    echo '{"ksql":"'$stmt'", "streamsProperties": {}}' | \
        curl -s -X "POST" "http://localhost:8088/ksql" \
             -H "Content-Type: application/vnd.ksql.v1+json; charset=utf-8" \
             -d @- | \
        jq
done
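
If you want to verify that the statements were accepted, one option is to open the ksqlDB CLI again and list the registered streams:

SHOW STREAMS;

You should see both DATA_STREAM and SUMMARY_REPORTS in the output.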

Deploy on Confluent Cloud

1
Run your app with Confluent Cloud

Instead of running a local Kafka cluster, you may use Confluent Cloud, a fully managed Apache Kafka service.

First, create your Kafka cluster in Confluent Cloud. Use the promo code CC100KTS to receive an additional $100 of free usage.

Next, from the Confluent Cloud UI, click on Tools & client config to get the cluster-specific configurations, e.g. Kafka cluster bootstrap servers and credentials, Confluent Cloud Schema Registry and credentials, etc., and set the appropriate parameters in your client application.

Now you’re all set to run your streaming application locally, backed by a Kafka cluster fully managed by Confluent Cloud.