How to count a stream of events

Problem:

You have data in a Kafka topic and want to count the number of events based on some criteria.

Example use case:

Suppose you have a topic with events that represent ticket sales for movies. In this tutorial, we'll write a program that calculates the total number of tickets sold per movie.

Code example:

Try it

1. Initialize the project

To get started, make a new directory anywhere you’d like for this project:

mkdir aggregate-count && cd aggregate-count

Then make the following directories to set up its structure:

mkdir src test

2. Get Confluent Platform

Next, create the following docker-compose.yml file to obtain Confluent Platform:

---
version: '2'

services:
  zookeeper:
    image: confluentinc/cp-zookeeper:5.3.0
    hostname: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000

  broker:
    image: confluentinc/cp-enterprise-kafka:5.3.0
    hostname: broker
    container_name: broker
    depends_on:
      - zookeeper
    ports:
      - "29092:29092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:9092,PLAINTEXT_HOST://localhost:29092
      KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: broker:9092
      CONFLUENT_METRICS_REPORTER_ZOOKEEPER_CONNECT: zookeeper:2181
      CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1
      CONFLUENT_METRICS_ENABLE: 'true'
      CONFLUENT_SUPPORT_CUSTOMER_ID: 'anonymous'

  schema-registry:
    image: confluentinc/cp-schema-registry:5.3.0
    hostname: schema-registry
    container_name: schema-registry
    depends_on:
      - zookeeper
      - broker
    ports:
      - "8081:8081"
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL: 'zookeeper:2181'

  ksql-server:
    image: confluentinc/cp-ksql-server:5.3.0
    hostname: ksql-server
    container_name: ksql-server
    depends_on:
      - broker
      - schema-registry
    ports:
      - "8088:8088"
    environment:
      KSQL_CONFIG_DIR: "/etc/ksql"
      KSQL_LOG4J_OPTS: "-Dlog4j.configuration=file:/etc/ksql/log4j-rolling.properties"
      KSQL_BOOTSTRAP_SERVERS: "broker:9092"
      KSQL_HOST_NAME: ksql-server
      KSQL_LISTENERS: "http://0.0.0.0:8088"
      KSQL_CACHE_MAX_BYTES_BUFFERING: 0
      KSQL_KSQL_SCHEMA_REGISTRY_URL: "http://schema-registry:8081"
      KSQL_PRODUCER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor"
      KSQL_CONSUMER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor"

  ksql-cli:
    image: confluentinc/cp-ksql-cli:5.3.0
    container_name: ksql-cli
    depends_on:
      - broker
      - ksql-server
    entrypoint: /bin/sh
    tty: true
    volumes:
      - ./src:/opt/app/src
      - ./test:/opt/app/test

And launch it by running:

docker-compose up -d
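
If you want to confirm that all of the containers came up cleanly before moving on, an optional status check is:

docker-compose ps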

3. Write the program interactively using the CLI

The best way to interact with KSQL when you’re learning how things work is with the KSQL CLI. Fire it up as follows:

docker exec -it ksql-cli ksql http://ksql-server:8088

This tutorial takes a stream of individual movie ticket sales events and counts the total number of tickets sold per movie. Not all ticket prices are the same (apparently some of these theaters are fancier than others), but the task of the KSQL query is just to group and count regardless of ticket price.

This KSQL DDL statement creates a stream and its underlying Kafka topic to represent the movie ticket sales. If the topic already exists, then KSQL simply registers it as the source of data underlying the new stream. The stream has three fields: title, the name of the movie; sale_ts, the time at which the ticket was sold; and ticket_total_value, the price paid for the ticket. The statement also names the underlying Kafka topic movie-ticket-sales, specifies that it should have a single partition, and defines Avro as its data format.

CREATE STREAM MOVIE_TICKET_SALES (title VARCHAR, sale_ts VARCHAR, ticket_total_value INT)
    WITH (KAFKA_TOPIC='movie-ticket-sales',
          PARTITIONS=1,
          VALUE_FORMAT='avro');
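
If you’d like to verify that the stream was registered before inserting any data, these optional CLI commands list the streams KSQL knows about and describe the new stream’s schema (the exact output layout may vary by version):

SHOW STREAMS;
DESCRIBE MOVIE_TICKET_SALES;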

We’ll need some simulated ticket sales for this tutorial to be interesting. You can copy and paste all of these lines into the CLI at once, or, if you prefer, open up a second KSQL CLI and copy them in one at a time after you have completed all the subsequent steps, so you can see the results produced in real time.

INSERT INTO MOVIE_TICKET_SALES (title, sale_ts, ticket_total_value) VALUES ('Die Hard', '2019-07-18T10:00:00Z', 12);
INSERT INTO MOVIE_TICKET_SALES (title, sale_ts, ticket_total_value) VALUES ('Die Hard', '2019-07-18T10:01:00Z', 12);
INSERT INTO MOVIE_TICKET_SALES (title, sale_ts, ticket_total_value) VALUES ('The Godfather', '2019-07-18T10:01:31Z', 12);
INSERT INTO MOVIE_TICKET_SALES (title, sale_ts, ticket_total_value) VALUES ('Die Hard', '2019-07-18T10:01:36Z', 24);
INSERT INTO MOVIE_TICKET_SALES (title, sale_ts, ticket_total_value) VALUES ('The Godfather', '2019-07-18T10:02:00Z', 18);
INSERT INTO MOVIE_TICKET_SALES (title, sale_ts, ticket_total_value) VALUES ('The Big Lebowski', '2019-07-18T11:03:21Z', 12);
INSERT INTO MOVIE_TICKET_SALES (title, sale_ts, ticket_total_value) VALUES ('The Big Lebowski', '2019-07-18T11:03:50Z', 12);
INSERT INTO MOVIE_TICKET_SALES (title, sale_ts, ticket_total_value) VALUES ('The Godfather', '2019-07-18T11:40:00Z', 36);
INSERT INTO MOVIE_TICKET_SALES (title, sale_ts, ticket_total_value) VALUES ('The Godfather', '2019-07-18T11:40:09Z', 18);

Before we get too far, let’s set the auto.offset.reset configuration parameter to earliest. This means all new KSQL queries will automatically compute their results from the beginning of a stream, rather than the end. This isn’t always what you’ll want to do in production, but it makes query results much easier to see in examples like this.

SET 'auto.offset.reset' = 'earliest';
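
With the offset reset in place, you can optionally peek at the raw events you just inserted with a transient query; the LIMIT clause stops it after the nine rows above:

SELECT TITLE, SALE_TS, TICKET_TOTAL_VALUE
FROM MOVIE_TICKET_SALES
LIMIT 9;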

For the purposes of this tutorial only, we are also going to configure KSQL to buffer the aggregates as it builds them. This makes the query feel like it responds more slowly, but means that you get just one row per movie, which makes the concept simpler to follow:

SET 'ksql.streams.cache.max.bytes.buffering' = '10000000';

With our test data and configuration parameters in place, let’s try a query to compute our ticket totals. A SELECT statement all by itself in KSQL is called a transient query, meaning that after we stop it, it is gone and will not keep processing the input stream. That’s what we’re doing in this step. The counterpart to a transient query is a persistent query, which we’ll create a few steps from now.

If you’re familiar with SQL, the text of the query itself is fairly self-explanatory. We are calculating the total number of records in each group, grouped by movie title. Note that COUNT(TICKET_TOTAL_VALUE) is still just counting the number of rows in the group; it is not doing any calculation based on the ticket value itself. This is a standard SQL idiom that applies in KSQL as well.

SELECT TITLE,
       COUNT(TICKET_TOTAL_VALUE) AS TICKETS_SOLD
FROM MOVIE_TICKET_SALES
GROUP BY TITLE
LIMIT 3;

This should yield the following output:

Die Hard | 3
The Big Lebowski | 2
The Godfather | 4
Limit Reached
Query terminated

Since the output looks right, the next step is to make the query persistent. This looks exactly like the transient query, except we have added a CREATE TABLE AS statement to the beginning of it. This statement returns to the CLI prompt right away, having created a persistent stream processing program running in the KSQL engine, continuously processing input records and updating the resulting MOVIE_TICKETS_SOLD table. Moreover, we don’t see the results of the query displayed in the CLI, because they are updating the newly-created table itself. That table is available to other KSQL queries for further processing, and by default all its records are produced to a topic having the same name (MOVIE_TICKETS_SOLD).

CREATE TABLE MOVIE_TICKETS_SOLD AS
    SELECT TITLE,
           COUNT(TICKET_TOTAL_VALUE) AS TICKETS_SOLD
    FROM MOVIE_TICKET_SALES
    GROUP BY TITLE;
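
Because the statement returns to the prompt immediately, it can be reassuring to confirm that the persistent query is actually running. This optional check lists the queries in the engine (the generated query ID will differ from run to run):

SHOW QUERIES;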

Now let’s directly inspect that output topic using the print KSQL CLI command. We could also SELECT * FROM MOVIE_TICKETS_SOLD, but here we opt for a more direct approach.

PRINT 'MOVIE_TICKETS_SOLD' FROM BEGINNING LIMIT 3;

This should yield the following output:

Format:AVRO
7/18/19 10:01:36 AM UTC, Die Hard, {"TITLE": "Die Hard", "TICKETS_SOLD": 3}
7/18/19 11:03:50 AM UTC, The Big Lebowski, {"TITLE": "The Big Lebowski", "TICKETS_SOLD": 2}
7/18/19 11:40:09 AM UTC, The Godfather, {"TITLE": "The Godfather", "TICKETS_SOLD": 4}
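
As mentioned above, you could also query the table directly instead of printing its underlying topic; an optional transient SELECT like the following shows the same per-movie counts:

SELECT * FROM MOVIE_TICKETS_SOLD LIMIT 3;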

4. Write your statements to a file

Now that you have a series of statements that’s doing the right thing, the last step is to put them into a file so that they can be used outside the CLI session. Create a file at src/statements.sql with the following content. Note that this version of the CREATE STREAM statement also sets TIMESTAMP and TIMESTAMP_FORMAT, so that each event’s sale_ts value is used as its record timestamp; the timestamps asserted in the test output below rely on this.

CREATE STREAM MOVIE_TICKET_SALES (title VARCHAR, sale_ts VARCHAR, ticket_total_value INT)
    WITH (KAFKA_TOPIC='movie-ticket-sales',
          PARTITIONS=1,
          VALUE_FORMAT='avro',
          TIMESTAMP='sale_ts',
          TIMESTAMP_FORMAT='yyyy-MM-dd''T''HH:mm:ssX');

CREATE TABLE MOVIE_TICKETS_SOLD AS
    SELECT TITLE,
           COUNT(TICKET_TOTAL_VALUE) AS TICKETS_SOLD
    FROM MOVIE_TICKET_SALES
    GROUP BY TITLE;

Test it

1. Create the test data

The Confluent KSQL CLI Docker image contains a program called ksql-test-runner. We can pass this program a JSON file describing our desired input data, a JSON file containing the intended output results, and a file of KSQL queries to run, and it will tell us whether our queries successfully turn the input into the output. To get started, create a file at test/input.json with the inputs for testing:

{
  "inputs": [
    {"topic": "movie-ticket-sales", "key": null, "value": {"TITLE": "Die Hard", "SALE_TS": "2019-07-18T10:00:00Z", "TICKET_TOTAL_VALUE": 12}},
    {"topic": "movie-ticket-sales", "key": null, "value": {"TITLE": "Die Hard", "SALE_TS": "2019-07-18T10:01:00Z", "TICKET_TOTAL_VALUE": 12}},
    {"topic": "movie-ticket-sales", "key": null, "value": {"TITLE": "The Godfather", "SALE_TS": "2019-07-18T10:01:31Z", "TICKET_TOTAL_VALUE": 12}},
    {"topic": "movie-ticket-sales", "key": null, "value": {"TITLE": "Die Hard", "SALE_TS": "2019-07-18T10:01:36Z", "TICKET_TOTAL_VALUE": 24}},
    {"topic": "movie-ticket-sales", "key": null, "value": {"TITLE": "The Godfather", "SALE_TS": "2019-07-18T10:02:00Z", "TICKET_TOTAL_VALUE": 18}},
    {"topic": "movie-ticket-sales", "key": null, "value": {"TITLE": "The Big Lebowski", "SALE_TS": "2019-07-18T11:03:21Z", "TICKET_TOTAL_VALUE": 12}},
    {"topic": "movie-ticket-sales", "key": null, "value": {"TITLE": "The Big Lebowski", "SALE_TS": "2019-07-18T11:03:50Z", "TICKET_TOTAL_VALUE": 12}},
    {"topic": "movie-ticket-sales", "key": null, "value": {"TITLE": "The Godfather", "SALE_TS": "2019-07-18T11:40:00Z", "TICKET_TOTAL_VALUE": 36}},
    {"topic": "movie-ticket-sales", "key": null, "value": {"TITLE": "The Godfather", "SALE_TS": "2019-07-18T11:40:09Z", "TICKET_TOTAL_VALUE": 18}}
  ]
}

Next, create a file at test/output.json with the expected outputs:

{
  "outputs": [
    {"topic": "MOVIE_TICKETS_SOLD", "key": "Die Hard", "value": {"TITLE": "Die Hard", "TICKETS_SOLD": 1}, "timestamp": 1563444000000},
    {"topic": "MOVIE_TICKETS_SOLD", "key": "Die Hard", "value": {"TITLE": "Die Hard", "TICKETS_SOLD": 2}, "timestamp": 1563444060000},
    {"topic": "MOVIE_TICKETS_SOLD", "key": "The Godfather", "value": {"TITLE": "The Godfather", "TICKETS_SOLD": 1}, "timestamp": 1563444091000},
    {"topic": "MOVIE_TICKETS_SOLD", "key": "Die Hard", "value": {"TITLE": "Die Hard", "TICKETS_SOLD": 3}, "timestamp": 1563444096000},
    {"topic": "MOVIE_TICKETS_SOLD", "key": "The Godfather", "value": {"TITLE": "The Godfather", "TICKETS_SOLD": 2}, "timestamp": 1563444120000},
    {"topic": "MOVIE_TICKETS_SOLD", "key": "The Big Lebowski", "value": {"TITLE": "The Big Lebowski", "TICKETS_SOLD": 1}, "timestamp": 1563447801000},
    {"topic": "MOVIE_TICKETS_SOLD", "key": "The Big Lebowski", "value": {"TITLE": "The Big Lebowski", "TICKETS_SOLD": 2}, "timestamp": 1563447830000},
    {"topic": "MOVIE_TICKETS_SOLD", "key": "The Godfather", "value": {"TITLE": "The Godfather", "TICKETS_SOLD": 3}, "timestamp": 1563450000000},
    {"topic": "MOVIE_TICKETS_SOLD", "key": "The Godfather", "value": {"TITLE": "The Godfather", "TICKETS_SOLD": 4}, "timestamp": 1563450009000}
  ]
}

2. Invoke the tests

Finally, invoke the tests using the test runner and the statements file that you created earlier:

docker exec ksql-cli ksql-test-runner -i /opt/app/test/input.json -s /opt/app/src/statements.sql -o /opt/app/test/output.json

When it passes (how’s that for confidence?), you will see this output:

	 >>> Test passed!

Take it to production

1. Send the statements to the REST endpoint

Launch your statements into production by sending them to the KSQL server REST endpoint with the following command:

statements=$(< src/statements.sql) && \
    echo '{"ksql":"'$statements'", "streamsProperties": {}}' | \
        curl -X "POST" "http://localhost:8088/ksql" \
             -H "Content-Type: application/vnd.ksql.v1+json; charset=utf-8" \
             -d @- | \
        jq
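
To verify that the server accepted the statements, you can optionally ask the same REST endpoint to list its running queries; the response is a JSON document describing each persistent query:

curl -X "POST" "http://localhost:8088/ksql" \
     -H "Content-Type: application/vnd.ksql.v1+json; charset=utf-8" \
     -d '{"ksql": "SHOW QUERIES;", "streamsProperties": {}}' | \
     jq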