How to find the min/max in a stream of events

Problem:

You have data in a Kafka topic and want to find the minimum or maximum value of a field.


Example use case:

Suppose you have a topic with events that represent movie ticket sales. In this tutorial, we'll write a program that calculates the minimum and maximum revenue of movies by year.

Code example:

Try it

1. Initialize the project

To get started, make a new directory anywhere you’d like for this project:

mkdir aggregate-minmax && cd aggregate-minmax

Then make the following directories to set up the project structure:

mkdir src test

2. Get Confluent Platform

Next, create the following docker-compose.yml file to obtain Confluent Platform:

---
version: '2'

services:
  zookeeper:
    image: confluentinc/cp-zookeeper:5.3.0
    hostname: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000

  broker:
    image: confluentinc/cp-enterprise-kafka:5.3.0
    hostname: broker
    container_name: broker
    depends_on:
      - zookeeper
    ports:
      - "29092:29092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:9092,PLAINTEXT_HOST://localhost:29092
      KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: broker:9092
      CONFLUENT_METRICS_REPORTER_ZOOKEEPER_CONNECT: zookeeper:2181
      CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1
      CONFLUENT_METRICS_ENABLE: 'true'
      CONFLUENT_SUPPORT_CUSTOMER_ID: 'anonymous'

  schema-registry:
    image: confluentinc/cp-schema-registry:5.3.0
    hostname: schema-registry
    container_name: schema-registry
    depends_on:
      - zookeeper
      - broker
    ports:
      - "8081:8081"
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL: 'zookeeper:2181'

  ksql-server:
    image: confluentinc/cp-ksql-server:5.3.0
    hostname: ksql-server
    container_name: ksql-server
    depends_on:
      - broker
      - schema-registry
    ports:
      - "8088:8088"
    environment:
      KSQL_CONFIG_DIR: "/etc/ksql"
      KSQL_LOG4J_OPTS: "-Dlog4j.configuration=file:/etc/ksql/log4j-rolling.properties"
      KSQL_BOOTSTRAP_SERVERS: "broker:9092"
      KSQL_HOST_NAME: ksql-server
      KSQL_LISTENERS: "http://0.0.0.0:8088"
      KSQL_CACHE_MAX_BYTES_BUFFERING: 0
      KSQL_KSQL_SCHEMA_REGISTRY_URL: "http://schema-registry:8081"
      KSQL_PRODUCER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor"
      KSQL_CONSUMER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor"

  ksql-cli:
    image: confluentinc/cp-ksql-cli:5.3.0
    container_name: ksql-cli
    depends_on:
      - broker
      - ksql-server
    entrypoint: /bin/sh
    tty: true
    volumes:
      - ./src:/opt/app/src
      - ./test:/opt/app/test

And launch it in the background by running:

docker-compose up -d
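
The containers can take a little while to start. If you want to confirm that everything is up before continuing, a quick status check (standard Docker Compose, not part of the tutorial files) is:

docker-compose ps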

3. Write the program interactively using the CLI

The best way to interact with KSQL when you’re learning how things work is with the KSQL CLI. Fire it up as follows:

docker exec -it ksql-cli ksql http://ksql-server:8088

Our tutorial computes the highest grossing and lowest grossing films per year in our data set. To keep things simple, we’re going to create a source Kafka topic and KSQL stream with annual sales data in it. In a real-world data pipeline, this would probably be the output of another KSQL query that takes a stream of individual sales events and aggregates them into annual totals, but we’ll save ourselves that trouble and just create the annual sales data directly.
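
For the curious, a sketch of such an upstream aggregation appears below. The TICKET_SALES stream and its SALE_PRICE field are hypothetical, invented here for illustration; they are not part of this tutorial:

-- Hypothetical upstream query: roll individual sale events up into
-- annual totals. Assumes a TICKET_SALES stream with TITLE,
-- RELEASE_YEAR, and SALE_PRICE fields.
CREATE TABLE ANNUAL_MOVIE_SALES AS
    SELECT TITLE,
           RELEASE_YEAR,
           SUM(SALE_PRICE) AS TOTAL_SALES
    FROM TICKET_SALES
    GROUP BY TITLE, RELEASE_YEAR;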

This KSQL DDL statement creates a stream and its underlying Kafka topic to represent the annual sales totals. Note that we are defining the schema for the stream, which includes three fields: title, release_year, and total_sales. We are also specifying that the underlying Kafka topic (which KSQL will auto-create) be called movie-ticket-sales and have just one partition, and that its messages will be in Avro format.

CREATE STREAM MOVIE_SALES (title VARCHAR, release_year INT, total_sales INT)
    WITH (KAFKA_TOPIC='movie-ticket-sales',
          PARTITIONS=1,
          VALUE_FORMAT='avro');

Let’s add a small amount of data to our stream, so we can see our query work. You can copy and paste all these lines into the CLI at once, or if you prefer, open up a second KSQL CLI and copy them in one at a time after you have completed all the subsequent steps, so you can see the results produced in real time.

INSERT INTO MOVIE_SALES (title, release_year, total_sales) VALUES ('Avengers: Endgame', 2019, 856980506);
INSERT INTO MOVIE_SALES (title, release_year, total_sales) VALUES ('Captain Marvel', 2019, 426829839);
INSERT INTO MOVIE_SALES (title, release_year, total_sales) VALUES ('Toy Story 4', 2019, 401486230);
INSERT INTO MOVIE_SALES (title, release_year, total_sales) VALUES ('The Lion King', 2019, 385082142);
INSERT INTO MOVIE_SALES (title, release_year, total_sales) VALUES ('Black Panther', 2018, 700059566);
INSERT INTO MOVIE_SALES (title, release_year, total_sales) VALUES ('Avengers: Infinity War', 2018, 678815482);
INSERT INTO MOVIE_SALES (title, release_year, total_sales) VALUES ('Deadpool 2', 2018, 324512774);
INSERT INTO MOVIE_SALES (title, release_year, total_sales) VALUES ('Beauty and the Beast', 2017, 517218368);
INSERT INTO MOVIE_SALES (title, release_year, total_sales) VALUES ('Wonder Woman', 2017, 412563408);
INSERT INTO MOVIE_SALES (title, release_year, total_sales) VALUES ('Star Wars Ep. VIII: The Last Jedi', 2017, 517218368);

Before we get too far, let’s set the auto.offset.reset configuration parameter to earliest. This means all new KSQL queries will automatically compute their results from the beginning of a stream, rather than the end. This isn’t always what you’ll want to do in production, but it makes query results much easier to see in examples like this.

SET 'auto.offset.reset' = 'earliest';

To continue optimizing the configuration for our tutorial, let’s tell KSQL to buffer the aggregates as it builds them. This makes the query feel like it responds more slowly, but means that you get just one row of output per year (the grouping key), which is more intuitive.

SET 'ksql.streams.cache.max.bytes.buffering' = '10000000';

With our test data in place, let’s try a query to compute the min and max. A SELECT statement all by itself in KSQL is called a transient query, meaning that after we stop it, it is gone and will not keep processing the input stream. We’ll create a persistent query, the counterpart to a transient query, a few steps from now.

If you’re at all familiar with SQL, the text of the query itself is fairly self-explanatory. We are calculating the highest and lowest grossing movie figures by year using the MIN and MAX aggregations on the TOTAL_SALES field. This query will keep running, continuing to return results until you hit CTRL-C. Most KSQL queries are continuous queries that run forever in this way; there is always potentially more input available in the source stream, so the query never finishes on its own.

SELECT RELEASE_YEAR,
       MIN(TOTAL_SALES) AS MIN__TOTAL_SALES,
       MAX(TOTAL_SALES) AS MAX__TOTAL_SALES
FROM MOVIE_SALES
GROUP BY RELEASE_YEAR
LIMIT 2;

This should yield the following output:

2019 | 385082142 | 856980506
2018 | 324512774 | 700059566
Limit Reached
Query terminated

Since the output looks right, the next step is to make the query persistent. This looks exactly like the transient query, except that we have prepended CREATE TABLE MOVIE_FIGURES_BY_YEAR AS to it. This statement returns to the CLI prompt right away, having created a persistent stream processing program running in the KSQL engine, continuously processing input records and updating the resulting MOVIE_FIGURES_BY_YEAR table. Moreover, we don’t see the results of the query displayed in the CLI, because they are updating the newly created table itself. That table is available to other KSQL queries for further processing, and by default all its records are produced to a topic with the same name (MOVIE_FIGURES_BY_YEAR).

CREATE TABLE MOVIE_FIGURES_BY_YEAR AS
    SELECT RELEASE_YEAR,
           MIN(TOTAL_SALES) AS MIN__TOTAL_SALES,
           MAX(TOTAL_SALES) AS MAX__TOTAL_SALES
    FROM MOVIE_SALES
    GROUP BY RELEASE_YEAR;
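
If you’d like to confirm the table’s schema before inspecting its output, KSQL’s standard DESCRIBE command is an optional check:

DESCRIBE MOVIE_FIGURES_BY_YEAR;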

Seeing is believing, so let’s directly inspect that output topic using the print KSQL CLI command. We could also SELECT * FROM MOVIE_FIGURES_BY_YEAR, but here we opt for a more direct approach.

PRINT 'MOVIE_FIGURES_BY_YEAR' FROM BEGINNING LIMIT 2;

This should yield the following output:

Format:AVRO
8/2/19 2:49:04 PM UTC, 2019, {"RELEASE_YEAR": 2019, "MIN__TOTAL_SALES": 385082142, "MAX__TOTAL_SALES": 856980506}
8/2/19 2:49:04 PM UTC, 2018, {"RELEASE_YEAR": 2018, "MIN__TOTAL_SALES": 324512774, "MAX__TOTAL_SALES": 700059566}
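
As noted above, querying the table is an equivalent check. A transient query like this one (bounded with LIMIT so it terminates on its own) shows the same figures:

SELECT * FROM MOVIE_FIGURES_BY_YEAR LIMIT 2;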

4. Write your statements to a file

Now that we have a good KSQL pipeline set up, let’s take our CLI experimentation and save it to a file that we can use outside of this session. Create a file at src/statements.sql with the following content:

CREATE STREAM MOVIE_SALES (title VARCHAR, release_year INT, total_sales INT)
    WITH (KAFKA_TOPIC='movie-ticket-sales',
          PARTITIONS=1,
          VALUE_FORMAT='avro');

CREATE TABLE MOVIE_FIGURES_BY_YEAR AS
    SELECT RELEASE_YEAR,
           MIN(TOTAL_SALES) AS MIN__TOTAL_SALES,
           MAX(TOTAL_SALES) AS MAX__TOTAL_SALES
    FROM MOVIE_SALES
    GROUP BY RELEASE_YEAR;
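
As a convenience, you can also execute this file from the KSQL CLI with the standard RUN SCRIPT command; the path below assumes the volume mount defined in the docker-compose file. Note that if you created the stream and table interactively above, re-running the script will report that they already exist:

RUN SCRIPT '/opt/app/src/statements.sql';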

Test it

1. Create the test data

The Confluent KSQL CLI Docker image contains a program called the ksql-test-runner. We can pass this program a JSON file describing our desired input data, a JSON file containing the intended output results, and a file of KSQL queries to run, and it will tell us whether our queries successfully turn the input into the output. To get started, create a file at test/input.json with the inputs for testing:

{
  "inputs": [
    {"topic": "movie-ticket-sales", "key": null, "value": {"TITLE": "Avengers: Endgame", "RELEASE_YEAR": 2019, "TOTAL_SALES": 856980506}},
    {"topic": "movie-ticket-sales", "key": null, "value": {"TITLE": "Captain Marvel", "RELEASE_YEAR": 2019, "TOTAL_SALES": 426829839}},
    {"topic": "movie-ticket-sales", "key": null, "value": {"TITLE": "Toy Story 4", "RELEASE_YEAR": 2019, "TOTAL_SALES": 401486230}},
    {"topic": "movie-ticket-sales", "key": null, "value": {"TITLE": "The Lion King", "RELEASE_YEAR": 2019, "TOTAL_SALES": 385082142}},
    {"topic": "movie-ticket-sales", "key": null, "value": {"TITLE": "Black Panther", "RELEASE_YEAR": 2018, "TOTAL_SALES": 700059566}},
    {"topic": "movie-ticket-sales", "key": null, "value": {"TITLE": "Avengers: Infinity War", "RELEASE_YEAR": 2018, "TOTAL_SALES": 678815482}},
    {"topic": "movie-ticket-sales", "key": null, "value": {"TITLE": "Deadpool 2", "RELEASE_YEAR": 2018, "TOTAL_SALES": 324512774}},
    {"topic": "movie-ticket-sales", "key": null, "value": {"TITLE": "Beauty and the Beast", "RELEASE_YEAR": 2017, "TOTAL_SALES": 517218368}},
    {"topic": "movie-ticket-sales", "key": null, "value": {"TITLE": "Wonder Woman", "RELEASE_YEAR": 2017, "TOTAL_SALES": 412563408}},
    {"topic": "movie-ticket-sales", "key": null, "value": {"TITLE": "Star Wars Ep. VIII: The Last Jedi", "RELEASE_YEAR": 2017, "TOTAL_SALES": 517218368}}
  ]
}

Next, create a file at test/output.json with the expected outputs. Note that the test runner does not buffer aggregates, so each input record yields one updated min/max row for its year, giving ten expected output records for the ten inputs:

{
  "outputs": [
    {"topic": "MOVIE_FIGURES_BY_YEAR", "key": "2019", "value": {"RELEASE_YEAR": 2019, "MIN__TOTAL_SALES": 856980506, "MAX__TOTAL_SALES": 856980506}, "timestamp": 0},
    {"topic": "MOVIE_FIGURES_BY_YEAR", "key": "2019", "value": {"RELEASE_YEAR": 2019, "MIN__TOTAL_SALES": 426829839, "MAX__TOTAL_SALES": 856980506}, "timestamp": 0},
    {"topic": "MOVIE_FIGURES_BY_YEAR", "key": "2019", "value": {"RELEASE_YEAR": 2019, "MIN__TOTAL_SALES": 401486230, "MAX__TOTAL_SALES": 856980506}, "timestamp": 0},
    {"topic": "MOVIE_FIGURES_BY_YEAR", "key": "2019", "value": {"RELEASE_YEAR": 2019, "MIN__TOTAL_SALES": 385082142, "MAX__TOTAL_SALES": 856980506}, "timestamp": 0},
    {"topic": "MOVIE_FIGURES_BY_YEAR", "key": "2018", "value": {"RELEASE_YEAR": 2018, "MIN__TOTAL_SALES": 700059566, "MAX__TOTAL_SALES": 700059566}, "timestamp": 0},
    {"topic": "MOVIE_FIGURES_BY_YEAR", "key": "2018", "value": {"RELEASE_YEAR": 2018, "MIN__TOTAL_SALES": 678815482, "MAX__TOTAL_SALES": 700059566}, "timestamp": 0},
    {"topic": "MOVIE_FIGURES_BY_YEAR", "key": "2018", "value": {"RELEASE_YEAR": 2018, "MIN__TOTAL_SALES": 324512774, "MAX__TOTAL_SALES": 700059566}, "timestamp": 0},
    {"topic": "MOVIE_FIGURES_BY_YEAR", "key": "2017", "value": {"RELEASE_YEAR": 2017, "MIN__TOTAL_SALES": 517218368, "MAX__TOTAL_SALES": 517218368}, "timestamp": 0},
    {"topic": "MOVIE_FIGURES_BY_YEAR", "key": "2017", "value": {"RELEASE_YEAR": 2017, "MIN__TOTAL_SALES": 412563408, "MAX__TOTAL_SALES": 517218368}, "timestamp": 0},
    {"topic": "MOVIE_FIGURES_BY_YEAR", "key": "2017", "value": {"RELEASE_YEAR": 2017, "MIN__TOTAL_SALES": 412563408, "MAX__TOTAL_SALES": 517218368}, "timestamp": 0}
  ]
}

2. Invoke the tests

Finally, invoke the tests using the test runner and the statements file that you created earlier:

docker exec ksql-cli ksql-test-runner -i /opt/app/test/input.json -s /opt/app/src/statements.sql -o /opt/app/test/output.json

If it passes (and it should), you will see this output:

	 >>> Test passed!

Take it to production

1. Send the statements to the REST endpoint

Launch your statements into production by sending them to the KSQL server REST endpoint with the following command:

statements=$(< src/statements.sql) && \
    echo '{"ksql":"'$statements'", "streamsProperties": {}}' | \
        curl -X "POST" "http://localhost:8088/ksql" \
             -H "Content-Type: application/vnd.ksql.v1+json; charset=utf-8" \
             -d @- | \
        jq
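
To confirm that the server accepted the statements, you can POST a standard SHOW QUERIES statement to the same endpoint:

curl -s -X "POST" "http://localhost:8088/ksql" \
     -H "Content-Type: application/vnd.ksql.v1+json; charset=utf-8" \
     -d '{"ksql": "SHOW QUERIES;", "streamsProperties": {}}' | \
     jq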