How to sum a stream of events

Question:

How can you calculate the sum of one or more fields from all records in a Kafka topic?

Example use case:

Suppose you have a topic with events that represent ticket sales for movies. Each event contains the movie that the ticket was purchased for, as well as its price. In this tutorial, we'll write a program that calculates the sum of all ticket sales per movie.

Hands-on code example:

Short Answer

Use the reduce() method to apply the sum aggregation, as shown below.

    builder.stream(inputTopic, Consumed.with(Serdes.String(), ticketSaleSerde))
        // Set key to title and value to ticket value
        .map((k, v) -> new KeyValue<>((String) v.getTitle(), v.getTicketTotalValue()))
        // Group by title
        .groupByKey(Grouped.with(Serdes.String(), Serdes.Integer()))
        // Apply SUM aggregation
        .reduce(Integer::sum)
        // Write to stream specified by outputTopic
        .toStream().mapValues(v -> v.toString() + " total sales").to(outputTopic, Produced.with(Serdes.String(), Serdes.String()));

Run it

Prerequisites

This tutorial installs Confluent Platform using Docker. Before proceeding:

  • Install Docker Desktop (version 4.0.0 or later) or Docker Engine (version 19.03.0 or later) if you don’t already have it

  • Install the Docker Compose plugin if you don’t already have it. This isn’t necessary if you have Docker Desktop since it includes Docker Compose.

  • Start Docker if it’s not already running, either by starting Docker Desktop or, if you manage Docker Engine with systemd, via systemctl

  • Verify that Docker is set up properly by ensuring no errors are output when you run docker info and docker compose version on the command line

Initialize the project

To get started, make a new directory anywhere you’d like for this project:

mkdir aggregate-sum && cd aggregate-sum

Get Confluent Platform

Next, create the following docker-compose.yml file to obtain Confluent Platform (for Kafka in the cloud, see Confluent Cloud):

version: '2'
services:
  broker:
    image: confluentinc/cp-kafka:7.4.1
    hostname: broker
    container_name: broker
    ports:
    - 29092:29092
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT,CONTROLLER:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:9092,PLAINTEXT_HOST://localhost:29092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_PROCESS_ROLES: broker,controller
      KAFKA_NODE_ID: 1
      KAFKA_CONTROLLER_QUORUM_VOTERS: 1@broker:29093
      KAFKA_LISTENERS: PLAINTEXT://broker:9092,CONTROLLER://broker:29093,PLAINTEXT_HOST://0.0.0.0:29092
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
      KAFKA_LOG_DIRS: /tmp/kraft-combined-logs
      CLUSTER_ID: MkU3OEVBNTcwNTJENDM2Qk
  schema-registry:
    image: confluentinc/cp-schema-registry:7.3.0
    hostname: schema-registry
    container_name: schema-registry
    depends_on:
    - broker
    ports:
    - 8081:8081
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: broker:9092
      SCHEMA_REGISTRY_LOG4J_ROOT_LOGLEVEL: WARN

And launch it by running:

docker compose up -d

Configure the project

Create the following Gradle build file, named build.gradle, for the project:

buildscript {
    repositories {
        mavenCentral()
    }
    dependencies {
        classpath "gradle.plugin.com.github.jengelman.gradle.plugins:shadow:7.0.0"
    }
}

plugins {
    id "java"
    id "com.github.davidmc24.gradle.plugin.avro" version "1.7.0"
}

sourceCompatibility = JavaVersion.VERSION_17
targetCompatibility = JavaVersion.VERSION_17
version = "0.0.1"

repositories {
    mavenCentral()

    maven {
        url "https://packages.confluent.io/maven"
    }
}

apply plugin: "com.github.johnrengelman.shadow"

dependencies {
    implementation "org.apache.avro:avro:1.11.1"
    implementation "org.slf4j:slf4j-simple:2.0.7"
    implementation 'org.apache.kafka:kafka-streams:3.4.0'
    implementation ('org.apache.kafka:kafka-clients') {
       version {
           strictly '3.4.0'
        }
      }
    implementation "io.confluent:kafka-streams-avro-serde:7.3.0"
    testImplementation "org.apache.kafka:kafka-streams-test-utils:3.4.0"
    testImplementation "junit:junit:4.13.2"
}

test {
    testLogging {
        outputs.upToDateWhen { false }
        showStandardStreams = true
        exceptionFormat = "full"
    }
}

jar {
  manifest {
    attributes(
      "Class-Path": configurations.compileClasspath.collect { it.getName() }.join(" "),
      "Main-Class": "io.confluent.developer.AggregatingSum"
    )
  }
}

shadowJar {
    archiveBaseName = "kstreams-aggregating-sum-standalone"
    archiveClassifier = ''
}

And be sure to run the following command to obtain the Gradle wrapper:

gradle wrapper

Next, create a directory for configuration data:

mkdir configuration

Then create a development file at configuration/dev.properties:

application.id=aggregating-sum-app
bootstrap.servers=127.0.0.1:29092
schema.registry.url=http://127.0.0.1:8081

input.topic.name=movie-ticket-sales
input.topic.partitions=1
input.topic.replication.factor=1

output.topic.name=movie-revenue
output.topic.partitions=1
output.topic.replication.factor=1

Create a schema for the events

Create a directory for the schemas that represent the events in the stream:

mkdir -p src/main/avro

Then create the following Avro schema file at src/main/avro/ticket-sale.avsc for the ticket sale events:

{
  "namespace": "io.confluent.developer.avro",
  "type": "record",
  "name": "TicketSale",
  "fields": [
    {"name": "title", "type": "string"},
    {"name": "sale_ts", "type": "string"},
    {"name": "ticket_total_value", "type": "int"}
  ]
}

Because this Avro schema is used in the Java code, it needs to be compiled. Run the following:

./gradlew build
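
This build also runs the Avro Gradle plugin, which generates an io.confluent.developer.avro.TicketSale class from the schema above. As a quick sketch (not part of the project files), the generated class behaves like a regular POJO whose constructor and getters mirror the schema fields:

// Sketch only: using the Avro-generated TicketSale class.
import io.confluent.developer.avro.TicketSale;

public class TicketSaleExample {
  public static void main(String[] args) {
    // Constructor arguments follow the schema field order: title, sale_ts, ticket_total_value
    TicketSale sale = new TicketSale("Die Hard", "2019-07-18T10:00:00Z", 12);
    System.out.println(sale.getTitle() + " ticket sold for " + sale.getTicketTotalValue());
  }
}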

Create the Kafka Streams topology

Create a directory for the Java files in this project:

mkdir -p src/main/java/io/confluent/developer

Then create the following file at src/main/java/io/confluent/developer/AggregatingSum.java. Let’s take a close look at the buildTopology() method, which uses the Kafka Streams DSL.

The first thing the method does is create an instance of StreamsBuilder, which is the helper object that lets us build our topology. With our builder in hand, we can apply the following methods:

  1. Call the stream() method to create a KStream<String, TicketSale> object.

  2. Since we can’t make any assumptions about the key of this stream, we have to repartition it explicitly. We use the map() method for that, creating a new KeyValue instance for each record that uses the movie title as the new key and the ticket price as the new value.

  3. Group the events by that new key by calling the groupByKey() method. This returns a KGroupedStream object.

  4. Use the reduce() method to apply the sum aggregation.

  5. Convert the table of sums to a stream with the toStream() method, format each total with mapValues(), and write the results to the specified output topic with to().

package io.confluent.developer;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.Produced;

import java.io.FileInputStream;
import java.io.InputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.CountDownLatch;
import java.time.Duration;

import io.confluent.common.utils.TestUtils;
import io.confluent.developer.avro.TicketSale;
import io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde;

import static io.confluent.kafka.serializers.AbstractKafkaSchemaSerDeConfig.SCHEMA_REGISTRY_URL_CONFIG;

public class AggregatingSum {

  private SpecificAvroSerde<TicketSale> ticketSaleSerde(final Properties allProps) {
    final SpecificAvroSerde<TicketSale> serde = new SpecificAvroSerde<>();
    Map<String, String> config = (Map)allProps;
    serde.configure(config, false);
    return serde;
  }

  public Topology buildTopology(Properties allProps,
                                final SpecificAvroSerde<TicketSale> ticketSaleSerde) {
    final StreamsBuilder builder = new StreamsBuilder();

    final String inputTopic = allProps.getProperty("input.topic.name");
    final String outputTopic = allProps.getProperty("output.topic.name");

    builder.stream(inputTopic, Consumed.with(Serdes.String(), ticketSaleSerde))
        // Set key to title and value to ticket value
        .map((k, v) -> new KeyValue<>((String) v.getTitle(), v.getTicketTotalValue()))
        // Group by title
        .groupByKey(Grouped.with(Serdes.String(), Serdes.Integer()))
        // Apply SUM aggregation
        .reduce(Integer::sum)
        // Write to stream specified by outputTopic
        .toStream().mapValues(v -> v.toString() + " total sales").to(outputTopic, Produced.with(Serdes.String(), Serdes.String()));

    return builder.build();
  }

  public void createTopics(Properties allProps) {
    AdminClient client = AdminClient.create(allProps);

    List<NewTopic> topics = new ArrayList<>();
    topics.add(new NewTopic(
        allProps.getProperty("input.topic.name"),
        Integer.parseInt(allProps.getProperty("input.topic.partitions")),
        Short.parseShort(allProps.getProperty("input.topic.replication.factor"))));
    topics.add(new NewTopic(
        allProps.getProperty("output.topic.name"),
        Integer.parseInt(allProps.getProperty("output.topic.partitions")),
        Short.parseShort(allProps.getProperty("output.topic.replication.factor"))));

    client.createTopics(topics);
    client.close();
  }

  public Properties loadEnvProperties(String fileName) throws IOException {
    Properties allProps = new Properties();
    FileInputStream input = new FileInputStream(fileName);
    allProps.load(input);
    input.close();

    return allProps;
  }

  public static void main(String[] args) throws IOException {
    if (args.length < 1) {
      throw new IllegalArgumentException(
          "This program takes one argument: the path to an environment configuration file.");
    }

    new AggregatingSum().runRecipe(args[0]);
  }

  private void runRecipe(final String configPath) throws IOException {
    final Properties allProps = new Properties();
    try (InputStream inputStream = new FileInputStream(configPath)) {
      allProps.load(inputStream);
    }
    allProps.put(StreamsConfig.APPLICATION_ID_CONFIG, allProps.getProperty("application.id"));
    allProps.put(StreamsConfig.STATE_DIR_CONFIG, TestUtils.tempDirectory().getPath());
    allProps.put(StreamsConfig.STATESTORE_CACHE_MAX_BYTES_CONFIG, 0);

    Topology topology = this.buildTopology(allProps, this.ticketSaleSerde(allProps));
    this.createTopics(allProps);

    final KafkaStreams streams = new KafkaStreams(topology, allProps);
    final CountDownLatch latch = new CountDownLatch(1);

    // Attach shutdown handler to catch Control-C.
    Runtime.getRuntime().addShutdownHook(new Thread("streams-shutdown-hook") {
      @Override
      public void run() {
        streams.close(Duration.ofSeconds(5));
        latch.countDown();
      }
    });

    try {
      streams.start();
      latch.await();
    } catch (Throwable e) {
      System.exit(1);
    }
    System.exit(0);

  }
}
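
As a side note, reduce() works here because the mapped value (the ticket price) and the running aggregate share the same Integer type. If you preferred to keep the full TicketSale as the grouped value and sum inside the aggregation step, a sketch using aggregate() instead of reduce() might look like the following. This is an assumption-laden variation, not part of the tutorial code; it reuses the same properties and serde and additionally needs org.apache.kafka.streams.kstream.Materialized imported.

// Sketch only: an alternative topology that uses aggregate() instead of reduce().
public Topology buildTopologyWithAggregate(Properties allProps,
                                           final SpecificAvroSerde<TicketSale> ticketSaleSerde) {
  final StreamsBuilder builder = new StreamsBuilder();
  final String inputTopic = allProps.getProperty("input.topic.name");
  final String outputTopic = allProps.getProperty("output.topic.name");

  builder.stream(inputTopic, Consumed.with(Serdes.String(), ticketSaleSerde))
      // Re-key by title, keeping the full TicketSale as the value
      .map((k, v) -> new KeyValue<>((String) v.getTitle(), v))
      .groupByKey(Grouped.with(Serdes.String(), ticketSaleSerde))
      // Start each movie's total at 0 and add every ticket's value to the running sum
      .aggregate(
          () -> 0,
          (title, sale, runningTotal) -> runningTotal + sale.getTicketTotalValue(),
          Materialized.with(Serdes.String(), Serdes.Integer()))
      .toStream()
      .mapValues(total -> total + " total sales")
      .to(outputTopic, Produced.with(Serdes.String(), Serdes.String()));

  return builder.build();
}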

Compile and run the Kafka Streams program

In your terminal, run:

./gradlew shadowJar

Now that an uberjar for the Kafka Streams application has been built, you can launch it locally. When you run the following, the prompt won’t return, because the application will run until you exit it:

java -jar build/libs/kstreams-aggregating-sum-standalone-0.0.1.jar configuration/dev.properties

Produce events to the input topic

In a new terminal, run:

docker exec -i schema-registry /usr/bin/kafka-avro-console-producer --topic movie-ticket-sales --bootstrap-server broker:9092 --property value.schema="$(< src/main/avro/ticket-sale.avsc)"

When the console producer starts, it will log some messages and hang, waiting for your input. Type in one line at a time and press enter to send it. Each line represents an event. To send all of the events below, paste the following into the prompt and press enter:

{"title":"Die Hard","sale_ts":"2019-07-18T10:00:00Z","ticket_total_value":12}
{"title":"Die Hard","sale_ts":"2019-07-18T10:01:00Z","ticket_total_value":12}
{"title":"The Godfather","sale_ts":"2019-07-18T10:01:31Z","ticket_total_value":12}
{"title":"Die Hard","sale_ts":"2019-07-18T10:01:36Z","ticket_total_value":24}
{"title":"The Godfather","sale_ts":"2019-07-18T10:02:00Z","ticket_total_value":18}
{"title":"The Big Lebowski","sale_ts":"2019-07-18T11:03:21Z","ticket_total_value":12}
{"title":"The Big Lebowski","sale_ts":"2019-07-18T11:03:50Z","ticket_total_value":12}
{"title":"The Godfather","sale_ts":"2019-07-18T11:40:00Z","ticket_total_value":36}
{"title":"The Godfather","sale_ts":"2019-07-18T11:40:09Z","ticket_total_value":18}

Consume aggregated sum from the output topic

Leaving your original terminal running, open another to consume the aggregated totals produced by your application:

docker exec -it broker /usr/bin/kafka-console-consumer --topic movie-revenue --bootstrap-server broker:9092 --from-beginning --property print.key=true

After the consumer starts, you should see the following messages. Note that for every key (movie), a sequence of output records (running sums) is emitted. Each record represents an update to the sum, and one is sent for every movie event because caching is disabled in the code by setting StreamsConfig.STATESTORE_CACHE_MAX_BYTES_CONFIG to 0. Read more on Record caches in the DSL.

The consumer prompt will hang, waiting for more events to arrive. To continue studying the example, send more events through the input terminal prompt. Otherwise, you can Control-C to exit the process.

Die Hard	12 total sales
Die Hard	24 total sales
The Godfather	12 total sales
Die Hard	48 total sales
The Godfather	30 total sales
The Big Lebowski	12 total sales
The Big Lebowski	24 total sales
The Godfather	66 total sales
The Godfather	84 total sales
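
These per-event updates appear because the application sets the state store cache size to 0. If you would rather have updates batched per key, you could leave caching enabled instead. A minimal sketch of the relevant settings, applied to the same allProps used in runRecipe() (the 10 MB cache size and 1-second commit interval are arbitrary example values):

// Sketch only: re-enable record caching so the sum is forwarded less often per key.
allProps.put(StreamsConfig.STATESTORE_CACHE_MAX_BYTES_CONFIG, 10 * 1024 * 1024L); // cache up to 10 MB
allProps.put(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG, 1000); // flush caches roughly once per second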

Test it

Create a test configuration file

First, create a test file at configuration/test.properties:

application.id=aggregating-sum-app
bootstrap.servers=127.0.0.1:29092
schema.registry.url=mock://SR_DUMMY_URL:8081

input.topic.name=movie-ticket-sales
input.topic.partitions=1
input.topic.replication.factor=1

output.topic.name=movie-revenue
output.topic.partitions=1
output.topic.replication.factor=1

Write a test

Then, create a directory for the tests to live in:

mkdir -p src/test/java/io/confluent/developer

Create the following test file at src/test/java/io/confluent/developer/AggregatingSumTest.java:

package io.confluent.developer;

import org.apache.kafka.common.serialization.Deserializer;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.serialization.Serializer;
import org.apache.kafka.streams.TestInputTopic;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.TopologyTestDriver;
import org.junit.After;
import org.junit.Assert;
import org.junit.Test;

import java.io.IOException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.Properties;
import java.util.stream.Collectors;

import io.confluent.developer.avro.TicketSale;
import io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde;

import static java.util.Arrays.asList;

public class AggregatingSumTest {

  private final static String TEST_CONFIG_FILE = "configuration/test.properties";
  private TopologyTestDriver testDriver;

  private SpecificAvroSerde<TicketSale> makeSerializer(Properties allProps) {
    SpecificAvroSerde<TicketSale> serde = new SpecificAvroSerde<>();

    Map<String, String> config = new HashMap<>();
    config.put("schema.registry.url", allProps.getProperty("schema.registry.url"));
    serde.configure(config, false);

    return serde;
  }

  @Test
  public void shouldSumTicketSales() throws IOException {
    AggregatingSum aggSum = new AggregatingSum();
    Properties allProps = aggSum.loadEnvProperties(TEST_CONFIG_FILE);

    String inputTopic = allProps.getProperty("input.topic.name");
    String outputTopic = allProps.getProperty("output.topic.name");

    final SpecificAvroSerde<TicketSale> ticketSaleSpecificAvroSerde = makeSerializer(allProps);

    Topology topology = aggSum.buildTopology(allProps, ticketSaleSpecificAvroSerde);
    testDriver = new TopologyTestDriver(topology, allProps);

    Serializer<String> keySerializer = Serdes.String().serializer();
    Deserializer<String> keyDeserializer = Serdes.String().deserializer();

    final TestInputTopic<String, TicketSale>
        testDriverInputTopic =
        testDriver.createInputTopic(inputTopic, keySerializer, ticketSaleSpecificAvroSerde.serializer());

    final List<TicketSale>
        input = asList(
                  new TicketSale("Die Hard", "2019-07-18T10:00:00Z", 12),
                  new TicketSale("Die Hard", "2019-07-18T10:01:00Z", 12),
                  new TicketSale("The Godfather", "2019-07-18T10:01:31Z", 12),
                  new TicketSale("Die Hard", "2019-07-18T10:01:36Z", 24),
                  new TicketSale("The Godfather", "2019-07-18T10:02:00Z", 18),
                  new TicketSale("The Big Lebowski", "2019-07-18T11:03:21Z", 12),
                  new TicketSale("The Big Lebowski", "2019-07-18T11:03:50Z", 12),
                  new TicketSale("The Godfather", "2019-07-18T11:40:00Z", 36),
                  new TicketSale("The Godfather", "2019-07-18T11:40:09Z", 18)
                );

    List<String> expectedOutput = new ArrayList<>(Arrays.asList("12 total sales", "24 total sales", "12 total sales", "48 total sales", "30 total sales", "12 total sales", "24 total sales", "66 total sales", "84 total sales"));

    for (TicketSale ticketSale : input) {
      testDriverInputTopic.pipeInput("", ticketSale);
    }

    List<String> actualOutput =
        testDriver
            .createOutputTopic(outputTopic, keyDeserializer, Serdes.String().deserializer())
            .readKeyValuesToList()
            .stream()
            .filter(Objects::nonNull)
            .map(record -> record.value.toString())
            .collect(Collectors.toList());

    System.out.println(actualOutput);
    Assert.assertEquals(expectedOutput, actualOutput);

  }

  @After
  public void cleanup() {
    testDriver.close();
  }

}
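
Because the topology re-keys every record by movie title, the output record keys are the titles themselves. If you also want to assert on the keys, a small variation of the read step might look like the following sketch; it reuses testDriver, outputTopic, and keyDeserializer from the test above and additionally needs org.apache.kafka.streams.KeyValue imported.

// Sketch only: read keys and values together and spot-check the first output record.
List<KeyValue<String, String>> records = testDriver
    .createOutputTopic(outputTopic, keyDeserializer, Serdes.String().deserializer())
    .readKeyValuesToList();

Assert.assertEquals("Die Hard", records.get(0).key);
Assert.assertEquals("12 total sales", records.get(0).value);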

Invoke the tests

Now run the test, which is as simple as:

./gradlew test

Deploy on Confluent Cloud

Run your app with Confluent Cloud

Instead of running a local Kafka cluster, you may use Confluent Cloud, a fully managed Apache Kafka service.

  1. Sign up for Confluent Cloud, a fully managed Apache Kafka service.

  2. After you log in to Confluent Cloud Console, click Environments in the lefthand navigation, click on Add cloud environment, and name the environment learn-kafka. Using a new environment keeps your learning resources separate from your other Confluent Cloud resources.

  3. From the Billing & payment section in the menu, apply the promo code CC100KTS to receive an additional $100 free usage on Confluent Cloud (details).

  4. Click on LEARN and follow the instructions to launch a Kafka cluster and enable Schema Registry.

Next, from the Confluent Cloud Console, click on Clients to get the cluster-specific configurations, such as the Kafka cluster bootstrap servers and credentials and the Confluent Cloud Schema Registry endpoint and credentials, and set the appropriate parameters in your client application. For this tutorial, add the following properties to the client application’s input properties file, substituting the curly-brace placeholders with your Confluent Cloud values.

# Required connection configs for Kafka producer, consumer, and admin
bootstrap.servers={{ BROKER_ENDPOINT }}
security.protocol=SASL_SSL
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username='{{ CLUSTER_API_KEY }}' password='{{ CLUSTER_API_SECRET }}';
sasl.mechanism=PLAIN
# Required for correctness in Apache Kafka clients prior to 2.6
client.dns.lookup=use_all_dns_ips

# Best practice for Kafka producer to prevent data loss
acks=all

# Required connection configs for Confluent Cloud Schema Registry
schema.registry.url=https://{{ SR_ENDPOINT }}
basic.auth.credentials.source=USER_INFO
schema.registry.basic.auth.user.info={{ SR_API_KEY }}:{{ SR_API_SECRET }}

Now you’re all set to run your streaming application locally, backed by a Kafka cluster fully managed by Confluent Cloud.