How to schedule operations in Kafka Streams

Question:

How can you schedule recurring operations in Kafka Streams?

Example use case:

You'd like to have some periodic functionality execute in your Kafka Streams application. In this tutorial, you'll learn how to use punctuations in Kafka Streams to execute work at regular intervals.

Hands-on code example:

Run it

Prerequisites

This tutorial installs Confluent Platform using Docker. Before proceeding:

  • Install Docker Desktop (version 4.0.0 or later) or Docker Engine (version 19.03.0 or later) if you don’t already have it

  • Install the Docker Compose plugin if you don’t already have it. This isn’t necessary if you have Docker Desktop, since it includes Docker Compose.

  • Start Docker if it’s not already running, either by starting Docker Desktop or, if you manage Docker Engine with systemd, via systemctl

  • Verify that Docker is set up properly by ensuring no errors are output when you run docker info and docker compose version on the command line

Initialize the project

To get started, make a new directory anywhere you’d like for this project:

mkdir kafka-streams-schedule-operations && cd kafka-streams-schedule-operations

Get Confluent Platform

Create a Dockerfile called Dockerfile-connect that builds a custom container for Kafka Connect bundled with the free and open source Kafka Connect Datagen connector, installed from Confluent Hub.

FROM confluentinc/cp-kafka-connect-base:7.3.0

ENV CONNECT_PLUGIN_PATH="/usr/share/java,/usr/share/confluent-hub-components"

RUN confluent-hub install --no-prompt confluentinc/kafka-connect-datagen:0.6.0

Next, create the following docker-compose.yml file to obtain Confluent Platform (for Kafka in the cloud, see Confluent Cloud):

version: '2'
services:
  broker:
    image: confluentinc/cp-kafka:7.4.1
    hostname: broker
    container_name: broker
    ports:
    - 29092:29092
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT,CONTROLLER:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:9092,PLAINTEXT_HOST://localhost:29092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_PROCESS_ROLES: broker,controller
      KAFKA_NODE_ID: 1
      KAFKA_CONTROLLER_QUORUM_VOTERS: 1@broker:29093
      KAFKA_LISTENERS: PLAINTEXT://broker:9092,CONTROLLER://broker:29093,PLAINTEXT_HOST://0.0.0.0:29092
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
      KAFKA_LOG_DIRS: /tmp/kraft-combined-logs
      CLUSTER_ID: MkU3OEVBNTcwNTJENDM2Qk
  schema-registry:
    image: confluentinc/cp-schema-registry:7.3.0
    hostname: schema-registry
    container_name: schema-registry
    depends_on:
    - broker
    ports:
    - 8081:8081
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: broker:9092
  connect:
    image: localimage/kafka-connect-datagen:latest
    build:
      context: .
      dockerfile: Dockerfile-connect
    container_name: connect
    depends_on:
    - broker
    - schema-registry
    ports:
    - 8083:8083
    volumes:
    - ./datagen-logintime.avsc:/tmp/datagen-logintime.avsc
    environment:
      CONNECT_BOOTSTRAP_SERVERS: broker:9092
      CONNECT_REST_ADVERTISED_HOST_NAME: connect
      CONNECT_GROUP_ID: kafka-connect
      CONNECT_CONFIG_STORAGE_TOPIC: _kafka-connect-configs
      CONNECT_OFFSET_STORAGE_TOPIC: _kafka-connect-offsets
      CONNECT_STATUS_STORAGE_TOPIC: _kafka-connect-status
      CONNECT_KEY_CONVERTER: org.apache.kafka.connect.storage.StringConverter
      CONNECT_VALUE_CONVERTER: io.confluent.connect.avro.AvroConverter
      CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: http://schema-registry:8081
      CONNECT_LOG4J_ROOT_LOGLEVEL: INFO
      CONNECT_LOG4J_LOGGERS: org.apache.kafka.connect.runtime.rest=WARN,org.reflections=ERROR
      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: '1'
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: '1'
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: '1'

Now launch Confluent Platform by running the following command. Note the --build argument, which automatically builds the Docker image for Kafka Connect and the bundled kafka-connect-datagen connector.

docker compose up -d --build

Configure the project

Create the following Gradle build file, named build.gradle for the project:

buildscript {
    repositories {
        mavenCentral()
    }
    dependencies {
        classpath "gradle.plugin.com.github.jengelman.gradle.plugins:shadow:7.0.0"
    }
}

plugins {
    id "java"
    id "idea"
    id "eclipse"
    id "com.github.davidmc24.gradle.plugin.avro" version "1.7.0"
}

sourceCompatibility = JavaVersion.VERSION_17
targetCompatibility = JavaVersion.VERSION_17
version = "0.0.1"

repositories {
    mavenCentral()

    maven {
        url "https://packages.confluent.io/maven"
    }
}

apply plugin: "com.github.johnrengelman.shadow"

dependencies {
    implementation "org.apache.avro:avro:1.11.1"
    implementation "org.slf4j:slf4j-simple:2.0.7"
    implementation "org.apache.kafka:kafka-streams:3.1.0"
    implementation "io.confluent:kafka-streams-avro-serde:7.1.0"
    implementation "org.apache.kafka:kafka-clients:3.1.0"
    testImplementation "org.apache.kafka:kafka-streams-test-utils:3.1.0"
    testImplementation "junit:junit:4.13.2"
    testImplementation 'org.hamcrest:hamcrest:2.2'
}

test {
    testLogging {
        outputs.upToDateWhen { false }
        showStandardStreams = true
        exceptionFormat = "full"
    }
}

jar {
  manifest {
    attributes(
      "Class-Path": configurations.compileClasspath.collect { it.getName() }.join(" "),
      "Main-Class": "io.confluent.developer.KafkaStreamsPunctuation"
    )
  }
}

shadowJar {
    archiveBaseName = "kafka-streams-schedule-operations-standalone"
    archiveClassifier = ''
}

And be sure to run the following command to obtain the Gradle wrapper:

gradle wrapper

Next, create a directory for configuration data:

mkdir configuration

Then create a development file at configuration/dev.properties:

application.id=kafka-streams-schedule-operations
bootstrap.servers=localhost:29092
schema.registry.url=http://localhost:8081

input.topic.name=login-events
input.topic.partitions=1
input.topic.replication.factor=1

output.topic.name=output-topic
output.topic.partitions=1
output.topic.replication.factor=1

Create a schema for the model object

Create a directory for the schemas that represent the events in the stream:

mkdir -p src/main/avro

Then create the following Avro schema file at src/main/avro/logintime.avsc for our LoginTime object:

{
  "namespace": "io.confluent.developer.avro",
  "type": "record",
  "name": "LoginTime",
  "fields": [
    {"name": "logintime", "type": "long" },
    {"name": "userid", "type": "string" },
    {"name": "appid", "type": "string" }
  ]
}

Because we will use an Avro schema in our Java code, we’ll need to compile it. The Gradle Avro plugin is a part of the build, so it will see your new Avro files, generate Java code for them, and compile those and all other Java sources. Run this command to get it all done:

./gradlew build
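
If you'd like to sanity-check the generated code, the Avro plugin produces a plain Java class with a builder. Here's a minimal sketch (the class and accessors come from logintime.avsc; the class name and field values below are made up for illustration):

import io.confluent.developer.avro.LoginTime;

public class GeneratedClassCheck {
    public static void main(String[] args) {
        // Build an instance using the builder generated from logintime.avsc
        final LoginTime login = LoginTime.newBuilder()
                .setUserid("user-1")     // illustrative values only
                .setAppid("app-1")
                .setLogintime(42L)
                .build();
        System.out.println(login.getUserid() + " logged " + login.getLogintime() + " on " + login.getAppid());
    }
}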

Create the Kafka Streams topology

Create a directory for the Java files in this project:

mkdir -p src/main/java/io/confluent/developer

Before you create the Kafka Streams application file, let's go over the key points of the application. In this tutorial, instead of performing an operation on each key-value pair, you want to store the results in a state store and execute your business logic at regular intervals. In other words, you want to schedule an operation, and Kafka Streams will run your code at regular intervals. In this case you'll use the ProcessorContext.schedule method.

To use the ProcessorContext you need to build your Kafka Streams application using the Processor API or use one of the DSL methods that provide Processor API integration. In this tutorial you'll go for the latter option and use KStream.transform.

Since the KStream.transform method can potentially change the key, using this method flags the KStream instance as needing a repartition. But the repartition only happens if you perform a join or an aggregation after the transform. We use transform in this tutorial because it makes for a better example: it lets you use the ProcessorContext.forward method. Additionally, you're not doing any joins or aggregations, so no repartition is required. But it's important to consider your requirements, and in most cases you should use transformValues instead.
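
For comparison, here's a minimal sketch (not the code this tutorial uses) of what a transformValues version could look like, assuming the same "logintime-store" state store. Because transformValues can't change the key, it never sets the repartition flag, but ProcessorContext.forward is not supported from a value transformer:

loginTimeStream.transformValues(() -> new ValueTransformerWithKey<String, LoginTime, Long>() {
    private KeyValueStore<String, Long> store;

    @Override
    public void init(final ProcessorContext context) {
        store = (KeyValueStore<String, Long>) context.getStateStore("logintime-store");
        // You can still schedule punctuations here, but calling
        // ProcessorContext.forward from a value transformer throws an exception,
        // which is exactly why this tutorial uses transform instead
    }

    @Override
    public Long transform(final String readOnlyKey, final LoginTime value) {
        // Accumulate the running login total for this user and send it downstream
        final Long current = store.putIfAbsent(readOnlyKey, value.getLogintime());
        if (current != null) {
            store.put(readOnlyKey, current + value.getLogintime());
        }
        return store.get(readOnlyKey);
    }

    @Override
    public void close() { }
}, "logintime-store")
.to(outputTopic, Produced.with(Serdes.String(), Serdes.Long()));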

Now let’s take a look at some of the key points from the application.

For context, your application consumes from a topic with information about how long users have been logged into different applications. Your goal is to emit the user with the longest login time across all applications every five seconds. To do this you track the total login time per user in a state store. Additionally, every twenty seconds you want to reset the cumulative times to zero.

The following sections are already included in the application file; we're just stepping through the code in detail before you create it.
Using the Transformer in the Kafka Streams application
final KStream<String, LoginTime> loginTimeStream = builder.stream(loginTimeInputTopic, Consumed.with(Serdes.String(), loginTimeSerde));
loginTimeStream.transform(getTransformerSupplier(loginTimeStore), Named.as("max-login-time-transformer"),loginTimeStore) (1)
               .to(outputTopic, Produced.with(Serdes.String(), Serdes.Long()));


private TransformerSupplier<String, LoginTime, KeyValue<String, Long>> getTransformerSupplier(final String storeName) {
	    return () -> new Transformer<String, LoginTime, KeyValue<String, Long>>() { (2)
	        private KeyValueStore<String, Long> store;
	        private ProcessorContext context;
            @Override
            public void init(ProcessorContext context) { (3)
                   this.context = context;
                   store = (KeyValueStore<String, Long>) this.context.getStateStore(storeName);
                   this.context.schedule(Duration.ofSeconds(5), PunctuationType.STREAM_TIME, this::streamTimePunctuator); (4)
                   this.context.schedule(Duration.ofSeconds(20), PunctuationType.WALL_CLOCK_TIME, this::wallClockTimePunctuator); (5)
            }


@Override
public KeyValue<String, Long> transform(String key, LoginTime value) { (6)
       Long currentVT = store.putIfAbsent(key, value.getLogintime());
       if (currentVT != null) {
           store.put(key, currentVT + value.getLogintime());
       }
       return null;
}
1 Adding a transform operation to the KStream
2 Using a lambda since the TransformerSupplier interface only has one method, get(). Calling get() should always return a new instance of a Transformer
3 The init method, used to configure the transformer. It's in the init method that you schedule any punctuations. Kafka Streams calls the init method for all processors/transformers.
4 Scheduling a punctuation to occur based on STREAM_TIME every five seconds. The third parameter is a method handle used for the Punctuator interface.
5 Scheduling a punctuation to fire based on WALL_CLOCK_TIME every twenty seconds. The third parameter is a method handle used for the Punctuator interface.
6 The transform method. All you are doing here is incrementing the total time a user is logged in and storing it in a state store.

From the above code section, you are adding a transform operation to the stream reading from the input topic. The key parts of this section are points four and five, where you schedule the punctuations. There are two schedule operations: one using STREAM_TIME and another using WALL_CLOCK_TIME. Both come from the PunctuationType enum.

The stream-time punctuation fires based on the timestamps of incoming records; stream time advances only as records arrive. The wall-clock-time punctuation fires based on system time, which is advanced at the polling interval, and is independent of the rate of incoming messages. For example, if no records arrive for a full minute, the five-second stream-time punctuation won't fire at all during that minute, while the twenty-second wall-clock punctuation will fire roughly three times. Read how Kafka Streams supports notions of time for more information.

Next, let's discuss the scheduling in a little more detail.

When you schedule a punctuation, you provide three parameters:

  1. How often the punctuation should execute, defined as a Duration

  2. The PunctuationType, either stream-time or wall-clock time

  3. An instance of the Punctuator interface. Since the Punctuator interface has only one method, punctuate, typically you'll use either a lambda expression or a method reference. In this case we've used a method reference.
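
One more detail worth knowing: ProcessorContext.schedule returns a Cancellable, so you can keep the handle and stop a punctuation later, typically in close(). A minimal sketch (not part of this tutorial's application):

private Cancellable punctuation;

@Override
public void init(ProcessorContext context) {
    // Keep the handle returned by schedule so the punctuation can be stopped later
    punctuation = context.schedule(Duration.ofSeconds(5),
                                   PunctuationType.STREAM_TIME,
                                   timestamp -> { /* periodic work goes here */ });
}

@Override
public void close() {
    punctuation.cancel(); // stop the punctuation when the transformer closes
}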

Now let’s take a look at these methods.

Method references used for punctuations
 void wallClockTimePunctuator(Long timestamp){ (1)
                try (KeyValueIterator<String, Long> iterator = store.all()) {
                    while (iterator.hasNext()) {
                        KeyValue<String, Long> keyValue = iterator.next();
                        store.put(keyValue.key, 0L);
                    }
                }
                System.out.println("@" + new Date(timestamp) +" Reset all view-times to zero");
            }

void streamTimePunctuator(Long timestamp) { (2)
        Long maxValue = Long.MIN_VALUE;
        String maxValueKey = "";
        try (KeyValueIterator<String, Long> iterator = store.all()) {
            while (iterator.hasNext()) {
                KeyValue<String, Long> keyValue = iterator.next();
                if (keyValue.value > maxValue) {
                    maxValue = keyValue.value;
                    maxValueKey = keyValue.key;
                }
            }
        }
        context.forward(maxValueKey +" @" + new Date(timestamp), maxValue); (3)
    }
1 The wallClockTimePunctuator resets the times for all users to zero every 20 seconds.
2 The streamTimePunctuator calculates the user with the largest logged in time
3 Forwarding the results, in this case to a topic

That wraps up our discussion of the finer points of the code for this tutorial. Now create the following file at src/main/java/io/confluent/developer/KafkaStreamsPunctuation.java.

package io.confluent.developer;


import io.confluent.developer.avro.LoginTime;
import io.confluent.kafka.serializers.AbstractKafkaSchemaSerDeConfig;
import io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde;
import org.apache.avro.specific.SpecificRecord;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Named;
import org.apache.kafka.streams.kstream.Produced;
import org.apache.kafka.streams.kstream.Transformer;
import org.apache.kafka.streams.kstream.TransformerSupplier;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.processor.PunctuationType;
import org.apache.kafka.streams.state.KeyValueIterator;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.StoreBuilder;
import org.apache.kafka.streams.state.Stores;

import java.io.FileInputStream;
import java.io.IOException;
import java.time.Duration;
import java.util.ArrayList;
import java.util.Date;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.CountDownLatch;

public class KafkaStreamsPunctuation {


	public Properties buildStreamsProperties(Properties envProps) {
        Properties props = new Properties();

        props.put(StreamsConfig.APPLICATION_ID_CONFIG, envProps.getProperty("application.id"));
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, envProps.getProperty("bootstrap.servers"));
        props.put(AbstractKafkaSchemaSerDeConfig.SCHEMA_REGISTRY_URL_CONFIG, envProps.getProperty("schema.registry.url"));

        return props;
    }

    public Topology buildTopology(Properties envProps) {
        final StreamsBuilder builder = new StreamsBuilder();
        final String loginTimeInputTopic = envProps.getProperty("input.topic.name");
        final String outputTopic = envProps.getProperty("output.topic.name");
        final String loginTimeStore = "logintime-store";
        final Serde<LoginTime> loginTimeSerde = getSpecificAvroSerde(envProps);
        StoreBuilder<KeyValueStore<String, Long>> storeBuilder = Stores.keyValueStoreBuilder(Stores.inMemoryKeyValueStore(loginTimeStore),Serdes.String(), Serdes.Long());
        builder.addStateStore(storeBuilder);
        final KStream<String, LoginTime> loginTimeStream = builder.stream(loginTimeInputTopic, Consumed.with(Serdes.String(), loginTimeSerde));

        loginTimeStream.transform(getTransformerSupplier(loginTimeStore), Named.as("max-login-time-transformer"),loginTimeStore)
                      .to(outputTopic, Produced.with(Serdes.String(), Serdes.Long()));

        return builder.build();
    }


    private TransformerSupplier<String, LoginTime, KeyValue<String, Long>> getTransformerSupplier(final String storeName) {
	    return () -> new Transformer<String, LoginTime, KeyValue<String, Long>>() {
	        private KeyValueStore<String, Long> store;
	        private ProcessorContext context;
            @Override
            public void init(ProcessorContext context) {
                   this.context = context;
                   store = (KeyValueStore<String, Long>) this.context.getStateStore(storeName);
                   this.context.schedule(Duration.ofSeconds(5), PunctuationType.STREAM_TIME, this::streamTimePunctuator);
                   this.context.schedule(Duration.ofSeconds(20), PunctuationType.WALL_CLOCK_TIME, this::wallClockTimePunctuator);
            }

            void wallClockTimePunctuator(Long timestamp){
                try (KeyValueIterator<String, Long> iterator = store.all()) {
                    while (iterator.hasNext()) {
                        KeyValue<String, Long> keyValue = iterator.next();
                        store.put(keyValue.key, 0L);
                    }
                }
                System.out.println("@" + new Date(timestamp) +" Reset all view-times to zero");
            }

            void streamTimePunctuator(Long timestamp) {
                Long maxValue = Long.MIN_VALUE;
                String maxValueKey = "";
                try (KeyValueIterator<String, Long> iterator = store.all()) {
                    while (iterator.hasNext()) {
                        KeyValue<String, Long> keyValue = iterator.next();
                        if (keyValue.value > maxValue) {
                            maxValue = keyValue.value;
                            maxValueKey = keyValue.key;
                        }
                    }
                }
                context.forward(maxValueKey +" @" + new Date(timestamp), maxValue);
            }

            @Override
            public KeyValue<String, Long> transform(String key, LoginTime value) {
                   Long currentVT = store.putIfAbsent(key, value.getLogintime());
                   if (currentVT != null) {
                       store.put(key, currentVT + value.getLogintime());
                   }
                   return null;
            }

            @Override
            public void close() {

            }
        };
    }



    static <T extends SpecificRecord> SpecificAvroSerde<T> getSpecificAvroSerde(final Properties envProps) {
        final SpecificAvroSerde<T> specificAvroSerde = new SpecificAvroSerde<>();

        final HashMap<String, String> serdeConfig = new HashMap<>();
        serdeConfig.put(AbstractKafkaSchemaSerDeConfig.SCHEMA_REGISTRY_URL_CONFIG,
                envProps.getProperty("schema.registry.url"));

        specificAvroSerde.configure(serdeConfig, false);
        return specificAvroSerde;
    }

    public void createTopics(final Properties envProps) {
        final Map<String, Object> config = new HashMap<>();
        config.put("bootstrap.servers", envProps.getProperty("bootstrap.servers"));
        try (final AdminClient client = AdminClient.create(config)) {

        final List<NewTopic> topics = new ArrayList<>();

            topics.add(new NewTopic(
                    envProps.getProperty("output.topic.name"),
                    Integer.parseInt(envProps.getProperty("output.topic.partitions")),
                    Short.parseShort(envProps.getProperty("output.topic.replication.factor"))));

            client.createTopics(topics);
        }
    }

    public Properties loadEnvProperties(String fileName) throws IOException {
        final Properties envProps = new Properties();
        final FileInputStream input = new FileInputStream(fileName);
        envProps.load(input);
        input.close();

        return envProps;
    }

    public static void main(String[] args) throws Exception {

        if (args.length < 1) {
            throw new IllegalArgumentException("This program takes one argument: the path to an environment configuration file.");
        }

        final KafkaStreamsPunctuation instance = new KafkaStreamsPunctuation();
        final Properties envProps = instance.loadEnvProperties(args[0]);
        final Properties streamProps = instance.buildStreamsProperties(envProps);
        final Topology topology = instance.buildTopology(envProps);

        instance.createTopics(envProps);

        final KafkaStreams streams = new KafkaStreams(topology, streamProps);
        final CountDownLatch latch = new CountDownLatch(1);

        // Attach shutdown handler to catch Control-C.
        Runtime.getRuntime().addShutdownHook(new Thread("streams-shutdown-hook") {
            @Override
            public void run() {
                streams.close(Duration.ofSeconds(5));
                latch.countDown();
            }
        });

        try {
            streams.start();
            latch.await();
        } catch (Throwable e) {
            System.exit(1);
        }
        System.exit(0);
    }

}

Start data generation for the Kafka Streams application

Before you start your Kafka Streams application, we need to provide data for it. Fortunately, this is as simple as an HTTP PUT request, since you're going to use the DatagenConnector.

Now create the following Avro schema file datagen-logintime.avsc in the current working directory (kafka-streams-schedule-operations) for the tutorial:

{
  "namespace": "io.confluent.developer.avro",
  "type": "record",
  "name": "LoginTime",
  "fields": [
    {"name": "logintime", "type": {
      "type": "long",
      "format_as_time" : "unix_long",
      "arg.properties": {
        "iteration": { "start": 1, "step": 100}
      }
    }},
    {"name": "userid", "type": {
      "type": "string",
      "arg.properties": {
        "regex": "User_[1-9]{0,1}"
      }
    }},
    {"name": "appid", "type": {
      "type": "string",
      "arg.properties": {
        "regex": "App[1-9][0-9]?"
      }
    }}
  ]
}

This schema file is nearly identical to the one you created earlier.

The only difference is this schema contains instructions for data generation. The kafka-connect-datagen connector uses the Avro Random Generator to generate data.

Open a new terminal window and run this command to start the data generator:

curl -i -X PUT http://localhost:8083/connectors/datagen_local_01/config \
     -H "Content-Type: application/json" \
     -d '{
            "connector.class": "io.confluent.kafka.connect.datagen.DatagenConnector",
            "key.converter": "org.apache.kafka.connect.storage.StringConverter",
            "kafka.topic": "login-events",
            "schema.filename": "/tmp/datagen-logintime.avsc",
            "schema.keyfield": "userid",
            "max.interval": 1000,
            "iterations": 10000000,
            "tasks.max": "1"
        }'

You should see something like this on the console, indicating the datagen connector started successfully:

HTTP/1.1 200 OK
Date: Thu, 20 Aug 2020 20:15:22 GMT
Content-Type: application/json
Content-Length: 441
Server: Jetty(9.4.24.v20191120)

{"name":"datagen_local_01","config":{"connector.class":"io.confluent.kafka.connect.datagen.DatagenConnector","key.converter":"org.apache.kafka.connect.storage.StringConverter","kafka.topic":"login-events","schema.filename":"/schemas/datagen-logintime.avsc","schema.keyfield":"userid","max.interval":"1000","iterations":"10000000","tasks.max":"1","name":"datagen_local_01"},"tasks":[{"connector":"datagen_local_01","task":0}],"type":"source"}

Compile and run the Kafka Streams program

Now that we have data generation working, let’s build your application by running:

./gradlew shadowJar

Now that you have an uberjar for the Kafka Streams application, you can launch it locally. When you run the following, the prompt won’t return, because the application will run until you exit it. There is always another message to process, so streaming applications don’t exit until you force them.

java -jar build/libs/kafka-streams-schedule-operations-standalone-0.0.1.jar configuration/dev.properties

Consume data from the output topic

Now that your Kafka Streams application is running, start a console-consumer to confirm the output:

docker exec -t broker kafka-console-consumer \
 --bootstrap-server broker:9092 \
 --topic output-topic \
 --property print.key=true \
 --value-deserializer "org.apache.kafka.common.serialization.LongDeserializer" \
 --property key.separator=" : "  \
 --from-beginning \
 --max-messages 10

Your results should look something like this:


User_6 @Thu Aug 20 16:30:33 EDT 2020 : 1
User_9 @Thu Aug 20 16:30:35 EDT 2020 : 601
User_9 @Thu Aug 20 16:30:40 EDT 2020 : 2903
User_4 @Thu Aug 20 16:30:45 EDT 2020 : 5904
User_3 @Thu Aug 20 16:30:50 EDT 2020 : 13305
User_8 @Thu Aug 20 16:30:55 EDT 2020 : 28909
User_9 @Thu Aug 20 16:31:00 EDT 2020 : 18303
User_9 @Thu Aug 20 16:31:05 EDT 2020 : 24804
User_9 @Thu Aug 20 16:31:10 EDT 2020 : 32205
User_9 @Thu Aug 20 16:31:15 EDT 2020 : 58108

The timestamp after the user ID is there to help you see when Kafka Streams executed the punctuation. In practice you most likely wouldn't append a timestamp to your key.

Test it

Create a test configuration file

First, create a test file at configuration/test.properties:

application.id=kafka-streams-schedule-operations
bootstrap.servers=localhost:29092
schema.registry.url=mock://kafka-streams-schedule-operations-test

input.topic.name=login-events
input.topic.partitions=1
input.topic.replication.factor=1

output.topic.name=output-topic
output.topic.partitions=1
output.topic.replication.factor=1

Write a test

Create a directory for the tests to live in:

mkdir -p src/test/java/io/confluent/developer

Testing a Kafka Streams application requires a bit of test harness code, but happily the org.apache.kafka.streams.TopologyTestDriver class makes this much more pleasant than it would otherwise be.

There is only one method in KafkaStreamsPunctuationTest annotated with @Test, and that is punctuationTest(). This method actually runs our Streams topology using the TopologyTestDriver and some mocked data that is set up inside the test method.

This test is fairly vanilla, but there is one section we should look into a little more:

final List<LoginTime> loggedOnTimes = new ArrayList<>();
loggedOnTimes.add(LoginTime.newBuilder().setLogintime(5L).setAppid("test-page").setUserid("user-1").build());
loggedOnTimes.add(LoginTime.newBuilder().setLogintime(5L).setAppid("test-page").setUserid("user-2").build());
loggedOnTimes.add(LoginTime.newBuilder().setLogintime(10L).setAppid("test-page").setUserid("user-1").build());
loggedOnTimes.add(LoginTime.newBuilder().setLogintime(25L).setAppid("test-page").setUserid("user-3").build());
loggedOnTimes.add(LoginTime.newBuilder().setLogintime(10L).setAppid("test-page").setUserid("user-2").build());

List<KeyValue<String, LoginTime>> keyValues = loggedOnTimes.stream().map(o -> KeyValue.pair(o.getUserid(),o)).collect(Collectors.toList());
inputTopic.pipeKeyValueList(keyValues,      (1)
                            Instant.now(),  (2)
                            Duration.ofSeconds(2)); (3)
1 Piping through all the records.
2 Setting the initial timestamp.
3 How much to increase each subsequent timestamp.

The TestInputTopic provides useful methods for testing your topology when you need timestamps to drive behavior. In this case, you expect the streams application to fire a punctuation every 5 seconds, and TestInputTopic.pipeKeyValueList gives you the ability to achieve that behavior.
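
If you ever need finer-grained control than a fixed increment, TestInputTopic can also pipe records one at a time with explicit timestamps. A minimal sketch of that alternative (same records, hand-picked event times):

final Instant start = Instant.now();
inputTopic.pipeInput("user-1", loggedOnTimes.get(0), start);
inputTopic.pipeInput("user-2", loggedOnTimes.get(1), start.plusSeconds(2));
// Each record's timestamp advances stream time, which is what triggers the
// STREAM_TIME punctuation once it crosses the five-second interval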

Now create the following file at src/test/java/io/confluent/developer/KafkaStreamsPunctuationTest.java.

package io.confluent.developer;


import io.confluent.developer.avro.LoginTime;
import io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde;
import org.apache.kafka.common.serialization.Deserializer;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.serialization.Serializer;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.TestInputTopic;
import org.apache.kafka.streams.TestOutputTopic;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.TopologyTestDriver;
import org.apache.kafka.streams.state.KeyValueStore;
import org.junit.Test;

import java.io.IOException;
import java.time.Duration;
import java.time.Instant;
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;
import java.util.stream.Collectors;

import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.greaterThanOrEqualTo;
import static org.hamcrest.Matchers.is;
import static org.junit.Assert.assertSame;



public class KafkaStreamsPunctuationTest {

    private final static String TEST_CONFIG_FILE = "configuration/test.properties";

    @Test
    public void punctuationTest() throws IOException {
        final KafkaStreamsPunctuation instance = new KafkaStreamsPunctuation();
        final Properties envProps = instance.loadEnvProperties(TEST_CONFIG_FILE);

        final Properties streamProps = instance.buildStreamsProperties(envProps);
        final String pageviewsInputTopic = envProps.getProperty("input.topic.name");
        final String outputTopicName = envProps.getProperty("output.topic.name");

        final Topology topology = instance.buildTopology(envProps);
        try (final TopologyTestDriver testDriver = new TopologyTestDriver(topology, streamProps)) {

            final SpecificAvroSerde<LoginTime> exampleAvroSerde = KafkaStreamsPunctuation.getSpecificAvroSerde(envProps);

            final Serializer<String> keySerializer = Serdes.String().serializer();
            final Serializer<LoginTime> exampleSerializer = exampleAvroSerde.serializer();
            final Deserializer<Long> valueDeserializer = Serdes.Long().deserializer();
            final Deserializer<String> keyDeserializer = Serdes.String().deserializer();

            final TestInputTopic<String, LoginTime>  inputTopic = testDriver.createInputTopic(pageviewsInputTopic,
                                                                                              keySerializer,
                                                                                              exampleSerializer);

            final TestOutputTopic<String, Long> outputTopic = testDriver.createOutputTopic(outputTopicName, keyDeserializer, valueDeserializer);

            final List<LoginTime> loggedOnTimes = new ArrayList<>();
            loggedOnTimes.add(LoginTime.newBuilder().setLogintime(5L).setAppid("test-page").setUserid("user-1").build());
            loggedOnTimes.add(LoginTime.newBuilder().setLogintime(5L).setAppid("test-page").setUserid("user-2").build());
            loggedOnTimes.add(LoginTime.newBuilder().setLogintime(10L).setAppid("test-page").setUserid("user-1").build());
            loggedOnTimes.add(LoginTime.newBuilder().setLogintime(25L).setAppid("test-page").setUserid("user-3").build());
            loggedOnTimes.add(LoginTime.newBuilder().setLogintime(10L).setAppid("test-page").setUserid("user-2").build());

            List<KeyValue<String, LoginTime>> keyValues = loggedOnTimes.stream().map(o -> KeyValue.pair(o.getUserid(),o)).collect(Collectors.toList());
            inputTopic.pipeKeyValueList(keyValues, Instant.now(), Duration.ofSeconds(2));

            final List<KeyValue<String, Long>> actualResults = outputTopic.readKeyValuesToList();
            assertThat(actualResults.size(), is(greaterThanOrEqualTo(1)));

            KeyValueStore<String, Long> store = testDriver.getKeyValueStore("logintime-store");

            testDriver.advanceWallClockTime(Duration.ofSeconds(20));

            assertSame(store.get("user-1"), 0L);
            assertSame(store.get("user-2"), 0L);
            assertSame(store.get("user-3"), 0L);
        }
    }
}

Invoke the tests

Now run the test, which is as simple as:

./gradlew test

Deploy on Confluent Cloud

Run your app with Confluent Cloud

Instead of running a local Kafka cluster, you may use Confluent Cloud, a fully managed Apache Kafka service.

  1. Sign up for Confluent Cloud, a fully managed Apache Kafka service.

  2. After you log in to Confluent Cloud Console, click Environments in the left-hand navigation, click on Add cloud environment, and name the environment learn-kafka. Using a new environment keeps your learning resources separate from your other Confluent Cloud resources.

  3. From the Billing & payment section in the menu, apply the promo code CC100KTS to receive an additional $100 free usage on Confluent Cloud (details). To avoid having to enter a credit card, add an additional promo code CONFLUENTDEV1. With this promo code, you will not have to enter a credit card for 30 days or until your credits run out.

  4. Click on LEARN and follow the instructions to launch a Kafka cluster and enable Schema Registry.

Next, from the Confluent Cloud Console, click on Clients to get the cluster-specific configurations, e.g., Kafka cluster bootstrap servers and credentials, Confluent Cloud Schema Registry and credentials, etc., and set the appropriate parameters in your client application. In the case of this tutorial, add the following properties to the client application’s input properties file, substituting all curly braces with your Confluent Cloud values.

# Required connection configs for Kafka producer, consumer, and admin
bootstrap.servers={{ BROKER_ENDPOINT }}
security.protocol=SASL_SSL
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username='{{ CLUSTER_API_KEY }}' password='{{ CLUSTER_API_SECRET }}';
sasl.mechanism=PLAIN
# Required for correctness in Apache Kafka clients prior to 2.6
client.dns.lookup=use_all_dns_ips

# Best practice for Kafka producer to prevent data loss
acks=all

# Required connection configs for Confluent Cloud Schema Registry
schema.registry.url=https://{{ SR_ENDPOINT }}
basic.auth.credentials.source=USER_INFO
schema.registry.basic.auth.user.info={{ SR_API_KEY }}:{{ SR_API_SECRET }}

Now you’re all set to run your streaming application locally, backed by a Kafka cluster fully managed by Confluent Cloud.