How to build a User-Defined Function (UDF) to transform events

Problem:

You have events in a Kafka topic, and you want to transform the values using a stateless scalar function not already provided by KSQL.

Example use case:

Consider a topic of stock price events. For each event, you want to calculate the volume-weighted average price (VWAP) and publish the result to a new topic. There is no built-in function for VWAP, so we'll write a custom KSQL UDF that performs the calculation.

Try it

1
Initialize the project

To get started, make a new directory anywhere you’d like for this project:

mkdir udf && cd udf

Then make the following directories:

mkdir src extensions

Create the following Gradle build file, named build.gradle, at the root of the project:

buildscript {
    repositories {
        jcenter()
    }
}

plugins {
    id "java"
}

sourceCompatibility = "1.8"
targetCompatibility = "1.8"
version = "0.0.1"

repositories {
    mavenCentral()
    jcenter()

    maven {
        url "http://packages.confluent.io/maven"
    }
}

dependencies {
    compile 'io.confluent.ksql:ksql-udf:5.4.0'
    testCompile 'junit:junit:4.12'
}

task copyJar(type: Copy) {
    from jar
    into "extensions/"
}

build.dependsOn copyJar

test {
    testLogging {
        outputs.upToDateWhen { false }
        showStandardStreams = true
        exceptionFormat = "full"
    }
}

The build.gradle also contains a copyJar step to copy the jar file to the extensions/ directory, where it will be picked up by KSQL. This is convenient when you are iterating on a function. For example, you might have tested your UDF against your suite of unit tests and are now ready to test it against streams in KSQL. With the jar in the correct place, a restart of KSQL will load your updated jar.

And be sure to run the following command to obtain the Gradle wrapper:

gradle wrapper

2
Implement the KSQL User-Defined Function

Create a directory for the Java files in this project:

mkdir -p src/main/java/io/confluent/developer

Then create the following file at src/main/java/io/confluent/developer/VwapUdf.java. This file contains the Java logic of your custom function. Read through the code to familiarize yourself with it.

package io.confluent.developer;

import io.confluent.ksql.function.udf.Udf;
import io.confluent.ksql.function.udf.UdfDescription;
import io.confluent.ksql.function.udf.UdfParameter;

@UdfDescription(name = "vwap", description = "Volume weighted average price")
public class VwapUdf {

    @Udf(description = "vwap for market prices as integers, returns double")
    public double vwap(
            @UdfParameter(value = "bid")
            final int bid,
            @UdfParameter(value = "bidQty")
            final int bidQty,
            @UdfParameter(value = "ask")
            final int ask,
            @UdfParameter(value = "askQty")
            final int askQty) {
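        // Note: with all-int inputs this division is integer division, so the result
        // is truncated before being widened to double (e.g. 2500 / 150 yields 16.0).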
        return ((ask * askQty) + (bid * bidQty)) / (bidQty + askQty);
    }

    @Udf(description = "vwap for market prices as integers, returns double")
    public double vwap(
            @UdfParameter(value = "bid")
            final double bid,
            @UdfParameter(value = "bidQty")
            final int bidQty,
            @UdfParameter(value = "ask")
            final double ask,
            @UdfParameter(value = "askQty")
            final int askQty) {
        return ((ask * askQty) + (bid * bidQty)) / (bidQty + askQty);
    }
}

Here we have created a new class that defines two functions, both annotated with @Udf to mark them as ksqlDB UDF implementations. The two overloads differ only in the price types: one takes int bid and ask prices, the other double prices; both take int quantities and return a single double representing the volume-weighted average price of the inputs.
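If you want to sanity-check the math before loading the class into ksqlDB, you can call it directly from a small Java scratch program. This is only a sketch (the main wrapper and the class name VwapUdfScratch are ours, not part of the tutorial project); it uses the first ZTEST quote from the example data further below and should print 20.0 for both overloads:

package io.confluent.developer;

// A throwaway scratch program to exercise the UDF class directly, without a ksqlDB runtime.
public class VwapUdfScratch {

    public static void main(String[] args) {
        VwapUdf udf = new VwapUdf();

        // int overload: bid=15, bidQty=100, ask=25, askQty=100 -> 20.0
        System.out.println(udf.vwap(15, 100, 25, 100));

        // double overload: the same quote expressed as doubles -> 20.0
        System.out.println(udf.vwap(15.0, 100, 25.0, 100));
    }
}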

3
Build the JAR

In your terminal, run:

./gradlew build

The copyJar Gradle task will automatically deliver the jar to the extensions/ directory. You should now see a file such as udf-0.0.1.jar there; the name comes from the project directory name and the version set in build.gradle.

4
Get Confluent Platform

Next, create the following docker-compose.yml file to obtain Confluent Platform:

version: '2'

services:
  zookeeper:
    image: confluentinc/cp-zookeeper:5.4.0
    hostname: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000

  broker:
    image: confluentinc/cp-enterprise-kafka:5.4.0
    hostname: broker
    container_name: broker
    depends_on:
      - zookeeper
    ports:
      - "29092:29092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:9092,PLAINTEXT_HOST://localhost:29092
      KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: broker:9092
      CONFLUENT_METRICS_REPORTER_ZOOKEEPER_CONNECT: zookeeper:2181
      CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1
      CONFLUENT_METRICS_ENABLE: 'true'
      CONFLUENT_SUPPORT_CUSTOMER_ID: 'anonymous'

  schema-registry:
    image: confluentinc/cp-schema-registry:5.4.0
    hostname: schema-registry
    container_name: schema-registry
    depends_on:
      - zookeeper
      - broker
    ports:
      - "8081:8081"
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL: 'zookeeper:2181'

  ksqldb-server:
    image: confluentinc/ksqldb-server:0.9.0
    hostname: ksqldb-server
    container_name: ksqldb-server
    depends_on:
      - broker
      - schema-registry
    volumes:
      - ./extensions:/etc/ksqldb/ext
    ports:
      - "8088:8088"
    environment:
      KSQL_CONFIG_DIR: "/etc/ksqldb"
      KSQL_KSQL_EXTENSION_DIR: "/etc/ksqldb/ext/"
      KSQL_LOG4J_OPTS: "-Dlog4j.configuration=file:/etc/ksqldb/log4j.properties"
      KSQL_BOOTSTRAP_SERVERS: "broker:9092"
      KSQL_HOST_NAME: ksqldb-server
      KSQL_LISTENERS: "http://0.0.0.0:8088"
      KSQL_CACHE_MAX_BYTES_BUFFERING: 0
      KSQL_KSQL_SCHEMA_REGISTRY_URL: "http://schema-registry:8081"

  ksqldb-cli:
    image: confluentinc/ksqldb-cli:0.9.0
    container_name: ksqldb-cli
    depends_on:
      - broker
      - ksqldb-server
    entrypoint: /bin/sh
    tty: true
    environment:
      KSQL_CONFIG_DIR: "/etc/ksqldb"
    volumes:
      - ./src:/opt/app/src
      - ./test:/opt/app/test

Note that docker-compose.yml configures the ksqldb-server container with KSQL_KSQL_EXTENSION_DIR: "/etc/ksqldb/ext/" and mounts the local extensions directory at /etc/ksqldb/ext in the container. ksqlDB is now configured to look in this location for your extensions, such as custom functions.

Launch the platform by running:

docker-compose up -d

5
Write the program interactively using the CLI

To begin developing interactively, open up the ksqlDB CLI:

docker exec -it ksqldb-cli ksql http://ksqldb-server:8088

Let’s confirm that the UDF jar has been loaded correctly. Run the following; you should see VWAP in the list of functions:

SHOW FUNCTIONS;

You can see some additional detail about the function with DESCRIBE FUNCTION.

DESCRIBE FUNCTION VWAP;

The result gives you a description of the function including input parameters and the return type.

Name        : VWAP
Overview    : Volume weighted average price
Type        : SCALAR
Jar         : /etc/ksqldb/ext/udf-0.0.1.jar
Variations  :

	Variation   : VWAP(bid DOUBLE, bidQty INT, ask DOUBLE, askQty INT)
	Returns     : DOUBLE
	Description : vwap for market prices as doubles, returns double

	Variation   : VWAP(bid INT, bidQty INT, ask INT, askQty INT)
	Returns     : DOUBLE
	Description : vwap for market prices as integers, returns double

You’ll need to create a Kafka topic and stream to represent the stock quote stream. The following creates both in one shot:

CREATE STREAM raw_quotes(ticker varchar, bid int, ask int, bidqty int, askqty int)
    WITH (kafka_topic='stockquotes', value_format='avro', key='ticker', partitions=1);

Then produce the following events to the stream:

INSERT INTO raw_quotes (ticker, bid, ask, bidqty, askqty) VALUES ('ZTEST', 15, 25, 100, 100);
INSERT INTO raw_quotes (ticker, bid, ask, bidqty, askqty) VALUES ('ZVV',   25, 35, 100, 100);
INSERT INTO raw_quotes (ticker, bid, ask, bidqty, askqty) VALUES ('ZVZZT', 35, 45, 100, 100);
INSERT INTO raw_quotes (ticker, bid, ask, bidqty, askqty) VALUES ('ZXZZT', 45, 55, 100, 100);

INSERT INTO raw_quotes (ticker, bid, ask, bidqty, askqty) VALUES ('ZTEST', 10, 20, 50, 100);
INSERT INTO raw_quotes (ticker, bid, ask, bidqty, askqty) VALUES ('ZVV',   30, 40, 100, 50);
INSERT INTO raw_quotes (ticker, bid, ask, bidqty, askqty) VALUES ('ZVZZT', 30, 40, 50, 100);
INSERT INTO raw_quotes (ticker, bid, ask, bidqty, askqty) VALUES ('ZXZZT', 50, 60, 100, 50);

INSERT INTO raw_quotes (ticker, bid, ask, bidqty, askqty) VALUES ('ZTEST', 15, 20, 100, 100);
INSERT INTO raw_quotes (ticker, bid, ask, bidqty, askqty) VALUES ('ZVV',   25, 35, 100, 100);
INSERT INTO raw_quotes (ticker, bid, ask, bidqty, askqty) VALUES ('ZVZZT', 35, 45, 100, 100);
INSERT INTO raw_quotes (ticker, bid, ask, bidqty, askqty) VALUES ('ZXZZT', 45, 55, 100, 100);

Now that you have a stream with some events in it, let’s read them out. The first thing to do is set the following property to ensure that you’re reading from the beginning of the stream:

SET 'auto.offset.reset' = 'earliest';

Let’s invoke the vwap function for every observed raw quote. Pay attention to the parameter ordering when invoking the UDF from ksqlDB: the arguments must be passed in the same order as the Java method’s parameters (bid, bidqty, ask, askqty).

SELECT ticker, vwap(bid, bidqty, ask, askqty) AS vwap FROM raw_quotes EMIT CHANGES LIMIT 12;

This should yield the following output:

+--------------------+--------------------+
|TICKER              |VWAP                |
+--------------------+--------------------+
|ZTEST               |20.0                |
|ZVV                 |30.0                |
|ZVZZT               |40.0                |
|ZXZZT               |50.0                |
|ZTEST               |16.0                |
|ZVV                 |33.0                |
|ZVZZT               |36.0                |
|ZXZZT               |53.0                |
|ZTEST               |17.0                |
|ZVV                 |30.0                |
|ZVZZT               |40.0                |
|ZXZZT               |50.0                |
Limit Reached
Query terminated

Since the output looks right, the next step is to make the query continuous. Issue the following to create a new stream that is continuously populated by its query:

CREATE STREAM vwap WITH (kafka_topic = 'vwap', partitions = 1) AS
    SELECT ticker,
           vwap(bid, bidqty, ask, askqty) AS vwap
    FROM raw_quotes
    EMIT CHANGES;

To check that it’s working, print out the contents of the output stream’s underlying topic:

PRINT vwap FROM BEGINNING LIMIT 12;

This should yield the following output:

Key format: KAFKA_STRING
Value format: AVRO
rowtime: 2020/05/04 23:03:23.467 Z, key: ZTEST, value: {"TICKER": "ZTEST", "VWAP": 20.0}
rowtime: 2020/05/04 23:03:23.672 Z, key: ZVV, value: {"TICKER": "ZVV", "VWAP": 30.0}
rowtime: 2020/05/04 23:03:23.801 Z, key: ZVZZT, value: {"TICKER": "ZVZZT", "VWAP": 40.0}
rowtime: 2020/05/04 23:03:23.967 Z, key: ZXZZT, value: {"TICKER": "ZXZZT", "VWAP": 50.0}
rowtime: 2020/05/04 23:03:24.100 Z, key: ZTEST, value: {"TICKER": "ZTEST", "VWAP": 16.0}
rowtime: 2020/05/04 23:03:24.399 Z, key: ZVV, value: {"TICKER": "ZVV", "VWAP": 33.0}
rowtime: 2020/05/04 23:03:24.551 Z, key: ZVZZT, value: {"TICKER": "ZVZZT", "VWAP": 36.0}
rowtime: 2020/05/04 23:03:24.705 Z, key: ZXZZT, value: {"TICKER": "ZXZZT", "VWAP": 53.0}
rowtime: 2020/05/04 23:03:24.844 Z, key: ZTEST, value: {"TICKER": "ZTEST", "VWAP": 17.0}
rowtime: 2020/05/04 23:03:24.980 Z, key: ZVV, value: {"TICKER": "ZVV", "VWAP": 30.0}
rowtime: 2020/05/04 23:03:25.096 Z, key: ZVZZT, value: {"TICKER": "ZVZZT", "VWAP": 40.0}
rowtime: 2020/05/04 23:03:25.400 Z, key: ZXZZT, value: {"TICKER": "ZXZZT", "VWAP": 50.0}
Topic printing ceased

6
Write your statements to a file

Now that you have a series of statements that’s doing the right thing, the last step is to put them into a file so that they can be used outside the CLI session. Create a file at src/statements.sql with the following content:

CREATE STREAM raw_quotes(ticker varchar, bid int, ask int, bidqty int, askqty int)
    WITH (kafka_topic='stockquotes', value_format='avro', key='ticker', partitions=1);

CREATE STREAM vwap WITH (kafka_topic = 'vwap', partitions = 1) AS
    SELECT ticker,
           vwap(bid, bidqty, ask, askqty) AS vwap
    FROM raw_quotes;

Test it

1
Write a test

Create a directory for the tests to live in:

mkdir -p src/test/java/io/confluent/developer

Create the following test file at src/test/java/io/confluent/developer/VwapUdfTest.java:

package io.confluent.developer;

import static org.junit.Assert.*;
import org.junit.Test;

public class VwapUdfTest {

    @Test
    public void testVwapAllInts() {
        assertEquals(100D,
                new VwapUdf().vwap(95, 100, 105, 100),
               0D);
    }
    @Test
    public void testVwap() {
        assertEquals(100D,
                new VwapUdf().vwap(95D, 100, 105D, 100),
                0D);
    }
}
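If you also want the tests to document the truncating behavior of the all-int overload, you could add a case like the one below inside VwapUdfTest. This is our addition rather than part of the original tutorial; the inputs are the second ZTEST quote from the example data, for which ksqlDB reports a VWAP of 16.0:

    @Test
    public void testVwapAllIntsTruncates() {
        // (20 * 100 + 10 * 50) / (50 + 100) = 2500 / 150, truncated to 16 by integer division
        assertEquals(16D,
                new VwapUdf().vwap(10, 50, 20, 100),
                0D);
    }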

2
Invoke the tests

Now run the tests, which is as simple as:

./gradlew test

Take it to production

1
Send the statements to the REST API

Launch your statements into production by sending them to the REST API with the following command:

tr '\n' ' ' < src/statements.sql | \
sed 's/;/;\'$'\n''/g' | \
while read stmt; do
    echo '{"ksql":"'$stmt'", "streamsProperties": {}}' | \
        curl -s -X "POST" "http://localhost:8088/ksql" \
             -H "Content-Type: application/vnd.ksql.v1+json; charset=utf-8" \
             -d @- | \
        jq
done
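If you prefer to drive the REST API from code rather than the shell, the same request can be made with Java's built-in HTTP client (Java 11+). The sketch below is an illustration, not part of the tutorial project: it assumes ksqlDB is reachable at http://localhost:8088 and submits the CSAS statement from src/statements.sql.

package io.confluent.developer;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Minimal sketch: submit one ksqlDB statement to the /ksql REST endpoint.
public class SubmitStatement {

    public static void main(String[] args) throws Exception {
        String statement = "CREATE STREAM vwap WITH (kafka_topic = 'vwap', partitions = 1) AS "
                + "SELECT ticker, vwap(bid, bidqty, ask, askqty) AS vwap FROM raw_quotes;";

        // The /ksql endpoint expects a JSON body with the statement and any streams properties.
        String body = "{\"ksql\": \"" + statement + "\", \"streamsProperties\": {}}";

        HttpRequest request = HttpRequest.newBuilder(URI.create("http://localhost:8088/ksql"))
                .header("Content-Type", "application/vnd.ksql.v1+json; charset=utf-8")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}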