How to convert a stream's serialization format

Question:

If you have a Kafka topic with the data serialized in a particular format, how can you change that format?

Example use case:

Consider a topic with events that represent movie releases. The events in the topic are formatted with Avro. In this tutorial, we'll write a program that creates a new topic with the same events, but formatted with Protobuf.

Hands-on code example:

Run it

Prerequisites

1

This tutorial installs Confluent Platform using Docker. Before proceeding:

  • Install Docker Desktop (version 4.0.0 or later) or Docker Engine (version 19.03.0 or later) if you don’t already have it

  • Install the Docker Compose plugin if you don’t already have it. This isn’t necessary if you have Docker Desktop since it includes Docker Compose.

  • Start Docker if it’s not already running, either by starting Docker Desktop or, if you manage Docker Engine with systemd, via systemctl

  • Verify that Docker is set up properly by ensuring no errors are output when you run docker info and docker compose version on the command line

Initialize the project

2

To get started, make a new directory anywhere you’d like for this project:

mkdir kstreams-serialization && cd kstreams-serialization

Get Confluent Platform

3

Next, create the following docker-compose.yml file to obtain Confluent Platform (for Kafka in the cloud, see Confluent Cloud):

version: '2'
services:
  broker:
    image: confluentinc/cp-kafka:7.4.1
    hostname: broker
    container_name: broker
    ports:
    - 29092:29092
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT,CONTROLLER:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:9092,PLAINTEXT_HOST://localhost:29092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_PROCESS_ROLES: broker,controller
      KAFKA_NODE_ID: 1
      KAFKA_CONTROLLER_QUORUM_VOTERS: 1@broker:29093
      KAFKA_LISTENERS: PLAINTEXT://broker:9092,CONTROLLER://broker:29093,PLAINTEXT_HOST://0.0.0.0:29092
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
      KAFKA_LOG_DIRS: /tmp/kraft-combined-logs
      CLUSTER_ID: MkU3OEVBNTcwNTJENDM2Qk
  schema-registry:
    image: confluentinc/cp-schema-registry:7.3.0
    hostname: schema-registry
    container_name: schema-registry
    depends_on:
    - broker
    ports:
    - 8081:8081
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: broker:9092
      SCHEMA_REGISTRY_LOG4J_ROOT_LOGLEVEL: WARN

And launch it by running:

docker compose up -d

Configure the project

4

Create the following build file, named build.gradle:

buildscript {
  repositories {
    mavenCentral()
  }
  dependencies {
    classpath "com.google.protobuf:protobuf-gradle-plugin:0.9.2"
  }
}

plugins {
  id "java"
  id "application"
  id "idea"

  id "com.github.johnrengelman.shadow" version "6.1.0"
  id "com.google.protobuf" version "0.9.2"
  id "com.github.davidmc24.gradle.plugin.avro" version "1.7.0"
}

sourceCompatibility = JavaVersion.VERSION_17
targetCompatibility = JavaVersion.VERSION_17
version = "0.0.1"

repositories {
  mavenCentral()

  maven {
    url "https://packages.confluent.io/maven"
  }
}

apply plugin: "com.github.johnrengelman.shadow"

dependencies {
  implementation "org.apache.avro:avro:1.11.1"
  implementation "com.google.protobuf:protobuf-java:3.22.2"
  implementation "org.slf4j:slf4j-simple:2.0.7"
  implementation "org.apache.kafka:kafka-streams:3.3.0"
  implementation "io.confluent:kafka-streams-avro-serde:7.3.0"
  implementation "io.confluent:kafka-streams-protobuf-serde:7.3.0"
  implementation 'com.google.code.gson:gson:2.10.1'

  testImplementation "org.apache.kafka:kafka-streams-test-utils:3.3.0"
  testImplementation 'junit:junit:4.13.2'
  testImplementation 'org.assertj:assertj-core:3.24.2'
}

test {
  testLogging {
    outputs.upToDateWhen { false }
    showStandardStreams = true
    events "passed", "skipped", "failed"
    exceptionFormat "full"
  }
}

jar {
  manifest {
    attributes(
        "Class-Path": configurations.compileClasspath.collect { it.getName() }.join(" "),
        "Main-Class": "io.confluent.developer.SerializationTutorial"
    )
  }
}

shadowJar {
  archiveBaseName = "kstreams-serialization-standalone"
  archiveClassifier = ''
}

// Define the main class for the application
mainClassName = 'io.confluent.developer.serialization.SerializationTutorial'

protobuf {
  generatedFilesBaseDir = "$buildDir/generated-main-proto-java/"

  protoc {
    artifact = "com.google.protobuf:protoc:3.22.2"
  }

}

clean {
  delete protobuf.generatedFilesBaseDir
}

idea {
  module {
    sourceDirs += file("${buildDir}/generated-main-proto-java/")
  }
}

And be sure to run the following command to obtain the Gradle wrapper:

gradle wrapper 

Next, create a directory for configuration data:

mkdir configuration

Then create a development file at configuration/dev.properties:

application.id=serialization-app
bootstrap.servers=localhost:29092
schema.registry.url=http://localhost:8081

input.avro.movies.topic.name=avro-movies
input.avro.movies.topic.partitions=1
input.avro.movies.topic.replication.factor=1

output.proto.movies.topic.name=proto-movies
output.proto.movies.topic.partitions=1
output.proto.movies.topic.replication.factor=1

Create an Avro schema for input events, and a Protobuf schema for the output

5

Since our input events are in Avro format, we’ll need to specify a schema for them. Go ahead and create a directory for your schemas:

mkdir -p src/main/avro

Next, create an Avro schema file at src/main/avro/movie.avsc for the stream of movies:

{
  "namespace": "io.confluent.developer.avro",
  "type": "record",
  "name": "Movie",
  "fields": [
    {
      "name": "movie_id",
      "type": "long"
    },
    {
      "name": "title",
      "type": "string"
    },
    {
      "name": "release_year",
      "type": "int"
    }
  ]
}

Since we’ll be converting events into Protobuf, we’ll need to specify a proto-schema for them. In this case, our events represent movies with a few attributes, such as the release year. Go ahead and create a directory for your schemas:

mkdir -p src/main/proto

Next, create a Protobuf schema file at src/main/proto/movie.proto for the stream of movies:

syntax = "proto3";

package io.confluent.developer.proto;
option java_outer_classname = "MovieProtos";

message Movie {
  int64   movie_id = 1;
  string  title = 2;
  int32   release_year = 3;
}

Because we will use these schemas in our Java code, we’ll need to compile them. The Gradle Avro plugin and Gradle Protobuf plugin are part of the build, so they will see the new Avro and Protobuf files, generate Java code for them, and compile those classes along with all other Java sources. Run this command to get it all done:

./gradlew build
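
If you’d like to sanity-check the generated code, a minimal sketch like the following (a hypothetical class, not part of the tutorial’s source) uses the generated classes directly. The class and package names come from the namespace and java_outer_classname declared in the schemas above.

package io.confluent.developer.serialization;

import io.confluent.developer.avro.Movie;
import io.confluent.developer.proto.MovieProtos;

// Hypothetical sanity check: exercise the classes generated from movie.avsc and movie.proto.
public class GeneratedClassesSketch {

  public static void main(String[] args) {
    // Avro-generated class; the constructor arguments mirror the fields in movie.avsc
    Movie avroMovie = new Movie(1L, "Lethal Weapon", 1992);

    // Protobuf-generated class; protoc produces a builder for the Movie message
    MovieProtos.Movie protoMovie = MovieProtos.Movie.newBuilder()
        .setMovieId(avroMovie.getMovieId())
        .setTitle(avroMovie.getTitle())
        .setReleaseYear(avroMovie.getReleaseYear())
        .build();

    System.out.println(protoMovie);
  }
}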

Create the Kafka Streams topology

6

Create a directory for the Java files in this project:

mkdir -p src/main/java/io/confluent/developer/serialization

Let’s take a close look at the buildTopology() method, which uses the Kafka Streams DSL. This particular topology is pretty simple.

buildTopology()
protected Topology buildTopology(Properties envProps,
                                   final SpecificAvroSerde<Movie> movieSpecificAvroSerde,
                                   final KafkaProtobufSerde<MovieProtos.Movie> movieProtoSerde) {

    final String inputAvroTopicName = envProps.getProperty("input.avro.movies.topic.name");
    final String outProtoTopicName = envProps.getProperty("output.proto.movies.topic.name");

    final StreamsBuilder builder = new StreamsBuilder(); (1)

    final KStream<Long, Movie> avroMovieStream =
        builder.stream(inputAvroTopicName, Consumed.with(Long(), movieSpecificAvroSerde));  (2)

    avroMovieStream
        .map((key, avroMovie) ->
                 new KeyValue<>(key, MovieProtos.Movie.newBuilder()
                     .setMovieId(avroMovie.getMovieId())
                     .setTitle(avroMovie.getTitle())
                     .setReleaseYear(avroMovie.getReleaseYear())
                     .build()))
        .to(outProtoTopicName, Produced.with(Long(), movieProtoSerde)); (3)

    return builder.build();
  }
1 The first thing the method does is create an instance of StreamsBuilder, which is the helper object that lets us build our topology.
2 We call the stream() method to create a KStream<Long, Movie> object.
3 Lastly, we call to() to send the events to another topic.
All of the work to convert the events between Avro and Protobuf happens through parameterized serializers.

You see, even though we specified default serdes with StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG and StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG in the Streams configuration, the Kafka Streams DSL allows us to use a specific serializer / deserializer each time we interact with a topic.

In this case, Consumed.with() allows us to consume the events with SpecificAvroSerde, and Produced.with() allows us to produce the events back to a topic with Protobuf.
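
To make the difference concrete, here is a minimal sketch (a hypothetical helper, not part of the tutorial’s source) that contrasts falling back to the configured defaults with overriding the serdes for a single topic, assuming the same Movie class and Avro serde used above:

package io.confluent.developer.serialization;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.KStream;

import io.confluent.developer.avro.Movie;
import io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde;

// Hypothetical sketch: default serdes vs. per-topic overrides.
public class SerdeOverrideSketch {

  static void sketch(StreamsBuilder builder, SpecificAvroSerde<Movie> movieAvroSerde) {
    // Without Consumed.with(...), Kafka Streams falls back to the default serdes from the
    // Streams configuration -- String serdes in this tutorial -- which could not
    // deserialize the Avro-encoded movies.
    KStream<String, String> plainStrings = builder.stream("some-plain-text-topic");

    // Overriding the serdes for this one topic, as buildTopology() does:
    KStream<Long, Movie> avroMovies =
        builder.stream("avro-movies", Consumed.with(Serdes.Long(), movieAvroSerde));
  }
}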

Now, go ahead and create the following file at src/main/java/io/confluent/developer/serialization/SerializationTutorial.java.

package io.confluent.developer.serialization;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Produced;

import java.io.FileInputStream;
import java.io.IOException;
import java.time.Duration;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.CountDownLatch;

import io.confluent.developer.avro.Movie;
import io.confluent.developer.proto.MovieProtos;
import io.confluent.kafka.serializers.AbstractKafkaSchemaSerDeConfig;
import io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde;
import io.confluent.kafka.streams.serdes.protobuf.KafkaProtobufSerde;

import static io.confluent.kafka.serializers.AbstractKafkaSchemaSerDeConfig.SCHEMA_REGISTRY_URL_CONFIG;
import static java.lang.Integer.parseInt;
import static java.lang.Short.parseShort;
import static org.apache.kafka.common.serialization.Serdes.Long;
import static org.apache.kafka.common.serialization.Serdes.String;

public class SerializationTutorial {

  protected Properties buildStreamsProperties(Properties envProps) {
    Properties props = new Properties();

    props.put(StreamsConfig.APPLICATION_ID_CONFIG, envProps.getProperty("application.id"));
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, envProps.getProperty("bootstrap.servers"));
    props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, String().getClass());
    props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, String().getClass());
    props.put(AbstractKafkaSchemaSerDeConfig.SCHEMA_REGISTRY_URL_CONFIG, envProps.getProperty("schema.registry.url"));

    return props;
  }

  private void createTopics(Properties envProps) {
    Map<String, Object> config = new HashMap<>();

    config.put("bootstrap.servers", envProps.getProperty("bootstrap.servers"));
    AdminClient client = AdminClient.create(config);

    List<NewTopic> topics = new ArrayList<>();

    topics.add(new NewTopic(
        envProps.getProperty("input.avro.movies.topic.name"),
        parseInt(envProps.getProperty("input.avro.movies.topic.partitions")),
        parseShort(envProps.getProperty("input.avro.movies.topic.replication.factor"))));

    topics.add(new NewTopic(
        envProps.getProperty("output.proto.movies.topic.name"),
        parseInt(envProps.getProperty("output.proto.movies.topic.partitions")),
        parseShort(envProps.getProperty("output.proto.movies.topic.replication.factor"))));

    client.createTopics(topics);
    client.close();
  }

  protected SpecificAvroSerde<Movie> movieAvroSerde(Properties envProps) {
    SpecificAvroSerde<Movie> movieAvroSerde = new SpecificAvroSerde<>();

    Map<String, String> serdeConfig = new HashMap<>();
    serdeConfig.put(SCHEMA_REGISTRY_URL_CONFIG, envProps.getProperty("schema.registry.url"));
    movieAvroSerde.configure(
        serdeConfig, false);
    return movieAvroSerde;
  }

  protected KafkaProtobufSerde<MovieProtos.Movie> movieProtobufSerde(Properties envProps) {
    final KafkaProtobufSerde<MovieProtos.Movie> protobufSerde = new KafkaProtobufSerde<>();

    Map<String, String> serdeConfig = new HashMap<>();
    serdeConfig.put(SCHEMA_REGISTRY_URL_CONFIG, envProps.getProperty("schema.registry.url"));
    protobufSerde.configure(
        serdeConfig, false);
    return protobufSerde;
  }

  protected Topology buildTopology(Properties envProps,
                                   final SpecificAvroSerde<Movie> movieSpecificAvroSerde,
                                   final KafkaProtobufSerde<MovieProtos.Movie> movieProtoSerde) {

    final String inputAvroTopicName = envProps.getProperty("input.avro.movies.topic.name");
    final String outProtoTopicName = envProps.getProperty("output.proto.movies.topic.name");

    final StreamsBuilder builder = new StreamsBuilder();

    // topic contains values in avro format
    final KStream<Long, Movie> avroMovieStream =
        builder.stream(inputAvroTopicName, Consumed.with(Long(), movieSpecificAvroSerde));

    //convert and write movie data in protobuf format
    avroMovieStream
        .map((key, avroMovie) ->
                 new KeyValue<>(key, MovieProtos.Movie.newBuilder()
                     .setMovieId(avroMovie.getMovieId())
                     .setTitle(avroMovie.getTitle())
                     .setReleaseYear(avroMovie.getReleaseYear())
                     .build()))
        .to(outProtoTopicName, Produced.with(Long(), movieProtoSerde));

    return builder.build();
  }

  protected Properties loadEnvProperties(String fileName) throws IOException {
    Properties envProps = new Properties();
    FileInputStream input = new FileInputStream(fileName);
    envProps.load(input);
    input.close();
    return envProps;
  }

  private void runTutorial(String configPath) throws IOException {

    Properties envProps = this.loadEnvProperties(configPath);
    Properties streamProps = this.buildStreamsProperties(envProps);

    Topology topology = this.buildTopology(envProps,
                                           this.movieAvroSerde(envProps),
                                           this.movieProtobufSerde(envProps));
    this.createTopics(envProps);

    final KafkaStreams streams = new KafkaStreams(topology, streamProps);
    final CountDownLatch latch = new CountDownLatch(1);

    // Attach shutdown handler to catch Control-C.
    Runtime.getRuntime().addShutdownHook(new Thread("streams-shutdown-hook") {
      @Override
      public void run() {
        streams.close(Duration.ofSeconds(5));
        latch.countDown();
      }
    });

    try {
      streams.cleanUp();
      streams.start();
      latch.await();
    } catch (Throwable e) {
      System.exit(1);
    }
    System.exit(0);
  }

  public static void main(String[] args) throws IOException {
    if (args.length < 1) {
      throw new IllegalArgumentException(
          "This program takes one argument: the path to an environment configuration file.");
    }

    new SerializationTutorial().runTutorial(args[0]);
  }
}

Compile and run the Kafka Streams program

7

In your terminal, run:

./gradlew shadowJar

This will produce an uberjar, which is a jar that contains your application code and all its dependencies.

Now that you have an uberjar for the Kafka Streams application, you can launch it locally. When you run the following, the prompt won’t return, because the application will run until you exit it. There is always another message to process, so streaming applications don’t exit until you force them.

java -jar build/libs/kstreams-serialization-standalone-0.0.1.jar configuration/dev.properties

Get ready to observe the Protobuf movies in the output topic

8

Before you start producing input data, it’s a good idea to set up the consumer on the output topic. This way, as soon as you produce a movie in the new format (Protobuf), you’ll see the results right away.

We’re using the kafka-protobuf-console-consumer tool to do that. Confluent Platform ships with this specialized command line consumer out of the box to read Protobuf-formatted messages.

Run this to get ready to consume the records:

docker exec -i schema-registry /usr/bin/kafka-protobuf-console-consumer --bootstrap-server broker:9092 --topic proto-movies --from-beginning

You won’t see any results until the next step.

Produce some Avro-formatted movies to the input topic

9

When the console producer starts, it will log some text and hang, waiting for your input. You can copy and paste all of the test data at once to see the results.

Start the console producer with this command in a terminal window of its own:

docker exec -i schema-registry /usr/bin/kafka-avro-console-producer --topic avro-movies --bootstrap-server broker:9092 --property value.schema="$(< src/main/avro/movie.avsc)"

When the producer starts up, copy and paste these JSON lines into the terminal:

{"movie_id":1,"title":"Lethal Weapon","release_year":1992}
{"movie_id":2,"title":"Die Hard","release_year":1988}
{"movie_id":3,"title":"Predator","release_year":1987}
{"movie_id":128,"title":"The Big Lebowski","release_year":1998}
{"movie_id":354,"title":"Tree of Life","release_year":2011}
{"movie_id":782,"title":"A Walk in the Clouds","release_year":1995}

Looking back in the consumer terminal, these are the results you should see if you paste in all the movies above:

{"movieId":"1","title":"Lethal Weapon","releaseYear":1992}
{"movieId":"2","title":"Die Hard","releaseYear":1988}
{"movieId":"3","title":"Predator","releaseYear":1987}
{"movieId":"128","title":"The Big Lebowski","releaseYear":1998}
{"movieId":"354","title":"Tree of Life","releaseYear":2011}
{"movieId":"782","title":"A Walk in the Clouds","releaseYear":1995}

You’ll notice that they look nearly identical to the input that you produced; the contents are in fact the same, although the Protobuf rendering uses camelCase field names and prints the 64-bit movieId values as strings. Since Protobuf isn’t a human-readable format, the kafka-protobuf-console-consumer tool helpfully renders the contents as something we can read, which happens to be JSON.

Congrats! You’ve converted formats across two topics.

Test it

Create a test configuration file

1

First, create a test file at configuration/test.properties:

application.id=serialization-app
bootstrap.servers=127.0.0.1:9092
schema.registry.url=mock://SR_DUMMY_URL:8081

input.avro.movies.topic.name=avro-movies
input.avro.movies.topic.partitions=1
input.avro.movies.topic.replication.factor=1

output.proto.movies.topic.name=proto-movies
output.proto.movies.topic.partitions=1
output.proto.movies.topic.replication.factor=1

Test the streams topology

2

Create a directory for the tests to live in:

mkdir -p src/test/java/io/confluent/developer/serialization

Now create the following file at src/test/java/io/confluent/developer/serialization/SerializationTutorialTest.java. Testing a Kafka Streams application requires a bit of test harness code, but the org.apache.kafka.streams.TopologyTestDriver class makes this easy.

There is only one method in SerializationTutorialTest annotated with @Test, and that is shouldChangeSerializationFormat(). This method actually runs our Streams topology using the TopologyTestDriver and some mocked data that is set up inside the test method.

package io.confluent.developer.serialization;

import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.TopologyTestDriver;
import org.junit.Test;

import java.io.IOException;
import java.util.List;
import java.util.Properties;

import io.confluent.developer.avro.Movie;
import io.confluent.developer.proto.MovieProtos;
import io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde;
import io.confluent.kafka.streams.serdes.protobuf.KafkaProtobufSerde;

import static io.confluent.developer.proto.MovieProtos.Movie.newBuilder;
import static org.apache.kafka.common.serialization.Serdes.Long;
import static org.hamcrest.CoreMatchers.equalTo;
import static org.hamcrest.MatcherAssert.assertThat;

public class SerializationTutorialTest {

  private final static String TEST_CONFIG_FILE = "configuration/test.properties";

  @Test
  public void shouldChangeSerializationFormat() throws IOException {
    SerializationTutorial tutorial = new SerializationTutorial();
    final Properties envProps = tutorial.loadEnvProperties(TEST_CONFIG_FILE);
    final Properties streamsProps = tutorial.buildStreamsProperties(envProps);

    String inputTopicName = envProps.getProperty("input.avro.movies.topic.name");
    String outputTopicName = envProps.getProperty("output.proto.movies.topic.name");

    final SpecificAvroSerde<Movie> avroSerde = tutorial.movieAvroSerde(envProps);
    final KafkaProtobufSerde<MovieProtos.Movie> protobufSerde = tutorial.movieProtobufSerde(envProps);

    Topology topology = tutorial.buildTopology(envProps, avroSerde, protobufSerde);
    streamsProps.put("statestore.cache.max.bytes", 0);
    TopologyTestDriver testDriver = new TopologyTestDriver(topology, streamsProps);

    testDriver
        .createInputTopic(inputTopicName, Long().serializer(), avroSerde.serializer())
        .pipeValueList(this.prepareInputFixture());

    final List<MovieProtos.Movie> moviesProto =
        testDriver.createOutputTopic(outputTopicName, Long().deserializer(), protobufSerde.deserializer())
            .readValuesToList();

    assertThat(moviesProto, equalTo(expectedMovies()));
  }

  /**
   * Prepares expected movies in protobuf format
   *
   * @return a list of three (3) movies
   */
  private List<MovieProtos.Movie> expectedMovies() {
    List<MovieProtos.Movie> movieList = new java.util.ArrayList<>();
    movieList.add(newBuilder().setMovieId(1L).setTitle("Lethal Weapon").setReleaseYear(1992).build());
    movieList.add(newBuilder().setMovieId(2L).setTitle("Die Hard").setReleaseYear(1988).build());
    movieList.add(newBuilder().setMovieId(3L).setTitle("Predator").setReleaseYear(1987).build());
    return movieList;
  }

  /**
   * Prepares test data in AVRO format
   *
   * @return a list of three (3) movies
   */
  private List<Movie> prepareInputFixture() {
    List<Movie> movieList = new java.util.ArrayList<>();
    movieList.add(new Movie(1L, "Lethal Weapon", 1992));
    movieList.add(new Movie(2L, "Die Hard", 1988));
    movieList.add(new Movie(3L, "Predator", 1987));
    return movieList;
  }
}

Invoke the tests

3

Now run the test, which is as simple as:

./gradlew test

Deploy on Confluent Cloud

Run your app with Confluent Cloud

1

Instead of running a local Kafka cluster, you may use Confluent Cloud, a fully managed Apache Kafka service.

  1. Sign up for Confluent Cloud, a fully managed Apache Kafka service.

  2. After you log in to Confluent Cloud Console, click Environments in the lefthand navigation, click on Add cloud environment, and name the environment learn-kafka. Using a new environment keeps your learning resources separate from your other Confluent Cloud resources.

  3. From the Billing & payment section in the menu, apply the promo code CC100KTS to receive an additional $100 free usage on Confluent Cloud (details). To avoid having to enter a credit card, add an additional promo code CONFLUENTDEV1. With this promo code, you will not have to enter a credit card for 30 days or until your credits run out.

  4. Click on LEARN and follow the instructions to launch a Kafka cluster and enable Schema Registry.

Next, from the Confluent Cloud Console, click on Clients to get the cluster-specific configurations, e.g., Kafka cluster bootstrap servers and credentials, Confluent Cloud Schema Registry and credentials, etc., and set the appropriate parameters in your client application. In the case of this tutorial, add the following properties to the client application’s input properties file, substituting all curly braces with your Confluent Cloud values.

# Required connection configs for Kafka producer, consumer, and admin
bootstrap.servers={{ BROKER_ENDPOINT }}
security.protocol=SASL_SSL
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username='{{ CLUSTER_API_KEY }}' password='{{ CLUSTER_API_SECRET }}';
sasl.mechanism=PLAIN
# Required for correctness in Apache Kafka clients prior to 2.6
client.dns.lookup=use_all_dns_ips

# Best practice for Kafka producer to prevent data loss
acks=all

# Required connection configs for Confluent Cloud Schema Registry
schema.registry.url=https://{{ SR_ENDPOINT }}
basic.auth.credentials.source=USER_INFO
schema.registry.basic.auth.user.info={{ SR_API_KEY }}:{{ SR_API_SECRET }}

Now you’re all set to run your streaming application locally, backed by a Kafka cluster fully managed by Confluent Cloud.