How to optimize your Kafka producer for throughput

Question:

How do I optimize my Kafka producer application for throughput?

Example use case:

When optimizing for performance, you'll typically need to consider tradeoffs between throughput and latency. Because of Kafka’s design, writing large volumes of data into it is not hard. However, many Kafka configuration parameters have default settings that optimize for latency. If your use case calls for higher throughput, this tutorial walks you through using `kafka-producer-perf-test` to measure baseline performance and tune your producer for large volumes of data.

Hands-on code example:

New to Confluent Cloud? Sign up and run this tutorial for free.

Short Answer

Here are some producer configuration parameters you can set to increase throughput. The values shown below are for demonstration purposes, and you will need to further tune these for your environment.

  • batch.size: increase to 100000–200000 (default 16384)

  • linger.ms: increase to 10–100 (default 0)

  • compression.type=lz4 (default none, i.e., no compression)

  • acks=1 (default all, since Apache Kafka version 3.0)

For a detailed explanation of these and other configuration parameters, read these recommendations for Kafka developers.
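As a sketch, these settings could also be placed in a producer configuration properties file alongside your connection settings. The values below are illustrative, not recommendations, and should be tuned for your environment:

```properties
# Throughput-oriented producer settings (illustrative values; tune for your environment)
batch.size=200000
linger.ms=100
compression.type=lz4
acks=1
```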

Run it

1
Provision your Kafka cluster

This tutorial requires access to an Apache Kafka cluster, and the quickest way to get started for free is on Confluent Cloud, which provides Kafka as a fully managed service. First, sign up for Confluent Cloud.

  1. After you log in to Confluent Cloud, click on Add cloud environment and name the environment learn-kafka. Using a new environment keeps your learning resources separate from your other Confluent Cloud resources.

  2. From the Billing & payment section in the Menu, apply the promo code CC100KTS to receive an additional $100 free usage on Confluent Cloud (details).

  3. Click on LEARN and follow the instructions to launch a Kafka cluster and to enable Schema Registry.

Confluent Cloud

2
Initialize the project

Make a local directory anywhere you’d like for this project:

mkdir optimize-producer-throughput && cd optimize-producer-throughput

Next, create a directory for configuration data:

mkdir configuration

3
Write the cluster information into a local file

From the Confluent Cloud Console, navigate to your Kafka cluster. From the Clients view, get the connection information customized to your cluster.

Create new credentials for your Kafka cluster, and then Confluent Cloud will show a configuration similar to the one below, with your new credentials automatically populated (make sure Show API keys is checked). Copy and paste it into a configuration/ccloud.properties file on your machine.

# Required connection configs for Kafka producer, consumer, and admin
bootstrap.servers={{ BOOTSTRAP_SERVERS }}
security.protocol=SASL_SSL
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username='{{ CLUSTER_API_KEY }}' password='{{ CLUSTER_API_SECRET }}';
sasl.mechanism=PLAIN
Do not directly copy and paste the above configuration. You must copy it from the Confluent Cloud Console so that it includes your Confluent Cloud information and credentials.

4
Download and set up the Confluent CLI

This tutorial has some steps for Kafka topic management and for reading from or writing to Kafka topics, for which you can use the Confluent Cloud Console or install the Confluent CLI. Instructions for installing the Confluent CLI and configuring it for your Confluent Cloud environment are available from within the Confluent Cloud Console: navigate to your Kafka cluster, click on the CLI and tools link, and run through the steps in the Confluent CLI tab.

The CLI clients for Confluent Cloud (ccloud) and Confluent Platform (confluent v1.0) have been unified into a single Confluent CLI client, confluent v2.0, which this tutorial uses. (The ccloud client will continue to work until it is sunset on May 9, 2022; migration instructions for the unified confluent CLI are available at https://docs.confluent.io/confluent-cli/current/migrate.html.)

5
Create a topic

In this step we’re going to create a topic for use during this tutorial. Use the following command to create the topic:

confluent kafka topic create topic-perf

This creates a topic called topic-perf with the default of 6 partitions. A topic partition is the unit of parallelism in Kafka: messages to different partitions can be sent in parallel by producers, written in parallel by different brokers, and read in parallel by different consumers.

In general, a higher number of topic partitions results in higher throughput, and to maximize throughput you want enough partitions to distribute across the brokers in your cluster. Although it might seem tempting to create topics with a very large number of partitions, there are trade-offs to increasing the partition count. Choose it carefully after benchmarking producer and consumer throughput in your environment. Also consider the design of your data patterns and key assignments, so that messages are distributed as evenly as possible across topic partitions and you avoid a partition imbalance.
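To illustrate key-based distribution: for keyed messages, Kafka's default partitioner hashes the record key (using murmur2) modulo the number of partitions. The sketch below uses CRC32 as a stand-in hash purely for illustration, not Kafka's actual algorithm, to show how varied keys spread load across the 6 partitions while a single hot key concentrates everything on one:

```python
import zlib

NUM_PARTITIONS = 6  # matches the topic-perf default above

def partition_for(key: bytes, num_partitions: int = NUM_PARTITIONS) -> int:
    """Simplified stand-in for Kafka's default partitioner, which hashes
    the record key (murmur2) modulo the partition count. CRC32 is used
    here only for illustration; it is not Kafka's actual hash."""
    return zlib.crc32(key) % num_partitions

# Varied keys spread records across all partitions...
counts = [0] * NUM_PARTITIONS
for i in range(6000):
    counts[partition_for(f"order-{i}".encode())] += 1

# ...while one hot key always lands on the same single partition.
hot = partition_for(b"hot-key")
```

With evenly varied keys, every partition receives a share of the 6000 records; a skewed key space would leave some partitions idle, which is the partition imbalance to avoid.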

6
Run a baseline producer performance test

Run a performance test to capture a baseline measurement for your Kafka producer, using default configuration parameters. This test will send 10000 records of size 8000 bytes each.

docker run -v $PWD/configuration/ccloud.properties:/etc/ccloud.properties confluentinc/cp-server:6.2.1 /usr/bin/kafka-producer-perf-test \
    --topic topic-perf \
    --num-records 10000 \
    --record-size 8000 \
    --throughput -1 \
    --producer.config /etc/ccloud.properties

Your results will vary depending on your connectivity and bandwidth to the Kafka cluster.

10000 records sent, 134.560525 records/sec (1.03 MB/sec), 25175.34 ms avg latency, 44637.00 ms max latency, 26171 ms 50th, 39656 ms 95th, 42469 ms 99th, 44377 ms 99.9th.

The key result to note is in the last line: a throughput of 134.560525 records/sec (1.03 MB/sec). This is the baseline producer performance with default configuration values.
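As a quick sanity check, the MB/sec figure follows from the records/sec rate times the record size (the tool reports MB as 1024 × 1024 bytes):

```python
# Derive the reported MB/sec from the records/sec rate and record size.
records_per_sec = 134.560525  # from the perf-test output above
record_size_bytes = 8000      # --record-size used in the test

mb_per_sec = records_per_sec * record_size_bytes / (1024 * 1024)
print(round(mb_per_sec, 2))  # 1.03
```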

7
Run a producer performance test with optimized throughput

Run the Kafka producer performance test again, sending the exact same number of records of the same size as the previous test, but this time use configuration values optimized for throughput.

Here are some producer configuration parameters you can set to increase throughput. The values shown below are for demonstration purposes, and you will need to further tune these for your environment.

  • batch.size: increase to 100000–200000 (default 16384)

  • linger.ms: increase to 10–100 (default 0)

  • compression.type=lz4 (default none, i.e., no compression)

  • acks=1 (default all, since Apache Kafka version 3.0)

For a detailed explanation of these and other configuration parameters, read these recommendations for Kafka developers.

docker run -v $PWD/configuration/ccloud.properties:/etc/ccloud.properties confluentinc/cp-server:6.2.1 /usr/bin/kafka-producer-perf-test \
    --topic topic-perf \
    --num-records 10000 \
    --record-size 8000 \
    --throughput -1 \
    --producer.config /etc/ccloud.properties \
    --producer-props \
        batch.size=200000 \
        linger.ms=100 \
        compression.type=lz4 \
        acks=1

Your results will vary depending on your connectivity and bandwidth to the Kafka cluster.

10000 records sent, 740.960285 records/sec (5.65 MB/sec), 3801.36 ms avg latency, 8198.00 ms max latency, 3297 ms 50th, 7525 ms 95th, 7949 ms 99th, 8130 ms 99.9th.

The key result to note is in the last line: a throughput of 740.960285 records/sec (5.65 MB/sec). For the test shown here, 5.65 MB/sec is about a 5x improvement over the 1.03 MB/sec baseline, but again, the improvement factor will vary depending on your environment.
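The improvement factor quoted above is simply the ratio of the two measured throughputs:

```python
# Ratio of optimized to baseline throughput from the two test runs above.
baseline_mb_per_sec = 1.03
optimized_mb_per_sec = 5.65

print(round(optimized_mb_per_sec / baseline_mb_per_sec, 1))  # 5.5
```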

This tutorial has demonstrated how to get started with improving producer throughput, and you should do further testing in your environment. Continue to tune these configuration parameters, and test them with your specific Kafka producer application, not just with kafka-producer-perf-test.
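As a minimal sketch of carrying these settings into an application, here is how they might be applied with the confluent-kafka Python client (the package, an installed librdkafka-backed client, and the bootstrap address are assumptions here; in this tutorial the real connection settings come from configuration/ccloud.properties):

```python
# Throughput-oriented settings, mirroring the perf-test run above.
# Values are illustrative; tune them for your environment.
throughput_config = {
    "bootstrap.servers": "localhost:9092",  # placeholder; use your cluster's address
    "batch.size": 200000,
    "linger.ms": 100,
    "compression.type": "lz4",
    "acks": 1,
}

def build_producer(config):
    # Assumes the confluent-kafka client is installed (pip install confluent-kafka).
    from confluent_kafka import Producer
    return Producer(config)
```

The same keys can instead be passed on the command line via --producer-props, as shown in the perf-test invocation above.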

8
Tear down Confluent Cloud resources

You may try another Kafka tutorial, but if you don’t plan on doing other tutorials, use the Confluent Cloud Console or CLI to destroy all the resources you created. Verify they are destroyed to avoid unexpected charges.