Visualizing Kafka Data in Grafana: Consuming Real-Time Messages for Dashboards

Ever tried visualizing data with Grafana? Whether you’re building a simple dashboard with Prometheus to track resource usage or running complex queries on Loki or Elasticsearch, Grafana is one of the most popular go-to tools.

Grafana offers open-source and self-hosted options, as well as enterprise and cloud solutions, all designed to help you gain insights from almost anything. And by anything, I really mean it: from time-series databases like Prometheus and InfluxDB, to relational databases such as PostgreSQL, and even cloud-native metrics from services like AWS CloudWatch and Google Cloud Monitoring.

However, in modern data platforms, a huge amount of data doesn’t necessarily sit in databases at all; it flows through streaming systems. And one name shows up almost everywhere: Kafka!

Got Kafka? Let's Talk About It!

Kafka is a powerful broker, and if you're using it, you probably have a bunch of questions about your Kafka cluster. These questions usually fall into two categories:

Curious About Kafka Metrics?

  • How many clusters are we running?

  • How much disk space is each broker using?

  • Is the cluster healthy?

  • What’s the throughput in messages per second or bytes per second?

  • How many topics and partitions do we have?

These are all Kafka cluster metrics. The good news is that this part is already well covered: you can export Kafka metrics to Prometheus and visualize them in Grafana using ready-made dashboards.

Wondering About Kafka Data?

  • What messages are being produced on a specific topic?

  • Can I stream messages in real time to see what’s actually happening?

  • Can I build dashboards based on the data flowing through Kafka?

  • My messages are in JSON, Avro, or Protobuf and are fairly complex. How can I visualize them effectively?

If these questions sound familiar, then you’re no longer just interested in metrics. You want visibility into the data itself.

This is where a Grafana data source plugin becomes essential, allowing Grafana to connect directly to Kafka, consume messages, and turn streaming data into dashboards.

Existing Tools for Inspecting Kafka Data

Kafka ecosystems already offer excellent tools for working directly with Kafka data, and many teams rely on them on a daily basis.

  • kcat (formerly kafkacat) is a lightweight CLI tool that’s extremely useful for quick inspections, debugging producers and consumers, and validating message payloads.

  • AKHQ, Conduktor, Confluent Control Center, Redpanda Console, and similar UIs provide rich, Kafka-focused interfaces for browsing topics, inspecting messages, and working with schemas.

These tools are purpose-built for Kafka-specific workflows. Beyond message inspection, they offer deep visibility into Kafka itself, including cluster health, metadata, Schema Registry, Kafka Connect, and overall operational state. When teams need to understand or operate Kafka as a system, these tools are often a natural fit.

Then, Why Visualize Kafka Data in Grafana?

The Kafka Data Source plugin does not aim to replace these tools. Instead, it focuses on a different and complementary use case.

In many organizations, Grafana is already the central place for dashboards and observability. Metrics, logs, traces, and business KPIs often live side by side in Grafana, giving teams a shared view of how systems behave.

By bringing Kafka data into Grafana, teams can:

  • Visualize Kafka data alongside existing dashboards, rather than in a separate, Kafka-only tool

  • Correlate Kafka messages with application metrics, infrastructure metrics, and alerts

  • Use Grafana’s alerting and visualization capabilities to gain deeper insights

  • Follow the principle of having all operational insights in one place

While the aforementioned tools focus exclusively on Kafka, Grafana provides a broader observability context. The Kafka Data Source plugin helps bridge that gap, making Kafka data part of the same story as the rest of your systems.

Kafka Data Source Plugin

The Kafka Data Source plugin works as a Kafka consumer. It reads messages from the topics and partitions you specify and makes that data available directly inside Grafana, so you can visualize streaming data instead of just monitoring cluster metrics.
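Conceptually, what the plugin does is similar to the following sketch: subscribe to a topic, read records, and decode each value as JSON. This is an illustration using the kafka-python library and a hypothetical `localhost:9092` broker, not the plugin’s actual implementation (which runs inside Grafana’s backend).

```python
import json

def decode_value(raw: bytes) -> dict:
    """Decode a Kafka record value as JSON, as the plugin does
    when the message format is set to JSON."""
    return json.loads(raw.decode("utf-8"))

def tail_topic(topic: str = "test", bootstrap: str = "localhost:9092"):
    """Print live records from a topic. Call this only with a
    reachable broker; requires `pip install kafka-python`."""
    from kafka import KafkaConsumer
    consumer = KafkaConsumer(
        topic,
        bootstrap_servers=bootstrap,
        auto_offset_reset="latest",  # mirrors the plugin's "Latest" offset mode
    )
    for record in consumer:
        print(record.partition, record.offset, decode_value(record.value))
```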

Get Started

To use the plugin, you first need to install it in Grafana. You can do this either via the Grafana UI or the CLI. Alternatively, you can install it using the plugin ZIP file or by following the provisioning instructions available on the plugin’s GitHub page.

grafana-cli plugins install hamedkarbasi93-kafka-datasource

In this article, we’ll focus on using the plugin to visualize nested JSON messages in Grafana. I’ll cover other features such as Avro in future articles.

What You Need Before You Start

Before diving in, make sure you have:

  • Grafana with the Kafka Data Source plugin installed

  • Access to a Kafka cluster, including its connection details (bootstrap servers and TLS/SASL credentials, if required)

Setting Up Your Data Source in Grafana

Once the plugin is installed, you can add a new Kafka data source in Grafana and configure the following settings.

Connection Settings

  • Bootstrap servers
    Enter a comma-separated list of Kafka brokers.

  • Security protocol
    Choose one of the supported protocols:

    • PLAINTEXT

    • SSL

    • SASL_PLAINTEXT

    • SASL_SSL

Authentication and TLS

Depending on the selected security protocol, you may need to configure authentication and TLS:

  • SASL settings

    • SASL mechanism

    • SASL username

    • SASL password

  • TLS settings

    • Server certificate

    • Client certificate

    • Client key

These fields are required only when SASL or TLS is enabled.

Schema Registry (Avro Only)

Schema Registry settings are only required when the message format is set to Avro. If you’re working with JSON messages, you can safely skip this section.

Advanced JSON Settings

For JSON messages, the plugin provides advanced controls to help manage complex and deeply nested structures:

  • Maximum JSON flatten depth
    Controls how deeply nested objects are flattened.

  • JSON field limit
    Sets the maximum number of fields that will be expanded.

These settings keep dashboards readable, prevent overly wide tables, and help protect Grafana from heavy data loads.

Key Query Editor Fields

Once the data source is configured, you’ll use the query editor to define what data you want to read from Kafka and how it should appear in Grafana.

  • Topic — Enter the Kafka topic name. You can click Fetch to retrieve the available partitions for that topic.

  • Partition — Choose All partitions or a specific partition.

  • Offset — Select the starting position for the query:

    • Latest — read the newest messages.

    • Last N messages — set N to read the most recent N messages.

    • Earliest — start from the earliest available offset.

  • Timestamp Mode — Choose the timestamp for Grafana points:

    • Kafka Event Time — uses Kafka message metadata timestamp.

    • Dashboard received time — uses the time Grafana receives the message.

  • Message Format — Set to JSON for JSON message parsing.
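The three offset modes can be reasoned about per partition. The sketch below is my own illustration of the seek logic, not the plugin’s code; `beginning` and `end` stand for the earliest retained offset and the next offset to be written, as reported by the broker.

```python
def start_offset(mode: str, beginning: int, end: int, n: int = 0) -> int:
    """Return the offset a consumer would seek to for one partition."""
    if mode == "earliest":
        return beginning
    if mode == "latest":
        # Only messages produced after the query starts are read.
        return end
    if mode == "last-n":
        # Read the most recent N messages, but never seek before
        # the earliest offset still retained by the broker.
        return max(beginning, end - n)
    raise ValueError(f"unknown offset mode: {mode}")
```

For example, with 100 messages in a partition and N set to 10, the query would start at offset 90; if retention has already removed the first 95 messages, it starts at 95 instead.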

Mapping JSON to Grafana

When the Message Format is set to JSON, the plugin parses the Kafka record value as a JSON object and prepares it for visualization in Grafana. To make nested structures usable in dashboards, the plugin automatically flattens nested JSON into dot-delimited fields.

Example JSON Message

{
  "service": {
    "name": "api",
    "latency": {
      "p50": 23,
      "p95": 75
    }
  },
  "host": "node-2",
  "requests": 128
}

After Flattening

The JSON above is transformed into the following fields:

  • service.name → "api"

  • service.latency.p50 → 23 (numeric)

  • service.latency.p95 → 75 (numeric)

  • host → "node-2"

  • requests → 128 (numeric)
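The flattening above can be approximated with a short recursive helper. This is a sketch of the idea, not the plugin’s source: the `max_depth` parameter mimics the “Maximum JSON flatten depth” setting, and how the real plugin renders arrays or over-deep objects may differ (here they are kept as JSON strings).

```python
import json

def flatten(obj, max_depth=5, prefix="", depth=0):
    """Flatten nested dicts into dot-delimited fields, approximating
    how the plugin prepares JSON messages for Grafana."""
    fields = {}
    if isinstance(obj, dict) and depth < max_depth:
        for key, value in obj.items():
            path = f"{prefix}.{key}" if prefix else key
            fields.update(flatten(value, max_depth, path, depth + 1))
    else:
        # Leaves, arrays, and anything past max_depth become one field.
        fields[prefix] = obj if not isinstance(obj, (dict, list)) else json.dumps(obj)
    return fields
```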

How Grafana Uses These Fields

  • Numeric fields (such as latency percentiles or request counts) are ideal for time series visualizations like line charts.

  • String or non-numeric fields work best in table panels or as labels and metadata.

This mapping lets you turn raw Kafka messages into meaningful dashboards, and you can then apply Grafana transformations such as group-by or filtering to shape the data into exactly what you are looking for.

Example Workflow

Let’s walk through a simple case study to see how everything comes together.

Step 1: Produce Sample Data

Start by producing some sample JSON messages to a Kafka topic (for example, test). You can use the example producer provided in the repository to generate messages like the one below:

{
  "alerts": [
    {
      "severity": "warning",
      "type": "cpu_high",
      "value": 25.867063332016414
    },
    {
      "severity": "info",
      "type": "mem_low",
      "value": 55.72157456801047
    }
  ],
  "host": {
    "ip": "127.0.0.1",
    "name": "srv-01"
  },
  "metrics": {
    "cpu": {
      "load": 0.25867063332016416,
      "temp": 65.21794911901341
    },
    "mem": {
      "free": 9065,
      "used": 2767
    }
  },
  "processes": [
    "nginx",
    "mysql",
    "redis"
  ],
  "tags": [
    "prod",
    "edge"
  ],
  "value1": 0.25867063332016416,
  "value2": 1.1144314913602094
}

This message includes a mix of nested objects, arrays, numeric values, and strings, making it a fairly realistic example of what Kafka messages often look like in practice.
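If you prefer not to use the repository’s producer, a message of this shape can be produced with a short script like the one below. This is a hedged sketch, not the repo’s actual producer: it assumes the kafka-python library and a broker at `localhost:9092`, and it randomizes only a few of the fields for brevity.

```python
import json
import random

def build_message() -> dict:
    """Build a sample payload shaped like the message above,
    with a few randomized values."""
    cpu = random.random()
    return {
        "host": {"ip": "127.0.0.1", "name": "srv-01"},
        "metrics": {
            "cpu": {"load": cpu, "temp": 60 + cpu * 20},
            "mem": {"free": random.randint(1000, 10000),
                    "used": random.randint(1000, 10000)},
        },
        "processes": ["nginx", "mysql", "redis"],
        "tags": ["prod", "edge"],
        "value1": cpu,
        "value2": random.random() * 2,
    }

def send_sample(topic: str = "test", bootstrap: str = "localhost:9092"):
    """Send one sample message. Call this only with a reachable
    broker; requires `pip install kafka-python`."""
    from kafka import KafkaProducer
    producer = KafkaProducer(
        bootstrap_servers=bootstrap,
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )
    producer.send(topic, build_message())
    producer.flush()
```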

Step 2: Explore the Data in Grafana

Next, open the Explore page in your Grafana instance. Select the Kafka data source and configure the query editor with the following values:

  • Topic: test

  • Partition: All partitions

  • Offset: Latest

  • Timestamp Mode: Kafka Event Time

  • Message Format: JSON

Once the query is running, you should see live data streaming into Grafana as a table. Nested JSON fields will be flattened automatically, making it easy to inspect both numeric and non-numeric values.

Step 3: Visualize the Data with Dashboards

To take this a step further, you can use the provisioned dashboard included in the repository to visualize the same Kafka data using multiple panels.

This dashboard demonstrates how different parts of the message can be visualized:

  • numeric fields as time series

  • structured data in tables

  • multiple signals derived from the same Kafka topic

It’s a simple example, but it highlights the core idea: Kafka data can be treated just like any other data source in Grafana and combined with existing dashboards and observability signals.

Conclusion

Kafka plays a central role in many modern data platforms, but working with its data often requires jumping between specialized tools. At the same time, Grafana has become the place where teams come together to observe, correlate, and understand how their systems behave. The Kafka Data Source plugin is designed to bridge that gap.

By making Kafka data available directly in Grafana, the plugin allows you to visualize streaming data alongside existing dashboards, correlate it with metrics and alerts, and gain insights without leaving the tools your organization already relies on. It doesn’t replace Kafka-native tools; instead, it complements them by bringing Kafka data into the broader observability picture.

In this article, we focused on visualizing JSON messages and building simple, real-time workflows. In upcoming articles, we’ll explore more advanced features such as Avro support, schema registry integration, and additional use cases that build on the same core ideas.

Your Turn

  • How would visualizing Kafka data in Grafana change the way your team monitors and understands your systems?

  • What challenges have you faced when working with Kafka data, and how do you currently deal with them?

  • What’s the first Kafka use case you’d try visualizing in Grafana?