Security¶
Related Notes
messaging/kafka/index | messaging/kafka/architecture | messaging/kafka/operations | messaging/index
Threat Model¶
| Threat Vector | Impact | Mitigation |
|---|---|---|
| Broker compromise | Direct read of all topic data on disk; ability to forge produce/fetch responses | Disk encryption (LUKS / EBS encryption); restrict OS access; audit kafka-authorizer.log; network isolation |
| KRaft controller compromise | Forge metadata records (create topics, alter ACLs); rewire partition leadership | Run controllers in isolated mode on hardened hosts; require mTLS between broker and controller listener; restrict controller.listener.names to internal-only network paths |
| Client impersonation | Producer writes records as another principal; consumer reads topics they shouldn't | SASL or mTLS authentication everywhere; ACLs with explicit deny defaults |
| Data exfiltration via consumer | Authenticated consumer subscribes to sensitive topics and exfiltrates | Topic-scoped ACLs (Read on Topic:payments granted to specific principals only); per-user consumer_byte_rate quotas; egress monitoring |
| MITM on wire | Eavesdrop produce/fetch traffic; downgrade to PLAINTEXT | TLS on every listener (SSL or SASL_SSL); disable PLAINTEXT listeners on prod; pin TLS 1.2+ |
| Replay attacks | Re-send captured produce requests to duplicate writes | Idempotent producer (PID + sequence number deduplication); transactional producer with fencing |
| JAAS module injection (CVE-2025-27818) | Attacker with AlterConfigs permission triggers RCE via LDAP login module | Apply 3.9.1/4.0.0+; set org.apache.kafka.disallowed.login.modules; restrict who can call AlterConfigs |
| OAUTHBEARER URL abuse (CVE-2025-27817) | Arbitrary file read / SSRF through sasl.oauthbearer.token.endpoint.url | Apply 3.9.1/4.0.0+; set -Dorg.apache.kafka.sasl.oauthbearer.allowed.urls=https://idp.example.com/... |
| MM2 cross-cluster credential leak | Compromised replicator can read source and write target | Separate principals for MM2 source/target; least-privilege ACLs on internal MM2 topics; encrypt MM2 worker-to-broker traffic |
| ZooKeeper compromise (legacy clusters) | Read/alter cluster metadata directly | Migrate off ZK to KRaft (Kafka 4.0+); never expose ZK to untrusted networks; SASL on ZK if you must keep it |
| Sensitive logging | Credentials/payloads written to broker logs at DEBUG level | Keep root logger at INFO; never enable NetworkClient DEBUG in prod (KIP-714 / CVE related disclosures) |
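The replay-attack mitigation in the table is a client-side setting. A minimal producer snippet (idempotence has been the default since Kafka 3.0, but pinning it explicitly guards against accidental overrides):

# producer.properties
# idempotence gives PID + sequence-number dedup on the broker
enable.idempotence=true
# idempotence requires acks=all and at most 5 in-flight requests per connection
acks=all
max.in.flight.requests.per.connection=5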
Authentication (SASL & mTLS)¶
Kafka supports several authentication mechanisms on each listener. The mechanism is selected via security.protocol (transport) and sasl.mechanism (when SASL is in use).
Listener Configuration¶
# server.properties — three listeners with three protocols
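# NOTE: the PLAINTEXT listener below is for lab use only; per the threat
# model above, disable PLAINTEXT listeners on production clusters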
listeners=PLAINTEXT://:9092,SASL_SSL://:9094,SSL://:9095
advertised.listeners=PLAINTEXT://broker1:9092,SASL_SSL://broker1.example.com:9094,SSL://broker1.example.com:9095
listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SASL_SSL:SASL_SSL,SSL:SSL,CONTROLLER:SSL
# broker-to-broker traffic uses the mTLS listener
# (comment on its own line: Java properties treat a trailing "#" as part of the value)
inter.broker.listener.name=SSL
sasl.enabled.mechanisms=SCRAM-SHA-512,OAUTHBEARER
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-512
SASL Mechanisms¶
| Mechanism | When to Use | Strengths | Weaknesses |
|---|---|---|---|
| PLAIN | Lab / over a TLS-only listener with an external trust boundary | Simple; widely supported | Sends plaintext password to broker (must be wrapped in TLS) |
| SCRAM-SHA-256 / SCRAM-SHA-512 | General-purpose username/password without an external IdP | No password on the wire (challenge-response); credentials stored in cluster metadata | A captured SCRAM exchange permits offline dictionary attacks, so always pair with TLS |
| GSSAPI (Kerberos) | Enterprise environments with existing Active Directory / Heimdal KDC | Strong mutual auth; well-understood by ops | Heavyweight client setup; keytab management |
| OAUTHBEARER | Modern microservices; integration with corporate IdP (Keycloak, Okta, Auth0, Azure AD) | Token-based; integrates with OAuth 2.0 / OIDC; short-lived credentials | Default Kafka implementation creates unsecured JWTs (RFC 7515 unsecured) — usable only for dev. Production deployments need a real OIDC validator |
SCRAM Setup Example¶
# Create a SCRAM-SHA-512 credential for User:alice
bin/kafka-configs.sh --bootstrap-server kafka:9094 \
--command-config admin.properties \
--alter --add-config 'SCRAM-SHA-512=[iterations=8192,password=ChangeMeNow]' \
--entity-type users --entity-name alice
# Client config snippet
sasl.mechanism=SCRAM-SHA-512
security.protocol=SASL_SSL
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
username="alice" password="ChangeMeNow";
OAUTHBEARER (Production with OIDC)¶
# Client side — exchange client credentials at the IdP for a short-lived JWT
security.protocol=SASL_SSL
sasl.mechanism=OAUTHBEARER
sasl.login.callback.handler.class=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginCallbackHandler
sasl.oauthbearer.token.endpoint.url=https://idp.example.com/realms/prod/protocol/openid-connect/token
sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \
clientId="orders-svc" clientSecret="${env:OIDC_CLIENT_SECRET}";
# Broker side — validate JWTs against the IdP's JWKS
listener.name.sasl_ssl.oauthbearer.sasl.server.callback.handler.class=\
org.apache.kafka.common.security.oauthbearer.OAuthBearerValidatorCallbackHandler
listener.name.sasl_ssl.oauthbearer.sasl.oauthbearer.jwks.endpoint.url=\
https://idp.example.com/realms/prod/protocol/openid-connect/certs
listener.name.sasl_ssl.oauthbearer.sasl.oauthbearer.expected.audience=kafka
listener.name.sasl_ssl.oauthbearer.sasl.oauthbearer.expected.issuer=\
https://idp.example.com/realms/prod
# Required since 4.0 — explicitly allow-list the JWKS / token endpoints
listener.name.sasl_ssl.oauthbearer.sasl.oauthbearer.allowed.urls=\
https://idp.example.com/realms/prod/protocol/openid-connect/certs,\
https://idp.example.com/realms/prod/protocol/openid-connect/token
OAUTHBEARER in 4.0+
From Kafka 4.0, sasl.oauthbearer.allowed.urls defaults to empty as a hardening for CVE-2025-27817. Operators must explicitly list trusted IdP URLs. In 3.9.1, all URLs are allowed for backward compatibility but the JVM property -Dorg.apache.kafka.sasl.oauthbearer.allowed.urls should be set.
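Before touching broker config, it helps to smoke-test the token endpoint directly with a plain OAuth 2.0 client_credentials exchange (standard curl; the client id/secret match the client config above):

curl -s -X POST https://idp.example.com/realms/prod/protocol/openid-connect/token \
  -d grant_type=client_credentials \
  -d client_id=orders-svc \
  -d client_secret="$OIDC_CLIENT_SECRET"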
Mutual TLS (mTLS / SSL)¶
# server.properties — full TLS listener with client-cert auth
listeners=SSL://:9095
ssl.keystore.location=/etc/kafka/tls/server.keystore.jks
ssl.keystore.password=${env:KAFKA_KEYSTORE_PASSWORD}
ssl.key.password=${env:KAFKA_KEY_PASSWORD}
ssl.truststore.location=/etc/kafka/tls/server.truststore.jks
ssl.truststore.password=${env:KAFKA_TRUSTSTORE_PASSWORD}
ssl.client.auth=required
ssl.enabled.protocols=TLSv1.3,TLSv1.2
ssl.cipher.suites=TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256
ssl.endpoint.identification.algorithm=https
The client's Subject DN (or any field selected via ssl.principal.mapping.rules) becomes the principal used for ACL evaluation.
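For example, a single mapping rule can strip a DN down to its CN so ACLs bind to short names (KIP-371 rule syntax; the DN shape is illustrative):

# maps "CN=orders-svc,OU=platform,O=Example" to principal User:orders-svc
ssl.principal.mapping.rules=RULE:^CN=(.*?),.*$/$1/,DEFAULT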
Authorization (ACLs)¶
Once an authorizer is configured, Kafka is default-deny: any request not matched by an allow ACL is rejected (assuming allow.everyone.if.no.acl.found stays at its default of false). The built-in authorizer is org.apache.kafka.metadata.authorizer.StandardAuthorizer (KRaft-native); the older ZooKeeper-backed kafka.security.authorizer.AclAuthorizer was removed in 4.0.
# server.properties
authorizer.class.name=org.apache.kafka.metadata.authorizer.StandardAuthorizer
super.users=User:admin;User:CN=cluster-admin
# default-deny: resources without ACLs reject all non-super-user access
# (false is already the default; stated here for explicitness)
allow.everyone.if.no.acl.found=false
Common ACL Recipes¶
# Producer permissions on a topic
bin/kafka-acls.sh --bootstrap-server kafka:9094 --command-config admin.properties \
--add --allow-principal User:orders-svc \
--producer --topic orders
# Consumer permissions for a specific group
bin/kafka-acls.sh --bootstrap-server kafka:9094 --command-config admin.properties \
--add --allow-principal User:billing-svc \
--consumer --topic orders --group billing-aggregator
# Prefix-based topic ACL (every topic starting with "events.")
bin/kafka-acls.sh --bootstrap-server kafka:9094 --command-config admin.properties \
--add --allow-principal User:platform-events \
--producer --topic events. --resource-pattern-type prefixed
# Transactional producer (transactional.id ACL)
bin/kafka-acls.sh --bootstrap-server kafka:9094 --command-config admin.properties \
--add --allow-principal User:orders-svc \
--operation Write --operation Describe \
--transactional-id orders-svc-txn-1
# Cluster admin (use sparingly)
bin/kafka-acls.sh --bootstrap-server kafka:9094 --command-config admin.properties \
--add --allow-principal User:CN=cluster-admin \
--operation All --cluster
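Verify what landed with --list (filter by resource, or omit the filter to dump everything):

bin/kafka-acls.sh --bootstrap-server kafka:9094 --command-config admin.properties \
  --list --topic orders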
OPA Integration¶
For policy-as-code, several open-source authorizers wrap the Kafka authorizer SPI to delegate decisions to Open Policy Agent (e.g. Bisnode's kafka-open-policy-agent-plugin, Aiven's similar plugin, and StyraHub's Rego-based examples). The plugin runs in-broker, calls OPA over localhost, and caches decisions. This gives fine-grained, attribute-based authorization (time-of-day, business unit, message-attribute checks) that vanilla ACLs cannot express.
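A sketch of the broker-side wiring, using property names from the Bisnode/Styra opa-kafka-plugin README (verify against the plugin version you deploy; the policy path is an assumption):

# server.properties — delegate authorization decisions to OPA on localhost
authorizer.class.name=org.openpolicyagent.kafka.OpaAuthorizer
opa.authorizer.url=http://localhost:8181/v1/data/kafka/authz/allow
# fail closed if OPA is unreachable
opa.authorizer.allow.on.error=false
# cache decisions so the OPA round trip stays off the per-request hot path
opa.authorizer.cache.expire.after.seconds=600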
Encryption¶
TLS In-Transit¶
Use TLS on every listener that crosses an untrusted boundary:
- Client → broker
- Broker → broker (inter.broker.listener.name)
- Broker → controller (controller.listener.names)
- MirrorMaker 2 source → MM2 worker → target
Pin ssl.enabled.protocols=TLSv1.3,TLSv1.2; disable older protocols. Use cert-manager (or the Strimzi operator's built-in CA) to rotate certificates automatically — Strimzi rotates the cluster CA every 365 days by default.
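One quick check that the protocol pinning took effect: a handshake forced to an old protocol should fail while TLS 1.3 succeeds (standard openssl s_client flags; the host is a placeholder):

openssl s_client -connect broker1.example.com:9094 -tls1_1 </dev/null   # expect failure
openssl s_client -connect broker1.example.com:9094 -tls1_3 </dev/null   # expect handshake OK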
Encryption At Rest¶
Apache Kafka does not ship native end-to-end record encryption. Options:
| Layer | Approach | Notes |
|---|---|---|
| Disk | LUKS / EBS volume encryption | Transparent; broker is unaware. Standard practice. |
| Application | Client-side envelope encryption (Vault Transit / KMS) | Producer encrypts before send; consumer decrypts after receive. Agree a schema/contract for the key-id header so consumers can select the right key. |
| Kafka Connect | SMT (Single Message Transform) for field-level encryption | Several community SMTs implement field-level encryption with KMS-backed DEKs. |
| Confluent Platform | Client-Side Field-Level Encryption (CSFLE) | Commercial; integrates with Schema Registry tags. |
No native broker-side encryption
There is no shipped KIP that gives Apache Kafka transparent broker-side record encryption with operator-managed keys (analogous to Pulsar's encryption support). Several KIPs have been discussed (KIP-317, KIP-1124) but as of Kafka 4.2 nothing is GA. Until then, disk encryption plus client-side encryption of sensitive fields is the standard layered defense.
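To make the Application row concrete, here is an illustrative Vault Transit round trip (the key name kafka-orders and the payload are hypothetical; in practice the encrypt/decrypt calls live in producer/consumer code rather than a shell):

# producer side: encrypt before send; the returned ciphertext becomes the record value
vault write transit/encrypt/kafka-orders plaintext="$(base64 <<< 'sensitive-payload')"
# consumer side: decrypt after receive; output is base64-encoded plaintext
vault write transit/decrypt/kafka-orders ciphertext="vault:v1:<ciphertext-from-encrypt>"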
Audit Logging¶
Apache Kafka does not ship a dedicated "audit log" subsystem. The de facto audit mechanism is the kafka.authorizer.logger log4j logger, which the StandardAuthorizer (and AclAuthorizer historically) writes to:
- INFO: every Deny decision is logged with principal, host, operation, and resource.
- DEBUG: every Allow decision is logged (off by default; high volume).
# log4j2.properties (Kafka 4.x uses log4j2)
appender.authorizer.type = RollingFile
appender.authorizer.name = AuthorizerFile
appender.authorizer.fileName = ${sys:kafka.logs.dir}/kafka-authorizer.log
appender.authorizer.filePattern = ${sys:kafka.logs.dir}/kafka-authorizer.log.%d{yyyy-MM-dd-HH}
appender.authorizer.layout.type = PatternLayout
appender.authorizer.layout.pattern = [%d] %p %m (%c)%n
appender.authorizer.policies.type = Policies
appender.authorizer.policies.time.type = TimeBasedTriggeringPolicy
appender.authorizer.policies.time.interval = 1
logger.authorizer.name = kafka.authorizer.logger
# raise to DEBUG to also log Allows (comment on its own line: properties
# parsers would otherwise include a trailing "#" comment in the value)
logger.authorizer.level = INFO
logger.authorizer.appenderRef.file.ref = AuthorizerFile
logger.authorizer.additivity = false
A typical authorizer log entry:
[2026-04-28 14:32:11,455] INFO Principal = User:CN=consumer is Denied Operation = Read
from host = 10.0.4.17 on resource = Topic:LITERAL:payments for request = Fetch
with resourceRefCount = 1 (kafka.authorizer.logger)
For richer audit trails:
- Confluent Platform ships confluent.security.event.router, which publishes audit events to a dedicated audit topic in JSON CloudEvents format.
- Conduktor, Lenses.io, and proxy-based products (e.g. Kroxylicious) provide topic-level proxy logging for record-level audit (which records were consumed by which principal).
- Forward kafka-authorizer.log to a SIEM (Splunk, Elastic, Loki, Datadog) for retention and alerting; alert on bursts of Deny decisions.
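Until a SIEM pipeline exists, a crude burst check against the entry format shown above (KAFKA_LOGS_DIR is a placeholder for the kafka.logs.dir used in the log4j2 config):

# count Deny decisions in the current authorizer log; alert if the number spikes
grep -c ' is Denied ' "$KAFKA_LOGS_DIR"/kafka-authorizer.log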
MirrorMaker 2 Cross-Cluster Replication Security¶
# mm2.properties — secure SASL_SSL on both source and target clusters
clusters = src, dst
src.bootstrap.servers = src-broker1:9094,src-broker2:9094,src-broker3:9094
dst.bootstrap.servers = dst-broker1:9094,dst-broker2:9094,dst-broker3:9094
src.security.protocol = SASL_SSL
src.sasl.mechanism = SCRAM-SHA-512
src.sasl.jaas.config = org.apache.kafka.common.security.scram.ScramLoginModule required \
username="mm2-replicator" password="${env:MM2_SRC_PASSWORD}";
src.ssl.truststore.location = /etc/mm2/src-truststore.jks
src.ssl.truststore.password = ${env:MM2_SRC_TRUST_PASSWORD}
dst.security.protocol = SASL_SSL
dst.sasl.mechanism = SCRAM-SHA-512
dst.sasl.jaas.config = org.apache.kafka.common.security.scram.ScramLoginModule required \
username="mm2-replicator" password="${env:MM2_DST_PASSWORD}";
dst.ssl.truststore.location = /etc/mm2/dst-truststore.jks
dst.ssl.truststore.password = ${env:MM2_DST_TRUST_PASSWORD}
src->dst.enabled = true
src->dst.topics = orders.*, events.*
src->dst.replication.factor = 3
src->dst.sync.topic.acls.enabled = true
src->dst.sync.group.offsets.enabled = true
Best practices:
- Use separate dedicated principals for MM2 in the source (read-only) and target (write + create-internal-topics); a sketch follows this list.
- Restrict MM2 internal topics (mm2-offsets.dst.internal, mm2-status.dst.internal, mm2-configs.dst.internal, heartbeats, *.checkpoints.internal) with ACLs; only the MM2 worker principal needs to write to them.
- Pin TLS 1.2+ on both clusters; verify the target cluster's certificate chain in the MM2 truststore.
- Replicate ACLs (sync.topic.acls.enabled=true) so failover doesn't silently break authorization.
- Encrypt the MM2 worker host's local Connect state directories; treat MM2 workers as part of the Kafka security boundary.
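A sketch of the source-side read-only grant (src-admin.properties is a placeholder; MM2 also needs Describe on the cluster and access to its own internal topics, so treat this as a starting point rather than the full list):

bin/kafka-acls.sh --bootstrap-server src-broker1:9094 --command-config src-admin.properties \
  --add --allow-principal User:mm2-replicator \
  --operation Read --operation Describe \
  --topic orders. --resource-pattern-type prefixed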
Recent CVEs (2024–2025)¶
| CVE | Score | Title | Fixed In |
|---|---|---|---|
| CVE-2025-27817 | 7.5 (HIGH) | Apache Kafka Client SASL/OAUTHBEARER arbitrary file read & SSRF via sasl.oauthbearer.token.endpoint.url / jwks.endpoint.url | 3.9.1, 4.0.0 (set sasl.oauthbearer.allowed.urls) |
| CVE-2025-27818 | 8.8 (HIGH) | Authenticated AlterConfigs operator can configure LdapLoginModule JAAS to trigger Java deserialization RCE on broker / Connect worker | 3.9.1, 4.0.0 (disallowed.login.modules defaults updated) |
| CVE-2025-27819 | 7.5 (HIGH) | SASL JAAS configuration vulnerability allowing RCE/DoS for principals holding AlterConfigs on cluster resource | 3.9.1, 4.0.0 |
| CVE-2024-31141 | 6.5 (MEDIUM) | Kafka Client ConfigProvider plugins (FileConfigProvider, DirectoryConfigProvider, EnvVarConfigProvider) allow disclosure of disk content / env vars when client config is supplied by an untrusted party | Documented mitigation; restrict who supplies client configs |
| CVE-2024-27309 | (NVD) | Potential incorrect ACL application while a cluster is migrating from ZooKeeper mode to KRaft mode | 3.6.2, 3.7.0 |
Patching Cadence
Fixes for the 2025 CVE cluster (27817/27818/27819) have been backported by every major Kafka redistributor (Confluent, Strimzi, Bitnami, AWS MSK, Azure Event Hubs). If you operate Kafka 3.x, upgrade to 3.9.1 at minimum and set the documented JVM properties. For new deployments, target Kafka 4.1+, which has the hardened defaults (empty allowed.urls, disallowed login modules) baked in.
The complete list of Apache-disclosed Kafka vulnerabilities is maintained at kafka.apache.org/cve-list.html.
Sources¶
- Apache Kafka — Security documentation
- Apache Kafka — Authentication using SASL
- Apache Kafka — Authorization and ACLs
- Apache Kafka — CVE list
- NVD — CVE-2025-27817
- NVD — CVE-2025-27818
- NVD — CVE-2025-27819
- Confluent Developer — Audit Logs with Log4j
- Strimzi — Securing Kafka
- NetApp Instaclustr — Multiple Kafka CVEs (June 2025)