The Kafka Listener consumes messages from one or more Kafka topics. Topics are the main way in which Kafka organizes data. Each topic is divided into separate partitions, each of which contains part of the messages in the topic. Kafka's major strengths are fault tolerance and throughput: topic partitions are replicated across multiple servers (called Kafka brokers) for fault tolerance, and spread across those brokers so that reads and writes can be parallelized for throughput.
Note that the term "consume" is used loosely here: the listener never actually removes messages from a Kafka cluster. A Kafka cluster can dispose of messages through its log compaction and retention settings, but this is never triggered by a listener. Instead, the listener keeps track of an offset per topic partition. "Consuming a message" then means raising that offset by one. The current offsets for this and all other listeners are tracked within the Kafka cluster's "__consumer_offsets" topic.
Kafka listeners can be organized in groups, which are collectively responsible for consuming one or more topics exactly once. Each group is identified by a Group ID. Since Kafka Listeners do not actually remove messages from topics, listeners with different Group IDs can each consume the same topic independently, with each group maintaining its own offsets.
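The offset bookkeeping described above can be illustrated with a small model. This is a conceptual sketch only, not the ConnectPlaza or Kafka client API: messages stay in the log, and each group merely advances its own offset, much like the entries Kafka stores in its internal "__consumer_offsets" topic.

```python
# Conceptual model of per-group offset tracking (illustration only;
# not the real Kafka client API).
class TopicPartitionLog:
    def __init__(self):
        self.messages = []   # messages are never removed by consumers
        self.offsets = {}    # group_id -> next offset to read

    def append(self, value):
        self.messages.append(value)

    def consume(self, group_id):
        """Return the next message for this group, or None if caught up."""
        offset = self.offsets.get(group_id, 0)
        if offset >= len(self.messages):
            return None
        self.offsets[group_id] = offset + 1  # "consuming" = raising the offset
        return self.messages[offset]


log = TopicPartitionLog()
log.append("order-1")
log.append("order-2")

# Two groups consume the same topic independently.
first = log.consume("group-a")   # "order-1"
second = log.consume("group-b")  # also "order-1": group-b has its own offset
```

Note that the log itself is untouched by either group; only the per-group offsets change, which is why multiple groups can read the same topic in full.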
A Kafka message consists of a value, an optional key, a timestamp, and a set of headers. The most important of these is the value, which carries the payload. Kafka stores the payload (as well as the message key and header values) as binary data, making no assumptions about its structure.
Configuring a Kafka Listener requires a list of bootstrap servers. These are the addresses of Kafka brokers in the cluster which the Kafka Listener will contact during startup.
The Kafka Listener can be customized further through custom properties. See the consumer configuration section of the official Kafka documentation for more information: https://kafka.apache.org/documentation/#consumerapi. Note that custom properties should be used with care, because they can override the listener's default behavior.
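As an illustration, a custom properties file might tune standard Kafka consumer settings such as the following. The property names come from the Kafka consumer configuration linked above; the values shown are examples, not recommendations:

```
# Example custom Kafka consumer properties (values are illustrative).
fetch.min.bytes=1024
max.poll.interval.ms=300000
auto.offset.reset=earliest
```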
Limitations of ConnectPlaza's Kafka Listener:
- The Kafka Listener can only deal with textual data, although specifying an encoding is supported.
- The Kafka Listener uses an auto commit strategy for offsets. We currently do not support more advanced commit strategies.
- There is no support for transport security, authentication or authorization.
In the table below, you will find an explanation of these properties.
All attributes with a ‘*’ are mandatory.
|By default, we fill this out with the technical tag, followed by a serial number. Changing the name is optional.
|Set this value to true, if you want this listener to be enabled.
|The listener will be started at startup of the interface.
|Name of the MessagePart in the outgoing ConnectMessage.
|Topic Specification Type
|The manner in which to define the set of topics this listener will subscribe to.
|Only available if Topic Specification Type is set to LIST. A comma separated list of Kafka topics to listen to.
|Only available if Topic Specification Type is set to PATTERN. A Java regex specifying the topics to listen to.
|The Kafka listener group id.
|Kafka Bootstrap Servers*
|A comma-separated list of Kafka brokers.
|The polling rate in milliseconds.
|The source encoding used when reading the value, key and headers of a Kafka message.
|A comma-separated list of headers that are mapped from the Kafka message to the ConnectMessage.
|Auto Offset Reset
|The offset to use if there exists no initial offset for this listener group and a specific topic: EARLIEST, reset to the earliest offset; LATEST, reset to the latest offset; NONE, throw exception.
|Allow Multi Fetch
|Allow the listener to fetch multiple records at once.
|Max Poll Records
|Only available if Allow Multi Fetch is set to true. The maximum number of records returned in a single poll.
|Kafka Listener Properties Filename
|The name of the Java properties file containing additional config for the Kafka Listener.
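When Topic Specification Type is set to PATTERN, the listener subscribes to every topic whose name matches the given Java regex in full. As a rough illustration of how such a pattern selects topics (shown here in Python, whose regex syntax agrees with Java's for this simple pattern; the topic names are made up):

```python
import re

# Hypothetical topic names; the pattern "orders\..*" selects every
# topic whose name starts with "orders.". Full-match semantics are
# used, mirroring how a Java regex is matched against topic names.
pattern = re.compile(r"orders\..*")
topics = ["orders.eu", "orders.us", "invoices.eu"]

subscribed = [t for t in topics if pattern.fullmatch(t)]
# subscribed == ["orders.eu", "orders.us"]
```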