Class: Kafka::Client

Inherits: Object
Defined in: lib/kafka/client.rb
Instance Method Summary
- #async_producer(delivery_interval: 0, delivery_threshold: 0, max_queue_size: 1000, **options) ⇒ AsyncProducer
  Creates a new AsyncProducer instance.
- #close ⇒ nil
  Closes all connections to the Kafka brokers and frees up used resources.
- #consumer(group_id:, session_timeout: 30, offset_commit_interval: 10, offset_commit_threshold: 0, heartbeat_interval: 10) ⇒ Consumer
  Creates a new Kafka consumer.
- #deliver_message(value, key: nil, topic:, partition: nil, partition_key: nil) ⇒ Object
  Delivers a single message to the Kafka cluster.
- #each_message(topic:, offset: :earliest, max_wait_time: 5, min_bytes: 1, max_bytes: 1048576, &block) ⇒ Object
  EXPERIMENTAL: Enumerates all messages in a topic.
- #fetch_messages(topic:, partition:, offset: :latest, max_wait_time: 5, min_bytes: 1, max_bytes: 1048576) ⇒ Array<Kafka::FetchedMessage>
  Fetches a batch of messages from a single partition.
- #initialize(seed_brokers:, client_id: "ruby-kafka", logger: nil, connect_timeout: nil, socket_timeout: nil, ssl_ca_cert: nil, ssl_client_cert: nil, ssl_client_cert_key: nil) ⇒ Client (constructor)
  Initializes a new Kafka client.
- #partitions_for(topic) ⇒ Integer
  Counts the number of partitions in a topic.
- #producer(compression_codec: nil, compression_threshold: 1, ack_timeout: 5, required_acks: :all, max_retries: 2, retry_backoff: 1, max_buffer_size: 1000, max_buffer_bytesize: 10_000_000) ⇒ Kafka::Producer
  Initializes a new Kafka producer.
- #topics ⇒ Array<String>
  Lists all topics in the cluster.
Constructor Details
#initialize(seed_brokers:, client_id: "ruby-kafka", logger: nil, connect_timeout: nil, socket_timeout: nil, ssl_ca_cert: nil, ssl_client_cert: nil, ssl_client_cert_key: nil) ⇒ Client
Initializes a new Kafka client.
# File 'lib/kafka/client.rb', line 43

def initialize(seed_brokers:, client_id: "ruby-kafka", logger: nil,
               connect_timeout: nil, socket_timeout: nil,
               ssl_ca_cert: nil, ssl_client_cert: nil, ssl_client_cert_key: nil)
  @logger = logger || Logger.new(nil)
  @instrumenter = Instrumenter.new(client_id: client_id)
  @seed_brokers = normalize_seed_brokers(seed_brokers)

  ssl_context = build_ssl_context(ssl_ca_cert, ssl_client_cert, ssl_client_cert_key)

  @connection_builder = ConnectionBuilder.new(
    client_id: client_id,
    connect_timeout: connect_timeout,
    socket_timeout: socket_timeout,
    ssl_context: ssl_context,
    logger: @logger,
    instrumenter: @instrumenter,
  )

  @cluster = initialize_cluster
end
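Applications typically obtain a client through the top-level Kafka.new helper, which forwards its arguments to this constructor. A minimal sketch, with placeholder broker addresses:

require "kafka"
require "logger"

# Point the client at one or more seed brokers; the rest of the cluster
# is discovered from their metadata.
kafka = Kafka.new(
  seed_brokers: ["kafka1.example.com:9092", "kafka2.example.com:9092"],
  client_id: "my-app",
  logger: Logger.new($stderr),
)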
Instance Method Details
#async_producer(delivery_interval: 0, delivery_threshold: 0, max_queue_size: 1000, **options) ⇒ AsyncProducer
Creates a new AsyncProducer instance.
All parameters allowed by #producer can be passed here as well. In addition, a few parameters are specific to the async producer.
# File 'lib/kafka/client.rb', line 178

def async_producer(delivery_interval: 0, delivery_threshold: 0, max_queue_size: 1000, **options)
  sync_producer = producer(**options)

  AsyncProducer.new(
    sync_producer: sync_producer,
    delivery_interval: delivery_interval,
    delivery_threshold: delivery_threshold,
    max_queue_size: max_queue_size,
    instrumenter: @instrumenter,
  )
end
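A usage sketch, assuming the AsyncProducer API documented elsewhere in ruby-kafka (#produce, #shutdown); the topic name is a placeholder:

# Flush buffered messages every 15 seconds or once 100 messages have
# accumulated, whichever happens first.
producer = kafka.async_producer(
  delivery_interval: 15,
  delivery_threshold: 100,
)

# Returns immediately; delivery happens in a background thread.
producer.produce("hello", topic: "greetings")

# Flush pending messages and stop the background thread before exiting.
producer.shutdown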
#close ⇒ nil
Closes all connections to the Kafka brokers and frees up used resources.
# File 'lib/kafka/client.rb', line 358

def close
  @cluster.disconnect
end
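A small sketch of the usual pattern: close the client in an ensure block so broker connections are released even when an error is raised.

kafka = Kafka.new(seed_brokers: ["kafka1.example.com:9092"])

begin
  puts kafka.topics
ensure
  kafka.close
end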
#consumer(group_id:, session_timeout: 30, offset_commit_interval: 10, offset_commit_threshold: 0, heartbeat_interval: 10) ⇒ Consumer
Creates a new Kafka consumer.
# File 'lib/kafka/client.rb', line 203

def consumer(group_id:, session_timeout: 30, offset_commit_interval: 10, offset_commit_threshold: 0, heartbeat_interval: 10)
  cluster = initialize_cluster

  instrumenter = DecoratingInstrumenter.new(@instrumenter, {
    group_id: group_id,
  })

  group = ConsumerGroup.new(
    cluster: cluster,
    logger: @logger,
    group_id: group_id,
    session_timeout: session_timeout,
  )

  offset_manager = OffsetManager.new(
    group: group,
    logger: @logger,
    commit_interval: offset_commit_interval,
    commit_threshold: offset_commit_threshold,
  )

  heartbeat = Heartbeat.new(
    group: group,
    interval: heartbeat_interval,
  )

  Consumer.new(
    cluster: cluster,
    logger: @logger,
    instrumenter: instrumenter,
    group: group,
    offset_manager: offset_manager,
    session_timeout: session_timeout,
    heartbeat: heartbeat,
  )
end
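A usage sketch, assuming the Consumer API documented elsewhere in ruby-kafka (#subscribe and #each_message); group and topic names are placeholders:

consumer = kafka.consumer(group_id: "my-group")

# Join the consumer group and receive a share of the topic's partitions.
consumer.subscribe("page-views")

# Blocks indefinitely, yielding messages as they arrive; offsets are
# committed automatically at the configured interval and threshold.
consumer.each_message do |message|
  puts message.topic, message.partition, message.offset, message.value
end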
#deliver_message(value, key: nil, topic:, partition: nil, partition_key: nil) ⇒ Object
Delivers a single message to the Kafka cluster.
# File 'lib/kafka/client.rb', line 62

def deliver_message(value, key: nil, topic:, partition: nil, partition_key: nil)
  create_time = Time.now

  message = PendingMessage.new(
    value,
    key,
    topic,
    partition,
    partition_key,
    create_time,
    key.to_s.bytesize + value.to_s.bytesize
  )

  if partition.nil?
    partition_count = @cluster.partitions_for(topic).count
    partition = Partitioner.partition_for_key(partition_count, message)
  end

  buffer = MessageBuffer.new

  buffer.write(
    value: message.value,
    key: message.key,
    topic: message.topic,
    partition: partition,
    create_time: message.create_time,
  )

  @cluster.add_target_topics([topic])

  compressor = Compressor.new(
    instrumenter: @instrumenter,
  )

  operation = ProduceOperation.new(
    cluster: @cluster,
    buffer: buffer,
    required_acks: 1,
    ack_timeout: 10,
    compressor: compressor,
    logger: @logger,
    instrumenter: @instrumenter,
  )

  operation.execute

  unless buffer.empty?
    raise DeliveryFailed
  end
end
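A minimal sketch; the topic name and key are placeholders. Note from the code above that delivery is synchronous, uses required_acks: 1, and raises DeliveryFailed when the message cannot be written:

# Write a single message and wait for the partition leader to acknowledge it.
kafka.deliver_message("hello, world", key: "greeting-1", topic: "greetings")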
#each_message(topic:, offset: :earliest, max_wait_time: 5, min_bytes: 1, max_bytes: 1048576, &block) ⇒ Object
EXPERIMENTAL: Enumerates all messages in a topic.
# File 'lib/kafka/client.rb', line 315

def each_message(topic:, offset: :earliest, max_wait_time: 5, min_bytes: 1, max_bytes: 1048576, &block)
  offsets = Hash.new { offset }

  loop do
    operation = FetchOperation.new(
      cluster: @cluster,
      logger: @logger,
      min_bytes: min_bytes,
      max_wait_time: max_wait_time,
    )

    @cluster.partitions_for(topic).map(&:partition_id).each do |partition|
      partition_offset = offsets[partition]
      operation.fetch_from_partition(topic, partition, offset: partition_offset, max_bytes: max_bytes)
    end

    batches = operation.execute

    batches.each do |batch|
      batch.messages.each(&block)
      offsets[batch.partition] = batch.last_offset + 1
    end
  end
end
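A minimal sketch with a placeholder topic name. As the code above shows, the loop never terminates, and per-partition offsets live only in a local Hash, so a restarted process begins again at the configured starting offset:

kafka.each_message(topic: "page-views") do |message|
  puts message.offset, message.key, message.value
end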
#fetch_messages(topic:, partition:, offset: :latest, max_wait_time: 5, min_bytes: 1, max_bytes: 1048576) ⇒ Array<Kafka::FetchedMessage>
This API is still alpha level. Don't try to use it in production.
Fetches a batch of messages from a single partition. Note that it's possible to get back empty batches.
The starting point for the fetch can be configured with the :offset argument. If you pass a number, the fetch will start at that offset. However, there are two special Symbol values that can be passed instead:

- :earliest — the first offset in the partition.
- :latest — the next offset that will be written to, effectively making the call block until there is a new message in the partition.

The Kafka protocol specifies the numeric values of these two options: -2 and -1, respectively. You can also pass in these numbers directly.
Example

When enumerating the messages in a partition, you typically fetch batches sequentially.

offset = :earliest

loop do
  messages = kafka.fetch_messages(
    topic: "my-topic",
    partition: 42,
    offset: offset,
  )

  messages.each do |message|
    puts message.offset, message.key, message.value

    # Set the next offset that should be read to be the subsequent
    # offset.
    offset = message.offset + 1
  end
end

See a working example in examples/simple-consumer.rb.
# File 'lib/kafka/client.rb', line 301

def fetch_messages(topic:, partition:, offset: :latest, max_wait_time: 5, min_bytes: 1, max_bytes: 1048576)
  operation = FetchOperation.new(
    cluster: @cluster,
    logger: @logger,
    min_bytes: min_bytes,
    max_wait_time: max_wait_time,
  )

  operation.fetch_from_partition(topic, partition, offset: offset, max_bytes: max_bytes)

  operation.execute.flat_map {|batch| batch.messages }
end
#partitions_for(topic) ⇒ Integer
Counts the number of partitions in a topic.
# File 'lib/kafka/client.rb', line 351

def partitions_for(topic)
  @cluster.partitions_for(topic).count
end
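For illustration, a sketch of using the partition count to route messages by key, similar in spirit to how ruby-kafka's own Partitioner hashes partition keys; the topic and key are placeholders:

require "zlib"

partition_count = kafka.partitions_for("page-views")

# Deterministically map a key to one of the topic's partitions.
partition = Zlib.crc32("user-42") % partition_count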
#producer(compression_codec: nil, compression_threshold: 1, ack_timeout: 5, required_acks: :all, max_retries: 2, retry_backoff: 1, max_buffer_size: 1000, max_buffer_bytesize: 10_000_000) ⇒ Kafka::Producer
Initializes a new Kafka producer.
# File 'lib/kafka/client.rb', line 143

def producer(compression_codec: nil, compression_threshold: 1, ack_timeout: 5, required_acks: :all, max_retries: 2, retry_backoff: 1, max_buffer_size: 1000, max_buffer_bytesize: 10_000_000)
  compressor = Compressor.new(
    codec_name: compression_codec,
    threshold: compression_threshold,
    instrumenter: @instrumenter,
  )

  Producer.new(
    cluster: initialize_cluster,
    logger: @logger,
    instrumenter: @instrumenter,
    compressor: compressor,
    ack_timeout: ack_timeout,
    required_acks: required_acks,
    max_retries: max_retries,
    retry_backoff: retry_backoff,
    max_buffer_size: max_buffer_size,
    max_buffer_bytesize: max_buffer_bytesize,
  )
end
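A usage sketch, assuming the Producer API documented elsewhere in ruby-kafka (#produce and #deliver_messages); the topic name is a placeholder, and :snappy compression requires the snappy gem:

producer = kafka.producer(
  compression_codec: :snappy,
  required_acks: :all,
)

# Messages are buffered locally first...
producer.produce("hello", topic: "greetings", partition_key: "greeting-1")

# ...and written to Kafka in batches when deliver_messages is called.
producer.deliver_messages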
#topics ⇒ Array<String>
Lists all topics in the cluster.
# File 'lib/kafka/client.rb', line 343

def topics
  @cluster.topics
end