Class: Rdkafka::Consumer
- Inherits: Object
- Includes: Enumerable
- Defined in:
  lib/rdkafka/consumer.rb,
  lib/rdkafka/consumer/headers.rb,
  lib/rdkafka/consumer/message.rb,
  lib/rdkafka/consumer/partition.rb,
  lib/rdkafka/consumer/topic_partition_list.rb
Overview

A consumer of Kafka messages. It uses the high-level consumer approach where the Kafka
brokers automatically assign partitions and load balance partitions over consumers that
have the same :"group.id" set in their configuration.

To create a consumer, set up a Config and call consumer on that. It is mandatory to set
:"group.id" in the configuration.

Consumer implements Enumerable, so you can use each to consume messages, or for example
each_slice to consume batches of messages.
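For example, a minimal consumer might look like the sketch below; the broker address, group id, and topic name are illustrative:

require "rdkafka"

config = Rdkafka::Config.new(
  :"bootstrap.servers" => "localhost:9092",  # illustrative broker address
  :"group.id"          => "example-group"    # mandatory for consumers
)
consumer = config.consumer

consumer.subscribe("example-topic")
consumer.each do |message|
  puts "#{message.topic}/#{message.partition} @ #{message.offset}: #{message.payload}"
end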
Defined Under Namespace
Classes: Headers, Message, Partition, TopicPartitionList
Instance Method Summary

- #assign(list) ⇒ Object
  Atomic assignment of partitions to consume.
- #assignment ⇒ TopicPartitionList
  Returns the current partition assignment.
- #close ⇒ nil
  Close this consumer.
- #closed_consumer_check(method) ⇒ Object
- #cluster_id ⇒ String?
  Returns the ClusterId as reported in broker metadata.
- #commit(list = nil, async = false) ⇒ nil
  Manually commit the current offsets of this consumer.
- #committed(list = nil, timeout_ms = 1200) ⇒ TopicPartitionList
  Return the current committed offset per partition for this consumer group.
- #each {|message| ... } ⇒ nil
  Poll for new messages and yield for each received one.
- #each_batch(max_items: 100, bytes_threshold: Float::INFINITY, timeout_ms: 250, yield_on_error: false) {|messages, pending_exception| ... } ⇒ nil
  Poll for new messages and yield them in batches that may contain messages from more than one partition.
- #lag(topic_partition_list, watermark_timeout_ms = 100) ⇒ Hash<String, Hash<Integer, Integer>>
  Calculate the consumer lag per partition for the provided topic partition list.
- #member_id ⇒ String?
  Returns this client's broker-assigned group member id.
- #pause(list) ⇒ nil
  Pause producing or consumption for the provided list of partitions.
- #poll(timeout_ms) ⇒ Message?
  Poll for the next message on one of the subscribed topics.
- #query_watermark_offsets(topic, partition, timeout_ms = 200) ⇒ Integer
  Query broker for low (oldest/beginning) and high (newest/end) offsets for a partition.
- #resume(list) ⇒ nil
  Resume producing or consumption for the provided list of partitions.
- #seek(message) ⇒ nil
  Seek to a particular message.
- #store_offset(message) ⇒ nil
  Store offset of a message to be used in the next commit of this consumer.
- #subscribe(*topics) ⇒ nil
  Subscribe to one or more topics letting Kafka handle partition assignments.
- #subscription ⇒ TopicPartitionList
  Return the current subscription to topics and partitions.
- #unsubscribe ⇒ nil
  Unsubscribe from all subscribed topics.
Instance Method Details
#assign(list) ⇒ Object
Atomic assignment of partitions to consume
# File 'lib/rdkafka/consumer.rb', line 154

def assign(list)
  closed_consumer_check(__method__)

  unless list.is_a?(TopicPartitionList)
    raise TypeError.new("list has to be a TopicPartitionList")
  end

  tpl = list.to_native_tpl

  begin
    response = Rdkafka::Bindings.rd_kafka_assign(@native_kafka, tpl)
    if response != 0
      raise Rdkafka::RdkafkaError.new(response, "Error assigning '#{list.to_h}'")
    end
  ensure
    Rdkafka::Bindings.rd_kafka_topic_partition_list_destroy(tpl)
  end
end
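For example, to bypass group-managed assignment and consume specific partitions directly (a minimal sketch; the consumer is assumed to be set up as in the overview, and the topic and partition numbers are illustrative):

list = Rdkafka::Consumer::TopicPartitionList.new
list.add_topic("example-topic", [0, 1])  # consume only partitions 0 and 1
consumer.assign(list)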
#assignment ⇒ TopicPartitionList
Returns the current partition assignment.
# File 'lib/rdkafka/consumer.rb', line 178

def assignment
  closed_consumer_check(__method__)

  ptr = FFI::MemoryPointer.new(:pointer)
  response = Rdkafka::Bindings.rd_kafka_assignment(@native_kafka, ptr)
  if response != 0
    raise Rdkafka::RdkafkaError.new(response)
  end

  tpl = ptr.read_pointer

  if !tpl.null?
    begin
      Rdkafka::Consumer::TopicPartitionList.from_native_tpl(tpl)
    ensure
      Rdkafka::Bindings.rd_kafka_topic_partition_list_destroy tpl
    end
  end
ensure
  ptr.free unless ptr.nil?
end
#close ⇒ nil
Close this consumer
# File 'lib/rdkafka/consumer.rb', line 22

def close
  return unless @native_kafka

  @closing = true
  Rdkafka::Bindings.rd_kafka_consumer_close(@native_kafka)
  Rdkafka::Bindings.rd_kafka_destroy(@native_kafka)
  @native_kafka = nil
end
#closed_consumer_check(method) ⇒ Object
# File 'lib/rdkafka/consumer.rb', line 471

def closed_consumer_check(method)
  raise Rdkafka::ClosedConsumerError.new(method) if @native_kafka.nil?
end
#cluster_id ⇒ String?
Returns the ClusterId as reported in broker metadata.
# File 'lib/rdkafka/consumer.rb', line 299

def cluster_id
  closed_consumer_check(__method__)
  Rdkafka::Bindings.rd_kafka_clusterid(@native_kafka)
end
#commit(list = nil, async = false) ⇒ nil
Manually commit the current offsets of this consumer.

To use this, set enable.auto.commit to false to disable automatic triggering of commits.

If enable.auto.offset.store is set to true, the offset of the last consumed message for
every partition is used. If set to false, you can use #store_offset to indicate when a
message has been fully processed.
# File 'lib/rdkafka/consumer.rb', line 395

def commit(list=nil, async=false)
  closed_consumer_check(__method__)

  if !list.nil? && !list.is_a?(TopicPartitionList)
    raise TypeError.new("list has to be nil or a TopicPartitionList")
  end

  tpl = list ? list.to_native_tpl : nil

  begin
    response = Rdkafka::Bindings.rd_kafka_commit(@native_kafka, tpl, async)
    if response != 0
      raise Rdkafka::RdkafkaError.new(response)
    end
  ensure
    Rdkafka::Bindings.rd_kafka_topic_partition_list_destroy(tpl) if tpl
  end
end
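A minimal sketch of a manual commit flow, assuming enable.auto.commit is set to false in the config and process is a hypothetical application handler:

consumer.each do |message|
  process(message)
  consumer.commit              # synchronous commit of the current offsets
  # consumer.commit(nil, true) # would commit asynchronously instead
end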
#committed(list = nil, timeout_ms = 1200) ⇒ TopicPartitionList
Return the current committed offset per partition for this consumer group. The offset field of each requested partition will either be set to stored offset or to -1001 in case there was no stored offset for that partition.
# File 'lib/rdkafka/consumer.rb', line 209

def committed(list=nil, timeout_ms=1200)
  closed_consumer_check(__method__)

  if list.nil?
    list = assignment
  elsif !list.is_a?(TopicPartitionList)
    raise TypeError.new("list has to be nil or a TopicPartitionList")
  end

  tpl = list.to_native_tpl

  begin
    response = Rdkafka::Bindings.rd_kafka_committed(@native_kafka, tpl, timeout_ms)
    if response != 0
      raise Rdkafka::RdkafkaError.new(response)
    end
    TopicPartitionList.from_native_tpl(tpl)
  ensure
    Rdkafka::Bindings.rd_kafka_topic_partition_list_destroy(tpl)
  end
end
#each {|message| ... } ⇒ nil
Poll for new messages and yield for each received one. Iteration will end when the
consumer is closed.

If enable.partition.eof is turned on in the config this will raise an error when an eof
is reached, so you probably want to disable that when using this method of iteration.
# File 'lib/rdkafka/consumer.rb', line 456

def each
  loop do
    message = poll(250)
    if message
      yield(message)
    else
      if @closing
        break
      else
        next
      end
    end
  end
end
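A minimal sketch, with enable.partition.eof disabled as suggested above (broker address, group id, and topic name are illustrative):

config = Rdkafka::Config.new(
  :"bootstrap.servers"    => "localhost:9092",
  :"group.id"             => "example-group",
  :"enable.partition.eof" => false  # avoid EOF errors while iterating
)
consumer = config.consumer
consumer.subscribe("example-topic")
consumer.each { |message| puts message.payload }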
#each_batch(max_items: 100, bytes_threshold: Float::INFINITY, timeout_ms: 250, yield_on_error: false) {|messages, pending_exception| ... } ⇒ nil
Poll for new messages and yield them in batches that may contain messages from more than one partition.
Rather than yield each message immediately as soon as it is received, each_batch will
attempt to wait for as long as timeout_ms in order to create a batch of up to but no more
than max_items in size.

Said differently, if more than max_items are available within timeout_ms, then each_batch
will yield early with max_items in the array, but if timeout_ms passes by with fewer
messages arriving, it will yield an array of fewer messages, quite possibly zero.

In order to prevent wrongly auto committing many messages at once across possibly many
partitions, callers must explicitly indicate which messages have been successfully
processed as some consumed messages may not have been yielded yet. To do this, the caller
should set enable.auto.offset.store to false and pass processed messages to #store_offset.
It is also possible, though more complex, to set enable.auto.commit to false and then pass
a manually assembled TopicPartitionList to #commit.

As with each, iteration will end when the consumer is closed.

Exception behavior is more complicated than with each, in that if :yield_on_error is true,
and an exception is raised during the poll, and messages have already been received, they
will be yielded to the caller before the exception is allowed to propagate.

If you are setting either auto.commit or auto.offset.store to false in the consumer
configuration, then you should let yield_on_error keep its default value of false because
you are guaranteed to see these messages again. However, if both auto.commit and
auto.offset.store are set to true, you should set yield_on_error to true so you can
process messages that you may or may not see again.

The second value yielded to the block (pending_exception) is normally nil, or an exception
which will be propagated after processing of the partial batch is complete.
# File 'lib/rdkafka/consumer.rb', line 524

def each_batch(max_items: 100, bytes_threshold: Float::INFINITY, timeout_ms: 250, yield_on_error: false, &block)
  closed_consumer_check(__method__)
  slice = []
  bytes = 0
  end_time = monotonic_now + timeout_ms / 1000.0
  loop do
    break if @closing
    max_wait = end_time - monotonic_now
    max_wait_ms = if max_wait <= 0
                    0  # should not block, but may retrieve a message
                  else
                    (max_wait * 1000).floor
                  end
    message = nil
    begin
      message = poll max_wait_ms
    rescue Rdkafka::RdkafkaError => error
      raise unless yield_on_error
      raise if slice.empty?
      yield slice.dup, error
      raise
    end
    if message
      slice << message
      bytes += message.payload.bytesize
    end
    if slice.size == max_items || bytes >= bytes_threshold || monotonic_now >= end_time - 0.001
      yield slice.dup, nil
      slice.clear
      bytes = 0
      end_time = monotonic_now + timeout_ms / 1000.0
    end
  end
end
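A minimal sketch of batch consumption with manual offset storage, assuming enable.auto.offset.store is set to false in the config and persist_batch is a hypothetical bulk handler:

consumer.subscribe("example-topic")
consumer.each_batch(max_items: 500, timeout_ms: 1_000) do |messages, pending_exception|
  # pending_exception is nil unless yield_on_error is true and a poll error occurred;
  # when non-nil it is raised by each_batch after this block returns.
  persist_batch(messages)
  messages.each { |message| consumer.store_offset(message) }
end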
#lag(topic_partition_list, watermark_timeout_ms = 100) ⇒ Hash<String, Hash<Integer, Integer>>
Calculate the consumer lag per partition for the provided topic partition list. You can get a suitable list by calling #committed or position (TODO). It is also possible to create one yourself, in this case you have to provide a list that already contains all the partitions you need the lag for.
# File 'lib/rdkafka/consumer.rb', line 275

def lag(topic_partition_list, watermark_timeout_ms=100)
  out = {}

  topic_partition_list.to_h.each do |topic, partitions|
    # Query high watermarks for this topic's partitions
    # and compare to the offset in the list.
    topic_out = {}
    partitions.each do |p|
      next if p.offset.nil?
      low, high = query_watermark_offsets(
        topic,
        p.partition,
        watermark_timeout_ms
      )
      topic_out[p.partition] = high - p.offset
    end
    out[topic] = topic_out
  end
  out
end
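For instance, the lag of this consumer group on its current assignment can be inspected roughly like this (a sketch; the topic name in the comment is illustrative):

committed_offsets = consumer.committed  # TopicPartitionList for the current assignment
lag = consumer.lag(committed_offsets)   # e.g. { "example-topic" => { 0 => 3, 1 => 0 } }
lag.each do |topic, partitions|
  partitions.each do |partition, behind|
    puts "#{topic}/#{partition}: #{behind} messages behind"
  end
end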
#member_id ⇒ String?
Returns this client's broker-assigned group member id
This currently requires the high-level KafkaConsumer
# File 'lib/rdkafka/consumer.rb', line 309

def member_id
  closed_consumer_check(__method__)
  Rdkafka::Bindings.rd_kafka_memberid(@native_kafka)
end
#pause(list) ⇒ nil
Pause producing or consumption for the provided list of partitions
# File 'lib/rdkafka/consumer.rb', line 78

def pause(list)
  closed_consumer_check(__method__)

  unless list.is_a?(TopicPartitionList)
    raise TypeError.new("list has to be a TopicPartitionList")
  end

  tpl = list.to_native_tpl

  begin
    response = Rdkafka::Bindings.rd_kafka_pause_partitions(@native_kafka, tpl)

    if response != 0
      list = TopicPartitionList.from_native_tpl(tpl)
      raise Rdkafka::RdkafkaTopicPartitionListError.new(response, list, "Error pausing '#{list.to_h}'")
    end
  ensure
    Rdkafka::Bindings.rd_kafka_topic_partition_list_destroy(tpl)
  end
end
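A minimal sketch of temporarily pausing everything that is currently assigned and resuming it later with #resume:

assigned = consumer.assignment  # TopicPartitionList of currently assigned partitions
consumer.pause(assigned)        # stop fetching from these partitions
# ... perform slow maintenance work ...
consumer.resume(assigned)       # continue fetching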
#poll(timeout_ms) ⇒ Message?
Poll for the next message on one of the subscribed topics
# File 'lib/rdkafka/consumer.rb', line 421

def poll(timeout_ms)
  closed_consumer_check(__method__)

  message_ptr = Rdkafka::Bindings.rd_kafka_consumer_poll(@native_kafka, timeout_ms)
  if message_ptr.null?
    nil
  else
    # Create struct wrapper
    native_message = Rdkafka::Bindings::Message.new(message_ptr)
    # Raise error if needed
    if native_message[:err] != 0
      raise Rdkafka::RdkafkaError.new(native_message[:err])
    end
    # Create a message to pass out
    Rdkafka::Consumer::Message.new(native_message)
  end
ensure
  # Clean up rdkafka message if there is one
  if !message_ptr.nil? && !message_ptr.null?
    Rdkafka::Bindings.rd_kafka_message_destroy(message_ptr)
  end
end
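A minimal sketch of a manual poll loop, as an alternative to #each:

5.times do
  message = consumer.poll(250)  # returns nil if no message arrived within 250 ms
  next if message.nil?
  puts "#{message.topic}/#{message.partition} @ #{message.offset}"
end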
#query_watermark_offsets(topic, partition, timeout_ms = 200) ⇒ Integer
Query broker for low (oldest/beginning) and high (newest/end) offsets for a partition.
# File 'lib/rdkafka/consumer.rb', line 240

def query_watermark_offsets(topic, partition, timeout_ms=200)
  closed_consumer_check(__method__)

  low = FFI::MemoryPointer.new(:int64, 1)
  high = FFI::MemoryPointer.new(:int64, 1)

  response = Rdkafka::Bindings.rd_kafka_query_watermark_offsets(
    @native_kafka,
    topic,
    partition,
    low,
    high,
    timeout_ms,
  )
  if response != 0
    raise Rdkafka::RdkafkaError.new(response, "Error querying watermark offsets for partition #{partition} of #{topic}")
  end

  return low.read_array_of_int64(1).first, high.read_array_of_int64(1).first
ensure
  low.free unless low.nil?
  high.free unless high.nil?
end
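The low and high watermarks are returned as a pair, so the number of messages currently held in a partition can be estimated like this (topic name and partition are illustrative):

low, high = consumer.query_watermark_offsets("example-topic", 0)
puts "partition 0 holds roughly #{high - low} messages"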
#resume(list) ⇒ nil
Resume producing or consumption for the provided list of partitions
# File 'lib/rdkafka/consumer.rb', line 106

def resume(list)
  closed_consumer_check(__method__)

  unless list.is_a?(TopicPartitionList)
    raise TypeError.new("list has to be a TopicPartitionList")
  end

  tpl = list.to_native_tpl

  begin
    response = Rdkafka::Bindings.rd_kafka_resume_partitions(@native_kafka, tpl)

    if response != 0
      raise Rdkafka::RdkafkaError.new(response, "Error resume '#{list.to_h}'")
    end
  ensure
    Rdkafka::Bindings.rd_kafka_topic_partition_list_destroy(tpl)
  end
end
#seek(message) ⇒ nil
Seek to a particular message. The next poll on the topic/partition will return the message at the given offset.
# File 'lib/rdkafka/consumer.rb', line 355

def seek(message)
  closed_consumer_check(__method__)

  # rd_kafka_seek is one of the few calls that does not support
  # a string as the topic, so create a native topic for it.
  native_topic = Rdkafka::Bindings.rd_kafka_topic_new(
    @native_kafka,
    message.topic,
    nil
  )
  response = Rdkafka::Bindings.rd_kafka_seek(
    native_topic,
    message.partition,
    message.offset,
    0 # timeout
  )
  if response != 0
    raise Rdkafka::RdkafkaError.new(response)
  end
ensure
  if native_topic && !native_topic.null?
    Rdkafka::Bindings.rd_kafka_topic_destroy(native_topic)
  end
end
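For example, to retry a message whose processing failed, a sketch might seek back to it so the next poll returns it again (process is a hypothetical handler returning false on failure):

message = consumer.poll(250)
if message && !process(message)
  consumer.seek(message)  # the next poll on this topic/partition returns this message again
end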
#store_offset(message) ⇒ nil
Store offset of a message to be used in the next commit of this consumer.

When using this, enable.auto.offset.store should be set to false in the config.
# File 'lib/rdkafka/consumer.rb', line 323

def store_offset(message)
  closed_consumer_check(__method__)

  # rd_kafka_offset_store is one of the few calls that does not support
  # a string as the topic, so create a native topic for it.
  native_topic = Rdkafka::Bindings.rd_kafka_topic_new(
    @native_kafka,
    message.topic,
    nil
  )
  response = Rdkafka::Bindings.rd_kafka_offset_store(
    native_topic,
    message.partition,
    message.offset
  )
  if response != 0
    raise Rdkafka::RdkafkaError.new(response)
  end
ensure
  if native_topic && !native_topic.null?
    Rdkafka::Bindings.rd_kafka_topic_destroy(native_topic)
  end
end
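A minimal sketch combining #store_offset with automatic commits, assuming enable.auto.offset.store is false and enable.auto.commit is left at its default of true (process is a hypothetical handler):

consumer.each do |message|
  process(message)
  consumer.store_offset(message)  # mark as processed; the next auto commit picks it up
end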
#subscribe(*topics) ⇒ nil
Subscribe to one or more topics letting Kafka handle partition assignments.
# File 'lib/rdkafka/consumer.rb', line 38

def subscribe(*topics)
  closed_consumer_check(__method__)

  # Create topic partition list with topics and no partition set
  tpl = Rdkafka::Bindings.rd_kafka_topic_partition_list_new(topics.length)

  topics.each do |topic|
    Rdkafka::Bindings.rd_kafka_topic_partition_list_add(tpl, topic, -1)
  end

  # Subscribe to topic partition list and check this was successful
  response = Rdkafka::Bindings.rd_kafka_subscribe(@native_kafka, tpl)
  if response != 0
    raise Rdkafka::RdkafkaError.new(response, "Error subscribing to '#{topics.join(', ')}'")
  end
ensure
  Rdkafka::Bindings.rd_kafka_topic_partition_list_destroy(tpl) unless tpl.nil?
end
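For example, subscribing to several topics at once and checking the resulting subscription (topic names are illustrative):

consumer.subscribe("orders", "payments")
consumer.subscription.to_h.keys  # => ["orders", "payments"]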
#subscription ⇒ TopicPartitionList
Return the current subscription to topics and partitions
# File 'lib/rdkafka/consumer.rb', line 130

def subscription
  closed_consumer_check(__method__)

  ptr = FFI::MemoryPointer.new(:pointer)
  response = Rdkafka::Bindings.rd_kafka_subscription(@native_kafka, ptr)

  if response != 0
    raise Rdkafka::RdkafkaError.new(response)
  end

  native = ptr.read_pointer

  begin
    Rdkafka::Consumer::TopicPartitionList.from_native_tpl(native)
  ensure
    Rdkafka::Bindings.rd_kafka_topic_partition_list_destroy(native)
  end
end
#unsubscribe ⇒ nil
Unsubscribe from all subscribed topics.
# File 'lib/rdkafka/consumer.rb', line 62

def unsubscribe
  closed_consumer_check(__method__)

  response = Rdkafka::Bindings.rd_kafka_unsubscribe(@native_kafka)
  if response != 0
    raise Rdkafka::RdkafkaError.new(response)
  end
end