Class: Kafka::Client
Inherits: Object
Defined in: lib/kafka/client.rb
Instance Method Summary

- #alter_topic(name, configs = {}) ⇒ nil
  Alter the configuration of a topic.
- #apis ⇒ Object
- #async_producer(delivery_interval: 0, delivery_threshold: 0, max_queue_size: 1000, **options) ⇒ AsyncProducer
  Creates a new AsyncProducer instance.
- #close ⇒ nil
  Closes all connections to the Kafka brokers and frees up used resources.
- #consumer(group_id:, session_timeout: 30, offset_commit_interval: 10, offset_commit_threshold: 0, heartbeat_interval: 10, offset_retention_time: nil) ⇒ Consumer
  Creates a new Kafka consumer.
- #create_partitions_for(name, num_partitions: 1, timeout: 30) ⇒ nil
  Create partitions for a topic.
- #create_topic(name, num_partitions: 1, replication_factor: 1, timeout: 30, config: {}) ⇒ nil
  Creates a topic in the cluster.
- #delete_topic(name, timeout: 30) ⇒ nil
  Delete a topic in the cluster.
- #deliver_message(value, key: nil, topic:, partition: nil, partition_key: nil, retries: 1) ⇒ nil
  Delivers a single message to the Kafka cluster.
- #describe_topic(name, configs = []) ⇒ Hash<String, String>
  Describe the configuration of a topic.
- #each_message(topic:, start_from_beginning: true, max_wait_time: 5, min_bytes: 1, max_bytes: 1048576, &block) ⇒ nil
  Enumerate all messages in a topic.
- #fetch_messages(topic:, partition:, offset: :latest, max_wait_time: 5, min_bytes: 1, max_bytes: 1048576, retries: 1) ⇒ Array<Kafka::FetchedMessage>
  Fetches a batch of messages from a single partition.
- #groups ⇒ Array<String>
  Lists all consumer groups in the cluster.
- #has_topic?(topic) ⇒ Boolean
- #initialize(seed_brokers:, client_id: "ruby-kafka", logger: nil, connect_timeout: nil, socket_timeout: nil, ssl_ca_cert_file_path: nil, ssl_ca_cert: nil, ssl_client_cert: nil, ssl_client_cert_key: nil, sasl_gssapi_principal: nil, sasl_gssapi_keytab: nil, sasl_plain_authzid: '', sasl_plain_username: nil, sasl_plain_password: nil, sasl_scram_username: nil, sasl_scram_password: nil, sasl_scram_mechanism: nil, ssl_ca_certs_from_system: false) ⇒ Client (constructor)
  Initializes a new Kafka client.
- #last_offset_for(topic, partition) ⇒ Integer
  Retrieve the offset of the last message in a partition.
- #last_offsets_for(*topics) ⇒ Hash<String, Hash<Integer, Integer>>
  Retrieve the offset of the last message in each partition of the specified topics.
- #partitions_for(topic) ⇒ Integer
  Counts the number of partitions in a topic.
- #producer(compression_codec: nil, compression_threshold: 1, ack_timeout: 5, required_acks: :all, max_retries: 2, retry_backoff: 1, max_buffer_size: 1000, max_buffer_bytesize: 10_000_000) ⇒ Kafka::Producer
  Initializes a new Kafka producer.
- #supports_api?(api_key, version = nil) ⇒ Boolean
  Checks whether the current cluster supports a specific API version.
- #topics ⇒ Array<String>
  Lists all topics in the cluster.
Constructor Details
#initialize(seed_brokers:, client_id: "ruby-kafka", logger: nil, connect_timeout: nil, socket_timeout: nil, ssl_ca_cert_file_path: nil, ssl_ca_cert: nil, ssl_client_cert: nil, ssl_client_cert_key: nil, sasl_gssapi_principal: nil, sasl_gssapi_keytab: nil, sasl_plain_authzid: '', sasl_plain_username: nil, sasl_plain_password: nil, sasl_scram_username: nil, sasl_scram_password: nil, sasl_scram_mechanism: nil, ssl_ca_certs_from_system: false) ⇒ Client
Initializes a new Kafka client.
# File 'lib/kafka/client.rb', line 58

def initialize(seed_brokers:, client_id: "ruby-kafka", logger: nil, connect_timeout: nil, socket_timeout: nil, ssl_ca_cert_file_path: nil, ssl_ca_cert: nil, ssl_client_cert: nil, ssl_client_cert_key: nil, sasl_gssapi_principal: nil, sasl_gssapi_keytab: nil, sasl_plain_authzid: '', sasl_plain_username: nil, sasl_plain_password: nil, sasl_scram_username: nil, sasl_scram_password: nil, sasl_scram_mechanism: nil, ssl_ca_certs_from_system: false)
  @logger = logger || Logger.new(nil)
  @instrumenter = Instrumenter.new(client_id: client_id)
  @seed_brokers = normalize_seed_brokers(seed_brokers)

  ssl_context = SslContext.build(
    ca_cert_file_path: ssl_ca_cert_file_path,
    ca_cert: ssl_ca_cert,
    client_cert: ssl_client_cert,
    client_cert_key: ssl_client_cert_key,
    ca_certs_from_system: ssl_ca_certs_from_system,
  )

  sasl_authenticator = SaslAuthenticator.new(
    sasl_gssapi_principal: sasl_gssapi_principal,
    sasl_gssapi_keytab: sasl_gssapi_keytab,
    sasl_plain_authzid: sasl_plain_authzid,
    sasl_plain_username: sasl_plain_username,
    sasl_plain_password: sasl_plain_password,
    sasl_scram_username: sasl_scram_username,
    sasl_scram_password: sasl_scram_password,
    sasl_scram_mechanism: sasl_scram_mechanism,
    logger: @logger
  )

  if sasl_authenticator.enabled? && ssl_context.nil?
    raise ArgumentError, "SASL authentication requires that SSL is configured"
  end

  @connection_builder = ConnectionBuilder.new(
    client_id: client_id,
    connect_timeout: connect_timeout,
    socket_timeout: socket_timeout,
    ssl_context: ssl_context,
    logger: @logger,
    instrumenter: @instrumenter,
    sasl_authenticator: sasl_authenticator
  )

  @cluster = initialize_cluster
end
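A minimal construction sketch (not part of the generated documentation; the broker addresses and client id are made up):

require "kafka"
require "logger"

kafka = Kafka::Client.new(
  seed_brokers: ["kafka1.example.com:9092", "kafka2.example.com:9092"],
  client_id: "my-application",
  logger: Logger.new($stderr),
)

The usage sketches below assume a kafka client built this way.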
Instance Method Details
#alter_topic(name, configs = {}) ⇒ nil
This is an alpha level API and is subject to change.
Alter the configuration of a topic.
Configuration keys must match Kafka's topic-level configs.
# File 'lib/kafka/client.rb', line 539

def alter_topic(name, configs = {})
  @cluster.alter_topic(name, configs)
end
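A usage sketch (assuming the kafka client from the constructor example; the config values are made up). Configuration entries are passed as a hash of topic-level config names to string values:

kafka.alter_topic("my-topic", "max.message.bytes" => "100000", "retention.ms" => "604800000")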
#apis ⇒ Object
# File 'lib/kafka/client.rb', line 624

def apis
  @cluster.apis
end
#async_producer(delivery_interval: 0, delivery_threshold: 0, max_queue_size: 1000, **options) ⇒ AsyncProducer
Creates a new AsyncProducer instance.
All parameters allowed by #producer can be passed. In addition to this, a few extra parameters can be passed when creating an async producer.
# File 'lib/kafka/client.rb', line 253

def async_producer(delivery_interval: 0, delivery_threshold: 0, max_queue_size: 1000, **options)
  sync_producer = producer(**options)

  AsyncProducer.new(
    sync_producer: sync_producer,
    delivery_interval: delivery_interval,
    delivery_threshold: delivery_threshold,
    max_queue_size: max_queue_size,
    instrumenter: @instrumenter,
    logger: @logger,
  )
end
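A usage sketch (assuming the kafka client from the constructor example): the async producer buffers messages and delivers them from a background thread, so #produce returns immediately.

producer = kafka.async_producer(
  delivery_interval: 10,    # deliver buffered messages every 10 seconds...
  delivery_threshold: 100,  # ...or whenever 100 messages have been buffered
)

producer.produce("hello", topic: "greetings")

# Flush any buffered messages and stop the background thread before exiting.
producer.shutdown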
#close ⇒ nil
Closes all connections to the Kafka brokers and frees up used resources.
# File 'lib/kafka/client.rb', line 631

def close
  @cluster.disconnect
end
#consumer(group_id:, session_timeout: 30, offset_commit_interval: 10, offset_commit_threshold: 0, heartbeat_interval: 10, offset_retention_time: nil) ⇒ Consumer
Creates a new Kafka consumer.
# File 'lib/kafka/client.rb', line 281

def consumer(group_id:, session_timeout: 30, offset_commit_interval: 10, offset_commit_threshold: 0, heartbeat_interval: 10, offset_retention_time: nil)
  cluster = initialize_cluster

  instrumenter = DecoratingInstrumenter.new(@instrumenter, {
    group_id: group_id,
  })

  # The Kafka protocol expects the retention time to be in ms.
  retention_time = (offset_retention_time && offset_retention_time * 1_000) || -1

  group = ConsumerGroup.new(
    cluster: cluster,
    logger: @logger,
    group_id: group_id,
    session_timeout: session_timeout,
    retention_time: retention_time,
    instrumenter: instrumenter,
  )

  fetcher = Fetcher.new(
    cluster: initialize_cluster,
    logger: @logger,
    instrumenter: instrumenter,
  )

  offset_manager = OffsetManager.new(
    cluster: cluster,
    group: group,
    fetcher: fetcher,
    logger: @logger,
    commit_interval: offset_commit_interval,
    commit_threshold: offset_commit_threshold,
    offset_retention_time: offset_retention_time
  )

  heartbeat = Heartbeat.new(
    group: group,
    interval: heartbeat_interval,
  )

  Consumer.new(
    cluster: cluster,
    logger: @logger,
    instrumenter: instrumenter,
    group: group,
    offset_manager: offset_manager,
    fetcher: fetcher,
    session_timeout: session_timeout,
    heartbeat: heartbeat,
  )
end
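A usage sketch (assuming the kafka client from the constructor example; the topic and group id are made up):

consumer = kafka.consumer(group_id: "greeting-printer")
consumer.subscribe("greetings")

consumer.each_message do |message|
  puts "#{message.topic}/#{message.partition} @ #{message.offset}: #{message.value}"
end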
#create_partitions_for(name, num_partitions: 1, timeout: 30) ⇒ nil
Create partitions for a topic.
The timeout argument specifies how long, in seconds, to wait for the topic partitions to be added.
# File 'lib/kafka/client.rb', line 551

def create_partitions_for(name, num_partitions: 1, timeout: 30)
  @cluster.create_partitions_for(name, num_partitions: num_partitions, timeout: timeout)
end
#create_topic(name, num_partitions: 1, replication_factor: 1, timeout: 30, config: {}) ⇒ nil
Creates a topic in the cluster.
# File 'lib/kafka/client.rb', line 486

def create_topic(name, num_partitions: 1, replication_factor: 1, timeout: 30, config: {})
  @cluster.create_topic(
    name,
    num_partitions: num_partitions,
    replication_factor: replication_factor,
    timeout: timeout,
    config: config,
  )
end
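A usage sketch (assuming the kafka client from the constructor example; the topic name and config are made up):

kafka.create_topic(
  "events",
  num_partitions: 3,
  replication_factor: 2,
  config: { "cleanup.policy" => "compact" },
)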
#delete_topic(name, timeout: 30) ⇒ nil
Delete a topic in the cluster.
# File 'lib/kafka/client.rb', line 502

def delete_topic(name, timeout: 30)
  @cluster.delete_topic(name, timeout: timeout)
end
#deliver_message(value, key: nil, topic:, partition: nil, partition_key: nil, retries: 1) ⇒ nil
Delivers a single message to the Kafka cluster.
Note: Only use this API for low-throughput scenarios. If you want to deliver many messages at a high rate, or if you want to configure the way messages are sent, use the #producer or #async_producer APIs instead.
# File 'lib/kafka/client.rb', line 121

def deliver_message(value, key: nil, topic:, partition: nil, partition_key: nil, retries: 1)
  create_time = Time.now

  message = PendingMessage.new(
    value,
    key,
    topic,
    partition,
    partition_key,
    create_time,
  )

  if partition.nil?
    partition_count = @cluster.partitions_for(topic).count
    partition = Partitioner.partition_for_key(partition_count, message)
  end

  buffer = MessageBuffer.new

  buffer.write(
    value: message.value,
    key: message.key,
    topic: message.topic,
    partition: partition,
    create_time: message.create_time,
  )

  @cluster.add_target_topics([topic])

  compressor = Compressor.new(
    instrumenter: @instrumenter,
  )

  operation = ProduceOperation.new(
    cluster: @cluster,
    buffer: buffer,
    required_acks: 1,
    ack_timeout: 10,
    compressor: compressor,
    logger: @logger,
    instrumenter: @instrumenter,
  )

  attempt = 1

  begin
    operation.execute

    unless buffer.empty?
      raise DeliveryFailed.new(nil, [message])
    end
  rescue Kafka::Error => e
    @cluster.mark_as_stale!

    if attempt >= (retries + 1)
      raise
    else
      attempt += 1
      @logger.warn "Error while delivering message, #{e.class}: #{e.message}; retrying after 1s..."
      sleep 1
      retry
    end
  end
end
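A usage sketch (assuming the kafka client from the constructor example and a made-up topic):

# Let the client pick a partition:
kafka.deliver_message("hello", topic: "greetings")

# Address a specific partition and retry a couple of times on transient errors:
kafka.deliver_message("hello again", topic: "greetings", partition: 0, retries: 2)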
#describe_topic(name, configs = []) ⇒ Hash<String, String>
This is an alpha level API and is subject to change.
Describe the configuration of a topic.
Retrieves the topic configuration from the Kafka brokers. Configuration names refer to Kafka's topic-level configs.
# File 'lib/kafka/client.rb', line 521

def describe_topic(name, configs = [])
  @cluster.describe_topic(name, configs)
end
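A usage sketch (assuming the kafka client from the constructor example; the config names are standard Kafka topic-level configs and the return value shown is made up):

configs = kafka.describe_topic("events", ["retention.ms", "cleanup.policy"])
configs["retention.ms"]  # => e.g. "604800000"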
#each_message(topic:, start_from_beginning: true, max_wait_time: 5, min_bytes: 1, max_bytes: 1048576, &block) ⇒ nil
Enumerate all messages in a topic.
# File 'lib/kafka/client.rb', line 440

def each_message(topic:, start_from_beginning: true, max_wait_time: 5, min_bytes: 1, max_bytes: 1048576, &block)
  default_offset ||= start_from_beginning ? :earliest : :latest
  offsets = Hash.new { default_offset }

  loop do
    operation = FetchOperation.new(
      cluster: @cluster,
      logger: @logger,
      min_bytes: min_bytes,
      max_wait_time: max_wait_time,
    )

    @cluster.partitions_for(topic).map(&:partition_id).each do |partition|
      partition_offset = offsets[partition]
      operation.fetch_from_partition(topic, partition, offset: partition_offset, max_bytes: max_bytes)
    end

    batches = operation.execute

    batches.each do |batch|
      batch.messages.each(&block)
      offsets[batch.partition] = batch.last_offset + 1
    end
  end
end
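A usage sketch (assuming the kafka client from the constructor example and a made-up topic); note that the call loops indefinitely:

kafka.each_message(topic: "events", start_from_beginning: false) do |message|
  puts message.offset, message.key, message.value
end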
#fetch_messages(topic:, partition:, offset: :latest, max_wait_time: 5, min_bytes: 1, max_bytes: 1048576, retries: 1) ⇒ Array<Kafka::FetchedMessage>
Fetches a batch of messages from a single partition. Note that it's possible to get back empty batches.
The starting point for the fetch can be configured with the :offset argument.
If you pass a number, the fetch will start at that offset. However, there are
two special Symbol values that can be passed instead:
:earliest — the first offset in the partition.
:latest — the next offset that will be written to, effectively making the call block until there is a new message in the partition.
The Kafka protocol specifies the numeric values of these two options: -2 and -1, respectively. You can also pass in these numbers directly.
Example
When enumerating the messages in a partition, you typically fetch batches sequentially.
offset = :earliest

loop do
  messages = kafka.fetch_messages(
    topic: "my-topic",
    partition: 42,
    offset: offset,
  )

  messages.each do |message|
    puts message.offset, message.key, message.value

    # Set the next offset that should be read to be the subsequent
    # offset.
    offset = message.offset + 1
  end
end
See a working example in examples/simple-consumer.rb.
# File 'lib/kafka/client.rb', line 392

def fetch_messages(topic:, partition:, offset: :latest, max_wait_time: 5, min_bytes: 1, max_bytes: 1048576, retries: 1)
  operation = FetchOperation.new(
    cluster: @cluster,
    logger: @logger,
    min_bytes: min_bytes,
    max_bytes: max_bytes,
    max_wait_time: max_wait_time,
  )

  operation.fetch_from_partition(topic, partition, offset: offset, max_bytes: max_bytes)

  attempt = 1

  begin
    operation.execute.flat_map {|batch| batch.messages }
  rescue Kafka::Error => e
    @cluster.mark_as_stale!

    if attempt >= (retries + 1)
      raise
    else
      attempt += 1
      @logger.warn "Error while fetching messages, #{e.class}: #{e.message}; retrying..."
      retry
    end
  end
end
#groups ⇒ Array<String>
Lists all consumer groups in the cluster.
# File 'lib/kafka/client.rb', line 565

def groups
  @cluster.list_groups
end
#has_topic?(topic) ⇒ Boolean
# File 'lib/kafka/client.rb', line 569

def has_topic?(topic)
  @cluster.clear_target_topics
  @cluster.add_target_topics([topic])
  @cluster.topics.include?(topic)
end
#last_offset_for(topic, partition) ⇒ Integer
Retrieve the offset of the last message in a partition. If there are no messages in the partition, -1 is returned.
# File 'lib/kafka/client.rb', line 590

def last_offset_for(topic, partition)
  # The offset resolution API will return the offset of the "next" message to
  # be written when resolving the "latest" offset, so we subtract one.
  @cluster.resolve_offset(topic, partition, :latest) - 1
end
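A usage sketch (assuming the kafka client from the constructor example; the topic, partition, and return value are made up):

kafka.last_offset_for("events", 0)  # => 41, or -1 if partition 0 of "events" is empty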
#last_offsets_for(*topics) ⇒ Hash<String, Hash<Integer, Integer>>
Retrieve the offset of the last message in each partition of the specified topics.
# File 'lib/kafka/client.rb', line 606

def last_offsets_for(*topics)
  @cluster.add_target_topics(topics)
  topics.map {|topic|
    partition_ids = @cluster.partitions_for(topic).collect(&:partition_id)
    partition_offsets = @cluster.resolve_offsets(topic, partition_ids, :latest)
    [topic, partition_offsets.collect { |k, v| [k, v - 1] }.to_h]
  }.to_h
end
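A usage sketch (assuming the kafka client from the constructor example; the topics and returned numbers are made up):

kafka.last_offsets_for("events", "greetings")
# => { "events" => { 0 => 100, 1 => 97 }, "greetings" => { 0 => 42 } }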
#partitions_for(topic) ⇒ Integer
Counts the number of partitions in a topic.
# File 'lib/kafka/client.rb', line 579

def partitions_for(topic)
  @cluster.partitions_for(topic).count
end
#producer(compression_codec: nil, compression_threshold: 1, ack_timeout: 5, required_acks: :all, max_retries: 2, retry_backoff: 1, max_buffer_size: 1000, max_buffer_bytesize: 10_000_000) ⇒ Kafka::Producer
Initializes a new Kafka producer.
# File 'lib/kafka/client.rb', line 218

def producer(compression_codec: nil, compression_threshold: 1, ack_timeout: 5, required_acks: :all, max_retries: 2, retry_backoff: 1, max_buffer_size: 1000, max_buffer_bytesize: 10_000_000)
  compressor = Compressor.new(
    codec_name: compression_codec,
    threshold: compression_threshold,
    instrumenter: @instrumenter,
  )

  Producer.new(
    cluster: initialize_cluster,
    logger: @logger,
    instrumenter: @instrumenter,
    compressor: compressor,
    ack_timeout: ack_timeout,
    required_acks: required_acks,
    max_retries: max_retries,
    retry_backoff: retry_backoff,
    max_buffer_size: max_buffer_size,
    max_buffer_bytesize: max_buffer_bytesize,
  )
end
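A usage sketch (assuming the kafka client from the constructor example and a made-up topic): messages are buffered locally until #deliver_messages is called.

producer = kafka.producer(required_acks: :all, max_retries: 3)

producer.produce("hello", topic: "greetings", partition_key: "greeting-1")
producer.produce("world", topic: "greetings")

producer.deliver_messages
producer.shutdown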
#supports_api?(api_key, version = nil) ⇒ Boolean
Checks whether the current cluster supports a specific API version.
# File 'lib/kafka/client.rb', line 620

def supports_api?(api_key, version = nil)
  @cluster.supports_api?(api_key, version)
end
#topics ⇒ Array<String>
Lists all topics in the cluster.
# File 'lib/kafka/client.rb', line 558

def topics
  @cluster.list_topics
end