Method: Kafka::Client#producer

Defined in:
lib/kafka/client.rb

#producer(compression_codec: nil, compression_threshold: 1, ack_timeout: 5, required_acks: :all, max_retries: 2, retry_backoff: 1, max_buffer_size: 1000, max_buffer_bytesize: 10_000_000, idempotent: false, transactional: false, transactional_id: nil, transactional_timeout: 60, interceptors: []) ⇒ Kafka::Producer

Initializes a new Kafka producer.
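
For orientation, a minimal usage sketch. The broker address, client id, topic name, and message contents below are placeholders, not part of this method's contract:

require "kafka"

# Connect to the cluster; the seed broker address is a placeholder.
kafka = Kafka.new(["kafka1:9092"], client_id: "my-application")

# Create a producer with the default settings.
producer = kafka.producer

# Messages are buffered locally until explicitly delivered.
producer.produce("hello", topic: "greetings")

# Deliver all buffered messages to the cluster.
producer.deliver_messages

# Release the producer's resources when finished.
producer.shutdown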

Parameters:

  • ack_timeout (Integer) (defaults to: 5)

    The number of seconds a broker can wait for replicas to acknowledge a write before responding with a timeout.

  • required_acks (Integer, Symbol) (defaults to: :all)

    The number of replicas that must acknowledge a write, or :all if all in-sync replicas must acknowledge.

  • max_retries (Integer) (defaults to: 2)

The number of retries that should be attempted before giving up on sending messages to the cluster. Does not include the original attempt.

  • retry_backoff (Integer) (defaults to: 1)

The number of seconds to wait between retries.

  • max_buffer_size (Integer) (defaults to: 1000)

The number of messages allowed in the buffer before new writes will raise BufferOverflow exceptions.

  • max_buffer_bytesize (Integer) (defaults to: 10_000_000)

The maximum size of the buffer in bytes. Attempting to produce messages when the buffer reaches this size will result in BufferOverflow being raised.

  • compression_codec (Symbol, nil) (defaults to: nil)

The name of the compression codec to use, or nil if no compression should be performed. Valid codecs: :snappy, :gzip, :lz4, and :zstd. See the sketch after this list for a usage example.

  • compression_threshold (Integer) (defaults to: 1)

The number of messages that need to be in a message set before it should be compressed. Note that message sets are per-partition rather than per-topic or per-producer.

  • interceptors (Array<Object>) (defaults to: [])

A list of producer interceptors that implement call(Kafka::PendingMessage).
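
The compression and buffer settings interact with delivery as sketched below, reusing the kafka client from the first sketch. The values are illustrative rather than recommended, and the topic name is a placeholder:

# Compress message sets of ten or more messages with Snappy, and cap
# the local buffer at 5,000 messages or 50 MB, whichever is hit first.
producer = kafka.producer(
  compression_codec: :snappy,
  compression_threshold: 10,
  max_buffer_size: 5_000,
  max_buffer_bytesize: 50_000_000
)

begin
  producer.produce("event payload", topic: "events")
rescue Kafka::BufferOverflow
  # The buffer hit one of the limits above; flush it and try again.
  producer.deliver_messages
  retry
end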

Returns:

  • (Kafka::Producer)

    The new producer instance.


# File 'lib/kafka/client.rb', line 278

def producer(
  compression_codec: nil,
  compression_threshold: 1,
  ack_timeout: 5,
  required_acks: :all,
  max_retries: 2,
  retry_backoff: 1,
  max_buffer_size: 1000,
  max_buffer_bytesize: 10_000_000,
  idempotent: false,
  transactional: false,
  transactional_id: nil,
  transactional_timeout: 60,
  interceptors: []
)
  cluster = initialize_cluster

  # Compresses message sets once they reach the configured threshold.
  compressor = Compressor.new(
    codec_name: compression_codec,
    threshold: compression_threshold,
    instrumenter: @instrumenter,
  )

  # Coordinates idempotent and transactional writes.
  transaction_manager = TransactionManager.new(
    cluster: cluster,
    logger: @logger,
    idempotent: idempotent,
    transactional: transactional,
    transactional_id: transactional_id,
    transactional_timeout: transactional_timeout,
  )

  Producer.new(
    cluster: cluster,
    transaction_manager: transaction_manager,
    logger: @logger,
    instrumenter: @instrumenter,
    compressor: compressor,
    ack_timeout: ack_timeout,
    required_acks: required_acks,
    max_retries: max_retries,
    retry_backoff: retry_backoff,
    max_buffer_size: max_buffer_size,
    max_buffer_bytesize: max_buffer_bytesize,
    partitioner: @partitioner,
    interceptors: interceptors
  )
end
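
The idempotent, transactional, transactional_id, and transactional_timeout options, wired into the TransactionManager above, enable exactly-once delivery semantics. A minimal sketch, assuming the brokers support transactions; the transactional_id is a placeholder:

producer = kafka.producer(
  transactional: true,
  transactional_id: "order-processor",  # placeholder identifier
  transactional_timeout: 30
)

producer.init_transactions

# Messages delivered inside the block are committed atomically;
# raising an exception aborts the transaction instead.
producer.transaction do
  producer.produce("order placed", topic: "orders")
  producer.deliver_messages
end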