Method: Sidekiq::Client#push_bulk

Defined in:
lib/sidekiq/client.rb

#push_bulk(items) ⇒ Object

Push a large number of jobs to Redis in one call. This method cuts out the per-job Redis network round-trip latency. I wouldn't recommend pushing more than 1000 per call, but YMMV based on network quality, size of job args, etc. A large batch can cause a bit of Redis command processing latency.

Takes the same arguments as #push except that args is expected to be an Array of Arrays. All other keys are duplicated for each job. Each job is run through the client middleware pipeline and each job gets its own Job ID as normal.

Returns an array of the pushed jobs' jids. The number of jobs pushed can be less than the number given if the middleware stopped processing for one or more jobs.
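
For illustration, a minimal sketch of a bulk push. HardJob is a hypothetical worker class standing in for your own job, and the 1000-per-call guidance above is handled with each_slice:

  class HardJob
    include Sidekiq::Worker

    def perform(user_id)
      # work on a single user
    end
  end

  # One network round trip enqueues all three jobs; each gets its own JID.
  jids = Sidekiq::Client.push_bulk(
    "class" => HardJob,
    "args"  => [[1], [2], [3]] # one inner Array of perform arguments per job
  )

  # For big input sets, stay near the suggested batch size per call:
  user_ids = (1..10_000).to_a
  user_ids.each_slice(1_000) do |slice|
    Sidekiq::Client.push_bulk("class" => HardJob, "args" => slice.map { |id| [id] })
  end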

Raises:

  • (ArgumentError)

# File 'lib/sidekiq/client.rb', line 92

def push_bulk(items)
  args = items["args"]
  raise ArgumentError, "Bulk arguments must be an Array of Arrays: [[1], [2]]" unless args.is_a?(Array) && args.all?(Array)
  return [] if args.empty? # no jobs to push

  at = items.delete("at")
  raise ArgumentError, "Job 'at' must be a Numeric or an Array of Numeric timestamps" if at && (Array(at).empty? || !Array(at).all?(Numeric))
  raise ArgumentError, "Job 'at' Array must have same size as 'args' Array" if at.is_a?(Array) && at.size != args.size

  normed = normalize_item(items)
  payloads = args.map.with_index { |job_args, index|
    copy = normed.merge("args" => job_args, "jid" => SecureRandom.hex(12), "enqueued_at" => Time.now.to_f)
    copy["at"] = (at.is_a?(Array) ? at[index] : at) if at

    # run each job through the client middleware chain; a halted chain
    # drops that job from the batch
    result = process_single(items["class"], copy)
    result || nil # normalize false to nil so #compact strips skipped jobs
  }.compact

  raw_push(payloads) unless payloads.empty?
  payloads.collect { |payload| payload["jid"] }
end
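
Since 'at' may be a single Numeric timestamp or an Array of timestamps the same size as 'args', bulk pushes can also be scheduled. A brief sketch, reusing the hypothetical HardJob from above:

  now = Time.now.to_f

  # A single timestamp schedules every job in the batch for the same time.
  Sidekiq::Client.push_bulk("class" => HardJob, "args" => [[1], [2]], "at" => now + 60)

  # An Array schedules each job individually, matching 'args' position for position.
  Sidekiq::Client.push_bulk(
    "class" => HardJob,
    "args"  => [[1], [2], [3]],
    "at"    => [now + 60, now + 120, now + 180]
  )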