Class: Sidekiq::Client
Inherits: Object
Includes: JobUtil, TestingClient
Defined in: lib/sidekiq/client.rb
Instance Attribute Summary collapse
- #redis_pool ⇒ Object
  Returns the value of attribute redis_pool.
Class Method Summary collapse
- .enqueue(klass, *args) ⇒ Object
  Resque compatibility helpers.
- .enqueue_in(interval, klass, *args) ⇒ Object
  Example usage: Sidekiq::Client.enqueue_in(3.minutes, MyWorker, 'foo', 1, :bat => 'bar').
- .enqueue_to(queue, klass, *args) ⇒ Object
  Example usage: Sidekiq::Client.enqueue_to(:queue_name, MyWorker, 'foo', 1, :bat => 'bar').
- .enqueue_to_in(queue, interval, klass, *args) ⇒ Object
  Example usage: Sidekiq::Client.enqueue_to_in(:queue_name, 3.minutes, MyWorker, 'foo', 1, :bat => 'bar').
- .push(item) ⇒ Object
- .push_bulk(items) ⇒ Object
- .via(pool) ⇒ Object
  Allows sharding of jobs across any number of Redis instances.
Instance Method Summary collapse
- #initialize(redis_pool = nil) ⇒ Client constructor
  Sidekiq::Client normally uses the default Redis pool, but you may pass a custom ConnectionPool if you want to shard your Sidekiq jobs across several Redis instances (e.g. for scalability reasons).
- #middleware(&block) ⇒ Object
  Define client-side middleware.
- #push(item) ⇒ Object
  The main method used to push a job to Redis.
- #push_bulk(items) ⇒ Object
  Push a large number of jobs to Redis.
Methods included from JobUtil
#normalize_item, #normalized_hash, #validate
Constructor Details
#initialize(redis_pool = nil) ⇒ Client
Sidekiq::Client normally uses the default Redis pool, but you may pass a custom ConnectionPool if you want to shard your Sidekiq jobs across several Redis instances (e.g. for scalability reasons):
Sidekiq::Client.new(ConnectionPool.new { Redis.new })
Generally this is only needed for very large Sidekiq installs processing thousands of jobs per second. I don't recommend sharding unless you cannot scale any other way (e.g. by splitting your app into smaller apps).
# File 'lib/sidekiq/client.rb', line 44

def initialize(redis_pool = nil)
  @redis_pool = redis_pool || Thread.current[:sidekiq_via_pool] || Sidekiq.redis_pool
end
Instance Attribute Details
#redis_pool ⇒ Object
Returns the value of attribute redis_pool.
# File 'lib/sidekiq/client.rb', line 32

def redis_pool
  @redis_pool
end
Class Method Details
.enqueue(klass, *args) ⇒ Object
# File 'lib/sidekiq/client.rb', line 155

def enqueue(klass, *args)
  klass.client_push("class" => klass, "args" => args)
end
.enqueue_in(interval, klass, *args) ⇒ Object
# File 'lib/sidekiq/client.rb', line 183

def enqueue_in(interval, klass, *args)
  klass.perform_in(interval, *args)
end
.enqueue_to(queue, klass, *args) ⇒ Object
# File 'lib/sidekiq/client.rb', line 162

def enqueue_to(queue, klass, *args)
  klass.client_push("queue" => queue, "class" => klass, "args" => args)
end
.enqueue_to_in(queue, interval, klass, *args) ⇒ Object
# File 'lib/sidekiq/client.rb', line 169

def enqueue_to_in(queue, interval, klass, *args)
  int = interval.to_f
  now = Time.now.to_f
  ts = (int < 1_000_000_000 ? now + int : int)

  item = {"class" => klass, "args" => args, "at" => ts, "queue" => queue}
  item.delete("at") if ts <= now

  klass.client_push(item)
end
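The interval handling above uses a heuristic: values below 1,000,000,000 (roughly the Unix timestamp for September 2001) are treated as relative intervals in seconds, while larger values are assumed to already be absolute epoch timestamps. A minimal sketch of that logic in isolation (the method name is hypothetical, for illustration only):

```ruby
# Sketch of the interval-vs-timestamp heuristic used by enqueue_to_in:
# small values are "seconds from now", large values are absolute timestamps.
def resolve_schedule_time(interval, now = Time.now.to_f)
  int = interval.to_f
  int < 1_000_000_000 ? now + int : int
end

now = 1_700_000_000.0
resolve_schedule_time(180, now)            # relative: 180 seconds from "now"
resolve_schedule_time(1_700_000_500, now)  # absolute timestamp, passed through
```

This is why both `enqueue_to_in(:q, 3.minutes, ...)` and an explicit `Time.now.to_f + 300` style timestamp work as the interval argument.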
.push(item) ⇒ Object
# File 'lib/sidekiq/client.rb', line 139

def push(item)
  new.push(item)
end
.push_bulk(items) ⇒ Object
# File 'lib/sidekiq/client.rb', line 143

def push_bulk(items)
  new.push_bulk(items)
end
.via(pool) ⇒ Object
Allows sharding of jobs across any number of Redis instances. All jobs defined within the block will use the given Redis connection pool.
pool = ConnectionPool.new { Redis.new }
Sidekiq::Client.via(pool) do
SomeWorker.perform_async(1,2,3)
SomeOtherWorker.perform_async(1,2,3)
end
Generally this is only needed for very large Sidekiq installs processing thousands of jobs per second. I do not recommend sharding unless you cannot scale any other way (e.g. splitting your app into smaller apps).
# File 'lib/sidekiq/client.rb', line 129

def self.via(pool)
  raise ArgumentError, "No pool given" if pool.nil?
  current_sidekiq_pool = Thread.current[:sidekiq_via_pool]
  Thread.current[:sidekiq_via_pool] = pool
  yield
ensure
  Thread.current[:sidekiq_via_pool] = current_sidekiq_pool
end
Instance Method Details
#middleware(&block) ⇒ Object
# File 'lib/sidekiq/client.rb', line 23

def middleware(&block)
  @chain ||= Sidekiq.client_middleware
  if block
    @chain = @chain.dup
    yield @chain
  end
  @chain
end
#push(item) ⇒ Object
The main method used to push a job to Redis. Accepts a number of options:
queue - the named queue to use, default 'default'
class - the worker class to call, required
args - an array of simple arguments to the perform method, must be JSON-serializable
at - timestamp to schedule the job (optional), must be Numeric (e.g. Time.now.to_f)
retry - whether to retry this job if it fails, default true or an integer number of retries
backtrace - whether to save any error backtrace, default false
If class is set to the class name, the job's options will be based on Sidekiq's default worker options. Otherwise, they will be based on the job class's options.
Any options valid for a worker class’s sidekiq_options are also available here.
All options must be strings, not symbols. NB: because we are serializing to JSON, all symbols in 'args' will be converted to strings. Note that backtrace: true can take quite a bit of space in Redis; a large volume of failing jobs can start Redis swapping if you aren't careful.
Returns a unique Job ID. If middleware stops the job, nil will be returned instead.
Example:
push('queue' => 'my_queue', 'class' => MyWorker, 'args' => ['foo', 1, :bat => 'bar'])
# File 'lib/sidekiq/client.rb', line 72

def push(item)
  normed = normalize_item(item)
  payload = process_single(item["class"], normed)

  if payload
    raw_push([payload])
    payload["jid"]
  end
end
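The note above about symbols can be seen with a plain JSON round trip, no Sidekiq or Redis required: any symbols in args come back as strings, which is what the worker's perform method will actually receive.

```ruby
require "json"

# What push serializes vs. what the worker's perform method receives:
args = ["foo", 1, {:bat => :bar}]
round_tripped = JSON.parse(JSON.generate(args))
round_tripped # => ["foo", 1, {"bat" => "bar"}]
```

This is why job arguments should be kept to simple, JSON-native types from the start.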
#push_bulk(items) ⇒ Object
Push a large number of jobs to Redis. This method cuts out the Redis network round trip latency. I wouldn't recommend pushing more than 1000 per call, but YMMV based on network quality, size of job args, etc. A large number of jobs can cause a bit of Redis command processing latency.
Takes the same arguments as #push except that args is expected to be an Array of Arrays. All other keys are duplicated for each job. Each job is run through the client middleware pipeline and each job gets its own Job ID as normal.
Returns an array of the pushed jobs' jids. The number of jobs pushed can be less than the number given if the middleware stopped processing for one or more jobs.
# File 'lib/sidekiq/client.rb', line 95

def push_bulk(items)
  args = items["args"]
  raise ArgumentError, "Bulk arguments must be an Array of Arrays: [[1], [2]]" unless args.is_a?(Array) && args.all?(Array)
  return [] if args.empty? # no jobs to push

  at = items.delete("at")
  raise ArgumentError, "Job 'at' must be a Numeric or an Array of Numeric timestamps" if at && (Array(at).empty? || !Array(at).all? { |entry| entry.is_a?(Numeric) })
  raise ArgumentError, "Job 'at' Array must have same size as 'args' Array" if at.is_a?(Array) && at.size != args.size

  normed = normalize_item(items)
  payloads = args.map.with_index { |job_args, index|
    copy = normed.merge("args" => job_args, "jid" => SecureRandom.hex(12), "enqueued_at" => Time.now.to_f)
    copy["at"] = (at.is_a?(Array) ? at[index] : at) if at

    result = process_single(items["class"], copy)
    result || nil
  }.compact

  raw_push(payloads) unless payloads.empty?
  payloads.collect { |payload| payload["jid"] }
end
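Given the ~1000-jobs-per-call recommendation above, a caller with a very large workload would typically slice the args into batches. A sketch of the shape push_bulk expects (an Array of argument Arrays) and the batching, using only plain Ruby; the actual Sidekiq call is shown in a comment:

```ruby
# Sketch: batching a large argument list into push_bulk-sized calls.
# Each element of all_args is itself an argument Array, as push_bulk expects.
all_args = (1..2500).map { |i| [i] }

batches = all_args.each_slice(1000).to_a
batches.size        # => 3
batches.first.size  # => 1000
batches.last.size   # => 500

# With Sidekiq loaded, each batch would be pushed like:
#   batches.each do |batch|
#     Sidekiq::Client.push_bulk("class" => SomeWorker, "args" => batch)
#   end
```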