Class: Sidekiq::Client
Inherits: Object
Defined in: lib/sidekiq/client.rb, lib/sidekiq/testing.rb
Instance Attribute Summary
- #redis_pool ⇒ Object
  Returns the value of attribute redis_pool.
Class Method Summary
- .enqueue(klass, *args) ⇒ Object
  Resque compatibility helpers.
- .enqueue_in(interval, klass, *args) ⇒ Object
  Example usage: Sidekiq::Client.enqueue_in(3.minutes, MyWorker, 'foo', 1, :bat => 'bar').
- .enqueue_to(queue, klass, *args) ⇒ Object
  Example usage: Sidekiq::Client.enqueue_to(:queue_name, MyWorker, 'foo', 1, :bat => 'bar').
- .enqueue_to_in(queue, interval, klass, *args) ⇒ Object
  Example usage: Sidekiq::Client.enqueue_to_in(:queue_name, 3.minutes, MyWorker, 'foo', 1, :bat => 'bar').
- .push(item) ⇒ Object
- .push_bulk(items) ⇒ Object
- .via(pool) ⇒ Object
  Allows sharding of jobs across any number of Redis instances.
Instance Method Summary
- #initialize(redis_pool = nil) ⇒ Client (constructor)
  Sidekiq::Client normally uses the default Redis pool, but you may pass a custom ConnectionPool if you want to shard your Sidekiq jobs across several Redis instances (e.g. for scalability reasons).
- #middleware(&block) ⇒ Object
  Define client-side middleware.
- #push(item) ⇒ Object
  The main method used to push a job to Redis.
- #push_bulk(items) ⇒ Object
  Push a large number of jobs to Redis.
- #raw_push(payloads) ⇒ Object
- #raw_push_real ⇒ Object
Constructor Details
#initialize(redis_pool = nil) ⇒ Client
Sidekiq::Client normally uses the default Redis pool, but you may pass a custom ConnectionPool if you want to shard your Sidekiq jobs across several Redis instances (e.g. for scalability reasons):
Sidekiq::Client.new(ConnectionPool.new { Redis.new })
Generally this is only needed for very large Sidekiq installs processing thousands of jobs per second. I don't recommend sharding unless you cannot scale any other way (e.g. splitting your app into smaller apps).
# File 'lib/sidekiq/client.rb', line 41
def initialize(redis_pool=nil)
  @redis_pool = redis_pool || Thread.current[:sidekiq_via_pool] || Sidekiq.redis_pool
end
Instance Attribute Details
#redis_pool ⇒ Object
Returns the value of attribute redis_pool.
# File 'lib/sidekiq/client.rb', line 29
def redis_pool
  @redis_pool
end
Class Method Details
.enqueue(klass, *args) ⇒ Object
# File 'lib/sidekiq/client.rb', line 144
def enqueue(klass, *args)
  klass.client_push('class'.freeze => klass, 'args'.freeze => args)
end
.enqueue_in(interval, klass, *args) ⇒ Object
# File 'lib/sidekiq/client.rb', line 172
def enqueue_in(interval, klass, *args)
  klass.perform_in(interval, *args)
end
.enqueue_to(queue, klass, *args) ⇒ Object
# File 'lib/sidekiq/client.rb', line 151
def enqueue_to(queue, klass, *args)
  klass.client_push('queue'.freeze => queue, 'class'.freeze => klass, 'args'.freeze => args)
end
.enqueue_to_in(queue, interval, klass, *args) ⇒ Object
# File 'lib/sidekiq/client.rb', line 158
def enqueue_to_in(queue, interval, klass, *args)
  int = interval.to_f
  now = Time.now.to_f
  ts = (int < 1_000_000_000 ? now + int : int)

  item = { 'class'.freeze => klass, 'args'.freeze => args, 'at'.freeze => ts, 'queue'.freeze => queue }
  item.delete('at'.freeze) if ts <= now

  klass.client_push(item)
end
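The scheduling heuristic above treats values below 1,000,000,000 as relative offsets in seconds and anything larger as an absolute Unix timestamp. A standalone sketch of that logic (the `schedule_ts` helper name is illustrative, not part of Sidekiq's API):

```ruby
# Mirrors the heuristic in enqueue_to_in: small numbers are relative
# offsets in seconds; numbers >= 1_000_000_000 (Unix timestamps from
# roughly September 2001 onward) pass through as absolute times.
def schedule_ts(interval, now = Time.now.to_f)
  int = interval.to_f
  int < 1_000_000_000 ? now + int : int
end

# A 3-minute offset from a fake "now" of 100.0 seconds:
schedule_ts(180, 100.0)           # => 280.0
# An absolute timestamp is returned unchanged:
schedule_ts(1_600_000_000, 100.0) # => 1600000000.0
```

Note that enqueue_to_in then deletes the 'at' key entirely when the computed timestamp is not in the future, so such jobs are pushed immediately.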
.push(item) ⇒ Object
# File 'lib/sidekiq/client.rb', line 128
def push(item)
  new.push(item)
end
.push_bulk(items) ⇒ Object
# File 'lib/sidekiq/client.rb', line 132
def push_bulk(items)
  new.push_bulk(items)
end
.via(pool) ⇒ Object
Allows sharding of jobs across any number of Redis instances. All jobs defined within the block will use the given Redis connection pool.
pool = ConnectionPool.new { Redis.new }
Sidekiq::Client.via(pool) do
SomeWorker.perform_async(1,2,3)
SomeOtherWorker.perform_async(1,2,3)
end
Generally this is only needed for very large Sidekiq installs processing thousands of jobs per second. I do not recommend sharding unless you cannot scale any other way (e.g. splitting your app into smaller apps).
# File 'lib/sidekiq/client.rb', line 116
def self.via(pool)
  raise ArgumentError, "No pool given" if pool.nil?
  current_sidekiq_pool = Thread.current[:sidekiq_via_pool]
  raise RuntimeError, "Sidekiq::Client.via is not re-entrant" if current_sidekiq_pool && current_sidekiq_pool != pool
  Thread.current[:sidekiq_via_pool] = pool
  yield
ensure
  Thread.current[:sidekiq_via_pool] = nil
end
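The mechanism behind .via is a thread-local variable that the constructor consults before falling back to the default pool. A simplified standalone model of that pattern (method and key names here are illustrative, not Sidekiq's actual internals):

```ruby
# Simplified model of Sidekiq::Client.via: stash a pool in a
# thread-local, run the block, and always clear it afterwards so
# later pushes on this thread fall back to the default pool.
def with_pool(pool)
  raise ArgumentError, "No pool given" if pool.nil?
  current = Thread.current[:via_pool]
  raise "not re-entrant" if current && current != pool
  Thread.current[:via_pool] = pool
  yield
ensure
  Thread.current[:via_pool] = nil
end

inside = with_pool(:shard_a) { Thread.current[:via_pool] } # => :shard_a
Thread.current[:via_pool]                                  # => nil
```

Because the pool lives in a thread-local, .via only affects pushes made on the same thread within the block; the ensure clause guarantees cleanup even if the block raises.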
Instance Method Details
#middleware(&block) ⇒ Object
# File 'lib/sidekiq/client.rb', line 20
def middleware(&block)
  @chain ||= Sidekiq.client_middleware
  if block_given?
    @chain = @chain.dup
    yield @chain
  end
  @chain
end
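Client middleware follows a simple protocol: each entry responds to call(worker_class, job, queue, redis_pool) and yields to continue down the chain; returning without yielding stops the push. A minimal standalone sketch of that protocol (StampMiddleware and the 'stamped' key are illustrative names, not part of Sidekiq):

```ruby
# A client middleware that annotates each job hash before letting
# the rest of the chain (and ultimately the Redis push) proceed.
class StampMiddleware
  def call(worker_class, job, queue, redis_pool)
    job['stamped'] = true
    yield # continue to the next middleware; omit this to stop the push
  end
end

job = { 'class' => 'HardWorker', 'args' => [1] }
pushed = false
StampMiddleware.new.call('HardWorker', job, 'default', nil) { pushed = true }
job['stamped'] # => true
pushed         # => true
```

In a real application you would register such a class on the chain, e.g. via Sidekiq.configure_client with a client_middleware block.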
#push(item) ⇒ Object
The main method used to push a job to Redis. Accepts a number of options:
queue - the named queue to use, default 'default'
class - the worker class to call, required
args - an array of simple arguments to the perform method, must be JSON-serializable
at - timestamp to schedule the job (optional), must be Numeric (e.g. Time.now.to_f)
retry - whether to retry this job if it fails, default true or an integer number of retries
backtrace - whether to save any error backtrace, default false
Any options valid for a worker class's sidekiq_options are also available here.
All options must be strings, not symbols. NB: because we are serializing to JSON, all symbols in 'args' will be converted to strings. Note that backtrace: true can take quite a bit of space in Redis; a large volume of failing jobs can start Redis swapping if you aren't careful.
Returns a unique Job ID. If middleware stops the job, nil will be returned instead.
Example:
push('queue' => 'my_queue', 'class' => MyWorker, 'args' => ['foo', 1, :bat => 'bar'])
# File 'lib/sidekiq/client.rb', line 66
def push(item)
  normed = normalize_item(item)
  payload = process_single(item['class'.freeze], normed)

  if payload
    raw_push([payload])
    payload['jid'.freeze]
  end
end
#push_bulk(items) ⇒ Object
Push a large number of jobs to Redis. In practice this method is only useful if you are pushing thousands of jobs or more. This method cuts out the redis network round trip latency.
Takes the same arguments as #push except that args is expected to be an Array of Arrays. All other keys are duplicated for each job. Each job is run through the client middleware pipeline and each job gets its own Job ID as normal.
Returns an array of the pushed jobs' jids. The number of jobs pushed can be less than the number given if the middleware stopped processing for one or more jobs.
# File 'lib/sidekiq/client.rb', line 88
def push_bulk(items)
  arg = items['args'.freeze].first
  return [] unless arg # no jobs to push
  raise ArgumentError, "Bulk arguments must be an Array of Arrays: [[1], [2]]" if !arg.is_a?(Array)

  normed = normalize_item(items)
  payloads = items['args'.freeze].map do |args|
    copy = normed.merge('args'.freeze => args, 'jid'.freeze => SecureRandom.hex(12), 'enqueued_at'.freeze => Time.now.to_f)
    result = process_single(items['class'.freeze], copy)
    result ? result : nil
  end.compact

  raw_push(payloads) if !payloads.empty?
  payloads.collect { |payload| payload['jid'.freeze] }
end
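The fan-out behavior, one shared template with one 'args' entry per job, can be illustrated without Redis or middleware (a pure-Ruby sketch; HardWorker is a placeholder name):

```ruby
require 'securerandom'

# How push_bulk expands a single item hash into per-job payloads:
# shared keys are copied to every job, while each element of 'args'
# becomes one job's argument array and each job gets a unique jid.
items = { 'class' => 'HardWorker', 'args' => [[1], [2], [3]] }
payloads = items['args'].map do |args|
  { 'class' => items['class'], 'args' => args, 'jid' => SecureRandom.hex(12) }
end

payloads.size                           # => 3
payloads.map { |p| p['args'] }          # => [[1], [2], [3]]
payloads.map { |p| p['jid'] }.uniq.size # => 3
```

The real method additionally runs each payload through normalize_item and the client middleware chain, which is why the returned jid array can be shorter than the input.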
#raw_push(payloads) ⇒ Object
# File 'lib/sidekiq/client.rb', line 179
def raw_push(payloads)
  @redis_pool.with do |conn|
    conn.multi do
      atomic_push(conn, payloads)
    end
  end
  true
end
#raw_push_real ⇒ Object
# File 'lib/sidekiq/testing.rb', line 76
def raw_push(payloads)
  @redis_pool.with do |conn|
    conn.multi do
      atomic_push(conn, payloads)
    end
  end
  true
end