Class: Sidekiq::Client

Inherits:
Object
Defined in:
lib/sidekiq/client.rb,
lib/sidekiq/testing.rb

Instance Attribute Summary

Class Method Summary

Instance Method Summary

Constructor Details

#initialize(redis_pool = nil) ⇒ Client

Sidekiq::Client normally uses the default Redis pool, but you may pass a custom ConnectionPool if you want to shard your Sidekiq jobs across several Redis instances (e.g. for scalability).

Sidekiq::Client.new(ConnectionPool.new { Redis.new })

Generally this is only needed for very large Sidekiq installs processing thousands of jobs per second. I do not recommend sharding unless you truly cannot scale any other way (e.g. by splitting your app into smaller apps). Some features, like the API, do not support sharding: they are designed to work against a single Redis instance only.



# File 'lib/sidekiq/client.rb', line 42

def initialize(redis_pool=nil)
  @redis_pool = redis_pool || Thread.current[:sidekiq_via_pool] || Sidekiq.redis_pool
end
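
A fuller sketch combining a custom pool with a push; the Redis URL and the 'SomeWorker' class are assumptions for illustration only:

pool = ConnectionPool.new { Redis.new(url: 'redis://localhost:6379/0') }
client = Sidekiq::Client.new(pool)
client.push('class' => 'SomeWorker', 'args' => [1, 2, 3])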

Instance Attribute Details

#redis_pool ⇒ Object

Returns the value of attribute redis_pool.



# File 'lib/sidekiq/client.rb', line 28

def redis_pool
  @redis_pool
end

Class Method Details

.default ⇒ Object



# File 'lib/sidekiq/client.rb', line 123

def default
  @default ||= new
end
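
This memoized default client backs the class-level helpers such as .push and .push_bulk. For illustration (assuming a 'SomeWorker' job class), the following is equivalent to calling Sidekiq::Client.push directly:

Sidekiq::Client.default.push('class' => 'SomeWorker', 'args' => [1, 2, 3])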

.enqueue(klass, *args) ⇒ Object

Resque compatibility helpers. Note that all helpers should go through Worker#client_push.

Example usage:

Sidekiq::Client.enqueue(MyWorker, 'foo', 1, :bat => 'bar')

Messages are enqueued to the ‘default’ queue.



# File 'lib/sidekiq/client.rb', line 143

def enqueue(klass, *args)
  klass.client_push('class' => klass, 'args' => args)
end

.enqueue_in(interval, klass, *args) ⇒ Object

Example usage:

Sidekiq::Client.enqueue_in(3.minutes, MyWorker, 'foo', 1, :bat => 'bar')


# File 'lib/sidekiq/client.rb', line 171

def enqueue_in(interval, klass, *args)
  klass.perform_in(interval, *args)
end

.enqueue_to(queue, klass, *args) ⇒ Object

Example usage:

Sidekiq::Client.enqueue_to(:queue_name, MyWorker, 'foo', 1, :bat => 'bar')


# File 'lib/sidekiq/client.rb', line 150

def enqueue_to(queue, klass, *args)
  klass.client_push('queue' => queue, 'class' => klass, 'args' => args)
end

.enqueue_to_in(queue, interval, klass, *args) ⇒ Object

Example usage:

Sidekiq::Client.enqueue_to_in(:queue_name, 3.minutes, MyWorker, 'foo', 1, :bat => 'bar')


# File 'lib/sidekiq/client.rb', line 157

def enqueue_to_in(queue, interval, klass, *args)
  int = interval.to_f
  now = Time.now.to_f
  ts = (int < 1_000_000_000 ? now + int : int)

  item = { 'class' => klass, 'args' => args, 'at' => ts, 'queue' => queue }
  item.delete('at') if ts <= now

  klass.client_push(item)
end
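
As the source shows, the interval is treated as a relative offset in seconds when it is smaller than 1,000,000,000 and as an absolute epoch timestamp otherwise; if the resulting time is not in the future, the 'at' key is dropped and the job is enqueued immediately. A minimal sketch, assuming a MyWorker class exists:

# Run in five minutes (relative offset in seconds).
Sidekiq::Client.enqueue_to_in(:default, 300, MyWorker, 'foo')

# Run at an absolute epoch timestamp.
Sidekiq::Client.enqueue_to_in(:default, Time.now.to_f + 300, MyWorker, 'foo')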

.push(item) ⇒ Object



# File 'lib/sidekiq/client.rb', line 127

def push(item)
  default.push(item)
end
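
The class-level .push delegates to the memoized default client. A minimal sketch, assuming a MyWorker class exists:

Sidekiq::Client.push('class' => MyWorker, 'args' => ['foo', 1])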

.push_bulk(items) ⇒ Object



# File 'lib/sidekiq/client.rb', line 131

def push_bulk(items)
  default.push_bulk(items)
end
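
Like .push, this delegates to the default client. A minimal sketch, assuming a MyWorker class, with 'args' given as an Array of Arrays (one inner Array per job):

Sidekiq::Client.push_bulk('class' => MyWorker, 'args' => [['foo', 1], ['bar', 2]])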

.via(pool) ⇒ Object

Allows sharding of jobs across any number of Redis instances. All jobs defined within the block will use the given Redis connection pool.

pool = ConnectionPool.new { Redis.new }
Sidekiq::Client.via(pool) do
  SomeWorker.perform_async(1,2,3)
  SomeOtherWorker.perform_async(1,2,3)
end

Generally this is only needed for very large Sidekiq installs processing thousands of jobs per second. I do not recommend sharding unless you truly cannot scale any other way (e.g. by splitting your app into smaller apps). Some features, like the API, do not support sharding: they are designed to work against a single Redis instance.



# File 'lib/sidekiq/client.rb', line 112

def self.via(pool)
  raise ArgumentError, "No pool given" if pool.nil?
  raise RuntimeError, "Sidekiq::Client.via is not re-entrant" if x = Thread.current[:sidekiq_via_pool] && x != pool
  Thread.current[:sidekiq_via_pool] = pool
  yield
ensure
  Thread.current[:sidekiq_via_pool] = nil
end

Instance Method Details

#middleware(&block) ⇒ Object

Define client-side middleware:

client = Sidekiq::Client.new
client.middleware do |chain|
  chain.use MyClientMiddleware
end
client.push('class' => 'SomeWorker', 'args' => [1,2,3])

All client instances default to the globally-defined Sidekiq.client_middleware, but you can change it as necessary.



# File 'lib/sidekiq/client.rb', line 19

def middleware(&block)
  @chain ||= Sidekiq.client_middleware
  if block_given?
    @chain = @chain.dup
    yield @chain
  end
  @chain
end

#push(item) ⇒ Object

The main method used to push a job to Redis. Accepts a number of options:

queue - the named queue to use, default 'default'
class - the worker class to call, required
args - an array of simple arguments to the perform method, must be JSON-serializable
retry - whether to retry this job if it fails, true or false, default true
backtrace - whether to save any error backtrace, default false

All options must be strings, not symbols. NB: because we are serializing to JSON, all symbols in ‘args’ will be converted to strings.

Returns nil if not pushed to Redis or a unique Job ID if pushed.

Example:

push('queue' => 'my_queue', 'class' => MyWorker, 'args' => ['foo', 1, :bat => 'bar'])


# File 'lib/sidekiq/client.rb', line 63

def push(item)
  normed = normalize_item(item)
  payload = process_single(item['class'], normed)

  pushed = false
  pushed = raw_push([payload]) if payload
  pushed ? payload['jid'] : nil
end

#push_bulk(items) ⇒ Object

Push a large number of jobs to Redis. In practice this method is only useful if you are pushing tens of thousands of jobs or more, or if you need to ensure that a batch doesn’t complete prematurely. This method cuts down on Redis round-trip latency.

Takes the same arguments as #push except that args is expected to be an Array of Arrays. All other keys are duplicated for each job. Each job is run through the client middleware pipeline and each job gets its own Job ID as normal.

Returns an array of the pushed jobs’ jids, or nil if the push failed. The number of jobs pushed can be less than the number given if the middleware stopped processing for one or more jobs.
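
A minimal sketch, assuming a MyWorker class, enqueuing three jobs with a single Redis round trip:

Sidekiq::Client.new.push_bulk('class' => MyWorker, 'args' => [['a', 1], ['b', 2], ['c', 3]])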



# File 'lib/sidekiq/client.rb', line 86

def push_bulk(items)
  normed = normalize_item(items)
  payloads = items['args'].map do |args|
    raise ArgumentError, "Bulk arguments must be an Array of Arrays: [[1], [2]]" if !args.is_a?(Array)
    process_single(items['class'], normed.merge('args' => args, 'jid' => SecureRandom.hex(12), 'enqueued_at' => Time.now.to_f))
  end.compact

  pushed = false
  pushed = raw_push(payloads) if !payloads.empty?
  pushed ? payloads.collect { |payload| payload['jid'] } : nil
end

#raw_push(payloads) ⇒ Object



# File 'lib/sidekiq/client.rb', line 178

def raw_push(payloads)
  @redis_pool.with do |conn|
    conn.multi do
      atomic_push(conn, payloads)
    end
  end
  true
end

#raw_push_real ⇒ Object



# File 'lib/sidekiq/testing.rb', line 60

def raw_push(payloads)
  @redis_pool.with do |conn|
    conn.multi do
      atomic_push(conn, payloads)
    end
  end
  true
end