Class: Roundhouse::Client

Inherits: Object
Defined in:
lib/roundhouse/client.rb,
lib/roundhouse/testing.rb


Constructor Details

#initialize(redis_pool = nil) ⇒ Client

Roundhouse::Client normally uses the default Redis pool, but you may pass a custom ConnectionPool if you want to shard your Roundhouse jobs across several Redis instances (e.g. for scalability reasons):

Roundhouse::Client.new(ConnectionPool.new { Redis.new })

Generally this is only needed for very large Roundhouse installs processing more than a thousand jobs per second. I do not recommend sharding unless you truly cannot scale any other way (e.g. by splitting your app into smaller apps). Some features, like the API, do not support sharding: they are designed to work against a single Redis instance only.



# File 'lib/roundhouse/client.rb', line 43

def initialize(redis_pool=nil)
  @redis_pool = redis_pool || Thread.current[:roundhouse_via_pool] || Roundhouse.redis_pool
end

Instance Attribute Details

#redis_pool ⇒ Object

Returns the value of attribute redis_pool.



# File 'lib/roundhouse/client.rb', line 29

def redis_pool
  @redis_pool
end

Class Method Details

.default ⇒ Object

Deprecated.



# File 'lib/roundhouse/client.rb', line 130

def default
  @default ||= new
end

.enqueue_to(queue_id, klass, *args) ⇒ Object

Example usage:

Roundhouse::Client.enqueue_to(queue_id, MyWorker, 'foo', 1, :bat => 'bar')


# File 'lib/roundhouse/client.rb', line 148

def enqueue_to(queue_id, klass, *args)
  klass.client_push('queue_id' => queue_id, 'class' => klass, 'args' => args)
end

.enqueue_to_in(queue_id, interval, klass, *args) ⇒ Object

Example usage:

Roundhouse::Client.enqueue_to_in(queue_id, 3.minutes, MyWorker, 'foo', 1, :bat => 'bar')


# File 'lib/roundhouse/client.rb', line 155

def enqueue_to_in(queue_id, interval, klass, *args)
  int = interval.to_f
  now = Time.now.to_f
  ts = (int < 1_000_000_000 ? now + int : int)

  item = { 'class' => klass, 'args' => args, 'at' => ts, 'queue_id' => queue_id }
  item.delete('at'.freeze) if ts <= now

  klass.client_push(item)
end
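The `at` handling above treats values below one billion as relative intervals (seconds from now) and anything larger as an absolute Unix timestamp. A minimal sketch of that normalization, independent of Roundhouse (the method name `schedule_at` is illustrative, not part of the library):

```ruby
# Sketch of enqueue_to_in's timestamp normalization: values under
# 1_000_000_000 are offsets from now, larger values are absolute
# epoch timestamps. Returns nil when the job should run immediately
# (enqueue_to_in drops the 'at' key in that case).
def schedule_at(interval, now = Time.now.to_f)
  int = interval.to_f
  ts = int < 1_000_000_000 ? now + int : int
  ts <= now ? nil : ts
end
```

So `schedule_at(180)` schedules three minutes out, while `schedule_at(Time.now.to_f + 180)` produces the same result via an absolute timestamp.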

.push(item) ⇒ Object



# File 'lib/roundhouse/client.rb', line 134

def push(item)
  new.push(item)
end

.push_bulk(items) ⇒ Object



# File 'lib/roundhouse/client.rb', line 138

def push_bulk(items)
  new.push_bulk(items)
end

.via(pool) ⇒ Object

Allows sharding of jobs across any number of Redis instances. All jobs defined within the block will use the given Redis connection pool.

pool = ConnectionPool.new { Redis.new }
Roundhouse::Client.via(pool) do
  SomeWorker.perform_async(1,2,3)
  SomeOtherWorker.perform_async(1,2,3)
end

Generally this is only needed for very large Roundhouse installs processing more than a thousand jobs per second. I do not recommend sharding unless you truly cannot scale any other way (e.g. by splitting your app into smaller apps). Some features, like the API, do not support sharding: they are designed to work against a single Redis instance.



# File 'lib/roundhouse/client.rb', line 116

def self.via(pool)
  raise NotImplementedError, 'Roundhouse does not support sharding at this point.'

  raise ArgumentError, "No pool given" if pool.nil?
  raise RuntimeError, "Roundhouse::Client.via is not re-entrant" if (x = Thread.current[:roundhouse_via_pool]) && x != pool
  Thread.current[:roundhouse_via_pool] = pool
  yield
ensure
  Thread.current[:roundhouse_via_pool] = nil
end

Instance Method Details

#middleware(&block) ⇒ Object

Define client-side middleware:

client = Roundhouse::Client.new
client.middleware do |chain|
  chain.use MyClientMiddleware
end
client.push('class' => 'SomeWorker', 'args' => [1,2,3])

All client instances default to the globally-defined Roundhouse.client_middleware chain, but you can change it per-instance as necessary.



# File 'lib/roundhouse/client.rb', line 20

def middleware(&block)
  @chain ||= Roundhouse.client_middleware
  if block_given?
    @chain = @chain.dup
    yield @chain
  end
  @chain
end
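A client middleware is any object responding to `call`; yielding continues the chain, while returning without yielding halts the push (which is why #push below can return nil). A minimal illustrative middleware, with a hypothetical class name and size limit:

```ruby
# Illustrative client middleware: rejects jobs whose serialized args
# look too large. Yielding lets the push proceed; returning a falsy
# value without yielding stops it.
class RejectLargePayloads
  def call(worker_class, item, queue_id)
    return false if item['args'].to_s.bytesize > 10_000  # drop oversized jobs
    yield
  end
end
```

It would then be registered with `chain.use RejectLargePayloads` inside the middleware block shown above.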

#push(item) ⇒ Object

The main method used to push a job to Redis. Accepts a number of options:

queue_id - integer queue_id (required, no default)
class - the worker class to call, required
args - an array of simple arguments to the perform method, must be JSON-serializable
retry - whether to retry this job if it fails, true or false, default true
backtrace - whether to save any error backtrace, default false

All options must be strings, not symbols. NB: because we are serializing to JSON, all symbols in 'args' will be converted to strings.

Returns a unique Job ID. If middleware stops the job, nil will be returned instead.

Example:

push('queue_id' => 1, 'class' => MyWorker, 'args' => ['foo', 1, :bat => 'bar'])


# File 'lib/roundhouse/client.rb', line 64

def push(item)
  normed = normalize_item(item)
  payload = process_single(item['class'], normed)

  if payload
    raw_push([payload])
    payload['jid']
  end
end
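The symbol-to-string conversion noted above can be demonstrated with a plain JSON round trip, independent of Roundhouse:

```ruby
require 'json'

# Job args are serialized to JSON on push, so symbols survive the
# round trip only as strings: symbol keys and values both convert.
args = ['foo', 1, { bat: 'bar' }]
round_tripped = JSON.parse(args.to_json)
# round_tripped == ["foo", 1, { "bat" => "bar" }]
```

A worker's perform method should therefore expect string keys, never symbols.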

#push_bulk(items) ⇒ Object

Push a large number of jobs to Redis. In practice this method is only useful if you are pushing tens of thousands of jobs or more, or if you need to ensure that a batch doesn’t complete prematurely. This method basically cuts down on the redis round trip latency.

Note: the Roundhouse implementation does not use MULTI, so this will not be as fast as Sidekiq's. As such, it is not officially supported.

Takes the same arguments as #push except that args is expected to be an Array of Arrays. All other keys are duplicated for each job. Each job is run through the client middleware pipeline and each job gets its own Job ID as normal.

Returns an array of the pushed jobs' jids. The number of jobs pushed can be less than the number given if the middleware stopped processing for one or more jobs.



# File 'lib/roundhouse/client.rb', line 90

def push_bulk(items)
  Roundhouse.logger.warn '#push_bulk is not officially supported. Use at your own risk.'
  normed = normalize_item(items)
  payloads = items['args'].map do |args|
    raise ArgumentError, "Bulk arguments must be an Array of Arrays: [[1], [2]]" if !args.is_a?(Array)
    process_single(items['class'], normed.merge('args' => args, 'jid' => SecureRandom.hex(12), 'enqueued_at' => Time.now.to_f))
  end.compact

  raw_push(payloads) if !payloads.empty?
  payloads.collect { |payload| payload['jid'] }
end
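The fan-out above — one payload and one fresh jid per args array — can be sketched in isolation (the `build_payloads` helper is illustrative, not Roundhouse's internals verbatim):

```ruby
require 'securerandom'

# Sketch of push_bulk's fan-out: every element of args_list must be
# an Array, and each one becomes its own payload with a unique jid.
def build_payloads(base, args_list)
  args_list.map do |args|
    raise ArgumentError, 'Bulk arguments must be an Array of Arrays: [[1], [2]]' unless args.is_a?(Array)
    base.merge('args' => args, 'jid' => SecureRandom.hex(12))
  end
end
```

In the real method each merged payload is additionally run through the client middleware (and dropped if the middleware halts), which is how fewer jids than inputs can come back.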

#raw_push(payloads) ⇒ Object



# File 'lib/roundhouse/client.rb', line 170

def raw_push(payloads)
  @redis_pool.with do |conn|
    Roundhouse::Monitor.push_job(conn, payloads)
  end
  true
end

#raw_push_real ⇒ Object



# File 'lib/roundhouse/testing.rb', line 60

def raw_push(payloads)
  @redis_pool.with do |conn|
    Roundhouse::Monitor.push_job(conn, payloads)
  end
  true
end