Class: Delayed::Backend::ActiveRecord::Job
Overview
A job object that is persisted to the database. Contains the work object as a YAML field.
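Jobs are normally enqueued through the gem's delay proxy (as in the stress-test example under ready_to_run below) rather than instantiated directly; the serialized work object ends up in the row's handler column. A minimal, hypothetical sketch (the Notifier class, method, and options shown are illustrative, not part of this class):

  # enqueue a job; the method call is captured and serialized as YAML
  job = Notifier.delay(priority: 20, queue: "default").deliver_welcome(user_id)
  job.payload_object # => the deserialized work object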
Direct Known Subclasses
Failed
Defined Under Namespace
Classes: Failed
Constant Summary
Constants included from Base
Base::ON_HOLD_BLOCKER, Base::ON_HOLD_COUNT, Base::ON_HOLD_LOCKED_BY
Instance Attribute Summary
Class Method Summary
- .advisory_lock(lock_name) ⇒ Object
- .all_available(queue = Delayed::Settings.queue, min_priority = nil, max_priority = nil, forced_latency: nil) ⇒ Object
- .apply_temp_strand!(job_scope, max_concurrent: 1) ⇒ Object
  Given a scope of non-stranded queued jobs, applies a temporary strand to throttle their execution; returns [job_count, new_strand]. (Designed for use in a Rails console or the Canvas Jobs interface.)
- .attempt_advisory_lock(lock_name) ⇒ Object
- .bulk_update(action, opts) ⇒ Object
  Perform a bulk update of a set of jobs. action is :hold, :unhold, or :destroy. To specify the jobs to act on, pass either opts[:ids] (a list of job ids) or opts[:flavor] (optionally with opts[:query]) to act on all jobs of that flavor.
- .by_priority ⇒ Object
- .clear_locks!(worker_name) ⇒ Object
  When a worker is exiting, make sure we don't have any locked jobs.
- .create(attributes) ⇒ Object
- .create_singleton(options) ⇒ Object
  Create the job on the specified strand, but only if there aren't any other non-running jobs on that strand.
- .current ⇒ Object
- .failed ⇒ Object
- .find_available(limit, queue = Delayed::Settings.queue, min_priority = nil, max_priority = nil) ⇒ Object
- .future ⇒ Object
- .get_and_lock_next_available(worker_names, queue = Delayed::Settings.queue, min_priority = nil, max_priority = nil, prefetch: 0, prefetch_owner: nil, forced_latency: nil) ⇒ Object
- .jobs_count(flavor, query = nil) ⇒ Object
  Get the total job count for the given flavor. See list_jobs for documentation on the arguments.
- .list_jobs(flavor, limit, offset = 0, query = nil) ⇒ Object
  Get a list of jobs of the given flavor in the given queue. flavor is :current, :future, :failed, :strand, or :tag; the meaning of query depends on the flavor (see the method details below).
- .maybe_silence_periodic_log ⇒ Object
- .n_strand_options(strand_name, num_strands) ⇒ Object
  Rather than changing the strand and balancing at queue time (the previous behavior), this keeps the strand intact and uses triggers to limit the number running.
- .prefetch_jobs_lock_name ⇒ Object
- .processes_locked_locally(name: nil) ⇒ Object
- .ready_to_run(forced_latency: nil) ⇒ Object
- .reconnect! ⇒ Object
- .running ⇒ Object
- .running_jobs ⇒ Object
- .scope_for_flavor(flavor, query) ⇒ Object
- .strand_size(strand) ⇒ Object
- .tag_counts(flavor, limit, offset = 0) ⇒ Object
  Returns a list of hashes { :tag => tag_name, :count => current_count } in descending count order. flavor is :current or :all.
- .unlock(jobs) ⇒ Object
- .unlock_orphaned_prefetched_jobs ⇒ Object
Instance Method Summary
Methods included from Base
#batch?, #expired?, #failed?, #full_name, #hold!, included, #inferred_max_attempts, #initialize_defaults, #invoke_job, #locked?, #name, #on_hold?, #payload_object, #payload_object=, #permanent_failure, #reschedule, #reschedule_at, #unhold!, #unlock
#encode_with, load_for_delayed_job
Instance Attribute Details
#enqueue_result ⇒ Object
Returns the value of attribute enqueue_result.
# File 'lib/delayed/backend/active_record.rb', line 28
def enqueue_result
@enqueue_result
end
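As set by #single_step_create below, the value is :inserted, :updated, or :dropped, which tells you what a singleton enqueue actually did. A hedged sketch (the singleton name and payload object are hypothetical):

  work = SomeWorkObject.new # hypothetical payload object
  job = Delayed::Job.create(payload_object: work, singleton: "stats:refresh", on_conflict: :use_earliest)
  case job.enqueue_result
  when :inserted then :created_a_new_row
  when :updated  then :updated_the_existing_pending_job
  when :dropped  then :enqueue_was_a_no_op
  end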
Class Method Details
.advisory_lock(lock_name) ⇒ Object
# File 'lib/delayed/backend/active_record.rb', line 53
def advisory_lock(lock_name)
fn_name = connection.quote_table_name("half_md5_as_bigint")
lock_name = connection.quote_string(lock_name)
connection.execute("SELECT pg_advisory_xact_lock(#{fn_name}('#{lock_name}'));")
end
.all_available(queue = Delayed::Settings.queue, min_priority = nil, max_priority = nil, forced_latency: nil) ⇒ Object
# File 'lib/delayed/backend/active_record.rb', line 481
def self.all_available(queue = Delayed::Settings.queue,
min_priority = nil,
max_priority = nil,
forced_latency: nil)
min_priority ||= Delayed::MIN_PRIORITY
max_priority ||= Delayed::MAX_PRIORITY
check_queue(queue)
check_priorities(min_priority, max_priority)
ready_to_run(forced_latency:)
.where(priority: min_priority..max_priority, queue:)
.by_priority
end
.apply_temp_strand!(job_scope, max_concurrent: 1) ⇒ Object
Given a scope of non-stranded queued jobs, applies a temporary strand to throttle their execution. Returns [job_count, new_strand]. (This is designed for use in a Rails console or the Canvas Jobs interface.)
# File 'lib/delayed/backend/active_record.rb', line 331
def self.apply_temp_strand!(job_scope, max_concurrent: 1)
if job_scope.where("strand IS NOT NULL OR singleton IS NOT NULL").exists?
raise ArgumentError, "can't apply strand to already stranded jobs"
end
job_count = 0
new_strand = "tmp_strand_#{SecureRandom.alphanumeric(16)}"
::Delayed::Job.transaction do
job_count = job_scope.update_all(strand: new_strand, max_concurrent:, next_in_strand: false)
::Delayed::Job.where(strand: new_strand).order(:id).limit(max_concurrent).update_all(next_in_strand: true)
end
[job_count, new_strand]
end
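For example, from a Rails console (the tag is hypothetical):

  scope = Delayed::Job.current.where(tag: "ExpensiveReport#generate", strand: nil, singleton: nil)
  job_count, new_strand = Delayed::Job.apply_temp_strand!(scope, max_concurrent: 2)
  Delayed::Job.strand_size(new_strand) # watch the backlog drain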
.attempt_advisory_lock(lock_name) ⇒ Object
# File 'lib/delayed/backend/active_record.rb', line 47
def attempt_advisory_lock(lock_name)
fn_name = connection.quote_table_name("half_md5_as_bigint")
lock_name = connection.quote_string(lock_name)
connection.select_value("SELECT pg_try_advisory_xact_lock(#{fn_name}('#{lock_name}'));")
end
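Both advisory_lock and attempt_advisory_lock hash the lock name through the half_md5_as_bigint database function and take a transaction-scoped (pg_*_advisory_xact_lock) lock, so they must be called inside a transaction. A sketch with a hypothetical lock name:

  Delayed::Job.transaction do
    next unless Delayed::Job.attempt_advisory_lock("nightly-maintenance") # non-blocking variant
    # ... work that must not run concurrently across processes ...
  end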
.bulk_update(action, opts) ⇒ Object
Perform a bulk update of a set of jobs. action is :hold, :unhold, or :destroy. To specify the jobs to act on, pass either opts[:ids] (a list of job ids) or opts[:flavor] (optionally with opts[:query]) to act on all jobs of that flavor.
# File 'lib/delayed/backend/active_record.rb', line 277
def self.bulk_update(action, opts)
raise("Can't #{action} failed jobs") if opts[:flavor].to_s == "failed" && action.to_s != "destroy"
scope = if opts[:ids]
if opts[:flavor] == "failed"
Delayed::Job::Failed.where(id: opts[:ids])
else
where(id: opts[:ids])
end
elsif opts[:flavor]
scope_for_flavor(opts[:flavor], opts[:query])
end
return 0 unless scope
case action.to_s
when "hold"
scope = scope.where(locked_by: nil)
scope.update_all(locked_by: ON_HOLD_LOCKED_BY, locked_at: db_time_now, attempts: ON_HOLD_COUNT)
when "unhold"
now = db_time_now
scope = scope.where(locked_by: ON_HOLD_LOCKED_BY)
scope.update_all([<<~SQL.squish, now, now])
  locked_by=NULL, locked_at=NULL, attempts=0, run_at=(CASE WHEN run_at > ? THEN run_at ELSE ? END), failed_at=NULL
SQL
when "destroy"
scope = scope.where("locked_by IS NULL OR locked_by=?", ON_HOLD_LOCKED_BY) unless opts[:flavor] == "failed"
scope.delete_all
end
end
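For example (the tag and ids are hypothetical):

  # put every queued job with a given tag on hold
  Delayed::Job.bulk_update(:hold, flavor: "tag", query: "Course#recompute")
  # destroy a specific set of jobs by id
  Delayed::Job.bulk_update(:destroy, ids: [101, 102, 103])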
.by_priority ⇒ Object
# File 'lib/delayed/backend/active_record.rb', line 208
def self.by_priority
order(:priority, :run_at, :id)
end
.clear_locks!(worker_name) ⇒ Object
When a worker is exiting, make sure we don’t have any locked jobs.
# File 'lib/delayed/backend/active_record.rb', line 213
def self.clear_locks!(worker_name)
where(locked_by: worker_name).update_all(locked_by: nil, locked_at: nil)
end
.create(attributes) ⇒ Object
# File 'lib/delayed/backend/active_record.rb', line 40
def create(attributes, &)
on_conflict = attributes.delete(:on_conflict)
job = new(attributes, &)
job.single_step_create(on_conflict:)
end
.create_singleton(options) ⇒ Object
Create the job on the specified strand, but only if there aren't any other non-running jobs on that strand. (In other words, the job will still be created if there is another job on the strand, as long as that job is already running.)
# File 'lib/delayed/backend/active_record.rb', line 500
def self.create_singleton(options)
strand = options[:singleton]
on_conflict = options.delete(:on_conflict) || :use_earliest
transaction_for_singleton(singleton, on_conflict) do
job = where(strand:, locked_at: nil).next_in_strand_order.first
new_job = new(options)
if job
new_job.initialize_defaults
job.run_at =
case on_conflict
when :use_earliest, :patient
[job.run_at, new_job.run_at].min
when :overwrite
new_job.run_at
when :loose
job.run_at
end
job.handler = new_job.handler if on_conflict == :overwrite
job.save! if job.changed?
else
new_job.save!
end
job || new_job
end
end
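For example, to keep at most one pending copy of a piece of work, preserving the earliest requested run time if one is already queued (the names below are hypothetical):

  Delayed::Job.create_singleton(
    singleton: "cache_warmer",         # hypothetical singleton name
    payload_object: warmer,            # hypothetical work object
    run_at: 10.minutes.from_now,
    on_conflict: :use_earliest         # or :overwrite / :loose, per the case statement above
  )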
.current ⇒ Object
# File 'lib/delayed/backend/active_record.rb', line 179
def self.current
where("run_at<=?", db_time_now)
end
.failed ⇒ Object
# File 'lib/delayed/backend/active_record.rb', line 187
def self.failed
where.not(failed_at: nil)
end
.find_available(limit, queue = Delayed::Settings.queue, min_priority = nil, max_priority = nil) ⇒ Object
# File 'lib/delayed/backend/active_record.rb', line 474
def self.find_available(limit,
queue = Delayed::Settings.queue,
min_priority = nil,
max_priority = nil)
all_available(queue, min_priority, max_priority).limit(limit).to_a
end
.future ⇒ Object
# File 'lib/delayed/backend/active_record.rb', line 183
def self.future
where("run_at>?", db_time_now)
end
.get_and_lock_next_available(worker_names, queue = Delayed::Settings.queue, min_priority = nil, max_priority = nil, prefetch: 0, prefetch_owner: nil, forced_latency: nil) ⇒ Object
# File 'lib/delayed/backend/active_record.rb', line 354
def self.get_and_lock_next_available(worker_names,
queue = Delayed::Settings.queue,
min_priority = nil,
max_priority = nil,
prefetch: 0,
prefetch_owner: nil,
forced_latency: nil)
check_queue(queue)
check_priorities(min_priority, max_priority)
loop do
jobs = maybe_silence_periodic_log do
if connection.adapter_name == "PostgreSQL" && !Settings.select_random_from_batch
effective_worker_names = Array(worker_names)
lock = nil
lock = "FOR UPDATE SKIP LOCKED" if connection.postgresql_version >= 90_500
target_jobs = all_available(queue,
min_priority,
max_priority,
forced_latency:)
.limit(effective_worker_names.length + prefetch)
.lock(lock)
jobs_with_row_number = all.from(target_jobs)
.select("id, ROW_NUMBER() OVER () AS row_number")
updates = +"locked_by = CASE row_number "
effective_worker_names.each_with_index do |worker, i|
updates << "WHEN #{i + 1} THEN #{connection.quote(worker)} "
end
updates << "ELSE #{connection.quote(prefetch_owner)} " if prefetch_owner
updates << "END, locked_at = #{connection.quote(db_time_now)}"
query = " WITH limited_jobs AS (\#{jobs_with_row_number.to_sql})\n UPDATE \#{quoted_table_name} SET \#{updates} FROM limited_jobs WHERE limited_jobs.id=\#{quoted_table_name}.id\n RETURNING \#{quoted_table_name}.*\n SQL\n\n begin\n jobs = find_by_sql(query)\n rescue ::ActiveRecord::RecordNotUnique => e\n # if we got a unique violation on a singleton, it's because next_in_strand\n # somehow got set to true on the non-running job when there is a running\n # job. AFAICT this is not possible from inst-jobs itself, but has happened\n # in production - either due to manual manipulation of jobs, or possibly\n # a process in something like switchman-inst-jobs\n raise unless e.message.include?('\"index_delayed_jobs_on_singleton_running\"')\n\n # just repair the \"strand\"\n singleton = e.message.match(/Key \\(singleton\\)=\\((.+)\\) already exists.$/)[1]\n raise unless singleton\n\n transaction do\n # very conservatively lock the locked job, so that it won't unlock from underneath us and\n # leave an orphaned strand\n advisory_lock(\"singleton:\#{singleton}\")\n locked_jobs = where(singleton:).where.not(locked_by: nil).lock.pluck(:id)\n # if it's already gone, then we're good and should be able to just retry\n if locked_jobs.length == 1\n updated = where(singleton:, next_in_strand: true)\n .where(locked_by: nil)\n .update_all(next_in_strand: false)\n raise if updated.zero?\n end\n end\n\n retry\n end\n # because this is an atomic query, we don't have to return more jobs than we needed\n # to try and lock them, nor is there a possibility we need to try again because\n # all of the jobs we tried to lock had already been locked by someone else\n return jobs.first unless worker_names.is_a?(Array)\n\n result = jobs.index_by(&:locked_by)\n # all of the prefetched jobs can come back as an array\n result[prefetch_owner] = jobs.select { |j| j.locked_by == prefetch_owner } if prefetch_owner\n return result\n else\n batch_size = Settings.fetch_batch_size\n batch_size *= worker_names.length if worker_names.is_a?(Array)\n find_available(batch_size, queue, min_priority, max_priority)\n end\n end\n if jobs.empty?\n return worker_names.is_a?(Array) ? {} : nil\n end\n\n jobs = jobs.sort_by { rand } if Settings.select_random_from_batch\n if worker_names.is_a?(Array)\n result = {}\n jobs.each do |job|\n break if worker_names.empty?\n\n worker_name = worker_names.first\n if job.send(:lock_exclusively!, worker_name)\n result[worker_name] = job\n worker_names.shift\n end\n end\n return result\n else\n locked_job = jobs.detect do |job|\n job.send(:lock_exclusively!, worker_names)\n end\n return locked_job if locked_job\n end\n end\nend\n".squish
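A worker typically passes its own name to lock a single job; passing an array of worker names locks a batch in one atomic query and returns a hash keyed by worker name (the names and queue below are hypothetical):

  job  = Delayed::Job.get_and_lock_next_available("worker.host:1234")            # => Job or nil
  jobs = Delayed::Job.get_and_lock_next_available(["host:1", "host:2"], "slow")  # => { "host:1" => job, ... }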
.jobs_count(flavor, query = nil) ⇒ Object
Get the total job count for the given flavor. See list_jobs for documentation on the arguments.
# File 'lib/delayed/backend/active_record.rb', line 267
def self.jobs_count(flavor,
query = nil)
scope = scope_for_flavor(flavor, query)
scope.count
end
.list_jobs(flavor, limit, offset = 0, query = nil) ⇒ Object
Get a list of jobs of the given flavor in the given queue. flavor is :current, :future, :failed, :strand, or :tag. The meaning of query depends on the flavor: for :current and :future it is the queue name (defaults to Delayed::Settings.queue), for :strand it is the strand name, for :tag it is the tag name, and for :failed it is ignored.
# File 'lib/delayed/backend/active_record.rb', line 256
def self.list_jobs(flavor,
limit,
offset = 0,
query = nil)
scope = scope_for_flavor(flavor, query)
order = (flavor.to_s == "future") ? "run_at" : "id desc"
scope.order(order).limit(limit).offset(offset).to_a
end
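For example (the strand name is hypothetical):

  Delayed::Job.list_jobs(:current, 50)                      # 50 runnable jobs in the default queue, newest first
  Delayed::Job.list_jobs(:strand, 20, 0, "user_42/emails")  # first 20 jobs on a strand
  Delayed::Job.jobs_count(:failed)                          # total count of failed jobs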
.maybe_silence_periodic_log ⇒ Object
# File 'lib/delayed/backend/active_record.rb', line 346
def self.maybe_silence_periodic_log(&)
if Settings.silence_periodic_log
::ActiveRecord::Base.logger.silence(&)
else
yield
end
end
.n_strand_options(strand_name, num_strands) ⇒ Object
This replaces the previous behavior: rather than changing the strand and balancing at queue time, it keeps the strand intact and uses triggers to limit the number running.
# File 'lib/delayed/backend/active_record.rb', line 175
def self.n_strand_options(strand_name, num_strands)
{ strand: strand_name, max_concurrent: num_strands }
end
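For example (the strand name is arbitrary):

  Delayed::Job.n_strand_options("sis_imports", 5)
  # => { strand: "sis_imports", max_concurrent: 5 }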
.prefetch_jobs_lock_name ⇒ Object
# File 'lib/delayed/backend/active_record.rb', line 532
def self.prefetch_jobs_lock_name
"Delayed::Job.unlock_orphaned_prefetched_jobs"
end
.processes_locked_locally(name: nil) ⇒ Object
# File 'lib/delayed/backend/active_record.rb', line 527
def self.processes_locked_locally(name: nil)
name ||= Socket.gethostname rescue return []
where("locked_by LIKE ?", "#{name}:%").pluck(:locked_by).map { |locked_by| locked_by.split(":").last.to_i }
end
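locked_by values have the form "name:pid", so this returns the pids of processes on the given host (defaulting to the local hostname) that currently hold job locks. For example (the host name is hypothetical):

  Delayed::Job.processes_locked_locally                  # => e.g. [4242, 4243]
  Delayed::Job.processes_locked_locally(name: "jobs01")  # pids of lock-holders on another host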
.ready_to_run(forced_latency: nil) ⇒ Object
A nice stress test:

  10_000.times do |i|
    Kernel.delay(strand: 's1', run_at: 24.hours.ago + rand(24.hours.to_i)).system("echo #{i} >> test1.txt")
  end
  500.times { |i| "ohai".delay(run_at: 12.hours.ago + rand(24.hours.to_i)).reverse }

Then fire up your workers. You can check strand correctness with: diff test1.txt <(sort -n test1.txt)
# File 'lib/delayed/backend/active_record.rb', line 202
def self.ready_to_run(forced_latency: nil)
now = db_time_now
now -= forced_latency if forced_latency
where("run_at<=? AND locked_at IS NULL AND next_in_strand=?", now, true)
end
.reconnect! ⇒ Object
# File 'lib/delayed/backend/active_record.rb', line 35
def self.reconnect!
::ActiveRecord::Base.connection_handler.clear_all_connections!(nil)
end
.running ⇒ Object
# File 'lib/delayed/backend/active_record.rb', line 191
def self.running
where("locked_at IS NOT NULL AND locked_by<>'on hold'")
end
.running_jobs ⇒ Object
# File 'lib/delayed/backend/active_record.rb', line 221
def self.running_jobs
running.order(:locked_at)
end
.scope_for_flavor(flavor, query) ⇒ Object
# File 'lib/delayed/backend/active_record.rb', line 225
def self.scope_for_flavor(flavor, query)
scope = case flavor.to_s
when "current"
current
when "future"
future
when "failed"
Delayed::Job::Failed
when "strand"
where(strand: query)
when "tag"
where(tag: query)
else
raise ArgumentError, "invalid flavor: #{flavor.inspect}"
end
if %w[current future].include?(flavor.to_s)
queue = query.presence || Delayed::Settings.queue
scope = scope.where(queue:)
end
scope
end
.strand_size(strand) ⇒ Object
# File 'lib/delayed/backend/active_record.rb', line 217
def self.strand_size(strand)
where(strand:).count
end
.tag_counts(flavor, limit, offset = 0) ⇒ Object
Returns a list of hashes { :tag => tag_name, :count => current_count } in descending count order. flavor is :current or :all.
# File 'lib/delayed/backend/active_record.rb', line 312
def self.tag_counts(flavor,
limit,
offset = 0)
raise(ArgumentError, "invalid flavor: #{flavor}") unless %w[current all].include?(flavor.to_s)
scope = case flavor.to_s
when "current"
current
when "all"
self
end
scope = scope.group(:tag).offset(offset).limit(limit)
scope.order(Arel.sql("COUNT(tag) DESC")).count.map { |t, c| { tag: t, count: c } }
end
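For example (tag names are hypothetical):

  Delayed::Job.tag_counts(:current, 10)
  # => [{ tag: "Course#recompute", count: 123 }, { tag: "Mailer#deliver", count: 40 }, ...]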
.unlock(jobs) ⇒ Object
# File 'lib/delayed/backend/active_record.rb', line 547
def self.unlock(jobs)
unlocked = where(id: jobs).update_all(locked_at: nil, locked_by: nil)
jobs.each(&:unlock)
unlocked
end
.unlock_orphaned_prefetched_jobs ⇒ Object
# File 'lib/delayed/backend/active_record.rb', line 536
def self.unlock_orphaned_prefetched_jobs
transaction do
next unless attempt_advisory_lock(prefetch_jobs_lock_name)
horizon = db_time_now - (Settings.parent_process[:prefetched_jobs_timeout] * 4)
where("locked_by LIKE 'prefetch:%' AND locked_at<?", horizon).update_all(locked_at: nil, locked_by: nil)
end
end
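This is meant to run periodically to release jobs whose prefetching parent process died; prefetched jobs are recognizable by a locked_by value starting with "prefetch:". A hedged sketch using the gem's periodic job support (the job name and schedule are arbitrary):

  Delayed::Periodic.cron("Delayed::Job.unlock_orphaned_prefetched_jobs", "*/15 * * * *") do
    Delayed::Job.unlock_orphaned_prefetched_jobs
  end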
Instance Method Details
#create_and_lock!(worker) ⇒ Object
# File 'lib/delayed/backend/active_record.rb', line 599
def create_and_lock!(worker)
raise "job already exists" unless new_record?
if singleton
single_step_create
lock_exclusively!(worker)
return
end
self.locked_at = Delayed::Job.db_time_now
self.locked_by = worker
single_step_create
end
#destroy ⇒ Object
# File 'lib/delayed/backend/active_record.rb', line 143
def destroy
destroy_row
end
#fail! ⇒ Object
# File 'lib/delayed/backend/active_record.rb', line 615
def fail!
attrs = attributes
attrs["original_job_id"] = attrs.delete("id") if Failed.columns_hash.key?("original_job_id")
attrs["failed_at"] ||= self.class.db_time_now
attrs.delete("next_in_strand")
attrs.delete("max_concurrent")
self.class.transaction do
failed_job = Failed.create(attrs)
destroy
failed_job
end
rescue
destroy
raise
end
#lock_strand_on_create ⇒ Object
# File 'lib/delayed/backend/active_record.rb', line 164
def lock_strand_on_create
return unless strand.present? && instance_of?(Job)
fn_name = self.class.connection.quote_table_name("half_md5_as_bigint")
quoted_strand_name = self.class.connection.quote(strand)
self.class.connection.execute("SELECT pg_advisory_xact_lock(#{fn_name}(#{quoted_strand_name}))")
end
#single_step_create(on_conflict: nil) ⇒ Object
# File 'lib/delayed/backend/active_record.rb', line 60
def single_step_create(on_conflict: nil)
connection = self.class.connection
initialize_defaults
current_time = current_time_from_proper_timezone
all_timestamp_attributes_in_model.each do |column|
_write_attribute(column, current_time) unless attribute_present?(column)
end
attribute_names = attribute_names_for_partial_inserts
attribute_names = attributes_for_create(attribute_names)
values = attributes_with_values(attribute_names)
im = Arel::InsertManager.new(self.class.arel_table)
im.insert(values.transform_keys { |name| self.class.arel_table[name] })
lock_and_insert = values["strand"] && instance_of?(Job)
sql, binds = if lock_and_insert
connection.unprepared_statement do
connection.send(:to_sql_and_binds, im)
end
else
connection.send(:to_sql_and_binds, im)
end
sql = +sql
if singleton && instance_of?(Job)
sql << " ON CONFLICT (singleton) WHERE singleton IS NOT NULL AND locked_by IS NULL DO "
sql << case on_conflict
when :patient, :loose
"NOTHING"
when :overwrite
"UPDATE SET run_at=EXCLUDED.run_at, handler=EXCLUDED.handler"
else
"UPDATE SET run_at=EXCLUDED.run_at WHERE EXCLUDED.run_at<delayed_jobs.run_at"
end
end
sql << " RETURNING id, (xmax = 0) AS inserted"
if lock_and_insert
if values["strand"] && instance_of?(Job)
fn_name = connection.quote_table_name("half_md5_as_bigint")
quoted_strand = connection.quote(values["strand"].value)
sql = "SELECT pg_advisory_xact_lock(#{fn_name}(#{quoted_strand})); #{sql}"
end
result = connection.execute(sql, "#{self.class} Create")
self.id = result.values.first&.first
inserted = result.values.first&.second
result.clear
else
result = connection.exec_query(sql, "#{self.class} Create", binds)
self.id = connection.send(:last_inserted_id, result)
inserted = result.rows.first&.second
end
self.enqueue_result = if id.present? && inserted
:inserted
elsif id.present? && !inserted
:updated
else
:dropped
end
if id
@new_record = false
changes_applied
end
self
end
#transfer_lock!(from:, to:) ⇒ Object
# File 'lib/delayed/backend/active_record.rb', line 573
def transfer_lock!(from:, to:)
now = self.class.db_time_now
affected_rows = self.class.where(id: self, locked_by: from).update_all(locked_at: now, locked_by: to)
if affected_rows == 1
mark_as_locked!(now, to)
true
else
false
end
end