Module: DatastaxRails::Batches
Overview
Relation methods for operating on batches of records.
Instance Method Summary
- #find_each(options = {}) {|record| ... } ⇒ Object
  Yields each record that was found by the find options.
- #find_each_with_index(options = {}) ⇒ Object
  Same as #find_each but yields the index as a second parameter.
- #find_in_batches(options = {}) {|records| ... } ⇒ Object
  Yields each batch of records that was found by the find options as an array.
Instance Method Details
#find_each(options = {}) {|record| ... } ⇒ Object
Yields each record that was found by the find options. The find is performed by find_in_batches with a batch size of 1000 (or as specified by the :batch_size option).
Example:
Person.where(in_college: true).find_each do |person|
person.party_all_night!
end
Note: This method is only intended for batch processing of large numbers of records that wouldn't all fit in memory at once. If you just need to loop over fewer than 1000 records, it's probably better to use the regular find methods.
You can also pass the :start option to specify an offset that controls the starting point.
# File 'lib/datastax_rails/relation/batches.rb', line 24

def find_each(options = {})
  find_in_batches(options) do |records|
    records.each { |record| yield record }
  end
end
#find_each_with_index(options = {}) ⇒ Object
Same as #find_each but yields the index as a second parameter.
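The index is global across batches, not reset per batch. A minimal pure-Ruby sketch of that behavior (no DatastaxRails involved; the batches here are just simulated as nested arrays):

```ruby
# Sketch: one running index across all yielded batches, mirroring how
# #find_each_with_index wraps find_in_batches.
def each_with_running_index(batches)
  idx = 0
  batches.each do |records|
    records.each do |record|
      yield record, idx
      idx += 1
    end
  end
end

seen = []
each_with_running_index([[:a, :b], [:c]]) { |rec, i| seen << [rec, i] }
# seen == [[:a, 0], [:b, 1], [:c, 2]]
```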
# File 'lib/datastax_rails/relation/batches.rb', line 31

def find_each_with_index(options = {})
  idx = 0
  find_in_batches(options) do |records|
    records.each do |record|
      yield record, idx
      idx += 1
    end
  end
end
#find_in_batches(options = {}) {|records| ... } ⇒ Object
Yields each batch of records that was found by the find options as an array. The size of each batch is set by the :batch_size option; the default is 1000.
You can control the starting point for the batch processing by supplying the :start option. This is especially useful if you want multiple workers dealing with the same processing queue. You can make worker 1 handle all the records between id 0 and 10,000 and worker 2 handle everything from 10,000 onward (by setting the :start option on that worker).
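A pure-Ruby sketch of that partitioning, assuming integer-comparable keys (a stand-in for `Person.find_in_batches(start: ...)`; the filter `key > start` mirrors what the relation does with `greater_than`):

```ruby
# Sketch: a second worker skips everything up to a known key via :start.
KEYS = (1..12).to_a

def batches_from(keys, start: nil, batch_size: 5)
  keys = keys.select { |k| start.nil? || k > start }
  keys.each_slice(batch_size).to_a
end

worker2 = batches_from(KEYS, start: 10) # handles keys strictly after 10
# worker2 == [[11, 12]]
```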
It’s not possible to set the order. For Cassandra-based batching, the order follows Cassandra’s key placement strategy; for Solr-based batching, records come back in ascending order of the primary key. You can’t set the limit either; it’s used internally to control the batch size.
Example:
Person.where(in_college: true).find_in_batches do |group|
sleep(50) # Make sure it doesn't get too crowded in there!
group.each { |person| person.party_all_night! }
end
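Under the hood this is keyset pagination: fetch up to batch_size records ordered by primary key, remember the last key seen, then ask for records with a strictly greater key. A runnable pure-Ruby sketch of that loop (rows are plain hashes here, not DatastaxRails models):

```ruby
# Sketch of the keyset-pagination loop used by #find_in_batches.
ROWS = (1..7).map { |id| { id: id } }

def paginate_by_key(rows, batch_size: 3, start: nil)
  records = rows.select { |r| start.nil? || r[:id] > start }.first(batch_size)
  while records.any?
    yield records
    # A short batch means we have reached the end of the data.
    break if records.size < batch_size
    offset = records.last[:id]
    records = rows.select { |r| r[:id] > offset }.first(batch_size)
  end
end

batches = []
paginate_by_key(ROWS) { |b| batches << b.map { |r| r[:id] } }
# batches == [[1, 2, 3], [4, 5, 6], [7]]
```

The real method additionally bumps the offset by one for Solr-backed queries (since `greater_than` is exclusive of the batch boundary only after the UUID is incremented) and raises if the ordering key was excluded by a custom select clause.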
# File 'lib/datastax_rails/relation/batches.rb', line 67

# rubocop:disable Metrics/PerceivedComplexity
def find_in_batches(options = {})
  relation = self

  unless @order_values.empty?
    DatastaxRails::Base.logger.warn('Scoped order and limit are ignored, ' \
                                    "it's forced to be batch order and batch size")
    relation = relation.clone
    relation.order_values.clear
  end

  if (finder_options = options.except(:start, :batch_size)).present?
    fail "You can't specify an order, it's forced to be #{@klass.primary_key}" if options[:order].present?
    fail "You can't specify a limit, it's forced to be the batch_size" if options[:limit].present?

    relation = apply_finder_options(finder_options)
  end

  start = options.delete(:start)
  batch_size = options.delete(:batch_size) || 1000

  relation = relation.limit(batch_size)
  relation = relation.order(@klass.primary_key) if relation.use_solr_value
  records = start ? relation.where(@klass.primary_key).greater_than(start).to_a : relation.to_a

  while records.size > 0
    records_size = records.size
    offset = records.last.__id
    yield records

    break if records_size < batch_size

    if offset
      offset = ::Cql::Uuid.new(offset.value + 1) if relation.use_solr_value
      records = relation.where(@klass.primary_key).greater_than(offset).to_a
    else
      fail 'Batch order not included in the custom select clause'
    end
  end
end