Class: Gitlab::Database::PartitioningMigrationHelpers::BackfillPartitionedTable
- Inherits: BackgroundMigration::BaseJob
  - Object
  - BackgroundMigration::BaseJob
  - Gitlab::Database::PartitioningMigrationHelpers::BackfillPartitionedTable
- Includes: DynamicModelHelpers
- Defined in: lib/gitlab/database/partitioning_migration_helpers/backfill_partitioned_table.rb
Overview
Class that will generically copy data from a given table into its corresponding partitioned table
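Conceptually, each copy step amounts to a range-bounded INSERT ... SELECT from the source table into the partitioned table. A minimal sketch of that idea, assuming hypothetical table names (audit_events / audit_events_part); the class delegates the real work to BulkCopy, whose exact SQL may differ:

# Hedged illustration only: one range-bounded copy step. The actual job
# delegates this to Gitlab::Database::PartitioningMigrationHelpers::BulkCopy.
require 'active_record'

ActiveRecord::Base.connection.execute(<<~SQL)
  INSERT INTO audit_events_part
  SELECT * FROM audit_events
  WHERE id BETWEEN 1 AND 2500
SQL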
Constant Summary
- SUB_BATCH_SIZE = 2_500
- PAUSE_SECONDS = 0.25
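These two constants govern throttling: each sub-batch copies up to SUB_BATCH_SIZE rows, then the job sleeps for PAUSE_SECONDS before the next one, so the backfill does not saturate the database. An illustrative pacing loop (not the class's code; copy_rows is a hypothetical stand-in for BulkCopy#copy_between):

SUB_BATCH_SIZE = 2_500
PAUSE_SECONDS = 0.25

# Hypothetical stand-in for the real copy step, for illustration only.
def copy_rows(from_id, to_id)
  puts "copying ids #{from_id}..#{to_id}"
end

start_id, stop_id = 1, 10_000
(start_id..stop_id).step(SUB_BATCH_SIZE) do |chunk_start|
  chunk_stop = [chunk_start + SUB_BATCH_SIZE - 1, stop_id].min
  copy_rows(chunk_start, chunk_stop)
  sleep(PAUSE_SECONDS) # throttle between sub-batches
end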
Constants included from DynamicModelHelpers
DynamicModelHelpers::BATCH_SIZE
Instance Method Summary
Methods included from DynamicModelHelpers
#define_batchable_model, #each_batch, #each_batch_range
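#define_batchable_model builds an anonymous, batchable ActiveRecord model for an arbitrary table name, which is how code like this job iterates a table without a concrete model class. A hedged sketch, assuming a context that includes DynamicModelHelpers and a GitLab version whose define_batchable_model accepts a connection: keyword (the signature has varied across versions):

# Hedged sketch: build an anonymous model for a table and walk it in batches.
model = define_batchable_model('audit_events', connection: ActiveRecord::Base.connection)
model.each_batch(of: 1_000) do |batch|
  # batch is an ActiveRecord::Relation scoped to up to 1_000 rows
end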
Methods inherited from BackgroundMigration::BaseJob
#initialize
Constructor Details
This class inherits a constructor from Gitlab::BackgroundMigration::BaseJob
Instance Method Details
#perform(start_id, stop_id, source_table, partitioned_table, source_column) ⇒ Object
# File 'lib/gitlab/database/partitioning_migration_helpers/backfill_partitioned_table.rb', line 13

def perform(start_id, stop_id, source_table, partitioned_table, source_column)
  # Copying inside an open transaction would hold locks for the whole batch
  # and defeat the sub-batch pausing below, so bail out loudly.
  if transaction_open?
    raise "Aborting job to backfill partitioned #{source_table} table! Do not run this job in a transaction block!"
  end

  unless table_exists?(partitioned_table)
    logger.warn "exiting backfill migration because partitioned table '#{partitioned_table}' does not exist. " \
      "This could be due to the migration being rolled back after migration jobs were enqueued in sidekiq"
    return
  end

  bulk_copy = Gitlab::Database::PartitioningMigrationHelpers::BulkCopy.new(source_table, partitioned_table, source_column, connection: connection)
  parent_batch_relation = relation_scoped_to_range(source_table, source_column, start_id, stop_id)

  # Walk the assigned ID range in sub-batches, copying each one and pausing
  # between copies to limit load on the database.
  parent_batch_relation.each_batch(of: SUB_BATCH_SIZE) do |sub_batch|
    sub_start_id, sub_stop_id = sub_batch.pick(Arel.sql("MIN(#{source_column}), MAX(#{source_column})"))

    bulk_copy.copy_between(sub_start_id, sub_stop_id)
    sleep(PAUSE_SECONDS)
  end

  mark_jobs_as_succeeded(start_id, stop_id, source_table, partitioned_table, source_column)
end
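For reference, a hedged sketch of invoking the job directly (for example from a Rails console), assuming BaseJob's inherited constructor accepts a connection: keyword and using stand-in table and column names; in production this class runs as a queued background migration rather than being called inline:

# Run one backfill batch inline. IDs and table names here are illustrative.
job = Gitlab::Database::PartitioningMigrationHelpers::BackfillPartitionedTable.new(
  connection: ActiveRecord::Base.connection
)
job.perform(1, 10_000, 'audit_events', 'audit_events_part', 'id')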