Module: Gitlab::Database::MigrationHelpers

Constant Summary

MAX_IDENTIFIER_NAME_LENGTH = 63
PERMITTED_TIMESTAMP_COLUMNS = %i[created_at updated_at deleted_at].to_set.freeze
DEFAULT_TIMESTAMP_COLUMNS = %i[created_at updated_at].freeze

Constants included from Gitlab::Database::Migrations::BackgroundMigrationHelpers

Gitlab::Database::Migrations::BackgroundMigrationHelpers::BACKGROUND_MIGRATION_BATCH_SIZE, Gitlab::Database::Migrations::BackgroundMigrationHelpers::BACKGROUND_MIGRATION_JOB_BUFFER_SIZE

Instance Method Summary

Methods included from Gitlab::Database::Migrations::BackgroundMigrationHelpers

#bulk_migrate_async, #bulk_migrate_in, #bulk_queue_background_migration_jobs_by_range, #migrate_async, #migrate_in, #perform_background_migration_inline?, #queue_background_migration_jobs_by_range_at_intervals, #with_migration_context

Instance Method Details

#add_check_constraint(table, check, constraint_name, validate: true) ⇒ Object

Adds a check constraint to a table.

This method is the generic helper for adding any check constraint. More specialized helpers may use it (e.g. add_text_limit or add_not_null).

This method only requires minimal locking:

  • The constraint is added using NOT VALID, which allows us to add the check constraint without validating it.

  • The check will be enforced for new data (inserts) coming in.

  • If `validate: true` the constraint is also validated. Otherwise, validate_check_constraint() can be used at a later stage.

  • Check the comments on add_concurrent_foreign_key for more info.

table - The table the constraint will be added to.
check - The check clause to add,
        e.g. 'char_length(name) <= 5' or 'store IS NOT NULL'.
constraint_name - The name of the check constraint (otherwise auto-generated).
                  Should be unique per table (not per column).
validate - Whether to validate the constraint in this call.


# File 'lib/gitlab/database/migration_helpers.rb', line 1086

def add_check_constraint(table, check, constraint_name, validate: true)
  validate_check_constraint_name!(constraint_name)

  # Transactions would result in ALTER TABLE locks being held for the
  # duration of the transaction, defeating the purpose of this method.
  if transaction_open?
    raise 'add_check_constraint can not be run inside a transaction'
  end

  if check_constraint_exists?(table, constraint_name)
    warning_message = <<~MESSAGE
      Check constraint was not created because it exists already
      (this may be due to an aborted migration or similar)
      table: #{table}, check: #{check}, constraint name: #{constraint_name}
    MESSAGE

    Gitlab::AppLogger.warn warning_message
  else
    # Only add the constraint without validating it
    # Even though it is fast, ADD CONSTRAINT requires an EXCLUSIVE lock
    # Use with_lock_retries to make sure that this operation
    # will not timeout on tables accessed by many processes
    with_lock_retries do
      execute <<-EOF.strip_heredoc
      ALTER TABLE #{table}
      ADD CONSTRAINT #{constraint_name}
      CHECK ( #{check} )
      NOT VALID;
      EOF
    end
  end

  if validate
    validate_check_constraint(table, constraint_name)
  end
end
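As a usage sketch (a hypothetical migration; the table, check clause, and constraint name are illustrative, not from the source):

```ruby
# Hypothetical migration using add_check_constraint / remove_check_constraint.
class AddPriceCheckToProducts < ActiveRecord::Migration[6.0]
  include Gitlab::Database::MigrationHelpers

  # Required: the helper raises if run inside a transaction.
  disable_ddl_transaction!

  def up
    # Added as NOT VALID first, then validated in the same call
    # (validate: true is the default).
    add_check_constraint :products, 'price > 0', 'check_products_price_positive'
  end

  def down
    remove_check_constraint :products, 'check_products_price_positive'
  end
end
```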

#add_column_with_default(table, column, type, default:, limit: nil, allow_null: false) ⇒ Object

Deprecated.

With PostgreSQL 11, adding columns with a default does not lead to a table rewrite anymore. As such, this method is no longer needed and the default `add_column` helper should be used. This helper is subject to removal in a >13.0 release.

Adds a column with a default value without locking an entire table.


# File 'lib/gitlab/database/migration_helpers.rb', line 461

def add_column_with_default(table, column, type, default:, limit: nil, allow_null: false)
  raise 'Deprecated: add_column_with_default does not support being passed blocks anymore' if block_given?

  add_column(table, column, type, default: default, limit: limit, null: allow_null)
end

#add_concurrent_foreign_key(source, target, column:, on_delete: :cascade, name: nil, validate: true) ⇒ Object

Adds a foreign key with only minimal locking on the tables involved.

This method only requires minimal locking.

source - The source table containing the foreign key.
target - The target table the key points to.
column - The name of the column to create the foreign key on.
on_delete - The action to perform when associated data is removed,
            defaults to "CASCADE".
name - The name of the foreign key.
validate - Whether to validate the foreign key in this call.


# File 'lib/gitlab/database/migration_helpers.rb', line 166

def add_concurrent_foreign_key(source, target, column:, on_delete: :cascade, name: nil, validate: true)
  # Transactions would result in ALTER TABLE locks being held for the
  # duration of the transaction, defeating the purpose of this method.
  if transaction_open?
    raise 'add_concurrent_foreign_key can not be run inside a transaction'
  end

  options = {
    column: column,
    on_delete: on_delete,
    name: name.presence || concurrent_foreign_key_name(source, column)
  }

  if foreign_key_exists?(source, target, options)
    warning_message = "Foreign key not created because it exists already " \
      "(this may be due to an aborted migration or similar): " \
      "source: #{source}, target: #{target}, column: #{options[:column]}, "\
      "name: #{options[:name]}, on_delete: #{options[:on_delete]}"

    Gitlab::AppLogger.warn warning_message
  else
    # Using NOT VALID allows us to create a key without immediately
    # validating it. This means we keep the ALTER TABLE lock only for a
    # short period of time. The key _is_ enforced for any newly created
    # data.

    with_lock_retries do
      execute <<-EOF.strip_heredoc
      ALTER TABLE #{source}
      ADD CONSTRAINT #{options[:name]}
      FOREIGN KEY (#{options[:column]})
      REFERENCES #{target} (id)
      #{on_delete_statement(options[:on_delete])}
      NOT VALID;
      EOF
    end
  end

  # Validate the existing constraint. This can potentially take a very
  # long time to complete, but fortunately does not lock the source table
  # while running.
  # Disable this check by passing `validate: false` to the method call
  # The check will be enforced for new data (inserts) coming in,
  # but validating existing data is delayed.
  #
  # Note this is a no-op in case the constraint is VALID already

  if validate
    disable_statement_timeout do
      execute("ALTER TABLE #{source} VALIDATE CONSTRAINT #{options[:name]};")
    end
  end
end
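A usage sketch (hypothetical migration; table and column names are illustrative):

```ruby
# Hypothetical migration adding a FK from issues.project_id to projects.id.
class AddProjectForeignKeyToIssues < ActiveRecord::Migration[6.0]
  include Gitlab::Database::MigrationHelpers

  disable_ddl_transaction!

  def up
    add_concurrent_foreign_key :issues, :projects, column: :project_id, on_delete: :cascade
  end

  def down
    # Rails' built-in remove_foreign_key is sufficient for the rollback.
    remove_foreign_key :issues, column: :project_id
  end
end
```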

#add_concurrent_index(table_name, column_name, options = {}) ⇒ Object

Creates a new index concurrently.

Example:

add_concurrent_index :users, :some_column

See Rails' `add_index` for more info on the available arguments.


# File 'lib/gitlab/database/migration_helpers.rb', line 80

def add_concurrent_index(table_name, column_name, options = {})
  if transaction_open?
    raise 'add_concurrent_index can not be run inside a transaction, ' \
      'you can disable transactions by calling disable_ddl_transaction! ' \
      'in the body of your migration class'
  end

  options = options.merge({ algorithm: :concurrently })

  if index_exists?(table_name, column_name, options)
    Gitlab::AppLogger.warn "Index not created because it already exists (this may be due to an aborted migration or similar): table_name: #{table_name}, column_name: #{column_name}"
    return
  end

  disable_statement_timeout do
    add_index(table_name, column_name, options)
  end
end
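A sketch of the transaction requirement mentioned in the error message (hypothetical migration; table and column names are illustrative):

```ruby
class IndexUsersOnUsername < ActiveRecord::Migration[6.0]
  include Gitlab::Database::MigrationHelpers

  # add_concurrent_index raises unless DDL transactions are disabled.
  disable_ddl_transaction!

  def up
    add_concurrent_index :users, :username, unique: true
  end

  def down
    remove_concurrent_index :users, :username, unique: true
  end
end
```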

#add_not_null_constraint(table, column, constraint_name: nil, validate: true) ⇒ Object

Adds a NOT NULL check constraint to a column.


# File 'lib/gitlab/database/migration_helpers.rb', line 1173

def add_not_null_constraint(table, column, constraint_name: nil, validate: true)
  if column_is_nullable?(table, column)
    add_check_constraint(
      table,
      "#{column} IS NOT NULL",
      not_null_constraint_name(table, column, name: constraint_name),
      validate: validate
    )
  else
    warning_message = <<~MESSAGE
      NOT NULL check constraint was not created:
      column #{table}.#{column} is already defined as `NOT NULL`
    MESSAGE

    Gitlab::AppLogger.warn warning_message
  end
end
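A usage sketch of the two-step pattern with deferred validation (hypothetical table and column; `validate_not_null_constraint` is assumed to be the companion helper for the later validation step):

```ruby
# Step 1: enforce NOT NULL for new writes only; existing rows are untouched.
add_not_null_constraint :users, :email, validate: false

# Step 2 (in a later migration, after backfilling NULL values):
validate_not_null_constraint :users, :email
```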

#add_text_limit(table, column, limit, constraint_name: nil, validate: true) ⇒ Object

Adds a limit (as a check constraint) to a text column.


# File 'lib/gitlab/database/migration_helpers.rb', line 1151

def add_text_limit(table, column, limit, constraint_name: nil, validate: true)
  add_check_constraint(
    table,
    "char_length(#{column}) <= #{limit}",
    text_limit_name(table, column, name: constraint_name),
    validate: validate
  )
end
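A usage sketch (hypothetical column; guarded with the `check_text_limit_exists?` predicate, although the underlying helper also skips existing constraints on its own):

```ruby
# Cap users.bio at 255 characters; skip if the constraint already exists.
unless check_text_limit_exists?(:users, :bio)
  add_text_limit :users, :bio, 255
end
```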

#add_timestamps_with_timezone(table_name, options = {}) ⇒ Object

Adds `created_at` and `updated_at` columns with timezone information.

This method is an improved version of Rails' built-in method `add_timestamps`.

By default, adds `created_at` and `updated_at` columns, but these can be specified as:

add_timestamps_with_timezone(:my_table, columns: [:created_at, :deleted_at])

This allows you to create just the timestamps you need, saving space.

Available options are:

:default - The default value for the column.
:null - When set to `true` the column will allow NULL values.
      The default is to not allow NULL values.
:columns - the column names to create. Must be one
           of `Gitlab::Database::MigrationHelpers::PERMITTED_TIMESTAMP_COLUMNS`.
           Default value: `DEFAULT_TIMESTAMP_COLUMNS`

All options are optional.


# File 'lib/gitlab/database/migration_helpers.rb', line 33

def add_timestamps_with_timezone(table_name, options = {})
  options[:null] = false if options[:null].nil?
  columns = options.fetch(:columns, DEFAULT_TIMESTAMP_COLUMNS)
  default_value = options[:default]

  validate_not_in_transaction!(:add_timestamps_with_timezone, 'with default value') if default_value

  columns.each do |column_name|
    validate_timestamp_column_name!(column_name)

    # If a default value is present, use the `add_column_with_default` method instead.
    if default_value
      add_column_with_default(
        table_name,
        column_name,
        :datetime_with_timezone,
        default: default_value,
        allow_null: options[:null]
      )
    else
      add_column(table_name, column_name, :datetime_with_timezone, options)
    end
  end
end

#backfill_iids(table) ⇒ Object

Note: this should only be used with very small tables.


# File 'lib/gitlab/database/migration_helpers.rb', line 1019

def backfill_iids(table)
  sql = <<-END
    UPDATE #{table}
    SET iid = #{table}_with_calculated_iid.iid_num
    FROM (
      SELECT id, ROW_NUMBER() OVER (PARTITION BY project_id ORDER BY id ASC) AS iid_num FROM #{table}
    ) AS #{table}_with_calculated_iid
    WHERE #{table}.id = #{table}_with_calculated_iid.id
  END

  execute(sql)
end
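The UPDATE above assigns each row an `iid` based on its position within its project, ordered by `id`. The same numbering can be sketched in plain Ruby (hypothetical in-memory `rows`, purely for illustration):

```ruby
# Each row gets an iid numbering rows within its project_id, ordered by id,
# mirroring ROW_NUMBER() OVER (PARTITION BY project_id ORDER BY id ASC).
rows = [
  { id: 1, project_id: 10 },
  { id: 3, project_id: 20 },
  { id: 5, project_id: 10 },
  { id: 9, project_id: 20 }
]

rows.group_by { |row| row[:project_id] }.each_value do |group|
  group.sort_by { |row| row[:id] }.each_with_index do |row, index|
    row[:iid] = index + 1
  end
end

rows.map { |r| [r[:id], r[:iid]] } # => [[1, 1], [3, 1], [5, 2], [9, 2]]
```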

#change_column_type_concurrently(table, column, new_type, type_cast_function: nil) ⇒ Object

Changes the type of a column concurrently.

table - The table containing the column.
column - The name of the column to change.
new_type - The new column type.


# File 'lib/gitlab/database/migration_helpers.rb', line 541

def change_column_type_concurrently(table, column, new_type, type_cast_function: nil)
  temp_column = "#{column}_for_type_change"

  rename_column_concurrently(table, column, temp_column, type: new_type, type_cast_function: type_cast_function)
end
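The full type change is a two-migration sequence; a sketch (hypothetical table and column names):

```ruby
# Migration 1: adds a temporary "#{column}_for_type_change" column of the
# new type and installs triggers that keep it in sync with the old column.
change_column_type_concurrently :events, :occurred_at, :datetime_with_timezone

# Migration 2 (later, once the data has been copied): drops the old column
# and renames the temporary column into place.
cleanup_concurrent_column_type_change :events, :occurred_at
```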

#change_column_type_using_background_migration(relation, column, new_type, batch_size: 10_000, interval: 10.minutes) ⇒ Object

Changes the column type of a table using a background migration.

Because this method uses a background migration it's more suitable for large tables. For small tables it's better to use `change_column_type_concurrently` since it can complete its work in a much shorter amount of time and doesn't rely on Sidekiq.

Example usage:

class Issue < ActiveRecord::Base
  self.table_name = 'issues'

  include EachBatch

  def self.to_migrate
    where('closed_at IS NOT NULL')
  end
end

change_column_type_using_background_migration(
  Issue.to_migrate,
  :closed_at,
  :datetime_with_timezone
)

Reverting a migration like this is done exactly the same way, just with a different type to migrate to (e.g. `:datetime` in the above example).

relation - An ActiveRecord relation to use for scheduling jobs and
           figuring out what table we're modifying. This relation _must_
           have the EachBatch module included.

column - The name of the column for which the type will be changed.

new_type - The new type of the column.

batch_size - The number of rows to schedule in a single background
             migration.

interval - The time interval between every background migration.


# File 'lib/gitlab/database/migration_helpers.rb', line 651

def change_column_type_using_background_migration(
  relation,
  column,
  new_type,
  batch_size: 10_000,
  interval: 10.minutes
)

  unless relation.model < EachBatch
    raise TypeError, 'The relation must include the EachBatch module'
  end

  temp_column = "#{column}_for_type_change"
  table = relation.table_name
  max_index = 0

  add_column(table, temp_column, new_type)
  install_rename_triggers(table, column, temp_column)

  # Schedule the jobs that will copy the data from the old column to the
  # new one. Rows with NULL values in our source column are skipped since
  # the target column is already NULL at this point.
  relation.where.not(column => nil).each_batch(of: batch_size) do |batch, index|
    start_id, end_id = batch.pluck('MIN(id), MAX(id)').first
    max_index = index

    migrate_in(
      index * interval,
      'CopyColumn',
      [table, column, temp_column, start_id, end_id]
    )
  end

  # Schedule the renaming of the column to happen (initially) 1 hour after
  # the last batch finished.
  migrate_in(
    (max_index * interval) + 1.hour,
    'CleanupConcurrentTypeChange',
    [table, column, temp_column]
  )

  if perform_background_migration_inline?
    # To ensure the schema is up to date immediately we perform the
    # migration inline in dev / test environments.
    Gitlab::BackgroundMigration.steal('CopyColumn')
    Gitlab::BackgroundMigration.steal('CleanupConcurrentTypeChange')
  end
end

#check_constraint_exists?(table, constraint_name) ⇒ Boolean

Returns:

  • (Boolean)

# File 'lib/gitlab/database/migration_helpers.rb', line 1050

def check_constraint_exists?(table, constraint_name)
  # Constraint names are unique per table in Postgres, not per schema
  # Two tables can have constraints with the same name, so we filter by
  # the table name in addition to using the constraint_name
  check_sql = <<~SQL
    SELECT COUNT(*)
    FROM pg_constraint
    JOIN pg_class ON pg_constraint.conrelid = pg_class.oid
    WHERE pg_constraint.contype = 'c'
    AND pg_constraint.conname = '#{constraint_name}'
    AND pg_class.relname = '#{table}'
  SQL

  connection.select_value(check_sql) > 0
end

#check_constraint_name(table, column, type) ⇒ Object

Returns the name for a check constraint

type:

  • Any value, as long as it is unique.

  • Constraint names are unique per table in Postgres, and, additionally, we can have multiple check constraints over a column. So we use the (table, column, type) triplet as a unique name.

  • e.g. we use 'max_length' when adding checks for text limits,
    or 'not_null' when adding a NOT NULL constraint.

# File 'lib/gitlab/database/migration_helpers.rb', line 1042

def check_constraint_name(table, column, type)
  identifier = "#{table}_#{column}_check_#{type}"
  # Check concurrent_foreign_key_name() for info on why we use a hash
  hashed_identifier = Digest::SHA256.hexdigest(identifier).first(10)

  "check_#{hashed_identifier}"
end
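The truncated-hash scheme can be exercised in plain Ruby; a standalone sketch of the same construction (the real helper uses ActiveSupport's `String#first(10)`, equivalent to `[0, 10]` here):

```ruby
require 'digest'

# Build a short constraint name from the (table, column, type) triplet,
# mirroring check_constraint_name: hash the identifier, keep 10 hex chars.
def hashed_check_constraint_name(table, column, type)
  identifier = "#{table}_#{column}_check_#{type}"
  "check_#{Digest::SHA256.hexdigest(identifier)[0, 10]}"
end

name = hashed_check_constraint_name(:users, :name, :max_length)
# Always 16 characters ("check_" + 10 hex digits), comfortably under
# PostgreSQL's 63-byte identifier limit.
```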

#check_not_null_constraint_exists?(table, column, constraint_name: nil) ⇒ Boolean

Returns:

  • (Boolean)

# File 'lib/gitlab/database/migration_helpers.rb', line 1205

def check_not_null_constraint_exists?(table, column, constraint_name: nil)
  check_constraint_exists?(
    table,
    not_null_constraint_name(table, column, name: constraint_name)
  )
end

#check_text_limit_exists?(table, column, constraint_name: nil) ⇒ Boolean

Returns:

  • (Boolean)

# File 'lib/gitlab/database/migration_helpers.rb', line 1168

def check_text_limit_exists?(table, column, constraint_name: nil)
  check_constraint_exists?(table, text_limit_name(table, column, name: constraint_name))
end

#check_trigger_permissions!(table) ⇒ Object


# File 'lib/gitlab/database/migration_helpers.rb', line 954

def check_trigger_permissions!(table)
  unless Grant.create_and_execute_trigger?(table)
    dbname = Database.database_name
    user = Database.username

    raise <<-EOF
Your database user is not allowed to create, drop, or execute triggers on the
table #{table}.

If you are using PostgreSQL you can solve this by logging in to the GitLab
database (#{dbname}) using a super user and running:

    ALTER USER #{user} WITH SUPERUSER

This query will grant the user super user permissions, ensuring you don't run
into similar problems in the future (e.g. when new tables are created).
    EOF
  end
end

#cleanup_concurrent_column_rename(table, old, new) ⇒ Object

Cleans up a concurrent column rename.

This method takes care of removing previously installed triggers as well as removing the old column.

table - The name of the database table.
old - The name of the old column.
new - The name of the new column.


# File 'lib/gitlab/database/migration_helpers.rb', line 571

def cleanup_concurrent_column_rename(table, old, new)
  trigger_name = rename_trigger_name(table, old, new)

  check_trigger_permissions!(table)

  remove_rename_triggers_for_postgresql(table, trigger_name)

  remove_column(table, old)
end

#cleanup_concurrent_column_type_change(table, column) ⇒ Object

Performs cleanup of a concurrent type change.

table - The table containing the column.
column - The name of the column whose type was changed.


# File 'lib/gitlab/database/migration_helpers.rb', line 552

def cleanup_concurrent_column_type_change(table, column)
  temp_column = "#{column}_for_type_change"

  transaction do
    # This has to be performed in a transaction as otherwise we might have
    # inconsistent data.
    cleanup_concurrent_column_rename(table, column, temp_column)
    rename_column(table, temp_column, column)
  end
end

#column_for(table, name) ⇒ Object

Returns the column for the given table and column name.


# File 'lib/gitlab/database/migration_helpers.rb', line 907

def column_for(table, name)
  name = name.to_s

  column = columns(table).find { |column| column.name == name }
  raise(missing_schema_object_message(table, "column", name)) if column.nil?

  column
end

#concurrent_foreign_key_name(table, column, prefix: 'fk_') ⇒ Object

Returns the name for a concurrent foreign key.

PostgreSQL constraint names have a limit of 63 bytes. The logic used here is based on Rails' foreign_key_name() method, which unfortunately is private so we can't rely on it directly.

prefix:

  • The default prefix is `fk_` for backward compatibility with the existing
    concurrent foreign key helpers.

  • For standard Rails foreign keys the prefix is `fk_rails_`.


# File 'lib/gitlab/database/migration_helpers.rb', line 250

def concurrent_foreign_key_name(table, column, prefix: 'fk_')
  identifier = "#{table}_#{column}_fk"
  hashed_identifier = Digest::SHA256.hexdigest(identifier).first(10)

  "#{prefix}#{hashed_identifier}"
end

#copy_foreign_keys(table, old, new) ⇒ Object

Copies all foreign keys for the old column to the new column.

table - The table containing the columns and indexes.
old - The old column.
new - The new column.


# File 'lib/gitlab/database/migration_helpers.rb', line 897

def copy_foreign_keys(table, old, new)
  foreign_keys_for(table, old).each do |fk|
    add_concurrent_foreign_key(fk.from_table,
                               fk.to_table,
                               column: new,
                               on_delete: fk.on_delete)
  end
end

#copy_indexes(table, old, new) ⇒ Object

Copies all indexes for the old column to a new column.

table - The table containing the columns and indexes.
old - The old column.
new - The new column.


# File 'lib/gitlab/database/migration_helpers.rb', line 850

def copy_indexes(table, old, new)
  old = old.to_s
  new = new.to_s

  indexes_for(table, old).each do |index|
    new_columns = index.columns.map do |column|
      column == old ? new : column
    end

    # This is necessary as we can't properly rename indexes such as
    # "ci_taggings_idx".
    unless index.name.include?(old)
      raise "The index #{index.name} can not be copied as it does not "\
        "mention the old column. You have to rename this index manually first."
    end

    name = index.name.gsub(old, new)

    options = {
      unique: index.unique,
      name: name,
      length: index.lengths,
      order: index.orders
    }

    options[:using] = index.using if index.using
    options[:where] = index.where if index.where

    unless index.opclasses.blank?
      opclasses = index.opclasses.dup

      # Copy the operator classes for the old column (if any) to the new
      # column.
      opclasses[new] = opclasses.delete(old) if opclasses[old]

      options[:opclasses] = opclasses
    end

    add_concurrent_index(table, new_columns, options)
  end
end

#create_extension(extension) ⇒ Object


# File 'lib/gitlab/database/migration_helpers.rb', line 1212

def create_extension(extension)
  execute('CREATE EXTENSION IF NOT EXISTS %s' % extension)
rescue ActiveRecord::StatementInvalid => e
  dbname = Database.database_name
  user = Database.username

  warn(<<~MSG) if e.to_s =~ /permission denied/
    GitLab requires the PostgreSQL extension '#{extension}' installed in database '#{dbname}', but
    the database user is not allowed to install the extension.

    You can either install the extension manually using a database superuser:

      CREATE EXTENSION IF NOT EXISTS #{extension}

    Or, you can solve this by logging in to the GitLab
    database (#{dbname}) using a superuser and running:

        ALTER USER #{user} WITH SUPERUSER

    This query will grant the user superuser permissions, ensuring any database extensions
    can be installed through migrations.

    For more information, refer to https://docs.gitlab.com/ee/install/postgresql_extensions.html.
  MSG

  raise
end

#create_or_update_plan_limit(limit_name, plan_name, limit_value) ⇒ Object


# File 'lib/gitlab/database/migration_helpers.rb', line 1006

def create_or_update_plan_limit(limit_name, plan_name, limit_value)
  limit_name_quoted = quote_column_name(limit_name)
  plan_name_quoted = quote(plan_name)
  limit_value_quoted = quote(limit_value)

  execute <<~SQL
    INSERT INTO plan_limits (plan_id, #{limit_name_quoted})
    SELECT id, #{limit_value_quoted} FROM plans WHERE name = #{plan_name_quoted} LIMIT 1
    ON CONFLICT (plan_id) DO UPDATE SET #{limit_name_quoted} = EXCLUDED.#{limit_name_quoted};
  SQL
end

#disable_statement_timeoutObject

Long-running migrations may take more than the timeout allowed by the database. Disable the session's statement timeout to ensure migrations don't get killed prematurely.

There are two possible ways to disable the statement timeout:

  • Per transaction (this is the preferred and default mode)

  • Per connection (requires a cleanup after the execution)

When using a per-connection disable statement, code must be inside a block so we can automatically execute `RESET ALL` after the block finishes; otherwise the timeout will remain disabled until the connection is dropped or `RESET ALL` is executed.


# File 'lib/gitlab/database/migration_helpers.rb', line 270

def disable_statement_timeout
  if block_given?
    if statement_timeout_disabled?
      # Don't do anything if the statement_timeout is already disabled
      # Allows for nested calls of disable_statement_timeout without
      # resetting the timeout too early (before the outer call ends)
      yield
    else
      begin
        execute('SET statement_timeout TO 0')

        yield
      ensure
        execute('RESET ALL')
      end
    end
  else
    unless transaction_open?
      raise <<~ERROR
        Cannot call disable_statement_timeout() without a transaction open or outside of a transaction block.
        If you don't want to use a transaction wrap your code in a block call:

        disable_statement_timeout { # code that requires disabled statement here }

        This will make sure statement_timeout is disabled before and reset after the block execution is finished.
      ERROR
    end

    execute('SET LOCAL statement_timeout TO 0')
  end
end
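A usage sketch of the preferred block form (hypothetical table and constraint names):

```ruby
# The session's statement_timeout is disabled for the duration of the block
# and RESET ALL is issued automatically afterwards.
disable_statement_timeout do
  execute('ALTER TABLE audit_events VALIDATE CONSTRAINT fk_abc123def4;')
end
```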

#drop_extension(extension) ⇒ Object


# File 'lib/gitlab/database/migration_helpers.rb', line 1240

def drop_extension(extension)
  execute('DROP EXTENSION IF EXISTS %s' % extension)
rescue ActiveRecord::StatementInvalid => e
  dbname = Database.database_name
  user = Database.username

  warn(<<~MSG) if e.to_s =~ /permission denied/
    This migration attempts to drop the PostgreSQL extension '#{extension}'
    installed in database '#{dbname}', but the database user is not allowed
    to drop the extension.

    You can either drop the extension manually using a database superuser:

      DROP EXTENSION IF EXISTS #{extension}

    Or, you can solve this by logging in to the GitLab
    database (#{dbname}) using a superuser and running:

        ALTER USER #{user} WITH SUPERUSER

    This query will grant the user superuser permissions, ensuring any database extensions
    can be dropped through migrations.

    For more information, refer to https://docs.gitlab.com/ee/install/postgresql_extensions.html.
  MSG

  raise
end

#false_valueObject


# File 'lib/gitlab/database/migration_helpers.rb', line 346

def false_value
  Database.false_value
end

#foreign_key_exists?(source, target = nil, **options) ⇒ Boolean

Returns:

  • (Boolean)

# File 'lib/gitlab/database/migration_helpers.rb', line 232

def foreign_key_exists?(source, target = nil, **options)
  foreign_keys(source).any? do |foreign_key|
    tables_match?(target.to_s, foreign_key.to_table.to_s) &&
      options_match?(foreign_key.options, options)
  end
end

#foreign_keys_for(table, column) ⇒ Object

Returns an Array containing the foreign keys for the given column.


# File 'lib/gitlab/database/migration_helpers.rb', line 839

def foreign_keys_for(table, column)
  column = column.to_s

  foreign_keys(table).select { |fk| fk.column == column }
end

#index_exists_by_name?(table, index) ⇒ Boolean

Checks whether an index exists by name, for Postgres.

This will include indexes using an expression on the column, for example: `CREATE INDEX CONCURRENTLY index_name ON table (LOWER(column));`

We can remove this when upgrading to Rails 5 with an updated `index_exists?`, or when we no longer support Postgres < 9.5, at which point we can use `CREATE INDEX IF NOT EXISTS`.

Returns:

  • (Boolean)

# File 'lib/gitlab/database/migration_helpers.rb', line 984

def index_exists_by_name?(table, index)
  # We can't fall back to the normal `index_exists?` method because that
  # does not find indexes without passing a column name.
  if indexes(table).map(&:name).include?(index.to_s)
    true
  else
    postgres_exists_by_name?(table, index)
  end
end

#indexes_for(table, column) ⇒ Object

Returns an Array containing the indexes for the given column


832
833
834
835
836
# File 'lib/gitlab/database/migration_helpers.rb', line 832

def indexes_for(table, column)
  column = column.to_s

  indexes(table).select { |index| index.columns.include?(column) }
end

#install_rename_triggers(table, old_column, new_column) ⇒ Object

Installs triggers in a table that keep a new column in sync with an old one.

table - The name of the table to install the trigger in.
old_column - The name of the old column.
new_column - The name of the new column.


# File 'lib/gitlab/database/migration_helpers.rb', line 522

def install_rename_triggers(table, old_column, new_column)
  trigger_name = rename_trigger_name(table, old_column, new_column)
  quoted_table = quote_table_name(table)
  quoted_old = quote_column_name(old_column)
  quoted_new = quote_column_name(new_column)

  install_rename_triggers_for_postgresql(
    trigger_name,
    quoted_table,
    quoted_old,
    quoted_new
  )
end

#install_rename_triggers_for_postgresql(trigger, table, old, new) ⇒ Object

Performs a concurrent column rename when using PostgreSQL.


# File 'lib/gitlab/database/migration_helpers.rb', line 792

def install_rename_triggers_for_postgresql(trigger, table, old, new)
  execute <<-EOF.strip_heredoc
  CREATE OR REPLACE FUNCTION #{trigger}()
  RETURNS trigger AS
  $BODY$
  BEGIN
    NEW.#{new} := NEW.#{old};
    RETURN NEW;
  END;
  $BODY$
  LANGUAGE 'plpgsql'
  VOLATILE
  EOF

  execute <<-EOF.strip_heredoc
  DROP TRIGGER IF EXISTS #{trigger}
  ON #{table}
  EOF

  execute <<-EOF.strip_heredoc
  CREATE TRIGGER #{trigger}
  BEFORE INSERT OR UPDATE
  ON #{table}
  FOR EACH ROW
  EXECUTE FUNCTION #{trigger}()
  EOF
end

#postgres_exists_by_name?(table, name) ⇒ Boolean

Returns:

  • (Boolean)

# File 'lib/gitlab/database/migration_helpers.rb', line 994

def postgres_exists_by_name?(table, name)
  index_sql = <<~SQL
    SELECT COUNT(*)
    FROM pg_index
    JOIN pg_class i ON (indexrelid=i.oid)
    JOIN pg_class t ON (indrelid=t.oid)
    WHERE i.relname = '#{name}' AND t.relname = '#{table}'
  SQL

  connection.select_value(index_sql).to_i > 0
end

#remove_check_constraint(table, constraint_name) ⇒ Object


1137
1138
1139
1140
1141
1142
1143
1144
1145
1146
1147
1148
# File 'lib/gitlab/database/migration_helpers.rb', line 1137

def remove_check_constraint(table, constraint_name)
  validate_check_constraint_name!(constraint_name)

  # DROP CONSTRAINT requires an EXCLUSIVE lock
  # Use with_lock_retries to make sure that this will not timeout
  with_lock_retries do
    execute <<-EOF.strip_heredoc
    ALTER TABLE #{table}
    DROP CONSTRAINT IF EXISTS #{constraint_name}
    EOF
  end
end

#remove_concurrent_index(table_name, column_name, options = {}) ⇒ Object

Removes an existing index, concurrently

Example:

remove_concurrent_index :users, :some_column

See Rails' `remove_index` for more info on the available arguments.


# File 'lib/gitlab/database/migration_helpers.rb', line 106

def remove_concurrent_index(table_name, column_name, options = {})
  if transaction_open?
    raise 'remove_concurrent_index can not be run inside a transaction, ' \
      'you can disable transactions by calling disable_ddl_transaction! ' \
      'in the body of your migration class'
  end

  options = options.merge({ algorithm: :concurrently })

  unless index_exists?(table_name, column_name, options)
    Gitlab::AppLogger.warn "Index not removed because it does not exist (this may be due to an aborted migration or similar): table_name: #{table_name}, column_name: #{column_name}"
    return
  end

  disable_statement_timeout do
    remove_index(table_name, options.merge({ column: column_name }))
  end
end

#remove_concurrent_index_by_name(table_name, index_name, options = {}) ⇒ Object

Removes an existing index, concurrently

Example:

remove_concurrent_index_by_name :users, "index_X_by_Y"

See Rails' `remove_index` for more info on the available arguments.


# File 'lib/gitlab/database/migration_helpers.rb', line 132

def remove_concurrent_index_by_name(table_name, index_name, options = {})
  if transaction_open?
    raise 'remove_concurrent_index_by_name can not be run inside a transaction, ' \
      'you can disable transactions by calling disable_ddl_transaction! ' \
      'in the body of your migration class'
  end

  index_name = index_name[:name] if index_name.is_a?(Hash)

  raise 'remove_concurrent_index_by_name must get an index name as the second argument' if index_name.blank?

  options = options.merge({ algorithm: :concurrently })

  unless index_exists_by_name?(table_name, index_name)
    Gitlab::AppLogger.warn "Index not removed because it does not exist (this may be due to an aborted migration or similar): table_name: #{table_name}, index_name: #{index_name}"
    return
  end

  disable_statement_timeout do
    remove_index(table_name, options.merge({ name: index_name }))
  end
end

#remove_foreign_key_if_exists(*args) ⇒ Object


# File 'lib/gitlab/database/migration_helpers.rb', line 929

def remove_foreign_key_if_exists(*args)
  if foreign_key_exists?(*args)
    remove_foreign_key(*args)
  end
end

#remove_foreign_key_without_error(*args) ⇒ Object


# File 'lib/gitlab/database/migration_helpers.rb', line 935

def remove_foreign_key_without_error(*args)
  remove_foreign_key(*args)
rescue ArgumentError
end

#remove_not_null_constraint(table, column, constraint_name: nil) ⇒ Object


# File 'lib/gitlab/database/migration_helpers.rb', line 1198

def remove_not_null_constraint(table, column, constraint_name: nil)
  remove_check_constraint(
    table,
    not_null_constraint_name(table, column, name: constraint_name)
  )
end

#remove_rename_triggers_for_postgresql(table, trigger) ⇒ Object

Removes the triggers used for renaming a PostgreSQL column concurrently.


# File 'lib/gitlab/database/migration_helpers.rb', line 821

def remove_rename_triggers_for_postgresql(table, trigger)
  execute("DROP TRIGGER IF EXISTS #{trigger} ON #{table}")
  execute("DROP FUNCTION IF EXISTS #{trigger}()")
end

#remove_text_limit(table, column, constraint_name: nil) ⇒ Object


# File 'lib/gitlab/database/migration_helpers.rb', line 1164

def remove_text_limit(table, column, constraint_name: nil)
  remove_check_constraint(table, text_limit_name(table, column, name: constraint_name))
end

#remove_timestamps(table_name, options = {}) ⇒ Object

To be used in the `#down` method of migrations that use `#add_timestamps_with_timezone`.

Available options are:

:columns - the column names to remove. Must be included in
           `PERMITTED_TIMESTAMP_COLUMNS`. Default value: `DEFAULT_TIMESTAMP_COLUMNS`

All options are optional.


# File 'lib/gitlab/database/migration_helpers.rb', line 66

def remove_timestamps(table_name, options = {})
  columns = options.fetch(:columns, DEFAULT_TIMESTAMP_COLUMNS)
  columns.each do |column_name|
    remove_column(table_name, column_name)
  end
end

#rename_column_concurrently(table, old, new, type: nil, type_cast_function: nil, batch_column_name: :id) ⇒ Object

Renames a column without requiring downtime.

Concurrent renames work by using database triggers to ensure both the old and new column are in sync. However, this method will not remove the triggers or the old column automatically; this needs to be done manually in a post-deployment migration. This can be done using the method `cleanup_concurrent_column_rename`.

table - The name of the database table containing the column.
old - The old column name.
new - The new column name.
type - The type of the new column. If no type is given the old column's
       type is used.
batch_column_name - for tables without a primary key, in which case
       another unique integer column can be used. Example: :user_id

# File 'lib/gitlab/database/migration_helpers.rb', line 482

def rename_column_concurrently(table, old, new, type: nil, type_cast_function: nil, batch_column_name: :id)
  unless column_exists?(table, batch_column_name)
    raise "Column #{batch_column_name} does not exist on #{table}"
  end

  if transaction_open?
    raise 'rename_column_concurrently can not be run inside a transaction'
  end

  check_trigger_permissions!(table)

  create_column_from(table, old, new, type: type, batch_column_name: batch_column_name, type_cast_function: type_cast_function)

  install_rename_triggers(table, old, new)
end

#rename_column_using_background_migration(table, old_column, new_column, type: nil, batch_size: 10_000, interval: 10.minutes) ⇒ Object

Renames a column using a background migration.

Because this method uses a background migration it's more suitable for large tables. For small tables it's better to use `rename_column_concurrently` since it can complete its work in a much shorter amount of time and doesn't rely on Sidekiq.

Example usage:

rename_column_using_background_migration(
  :users,
  :feed_token,
  :rss_token
)

table - The name of the database table containing the column.
old - The old column name.
new - The new column name.
type - The type of the new column. If no type is given the old column's
       type is used.
batch_size - The number of rows to schedule in a single background
             migration.
interval - The time interval between every background migration.


# File 'lib/gitlab/database/migration_helpers.rb', line 728

def rename_column_using_background_migration(
  table,
  old_column,
  new_column,
  type: nil,
  batch_size: 10_000,
  interval: 10.minutes
)

  check_trigger_permissions!(table)

  old_col = column_for(table, old_column)
  new_type = type || old_col.type
  max_index = 0

  add_column(table, new_column, new_type,
             limit: old_col.limit,
             precision: old_col.precision,
             scale: old_col.scale)

  # We set the default value _after_ adding the column so we don't end up
  # updating any existing data with the default value. This isn't
  # necessary since we copy over old values further down.
  change_column_default(table, new_column, old_col.default) if old_col.default

  install_rename_triggers(table, old_column, new_column)

  model = Class.new(ActiveRecord::Base) do
    self.table_name = table

    include ::EachBatch
  end

  # Schedule the jobs that will copy the data from the old column to the
  # new one. Rows with NULL values in our source column are skipped since
  # the target column is already NULL at this point.
  model.where.not(old_column => nil).each_batch(of: batch_size) do |batch, index|
    start_id, end_id = batch.pluck('MIN(id), MAX(id)').first
    max_index = index

    migrate_in(
      index * interval,
      'CopyColumn',
      [table, old_column, new_column, start_id, end_id]
    )
  end

  # Schedule the renaming of the column to happen (initially) 1 hour after
  # the last batch finished.
  migrate_in(
    (max_index * interval) + 1.hour,
    'CleanupConcurrentRename',
    [table, old_column, new_column]
  )

  if perform_background_migration_inline?
    # To ensure the schema is up to date immediately we perform the
    # migration inline in dev / test environments.
    Gitlab::BackgroundMigration.steal('CopyColumn')
    Gitlab::BackgroundMigration.steal('CleanupConcurrentRename')
  end
end
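The scheduling above reduces to simple arithmetic: batch `i` is queued `i * interval` in the future, and the cleanup rename runs one hour after the last batch's slot. A standalone restatement (not GitLab code), using integer seconds in place of ActiveSupport durations and assuming batch indices start at 1 as yielded by `EachBatch`:

```ruby
# Computes the delay for each CopyColumn job and for the final
# CleanupConcurrentRename job, in seconds.
def schedule_delays(batch_count, interval)
  copy_delays   = (1..batch_count).map { |i| i * interval }
  cleanup_delay = (batch_count * interval) + 3600 # last batch's slot + 1 hour
  [copy_delays, cleanup_delay]
end

copy, cleanup = schedule_delays(3, 600) # 3 batches, 10-minute interval
copy    # => [600, 1200, 1800]
cleanup # => 5400
```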

#rename_trigger_name(table, old, new) ⇒ Object

Returns the (base) name to use for triggers when renaming columns.


# File 'lib/gitlab/database/migration_helpers.rb', line 827

def rename_trigger_name(table, old, new)
  'trigger_' + Digest::SHA256.hexdigest("#{table}_#{old}_#{new}").first(12)
end
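Because the name is a pure function of the table and column names, later migrations can recompute it to find and drop the trigger again. A core-Ruby equivalent of the helper (`[0, 12]` stands in for ActiveSupport's `String#first`):

```ruby
require 'digest'

# Deterministic 20-character trigger identifier:
# "trigger_" plus the first 12 hex digits of a SHA-256 of the inputs.
def rename_trigger_name(table, old, new)
  'trigger_' + Digest::SHA256.hexdigest("#{table}_#{old}_#{new}")[0, 12]
end

name = rename_trigger_name(:users, :username, :login)
name.length                  # => 20
name.start_with?('trigger_') # => true
```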

#replace_sql(column, pattern, replacement) ⇒ Object

This will replace the first occurrence of a string in a column with the replacement using `regexp_replace`


# File 'lib/gitlab/database/migration_helpers.rb', line 918

def replace_sql(column, pattern, replacement)
  quoted_pattern = Arel::Nodes::Quoted.new(pattern.to_s)
  quoted_replacement = Arel::Nodes::Quoted.new(replacement.to_s)

  replace = Arel::Nodes::NamedFunction.new(
    "regexp_replace", [column, quoted_pattern, quoted_replacement]
  )

  Arel::Nodes::SqlLiteral.new(replace.to_sql)
end
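The first-occurrence behaviour comes from PostgreSQL's `regexp_replace`, which rewrites only the first match unless the `g` flag is passed. Ruby's `String#sub` is the analogous first-match-only operation:

```ruby
# Like regexp_replace(col, 'foo', 'baz') without the 'g' flag:
# only the first occurrence is replaced.
result = 'foo bar foo'.sub(/foo/, 'baz')
result # => "baz bar foo"
```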

#sidekiq_queue_length(queue_name) ⇒ Object


# File 'lib/gitlab/database/migration_helpers.rb', line 948

def sidekiq_queue_length(queue_name)
  Sidekiq.redis do |conn|
    conn.llen("queue:#{queue_name}")
  end
end

#sidekiq_queue_migrate(queue_from, to:) ⇒ Object


# File 'lib/gitlab/database/migration_helpers.rb', line 940

def sidekiq_queue_migrate(queue_from, to:)
  while sidekiq_queue_length(queue_from) > 0
    Sidekiq.redis do |conn|
      conn.rpoplpush "queue:#{queue_from}", "queue:#{to}"
    end
  end
end
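Redis's `RPOPLPUSH` pops from the tail of the source list and pushes onto the head of the destination, so draining the source this way preserves job order. A hypothetical in-memory model with plain arrays (no Redis):

```ruby
# Mirrors the RPOPLPUSH loop: tail of `from` -> head of `to`,
# until the source queue is empty. Relative order is preserved.
def queue_migrate(from, to)
  to.unshift(from.pop) until from.empty?
  to
end

source = %w[job1 job2 job3] # head .. tail
dest = []
queue_migrate(source, dest)
dest   # => ["job1", "job2", "job3"]
source # => []
```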

#true_value ⇒ Object


# File 'lib/gitlab/database/migration_helpers.rb', line 342

def true_value
  Database.true_value
end

#undo_cleanup_concurrent_column_rename(table, old, new, type: nil, batch_column_name: :id) ⇒ Object

Reverses the operations performed by cleanup_concurrent_column_rename.

This method adds back the old_column removed by cleanup_concurrent_column_rename. It also adds back the (old_column > new_column) trigger that is removed by cleanup_concurrent_column_rename.

table - The name of the database table containing the column.
old - The old column name.
new - The new column name.
type - The type of the old column. If no type is given the new column's
       type is used.
batch_column_name - for tables without a primary key, in which case
       another unique integer column can be used. Example: :user_id

# File 'lib/gitlab/database/migration_helpers.rb', line 595

def undo_cleanup_concurrent_column_rename(table, old, new, type: nil, batch_column_name: :id)
  unless column_exists?(table, batch_column_name)
    raise "Column #{batch_column_name} does not exist on #{table}"
  end

  if transaction_open?
    raise 'undo_cleanup_concurrent_column_rename can not be run inside a transaction'
  end

  check_trigger_permissions!(table)

  create_column_from(table, new, old, type: type, batch_column_name: batch_column_name)

  install_rename_triggers(table, old, new)
end

#undo_rename_column_concurrently(table, old, new) ⇒ Object

Reverses operations performed by rename_column_concurrently.

This method takes care of removing previously installed triggers as well as removing the new column.

table - The name of the database table.
old - The name of the old column.
new - The name of the new column.


# File 'lib/gitlab/database/migration_helpers.rb', line 506

def undo_rename_column_concurrently(table, old, new)
  trigger_name = rename_trigger_name(table, old, new)

  check_trigger_permissions!(table)

  remove_rename_triggers_for_postgresql(table, trigger_name)

  remove_column(table, new)
end

#update_column_in_batches(table, column, value, batch_size: nil, batch_column_name: :id) ⇒ Object

Updates the value of a column in batches.

This method updates the table in batches of 5% of the total row count. A `batch_size` option can also be passed to set this to a fixed number. This method will continue updating rows until no rows remain.

When given a block this method will yield two values to the block:

  1. An instance of `Arel::Table` for the table that is being updated.

  2. The query to run as an Arel object.

By supplying a block one can add extra conditions to the queries being executed. Note that the same block is used for all queries.

Example:

update_column_in_batches(:projects, :foo, 10) do |table, query|
  query.where(table[:some_column].eq('hello'))
end

This would result in this method updating only rows where `projects.some_column` equals “hello”.

table - The name of the table.
column - The name of the column to update.
value - The value for the column.

The `value` argument is typically a literal. To perform a computed update, an Arel literal can be used instead:

update_value = Arel.sql('bar * baz')

update_column_in_batches(:projects, :foo, update_value) do |table, query|
  query.where(table[:some_column].eq('hello'))
end

Rubocop's Metrics/AbcSize metric is disabled for this method as Rubocop determines this method to be too complex while there's no way to make it less “complex” without introducing extra methods (which actually will make things more complex).

`batch_column_name` option is for tables without primary key, in this case another unique integer column can be used. Example: :user_id

rubocop: disable Metrics/AbcSize


# File 'lib/gitlab/database/migration_helpers.rb', line 395

def update_column_in_batches(table, column, value, batch_size: nil, batch_column_name: :id)
  if transaction_open?
    raise 'update_column_in_batches can not be run inside a transaction, ' \
      'you can disable transactions by calling disable_ddl_transaction! ' \
      'in the body of your migration class'
  end

  table = Arel::Table.new(table)

  count_arel = table.project(Arel.star.count.as('count'))
  count_arel = yield table, count_arel if block_given?

  total = exec_query(count_arel.to_sql).to_a.first['count'].to_i

  return if total == 0

  if batch_size.nil?
    # Update in batches of 5% until we run out of any rows to update.
    batch_size = ((total / 100.0) * 5.0).ceil
    max_size = 1000

    # The upper limit is 1000 to ensure we don't lock too many rows. For
    # example, for "merge_requests" even 1% of the table is around 35 000
    # rows for GitLab.com.
    batch_size = max_size if batch_size > max_size
  end

  start_arel = table.project(table[batch_column_name]).order(table[batch_column_name].asc).take(1)
  start_arel = yield table, start_arel if block_given?
  start_id = exec_query(start_arel.to_sql).to_a.first[batch_column_name.to_s].to_i

  loop do
    stop_arel = table.project(table[batch_column_name])
      .where(table[batch_column_name].gteq(start_id))
      .order(table[batch_column_name].asc)
      .take(1)
      .skip(batch_size)

    stop_arel = yield table, stop_arel if block_given?
    stop_row = exec_query(stop_arel.to_sql).to_a.first

    update_arel = Arel::UpdateManager.new
      .table(table)
      .set([[table[column], value]])
      .where(table[batch_column_name].gteq(start_id))

    if stop_row
      stop_id = stop_row[batch_column_name.to_s].to_i
      start_id = stop_id
      update_arel = update_arel.where(table[batch_column_name].lt(stop_id))
    end

    update_arel = yield table, update_arel if block_given?

    execute(update_arel.to_sql)

    # There are no more rows left to update.
    break unless stop_row
  end
end
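The default batch sizing reduces to simple arithmetic: 5% of the table's row count, capped at 1000 rows. A standalone restatement (not GitLab code):

```ruby
# 5% of the total row count, rounded up, with a 1000-row cap
# to avoid locking too many rows per UPDATE.
def default_batch_size(total_rows, max_size: 1000)
  batch = ((total_rows / 100.0) * 5.0).ceil
  [batch, max_size].min
end

default_batch_size(10_000)  # => 500
default_batch_size(100_000) # => 1000
```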

#validate_check_constraint(table, constraint_name) ⇒ Object


# File 'lib/gitlab/database/migration_helpers.rb', line 1123

def validate_check_constraint(table, constraint_name)
  validate_check_constraint_name!(constraint_name)

  unless check_constraint_exists?(table, constraint_name)
    raise missing_schema_object_message(table, "check constraint", constraint_name)
  end

  disable_statement_timeout do
    # VALIDATE CONSTRAINT only requires a SHARE UPDATE EXCLUSIVE LOCK
    # It only conflicts with other validations and creating indexes
    execute("ALTER TABLE #{table} VALIDATE CONSTRAINT #{constraint_name};")
  end
end

#validate_foreign_key(source, column, name: nil) ⇒ Object


# File 'lib/gitlab/database/migration_helpers.rb', line 220

def validate_foreign_key(source, column, name: nil)
  fk_name = name || concurrent_foreign_key_name(source, column)

  unless foreign_key_exists?(source, name: fk_name)
    raise missing_schema_object_message(source, "foreign key", fk_name)
  end

  disable_statement_timeout do
    execute("ALTER TABLE #{source} VALIDATE CONSTRAINT #{fk_name};")
  end
end

#validate_not_null_constraint(table, column, constraint_name: nil) ⇒ Object


# File 'lib/gitlab/database/migration_helpers.rb', line 1191

def validate_not_null_constraint(table, column, constraint_name: nil)
  validate_check_constraint(
    table,
    not_null_constraint_name(table, column, name: constraint_name)
  )
end

#validate_text_limit(table, column, constraint_name: nil) ⇒ Object


# File 'lib/gitlab/database/migration_helpers.rb', line 1160

def validate_text_limit(table, column, constraint_name: nil)
  validate_check_constraint(table, text_limit_name(table, column, name: constraint_name))
end

#with_lock_retries(**args, &block) ⇒ Object

Executes the block with a retry mechanism that alters the lock_timeout and sleep_time between attempts. The timings can be controlled via the timing_configuration parameter. If the lock was not acquired within the retry period, a last attempt is made without using lock_timeout.

Examples

# Invoking without parameters
with_lock_retries do
  drop_table :my_table
end

# Invoking with custom +timing_configuration+
t = [
  [1.second, 1.second],
  [2.seconds, 2.seconds]
]

with_lock_retries(timing_configuration: t) do
  drop_table :my_table # this will be retried twice
end

# Disabling the retries using an environment variable
> export DISABLE_LOCK_RETRIES=true

with_lock_retries do
  drop_table :my_table # one invocation, it will not retry at all
end

Parameters

  • timing_configuration - [[ActiveSupport::Duration, ActiveSupport::Duration], …] lock timeout for the block, sleep time before the next iteration, defaults to `Gitlab::Database::WithLockRetries::DEFAULT_TIMING_CONFIGURATION`

  • logger - [Gitlab::JsonLogger]

  • env - [Hash] custom environment hash, see the example with `DISABLE_LOCK_RETRIES`


# File 'lib/gitlab/database/migration_helpers.rb', line 333

def with_lock_retries(**args, &block)
  merged_args = {
    klass: self.class,
    logger: Gitlab::BackgroundMigration::Logger
  }.merge(args)

  Gitlab::Database::WithLockRetries.new(merged_args).run(&block)
end
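The retry schedule can be sketched without a database: each `[lock_timeout, sleep_time]` pair in the timing configuration gets one attempt, and once the pairs are exhausted a final attempt runs with no lock_timeout. A hypothetical in-memory model (names are illustrative, not the `WithLockRetries` API):

```ruby
# Builds the sequence of attempts implied by a timing configuration:
# one attempt per [lock_timeout, sleep_time] pair, then a last-ditch
# attempt with lock_timeout disabled.
def retry_schedule(timing_configuration)
  attempts = timing_configuration.map do |lock_timeout, sleep_time|
    { lock_timeout: lock_timeout, sleep_after: sleep_time }
  end
  attempts << { lock_timeout: nil, sleep_after: nil } # no timeout on the final try
end

schedule = retry_schedule([[1, 1], [2, 2]])
schedule.length              # => 3
schedule.last[:lock_timeout] # => nil
```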