Class: Ci::BuildTraceChunk
- Inherits: ApplicationRecord
  - Object
  - ActiveRecord::Base
  - ApplicationRecord
  - Ci::BuildTraceChunk
- Includes: Checksummable, Partitionable, Comparable, FastDestroyAll, Gitlab::ExclusiveLeaseHelpers, Gitlab::OptimisticLocking
- Defined in: app/models/ci/build_trace_chunk.rb
Constant Summary
- CHUNK_SIZE =
128.kilobytes
- WRITE_LOCK_RETRY =
10
- WRITE_LOCK_SLEEP =
0.01.seconds
- WRITE_LOCK_TTL =
1.minute
- FailedToPersistDataError =
Class.new(StandardError)
- DATA_STORES =
{ redis: 1, database: 2, fog: 3, redis_trace_chunks: 4 }.freeze
- STORE_TYPES =
DATA_STORES.keys.index_with do |store|
  "Ci::BuildTraceChunks::#{store.to_s.camelize}".constantize
end.freeze
- LIVE_STORES =
%i[redis redis_trace_chunks].freeze
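The store registry above is built with ActiveSupport helpers (`index_with`, `camelize`, `constantize`). A minimal plain-Ruby sketch of the same pattern, using string class names as stand-ins for the real `Ci::BuildTraceChunks::*` classes:

```ruby
# Plain-Ruby sketch of how the store registry is assembled. The string
# values stand in for the real store classes; this is not GitLab code.
CHUNK_SIZE = 128 * 1024 # 128.kilobytes without ActiveSupport

DATA_STORES = { redis: 1, database: 2, fog: 3, redis_trace_chunks: 4 }.freeze

# index_with + camelize + constantize in the real model; a plain Hash here.
STORE_TYPES = DATA_STORES.keys.to_h do |store|
  # e.g. :redis_trace_chunks -> "RedisTraceChunks"
  [store, store.to_s.split('_').map(&:capitalize).join]
end.freeze

LIVE_STORES = %i[redis redis_trace_chunks].freeze
```

Every live store is also a regular data store; `LIVE_STORES` simply marks which backends hold trace data that is still being appended to.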
Constants included from Gitlab::OptimisticLocking
Gitlab::OptimisticLocking::MAX_RETRIES
Constants included from Gitlab::ExclusiveLeaseHelpers
Gitlab::ExclusiveLeaseHelpers::FailedToObtainLockError
Constants included from FastDestroyAll
FastDestroyAll::ForbiddenActionError
Constants inherited from ApplicationRecord
Constants included from HasCheckConstraints
HasCheckConstraints::NOT_NULL_CHECK_PATTERN
Constants included from ResetOnColumnErrors
ResetOnColumnErrors::MAX_RESET_PERIOD
Class Method Summary
- .all_stores ⇒ Object
- .begin_fast_destroy ⇒ Object
  FastDestroyAll concerns.
- .finalize_fast_destroy(keys) ⇒ Object
  FastDestroyAll concerns.
- .get_store_class(store) ⇒ Object
- .metadata_attributes ⇒ Object
  Sometimes we do not want to read raw data.
- .persistable_store ⇒ Object
- .with_read_consistency(build, &block) ⇒ Object
  Sometimes we need to ensure that the first read goes to a primary database, which is especially important in EE.
Instance Method Summary
- #<=>(other) ⇒ Object
- #append(new_data, offset) ⇒ Object
- #crc32 ⇒ Object
- #data ⇒ Object
- #end_offset ⇒ Object
- #final? ⇒ Boolean
  A build trace chunk is final (the last one that we do not expect to ever become full) when a runner has submitted a build pending state and there is no chunk with a higher index in the database.
- #flushed? ⇒ Boolean
- #live? ⇒ Boolean
- #migrated? ⇒ Boolean
- #persist_data! ⇒ Object
  It is possible that we run into two concurrent migrations.
- #range ⇒ Object
- #schedule_to_persist! ⇒ Object
- #size ⇒ Object
- #start_offset ⇒ Object
- #truncate(offset = 0) ⇒ Object
Methods included from Gitlab::OptimisticLocking
log_optimistic_lock_retries, retry_lock, retry_lock_histogram, retry_lock_logger
Methods included from Gitlab::ExclusiveLeaseHelpers
Methods included from Partitionable
Methods inherited from ApplicationRecord
===, cached_column_list, #create_or_load_association, current_transaction, declarative_enum, default_select_columns, delete_all_returning, #deleted_from_database?, id_in, id_not_in, iid_in, nullable_column?, primary_key_in, #readable_by?, safe_ensure_unique, safe_find_or_create_by, safe_find_or_create_by!, #to_ability_name, underscore, where_exists, where_not_exists, with_fast_read_statement_timeout, without_order
Methods included from Organizations::Sharding
Methods included from ResetOnColumnErrors
#reset_on_union_error, #reset_on_unknown_attribute_error
Methods included from Gitlab::SensitiveSerializableHash
Class Method Details
.all_stores ⇒ Object
# File 'app/models/ci/build_trace_chunk.rb', line 55

def all_stores
  STORE_TYPES.keys
end
.begin_fast_destroy ⇒ Object
FastDestroyAll concerns
# File 'app/models/ci/build_trace_chunk.rb', line 73

def begin_fast_destroy
  all_stores.each_with_object({}) do |store, result|
    relation = public_send(store) # rubocop:disable GitlabSecurity/PublicSend
    keys = get_store_class(store).keys(relation)

    result[store] = keys if keys.present?
  end
end
.finalize_fast_destroy(keys) ⇒ Object
FastDestroyAll concerns
# File 'app/models/ci/build_trace_chunk.rb', line 84

def finalize_fast_destroy(keys)
  keys.each do |store, value|
    get_store_class(store).delete_keys(value)
  end
end
.get_store_class(store) ⇒ Object
# File 'app/models/ci/build_trace_chunk.rb', line 63

def get_store_class(store)
  store = store.to_sym

  raise "Unknown store type: #{store}" unless STORE_TYPES.key?(store)

  STORE_TYPES[store].new
end
.metadata_attributes ⇒ Object
Sometimes we do not want to read raw data. This method makes it easier to select the attributes that are just metadata, excluding the raw data.
# File 'app/models/ci/build_trace_chunk.rb', line 104

def metadata_attributes
  attribute_names - %w[raw_data]
end
.persistable_store ⇒ Object
# File 'app/models/ci/build_trace_chunk.rb', line 59

def persistable_store
  STORE_TYPES[:fog].available? ? :fog : :database
end
.with_read_consistency(build, &block) ⇒ Object
Sometimes we need to ensure that the first read goes to a primary database, which is especially important in EE. This method does not change the behavior in CE.
# File 'app/models/ci/build_trace_chunk.rb', line 95

def with_read_consistency(build, &block)
  ::Gitlab::Database::Consistency
    .with_read_consistency(&block)
end
Instance Method Details
#<=>(other) ⇒ Object
# File 'app/models/ci/build_trace_chunk.rb', line 217

def <=>(other)
  return unless build_id == other.build_id

  chunk_index <=> other.chunk_index
end
#append(new_data, offset) ⇒ Object
# File 'app/models/ci/build_trace_chunk.rb', line 124

def append(new_data, offset)
  raise ArgumentError, 'New data is missing' unless new_data
  raise ArgumentError, 'Offset is out of range' if offset < 0 || offset > size
  raise ArgumentError, 'Chunk size overflow' if CHUNK_SIZE < (offset + new_data.bytesize)

  in_lock(lock_key, **lock_params) { unsafe_append_data!(new_data, offset) }

  schedule_to_persist! if full?
end
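The three guard clauses in `#append` are worth spelling out: an append must land inside the data already written (no holes), and must not push the chunk past `CHUNK_SIZE`. A standalone sketch of just that validation logic, with plain integers standing in for a real chunk:

```ruby
# Sketch of the argument checks #append performs before taking the lock.
# `size` is the chunk's current byte size; nothing here touches storage.
CHUNK_SIZE = 128 * 1024

def validate_append!(new_data, offset, size)
  raise ArgumentError, 'New data is missing' unless new_data
  raise ArgumentError, 'Offset is out of range' if offset < 0 || offset > size
  raise ArgumentError, 'Chunk size overflow' if CHUNK_SIZE < (offset + new_data.bytesize)

  true
end
```

Appending at `offset == size` extends the chunk; appending at a smaller offset overwrites the tail, which is exactly how `#truncate` is implemented.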
#crc32 ⇒ Object
# File 'app/models/ci/build_trace_chunk.rb', line 113

def crc32
  checksum.to_i
end
#data ⇒ Object
# File 'app/models/ci/build_trace_chunk.rb', line 109

def data
  @data ||= get_data.to_s
end
#end_offset ⇒ Object
# File 'app/models/ci/build_trace_chunk.rb', line 142

def end_offset
  start_offset + size
end
#final? ⇒ Boolean
A build trace chunk is final (the last one that we do not expect to ever become full) when a runner has submitted a build pending state and there is no chunk with a higher index in the database.
# File 'app/models/ci/build_trace_chunk.rb', line 201

def final?
  build.pending_state.present? && chunks_max_index == chunk_index
end
#flushed? ⇒ Boolean
# File 'app/models/ci/build_trace_chunk.rb', line 205

def flushed?
  !live?
end
#live? ⇒ Boolean
# File 'app/models/ci/build_trace_chunk.rb', line 213

def live?
  LIVE_STORES.include?(data_store.to_sym)
end
#migrated? ⇒ Boolean
# File 'app/models/ci/build_trace_chunk.rb', line 209

def migrated?
  flushed?
end
#persist_data! ⇒ Object
It is possible that we run into two concurrent migrations. It might happen that a chunk gets migrated after being loaded by another worker, but before that worker acquires a lock to perform the migration.
We are using Redis locking to ensure that we perform this operation inside an exclusive lock, but this does not prevent us from running into race conditions related to updating the model representation in the database. Optimistic locking is another mechanism that helps here.
We are using optimistic locking combined with Redis locking to ensure that a chunk gets migrated properly.
We are using the until_executed deduplication strategy for workers, which should prevent duplicated workers running in parallel for the same build trace and causing an exception related to an exclusive lock not being acquired.
# File 'app/models/ci/build_trace_chunk.rb', line 174

def persist_data!
  in_lock(lock_key, **lock_params) do # exclusive Redis lock is acquired first
    raise FailedToPersistDataError, 'Modifed build trace chunk detected' if has_changes_to_save?

    self.class.with_read_consistency(build) do
      reset.unsafe_persist_data!
    end
  end
rescue FailedToObtainLockError
  metrics.increment_trace_operation(operation: :stalled)

  raise FailedToPersistDataError, 'Data migration failed due to a worker duplication'
rescue ActiveRecord::StaleObjectError
  raise FailedToPersistDataError, <<~MSG
    Data migration race condition detected

    store: #{data_store}
    build: #{build.id}
    index: #{chunk_index}
  MSG
end
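The two locking layers described above compose as follows: the exclusive lease serializes migrations, and the optimistic `lock_version` check catches a chunk that was already migrated before the lease was obtained. An illustrative sketch, where a `Mutex` stands in for the Redis lease and `StaleChunkError` stands in for `ActiveRecord::StaleObjectError` (none of this is the real implementation):

```ruby
# Illustrative sketch of exclusive locking layered over optimistic
# locking. A Mutex stands in for the Redis lease; the version counter
# stands in for ActiveRecord's lock_version column.
StaleChunkError = Class.new(StandardError)

class ChunkRecord
  attr_reader :lock_version

  def initialize
    @lease = Mutex.new
    @lock_version = 0
  end

  # Persist only if no other worker bumped lock_version since we loaded
  # the record; otherwise the chunk was already migrated concurrently.
  def persist!(seen_version)
    @lease.synchronize do
      raise StaleChunkError if seen_version != @lock_version

      @lock_version += 1 # the migration succeeded
    end
  end
end
```

A second worker that loaded the record before the first one persisted will hold a stale version and fail with `StaleChunkError`, mirroring how `persist_data!` converts `ActiveRecord::StaleObjectError` into `FailedToPersistDataError`.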
#range ⇒ Object
# File 'app/models/ci/build_trace_chunk.rb', line 146

def range
  (start_offset...end_offset)
end
#schedule_to_persist! ⇒ Object
# File 'app/models/ci/build_trace_chunk.rb', line 150

def schedule_to_persist!
  return if flushed?

  Ci::BuildTraceChunkFlushWorker.perform_async(id)
end
#size ⇒ Object
# File 'app/models/ci/build_trace_chunk.rb', line 134

def size
  @size ||= @data&.bytesize || current_store.size(self) || data&.bytesize
end
#start_offset ⇒ Object
# File 'app/models/ci/build_trace_chunk.rb', line 138

def start_offset
  chunk_index * CHUNK_SIZE
end
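Taken together, `#start_offset`, `#end_offset`, and `#range` define how a chunk maps onto the full build trace: chunk N covers the half-open byte range starting at N * CHUNK_SIZE. A standalone worked example of that arithmetic:

```ruby
# Worked example of the chunk offset arithmetic. `size` is the number of
# bytes currently held by the chunk (at most CHUNK_SIZE).
CHUNK_SIZE = 128 * 1024

def start_offset(chunk_index)
  chunk_index * CHUNK_SIZE
end

def end_offset(chunk_index, size)
  start_offset(chunk_index) + size
end

def range(chunk_index, size)
  # Exclusive range: end_offset is the first byte of the next chunk
  # once this chunk is full.
  (start_offset(chunk_index)...end_offset(chunk_index, size))
end
```

Because the range is exclusive, a full chunk's `end_offset` equals the next chunk's `start_offset`, so consecutive chunks tile the trace with no gaps or overlap.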
#truncate(offset = 0) ⇒ Object
# File 'app/models/ci/build_trace_chunk.rb', line 117

def truncate(offset = 0)
  raise ArgumentError, 'Offset is out of range' if offset > size || offset < 0
  return if offset == size # Skip the following process as it doesn't affect anything

  append(+"", offset)
end
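Note that `#truncate` is implemented as `append(+"", offset)`: writing zero bytes at `offset` discards everything past it, reusing the locking and persistence path of `#append`. A sketch of the same trick on a plain string buffer:

```ruby
# Sketch of the truncate-as-append trick on a plain mutable string:
# replacing the tail at `offset` with "" cuts the buffer there.
def truncate_buffer(buffer, offset = 0)
  raise ArgumentError, 'Offset is out of range' if offset > buffer.bytesize || offset < 0
  return buffer if offset == buffer.bytesize # nothing to cut

  buffer[offset..-1] = +"" # "append" zero bytes at offset
  buffer
end
```

This keeps a single write path: every mutation of chunk data, whether growing or shrinking it, goes through the same validated, lock-protected append operation.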