Module: Google::Cloud::Bigtable::MutationOperations

Included in:
Table
Defined in:
lib/google/cloud/bigtable/mutation_operations.rb

Overview

MutationOperations

Collection of mutation APIs. A minimal usage sketch follows the list below.

  • Mutate single row
  • Mutate multiple rows
  • Read, modify, and write a row atomically on the server
  • Check and mutate row
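
For example, a minimal single-row mutation, using the same table handle as the method examples below:

require "google/cloud/bigtable"

bigtable = Google::Cloud::Bigtable.new
table = bigtable.table("my-instance", "my-table")

# Apply one atomic mutation to a single row.
entry = table.new_mutation_entry("user-1")
entry.set_cell("cf1", "field1", "XYZ")
table.mutate_row(entry)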

Instance Method Summary

Instance Method Details

#check_and_mutate_row(key, predicate, on_match: nil, otherwise: nil) ⇒ Boolean

Mutates a row atomically based on the output of a predicate filter.

NOTE: Condition predicate filter is not supported.

Examples:

require "google/cloud/bigtable"

bigtable = Google::Cloud::Bigtable.new
table = bigtable.table("my-instance", "my-table")

predicate_filter = Google::Cloud::Bigtable::RowFilter.key("user-10")
on_match_mutations = Google::Cloud::Bigtable::MutationEntry.new
on_match_mutations.set_cell(
  "cf-1",
  "field-1",
  "XYZ",
  timestamp: (Time.now.to_f * 1000000).round(-3) # microseconds, rounded to millisecond granularity
).delete_cells("cf2", "field02")

otherwise_mutations = Google::Cloud::Bigtable::MutationEntry.new
otherwise_mutations.delete_from_family("cf3")

response = table.check_and_mutate_row(
  "user01",
  predicate_filter,
  on_match: on_match_mutations,
  otherwise: otherwise_mutations
)

if response
  puts "All predicates matched"
end

Parameters:

  • key (String)

    Row key. The row key of the row to which the conditional mutation should be applied.

  • predicate (SimpleFilter, ChainFilter, InterleaveFilter)

    Predicate filter. The filter to be applied to the contents of the specified row. Depending on whether or not any results are yielded, either +true_mutations+ or +false_mutations+ will be executed. If unset, checks that the row contains any values.

  • on_match (Google::Cloud::Bigtable::MutationEntry) (defaults to: nil)

    Mutation entry applied to predicate filter match. Changes to be atomically applied to the specified row if +predicate_filter+ yields at least one cell when applied to +row_key+. Entries are applied in order, meaning that earlier mutations can be masked by later ones. Must contain at least one entry if +false_mutations+ is empty and at most 100,000 entries.

  • otherwise (Google::Cloud::Bigtable::MutationEntry) (defaults to: nil)

    Mutation entry applied when predicate filter does not match. Changes to be atomically applied to the specified row if +predicate_filter+ does not yield any cells when applied to +row_key+. Entries are applied in order, meaning that earlier mutations can be masked by later ones. Must contain at least one entry if +true_mutations+ is empty and at most 100,000 entries.

Returns:

  • (Boolean)

    Whether the predicate matched.



# File 'lib/google/cloud/bigtable/mutation_operations.rb', line 236

def check_and_mutate_row \
    key,
    predicate,
    on_match: nil,
    otherwise: nil
  true_mutations = on_match.mutations if on_match
  false_mutations = otherwise.mutations if otherwise
  response = client.check_and_mutate_row(
    path,
    key,
    predicate_filter: predicate.to_grpc,
    true_mutations: true_mutations,
    false_mutations: false_mutations,
    app_profile_id: @app_profile_id
  )
  response.predicate_matched
end

#mutate_row(entry) ⇒ Boolean

Mutate row.

Mutates a row atomically. Cells already present in the row are left unchanged unless explicitly changed by the mutation entry. Mutations are applied in order, meaning that earlier mutations can be masked by later ones. The entry must contain at least one mutation and at most 100,000.

Examples:

Single mutation on row.

require "google/cloud"

bigtable = Google::Cloud::Bigtable.new

table = bigtable.table("my-instance", "my-table")

entry = table.new_mutation_entry("user-1")
entry.set_cell("cf1", "field1", "XYZ")
table.mutate_row(entry)

Multiple mutations on row.

require "google/cloud"

bigtable = Google::Cloud::Bigtable.new

table = bigtable.table("my-instance", "my-table")

entry = table.new_mutation_entry("user-1")
entry.set_cell(
  "cf-1",
  "field-1",
  "XYZ",
  timestamp: (Time.now.to_f * 1000000).round(-3) # microseconds, rounded to millisecond granularity
).delete_cells("cf2", "field02")

table.mutate_row(entry)

Parameters:

  • entry (Google::Cloud::Bigtable::MutationEntry)

    Mutation entry with a row key and a list of mutations.

Returns:

  • (Boolean)

    +true+ if the mutations were applied.


# File 'lib/google/cloud/bigtable/mutation_operations.rb', line 75

def mutate_row entry
  client.mutate_row(
    path,
    entry.row_key,
    entry.mutations,
    app_profile_id: @app_profile_id
  )
  true
end

#mutate_rows(entries) ⇒ Array<Google::Bigtable::V2::MutateRowsResponse::Entry>

Mutates multiple rows in a batch. Each individual row is mutated atomically as in MutateRow, but the entire batch is not executed atomically.

Examples:

require "google/cloud"

bigtable = Google::Cloud::Bigtable.new

table = bigtable.table("my-instance", "my-table")

entries = []
entries << table.new_mutation_entry("row-1").set_cell("cf1", "field1", "XYZ")
entries << table.new_mutation_entry("row-2").set_cell("cf1", "field1", "ABC")
table.mutate_rows(entries)
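
Each returned entry reports the outcome for one row. A sketch of inspecting the per-row status, assuming the standard google.rpc.Status fields (code, message) on each MutateRowsResponse::Entry, where code 0 means OK:

responses = table.mutate_rows(entries)

responses.each do |entry|
  if entry.status.code.zero?
    puts "entry #{entry.index} applied"
  else
    puts "entry #{entry.index} failed: #{entry.status.message}"
  end
end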

Parameters:

  • entries (Array<Google::Cloud::Bigtable::MutationEntry>)

    The row keys and corresponding mutations to be applied in bulk. Each entry is applied as an atomic mutation, but the entries may be applied in arbitrary order (even between entries for the same row). At least one entry must be specified, and in total the entries can contain a maximum of 100,000 mutations.

Returns:

  • (Array<Google::Bigtable::V2::MutateRowsResponse::Entry>)

    Response entries, one per mutated row.



# File 'lib/google/cloud/bigtable/mutation_operations.rb', line 109

def mutate_rows entries
  RowsMutator.new(self, entries).apply_mutations
end

#new_mutation_entry(row_key = nil) ⇒ Google::Cloud::Bigtable::MutationEntry

Creates a mutation entry instance.

Examples:

require "google/cloud/bigtable"

bigtable = Google::Cloud::Bigtable.new
table = bigtable.table("my-instance", "my-table")

entry = table.new_mutation_entry("row-key-1")

# Without row key
entry = table.new_mutation_entry

Parameters:

  • row_key (String) (defaults to: nil)

    Row key (optional). The row key of the row to which the mutations should be applied.

Returns:

  • (Google::Cloud::Bigtable::MutationEntry)



# File 'lib/google/cloud/bigtable/mutation_operations.rb', line 305

def new_mutation_entry row_key = nil
  Google::Cloud::Bigtable::MutationEntry.new(row_key)
end

#new_read_modify_write_rule(family, qualifier) ⇒ Google::Cloud::Bigtable::ReadModifyWriteRule

Creates an instance of ReadModifyWriteRule to append to or increment the value of the specified cell.

Examples:

Create rule to append to qualifier value.

require "google/cloud/bigtable"

bigtable = Google::Cloud::Bigtable.new
table = bigtable.table("my-instance", "my-table")
rule = table.new_read_modify_write_rule("cf", "qualifier-1")
rule.append("append-xyz")

Create rule to increment qualifier value.

require "google/cloud/bigtable"

bigtable = Google::Cloud::Bigtable.new
table = bigtable.table("my-instance", "my-table")
rule = table.new_read_modify_write_rule("cf", "qualifier-1")
rule.increment(100)

Parameters:

  • family (String)

    The name of the column family to which the read/modify/write should be applied.

  • qualifier (String)

    The qualifier of the column to which the read/modify/write should be applied.

Returns:

  • (Google::Cloud::Bigtable::ReadModifyWriteRule)



# File 'lib/google/cloud/bigtable/mutation_operations.rb', line 334

def new_read_modify_write_rule family, qualifier
  Google::Cloud::Bigtable::ReadModifyWriteRule.new(family, qualifier)
end

#read_modify_write_row(key, rules) ⇒ Google::Cloud::Bigtable::Row

Modifies a row atomically on the server. The method reads the latest existing timestamp and value from the specified columns and writes a new entry based on pre-defined read/modify/write rules. The new value for the timestamp is the greater of the existing timestamp or the current server time. The method returns the new contents of all modified cells.

Examples:

Apply multiple modification rules.

require "google/cloud/bigtable"

bigtable = Google::Cloud::Bigtable.new
table = bigtable.table("my-instance", "my-table")

rule_1 = table.new_read_modify_write_rule("cf", "field01")
rule_1.append("append-xyz")

rule_2 = table.new_read_modify_write_rule("cf", "field01")
rule_2.increment(1)

row = table.read_modify_write_row("user01", [rule_1, rule_2])

puts row.cells

Apply a single modification rule.

require "google/cloud/bigtable"

bigtable = Google::Cloud::Bigtable.new
table = bigtable.table("my-instance", "my-table")

rule = table.new_read_modify_write_rule("cf", "field01").append("append-xyz")

row = table.read_modify_write_row("user01", rule)

puts row.cells
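
The returned row groups cells by column family name (see the source below). Assuming Row::Cell exposes readers matching the constructor arguments shown there (qualifier, value), the new cell values can be read back like this:

row = table.read_modify_write_row("user01", rule)

row.cells["cf"].each do |cell|
  puts "#{cell.qualifier}: #{cell.value}"
end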

Parameters:

  • key (String)

    Row key. The row key of the row to which the read/modify/write rules should be applied.

  • rules (Google::Cloud::Bigtable::ReadModifyWriteRule, Array<Google::Cloud::Bigtable::ReadModifyWriteRule>)

    Rule or array of rules to apply to the specified row. Rules are applied in order; earlier rules affect the results of later ones.

Returns:

  • (Google::Cloud::Bigtable::Row)

    The new contents of all modified cells.



# File 'lib/google/cloud/bigtable/mutation_operations.rb', line 154

def read_modify_write_row key, rules
  res_row = client.read_modify_write_row(
    path,
    key,
    Array(rules).map(&:to_grpc),
    app_profile_id: @app_profile_id
  ).row
  row = Row.new(res_row.key)

  res_row.families.each do |family|
    family.columns.each do |column|
      column.cells.each do |cell|
        row_cell = Row::Cell.new(
          family.name,
          column.qualifier,
          cell.timestamp_micros,
          cell.value,
          cell.labels
        )
        row.cells[family.name] << row_cell
      end
    end
  end
  row
end

#sample_row_keys {|sample_row_key| ... }

Read sample row keys.

Returns a sample of row keys in the table. The returned row keys will delimit contiguous sections of the table of approximately equal size. The sections can be used to break up the data for distributed tasks like MapReduces.

Examples:

require "google/cloud"

bigtable = Google::Cloud::Bigtable.new
table = bigtable.table("my-instance", "my-table")

table.sample_row_keys.each do |sample_row_key|
  p sample_row_key.key # user00116
  p sample_row_key.offset # 805306368
end
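
Because the sampled keys delimit contiguous, roughly equal-sized sections, consecutive keys can be paired into scan boundaries for parallel work. The range building below is illustrative application code, not a library API:

keys = table.sample_row_keys.map(&:key)

# Open-ended boundaries at the start and end of the table.
boundaries = [nil, *keys, nil]

boundaries.each_cons(2) do |start_key, end_key|
  puts "section: #{start_key.inspect} .. #{end_key.inspect}"
end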

Yield Returns:

  • (Google::Cloud::Bigtable::SampleRowKey)

    Sample row key yielded for each section of the table.

Returns:

  • (Enumerator)

    An enumerator over the sample row keys when no block is given.



# File 'lib/google/cloud/bigtable/mutation_operations.rb', line 276

def sample_row_keys
  return enum_for(:sample_row_keys) unless block_given?

  response = client.sample_row_keys(
    path,
    app_profile_id: @app_profile_id
  )
  response.each do |grpc|
    yield SampleRowKey.from_grpc(grpc)
  end
end