Class: Google::Cloud::Spanner::Client
- Inherits: Object
- Defined in: lib/google/cloud/spanner/client.rb
Overview
A client is used to read and/or modify data in a Cloud Spanner database. The methods below cover blind single-use writes, single-use reads and queries, read-only snapshots, batched writes, and read-write transactions against a single database.
Instance Method Summary
-
#batch_write(exclude_txn_from_change_streams: false, request_options: nil, call_options: nil) {|batch_write| ... } ⇒ Google::Cloud::Spanner::BatchWriteResults
Batches the supplied mutation groups in a collection of efficient transactions.
-
#close ⇒ Object
Closes the client connection and releases resources.
-
#commit(exclude_txn_from_change_streams: false, isolation_level: nil, commit_options: nil, request_options: nil, call_options: nil, read_lock_mode: nil) {|commit| ... } ⇒ Time, CommitResponse
Creates and commits a transaction for writes that execute atomically at a single logical point in time across columns, rows, and tables in a database.
-
#commit_timestamp ⇒ ColumnValue
Creates a column value object representing setting a field's value to the timestamp of the commit.
-
#database ⇒ Database
The Spanner database connected to.
-
#database_id ⇒ String
The unique identifier for the database.
-
#database_role ⇒ String
The Spanner session creator role.
-
#delete(table, keys = [], exclude_txn_from_change_streams: false, isolation_level: nil, commit_options: nil, request_options: nil, call_options: nil, read_lock_mode: nil) ⇒ Time, CommitResponse
Deletes rows from a table.
-
#directed_read_options ⇒ Hash
A hash of values to specify the custom directed read options for executing SQL queries.
-
#execute_partition_update(sql, params: nil, types: nil, exclude_txn_from_change_streams: false, query_options: nil, request_options: nil, call_options: nil) ⇒ Integer
(also: #execute_pdml)
Executes a Partitioned DML SQL statement.
-
#execute_query(sql, params: nil, types: nil, single_use: nil, query_options: nil, request_options: nil, call_options: nil, directed_read_options: nil) ⇒ Google::Cloud::Spanner::Results
(also: #execute, #query, #execute_sql)
Executes a SQL query.
-
#fields(types) ⇒ Fields
Creates a configuration object (Fields) that may be provided to queries or used to create STRUCT objects.
-
#insert(table, rows, exclude_txn_from_change_streams: false, isolation_level: nil, commit_options: nil, request_options: nil, call_options: nil, read_lock_mode: nil) ⇒ Time, CommitResponse
Inserts new rows in a table.
-
#instance ⇒ Instance
The Spanner instance connected to.
-
#instance_id ⇒ String
The unique identifier for the instance.
-
#project ⇒ ::Google::Cloud::Spanner::Project
The Spanner project connected to.
-
#project_id ⇒ String
The unique identifier for the project.
-
#query_options ⇒ Hash
A hash of values to specify the custom query options for executing SQL queries.
-
#range(beginning, ending, exclude_begin: false, exclude_end: false) ⇒ Google::Cloud::Spanner::Range
Creates a Spanner Range.
-
#read(table, columns, keys: nil, index: nil, limit: nil, single_use: nil, request_options: nil, call_options: nil, directed_read_options: nil, order_by: nil, lock_hint: nil) ⇒ Google::Cloud::Spanner::Results
Read rows from a database table, as a simple alternative to #execute_query.
-
#replace(table, rows, exclude_txn_from_change_streams: false, isolation_level: nil, commit_options: nil, request_options: nil, call_options: nil, read_lock_mode: nil) ⇒ Time, CommitResponse
Inserts or replaces rows in a table.
-
#reset! ⇒ Object
Reset the client sessions.
-
#snapshot(strong: nil, timestamp: nil, read_timestamp: nil, staleness: nil, exact_staleness: nil, call_options: nil) {|snapshot| ... } ⇒ Object
Creates a snapshot read-only transaction for reads that execute atomically at a single logical point in time across columns, rows, and tables in a database.
-
#transaction(deadline: 120, exclude_txn_from_change_streams: false, commit_options: nil, request_options: nil, call_options: nil, read_lock_mode: nil) {|transaction| ... } ⇒ Time, CommitResponse
Creates a transaction for reads and writes that execute atomically at a single logical point in time across columns, rows, and tables in a database.
-
#update(table, rows, exclude_txn_from_change_streams: false, isolation_level: nil, commit_options: nil, request_options: nil, call_options: nil, read_lock_mode: nil) ⇒ Time, CommitResponse
Updates existing rows in a table.
-
#upsert(table, rows, exclude_txn_from_change_streams: false, isolation_level: nil, commit_options: nil, request_options: nil, call_options: nil, read_lock_mode: nil) ⇒ Time, CommitResponse
(also: #save)
Inserts or updates rows in a table.
Instance Method Details
#batch_write(exclude_txn_from_change_streams: false, request_options: nil, call_options: nil) {|batch_write| ... } ⇒ Google::Cloud::Spanner::BatchWriteResults
Batches the supplied mutation groups in a collection of efficient transactions.
All mutations in a group are committed atomically. However, mutations across groups can be committed non-atomically in an unspecified order and thus they must be independent of each other. Partial failure is possible, i.e., some groups may have been committed successfully, while others may have failed. The results of individual batches are streamed into the response as the batches are applied.
BatchWrite requests are not replay protected, meaning that each mutation group may be applied more than once. Replays of non-idempotent mutations may have undesirable effects. For example, replays of an insert mutation may produce an already-exists error, or, if you use generated or commit-timestamp-based keys, may result in additional rows being added to the mutation's table. We recommend structuring your mutation groups to be idempotent to avoid this issue.
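For illustration, a minimal sketch of grouping idempotent mutations; the instance, database, "users" table, and the per-group response accessors (indexes, ok?) are assumptions, not verified output of this library version.
require "google/cloud/spanner"

spanner = Google::Cloud::Spanner.new
db = spanner.client "my-instance", "my-database"

results = db.batch_write do |b|
  # Each mutation group commits atomically on its own; groups are independent.
  b.mutation_group do |mg|
    mg.upsert "users", [{ id: 1, name: "Charlie", active: false }]
  end
  b.mutation_group do |mg|
    mg.upsert "users", [{ id: 2, name: "Harvey", active: true }]
  end
end

# Per-group results are streamed back as batches are applied.
results.each do |response|
  puts "applied groups: #{response.indexes}" if response.ok?
end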
# File 'lib/google/cloud/spanner/client.rb', line 2026

def batch_write exclude_txn_from_change_streams: false,
                request_options: nil, call_options: nil, &block
  raise ArgumentError, "Must provide a block" unless block_given?

  @pool.with_session do |session|
    session.batch_write(
      exclude_txn_from_change_streams: exclude_txn_from_change_streams,
      request_options: request_options,
      call_options: call_options,
      &block
    )
  end
end
#close ⇒ Object
Closes the client connection and releases resources.
# File 'lib/google/cloud/spanner/client.rb', line 2576

def close
  @pool.close
end
#commit(exclude_txn_from_change_streams: false, isolation_level: nil, commit_options: nil, request_options: nil, call_options: nil, read_lock_mode: nil) {|commit| ... } ⇒ Time, CommitResponse
Creates and commits a transaction for writes that execute atomically at a single logical point in time across columns, rows, and tables in a database.
All changes are accumulated in memory until the block completes. Unlike #transaction, which can also perform reads, this operation accepts only mutations and makes a single API request.
Note: This method does not feature replay protection present in #transaction. This method makes a single RPC, whereas #transaction requires two RPCs (one of which may be performed in advance), and so this method may be appropriate for latency sensitive and/or high throughput blind changes.
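As a minimal sketch of a single-request, mutation-only commit (the instance, database, and "users" schema are assumptions):
require "google/cloud/spanner"

spanner = Google::Cloud::Spanner.new
db = spanner.client "my-instance", "my-database"

# All mutations added in the block are sent in a single Commit RPC.
timestamp = db.commit do |c|
  c.update "users", [{ id: 1, name: "Charlie", active: false }]
  c.insert "users", [{ id: 2, name: "Harvey",  active: true }]
end
puts "committed at #{timestamp}"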
# File 'lib/google/cloud/spanner/client.rb', line 1923

def commit exclude_txn_from_change_streams: false, isolation_level: nil,
           commit_options: nil, request_options: nil, call_options: nil,
           read_lock_mode: nil, &block
  raise ArgumentError, "Must provide a block" unless block_given?

  request_options = Convert.to_request_options \
    request_options, tag_type: :transaction_tag

  @pool.with_session do |session|
    session.commit(
      exclude_txn_from_change_streams: exclude_txn_from_change_streams,
      isolation_level: isolation_level,
      commit_options: commit_options,
      request_options: request_options,
      call_options: call_options,
      read_lock_mode: read_lock_mode,
      &block
    )
  end
end
#commit_timestamp ⇒ ColumnValue
Creates a column value object representing setting a field's value to the timestamp of the commit. (See Google::Cloud::Spanner::ColumnValue.commit_timestamp)
This placeholder value can only be used for timestamp columns that have set the option "(allow_commit_timestamp=true)" in the schema.
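A hedged sketch of writing the commit timestamp into a column; it assumes a "visits" table whose ts column was declared with (allow_commit_timestamp=true):
require "google/cloud/spanner"

spanner = Google::Cloud::Spanner.new
db = spanner.client "my-instance", "my-database"

# Placeholder value that the server resolves to the commit time.
commit_ts = db.commit_timestamp

db.commit do |c|
  c.insert "visits", [{ visitor_id: 1, ts: commit_ts }]
end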
# File 'lib/google/cloud/spanner/client.rb', line 2569

def commit_timestamp
  ColumnValue.commit_timestamp
end
#database ⇒ Database
The Spanner database connected to.
# File 'lib/google/cloud/spanner/client.rb', line 133

def database
  @project.database instance_id, database_id
end
#database_id ⇒ String
The unique identifier for the database.
# File 'lib/google/cloud/spanner/client.rb', line 115

def database_id
  @database_id
end
#database_role ⇒ String
The Spanner session creator role.
# File 'lib/google/cloud/spanner/client.rb', line 139

def database_role
  @database_role
end
#delete(table, keys = [], exclude_txn_from_change_streams: false, isolation_level: nil, commit_options: nil, request_options: nil, call_options: nil, read_lock_mode: nil) ⇒ Time, CommitResponse
Deletes rows from a table. Succeeds whether or not the specified rows were present.
Changes are made immediately upon calling this method using a single-use transaction. To make multiple changes in the same single-use transaction use #commit. To make changes in a transaction that supports reads and automatic retry protection use #transaction.
Note: This method does not feature replay protection present in Transaction#delete (See #transaction). This method makes a single RPC, whereas Transaction#delete requires two RPCs (one of which may be performed in advance), and so this method may be appropriate for latency sensitive and/or high throughput blind deletions.
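A minimal blind, single-use delete sketch (the instance, database, table, and key values are assumptions):
require "google/cloud/spanner"

spanner = Google::Cloud::Spanner.new
db = spanner.client "my-instance", "my-database"

# Deletes the rows with primary keys 1, 2, and 3, if they exist.
db.delete "users", [1, 2, 3]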
# File 'lib/google/cloud/spanner/client.rb', line 1781

def delete table, keys = [], exclude_txn_from_change_streams: false,
           isolation_level: nil, commit_options: nil, request_options: nil,
           call_options: nil, read_lock_mode: nil
  request_options = Convert.to_request_options \
    request_options, tag_type: :transaction_tag

  @pool.with_session do |session|
    session.delete table, keys,
                   exclude_txn_from_change_streams: exclude_txn_from_change_streams,
                   isolation_level: isolation_level,
                   commit_options: commit_options,
                   request_options: request_options,
                   call_options: call_options,
                   read_lock_mode: read_lock_mode
  end
end
#directed_read_options ⇒ Hash
A hash of values to specify the custom directed read options for executing SQL queries.
# File 'lib/google/cloud/spanner/client.rb', line 153

def directed_read_options
  @directed_read_options
end
#execute_partition_update(sql, params: nil, types: nil, exclude_txn_from_change_streams: false, query_options: nil, request_options: nil, call_options: nil) ⇒ Integer Also known as: execute_pdml
Executes a Partitioned DML SQL statement.
Partitioned DML is an alternate implementation with looser semantics to enable large-scale changes without running into transaction size limits or (accidentally) locking the entire table in one large transaction. At a high level, it partitions the keyspace and executes the statement on each partition in separate internal transactions.
Partitioned DML does not guarantee database-wide atomicity of the statement - it guarantees row-based atomicity, which includes updates to any indices. Additionally, it does not guarantee that it will execute exactly one time against each row - it guarantees "at least once" semantics.
Where DML statements must be executed using Transaction (see Transaction#execute_update), Partitioned DML statements are executed outside of a read/write transaction.
Not all DML statements can be executed in the Partitioned DML mode and the backend will return an error for the statements which are not supported.
DML statements must be fully-partitionable. Specifically, the statement must be expressible as the union of many statements which each access only a single row of the table. InvalidArgumentError is raised if the statement does not qualify.
The method will block until the update is complete. Running a DML statement with this method does not offer exactly-once semantics, so the DML statement should be idempotent. The statement must also be fully-partitionable: it must be expressible as the union of many statements which each access only a single row of the table. This is a Partitioned DML transaction in which a single Partitioned DML statement is executed. Partitioned DML partitions the keyspace and runs the DML statement over each partition in parallel using separate, internal transactions that commit independently. Partitioned DML transactions do not need to be committed.
Partitioned DML updates are used to execute a single DML statement with a different execution strategy that provides different, and often better, scalability properties for large, table-wide operations than DML in a Transaction#execute_update transaction. Smaller scoped statements, such as an OLTP workload, should prefer using Transaction#execute_update.
That said, Partitioned DML is not a drop-in replacement for standard DML used in Transaction#execute_update.
- The DML statement must be fully-partitionable. Specifically, the statement must be expressible as the union of many statements which each access only a single row of the table.
- The statement is not applied atomically to all rows of the table. Rather, the statement is applied atomically to partitions of the table, in independent internal transactions. Secondary index rows are updated atomically with the base table rows.
- Partitioned DML does not guarantee exactly-once execution semantics against a partition. The statement will be applied at least once to each partition. It is strongly recommended that the DML statement be idempotent to avoid unexpected results. For instance, it is potentially dangerous to run a statement such as UPDATE table SET column = column + 1 as it could be run multiple times against some rows.
- The partitions are committed automatically - there is no support for Commit or Rollback. If the call returns an error, or if the client issuing the DML statement dies, it is possible that some rows had the statement executed on them successfully. It is also possible that the statement was never executed against other rows.
- If any error is encountered during the execution of the partitioned DML operation (for instance, a UNIQUE INDEX violation, division by zero, or a value that cannot be stored due to schema constraints), then the operation is stopped at that point and an error is returned. It is possible that at this point, some partitions have been committed (or even committed multiple times), and other partitions have not been run at all.
Given the above, Partitioned DML is a good fit for large, database-wide operations that are idempotent, such as deleting old rows from a very large table.
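As a sketch of an idempotent, table-wide statement suited to Partitioned DML (the instance, database, and "users" schema are assumptions):
require "google/cloud/spanner"

spanner = Google::Cloud::Spanner.new
db = spanner.client "my-instance", "my-database"

# Idempotent statement: re-running it against a row yields the same result.
row_count = db.execute_partition_update \
  "UPDATE users SET friends = NULL WHERE active = @active",
  params: { active: false }
puts "lower bound of rows updated: #{row_count}"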
# File 'lib/google/cloud/spanner/client.rb', line 773

def execute_partition_update sql, params: nil, types: nil,
                             exclude_txn_from_change_streams: false,
                             query_options: nil, request_options: nil,
                             call_options: nil
  ensure_service!
  params, types = Convert.to_input_params_and_types params, types
  request_options = Convert.to_request_options request_options,
                                                tag_type: :request_tag
  route_to_leader = LARHeaders.partition_query
  results = nil
  @pool.with_session do |session|
    transaction = pdml_transaction session,
                                    exclude_txn_from_change_streams: exclude_txn_from_change_streams
    results = session.execute_query \
      sql, params: params, types: types, transaction: transaction,
      query_options: query_options, request_options: request_options,
      call_options: call_options, route_to_leader: route_to_leader
  end

  # Stream all PartialResultSet to get ResultSetStats
  results.rows.to_a
  # Raise an error if there is not a row count returned
  if results.row_count.nil?
    raise Google::Cloud::InvalidArgumentError,
          "Partitioned DML statement is invalid."
  end
  results.row_count
end
#execute_query(sql, params: nil, types: nil, single_use: nil, query_options: nil, request_options: nil, call_options: nil, directed_read_options: nil) ⇒ Google::Cloud::Spanner::Results Also known as: execute, query, execute_sql
Executes a SQL query.
The following settings can be provided:
- :exclude_replicas (Hash) - indicates which replicas should be excluded from serving requests. Spanner will not route requests to the replicas in this list.
- :include_replicas (Hash) - indicates the order of replicas in which to process the request. If auto_failover_disabled is set to true and all replicas are exhausted without finding a healthy replica, Spanner will wait for a replica in the list to become available; requests may fail due to DEADLINE_EXCEEDED errors.
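A minimal query sketch with a named parameter (the instance, database, and "users" schema are assumptions):
require "google/cloud/spanner"

spanner = Google::Cloud::Spanner.new
db = spanner.client "my-instance", "my-database"

results = db.execute_query "SELECT * FROM users WHERE active = @active",
                           params: { active: true }

results.rows.each do |row|
  puts "User #{row[:id]} is #{row[:name]}"
end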
# File 'lib/google/cloud/spanner/client.rb', line 493

def execute_query sql, params: nil, types: nil, single_use: nil,
                  query_options: nil, request_options: nil,
                  call_options: nil, directed_read_options: nil
  validate_single_use_args! single_use
  ensure_service!

  params, types = Convert.to_input_params_and_types params, types
  request_options = Convert.to_request_options request_options,
                                                tag_type: :request_tag
  single_use_tx = single_use_transaction single_use
  route_to_leader = LARHeaders.execute_query false
  results = nil
  @pool.with_session do |session|
    results = session.execute_query \
      sql, params: params, types: types, transaction: single_use_tx,
      query_options: query_options, request_options: request_options,
      call_options: call_options,
      directed_read_options: directed_read_options || @directed_read_options,
      route_to_leader: route_to_leader
  end
  results
end
#fields(types) ⇒ Fields
Creates a configuration object (Fields) that may be provided to queries or used to create STRUCT objects. (The STRUCT will be represented by the Data class.) See #execute and/or Fields#struct.
For more information, see Data Types - Constructing a STRUCT.
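A hedged sketch of describing a STRUCT type with #fields and passing a matching value as a query parameter; the "users" schema and the struct field access in the SQL are assumptions:
require "google/cloud/spanner"

spanner = Google::Cloud::Spanner.new
db = spanner.client "my-instance", "my-database"

# Describe the STRUCT's fields, then build a Data value from it.
user_type = db.fields id: :INT64, name: :STRING, active: :BOOL
user_data = user_type.struct id: 42, name: nil, active: false

results = db.execute_query "SELECT * FROM users " \
                           "WHERE id = @u.id AND active = @u.active",
                           params: { u: user_data }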
# File 'lib/google/cloud/spanner/client.rb', line 2481

def fields types
  Fields.new types
end
#insert(table, rows, exclude_txn_from_change_streams: false, isolation_level: nil, commit_options: nil, request_options: nil, call_options: nil, read_lock_mode: nil) ⇒ Time, CommitResponse
Inserts new rows in a table. If any of the rows already exist, the write or request fails with AlreadyExistsError.
Changes are made immediately upon calling this method using a single-use transaction. To make multiple changes in the same single-use transaction use #commit. To make changes in a transaction that supports reads and automatic retry protection use #transaction.
Note: This method does not feature replay protection present in Transaction#insert (See #transaction). This method makes a single RPC, whereas Transaction#insert requires two RPCs (one of which may be performed in advance), and so this method may be appropriate for latency sensitive and/or high throughput blind inserts.
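A minimal blind-insert sketch (the instance, database, and "users" schema are assumptions):
require "google/cloud/spanner"

spanner = Google::Cloud::Spanner.new
db = spanner.client "my-instance", "my-database"

# Fails with AlreadyExistsError if a row with the same key already exists.
db.insert "users", [{ id: 1, name: "Charlie", active: false },
                    { id: 2, name: "Harvey",  active: true }]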
# File 'lib/google/cloud/spanner/client.rb', line 1333

def insert table, rows, exclude_txn_from_change_streams: false,
           isolation_level: nil, commit_options: nil, request_options: nil,
           call_options: nil, read_lock_mode: nil
  request_options = Convert.to_request_options \
    request_options, tag_type: :transaction_tag

  @pool.with_session do |session|
    session.insert table, rows,
                   exclude_txn_from_change_streams: exclude_txn_from_change_streams,
                   isolation_level: isolation_level,
                   commit_options: commit_options,
                   request_options: request_options,
                   call_options: call_options,
                   read_lock_mode: read_lock_mode
  end
end
#instance ⇒ Instance
The Spanner instance connected to.
# File 'lib/google/cloud/spanner/client.rb', line 127

def instance
  @project.instance instance_id
end
#instance_id ⇒ String
The unique identifier for the instance.
# File 'lib/google/cloud/spanner/client.rb', line 109

def instance_id
  @instance_id
end
#project ⇒ ::Google::Cloud::Spanner::Project
The Spanner project connected to.
# File 'lib/google/cloud/spanner/client.rb', line 121

def project
  @project
end
#project_id ⇒ String
The unique identifier for the project.
# File 'lib/google/cloud/spanner/client.rb', line 103

def project_id
  @project.service.project
end
#query_options ⇒ Hash
A hash of values to specify the custom query options for executing SQL queries.
# File 'lib/google/cloud/spanner/client.rb', line 146

def query_options
  @query_options
end
#range(beginning, ending, exclude_begin: false, exclude_end: false) ⇒ Google::Cloud::Spanner::Range
Creates a Spanner Range. This can be used in place of a Ruby Range when needing to exclude the beginning value.
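A small sketch of an exclusive-beginning range used as a key set (the instance, database, table, and key values are assumptions):
require "google/cloud/spanner"

spanner = Google::Cloud::Spanner.new
db = spanner.client "my-instance", "my-database"

# Keys greater than 1, up to and including 100.
key_range = db.range 1, 100, exclude_begin: true

results = db.read "users", [:id, :name], keys: key_range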
# File 'lib/google/cloud/spanner/client.rb', line 2538

def range beginning, ending, exclude_begin: false, exclude_end: false
  Range.new beginning, ending,
            exclude_begin: exclude_begin,
            exclude_end: exclude_end
end
#read(table, columns, keys: nil, index: nil, limit: nil, single_use: nil, request_options: nil, call_options: nil, directed_read_options: nil, order_by: nil, lock_hint: nil) ⇒ Google::Cloud::Spanner::Results
Read rows from a database table, as a simple alternative to #execute_query.
The following settings can be provided:
- :exclude_replicas (Hash) - indicates which replicas should be excluded from serving requests. Spanner will not route requests to the replicas in this list.
- :include_replicas (Hash) - indicates the order of replicas in which to process the request. If auto_failover_disabled is set to true and all replicas are exhausted without finding a healthy replica, Spanner will wait for a replica in the list to become available; requests may fail due to DEADLINE_EXCEEDED errors.
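A minimal key-based read sketch (the instance, database, table, columns, and keys are assumptions):
require "google/cloud/spanner"

spanner = Google::Cloud::Spanner.new
db = spanner.client "my-instance", "my-database"

results = db.read "users", [:id, :name], keys: 1..5

results.rows.each do |row|
  puts "User #{row[:id]} is #{row[:name]}"
end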
# File 'lib/google/cloud/spanner/client.rb', line 1003

def read table, columns, keys: nil, index: nil, limit: nil,
         single_use: nil, request_options: nil, call_options: nil,
         directed_read_options: nil, order_by: nil, lock_hint: nil
  validate_single_use_args! single_use
  ensure_service!

  columns = Array(columns).map(&:to_s)
  keys = Convert.to_key_set keys
  single_use_tx = single_use_transaction single_use
  route_to_leader = LARHeaders.read false
  request_options = Convert.to_request_options request_options,
                                                tag_type: :request_tag
  results = nil
  @pool.with_session do |session|
    results = session.read \
      table, columns, keys: keys, index: index, limit: limit,
      transaction: single_use_tx, request_options: request_options,
      call_options: call_options,
      directed_read_options: directed_read_options || @directed_read_options,
      route_to_leader: route_to_leader, order_by: order_by,
      lock_hint: lock_hint
  end
  results
end
#replace(table, rows, exclude_txn_from_change_streams: false, isolation_level: nil, commit_options: nil, request_options: nil, call_options: nil, read_lock_mode: nil) ⇒ Time, CommitResponse
Inserts or replaces rows in a table. If any of the rows already exist, they are deleted, and the column values provided are inserted instead. Unlike #upsert, this means any values not explicitly written become NULL.
Changes are made immediately upon calling this method using a single-use transaction. To make multiple changes in the same single-use transaction use #commit. To make changes in a transaction that supports reads and automatic retry protection use #transaction.
Note: This method does not feature replay protection present in Transaction#replace (See #transaction). This method makes a single RPC, whereas Transaction#replace requires two RPCs (one of which may be performed in advance), and so this method may be appropriate for latency sensitive and/or high throughput blind replaces.
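A minimal blind-replace sketch (the instance, database, and "users" schema are assumptions):
require "google/cloud/spanner"

spanner = Google::Cloud::Spanner.new
db = spanner.client "my-instance", "my-database"

# Existing rows with these keys are deleted and re-inserted with only the
# columns given here; unlisted columns (such as active for row 1) become NULL.
db.replace "users", [{ id: 1, name: "Charlie" },
                     { id: 2, name: "Harvey", active: true }]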
# File 'lib/google/cloud/spanner/client.rb', line 1651

def replace table, rows, exclude_txn_from_change_streams: false,
            isolation_level: nil, commit_options: nil, request_options: nil,
            call_options: nil, read_lock_mode: nil
  @pool.with_session do |session|
    session.replace table, rows,
                    exclude_txn_from_change_streams: exclude_txn_from_change_streams,
                    isolation_level: isolation_level,
                    commit_options: commit_options,
                    request_options: request_options,
                    call_options: call_options,
                    read_lock_mode: read_lock_mode
  end
end
#reset! ⇒ Object
Reset the client sessions.
# File 'lib/google/cloud/spanner/client.rb', line 2583

def reset!
  @pool.reset!
end
#snapshot(strong: nil, timestamp: nil, read_timestamp: nil, staleness: nil, exact_staleness: nil, call_options: nil) {|snapshot| ... } ⇒ Object
Creates a snapshot read-only transaction for reads that execute atomically at a single logical point in time across columns, rows, and tables in a database. For transactions that only read, snapshot read-only transactions provide simpler semantics and are almost always faster than read-write transactions.
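A minimal read-only snapshot sketch in which a query and a read observe the same point in time (the instance, database, and "users" schema are assumptions):
require "google/cloud/spanner"

spanner = Google::Cloud::Spanner.new
db = spanner.client "my-instance", "my-database"

db.snapshot do |snp|
  # Both operations see the database at the same logical timestamp.
  query_results = snp.execute_query "SELECT * FROM users WHERE active = true"
  read_results  = snp.read "users", [:id, :name], keys: 1..5
end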
# File 'lib/google/cloud/spanner/client.rb', line 2384

def snapshot strong: nil, timestamp: nil, read_timestamp: nil,
             staleness: nil, exact_staleness: nil, call_options: nil
  validate_snapshot_args! strong: strong, timestamp: timestamp,
                          read_timestamp: read_timestamp,
                          staleness: staleness,
                          exact_staleness: exact_staleness
  ensure_service!
  unless Thread.current[IS_TRANSACTION_RUNNING_KEY].nil?
    raise "Nested snapshots are not allowed"
  end

  @pool.with_session do |session|
    snp_grpc = @project.service.create_snapshot \
      session.path, strong: strong,
                    timestamp: timestamp || read_timestamp,
                    staleness: staleness || exact_staleness,
                    call_options: call_options
    Thread.current[IS_TRANSACTION_RUNNING_KEY] = true
    snp = Snapshot.from_grpc snp_grpc, session,
                             directed_read_options: @directed_read_options
    yield snp if block_given?
  ensure
    Thread.current[IS_TRANSACTION_RUNNING_KEY] = nil
  end
  nil
end
#transaction(deadline: 120, exclude_txn_from_change_streams: false, commit_options: nil, request_options: nil, call_options: nil, read_lock_mode: nil) {|transaction| ... } ⇒ Time, CommitResponse
Creates a transaction for reads and writes that execute atomically at a single logical point in time across columns, rows, and tables in a database.
The transaction will always commit unless an error is raised. If the error raised is Rollback, the transaction method will return without passing on the error. All other errors will be passed on.
All changes are accumulated in memory until the block completes. Transactions will be automatically retried when possible, until deadline is reached. This operation makes separate API requests to begin and commit the transaction.
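A minimal read-write transaction sketch; because the block may be retried, it should avoid side effects outside the transaction (the instance, database, and "users" schema are assumptions):
require "google/cloud/spanner"

spanner = Google::Cloud::Spanner.new
db = spanner.client "my-instance", "my-database"

db.transaction do |tx|
  results = tx.execute_query "SELECT id FROM users WHERE active = @active",
                             params: { active: true }
  results.rows.each do |row|
    # Buffered mutations commit together when the block completes.
    tx.update "users", [{ id: row[:id], active: false }]
  end
end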
# File 'lib/google/cloud/spanner/client.rb', line 2215

def transaction deadline: 120, exclude_txn_from_change_streams: false,
                commit_options: nil, request_options: nil,
                call_options: nil, read_lock_mode: nil
  ensure_service!
  unless Thread.current[IS_TRANSACTION_RUNNING_KEY].nil?
    raise "Nested transactions are not allowed"
  end

  deadline = validate_deadline deadline
  backoff = 1.0
  start_time = current_time

  request_options = Convert.to_request_options \
    request_options, tag_type: :transaction_tag

  @pool.with_session do |session|
    tx = session.create_empty_transaction \
      exclude_txn_from_change_streams: exclude_txn_from_change_streams,
      read_lock_mode: read_lock_mode
    if request_options
      tx.transaction_tag = request_options[:transaction_tag]
    end

    begin
      Thread.current[IS_TRANSACTION_RUNNING_KEY] = true
      yield tx

      unless tx.existing_transaction?
        # This typically will happen if the yielded `tx` object was only used to add mutations.
        # Then it never called any RPCs and didn't create a server-side Transaction object.
        # In which case we should make an explicit BeginTransaction call here.
        tx.safe_begin_transaction!(
          exclude_from_change_streams: exclude_txn_from_change_streams,
          request_options: request_options,
          call_options: call_options,
          read_lock_mode: read_lock_mode
        )
      end

      transaction_id = tx.transaction_id

      # This "inner retry" mechanism is for Commit Response protocol.
      # Unlike the retry on `Aborted` errors it will not re-create a transaction.
      # This is intentional, as these retries are not related to e.g.
      # transactions deadlocking, so it's OK to retry "as-is".
      should_retry = true
      while should_retry
        commit_resp = @project.service.commit(
          tx.session.path, tx.mutations,
          transaction_id: transaction_id,
          exclude_txn_from_change_streams: exclude_txn_from_change_streams,
          commit_options: commit_options,
          request_options: request_options,
          call_options: call_options,
          precommit_token: tx.precommit_token,
          read_lock_mode: read_lock_mode
        )
        tx.precommit_token = commit_resp.precommit_token
        should_retry = !commit_resp.precommit_token.nil?
      end

      resp = CommitResponse.from_grpc commit_resp
      commit_options ? resp : resp.timestamp
    rescue GRPC::Aborted, Google::Cloud::AbortedError,
           GRPC::Internal, Google::Cloud::InternalError => e
      check_and_propagate_err! e, (current_time - start_time > deadline)
      # Sleep the amount from RetryDelay, or incremental backoff
      sleep(delay_from_aborted(e) || backoff *= 1.3)
      # Create new transaction on the session and retry the block
      previous_transaction_id = tx.transaction_id if tx.existing_transaction?
      tx = session.create_empty_transaction(
        exclude_txn_from_change_streams: exclude_txn_from_change_streams,
        previous_transaction_id: previous_transaction_id,
        read_lock_mode: read_lock_mode
      )
      if request_options
        tx.transaction_tag = request_options[:transaction_tag]
      end
      retry
    rescue StandardError => e
      # Rollback transaction when handling unexpected error
      tx.session.rollback tx.transaction_id if tx.existing_transaction?
      # Return nil if raised with rollback.
      return nil if e.is_a? Rollback
      # Re-raise error.
      raise e
    ensure
      Thread.current[IS_TRANSACTION_RUNNING_KEY] = nil
    end
  end
end
#update(table, rows, exclude_txn_from_change_streams: false, isolation_level: nil, commit_options: nil, request_options: nil, call_options: nil, read_lock_mode: nil) ⇒ Time, CommitResponse
Updates existing rows in a table. If any of the rows do not already exist, the request fails with NotFoundError.
Changes are made immediately upon calling this method using a single-use transaction. To make multiple changes in the same single-use transaction use #commit. To make changes in a transaction that supports reads and automatic retry protection use #transaction.
Note: This method does not feature replay protection present in Transaction#update (See #transaction). This method makes a single RPC, whereas Transaction#update requires two RPCs (one of which may be performed in advance), and so this method may be appropriate for latency sensitive and/or high throughput blind updates.
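A minimal blind-update sketch (the instance, database, and "users" schema are assumptions):
require "google/cloud/spanner"

spanner = Google::Cloud::Spanner.new
db = spanner.client "my-instance", "my-database"

# Fails with NotFoundError if a row with the given key does not exist.
db.update "users", [{ id: 1, name: "Charlie", active: false },
                    { id: 2, name: "Harvey",  active: true }]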
# File 'lib/google/cloud/spanner/client.rb', line 1491

def update table, rows, exclude_txn_from_change_streams: false,
           isolation_level: nil, commit_options: nil, request_options: nil,
           call_options: nil, read_lock_mode: nil
  request_options = Convert.to_request_options \
    request_options, tag_type: :transaction_tag

  @pool.with_session do |session|
    session.update table, rows,
                   exclude_txn_from_change_streams: exclude_txn_from_change_streams,
                   isolation_level: isolation_level,
                   commit_options: commit_options,
                   request_options: request_options,
                   call_options: call_options,
                   read_lock_mode: read_lock_mode
  end
end
#upsert(table, rows, exclude_txn_from_change_streams: false, isolation_level: nil, commit_options: nil, request_options: nil, call_options: nil, read_lock_mode: nil) ⇒ Time, CommitResponse Also known as: save
Inserts or updates rows in a table. If any of the rows already exist, then their column values are overwritten with the ones provided. Any column values not explicitly written are preserved.
Changes are made immediately upon calling this method using a single-use transaction. To make multiple changes in the same single-use transaction use #commit. To make changes in a transaction that supports reads and automatic retry protection use #transaction.
Note: This method does not feature replay protection present in Transaction#upsert (See #transaction). This method makes a single RPC, whereas Transaction#upsert requires two RPCs (one of which may be performed in advance), and so this method may be appropriate for latency sensitive and/or high throughput blind upserts.
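A minimal blind-upsert sketch (the instance, database, and "users" schema are assumptions):
require "google/cloud/spanner"

spanner = Google::Cloud::Spanner.new
db = spanner.client "my-instance", "my-database"

# Inserts row 3 if absent, or overwrites only the listed columns if present.
db.upsert "users", [{ id: 3, name: "Marley", active: true }]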
# File 'lib/google/cloud/spanner/client.rb', line 1172

def upsert table, rows, exclude_txn_from_change_streams: false,
           isolation_level: nil, commit_options: nil, request_options: nil,
           call_options: nil, read_lock_mode: nil
  request_options = Convert.to_request_options \
    request_options, tag_type: :transaction_tag

  @pool.with_session do |session|
    session.upsert table, rows,
                   exclude_txn_from_change_streams: exclude_txn_from_change_streams,
                   isolation_level: isolation_level,
                   commit_options: commit_options,
                   request_options: request_options,
                   call_options: call_options,
                   read_lock_mode: read_lock_mode
  end
end