Class: Google::Cloud::Bigquery::QueryJob::Updater
Inherits: Google::Cloud::Bigquery::QueryJob
- Object
- Job
- Google::Cloud::Bigquery::QueryJob
- Google::Cloud::Bigquery::QueryJob::Updater
Defined in: lib/google/cloud/bigquery/query_job.rb
Overview
Yielded to a block to accumulate changes for a patch request.
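An Updater instance is not created directly; it is yielded by methods such as Project#query_job and Dataset#query_job. A minimal sketch, assuming default credentials are available (the query, dataset, and label values are illustrative):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

job = bigquery.query_job "SELECT COUNT(*) FROM `my_dataset.my_table`" do |query|
  # `query` is a Google::Cloud::Bigquery::QueryJob::Updater
  query.labels = { "env" => "test" }
end

job.wait_until_done!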
Attributes
- #cache=(value) ⇒ Object
  Specifies to look in the query cache for results.
- #cancel ⇒ Object
- #clustering_fields=(fields) ⇒ Object
  Sets one or more fields on which the destination table should be clustered.
- #create=(value) ⇒ Object
  Sets the create disposition for creating the query results table.
- #dataset=(value) ⇒ Object
  Sets the default dataset of tables referenced in the query.
- #dryrun=(value) ⇒ Object (also: #dry_run=)
  Sets the dry run flag for the query job.
- #encryption=(val) ⇒ Object
  Sets the encryption configuration of the destination table.
- #external=(value) ⇒ Object
  Sets definitions for external tables used in the query.
- #flatten=(value) ⇒ Object
  Flatten nested and repeated fields in legacy SQL queries.
- #labels=(value) ⇒ Object
  Sets the labels to use for the job.
- #large_results=(value) ⇒ Object
  Allow large results for a legacy SQL query.
- #legacy_sql=(value) ⇒ Object
  Sets the query syntax to legacy SQL.
- #location=(value) ⇒ Object
  Sets the geographic location where the job should run.
- #maximum_bytes_billed=(value) ⇒ Object
  Sets the maximum bytes billed for the query.
- #params=(params) ⇒ Object
  Sets the query parameters.
- #priority=(value) ⇒ Object
  Sets the priority of the query.
- #range_partitioning_end=(range_end) ⇒ Object
  Sets the end of range partitioning, exclusive, for the destination table.
- #range_partitioning_field=(field) ⇒ Object
  Sets the field on which to range partition the table.
- #range_partitioning_interval=(range_interval) ⇒ Object
  Sets width of each interval for data in range partitions.
- #range_partitioning_start=(range_start) ⇒ Object
  Sets the start of range partitioning, inclusive, for the destination table.
- #reload! ⇒ Object (also: #refresh!)
- #rerun! ⇒ Object
- #set_params_and_types(params, types = nil) ⇒ Object
  Sets the query parameters.
- #standard_sql=(value) ⇒ Object
  Sets the query syntax to standard SQL.
- #table=(value) ⇒ Object
  Sets the destination for the query results table.
- #time_partitioning_expiration=(expiration) ⇒ Object
  Sets the partition expiration for the destination table.
- #time_partitioning_field=(field) ⇒ Object
  Sets the field on which to partition the destination table.
- #time_partitioning_require_filter=(val) ⇒ Object
  If set to true, queries over the destination table will require a partition filter that can be used for partition elimination to be specified.
- #time_partitioning_type=(type) ⇒ Object
  Sets the partitioning for the destination table.
- #udfs=(value) ⇒ Object
  Sets user defined functions for the query.
- #wait_until_done! ⇒ Object
- #write=(value) ⇒ Object
  Sets the write disposition for when the query results table exists.
Methods inherited from Google::Cloud::Bigquery::QueryJob
#batch?, #bytes_processed, #cache?, #cache_hit?, #clustering?, #clustering_fields, #data, #ddl?, #ddl_operation_performed, #ddl_target_routine, #ddl_target_table, #destination, #dml?, #dryrun?, #encryption, #flatten?, #interactive?, #large_results?, #legacy_sql?, #maximum_billing_tier, #maximum_bytes_billed, #num_dml_affected_rows, #query_plan, #range_partitioning?, #range_partitioning_end, #range_partitioning_field, #range_partitioning_interval, #range_partitioning_start, #standard_sql?, #statement_type, #time_partitioning?, #time_partitioning_expiration, #time_partitioning_field, #time_partitioning_require_filter?, #time_partitioning_type, #udfs
Methods inherited from Job
#configuration, #created_at, #done?, #ended_at, #error, #errors, #failed?, #job_id, #labels, #location, #num_child_jobs, #parent_job_id, #pending?, #project_id, #running?, #script_statistics, #started_at, #state, #statistics, #status, #user_email
Instance Method Details
#cache=(value) ⇒ Object
Specifies to look in the query cache for results.
# File 'lib/google/cloud/bigquery/query_job.rb', line 803

def cache= value
  @gapi.configuration.query.use_query_cache = value
end
#cancel ⇒ Object
# File 'lib/google/cloud/bigquery/query_job.rb', line 1492

def cancel
  raise "not implemented in #{self.class}"
end
#clustering_fields=(fields) ⇒ Object
Sets one or more fields on which the destination table should be clustered. Must be specified with time-based partitioning; data in the table will be first partitioned and subsequently clustered.
Only top-level, non-repeated, simple-type fields are supported. When you cluster a table using multiple columns, the order of columns you specify is important. The order of the specified columns determines the sort order of the data.
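A sketch of configuring clustering together with time partitioning on the destination table (the dataset, table, and field names are illustrative):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset  = bigquery.dataset "my_dataset"
destination_table = dataset.table "my_destination_table", skip_lookup: true

job = dataset.query_job "SELECT * FROM my_table" do |query|
  query.table = destination_table
  # Clustering requires time-based partitioning on the destination table.
  query.time_partitioning_type  = "DAY"
  query.time_partitioning_field = "dob"
  query.clustering_fields = ["last_name", "first_name"]
end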
# File 'lib/google/cloud/bigquery/query_job.rb', line 1487

def clustering_fields= fields
  @gapi.configuration.query.clustering ||= Google::Apis::BigqueryV2::Clustering.new
  @gapi.configuration.query.clustering.fields = fields
end
#create=(value) ⇒ Object
Sets the create disposition for creating the query results table. Specifies whether the job is allowed to create new tables. The default value is needed.
The following values are supported:
- needed - Create the table if it does not exist.
- never - The table must already exist. A 'notFound' error is raised if the table does not exist.
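For example, a job could be configured to create the destination table only if it is missing; a minimal sketch (the dataset and table names are illustrative):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset  = bigquery.dataset "my_dataset"

job = bigquery.query_job "SELECT * FROM my_dataset.my_table" do |query|
  query.table  = dataset.table "query_results", skip_lookup: true
  query.create = :needed   # equivalent to "CREATE_IF_NEEDED"
  query.write  = :truncate # see #write= below
end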
# File 'lib/google/cloud/bigquery/query_job.rb', line 970

def create= value
  @gapi.configuration.query.create_disposition = Convert.create_disposition value
end
#dataset=(value) ⇒ Object
Sets the default dataset of tables referenced in the query.
# File 'lib/google/cloud/bigquery/query_job.rb', line 840

def dataset= value
  @gapi.configuration.query.default_dataset = @service.dataset_ref_from value
end
#dryrun=(value) ⇒ Object Also known as: dry_run=
Sets the dry run flag for the query job.
# File 'lib/google/cloud/bigquery/query_job.rb', line 1001

def dryrun= value
  @gapi.configuration.dry_run = value
end
#encryption=(val) ⇒ Object
Sets the encryption configuration of the destination table.
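A sketch of pointing the destination table at a Cloud KMS customer-managed key (the key name, dataset, and table names are placeholders):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset  = bigquery.dataset "my_dataset"

key_name = "projects/my-project/locations/us/keyRings/my-ring/cryptoKeys/my-key"
encrypt_config = bigquery.encryption kms_key: key_name

job = bigquery.query_job "SELECT * FROM my_dataset.my_table" do |query|
  query.table      = dataset.table "encrypted_results", skip_lookup: true
  query.encryption = encrypt_config
end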
# File 'lib/google/cloud/bigquery/query_job.rb', line 1141

def encryption= val
  @gapi.configuration.query.update! destination_encryption_configuration: val.to_gapi
end
#external=(value) ⇒ Object
Sets definitions for external tables used in the query.
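A sketch of querying a CSV file in Cloud Storage by registering it under the table name used in the SQL (the bucket path, CSV options, and table name are illustrative):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

csv_table = bigquery.external "gs://my-bucket/path/to/data.csv" do |csv|
  csv.autodetect = true
  csv.skip_leading_rows = 1
end

job = bigquery.query_job "SELECT * FROM my_ext_table" do |query|
  query.external = { my_ext_table: csv_table }
end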
# File 'lib/google/cloud/bigquery/query_job.rb', line 1099

def external= value
  external_table_pairs = value.map { |name, obj| [String(name), obj.to_gapi] }
  external_table_hash = Hash[external_table_pairs]
  @gapi.configuration.query.table_definitions = external_table_hash
end
#flatten=(value) ⇒ Object
Flatten nested and repeated fields in legacy SQL queries.
# File 'lib/google/cloud/bigquery/query_job.rb', line 829

def flatten= value
  @gapi.configuration.query.flatten_results = value
end
#labels=(value) ⇒ Object
Sets the labels to use for the job.
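For instance (the label keys and values are arbitrary, subject to BigQuery's label format rules):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

job = bigquery.query_job "SELECT COUNT(*) FROM my_dataset.my_table" do |query|
  query.labels = { "team" => "analytics", "env" => "prod" }
end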
# File 'lib/google/cloud/bigquery/query_job.rb', line 1053

def labels= value
  @gapi.configuration.update! labels: value
end
#large_results=(value) ⇒ Object
Allow large results for a legacy SQL query.
# File 'lib/google/cloud/bigquery/query_job.rb', line 816

def large_results= value
  @gapi.configuration.query.allow_large_results = value
end
#legacy_sql=(value) ⇒ Object
Sets the query syntax to legacy SQL.
# File 'lib/google/cloud/bigquery/query_job.rb', line 1069

def legacy_sql= value
  @gapi.configuration.query.use_legacy_sql = value
end
#location=(value) ⇒ Object
Sets the geographic location where the job should run. Required except for US and EU.
# File 'lib/google/cloud/bigquery/query_job.rb', line 773

def location= value
  @gapi.job_reference.location = value
  return unless value.nil?

  # Treat assigning value of nil the same as unsetting the value.
  unset = @gapi.job_reference.instance_variables.include? :@location
  @gapi.job_reference.remove_instance_variable :@location if unset
end
#maximum_bytes_billed=(value) ⇒ Object
Sets the maximum bytes billed for the query.
# File 'lib/google/cloud/bigquery/query_job.rb', line 1027

def maximum_bytes_billed= value
  @gapi.configuration.query.maximum_bytes_billed = value
end
#params=(params) ⇒ Object
Sets the query parameters. Standard SQL only.
Use #set_params_and_types to set both params and types.
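Positional parameters are passed as an Array and bound to ? placeholders; named parameters are passed as a Hash and bound to @name placeholders. A sketch (table and column names are illustrative):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

# Positional parameters bind to `?` placeholders in order.
job = bigquery.query_job "SELECT name FROM my_dataset.my_table WHERE id = ?" do |query|
  query.params = [42]
end

# Named parameters bind to `@name` placeholders.
job = bigquery.query_job "SELECT name FROM my_dataset.my_table WHERE id = @id" do |query|
  query.params = { id: 42 }
end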
# File 'lib/google/cloud/bigquery/query_job.rb', line 876

def params= params
  set_params_and_types params
end
#priority=(value) ⇒ Object
Sets the priority of the query.
# File 'lib/google/cloud/bigquery/query_job.rb', line 789

def priority= value
  @gapi.configuration.query.priority = priority_value value
end
#range_partitioning_end=(range_end) ⇒ Object
Sets the end of range partitioning, exclusive, for the destination table. See Creating and using integer range partitioned tables.
You can only set range partitioning when creating a table. BigQuery does not allow you to change partitioning on an existing table.
See #range_partitioning_start=, #range_partitioning_interval= and #range_partitioning_field=.
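A sketch that configures integer range partitioning on the destination table; the four range settings are typically set together when the job is created (the dataset, table, field name, and bounds are illustrative):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset  = bigquery.dataset "my_dataset"

job = dataset.query_job "SELECT num FROM my_table" do |query|
  query.table = dataset.table "my_destination_table", skip_lookup: true
  query.range_partitioning_field    = "num"
  query.range_partitioning_start    = 0
  query.range_partitioning_interval = 10
  query.range_partitioning_end      = 100
end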
# File 'lib/google/cloud/bigquery/query_job.rb', line 1296

def range_partitioning_end= range_end
  @gapi.configuration.query.range_partitioning ||= Google::Apis::BigqueryV2::RangePartitioning.new(
    range: Google::Apis::BigqueryV2::RangePartitioning::Range.new
  )
  @gapi.configuration.query.range_partitioning.range.end = range_end
end
#range_partitioning_field=(field) ⇒ Object
Sets the field on which to range partition the table. See Creating and using integer range partitioned tables.
See #range_partitioning_start=, #range_partitioning_interval= and #range_partitioning_end=.
You can only set range partitioning when creating a table. BigQuery does not allow you to change partitioning on an existing table.
# File 'lib/google/cloud/bigquery/query_job.rb', line 1179

def range_partitioning_field= field
  @gapi.configuration.query.range_partitioning ||= Google::Apis::BigqueryV2::RangePartitioning.new(
    range: Google::Apis::BigqueryV2::RangePartitioning::Range.new
  )
  @gapi.configuration.query.range_partitioning.field = field
end
#range_partitioning_interval=(range_interval) ⇒ Object
Sets the width of each interval for data in range partitions. See Creating and using integer range partitioned tables.
You can only set range partitioning when creating a table. BigQuery does not allow you to change partitioning on an existing table.
See #range_partitioning_field=, #range_partitioning_start= and #range_partitioning_end=.
# File 'lib/google/cloud/bigquery/query_job.rb', line 1257

def range_partitioning_interval= range_interval
  @gapi.configuration.query.range_partitioning ||= Google::Apis::BigqueryV2::RangePartitioning.new(
    range: Google::Apis::BigqueryV2::RangePartitioning::Range.new
  )
  @gapi.configuration.query.range_partitioning.range.interval = range_interval
end
#range_partitioning_start=(range_start) ⇒ Object
Sets the start of range partitioning, inclusive, for the destination table. See Creating and using integer range partitioned tables.
You can only set range partitioning when creating a table. BigQuery does not allow you to change partitioning on an existing table.
See #range_partitioning_field=, #range_partitioning_interval= and #range_partitioning_end=.
# File 'lib/google/cloud/bigquery/query_job.rb', line 1218

def range_partitioning_start= range_start
  @gapi.configuration.query.range_partitioning ||= Google::Apis::BigqueryV2::RangePartitioning.new(
    range: Google::Apis::BigqueryV2::RangePartitioning::Range.new
  )
  @gapi.configuration.query.range_partitioning.range.start = range_start
end
#reload! ⇒ Object Also known as: refresh!
# File 'lib/google/cloud/bigquery/query_job.rb', line 1500

def reload!
  raise "not implemented in #{self.class}"
end
#rerun! ⇒ Object
# File 'lib/google/cloud/bigquery/query_job.rb', line 1496

def rerun!
  raise "not implemented in #{self.class}"
end
#set_params_and_types(params, types = nil) ⇒ Object
Sets the query parameters. Standard SQL only.
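Types are needed mainly when a value's BigQuery type cannot be inferred, such as an empty array or a nil value. A sketch (the column name and type choice are illustrative):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

job = bigquery.query_job "SELECT name FROM my_dataset.my_table WHERE id IN UNNEST(@ids)" do |query|
  # An empty array has no element to infer the type from, so declare it.
  query.set_params_and_types({ ids: [] }, { ids: [:INT64] })
end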
# File 'lib/google/cloud/bigquery/query_job.rb', line 934

def set_params_and_types params, types = nil
  types ||= params.class.new
  raise ArgumentError, "types must use the same format as params" if types.class != params.class

  case params
  when Array then
    @gapi.configuration.query.use_legacy_sql = false
    @gapi.configuration.query.parameter_mode = "POSITIONAL"
    @gapi.configuration.query.query_parameters = params.zip(types).map do |param, type|
      Convert.to_query_param param, type
    end
  when Hash then
    @gapi.configuration.query.use_legacy_sql = false
    @gapi.configuration.query.parameter_mode = "NAMED"
    @gapi.configuration.query.query_parameters = params.map do |name, param|
      type = types[name]
      Convert.to_query_param(param, type).tap { |named_param| named_param.name = String name }
    end
  else
    raise ArgumentError, "params must be an Array or a Hash"
  end
end
#standard_sql=(value) ⇒ Object
Sets the query syntax to standard SQL.
# File 'lib/google/cloud/bigquery/query_job.rb', line 1085

def standard_sql= value
  @gapi.configuration.query.use_legacy_sql = !value
end
#table=(value) ⇒ Object
Sets the destination for the query results table.
# File 'lib/google/cloud/bigquery/query_job.rb', line 1014

def table= value
  @gapi.configuration.query.destination_table = table_ref_from value
end
#time_partitioning_expiration=(expiration) ⇒ Object
Sets the partition expiration for the destination table. See Partitioned Tables.
The destination table must also be partitioned. See #time_partitioning_type=.
# File 'lib/google/cloud/bigquery/query_job.rb', line 1424

def time_partitioning_expiration= expiration
  @gapi.configuration.query.time_partitioning ||= Google::Apis::BigqueryV2::TimePartitioning.new
  @gapi.configuration.query.time_partitioning.update! expiration_ms: expiration * 1000
end
#time_partitioning_field=(field) ⇒ Object
Sets the field on which to partition the destination table. If not set, the destination table is partitioned by pseudo column _PARTITIONTIME; if set, the table is partitioned by this field. See Partitioned Tables.
The destination table must also be partitioned. See #time_partitioning_type=.
You can only set the partitioning field while creating a table. BigQuery does not allow you to change partitioning on an existing table.
# File 'lib/google/cloud/bigquery/query_job.rb', line 1385

def time_partitioning_field= field
  @gapi.configuration.query.time_partitioning ||= Google::Apis::BigqueryV2::TimePartitioning.new
  @gapi.configuration.query.time_partitioning.update! field: field
end
#time_partitioning_require_filter=(val) ⇒ Object
If set to true, queries over the destination table must specify a partition filter that can be used for partition elimination. See Partitioned Tables.
# File 'lib/google/cloud/bigquery/query_job.rb', line 1440

def time_partitioning_require_filter= val
  @gapi.configuration.query.time_partitioning ||= Google::Apis::BigqueryV2::TimePartitioning.new
  @gapi.configuration.query.time_partitioning.update! require_partition_filter: val
end
#time_partitioning_type=(type) ⇒ Object
Sets the partitioning for the destination table. See Partitioned Tables.
The supported types are DAY, HOUR, MONTH, and YEAR, which will generate one partition per day, hour, month, and year, respectively.
You can only set the partitioning field while creating a table. BigQuery does not allow you to change partitioning on an existing table.
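A sketch that writes query results into a daily-partitioned destination table; the related time partitioning settings are shown together (the dataset, table, field name, and expiration are illustrative):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset  = bigquery.dataset "my_dataset"

job = dataset.query_job "SELECT * FROM my_table" do |query|
  query.table = dataset.table "my_destination_table", skip_lookup: true
  query.time_partitioning_type       = "DAY"
  query.time_partitioning_field      = "dob"
  query.time_partitioning_expiration = 86_400 # in seconds; here, one day
end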
# File 'lib/google/cloud/bigquery/query_job.rb', line 1339

def time_partitioning_type= type
  @gapi.configuration.query.time_partitioning ||= Google::Apis::BigqueryV2::TimePartitioning.new
  @gapi.configuration.query.time_partitioning.update! type: type
end
#udfs=(value) ⇒ Object
Sets user defined functions for the query.
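UDF resources can be given as Cloud Storage URIs (gs://...) to JavaScript files or as inline JavaScript code, and apply to legacy SQL queries. A sketch (the query string is a placeholder for a legacy SQL query that calls the UDF, and the bucket path is illustrative):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

legacy_sql_query = "SELECT name FROM my_udf((SELECT * FROM [my_dataset.my_table]))"

job = bigquery.query_job legacy_sql_query do |query|
  query.legacy_sql = true
  query.udfs = ["gs://my-bucket/my-udf.js"]
end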
# File 'lib/google/cloud/bigquery/query_job.rb', line 1117

def udfs= value
  @gapi.configuration.query.user_defined_function_resources = udfs_gapi_from value
end
#wait_until_done! ⇒ Object
# File 'lib/google/cloud/bigquery/query_job.rb', line 1505

def wait_until_done!
  raise "not implemented in #{self.class}"
end
#write=(value) ⇒ Object
Sets the write disposition for when the query results table exists.
# File 'lib/google/cloud/bigquery/query_job.rb', line 988

def write= value
  @gapi.configuration.query.write_disposition = Convert.write_disposition value
end