Class: Google::Cloud::Bigquery::QueryJob::Updater
- Inherits: Google::Cloud::Bigquery::QueryJob
- Ancestry: Object > Job > Google::Cloud::Bigquery::QueryJob > Google::Cloud::Bigquery::QueryJob::Updater
- Defined in: lib/google/cloud/bigquery/query_job.rb
Overview
Yielded to a block to accumulate changes for a patch request.
Attributes
- #cache=(value) ⇒ Object - Specifies to look in the query cache for results.
- #clustering_fields=(fields) ⇒ Object - Sets one or more fields on which the destination table should be clustered.
- #create=(value) ⇒ Object - Sets the create disposition for creating the query results table.
- #dataset=(value) ⇒ Object - Sets the default dataset of tables referenced in the query.
- #dryrun=(value) ⇒ Object (also: #dry_run=) - Sets the dry run flag for the query job.
- #encryption=(val) ⇒ Object - Sets the encryption configuration of the destination table.
- #external=(value) ⇒ Object - Sets definitions for external tables used in the query.
- #flatten=(value) ⇒ Object - Flatten nested and repeated fields in legacy SQL queries.
- #labels=(value) ⇒ Object - Sets the labels to use for the job.
- #large_results=(value) ⇒ Object - Allow large results for a legacy SQL query.
- #legacy_sql=(value) ⇒ Object - Sets the query syntax to legacy SQL.
- #location=(value) ⇒ Object - Sets the geographic location where the job should run.
- #maximum_bytes_billed=(value) ⇒ Object - Sets the maximum bytes billed for the query.
- #params=(params) ⇒ Object - Sets the query parameters.
- #priority=(value) ⇒ Object - Sets the priority of the query.
- #standard_sql=(value) ⇒ Object - Sets the query syntax to standard SQL.
- #table=(value) ⇒ Object - Sets the destination for the query results table.
- #time_partitioning_expiration=(expiration) ⇒ Object - Sets the partition expiration for the destination table.
- #time_partitioning_field=(field) ⇒ Object - Sets the field on which to partition the destination table.
- #time_partitioning_require_filter=(val) ⇒ Object - If set to true, queries over the destination table must specify a partition filter that can be used for partition elimination.
- #time_partitioning_type=(type) ⇒ Object - Sets the partitioning for the destination table.
- #udfs=(value) ⇒ Object - Sets user defined functions for the query.
- #write=(value) ⇒ Object - Sets the write disposition for when the query results table exists.
Methods inherited from Google::Cloud::Bigquery::QueryJob
#batch?, #bytes_processed, #cache?, #cache_hit?, #clustering?, #clustering_fields, #data, #ddl?, #ddl_operation_performed, #ddl_target_table, #destination, #dml?, #dryrun?, #encryption, #flatten?, #interactive?, #large_results?, #legacy_sql?, #maximum_billing_tier, #maximum_bytes_billed, #num_dml_affected_rows, #query_plan, #standard_sql?, #statement_type, #time_partitioning?, #time_partitioning_expiration, #time_partitioning_field, #time_partitioning_require_filter?, #time_partitioning_type, #udfs, #wait_until_done!
Methods inherited from Job
#cancel, #configuration, #created_at, #done?, #ended_at, #error, #errors, #failed?, #job_id, #labels, #location, #pending?, #project_id, #reload!, #rerun!, #running?, #started_at, #state, #statistics, #status, #user_email, #wait_until_done!
Instance Method Details
#cache=(value) ⇒ Object
Specifies to look in the query cache for results.
    # File 'lib/google/cloud/bigquery/query_job.rb', line 696
    def cache= value
      @gapi.configuration.query.use_query_cache = value
    end
#clustering_fields=(fields) ⇒ Object
Sets one or more fields on which the destination table should be clustered. Must be specified with time-based partitioning; data in the table will be first partitioned and subsequently clustered.
Only top-level, non-repeated, simple-type fields are supported. When you cluster a table using multiple columns, the order of columns you specify is important. The order of the specified columns determines the sort order of the data.
    # File 'lib/google/cloud/bigquery/query_job.rb', line 1145
    def clustering_fields= fields
      @gapi.configuration.query.clustering ||= \
        Google::Apis::BigqueryV2::Clustering.new
      @gapi.configuration.query.clustering.fields = fields
    end
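The note that column order determines sort order can be illustrated with plain Ruby (a local sketch of the ordering semantics, not a call to the BigQuery service): clustering on customer_id, then ts, orders rows first by customer_id and then by ts within each customer.

```ruby
# Local illustration of multi-column clustering order (not the BigQuery
# service): rows are ordered by the first clustering field, then the
# second, and so on.
rows = [
  { customer_id: 2, ts: 10 },
  { customer_id: 1, ts: 30 },
  { customer_id: 1, ts: 20 }
]
clustered = rows.sort_by { |r| [r[:customer_id], r[:ts]] }
# Both rows for customer 1 (ts 20, then 30) now precede customer 2.
```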
#create=(value) ⇒ Object
Sets the create disposition for creating the query results table.
This specifies whether the job is allowed to create new tables. The default value is needed.
The following values are supported:
- needed - Create the table if it does not exist.
- never - The table must already exist. A 'notFound' error is raised if the table does not exist.
    # File 'lib/google/cloud/bigquery/query_job.rb', line 785
    def create= value
      @gapi.configuration.query.create_disposition =
        Convert.create_disposition value
    end
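A simplified sketch of the kind of mapping Convert.create_disposition performs, turning the documented needed/never values into the BigQuery API's CREATE_IF_NEEDED/CREATE_NEVER strings (assumption: the gem's real helper accepts additional spellings and types):

```ruby
# Simplified sketch of the create-disposition mapping; the gem's
# Convert.create_disposition accepts more input spellings than this.
def create_disposition(value)
  case value.to_s
  when "needed", "create_if_needed" then "CREATE_IF_NEEDED"
  when "never", "create_never"      then "CREATE_NEVER"
  else value.to_s
  end
end
```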
#dataset=(value) ⇒ Object
Sets the default dataset of tables referenced in the query.
    # File 'lib/google/cloud/bigquery/query_job.rb', line 733
    def dataset= value
      @gapi.configuration.query.default_dataset =
        @service.dataset_ref_from value
    end
#dryrun=(value) ⇒ Object Also known as: dry_run=
Sets the dry run flag for the query job.
    # File 'lib/google/cloud/bigquery/query_job.rb', line 818
    def dryrun= value
      @gapi.configuration.dry_run = value
    end
#encryption=(val) ⇒ Object
Sets the encryption configuration of the destination table.
    # File 'lib/google/cloud/bigquery/query_job.rb', line 952
    def encryption= val
      @gapi.configuration.query.update!(
        destination_encryption_configuration: val.to_gapi
      )
    end
#external=(value) ⇒ Object
Sets definitions for external tables used in the query.
    # File 'lib/google/cloud/bigquery/query_job.rb', line 907
    def external= value
      external_table_pairs = value.map do |name, obj|
        [String(name), obj.to_gapi]
      end
      external_table_hash = Hash[external_table_pairs]
      @gapi.configuration.query.table_definitions = external_table_hash
    end
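The transform above can be shown in plain Ruby: a Hash of table-name keys (symbols or strings) mapped to definition objects becomes a Hash of String names mapped to converted definitions. Here #upcase stands in for the gem's #to_gapi conversion, which is an assumption made purely so the sketch runs standalone.

```ruby
# Plain-Ruby sketch of the name/definition transform performed by
# external=; #upcase stands in for the real #to_gapi conversion.
defs = { sales: "csv_def", "logs" => "json_def" }
table_definitions = defs.map { |name, obj| [String(name), obj.upcase] }.to_h
# Symbol keys become String keys; each definition is converted.
```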
#flatten=(value) ⇒ Object
Flatten nested and repeated fields in legacy SQL queries.
    # File 'lib/google/cloud/bigquery/query_job.rb', line 722
    def flatten= value
      @gapi.configuration.query.flatten_results = value
    end
#labels=(value) ⇒ Object
Sets the labels to use for the job.
    # File 'lib/google/cloud/bigquery/query_job.rb', line 861
    def labels= value
      @gapi.configuration.update! labels: value
    end
#large_results=(value) ⇒ Object
Allow large results for a legacy SQL query.
    # File 'lib/google/cloud/bigquery/query_job.rb', line 709
    def large_results= value
      @gapi.configuration.query.allow_large_results = value
    end
#legacy_sql=(value) ⇒ Object
Sets the query syntax to legacy SQL.
    # File 'lib/google/cloud/bigquery/query_job.rb', line 877
    def legacy_sql= value
      @gapi.configuration.query.use_legacy_sql = value
    end
#location=(value) ⇒ Object
Sets the geographic location where the job should run. Required except for US and EU.
    # File 'lib/google/cloud/bigquery/query_job.rb', line 666
    def location= value
      @gapi.job_reference.location = value
      return unless value.nil?

      # Treat assigning value of nil the same as unsetting the value.
      unset = @gapi.job_reference.instance_variables.include? :@location
      @gapi.job_reference.remove_instance_variable :@location if unset
    end
#maximum_bytes_billed=(value) ⇒ Object
Sets the maximum bytes billed for the query.
    # File 'lib/google/cloud/bigquery/query_job.rb', line 844
    def maximum_bytes_billed= value
      @gapi.configuration.query.maximum_bytes_billed = value
    end
#params=(params) ⇒ Object
Sets the query parameters. Standard SQL only.
    # File 'lib/google/cloud/bigquery/query_job.rb', line 750
    def params= params
      case params
      when Array then
        @gapi.configuration.query.use_legacy_sql = false
        @gapi.configuration.query.parameter_mode = "POSITIONAL"
        @gapi.configuration.query.query_parameters = params.map do |param|
          Convert.to_query_param param
        end
      when Hash then
        @gapi.configuration.query.use_legacy_sql = false
        @gapi.configuration.query.parameter_mode = "NAMED"
        @gapi.configuration.query.query_parameters =
          params.map do |name, param|
            Convert.to_query_param(param).tap do |named_param|
              named_param.name = String name
            end
          end
      else
        raise "Query parameters must be an Array or a Hash."
      end
    end
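The dispatch at the heart of the method above can be sketched on its own: an Array selects positional parameters, a Hash selects named parameters, and anything else is rejected (simplified; the real setter also disables legacy SQL and converts each value with Convert.to_query_param).

```ruby
# Sketch of how params= selects the parameter mode. Simplified: the
# real method also sets use_legacy_sql = false and converts each value.
def parameter_mode_for(params)
  case params
  when Array then "POSITIONAL"
  when Hash  then "NAMED"
  else raise ArgumentError, "Query parameters must be an Array or a Hash."
  end
end
```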
#priority=(value) ⇒ Object
Sets the priority of the query.
    # File 'lib/google/cloud/bigquery/query_job.rb', line 682
    def priority= value
      @gapi.configuration.query.priority = priority_value value
    end
#standard_sql=(value) ⇒ Object
Sets the query syntax to standard SQL.
    # File 'lib/google/cloud/bigquery/query_job.rb', line 893
    def standard_sql= value
      @gapi.configuration.query.use_legacy_sql = !value
    end
#table=(value) ⇒ Object
Sets the destination for the query results table.
    # File 'lib/google/cloud/bigquery/query_job.rb', line 831
    def table= value
      @gapi.configuration.query.destination_table = table_ref_from value
    end
#time_partitioning_expiration=(expiration) ⇒ Object
Sets the partition expiration for the destination table. See Partitioned Tables.
The destination table must also be partitioned. See #time_partitioning_type=.
    # File 'lib/google/cloud/bigquery/query_job.rb', line 1078
    def time_partitioning_expiration= expiration
      @gapi.configuration.query.time_partitioning ||= \
        Google::Apis::BigqueryV2::TimePartitioning.new
      @gapi.configuration.query.time_partitioning.update! \
        expiration_ms: expiration * 1000
    end
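Note the unit conversion in the setter: the argument is given in seconds, while the API field expiration_ms is stored in milliseconds. For example, a one-week expiration:

```ruby
# The setter receives seconds and stores milliseconds (expiration * 1000).
expiration_seconds = 7 * 24 * 60 * 60   # one week: 604800 seconds
expiration_ms = expiration_seconds * 1000
```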
#time_partitioning_field=(field) ⇒ Object
Sets the field on which to partition the destination table. If not set, the destination table is partitioned by the pseudo column _PARTITIONTIME; if set, the table is partitioned by this field. See Partitioned Tables.
The destination table must also be partitioned. See #time_partitioning_type=.
You can only set the partitioning field while creating a table. BigQuery does not allow you to change partitioning on an existing table.
    # File 'lib/google/cloud/bigquery/query_job.rb', line 1038
    def time_partitioning_field= field
      @gapi.configuration.query.time_partitioning ||= \
        Google::Apis::BigqueryV2::TimePartitioning.new
      @gapi.configuration.query.time_partitioning.update! field: field
    end
#time_partitioning_require_filter=(val) ⇒ Object
If set to true, queries over the destination table must specify a partition filter that can be used for partition elimination. See Partitioned Tables.
    # File 'lib/google/cloud/bigquery/query_job.rb', line 1096
    def time_partitioning_require_filter= val
      @gapi.configuration.query.time_partitioning ||= \
        Google::Apis::BigqueryV2::TimePartitioning.new
      @gapi.configuration.query.time_partitioning.update! \
        require_partition_filter: val
    end
#time_partitioning_type=(type) ⇒ Object
Sets the partitioning for the destination table. See Partitioned Tables.
You can only set the partitioning field while creating a table. BigQuery does not allow you to change partitioning on an existing table.
    # File 'lib/google/cloud/bigquery/query_job.rb', line 991
    def time_partitioning_type= type
      @gapi.configuration.query.time_partitioning ||= \
        Google::Apis::BigqueryV2::TimePartitioning.new
      @gapi.configuration.query.time_partitioning.update! type: type
    end
#udfs=(value) ⇒ Object
Sets user defined functions for the query.
    # File 'lib/google/cloud/bigquery/query_job.rb', line 927
    def udfs= value
      @gapi.configuration.query.user_defined_function_resources =
        udfs_gapi_from value
    end
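A UDF value may be either a Cloud Storage URI or inline JavaScript code. The classification sketched below is an assumption based on the documented convention that Cloud Storage URIs begin with "gs://" and everything else is treated as inline code; udf_kind is a hypothetical helper, not part of the gem.

```ruby
# Hypothetical helper (not in the gem): classifies a UDF value by the
# documented "gs://" prefix convention for Cloud Storage resources.
def udf_kind(value)
  value.start_with?("gs://") ? :resource_uri : :inline_code
end
```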
#write=(value) ⇒ Object
Sets the write disposition for when the query results table exists.
    # File 'lib/google/cloud/bigquery/query_job.rb', line 804
    def write= value
      @gapi.configuration.query.write_disposition =
        Convert.write_disposition value
    end
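A simplified sketch of the kind of mapping Convert.write_disposition performs, turning truncate/append/empty values into the BigQuery API's WRITE_TRUNCATE/WRITE_APPEND/WRITE_EMPTY strings (assumption: the gem's real helper accepts additional spellings and types):

```ruby
# Simplified sketch of the write-disposition mapping; the gem's
# Convert.write_disposition accepts more input spellings than this.
def write_disposition(value)
  case value.to_s
  when "truncate", "write_truncate" then "WRITE_TRUNCATE"
  when "append", "write_append"     then "WRITE_APPEND"
  when "empty", "write_empty"       then "WRITE_EMPTY"
  else value.to_s
  end
end
```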