Class: Google::Cloud::Bigquery::Project
Inherits: Object

Defined in:
  lib/google/cloud/bigquery/project.rb
  lib/google/cloud/bigquery/project/list.rb
Overview
Project
Projects are top-level containers in Google Cloud Platform. They store information about billing and authorized users, and they contain BigQuery data. Each project has a friendly name and a unique ID.
Google::Cloud::Bigquery::Project is the main object for interacting with Google BigQuery. Dataset objects are created, accessed, and deleted by Google::Cloud::Bigquery::Project.
Defined Under Namespace
Classes: List
Instance Attribute Summary

- #name ⇒ String? (readonly)
  The descriptive name of the project.
- #numeric_id ⇒ Integer? (readonly)
  The numeric ID of the project.
Data

- #copy(source_table, destination_table, create: nil, write: nil) {|job| ... } ⇒ Boolean
  Copies the data from the source table to the destination table using a synchronous method that blocks for a response.
- #copy_job(source_table, destination_table, create: nil, write: nil, job_id: nil, prefix: nil, labels: nil) {|job| ... } ⇒ Google::Cloud::Bigquery::CopyJob
  Copies the data from the source table to the destination table using an asynchronous method.
- #create_dataset(dataset_id, name: nil, description: nil, expiration: nil, location: nil) {|access| ... } ⇒ Google::Cloud::Bigquery::Dataset
  Creates a new dataset.
- #dataset(dataset_id, skip_lookup: nil) ⇒ Google::Cloud::Bigquery::Dataset?
  Retrieves an existing dataset by ID.
- #datasets(all: nil, filter: nil, token: nil, max: nil) ⇒ Array<Google::Cloud::Bigquery::Dataset>
  Retrieves the list of datasets belonging to the project.
- #encryption(kms_key: nil) ⇒ Google::Cloud::Bigquery::EncryptionConfiguration
  Creates a new Bigquery::EncryptionConfiguration instance.
- #external(url, format: nil) {|ext| ... } ⇒ External::DataSource
  Creates a new External::DataSource (or subclass) object that represents an external data source that can be queried directly, even though the data is not stored in BigQuery.
- #extract(table, extract_url, format: nil, compression: nil, delimiter: nil, header: nil) {|job| ... } ⇒ Boolean
  Extracts the data from the provided table to a Google Cloud Storage file using a synchronous method that blocks for a response.
- #extract_job(table, extract_url, format: nil, compression: nil, delimiter: nil, header: nil, job_id: nil, prefix: nil, labels: nil) {|job| ... } ⇒ Google::Cloud::Bigquery::ExtractJob
  Extracts the data from the provided table to a Google Cloud Storage file using an asynchronous method.
- #job(job_id, location: nil) ⇒ Google::Cloud::Bigquery::Job?
  Retrieves an existing job by ID.
- #jobs(all: nil, token: nil, max: nil, filter: nil) ⇒ Array<Google::Cloud::Bigquery::Job>
  Retrieves the list of jobs belonging to the project.
- #projects(token: nil, max: nil) ⇒ Array<Google::Cloud::Bigquery::Project>
  Retrieves the list of all projects for which the currently authorized account has been granted any project role.
- #query(query, params: nil, external: nil, max: nil, cache: true, dataset: nil, project: nil, standard_sql: nil, legacy_sql: nil) {|job| ... } ⇒ Google::Cloud::Bigquery::Data
  Queries data and waits for the results.
- #query_job(query, params: nil, external: nil, priority: "INTERACTIVE", cache: true, table: nil, create: nil, write: nil, dryrun: nil, dataset: nil, project: nil, standard_sql: nil, legacy_sql: nil, large_results: nil, flatten: nil, maximum_billing_tier: nil, maximum_bytes_billed: nil, job_id: nil, prefix: nil, labels: nil, udfs: nil) {|job| ... } ⇒ Google::Cloud::Bigquery::QueryJob
  Queries data by creating a query job.
- #schema {|schema| ... } ⇒ Google::Cloud::Bigquery::Schema
  Creates a new schema instance.
- #time(hour, minute, second) ⇒ Bigquery::Time
  Creates a Bigquery::Time object to represent a time, independent of a specific date.
Instance Method Summary

- #initialize(service) ⇒ Project (constructor)
  Creates a new Project instance.
- #project_id ⇒ Object (also: #project)
  The BigQuery project connected to.
- #service_account_email ⇒ String
  The email address of the service account for the project used to connect to BigQuery.
Constructor Details
#initialize(service) ⇒ Project
Creates a new Project instance backed by the given Service.

# File 'lib/google/cloud/bigquery/project.rb', line 65

def initialize service
  @service = service
end
Instance Attribute Details
#name ⇒ String? (readonly)
The descriptive name of the project. Can only be present if the project was retrieved with #projects.
# File 'lib/google/cloud/bigquery/project.rb', line 54

def name
  @name
end
#numeric_id ⇒ Integer? (readonly)
The numeric ID of the project. Can only be present if the project was retrieved with #projects.
# File 'lib/google/cloud/bigquery/project.rb', line 54

def numeric_id
  @numeric_id
end
Instance Method Details
#copy(source_table, destination_table, create: nil, write: nil) {|job| ... } ⇒ Boolean
Copies the data from the source table to the destination table using a synchronous method that blocks for a response. Timeouts and transient errors are generally handled as needed to complete the job. See #copy_job for the asynchronous version. Use this method instead of Table#copy to copy from source tables in other projects.
The geographic location for the job ("US", "EU", etc.) can be set via CopyJob::Updater#location= in a block passed to this method.
# File 'lib/google/cloud/bigquery/project.rb', line 264

def copy source_table, destination_table, create: nil, write: nil,
         &block
  job = copy_job source_table, destination_table, create: create,
                                                  write: write, &block
  job.wait_until_done!
  ensure_job_succeeded! job
  true
end
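A minimal usage sketch (the dataset, table, and source table names below are placeholders, not values taken from this reference):

  require "google/cloud/bigquery"

  bigquery = Google::Cloud::Bigquery.new
  destination_table = bigquery.dataset("my_dataset").table("my_table")

  # Copy a public table into the destination table, blocking until done.
  bigquery.copy "bigquery-public-data.samples.shakespeare",
                destination_table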
#copy_job(source_table, destination_table, create: nil, write: nil, job_id: nil, prefix: nil, labels: nil) {|job| ... } ⇒ Google::Cloud::Bigquery::CopyJob
Copies the data from the source table to the destination table using an asynchronous method. In this method, a CopyJob is immediately returned. The caller may poll the service by repeatedly calling Job#reload! and Job#done? to detect when the job is done, or simply block until the job is done by calling Job#wait_until_done!. See #copy for the synchronous version. Use this method instead of Table#copy_job to copy from source tables in other projects.
The geographic location for the job ("US", "EU", etc.) can be set via CopyJob::Updater#location= in a block passed to this method.
# File 'lib/google/cloud/bigquery/project.rb', line 185

def copy_job source_table, destination_table, create: nil, write: nil,
             job_id: nil, prefix: nil, labels: nil
  ensure_service!
  options = { create: create, write: write, labels: labels,
              job_id: job_id, prefix: prefix }
  updater = CopyJob::Updater.from_options(
    service,
    Service.get_table_ref(source_table, default_ref: project_ref),
    Service.get_table_ref(destination_table, default_ref: project_ref),
    options
  )

  yield updater if block_given?

  job_gapi = updater.to_gapi
  gapi = service.copy_table job_gapi
  Job.from_gapi gapi, service
end
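A sketch of the asynchronous variant (names are placeholders):

  require "google/cloud/bigquery"

  bigquery = Google::Cloud::Bigquery.new
  destination_table = bigquery.dataset("my_dataset").table("my_table")

  copy_job = bigquery.copy_job "bigquery-public-data.samples.shakespeare",
                               destination_table

  # Block until the job finishes, then check its outcome.
  copy_job.wait_until_done!
  puts "copy failed" if copy_job.failed?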
#create_dataset(dataset_id, name: nil, description: nil, expiration: nil, location: nil) {|access| ... } ⇒ Google::Cloud::Bigquery::Dataset
Creates a new dataset.
# File 'lib/google/cloud/bigquery/project.rb', line 919

def create_dataset dataset_id, name: nil, description: nil,
                   expiration: nil, location: nil
  ensure_service!

  new_ds = Google::Apis::BigqueryV2::Dataset.new(
    dataset_reference: Google::Apis::BigqueryV2::DatasetReference.new(
      project_id: project, dataset_id: dataset_id
    )
  )

  # Can set location only on creation, no Dataset#location method
  new_ds.update! location: location unless location.nil?

  updater = Dataset::Updater.new(new_ds).tap do |b|
    b.name = name unless name.nil?
    b.description = description unless description.nil?
    b.default_expiration = expiration unless expiration.nil?
  end

  if block_given?
    yield updater
    updater.check_for_mutated_access!
  end

  gapi = service.insert_dataset new_ds
  Dataset.from_gapi gapi, service
end
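A minimal sketch (the dataset ID and metadata values are illustrative):

  require "google/cloud/bigquery"

  bigquery = Google::Cloud::Bigquery.new

  dataset = bigquery.create_dataset "my_dataset",
                                    name: "My Dataset",
                                    description: "This is my dataset"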
#dataset(dataset_id, skip_lookup: nil) ⇒ Google::Cloud::Bigquery::Dataset?
Retrieves an existing dataset by ID.
# File 'lib/google/cloud/bigquery/project.rb', line 862

def dataset dataset_id, skip_lookup: nil
  ensure_service!
  if skip_lookup
    return Dataset.new_reference project, dataset_id, service
  end
  gapi = service.get_dataset dataset_id
  Dataset.from_gapi gapi, service
rescue Google::Cloud::NotFoundError
  nil
end
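A minimal sketch (the dataset ID is a placeholder):

  require "google/cloud/bigquery"

  bigquery = Google::Cloud::Bigquery.new

  dataset = bigquery.dataset "my_dataset"
  puts dataset.dataset_id unless dataset.nil?

  # With skip_lookup: true no API call is made; a local reference is returned.
  dataset_ref = bigquery.dataset "my_dataset", skip_lookup: true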
#datasets(all: nil, filter: nil, token: nil, max: nil) ⇒ Array<Google::Cloud::Bigquery::Dataset>
Retrieves the list of datasets belonging to the project.
# File 'lib/google/cloud/bigquery/project.rb', line 991

def datasets all: nil, filter: nil, token: nil, max: nil
  ensure_service!
  options = { all: all, filter: filter, token: token, max: max }
  gapi = service.list_datasets options
  Dataset::List.from_gapi gapi, service, all, filter, max
end
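A minimal sketch of listing datasets:

  require "google/cloud/bigquery"

  bigquery = Google::Cloud::Bigquery.new

  # Include hidden datasets and limit the page size.
  bigquery.datasets(all: true, max: 10).each do |dataset|
    puts dataset.dataset_id
  end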
#encryption(kms_key: nil) ⇒ Google::Cloud::Bigquery::EncryptionConfiguration
Creates a new Bigquery::EncryptionConfiguration instance.
This method does not execute an API call. Use the encryption configuration to encrypt a table when creating one via Bigquery::Dataset#create_table, Bigquery::Dataset#load, Bigquery::Table#copy, or Bigquery::Project#query.
# File 'lib/google/cloud/bigquery/project.rb', line 1276

def encryption kms_key: nil
  encrypt_config = Bigquery::EncryptionConfiguration.new
  encrypt_config.kms_key = kms_key unless kms_key.nil?
  encrypt_config
end
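A sketch of applying an encryption configuration when creating a table; the Cloud KMS key name, dataset, and table IDs are placeholders, and the updater's encryption= setter is assumed from the table-creation API:

  require "google/cloud/bigquery"

  bigquery = Google::Cloud::Bigquery.new
  dataset  = bigquery.dataset "my_dataset"

  key_name = "projects/my-project/locations/us/keyRings/my-kr/cryptoKeys/my-key"
  encrypt_config = bigquery.encryption kms_key: key_name

  table = dataset.create_table "my_table" do |t|
    t.encryption = encrypt_config
  end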
#external(url, format: nil) {|ext| ... } ⇒ External::DataSource
Creates a new External::DataSource (or subclass) object that represents an external data source that can be queried directly, even though the data is not stored in BigQuery. Instead of loading or streaming the data, this object references the external data source.
# File 'lib/google/cloud/bigquery/project.rb', line 829

def external url, format: nil
  ext = External.from_urls url, format
  yield ext if block_given?
  ext
end
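A minimal sketch of querying an external CSV source (the bucket path and the table alias are placeholders):

  require "google/cloud/bigquery"

  bigquery = Google::Cloud::Bigquery.new

  csv_table = bigquery.external "gs://bucket/path/to/data.csv" do |csv|
    csv.autodetect = true
    csv.skip_leading_rows = 1
  end

  # Reference the external source under an alias in the query.
  data = bigquery.query "SELECT * FROM my_ext_table",
                        external: { my_ext_table: csv_table }
  data.each { |row| puts row }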
#extract(table, extract_url, format: nil, compression: nil, delimiter: nil, header: nil) {|job| ... } ⇒ Boolean
Extracts the data from the provided table to a Google Cloud Storage file using a synchronous method that blocks for a response. Timeouts and transient errors are generally handled as needed to complete the job. See #extract_job for the asynchronous version. Use this method instead of Table#extract to extract data from source tables in other projects.
The geographic location for the job ("US", "EU", etc.) can be set via ExtractJob::Updater#location= in a block passed to this method.
# File 'lib/google/cloud/bigquery/project.rb', line 1439

def extract table, extract_url, format: nil, compression: nil,
            delimiter: nil, header: nil, &block
  job = extract_job table, extract_url, format: format,
                    compression: compression, delimiter: delimiter,
                    header: header, &block
  job.wait_until_done!
  ensure_job_succeeded! job
  true
end
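A minimal sketch (the source table and Cloud Storage URL are placeholders):

  require "google/cloud/bigquery"

  bigquery = Google::Cloud::Bigquery.new

  # Extract a public table to a CSV file in Cloud Storage, blocking until done.
  bigquery.extract "bigquery-public-data.samples.shakespeare",
                   "gs://my-bucket/shakespeare.csv"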
#extract_job(table, extract_url, format: nil, compression: nil, delimiter: nil, header: nil, job_id: nil, prefix: nil, labels: nil) {|job| ... } ⇒ Google::Cloud::Bigquery::ExtractJob
Extracts the data from the provided table to a Google Cloud Storage file using an asynchronous method. In this method, an ExtractJob is immediately returned. The caller may poll the service by repeatedly calling Job#reload! and Job#done? to detect when the job is done, or simply block until the job is done by calling Job#wait_until_done!. See #extract for the synchronous version. Use this method instead of Table#extract_job to extract data from source tables in other projects.
The geographic location for the job ("US", "EU", etc.) can be set via ExtractJob::Updater#location= in a block passed to this method.
# File 'lib/google/cloud/bigquery/project.rb', line 1365

def extract_job table, extract_url, format: nil, compression: nil,
                delimiter: nil, header: nil, job_id: nil, prefix: nil,
                labels: nil
  ensure_service!
  options = { format: format, compression: compression,
              delimiter: delimiter, header: header, job_id: job_id,
              prefix: prefix, labels: labels }

  table_ref = Service.get_table_ref table, default_ref: project_ref
  updater = ExtractJob::Updater.from_options service, table_ref,
                                              extract_url, options

  yield updater if block_given?

  job_gapi = updater.to_gapi
  gapi = service.extract_table job_gapi
  Job.from_gapi gapi, service
end
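The asynchronous variant, sketched with placeholder names:

  require "google/cloud/bigquery"

  bigquery = Google::Cloud::Bigquery.new

  extract_job = bigquery.extract_job "bigquery-public-data.samples.shakespeare",
                                     "gs://my-bucket/shakespeare.csv"

  extract_job.wait_until_done!
  puts "extract failed" if extract_job.failed?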
#job(job_id, location: nil) ⇒ Google::Cloud::Bigquery::Job?
Retrieves an existing job by ID.
# File 'lib/google/cloud/bigquery/project.rb', line 1015

def job job_id, location: nil
  ensure_service!
  gapi = service.get_job job_id, location: location
  Job.from_gapi gapi, service
rescue Google::Cloud::NotFoundError
  nil
end
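A minimal sketch (the job ID is a placeholder):

  require "google/cloud/bigquery"

  bigquery = Google::Cloud::Bigquery.new

  job = bigquery.job "my_job_id"
  puts job.state unless job.nil?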
#jobs(all: nil, token: nil, max: nil, filter: nil) ⇒ Array<Google::Cloud::Bigquery::Job>
Retrieves the list of jobs belonging to the project.
# File 'lib/google/cloud/bigquery/project.rb', line 1072

def jobs all: nil, token: nil, max: nil, filter: nil
  ensure_service!
  options = { all: all, token: token, max: max, filter: filter }
  gapi = service.list_jobs options
  Job::List.from_gapi gapi, service, all, max, filter
end
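A minimal sketch; the filter value "done" restricts the list to completed jobs:

  require "google/cloud/bigquery"

  bigquery = Google::Cloud::Bigquery.new

  bigquery.jobs(filter: "done").each do |job|
    puts job.job_id
  end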
#project_id ⇒ Object Also known as: project
The BigQuery project connected to.
# File 'lib/google/cloud/bigquery/project.rb', line 82

def project_id
  service.project
end
#projects(token: nil, max: nil) ⇒ Array<Google::Cloud::Bigquery::Project>
Retrieves the list of all projects for which the currently authorized account has been granted any project role. The returned project instances share the same credentials as the project used to retrieve them, but lazily create a new API connection for interactions with the BigQuery service.
# File 'lib/google/cloud/bigquery/project.rb', line 1120

def projects token: nil, max: nil
  ensure_service!
  options = { token: token, max: max }
  gapi = service.list_projects options
  Project::List.from_gapi gapi, service, max
end
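A minimal sketch of iterating the accessible projects and their datasets:

  require "google/cloud/bigquery"

  bigquery = Google::Cloud::Bigquery.new

  bigquery.projects.each do |project|
    puts project.name
    project.datasets.each { |dataset| puts "  #{dataset.dataset_id}" }
  end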
#query(query, params: nil, external: nil, max: nil, cache: true, dataset: nil, project: nil, standard_sql: nil, legacy_sql: nil) {|job| ... } ⇒ Google::Cloud::Bigquery::Data
Queries data and waits for the results. In this method, a QueryJob is created and its results are saved to a temporary table, then read from the table. Timeouts and transient errors are generally handled as needed to complete the query. When used for executing DDL/DML statements, this method does not return row data.
When using standard SQL and passing arguments using params, Ruby types are mapped to BigQuery types as follows:

| BigQuery  | Ruby                           | Notes                                        |
|-----------|--------------------------------|----------------------------------------------|
| BOOL      | true/false                     |                                              |
| INT64     | Integer                        |                                              |
| FLOAT64   | Float                          |                                              |
| NUMERIC   | BigDecimal                     | Will be rounded to 9 decimal places.         |
| STRING    | String                         |                                              |
| DATETIME  | DateTime                       | DATETIME does not support time zone.         |
| DATE      | Date                           |                                              |
| TIMESTAMP | Time                           |                                              |
| TIME      | Google::Cloud::BigQuery::Time  |                                              |
| BYTES     | File, IO, StringIO, or similar |                                              |
| ARRAY     | Array                          | Nested arrays, nil values are not supported. |
| STRUCT    | Hash                           | Hash keys may be strings or symbols.         |
See Data Types for an overview of each BigQuery data type, including allowed values.
The geographic location for the job ("US", "EU", etc.) can be set via QueryJob::Updater#location= in a block passed to this method.
# File 'lib/google/cloud/bigquery/project.rb', line 762

def query query, params: nil, external: nil, max: nil, cache: true,
          dataset: nil, project: nil, standard_sql: nil,
          legacy_sql: nil, &block
  job = query_job query, params: params, external: external,
                  cache: cache, dataset: dataset, project: project,
                  standard_sql: standard_sql, legacy_sql: legacy_sql,
                  &block
  job.wait_until_done!

  if job.failed?
    begin
      # raise to activate ruby exception cause handling
      raise job.gapi_error
    rescue StandardError => e
      # wrap Google::Apis::Error with Google::Cloud::Error
      raise Google::Cloud::Error.from_error(e)
    end
  end

  job.data max: max
end
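A minimal sketch using a named query parameter (the query text and threshold are illustrative):

  require "google/cloud/bigquery"

  bigquery = Google::Cloud::Bigquery.new

  sql  = "SELECT word, word_count " \
         "FROM `bigquery-public-data.samples.shakespeare` " \
         "WHERE word_count > @min_count"
  data = bigquery.query sql, params: { min_count: 100 }

  data.each do |row|
    puts "#{row[:word]}: #{row[:word_count]}"
  end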
#query_job(query, params: nil, external: nil, priority: "INTERACTIVE", cache: true, table: nil, create: nil, write: nil, dryrun: nil, dataset: nil, project: nil, standard_sql: nil, legacy_sql: nil, large_results: nil, flatten: nil, maximum_billing_tier: nil, maximum_bytes_billed: nil, job_id: nil, prefix: nil, labels: nil, udfs: nil) {|job| ... } ⇒ Google::Cloud::Bigquery::QueryJob
Queries data by creating a query job.
When using standard SQL and passing arguments using params, Ruby types are mapped to BigQuery types as follows:

| BigQuery  | Ruby                           | Notes                                        |
|-----------|--------------------------------|----------------------------------------------|
| BOOL      | true/false                     |                                              |
| INT64     | Integer                        |                                              |
| FLOAT64   | Float                          |                                              |
| NUMERIC   | BigDecimal                     | Will be rounded to 9 decimal places.         |
| STRING    | String                         |                                              |
| DATETIME  | DateTime                       | DATETIME does not support time zone.         |
| DATE      | Date                           |                                              |
| TIMESTAMP | Time                           |                                              |
| TIME      | Google::Cloud::BigQuery::Time  |                                              |
| BYTES     | File, IO, StringIO, or similar |                                              |
| ARRAY     | Array                          | Nested arrays, nil values are not supported. |
| STRUCT    | Hash                           | Hash keys may be strings or symbols.         |
See Data Types for an overview of each BigQuery data type, including allowed values.
The geographic location for the job ("US", "EU", etc.) can be set via QueryJob::Updater#location= in a block passed to this method.
# File 'lib/google/cloud/bigquery/project.rb', line 541

def query_job query, params: nil, external: nil,
              priority: "INTERACTIVE", cache: true, table: nil,
              create: nil, write: nil, dryrun: nil, dataset: nil,
              project: nil, standard_sql: nil, legacy_sql: nil,
              large_results: nil, flatten: nil,
              maximum_billing_tier: nil, maximum_bytes_billed: nil,
              job_id: nil, prefix: nil, labels: nil, udfs: nil
  ensure_service!
  options = { priority: priority, cache: cache, table: table,
              create: create, write: write, dryrun: dryrun,
              large_results: large_results, flatten: flatten,
              dataset: dataset, project: (project || self.project),
              legacy_sql: legacy_sql, standard_sql: standard_sql,
              maximum_billing_tier: maximum_billing_tier,
              maximum_bytes_billed: maximum_bytes_billed,
              external: external, job_id: job_id, prefix: prefix,
              labels: labels, udfs: udfs, params: params }

  updater = QueryJob::Updater.from_options service, query, options

  yield updater if block_given?

  gapi = service.query_job updater.to_gapi
  Job.from_gapi gapi, service
end
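A minimal sketch of the job-based form (the query text is illustrative):

  require "google/cloud/bigquery"

  bigquery = Google::Cloud::Bigquery.new

  job = bigquery.query_job "SELECT word FROM " \
                           "`bigquery-public-data.samples.shakespeare` " \
                           "LIMIT 10"

  job.wait_until_done!
  if job.failed?
    puts job.error
  else
    job.data.each { |row| puts row[:word] }
  end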
#schema {|schema| ... } ⇒ Google::Cloud::Bigquery::Schema
Creates a new schema instance. An optional block may be given to configure the schema, otherwise the schema is returned empty and may be configured directly.
The returned schema can be passed to Dataset#load using the schema option. However, for most use cases, the block yielded by Dataset#load is a more convenient way to configure the schema for the destination table.
# File 'lib/google/cloud/bigquery/project.rb', line 1205

def schema
  s = Schema.from_gapi
  yield s if block_given?
  s
end
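A minimal sketch of building a schema and passing it to Dataset#load (the dataset, table, field, and bucket names are placeholders):

  require "google/cloud/bigquery"

  bigquery = Google::Cloud::Bigquery.new

  schema = bigquery.schema do |s|
    s.string  "first_name", mode: :required
    s.integer "age", mode: :required
  end

  dataset = bigquery.dataset "my_dataset"
  dataset.load "my_new_table", "gs://my-bucket/file-name.csv",
               schema: schema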
#service_account_email ⇒ String
The email address of the service account for the project used to connect to BigQuery. (See also #project_id.)
# File 'lib/google/cloud/bigquery/project.rb', line 93

def service_account_email
  @service_account_email ||= \
    service.project_service_account.email
end
#time(hour, minute, second) ⇒ Bigquery::Time
Creates a Bigquery::Time object to represent a time, independent of a specific date.
# File 'lib/google/cloud/bigquery/project.rb', line 1168

def time hour, minute, second
  Bigquery::Time.new "#{hour}:#{minute}:#{second}"
end
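A minimal sketch using the value as a query parameter (the table and column names are placeholders):

  require "google/cloud/bigquery"

  bigquery = Google::Cloud::Bigquery.new

  fourpm = bigquery.time 16, 0, 0
  data = bigquery.query "SELECT name FROM `my_dataset.my_table` " \
                        "WHERE time_of_date = @time",
                        params: { time: fourpm }
  data.each { |row| puts row[:name] }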