Class: Google::Cloud::Bigquery::Project
- Inherits: Object
- Defined in:
- lib/google/cloud/bigquery/project.rb,
lib/google/cloud/bigquery/project/list.rb
Overview
Project
Projects are top-level containers in Google Cloud Platform. They store information about billing and authorized users, and they contain BigQuery data. Each project has a friendly name and a unique ID.
Google::Cloud::Bigquery::Project is the main object for interacting with Google BigQuery. Dataset objects are created, accessed, and deleted by Google::Cloud::Bigquery::Project.
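For example, a minimal sketch of the typical entry point (the dataset and table IDs here are placeholders):

require "google/cloud/bigquery"

# Google::Cloud::Bigquery.new returns a Project for the configured credentials.
bigquery = Google::Cloud::Bigquery.new

dataset = bigquery.dataset "my_dataset" # nil if the dataset does not exist
table = dataset.table "my_table" if dataset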
Defined Under Namespace
Classes: List
Instance Attribute Summary
-
#name ⇒ String?
readonly
The descriptive name of the project.
-
#numeric_id ⇒ Integer?
readonly
The numeric ID of the project.
Data
-
#copy(source_table, destination_table, create: nil, write: nil, reservation: nil) {|job| ... } ⇒ Boolean
Copies the data from the source table to the destination table using a synchronous method that blocks for a response.
-
#copy_job(source_table, destination_table, create: nil, write: nil, job_id: nil, prefix: nil, labels: nil, reservation: nil) {|job| ... } ⇒ Google::Cloud::Bigquery::CopyJob
Copies the data from the source table to the destination table using an asynchronous method.
-
#create_dataset(dataset_id, name: nil, description: nil, expiration: nil, location: nil, access_policy_version: nil, dataset_view: nil) {|access| ... } ⇒ Google::Cloud::Bigquery::Dataset
Creates a new dataset.
-
#dataset(dataset_id, skip_lookup: nil, project_id: nil, access_policy_version: nil, dataset_view: nil) ⇒ Google::Cloud::Bigquery::Dataset?
Retrieves an existing dataset by ID.
-
#datasets(all: nil, filter: nil, token: nil, max: nil) ⇒ Array<Google::Cloud::Bigquery::Dataset>
Retrieves the list of datasets belonging to the project.
-
#encryption(kms_key: nil) ⇒ Google::Cloud::Bigquery::EncryptionConfiguration
Creates a new Bigquery::EncryptionConfiguration instance.
-
#external(url, format: nil) {|ext| ... } ⇒ External::DataSource
Creates a new External::DataSource (or subclass) object that represents an external data source that can be queried directly, even though the data is not stored in BigQuery.
-
#extract(source, extract_url, format: nil, compression: nil, delimiter: nil, header: nil, reservation: nil) {|job| ... } ⇒ Boolean
Extracts the data from a table or exports a model to Google Cloud Storage using a synchronous method that blocks for a response.
-
#extract_job(source, extract_url, format: nil, compression: nil, delimiter: nil, header: nil, job_id: nil, prefix: nil, labels: nil, reservation: nil) {|job| ... } ⇒ Google::Cloud::Bigquery::ExtractJob
Extracts the data from a table or exports a model to Google Cloud Storage asynchronously, immediately returning an ExtractJob that can be used to track the progress of the export job.
-
#job(job_id, location: nil) ⇒ Google::Cloud::Bigquery::Job?
Retrieves an existing job by ID.
-
#jobs(all: nil, token: nil, max: nil, filter: nil, min_created_at: nil, max_created_at: nil, parent_job: nil) ⇒ Array<Google::Cloud::Bigquery::Job>
Retrieves the list of jobs belonging to the project.
-
#load(table_id, files, dataset_id: "_SESSION", format: nil, create: nil, write: nil, projection_fields: nil, jagged_rows: nil, quoted_newlines: nil, encoding: nil, delimiter: nil, ignore_unknown: nil, max_bad_records: nil, quote: nil, skip_leading: nil, schema: nil, autodetect: nil, null_marker: nil, session_id: nil, date_format: nil, datetime_format: nil, time_format: nil, timestamp_format: nil, null_markers: nil, source_column_match: nil, time_zone: nil, reference_file_schema_uri: nil, preserve_ascii_control_characters: nil, reservation: nil) {|updater| ... } ⇒ Boolean
Loads data into the provided destination table using a synchronous method that blocks for a response.
-
#load_job(table_id, files, dataset_id: nil, format: nil, create: nil, write: nil, projection_fields: nil, jagged_rows: nil, quoted_newlines: nil, encoding: nil, delimiter: nil, ignore_unknown: nil, max_bad_records: nil, quote: nil, skip_leading: nil, schema: nil, job_id: nil, prefix: nil, labels: nil, autodetect: nil, null_marker: nil, dryrun: nil, create_session: nil, session_id: nil, project_id: nil, date_format: nil, datetime_format: nil, time_format: nil, timestamp_format: nil, null_markers: nil, source_column_match: nil, time_zone: nil, reference_file_schema_uri: nil, preserve_ascii_control_characters: nil, reservation: nil) {|updater| ... } ⇒ Google::Cloud::Bigquery::LoadJob
Loads data into the provided destination table using an asynchronous method.
-
#projects(token: nil, max: nil) ⇒ Array<Google::Cloud::Bigquery::Project>
Retrieves the list of all projects for which the currently authorized account has been granted any project role.
-
#query(query, params: nil, types: nil, external: nil, max: nil, cache: true, dataset: nil, project: nil, standard_sql: nil, legacy_sql: nil, session_id: nil, format_options_use_int64_timestamp: true, reservation: nil) {|job| ... } ⇒ Google::Cloud::Bigquery::Data
Queries data and waits for the results.
-
#query_job(query, params: nil, types: nil, external: nil, priority: "INTERACTIVE", cache: true, table: nil, create: nil, write: nil, dryrun: nil, dataset: nil, project: nil, standard_sql: nil, legacy_sql: nil, large_results: nil, flatten: nil, maximum_billing_tier: nil, maximum_bytes_billed: nil, job_id: nil, prefix: nil, labels: nil, udfs: nil, create_session: nil, session_id: nil, reservation: nil) {|job| ... } ⇒ Google::Cloud::Bigquery::QueryJob
Queries data by creating a query job.
-
#schema {|schema| ... } ⇒ Google::Cloud::Bigquery::Schema
Creates a new schema instance.
-
#time(hour, minute, second) ⇒ Bigquery::Time
Creates a Bigquery::Time object to represent a time, independent of a specific date.
Instance Method Summary
-
#initialize(service) ⇒ Project
constructor
Creates a new Project instance.
-
#project_id ⇒ Object
(also: #project)
The BigQuery project connected to.
-
#service_account_email ⇒ String
The email address of the service account for the project used to connect to BigQuery.
-
#universe_domain ⇒ String
The universe domain the client is connected to.
Constructor Details
#initialize(service) ⇒ Project
Creates a new Project instance, wrapping the given service object.
# File 'lib/google/cloud/bigquery/project.rb', line 66

def initialize service
  @service = service
end
Instance Attribute Details
#name ⇒ String? (readonly)
The descriptive name of the project. Can only be present if the project was retrieved with #projects.
# File 'lib/google/cloud/bigquery/project.rb', line 54

def name
  @name
end
#numeric_id ⇒ Integer? (readonly)
The numeric ID of the project. Can only be present if the project was retrieved with #projects.
# File 'lib/google/cloud/bigquery/project.rb', line 54

def numeric_id
  @numeric_id
end
Instance Method Details
#copy(source_table, destination_table, create: nil, write: nil, reservation: nil) {|job| ... } ⇒ Boolean
Copies the data from the source table to the destination table using a synchronous method that blocks for a response. Timeouts and transient errors are generally handled as needed to complete the job. See #copy_job for the asynchronous version. Use this method instead of Table#copy to copy from source tables in other projects.
The geographic location for the job ("US", "EU", etc.) can be set via CopyJob::Updater#location= in a block passed to this method.
# File 'lib/google/cloud/bigquery/project.rb', line 293

def copy source_table, destination_table, create: nil, write: nil, reservation: nil, &block
  job = copy_job source_table, destination_table, create: create, write: write,
                 reservation: reservation, &block
  job.wait_until_done!
  ensure_job_succeeded! job
  true
end
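Example, as a minimal sketch (the destination dataset and table are placeholders; the source is a public sample table):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
destination_table = bigquery.dataset("my_dataset").table "my_destination_table"

bigquery.copy "bigquery-public-data.samples.shakespeare", destination_table #=> true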
#copy_job(source_table, destination_table, create: nil, write: nil, job_id: nil, prefix: nil, labels: nil, reservation: nil) {|job| ... } ⇒ Google::Cloud::Bigquery::CopyJob
Copies the data from the source table to the destination table using an asynchronous method. In this method, a CopyJob is immediately returned. The caller may poll the service by repeatedly calling Job#reload! and Job#done? to detect when the job is done, or simply block until the job is done by calling Job#wait_until_done!. See #copy for the synchronous version. Use this method instead of Table#copy_job to copy from source tables in other projects.
The geographic location for the job ("US", "EU", etc.) can be set via CopyJob::Updater#location= in a block passed to this method.
# File 'lib/google/cloud/bigquery/project.rb', line 208

def copy_job source_table, destination_table, create: nil, write: nil, job_id: nil,
             prefix: nil, labels: nil, reservation: nil
  ensure_service!
  options = { create: create, write: write, labels: labels, job_id: job_id, prefix: prefix,
              reservation: reservation }

  updater = CopyJob::Updater.from_options(
    service,
    Service.get_table_ref(source_table, default_ref: project_ref),
    Service.get_table_ref(destination_table, default_ref: project_ref),
    options
  )

  yield updater if block_given?

  job_gapi = updater.to_gapi
  gapi = service.copy_table job_gapi
  Job.from_gapi gapi, service
end
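Example, as a sketch of the asynchronous flow with placeholder names:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
destination_table = bigquery.dataset("my_dataset").table "my_destination_table"

copy_job = bigquery.copy_job "bigquery-public-data.samples.shakespeare", destination_table
copy_job.wait_until_done!
copy_job.done? #=> true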
#create_dataset(dataset_id, name: nil, description: nil, expiration: nil, location: nil, access_policy_version: nil, dataset_view: nil) {|access| ... } ⇒ Google::Cloud::Bigquery::Dataset
Creates a new dataset.
# File 'lib/google/cloud/bigquery/project.rb', line 1651

def create_dataset dataset_id, name: nil, description: nil, expiration: nil, location: nil,
                   access_policy_version: nil, dataset_view: nil
  ensure_service!

  new_ds = Google::Apis::BigqueryV2::Dataset.new(
    dataset_reference: Google::Apis::BigqueryV2::DatasetReference.new(
      project_id: project, dataset_id: dataset_id
    )
  )

  # Can set location only on creation, no Dataset#location method
  new_ds.update! location: location unless location.nil?

  updater = Dataset::Updater.new(new_ds).tap do |b|
    b.name = name unless name.nil?
    b.description = description unless description.nil?
    b.default_expiration = expiration unless expiration.nil?
  end

  if block_given?
    yield updater
    updater.check_for_mutated_access!
  end

  gapi = service.insert_dataset new_ds, access_policy_version: access_policy_version
  Dataset.from_gapi gapi, service, access_policy_version: access_policy_version,
                                   dataset_view: dataset_view
end
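Example, as a sketch (the dataset ID and writer email are placeholders); the block configures access rules before the API call is made:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

dataset = bigquery.create_dataset "my_dataset", name: "My Dataset" do |dataset|
  dataset.access.add_writer_user "writers@example.com"
end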
#dataset(dataset_id, skip_lookup: nil, project_id: nil, access_policy_version: nil, dataset_view: nil) ⇒ Google::Cloud::Bigquery::Dataset?
Retrieves an existing dataset by ID.
# File 'lib/google/cloud/bigquery/project.rb', line 1573

def dataset dataset_id, skip_lookup: nil, project_id: nil, access_policy_version: nil,
            dataset_view: nil
  ensure_service!
  project_id ||= project
  return Dataset.new_reference project_id, dataset_id, service if skip_lookup
  gapi = service.get_project_dataset project_id, dataset_id,
                                     access_policy_version: access_policy_version,
                                     dataset_view: dataset_view
  Dataset.from_gapi gapi, service, access_policy_version: access_policy_version,
                                   dataset_view: dataset_view
rescue Google::Cloud::NotFoundError
  nil
end
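Example, as a sketch; skip_lookup builds a local reference without an API call:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

dataset = bigquery.dataset "my_dataset"                      # API lookup; nil if missing
reference = bigquery.dataset "my_dataset", skip_lookup: true # no API call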
#datasets(all: nil, filter: nil, token: nil, max: nil) ⇒ Array<Google::Cloud::Bigquery::Dataset>
Retrieves the list of datasets belonging to the project.
# File 'lib/google/cloud/bigquery/project.rb', line 1724

def datasets all: nil, filter: nil, token: nil, max: nil
  ensure_service!
  gapi = service.list_datasets all: all, filter: filter, token: token, max: max
  Dataset::List.from_gapi gapi, service, all, filter, max
end
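Example, as a sketch; the returned list pages through results when iterated with all:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

bigquery.datasets.all do |dataset|
  puts dataset.dataset_id
end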
#encryption(kms_key: nil) ⇒ Google::Cloud::Bigquery::EncryptionConfiguration
Creates a new Bigquery::EncryptionConfiguration instance.
This method does not execute an API call. Use the encryption configuration to encrypt a table when creating one via Bigquery::Dataset#create_table, Bigquery::Dataset#load, Bigquery::Table#copy, or Bigquery::Project#query.
# File 'lib/google/cloud/bigquery/project.rb', line 2085

def encryption kms_key: nil
  encrypt_config = Bigquery::EncryptionConfiguration.new
  encrypt_config.kms_key = kms_key unless kms_key.nil?
  encrypt_config
end
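Example, as a sketch with a placeholder Cloud KMS key name, applied when creating a table:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset "my_dataset"

key_name = "projects/a/locations/b/keyRings/c/cryptoKeys/d" # placeholder
encrypt_config = bigquery.encryption kms_key: key_name

table = dataset.create_table "my_table" do |updater|
  updater.encryption = encrypt_config
end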
#external(url, format: nil) {|ext| ... } ⇒ External::DataSource
Creates a new External::DataSource (or subclass) object that represents an external data source that can be queried directly, even though the data is not stored in BigQuery. Instead of loading or streaming the data, this object references the external data source.
# File 'lib/google/cloud/bigquery/project.rb', line 1510

def external url, format: nil
  ext = External.from_urls url, format
  yield ext if block_given?
  ext
end
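Example, as a sketch querying a CSV file in Cloud Storage without loading it (the bucket and paths are placeholders):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

csv_url = "gs://bucket/path/to/data.csv"
csv_table = bigquery.external csv_url do |csv|
  csv.autodetect = true
  csv.skip_leading_rows = 1
end

data = bigquery.query "SELECT * FROM my_ext_table",
                      external: { my_ext_table: csv_table }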
#extract(source, extract_url, format: nil, compression: nil, delimiter: nil, header: nil, reservation: nil) {|job| ... } ⇒ Boolean
Extracts the data from a table or exports a model to Google Cloud Storage using a synchronous method that blocks for a response. Timeouts and transient errors are generally handled as needed to complete the job. See #extract_job for the asynchronous version.
Use this method instead of Table#extract or Model#extract to extract data from source tables or models in other projects.
The geographic location for the job ("US", "EU", etc.) can be set via ExtractJob::Updater#location= in a block passed to this method.
# File 'lib/google/cloud/bigquery/project.rb', line 2310

def extract source, extract_url, format: nil, compression: nil, delimiter: nil, header: nil,
            reservation: nil, &block
  job = extract_job source, extract_url,
                    format: format, compression: compression, delimiter: delimiter,
                    header: header, reservation: reservation, &block
  job.wait_until_done!
  ensure_job_succeeded! job
  true
end
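Example, as a sketch extracting a public sample table to a placeholder Cloud Storage URI:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

bigquery.extract "bigquery-public-data.samples.shakespeare",
                 "gs://my-bucket/shakespeare.csv" #=> true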
#extract_job(source, extract_url, format: nil, compression: nil, delimiter: nil, header: nil, job_id: nil, prefix: nil, labels: nil, reservation: nil) {|job| ... } ⇒ Google::Cloud::Bigquery::ExtractJob
Extracts the data from a table or exports a model to Google Cloud Storage asynchronously, immediately returning an ExtractJob that can be used to track the progress of the export job. The caller may poll the service by repeatedly calling Job#reload! and Job#done? to detect when the job is done, or simply block until the job is done by calling Job#wait_until_done!. See #extract for the synchronous version.
Use this method instead of Table#extract_job or Model#extract_job to extract data from source tables or models in other projects.
The geographic location for the job ("US", "EU", etc.) can be set via ExtractJob::Updater#location= in a block passed to this method.
# File 'lib/google/cloud/bigquery/project.rb', line 2208

def extract_job source, extract_url, format: nil, compression: nil, delimiter: nil,
                header: nil, job_id: nil, prefix: nil, labels: nil, reservation: nil
  ensure_service!
  options = { format: format, compression: compression, delimiter: delimiter, header: header,
              job_id: job_id, prefix: prefix, labels: labels, reservation: reservation }
  source_ref = if source.respond_to? :model_ref
                 source.model_ref
               else
                 Service.get_table_ref source, default_ref: project_ref
               end

  updater = ExtractJob::Updater.from_options service, source_ref, extract_url, options

  yield updater if block_given?

  job_gapi = updater.to_gapi
  gapi = service.extract_table job_gapi
  Job.from_gapi gapi, service
end
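Example, the asynchronous variant as a sketch with placeholder names:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

extract_job = bigquery.extract_job "bigquery-public-data.samples.shakespeare",
                                   "gs://my-bucket/shakespeare.csv"
extract_job.wait_until_done!
extract_job.done? #=> true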
#job(job_id, location: nil) ⇒ Google::Cloud::Bigquery::Job?
Retrieves an existing job by ID.
# File 'lib/google/cloud/bigquery/project.rb', line 1747

def job job_id, location: nil
  ensure_service!
  gapi = service.get_job job_id, location: location
  Job.from_gapi gapi, service
rescue Google::Cloud::NotFoundError
  nil
end
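Example, as a sketch (the job ID is a placeholder; nil is returned when no such job exists):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

job = bigquery.job "my_job"
puts job.state if job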
#jobs(all: nil, token: nil, max: nil, filter: nil, min_created_at: nil, max_created_at: nil, parent_job: nil) ⇒ Array<Google::Cloud::Bigquery::Job>
Retrieves the list of jobs belonging to the project.
# File 'lib/google/cloud/bigquery/project.rb', line 1866

def jobs all: nil, token: nil, max: nil, filter: nil, min_created_at: nil,
         max_created_at: nil, parent_job: nil
  ensure_service!
  parent_job = parent_job.job_id if parent_job.is_a? Job
  options = {
    parent_job_id: parent_job, all: all, token: token, max: max, filter: filter,
    min_created_at: min_created_at, max_created_at: max_created_at
  }
  gapi = service.list_jobs(**options)
  Job::List.from_gapi gapi, service, **options
end
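Example, as a sketch listing only completed jobs; the filter values ("done", "pending", "running") follow the jobs.list API:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

bigquery.jobs(filter: "done").all do |job|
  puts job.job_id
end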
#load(table_id, files, dataset_id: "_SESSION", format: nil, create: nil, write: nil, projection_fields: nil, jagged_rows: nil, quoted_newlines: nil, encoding: nil, delimiter: nil, ignore_unknown: nil, max_bad_records: nil, quote: nil, skip_leading: nil, schema: nil, autodetect: nil, null_marker: nil, session_id: nil, date_format: nil, datetime_format: nil, time_format: nil, timestamp_format: nil, null_markers: nil, source_column_match: nil, time_zone: nil, reference_file_schema_uri: nil, preserve_ascii_control_characters: nil, reservation: nil) {|updater| ... } ⇒ Boolean
Loads data into the provided destination table using a synchronous method that blocks for a response. Timeouts and transient errors are generally handled as needed to complete the job. See also #load_job.
For the source of the data, you can pass a google-cloud-storage file path or a google-cloud-storage File instance, or you can upload a file directly. See Loading Data with a POST Request.
The geographic location for the job ("US", "EU", etc.) can be set via LoadJob::Updater#location= in a block passed to this method.
# File 'lib/google/cloud/bigquery/project.rb', line 1437

def load table_id, files, dataset_id: "_SESSION", format: nil, create: nil, write: nil,
         projection_fields: nil, jagged_rows: nil, quoted_newlines: nil, encoding: nil,
         delimiter: nil, ignore_unknown: nil, max_bad_records: nil, quote: nil,
         skip_leading: nil, schema: nil, autodetect: nil, null_marker: nil, session_id: nil,
         date_format: nil, datetime_format: nil, time_format: nil, timestamp_format: nil,
         null_markers: nil, source_column_match: nil, time_zone: nil,
         reference_file_schema_uri: nil, preserve_ascii_control_characters: nil,
         reservation: nil, &block
  job = load_job table_id, files,
                 dataset_id: dataset_id, format: format, create: create, write: write,
                 projection_fields: projection_fields, jagged_rows: jagged_rows,
                 quoted_newlines: quoted_newlines, encoding: encoding, delimiter: delimiter,
                 ignore_unknown: ignore_unknown, max_bad_records: max_bad_records,
                 quote: quote, skip_leading: skip_leading, schema: schema,
                 autodetect: autodetect, null_marker: null_marker, session_id: session_id,
                 date_format: date_format, datetime_format: datetime_format,
                 time_format: time_format, timestamp_format: timestamp_format,
                 null_markers: null_markers, source_column_match: source_column_match,
                 time_zone: time_zone, reference_file_schema_uri: reference_file_schema_uri,
                 preserve_ascii_control_characters: preserve_ascii_control_characters,
                 reservation: reservation, &block

  job.wait_until_done!
  ensure_job_succeeded! job
  true
end
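Example, as a sketch loading a CSV from a placeholder Cloud Storage URL and letting BigQuery infer the schema:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

gs_url = "gs://my-bucket/file-name.csv"
bigquery.load "my_new_table", gs_url,
              dataset_id: "my_dataset", autodetect: true #=> true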
#load_job(table_id, files, dataset_id: nil, format: nil, create: nil, write: nil, projection_fields: nil, jagged_rows: nil, quoted_newlines: nil, encoding: nil, delimiter: nil, ignore_unknown: nil, max_bad_records: nil, quote: nil, skip_leading: nil, schema: nil, job_id: nil, prefix: nil, labels: nil, autodetect: nil, null_marker: nil, dryrun: nil, create_session: nil, session_id: nil, project_id: nil, date_format: nil, datetime_format: nil, time_format: nil, timestamp_format: nil, null_markers: nil, source_column_match: nil, time_zone: nil, reference_file_schema_uri: nil, preserve_ascii_control_characters: nil, reservation: nil) {|updater| ... } ⇒ Google::Cloud::Bigquery::LoadJob
Loads data into the provided destination table using an asynchronous method. In this method, a LoadJob is immediately returned. The caller may poll the service by repeatedly calling Job#reload! and Job#done? to detect when the job is done, or simply block until the job is done by calling Job#wait_until_done!. See also #load.
For the source of the data, you can pass a google-cloud-storage file path or a google-cloud-storage File instance, or you can upload a file directly. See Loading Data with a POST Request.
The geographic location for the job ("US", "EU", etc.) can be set via LoadJob::Updater#location= in a block passed to this method.
# File 'lib/google/cloud/bigquery/project.rb', line 1218

def load_job table_id, files, dataset_id: nil, format: nil, create: nil, write: nil,
             projection_fields: nil, jagged_rows: nil, quoted_newlines: nil, encoding: nil,
             delimiter: nil, ignore_unknown: nil, max_bad_records: nil, quote: nil,
             skip_leading: nil, schema: nil, job_id: nil, prefix: nil, labels: nil,
             autodetect: nil, null_marker: nil, dryrun: nil, create_session: nil,
             session_id: nil, project_id: nil, date_format: nil, datetime_format: nil,
             time_format: nil, timestamp_format: nil, null_markers: nil,
             source_column_match: nil, time_zone: nil, reference_file_schema_uri: nil,
             preserve_ascii_control_characters: nil, reservation: nil, &block
  ensure_service!
  dataset_id ||= "_SESSION" unless create_session.nil? && session_id.nil?
  session_dataset = dataset dataset_id, skip_lookup: true, project_id: project_id
  table = session_dataset.table table_id, skip_lookup: true
  table.load_job files,
                 format: format, create: create, write: write,
                 projection_fields: projection_fields, jagged_rows: jagged_rows,
                 quoted_newlines: quoted_newlines, encoding: encoding, delimiter: delimiter,
                 ignore_unknown: ignore_unknown, max_bad_records: max_bad_records,
                 quote: quote, skip_leading: skip_leading, dryrun: dryrun, schema: schema,
                 job_id: job_id, prefix: prefix, labels: labels, autodetect: autodetect,
                 null_marker: null_marker, create_session: create_session,
                 session_id: session_id, date_format: date_format,
                 datetime_format: datetime_format, time_format: time_format,
                 timestamp_format: timestamp_format, null_markers: null_markers,
                 source_column_match: source_column_match, time_zone: time_zone,
                 reference_file_schema_uri: reference_file_schema_uri,
                 preserve_ascii_control_characters: preserve_ascii_control_characters,
                 reservation: reservation, &block
end
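Example, the asynchronous variant as a sketch, again with placeholder names:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

gs_url = "gs://my-bucket/file-name.csv"
load_job = bigquery.load_job "my_new_table", gs_url,
                             dataset_id: "my_dataset", autodetect: true
load_job.wait_until_done!
load_job.done? #=> true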
#project_id ⇒ Object Also known as: project
The BigQuery project connected to.
# File 'lib/google/cloud/bigquery/project.rb', line 92

def project_id
  service.project
end
#projects(token: nil, max: nil) ⇒ Array<Google::Cloud::Bigquery::Project>
Retrieves the list of all projects for which the currently authorized account has been granted any project role. The returned project instances share the same credentials as the project used to retrieve them, but lazily create a new API connection for interactions with the BigQuery service.
# File 'lib/google/cloud/bigquery/project.rb', line 1928

def projects token: nil, max: nil
  ensure_service!
  gapi = service.list_projects token: token, max: max
  Project::List.from_gapi gapi, service, max
end
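Example, as a sketch; each returned Project shares the current credentials:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

bigquery.projects.all do |project|
  puts project.project_id
end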
#query(query, params: nil, types: nil, external: nil, max: nil, cache: true, dataset: nil, project: nil, standard_sql: nil, legacy_sql: nil, session_id: nil, format_options_use_int64_timestamp: true, reservation: nil) {|job| ... } ⇒ Google::Cloud::Bigquery::Data
Queries data and waits for the results. In this method, a QueryJob is created and its results are saved to a temporary table, then read from the table. Timeouts and transient errors are generally handled as needed to complete the query. When used for executing DDL/DML statements, this method does not return row data.
The geographic location for the job ("US", "EU", etc.) can be set via QueryJob::Updater#location= in a block passed to this method.
# File 'lib/google/cloud/bigquery/project.rb', line 951

def query query, params: nil, types: nil, external: nil, max: nil, cache: true, dataset: nil,
          project: nil, standard_sql: nil, legacy_sql: nil, session_id: nil,
          format_options_use_int64_timestamp: true, reservation: nil, &block
  job = query_job query, params: params, types: types, external: external, cache: cache,
                  dataset: dataset, project: project, standard_sql: standard_sql,
                  legacy_sql: legacy_sql, session_id: session_id, reservation: reservation,
                  &block
  job.wait_until_done!

  if job.failed?
    begin
      # raise to activate ruby exception cause handling
      raise job.gapi_error
    rescue StandardError => e
      # wrap Google::Apis::Error with Google::Cloud::Error
      raise Google::Cloud::Error.from_error(e)
    end
  end

  job.data max: max, format_options_use_int64_timestamp: format_options_use_int64_timestamp
end
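Example, as a sketch using a named query parameter against a public sample table:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

sql = "SELECT word FROM `bigquery-public-data.samples.shakespeare` " \
      "WHERE word_count > @min LIMIT 10"
data = bigquery.query sql, params: { min: 100 }

data.each do |row|
  puts row[:word]
end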
#query_job(query, params: nil, types: nil, external: nil, priority: "INTERACTIVE", cache: true, table: nil, create: nil, write: nil, dryrun: nil, dataset: nil, project: nil, standard_sql: nil, legacy_sql: nil, large_results: nil, flatten: nil, maximum_billing_tier: nil, maximum_bytes_billed: nil, job_id: nil, prefix: nil, labels: nil, udfs: nil, create_session: nil, session_id: nil, reservation: nil) {|job| ... } ⇒ Google::Cloud::Bigquery::QueryJob
Queries data by creating a query job.
The geographic location for the job ("US", "EU", etc.) can be set via QueryJob::Updater#location= in a block passed to this method.
# File 'lib/google/cloud/bigquery/project.rb', line 631

def query_job query, params: nil, types: nil, external: nil, priority: "INTERACTIVE",
              cache: true, table: nil, create: nil, write: nil, dryrun: nil, dataset: nil,
              project: nil, standard_sql: nil, legacy_sql: nil, large_results: nil,
              flatten: nil, maximum_billing_tier: nil, maximum_bytes_billed: nil, job_id: nil,
              prefix: nil, labels: nil, udfs: nil, create_session: nil, session_id: nil,
              reservation: nil
  ensure_service!
  project ||= self.project
  options = {
    params: params, types: types, external: external, priority: priority, cache: cache,
    table: table, create: create, write: write, dryrun: dryrun, dataset: dataset,
    project: project, standard_sql: standard_sql, legacy_sql: legacy_sql,
    large_results: large_results, flatten: flatten,
    maximum_billing_tier: maximum_billing_tier, maximum_bytes_billed: maximum_bytes_billed,
    job_id: job_id, prefix: prefix, labels: labels, udfs: udfs,
    create_session: create_session, session_id: session_id, reservation: reservation
  }

  updater = QueryJob::Updater.from_options service, query, options

  yield updater if block_given?

  gapi = service.query_job updater.to_gapi
  Job.from_gapi gapi, service
end
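Example, the job-based variant as a sketch, checking for failure before reading results:

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

job = bigquery.query_job "SELECT word FROM `bigquery-public-data.samples.shakespeare` LIMIT 10"
job.wait_until_done!

job.data.each { |row| puts row[:word] } unless job.failed?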
#schema {|schema| ... } ⇒ Google::Cloud::Bigquery::Schema
Creates a new schema instance. An optional block may be given to configure the schema, otherwise the schema is returned empty and may be configured directly.
The returned schema can be passed to Dataset#load using the schema option. However, for most use cases, the block yielded by Dataset#load is a more convenient way to configure the schema for the destination table.
# File 'lib/google/cloud/bigquery/project.rb', line 2014

def schema
  s = Schema.from_gapi
  yield s if block_given?
  s
end
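Example, as a sketch building a schema for use with the schema option (the field names and Cloud Storage URL are placeholders):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

schema = bigquery.schema do |s|
  s.string "first_name", mode: :required
  s.record "cities_lived", mode: :repeated do |nested|
    nested.string "place", mode: :required
    nested.integer "number_of_years", mode: :required
  end
end

dataset = bigquery.dataset "my_dataset"
dataset.load "my_table", "gs://my-bucket/file-name.csv", schema: schema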
#service_account_email ⇒ String
The email address of the service account for the project used to connect to BigQuery. (See also #project_id.)
# File 'lib/google/cloud/bigquery/project.rb', line 103

def service_account_email
  @service_account_email ||= service.project_service_account.email
end
#time(hour, minute, second) ⇒ Bigquery::Time
Creates a Bigquery::Time object to represent a time, independent of a specific date.
# File 'lib/google/cloud/bigquery/project.rb', line 1977

def time hour, minute, second
  Bigquery::Time.new "#{hour}:#{minute}:#{second}"
end
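Example, as a sketch passing the value as a query parameter (the table and column are placeholders):

require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

fourpm = bigquery.time 16, 0, 0
data = bigquery.query "SELECT name FROM `my_dataset.my_table` WHERE time_of_date = @time",
                      params: { time: fourpm }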
#universe_domain ⇒ String
The universe domain the client is connected to.
# File 'lib/google/cloud/bigquery/project.rb', line 75

def universe_domain
  service.universe_domain
end