Class: Gcloud::Bigquery::Project
- Inherits: Object
- Defined in: lib/gcloud/bigquery/project.rb
Overview
Projects are top-level containers in Google Cloud Platform. They store information about billing and authorized users, and they contain BigQuery data. Each project has a friendly name and a unique ID.
Gcloud::Bigquery::Project is the main object for interacting with Google BigQuery. Dataset objects are created, accessed, and deleted by Gcloud::Bigquery::Project.
See Gcloud#bigquery
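A minimal usage sketch, assuming a project and credentials are already configured for Gcloud.new (the dataset ID "my_dataset" is an illustrative placeholder):

```ruby
require "gcloud"

# Connect; with no arguments, the project ID and credentials are
# resolved from the environment (see Project.default_project below).
gcloud = Gcloud.new
bigquery = gcloud.bigquery

# Datasets are created, accessed, and deleted through the Project object.
dataset = bigquery.dataset "my_dataset"
```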
Instance Attribute Summary
- #service ⇒ Object

Class Method Summary
- .default_project ⇒ Object

Instance Method Summary
- #create_dataset(dataset_id, name: nil, description: nil, expiration: nil, location: nil) {|access| ... } ⇒ Gcloud::Bigquery::Dataset
  Creates a new dataset.
- #dataset(dataset_id) ⇒ Gcloud::Bigquery::Dataset?
  Retrieves an existing dataset by ID.
- #datasets(all: nil, token: nil, max: nil) ⇒ Array<Gcloud::Bigquery::Dataset>
  Retrieves the list of datasets belonging to the project.
- #initialize(service) ⇒ Project (constructor)
  Creates a new Project instance.
- #job(job_id) ⇒ Gcloud::Bigquery::Job?
  Retrieves an existing job by ID.
- #jobs(all: nil, token: nil, max: nil, filter: nil) ⇒ Array<Gcloud::Bigquery::Job>
  Retrieves the list of jobs belonging to the project.
- #project ⇒ Object
  The BigQuery project connected to.
- #query(query, max: nil, timeout: 10000, dryrun: nil, cache: true, dataset: nil, project: nil) ⇒ Gcloud::Bigquery::QueryData
  Queries data using the [synchronous method](cloud.google.com/bigquery/querying-data).
- #query_job(query, priority: "INTERACTIVE", cache: true, table: nil, create: nil, write: nil, large_results: nil, flatten: nil, dataset: nil) ⇒ Gcloud::Bigquery::QueryJob
  Queries data using the [asynchronous method](cloud.google.com/bigquery/querying-data).
Constructor Details
#initialize(service) ⇒ Project
Creates a new Project instance.
See Gcloud#bigquery
```ruby
# File 'lib/gcloud/bigquery/project.rb', line 56

def initialize service
  @service = service
end
```
Instance Attribute Details
#service ⇒ Object
```ruby
# File 'lib/gcloud/bigquery/project.rb', line 50

def service
  @service
end
```
Class Method Details
.default_project ⇒ Object
```ruby
# File 'lib/gcloud/bigquery/project.rb', line 77

def self.default_project
  ENV["BIGQUERY_PROJECT"] ||
    ENV["GCLOUD_PROJECT"] ||
    ENV["GOOGLE_CLOUD_PROJECT"] ||
    Gcloud::GCE.project_id
end
```
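The lookup order above can be exercised without a live service; this standalone sketch mirrors the same ENV precedence chain (the project names are made up):

```ruby
# Mirror of default_project's precedence: BIGQUERY_PROJECT wins,
# then GCLOUD_PROJECT, then GOOGLE_CLOUD_PROJECT.
ENV["GOOGLE_CLOUD_PROJECT"] = "fallback-project"
ENV.delete "GCLOUD_PROJECT"
ENV["BIGQUERY_PROJECT"] = "primary-project"

project_id = ENV["BIGQUERY_PROJECT"] ||
             ENV["GCLOUD_PROJECT"] ||
             ENV["GOOGLE_CLOUD_PROJECT"]

puts project_id # "primary-project"
```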
Instance Method Details
#create_dataset(dataset_id, name: nil, description: nil, expiration: nil, location: nil) {|access| ... } ⇒ Gcloud::Bigquery::Dataset
Creates a new dataset.
```ruby
# File 'lib/gcloud/bigquery/project.rb', line 308

def create_dataset dataset_id, name: nil, description: nil,
                   expiration: nil, location: nil
  ensure_service!

  new_ds = Google::Apis::BigqueryV2::Dataset.new(
    dataset_reference: Google::Apis::BigqueryV2::DatasetReference.new(
      project_id: project, dataset_id: dataset_id))

  # Can set location only on creation, no Dataset#location method
  new_ds.update! location: location unless location.nil?

  updater = Dataset::Updater.new(new_ds).tap do |b|
    b.name = name unless name.nil?
    b.description = description unless description.nil?
    b.default_expiration = expiration unless expiration.nil?
  end

  if block_given?
    yield updater
    updater.check_for_mutated_access!
  end

  gapi = service.insert_dataset new_ds
  Dataset.from_gapi gapi, service
end
```
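A hedged usage sketch of create_dataset (the dataset ID, name, and description are placeholders; note that location can only be set at creation time, as the source comment above states):

```ruby
require "gcloud"

gcloud = Gcloud.new
bigquery = gcloud.bigquery

# Create a dataset with optional metadata.
dataset = bigquery.create_dataset "my_dataset",
                                  name: "My Dataset",
                                  description: "This is my dataset"
```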
#dataset(dataset_id) ⇒ Gcloud::Bigquery::Dataset?
Retrieves an existing dataset by ID.
```ruby
# File 'lib/gcloud/bigquery/project.rb', line 244

def dataset dataset_id
  ensure_service!
  gapi = service.get_dataset dataset_id
  Dataset.from_gapi gapi, service
rescue Gcloud::NotFoundError
  nil
end
```
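Because the rescue clause converts Gcloud::NotFoundError into nil, callers should nil-check rather than rescue. The lookup-or-nil pattern in isolation (using a stand-in error class and an in-memory hash, not the real service):

```ruby
# Stand-in for Gcloud::NotFoundError, to demonstrate the pattern only.
class NotFoundError < StandardError; end

FAKE_DATASETS = { "my_dataset" => :dataset_record }

def fetch_dataset id
  FAKE_DATASETS.fetch(id) { raise NotFoundError }
rescue NotFoundError
  nil # a missing dataset yields nil instead of an exception
end

puts fetch_dataset("my_dataset").inspect # :dataset_record
puts fetch_dataset("missing").inspect    # nil
```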
#datasets(all: nil, token: nil, max: nil) ⇒ Array<Gcloud::Bigquery::Dataset>
Retrieves the list of datasets belonging to the project.
```ruby
# File 'lib/gcloud/bigquery/project.rb', line 376

def datasets all: nil, token: nil, max: nil
  ensure_service!
  options = { all: all, token: token, max: max }
  gapi = service.list_datasets options
  Dataset::List.from_gapi gapi, service, all, max
end
```
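A hedged pagination sketch: the returned Dataset::List behaves like an Array and, in this version of the gem, is assumed to carry a token for the next page:

```ruby
require "gcloud"

gcloud = Gcloud.new
bigquery = gcloud.bigquery

datasets = bigquery.datasets max: 10
datasets.each { |dataset| puts dataset.name }

# Fetch the next page if one exists (token is nil on the last page).
more = bigquery.datasets token: datasets.token if datasets.token
```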
#job(job_id) ⇒ Gcloud::Bigquery::Job?
Retrieves an existing job by ID.
```ruby
# File 'lib/gcloud/bigquery/project.rb', line 399

def job job_id
  ensure_service!
  gapi = service.get_job job_id
  Job.from_gapi gapi, service
rescue Gcloud::NotFoundError
  nil
end
```
#jobs(all: nil, token: nil, max: nil, filter: nil) ⇒ Array<Gcloud::Bigquery::Job>
Retrieves the list of jobs belonging to the project.
```ruby
# File 'lib/gcloud/bigquery/project.rb', line 459

def jobs all: nil, token: nil, max: nil, filter: nil
  ensure_service!
  options = { all: all, token: token, max: max, filter: filter }
  gapi = service.list_jobs options
  Job::List.from_gapi gapi, service, all, max, filter
end
```
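A hedged sketch of listing jobs. The filter values here follow the BigQuery REST API's stateFilter ("done", "pending", "running"), which this parameter is assumed to pass through:

```ruby
require "gcloud"

gcloud = Gcloud.new
bigquery = gcloud.bigquery

# List only completed jobs.
bigquery.jobs(filter: "done").each do |job|
  puts job.job_id
end
```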
#project ⇒ Object
The BigQuery project connected to.
```ruby
# File 'lib/gcloud/bigquery/project.rb', line 71

def project
  service.project
end
```
#query(query, max: nil, timeout: 10000, dryrun: nil, cache: true, dataset: nil, project: nil) ⇒ Gcloud::Bigquery::QueryData
Queries data using the [synchronous method](cloud.google.com/bigquery/querying-data).
```ruby
# File 'lib/gcloud/bigquery/project.rb', line 218

def query query, max: nil, timeout: 10000, dryrun: nil, cache: true,
          dataset: nil, project: nil
  ensure_service!
  options = { max: max, timeout: timeout, dryrun: dryrun, cache: cache,
              dataset: dataset, project: project }
  gapi = service.query query, options
  QueryData.from_gapi gapi, service
end
```
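A hedged sketch of a synchronous query: the call blocks until results are ready or the timeout (in milliseconds) elapses. Rows are yielded as hashes keyed by column name, and this era of the gem used legacy SQL with bracketed table references (the project, dataset, and table names below are placeholders):

```ruby
require "gcloud"

gcloud = Gcloud.new
bigquery = gcloud.bigquery

# Run the query and iterate the result rows.
data = bigquery.query "SELECT name FROM [my_proj:my_data.my_table]",
                      max: 100
data.each do |row|
  puts row["name"]
end
```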
#query_job(query, priority: "INTERACTIVE", cache: true, table: nil, create: nil, write: nil, large_results: nil, flatten: nil, dataset: nil) ⇒ Gcloud::Bigquery::QueryJob
Queries data using the [asynchronous method](cloud.google.com/bigquery/querying-data).
```ruby
# File 'lib/gcloud/bigquery/project.rb', line 146

def query_job query, priority: "INTERACTIVE", cache: true, table: nil,
              create: nil, write: nil, large_results: nil, flatten: nil,
              dataset: nil
  ensure_service!
  options = { priority: priority, cache: cache, table: table,
              create: create, write: write, large_results: large_results,
              flatten: flatten, dataset: dataset }
  gapi = service.query_job query, options
  Job.from_gapi gapi, service
end
```
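A hedged sketch of the asynchronous path: query_job returns immediately with a QueryJob, which is polled to completion before results are read (wait_until_done!, failed?, and query_results are assumed from the Job and QueryJob APIs of this gem version):

```ruby
require "gcloud"

gcloud = Gcloud.new
bigquery = gcloud.bigquery

# Submit the query as a background job at batch priority.
job = bigquery.query_job "SELECT name FROM [my_proj:my_data.my_table]",
                         priority: "BATCH"

# Poll until the job finishes, then read the result rows.
job.wait_until_done!
data = job.query_results unless job.failed?
```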