Class: Gcloud::Bigquery::Project

Inherits: Object
Defined in:
lib/gcloud/bigquery/project.rb

Overview

Project

Projects are top-level containers in Google Cloud Platform. They store information about billing and authorized users, and they contain BigQuery data. Each project has a friendly name and a unique ID.

Gcloud::Bigquery::Project is the main object for interacting with Google BigQuery. Dataset objects are created, accessed, and deleted by Gcloud::Bigquery::Project.

See Gcloud#bigquery

Examples:

require "gcloud"

gcloud = Gcloud.new
bigquery = gcloud.bigquery
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"

Instance Method Summary

Constructor Details

#initialize(project, credentials) ⇒ Project

Creates a new Project instance.

See Gcloud#bigquery


# File 'lib/gcloud/bigquery/project.rb', line 56

def initialize project, credentials
  project = project.to_s # Always cast to a string
  fail ArgumentError, "project is missing" if project.empty?
  @connection = Connection.new project, credentials
end

Instance Method Details

#create_dataset(dataset_id, name: nil, description: nil, expiration: nil, access: nil) ⇒ Gcloud::Bigquery::Dataset

Creates a new dataset.

Examples:

require "gcloud"

gcloud = Gcloud.new
bigquery = gcloud.bigquery

dataset = bigquery.create_dataset "my_dataset"

A name and description can be provided:

require "gcloud"

gcloud = Gcloud.new
bigquery = gcloud.bigquery

dataset = bigquery.create_dataset "my_dataset",
                                  name: "My Dataset",
                                  description: "This is my Dataset"

Access rules can be provided with the access option:

require "gcloud"

gcloud = Gcloud.new
bigquery = gcloud.bigquery

dataset = bigquery.create_dataset "my_dataset",
  access: [{"role"=>"WRITER", "userByEmail"=>"[email protected]"}]

Or, configure access with a block: (See Dataset::Access)

require "gcloud"

gcloud = Gcloud.new
bigquery = gcloud.bigquery

dataset = bigquery.create_dataset "my_dataset" do |access|
  access.add_writer_user "[email protected]"
end
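Both the raw access: array and the block form build up the same rule hashes. A minimal pure-Ruby sketch of such a builder (AccessBuilder is a hypothetical stand-in for Dataset::Access, not part of the gem):

```ruby
# Hypothetical sketch (not the gem's Dataset::Access) showing how a
# block-style builder can accumulate the same rule hashes that the
# access: option accepts directly.
class AccessBuilder
  attr_reader :access

  def initialize
    @access = []
  end

  # Mirrors the add_writer_user convenience method: appends a WRITER
  # rule keyed by the user's email address.
  def add_writer_user email
    @access << { "role" => "WRITER", "userByEmail" => email }
  end
end

builder = AccessBuilder.new
builder.add_writer_user "writer@example.com"
builder.access
# => [{"role"=>"WRITER", "userByEmail"=>"writer@example.com"}]
```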

Parameters:

  • dataset_id (String)

    A unique ID for this dataset, without the project name. The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_). The maximum length is 1,024 characters.

  • name (String)

    A descriptive name for the dataset.

  • description (String)

    A user-friendly description of the dataset.

  • expiration (Integer)

    The default lifetime of all tables in the dataset, in milliseconds. The minimum value is 3600000 milliseconds (one hour).

  • access (Array<Hash>)

    The access rules for a Dataset, expressed as an array of hashes in the BigQuery API data structure. See BigQuery Access Control for more information.

Returns:

  • (Gcloud::Bigquery::Dataset)

# File 'lib/gcloud/bigquery/project.rb', line 308

def create_dataset dataset_id, name: nil, description: nil,
                   expiration: nil, access: nil
  if block_given?
    access_builder = Dataset::Access.new connection.default_access_rules,
                                         "projectId" => project
    yield access_builder
    access = access_builder.access if access_builder.changed?
  end

  ensure_connection!
  options = { name: name, description: description,
              expiration: expiration, access: access }
  resp = connection.insert_dataset dataset_id, options
  return Dataset.from_gapi(resp.data, connection) if resp.success?
  fail ApiError.from_response(resp)
end
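The dataset_id rule described under Parameters can be checked client-side before calling the API. A minimal sketch (valid_dataset_id? is a hypothetical helper, not provided by the gem):

```ruby
# Validates a dataset ID against the documented rule: only letters
# (a-z, A-Z), numbers (0-9), or underscores (_), and at most 1,024
# characters. \w in a Ruby regexp matches exactly [a-zA-Z0-9_].
def valid_dataset_id? dataset_id
  dataset_id.is_a?(String) &&
    dataset_id.length.between?(1, 1024) &&
    dataset_id.match?(/\A\w+\z/)
end

valid_dataset_id? "my_dataset"  # => true
valid_dataset_id? "my-dataset"  # => false (hyphen is not allowed)
```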

#dataset(dataset_id) ⇒ Gcloud::Bigquery::Dataset?

Retrieves an existing dataset by ID.

Examples:

require "gcloud"

gcloud = Gcloud.new
bigquery = gcloud.bigquery

dataset = bigquery.dataset "my_dataset"
puts dataset.name

Parameters:

  • dataset_id (String)

    The ID of a dataset.

Returns:

  • (Gcloud::Bigquery::Dataset, nil) - nil when the dataset is not found

# File 'lib/gcloud/bigquery/project.rb', line 241

def dataset dataset_id
  ensure_connection!
  resp = connection.get_dataset dataset_id
  if resp.success?
    Dataset.from_gapi resp.data, connection
  else
    return nil if resp.status == 404
    fail ApiError.from_response(resp)
  end
end
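The lookup methods (#dataset and #job) share a not-found convention: a 404 maps to nil, any other failure raises. A standalone sketch of the pattern (FakeResponse is illustrative, not gem API):

```ruby
# Stand-in for an API response object, purely for illustration.
FakeResponse = Struct.new(:ok, :status, :data) do
  def success?
    ok
  end
end

# The retrieval pattern: success yields the resource, 404 maps to nil,
# anything else raises.
def lookup resp
  return resp.data if resp.success?
  return nil if resp.status == 404
  raise "API error: #{resp.status}"
end

lookup FakeResponse.new(true, 200, "resource")  # => "resource"
lookup FakeResponse.new(false, 404, nil)        # => nil
```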

#datasets(all: nil, token: nil, max: nil) ⇒ Array<Gcloud::Bigquery::Dataset>

Retrieves the list of datasets belonging to the project.

Examples:

require "gcloud"

gcloud = Gcloud.new
bigquery = gcloud.bigquery

datasets = bigquery.datasets
datasets.each do |dataset|
  puts dataset.name
end

Retrieve all datasets, including hidden ones, with the :all option:

require "gcloud"

gcloud = Gcloud.new
bigquery = gcloud.bigquery

all_datasets = bigquery.datasets all: true

With pagination: (See Dataset::List#token)

require "gcloud"

gcloud = Gcloud.new
bigquery = gcloud.bigquery

all_datasets = []
tmp_datasets = bigquery.datasets
while tmp_datasets.any? do
  tmp_datasets.each do |dataset|
    all_datasets << dataset
  end
  # break loop if no more datasets available
  break if tmp_datasets.token.nil?
  # get the next group of datasets
  tmp_datasets = bigquery.datasets token: tmp_datasets.token
end

Parameters:

  • all (Boolean)

    Whether to list all datasets, including hidden ones. The default is false.

  • token (String)

    A previously-returned page token representing part of the larger set of results to view.

  • max (Integer)

    Maximum number of datasets to return.

Returns:

  • (Array<Gcloud::Bigquery::Dataset>)

# File 'lib/gcloud/bigquery/project.rb', line 374

def datasets all: nil, token: nil, max: nil
  ensure_connection!
  options = { all: all, token: token, max: max }
  resp = connection.list_datasets options
  if resp.success?
    Dataset::List.from_response resp, connection
  else
    fail ApiError.from_response(resp)
  end
end
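The token loop shown in the examples generalizes to any paged list. A pure-Ruby sketch of the pattern with a stand-in pager (FakePage below is illustrative, not the gem's Dataset::List):

```ruby
# Generic token-based pagination: the block is called with the current
# page token (nil for the first page) and must return an object that
# responds to #each and #token, as Dataset::List and Job::List do.
def all_pages
  results = []
  token = nil
  loop do
    page = yield token
    page.each { |item| results << item }
    token = page.token
    break if token.nil?
  end
  results
end

# Stand-in for a paged list, purely for illustration.
FakePage = Struct.new(:items, :token) do
  def each(&blk)
    items.each(&blk)
  end
end

pages = { nil => FakePage.new([1, 2], "next"), "next" => FakePage.new([3], nil) }
all_pages { |tok| pages[tok] }
# => [1, 2, 3]
```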

#job(job_id) ⇒ Gcloud::Bigquery::Job?

Retrieves an existing job by ID.

Examples:

require "gcloud"

gcloud = Gcloud.new
bigquery = gcloud.bigquery

job = bigquery.job "my_job"

Parameters:

  • job_id (String)

    The ID of a job.

Returns:

  • (Gcloud::Bigquery::Job, nil) - nil when the job is not found

# File 'lib/gcloud/bigquery/project.rb', line 401

def job job_id
  ensure_connection!
  resp = connection.get_job job_id
  if resp.success?
    Job.from_gapi resp.data, connection
  else
    return nil if resp.status == 404
    fail ApiError.from_response(resp)
  end
end

#jobs(all: nil, token: nil, max: nil, filter: nil) ⇒ Array<Gcloud::Bigquery::Job>

Retrieves the list of jobs belonging to the project.

Examples:

require "gcloud"

gcloud = Gcloud.new
bigquery = gcloud.bigquery

jobs = bigquery.jobs

Retrieve only running jobs using the :filter option:


require "gcloud"

gcloud = Gcloud.new
bigquery = gcloud.bigquery

running_jobs = bigquery.jobs filter: "running"

With pagination: (See Job::List#token)


require "gcloud"

gcloud = Gcloud.new
bigquery = gcloud.bigquery

all_jobs = []
tmp_jobs = bigquery.jobs
while tmp_jobs.any? do
  tmp_jobs.each do |job|
    all_jobs << job
  end
  # break loop if no more jobs available
  break if tmp_jobs.token.nil?
  # get the next group of jobs
  tmp_jobs = bigquery.jobs token: tmp_jobs.token
end

Parameters:

  • all (Boolean)

    Whether to display jobs owned by all users in the project. The default is false.

  • token (String)

    A previously-returned page token representing part of the larger set of results to view.

  • max (Integer)

    Maximum number of jobs to return.

  • filter (String)

    A filter for job state.

    Acceptable values are:

    • done - Finished jobs
    • pending - Pending jobs
    • running - Running jobs

Returns:

  • (Array<Gcloud::Bigquery::Job>)

# File 'lib/gcloud/bigquery/project.rb', line 466

def jobs all: nil, token: nil, max: nil, filter: nil
  ensure_connection!
  options = { all: all, token: token, max: max, filter: filter }
  resp = connection.list_jobs options
  if resp.success?
    Job::List.from_response resp, connection
  else
    fail ApiError.from_response(resp)
  end
end

#project ⇒ Object

The BigQuery project connected to.

Examples:

require "gcloud"

gcloud = Gcloud.new "my-todo-project", "/path/to/keyfile.json"
bigquery = gcloud.bigquery

bigquery.project #=> "my-todo-project"

# File 'lib/gcloud/bigquery/project.rb', line 73

def project
  connection.project
end

#query(query, max: nil, timeout: 10000, dryrun: nil, cache: true, dataset: nil, project: nil) ⇒ Gcloud::Bigquery::QueryData

Queries data using the synchronous method.

Examples:

require "gcloud"

gcloud = Gcloud.new
bigquery = gcloud.bigquery

data = bigquery.query "SELECT name FROM [my_proj:my_data.my_table]"
data.each do |row|
  puts row["name"]
end

Parameters:

  • query (String)

    A query string, following the BigQuery query syntax, of the query to execute. Example: "SELECT count(f1) FROM [myProjectId:myDatasetId.myTableId]".

  • max (Integer)

    The maximum number of rows of data to return per page of results. Setting this flag to a small value such as 1000 and then paging through results might improve reliability when the query result set is large. In addition to this limit, responses are also limited to 10 MB. By default, there is no maximum row count, and only the byte limit applies.

  • timeout (Integer)

    How long to wait for the query to complete, in milliseconds, before the request times out and returns. Note that this is only a timeout for the request, not the query. If the query takes longer to run than the timeout value, the call returns without any results and with QueryData#complete? set to false. The default value is 10000 milliseconds (10 seconds).

  • dryrun (Boolean)

    If set to true, BigQuery doesn't run the job. Instead, if the query is valid, BigQuery returns statistics about the job such as how many bytes would be processed. If the query is invalid, an error returns. The default value is false.

  • cache (Boolean)

    Whether to look for the result in the query cache. The query cache is a best-effort cache that will be flushed whenever tables in the query are modified. The default value is true. For more information, see query caching.

  • dataset (String)

    Specifies the default datasetId and projectId to assume for any unqualified table names in the query. If not set, all table names in the query string must be qualified in the format 'datasetId.tableId'.

  • project (String)

    Specifies the default projectId to assume for any unqualified table names in the query. Only used if dataset option is set.

Returns:

  • (Gcloud::Bigquery::QueryData)

# File 'lib/gcloud/bigquery/project.rb', line 211

def query query, max: nil, timeout: 10000, dryrun: nil, cache: true,
          dataset: nil, project: nil
  ensure_connection!
  options = { max: max, timeout: timeout, dryrun: dryrun, cache: cache,
              dataset: dataset, project: project }
  resp = connection.query query, options
  if resp.success?
    QueryData.from_gapi resp.data, connection
  else
    fail ApiError.from_response(resp)
  end
end

#query_job(query, priority: "INTERACTIVE", cache: true, table: nil, create: nil, write: nil, large_results: nil, flatten: nil, dataset: nil) ⇒ Gcloud::Bigquery::QueryJob

Queries data using the asynchronous method.

Examples:

require "gcloud"

gcloud = Gcloud.new
bigquery = gcloud.bigquery

job = bigquery.query_job "SELECT name FROM [my_proj:my_data.my_table]"

job.wait_until_done!
if !job.failed?
  job.query_results.each do |row|
    puts row["name"]
  end
end

Parameters:

  • query (String)

    A query string, following the BigQuery query syntax, of the query to execute. Example: "SELECT count(f1) FROM [myProjectId:myDatasetId.myTableId]".

  • priority (String)

    Specifies a priority for the query. Possible values include INTERACTIVE and BATCH. The default value is INTERACTIVE.

  • cache (Boolean)

    Whether to look for the result in the query cache. The query cache is a best-effort cache that will be flushed whenever tables in the query are modified. The default value is true. For more information, see query caching.

  • table (Table)

    The destination table where the query results should be stored. If not present, a new table will be created to store the results.

  • create (String)

    Specifies whether the job is allowed to create new tables.

    The following values are supported:

    • needed - Create the table if it does not exist.
    • never - The table must already exist. A 'notFound' error is raised if the table does not exist.
  • write (String)

    Specifies the action that occurs if the destination table already exists.

    The following values are supported:

    • truncate - BigQuery overwrites the table data.
    • append - BigQuery appends the data to the table.
    • empty - A 'duplicate' error is returned in the job result if the table exists and contains data.
  • large_results (Boolean)

    If true, allows the query to produce arbitrarily large result tables at a slight cost in performance. Requires table parameter to be set.

  • flatten (Boolean)

    Flattens all nested and repeated fields in the query results. The default value is true. large_results parameter must be true if this is set to false.

  • dataset (Dataset, String)

    Specifies the default dataset to use for unqualified table names in the query.

Returns:

  • (Gcloud::Bigquery::QueryJob)

# File 'lib/gcloud/bigquery/project.rb', line 146

def query_job query, priority: "INTERACTIVE", cache: true, table: nil,
              create: nil, write: nil, large_results: nil, flatten: nil,
              dataset: nil
  ensure_connection!
  options = { priority: priority, cache: cache, table: table,
              create: create, write: write, large_results: large_results,
              flatten: flatten, dataset: dataset }
  resp = connection.query_job query, options
  if resp.success?
    Job.from_gapi resp.data, connection
  else
    fail ApiError.from_response(resp)
  end
end
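The create: and write: shorthands above correspond to BigQuery's createDisposition and writeDisposition API values. An illustrative mapping (a sketch of the idea, not the gem's internal code):

```ruby
# Illustrative mapping from the documented create:/write: shorthands to
# the BigQuery API's disposition constants.
CREATE_DISPOSITIONS = { "needed" => "CREATE_IF_NEEDED",
                        "never"  => "CREATE_NEVER" }.freeze
WRITE_DISPOSITIONS  = { "truncate" => "WRITE_TRUNCATE",
                        "append"   => "WRITE_APPEND",
                        "empty"    => "WRITE_EMPTY" }.freeze

# Builds the disposition fields for a job configuration, dropping any
# option that was not given.
def dispositions create: nil, write: nil
  { create_disposition: CREATE_DISPOSITIONS[create],
    write_disposition:  WRITE_DISPOSITIONS[write] }.reject { |_, v| v.nil? }
end

dispositions create: "needed", write: "truncate"
# => {:create_disposition=>"CREATE_IF_NEEDED", :write_disposition=>"WRITE_TRUNCATE"}
```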