Class: Gcloud::Bigquery::Dataset

Inherits:
Object
Defined in:
lib/gcloud/bigquery/dataset.rb,
lib/gcloud/bigquery/dataset/list.rb,
lib/gcloud/bigquery/dataset/access.rb

Overview

Dataset

Represents a Dataset. A dataset is a grouping mechanism that holds zero or more tables. Datasets are the lowest level unit of access control; you cannot control access at the table level. A dataset is contained within a specific project.

Examples:

require "gcloud"

gcloud = Gcloud.new
bigquery = gcloud.bigquery

dataset = bigquery.create_dataset "my_dataset",
                                  name: "My Dataset",
                                  description: "This is my Dataset"

Defined Under Namespace

Classes: Access, List


Instance Method Details

#access {|a2| ... } ⇒ Object

Retrieves the access rules for a Dataset using the BigQuery API data structure of an array of hashes. The rules can be updated by passing a block; see Access for all the methods available.

Examples:

require "gcloud"

gcloud = Gcloud.new
bigquery = gcloud.bigquery
dataset = bigquery.dataset "my_dataset"

dataset.access #=> [{"role"=>"OWNER",
               #     "specialGroup"=>"projectOwners"},
               #    {"role"=>"WRITER",
               #     "specialGroup"=>"projectWriters"},
               #    {"role"=>"READER",
               #     "specialGroup"=>"projectReaders"},
               #    {"role"=>"OWNER",
               #     "userByEmail"=>"123456789-...com"}]

Manage the access rules by passing a block:

require "gcloud"

gcloud = Gcloud.new
bigquery = gcloud.bigquery
dataset = bigquery.dataset "my_dataset"

dataset.access do |access|
  access.add_owner_group "owners@example.com"
  access.add_writer_user "writer@example.com"
  access.remove_writer_user "readers@example.com"
  access.add_reader_special :all
  access.add_reader_view other_dataset_view_object
end

Yields:

  • (a2)

See Also:

  • BigQuery Access Control

# File 'lib/gcloud/bigquery/dataset.rb', line 237

def access
  ensure_full_data!
  g = @gapi
  g = g.to_hash if g.respond_to? :to_hash
  a = g["access"] ||= []
  return a unless block_given?
  a2 = Access.new a, dataset_ref
  yield a2
  self.access = a2.access if a2.changed?
end

#access=(new_access) ⇒ Object

Sets the access rules for a Dataset using the BigQuery API data structure of an array of hashes. See BigQuery Access Control for more information.

This method is provided for advanced usage of managing the access rules. Calling #access with a block is the preferred way to manage access rules.

Examples:

require "gcloud"

gcloud = Gcloud.new
bigquery = gcloud.bigquery
dataset = bigquery.dataset "my_dataset"

dataset.access = [{"role"=>"OWNER",
                   "specialGroup"=>"projectOwners"},
                  {"role"=>"WRITER",
                   "specialGroup"=>"projectWriters"},
                  {"role"=>"READER",
                   "specialGroup"=>"projectReaders"},
                  {"role"=>"OWNER",
                   "userByEmail"=>"123456789-...com"}]

# File 'lib/gcloud/bigquery/dataset.rb', line 274

def access= new_access
  patch_gapi! access: new_access
end

#api_url ⇒ Object

A URL that can be used to access the dataset using the REST API.
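
Examples:

A minimal usage sketch; the value returned is the dataset's selfLink URL, which depends on the project and dataset:

require "gcloud"

gcloud = Gcloud.new
bigquery = gcloud.bigquery
dataset = bigquery.dataset "my_dataset"

puts dataset.api_url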


# File 'lib/gcloud/bigquery/dataset.rb', line 122

def api_url
  ensure_full_data!
  @gapi["selfLink"]
end

#create_table(table_id, name: nil, description: nil, schema: nil) ⇒ Gcloud::Bigquery::Table

Creates a new table. If you are adapting existing code that was written for the REST API, you can pass the table's schema as a hash (see examples).

Examples:

require "gcloud"

gcloud = Gcloud.new
bigquery = gcloud.bigquery
dataset = bigquery.dataset "my_dataset"
table = dataset.create_table "my_table"

You can also pass name and description options.

require "gcloud"

gcloud = Gcloud.new
bigquery = gcloud.bigquery
dataset = bigquery.dataset "my_dataset"
table = dataset.create_table "my_table",
                             name: "My Table",
                             description: "A description of my table."

You can define the table's schema using a block.

require "gcloud"

gcloud = Gcloud.new
bigquery = gcloud.bigquery
dataset = bigquery.dataset "my_dataset"
table = dataset.create_table "my_table" do |schema|
  schema.string "first_name", mode: :required
  schema.record "cities_lived", mode: :repeated do |nested_schema|
    nested_schema.string "place", mode: :required
    nested_schema.integer "number_of_years", mode: :required
  end
end

You can pass the table's schema as a hash.

require "gcloud"

gcloud = Gcloud.new
bigquery = gcloud.bigquery
dataset = bigquery.dataset "my_dataset"

schema = {
  "fields" => [
    {
      "name" => "first_name",
      "type" => "STRING",
      "mode" => "REQUIRED"
    },
    {
      "name" => "cities_lived",
      "type" => "RECORD",
      "mode" => "REPEATED",
      "fields" => [
        {
          "name" => "place",
          "type" => "STRING",
          "mode" => "REQUIRED"
        },
        {
          "name" => "number_of_years",
          "type" => "INTEGER",
          "mode" => "REQUIRED"
        }
      ]
    }
  ]
}
table = dataset.create_table "my_table", schema: schema

Parameters:

  • table_id (String)

    The ID of the table. The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_). The maximum length is 1,024 characters.

  • name (String)

    A descriptive name for the table.

  • description (String)

    A user-friendly description of the table.

  • schema (Hash)

    A hash specifying fields and data types for the table. A block may be passed instead (see examples). For the format of this hash, see the Tables resource.

Returns:

  • (Gcloud::Bigquery::Table)

# File 'lib/gcloud/bigquery/dataset.rb', line 397

def create_table table_id, name: nil, description: nil, schema: nil
  ensure_connection!
  if block_given?
    if schema
      fail ArgumentError, "only schema block or schema option is allowed"
    end
    schema_builder = Table::Schema.new nil
    yield schema_builder
    schema = schema_builder.schema if schema_builder.changed?
  end
  options = { name: name, description: description, schema: schema }
  insert_table table_id, options
end

#create_view(table_id, query, name: nil, description: nil) ⇒ Gcloud::Bigquery::View

Creates a new view table from the given query.

Examples:

require "gcloud"

gcloud = Gcloud.new
bigquery = gcloud.bigquery
dataset = bigquery.dataset "my_dataset"
view = dataset.create_view "my_view",
          "SELECT name, age FROM [proj:dataset.users]"

A name and description can be provided:

require "gcloud"

gcloud = Gcloud.new
bigquery = gcloud.bigquery
dataset = bigquery.dataset "my_dataset"
view = dataset.create_view "my_view",
          "SELECT name, age FROM [proj:dataset.users]",
          name: "My View", description: "This is my view"

Parameters:

  • table_id (String)

    The ID of the view table. The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_). The maximum length is 1,024 characters.

  • query (String)

    The query that BigQuery executes when the view is referenced.

  • name (String)

    A descriptive name for the table.

  • description (String)

    A user-friendly description of the table.

Returns:

  • (Gcloud::Bigquery::View)

# File 'lib/gcloud/bigquery/dataset.rb', line 445

def create_view table_id, query, name: nil, description: nil
  options = { query: query, name: name, description: description }
  insert_table table_id, options
end

#created_at ⇒ Object

The time when this dataset was created.
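
Examples:

A minimal usage sketch; the method returns a Time, and the exact timestamp depends on the dataset:

require "gcloud"

gcloud = Gcloud.new
bigquery = gcloud.bigquery
dataset = bigquery.dataset "my_dataset"

puts dataset.created_at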


# File 'lib/gcloud/bigquery/dataset.rb', line 171

def created_at
  ensure_full_data!
  Time.at(@gapi["creationTime"] / 1000.0)
end

#dataset_id ⇒ Object

A unique ID for this dataset, without the project name. The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_). The maximum length is 1,024 characters.


# File 'lib/gcloud/bigquery/dataset.rb', line 66

def dataset_id
  @gapi["datasetReference"]["datasetId"]
end

#default_expiration ⇒ Object

The default lifetime of all tables in the dataset, in milliseconds.


# File 'lib/gcloud/bigquery/dataset.rb', line 151

def default_expiration
  ensure_full_data!
  @gapi["defaultTableExpirationMs"]
end

#default_expiration=(new_default_expiration) ⇒ Object

Updates the default lifetime of all tables in the dataset, in milliseconds.
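
Examples:

A minimal sketch; the value 3_600_000 is illustrative and sets a default table lifetime of one hour:

require "gcloud"

gcloud = Gcloud.new
bigquery = gcloud.bigquery
dataset = bigquery.dataset "my_dataset"

# One hour, expressed in milliseconds.
dataset.default_expiration = 3_600_000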


# File 'lib/gcloud/bigquery/dataset.rb', line 162

def default_expiration= new_default_expiration
  patch_gapi! default_expiration: new_default_expiration
end

#delete(force: nil) ⇒ Boolean

Permanently deletes the dataset. The dataset must be empty before it can be deleted unless the force option is set to true.

Examples:

require "gcloud"

gcloud = Gcloud.new
bigquery = gcloud.bigquery

dataset = bigquery.dataset "my_dataset"
dataset.delete
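
The force option described below can be used to delete a dataset that still contains tables; a sketch of that documented behavior:

require "gcloud"

gcloud = Gcloud.new
bigquery = gcloud.bigquery

dataset = bigquery.dataset "my_dataset"
# Also deletes any tables contained in the dataset.
dataset.delete force: true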

Parameters:

  • force (Boolean)

    If true, delete all the tables in the dataset. If false and the dataset contains tables, the request will fail. Default is false.

Returns:

  • (Boolean)

    Returns true if the dataset was deleted.


# File 'lib/gcloud/bigquery/dataset.rb', line 299

def delete force: nil
  ensure_connection!
  resp = connection.delete_dataset dataset_id, force
  if resp.success?
    true
  else
    fail ApiError.from_response(resp)
  end
end

#description ⇒ Object

A user-friendly description of the dataset.


# File 'lib/gcloud/bigquery/dataset.rb', line 132

def description
  ensure_full_data!
  @gapi["description"]
end

#description=(new_description) ⇒ Object

Updates the user-friendly description of the dataset.
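
Examples:

A minimal usage sketch; the description string is illustrative:

require "gcloud"

gcloud = Gcloud.new
bigquery = gcloud.bigquery
dataset = bigquery.dataset "my_dataset"

dataset.description = "This is my Dataset"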


# File 'lib/gcloud/bigquery/dataset.rb', line 142

def description= new_description
  patch_gapi! description: new_description
end

#etag ⇒ Object

A string hash of the dataset.


# File 'lib/gcloud/bigquery/dataset.rb', line 112

def etag
  ensure_full_data!
  @gapi["etag"]
end

#location ⇒ Object

The geographic location where the dataset should reside. Possible values include EU and US. The default value is US.
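
Examples:

A minimal usage sketch; "US" is the documented default value:

require "gcloud"

gcloud = Gcloud.new
bigquery = gcloud.bigquery
dataset = bigquery.dataset "my_dataset"

dataset.location #=> "US"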


# File 'lib/gcloud/bigquery/dataset.rb', line 192

def location
  ensure_full_data!
  @gapi["location"]
end

#modified_at ⇒ Object

The date when this dataset or any of its tables was last modified.


# File 'lib/gcloud/bigquery/dataset.rb', line 181

def modified_at
  ensure_full_data!
  Time.at(@gapi["lastModifiedTime"] / 1000.0)
end

#name ⇒ Object

A descriptive name for the dataset.


# File 'lib/gcloud/bigquery/dataset.rb', line 94

def name
  @gapi["friendlyName"]
end

#name=(new_name) ⇒ Object

Updates the descriptive name for the dataset.


# File 'lib/gcloud/bigquery/dataset.rb', line 103

def name= new_name
  patch_gapi! name: new_name
end

#project_id ⇒ Object

The ID of the project containing this dataset.


# File 'lib/gcloud/bigquery/dataset.rb', line 75

def project_id
  @gapi["datasetReference"]["projectId"]
end

#query(query, max: nil, timeout: 10000, dryrun: nil, cache: true) ⇒ Gcloud::Bigquery::QueryData

Queries data using the synchronous method.

Sets the current dataset as the default dataset in the query. Useful for using unqualified table names.

Examples:

require "gcloud"

gcloud = Gcloud.new
bigquery = gcloud.bigquery
dataset = bigquery.dataset "my_dataset"

data = dataset.query "SELECT name FROM my_table"
data.each do |row|
  puts row["name"]
end
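
The documented options can be combined; the sketch below uses illustrative max and timeout values and checks QueryData#complete?, as described under the timeout parameter:

require "gcloud"

gcloud = Gcloud.new
bigquery = gcloud.bigquery
dataset = bigquery.dataset "my_dataset"

# Limit each page to 1,000 rows and wait up to 30 seconds.
data = dataset.query "SELECT name FROM my_table",
                     max: 1000, timeout: 30000
if data.complete?
  data.each { |row| puts row["name"] }
end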

Parameters:

  • query (String)

    A query string, following the BigQuery query syntax, of the query to execute. Example: "SELECT count(f1) FROM [myProjectId:myDatasetId.myTableId]".

  • max (Integer)

    The maximum number of rows of data to return per page of results. Setting this flag to a small value such as 1000 and then paging through results might improve reliability when the query result set is large. In addition to this limit, responses are also limited to 10 MB. By default, there is no maximum row count, and only the byte limit applies.

  • timeout (Integer)

    How long to wait for the query to complete, in milliseconds, before the request times out and returns. Note that this is only a timeout for the request, not the query. If the query takes longer to run than the timeout value, the call returns without any results and with QueryData#complete? set to false. The default value is 10000 milliseconds (10 seconds).

  • dryrun (Boolean)

    If set to true, BigQuery doesn't run the job. Instead, if the query is valid, BigQuery returns statistics about the job such as how many bytes would be processed. If the query is invalid, an error returns. The default value is false.

  • cache (Boolean)

    Whether to look for the result in the query cache. The query cache is a best-effort cache that will be flushed whenever tables in the query are modified. The default value is true. For more information, see query caching.

Returns:

  • (Gcloud::Bigquery::QueryData)

# File 'lib/gcloud/bigquery/dataset.rb', line 658

def query query, max: nil, timeout: 10000, dryrun: nil, cache: true
  options = { max: max, timeout: timeout, dryrun: dryrun, cache: cache }
  options[:dataset] ||= dataset_id
  options[:project] ||= project_id
  ensure_connection!
  resp = connection.query query, options
  if resp.success?
    QueryData.from_gapi resp.data, connection
  else
    fail ApiError.from_response(resp)
  end
end

#query_job(query, priority: "INTERACTIVE", cache: true, table: nil, create: nil, write: nil, large_results: nil, flatten: nil) ⇒ Gcloud::Bigquery::QueryJob

Queries data using the asynchronous method.

Sets the current dataset as the default dataset in the query. Useful for using unqualified table names.

Examples:

require "gcloud"

gcloud = Gcloud.new
bigquery = gcloud.bigquery
dataset = bigquery.dataset "my_dataset"

job = dataset.query_job "SELECT name FROM my_table"

job.wait_until_done!
if !job.failed?
  job.query_results.each do |row|
    puts row["name"]
  end
end
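
A destination table and write disposition can be given using the options described under Parameters below; a sketch in which the table name is illustrative:

require "gcloud"

gcloud = Gcloud.new
bigquery = gcloud.bigquery
dataset = bigquery.dataset "my_dataset"

# Assumes a table named "my_results" already exists in the dataset.
destination = dataset.table "my_results"
job = dataset.query_job "SELECT name FROM my_table",
                        table: destination,
                        write: "truncate"
job.wait_until_done!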

Parameters:

  • query (String)

    A query string, following the BigQuery query syntax, of the query to execute. Example: "SELECT count(f1) FROM [myProjectId:myDatasetId.myTableId]".

  • priority (String)

    Specifies a priority for the query. Possible values include INTERACTIVE and BATCH. The default value is INTERACTIVE.

  • cache (Boolean)

    Whether to look for the result in the query cache. The query cache is a best-effort cache that will be flushed whenever tables in the query are modified. The default value is true. For more information, see query caching.

  • table (Table)

    The destination table where the query results should be stored. If not present, a new table will be created to store the results.

  • create (String)

    Specifies whether the job is allowed to create new tables.

    The following values are supported:

    • needed - Create the table if it does not exist.
    • never - The table must already exist. A 'notFound' error is raised if the table does not exist.
  • write (String)

    Specifies the action that occurs if the destination table already exists.

    The following values are supported:

    • truncate - BigQuery overwrites the table data.
    • append - BigQuery appends the data to the table.
    • empty - A 'duplicate' error is returned in the job result if the table exists and contains data.
  • large_results (Boolean)

    If true, allows the query to produce arbitrarily large result tables at a slight cost in performance. Requires the table parameter to be set.

  • flatten (Boolean)

    Flattens all nested and repeated fields in the query results. The default value is true. The large_results parameter must be true if this is set to false.

Returns:

  • (Gcloud::Bigquery::QueryJob)

# File 'lib/gcloud/bigquery/dataset.rb', line 595

def query_job query, priority: "INTERACTIVE", cache: true, table: nil,
              create: nil, write: nil, large_results: nil, flatten: nil
  options = { priority: priority, cache: cache, table: table,
              create: create, write: write, large_results: large_results,
              flatten: flatten }
  options[:dataset] ||= self
  ensure_connection!
  resp = connection.query_job query, options
  if resp.success?
    Job.from_gapi resp.data, connection
  else
    fail ApiError.from_response(resp)
  end
end

#table(table_id) ⇒ Gcloud::Bigquery::Table, ...

Retrieves an existing table by ID.

Examples:

require "gcloud"

gcloud = Gcloud.new
bigquery = gcloud.bigquery
dataset = bigquery.dataset "my_dataset"
table = dataset.table "my_table"
puts table.name
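
Because nil is returned when the table is not found (see the source below), callers may want to guard against a missing table; a minimal sketch:

require "gcloud"

gcloud = Gcloud.new
bigquery = gcloud.bigquery
dataset = bigquery.dataset "my_dataset"

table = dataset.table "my_table"
if table.nil?
  puts "my_table does not exist"
else
  puts table.name
end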

Parameters:

  • table_id (String)

    The ID of a table.

Returns:

  • (Gcloud::Bigquery::Table, Gcloud::Bigquery::View, nil)

# File 'lib/gcloud/bigquery/dataset.rb', line 469

def table table_id
  ensure_connection!
  resp = connection.get_table dataset_id, table_id
  if resp.success?
    Table.from_gapi resp.data, connection
  else
    nil
  end
end

#tables(token: nil, max: nil) ⇒ Array<Gcloud::Bigquery::Table>, Array<Gcloud::Bigquery::View>

Retrieves the list of tables belonging to the dataset.

Examples:

require "gcloud"

gcloud = Gcloud.new
bigquery = gcloud.bigquery
dataset = bigquery.dataset "my_dataset"
tables = dataset.tables
tables.each do |table|
  puts table.name
end

require "gcloud"

gcloud = Gcloud.new
bigquery = gcloud.bigquery
dataset = bigquery.dataset "my_dataset"

all_tables = []
tmp_tables = dataset.tables
while tmp_tables.any? do
  tmp_tables.each do |table|
    all_tables << table
  end
  # break loop if no more tables available
  break if tmp_tables.token.nil?
  # get the next group of tables
  tmp_tables = dataset.tables token: tmp_tables.token
end

Parameters:

  • token (String)

    A previously-returned page token representing part of the larger set of results to view.

  • max (Integer)

    Maximum number of tables to return.

Returns:

  • (Array<Gcloud::Bigquery::Table>, Array<Gcloud::Bigquery::View>)

# File 'lib/gcloud/bigquery/dataset.rb', line 521

def tables token: nil, max: nil
  ensure_connection!
  options = { token: token, max: max }
  resp = connection.list_tables dataset_id, options
  if resp.success?
    Table::List.from_response resp, connection
  else
    fail ApiError.from_response(resp)
  end
end