Class: Gcloud::Bigquery::Connection
- Inherits: Object
- Defined in: lib/gcloud/bigquery/connection.rb
Overview
Represents the connection to BigQuery, as well as exposing the API calls.
Constant Summary
- API_VERSION = "v2"
Instance Attribute Summary
- #credentials ⇒ Object
  Returns the value of attribute credentials.
- #project ⇒ Object
  Returns the value of attribute project.
Class Method Summary
- .table_ref_from_s(str, default_table_ref) ⇒ Object
  Extracts at least the `tbl` group, and possibly the `dts` and `prj` groups, from strings in the formats "my_table", "my_dataset.my_table", or "my-project:my_dataset.my_table".
Instance Method Summary
- #copy_table(source, target, options = {}) ⇒ Object
- #default_access_rules ⇒ Object
- #delete_dataset(dataset_id, force = nil) ⇒ Object
  Deletes the dataset specified by the datasetId value.
- #delete_table(dataset_id, table_id) ⇒ Object
  Deletes the table specified by tableId from the dataset.
- #extract_table(table, storage_files, options = {}) ⇒ Object
- #get_dataset(dataset_id) ⇒ Object
  Returns the dataset specified by datasetId.
- #get_job(job_id) ⇒ Object
  Returns the job specified by jobId.
- #get_project_table(project_id, dataset_id, table_id) ⇒ Object
- #get_table(dataset_id, table_id) ⇒ Object
  Gets the specified table resource by table ID.
- #initialize(project, credentials) ⇒ Connection (constructor)
  Creates a new Connection instance.
- #insert_dataset(dataset_id, options = {}) ⇒ Object
  Creates a new empty dataset.
- #insert_job(config) ⇒ Object
- #insert_table(dataset_id, table_id, options = {}) ⇒ Object
  Creates a new, empty table in the dataset.
- #insert_tabledata(dataset_id, table_id, rows, options = {}) ⇒ Object
- #inspect ⇒ Object
- #job_query_results(job_id, options = {}) ⇒ Object
  Returns the query data for the job.
- #link_table(table, urls, options = {}) ⇒ Object
- #list_datasets(options = {}) ⇒ Object
  Lists all datasets in the specified project to which you have been granted the READER dataset role.
- #list_jobs(options = {}) ⇒ Object
  Lists all jobs in the specified project to which you have been granted the READER job role.
- #list_tabledata(dataset_id, table_id, options = {}) ⇒ Object
  Retrieves data from the table.
- #list_tables(dataset_id, options = {}) ⇒ Object
  Lists all tables in the specified dataset.
- #load_multipart(table, file, options = {}) ⇒ Object
- #load_resumable(table, file, chunk_size = nil, options = {}) ⇒ Object
- #load_table(table, storage_url, options = {}) ⇒ Object
- #patch_dataset(dataset_id, options = {}) ⇒ Object
  Updates information in an existing dataset, only replacing fields that are provided in the submitted dataset resource.
- #patch_table(dataset_id, table_id, options = {}) ⇒ Object
  Updates information in an existing table, replacing fields that are provided in the submitted table resource.
- #query(query, options = {}) ⇒ Object
- #query_job(query, options = {}) ⇒ Object
Constructor Details
#initialize(project, credentials) ⇒ Connection
Creates a new Connection instance.
# File 'lib/gcloud/bigquery/connection.rb', line 34

def initialize project, credentials
  @project = project
  @credentials = credentials
  @client = Google::APIClient.new application_name:    "gcloud-ruby",
                                  application_version: Gcloud::VERSION
  @client.authorization = @credentials.client
  @bigquery = @client.discovered_api "bigquery", API_VERSION
end
Instance Attribute Details
#credentials ⇒ Object
Returns the value of attribute credentials.
# File 'lib/gcloud/bigquery/connection.rb', line 30

def credentials
  @credentials
end
#project ⇒ Object
Returns the value of attribute project.
# File 'lib/gcloud/bigquery/connection.rb', line 29

def project
  @project
end
Class Method Details
.table_ref_from_s(str, default_table_ref) ⇒ Object
Extracts at least the `tbl` group, and possibly the `dts` and `prj` groups, from strings in the formats "my_table", "my_dataset.my_table", or "my-project:my_dataset.my_table". Then merges project_id and dataset_id from the default table if they are missing.
# File 'lib/gcloud/bigquery/connection.rb', line 329

def self.table_ref_from_s str, default_table_ref
  str = str.to_s
  m = /\A(((?<prj>\S*):)?(?<dts>\S*)\.)?(?<tbl>\S*)\z/.match str
  unless m
    fail ArgumentError, "unable to identify table from #{str.inspect}"
  end
  str_table_ref = { "projectId" => m["prj"],
                    "datasetId" => m["dts"],
                    "tableId"   => m["tbl"]
                  }.delete_if { |_, v| v.nil? }
  default_table_ref.merge str_table_ref
end
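The table-reference parsing can be exercised on its own. The sketch below copies the regular expression and merge behavior into a standalone function; the function name and the default values are illustrative, not part of the library:

```ruby
# Same pattern as table_ref_from_s: optional "project:" and "dataset."
# prefixes captured into the named groups prj, dts, and tbl.
PATTERN = /\A(((?<prj>\S*):)?(?<dts>\S*)\.)?(?<tbl>\S*)\z/

def parse_table_ref str, default_table_ref
  m = PATTERN.match str.to_s
  fail ArgumentError, "unable to identify table from #{str.inspect}" unless m
  # Drop nil captures so the defaults below survive the merge.
  str_ref = { "projectId" => m["prj"], "datasetId" => m["dts"],
              "tableId" => m["tbl"] }.delete_if { |_, v| v.nil? }
  default_table_ref.merge str_ref
end

default = { "projectId" => "my-project", "datasetId" => "my_dataset" }
parse_table_ref "my_table", default
# => {"projectId"=>"my-project", "datasetId"=>"my_dataset", "tableId"=>"my_table"}
parse_table_ref "other-project:other_dataset.my_table", default
# => {"projectId"=>"other-project", "datasetId"=>"other_dataset", "tableId"=>"my_table"}
```

Because the captures are deleted when nil, a bare table name inherits both the project and dataset from the default reference, while a fully qualified name overrides both.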
Instance Method Details
#copy_table(source, target, options = {}) ⇒ Object
# File 'lib/gcloud/bigquery/connection.rb', line 257

def copy_table source, target, options = {}
  @client.execute(
    api_method: @bigquery.jobs.insert,
    parameters: { projectId: @project },
    body_object: copy_table_config(source, target, options)
  )
end
#default_access_rules ⇒ Object
# File 'lib/gcloud/bigquery/connection.rb', line 315

def default_access_rules
  [
    { "role" => "OWNER",  "specialGroup" => "projectOwners" },
    { "role" => "WRITER", "specialGroup" => "projectWriters" },
    { "role" => "READER", "specialGroup" => "projectReaders" },
    { "role" => "OWNER",  "userByEmail" => credentials.issuer }
  ]
end
#delete_dataset(dataset_id, force = nil) ⇒ Object
Deletes the dataset specified by the datasetId value. Before you can delete a dataset, you must delete all its tables, either manually or by specifying force: true in options. Immediately after deletion, you can create another dataset with the same name.
# File 'lib/gcloud/bigquery/connection.rb', line 97

def delete_dataset dataset_id, force = nil
  @client.execute(
    api_method: @bigquery.datasets.delete,
    parameters: { projectId: @project, datasetId: dataset_id,
                  deleteContents: force }.delete_if { |_, v| v.nil? }
  )
end
#delete_table(dataset_id, table_id) ⇒ Object
Deletes the table specified by tableId from the dataset. If the table contains data, all the data will be deleted.
# File 'lib/gcloud/bigquery/connection.rb', line 164

def delete_table dataset_id, table_id
  @client.execute(
    api_method: @bigquery.tables.delete,
    parameters: { projectId: @project, datasetId: dataset_id,
                  tableId: table_id }
  )
end
#extract_table(table, storage_files, options = {}) ⇒ Object
# File 'lib/gcloud/bigquery/connection.rb', line 273

def extract_table table, storage_files, options = {}
  @client.execute(
    api_method: @bigquery.jobs.insert,
    parameters: { projectId: @project },
    body_object: extract_table_config(table, storage_files, options)
  )
end
#get_dataset(dataset_id) ⇒ Object
Returns the dataset specified by datasetId.
# File 'lib/gcloud/bigquery/connection.rb', line 61

def get_dataset dataset_id
  @client.execute(
    api_method: @bigquery.datasets.get,
    parameters: { projectId: @project, datasetId: dataset_id }
  )
end
#get_job(job_id) ⇒ Object
Returns the job specified by jobId.
# File 'lib/gcloud/bigquery/connection.rb', line 210

def get_job job_id
  @client.execute(
    api_method: @bigquery.jobs.get,
    parameters: { projectId: @project, jobId: job_id }
  )
end
#get_project_table(project_id, dataset_id, table_id) ⇒ Object
# File 'lib/gcloud/bigquery/connection.rb', line 122

def get_project_table project_id, dataset_id, table_id
  @client.execute(
    api_method: @bigquery.tables.get,
    parameters: { projectId: project_id, datasetId: dataset_id,
                  tableId: table_id }
  )
end
#get_table(dataset_id, table_id) ⇒ Object
Gets the specified table resource by table ID. This method does not return the data in the table, it only returns the table resource, which describes the structure of this table.
# File 'lib/gcloud/bigquery/connection.rb', line 135

def get_table dataset_id, table_id
  get_project_table @project, dataset_id, table_id
end
#insert_dataset(dataset_id, options = {}) ⇒ Object
Creates a new empty dataset.
# File 'lib/gcloud/bigquery/connection.rb', line 70

def insert_dataset dataset_id, options = {}
  @client.execute(
    api_method: @bigquery.datasets.insert,
    parameters: { projectId: @project },
    body_object: insert_dataset_request(dataset_id, options)
  )
end
#insert_job(config) ⇒ Object
# File 'lib/gcloud/bigquery/connection.rb', line 217

def insert_job config
  @client.execute(
    api_method: @bigquery.jobs.insert,
    parameters: { projectId: @project },
    body_object: { "configuration" => config }
  )
end
#insert_table(dataset_id, table_id, options = {}) ⇒ Object
Creates a new, empty table in the dataset.
# File 'lib/gcloud/bigquery/connection.rb', line 141

def insert_table dataset_id, table_id, options = {}
  @client.execute(
    api_method: @bigquery.tables.insert,
    parameters: { projectId: @project, datasetId: dataset_id },
    body_object: insert_table_request(dataset_id, table_id, options)
  )
end
#insert_tabledata(dataset_id, table_id, rows, options = {}) ⇒ Object
# File 'lib/gcloud/bigquery/connection.rb', line 188

def insert_tabledata dataset_id, table_id, rows, options = {}
  @client.execute(
    api_method: @bigquery.tabledata.insert_all,
    parameters: { projectId: @project, datasetId: dataset_id,
                  tableId: table_id },
    body_object: insert_tabledata_rows(rows, options)
  )
end
#inspect ⇒ Object
# File 'lib/gcloud/bigquery/connection.rb', line 341

def inspect
  "#{self.class}(#{@project})"
end
#job_query_results(job_id, options = {}) ⇒ Object
Returns the query data for the job.
# File 'lib/gcloud/bigquery/connection.rb', line 243

def job_query_results job_id, options = {}
  params = { projectId: @project, jobId: job_id,
             pageToken:  options.delete(:token),
             maxResults: options.delete(:max),
             startIndex: options.delete(:start),
             timeoutMs:  options.delete(:timeout)
           }.delete_if { |_, v| v.nil? }
  @client.execute(
    api_method: @bigquery.jobs.get_query_results,
    parameters: params
  )
end
#link_table(table, urls, options = {}) ⇒ Object
# File 'lib/gcloud/bigquery/connection.rb', line 265

def link_table table, urls, options = {}
  @client.execute(
    api_method: @bigquery.jobs.insert,
    parameters: { projectId: @project },
    body_object: link_table_config(table, urls, options)
  )
end
#list_datasets(options = {}) ⇒ Object
Lists all datasets in the specified project to which you have been granted the READER dataset role.
# File 'lib/gcloud/bigquery/connection.rb', line 46

def list_datasets options = {}
  params = { projectId: @project,
             all:        options.delete(:all),
             pageToken:  options.delete(:token),
             maxResults: options.delete(:max)
           }.delete_if { |_, v| v.nil? }
  @client.execute(
    api_method: @bigquery.datasets.list,
    parameters: params
  )
end
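Several of the listing methods here share one idiom: caller-facing option keys such as :token and :max are translated into BigQuery API parameter names, and delete_if strips the nil entries so unsupplied options are never sent on the wire. A standalone sketch of that idiom (the method name is illustrative):

```ruby
# Build API parameters from user options, omitting anything unset.
def list_params project, options = {}
  { projectId:  project,
    all:        options.delete(:all),
    pageToken:  options.delete(:token),
    maxResults: options.delete(:max)
  }.delete_if { |_, v| v.nil? }
end

list_params "my-project", max: 10
# => {:projectId=>"my-project", :maxResults=>10}
list_params "my-project", all: true, token: "abc123"
# => {:projectId=>"my-project", :all=>true, :pageToken=>"abc123"}
```

Note that only nil values are dropped, so an explicit false (for example all: false) still reaches the API.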
#list_jobs(options = {}) ⇒ Object
Lists all jobs in the specified project to which you have been granted the READER job role.
# File 'lib/gcloud/bigquery/connection.rb', line 201

def list_jobs options = {}
  @client.execute(
    api_method: @bigquery.jobs.list,
    parameters: list_jobs_params(options)
  )
end
#list_tabledata(dataset_id, table_id, options = {}) ⇒ Object
Retrieves data from the table.
# File 'lib/gcloud/bigquery/connection.rb', line 174

def list_tabledata dataset_id, table_id, options = {}
  params = { projectId: @project, datasetId: dataset_id,
             tableId: table_id,
             pageToken:  options.delete(:token),
             maxResults: options.delete(:max),
             startIndex: options.delete(:start)
           }.delete_if { |_, v| v.nil? }
  @client.execute(
    api_method: @bigquery.tabledata.list,
    parameters: params
  )
end
#list_tables(dataset_id, options = {}) ⇒ Object
Lists all tables in the specified dataset. Requires the READER dataset role.
# File 'lib/gcloud/bigquery/connection.rb', line 109

def list_tables dataset_id, options = {}
  params = { projectId: @project, datasetId: dataset_id,
             pageToken:  options.delete(:token),
             maxResults: options.delete(:max)
           }.delete_if { |_, v| v.nil? }
  @client.execute(
    api_method: @bigquery.tables.list,
    parameters: params
  )
end
#load_multipart(table, file, options = {}) ⇒ Object
# File 'lib/gcloud/bigquery/connection.rb', line 290

def load_multipart table, file, options = {}
  media = load_media file
  @client.execute(
    api_method: @bigquery.jobs.insert,
    media: media,
    parameters: { projectId: @project, uploadType: "multipart" },
    body_object: load_table_config(table, nil, file, options)
  )
end
#load_resumable(table, file, chunk_size = nil, options = {}) ⇒ Object
# File 'lib/gcloud/bigquery/connection.rb', line 301

def load_resumable table, file, chunk_size = nil, options = {}
  media = load_media file, chunk_size
  result = @client.execute(
    api_method: @bigquery.jobs.insert,
    media: media,
    parameters: { projectId: @project, uploadType: "resumable" },
    body_object: load_table_config(table, nil, file, options)
  )
  upload = result.resumable_upload
  result = @client.execute upload while upload.resumable?
  result
end
#load_table(table, storage_url, options = {}) ⇒ Object
# File 'lib/gcloud/bigquery/connection.rb', line 281

def load_table table, storage_url, options = {}
  @client.execute(
    api_method: @bigquery.jobs.insert,
    parameters: { projectId: @project },
    body_object: load_table_config(table, storage_url,
                                   Array(storage_url).first, options)
  )
end
#patch_dataset(dataset_id, options = {}) ⇒ Object
Updates information in an existing dataset, only replacing fields that are provided in the submitted dataset resource.
# File 'lib/gcloud/bigquery/connection.rb', line 81

def patch_dataset dataset_id, options = {}
  project_id = options[:project_id] || @project
  @client.execute(
    api_method: @bigquery.datasets.patch,
    parameters: { projectId: project_id, datasetId: dataset_id },
    body_object: patch_dataset_request(options)
  )
end
#patch_table(dataset_id, table_id, options = {}) ⇒ Object
Updates information in an existing table, replacing fields that are provided in the submitted table resource.
# File 'lib/gcloud/bigquery/connection.rb', line 152

def patch_table dataset_id, table_id, options = {}
  @client.execute(
    api_method: @bigquery.tables.patch,
    parameters: { projectId: @project, datasetId: dataset_id,
                  tableId: table_id },
    body_object: patch_table_request(options)
  )
end
#query(query, options = {}) ⇒ Object
# File 'lib/gcloud/bigquery/connection.rb', line 233

def query query, options = {}
  @client.execute(
    api_method: @bigquery.jobs.query,
    parameters: { projectId: @project },
    body_object: query_config(query, options)
  )
end
#query_job(query, options = {}) ⇒ Object
# File 'lib/gcloud/bigquery/connection.rb', line 225

def query_job query, options = {}
  @client.execute(
    api_method: @bigquery.jobs.insert,
    parameters: { projectId: @project },
    body_object: query_table_config(query, options)
  )
end