Class: Sequel::Dataset
- Includes:
- Enumerable, SQL::AliasMethods, SQL::BooleanMethods, SQL::CastMethods, SQL::ComplexExpressionMethods, SQL::InequalityMethods, SQL::NumericMethods, SQL::OrderMethods, SQL::StringMethods
- Defined in:
- lib/sequel/dataset.rb,
lib/sequel/dataset/sql.rb,
lib/sequel/dataset/misc.rb,
lib/sequel/dataset/graph.rb,
lib/sequel/dataset/query.rb,
lib/sequel/dataset/actions.rb,
lib/sequel/dataset/features.rb,
lib/sequel/extensions/query.rb,
lib/sequel/extensions/pagination.rb,
lib/sequel/extensions/provenance.rb,
lib/sequel/adapters/utils/replace.rb,
lib/sequel/dataset/dataset_module.rb,
lib/sequel/extensions/null_dataset.rb,
lib/sequel/extensions/set_literalizer.rb,
lib/sequel/extensions/split_array_nil.rb,
lib/sequel/extensions/synchronize_sql.rb,
lib/sequel/dataset/prepared_statements.rb,
lib/sequel/extensions/round_timestamps.rb,
lib/sequel/extensions/implicit_subquery.rb,
lib/sequel/adapters/utils/columns_limit_1.rb,
lib/sequel/dataset/placeholder_literalizer.rb,
lib/sequel/extensions/auto_literal_strings.rb,
lib/sequel/extensions/dataset_source_alias.rb,
lib/sequel/adapters/utils/stored_procedures.rb,
lib/sequel/dataset/deprecated_singleton_class_methods.rb
Overview
A dataset represents an SQL query. Datasets can be used to select, insert, update and delete records.
Query results are always retrieved on demand, so a dataset can be kept around and reused indefinitely (datasets never cache results):
my_posts = DB[:posts].where(author: 'david') # no records are retrieved
my_posts.all # records are retrieved
my_posts.all # records are retrieved again
Datasets are frozen and use a functional style where modification methods return modified copies of the dataset. This allows you to reuse datasets:
posts = DB[:posts]
davids_posts = posts.where(author: 'david')
old_posts = posts.where{stamp < Date.today - 7}
davids_old_posts = davids_posts.where{stamp < Date.today - 7}
Datasets are Enumerable objects, so they can be manipulated using many of the Enumerable methods, such as map and inject. Note that there are some methods that Dataset defines that override methods defined in Enumerable and result in different behavior, such as select and group_by.
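For example (a minimal sketch, assuming a posts table with an author column), Dataset#select and Dataset#group_by return modified datasets, while the Enumerable versions operate on already-loaded rows:
DB[:posts].select(:author)    # new dataset: SELECT author FROM posts
DB[:posts].group_by(:author)  # new dataset: SELECT * FROM posts GROUP BY author
DB[:posts].all.select{|row| row[:author] == 'david'}  # Enumerable#select on the loaded array of rows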
For more information, see the “Dataset Basics” guide.
Direct Known Subclasses
ADO::Dataset, Amalgalite::Dataset, IBMDB::Dataset, JDBC::Dataset, Mock::Dataset, MySQL::Dataset, Mysql2::Dataset, ODBC::Dataset, Oracle::Dataset, Postgres::Dataset, SQLite::Dataset, SqlAnywhere::Dataset, TinyTDS::Dataset, Trilogy::Dataset
Defined Under Namespace
Modules: ArgumentMapper, AutoLiteralStrings, ColumnsLimit1, DatasetSourceAlias, DeprecatedSingletonClassMethods, EmulatePreparedStatementMethods, ImplicitSubquery, NullDataset, Nullifiable, Pagination, PreparedStatementMethods, Provenance, Replace, RoundTimestamps, SetLiteralizer, SplitArrayNil, StoredProcedureMethods, StoredProcedures, SynchronizeSQL, UnnumberedArgumentMapper
Classes: DatasetModule, PlaceholderLiteralizer, Query
Constant Summary
- OPTS =
Sequel::OPTS
- TRUE_FREEZE =
Whether Dataset#freeze can actually freeze datasets. True only on ruby 2.4+, as it requires clone(freeze: false)
RUBY_VERSION >= '2.4'
- WILDCARD =
LiteralString.new('*').freeze
- COUNT_OF_ALL_AS_COUNT =
SQL::Function.new(:count, WILDCARD).as(:count)
- DEFAULT =
LiteralString.new('DEFAULT').freeze
- EXISTS =
['EXISTS '.freeze].freeze
- BITWISE_METHOD_MAP =
{:& =>:BITAND, :| => :BITOR, :^ => :BITXOR}.freeze
- COUNT_FROM_SELF_OPTS =
[:distinct, :group, :sql, :limit, :offset, :compounds].freeze
- IS_LITERALS =
{nil=>'NULL'.freeze, true=>'TRUE'.freeze, false=>'FALSE'.freeze}.freeze
- QUALIFY_KEYS =
[:select, :where, :having, :order, :group].freeze
- IS_OPERATORS =
::Sequel::SQL::ComplexExpression::IS_OPERATORS
- LIKE_OPERATORS =
::Sequel::SQL::ComplexExpression::LIKE_OPERATORS
- N_ARITY_OPERATORS =
::Sequel::SQL::ComplexExpression::N_ARITY_OPERATORS
- TWO_ARITY_OPERATORS =
::Sequel::SQL::ComplexExpression::TWO_ARITY_OPERATORS
- REGEXP_OPERATORS =
::Sequel::SQL::ComplexExpression::REGEXP_OPERATORS
- EXTENSIONS =
Hash of extension name symbols to callable objects to load the extension into the Dataset object (usually by extending it with a module defined in the extension).
{}
- EMPTY_ARRAY =
[].freeze
- COLUMN_CHANGE_OPTS =
The dataset options that require the removal of cached columns if changed.
[:select, :sql, :from, :join].freeze
- NON_SQL_OPTIONS =
Which options don’t affect the SQL generation. Used by simple_select_all? to determine if this is a simple SELECT * FROM table.
[:server, :graph, :row_proc, :quote_identifiers, :skip_symbol_cache].freeze
- CONDITIONED_JOIN_TYPES =
These symbols have _join methods created (e.g. inner_join) that call join_table with the symbol, passing along the arguments and block from the method call.
[:inner, :full_outer, :right_outer, :left_outer, :full, :right, :left].freeze
- UNCONDITIONED_JOIN_TYPES =
These symbols have _join methods created (e.g. natural_join). They accept a table argument and options hash which is passed to join_table, and they raise an error if called with a block.
[:natural, :natural_left, :natural_right, :natural_full, :cross].freeze
- JOIN_METHODS =
All methods that return modified datasets with a joined table added.
((CONDITIONED_JOIN_TYPES + UNCONDITIONED_JOIN_TYPES).map{|x| "#{x}_join".to_sym} + [:join, :join_table]).freeze
- QUERY_METHODS =
Methods that return modified datasets
((" add_graph_aliases distinct except exclude exclude_having\n filter for_update from from_self graph grep group group_and_count group_append group_by having intersect invert\n limit lock_style naked offset or order order_append order_by order_more order_prepend qualify\n reverse reverse_order select select_all select_append select_group select_more select_prepend server\n set_graph_aliases unfiltered ungraphed ungrouped union\n unlimited unordered where with with_recursive with_sql\n").split.map(&:to_sym) + JOIN_METHODS).freeze
- ACTION_METHODS =
Action methods defined by Sequel that execute code on the database.
(" << [] all as_hash avg count columns columns! delete each\n empty? fetch_rows first first! get import insert last\n map max min multi_insert paged_each select_hash select_hash_groups select_map select_order_map\n single_record single_record! single_value single_value! sum to_hash to_hash_groups truncate update\n where_all where_each where_single_value\n").split.map(&:to_sym).freeze
- COLUMNS_CLONE_OPTIONS =
The clone options to use when retrieving columns for a dataset.
{:distinct => nil, :limit => 0, :offset=>nil, :where=>nil, :having=>nil, :order=>nil, :row_proc=>nil, :graph=>nil, :eager_graph=>nil}.freeze
- COUNT_SELECT =
Sequel.function(:count).*.as(:count)
- EMPTY_SELECT =
Sequel::SQL::AliasedExpression.new(1, :one)
- PREPARED_ARG_PLACEHOLDER =
The default placeholder to use for prepared statement arguments. On some adapters, prepared statements and bound variables use native support; on others support is emulated. For details, see the “Prepared Statements/Bound Variables” guide.
LiteralString.new('?').freeze
- DEFAULT_PREPARED_STATEMENT_MODULE_METHODS =
%w'execute execute_dui execute_insert'.freeze.each(&:freeze)
- PREPARED_STATEMENT_MODULE_CODE =
{ :bind => "opts = Hash[opts]; opts[:arguments] = bind_arguments".freeze, :prepare => "sql = prepared_statement_name".freeze, :prepare_bind => "sql = prepared_statement_name; opts = Hash[opts]; opts[:arguments] = bind_arguments".freeze }.freeze
Instance Attribute Summary
-
#db ⇒ Object
readonly
The database related to this dataset.
-
#opts ⇒ Object
readonly
The hash of options for this dataset, keys are symbols.
Class Method Summary
-
.clause_methods(type, clauses) ⇒ Object
Given a type (e.g. select) and an array of clauses, return an array of methods to call to build the SQL string.
-
.def_sql_method(mod, type, clauses) ⇒ Object
Define a dataset literalization method for the given type in the given module, using the given clauses.
-
.register_extension(ext, mod = nil, &block) ⇒ Object
Register an extension callback for Dataset objects.
Instance Method Summary
-
#<<(arg) ⇒ Object
Inserts the given argument into the database.
-
#==(o) ⇒ Object
Define a hash value such that datasets with the same class, DB, and opts will be considered equal.
-
#[](*conditions) ⇒ Object
Returns the first record matching the conditions.
-
#add_graph_aliases(graph_aliases) ⇒ Object
Adds the given graph aliases to the list of graph aliases to use, unlike set_graph_aliases, which replaces the list (the equivalent of select_append when graphing).
-
#aliased_expression_sql_append(sql, ae) ⇒ Object
Append literalization of aliased expression to SQL string.
-
#all(&block) ⇒ Object
Returns an array with all records in the dataset.
-
#array_sql_append(sql, a) ⇒ Object
Append literalization of array to SQL string.
-
#as_hash(key_column, value_column = nil, opts = OPTS) ⇒ Object
Returns a hash with one column used as key and another used as value.
-
#avg(arg = (no_arg = true), &block) ⇒ Object
Returns the average value for the given column/expression.
-
#bind(bind_vars = OPTS) ⇒ Object
Set the bind variables to use for the call.
-
#boolean_constant_sql_append(sql, constant) ⇒ Object
Append literalization of boolean constant to SQL string.
-
#call(type, bind_variables = OPTS, *values, &block) ⇒ Object
For the given type (:select, :first, :insert, :insert_select, :update, :delete, or :single_value), run the sql with the bind variables specified in the hash.
-
#case_expression_sql_append(sql, ce) ⇒ Object
Append literalization of case expression to SQL string.
-
#cast_sql_append(sql, expr, type) ⇒ Object
Append literalization of cast expression to SQL string.
-
#clone(opts = OPTS) ⇒ Object
Returns a copy of the dataset with the given options merged into the existing options.
-
#column_all_sql_append(sql, ca) ⇒ Object
Append literalization of column all selection to SQL string.
-
#columns ⇒ Object
Returns the columns in the result set in order as an array of symbols.
-
#columns! ⇒ Object
Ignore any cached column information and perform a query to retrieve a row in order to get the columns.
-
#complex_expression_sql_append(sql, op, args) ⇒ Object
Append literalization of complex expression to SQL string.
-
#constant_sql_append(sql, constant) ⇒ Object
Append literalization of constant to SQL string.
-
#count(arg = (no_arg=true), &block) ⇒ Object
Returns the number of records in the dataset.
-
#current_datetime ⇒ Object
An object representing the current date or time, should be an instance of Sequel.datetime_class.
-
#delayed_evaluation_sql_append(sql, delay) ⇒ Object
Append literalization of delayed evaluation to SQL string, causing the delayed evaluation proc to be evaluated.
-
#delete(&block) ⇒ Object
Deletes the records in the dataset, returning the number of records deleted.
-
#distinct(*args, &block) ⇒ Object
Returns a copy of the dataset with the SQL DISTINCT clause.
-
#dup ⇒ Object
Return self, as datasets are always frozen.
-
#each ⇒ Object
Iterates over the records in the dataset as they are yielded from the database adapter, and returns self.
-
#each_server ⇒ Object
Yield a dataset for each server in the connection pool that is tied to that server.
-
#empty? ⇒ Boolean
Returns true if no records exist in the dataset, false otherwise.
-
#eql?(o) ⇒ Boolean
Alias for ==.
-
#escape_like(string) ⇒ Object
Returns the string with the LIKE metacharacters (% and _) escaped.
-
#except(dataset, opts = OPTS) ⇒ Object
Adds an EXCEPT clause using a second dataset object.
-
#exclude(*cond, &block) ⇒ Object
Performs the inverse of Dataset#where.
-
#exclude_having(*cond, &block) ⇒ Object
Inverts the given conditions and adds them to the HAVING clause.
-
#exists ⇒ Object
Returns an EXISTS clause for the dataset as an SQL::PlaceholderLiteralString.
-
#extension(*exts) ⇒ Object
Returns a copy of the dataset loaded with the given dataset extensions.
-
#filter(*cond, &block) ⇒ Object
Alias for where.
-
#first(*args, &block) ⇒ Object
Returns the first matching record if no arguments are given.
-
#first!(*args, &block) ⇒ Object
Calls first.
-
#first_source ⇒ Object
Alias of first_source_alias.
-
#first_source_alias ⇒ Object
The first source (primary table) for this dataset.
-
#first_source_table ⇒ Object
The first source (primary table) for this dataset.
-
#for_update ⇒ Object
Returns a cloned dataset with a :update lock style.
-
#freeze ⇒ Object
Returns self, as datasets are always frozen.
-
#from(*source, &block) ⇒ Object
Returns a copy of the dataset with the source changed.
-
#from_self(opts = OPTS) ⇒ Object
Returns a dataset selecting from the current dataset.
-
#frozen? ⇒ Boolean
Returns true, as datasets are always frozen.
-
#function_sql_append(sql, f) ⇒ Object
Append literalization of function call to SQL string.
-
#get(column = (no_arg=true; nil), &block) ⇒ Object
Return the column value for the first matching record in the dataset.
-
#graph(dataset, join_conditions = nil, options = OPTS, &block) ⇒ Object
Similar to Dataset#join_table, but uses unambiguous aliases for selected columns and keeps metadata about the aliases for use in other methods.
-
#grep(columns, patterns, opts = OPTS) ⇒ Object
Match any of the columns to any of the patterns.
-
#group(*columns, &block) ⇒ Object
Returns a copy of the dataset with the results grouped by the value of the given columns.
-
#group_and_count(*columns, &block) ⇒ Object
Returns a dataset grouped by the given column with count by group.
-
#group_append(*columns, &block) ⇒ Object
Returns a copy of the dataset with the given columns added to the list of existing columns to group on.
-
#group_by(*columns, &block) ⇒ Object
Alias of group.
-
#group_cube ⇒ Object
Adds the appropriate CUBE syntax to GROUP BY.
-
#group_rollup ⇒ Object
Adds the appropriate ROLLUP syntax to GROUP BY.
-
#grouping_sets ⇒ Object
Adds the appropriate GROUPING SETS syntax to GROUP BY.
-
#hash ⇒ Object
Define a hash value such that datasets with the same class, DB, and opts, will have the same hash value.
-
#having(*cond, &block) ⇒ Object
Returns a copy of the dataset with the HAVING conditions changed.
-
#import(columns, values, opts = OPTS) ⇒ Object
Inserts multiple records into the associated table.
-
#initialize(db) ⇒ Dataset
constructor
Constructs a new Dataset instance with an associated database and options.
-
#insert(*values, &block) ⇒ Object
Inserts values into the associated table.
-
#insert_sql(*values) ⇒ Object
Returns an INSERT SQL query string.
-
#inspect ⇒ Object
Returns a string representation of the dataset including the class name and the corresponding SQL select statement.
-
#intersect(dataset, opts = OPTS) ⇒ Object
Adds an INTERSECT clause using a second dataset object.
-
#invert ⇒ Object
Inverts the current WHERE and HAVING clauses.
-
#join(*args, &block) ⇒ Object
Alias of inner_join.
-
#join_clause_sql_append(sql, jc) ⇒ Object
Append literalization of JOIN clause without ON or USING to SQL string.
-
#join_on_clause_sql_append(sql, jc) ⇒ Object
Append literalization of JOIN ON clause to SQL string.
-
#join_table(type, table, expr = nil, options = OPTS, &block) ⇒ Object
Returns a joined dataset.
-
#join_using_clause_sql_append(sql, jc) ⇒ Object
Append literalization of JOIN USING clause to SQL string.
-
#joined_dataset? ⇒ Boolean
Whether this dataset is a joined dataset (multiple FROM tables or any JOINs).
-
#last(*args, &block) ⇒ Object
Reverses the order and then runs #first with the given arguments and block.
-
#lateral ⇒ Object
Marks this dataset as a lateral dataset.
-
#limit(l, o = (no_offset = true; nil)) ⇒ Object
If given an integer, the dataset will contain only the first l results.
-
#literal_append(sql, v) ⇒ Object
Append a literal representation of a value to the given SQL string.
-
#literal_date_or_time(dt, raw = false) ⇒ Object
Literalize a date or time value, as a SQL string value with no typecasting.
-
#lock_style(style) ⇒ Object
Returns a cloned dataset with the given lock style.
-
#map(column = nil, &block) ⇒ Object
Maps column values for each record in the dataset (if an argument is given) or performs the stock mapping functionality of Enumerable otherwise.
-
#max(arg = (no_arg = true), &block) ⇒ Object
Returns the maximum value for the given column/expression.
-
#merge ⇒ Object
Execute a MERGE statement, which allows for INSERT, UPDATE, and DELETE behavior in a single query, based on whether rows from a source table match rows in the current table, based on the join conditions.
-
#merge_delete(&block) ⇒ Object
Return a dataset with a WHEN MATCHED THEN DELETE clause added to the MERGE statement.
-
#merge_insert(*values, &block) ⇒ Object
Return a dataset with a WHEN NOT MATCHED THEN INSERT clause added to the MERGE statement.
-
#merge_sql ⇒ Object
The SQL to use for the MERGE statement.
-
#merge_update(values, &block) ⇒ Object
Return a dataset with a WHEN MATCHED THEN UPDATE clause added to the MERGE statement.
-
#merge_using(source, join_condition) ⇒ Object
Return a dataset with the source and join condition to use for the MERGE statement.
-
#min(arg = (no_arg = true), &block) ⇒ Object
Returns the minimum value for the given column/expression.
-
#multi_insert(hashes, opts = OPTS) ⇒ Object
This is a front end for import that allows you to submit an array of hashes instead of arrays of columns and values.
-
#multi_insert_sql(columns, values) ⇒ Object
Returns an array of insert statements for inserting multiple records.
-
#naked ⇒ Object
Returns a cloned dataset without a row_proc.
-
#negative_boolean_constant_sql_append(sql, constant) ⇒ Object
Append literalization of negative boolean constant to SQL string.
-
#nowait ⇒ Object
Returns a copy of the dataset that will raise a DatabaseLockTimeout instead of waiting for rows that are locked by another transaction.
-
#offset(o) ⇒ Object
Returns a copy of the dataset with the given offset.
-
#or(*cond, &block) ⇒ Object
Adds an alternate filter to an existing WHERE clause using OR.
-
#order(*columns, &block) ⇒ Object
Returns a copy of the dataset with the order changed.
-
#order_append(*columns, &block) ⇒ Object
Returns a copy of the dataset with the order columns added to the end of the existing order.
-
#order_by(*columns, &block) ⇒ Object
Alias of order.
-
#order_more(*columns, &block) ⇒ Object
Alias of order_append.
-
#order_prepend(*columns, &block) ⇒ Object
Returns a copy of the dataset with the order columns added to the beginning of the existing order.
-
#ordered_expression_sql_append(sql, oe) ⇒ Object
Append literalization of ordered expression to SQL string.
-
#paged_each(opts = OPTS) ⇒ Object
Yields each row in the dataset, but internally uses multiple queries as needed to process the entire result set without keeping all rows in the dataset in memory, even if the underlying driver buffers all query results in memory.
-
#placeholder_literal_string_sql_append(sql, pls) ⇒ Object
Append literalization of placeholder literal string to SQL string.
-
#placeholder_literalizer_class ⇒ Object
The class to use for placeholder literalizers for the current dataset.
-
#placeholder_literalizer_loader(&block) ⇒ Object
A placeholder literalizer loader for the current dataset.
-
#prepare(type, name, *values) ⇒ Object
Prepare an SQL statement for later execution.
-
#provides_accurate_rows_matched? ⇒ Boolean
Whether this dataset will provide accurate number of rows matched for delete and update statements, true by default.
-
#qualified_identifier_sql_append(sql, table, column = (c = table.column; table = table.table; c)) ⇒ Object
Append literalization of qualified identifier to SQL string.
-
#qualify(table = (cache=true; first_source)) ⇒ Object
Qualify to the given table, or first source if no table is given.
-
#quote_identifier_append(sql, name) ⇒ Object
Append literalization of unqualified identifier to SQL string.
-
#quote_identifiers? ⇒ Boolean
Whether this dataset quotes identifiers.
-
#quote_schema_table_append(sql, table) ⇒ Object
Append literalization of identifier or unqualified identifier to SQL string.
-
#quoted_identifier_append(sql, name) ⇒ Object
Append literalization of quoted identifier to SQL string.
-
#recursive_cte_requires_column_aliases? ⇒ Boolean
Whether you must use a column alias list for recursive CTEs, false by default.
-
#requires_placeholder_type_specifiers? ⇒ Boolean
Whether type specifiers are required for prepared statement/bound variable argument placeholders (i.e. :bv__integer), false by default.
-
#requires_sql_standard_datetimes? ⇒ Boolean
Whether the dataset requires SQL standard datetimes.
-
#returning(*values) ⇒ Object
Modify the RETURNING clause, only supported on a few databases.
-
#reverse(*order, &block) ⇒ Object
Returns a copy of the dataset with the order reversed.
-
#reverse_order(*order, &block) ⇒ Object
Alias of reverse.
-
#row_number_column ⇒ Object
The alias to use for the row_number column, used when emulating OFFSET support and for eager limit strategies.
-
#row_proc ⇒ Object
The row_proc for this database, should be any object that responds to call with a single hash argument and returns the object you want #each to return.
-
#schema_and_table(table_name, sch = nil) ⇒ Object
Split the schema information from the table, returning two strings, one for the schema and one for the table.
-
#select(*columns, &block) ⇒ Object
Returns a copy of the dataset with the columns selected changed to the given columns.
-
#select_all(*tables) ⇒ Object
Returns a copy of the dataset selecting the wildcard if no arguments are given.
-
#select_append(*columns, &block) ⇒ Object
Returns a copy of the dataset with the given columns added to the existing selected columns.
-
#select_group(*columns, &block) ⇒ Object
Set both the select and group clauses with the given columns.
-
#select_hash(key_column, value_column, opts = OPTS) ⇒ Object
Returns a hash with key_column values as keys and value_column values as values.
-
#select_hash_groups(key_column, value_column, opts = OPTS) ⇒ Object
Returns a hash with key_column values as keys and an array of value_column values.
-
#select_map(column = nil, &block) ⇒ Object
Selects the column given (either as an argument or as a block), and returns an array of all values of that column in the dataset.
-
#select_more(*columns, &block) ⇒ Object
Alias for select_append.
-
#select_order_map(column = nil, &block) ⇒ Object
The same as select_map, but in addition orders the array by the column.
-
#select_prepend(*columns, &block) ⇒ Object
Returns a copy of the dataset with the given columns added to the existing selected columns.
-
#server(servr) ⇒ Object
Set the server for this dataset to use.
-
#server?(server) ⇒ Boolean
If the database uses sharding and the current dataset has not had a server set, return a cloned dataset that uses the given server.
-
#set_graph_aliases(graph_aliases) ⇒ Object
This allows you to manually specify the graph aliases to use when using graph.
-
#single_record ⇒ Object
Limits the dataset to one record, and returns the first record in the dataset, or nil if the dataset has no records.
-
#single_record! ⇒ Object
Returns the first record in the dataset, without limiting the dataset.
-
#single_value ⇒ Object
Returns the first value of the first record in the dataset.
-
#single_value! ⇒ Object
Returns the first value of the first record in the dataset, without limiting the dataset.
-
#skip_limit_check ⇒ Object
Specify that the check for limits/offsets when updating/deleting be skipped for the dataset.
-
#skip_locked ⇒ Object
Skip locked rows when returning results from this dataset.
-
#split_alias(c) ⇒ Object
Splits a possible implicit alias in c, handling both SQL::AliasedExpressions and Symbols.
-
#split_qualifiers(table_name, *args) ⇒ Object
Splits table_name into an array of strings.
-
#sql ⇒ Object
Same as select_sql, not aliased directly to make subclassing simpler.
-
#subscript_sql_append(sql, s) ⇒ Object
Append literalization of subscripts (SQL array accesses) to SQL string.
-
#sum(arg = (no_arg = true), &block) ⇒ Object
Returns the sum for the given column/expression.
-
#supports_cte?(type = :select) ⇒ Boolean
Whether the dataset supports common table expressions, false by default.
-
#supports_cte_in_subqueries? ⇒ Boolean
Whether the dataset supports common table expressions in subqueries, false by default.
-
#supports_deleting_joins? ⇒ Boolean
Whether deleting from joined datasets is supported, false by default.
-
#supports_derived_column_lists? ⇒ Boolean
Whether the database supports derived column lists (e.g. “table_expr AS table_alias(column_alias1, column_alias2, …)”), true by default.
-
#supports_distinct_on? ⇒ Boolean
Whether the dataset supports or can emulate the DISTINCT ON clause, false by default.
-
#supports_group_cube? ⇒ Boolean
Whether the dataset supports CUBE with GROUP BY, false by default.
-
#supports_group_rollup? ⇒ Boolean
Whether the dataset supports ROLLUP with GROUP BY, false by default.
-
#supports_grouping_sets? ⇒ Boolean
Whether the dataset supports GROUPING SETS with GROUP BY, false by default.
-
#supports_insert_select? ⇒ Boolean
Whether this dataset supports the insert_select method for returning all column values directly from an insert query, false by default.
-
#supports_intersect_except? ⇒ Boolean
Whether the dataset supports the INTERSECT and EXCEPT compound operations, true by default.
-
#supports_intersect_except_all? ⇒ Boolean
Whether the dataset supports the INTERSECT ALL and EXCEPT ALL compound operations, true by default.
-
#supports_is_true? ⇒ Boolean
Whether the dataset supports the IS TRUE syntax, true by default.
-
#supports_join_using? ⇒ Boolean
Whether the dataset supports the JOIN table USING (column1, …) syntax, true by default.
-
#supports_lateral_subqueries? ⇒ Boolean
Whether the dataset supports LATERAL for subqueries in the FROM or JOIN clauses, false by default.
-
#supports_limits_in_correlated_subqueries? ⇒ Boolean
Whether limits are supported in correlated subqueries, true by default.
-
#supports_merge? ⇒ Boolean
Whether the MERGE statement is supported, false by default.
-
#supports_modifying_joins? ⇒ Boolean
Whether modifying joined datasets is supported, false by default.
-
#supports_multiple_column_in? ⇒ Boolean
Whether the IN/NOT IN operators support multiple columns when an array of values is given, true by default.
-
#supports_nowait? ⇒ Boolean
Whether the dataset supports raising an error instead of waiting for locked rows when returning data (NOWAIT), false by default.
-
#supports_offsets_in_correlated_subqueries? ⇒ Boolean
Whether offsets are supported in correlated subqueries, true by default.
-
#supports_ordered_distinct_on? ⇒ Boolean
Whether the dataset supports or can fully emulate the DISTINCT ON clause, including respecting the ORDER BY clause, false by default.
-
#supports_placeholder_literalizer? ⇒ Boolean
Whether placeholder literalizers are supported, true by default.
-
#supports_regexp? ⇒ Boolean
Whether the dataset supports pattern matching by regular expressions, false by default.
-
#supports_replace? ⇒ Boolean
Whether the dataset supports REPLACE syntax, false by default.
-
#supports_returning?(type) ⇒ Boolean
Whether the RETURNING clause is supported for the given type of query, false by default.
-
#supports_select_all_and_column? ⇒ Boolean
Whether the database supports SELECT *, column FROM table, true by default.
-
#supports_skip_locked? ⇒ Boolean
Whether the dataset supports skipping locked rows when returning data, false by default.
-
#supports_timestamp_timezones? ⇒ Boolean
Whether the dataset supports timezones in literal timestamps, false by default.
-
#supports_timestamp_usecs? ⇒ Boolean
Whether the dataset supports fractional seconds in literal timestamps, true by default.
-
#supports_updating_joins? ⇒ Boolean
Whether updating joined datasets is supported, false by default.
-
#supports_where_true? ⇒ Boolean
Whether the dataset supports WHERE TRUE (or WHERE 1 for databases that use 1 for true), true by default.
-
#supports_window_clause? ⇒ Boolean
Whether the dataset supports the WINDOW clause to define windows used by multiple window functions, false by default.
-
#supports_window_function_frame_option?(option) ⇒ Boolean
Whether the dataset supports the given window function option.
-
#supports_window_functions? ⇒ Boolean
Whether the dataset supports window functions, false by default.
-
#to_hash(*a) ⇒ Object
Alias of as_hash for backwards compatibility.
-
#to_hash_groups(key_column, value_column = nil, opts = OPTS) ⇒ Object
Returns a hash with one column used as key and the values being an array of column values.
-
#truncate ⇒ Object
Truncates the dataset.
-
#truncate_sql ⇒ Object
Returns a TRUNCATE SQL query string.
-
#unfiltered ⇒ Object
Returns a copy of the dataset with no filters (HAVING or WHERE clause) applied.
-
#ungraphed ⇒ Object
Remove the splitting of results into subhashes, and all metadata related to the current graph (if any).
-
#ungrouped ⇒ Object
Returns a copy of the dataset with no grouping (GROUP or HAVING clause) applied.
-
#union(dataset, opts = OPTS) ⇒ Object
Adds a UNION clause using a second dataset object.
-
#unlimited ⇒ Object
Returns a copy of the dataset with no limit or offset.
-
#unordered ⇒ Object
Returns a copy of the dataset with no order.
-
#unqualified_column_for(v) ⇒ Object
This returns an SQL::Identifier or SQL::AliasedExpression containing an SQL identifier that represents the unqualified column for the given value.
-
#unused_table_alias(table_alias, used_aliases = []) ⇒ Object
Creates a unique table alias that hasn’t already been used in the dataset.
-
#update(values = OPTS, &block) ⇒ Object
Updates values for the dataset.
-
#update_sql(values = OPTS) ⇒ Object
Formats an UPDATE statement using the given values.
-
#where(*cond, &block) ⇒ Object
Returns a copy of the dataset with the given WHERE conditions imposed upon it.
-
#where_all(cond, &block) ⇒ Object
Return an array of all rows matching the given filter condition, also yielding each row to the given block.
-
#where_each(cond, &block) ⇒ Object
Iterate over all rows matching the given filter condition, yielding each row to the given block.
-
#where_single_value(cond) ⇒ Object
Filter the datasets using the given filter condition, then return a single value.
-
#window(name, opts) ⇒ Object
Return a clone of the dataset with an additional named window that can be referenced in window functions.
-
#window_sql_append(sql, opts) ⇒ Object
Append literalization of windows (for window functions) to SQL string.
-
#with(name, dataset, opts = OPTS) ⇒ Object
Add a common table expression (CTE) with the given name and a dataset that defines the CTE.
-
#with_extend(*mods, &block) ⇒ Object
Returns a copy of the dataset extended with the given modules (or with a module created from a passed block).
-
#with_quote_identifiers(v) ⇒ Object
Return a modified dataset with quote_identifiers set.
-
#with_recursive(name, nonrecursive, recursive, opts = OPTS) ⇒ Object
Add a recursive common table expression (CTE) with the given name, a dataset that defines the nonrecursive part of the CTE, and a dataset that defines the recursive part of the CTE.
-
#with_row_proc(callable) ⇒ Object
Returns a cloned dataset with the given row_proc.
-
#with_sql(sql, *args) ⇒ Object
Returns a copy of the dataset with the static SQL used.
-
#with_sql_all(sql, &block) ⇒ Object
Run the given SQL and return an array of all rows.
-
#with_sql_delete(sql) ⇒ Object
(also: #with_sql_update)
Execute the given SQL and return the number of rows deleted.
-
#with_sql_each(sql) ⇒ Object
Run the given SQL and yield each returned row to the block.
-
#with_sql_first(sql) ⇒ Object
Run the given SQL and return the first row, or nil if no rows were returned.
-
#with_sql_insert(sql) ⇒ Object
Execute the given SQL and (on most databases) return the primary key of the inserted row.
-
#with_sql_single_value(sql) ⇒ Object
Run the given SQL and return the first value in the first row, or nil if no rows were returned.
Methods included from SQL::StringMethods
#escaped_ilike, #escaped_like, #ilike, #like
Methods included from SQL::OrderMethods
Methods included from SQL::NumericMethods
Methods included from SQL::ComplexExpressionMethods
#extract, #sql_boolean, #sql_number, #sql_string
Methods included from SQL::CastMethods
#cast, #cast_numeric, #cast_string
Methods included from SQL::BooleanMethods
Methods included from SQL::AliasMethods
Constructor Details
#initialize(db) ⇒ Dataset
Constructs a new Dataset instance with an associated database and options. Datasets are usually constructed by invoking the Database#[] method:
DB[:posts]
Sequel::Dataset is an abstract class that is not useful by itself. Each database adapter provides a subclass of Sequel::Dataset, and has the Database#dataset method return an instance of that subclass.
# File 'lib/sequel/dataset/misc.rb', line 25
def initialize(db)
  @db = db
  @opts = OPTS
  @cache = {}
  freeze
end
Instance Attribute Details
#db ⇒ Object (readonly)
The database related to this dataset. This is the Database instance that will execute all of this dataset’s queries.
# File 'lib/sequel/dataset/misc.rb', line 12
def db
  @db
end
#opts ⇒ Object (readonly)
The hash of options for this dataset, keys are symbols.
# File 'lib/sequel/dataset/misc.rb', line 15
def opts
  @opts
end
Class Method Details
.clause_methods(type, clauses) ⇒ Object
Given a type (e.g. select) and an array of clauses, return an array of methods to call to build the SQL string.
# File 'lib/sequel/dataset/sql.rb', line 225
def self.clause_methods(type, clauses)
  clauses.map{|clause| :"#{type}_#{clause}_sql"}.freeze
end
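As a small illustration derived from the method body above (the clause names here are arbitrary examples, not a fixed list), the returned method names are simply the type and clause joined with an underscore and suffixed with _sql:
Sequel::Dataset.clause_methods(:select, [:columns, :from])
# => [:select_columns_sql, :select_from_sql]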
.def_sql_method(mod, type, clauses) ⇒ Object
Define a dataset literalization method for the given type in the given module, using the given clauses.
Arguments:
- mod : Module in which to define method
- type : Type of SQL literalization method to create, either :select, :insert, :update, or :delete
- clauses : array of clauses that make up the SQL query for the type. This can either be a single array of symbols/strings, or it can be an array of pairs, with the first element in each pair being an if/elsif/else code fragment, and the second element in each pair being an array of symbol/strings for the appropriate branch.
# File 'lib/sequel/dataset/sql.rb', line 239
def self.def_sql_method(mod, type, clauses)
  priv = type == :update || type == :insert
  cacheable = type == :select || type == :delete

  lines = []
  lines << 'private' if priv
  lines << "def #{'_' if priv}#{type}_sql"
  lines << 'if sql = opts[:sql]; return static_sql(sql) end' unless priv
  lines << "if sql = cache_get(:_#{type}_sql); return sql end" if cacheable
  lines << 'check_delete_allowed!' << 'check_not_limited!(:delete)' if type == :delete
  lines << 'sql = @opts[:append_sql] || sql_string_origin'

  if clauses.all?{|c| c.is_a?(Array)}
    clauses.each do |i, cs|
      lines << i
      lines.concat(clause_methods(type, cs).map{|x| "#{x}(sql)"})
    end
    lines << 'end'
  else
    lines.concat(clause_methods(type, clauses).map{|x| "#{x}(sql)"})
  end

  lines << "cache_set(:_#{type}_sql, sql) if cache_sql?" if cacheable
  lines << 'sql'
  lines << 'end'

  mod.class_eval lines.join("\n"), __FILE__, __LINE__
end
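A hedged sketch of how a clause order is typically declared; the module name below is illustrative and not part of Sequel, and the clause list simply mirrors a common DELETE clause order:
Sequel::Dataset.def_sql_method(MyAdapterDatasetMethods, :delete, %w'delete from where')
# defines MyAdapterDatasetMethods#delete_sql, which builds the DELETE statement by
# calling delete_delete_sql, delete_from_sql and delete_where_sql in order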
.register_extension(ext, mod = nil, &block) ⇒ Object
Register an extension callback for Dataset objects. ext should be the extension name symbol, and mod should be a Module that will be included in the dataset’s class. This also registers a Database extension that will extend all of the database’s datasets.
# File 'lib/sequel/dataset/query.rb', line 55
def self.register_extension(ext, mod=nil, &block)
  if mod
    raise(Error, "cannot provide both mod and block to Dataset.register_extension") if block
    if mod.is_a?(Module)
      block = proc{|ds| ds.extend(mod)}
      Sequel::Database.register_extension(ext){|db| db.extend_datasets(mod)}
      Sequel.synchronize{EXTENSION_MODULES[ext] = mod}
    else
      block = mod
    end
  end

  unless mod.is_a?(Module)
    Sequel::Deprecation.deprecate("Providing a block or non-module to Sequel::Dataset.register_extension is deprecated and support for it will be removed in Sequel 6.")
  end

  Sequel.synchronize{EXTENSIONS[ext] = block}
end
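A minimal sketch of an extension file registering itself; the :my_columns symbol, module name, and file path are hypothetical, not part of Sequel:
# lib/sequel/extensions/my_columns.rb (hypothetical file on the load path)
module MyColumns
  # Return the dataset's column names as strings
  def column_names
    columns.map(&:to_s)
  end
end
Sequel::Dataset.register_extension(:my_columns, MyColumns)

# Once that file is loadable, individual datasets can opt in:
DB[:items].extension(:my_columns).column_names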
Instance Method Details
#<<(arg) ⇒ Object
Inserts the given argument into the database. Returns self so it can be used safely when chaining:
DB[:items] << {id: 0, name: 'Zero'} << DB[:old_items].select(:id, name)
# File 'lib/sequel/dataset/actions.rb', line 28
def <<(arg)
  insert(arg)
  self
end
#==(o) ⇒ Object
Define a hash value such that datasets with the same class, DB, and opts will be considered equal.
# File 'lib/sequel/dataset/misc.rb', line 34
def ==(o)
  o.is_a?(self.class) && db == o.db && opts == o.opts
end
#[](*conditions) ⇒ Object
Returns the first record matching the conditions. Examples:
DB[:table][id: 1] # SELECT * FROM table WHERE (id = 1) LIMIT 1
# => {:id=>1}
# File 'lib/sequel/dataset/actions.rb', line 37
def [](*conditions)
  raise(Error, 'You cannot call Dataset#[] with an integer or with no arguments') if (conditions.length == 1 and conditions.first.is_a?(Integer)) or conditions.length == 0
  first(*conditions)
end
#add_graph_aliases(graph_aliases) ⇒ Object
Adds the given graph aliases to the list of graph aliases to use, unlike set_graph_aliases, which replaces the list (the equivalent of select_append when graphing). See set_graph_aliases.
DB[:table].add_graph_aliases(some_alias: [:table, :column])
# SELECT ..., table.column AS some_alias
# File 'lib/sequel/dataset/graph.rb', line 18
def add_graph_aliases(graph_aliases)
  graph = opts[:graph]
  unless (graph && (ga = graph[:column_aliases]))
    raise Error, "cannot call add_graph_aliases on a dataset that has not been called with graph or set_graph_aliases"
  end
  columns, graph_aliases = graph_alias_columns(graph_aliases)
  select_append(*columns).clone(:graph => graph.merge(:column_aliases=>ga.merge(graph_aliases).freeze).freeze)
end
#aliased_expression_sql_append(sql, ae) ⇒ Object
Append literalization of aliased expression to SQL string.
# File 'lib/sequel/dataset/sql.rb', line 300
def aliased_expression_sql_append(sql, ae)
  literal_append(sql, ae.expression)
  as_sql_append(sql, ae.alias, ae.columns)
end
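This is part of the literalization protocol and is rarely called directly; aliased expressions are normally built with Sequel[...].as, for example:
DB[:items].select(Sequel[:name].as(:n)).sql
# => "SELECT name AS n FROM items"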
#all(&block) ⇒ Object
Returns an array with all records in the dataset. If a block is given, the array is iterated over after all items have been loaded.
DB[:table].all # SELECT * FROM table
# => [{:id=>1, ...}, {:id=>2, ...}, ...]
# Iterate over all rows in the table
DB[:table].all{|row| p row}
# File 'lib/sequel/dataset/actions.rb', line 50
def all(&block)
  _all(block){|a| each{|r| a << r}}
end
#array_sql_append(sql, a) ⇒ Object
Append literalization of array to SQL string.
# File 'lib/sequel/dataset/sql.rb', line 306
def array_sql_append(sql, a)
  if a.empty?
    sql << '(NULL)'
  else
    sql << '('
    expression_list_append(sql, a)
    sql << ')'
  end
end
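Arrays are usually literalized indirectly, for example as the value list of an IN condition built by where:
DB[:items].where(id: [1, 2, 3]).sql
# => "SELECT * FROM items WHERE (id IN (1, 2, 3))"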
#as_hash(key_column, value_column = nil, opts = OPTS) ⇒ Object
Returns a hash with one column used as key and another used as value. If rows have duplicate values for the key column, the latter row(s) will overwrite the value of the previous row(s). If the value_column is not given or nil, uses the entire hash as the value.
DB[:table].as_hash(:id, :name) # SELECT * FROM table
# {1=>'Jim', 2=>'Bob', ...}
DB[:table].as_hash(:id) # SELECT * FROM table
# {1=>{:id=>1, :name=>'Jim'}, 2=>{:id=>2, :name=>'Bob'}, ...}
You can also provide an array of column names for either the key_column, the value column, or both:
DB[:table].as_hash([:id, :foo], [:name, :bar]) # SELECT * FROM table
# {[1, 3]=>['Jim', 'bo'], [2, 4]=>['Bob', 'be'], ...}
DB[:table].as_hash([:id, :name]) # SELECT * FROM table
# {[1, 'Jim']=>{:id=>1, :name=>'Jim'}, [2, 'Bob']=>{:id=>2, :name=>'Bob'}, ...}
Options:
- :all : Use all instead of each to retrieve the objects
- :hash : The object into which the values will be placed. If this is not given, an empty hash is used. This can be used to use a hash with a default value or default proc.
# File 'lib/sequel/dataset/actions.rb', line 847
def as_hash(key_column, value_column = nil, opts = OPTS)
  h = opts[:hash] || {}
  meth = opts[:all] ? :all : :each
  if value_column
    return naked.as_hash(key_column, value_column, opts) if row_proc
    if value_column.is_a?(Array)
      if key_column.is_a?(Array)
        public_send(meth){|r| h[r.values_at(*key_column)] = r.values_at(*value_column)}
      else
        public_send(meth){|r| h[r[key_column]] = r.values_at(*value_column)}
      end
    else
      if key_column.is_a?(Array)
        public_send(meth){|r| h[r.values_at(*key_column)] = r[value_column]}
      else
        public_send(meth){|r| h[r[key_column]] = r[value_column]}
      end
    end
  elsif key_column.is_a?(Array)
    public_send(meth){|r| h[key_column.map{|k| r[k]}] = r}
  else
    public_send(meth){|r| h[r[key_column]] = r}
  end
  h
end
#avg(arg = (no_arg = true), &block) ⇒ Object
Returns the average value for the given column/expression. Uses a virtual row block if no argument is given.
DB[:table].avg(:number) # SELECT avg(number) FROM table LIMIT 1
# => 3
DB[:table].avg{function(column)} # SELECT avg(function(column)) FROM table LIMIT 1
# => 1
# File 'lib/sequel/dataset/actions.rb', line 61
def avg(arg=(no_arg = true), &block)
  arg = Sequel.virtual_row(&block) if no_arg
  _aggregate(:avg, arg)
end
#bind(bind_vars = OPTS) ⇒ Object
Set the bind variables to use for the call. If bind variables have already been set for this dataset, they are updated with the contents of bind_vars.
DB[:table].where(id: :$id).bind(id: 1).call(:first)
# SELECT * FROM table WHERE id = ? LIMIT 1 -- (1)
# => {:id=>1}
# File 'lib/sequel/dataset/prepared_statements.rb', line 332
def bind(bind_vars=OPTS)
  bind_vars = if bv = @opts[:bind_vars]
    bv.merge(bind_vars).freeze
  else
    if bind_vars.frozen?
      bind_vars
    else
      Hash[bind_vars]
    end
  end

  clone(:bind_vars=>bind_vars)
end
#boolean_constant_sql_append(sql, constant) ⇒ Object
Append literalization of boolean constant to SQL string.
# File 'lib/sequel/dataset/sql.rb', line 317
def boolean_constant_sql_append(sql, constant)
  if (constant == true || constant == false) && !supports_where_true?
    sql << (constant == true ? '(1 = 1)' : '(1 = 0)')
  else
    literal_append(sql, constant)
  end
end
#call(type, bind_variables = OPTS, *values, &block) ⇒ Object
For the given type (:select, :first, :insert, :insert_select, :update, :delete, or :single_value), run the sql with the bind variables specified in the hash. values is a hash passed to insert or update (if one of those types is used), which may contain placeholders.
DB[:table].where(id: :$id).call(:first, id: 1)
# SELECT * FROM table WHERE id = ? LIMIT 1 -- (1)
# => {:id=>1}
# File 'lib/sequel/dataset/prepared_statements.rb', line 353
def call(type, bind_variables=OPTS, *values, &block)
  to_prepared_statement(type, values, :extend=>bound_variable_modules).call(bind_variables, &block)
end
#case_expression_sql_append(sql, ce) ⇒ Object
Append literalization of case expression to SQL string.
# File 'lib/sequel/dataset/sql.rb', line 326
def case_expression_sql_append(sql, ce)
  sql << '(CASE'
  if ce.expression?
    sql << ' '
    literal_append(sql, ce.expression)
  end
  w = " WHEN "
  t = " THEN "
  ce.conditions.each do |c,r|
    sql << w
    literal_append(sql, c)
    sql << t
    literal_append(sql, r)
  end
  sql << " ELSE "
  literal_append(sql, ce.default)
  sql << " END)"
end
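Case expressions are normally built with Sequel.case rather than by calling this method directly; a small sketch:
DB[:items].select(Sequel.case({0 => 'zero'}, 'other', :number)).sql
# => "SELECT (CASE number WHEN 0 THEN 'zero' ELSE 'other' END) FROM items"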
#cast_sql_append(sql, expr, type) ⇒ Object
Append literalization of cast expression to SQL string.
# File 'lib/sequel/dataset/sql.rb', line 346
def cast_sql_append(sql, expr, type)
  sql << 'CAST('
  literal_append(sql, expr)
  sql << ' AS ' << db.cast_type_literal(type).to_s
  sql << ')'
end
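Casts are normally written with Sequel.cast or the cast methods from SQL::CastMethods; the exact type name in the output depends on Database#cast_type_literal for the current database:
DB[:items].select(Sequel.cast(:number, :integer)).sql
# => "SELECT CAST(number AS integer) FROM items"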
#clone(opts = OPTS) ⇒ Object
Returns a copy of the dataset with the given options merged into the existing options. If any of the changed options are in COLUMN_CHANGE_OPTS, the cached columns are cleared.
# File 'lib/sequel/dataset/query.rb', line 90
def clone(opts = nil || (return self))
  # return self used above because clone is called by almost all
  # other query methods, and it is the fastest approach
  c = super(:freeze=>false)
  c.opts.merge!(opts)
  unless opts.each_key{|o| break if COLUMN_CHANGE_OPTS.include?(o)}
    c.clear_columns_cache
  end
  c.freeze
end
#column_all_sql_append(sql, ca) ⇒ Object
Append literalization of column all selection to SQL string.
# File 'lib/sequel/dataset/sql.rb', line 354
def column_all_sql_append(sql, ca)
  qualified_identifier_sql_append(sql, ca.table, WILDCARD)
end
#columns ⇒ Object
Returns the columns in the result set in order as an array of symbols. If the columns are currently cached, returns the cached value. Otherwise, a SELECT query is performed to retrieve a single row in order to get the columns.
If you are looking for all columns for a single table and maybe some information about each column (e.g. database type), see Database#schema.
DB[:table].columns
# => [:id, :name]
# File 'lib/sequel/dataset/actions.rb', line 75
def columns
  _columns || columns!
end
#columns! ⇒ Object
Ignore any cached column information and perform a query to retrieve a row in order to get the columns.
DB[:table].columns!
# => [:id, :name]
# File 'lib/sequel/dataset/actions.rb', line 84
def columns!
  ds = clone(COLUMNS_CLONE_OPTIONS)
  ds.each{break}

  if cols = ds.cache[:_columns]
    self.columns = cols
  else
    []
  end
end
#complex_expression_sql_append(sql, op, args) ⇒ Object
Append literalization of complex expression to SQL string.
# File 'lib/sequel/dataset/sql.rb', line 359 def complex_expression_sql_append(sql, op, args) case op when *IS_OPERATORS r = args[1] if r.nil? || supports_is_true? raise(InvalidOperation, 'Invalid argument used for IS operator') unless val = IS_LITERALS[r] sql << '(' literal_append(sql, args[0]) sql << ' ' << op.to_s << ' ' sql << val << ')' elsif op == :IS complex_expression_sql_append(sql, :"=", args) else complex_expression_sql_append(sql, :OR, [SQL::BooleanExpression.new(:"!=", *args), SQL::BooleanExpression.new(:IS, args[0], nil)]) end when :IN, :"NOT IN" cols = args[0] vals = args[1] col_array = true if cols.is_a?(Array) if vals.is_a?(Array) val_array = true empty_val_array = vals == [] end if empty_val_array literal_append(sql, empty_array_value(op, cols)) elsif col_array if !supports_multiple_column_in? if val_array expr = SQL::BooleanExpression.new(:OR, *vals.to_a.map{|vs| SQL::BooleanExpression.from_value_pairs(cols.to_a.zip(vs).map{|c, v| [c, v]})}) literal_append(sql, op == :IN ? expr : ~expr) else old_vals = vals vals = vals.naked if vals.is_a?(Sequel::Dataset) vals = vals.to_a val_cols = old_vals.columns complex_expression_sql_append(sql, op, [cols, vals.map!{|x| x.values_at(*val_cols)}]) end else # If the columns and values are both arrays, use array_sql instead of # literal so that if values is an array of two element arrays, it # will be treated as a value list instead of a condition specifier. sql << '(' literal_append(sql, cols) sql << ' ' << op.to_s << ' ' if val_array array_sql_append(sql, vals) else literal_append(sql, vals) end sql << ')' end else sql << '(' literal_append(sql, cols) sql << ' ' << op.to_s << ' ' literal_append(sql, vals) sql << ')' end when :LIKE, :'NOT LIKE' sql << '(' literal_append(sql, args[0]) sql << ' ' << op.to_s << ' ' literal_append(sql, args[1]) if requires_like_escape? sql << " ESCAPE " literal_append(sql, "\\") end sql << ')' when :ILIKE, :'NOT ILIKE' complex_expression_sql_append(sql, (op == :ILIKE ? :LIKE : :"NOT LIKE"), args.map{|v| Sequel.function(:UPPER, v)}) when :** function_sql_append(sql, Sequel.function(:power, *args)) when *TWO_ARITY_OPERATORS if REGEXP_OPERATORS.include?(op) && !supports_regexp? raise InvalidOperation, "Pattern matching via regular expressions is not supported on #{db.database_type}" end sql << '(' literal_append(sql, args[0]) sql << ' ' << op.to_s << ' ' literal_append(sql, args[1]) sql << ')' when *N_ARITY_OPERATORS sql << '(' c = false op_str = " #{op} " args.each do |a| sql << op_str if c literal_append(sql, a) c ||= true end sql << ')' when :NOT sql << 'NOT ' literal_append(sql, args[0]) when :NOOP literal_append(sql, args[0]) when :'B~' sql << '~' literal_append(sql, args[0]) when :extract sql << 'extract(' << args[0].to_s << ' FROM ' literal_append(sql, args[1]) sql << ')' else raise(InvalidOperation, "invalid operator #{op}") end end |
#constant_sql_append(sql, constant) ⇒ Object
Append literalization of constant to SQL string.
# File 'lib/sequel/dataset/sql.rb', line 468
def constant_sql_append(sql, constant)
  sql << constant.to_s
end
#count(arg = (no_arg=true), &block) ⇒ Object
Returns the number of records in the dataset. If an argument is provided, it is used as the argument to count. If a block is provided, it is treated as a virtual row, and the result is used as the argument to count.
DB[:table].count # SELECT count(*) AS count FROM table LIMIT 1
# => 3
DB[:table].count(:column) # SELECT count(column) AS count FROM table LIMIT 1
# => 2
DB[:table].count{foo(column)} # SELECT count(foo(column)) AS count FROM table LIMIT 1
# => 1
# File 'lib/sequel/dataset/actions.rb', line 108
def count(arg=(no_arg=true), &block)
  if no_arg && !block
    cached_dataset(:_count_ds) do
      aggregate_dataset.select(COUNT_SELECT).single_value_ds
    end.single_value!.to_i
  else
    if block
      if no_arg
        arg = Sequel.virtual_row(&block)
      else
        raise Error, 'cannot provide both argument and block to Dataset#count'
      end
    end

    _aggregate(:count, arg)
  end
end
#current_datetime ⇒ Object
An object representing the current date or time, should be an instance of Sequel.datetime_class.
# File 'lib/sequel/dataset/misc.rb', line 40
def current_datetime
  Sequel.datetime_class.now
end
#delayed_evaluation_sql_append(sql, delay) ⇒ Object
Append literalization of delayed evaluation to SQL string, causing the delayed evaluation proc to be evaluated.
# File 'lib/sequel/dataset/sql.rb', line 474
def delayed_evaluation_sql_append(sql, delay)
  # Delayed evaluations are used specifically so the SQL
  # can differ in subsequent calls, so we definitely don't
  # want to cache the sql in this case.
  disable_sql_caching!

  if recorder = @opts[:placeholder_literalizer]
    recorder.use(sql, lambda{delay.call(self)}, nil)
  else
    literal_append(sql, delay.call(self))
  end
end
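Delayed evaluations are created with Sequel.delay; the block runs each time the SQL is generated, so the literalized value can change between calls. A sketch, assuming the items table has a created_at column:
ds = DB[:items].where{created_at > Sequel.delay{Time.now - 3600}}
ds.sql  # literalizes the block's current result
ds.sql  # re-evaluates the block, so the timestamp moves forward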
#delete(&block) ⇒ Object
Deletes the records in the dataset, returning the number of records deleted.
DB[:table].delete # DELETE * FROM table
# => 3
Some databases support using multiple tables in a DELETE query. This requires multiple FROM tables (JOINs can also be used). As multiple FROM tables use an implicit CROSS JOIN, you should make sure your WHERE condition uses the appropriate filters for the FROM tables:
DB.from(:a, :b).join(:c, :d=>Sequel[:b][:e]).where{{a[:f]=>b[:g], a[:id]=>c[:h]}}.
delete
# DELETE FROM a
# USING b
# INNER JOIN c ON (c.d = b.e)
# WHERE ((a.f = b.g) AND (a.id = c.h))
# File 'lib/sequel/dataset/actions.rb', line 142
def delete(&block)
  sql = delete_sql
  if uses_returning?(:delete)
    returning_fetch_rows(sql, &block)
  else
    execute_dui(sql)
  end
end
#distinct(*args, &block) ⇒ Object
Returns a copy of the dataset with the SQL DISTINCT clause. The DISTINCT clause is used to remove duplicate rows from the output. If arguments are provided, uses a DISTINCT ON clause, in which case it will only be distinct on those columns, instead of all returned columns. If a block is given, it is treated as a virtual row block, similar to where. Raises an error if arguments are given and DISTINCT ON is not supported.
DB[:items].distinct # SQL: SELECT DISTINCT * FROM items
DB[:items].order(:id).distinct(:id) # SQL: SELECT DISTINCT ON (id) * FROM items ORDER BY id
DB[:items].order(:id).distinct{func(:id)} # SQL: SELECT DISTINCT ON (func(id)) * FROM items ORDER BY id
There is support for emulating the DISTINCT ON support in MySQL, but it does not support the ORDER of the dataset, and also doesn’t work in many cases if the ONLY_FULL_GROUP_BY sql_mode is used, which is the default on MySQL 5.7.5+.
# File 'lib/sequel/dataset/query.rb', line 129
def distinct(*args, &block)
  virtual_row_columns(args, block)
  if args.empty?
    return self if opts[:distinct] == EMPTY_ARRAY
    cached_dataset(:_distinct_ds){clone(:distinct => EMPTY_ARRAY)}
  else
    raise(InvalidOperation, "DISTINCT ON not supported") unless supports_distinct_on?
    clone(:distinct => args.freeze)
  end
end
#dup ⇒ Object
Return self, as datasets are always frozen.
# File 'lib/sequel/dataset/misc.rb', line 50
def dup
  self
end
#each ⇒ Object
Iterates over the records in the dataset as they are yielded from the database adapter, and returns self.
DB[:table].each{|row| p row} # SELECT * FROM table
Note that this method is not safe to use on many adapters if you are running additional queries inside the provided block. If you are running queries inside the block, you should use all instead of each for the outer queries, or use a separate thread or shard inside each.
# File 'lib/sequel/dataset/actions.rb', line 160
def each
  if rp = row_proc
    fetch_rows(select_sql){|r| yield rp.call(r)}
  else
    fetch_rows(select_sql){|r| yield r}
  end
  self
end
#each_server ⇒ Object
Yield a dataset for each server in the connection pool that is tied to that server. Intended for use in sharded environments where all servers need to be modified with the same data:
DB[:configs].where(key: 'setting').each_server{|ds| ds.update(value: 'new_value')}
# File 'lib/sequel/dataset/misc.rb', line 59
def each_server
  db.servers.each{|s| yield server(s)}
end
#empty? ⇒ Boolean
Returns true if no records exist in the dataset, false otherwise.
DB[:table].empty? # SELECT 1 AS one FROM table LIMIT 1
# => false
# File 'lib/sequel/dataset/actions.rb', line 175
def empty?
  cached_dataset(:_empty_ds) do
    (@opts[:sql] ? from_self : self).single_value_ds.unordered.select(EMPTY_SELECT)
  end.single_value!.nil?
end
#eql?(o) ⇒ Boolean
Alias for ==.
# File 'lib/sequel/dataset/misc.rb', line 45
def eql?(o)
  self == o
end
#escape_like(string) ⇒ Object
Returns the string with the LIKE metacharacters (% and _) escaped. Useful for when the LIKE term is a user-provided string where metacharacters should not be recognized. Example:
ds.escape_like("foo\\%_") # 'foo\\\%\_'
# File 'lib/sequel/dataset/misc.rb', line 68
def escape_like(string)
  string.gsub(/[\\%_]/){|m| "\\#{m}"}
end
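A common pattern is combining escape_like with Sequel.like so user input is matched literally while the added wildcards still apply; params[:q] below stands in for a hypothetical user-supplied string:
ds = DB[:items]
ds.where(Sequel.like(:name, "%#{ds.escape_like(params[:q])}%"))
# any % or _ in the input is matched literally, while the surrounding % wildcards still apply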
#except(dataset, opts = OPTS) ⇒ Object
Adds an EXCEPT clause using a second dataset object. An EXCEPT compound dataset returns all rows in the current dataset that are not in the given dataset. Raises an InvalidOperation if the operation is not supported. Options:
- :alias : Use the given value as the from_self alias
- :all : Set to true to use EXCEPT ALL instead of EXCEPT, so duplicate rows can occur
- :from_self : Set to false to not wrap the returned dataset in a from_self, use with care.
DB[:items].except(DB[:other_items])
# SELECT * FROM (SELECT * FROM items EXCEPT SELECT * FROM other_items) AS t1
DB[:items].except(DB[:other_items], all: true, from_self: false)
# SELECT * FROM items EXCEPT ALL SELECT * FROM other_items
DB[:items].except(DB[:other_items], alias: :i)
# SELECT * FROM (SELECT * FROM items EXCEPT SELECT * FROM other_items) AS i
# File 'lib/sequel/dataset/query.rb', line 157
def except(dataset, opts=OPTS)
  raise(InvalidOperation, "EXCEPT not supported") unless supports_intersect_except?
  raise(InvalidOperation, "EXCEPT ALL not supported") if opts[:all] && !supports_intersect_except_all?
  compound_clone(:except, dataset, opts)
end
#exclude(*cond, &block) ⇒ Object
Performs the inverse of Dataset#where. Note that if you have multiple filter conditions, this is not the same as a negation of all conditions.
DB[:items].exclude(category: 'software')
# SELECT * FROM items WHERE (category != 'software')
DB[:items].exclude(category: 'software', id: 3)
# SELECT * FROM items WHERE ((category != 'software') OR (id != 3))
Also note that SQL uses 3-valued boolean logic (true, false, NULL), so the inverse of a true condition is a false condition, and will still not match rows that were NULL originally. If you take the earlier example:
DB[:items].exclude(category: 'software')
# SELECT * FROM items WHERE (category != 'software')
Note that this does not match rows where category is NULL. This is because NULL is an unknown value, and you do not know whether or not the NULL category is software. You can explicitly specify how to handle NULL values if you want:
DB[:items].exclude(Sequel.~(category: nil) & {category: 'software'})
# SELECT * FROM items WHERE ((category IS NULL) OR (category != 'software'))
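exclude also accepts a virtual row block, like where. A small sketch (the price column is hypothetical):
DB[:items].exclude{price > 100}
# SELECT * FROM items WHERE (price <= 100)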
# File 'lib/sequel/dataset/query.rb', line 187
def exclude(*cond, &block)
  add_filter(:where, cond, true, &block)
end
#exclude_having(*cond, &block) ⇒ Object
Inverts the given conditions and adds them to the HAVING clause.
DB[:items].select_group(:name).exclude_having{count(name) < 2}
# SELECT name FROM items GROUP BY name HAVING (count(name) >= 2)
See documentation for exclude for how inversion is handled in regards to SQL 3-valued boolean logic.
# File 'lib/sequel/dataset/query.rb', line 198
def exclude_having(*cond, &block)
  add_filter(:having, cond, true, &block)
end
#exists ⇒ Object
Returns an EXISTS clause for the dataset as an SQL::PlaceholderLiteralString.
DB.select(1).where(DB[:items].exists)
# SELECT 1 WHERE (EXISTS (SELECT * FROM items))
# File 'lib/sequel/dataset/sql.rb', line 14
def exists
  SQL::PlaceholderLiteralString.new(EXISTS, [self], true)
end
#extension(*exts) ⇒ Object
Returns a copy of the dataset loaded with the given dataset extensions. Each extension is loaded via Sequel.extension if it has not been loaded already, and the returned dataset is extended with the module the extension registers for datasets.
# File 'lib/sequel/dataset/query.rb', line 206
def extension(*exts)
  Sequel.extension(*exts)
  mods = exts.map{|ext| Sequel.synchronize{EXTENSION_MODULES[ext]}}
  if mods.all?
    with_extend(*mods)
  else
    with_extend(DeprecatedSingletonClassMethods).extension(*exts)
  end
end
#filter(*cond, &block) ⇒ Object
Alias for where.
# File 'lib/sequel/dataset/query.rb', line 226
def filter(*cond, &block)
  where(*cond, &block)
end
#first(*args, &block) ⇒ Object
Returns the first matching record if no arguments are given. If an integer argument is given, it is interpreted as a limit, and then returns all matching records up to that limit. If any other type of argument(s) is passed, it is treated as a filter and the first matching record is returned. If a block is given, it is used to filter the dataset before returning anything.
If there are no records in the dataset, returns nil (or an empty array if an integer argument is given).
Examples:
DB[:table].first # SELECT * FROM table LIMIT 1
# => {:id=>7}
DB[:table].first(2) # SELECT * FROM table LIMIT 2
# => [{:id=>6}, {:id=>4}]
DB[:table].first(id: 2) # SELECT * FROM table WHERE (id = 2) LIMIT 1
# => {:id=>2}
DB[:table].first(Sequel.lit("id = 3")) # SELECT * FROM table WHERE (id = 3) LIMIT 1
# => {:id=>3}
DB[:table].first(Sequel.lit("id = ?", 4)) # SELECT * FROM table WHERE (id = 4) LIMIT 1
# => {:id=>4}
DB[:table].first{id > 2} # SELECT * FROM table WHERE (id > 2) LIMIT 1
# => {:id=>5}
DB[:table].first(Sequel.lit("id > ?", 4)){id < 6} # SELECT * FROM table WHERE ((id > 4) AND (id < 6)) LIMIT 1
# => {:id=>5}
DB[:table].first(2){id < 2} # SELECT * FROM table WHERE (id < 2) LIMIT 2
# => [{:id=>1}]
# File 'lib/sequel/dataset/actions.rb', line 216 def first(*args, &block) case args.length when 0 unless block return single_record end when 1 arg = args[0] if arg.is_a?(Integer) res = if block if loader = cached_placeholder_literalizer(:_first_integer_cond_loader) do |pl| where(pl.arg).limit(pl.arg) end loader.all(filter_expr(&block), arg) else where(&block).limit(arg).all end else if loader = cached_placeholder_literalizer(:_first_integer_loader) do |pl| limit(pl.arg) end loader.all(arg) else limit(arg).all end end return res end where_args = args args = arg end if loader = cached_where_placeholder_literalizer(where_args||args, block, :_first_cond_loader) do |pl| _single_record_ds.where(pl.arg) end loader.first(filter_expr(args, &block)) else _single_record_ds.where(args, &block).single_record! end end |
#first!(*args, &block) ⇒ Object
Calls first. If first returns nil (signaling that no row matches), raise a Sequel::NoMatchingRow exception.
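A short sketch (the id values are hypothetical):
DB[:table].first!(id: 7)   # SELECT * FROM table WHERE (id = 7) LIMIT 1
# => {:id=>7}
DB[:table].first!(id: 0)   # raises Sequel::NoMatchingRow if no row has id 0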
# File 'lib/sequel/dataset/actions.rb', line 263
def first!(*args, &block)
  first(*args, &block) || raise(Sequel::NoMatchingRow.new(self))
end
#first_source ⇒ Object
Alias of first_source_alias
# File 'lib/sequel/dataset/misc.rb', line 91
def first_source
  first_source_alias
end
#first_source_alias ⇒ Object
The first source (primary table) for this dataset. If the dataset doesn't have a table, raises an Error. If the table is aliased, returns the aliased name.
DB[:table].first_source_alias
# => :table
DB[Sequel[:table].as(:t)].first_source_alias
# => :t
# File 'lib/sequel/dataset/misc.rb', line 103 def first_source_alias source = @opts[:from] if source.nil? || source.empty? raise Error, 'No source specified for query' end case s = source.first when SQL::AliasedExpression s.alias when Symbol _, _, aliaz = split_symbol(s) aliaz ? aliaz.to_sym : s else s end end |
#first_source_table ⇒ Object
The first source (primary table) for this dataset. If the dataset doesn’t have a table, raises an error. If the table is aliased, returns the original table, not the alias
DB[:table].first_source_table
# => :table
DB[Sequel[:table].as(:t)].first_source_table
# => :table
# File 'lib/sequel/dataset/misc.rb', line 128 def first_source_table source = @opts[:from] if source.nil? || source.empty? raise Error, 'No source specified for query' end case s = source.first when SQL::AliasedExpression s.expression when Symbol sch, table, aliaz = split_symbol(s) aliaz ? (sch ? SQL::QualifiedIdentifier.new(sch, table) : table.to_sym) : s else s end end |
#for_update ⇒ Object
Returns a cloned dataset with a :update lock style.
DB[:table].for_update # SELECT * FROM table FOR UPDATE
# File 'lib/sequel/dataset/query.rb', line 233
def for_update
  return self if opts[:lock] == :update
  cached_dataset(:_for_update_ds){lock_style(:update)}
end
#freeze ⇒ Object
Freezes the dataset's options and then the dataset itself.
# File 'lib/sequel/dataset/misc.rb', line 74
def freeze
  @opts.freeze
  super
end
#from(*source, &block) ⇒ Object
Returns a copy of the dataset with the source changed. If no source is given, removes all tables. If multiple sources are given, it is the same as using a CROSS JOIN (cartesian product) between all tables. If a block is given, it is treated as a virtual row block, similar to where.
DB[:items].from # SQL: SELECT *
DB[:items].from(:blah) # SQL: SELECT * FROM blah
DB[:items].from(:blah, :foo) # SQL: SELECT * FROM blah, foo
DB[:items].from{fun(arg)} # SQL: SELECT * FROM fun(arg)
# File 'lib/sequel/dataset/query.rb', line 247 def from(*source, &block) virtual_row_columns(source, block) table_alias_num = 0 ctes = nil source.map! do |s| case s when Dataset if hoist_cte?(s) ctes ||= [] ctes += s.opts[:with] s = s.clone(:with=>nil) end SQL::AliasedExpression.new(s, dataset_alias(table_alias_num+=1)) when Symbol sch, table, aliaz = split_symbol(s) if aliaz s = sch ? SQL::QualifiedIdentifier.new(sch, table) : SQL::Identifier.new(table) SQL::AliasedExpression.new(s, aliaz.to_sym) else s end else s end end o = {:from=>source.empty? ? nil : source.freeze} o[:with] = ((opts[:with] || EMPTY_ARRAY) + ctes).freeze if ctes o[:num_dataset_sources] = table_alias_num if table_alias_num > 0 clone(o) end |
#from_self(opts = OPTS) ⇒ Object
Returns a dataset selecting from the current dataset. Options:
- :alias
-
Controls the alias of the table
- :column_aliases
-
Also aliases columns, using derived column lists. Only used in conjunction with :alias.
ds = DB[:items].order(:name).select(:id, :name)
# SELECT id,name FROM items ORDER BY name
ds.from_self
# SELECT * FROM (SELECT id, name FROM items ORDER BY name) AS t1
ds.from_self(alias: :foo)
# SELECT * FROM (SELECT id, name FROM items ORDER BY name) AS foo
ds.from_self(alias: :foo, column_aliases: [:c1, :c2])
# SELECT * FROM (SELECT id, name FROM items ORDER BY name) AS foo(c1, c2)
# File 'lib/sequel/dataset/query.rb', line 295 def from_self(opts=OPTS) fs = {} @opts.keys.each{|k| fs[k] = nil unless non_sql_option?(k)} pr = proc do c = clone(fs).from(opts[:alias] ? as(opts[:alias], opts[:column_aliases]) : self) if cols = _columns c.send(:columns=, cols) end c end opts.empty? ? cached_dataset(:_from_self_ds, &pr) : pr.call end |
#frozen? ⇒ Boolean
Returns true, as datasets are always frozen.
# File 'lib/sequel/dataset/misc.rb', line 84
def frozen? # :nodoc:
  true
end
#function_sql_append(sql, f) ⇒ Object
Append literalization of function call to SQL string.
# File 'lib/sequel/dataset/sql.rb', line 488 def function_sql_append(sql, f) name = f.name opts = f.opts if opts[:emulate] if emulate_function?(name) emulate_function_sql_append(sql, f) return end name = native_function_name(name) end sql << 'LATERAL ' if opts[:lateral] case name when SQL::Identifier if supports_quoted_function_names? && opts[:quoted] literal_append(sql, name) else sql << name.value.to_s end when SQL::QualifiedIdentifier if supports_quoted_function_names? && opts[:quoted] != false literal_append(sql, name) else sql << split_qualifiers(name).join('.') end else if supports_quoted_function_names? && opts[:quoted] quote_identifier_append(sql, name) else sql << name.to_s end end sql << '(' if filter = opts[:filter] filter = filter_expr(filter, &opts[:filter_block]) end if opts[:*] if filter && !supports_filtered_aggregates? literal_append(sql, Sequel.case({filter=>1}, nil)) filter = nil else sql << '*' end else sql << "DISTINCT " if opts[:distinct] if filter && !supports_filtered_aggregates? expression_list_append(sql, f.args.map{|arg| Sequel.case({filter=>arg}, nil)}) filter = nil else expression_list_append(sql, f.args) end if order = opts[:order] sql << " ORDER BY " expression_list_append(sql, order) end end sql << ')' if group = opts[:within_group] sql << " WITHIN GROUP (ORDER BY " expression_list_append(sql, group) sql << ')' end if filter sql << " FILTER (WHERE " literal_append(sql, filter) sql << ')' end if window = opts[:over] sql << ' OVER ' window_sql_append(sql, window.opts) end if opts[:with_ordinality] sql << " WITH ORDINALITY" end end |
#get(column = (no_arg=true; nil), &block) ⇒ Object
Return the column value for the first matching record in the dataset. Raises an error if both an argument and a block are given.
DB[:table].get(:id) # SELECT id FROM table LIMIT 1
# => 3
ds.get{sum(id)} # SELECT sum(id) AS v FROM table LIMIT 1
# => 6
You can pass an array of arguments to return multiple arguments, but you must make sure each element in the array has an alias that Sequel can determine:
DB[:table].get([:id, :name]) # SELECT id, name FROM table LIMIT 1
# => [3, 'foo']
DB[:table].get{[sum(id).as(sum), name]} # SELECT sum(id) AS sum, name FROM table LIMIT 1
# => [6, 'foo']
# File 'lib/sequel/dataset/actions.rb', line 285 def get(column=(no_arg=true; nil), &block) ds = naked if block raise(Error, 'Must call Dataset#get with an argument or a block, not both') unless no_arg ds = ds.select(&block) column = ds.opts[:select] column = nil if column.is_a?(Array) && column.length < 2 else case column when Array ds = ds.select(*column) when LiteralString, Symbol, SQL::Identifier, SQL::QualifiedIdentifier, SQL::AliasedExpression if loader = cached_placeholder_literalizer(:_get_loader) do |pl| ds.single_value_ds.select(pl.arg) end return loader.get(column) end ds = ds.select(column) else if loader = cached_placeholder_literalizer(:_get_alias_loader) do |pl| ds.single_value_ds.select(Sequel.as(pl.arg, :v)) end return loader.get(column) end ds = ds.select(Sequel.as(column, :v)) end end if column.is_a?(Array) if r = ds.single_record r.values_at(*hash_key_symbols(column)) end else ds.single_value end end |
#graph(dataset, join_conditions = nil, options = OPTS, &block) ⇒ Object
Similar to Dataset#join_table, but uses unambiguous aliases for selected columns and keeps metadata about the aliases for use in other methods.
Arguments:
- dataset
-
Can be a symbol (specifying a table), another dataset, or an SQL::Identifier, SQL::QualifiedIdentifier, or SQL::AliasedExpression.
- join_conditions
-
Any condition(s) allowed by
join_table
. - block
-
A block that is passed to
join_table
.
Options:
- :from_self_alias
-
The alias to use when the receiver is not a graphed dataset but it contains multiple FROM tables or a JOIN. In this case, the receiver is wrapped in a from_self before graphing, and this option determines the alias to use.
- :implicit_qualifier
-
The qualifier of implicit conditions, see #join_table.
- :join_only
-
Only join the tables, do not change the selected columns.
- :join_type
-
The type of join to use (passed to
join_table
). Defaults to :left_outer. - :qualify
-
The type of qualification to do, see #join_table.
- :select
-
An array of columns to select. When not used, selects all columns in the given dataset. When set to false, selects no columns and is like simply joining the tables, though graph keeps some metadata about the join that makes it important to use
graph
instead ofjoin_table
. - :table_alias
-
The alias to use for the table. If not specified, doesn’t alias the table. You will get an error if the alias (or table) name is used more than once.
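A hedged sketch of typical usage (the albums/artists tables and their columns are hypothetical): graph aliases the joined table's columns so they do not clash with columns already in the result set.
DB[:albums].graph(:artists, id: :artist_id)
# SELECT albums.id, albums.name, albums.artist_id, artists.id AS artists_id, artists.name AS artists_name
# FROM albums LEFT OUTER JOIN artists ON (artists.id = albums.artist_id)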
# File 'lib/sequel/dataset/graph.rb', line 53 def graph(dataset, join_conditions = nil, = OPTS, &block) # Allow the use of a dataset or symbol as the first argument # Find the table name/dataset based on the argument table_alias = [:table_alias] table = dataset create_dataset = true case dataset when Symbol # let alias be the same as the table name (sans any optional schema) # unless alias explicitly given in the symbol using ___ notation and symbol splitting is enabled table_alias ||= split_symbol(table).compact.last when Dataset if dataset.simple_select_all? table = dataset.opts[:from].first table_alias ||= table else table_alias ||= dataset_alias((@opts[:num_dataset_sources] || 0)+1) end create_dataset = false when SQL::Identifier table_alias ||= table.value when SQL::QualifiedIdentifier table_alias ||= split_qualifiers(table).last when SQL::AliasedExpression return graph(table.expression, join_conditions, {:table_alias=>table.alias}.merge!(), &block) else raise Error, "The dataset argument should be a symbol or dataset" end table_alias = table_alias.to_sym if create_dataset dataset = db.from(table) end # Raise Sequel::Error with explanation that the table alias has been used raise_alias_error = lambda do raise(Error, "this #{options[:table_alias] ? 'alias' : 'table'} has already been been used, please specify " \ "#{options[:table_alias] ? 'a different alias' : 'an alias via the :table_alias option'}") end # Only allow table aliases that haven't been used raise_alias_error.call if @opts[:graph] && @opts[:graph][:table_aliases] && @opts[:graph][:table_aliases].include?(table_alias) table_alias_qualifier = qualifier_from_alias_symbol(table_alias, table) implicit_qualifier = [:implicit_qualifier] joined_dataset = joined_dataset? ds = self graph = opts[:graph] if !graph && (select = @opts[:select]) && !select.empty? select_columns = nil unless !joined_dataset && select.length == 1 && (select[0].is_a?(SQL::ColumnAll)) force_from_self = false select_columns = select.map do |sel| unless col = _hash_key_symbol(sel) force_from_self = true break end [sel, col] end select_columns = nil if force_from_self end end # Use a from_self if this is already a joined table (or from_self specifically disabled for graphs) if (@opts[:graph_from_self] != false && !graph && (joined_dataset || force_from_self)) from_selfed = true implicit_qualifier = [:from_self_alias] || first_source ds = ds.from_self(:alias=>implicit_qualifier) end # Join the table early in order to avoid cloning the dataset twice ds = ds.join_table([:join_type] || :left_outer, table, join_conditions, :table_alias=>table_alias_qualifier, :implicit_qualifier=>implicit_qualifier, :qualify=>[:qualify], &block) return ds if [:join_only] opts = ds.opts # Whether to include the table in the result set add_table = [:select] == false ? 
false : true if graph graph = graph.dup select = opts[:select].dup [:column_aliases, :table_aliases, :column_alias_num].each{|k| graph[k] = graph[k].dup} else # Setup the initial graph data structure if it doesn't exist qualifier = ds.first_source_alias master = alias_symbol(qualifier) raise_alias_error.call if master == table_alias # Master hash storing all .graph related information graph = {} # Associates column aliases back to tables and columns column_aliases = graph[:column_aliases] = {} # Associates table alias (the master is never aliased) table_aliases = graph[:table_aliases] = {master=>self} # Keep track of the alias numbers used ca_num = graph[:column_alias_num] = Hash.new(0) select = if select_columns select_columns.map do |sel, column| column_aliases[column] = [master, column] if from_selfed # Initial dataset was wrapped in subselect, selected all # columns in the subselect, qualified by the subselect alias. Sequel.qualify(qualifier, Sequel.identifier(column)) else # Initial dataset not wrapped in subslect, just make # sure columns are qualified in some way. qualified_expression(sel, qualifier) end end else columns.map do |column| column_aliases[column] = [master, column] SQL::QualifiedIdentifier.new(qualifier, column) end end end # Add the table alias to the list of aliases # Even if it isn't been used in the result set, # we add a key for it with a nil value so we can check if it # is used more than once table_aliases = graph[:table_aliases] table_aliases[table_alias] = add_table ? dataset : nil # Add the columns to the selection unless we are ignoring them if add_table column_aliases = graph[:column_aliases] ca_num = graph[:column_alias_num] # Which columns to add to the result set cols = [:select] || dataset.columns # If the column hasn't been used yet, don't alias it. # If it has been used, try table_column. # If that has been used, try table_column_N # using the next value of N that we know hasn't been # used cols.each do |column| col_alias, identifier = if column_aliases[column] column_alias = :"#{table_alias}_#{column}" if column_aliases[column_alias] column_alias_num = ca_num[column_alias] column_alias = :"#{column_alias}_#{column_alias_num}" ca_num[column_alias] += 1 end [column_alias, SQL::AliasedExpression.new(SQL::QualifiedIdentifier.new(table_alias_qualifier, column), column_alias)] else ident = SQL::QualifiedIdentifier.new(table_alias_qualifier, column) [column, ident] end column_aliases[col_alias] = [table_alias, column].freeze select.push(identifier) end end [:column_aliases, :table_aliases, :column_alias_num].each{|k| graph[k].freeze} ds = ds.clone(:graph=>graph.freeze) ds.select(*select) end |
#grep(columns, patterns, opts = OPTS) ⇒ Object
Match any of the columns to any of the patterns. The terms can be strings (which use LIKE) or regular expressions if the database supports that. Note that the total number of pattern matches will be Array(columns).length * Array(terms).length, which could cause performance issues.
Options (all are boolean):
- :all_columns
-
All columns must be matched to any of the given patterns.
- :all_patterns
-
All patterns must match at least one of the columns.
- :case_insensitive
-
Use a case insensitive pattern match (the default is case sensitive if the database supports it).
If both :all_columns and :all_patterns are true, all columns must match all patterns.
Examples:
dataset.grep(:a, '%test%')
# SELECT * FROM items WHERE (a LIKE '%test%' ESCAPE '\')
dataset.grep([:a, :b], %w'%test% foo')
# SELECT * FROM items WHERE ((a LIKE '%test%' ESCAPE '\') OR (a LIKE 'foo' ESCAPE '\')
# OR (b LIKE '%test%' ESCAPE '\') OR (b LIKE 'foo' ESCAPE '\'))
dataset.grep([:a, :b], %w'%foo% %bar%', all_patterns: true)
# SELECT * FROM a WHERE (((a LIKE '%foo%' ESCAPE '\') OR (b LIKE '%foo%' ESCAPE '\'))
# AND ((a LIKE '%bar%' ESCAPE '\') OR (b LIKE '%bar%' ESCAPE '\')))
dataset.grep([:a, :b], %w'%foo% %bar%', all_columns: true)
# SELECT * FROM a WHERE (((a LIKE '%foo%' ESCAPE '\') OR (a LIKE '%bar%' ESCAPE '\'))
# AND ((b LIKE '%foo%' ESCAPE '\') OR (b LIKE '%bar%' ESCAPE '\')))
dataset.grep([:a, :b], %w'%foo% %bar%', all_patterns: true, all_columns: true)
# SELECT * FROM a WHERE ((a LIKE '%foo%' ESCAPE '\') AND (b LIKE '%foo%' ESCAPE '\')
# AND (a LIKE '%bar%' ESCAPE '\') AND (b LIKE '%bar%' ESCAPE '\'))
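A hedged sketch of the :case_insensitive option; the exact SQL depends on the database, and the form shown assumes one without a native case-insensitive LIKE:
dataset.grep(:a, '%test%', case_insensitive: true)
# SELECT * FROM items WHERE (UPPER(a) LIKE UPPER('%test%') ESCAPE '\')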
# File 'lib/sequel/dataset/query.rb', line 344 def grep(columns, patterns, opts=OPTS) column_op = opts[:all_columns] ? :AND : :OR if opts[:all_patterns] conds = Array(patterns).map do |pat| SQL::BooleanExpression.new(column_op, *Array(columns).map{|c| SQL::StringExpression.like(c, pat, opts)}) end where(SQL::BooleanExpression.new(:AND, *conds)) else conds = Array(columns).map do |c| SQL::BooleanExpression.new(:OR, *Array(patterns).map{|pat| SQL::StringExpression.like(c, pat, opts)}) end where(SQL::BooleanExpression.new(column_op, *conds)) end end |
#group(*columns, &block) ⇒ Object
Returns a copy of the dataset with the results grouped by the value of the given columns. If a block is given, it is treated as a virtual row block, similar to where.
DB[:items].group(:id) # SELECT * FROM items GROUP BY id
DB[:items].group(:id, :name) # SELECT * FROM items GROUP BY id, name
DB[:items].group{[a, sum(b)]} # SELECT * FROM items GROUP BY a, sum(b)
# File 'lib/sequel/dataset/query.rb', line 366
def group(*columns, &block)
  virtual_row_columns(columns, block)
  clone(:group => (columns.compact.empty? ? nil : columns.freeze))
end
#group_and_count(*columns, &block) ⇒ Object
Returns a dataset grouped by the given column with count by group. Column aliases may be supplied, and will be included in the select clause. If a block is given, it is treated as a virtual row block, similar to where.
Examples:
DB[:items].group_and_count(:name).all
# SELECT name, count(*) AS count FROM items GROUP BY name
# => [{:name=>'a', :count=>1}, ...]
DB[:items].group_and_count(:first_name, :last_name).all
# SELECT first_name, last_name, count(*) AS count FROM items GROUP BY first_name, last_name
# => [{:first_name=>'a', :last_name=>'b', :count=>1}, ...]
DB[:items].group_and_count(Sequel[:first_name].as(:name)).all
# SELECT first_name AS name, count(*) AS count FROM items GROUP BY first_name
# => [{:name=>'a', :count=>1}, ...]
DB[:items].group_and_count{substr(:first_name, 1, 1).as(:initial)}.all
# SELECT substr(first_name, 1, 1) AS initial, count(*) AS count FROM items GROUP BY substr(first_name, 1, 1)
# => [{:initial=>'a', :count=>1}, ...]
# File 'lib/sequel/dataset/query.rb', line 397
def group_and_count(*columns, &block)
  select_group(*columns, &block).select_append(COUNT_OF_ALL_AS_COUNT)
end
#group_append(*columns, &block) ⇒ Object
Returns a copy of the dataset with the given columns added to the list of existing columns to group on. If no existing columns are present this method simply sets the columns as the initial ones to group on.
DB[:items].group_append(:b) # SELECT * FROM items GROUP BY b
DB[:items].group(:a).group_append(:b) # SELECT * FROM items GROUP BY a, b
# File 'lib/sequel/dataset/query.rb', line 407 def group_append(*columns, &block) columns = @opts[:group] + columns if @opts[:group] group(*columns, &block) end |
#group_by(*columns, &block) ⇒ Object
Alias of group
# File 'lib/sequel/dataset/query.rb', line 372 def group_by(*columns, &block) group(*columns, &block) end |
#group_cube ⇒ Object
Adds the appropriate CUBE syntax to GROUP BY.
# File 'lib/sequel/dataset/query.rb', line 413 def group_cube raise Error, "GROUP BY CUBE not supported on #{db.database_type}" unless supports_group_cube? clone(:group_options=>:cube) end |
#group_rollup ⇒ Object
Adds the appropriate ROLLUP syntax to GROUP BY.
# File 'lib/sequel/dataset/query.rb', line 419 def group_rollup raise Error, "GROUP BY ROLLUP not supported on #{db.database_type}" unless supports_group_rollup? clone(:group_options=>:rollup) end |
#grouping_sets ⇒ Object
Adds the appropriate GROUPING SETS syntax to GROUP BY.
# File 'lib/sequel/dataset/query.rb', line 425 def grouping_sets raise Error, "GROUP BY GROUPING SETS not supported on #{db.database_type}" unless supports_grouping_sets? clone(:group_options=>:"grouping sets") end |
#hash ⇒ Object
Define a hash value such that datasets with the same class, DB, and opts will have the same hash value.
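For example, a minimal sketch:
DB[:items].where(id: 1).hash == DB[:items].where(id: 1).hash
# => true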
# File 'lib/sequel/dataset/misc.rb', line 146
def hash
  [self.class, db, opts].hash
end
#having(*cond, &block) ⇒ Object
Returns a copy of the dataset with the HAVING conditions changed. See #where for argument types.
DB[:items].group(:sum).having(sum: 10)
# SELECT * FROM items GROUP BY sum HAVING (sum = 10)
# File 'lib/sequel/dataset/query.rb', line 434
def having(*cond, &block)
  add_filter(:having, cond, &block)
end
#import(columns, values, opts = OPTS) ⇒ Object
Inserts multiple records into the associated table. This method can be used to efficiently insert a large number of records into a table in a single query if the database supports it. Inserts are automatically wrapped in a transaction if necessary.
This method is called with a columns array and an array of value arrays:
DB[:table].import([:x, :y], [[1, 2], [3, 4]])
# INSERT INTO table (x, y) VALUES (1, 2)
# INSERT INTO table (x, y) VALUES (3, 4)
or, if the database supports it:
# INSERT INTO table (x, y) VALUES (1, 2), (3, 4)
This method also accepts a dataset instead of an array of value arrays:
DB[:table].import([:x, :y], DB[:table2].select(:a, :b))
# INSERT INTO table (x, y) SELECT a, b FROM table2
Options:
- :commit_every
-
Open a new transaction for every given number of records. For example, if you provide a value of 50, will commit after every 50 records. When a transaction is not required, this option controls the maximum number of values to insert with a single statement; it does not force the use of a transaction.
- :return
-
When this is set to :primary_key, returns an array of autoincremented primary key values for the rows inserted. This does not have an effect if
values
is a Dataset. - :server
-
Set the server/shard to use for the transaction and insert queries.
- :skip_transaction
-
Do not use a transaction even when using multiple INSERT queries.
- :slice
-
Same as :commit_every, :commit_every takes precedence.
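A hedged sketch of the batching and :return options described above (table, columns, and returned key values are hypothetical):
DB[:table].import([:x, :y], [[1, 2], [3, 4], [5, 6]], commit_every: 2)
# Issues the INSERTs in batches of at most 2 rows each
DB[:table].import([:x, :y], [[1, 2], [3, 4]], return: :primary_key)
# => [10, 11]  (autoincremented primary key values for the inserted rows)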
# File 'lib/sequel/dataset/actions.rb', line 362 def import(columns, values, opts=OPTS) return insert(columns, values) if values.is_a?(Dataset) return if values.empty? raise(Error, 'Using Sequel::Dataset#import with an empty column array is not allowed') if columns.empty? ds = opts[:server] ? server(opts[:server]) : self if slice_size = opts.fetch(:commit_every, opts.fetch(:slice, default_import_slice)) offset = 0 rows = [] while offset < values.length rows << ds._import(columns, values[offset, slice_size], opts) offset += slice_size end rows.flatten else ds._import(columns, values, opts) end end |
#insert(*values, &block) ⇒ Object
Inserts values into the associated table. The returned value is generally the value of the autoincremented primary key for the inserted row, assuming that a single row is inserted and the table has an autoincrementing primary key.
insert
handles a number of different argument formats:
- no arguments or single empty hash
-
Uses DEFAULT VALUES
- single hash
-
Most common format, treats keys as columns and values as values
- single array
-
Treats entries as values, with no columns
- two arrays
-
Treats first array as columns, second array as values
- single Dataset
-
Treats as an insert based on a selection from the dataset given, with no columns
- array and dataset
-
Treats as an insert based on a selection from the dataset given, with the columns given by the array.
Examples:
DB[:items].insert
# INSERT INTO items DEFAULT VALUES
DB[:items].insert({})
# INSERT INTO items DEFAULT VALUES
DB[:items].insert([1,2,3])
# INSERT INTO items VALUES (1, 2, 3)
DB[:items].insert([:a, :b], [1,2])
# INSERT INTO items (a, b) VALUES (1, 2)
DB[:items].insert(a: 1, b: 2)
# INSERT INTO items (a, b) VALUES (1, 2)
DB[:items].insert(DB[:old_items])
# INSERT INTO items SELECT * FROM old_items
DB[:items].insert([:a, :b], DB[:old_items])
# INSERT INTO items (a, b) SELECT * FROM old_items
# File 'lib/sequel/dataset/actions.rb', line 418 def insert(*values, &block) sql = insert_sql(*values) if uses_returning?(:insert) returning_fetch_rows(sql, &block) else execute_insert(sql) end end |
#insert_sql(*values) ⇒ Object
Returns an INSERT SQL query string. See insert.
DB[:items].insert_sql(a: 1)
# => "INSERT INTO items (a) VALUES (1)"
# File 'lib/sequel/dataset/sql.rb', line 22 def insert_sql(*values) return static_sql(@opts[:sql]) if @opts[:sql] check_insert_allowed! columns, values = _parse_insert_sql_args(values) if values.is_a?(Array) && values.empty? && !insert_supports_empty_values? columns, values = insert_empty_columns_values elsif values.is_a?(Dataset) && hoist_cte?(values) && supports_cte?(:insert) ds, values = hoist_cte(values) return ds.clone(:columns=>columns, :values=>values).send(:_insert_sql) end clone(:columns=>columns, :values=>values).send(:_insert_sql) end |
#inspect ⇒ Object
Returns a string representation of the dataset including the class name and the corresponding SQL select statement.
# File 'lib/sequel/dataset/misc.rb', line 152
def inspect
  "#<#{visible_class_name}: #{sql.inspect}>"
end
#intersect(dataset, opts = OPTS) ⇒ Object
Adds an INTERSECT clause using a second dataset object. An INTERSECT compound dataset returns all rows in both the current dataset and the given dataset. Raises an InvalidOperation
if the operation is not supported. Options:
- :alias
-
Use the given value as the from_self alias
- :all
-
Set to true to use INTERSECT ALL instead of INTERSECT, so duplicate rows can occur
- :from_self
-
Set to false to not wrap the returned dataset in a from_self, use with care.
DB[:items].intersect(DB[:other_items])
# SELECT * FROM (SELECT * FROM items INTERSECT SELECT * FROM other_items) AS t1
DB[:items].intersect(DB[:other_items], all: true, from_self: false)
# SELECT * FROM items INTERSECT ALL SELECT * FROM other_items
DB[:items].intersect(DB[:other_items], alias: :i)
# SELECT * FROM (SELECT * FROM items INTERSECT SELECT * FROM other_items) AS i
# File 'lib/sequel/dataset/query.rb', line 455
def intersect(dataset, opts=OPTS)
  raise(InvalidOperation, "INTERSECT not supported") unless supports_intersect_except?
  raise(InvalidOperation, "INTERSECT ALL not supported") if opts[:all] && !supports_intersect_except_all?
  compound_clone(:intersect, dataset, opts)
end
#invert ⇒ Object
Inverts the current WHERE and HAVING clauses. If there is neither a WHERE nor a HAVING clause, adds a WHERE clause that is always false.
DB[:items].where(category: 'software').invert
# SELECT * FROM items WHERE (category != 'software')
DB[:items].where(category: 'software', id: 3).invert
# SELECT * FROM items WHERE ((category != 'software') OR (id != 3))
See documentation for exclude for how inversion is handled in regards to SQL 3-valued boolean logic.
# File 'lib/sequel/dataset/query.rb', line 472 def invert cached_dataset(:_invert_ds) do having, where = @opts.values_at(:having, :where) if having.nil? && where.nil? where(false) else o = {} o[:having] = SQL::BooleanExpression.invert(having) if having o[:where] = SQL::BooleanExpression.invert(where) if where clone(o) end end end |
#join(*args, &block) ⇒ Object
Alias of inner_join
# File 'lib/sequel/dataset/query.rb', line 487
def join(*args, &block)
  inner_join(*args, &block)
end
#join_clause_sql_append(sql, jc) ⇒ Object
Append literalization of JOIN clause without ON or USING to SQL string.
# File 'lib/sequel/dataset/sql.rb', line 573 def join_clause_sql_append(sql, jc) table = jc.table table_alias = jc.table_alias table_alias = nil if table == table_alias && !jc.column_aliases sql << ' ' << join_type_sql(jc.join_type) << ' ' identifier_append(sql, table) as_sql_append(sql, table_alias, jc.column_aliases) if table_alias end |
#join_on_clause_sql_append(sql, jc) ⇒ Object
Append literalization of JOIN ON clause to SQL string.
# File 'lib/sequel/dataset/sql.rb', line 583 def join_on_clause_sql_append(sql, jc) join_clause_sql_append(sql, jc) sql << ' ON ' literal_append(sql, filter_expr(jc.on)) end |
#join_table(type, table, expr = nil, options = OPTS, &block) ⇒ Object
Returns a joined dataset. Not usually called directly, users should use the appropriate join method (e.g. join, left_join, natural_join, cross_join) which fills in the type
argument.
Takes the following arguments:
- type
-
The type of join to do (e.g. :inner)
- table
-
table to join into the current dataset. Generally one of the following types:
- String, Symbol
-
identifier used as table or view name
- Dataset
-
a subselect is performed with an alias of tN for some value of N
- SQL::Function
-
set returning function
- SQL::AliasedExpression
-
already aliased expression. Uses given alias unless overridden by the :table_alias option.
- expr
-
conditions used when joining, depends on type:
- Hash, Array of pairs
-
Assumes key (1st arg) is column of joined table (unless already qualified), and value (2nd arg) is column of the last joined or primary table (or the :implicit_qualifier option). To specify multiple conditions on a single joined table column, you must use an array. Uses a JOIN with an ON clause.
- Array
-
If all members of the array are symbols, considers them as columns and uses a JOIN with a USING clause. Most databases will remove duplicate columns from the result set if this is used.
- nil
-
If a block is not given, doesn’t use ON or USING, so the JOIN should be a NATURAL or CROSS join. If a block is given, uses an ON clause based on the block, see below.
- otherwise
-
Treats the argument as a filter expression, so strings are considered literal, symbols specify boolean columns, and Sequel expressions can be used. Uses a JOIN with an ON clause.
- options
-
a hash of options, with the following keys supported:
- :table_alias
-
Override the table alias used when joining. In general you shouldn’t use this option, you should provide the appropriate SQL::AliasedExpression as the table argument.
- :implicit_qualifier
-
The name to use for qualifying implicit conditions. By default, the last joined or primary table is used.
- :join_using
-
Force the using of JOIN USING, even if
expr
is not an array of symbols. - :reset_implicit_qualifier
-
Can set to false to ignore this join when future joins determine qualifier for implicit conditions.
- :qualify
-
Can be set to false to not do any implicit qualification. Can be set to :deep to use the Qualifier AST Transformer, which will attempt to qualify subexpressions of the expression tree. Can be set to :symbol to only qualify symbols. Defaults to the value of default_join_table_qualification.
- block
-
The block argument should only be given if a JOIN with an ON clause is used, in which case it yields the table alias/name for the table currently being joined, the table alias/name for the last joined (or first table), and an array of previous SQL::JoinClause. Unlike
where
, this block is not treated as a virtual row block.
Examples:
DB[:a].join_table(:cross, :b)
# SELECT * FROM a CROSS JOIN b
DB[:a].join_table(:inner, DB[:b], c: d)
# SELECT * FROM a INNER JOIN (SELECT * FROM b) AS t1 ON (t1.c = a.d)
DB[:a].join_table(:left, Sequel[:b].as(:c), [:d])
# SELECT * FROM a LEFT JOIN b AS c USING (d)
DB[:a].natural_join(:b).join_table(:inner, :c) do |ta, jta, js|
(Sequel.qualify(ta, :d) > Sequel.qualify(jta, :e)) & {Sequel.qualify(ta, :f)=>DB.from(js.first.table).select(:g)}
end
# SELECT * FROM a NATURAL JOIN b INNER JOIN c
# ON ((c.d > b.e) AND (c.f IN (SELECT g FROM b)))
# File 'lib/sequel/dataset/query.rb', line 551 def join_table(type, table, expr=nil, =OPTS, &block) if hoist_cte?(table) s, ds = hoist_cte(table) return s.join_table(type, ds, expr, , &block) end using_join = [:join_using] || (expr.is_a?(Array) && !expr.empty? && expr.all?{|x| x.is_a?(Symbol)}) if using_join && !supports_join_using? h = {} expr.each{|e| h[e] = e} return join_table(type, table, h, ) end table_alias = [:table_alias] if table.is_a?(SQL::AliasedExpression) table_expr = if table_alias SQL::AliasedExpression.new(table.expression, table_alias, table.columns) else table end table = table_expr.expression table_name = table_alias = table_expr.alias elsif table.is_a?(Dataset) if table_alias.nil? table_alias_num = (@opts[:num_dataset_sources] || 0) + 1 table_alias = dataset_alias(table_alias_num) end table_name = table_alias table_expr = SQL::AliasedExpression.new(table, table_alias) else table, implicit_table_alias = split_alias(table) table_alias ||= implicit_table_alias table_name = table_alias || table table_expr = table_alias ? SQL::AliasedExpression.new(table, table_alias) : table end join = if expr.nil? and !block SQL::JoinClause.new(type, table_expr) elsif using_join raise(Sequel::Error, "can't use a block if providing an array of symbols as expr") if block SQL::JoinUsingClause.new(expr, type, table_expr) else last_alias = [:implicit_qualifier] || @opts[:last_joined_table] || first_source_alias qualify_type = [:qualify] if Sequel.condition_specifier?(expr) expr = expr.map do |k, v| qualify_type = default_join_table_qualification if qualify_type.nil? case qualify_type when false nil # Do no qualification when :deep k = Sequel::Qualifier.new(table_name).transform(k) v = Sequel::Qualifier.new(last_alias).transform(v) else k = qualified_column_name(k, table_name) if k.is_a?(Symbol) v = qualified_column_name(v, last_alias) if v.is_a?(Symbol) end [k,v] end expr = SQL::BooleanExpression.from_value_pairs(expr) end if block expr2 = yield(table_name, last_alias, @opts[:join] || EMPTY_ARRAY) expr = expr ? SQL::BooleanExpression.new(:AND, expr, expr2) : expr2 end SQL::JoinOnClause.new(expr, type, table_expr) end opts = {:join => ((@opts[:join] || EMPTY_ARRAY) + [join]).freeze} opts[:last_joined_table] = table_name unless [:reset_implicit_qualifier] == false opts[:num_dataset_sources] = table_alias_num if table_alias_num clone(opts) end |
#join_using_clause_sql_append(sql, jc) ⇒ Object
Append literalization of JOIN USING clause to SQL string.
# File 'lib/sequel/dataset/sql.rb', line 590 def join_using_clause_sql_append(sql, jc) join_clause_sql_append(sql, jc) join_using_clause_using_sql_append(sql, jc.using) end |
#joined_dataset? ⇒ Boolean
Whether this dataset is a joined dataset (multiple FROM tables or any JOINs).
# File 'lib/sequel/dataset/misc.rb', line 157
def joined_dataset?
  !!((opts[:from].is_a?(Array) && opts[:from].size > 1) || opts[:join])
end
#last(*args, &block) ⇒ Object
Reverses the order and then runs #first with the given arguments and block. Note that this will not necessarily give you the last record in the dataset, unless you have an unambiguous order. If there is not currently an order for this dataset, raises an Error.
DB[:table].order(:id).last # SELECT * FROM table ORDER BY id DESC LIMIT 1
# => {:id=>10}
DB[:table].order(Sequel.desc(:id)).last(2) # SELECT * FROM table ORDER BY id ASC LIMIT 2
# => [{:id=>1}, {:id=>2}]
# File 'lib/sequel/dataset/actions.rb', line 437
def last(*args, &block)
  raise(Error, 'No order specified') unless @opts[:order]
  reverse.first(*args, &block)
end
#lateral ⇒ Object
Marks this dataset as a lateral dataset. If used in another dataset’s FROM or JOIN clauses, it will surround the subquery with LATERAL to enable it to deal with previous tables in the query:
DB.from(:a, DB[:b].where(Sequel[:a][:c]=>Sequel[:b][:d]).lateral)
# SELECT * FROM a, LATERAL (SELECT * FROM b WHERE (a.c = b.d))
# File 'lib/sequel/dataset/query.rb', line 645
def lateral
  return self if opts[:lateral]
  cached_dataset(:_lateral_ds){clone(:lateral=>true)}
end
#limit(l, o = (no_offset = true; nil)) ⇒ Object
If given an integer, the dataset will contain only the first l results. If given a range, it will contain only those at offsets within that range. If a second argument is given, it is used as an offset. To use an offset without a limit, pass nil as the first argument.
DB[:items].limit(10) # SELECT * FROM items LIMIT 10
DB[:items].limit(10, 20) # SELECT * FROM items LIMIT 10 OFFSET 20
DB[:items].limit(10...20) # SELECT * FROM items LIMIT 10 OFFSET 10
DB[:items].limit(10..20) # SELECT * FROM items LIMIT 11 OFFSET 10
DB[:items].limit(nil, 20) # SELECT * FROM items OFFSET 20
# File 'lib/sequel/dataset/query.rb', line 660 def limit(l, o = (no_offset = true; nil)) return from_self.limit(l, o) if @opts[:sql] if l.is_a?(Range) no_offset = false o = l.first l = l.last - l.first + (l.exclude_end? ? 0 : 1) end l = l.to_i if l.is_a?(String) && !l.is_a?(LiteralString) if l.is_a?(Integer) raise(Error, 'Limits must be greater than or equal to 1') unless l >= 1 end ds = clone(:limit=>l) ds = ds.offset(o) unless no_offset ds end |
#literal_append(sql, v) ⇒ Object
Append a literal representation of a value to the given SQL string.
If an unsupported object is given, an Error is raised.
# File 'lib/sequel/dataset/sql.rb', line 40 def literal_append(sql, v) case v when Symbol if skip_symbol_cache? literal_symbol_append(sql, v) else unless l = db.literal_symbol(v) l = String.new literal_symbol_append(l, v) db.literal_symbol_set(v, l) end sql << l end when String case v when LiteralString sql << v when SQL::Blob literal_blob_append(sql, v) else literal_string_append(sql, v) end when Integer sql << literal_integer(v) when Hash literal_hash_append(sql, v) when SQL::Expression literal_expression_append(sql, v) when Float sql << literal_float(v) when BigDecimal sql << literal_big_decimal(v) when NilClass sql << literal_nil when TrueClass sql << literal_true when FalseClass sql << literal_false when Array literal_array_append(sql, v) when Time v.is_a?(SQLTime) ? literal_sqltime_append(sql, v) : literal_time_append(sql, v) when DateTime literal_datetime_append(sql, v) when Date literal_date_append(sql, v) when Dataset literal_dataset_append(sql, v) else literal_other_append(sql, v) end end |
#literal_date_or_time(dt, raw = false) ⇒ Object
Literalize a date or time value, as a SQL string value with no typecasting. If raw is true, remove the surrounding single quotes. This is designed for usage by bound argument code that can work even if the auto_cast_date_and_time extension is used (either manually or implicitly in the related adapter).
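A hedged sketch; the exact formatting can differ by adapter and by which extensions are loaded:
DB[:items].literal_date_or_time(Date.new(2020, 1, 2))        # => "'2020-01-02'"
DB[:items].literal_date_or_time(Date.new(2020, 1, 2), true)  # => "2020-01-02"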
# File 'lib/sequel/dataset/sql.rb', line 123 def literal_date_or_time(dt, raw=false) value = case dt when SQLTime literal_sqltime(dt) when Time literal_time(dt) when DateTime literal_datetime(dt) when Date literal_date(dt) else raise TypeError, "unsupported type: #{dt.inspect}" end if raw value.sub!(/\A'/, '') value.sub!(/'\z/, '') end value end |
#lock_style(style) ⇒ Object
Returns a cloned dataset with the given lock style. If style is a string, it will be used directly. You should never pass a string to this method that is derived from user input, as that can lead to SQL injection.
A symbol may be used for database independent locking behavior, but all supported symbols have separate methods (e.g. for_update).
DB[:items].lock_style('FOR SHARE NOWAIT')
# SELECT * FROM items FOR SHARE NOWAIT
DB[:items].lock_style('FOR UPDATE OF table1 SKIP LOCKED')
# SELECT * FROM items FOR UPDATE OF table1 SKIP LOCKED
690 691 692 |
# File 'lib/sequel/dataset/query.rb', line 690 def lock_style(style) clone(:lock => style) end |
#map(column = nil, &block) ⇒ Object
Maps column values for each record in the dataset (if an argument is given) or performs the stock mapping functionality of Enumerable otherwise. Raises an Error if both an argument and block are given.
DB[:table].map(:id) # SELECT * FROM table
# => [1, 2, 3, ...]
DB[:table].map{|r| r[:id] * 2} # SELECT * FROM table
# => [2, 4, 6, ...]
You can also provide an array of column names:
DB[:table].map([:id, :name]) # SELECT * FROM table
# => [[1, 'A'], [2, 'B'], [3, 'C'], ...]
# File 'lib/sequel/dataset/actions.rb', line 456 def map(column=nil, &block) if column raise(Error, 'Must call Dataset#map with either an argument or a block, not both') if block return naked.map(column) if row_proc if column.is_a?(Array) super(){|r| r.values_at(*column)} else super(){|r| r[column]} end else super(&block) end end |
#max(arg = (no_arg = true), &block) ⇒ Object
Returns the maximum value for the given column/expression. Uses a virtual row block if no argument is given.
DB[:table].max(:id) # SELECT max(id) FROM table LIMIT 1
# => 10
DB[:table].max{function(column)} # SELECT max(function(column)) FROM table LIMIT 1
# => 7
# File 'lib/sequel/dataset/actions.rb', line 477 def max(arg=(no_arg = true), &block) arg = Sequel.virtual_row(&block) if no_arg _aggregate(:max, arg) end |
#merge ⇒ Object
Execute a MERGE statement, which allows for INSERT, UPDATE, and DELETE behavior in a single query, depending on whether rows from the source table match rows in the current table according to the join conditions.
Unless the dataset uses static SQL, to use #merge you must first have called #merge_using to specify the merge source and join conditions. You will then likely call one or more of the following methods to specify MERGE behavior by adding WHEN [NOT] MATCHED clauses:
-
#merge_insert
-
#merge_update
-
#merge_delete
The WHEN [NOT] MATCHED clauses are added to the SQL in the order these methods were called on the dataset. If none of these methods are called, an error is raised.
Example:
DB[:m1].
  merge_using(:m2, i1: :i2).
  merge_insert(i1: :i2, a: Sequel[:b]+11).
  merge_delete{a > 30}.
  merge_update(i1: Sequel[:i1]+:i2+10, a: Sequel[:a]+:b+20).
  merge
SQL:
MERGE INTO m1 USING m2 ON (i1 = i2)
WHEN NOT MATCHED THEN INSERT (i1, a) VALUES (i2, (b + 11))
WHEN MATCHED AND (a > 30) THEN DELETE
WHEN MATCHED THEN UPDATE SET i1 = (i1 + i2 + 10), a = (a + b + 20)
On PostgreSQL, two additional merge methods are supported, for the PostgreSQL-specific DO NOTHING syntax.
-
#merge_do_nothing_when_matched
-
#merge_do_nothing_when_not_matched
This method is supported on Oracle, but Oracle’s MERGE support is non-standard, and has the following issues:
-
DELETE clause requires UPDATE clause
-
DELETE clause requires a condition
-
DELETE clause only affects rows updated by UPDATE clause
# File 'lib/sequel/dataset/actions.rb', line 527
def merge
  execute_ddl(merge_sql)
end
#merge_delete(&block) ⇒ Object
Return a dataset with a WHEN MATCHED THEN DELETE clause added to the MERGE statement. If a block is passed, treat it as a virtual row and use it as additional conditions for the match.
merge_delete
# WHEN MATCHED THEN DELETE
merge_delete{a > 30}
# WHEN MATCHED AND (a > 30) THEN DELETE
# File 'lib/sequel/dataset/query.rb', line 703 def merge_delete(&block) _merge_when(:type=>:delete, &block) end |
#merge_insert(*values, &block) ⇒ Object
Return a dataset with a WHEN NOT MATCHED THEN INSERT clause added to the MERGE statement. If a block is passed, treat it as a virtual row and use it as additional conditions for the match.
The arguments provided can be any arguments that would be accepted by #insert.
merge_insert(i1: :i2, a: Sequel[:b]+11)
# WHEN NOT MATCHED THEN INSERT (i1, a) VALUES (i2, (b + 11))
merge_insert(:i2, Sequel[:b]+11){a > 30}
# WHEN NOT MATCHED AND (a > 30) THEN INSERT VALUES (i2, (b + 11))
# File 'lib/sequel/dataset/query.rb', line 719 def merge_insert(*values, &block) _merge_when(:type=>:insert, :values=>values, &block) end |
#merge_sql ⇒ Object
The SQL to use for the MERGE statement.
# File 'lib/sequel/dataset/sql.rb', line 94 def merge_sql raise Error, "This database doesn't support MERGE" unless supports_merge? if sql = opts[:sql] return static_sql(sql) end if sql = cache_get(:_merge_sql) return sql end source, join_condition = @opts[:merge_using] raise Error, "No USING clause for MERGE" unless source sql = @opts[:append_sql] || sql_string_origin select_with_sql(sql) sql << "MERGE INTO " source_list_append(sql, @opts[:from]) sql << " USING " identifier_append(sql, source) sql << " ON " literal_append(sql, join_condition) _merge_when_sql(sql) cache_set(:_merge_sql, sql) if cache_sql? sql end |
#merge_update(values, &block) ⇒ Object
Return a dataset with a WHEN MATCHED THEN UPDATE clause added to the MERGE statement. If a block is passed, treat it as a virtual row and use it as additional conditions for the match.
merge_update(i1: Sequel[:i1]+:i2+10, a: Sequel[:a]+:b+20)
# WHEN MATCHED THEN UPDATE SET i1 = (i1 + i2 + 10), a = (a + b + 20)
merge_update(i1: :i2){a > 30}
# WHEN MATCHED AND (a > 30) THEN UPDATE SET i1 = i2
# File 'lib/sequel/dataset/query.rb', line 732 def merge_update(values, &block) _merge_when(:type=>:update, :values=>values, &block) end |
#merge_using(source, join_condition) ⇒ Object
Return a dataset with the source and join condition to use for the MERGE statement.
merge_using(:m2, i1: :i2)
# USING m2 ON (i1 = i2)
# File 'lib/sequel/dataset/query.rb', line 740
def merge_using(source, join_condition)
  clone(:merge_using => [source, join_condition].freeze)
end
#min(arg = (no_arg = true), &block) ⇒ Object
Returns the minimum value for the given column/expression. Uses a virtual row block if no argument is given.
DB[:table].min(:id) # SELECT min(id) FROM table LIMIT 1
# => 1
DB[:table].min{function(column)} # SELECT min(function(column)) FROM table LIMIT 1
# => 0
# File 'lib/sequel/dataset/actions.rb', line 538 def min(arg=(no_arg = true), &block) arg = Sequel.virtual_row(&block) if no_arg _aggregate(:min, arg) end |
#multi_insert(hashes, opts = OPTS) ⇒ Object
This is a front end for import that allows you to submit an array of hashes instead of arrays of columns and values:
DB[:table].multi_insert([{x: 1}, {x: 2}])
# INSERT INTO table (x) VALUES (1)
# INSERT INTO table (x) VALUES (2)
Be aware that all hashes should have the same keys if you use this calling method, otherwise some columns could be missed or set to null instead of to default values.
This respects the same options as #import.
# File 'lib/sequel/dataset/actions.rb', line 555
def multi_insert(hashes, opts=OPTS)
  return if hashes.empty?
  columns = hashes.first.keys
  import(columns, hashes.map{|h| columns.map{|c| h[c]}}, opts)
end
#multi_insert_sql(columns, values) ⇒ Object
Returns an array of insert statements for inserting multiple records. This method is used by multi_insert to format insert statements and expects a keys array and an array of value arrays.
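A hedged sketch; whether a single multi-row statement or separate statements are returned depends on the database's multi-insert strategy:
DB[:items].multi_insert_sql([:x], [[1], [2]])
# => ["INSERT INTO items (x) VALUES (1)", "INSERT INTO items (x) VALUES (2)"]
# or, on databases supporting multi-row VALUES:
# => ["INSERT INTO items (x) VALUES (1), (2)"]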
# File 'lib/sequel/dataset/sql.rb', line 148 def multi_insert_sql(columns, values) case multi_insert_sql_strategy when :values sql = LiteralString.new('VALUES ') expression_list_append(sql, values.map{|r| Array(r)}) [insert_sql(columns, sql)] when :union c = false sql = LiteralString.new u = ' UNION ALL SELECT ' f = empty_from_sql values.each do |v| if c sql << u else sql << 'SELECT ' c = true end expression_list_append(sql, v) sql << f if f end [insert_sql(columns, sql)] else values.map{|r| insert_sql(columns, r)} end end |
#naked ⇒ Object
Returns a cloned dataset without a row_proc.
ds = DB[:items].with_row_proc(:invert.to_proc)
ds.all # => [{2=>:id}]
ds.naked.all # => [{:id=>2}]
# File 'lib/sequel/dataset/query.rb', line 749
def naked
  return self unless opts[:row_proc]
  cached_dataset(:_naked_ds){with_row_proc(nil)}
end
#negative_boolean_constant_sql_append(sql, constant) ⇒ Object
Append literalization of negative boolean constant to SQL string.
# File 'lib/sequel/dataset/sql.rb', line 596 def negative_boolean_constant_sql_append(sql, constant) sql << 'NOT ' boolean_constant_sql_append(sql, constant) end |
#nowait ⇒ Object
Returns a copy of the dataset that will raise a DatabaseLockTimeout instead of waiting for rows that are locked by another transaction.
DB[:items].for_update.nowait
# SELECT * FROM items FOR UPDATE NOWAIT
# File 'lib/sequel/dataset/query.rb', line 759 def nowait return self if opts[:nowait] cached_dataset(:_nowait_ds) do raise(Error, 'This dataset does not support raises errors instead of waiting for locked rows') unless supports_nowait? clone(:nowait=>true) end end |
#offset(o) ⇒ Object
Returns a copy of the dataset with a specified offset. Can be safely combined with limit. If you call limit with an offset, it will override the offset if you've called offset first.
DB[:items].offset(10) # SELECT * FROM items OFFSET 10
# File 'lib/sequel/dataset/query.rb', line 772
def offset(o)
  o = o.to_i if o.is_a?(String) && !o.is_a?(LiteralString)
  if o.is_a?(Integer)
    raise(Error, 'Offsets must be greater than or equal to 0') unless o >= 0
  end
  clone(:offset => o)
end
#or(*cond, &block) ⇒ Object
Adds an alternate filter to an existing WHERE clause using OR. If there is no WHERE clause, adding an OR condition would be redundant, so the dataset is returned unchanged in that case.
DB[:items].where(:a).or(:b) # SELECT * FROM items WHERE a OR b
DB[:items].or(:b) # SELECT * FROM items
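For example, with an existing WHERE clause (a minimal sketch):
DB[:items].where(a: 1).or(b: 2)
# SELECT * FROM items WHERE ((a = 1) OR (b = 2))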
# File 'lib/sequel/dataset/query.rb', line 786
def or(*cond, &block)
  if @opts[:where].nil?
    self
  else
    add_filter(:where, cond, false, :OR, &block)
  end
end
#order(*columns, &block) ⇒ Object
Returns a copy of the dataset with the order changed. If the dataset has an existing order, it is ignored and overwritten with this order. If a nil is given the returned dataset has no order. This can accept multiple arguments of varying kinds, such as SQL functions. If a block is given, it is treated as a virtual row block, similar to where.
DB[:items].order(:name) # SELECT * FROM items ORDER BY name
DB[:items].order(:a, :b) # SELECT * FROM items ORDER BY a, b
DB[:items].order(Sequel.lit('a + b')) # SELECT * FROM items ORDER BY a + b
DB[:items].order(Sequel[:a] + :b) # SELECT * FROM items ORDER BY (a + b)
DB[:items].order(Sequel.desc(:name)) # SELECT * FROM items ORDER BY name DESC
DB[:items].order(Sequel.asc(:name, nulls: :last)) # SELECT * FROM items ORDER BY name ASC NULLS LAST
DB[:items].order{sum(name).desc} # SELECT * FROM items ORDER BY sum(name) DESC
DB[:items].order(nil) # SELECT * FROM items
808 809 810 811 |
# File 'lib/sequel/dataset/query.rb', line 808 def order(*columns, &block) virtual_row_columns(columns, block) clone(:order => (columns.compact.empty?) ? nil : columns.freeze) end |
#order_append(*columns, &block) ⇒ Object
Returns a copy of the dataset with the order columns added to the end of the existing order.
DB[:items].order(:a).order(:b) # SELECT * FROM items ORDER BY b
DB[:items].order(:a).order_append(:b) # SELECT * FROM items ORDER BY a, b
818 819 820 821 |
# File 'lib/sequel/dataset/query.rb', line 818 def order_append(*columns, &block) columns = @opts[:order] + columns if @opts[:order] order(*columns, &block) end |
#order_by(*columns, &block) ⇒ Object
Alias of order.
824 825 826 |
# File 'lib/sequel/dataset/query.rb', line 824 def order_by(*columns, &block) order(*columns, &block) end |
#order_more(*columns, &block) ⇒ Object
Alias of order_append.
829 830 831 |
# File 'lib/sequel/dataset/query.rb', line 829 def order_more(*columns, &block) order_append(*columns, &block) end |
#order_prepend(*columns, &block) ⇒ Object
Returns a copy of the dataset with the order columns added to the beginning of the existing order.
DB[:items].order(:a).order(:b) # SELECT * FROM items ORDER BY b
DB[:items].order(:a).order_prepend(:b) # SELECT * FROM items ORDER BY b, a
838 839 840 841 |
# File 'lib/sequel/dataset/query.rb', line 838 def order_prepend(*columns, &block) ds = order(*columns, &block) @opts[:order] ? ds.order_append(*@opts[:order]) : ds end |
#ordered_expression_sql_append(sql, oe) ⇒ Object
Append literalization of ordered expression to SQL string.
602 603 604 605 606 607 608 609 610 611 612 613 614 615 616 617 618 619 620 621 622 623 624 625 626 627 628 |
# File 'lib/sequel/dataset/sql.rb', line 602 def ordered_expression_sql_append(sql, oe) if emulate = requires_emulating_nulls_first? case oe.nulls when :first null_order = 0 when :last null_order = 2 end if null_order literal_append(sql, Sequel.case({{oe.expression=>nil}=>null_order}, 1)) sql << ", " end end literal_append(sql, oe.expression) sql << (oe.descending ? ' DESC' : ' ASC') unless emulate case oe.nulls when :first sql << " NULLS FIRST" when :last sql << " NULLS LAST" end end end |
#paged_each(opts = OPTS) ⇒ Object
Yields each row in the dataset, but internally uses multiple queries as needed to process the entire result set without keeping all rows in the dataset in memory, even if the underlying driver buffers all query results in memory.
Because this uses multiple queries internally, in order to remain consistent, it also uses a transaction internally. Additionally, to work correctly, the dataset must have an unambiguous order. Using an ambiguous order can result in an infinite loop, as well as subtler bugs such as yielding duplicate rows or rows being skipped.
Sequel checks that the datasets using this method have an order, but it cannot ensure that the order is unambiguous.
Note that this method is not safe to use on many adapters if you are running additional queries inside the provided block. If you are running queries inside the block, use a separate thread or shard inside paged_each.
Options:
- :rows_per_fetch
-
The number of rows to fetch per query. Defaults to 1000.
- :strategy
-
The strategy to use for paging of results. By default this is :offset, for using an approach with a limit and offset for every page. This can be set to :filter, which uses a limit and a filter that excludes rows from previous pages. In order for this strategy to work, you must be selecting the columns you are ordering by, and none of the columns can contain NULLs. Note that some Sequel adapters have optimized implementations that will use cursors or streaming regardless of the :strategy option used.
- :filter_values
-
If the strategy: :filter option is used, this option should be a proc that accepts the last retrieved row for the previous page and an array of ORDER BY expressions, and returns an array of values relating to those expressions for the last retrieved row. You will need to use this option if your ORDER BY expressions are not simple columns, if they contain qualified identifiers that would be ambiguous unqualified, if they contain any identifiers that are aliased in SELECT, and potentially other cases.
- :skip_transaction
-
Do not use a transaction. This can be useful if you want to prevent a lock on the database table, at the expense of consistency.
Examples:
DB[:table].order(:id).paged_each{|row| }
# SELECT * FROM table ORDER BY id LIMIT 1000
# SELECT * FROM table ORDER BY id LIMIT 1000 OFFSET 1000
# ...
DB[:table].order(:id).paged_each(rows_per_fetch: 100){|row| }
# SELECT * FROM table ORDER BY id LIMIT 100
# SELECT * FROM table ORDER BY id LIMIT 100 OFFSET 100
# ...
DB[:table].order(:id).paged_each(strategy: :filter){|row| }
# SELECT * FROM table ORDER BY id LIMIT 1000
# SELECT * FROM table WHERE id > 1001 ORDER BY id LIMIT 1000
# ...
DB[:table].order(:id).paged_each(strategy: :filter,
filter_values: lambda{|row, exprs| [row[:id]]}){|row| }
# SELECT * FROM table ORDER BY id LIMIT 1000
# SELECT * FROM table WHERE id > 1001 ORDER BY id LIMIT 1000
# ...
618 619 620 621 622 623 624 625 626 627 628 629 630 631 632 633 634 635 636 637 638 639 640 641 642 643 644 645 646 647 648 649 650 651 652 653 654 655 656 657 658 659 660 661 662 663 664 665 666 667 668 669 670 671 672 673 674 675 676 |
# File 'lib/sequel/dataset/actions.rb', line 618 def paged_each(opts=OPTS) unless @opts[:order] raise Sequel::Error, "Dataset#paged_each requires the dataset be ordered" end unless defined?(yield) return enum_for(:paged_each, opts) end total_limit = @opts[:limit] offset = @opts[:offset] if server = @opts[:server] opts = Hash[opts] opts[:server] = server end rows_per_fetch = opts[:rows_per_fetch] || 1000 strategy = if offset || total_limit :offset else opts[:strategy] || :offset end db.transaction(opts) do case strategy when :filter filter_values = opts[:filter_values] || proc{|row, exprs| exprs.map{|e| row[hash_key_symbol(e)]}} base_ds = ds = limit(rows_per_fetch) while ds last_row = nil ds.each do |row| last_row = row yield row end ds = (base_ds.where(ignore_values_preceding(last_row, &filter_values)) if last_row) end else offset ||= 0 num_rows_yielded = rows_per_fetch total_rows = 0 while num_rows_yielded == rows_per_fetch && (total_limit.nil? || total_rows < total_limit) if total_limit && total_rows + rows_per_fetch > total_limit rows_per_fetch = total_limit - total_rows end num_rows_yielded = 0 limit(rows_per_fetch, offset).each do |row| num_rows_yielded += 1 total_rows += 1 if total_limit yield row end offset += rows_per_fetch end end end self end |
#placeholder_literal_string_sql_append(sql, pls) ⇒ Object
Append literalization of placeholder literal string to SQL string.
631 632 633 634 635 636 637 638 639 640 641 642 643 644 645 646 647 648 649 650 651 652 653 654 655 656 657 658 659 660 661 662 663 664 665 666 667 668 669 670 671 672 |
# File 'lib/sequel/dataset/sql.rb', line 631 def placeholder_literal_string_sql_append(sql, pls) args = pls.args str = pls.str sql << '(' if pls.parens if args.is_a?(Hash) if args.empty? sql << str else re = /:(#{args.keys.map{|k| Regexp.escape(k.to_s)}.join('|')})\b/ while true previous, q, str = str.partition(re) sql << previous literal_append(sql, args[($1||q[1..-1].to_s).to_sym]) unless q.empty? break if str.empty? end end elsif str.is_a?(Array) len = args.length str.each_with_index do |s, i| sql << s literal_append(sql, args[i]) unless i == len end unless str.length == args.length || str.length == args.length + 1 raise Error, "Mismatched number of placeholders (#{str.length}) and placeholder arguments (#{args.length}) when using placeholder array" end else i = -1 match_len = args.length - 1 while true previous, q, str = str.partition('?') sql << previous literal_append(sql, args.at(i+=1)) unless q.empty? if str.empty? unless i == match_len raise Error, "Mismatched number of placeholders (#{i+1}) and placeholder arguments (#{args.length}) when using placeholder string" end break end end end sql << ')' if pls.parens end |
#placeholder_literalizer_class ⇒ Object
The class to use for placeholder literalizers for the current dataset.
162 163 164 |
# File 'lib/sequel/dataset/misc.rb', line 162 def placeholder_literalizer_class ::Sequel::Dataset::PlaceholderLiteralizer end |
#placeholder_literalizer_loader(&block) ⇒ Object
A placeholder literalizer loader for the current dataset.
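A hedged sketch of typical usage (illustrative only): the block receives the placeholder literalizer and the dataset, and pl.arg marks a value that is supplied when the loader is later used.
loader = DB[:items].placeholder_literalizer_loader do |pl, ds|
  ds.where(id: pl.arg)
end
loader.all(1)  # SELECT * FROM items WHERE (id = 1)
loader.all(2)  # SELECT * FROM items WHERE (id = 2)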
167 168 169 |
# File 'lib/sequel/dataset/misc.rb', line 167 def placeholder_literalizer_loader(&block) placeholder_literalizer_class.loader(self, &block) end |
#prepare(type, name, *values) ⇒ Object
Prepare an SQL statement for later execution. Takes a type similar to #call, and the name symbol of the prepared statement.
This returns a clone of the dataset extended with PreparedStatementMethods, which you can call with the hash of bind variables to use. The prepared statement is also stored in the associated Database, where it can be called by name. The following usage is identical:
ps = DB[:table].where(name: :$name).prepare(:first, :select_by_name)
ps.call(name: 'Blah')
# SELECT * FROM table WHERE name = ? -- ('Blah')
# => {:id=>1, :name=>'Blah'}
DB.call(:select_by_name, name: 'Blah') # Same thing
373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 |
# File 'lib/sequel/dataset/prepared_statements.rb', line 373 def prepare(type, name, *values) ps = to_prepared_statement(type, values, :name=>name, :extend=>prepared_statement_modules, :no_delayed_evaluations=>true) ps = if ps.send(:emulate_prepared_statements?) ps = ps.with_extend(EmulatePreparedStatementMethods) ps.send(:emulated_prepared_statement, type, name, values) else sql = ps.prepared_sql ps.prepared_args.freeze ps.clone(:prepared_sql=>sql, :sql=>sql) end db.set_prepared_statement(name, ps) ps end |
#provides_accurate_rows_matched? ⇒ Boolean
Whether this dataset will provide an accurate number of rows matched for delete and update statements, true by default. Accurate in this case means the number of rows matched by the dataset’s filter.
19 20 21 |
# File 'lib/sequel/dataset/features.rb', line 19 def provides_accurate_rows_matched? true end |
#qualified_identifier_sql_append(sql, table, column = (c = table.column; table = table.table; c)) ⇒ Object
Append literalization of qualified identifier to SQL string. If 3 arguments are given, the 2nd should be the table/qualifier and the third should be the column/qualified identifier. If 2 arguments are given, the 2nd should be an SQL::QualifiedIdentifier.
677 678 679 680 681 |
# File 'lib/sequel/dataset/sql.rb', line 677 def qualified_identifier_sql_append(sql, table, column=(c = table.column; table = table.table; c)) identifier_append(sql, table) sql << '.' identifier_append(sql, column) end |
#qualify(table = (cache=true; first_source)) ⇒ Object
Qualify to the given table, or first source if no table is given.
DB[:items].where(id: 1).qualify
# SELECT items.* FROM items WHERE (items.id = 1)
DB[:items].where(id: 1).qualify(:i)
# SELECT i.* FROM items WHERE (i.id = 1)
850 851 852 853 854 855 856 857 858 859 860 861 862 863 864 |
# File 'lib/sequel/dataset/query.rb', line 850 def qualify(table=(cache=true; first_source)) o = @opts return self if o[:sql] pr = proc do h = {} (o.keys & QUALIFY_KEYS).each do |k| h[k] = qualified_expression(o[k], table) end h[:select] = [SQL::ColumnAll.new(table)].freeze if !o[:select] || o[:select].empty? clone(h) end cache ? cached_dataset(:_qualify_ds, &pr) : pr.call end |
#quote_identifier_append(sql, name) ⇒ Object
Append literalization of unqualified identifier to SQL string. Adds quoting to identifiers (columns and tables). If identifiers are not being quoted, appends the name as a string. If identifiers are being quoted, quotes the name using quoted_identifier_append.
687 688 689 690 691 692 693 694 695 696 697 698 699 |
# File 'lib/sequel/dataset/sql.rb', line 687 def quote_identifier_append(sql, name) if name.is_a?(LiteralString) sql << name else name = name.value if name.is_a?(SQL::Identifier) name = input_identifier(name) if quote_identifiers? quoted_identifier_append(sql, name) else sql << name end end end |
#quote_identifiers? ⇒ Boolean
Whether this dataset quotes identifiers.
12 13 14 |
# File 'lib/sequel/dataset/features.rb', line 12 def quote_identifiers? @opts.fetch(:quote_identifiers, true) end |
#quote_schema_table_append(sql, table) ⇒ Object
Append literalization of identifier or unqualified identifier to SQL string.
702 703 704 705 706 707 708 709 |
# File 'lib/sequel/dataset/sql.rb', line 702 def quote_schema_table_append(sql, table) schema, table = schema_and_table(table) if schema quote_identifier_append(sql, schema) sql << '.' end quote_identifier_append(sql, table) end |
#quoted_identifier_append(sql, name) ⇒ Object
Append literalization of quoted identifier to SQL string. This method quotes the given name with the SQL standard double quote. It should be overridden by subclasses to provide quoting that does not match the SQL standard, such as the backtick (used by MySQL and SQLite).
715 716 717 |
# File 'lib/sequel/dataset/sql.rb', line 715 def quoted_identifier_append(sql, name) sql << '"' << name.to_s.gsub('"', '""') << '"' end |
#recursive_cte_requires_column_aliases? ⇒ Boolean
Whether you must use a column alias list for recursive CTEs, false by default.
24 25 26 |
# File 'lib/sequel/dataset/features.rb', line 24 def recursive_cte_requires_column_aliases? false end |
#requires_placeholder_type_specifiers? ⇒ Boolean
Whether type specifiers are required for prepared statement/bound variable argument placeholders (i.e. :bv__integer), false by default.
41 42 43 |
# File 'lib/sequel/dataset/features.rb', line 41 def requires_placeholder_type_specifiers? false end |
#requires_sql_standard_datetimes? ⇒ Boolean
Whether the dataset requires SQL standard datetimes. False by default, as most databases allow strings in ISO 8601 format. Kept only for backwards compatibility; it is no longer used internally and should not be used in new code.
33 34 35 36 |
# File 'lib/sequel/dataset/features.rb', line 33 def requires_sql_standard_datetimes? # SEQUEL6: Remove false end |
#returning(*values) ⇒ Object
Modify the RETURNING clause, only supported on a few databases. If returning is used, instead of insert returning the autogenerated primary key or update/delete returning the number of modified rows, results are returned using fetch_rows.
DB[:items].returning # RETURNING *
DB[:items].returning(nil) # RETURNING NULL
DB[:items].returning(:id, :name) # RETURNING id, name
DB[:items].returning.insert(a: 1) do |hash|
# hash for each row inserted, with values for all columns
end
DB[:items].returning.update(a: 1) do |hash|
# hash for each row updated, with values for all columns
end
DB[:items].returning.delete do |hash|
# hash for each row deleted, with values for all columns
end
884 885 886 887 888 889 890 891 892 893 894 895 |
# File 'lib/sequel/dataset/query.rb', line 884 def returning(*values) if values.empty? return self if opts[:returning] == EMPTY_ARRAY cached_dataset(:_returning_ds) do raise Error, "RETURNING is not supported on #{db.database_type}" unless supports_returning?(:insert) clone(:returning=>EMPTY_ARRAY) end else raise Error, "RETURNING is not supported on #{db.database_type}" unless supports_returning?(:insert) clone(:returning=>values.freeze) end end |
#reverse(*order, &block) ⇒ Object
Returns a copy of the dataset with the order reversed. If no order is given, the existing order is inverted.
DB[:items].reverse(:id) # SELECT * FROM items ORDER BY id DESC
DB[:items].reverse{foo(bar)} # SELECT * FROM items ORDER BY foo(bar) DESC
DB[:items].order(:id).reverse # SELECT * FROM items ORDER BY id DESC
DB[:items].order(:id).reverse(Sequel.desc(:name)) # SELECT * FROM items ORDER BY name ASC
904 905 906 907 908 909 910 911 |
# File 'lib/sequel/dataset/query.rb', line 904 def reverse(*order, &block) if order.empty? && !block cached_dataset(:_reverse_ds){order(*invert_order(@opts[:order]))} else virtual_row_columns(order, block) order(*invert_order(order.empty? ? @opts[:order] : order.freeze)) end end |
#reverse_order(*order, &block) ⇒ Object
Alias of reverse.
914 915 916 |
# File 'lib/sequel/dataset/query.rb', line 914 def reverse_order(*order, &block) reverse(*order, &block) end |
#row_number_column ⇒ Object
The alias to use for the row_number column, used when emulating OFFSET support and for eager limit strategies
173 174 175 |
# File 'lib/sequel/dataset/misc.rb', line 173 def row_number_column :x_sequel_row_number_x end |
#row_proc ⇒ Object
The row_proc for this dataset, which should be any object that responds to call with a single hash argument and returns the object you want #each to return.
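For example (an illustrative sketch; the lambda and its behavior are hypothetical):
ds = DB[:items].with_row_proc(lambda{|h| h[:id]})
ds.row_proc          # => the lambda set above
DB[:items].row_proc  # => nil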
179 180 181 |
# File 'lib/sequel/dataset/misc.rb', line 179 def row_proc @opts[:row_proc] end |
#schema_and_table(table_name, sch = nil) ⇒ Object
Split the schema information from the table, returning two strings, one for the schema and one for the table. The returned schema may be nil, but the table will always have a string value.
Note that this function does not handle tables with more than one level of qualification (e.g. database.schema.table on Microsoft SQL Server).
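Illustrative examples of the split, assuming ds is any dataset:
ds.schema_and_table(:posts)                   # [nil, 'posts']
ds.schema_and_table(Sequel[:public][:posts])  # ['public', 'posts']
ds.schema_and_table('posts', 'public')        # ['public', 'posts']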
726 727 728 729 730 731 732 733 734 735 736 737 738 739 740 741 |
# File 'lib/sequel/dataset/sql.rb', line 726 def schema_and_table(table_name, sch=nil) sch = sch.to_s if sch case table_name when Symbol s, t, _ = split_symbol(table_name) [s||sch, t] when SQL::QualifiedIdentifier [table_name.table.to_s, table_name.column.to_s] when SQL::Identifier [sch, table_name.value.to_s] when String [sch, table_name] else raise Error, 'table_name should be a Symbol, SQL::QualifiedIdentifier, SQL::Identifier, or String' end end |
#select(*columns, &block) ⇒ Object
Returns a copy of the dataset with the columns selected changed to the given columns. This also takes a virtual row block, similar to where.
DB[:items].select(:a) # SELECT a FROM items
DB[:items].select(:a, :b) # SELECT a, b FROM items
DB[:items].select{[a, sum(b)]} # SELECT a, sum(b) FROM items
925 926 927 928 |
# File 'lib/sequel/dataset/query.rb', line 925 def select(*columns, &block) virtual_row_columns(columns, block) clone(:select => columns.freeze) end |
#select_all(*tables) ⇒ Object
Returns a copy of the dataset selecting the wildcard if no arguments are given. If arguments are given, treat them as tables and select all columns (using the wildcard) from each table.
DB[:items].select(:a).select_all # SELECT * FROM items
DB[:items].select_all(:items) # SELECT items.* FROM items
DB[:items].select_all(:items, :foo) # SELECT items.*, foo.* FROM items
937 938 939 940 941 942 943 944 |
# File 'lib/sequel/dataset/query.rb', line 937 def select_all(*tables) if tables.empty? return self unless opts[:select] cached_dataset(:_select_all_ds){clone(:select => nil)} else select(*tables.map{|t| i, a = split_alias(t); a || i}.map!{|t| SQL::ColumnAll.new(t)}.freeze) end end |
#select_append(*columns, &block) ⇒ Object
Returns a copy of the dataset with the given columns added to the existing selected columns. If no columns are currently selected, it will select the columns given in addition to *.
DB[:items].select(:a).select(:b) # SELECT b FROM items
DB[:items].select(:a).select_append(:b) # SELECT a, b FROM items
DB[:items].select_append(:b) # SELECT *, b FROM items
953 954 955 956 |
# File 'lib/sequel/dataset/query.rb', line 953 def select_append(*columns, &block) virtual_row_columns(columns, block) select(*(_current_select(true) + columns)) end |
#select_group(*columns, &block) ⇒ Object
Set both the select and group clauses with the given columns. Column aliases may be supplied, and will be included in the select clause. This also takes a virtual row block similar to where.
DB[:items].select_group(:a, :b)
# SELECT a, b FROM items GROUP BY a, b
DB[:items].select_group(Sequel[:c].as(:a)){f(c2)}
# SELECT c AS a, f(c2) FROM items GROUP BY c, f(c2)
967 968 969 970 |
# File 'lib/sequel/dataset/query.rb', line 967 def select_group(*columns, &block) virtual_row_columns(columns, block) select(*columns).group(*columns.map{|c| unaliased_identifier(c)}) end |
#select_hash(key_column, value_column, opts = OPTS) ⇒ Object
Returns a hash with key_column values as keys and value_column values as values. Similar to as_hash, but only selects the columns given. Like as_hash, it accepts an optional :hash parameter, into which entries will be merged.
DB[:table].select_hash(:id, :name)
# SELECT id, name FROM table
# => {1=>'a', 2=>'b', ...}
You can also provide an array of column names for either the key_column, the value column, or both:
DB[:table].select_hash([:id, :foo], [:name, :bar])
# SELECT id, foo, name, bar FROM table
# => {[1, 3]=>['a', 'c'], [2, 4]=>['b', 'd'], ...}
When using this method, you must be sure that each expression has an alias that Sequel can determine.
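For example, a hedged sketch of supplying explicit aliases so Sequel can determine the hash keys (the upper function is only illustrative):
DB[:table].select_hash(Sequel[:id].as(:i), Sequel.function(:upper, :name).as(:n))
# SELECT id AS i, upper(name) AS n FROM table
# => {1=>'A', 2=>'B', ...}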
696 697 698 |
# File 'lib/sequel/dataset/actions.rb', line 696 def select_hash(key_column, value_column, opts = OPTS) _select_hash(:as_hash, key_column, value_column, opts) end |
#select_hash_groups(key_column, value_column, opts = OPTS) ⇒ Object
Returns a hash with key_column values as keys and an array of value_column values. Similar to to_hash_groups, but only selects the columns given. Like to_hash_groups, it accepts an optional :hash parameter, into which entries will be merged.
DB[:table].select_hash_groups(:name, :id)
# SELECT id, name FROM table
# => {'a'=>[1, 4, ...], 'b'=>[2, ...], ...}
You can also provide an array of column names for either the key_column, the value column, or both:
DB[:table].select_hash_groups([:first, :middle], [:last, :id])
# SELECT first, middle, last, id FROM table
# => {['a', 'b']=>[['c', 1], ['d', 2], ...], ...}
When using this method, you must be sure that each expression has an alias that Sequel can determine.
717 718 719 |
# File 'lib/sequel/dataset/actions.rb', line 717 def select_hash_groups(key_column, value_column, opts = OPTS) _select_hash(:to_hash_groups, key_column, value_column, opts) end |
#select_map(column = nil, &block) ⇒ Object
Selects the column given (either as an argument or as a block), and returns an array of all values of that column in the dataset. If you give a block argument that returns an array with multiple entries, the contents of the resulting array are undefined. Raises an Error if called with both an argument and a block.
DB[:table].select_map(:id) # SELECT id FROM table
# => [3, 5, 8, 1, ...]
DB[:table].select_map{id * 2} # SELECT (id * 2) FROM table
# => [6, 10, 16, 2, ...]
You can also provide an array of column names:
DB[:table].select_map([:id, :name]) # SELECT id, name FROM table
# => [[1, 'A'], [2, 'B'], [3, 'C'], ...]
If you provide an array of expressions, you must be sure that each entry in the array has an alias that Sequel can determine.
740 741 742 |
# File 'lib/sequel/dataset/actions.rb', line 740 def select_map(column=nil, &block) _select_map(column, false, &block) end |
#select_more(*columns, &block) ⇒ Object
Alias for select_append.
973 974 975 |
# File 'lib/sequel/dataset/query.rb', line 973 def select_more(*columns, &block) select_append(*columns, &block) end |
#select_order_map(column = nil, &block) ⇒ Object
The same as select_map, but in addition orders the array by the column.
DB[:table].select_order_map(:id) # SELECT id FROM table ORDER BY id
# => [1, 2, 3, 4, ...]
DB[:table].select_order_map{id * 2} # SELECT (id * 2) FROM table ORDER BY (id * 2)
# => [2, 4, 6, 8, ...]
You can also provide an array of column names:
DB[:table].select_order_map([:id, :name]) # SELECT id, name FROM table ORDER BY id, name
# => [[1, 'A'], [2, 'B'], [3, 'C'], ...]
If you provide an array of expressions, you must be sure that each entry in the array has an alias that Sequel can determine.
759 760 761 |
# File 'lib/sequel/dataset/actions.rb', line 759 def select_order_map(column=nil, &block) _select_map(column, true, &block) end |
#select_prepend(*columns, &block) ⇒ Object
Returns a copy of the dataset with the given columns added to the existing selected columns. If no columns are currently selected, it will select the columns given in addition to *.
DB[:items].select(:a).select(:b) # SELECT b FROM items
DB[:items].select(:a).select_prepend(:b) # SELECT b, a FROM items
DB[:items].select_prepend(:b) # SELECT b, * FROM items
984 985 986 987 |
# File 'lib/sequel/dataset/query.rb', line 984 def select_prepend(*columns, &block) virtual_row_columns(columns, block) select(*(columns + _current_select(false))) end |
#server(servr) ⇒ Object
Set the server for this dataset to use. Used to pick a specific database shard to run a query against, or to override the default (where SELECT uses :read_only database and all other queries use the :default database). This method is always available but is only useful when database sharding is being used.
DB[:items].all # Uses the :read_only or :default server
DB[:items].delete # Uses the :default server
DB[:items].server(:blah).delete # Uses the :blah server
998 999 1000 |
# File 'lib/sequel/dataset/query.rb', line 998 def server(servr) clone(:server=>servr) end |
#server?(server) ⇒ Boolean
If the database uses sharding and the current dataset has not had a server set, return a cloned dataset that uses the given server. Otherwise, return the receiver directly instead of returning a clone.
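A brief sketch of the difference, assuming the Database is sharded and has a :read_only shard configured:
DB[:items].server?(:read_only)             # acts like server(:read_only)
DB[:items].server(:a).server?(:read_only)  # returns the receiver, :a is kept
# If the database is not sharded, server? always returns the receiver unchanged.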
1005 1006 1007 1008 1009 1010 1011 |
# File 'lib/sequel/dataset/query.rb', line 1005 def server?(server) if db.sharded? && !opts[:server] server(server) else self end end |
#set_graph_aliases(graph_aliases) ⇒ Object
This allows you to manually specify the graph aliases to use when using graph. You can use it to only select certain columns, and have those columns mapped to specific aliases in the result set. This is the equivalent of select for a graphed dataset, and must be used instead of select whenever graphing is used.
graph_aliases should be a hash with keys being symbols of column aliases, and values being either symbols or arrays with one to three elements. If the value is a symbol, it is assumed to be the same as a one element array containing that symbol. The first element of the array should be the table alias symbol. The second should be the actual column name symbol. If the array only has a single element the column name symbol will be assumed to be the same as the corresponding hash key. If the array has a third element, it is used as the value returned, instead of table_alias.column_name.
DB[:artists].graph(:albums, artist_id: :id).
set_graph_aliases(name: :artists,
album_name: [:albums, :name],
forty_two: [:albums, :fourtwo, 42]).first
# SELECT artists.name, albums.name AS album_name, 42 AS forty_two ...
244 245 246 247 248 249 250 251 |
# File 'lib/sequel/dataset/graph.rb', line 244 def set_graph_aliases(graph_aliases) columns, graph_aliases = graph_alias_columns(graph_aliases) if graph = opts[:graph] select(*columns).clone(:graph => graph.merge(:column_aliases=>graph_aliases.freeze).freeze) else raise Error, "cannot call #set_graph_aliases on an ungraphed dataset" end end |
#single_record ⇒ Object
Limits the dataset to one record, and returns the first record in the dataset, or nil if the dataset has no records. Users should probably use first instead of this method. Example:
DB[:test].single_record # SELECT * FROM test LIMIT 1
# => {:column_name=>'value'}
769 770 771 |
# File 'lib/sequel/dataset/actions.rb', line 769 def single_record _single_record_ds.single_record! end |
#single_record! ⇒ Object
Returns the first record in the dataset, without limiting the dataset. Returns nil if the dataset has no records. Users should probably use first instead of this method. This should only be used if you know the dataset is already limited to a single record. This method may be desirable to use for performance reasons, as it does not clone the receiver. Example:
DB[:test].single_record! # SELECT * FROM test
# => {:column_name=>'value'}
781 782 783 |
# File 'lib/sequel/dataset/actions.rb', line 781 def single_record! with_sql_first(select_sql) end |
#single_value ⇒ Object
Returns the first value of the first record in the dataset. Returns nil if the dataset is empty. Users should generally use get instead of this method. Example:
DB[:test].single_value # SELECT * FROM test LIMIT 1
# => 'value'
791 792 793 794 795 796 |
# File 'lib/sequel/dataset/actions.rb', line 791 def single_value single_value_ds.each do |r| r.each{|_, v| return v} end nil end |
#single_value! ⇒ Object
Returns the first value of the first record in the dataset, without limiting the dataset. Returns nil if the dataset is empty. Users should generally use get instead of this method. Should not be used on graphed datasets or datasets that have row_procs that don’t return hashes. This method may be desirable to use for performance reasons, as it does not clone the receiver.
DB[:test].single_value! # SELECT * FROM test
# => 'value'
806 807 808 |
# File 'lib/sequel/dataset/actions.rb', line 806 def single_value! with_sql_single_value(select_sql) end |
#skip_limit_check ⇒ Object
Specify that the check for limits/offsets when updating/deleting be skipped for the dataset.
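A hedged example of the effect (the exact behavior and SQL depend on the database):
DB[:items].limit(10).delete                   # raises an error, deleting from a limited dataset is not allowed by default
DB[:items].limit(10).skip_limit_check.delete  # DELETE FROM items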
1014 1015 1016 1017 1018 1019 |
# File 'lib/sequel/dataset/query.rb', line 1014 def skip_limit_check return self if opts[:skip_limit_check] cached_dataset(:_skip_limit_check_ds) do clone(:skip_limit_check=>true) end end |
#skip_locked ⇒ Object
Skip locked rows when returning results from this dataset.
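For example, on databases that support SKIP LOCKED (such as PostgreSQL), roughly:
DB[:items].for_update.skip_locked
# SELECT * FROM items FOR UPDATE SKIP LOCKED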
1022 1023 1024 1025 1026 1027 1028 |
# File 'lib/sequel/dataset/query.rb', line 1022 def skip_locked return self if opts[:skip_locked] cached_dataset(:_skip_locked_ds) do raise(Error, 'This dataset does not support skipping locked rows') unless supports_skip_locked? clone(:skip_locked=>true) end end |
#split_alias(c) ⇒ Object
Splits a possible implicit alias in c, handling both SQL::AliasedExpressions and Symbols. Returns an array of two elements, with the first being the main expression, and the second being the alias.
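Illustrative examples of the return values:
ds.split_alias(:id)                 # [:id, nil]
ds.split_alias(Sequel.as(:id, :i))  # [:id, :i]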
186 187 188 189 190 191 192 193 194 195 196 197 198 |
# File 'lib/sequel/dataset/misc.rb', line 186 def split_alias(c) case c when Symbol c_table, column, aliaz = split_symbol(c) [c_table ? SQL::QualifiedIdentifier.new(c_table, column.to_sym) : column.to_sym, aliaz] when SQL::AliasedExpression [c.expression, c.alias] when SQL::JoinClause [c.table, c.table_alias] else [c, nil] end end |
#split_qualifiers(table_name, *args) ⇒ Object
Splits table_name into an array of strings.
ds.split_qualifiers(:s) # ['s']
ds.split_qualifiers(Sequel[:t][:s]) # ['t', 's']
ds.split_qualifiers(Sequel[:d][:t][:s]) # ['d', 't', 's']
ds.split_qualifiers(Sequel.qualify(Sequel[:h][:d], Sequel[:t][:s])) # ['h', 'd', 't', 's']
749 750 751 752 753 754 755 756 757 |
# File 'lib/sequel/dataset/sql.rb', line 749 def split_qualifiers(table_name, *args) case table_name when SQL::QualifiedIdentifier split_qualifiers(table_name.table, nil) + split_qualifiers(table_name.column, nil) else sch, table = schema_and_table(table_name, *args) sch ? [sch, table] : [table] end end |
#sql ⇒ Object
Same as select_sql, not aliased directly to make subclassing simpler.
176 177 178 |
# File 'lib/sequel/dataset/sql.rb', line 176 def sql select_sql end |
#subscript_sql_append(sql, s) ⇒ Object
Append literalization of subscripts (SQL array accesses) to SQL string.
760 761 762 763 764 765 766 767 768 769 770 771 772 773 774 775 776 777 778 779 780 781 782 783 784 785 |
# File 'lib/sequel/dataset/sql.rb', line 760 def subscript_sql_append(sql, s) case s.expression when Symbol, SQL::Subscript, SQL::Identifier, SQL::QualifiedIdentifier # nothing else wrap_expression = true sql << '(' end literal_append(sql, s.expression) if wrap_expression sql << ')[' else sql << '[' end sub = s.sub if sub.length == 1 && (range = sub.first).is_a?(Range) literal_append(sql, range.begin) sql << ':' e = range.end e -= 1 if range.exclude_end? && e.is_a?(Integer) literal_append(sql, e) else expression_list_append(sql, s.sub) end sql << ']' end |
#sum(arg = (no_arg = true), &block) ⇒ Object
Returns the sum for the given column/expression. Uses a virtual row block if no column is given.
DB[:table].sum(:id) # SELECT sum(id) FROM table LIMIT 1
# => 55
DB[:table].sum{function(column)} # SELECT sum(function(column)) FROM table LIMIT 1
# => 10
817 818 819 820 |
# File 'lib/sequel/dataset/actions.rb', line 817 def sum(arg=(no_arg = true), &block) arg = Sequel.virtual_row(&block) if no_arg _aggregate(:sum, arg) end |
#supports_cte?(type = :select) ⇒ Boolean
Whether the dataset supports common table expressions, false by default. If given, type can be :select, :insert, :update, or :delete, in which case it determines whether WITH is supported for the respective statement type.
48 49 50 |
# File 'lib/sequel/dataset/features.rb', line 48 def supports_cte?(type=:select) false end |
#supports_cte_in_subqueries? ⇒ Boolean
Whether the dataset supports common table expressions in subqueries, false by default. If false, applies the WITH clause to the main query, which can cause issues if multiple WITH clauses use the same name.
55 56 57 |
# File 'lib/sequel/dataset/features.rb', line 55 def supports_cte_in_subqueries? false end |
#supports_deleting_joins? ⇒ Boolean
Whether deleting from joined datasets is supported, false by default.
60 61 62 |
# File 'lib/sequel/dataset/features.rb', line 60 def supports_deleting_joins? end |
#supports_derived_column_lists? ⇒ Boolean
Whether the database supports derived column lists (e.g. “table_expr AS table_alias(column_alias1, column_alias2, …)”), true by default.
67 68 69 |
# File 'lib/sequel/dataset/features.rb', line 67 def supports_derived_column_lists? true end |
#supports_distinct_on? ⇒ Boolean
Whether the dataset supports or can emulate the DISTINCT ON clause, false by default.
72 73 74 |
# File 'lib/sequel/dataset/features.rb', line 72 def supports_distinct_on? false end |
#supports_group_cube? ⇒ Boolean
Whether the dataset supports CUBE with GROUP BY, false by default.
77 78 79 |
# File 'lib/sequel/dataset/features.rb', line 77 def supports_group_cube? false end |
#supports_group_rollup? ⇒ Boolean
Whether the dataset supports ROLLUP with GROUP BY, false by default.
82 83 84 |
# File 'lib/sequel/dataset/features.rb', line 82 def supports_group_rollup? false end |
#supports_grouping_sets? ⇒ Boolean
Whether the dataset supports GROUPING SETS with GROUP BY, false by default.
87 88 89 |
# File 'lib/sequel/dataset/features.rb', line 87 def supports_grouping_sets? false end |
#supports_insert_select? ⇒ Boolean
Whether this dataset supports the insert_select method for returning all column values directly from an insert query, false by default.
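Where it is supported (for example on PostgreSQL, via RETURNING), insert_select can be used roughly as follows (a hedged sketch):
DB[:items].insert_select(a: 1)
# INSERT INTO items (a) VALUES (1) RETURNING *
# => {:id=>1, :a=>1}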
93 94 95 |
# File 'lib/sequel/dataset/features.rb', line 93 def supports_insert_select? supports_returning?(:insert) end |
#supports_intersect_except? ⇒ Boolean
Whether the dataset supports the INTERSECT and EXCEPT compound operations, true by default.
98 99 100 |
# File 'lib/sequel/dataset/features.rb', line 98 def supports_intersect_except? true end |
#supports_intersect_except_all? ⇒ Boolean
Whether the dataset supports the INTERSECT ALL and EXCEPT ALL compound operations, true by default.
103 104 105 |
# File 'lib/sequel/dataset/features.rb', line 103 def supports_intersect_except_all? true end |
#supports_is_true? ⇒ Boolean
Whether the dataset supports the IS TRUE syntax, true by default.
108 109 110 |
# File 'lib/sequel/dataset/features.rb', line 108 def supports_is_true? true end |
#supports_join_using? ⇒ Boolean
Whether the dataset supports the JOIN table USING (column1, …) syntax, true by default. If false, support is emulated using JOIN table ON (table.column1 = other_table.column1).
114 115 116 |
# File 'lib/sequel/dataset/features.rb', line 114 def supports_join_using? true end |
#supports_lateral_subqueries? ⇒ Boolean
Whether the dataset supports LATERAL for subqueries in the FROM or JOIN clauses, false by default.
119 120 121 |
# File 'lib/sequel/dataset/features.rb', line 119 def supports_lateral_subqueries? false end |
#supports_limits_in_correlated_subqueries? ⇒ Boolean
Whether limits are supported in correlated subqueries, true by default.
124 125 126 |
# File 'lib/sequel/dataset/features.rb', line 124 def supports_limits_in_correlated_subqueries? true end |
#supports_merge? ⇒ Boolean
Whether the MERGE statement is supported, false by default.
134 135 136 |
# File 'lib/sequel/dataset/features.rb', line 134 def supports_merge? false end |
#supports_modifying_joins? ⇒ Boolean
Whether modifying joined datasets is supported, false by default.
139 140 141 |
# File 'lib/sequel/dataset/features.rb', line 139 def supports_modifying_joins? false end |
#supports_multiple_column_in? ⇒ Boolean
Whether the IN/NOT IN operators support multiple columns when an array of values is given, true by default.
145 146 147 |
# File 'lib/sequel/dataset/features.rb', line 145 def supports_multiple_column_in? true end |
#supports_nowait? ⇒ Boolean
Whether the dataset supports raising an error instead of waiting for locked rows when returning data, false by default.
129 130 131 |
# File 'lib/sequel/dataset/features.rb', line 129 def supports_nowait? false end |
#supports_offsets_in_correlated_subqueries? ⇒ Boolean
Whether offsets are supported in correlated subqueries, true by default.
150 151 152 |
# File 'lib/sequel/dataset/features.rb', line 150 def supports_offsets_in_correlated_subqueries? true end |
#supports_ordered_distinct_on? ⇒ Boolean
Whether the dataset supports or can fully emulate the DISTINCT ON clause, including respecting the ORDER BY clause, false by default.
156 157 158 |
# File 'lib/sequel/dataset/features.rb', line 156 def supports_ordered_distinct_on? supports_distinct_on? end |
#supports_placeholder_literalizer? ⇒ Boolean
Whether placeholder literalizers are supported, true by default.
161 162 163 |
# File 'lib/sequel/dataset/features.rb', line 161 def supports_placeholder_literalizer? true end |
#supports_regexp? ⇒ Boolean
Whether the dataset supports pattern matching by regular expressions, false by default.
166 167 168 |
# File 'lib/sequel/dataset/features.rb', line 166 def supports_regexp? false end |
#supports_replace? ⇒ Boolean
Whether the dataset supports REPLACE syntax, false by default.
171 172 173 |
# File 'lib/sequel/dataset/features.rb', line 171 def supports_replace? false end |
#supports_returning?(type) ⇒ Boolean
Whether the RETURNING clause is supported for the given type of query, false by default. type can be :insert, :update, or :delete.
177 178 179 |
# File 'lib/sequel/dataset/features.rb', line 177 def supports_returning?(type) false end |
#supports_select_all_and_column? ⇒ Boolean
Whether the database supports SELECT *, column FROM table, true by default.
187 188 189 |
# File 'lib/sequel/dataset/features.rb', line 187 def supports_select_all_and_column? true end |
#supports_skip_locked? ⇒ Boolean
Whether the dataset supports skipping locked rows when returning data, false by default.
182 183 184 |
# File 'lib/sequel/dataset/features.rb', line 182 def supports_skip_locked? false end |
#supports_timestamp_timezones? ⇒ Boolean
Whether the dataset supports timezones in literal timestamps, false by default.
194 195 196 197 |
# File 'lib/sequel/dataset/features.rb', line 194 def supports_timestamp_timezones? # SEQUEL6: Remove false end |
#supports_timestamp_usecs? ⇒ Boolean
Whether the dataset supports fractional seconds in literal timestamps, true by default.
201 202 203 |
# File 'lib/sequel/dataset/features.rb', line 201 def supports_timestamp_usecs? true end |
#supports_updating_joins? ⇒ Boolean
Whether updating joined datasets is supported, false by default.
206 207 208 |
# File 'lib/sequel/dataset/features.rb', line 206 def supports_updating_joins? supports_modifying_joins? end |
#supports_where_true? ⇒ Boolean
Whether the dataset supports WHERE TRUE (or WHERE 1 for databases that use 1 for true), true by default.
235 236 237 |
# File 'lib/sequel/dataset/features.rb', line 235 def supports_where_true? true end |
#supports_window_clause? ⇒ Boolean
Whether the dataset supports the WINDOW clause to define windows used by multiple window functions, false by default.
212 213 214 |
# File 'lib/sequel/dataset/features.rb', line 212 def supports_window_clause? false end |
#supports_window_function_frame_option?(option) ⇒ Boolean
Whether the dataset supports the given window function option. True by default. This should only be called if supports_window_functions? is true. Possible options are :rows, :range, :groups, :offset, :exclude.
224 225 226 227 228 229 230 231 |
# File 'lib/sequel/dataset/features.rb', line 224 def supports_window_function_frame_option?(option) case option when :rows, :range, :offset true else false end end |
#supports_window_functions? ⇒ Boolean
Whether the dataset supports window functions, false by default.
217 218 219 |
# File 'lib/sequel/dataset/features.rb', line 217 def supports_window_functions? false end |
#to_hash(*a) ⇒ Object
Alias of as_hash for backwards compatibility.
874 875 876 |
# File 'lib/sequel/dataset/actions.rb', line 874 def to_hash(*a) as_hash(*a) end |
#to_hash_groups(key_column, value_column = nil, opts = OPTS) ⇒ Object
Returns a hash with one column used as key and the values being an array of column values. If the value_column is not given or nil, uses the entire hash as the value.
DB[:table].to_hash_groups(:name, :id) # SELECT * FROM table
# {'Jim'=>[1, 4, 16, ...], 'Bob'=>[2], ...}
DB[:table].to_hash_groups(:name) # SELECT * FROM table
# {'Jim'=>[{:id=>1, :name=>'Jim'}, {:id=>4, :name=>'Jim'}, ...], 'Bob'=>[{:id=>2, :name=>'Bob'}], ...}
You can also provide an array of column names for either the key_column, the value column, or both:
DB[:table].to_hash_groups([:first, :middle], [:last, :id]) # SELECT * FROM table
# {['Jim', 'Bob']=>[['Smith', 1], ['Jackson', 4], ...], ...}
DB[:table].to_hash_groups([:first, :middle]) # SELECT * FROM table
# {['Jim', 'Bob']=>[{:id=>1, :first=>'Jim', :middle=>'Bob', :last=>'Smith'}, ...], ...}
Options:
- :all
-
Use all instead of each to retrieve the objects
- :hash
-
The object into which the values will be placed. If this is not given, an empty hash is used. This can be used to use a hash with a default value or default proc.
902 903 904 905 906 907 908 909 910 911 912 913 914 915 916 917 918 919 920 921 922 923 924 925 926 |
# File 'lib/sequel/dataset/actions.rb', line 902 def to_hash_groups(key_column, value_column = nil, opts = OPTS) h = opts[:hash] || {} meth = opts[:all] ? :all : :each if value_column return naked.to_hash_groups(key_column, value_column, opts) if row_proc if value_column.is_a?(Array) if key_column.is_a?(Array) public_send(meth){|r| (h[r.values_at(*key_column)] ||= []) << r.values_at(*value_column)} else public_send(meth){|r| (h[r[key_column]] ||= []) << r.values_at(*value_column)} end else if key_column.is_a?(Array) public_send(meth){|r| (h[r.values_at(*key_column)] ||= []) << r[value_column]} else public_send(meth){|r| (h[r[key_column]] ||= []) << r[value_column]} end end elsif key_column.is_a?(Array) public_send(meth){|r| (h[key_column.map{|k| r[k]}] ||= []) << r} else public_send(meth){|r| (h[r[key_column]] ||= []) << r} end h end |
#truncate ⇒ Object
Truncates the dataset. Returns nil.
DB[:table].truncate # TRUNCATE table
# => nil
932 933 934 |
# File 'lib/sequel/dataset/actions.rb', line 932 def truncate execute_ddl(truncate_sql) end |
#truncate_sql ⇒ Object
Returns a TRUNCATE SQL query string. See truncate.
DB[:items].truncate_sql # => 'TRUNCATE items'
183 184 185 186 187 188 189 190 191 192 193 194 |
# File 'lib/sequel/dataset/sql.rb', line 183 def truncate_sql if opts[:sql] static_sql(opts[:sql]) else check_truncation_allowed! check_not_limited!(:truncate) raise(InvalidOperation, "Can't truncate filtered datasets") if opts[:where] || opts[:having] t = String.new source_list_append(t, opts[:from]) _truncate_sql(t) end end |
#unfiltered ⇒ Object
Returns a copy of the dataset with no filters (HAVING or WHERE clause) applied.
DB[:items].group(:a).having(a: 1).where(:b).unfiltered
# SELECT * FROM items GROUP BY a
1034 1035 1036 1037 |
# File 'lib/sequel/dataset/query.rb', line 1034 def unfiltered return self unless opts[:where] || opts[:having] cached_dataset(:_unfiltered_ds){clone(:where => nil, :having => nil)} end |
#ungraphed ⇒ Object
Remove the splitting of results into subhashes, and all metadata related to the current graph (if any).
255 256 257 258 |
# File 'lib/sequel/dataset/graph.rb', line 255 def ungraphed return self unless opts[:graph] clone(:graph=>nil) end |
#ungrouped ⇒ Object
Returns a copy of the dataset with no grouping (GROUP or HAVING clause) applied.
DB[:items].group(:a).having(a: 1).where(:b).ungrouped
# SELECT * FROM items WHERE b
1043 1044 1045 1046 |
# File 'lib/sequel/dataset/query.rb', line 1043 def ungrouped return self unless opts[:group] || opts[:having] cached_dataset(:_ungrouped_ds){clone(:group => nil, :having => nil)} end |
#union(dataset, opts = OPTS) ⇒ Object
Adds a UNION clause using a second dataset object. A UNION compound dataset returns all rows in either the current dataset or the given dataset. Options:
- :alias
-
Use the given value as the from_self alias
- :all
-
Set to true to use UNION ALL instead of UNION, so duplicate rows can occur
- :from_self
-
Set to false to not wrap the returned dataset in a from_self, use with care.
DB[:items].union(DB[:other_items])
# SELECT * FROM (SELECT * FROM items UNION SELECT * FROM other_items) AS t1
DB[:items].union(DB[:other_items], all: true, from_self: false)
# SELECT * FROM items UNION ALL SELECT * FROM other_items
DB[:items].union(DB[:other_items], alias: :i)
# SELECT * FROM (SELECT * FROM items UNION SELECT * FROM other_items) AS i
1064 1065 1066 |
# File 'lib/sequel/dataset/query.rb', line 1064 def union(dataset, opts=OPTS) compound_clone(:union, dataset, opts) end |
#unlimited ⇒ Object
Returns a copy of the dataset with no limit or offset.
DB[:items].limit(10, 20).unlimited # SELECT * FROM items
1071 1072 1073 1074 |
# File 'lib/sequel/dataset/query.rb', line 1071 def unlimited return self unless opts[:limit] || opts[:offset] cached_dataset(:_unlimited_ds){clone(:limit=>nil, :offset=>nil)} end |
#unordered ⇒ Object
Returns a copy of the dataset with no order.
DB[:items].order(:a).unordered # SELECT * FROM items
1079 1080 1081 1082 |
# File 'lib/sequel/dataset/query.rb', line 1079 def unordered return self unless opts[:order] cached_dataset(:_unordered_ds){clone(:order=>nil)} end |
#unqualified_column_for(v) ⇒ Object
This returns an SQL::Identifier or SQL::AliasedExpression containing an SQL identifier that represents the unqualified column for the given value. The given value should be a Symbol, SQL::Identifier, SQL::QualifiedIdentifier, or SQL::AliasedExpression containing one of those. In other cases, this returns nil.
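Illustrative examples (return values described loosely):
ds.unqualified_column_for(Sequel[:items][:id])         # identifier for the unqualified id column
ds.unqualified_column_for(Sequel[:items][:id].as(:i))  # the same identifier, aliased as :i
ds.unqualified_column_for('id')                        # nil, plain strings are not handled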
205 206 207 208 209 |
# File 'lib/sequel/dataset/misc.rb', line 205 def unqualified_column_for(v) unless v.is_a?(String) _unqualified_column_for(v) end end |
#unused_table_alias(table_alias, used_aliases = []) ⇒ Object
Creates a unique table alias that hasn’t already been used in the dataset. table_alias can be any type of object accepted by alias_symbol. The symbol returned will be the implicit alias in the argument, possibly appended with “_N” if the implicit alias has already been used, where N is an integer starting at 0 and increasing until an unused one is found.
You can provide a second, additional array argument containing symbols that should not be considered valid table aliases. The current aliases for the FROM and JOIN tables are automatically included in this array.
DB[:table].unused_table_alias(:t)
# => :t
DB[:table].unused_table_alias(:table)
# => :table_0
DB[:table, :table_0].unused_table_alias(:table)
# => :table_1
DB[:table, :table_0].unused_table_alias(:table, [:table_1, :table_2])
# => :table_3
233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 |
# File 'lib/sequel/dataset/misc.rb', line 233 def unused_table_alias(table_alias, used_aliases = []) table_alias = alias_symbol(table_alias) used_aliases += opts[:from].map{|t| alias_symbol(t)} if opts[:from] used_aliases += opts[:join].map{|j| j.table_alias ? alias_alias_symbol(j.table_alias) : alias_symbol(j.table)} if opts[:join] if used_aliases.include?(table_alias) i = 0 while true ta = :"#{table_alias}_#{i}" return ta unless used_aliases.include?(ta) i += 1 end else table_alias end end |
#update(values = OPTS, &block) ⇒ Object
Updates values for the dataset. The returned value is the number of rows updated. values should be a hash where the keys are columns to set and values are the values to which to set the columns.
DB[:table].update(x: nil) # UPDATE table SET x = NULL
# => 10
DB[:table].update(x: Sequel[:x]+1, y: 0) # UPDATE table SET x = (x + 1), y = 0
# => 10
Some databases support using multiple tables in an UPDATE query. This requires multiple FROM tables (JOINs can also be used). As multiple FROM tables use an implicit CROSS JOIN, you should make sure your WHERE condition uses the appropriate filters for the FROM tables:
DB.from(:a, :b).join(:c, :d=>Sequel[:b][:e]).where{{a[:f]=>b[:g], a[:id]=>10}}.
update(:f=>Sequel[:c][:h])
# UPDATE a
# SET f = c.h
# FROM b
# INNER JOIN c ON (c.d = b.e)
# WHERE ((a.f = b.g) AND (a.id = 10))
958 959 960 961 962 963 964 965 |
# File 'lib/sequel/dataset/actions.rb', line 958 def update(values=OPTS, &block) sql = update_sql(values) if uses_returning?(:update) returning_fetch_rows(sql, &block) else execute_dui(sql) end end |
#update_sql(values = OPTS) ⇒ Object
Formats an UPDATE statement using the given values. See update.
DB[:items].update_sql(price: 100, category: 'software')
# => "UPDATE items SET price = 100, category = 'software'
Raises an Error if the dataset is grouped or includes more than one table.
203 204 205 206 207 208 209 210 211 212 213 214 215 216 |
# File 'lib/sequel/dataset/sql.rb', line 203 def update_sql(values = OPTS) return static_sql(opts[:sql]) if opts[:sql] check_update_allowed! check_not_limited!(:update) case values when LiteralString # nothing when String raise Error, "plain string passed to Dataset#update is not supported, use Sequel.lit to use a literal string" end clone(:values=>values).send(:_update_sql) end |
#where(*cond, &block) ⇒ Object
Returns a copy of the dataset with the given WHERE conditions imposed upon it.
Accepts the following argument types:
- Hash, Array of pairs
-
list of equality/inclusion expressions
- Symbol
-
taken as a boolean column argument (e.g. WHERE active)
- Sequel::SQL::BooleanExpression, Sequel::LiteralString
-
an existing condition expression, probably created using the Sequel expression filter DSL.
where also accepts a block, which should return one of the above argument types, and is treated the same way. This block yields a virtual row object, which is easy to use to create identifiers and functions. For more details on the virtual row support, see the “Virtual Rows” guide.
If both a block and regular argument are provided, they get ANDed together.
Examples:
DB[:items].where(id: 3)
# SELECT * FROM items WHERE (id = 3)
DB[:items].where(Sequel.lit('price < ?', 100))
# SELECT * FROM items WHERE price < 100
DB[:items].where([[:id, [1,2,3]], [:id, 0..10]])
# SELECT * FROM items WHERE ((id IN (1, 2, 3)) AND ((id >= 0) AND (id <= 10)))
DB[:items].where(Sequel.lit('price < 100'))
# SELECT * FROM items WHERE price < 100
DB[:items].where(:active)
# SELECT * FROM items WHERE active
DB[:items].where{price < 100}
# SELECT * FROM items WHERE (price < 100)
Multiple where calls can be chained for scoping:
software = dataset.where(category: 'software').where{price < 100}
# SELECT * FROM items WHERE ((category = 'software') AND (price < 100))
See the “Dataset Filtering” guide for more examples and details.
1126 1127 1128 |
# File 'lib/sequel/dataset/query.rb', line 1126 def where(*cond, &block) add_filter(:where, cond, &block) end |
#where_all(cond, &block) ⇒ Object
Return an array of all rows matching the given filter condition, also yielding each row to the given block. Basically the same as where(cond).all(&block), except it can be optimized to not create an intermediate dataset.
DB[:table].where_all(id: [1,2,3])
# SELECT * FROM table WHERE (id IN (1, 2, 3))
973 974 975 976 977 978 979 |
# File 'lib/sequel/dataset/actions.rb', line 973 def where_all(cond, &block) if loader = _where_loader([cond], nil) loader.all(filter_expr(cond), &block) else where(cond).all(&block) end end |
#where_each(cond, &block) ⇒ Object
Iterate over all rows matching the given filter condition, yielding each row to the given block. Basically the same as where(cond).each(&block), except it can be optimized to not create an intermediate dataset.
DB[:table].where_each(id: [1,2,3]){|row| p row}
# SELECT * FROM table WHERE (id IN (1, 2, 3))
987 988 989 990 991 992 993 |
# File 'lib/sequel/dataset/actions.rb', line 987 def where_each(cond, &block) if loader = _where_loader([cond], nil) loader.each(filter_expr(cond), &block) else where(cond).each(&block) end end |
#where_single_value(cond) ⇒ Object
Filter the dataset using the given filter condition, then return a single value. This assumes that the dataset has already been set up to limit the selection to a single column. Basically the same as where(cond).single_value, except it can be optimized to not create an intermediate dataset.
DB[:table].select(:name).where_single_value(id: 1)
# SELECT name FROM table WHERE (id = 1) LIMIT 1
1002 1003 1004 1005 1006 1007 1008 1009 1010 1011 |
# File 'lib/sequel/dataset/actions.rb', line 1002 def where_single_value(cond) if loader = cached_where_placeholder_literalizer([cond], nil, :_where_single_value_loader) do |pl| single_value_ds.where(pl.arg) end loader.get(filter_expr(cond)) else where(cond).single_value end end |
#window(name, opts) ⇒ Object
Return a clone of the dataset with an additional named window that can be referenced in window functions. See Sequel::SQL::Window for a list of options that can be passed in. Example:
DB[:items].window(:w, partition: :c1, order: :c2)
# SELECT * FROM items WINDOW w AS (PARTITION BY c1 ORDER BY c2)
1136 1137 1138 |
# File 'lib/sequel/dataset/query.rb', line 1136 def window(name, opts) clone(:window=>((@opts[:window]||EMPTY_ARRAY) + [[name, SQL::Window.new(opts)].freeze]).freeze) end |
#window_sql_append(sql, opts) ⇒ Object
Append literalization of windows (for window functions) to SQL string.
788 789 790 791 792 793 794 795 796 797 798 799 800 801 802 803 804 805 806 807 808 809 810 811 812 813 814 815 816 817 818 819 820 821 822 823 824 825 826 827 828 829 830 831 832 833 834 835 836 837 838 839 840 841 842 843 844 845 846 847 848 849 850 851 852 853 854 855 856 857 858 859 860 861 862 863 864 865 866 867 868 869 870 871 |
# File 'lib/sequel/dataset/sql.rb', line 788 def window_sql_append(sql, opts) raise(Error, 'This dataset does not support window functions') unless supports_window_functions? space = false space_s = ' ' sql << '(' if window = opts[:window] literal_append(sql, window) space = true end if part = opts[:partition] sql << space_s if space sql << "PARTITION BY " expression_list_append(sql, Array(part)) space = true end if order = opts[:order] sql << space_s if space sql << "ORDER BY " expression_list_append(sql, Array(order)) space = true end if frame = opts[:frame] sql << space_s if space if frame.is_a?(String) sql << frame else case frame when :all frame_type = :rows frame_start = :preceding frame_end = :following when :rows, :range, :groups frame_type = frame frame_start = :preceding frame_end = :current when Hash frame_type = frame[:type] unless frame_type == :rows || frame_type == :range || frame_type == :groups raise Error, "invalid window :frame :type option: #{frame_type.inspect}" end unless frame_start = frame[:start] raise Error, "invalid window :frame :start option: #{frame_start.inspect}" end frame_end = frame[:end] frame_exclude = frame[:exclude] else raise Error, "invalid window :frame option: #{frame.inspect}" end sql << frame_type.to_s.upcase << " " sql << 'BETWEEN ' if frame_end window_frame_boundary_sql_append(sql, frame_start, :preceding) if frame_end sql << " AND " window_frame_boundary_sql_append(sql, frame_end, :following) end if frame_exclude sql << " EXCLUDE " case frame_exclude when :current sql << "CURRENT ROW" when :group sql << "GROUP" when :ties sql << "TIES" when :no_others sql << "NO OTHERS" else raise Error, "invalid window :frame :exclude option: #{frame_exclude.inspect}" end end end end sql << ')' end |
#with(name, dataset, opts = OPTS) ⇒ Object
Add a common table expression (CTE) with the given name and a dataset that defines the CTE. A common table expression acts as an inline view for the query.
Options:
- :args
-
Specify the arguments/columns for the CTE, should be an array of symbols.
- :recursive
-
Specify that this is a recursive CTE
- :materialized
-
Set to false to force inlining of the CTE, or true to force not inlining the CTE (PostgreSQL 12+/SQLite 3.35+).
DB[:items].with(:items, DB[:syx].where(Sequel[:name].like('A%')))
# WITH items AS (SELECT * FROM syx WHERE (name LIKE 'A%' ESCAPE '\')) SELECT * FROM items
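A sketch combining the :args and :materialized options, assuming hypothetical tables and columns (MATERIALIZED is only emitted on databases that support it):

DB[:strong_items].with(:strong_items, DB[:items].where{strength > 10}, args: [:id, :name], materialized: true)
# WITH strong_items(id, name) AS MATERIALIZED (SELECT * FROM items WHERE (strength > 10)) SELECT * FROM strong_items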
# File 'lib/sequel/dataset/query.rb', line 1151

def with(name, dataset, opts=OPTS)
  raise(Error, 'This dataset does not support common table expressions') unless supports_cte?
  if hoist_cte?(dataset)
    s, ds = hoist_cte(dataset)
    s.with(name, ds, opts)
  else
    clone(:with=>((@opts[:with]||EMPTY_ARRAY) + [Hash[opts].merge!(:name=>name, :dataset=>dataset)]).freeze)
  end
end
#with_extend(*mods, &block) ⇒ Object
Returns a cloned dataset extended with the given modules. Like Object#extend, when multiple modules are provided as arguments, the cloned dataset is extended with the modules in reverse order. If a block is given, a DatasetModule is created from the block and the clone is extended with that module after any modules given as arguments.
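For example, a dataset-level method can be added on the fly via the block form; a sketch, assuming a hypothetical active column:

ds = DB[:items].with_extend do
  def active
    where(active: true)
  end
end
ds.active.sql
# => "SELECT * FROM items WHERE (active IS TRUE)"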
# File 'lib/sequel/dataset/query.rb', line 1240

def with_extend(*mods, &block)
  c = Class.new(self.class)
  c.include(*mods) unless mods.empty?
  c.include(DatasetModule.new(&block)) if block
  o = c.freeze.allocate
  o.instance_variable_set(:@db, @db)
  o.instance_variable_set(:@opts, @opts)
  o.instance_variable_set(:@cache, {})
  if cols = cache_get(:_columns)
    o.send(:columns=, cols)
  end
  o.freeze
end
#with_quote_identifiers(v) ⇒ Object
Return a modified dataset with quote_identifiers set.
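For example (the exact quoting character depends on the database in use):

DB[:items].with_quote_identifiers(true).sql
# => "SELECT * FROM \"items\""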
# File 'lib/sequel/dataset/misc.rb', line 250

def with_quote_identifiers(v)
  clone(:quote_identifiers=>v, :skip_symbol_cache=>true)
end
#with_recursive(name, nonrecursive, recursive, opts = OPTS) ⇒ Object
Add a recursive common table expression (CTE) with the given name, a dataset that defines the nonrecursive part of the CTE, and a dataset that defines the recursive part of the CTE.
Options:
- :args : Specify the arguments/columns for the CTE, should be an array of symbols.
- :union_all : Set to false to use UNION instead of UNION ALL combining the nonrecursive and recursive parts.

PostgreSQL 14+ Options:
- :cycle : Stop recursive searching when a cycle is detected. Includes two columns in the result of the CTE, a cycle column indicating whether a cycle was detected for the current row, and a path column for the path traversed to get to the current row. If given, must be a hash with the following keys:
  - :columns : (required) The column or array of columns to use to detect a cycle. If the value of these columns match columns already traversed, then a cycle is detected, and recursive searching will not traverse beyond the cycle (the CTE will include the row where the cycle was detected).
  - :cycle_column : The name of the cycle column in the output, defaults to :is_cycle.
  - :cycle_value : The value of the cycle column in the output if the current row was detected as a cycle, defaults to true.
  - :noncycle_value : The value of the cycle column in the output if the current row was not detected as a cycle, defaults to false. Only respected if :cycle_value is given.
  - :path_column : The name of the path column in the output, defaults to :path.
- :search : Include an order column in the result of the CTE that allows for breadth or depth first searching. If given, must be a hash with the following keys:
  - :by : (required) The column or array of columns to search by.
  - :order_column : The name of the order column in the output, defaults to :ordercol.
  - :type : Set to :breadth to use breadth-first searching (depth-first searching is the default).
DB[:t].with_recursive(:t,
DB[:i1].select(:id, :parent_id).where(parent_id: nil),
DB[:i1].join(:t, id: :parent_id).select(Sequel[:i1][:id], Sequel[:i1][:parent_id]),
args: [:id, :parent_id])
# WITH RECURSIVE t(id, parent_id) AS (
# SELECT id, parent_id FROM i1 WHERE (parent_id IS NULL)
# UNION ALL
# SELECT i1.id, i1.parent_id FROM i1 INNER JOIN t ON (t.id = i1.parent_id)
# ) SELECT * FROM t
DB[:t].with_recursive(:t,
DB[:i1].where(parent_id: nil),
DB[:i1].join(:t, id: :parent_id).select_all(:i1),
search: {by: :id, type: :breadth},
cycle: {columns: :id, cycle_value: 1, noncycle_value: 2})
# WITH RECURSIVE t AS (
# SELECT * FROM i1 WHERE (parent_id IS NULL)
# UNION ALL
# (SELECT i1.* FROM i1 INNER JOIN t ON (t.id = i1.parent_id))
# )
# SEARCH BREADTH FIRST BY id SET ordercol
# CYCLE id SET is_cycle TO 1 DEFAULT 2 USING path
# SELECT * FROM t
# File 'lib/sequel/dataset/query.rb', line 1217

def with_recursive(name, nonrecursive, recursive, opts=OPTS)
  raise(Error, 'This dataset does not support common table expressions') unless supports_cte?
  if hoist_cte?(nonrecursive)
    s, ds = hoist_cte(nonrecursive)
    s.with_recursive(name, ds, recursive, opts)
  elsif hoist_cte?(recursive)
    s, ds = hoist_cte(recursive)
    s.with_recursive(name, nonrecursive, ds, opts)
  else
    clone(:with=>((@opts[:with]||EMPTY_ARRAY) + [Hash[opts].merge!(:recursive=>true, :name=>name, :dataset=>nonrecursive.union(recursive, {:all=>opts[:union_all] != false, :from_self=>false}))]).freeze)
  end
end
#with_row_proc(callable) ⇒ Object
Returns a cloned dataset with the given row_proc.
ds = DB[:items]
ds.all # => [{:id=>2}]
ds.with_row_proc(:invert.to_proc).all # => [{2=>:id}]
# File 'lib/sequel/dataset/query.rb', line 1269

def with_row_proc(callable)
  clone(:row_proc=>callable)
end
#with_sql(sql, *args) ⇒ Object
Returns a copy of the dataset with the static SQL used. This is useful if you want to keep the same row_proc/graph, but change the SQL used to custom SQL.
DB[:items].with_sql('SELECT * FROM foo') # SELECT * FROM foo
You can use placeholders in your SQL and provide arguments for those placeholders:
DB[:items].with_sql('SELECT ? FROM foo', 1) # SELECT 1 FROM foo
You can also provide a method name and arguments to call to get the SQL:
DB[:items].with_sql(:insert_sql, b: 1) # INSERT INTO items (b) VALUES (1)
Note that datasets that specify custom SQL using this method will generally ignore future dataset methods that modify the SQL used, as specifying custom SQL overrides Sequel’s SQL generator. You should probably limit yourself to the following dataset methods when using this method, or use the implicit_subquery extension:
- each
- all
- single_record (if only one record could be returned)
- single_value (if only one record could be returned, and a single column is selected)
- map
- as_hash
- to_hash
- to_hash_groups
- delete (if a DELETE statement)
- update (if an UPDATE statement, with no arguments)
- insert (if an INSERT statement, with no arguments)
- truncate (if a TRUNCATE statement, with no arguments)
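For example, custom SQL with a placeholder, retrieved with all (the table and columns here are hypothetical):

DB[:items].with_sql('SELECT id, name FROM items WHERE price > ?', 100).all
# SELECT id, name FROM items WHERE price > 100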
# File 'lib/sequel/dataset/query.rb', line 1303

def with_sql(sql, *args)
  if sql.is_a?(Symbol)
    sql = public_send(sql, *args)
  else
    sql = SQL::PlaceholderLiteralString.new(sql, args) unless args.empty?
  end
  clone(:sql=>sql)
end
#with_sql_all(sql, &block) ⇒ Object
Run the given SQL and return an array of all rows. If a block is given, each row is yielded to the block after all rows are loaded. See with_sql_each.
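For example (table, columns, and returned data are hypothetical):

DB[:items].with_sql_all("SELECT * FROM items WHERE id < 3")
# => [{:id=>1, :name=>'a'}, {:id=>2, :name=>'b'}]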
# File 'lib/sequel/dataset/actions.rb', line 1015

def with_sql_all(sql, &block)
  _all(block){|a| with_sql_each(sql){|r| a << r}}
end
#with_sql_delete(sql) ⇒ Object Also known as: with_sql_update
Execute the given SQL and return the number of rows deleted. This exists solely as an optimization, replacing with_sql(sql).delete. It’s significantly faster as it does not require cloning the current dataset.
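For example (the table is hypothetical):

DB[:items].with_sql_delete("DELETE FROM items WHERE id > 100")
# => number of rows deleted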
# File 'lib/sequel/dataset/actions.rb', line 1022

def with_sql_delete(sql)
  execute_dui(sql)
end
#with_sql_each(sql) ⇒ Object
Run the given SQL and yield each returned row to the block.
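For example (table and column are hypothetical):

DB[:items].with_sql_each("SELECT name FROM items LIMIT 5") do |row|
  puts row[:name]
end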
# File 'lib/sequel/dataset/actions.rb', line 1028

def with_sql_each(sql)
  if rp = row_proc
    _with_sql_dataset.fetch_rows(sql){|r| yield rp.call(r)}
  else
    _with_sql_dataset.fetch_rows(sql){|r| yield r}
  end
  self
end
#with_sql_first(sql) ⇒ Object
Run the given SQL and return the first row, or nil if no rows were returned. See with_sql_each.
# File 'lib/sequel/dataset/actions.rb', line 1039

def with_sql_first(sql)
  with_sql_each(sql){|r| return r}
  nil
end
#with_sql_insert(sql) ⇒ Object
Execute the given SQL and (on most databases) return the primary key of the inserted row.
# File 'lib/sequel/dataset/actions.rb', line 1055

def with_sql_insert(sql)
  execute_insert(sql)
end
#with_sql_single_value(sql) ⇒ Object
Run the given SQL and return the first value in the first row, or nil if no rows were returned. For this to make sense, the SQL given should select only a single value. See with_sql_each.
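For example (the table and returned count are hypothetical):

DB[:items].with_sql_single_value("SELECT count(*) FROM items")
# => 42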
# File 'lib/sequel/dataset/actions.rb', line 1047

def with_sql_single_value(sql)
  if r = with_sql_first(sql)
    r.each{|_, v| return v}
  end
end