Class: Polars::DataFrame

Inherits: Object
Includes: Plot
Defined in: lib/polars/data_frame.rb

Overview

Two-dimensional data structure representing data as a table with rows and columns.

Instance Method Summary

Constructor Details

#initialize(data = nil, schema: nil, schema_overrides: nil, strict: true, orient: nil, infer_schema_length: 100, nan_to_null: false) ⇒ DataFrame

Create a new DataFrame.

Parameters:

  • data (Object) (defaults to: nil)

    Two-dimensional data in various forms; hash input must contain arrays or a range. Arrays may contain Series or other arrays.

  • schema (Object) (defaults to: nil)

    The schema of the resulting DataFrame. The schema may be declared in several ways:

    • As a hash of {name => type} pairs; if type is nil, it will be auto-inferred.
    • As an array of column names; in this case types are automatically inferred.
    • As an array of [name, type] pairs; this is equivalent to the hash form.

    If you supply an array of column names that does not match the names in the underlying data, the names given here will overwrite them. The number of names given in the schema should match the underlying data dimensions.

    If set to nil (default), the schema is inferred from the data.

  • schema_overrides (Hash) (defaults to: nil)

    Support type specification or override of one or more columns; note that any dtypes inferred from the schema param will be overridden.

    The number of entries in the schema should match the underlying data dimensions, unless an array of hashes is being passed, in which case a partial schema can be declared to prevent specific fields from being loaded.

  • strict (Boolean) (defaults to: true)

    Throw an error if any data value does not exactly match the given or inferred data type for that column. If set to false, values that do not match the data type are cast to that data type or, if casting is not possible, set to null instead.

  • orient ("col", "row") (defaults to: nil)

    Whether to interpret two-dimensional data as columns or as rows. If nil, the orientation is inferred by matching the columns and data dimensions. If this does not yield conclusive results, column orientation is used.

  • infer_schema_length (Integer) (defaults to: 100)

    The maximum number of rows to scan for schema inference. If set to nil, the full data may be scanned (this can be slow). This parameter only applies if the input data is a sequence or generator of rows; other input is read as-is.

  • nan_to_null (Boolean) (defaults to: false)

    If the data comes from one or more Numo arrays, NaN values in the input data can optionally be converted to null instead. This is a no-op for all other input data.
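
A hedged usage sketch of two common constructor forms; the column names and values below are illustrative, not from the library's test suite:

df = Polars::DataFrame.new(
  {"a" => [1, 2, 3], "b" => [4.0, 5.0, 6.0]},
  schema_overrides: {"a" => Polars::Float32}  # override the inferred dtype of "a"
)

rows = [[1, "x"], [2, "y"]]
Polars::DataFrame.new(rows, schema: ["id", "label"], orient: "row")  # row-oriented input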



# File 'lib/polars/data_frame.rb', line 50

def initialize(data = nil, schema: nil, schema_overrides: nil, strict: true, orient: nil, infer_schema_length: 100, nan_to_null: false)
  if defined?(ActiveRecord) && (data.is_a?(ActiveRecord::Relation) || data.is_a?(ActiveRecord::Result))
    raise ArgumentError, "Use read_database instead"
  end

  if data.nil?
    self._df = self.class.hash_to_rbdf({}, schema: schema, schema_overrides: schema_overrides)
  elsif data.is_a?(Hash)
    data = data.transform_keys { |v| v.is_a?(Symbol) ? v.to_s : v }
    self._df = self.class.hash_to_rbdf(data, schema: schema, schema_overrides: schema_overrides, strict: strict, nan_to_null: nan_to_null)
  elsif data.is_a?(::Array)
    self._df = self.class.sequence_to_rbdf(data, schema: schema, schema_overrides: schema_overrides, strict: strict, orient: orient, infer_schema_length: infer_schema_length)
  elsif data.is_a?(Series)
    self._df = self.class.series_to_rbdf(data, schema: schema, schema_overrides: schema_overrides, strict: strict)
  elsif data.respond_to?(:arrow_c_stream)
    # This uses the fact that RbSeries.from_arrow_c_stream will create a
    # struct-typed Series. Then we unpack that to a DataFrame.
    tmp_col_name = ""
    s = Utils.wrap_s(RbSeries.from_arrow_c_stream(data))
    self._df = s.to_frame(tmp_col_name).unnest(tmp_col_name)._df
  else
    raise ArgumentError, "DataFrame constructor called with unsupported type; got #{data.class.name}"
  end
end

Instance Method Details

#!=(other) ⇒ DataFrame

Not equal.

Returns:

  • (DataFrame)

# File 'lib/polars/data_frame.rb', line 225

def !=(other)
  _comp(other, "neq")
end

#%(other) ⇒ DataFrame

Returns the modulo.

Returns:

  • (DataFrame)

# File 'lib/polars/data_frame.rb', line 308

def %(other)
  if other.is_a?(DataFrame)
    return _from_rbdf(_df.rem_df(other._df))
  end

  other = _prepare_other_arg(other)
  _from_rbdf(_df.rem(other._s))
end
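
A hedged illustration (values assumed): the arithmetic operators broadcast a scalar across all columns, and operate elementwise when the right-hand side is another DataFrame, mirroring the rem_df/rem branches above.

df = Polars::DataFrame.new({"a" => [1, 2, 3], "b" => [4, 5, 6]})
df * 2   # broadcasts the scalar: doubles every value
df % 2   # elementwise remainder
df + df  # elementwise addition of two frames of matching shape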

#*(other) ⇒ DataFrame

Performs multiplication.

Returns:

  • (DataFrame)

# File 'lib/polars/data_frame.rb', line 260

def *(other)
  if other.is_a?(DataFrame)
    return _from_rbdf(_df.mul_df(other._df))
  end

  other = _prepare_other_arg(other)
  _from_rbdf(_df.mul(other._s))
end

#+(other) ⇒ DataFrame

Performs addition.

Returns:

  • (DataFrame)

# File 'lib/polars/data_frame.rb', line 284

def +(other)
  if other.is_a?(DataFrame)
    return _from_rbdf(_df.add_df(other._df))
  end

  other = _prepare_other_arg(other)
  _from_rbdf(_df.add(other._s))
end

#-(other) ⇒ DataFrame

Performs subtraction.

Returns:

  • (DataFrame)

# File 'lib/polars/data_frame.rb', line 296

def -(other)
  if other.is_a?(DataFrame)
    return _from_rbdf(_df.sub_df(other._df))
  end

  other = _prepare_other_arg(other)
  _from_rbdf(_df.sub(other._s))
end

#/(other) ⇒ DataFrame

Performs division.

Returns:

  • (DataFrame)

# File 'lib/polars/data_frame.rb', line 272

def /(other)
  if other.is_a?(DataFrame)
    return _from_rbdf(_df.div_df(other._df))
  end

  other = _prepare_other_arg(other)
  _from_rbdf(_df.div(other._s))
end

#<(other) ⇒ DataFrame

Less than.

Returns:

  • (DataFrame)

# File 'lib/polars/data_frame.rb', line 239

def <(other)
  _comp(other, "lt")
end

#<=(other) ⇒ DataFrame

Less than or equal.

Returns:

  • (DataFrame)

# File 'lib/polars/data_frame.rb', line 253

def <=(other)
  _comp(other, "lt_eq")
end

#==(other) ⇒ DataFrame

Equal.

Returns:

  • (DataFrame)

# File 'lib/polars/data_frame.rb', line 218

def ==(other)
  _comp(other, "eq")
end

#>(other) ⇒ DataFrame

Greater than.

Returns:

  • (DataFrame)

# File 'lib/polars/data_frame.rb', line 232

def >(other)
  _comp(other, "gt")
end

#>=(other) ⇒ DataFrame

Greater than or equal.

Returns:

  • (DataFrame)

# File 'lib/polars/data_frame.rb', line 246

def >=(other)
  _comp(other, "gt_eq")
end
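
A hedged sketch (values assumed): the comparison operators apply elementwise via _comp, producing a DataFrame of booleans.

df = Polars::DataFrame.new({"a" => [1, 2, 3]})
df > 2
# boolean frame: column "a" contains false, false, true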

#[](*args) ⇒ Object

Returns subset of the DataFrame.

Returns:

  • (Object)

Raises:

  • (ArgumentError)


# File 'lib/polars/data_frame.rb', line 349

def [](*args)
  if args.size == 2
    row_selection, col_selection = args

    # df[.., unknown]
    if row_selection.is_a?(Range)

      # multiple slices
      # df[.., ..]
      if col_selection.is_a?(Range)
        raise Todo
      end
    end

    # df[2, ..] (select row as df)
    if row_selection.is_a?(Integer)
      if col_selection.is_a?(::Array)
        df = self[0.., col_selection]
        return df.slice(row_selection, 1)
      end
      # df[2, "a"]
      if col_selection.is_a?(::String) || col_selection.is_a?(Symbol)
        return self[col_selection][row_selection]
      end
    end

    # column selection can be "a" and ["a", "b"]
    if col_selection.is_a?(::String) || col_selection.is_a?(Symbol)
      col_selection = [col_selection]
    end

    # df[.., 1]
    if col_selection.is_a?(Integer)
      series = to_series(col_selection)
      return series[row_selection]
    end

    if col_selection.is_a?(::Array)
      # df[.., [1, 2]]
      if Utils.is_int_sequence(col_selection)
        series_list = col_selection.map { |i| to_series(i) }
        df = self.class.new(series_list)
        return df[row_selection]
      end
    end

    df = self[col_selection]
    return df[row_selection]
  elsif args.size == 1
    item = args[0]

    # select single column
    # df["foo"]
    if item.is_a?(::String) || item.is_a?(Symbol)
      return Utils.wrap_s(_df.get_column(item.to_s))
    end

    # df[idx]
    if item.is_a?(Integer)
      return slice(_pos_idx(item, 0), 1)
    end

    # df[..]
    if item.is_a?(Range)
      return Slice.new(self).apply(item)
    end

    if item.is_a?(::Array) && item.all? { |v| Utils.strlike?(v) }
      # select multiple columns
      # df[["foo", "bar"]]
      return _from_rbdf(_df.select(item.map(&:to_s)))
    end

    if Utils.is_int_sequence(item)
      item = Series.new("", item)
    end

    if item.is_a?(Series)
      dtype = item.dtype
      if dtype == String
        return _from_rbdf(_df.select(item))
      elsif dtype == UInt32
        return _from_rbdf(_df.take_with_series(item._s))
      elsif [UInt8, UInt16, UInt64, Int8, Int16, Int32, Int64].include?(dtype)
        return _from_rbdf(
          _df.take_with_series(_pos_idxs(item, 0)._s)
        )
      end
    end
  end

  # Ruby-specific
  if item.is_a?(Expr) || item.is_a?(Series)
    return filter(item)
  end

  raise ArgumentError, "Cannot get item of type: #{item.class.name}"
end
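
A hedged summary of the access patterns handled by the branches above (names and values assumed):

df = Polars::DataFrame.new({"foo" => [1, 2, 3], "bar" => ["a", "b", "c"]})
df["foo"]           # single column as a Series
df[["foo", "bar"]]  # multiple columns as a DataFrame
df[0]               # first row as a one-row DataFrame
df[0..1]            # rows selected by range
df[1, "foo"]        # single value: row 1 of column "foo" => 2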

#[]=(*key, value) ⇒ Object

Set item.

Returns:

  • (Object)

# File 'lib/polars/data_frame.rb', line 451

def []=(*key, value)
  if key.length == 1
    key = key.first
  elsif key.length != 2
    raise ArgumentError, "wrong number of arguments (given #{key.length + 1}, expected 2..3)"
  end

  if Utils.strlike?(key)
    if value.is_a?(::Array) || (defined?(Numo::NArray) && value.is_a?(Numo::NArray))
      value = Series.new(value)
    elsif !value.is_a?(Series)
      value = Polars.lit(value)
    end
    self._df = with_column(value.alias(key.to_s))._df
  elsif key.is_a?(::Array)
    row_selection, col_selection = key

    if Utils.strlike?(col_selection)
      s = self[col_selection]
    elsif col_selection.is_a?(Integer)
      raise Todo
    else
      raise ArgumentError, "column selection not understood: #{col_selection}"
    end

    s[row_selection] = value

    if col_selection.is_a?(Integer)
      replace_column(col_selection, s)
    elsif Utils.strlike?(col_selection)
      replace(col_selection, s)
    end
  else
    raise Todo
  end
end
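
A hedged sketch (names and values assumed): assigning with a string key adds or replaces a column, and array values are wrapped in a Series first, per the branches above.

df = Polars::DataFrame.new({"a" => [1, 2, 3]})
df["b"] = [4, 5, 6]                      # new column from an array
df["a"] = Polars::Series.new([9, 9, 9])  # replace an existing column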

#bottom_k(k, by:, reverse: false) ⇒ DataFrame

Return the k smallest rows.

Non-null elements are always preferred over null elements, regardless of the value of reverse. The output is not guaranteed to be in any particular order; call sort after this function if you wish the output to be sorted.

Examples:

Get the rows which contain the 4 smallest values in column b.

df = Polars::DataFrame.new(
  {
    "a" => ["a", "b", "a", "b", "b", "c"],
    "b" => [2, 1, 1, 3, 2, 1]
  }
)
df.bottom_k(4, by: "b")
# =>
# shape: (4, 2)
# ┌─────┬─────┐
# │ a   ┆ b   │
# │ --- ┆ --- │
# │ str ┆ i64 │
# ╞═════╪═════╡
# │ b   ┆ 1   │
# │ a   ┆ 1   │
# │ c   ┆ 1   │
# │ a   ┆ 2   │
# └─────┴─────┘

Get the rows which contain the 4 smallest values when sorting on column a and b.

df.bottom_k(4, by: ["a", "b"])
# =>
# shape: (4, 2)
# ┌─────┬─────┐
# │ a   ┆ b   │
# │ --- ┆ --- │
# │ str ┆ i64 │
# ╞═════╪═════╡
# │ a   ┆ 1   │
# │ a   ┆ 2   │
# │ b   ┆ 1   │
# │ b   ┆ 2   │
# └─────┴─────┘

Parameters:

  • k (Integer)

    Number of rows to return.

  • by (Object)

    Column(s) used to determine the bottom rows. Accepts expression input. Strings are parsed as column names.

  • reverse (Object) (defaults to: false)

    Consider the k largest elements of the by column(s) (instead of the k smallest). This can be specified per column by passing a sequence of booleans.

Returns:

  • (DataFrame)

# File 'lib/polars/data_frame.rb', line 1981

def bottom_k(
  k,
  by:,
  reverse: false
)
  lazy
  .bottom_k(k, by: by, reverse: reverse)
  .collect(
    # optimizations=QueryOptFlags(
    #   projection_pushdown=False,
    #   predicate_pushdown=False,
    #   comm_subplan_elim=False,
    #   slice_pushdown=True,
    # )
  )
end

#cast(dtypes, strict: true) ⇒ DataFrame

Cast DataFrame column(s) to the specified dtype(s).

Examples:

Cast specific frame columns to the specified dtypes:

df = Polars::DataFrame.new(
  {
    "foo" => [1, 2, 3],
    "bar" => [6.0, 7.0, 8.0],
    "ham" => [Date.new(2020, 1, 2), Date.new(2021, 3, 4), Date.new(2022, 5, 6)]
  }
)
df.cast({"foo" => Polars::Float32, "bar" => Polars::UInt8})
# =>
# shape: (3, 3)
# ┌─────┬─────┬────────────┐
# │ foo ┆ bar ┆ ham        │
# │ --- ┆ --- ┆ ---        │
# │ f32 ┆ u8  ┆ date       │
# ╞═════╪═════╪════════════╡
# │ 1.0 ┆ 6   ┆ 2020-01-02 │
# │ 2.0 ┆ 7   ┆ 2021-03-04 │
# │ 3.0 ┆ 8   ┆ 2022-05-06 │
# └─────┴─────┴────────────┘

Cast all frame columns matching one dtype (or dtype group) to another dtype:

df.cast({Polars::Date => Polars::Datetime})
# =>
# shape: (3, 3)
# ┌─────┬─────┬─────────────────────┐
# │ foo ┆ bar ┆ ham                 │
# │ --- ┆ --- ┆ ---                 │
# │ i64 ┆ f64 ┆ datetime[μs]        │
# ╞═════╪═════╪═════════════════════╡
# │ 1   ┆ 6.0 ┆ 2020-01-02 00:00:00 │
# │ 2   ┆ 7.0 ┆ 2021-03-04 00:00:00 │
# │ 3   ┆ 8.0 ┆ 2022-05-06 00:00:00 │
# └─────┴─────┴─────────────────────┘

Cast all frame columns to the specified dtype:

df.cast(Polars::String).to_h(as_series: false)
# => {"foo"=>["1", "2", "3"], "bar"=>["6.0", "7.0", "8.0"], "ham"=>["2020-01-02", "2021-03-04", "2022-05-06"]}

Parameters:

  • dtypes (Object)

    Mapping of column names (or selector) to dtypes, or a single dtype to which all columns will be cast.

  • strict (Boolean) (defaults to: true)

    Throw an error if a cast could not be done (for instance, due to an overflow).

Returns:

  • (DataFrame)

# File 'lib/polars/data_frame.rb', line 3668

def cast(dtypes, strict: true)
  lazy.cast(dtypes, strict: strict).collect(_eager: true)
end

#clear(n = 0) ⇒ DataFrame Also known as: cleared

Create an empty copy of the current DataFrame.

Returns a DataFrame with identical schema but no data.

Examples:

df = Polars::DataFrame.new(
  {
    "a" => [nil, 2, 3, 4],
    "b" => [0.5, nil, 2.5, 13],
    "c" => [true, true, false, nil]
  }
)
df.clear
# =>
# shape: (0, 3)
# ┌─────┬─────┬──────┐
# │ a   ┆ b   ┆ c    │
# │ --- ┆ --- ┆ ---  │
# │ i64 ┆ f64 ┆ bool │
# ╞═════╪═════╪══════╡
# └─────┴─────┴──────┘
df.clear(2)
# =>
# shape: (2, 3)
# ┌──────┬──────┬──────┐
# │ a    ┆ b    ┆ c    │
# │ ---  ┆ ---  ┆ ---  │
# │ i64  ┆ f64  ┆ bool │
# ╞══════╪══════╪══════╡
# │ null ┆ null ┆ null │
# │ null ┆ null ┆ null │
# └──────┴──────┴──────┘

Returns:

  • (DataFrame)

# File 'lib/polars/data_frame.rb', line 3708

def clear(n = 0)
  if n == 0
    _from_rbdf(_df.clear)
  elsif n > 0 || len > 0
    self.class.new(
      schema.to_h { |nm, tp| [nm, Series.new(nm, [], dtype: tp).extend_constant(nil, n)] }
    )
  else
    clone
  end
end

#collect_schemaSchema

Note:

This method is included to facilitate writing code that is generic for both DataFrame and LazyFrame.

Get an ordered mapping of column names to their data type.

Examples:

Determine the schema.

df = Polars::DataFrame.new(
  {
    "foo" => [1, 2, 3],
    "bar" => [6.0, 7.0, 8.0],
    "ham" => ["a", "b", "c"]
  }
)
df.collect_schema
# => Polars::Schema({"foo"=>Polars::Int64, "bar"=>Polars::Float64, "ham"=>Polars::String})

Access various properties of the schema using the Schema object.

schema = df.collect_schema
schema["bar"]
# => Polars::Float64
schema.names
# => ["foo", "bar", "ham"]
schema.dtypes
# => [Polars::Int64, Polars::Float64, Polars::String]
schema.length
# => 3

Returns:

  • (Schema)

# File 'lib/polars/data_frame.rb', line 528

def collect_schema
  Schema.new(columns.zip(dtypes), check_dtypes: false)
end

#columnsArray

Get column names.

Examples:

df = Polars::DataFrame.new(
  {
    "foo" => [1, 2, 3],
    "bar" => [6, 7, 8],
    "ham" => ["a", "b", "c"]
  }
)
df.columns
# => ["foo", "bar", "ham"]

Returns:

  • (Array)

# File 'lib/polars/data_frame.rb', line 135

def columns
  _df.columns
end

#columns=(columns) ⇒ Object

Change the column names of the DataFrame.

Examples:

df = Polars::DataFrame.new(
  {
    "foo" => [1, 2, 3],
    "bar" => [6, 7, 8],
    "ham" => ["a", "b", "c"]
  }
)
df.columns = ["apple", "banana", "orange"]
df
# =>
# shape: (3, 3)
# ┌───────┬────────┬────────┐
# │ apple ┆ banana ┆ orange │
# │ ---   ┆ ---    ┆ ---    │
# │ i64   ┆ i64    ┆ str    │
# ╞═══════╪════════╪════════╡
# │ 1     ┆ 6      ┆ a      │
# │ 2     ┆ 7      ┆ b      │
# │ 3     ┆ 8      ┆ c      │
# └───────┴────────┴────────┘

Parameters:

  • columns (Array)

    A list with new names for the DataFrame. The length of the list should be equal to the width of the DataFrame.

Returns:

  • (Object)

# File 'lib/polars/data_frame.rb', line 168

def columns=(columns)
  _df.set_column_names(columns)
end

#delete(name) ⇒ Series

Drop a column in place, returning it if it exists.

Examples:

df = Polars::DataFrame.new(
  {
    "foo" => [1, 2, 3],
    "bar" => [6, 7, 8],
    "ham" => ["a", "b", "c"]
  }
)
df.delete("ham")
# =>
# shape: (3,)
# Series: 'ham' [str]
# [
#         "a"
#         "b"
#         "c"
# ]
df.delete("missing")
# => nil

Parameters:

  • name (Object)

    Column to drop.

Returns:

  • (Series)

# File 'lib/polars/data_frame.rb', line 3615

def delete(name)
  drop_in_place(name) if include?(name)
end

#describeDataFrame

Summary statistics for a DataFrame.

Examples:

df = Polars::DataFrame.new(
  {
    "a" => [1.0, 2.8, 3.0],
    "b" => [4, 5, nil],
    "c" => [true, false, true],
    "d" => [nil, "b", "c"],
    "e" => ["usd", "eur", nil]
  }
)
df.describe
# =>
# shape: (7, 6)
# ┌────────────┬──────────┬──────────┬──────────┬──────┬──────┐
# │ describe   ┆ a        ┆ b        ┆ c        ┆ d    ┆ e    │
# │ ---        ┆ ---      ┆ ---      ┆ ---      ┆ ---  ┆ ---  │
# │ str        ┆ f64      ┆ f64      ┆ f64      ┆ str  ┆ str  │
# ╞════════════╪══════════╪══════════╪══════════╪══════╪══════╡
# │ count      ┆ 3.0      ┆ 3.0      ┆ 3.0      ┆ 3    ┆ 3    │
# │ null_count ┆ 0.0      ┆ 1.0      ┆ 0.0      ┆ 1    ┆ 1    │
# │ mean       ┆ 2.266667 ┆ 4.5      ┆ 0.666667 ┆ null ┆ null │
# │ std        ┆ 1.101514 ┆ 0.707107 ┆ 0.57735  ┆ null ┆ null │
# │ min        ┆ 1.0      ┆ 4.0      ┆ 0.0      ┆ b    ┆ eur  │
# │ max        ┆ 3.0      ┆ 5.0      ┆ 1.0      ┆ c    ┆ usd  │
# │ median     ┆ 2.8      ┆ 4.5      ┆ 1.0      ┆ null ┆ null │
# └────────────┴──────────┴──────────┴──────────┴──────┴──────┘

Returns:

  • (DataFrame)

# File 'lib/polars/data_frame.rb', line 1616

def describe
  describe_cast = lambda do |stat|
    columns = []
    self.columns.each_with_index do |s, i|
      if self[s].is_numeric || self[s].is_boolean
        columns << stat[0.., i].cast(:f64)
      else
        # for dates, strings, etc, we cast to string so that all
        # statistics can be shown
        columns << stat[0.., i].cast(:str)
      end
    end
    self.class.new(columns)
  end

  summary = _from_rbdf(
    Polars.concat(
      [
        describe_cast.(
          self.class.new(columns.to_h { |c| [c, [height]] })
        ),
        describe_cast.(null_count),
        describe_cast.(mean),
        describe_cast.(std),
        describe_cast.(min),
        describe_cast.(max),
        describe_cast.(median)
      ]
    )._df
  )
  summary.insert_column(
    0,
    Polars::Series.new(
      "describe",
      ["count", "null_count", "mean", "std", "min", "max", "median"],
    )
  )
  summary
end

#drop(*columns) ⇒ DataFrame

Remove one or more columns from the DataFrame and return the result as a new DataFrame.

Examples:

df = Polars::DataFrame.new(
  {
    "foo" => [1, 2, 3],
    "bar" => [6.0, 7.0, 8.0],
    "ham" => ["a", "b", "c"]
  }
)
df.drop("ham")
# =>
# shape: (3, 2)
# ┌─────┬─────┐
# │ foo ┆ bar │
# │ --- ┆ --- │
# │ i64 ┆ f64 │
# ╞═════╪═════╡
# │ 1   ┆ 6.0 │
# │ 2   ┆ 7.0 │
# │ 3   ┆ 8.0 │
# └─────┴─────┘

Drop multiple columns by passing a list of column names.

df.drop(["bar", "ham"])
# =>
# shape: (3, 1)
# ┌─────┐
# │ foo │
# │ --- │
# │ i64 │
# ╞═════╡
# │ 1   │
# │ 2   │
# │ 3   │
# └─────┘

Use positional arguments to drop multiple columns.

df.drop("foo", "ham")
# =>
# shape: (3, 1)
# ┌─────┐
# │ bar │
# │ --- │
# │ f64 │
# ╞═════╡
# │ 6.0 │
# │ 7.0 │
# │ 8.0 │
# └─────┘

Parameters:

  • columns (Object)

    Column(s) to drop.

Returns:

  • (DataFrame)

# File 'lib/polars/data_frame.rb', line 3555

def drop(*columns)
  lazy.drop(*columns).collect(_eager: true)
end

#drop_in_place(name) ⇒ Series

Drop a column in place and return it.

Examples:

df = Polars::DataFrame.new(
  {
    "foo" => [1, 2, 3],
    "bar" => [6, 7, 8],
    "ham" => ["a", "b", "c"]
  }
)
df.drop_in_place("ham")
# =>
# shape: (3,)
# Series: 'ham' [str]
# [
#         "a"
#         "b"
#         "c"
# ]

Parameters:

  • name (Object)

    Column to drop.

Returns:

  • (Series)

# File 'lib/polars/data_frame.rb', line 3583

def drop_in_place(name)
  Utils.wrap_s(_df.drop_in_place(name))
end

#drop_nans(subset: nil) ⇒ DataFrame

Drop all rows that contain one or more NaN values.

The original order of the remaining rows is preserved.

Examples:

df = Polars::DataFrame.new(
  {
    "foo" => [-20.5, Float::NAN, 80.0],
    "bar" => [Float::NAN, 110.0, 25.5],
    "ham" => ["xxx", "yyy", nil]
  }
)
df.drop_nans
# =>
# shape: (1, 3)
# ┌──────┬──────┬──────┐
# │ foo  ┆ bar  ┆ ham  │
# │ ---  ┆ ---  ┆ ---  │
# │ f64  ┆ f64  ┆ str  │
# ╞══════╪══════╪══════╡
# │ 80.0 ┆ 25.5 ┆ null │
# └──────┴──────┴──────┘
df.drop_nans(subset: ["bar"])
# =>
# shape: (2, 3)
# ┌──────┬───────┬──────┐
# │ foo  ┆ bar   ┆ ham  │
# │ ---  ┆ ---   ┆ ---  │
# │ f64  ┆ f64   ┆ str  │
# ╞══════╪═══════╪══════╡
# │ NaN  ┆ 110.0 ┆ yyy  │
# │ 80.0 ┆ 25.5  ┆ null │
# └──────┴───────┴──────┘

Parameters:

  • subset (Object) (defaults to: nil)

    Column name(s) for which NaN values are considered; if set to nil (default), use all columns (note that only floating-point columns can contain NaNs).

Returns:

  • (DataFrame)

# File 'lib/polars/data_frame.rb', line 2230

def drop_nans(subset: nil)
  lazy.drop_nans(subset: subset).collect(_eager: true)
end

#drop_nulls(subset: nil) ⇒ DataFrame

Drop all rows that contain one or more null values.

The original order of the remaining rows is preserved.

Examples:

df = Polars::DataFrame.new(
  {
    "foo" => [1, 2, 3],
    "bar" => [6, nil, 8],
    "ham" => ["a", "b", nil]
  }
)
df.drop_nulls
# =>
# shape: (1, 3)
# ┌─────┬─────┬─────┐
# │ foo ┆ bar ┆ ham │
# │ --- ┆ --- ┆ --- │
# │ i64 ┆ i64 ┆ str │
# ╞═════╪═════╪═════╡
# │ 1   ┆ 6   ┆ a   │
# └─────┴─────┴─────┘
df.drop_nulls(subset: Polars.cs.integer)
# =>
# shape: (2, 3)
# ┌─────┬─────┬──────┐
# │ foo ┆ bar ┆ ham  │
# │ --- ┆ --- ┆ ---  │
# │ i64 ┆ i64 ┆ str  │
# ╞═════╪═════╪══════╡
# │ 1   ┆ 6   ┆ a    │
# │ 3   ┆ 8   ┆ null │
# └─────┴─────┴──────┘

Parameters:

  • subset (Object) (defaults to: nil)

    Column name(s) for which null values are considered. If set to nil (default), use all columns.

Returns:

  • (DataFrame)

# File 'lib/polars/data_frame.rb', line 2275

def drop_nulls(subset: nil)
  lazy.drop_nulls(subset: subset).collect(_eager: true)
end

#dtypesArray

Get dtypes of columns in DataFrame. Dtypes can also be found in column headers when printing the DataFrame.

Examples:

df = Polars::DataFrame.new(
  {
    "foo" => [1, 2, 3],
    "bar" => [6.0, 7.0, 8.0],
    "ham" => ["a", "b", "c"]
  }
)
df.dtypes
# => [Polars::Int64, Polars::Float64, Polars::String]

Returns:

  • (Array)

# File 'lib/polars/data_frame.rb', line 186

def dtypes
  _df.dtypes
end

#each(&block) ⇒ Object

Returns an enumerator.

Returns:

  • (Object)

# File 'lib/polars/data_frame.rb', line 342

def each(&block)
  get_columns.each(&block)
end
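
Since each delegates to get_columns, the block receives one Series per column; a minimal sketch (values assumed):

df = Polars::DataFrame.new({"a" => [1, 2], "b" => [3, 4]})
df.each { |series| puts series.name }
# prints "a" then "b"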

#each_row(named: true, buffer_size: 500, &block) ⇒ Object

Returns an iterator over the DataFrame of rows of Ruby-native values.

Parameters:

  • named (Boolean) (defaults to: true)

    Return hashes instead of arrays. The hashes are a mapping of column name to row value. This is more expensive than returning an array, but allows for accessing values by column name.

  • buffer_size (Integer) (defaults to: 500)

    Determines the number of rows that are buffered internally while iterating over the data; you should only modify this in very specific cases where the default value is determined not to be a good fit to your access pattern, as the speedup from using the buffer is significant (~2-4x). Setting this value to zero disables row buffering.

Returns:

  • (Object)

# File 'lib/polars/data_frame.rb', line 5716

def each_row(named: true, buffer_size: 500, &block)
  iter_rows(named: named, buffer_size: buffer_size, &block)
end
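
A minimal sketch (values assumed); with the default named: true, each yielded row is a hash keyed by column name:

df = Polars::DataFrame.new({"a" => [1, 3], "b" => [2, 4]})
df.each_row { |row| p row }
# {"a"=>1, "b"=>2}
# {"a"=>3, "b"=>4}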

#equals(other, null_equal: true) ⇒ Boolean Also known as: frame_equal

Check if DataFrame is equal to other.

Examples:

df1 = Polars::DataFrame.new(
  {
    "foo" => [1, 2, 3],
    "bar" => [6.0, 7.0, 8.0],
    "ham" => ["a", "b", "c"]
  }
)
df2 = Polars::DataFrame.new(
  {
    "foo" => [3, 2, 1],
    "bar" => [8.0, 7.0, 6.0],
    "ham" => ["c", "b", "a"]
  }
)
df1.equals(df1)
# => true
df1.equals(df2)
# => false

Parameters:

  • other (DataFrame)

    DataFrame to compare with.

  • null_equal (Boolean) (defaults to: true)

    Consider null values as equal.

Returns:

  • (Boolean)

# File 'lib/polars/data_frame.rb', line 2026

def equals(other, null_equal: true)
  _df.equals(other._df, null_equal)
end

#estimated_size(unit = "b") ⇒ Numeric

Return an estimation of the total (heap) allocated size of the DataFrame.

Estimated size is given in the specified unit (bytes by default).

This estimation is the sum of the sizes of its buffers and validity bitmaps, including nested arrays. Multiple arrays may share buffers and bitmaps, so the size of 2 arrays is not the sum of the sizes computed with this function. In particular, a StructArray's size is an upper bound.

When an array is sliced, its allocated size remains constant because the buffer is unchanged. However, this function will yield a smaller number, because it returns the visible size of the buffer, not its total capacity.

FFI buffers are included in this estimation.

Examples:

df = Polars::DataFrame.new(
  {
    "x" => 1_000_000.times.to_a.reverse,
    "y" => 1_000_000.times.map { |v| v / 1000.0 },
    "z" => 1_000_000.times.map(&:to_s)
  },
  schema: {"x" => :u32, "y" => :f64, "z" => :str}
)
df.estimated_size
# => 25888898
df.estimated_size("mb")
# => 17.0601749420166

Parameters:

  • unit ("b", "kb", "mb", "gb", "tb") (defaults to: "b")

    Scale the returned size to the given unit.

Returns:

  • (Numeric)


# File 'lib/polars/data_frame.rb', line 1239

def estimated_size(unit = "b")
  sz = _df.estimated_size
  Utils.scale_bytes(sz, to: unit)
end

#explode(columns) ⇒ DataFrame

Explode DataFrame to long format by exploding a column with Lists.

Examples:

df = Polars::DataFrame.new(
  {
    "letters" => ["a", "a", "b", "c"],
    "numbers" => [[1], [2, 3], [4, 5], [6, 7, 8]]
  }
)
df.explode("numbers")
# =>
# shape: (8, 2)
# ┌─────────┬─────────┐
# │ letters ┆ numbers │
# │ ---     ┆ ---     │
# │ str     ┆ i64     │
# ╞═════════╪═════════╡
# │ a       ┆ 1       │
# │ a       ┆ 2       │
# │ a       ┆ 3       │
# │ b       ┆ 4       │
# │ b       ┆ 5       │
# │ c       ┆ 6       │
# │ c       ┆ 7       │
# │ c       ┆ 8       │
# └─────────┴─────────┘

Parameters:

  • columns (Object)

    Column of LargeList type.

Returns:

  • (DataFrame)

# File 'lib/polars/data_frame.rb', line 3957

def explode(columns)
  lazy.explode(columns).collect(no_optimization: true)
end

#extend(other) ⇒ DataFrame

Extend the memory backed by this DataFrame with the values from other.

Different from vstack, which adds the chunks from other to the chunks of this DataFrame, extend appends the data from other to the underlying memory locations and thus may cause a reallocation.

If this does not cause a reallocation, the resulting data structure will not have any extra chunks and thus will yield faster queries.

Prefer extend over vstack when you want to do a query after a single append. For instance during online operations where you add n rows and rerun a query.

Prefer vstack over extend when you want to append many times before doing a query. For instance, when you read in multiple files and want to store them in a single DataFrame. In the latter case, finish the sequence of vstack operations with a rechunk.

Examples:

df1 = Polars::DataFrame.new({"foo" => [1, 2, 3], "bar" => [4, 5, 6]})
df2 = Polars::DataFrame.new({"foo" => [10, 20, 30], "bar" => [40, 50, 60]})
df1.extend(df2)
# =>
# shape: (6, 2)
# ┌─────┬─────┐
# │ foo ┆ bar │
# │ --- ┆ --- │
# │ i64 ┆ i64 │
# ╞═════╪═════╡
# │ 1   ┆ 4   │
# │ 2   ┆ 5   │
# │ 3   ┆ 6   │
# │ 10  ┆ 40  │
# │ 20  ┆ 50  │
# │ 30  ┆ 60  │
# └─────┴─────┘

Parameters:

  • other (DataFrame)

    DataFrame to vertically add.

Returns:

  • (DataFrame)

# File 'lib/polars/data_frame.rb', line 3495

def extend(other)
  _df.extend(other._df)
  self
end

#fill_nan(fill_value) ⇒ DataFrame

Note:

Note that floating point NaNs (Not a Number) are not missing values! To replace missing values, use fill_null.

Fill floating point NaN values by an Expression evaluation.

Examples:

df = Polars::DataFrame.new(
  {
    "a" => [1.5, 2, Float::NAN, 4],
    "b" => [0.5, 4, Float::NAN, 13]
  }
)
df.fill_nan(99)
# =>
# shape: (4, 2)
# ┌──────┬──────┐
# │ a    ┆ b    │
# │ ---  ┆ ---  │
# │ f64  ┆ f64  │
# ╞══════╪══════╡
# │ 1.5  ┆ 0.5  │
# │ 2.0  ┆ 4.0  │
# │ 99.0 ┆ 99.0 │
# │ 4.0  ┆ 13.0 │
# └──────┴──────┘

Parameters:

  • fill_value (Object)

    Value to fill NaN with.

Returns:

  • (DataFrame)

# File 'lib/polars/data_frame.rb', line 3922

def fill_nan(fill_value)
  lazy.fill_nan(fill_value).collect(no_optimization: true)
end

#fill_null(value = nil, strategy: nil, limit: nil, matches_supertype: true) ⇒ DataFrame

Fill null values using the specified value or strategy.

Examples:

df = Polars::DataFrame.new(
  {
    "a" => [1, 2, nil, 4],
    "b" => [0.5, 4, nil, 13]
  }
)
df.fill_null(99)
# =>
# shape: (4, 2)
# ┌─────┬──────┐
# │ a   ┆ b    │
# │ --- ┆ ---  │
# │ i64 ┆ f64  │
# ╞═════╪══════╡
# │ 1   ┆ 0.5  │
# │ 2   ┆ 4.0  │
# │ 99  ┆ 99.0 │
# │ 4   ┆ 13.0 │
# └─────┴──────┘
df.fill_null(strategy: "forward")
# =>
# shape: (4, 2)
# ┌─────┬──────┐
# │ a   ┆ b    │
# │ --- ┆ ---  │
# │ i64 ┆ f64  │
# ╞═════╪══════╡
# │ 1   ┆ 0.5  │
# │ 2   ┆ 4.0  │
# │ 2   ┆ 4.0  │
# │ 4   ┆ 13.0 │
# └─────┴──────┘
df.fill_null(strategy: "max")
# =>
# shape: (4, 2)
# ┌─────┬──────┐
# │ a   ┆ b    │
# │ --- ┆ ---  │
# │ i64 ┆ f64  │
# ╞═════╪══════╡
# │ 1   ┆ 0.5  │
# │ 2   ┆ 4.0  │
# │ 4   ┆ 13.0 │
# │ 4   ┆ 13.0 │
# └─────┴──────┘
df.fill_null(strategy: "zero")
# =>
# shape: (4, 2)
# ┌─────┬──────┐
# │ a   ┆ b    │
# │ --- ┆ ---  │
# │ i64 ┆ f64  │
# ╞═════╪══════╡
# │ 1   ┆ 0.5  │
# │ 2   ┆ 4.0  │
# │ 0   ┆ 0.0  │
# │ 4   ┆ 13.0 │
# └─────┴──────┘

Parameters:

  • value (Numeric) (defaults to: nil)

    Value used to fill null values.

  • strategy (nil, "forward", "backward", "min", "max", "mean", "zero", "one") (defaults to: nil)

    Strategy used to fill null values.

  • limit (Integer) (defaults to: nil)

    Number of consecutive null values to fill when using the 'forward' or 'backward' strategy.

  • matches_supertype (Boolean) (defaults to: true)

    Fill all matching supertype of the fill value.

Returns:

  • (DataFrame)

# File 'lib/polars/data_frame.rb', line 3882

def fill_null(value = nil, strategy: nil, limit: nil, matches_supertype: true)
  _from_rbdf(
    lazy
      .fill_null(value, strategy: strategy, limit: limit, matches_supertype: matches_supertype)
      .collect(no_optimization: true)
      ._df
  )
end

#filter(predicate) ⇒ DataFrame

Filter the rows in the DataFrame based on a predicate expression.

Examples:

Filter on one condition:

df = Polars::DataFrame.new(
  {
    "foo" => [1, 2, 3],
    "bar" => [6, 7, 8],
    "ham" => ["a", "b", "c"]
  }
)
df.filter(Polars.col("foo") < 3)
# =>
# shape: (2, 3)
# ┌─────┬─────┬─────┐
# │ foo ┆ bar ┆ ham │
# │ --- ┆ --- ┆ --- │
# │ i64 ┆ i64 ┆ str │
# ╞═════╪═════╪═════╡
# │ 1   ┆ 6   ┆ a   │
# │ 2   ┆ 7   ┆ b   │
# └─────┴─────┴─────┘

Filter on multiple conditions:

df.filter((Polars.col("foo") < 3) & (Polars.col("ham") == "a"))
# =>
# shape: (1, 3)
# ┌─────┬─────┬─────┐
# │ foo ┆ bar ┆ ham │
# │ --- ┆ --- ┆ --- │
# │ i64 ┆ i64 ┆ str │
# ╞═════╪═════╪═════╡
# │ 1   ┆ 6   ┆ a   │
# └─────┴─────┴─────┘

Parameters:

  • predicate (Expr)

    Expression that evaluates to a boolean Series.

Returns:

  • (DataFrame)

# File 'lib/polars/data_frame.rb', line 1462

def filter(predicate)
  lazy.filter(predicate).collect
end

#flagsHash

Get flags that are set on the columns of this DataFrame.

Returns:

  • (Hash)


# File 'lib/polars/data_frame.rb', line 193

def flags
  columns.to_h { |name| [name, self[name].flags] }
end
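
A minimal sketch (values assumed; the flag keys shown come from Series#flags and are an assumption about the exact output shape):

df = Polars::DataFrame.new({"a" => [1, 2, 3]})
df.flags
# => {"a"=>{"SORTED_ASC"=>false, "SORTED_DESC"=>false}}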

#foldSeries

Apply a horizontal reduction on a DataFrame.

This can be used to effectively determine aggregations on a row level, and can be applied to any DataType that can be supercast (cast to a similar parent type).

Examples of the supercast rules when applying an arithmetic operation on two DataTypes:

  • i8 + str = str
  • f32 + i64 = f32
  • f32 + f64 = f64

Examples:

A horizontal sum operation:

df = Polars::DataFrame.new(
  {
    "a" => [2, 1, 3],
    "b" => [1, 2, 3],
    "c" => [1.0, 2.0, 3.0]
  }
)
df.fold { |s1, s2| s1 + s2 }
# =>
# shape: (3,)
# Series: 'a' [f64]
# [
#         4.0
#         5.0
#         9.0
# ]

A horizontal minimum operation:

df = Polars::DataFrame.new({"a" => [2, 1, 3], "b" => [1, 2, 3], "c" => [1.0, 2.0, 3.0]})
df.fold { |s1, s2| s1.zip_with(s1 < s2, s2) }
# =>
# shape: (3,)
# Series: 'a' [f64]
# [
#         1.0
#         1.0
#         3.0
# ]

A horizontal string concatenation:

df = Polars::DataFrame.new(
  {
    "a" => ["foo", "bar", nil],
    "b" => [1, 2, 3],
    "c" => [1.0, 2.0, 3.0]
  }
)
df.fold { |s1, s2| s1 + s2 }
# =>
# shape: (3,)
# Series: 'a' [str]
# [
#         "foo11.0"
#         "bar22.0"
#         null
# ]

A horizontal boolean or, similar to a row-wise .any:

df = Polars::DataFrame.new(
  {
    "a" => [false, false, true],
    "b" => [false, true, false]
  }
)
df.fold { |s1, s2| s1 | s2 }
# =>
# shape: (3,)
# Series: 'a' [bool]
# [
#         false
#         true
#         true
# ]

Returns:

  • (Series)

# File 'lib/polars/data_frame.rb', line 5446

def fold
  acc = to_series(0)

  1.upto(width - 1) do |i|
    acc = yield(acc, to_series(i))
  end
  acc
end

#gather_every(n, offset = 0) ⇒ DataFrame Also known as: take_every

Take every nth row in the DataFrame and return as a new DataFrame.

Examples:

s = Polars::DataFrame.new({"a" => [1, 2, 3, 4], "b" => [5, 6, 7, 8]})
s.gather_every(2)
# =>
# shape: (2, 2)
# ┌─────┬─────┐
# │ a   ┆ b   │
# │ --- ┆ --- │
# │ i64 ┆ i64 │
# ╞═════╪═════╡
# │ 1   ┆ 5   │
# │ 3   ┆ 7   │
# └─────┴─────┘

Returns:

  • (DataFrame)

# File 'lib/polars/data_frame.rb', line 5837

def gather_every(n, offset = 0)
  select(F.col("*").gather_every(n, offset))
end
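
The offset argument, not shown above, skips rows before the stride starts; continuing the example:

s.gather_every(2, 1)
# rows at indices 1 and 3: "a" => [2, 4], "b" => [6, 8]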

#get_column(name) ⇒ Series

Get a single column as Series by name.

Examples:

df = Polars::DataFrame.new({"foo" => [1, 2, 3], "bar" => [4, 5, 6]})
df.get_column("foo")
# =>
# shape: (3,)
# Series: 'foo' [i64]
# [
#         1
#         2
#         3
# ]

Parameters:

  • name (String)

    Name of the column to retrieve.

Returns:

  • (Series)

# File 'lib/polars/data_frame.rb', line 3799

def get_column(name)
  self[name]
end

#get_column_index(name) ⇒ Integer Also known as: find_idx_by_name

Find the index of a column by name.

Examples:

df = Polars::DataFrame.new(
  {"foo" => [1, 2, 3], "bar" => [6, 7, 8], "ham" => ["a", "b", "c"]}
)
df.get_column_index("ham")
# => 2

Parameters:

  • name (String)

    Name of the column to find.

Returns:

  • (Integer)

# File 'lib/polars/data_frame.rb', line 1669

def get_column_index(name)
  _df.get_column_index(name)
end

#get_columnsArray

Get the DataFrame as an Array of Series.

Examples:

df = Polars::DataFrame.new({"foo" => [1, 2, 3], "bar" => [4, 5, 6]})
df.get_columns
# =>
# [shape: (3,)
# Series: 'foo' [i64]
# [
#         1
#         2
#         3
# ], shape: (3,)
# Series: 'bar' [i64]
# [
#         4
#         5
#         6
# ]]
df = Polars::DataFrame.new(
  {
    "a" => [1, 2, 3, 4],
    "b" => [0.5, 4, 10, 13],
    "c" => [true, true, false, true]
  }
)
df.get_columns
# =>
# [shape: (4,)
# Series: 'a' [i64]
# [
#         1
#         2
#         3
#         4
# ], shape: (4,)
# Series: 'b' [f64]
# [
#         0.5
#         4.0
#         10.0
#         13.0
# ], shape: (4,)
# Series: 'c' [bool]
# [
#         true
#         true
#         false
#         true
# ]]

Returns:

  • (Array)

# File 'lib/polars/data_frame.rb', line 3777

def get_columns
  _df.get_columns.map { |s| Utils.wrap_s(s) }
end

#group_by(by, maintain_order: false) ⇒ GroupBy Also known as: groupby, group

Start a group by operation.

Examples:

df = Polars::DataFrame.new(
  {
    "a" => ["a", "b", "a", "b", "b", "c"],
    "b" => [1, 2, 3, 4, 5, 6],
    "c" => [6, 5, 4, 3, 2, 1]
  }
)
df.group_by("a").agg(Polars.col("b").sum).sort("a")
# =>
# shape: (3, 2)
# ┌─────┬─────┐
# │ a   ┆ b   │
# │ --- ┆ --- │
# │ str ┆ i64 │
# ╞═════╪═════╡
# │ a   ┆ 4   │
# │ b   ┆ 11  │
# │ c   ┆ 6   │
# └─────┴─────┘

Parameters:

  • by (Object)

    Column(s) to group by.

  • maintain_order (Boolean) (defaults to: false)

    Make sure that the order of the groups remains consistent. This is more expensive than a default group by. Note that this only works in expression aggregations.

Returns:

  • (GroupBy)

# File 'lib/polars/data_frame.rb', line 2383

def group_by(by, maintain_order: false)
  if !Utils.bool?(maintain_order)
    raise TypeError, "invalid input for group_by arg `maintain_order`: #{maintain_order}."
  end
  GroupBy.new(
    self,
    by,
    maintain_order: maintain_order
  )
end

#group_by_dynamic(index_column, every:, period: nil, offset: nil, truncate: true, include_boundaries: false, closed: "left", by: nil, start_by: "window") ⇒ DataFrame Also known as: groupby_dynamic

Group based on a time value (or index value of type :i32, :i64).

Time windows are calculated and rows are assigned to windows. Unlike a normal group by, a row can be a member of multiple groups. The time/index window could be seen as a rolling window, with a window size determined by dates/times/values instead of slots in the DataFrame.

A window is defined by:

  • every: interval of the window
  • period: length of the window
  • offset: offset of the window

The every, period and offset arguments are created with the following string language:

  • 1ns (1 nanosecond)
  • 1us (1 microsecond)
  • 1ms (1 millisecond)
  • 1s (1 second)
  • 1m (1 minute)
  • 1h (1 hour)
  • 1d (1 day)
  • 1w (1 week)
  • 1mo (1 calendar month)
  • 1y (1 calendar year)
  • 1i (1 index count)

Or combine them: "3d12h4m25s" # 3 days, 12 hours, 4 minutes, and 25 seconds

In case of a group_by_dynamic on an integer column, the windows are defined by:

  • "1i" # length 1
  • "10i" # length 10

Examples:

df = Polars::DataFrame.new(
  {
    "time" => Polars.datetime_range(
      DateTime.new(2021, 12, 16),
      DateTime.new(2021, 12, 16, 3),
      "30m",
      time_unit: "us",
      eager: true
    ),
    "n" => 0..6
  }
)
# =>
# shape: (7, 2)
# ┌─────────────────────┬─────┐
# │ time                ┆ n   │
# │ ---                 ┆ --- │
# │ datetime[μs]        ┆ i64 │
# ╞═════════════════════╪═════╡
# │ 2021-12-16 00:00:00 ┆ 0   │
# │ 2021-12-16 00:30:00 ┆ 1   │
# │ 2021-12-16 01:00:00 ┆ 2   │
# │ 2021-12-16 01:30:00 ┆ 3   │
# │ 2021-12-16 02:00:00 ┆ 4   │
# │ 2021-12-16 02:30:00 ┆ 5   │
# │ 2021-12-16 03:00:00 ┆ 6   │
# └─────────────────────┴─────┘

Group by windows of 1 hour starting at 2021-12-16 00:00:00.

df.group_by_dynamic("time", every: "1h", closed: "right").agg(
  [
    Polars.col("time").min.alias("time_min"),
    Polars.col("time").max.alias("time_max")
  ]
)
# =>
# shape: (4, 3)
# ┌─────────────────────┬─────────────────────┬─────────────────────┐
# │ time                ┆ time_min            ┆ time_max            │
# │ ---                 ┆ ---                 ┆ ---                 │
# │ datetime[μs]        ┆ datetime[μs]        ┆ datetime[μs]        │
# ╞═════════════════════╪═════════════════════╪═════════════════════╡
# │ 2021-12-15 23:00:00 ┆ 2021-12-16 00:00:00 ┆ 2021-12-16 00:00:00 │
# │ 2021-12-16 00:00:00 ┆ 2021-12-16 00:30:00 ┆ 2021-12-16 01:00:00 │
# │ 2021-12-16 01:00:00 ┆ 2021-12-16 01:30:00 ┆ 2021-12-16 02:00:00 │
# │ 2021-12-16 02:00:00 ┆ 2021-12-16 02:30:00 ┆ 2021-12-16 03:00:00 │
# └─────────────────────┴─────────────────────┴─────────────────────┘

The window boundaries can also be added to the aggregation result.

df.group_by_dynamic(
  "time", every: "1h", include_boundaries: true, closed: "right"
).agg([Polars.col("time").count.alias("time_count")])
# =>
# shape: (4, 4)
# ┌─────────────────────┬─────────────────────┬─────────────────────┬────────────┐
# │ _lower_boundary     ┆ _upper_boundary     ┆ time                ┆ time_count │
# │ ---                 ┆ ---                 ┆ ---                 ┆ ---        │
# │ datetime[μs]        ┆ datetime[μs]        ┆ datetime[μs]        ┆ u32        │
# ╞═════════════════════╪═════════════════════╪═════════════════════╪════════════╡
# │ 2021-12-15 23:00:00 ┆ 2021-12-16 00:00:00 ┆ 2021-12-15 23:00:00 ┆ 1          │
# │ 2021-12-16 00:00:00 ┆ 2021-12-16 01:00:00 ┆ 2021-12-16 00:00:00 ┆ 2          │
# │ 2021-12-16 01:00:00 ┆ 2021-12-16 02:00:00 ┆ 2021-12-16 01:00:00 ┆ 2          │
# │ 2021-12-16 02:00:00 ┆ 2021-12-16 03:00:00 ┆ 2021-12-16 02:00:00 ┆ 2          │
# └─────────────────────┴─────────────────────┴─────────────────────┴────────────┘

When closed="left", should not include right end of interval.

df.group_by_dynamic("time", every: "1h", closed: "left").agg(
  [
    Polars.col("time").count.alias("time_count"),
    Polars.col("time").alias("time_agg_list")
  ]
)
# =>
# shape: (4, 3)
# ┌─────────────────────┬────────────┬─────────────────────────────────┐
# │ time                ┆ time_count ┆ time_agg_list                   │
# │ ---                 ┆ ---        ┆ ---                             │
# │ datetime[μs]        ┆ u32        ┆ list[datetime[μs]]              │
# ╞═════════════════════╪════════════╪═════════════════════════════════╡
# │ 2021-12-16 00:00:00 ┆ 2          ┆ [2021-12-16 00:00:00, 2021-12-… │
# │ 2021-12-16 01:00:00 ┆ 2          ┆ [2021-12-16 01:00:00, 2021-12-… │
# │ 2021-12-16 02:00:00 ┆ 2          ┆ [2021-12-16 02:00:00, 2021-12-… │
# │ 2021-12-16 03:00:00 ┆ 1          ┆ [2021-12-16 03:00:00]           │
# └─────────────────────┴────────────┴─────────────────────────────────┘

When closed="both" the time values at the window boundaries belong to 2 groups.

df.group_by_dynamic("time", every: "1h", closed: "both").agg(
  [Polars.col("time").count.alias("time_count")]
)
# =>
# shape: (5, 2)
# ┌─────────────────────┬────────────┐
# │ time                ┆ time_count │
# │ ---                 ┆ ---        │
# │ datetime[μs]        ┆ u32        │
# ╞═════════════════════╪════════════╡
# │ 2021-12-15 23:00:00 ┆ 1          │
# │ 2021-12-16 00:00:00 ┆ 3          │
# │ 2021-12-16 01:00:00 ┆ 3          │
# │ 2021-12-16 02:00:00 ┆ 3          │
# │ 2021-12-16 03:00:00 ┆ 1          │
# └─────────────────────┴────────────┘

Dynamic group bys can also be combined with grouping on normal keys.

df = Polars::DataFrame.new(
  {
    "time" => Polars.datetime_range(
      DateTime.new(2021, 12, 16),
      DateTime.new(2021, 12, 16, 3),
      "30m",
      time_unit: "us",
      eager: true
    ),
    "groups" => ["a", "a", "a", "b", "b", "a", "a"]
  }
)
df.group_by_dynamic(
  "time",
  every: "1h",
  closed: "both",
  by: "groups",
  include_boundaries: true
).agg([Polars.col("time").count.alias("time_count")])
# =>
# shape: (7, 5)
# ┌────────┬─────────────────────┬─────────────────────┬─────────────────────┬────────────┐
# │ groups ┆ _lower_boundary     ┆ _upper_boundary     ┆ time                ┆ time_count │
# │ ---    ┆ ---                 ┆ ---                 ┆ ---                 ┆ ---        │
# │ str    ┆ datetime[μs]        ┆ datetime[μs]        ┆ datetime[μs]        ┆ u32        │
# ╞════════╪═════════════════════╪═════════════════════╪═════════════════════╪════════════╡
# │ a      ┆ 2021-12-15 23:00:00 ┆ 2021-12-16 00:00:00 ┆ 2021-12-15 23:00:00 ┆ 1          │
# │ a      ┆ 2021-12-16 00:00:00 ┆ 2021-12-16 01:00:00 ┆ 2021-12-16 00:00:00 ┆ 3          │
# │ a      ┆ 2021-12-16 01:00:00 ┆ 2021-12-16 02:00:00 ┆ 2021-12-16 01:00:00 ┆ 1          │
# │ a      ┆ 2021-12-16 02:00:00 ┆ 2021-12-16 03:00:00 ┆ 2021-12-16 02:00:00 ┆ 2          │
# │ a      ┆ 2021-12-16 03:00:00 ┆ 2021-12-16 04:00:00 ┆ 2021-12-16 03:00:00 ┆ 1          │
# │ b      ┆ 2021-12-16 01:00:00 ┆ 2021-12-16 02:00:00 ┆ 2021-12-16 01:00:00 ┆ 2          │
# │ b      ┆ 2021-12-16 02:00:00 ┆ 2021-12-16 03:00:00 ┆ 2021-12-16 02:00:00 ┆ 1          │
# └────────┴─────────────────────┴─────────────────────┴─────────────────────┴────────────┘

Dynamic group by on an index column.

df = Polars::DataFrame.new(
  {
    "idx" => Polars.arange(0, 6, eager: true),
    "A" => ["A", "A", "B", "B", "B", "C"]
  }
)
df.group_by_dynamic(
  "idx",
  every: "2i",
  period: "3i",
  include_boundaries: true,
  closed: "right"
).agg(Polars.col("A").alias("A_agg_list"))
# =>
# shape: (4, 4)
# ┌─────────────────┬─────────────────┬─────┬─────────────────┐
# │ _lower_boundary ┆ _upper_boundary ┆ idx ┆ A_agg_list      │
# │ ---             ┆ ---             ┆ --- ┆ ---             │
# │ i64             ┆ i64             ┆ i64 ┆ list[str]       │
# ╞═════════════════╪═════════════════╪═════╪═════════════════╡
# │ -2              ┆ 1               ┆ -2  ┆ ["A", "A"]      │
# │ 0               ┆ 3               ┆ 0   ┆ ["A", "B", "B"] │
# │ 2               ┆ 5               ┆ 2   ┆ ["B", "B", "C"] │
# │ 4               ┆ 7               ┆ 4   ┆ ["C"]           │
# └─────────────────┴─────────────────┴─────┴─────────────────┘

Parameters:

  • index_column

    Column used to group based on the time window. Often of type Date/Datetime. This column must be sorted in ascending order; if not, the output will not make sense.

    In case of a dynamic group by on indices, dtype needs to be one of :i32, :i64. Note that :i32 gets temporarily cast to :i64, so if performance matters use an :i64 column.

  • every

    Interval of the window.

  • period (defaults to: nil)

    Length of the window, if nil it is equal to 'every'.

  • offset (defaults to: nil)

    Offset of the window. If nil and period is nil, it will be equal to negative every.

  • truncate (defaults to: true)

    Truncate the time value to the window lower bound.

  • include_boundaries (defaults to: false)

    Add the lower and upper bound of the window to the "_lower_boundary" and "_upper_boundary" columns. This will impact performance because it's harder to parallelize.

  • closed ("right", "left", "both", "none") (defaults to: "left")

    Define whether the temporal window interval is closed or not.

  • by (defaults to: nil)

    Also group by this column or these columns.

  • start_by ('window', 'datapoint', 'monday', 'tuesday', 'wednesday', 'thursday', 'friday', 'saturday', 'sunday') (defaults to: "window")

    The strategy to determine the start of the first window by.

    • 'window': Start by taking the earliest timestamp, truncating it with every, and then adding offset. Note that weekly windows start on Monday.
    • 'datapoint': Start from the first encountered data point.
    • a day of the week (only takes effect if every contains 'w'):

      • 'monday': Start the window on the Monday before the first data point.
      • 'tuesday': Start the window on the Tuesday before the first data point.
      • ...
      • 'sunday': Start the window on the Sunday before the first data point.

    The resulting window is then shifted back until the earliest datapoint is in or in front of it.

Returns:

  • (DataFrame)

# File 'lib/polars/data_frame.rb', line 2739

def group_by_dynamic(
  index_column,
  every:,
  period: nil,
  offset: nil,
  truncate: true,
  include_boundaries: false,
  closed: "left",
  by: nil,
  start_by: "window"
)
  DynamicGroupBy.new(
    self,
    index_column,
    every,
    period,
    offset,
    truncate,
    include_boundaries,
    closed,
    by,
    start_by
  )
end

#hash_rows(seed: 0, seed_1: nil, seed_2: nil, seed_3: nil) ⇒ Series

Hash and combine the rows in this DataFrame.

The hash value is of type :u64.

Examples:

df = Polars::DataFrame.new(
  {
    "foo" => [1, nil, 3, 4],
    "ham" => ["a", "b", nil, "d"]
  }
)
df.hash_rows(seed: 42)
# =>
# shape: (4,)
# Series: '' [u64]
# [
#         4238614331852490969
#         17976148875586754089
#         4702262519505526977
#         18144177983981041107
# ]

Parameters:

  • seed (Integer) (defaults to: 0)

    Random seed parameter. Defaults to 0.

  • seed_1 (Integer) (defaults to: nil)

    Random seed parameter. Defaults to seed if not set.

  • seed_2 (Integer) (defaults to: nil)

    Random seed parameter. Defaults to seed if not set.

  • seed_3 (Integer) (defaults to: nil)

    Random seed parameter. Defaults to seed if not set.

Returns:

  • (Series)

# File 'lib/polars/data_frame.rb', line 5874

def hash_rows(seed: 0, seed_1: nil, seed_2: nil, seed_3: nil)
  k0 = seed
  k1 = seed_1.nil? ? seed : seed_1
  k2 = seed_2.nil? ? seed : seed_2
  k3 = seed_3.nil? ? seed : seed_3
  Utils.wrap_s(_df.hash_rows(k0, k1, k2, k3))
end

#head(n = 5) ⇒ DataFrame

Get the first n rows.

Examples:

df = Polars::DataFrame.new(
  {
    "foo" => [1, 2, 3, 4, 5],
    "bar" => [6, 7, 8, 9, 10],
    "ham" => ["a", "b", "c", "d", "e"]
  }
)
df.head(3)
# =>
# shape: (3, 3)
# ┌─────┬─────┬─────┐
# │ foo ┆ bar ┆ ham │
# │ --- ┆ --- ┆ --- │
# │ i64 ┆ i64 ┆ str │
# ╞═════╪═════╪═════╡
# │ 1   ┆ 6   ┆ a   │
# │ 2   ┆ 7   ┆ b   │
# │ 3   ┆ 8   ┆ c   │
# └─────┴─────┴─────┘

Parameters:

  • n (Integer) (defaults to: 5)

    Number of rows to return.

Returns:

  • (DataFrame)

# File 'lib/polars/data_frame.rb', line 2153

def head(n = 5)
  _from_rbdf(_df.head(n))
end

#heightInteger Also known as: count, length, size

Get the height of the DataFrame.

Examples:

df = Polars::DataFrame.new({"foo" => [1, 2, 3, 4, 5]})
df.height
# => 5

Returns:

  • (Integer)


# File 'lib/polars/data_frame.rb', line 102

def height
  _df.height
end

#hstack(columns, in_place: false) ⇒ DataFrame

Return a new DataFrame grown horizontally by stacking multiple Series to it.

Examples:

df = Polars::DataFrame.new(
  {
    "foo" => [1, 2, 3],
    "bar" => [6, 7, 8],
    "ham" => ["a", "b", "c"]
  }
)
x = Polars::Series.new("apple", [10, 20, 30])
df.hstack([x])
# =>
# shape: (3, 4)
# ┌─────┬─────┬─────┬───────┐
# │ foo ┆ bar ┆ ham ┆ apple │
# │ --- ┆ --- ┆ --- ┆ ---   │
# │ i64 ┆ i64 ┆ str ┆ i64   │
# ╞═════╪═════╪═════╪═══════╡
# │ 1   ┆ 6   ┆ a   ┆ 10    │
# │ 2   ┆ 7   ┆ b   ┆ 20    │
# │ 3   ┆ 8   ┆ c   ┆ 30    │
# └─────┴─────┴─────┴───────┘

Parameters:

  • columns (Object)

    Series to stack.

  • in_place (Boolean) (defaults to: false)

    Modify in place.

Returns:

  • (DataFrame)

# File 'lib/polars/data_frame.rb', line 3397

def hstack(columns, in_place: false)
  if !columns.is_a?(::Array)
    columns = columns.get_columns
  end
  if in_place
    _df.hstack_mut(columns.map(&:_s))
    self
  else
    _from_rbdf(_df.hstack(columns.map(&:_s)))
  end
end

#include?(name) ⇒ Boolean

Check if DataFrame includes column.

Returns:

  • (Boolean)

# File 'lib/polars/data_frame.rb', line 335

def include?(name)
  columns.include?(name)
end
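
A minimal sketch (values assumed):

df = Polars::DataFrame.new({"foo" => [1, 2, 3]})
df.include?("foo")  # => true
df.include?("baz")  # => false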

#insert_column(index, series) ⇒ DataFrame Also known as: insert_at_idx

Insert a Series at a certain column index. This operation is in place.

Examples:

df = Polars::DataFrame.new({"foo" => [1, 2, 3], "bar" => [4, 5, 6]})
s = Polars::Series.new("baz", [97, 98, 99])
df.insert_column(1, s)
# =>
# shape: (3, 3)
# ┌─────┬─────┬─────┐
# │ foo ┆ baz ┆ bar │
# │ --- ┆ --- ┆ --- │
# │ i64 ┆ i64 ┆ i64 │
# ╞═════╪═════╪═════╡
# │ 1   ┆ 97  ┆ 4   │
# │ 2   ┆ 98  ┆ 5   │
# │ 3   ┆ 99  ┆ 6   │
# └─────┴─────┴─────┘
df = Polars::DataFrame.new(
  {
    "a" => [1, 2, 3, 4],
    "b" => [0.5, 4, 10, 13],
    "c" => [true, true, false, true]
  }
)
s = Polars::Series.new("d", [-2.5, 15, 20.5, 0])
df.insert_column(3, s)
# =>
# shape: (4, 4)
# ┌─────┬──────┬───────┬──────┐
# │ a   ┆ b    ┆ c     ┆ d    │
# │ --- ┆ ---  ┆ ---   ┆ ---  │
# │ i64 ┆ f64  ┆ bool  ┆ f64  │
# ╞═════╪══════╪═══════╪══════╡
# │ 1   ┆ 0.5  ┆ true  ┆ -2.5 │
# │ 2   ┆ 4.0  ┆ true  ┆ 15.0 │
# │ 3   ┆ 10.0 ┆ false ┆ 20.5 │
# │ 4   ┆ 13.0 ┆ true  ┆ 0.0  │
# └─────┴──────┴───────┴──────┘

Parameters:

  • index (Integer)

    Index at which to insert the new Series column.

  • series (Series)

    Series to insert.

Returns:

  • (DataFrame)

# File 'lib/polars/data_frame.rb', line 1415

def insert_column(index, series)
  if index < 0
    index = columns.length + index
  end
  _df.insert_column(index, series._s)
  self
end

#interpolateDataFrame

Interpolate intermediate values. The interpolation method is linear.

Examples:

df = Polars::DataFrame.new(
  {
    "foo" => [1, nil, 9, 10],
    "bar" => [6, 7, 9, nil],
    "baz" => [1, nil, nil, 9]
  }
)
df.interpolate
# =>
# shape: (4, 3)
# ┌──────┬──────┬──────────┐
# │ foo  ┆ bar  ┆ baz      │
# │ ---  ┆ ---  ┆ ---      │
# │ f64  ┆ f64  ┆ f64      │
# ╞══════╪══════╪══════════╡
# │ 1.0  ┆ 6.0  ┆ 1.0      │
# │ 5.0  ┆ 7.0  ┆ 3.666667 │
# │ 9.0  ┆ 9.0  ┆ 6.333333 │
# │ 10.0 ┆ null ┆ 9.0      │
# └──────┴──────┴──────────┘

Returns:

  • (DataFrame)

# File 'lib/polars/data_frame.rb', line 5907

def interpolate
  select(F.col("*").interpolate)
end

#is_duplicatedSeries

Get a mask of all duplicated rows in this DataFrame.

Examples:

df = Polars::DataFrame.new(
  {
    "a" => [1, 2, 3, 1],
    "b" => ["x", "y", "z", "x"],
  }
)
df.is_duplicated
# =>
# shape: (4,)
# Series: '' [bool]
# [
#         true
#         false
#         false
#         true
# ]

Returns:

  • (Series)

# File 'lib/polars/data_frame.rb', line 4439

def is_duplicated
  Utils.wrap_s(_df.is_duplicated)
end

#is_emptyBoolean Also known as: empty?

Check if the dataframe is empty.

Examples:

df = Polars::DataFrame.new({"foo" => [1, 2, 3], "bar" => [4, 5, 6]})
df.is_empty
# => false
df.filter(Polars.col("foo") > 99).is_empty
# => true

Returns:

  • (Boolean)

# File 'lib/polars/data_frame.rb', line 5921

def is_empty
  height == 0
end

#is_uniqueSeries

Get a mask of all unique rows in this DataFrame.

Examples:

df = Polars::DataFrame.new(
  {
    "a" => [1, 2, 3, 1],
    "b" => ["x", "y", "z", "x"]
  }
)
df.is_unique
# =>
# shape: (4,)
# Series: '' [bool]
# [
#         false
#         true
#         true
#         false
# ]

Returns:

  • (Series)

# File 'lib/polars/data_frame.rb', line 4464

def is_unique
  Utils.wrap_s(_df.is_unique)
end

#itemObject

Return the dataframe as a scalar.

Equivalent to df[0,0], with a check that the shape is (1,1).

Examples:

df = Polars::DataFrame.new({"a" => [1, 2, 3], "b" => [4, 5, 6]})
result = df.select((Polars.col("a") * Polars.col("b")).sum)
result.item
# => 32

Returns:

  • (Object)

# File 'lib/polars/data_frame.rb', line 543

def item
  if shape != [1, 1]
    raise ArgumentError, "Can only call .item if the dataframe is of shape (1,1), dataframe is of shape #{shape}"
  end
  self[0, 0]
end

#iter_columnsObject

Note:

Consider whether you can use all instead. If you can, it will be more efficient.

Returns an iterator over the columns of this DataFrame.

Examples:

df = Polars::DataFrame.new(
  {
    "a" => [1, 3, 5],
    "b" => [2, 4, 6]
  }
)
df.iter_columns.map { |s| s.name }
# => ["a", "b"]

If you're using this to modify a dataframe's columns, e.g.

# Do NOT do this
Polars::DataFrame.new(df.iter_columns.map { |column| column * 2 })
# =>
# shape: (3, 2)
# ┌─────┬─────┐
# │ a   ┆ b   │
# │ --- ┆ --- │
# │ i64 ┆ i64 │
# ╞═════╪═════╡
# │ 2   ┆ 4   │
# │ 6   ┆ 8   │
# │ 10  ┆ 12  │
# └─────┴─────┘

then consider whether you can use all instead:

df.select(Polars.all * 2)
# =>
# shape: (3, 2)
# ┌─────┬─────┐
# │ a   ┆ b   │
# │ --- ┆ --- │
# │ i64 ┆ i64 │
# ╞═════╪═════╡
# │ 2   ┆ 4   │
# │ 6   ┆ 8   │
# │ 10  ┆ 12  │
# └─────┴─────┘

Returns:



5766
5767
5768
5769
5770
5771
5772
# File 'lib/polars/data_frame.rb', line 5766

def iter_columns
  return to_enum(:iter_columns) unless block_given?

  _df.get_columns.each do |s|
    yield Utils.wrap_s(s)
  end
end

#iter_rows(named: false, buffer_size: 500, &block) ⇒ Object

Returns an iterator over the DataFrame of rows of Ruby-native values.

Examples:

df = Polars::DataFrame.new(
  {
    "a" => [1, 3, 5],
    "b" => [2, 4, 6]
  }
)
df.iter_rows.map { |row| row[0] }
# => [1, 3, 5]
df.iter_rows(named: true).map { |row| row["b"] }
# => [2, 4, 6]

Parameters:

  • named (Boolean) (defaults to: false)

    Return hashes instead of arrays. The hashes are a mapping of column name to row value. This is more expensive than returning an array, but allows for accessing values by column name.

  • buffer_size (Integer) (defaults to: 500)

    Determines the number of rows that are buffered internally while iterating over the data. You should only modify this in very specific cases where the default value is not a good fit for your access pattern, as the speedup from buffering is significant (~2-4x). Setting this value to zero disables row buffering.

Returns:



5669
5670
5671
5672
5673
5674
5675
5676
5677
5678
5679
5680
5681
5682
5683
5684
5685
5686
5687
5688
5689
5690
5691
5692
5693
5694
5695
5696
5697
5698
5699
5700
# File 'lib/polars/data_frame.rb', line 5669

def iter_rows(named: false, buffer_size: 500, &block)
  return to_enum(:iter_rows, named: named, buffer_size: buffer_size) unless block_given?

  # load into the local namespace for a modest performance boost in the hot loops
  columns = self.columns

  # note: buffering rows results in a 2-4x speedup over individual calls
  # to ".row(i)", so it should only be disabled in extremely specific cases.
  if buffer_size
    offset = 0
    while offset < height
      zerocopy_slice = slice(offset, buffer_size)
      rows_chunk = zerocopy_slice.rows(named: false)
      if named
        rows_chunk.each do |row|
          yield columns.zip(row).to_h
        end
      else
        rows_chunk.each(&block)
      end
      offset += buffer_size
    end
  elsif named
    height.times do |i|
      yield columns.zip(row(i)).to_h
    end
  else
    height.times do |i|
      yield row(i)
    end
  end
end

#iter_slices(n_rows: 10_000) ⇒ Object

Returns a non-copying iterator of slices over the underlying DataFrame.

Examples:

df = Polars::DataFrame.new(
  {
    "a" => 0...17_500,
    "b" => Date.new(2023, 1, 1),
    "c" => "klmnoopqrstuvwxyz"
  },
  schema_overrides: {"a" => Polars::Int32}
)
df.iter_slices.map.with_index do |frame, idx|
  "#{frame.class.name}:[#{idx}]:#{frame.length}"
end
# => ["Polars::DataFrame:[0]:10000", "Polars::DataFrame:[1]:7500"]

Parameters:

  • n_rows (Integer) (defaults to: 10_000)

    Determines the number of rows contained in each DataFrame slice.

Returns:



5794
5795
5796
5797
5798
5799
5800
5801
5802
# File 'lib/polars/data_frame.rb', line 5794

def iter_slices(n_rows: 10_000)
  return to_enum(:iter_slices, n_rows: n_rows) unless block_given?

  offset = 0
  while offset < height
    yield slice(offset, n_rows)
    offset += n_rows
  end
end

#join(other, left_on: nil, right_on: nil, on: nil, how: "inner", suffix: "_right", validate: "m:m", join_nulls: false, coalesce: nil, maintain_order: nil) ⇒ DataFrame

Join in SQL-like fashion.

Examples:

df = Polars::DataFrame.new(
  {
    "foo" => [1, 2, 3],
    "bar" => [6.0, 7.0, 8.0],
    "ham" => ["a", "b", "c"]
  }
)
other_df = Polars::DataFrame.new(
  {
    "apple" => ["x", "y", "z"],
    "ham" => ["a", "b", "d"]
  }
)
df.join(other_df, on: "ham")
# =>
# shape: (2, 4)
# ┌─────┬─────┬─────┬───────┐
# │ foo ┆ bar ┆ ham ┆ apple │
# │ --- ┆ --- ┆ --- ┆ ---   │
# │ i64 ┆ f64 ┆ str ┆ str   │
# ╞═════╪═════╪═════╪═══════╡
# │ 1   ┆ 6.0 ┆ a   ┆ x     │
# │ 2   ┆ 7.0 ┆ b   ┆ y     │
# └─────┴─────┴─────┴───────┘
df.join(other_df, on: "ham", how: "full")
# =>
# shape: (4, 5)
# ┌──────┬──────┬──────┬───────┬───────────┐
# │ foo  ┆ bar  ┆ ham  ┆ apple ┆ ham_right │
# │ ---  ┆ ---  ┆ ---  ┆ ---   ┆ ---       │
# │ i64  ┆ f64  ┆ str  ┆ str   ┆ str       │
# ╞══════╪══════╪══════╪═══════╪═══════════╡
# │ 1    ┆ 6.0  ┆ a    ┆ x     ┆ a         │
# │ 2    ┆ 7.0  ┆ b    ┆ y     ┆ b         │
# │ null ┆ null ┆ null ┆ z     ┆ d         │
# │ 3    ┆ 8.0  ┆ c    ┆ null  ┆ null      │
# └──────┴──────┴──────┴───────┴───────────┘
df.join(other_df, on: "ham", how: "left")
# =>
# shape: (3, 4)
# ┌─────┬─────┬─────┬───────┐
# │ foo ┆ bar ┆ ham ┆ apple │
# │ --- ┆ --- ┆ --- ┆ ---   │
# │ i64 ┆ f64 ┆ str ┆ str   │
# ╞═════╪═════╪═════╪═══════╡
# │ 1   ┆ 6.0 ┆ a   ┆ x     │
# │ 2   ┆ 7.0 ┆ b   ┆ y     │
# │ 3   ┆ 8.0 ┆ c   ┆ null  │
# └─────┴─────┴─────┴───────┘
df.join(other_df, on: "ham", how: "semi")
# =>
# shape: (2, 3)
# ┌─────┬─────┬─────┐
# │ foo ┆ bar ┆ ham │
# │ --- ┆ --- ┆ --- │
# │ i64 ┆ f64 ┆ str │
# ╞═════╪═════╪═════╡
# │ 1   ┆ 6.0 ┆ a   │
# │ 2   ┆ 7.0 ┆ b   │
# └─────┴─────┴─────┘
df.join(other_df, on: "ham", how: "anti")
# =>
# shape: (1, 3)
# ┌─────┬─────┬─────┐
# │ foo ┆ bar ┆ ham │
# │ --- ┆ --- ┆ --- │
# │ i64 ┆ f64 ┆ str │
# ╞═════╪═════╪═════╡
# │ 3   ┆ 8.0 ┆ c   │
# └─────┴─────┴─────┘

Parameters:

  • other (DataFrame)

    DataFrame to join with.

  • left_on (Object) (defaults to: nil)

    Name(s) of the left join column(s).

  • right_on (Object) (defaults to: nil)

    Name(s) of the right join column(s).

  • on (Object) (defaults to: nil)

    Name(s) of the join columns in both DataFrames.

  • how ("inner", "left", "full", "semi", "anti", "cross") (defaults to: "inner")

    Join strategy.

  • suffix (String) (defaults to: "_right")

    Suffix to append to columns with a duplicate name.

  • validate ('m:m', 'm:1', '1:m', '1:1') (defaults to: "m:m")

    Checks if join is of specified type.

    • many_to_many - "m:m": default, does not result in checks
    • one_to_one - "1:1": check if join keys are unique in both left and right datasets
    • one_to_many - "1:m": check if join keys are unique in left dataset
    • many_to_one - "m:1": check if join keys are unique in right dataset
  • join_nulls (Boolean) (defaults to: false)

    Join on null values. By default null values will never produce matches.

  • coalesce (Boolean) (defaults to: nil)

    Coalescing behavior (merging of join columns).

    • nil: -> join specific.
    • true: -> Always coalesce join columns.
    • false: -> Never coalesce join columns. Note that joining on any expression other than a plain col will turn off coalescing.
  • maintain_order ('none', 'left', 'right', 'left_right', 'right_left') (defaults to: nil)

    Which DataFrame row order to preserve, if any. Do not rely on any observed ordering without explicitly setting this parameter, as your code may break in a future release. Not specifying any ordering can improve performance. Supported for inner, left, right, and full joins; see the sketch after this list.

    • none: No specific ordering is desired. The ordering might differ across Polars versions or even between different runs.
    • left: Preserves the order of the left DataFrame.
    • right: Preserves the order of the right DataFrame.
    • left_right: First preserves the order of the left DataFrame, then the right.
    • right_left: First preserves the order of the right DataFrame, then the left.

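For example, pinning the result to the left frame's row order (a minimal sketch; on this data it matches the plain left join shown above):

df.join(other_df, on: "ham", how: "left", maintain_order: "left")
# =>
# shape: (3, 4)
# ┌─────┬─────┬─────┬───────┐
# │ foo ┆ bar ┆ ham ┆ apple │
# │ --- ┆ --- ┆ --- ┆ ---   │
# │ i64 ┆ f64 ┆ str ┆ str   │
# ╞═════╪═════╪═════╪═══════╡
# │ 1   ┆ 6.0 ┆ a   ┆ x     │
# │ 2   ┆ 7.0 ┆ b   ┆ y     │
# │ 3   ┆ 8.0 ┆ c   ┆ null  │
# └─────┴─────┴─────┴───────┘
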
Returns:



3128
3129
3130
3131
3132
3133
3134
3135
3136
3137
3138
3139
3140
3141
3142
3143
3144
3145
3146
3147
3148
3149
3150
3151
3152
3153
3154
# File 'lib/polars/data_frame.rb', line 3128

def join(
  other,
  left_on: nil,
  right_on: nil,
  on: nil,
  how: "inner",
  suffix: "_right",
  validate: "m:m",
  join_nulls: false,
  coalesce: nil,
  maintain_order: nil
)
  lazy
    .join(
      other.lazy,
      left_on: left_on,
      right_on: right_on,
      on: on,
      how: how,
      suffix: suffix,
      validate: validate,
      join_nulls: join_nulls,
      coalesce: coalesce,
      maintain_order: maintain_order
    )
    .collect(no_optimization: true)
end

#join_asof(other, left_on: nil, right_on: nil, on: nil, by_left: nil, by_right: nil, by: nil, strategy: "backward", suffix: "_right", tolerance: nil, allow_parallel: true, force_parallel: false, coalesce: true, allow_exact_matches: true, check_sortedness: true) ⇒ DataFrame

Perform an asof join.

This is similar to a left-join except that we match on nearest key rather than equal keys.

Both DataFrames must be sorted by the asof_join key.

For each row in the left DataFrame:

  • A "backward" search selects the last row in the right DataFrame whose 'on' key is less than or equal to the left's key.
  • A "forward" search selects the first row in the right DataFrame whose 'on' key is greater than or equal to the left's key.

The default is "backward".

Examples:

gdp = Polars::DataFrame.new(
  {
    "date" => [
      DateTime.new(2016, 1, 1),
      DateTime.new(2017, 1, 1),
      DateTime.new(2018, 1, 1),
      DateTime.new(2019, 1, 1),
    ],  # note record date: Jan 1st (sorted!)
    "gdp" => [4164, 4411, 4566, 4696]
  }
).set_sorted("date")
population = Polars::DataFrame.new(
  {
    "date" => [
      DateTime.new(2016, 5, 12),
      DateTime.new(2017, 5, 12),
      DateTime.new(2018, 5, 12),
      DateTime.new(2019, 5, 12),
    ],  # note record date: May 12th (sorted!)
    "population" => [82.19, 82.66, 83.12, 83.52]
  }
).set_sorted("date")
population.join_asof(
  gdp, left_on: "date", right_on: "date", strategy: "backward"
)
# =>
# shape: (4, 3)
# ┌─────────────────────┬────────────┬──────┐
# │ date                ┆ population ┆ gdp  │
# │ ---                 ┆ ---        ┆ ---  │
# │ datetime[ns]        ┆ f64        ┆ i64  │
# ╞═════════════════════╪════════════╪══════╡
# │ 2016-05-12 00:00:00 ┆ 82.19      ┆ 4164 │
# │ 2017-05-12 00:00:00 ┆ 82.66      ┆ 4411 │
# │ 2018-05-12 00:00:00 ┆ 83.12      ┆ 4566 │
# │ 2019-05-12 00:00:00 ┆ 83.52      ┆ 4696 │
# └─────────────────────┴────────────┴──────┘

Parameters:

  • other (DataFrame)

    DataFrame to join with.

  • left_on (String) (defaults to: nil)

    Join column of the left DataFrame.

  • right_on (String) (defaults to: nil)

    Join column of the right DataFrame.

  • on (String) (defaults to: nil)

    Join column of both DataFrames. If set, left_on and right_on should be nil.

  • by_left (Object) (defaults to: nil)

    Join on these columns before doing the asof join.

  • by_right (Object) (defaults to: nil)

    Join on these columns before doing the asof join.

  • by (Object) (defaults to: nil)

    Join on these columns before doing the asof join.

  • strategy ("backward", "forward") (defaults to: "backward")

    Join strategy.

  • suffix (String) (defaults to: "_right")

    Suffix to append to columns with a duplicate name.

  • tolerance (Object) (defaults to: nil)

    Numeric tolerance. If set, the join only matches keys within this distance. If the asof join is done on columns of dtype "Date", "Datetime", "Duration" or "Time", use the following string language (see the sketch after this parameter list):

    • 1ns (1 nanosecond)
    • 1us (1 microsecond)
    • 1ms (1 millisecond)
    • 1s (1 second)
    • 1m (1 minute)
    • 1h (1 hour)
    • 1d (1 day)
    • 1w (1 week)
    • 1mo (1 calendar month)
    • 1y (1 calendar year)
    • 1i (1 index count)

    Or combine them: "3d12h4m25s" # 3 days, 12 hours, 4 minutes, and 25 seconds

  • allow_parallel (Boolean) (defaults to: true)

    Allow the physical plan to optionally evaluate the computation of both DataFrames up to the join in parallel.

  • force_parallel (Boolean) (defaults to: false)

    Force the physical plan to evaluate the computation of both DataFrames up to the join in parallel.

  • coalesce (Boolean) (defaults to: true)

    Coalescing behavior (merging of join columns).

    • true: -> Always coalesce join columns.
    • false: -> Never coalesce join columns. Note that joining on any expression other than a plain col will turn off coalescing.
  • allow_exact_matches (Boolean) (defaults to: true)

    Whether exact matches are valid join predicates.

    • If true, allow matching with the same on value (i.e. less-than-or-equal-to / greater-than-or-equal-to).
    • If false, don't match the same on value (i.e., strictly less-than / strictly greater-than).
  • check_sortedness (Boolean) (defaults to: true)

    Check the sortedness of the asof keys. If the keys are not sorted, Polars raises an error, or, when the 'by' argument is used, a warning. This might become a hard error in the future.

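For example, using the string language above with the gdp/population frames (a minimal sketch; the gap between the two date columns is roughly 130 days, so a "200d" tolerance keeps every match, while a tighter tolerance such as "90d" would yield null gdp values instead):

population.join_asof(gdp, on: "date", strategy: "backward", tolerance: "200d")
# => same result as the plain backward join in the example above
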
Returns:



2962
2963
2964
2965
2966
2967
2968
2969
2970
2971
2972
2973
2974
2975
2976
2977
2978
2979
2980
2981
2982
2983
2984
2985
2986
2987
2988
2989
2990
2991
2992
2993
2994
2995
2996
2997
2998
# File 'lib/polars/data_frame.rb', line 2962

def join_asof(
  other,
  left_on: nil,
  right_on: nil,
  on: nil,
  by_left: nil,
  by_right: nil,
  by: nil,
  strategy: "backward",
  suffix: "_right",
  tolerance: nil,
  allow_parallel: true,
  force_parallel: false,
  coalesce: true,
  allow_exact_matches: true,
  check_sortedness: true
)
  lazy
    .join_asof(
      other.lazy,
      left_on: left_on,
      right_on: right_on,
      on: on,
      by_left: by_left,
      by_right: by_right,
      by: by,
      strategy: strategy,
      suffix: suffix,
      tolerance: tolerance,
      allow_parallel: allow_parallel,
      force_parallel: force_parallel,
      coalesce: coalesce,
      allow_exact_matches: allow_exact_matches,
      check_sortedness: check_sortedness
    )
    .collect(no_optimization: true)
end

#join_where(other, *predicates, suffix: "_right") ⇒ DataFrame

Note:

The row order of the input DataFrames is not preserved.

Note:

This functionality is experimental. It may be changed at any point without it being considered a breaking change.

Perform a join based on one or multiple (in)equality predicates.

This performs an inner join, so only rows where all predicates are true are included in the result, and a row from either DataFrame may be included multiple times in the result.

Examples:

Join two dataframes together based on two predicates which get AND-ed together.

east = Polars::DataFrame.new(
  {
    "id": [100, 101, 102],
    "dur": [120, 140, 160],
    "rev": [12, 14, 16],
    "cores": [2, 8, 4]
  }
)
west = Polars::DataFrame.new(
  {
    "t_id": [404, 498, 676, 742],
    "time": [90, 130, 150, 170],
    "cost": [9, 13, 15, 16],
    "cores": [4, 2, 1, 4]
  }
)
east.join_where(
  west,
  Polars.col("dur") < Polars.col("time"),
  Polars.col("rev") < Polars.col("cost")
)
# =>
# shape: (5, 8)
# ┌─────┬─────┬─────┬───────┬──────┬──────┬──────┬─────────────┐
# │ id  ┆ dur ┆ rev ┆ cores ┆ t_id ┆ time ┆ cost ┆ cores_right │
# │ --- ┆ --- ┆ --- ┆ ---   ┆ ---  ┆ ---  ┆ ---  ┆ ---         │
# │ i64 ┆ i64 ┆ i64 ┆ i64   ┆ i64  ┆ i64  ┆ i64  ┆ i64         │
# ╞═════╪═════╪═════╪═══════╪══════╪══════╪══════╪═════════════╡
# │ 100 ┆ 120 ┆ 12  ┆ 2     ┆ 498  ┆ 130  ┆ 13   ┆ 2           │
# │ 100 ┆ 120 ┆ 12  ┆ 2     ┆ 676  ┆ 150  ┆ 15   ┆ 1           │
# │ 100 ┆ 120 ┆ 12  ┆ 2     ┆ 742  ┆ 170  ┆ 16   ┆ 4           │
# │ 101 ┆ 140 ┆ 14  ┆ 8     ┆ 676  ┆ 150  ┆ 15   ┆ 1           │
# │ 101 ┆ 140 ┆ 14  ┆ 8     ┆ 742  ┆ 170  ┆ 16   ┆ 4           │
# └─────┴─────┴─────┴───────┴──────┴──────┴──────┴─────────────┘

To OR them together, use a single expression and the | operator.

east.join_where(
  west,
  (Polars.col("dur") < Polars.col("time")) | (Polars.col("rev") < Polars.col("cost"))
)
# =>
# shape: (6, 8)
# ┌─────┬─────┬─────┬───────┬──────┬──────┬──────┬─────────────┐
# │ id  ┆ dur ┆ rev ┆ cores ┆ t_id ┆ time ┆ cost ┆ cores_right │
# │ --- ┆ --- ┆ --- ┆ ---   ┆ ---  ┆ ---  ┆ ---  ┆ ---         │
# │ i64 ┆ i64 ┆ i64 ┆ i64   ┆ i64  ┆ i64  ┆ i64  ┆ i64         │
# ╞═════╪═════╪═════╪═══════╪══════╪══════╪══════╪═════════════╡
# │ 100 ┆ 120 ┆ 12  ┆ 2     ┆ 498  ┆ 130  ┆ 13   ┆ 2           │
# │ 100 ┆ 120 ┆ 12  ┆ 2     ┆ 676  ┆ 150  ┆ 15   ┆ 1           │
# │ 100 ┆ 120 ┆ 12  ┆ 2     ┆ 742  ┆ 170  ┆ 16   ┆ 4           │
# │ 101 ┆ 140 ┆ 14  ┆ 8     ┆ 676  ┆ 150  ┆ 15   ┆ 1           │
# │ 101 ┆ 140 ┆ 14  ┆ 8     ┆ 742  ┆ 170  ┆ 16   ┆ 4           │
# │ 102 ┆ 160 ┆ 16  ┆ 4     ┆ 742  ┆ 170  ┆ 16   ┆ 4           │
# └─────┴─────┴─────┴───────┴──────┴──────┴──────┴─────────────┘

Parameters:

  • other (DataFrame)

    DataFrame to join with.

  • predicates (Array)

    (In)Equality condition to join the two tables on. When a column name occurs in both tables, the proper suffix must be applied in the predicate.

  • suffix (String) (defaults to: "_right")

    Suffix to append to columns with a duplicate name.

Returns:



3235
3236
3237
3238
3239
3240
3241
3242
3243
3244
3245
3246
3247
3248
3249
# File 'lib/polars/data_frame.rb', line 3235

def join_where(
  other,
  *predicates,
  suffix: "_right"
)
  Utils.require_same_type(self, other)

  lazy
  .join_where(
    other.lazy,
    *predicates,
    suffix: suffix
  )
  .collect(_eager: true)
end

#lazy ⇒ LazyFrame

Start a lazy query from this point.

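Examples:

A minimal sketch of building a query lazily and collecting once at the end (filter, select, and collect are the usual LazyFrame calls):

df = Polars::DataFrame.new({"a" => [1, 2, 3], "b" => [4, 5, 6]})
df.lazy.filter(Polars.col("a") > 1).select(Polars.col("b") * 2).collect
# =>
# shape: (2, 1)
# ┌─────┐
# │ b   │
# │ --- │
# │ i64 │
# ╞═════╡
# │ 10  │
# │ 12  │
# └─────┘
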
Returns:



4471
4472
4473
# File 'lib/polars/data_frame.rb', line 4471

def lazy
  wrap_ldf(_df.lazy)
end

#limit(n = 5) ⇒ DataFrame

Get the first n rows.

Alias for #head.

Examples:

df = Polars::DataFrame.new(
  {"foo" => [1, 2, 3, 4, 5, 6], "bar" => ["a", "b", "c", "d", "e", "f"]}
)
df.limit(4)
# =>
# shape: (4, 2)
# ┌─────┬─────┐
# │ foo ┆ bar │
# │ --- ┆ --- │
# │ i64 ┆ str │
# ╞═════╪═════╡
# │ 1   ┆ a   │
# │ 2   ┆ b   │
# │ 3   ┆ c   │
# │ 4   ┆ d   │
# └─────┴─────┘

Parameters:

  • n (Integer) (defaults to: 5)

    Number of rows to return.

Returns:



2122
2123
2124
# File 'lib/polars/data_frame.rb', line 2122

def limit(n = 5)
  head(n)
end

#map_rows(return_dtype: nil, inference_size: 256, &f) ⇒ Object Also known as: apply

Note:

The frame-level apply cannot track column names (as the UDF is a black-box that may arbitrarily drop, rearrange, transform, or add new columns); if you want to apply a UDF such that column names are preserved, you should use the expression-level apply syntax instead.

Apply a custom/user-defined function (UDF) over the rows of the DataFrame.

The UDF will receive each row as a tuple of values: udf(row).

Implementing logic using a Ruby function is almost always significantly slower and more memory intensive than implementing the same logic using the native expression API because:

  • The native expression engine runs in Rust; UDFs run in Ruby.
  • Use of Ruby UDFs forces the DataFrame to be materialized in memory.
  • Polars-native expressions can be parallelised (UDFs cannot).
  • Polars-native expressions can be logically optimised (UDFs cannot).

Wherever possible you should strongly prefer the native expression API to achieve the best performance.

Examples:

df = Polars::DataFrame.new({"foo" => [1, 2, 3], "bar" => [-1, 5, 8]})

Return a DataFrame by mapping each row to a tuple:

df.map_rows { |t| [t[0] * 2, t[1] * 3] }
# =>
# shape: (3, 2)
# ┌──────────┬──────────┐
# │ column_0 ┆ column_1 │
# │ ---      ┆ ---      │
# │ i64      ┆ i64      │
# ╞══════════╪══════════╡
# │ 2        ┆ -3       │
# │ 4        ┆ 15       │
# │ 6        ┆ 24       │
# └──────────┴──────────┘

Return a Series by mapping each row to a scalar:

df.map_rows { |t| t[0] * 2 + t[1] }
# =>
# shape: (3, 1)
# ┌─────┐
# │ map │
# │ --- │
# │ i64 │
# ╞═════╡
# │ 1   │
# │ 9   │
# │ 14  │
# └─────┘

Parameters:

  • return_dtype (Symbol) (defaults to: nil)

    Output type of the operation. If none given, Polars tries to infer the type.

  • inference_size (Integer) (defaults to: 256)

    Only used in the case when the custom function returns rows. This uses the first n rows to determine the output schema

Returns:



3311
3312
3313
3314
3315
3316
3317
3318
# File 'lib/polars/data_frame.rb', line 3311

def map_rows(return_dtype: nil, inference_size: 256, &f)
  out, is_df = _df.map_rows(f, return_dtype, inference_size)
  if is_df
    _from_rbdf(out)
  else
    _from_rbdf(Utils.wrap_s(out).to_frame._df)
  end
end

#max ⇒ DataFrame

Aggregate the columns of this DataFrame to their maximum value.

Examples:

df = Polars::DataFrame.new(
  {
    "foo" => [1, 2, 3],
    "bar" => [6, 7, 8],
    "ham" => ["a", "b", "c"]
  }
)
df.max
# =>
# shape: (1, 3)
# ┌─────┬─────┬─────┐
# │ foo ┆ bar ┆ ham │
# │ --- ┆ --- ┆ --- │
# │ i64 ┆ i64 ┆ str │
# ╞═════╪═════╪═════╡
# │ 3   ┆ 8   ┆ c   │
# └─────┴─────┴─────┘

Returns:



4776
4777
4778
# File 'lib/polars/data_frame.rb', line 4776

def max
  lazy.max.collect(_eager: true)
end

#max_horizontal ⇒ Series

Get the maximum value horizontally across columns.

Examples:

df = Polars::DataFrame.new(
  {
    "foo" => [1, 2, 3],
    "bar" => [4.0, 5.0, 6.0]
  }
)
df.max_horizontal
# =>
# shape: (3,)
# Series: 'max' [f64]
# [
#         4.0
#         5.0
#         6.0
# ]

Returns:



4800
4801
4802
# File 'lib/polars/data_frame.rb', line 4800

def max_horizontal
  select(max: F.max_horizontal(F.all)).to_series
end

#mean ⇒ DataFrame

Aggregate the columns of this DataFrame to their mean value.

Examples:

df = Polars::DataFrame.new(
  {
    "foo" => [1, 2, 3],
    "bar" => [6, 7, 8],
    "ham" => ["a", "b", "c"]
  }
)
df.mean
# =>
# shape: (1, 3)
# ┌─────┬─────┬──────┐
# │ foo ┆ bar ┆ ham  │
# │ --- ┆ --- ┆ ---  │
# │ f64 ┆ f64 ┆ str  │
# ╞═════╪═════╪══════╡
# │ 2.0 ┆ 7.0 ┆ null │
# └─────┴─────┴──────┘

Returns:



4932
4933
4934
# File 'lib/polars/data_frame.rb', line 4932

def mean
  lazy.mean.collect(_eager: true)
end

#mean_horizontal(ignore_nulls: true) ⇒ Series

Take the mean of all values horizontally across columns.

Examples:

df = Polars::DataFrame.new(
  {
    "foo" => [1, 2, 3],
    "bar" => [4.0, 5.0, 6.0]
  }
)
df.mean_horizontal
# =>
# shape: (3,)
# Series: 'mean' [f64]
# [
#         2.5
#         3.5
#         4.5
# ]

Parameters:

  • ignore_nulls (Boolean) (defaults to: true)

    Ignore null values (default). If set to false, any null value in the input will lead to a null output.

Returns:



4960
4961
4962
4963
4964
# File 'lib/polars/data_frame.rb', line 4960

def mean_horizontal(ignore_nulls: true)
  select(
    mean: F.mean_horizontal(F.all, ignore_nulls: ignore_nulls)
  ).to_series
end

#median ⇒ DataFrame

Aggregate the columns of this DataFrame to their median value.

Examples:

df = Polars::DataFrame.new(
  {
    "foo" => [1, 2, 3],
    "bar" => [6, 7, 8],
    "ham" => ["a", "b", "c"]
  }
)
df.median
# =>
# shape: (1, 3)
# ┌─────┬─────┬──────┐
# │ foo ┆ bar ┆ ham  │
# │ --- ┆ --- ┆ ---  │
# │ f64 ┆ f64 ┆ str  │
# ╞═════╪═════╪══════╡
# │ 2.0 ┆ 7.0 ┆ null │
# └─────┴─────┴──────┘

Returns:



5070
5071
5072
# File 'lib/polars/data_frame.rb', line 5070

def median
  lazy.median.collect(_eager: true)
end

#merge_sorted(other, key) ⇒ DataFrame

Take two sorted DataFrames and merge them by the sorted key.

The output of this operation will also be sorted. It is the caller's responsibility to ensure that both frames are sorted by the key; otherwise, the output will not make sense.

The schemas of both DataFrames must be equal.

Examples:

df0 = Polars::DataFrame.new(
  {"name" => ["steve", "elise", "bob"], "age" => [42, 44, 18]}
).sort("age")
df1 = Polars::DataFrame.new(
  {"name" => ["anna", "megan", "steve", "thomas"], "age" => [21, 33, 42, 20]}
).sort("age")
df0.merge_sorted(df1, "age")
# =>
# shape: (7, 2)
# ┌────────┬─────┐
# │ name   ┆ age │
# │ ---    ┆ --- │
# │ str    ┆ i64 │
# ╞════════╪═════╡
# │ bob    ┆ 18  │
# │ thomas ┆ 20  │
# │ anna   ┆ 21  │
# │ megan  ┆ 33  │
# │ steve  ┆ 42  │
# │ steve  ┆ 42  │
# │ elise  ┆ 44  │
# └────────┴─────┘

Parameters:

  • other (DataFrame)

    Other DataFrame that must be merged

  • key (String)

    Key that is sorted.

Returns:



6036
6037
6038
# File 'lib/polars/data_frame.rb', line 6036

def merge_sorted(other, key)
  lazy.merge_sorted(other.lazy, key).collect(_eager: true)
end

#min ⇒ DataFrame

Aggregate the columns of this DataFrame to their minimum value.

Examples:

df = Polars::DataFrame.new(
  {
    "foo" => [1, 2, 3],
    "bar" => [6, 7, 8],
    "ham" => ["a", "b", "c"]
  }
)
df.min
# =>
# shape: (1, 3)
# ┌─────┬─────┬─────┐
# │ foo ┆ bar ┆ ham │
# │ --- ┆ --- ┆ --- │
# │ i64 ┆ i64 ┆ str │
# ╞═════╪═════╪═════╡
# │ 1   ┆ 6   ┆ a   │
# └─────┴─────┴─────┘

Returns:



4826
4827
4828
# File 'lib/polars/data_frame.rb', line 4826

def min
  lazy.min.collect(_eager: true)
end

#min_horizontal ⇒ Series

Get the minimum value horizontally across columns.

Examples:

df = Polars::DataFrame.new(
  {
    "foo" => [1, 2, 3],
    "bar" => [4.0, 5.0, 6.0]
  }
)
df.min_horizontal
# =>
# shape: (3,)
# Series: 'min' [f64]
# [
#         1.0
#         2.0
#         3.0
# ]

Returns:



4850
4851
4852
# File 'lib/polars/data_frame.rb', line 4850

def min_horizontal
  select(min: F.min_horizontal(F.all)).to_series
end

#n_chunks(strategy: "first") ⇒ Object

Get number of chunks used by the ChunkedArrays of this DataFrame.

Examples:

df = Polars::DataFrame.new(
  {
    "a" => [1, 2, 3, 4],
    "b" => [0.5, 4, 10, 13],
    "c" => [true, true, false, true]
  }
)
df.n_chunks
# => 1
df.n_chunks(strategy: "all")
# => [1, 1, 1]

Parameters:

  • strategy ("first", "all") (defaults to: "first")

    Return the number of chunks of the 'first' column, or 'all' columns in this DataFrame.

Returns:



4744
4745
4746
4747
4748
4749
4750
4751
4752
# File 'lib/polars/data_frame.rb', line 4744

def n_chunks(strategy: "first")
  if strategy == "first"
    _df.n_chunks
  elsif strategy == "all"
    get_columns.map(&:n_chunks)
  else
    raise ArgumentError, "Strategy: '{strategy}' not understood. Choose one of {{'first',  'all'}}"
  end
end

#n_unique(subset: nil) ⇒ DataFrame

Return the number of unique rows, or the number of unique row-subsets.

Examples:

df = Polars::DataFrame.new(
  {
    "a" => [1, 1, 2, 3, 4, 5],
    "b" => [0.5, 0.5, 1.0, 2.0, 3.0, 3.0],
    "c" => [true, true, true, false, true, true]
  }
)
df.n_unique
# => 5

Simple columns subset

df.n_unique(subset: ["b", "c"])
# => 4

Expression subset

df.n_unique(
  subset: [
    (Polars.col("a").floordiv(2)),
    (Polars.col("c") | (Polars.col("b") >= 2))
  ]
)
# => 3

Parameters:

  • subset (Object) (defaults to: nil)

    One or more columns/expressions that define what to count; omit to return the count of unique rows.

Returns:



5249
5250
5251
5252
5253
5254
5255
5256
5257
5258
5259
5260
5261
5262
5263
5264
5265
# File 'lib/polars/data_frame.rb', line 5249

def n_unique(subset: nil)
  if subset.is_a?(::String)
    subset = [Polars.col(subset)]
  elsif subset.is_a?(Expr)
    subset = [subset]
  end

  if subset.is_a?(::Array) && subset.length == 1
    expr = Utils.wrap_expr(Utils.parse_into_expression(subset[0], str_as_lit: false))
  else
    struct_fields = subset.nil? ? Polars.all : subset
    expr = Polars.struct(struct_fields)
  end

  df = lazy.select(expr.n_unique).collect
  df.is_empty ? 0 : df.row(0)[0]
end

#null_count ⇒ DataFrame

Create a new DataFrame that shows the null counts per column.

Examples:

df = Polars::DataFrame.new(
  {
    "foo" => [1, nil, 3],
    "bar" => [6, 7, nil],
    "ham" => ["a", "b", "c"]
  }
)
df.null_count
# =>
# shape: (1, 3)
# ┌─────┬─────┬─────┐
# │ foo ┆ bar ┆ ham │
# │ --- ┆ --- ┆ --- │
# │ u32 ┆ u32 ┆ u32 │
# ╞═════╪═════╪═════╡
# │ 1   ┆ 1   ┆ 0   │
# └─────┴─────┴─────┘

Returns:



5299
5300
5301
# File 'lib/polars/data_frame.rb', line 5299

def null_count
  _from_rbdf(_df.null_count)
end

#partition_by(groups, maintain_order: true, include_key: true, as_dict: false) ⇒ Object

Split into multiple DataFrames partitioned by groups.

Examples:

df = Polars::DataFrame.new(
  {
    "foo" => ["A", "A", "B", "B", "C"],
    "N" => [1, 2, 2, 4, 2],
    "bar" => ["k", "l", "m", "m", "l"]
  }
)
df.partition_by("foo", maintain_order: true)
# =>
# [shape: (2, 3)
# ┌─────┬─────┬─────┐
# │ foo ┆ N   ┆ bar │
# │ --- ┆ --- ┆ --- │
# │ str ┆ i64 ┆ str │
# ╞═════╪═════╪═════╡
# │ A   ┆ 1   ┆ k   │
# │ A   ┆ 2   ┆ l   │
# └─────┴─────┴─────┘, shape: (2, 3)
# ┌─────┬─────┬─────┐
# │ foo ┆ N   ┆ bar │
# │ --- ┆ --- ┆ --- │
# │ str ┆ i64 ┆ str │
# ╞═════╪═════╪═════╡
# │ B   ┆ 2   ┆ m   │
# │ B   ┆ 4   ┆ m   │
# └─────┴─────┴─────┘, shape: (1, 3)
# ┌─────┬─────┬─────┐
# │ foo ┆ N   ┆ bar │
# │ --- ┆ --- ┆ --- │
# │ str ┆ i64 ┆ str │
# ╞═════╪═════╪═════╡
# │ C   ┆ 2   ┆ l   │
# └─────┴─────┴─────┘]
df.partition_by("foo", maintain_order: true, as_dict: true)
# =>
# {"A"=>shape: (2, 3)
# ┌─────┬─────┬─────┐
# │ foo ┆ N   ┆ bar │
# │ --- ┆ --- ┆ --- │
# │ str ┆ i64 ┆ str │
# ╞═════╪═════╪═════╡
# │ A   ┆ 1   ┆ k   │
# │ A   ┆ 2   ┆ l   │
# └─────┴─────┴─────┘, "B"=>shape: (2, 3)
# ┌─────┬─────┬─────┐
# │ foo ┆ N   ┆ bar │
# │ --- ┆ --- ┆ --- │
# │ str ┆ i64 ┆ str │
# ╞═════╪═════╪═════╡
# │ B   ┆ 2   ┆ m   │
# │ B   ┆ 4   ┆ m   │
# └─────┴─────┴─────┘, "C"=>shape: (1, 3)
# ┌─────┬─────┬─────┐
# │ foo ┆ N   ┆ bar │
# │ --- ┆ --- ┆ --- │
# │ str ┆ i64 ┆ str │
# ╞═════╪═════╪═════╡
# │ C   ┆ 2   ┆ l   │
# └─────┴─────┴─────┘}

Parameters:

  • groups (Object)

    Groups to partition by.

  • maintain_order (Boolean) (defaults to: true)

    Keep predictable output order. This is slower as it requires an extra sort operation.

  • include_key (Boolean) (defaults to: true)

    Include the columns used to partition the DataFrame in the output.

  • as_dict (Boolean) (defaults to: false)

    If true, return the partitions in a hash keyed by the distinct group values instead of an array.

Returns:



4312
4313
4314
4315
4316
4317
4318
4319
4320
4321
4322
4323
4324
4325
4326
4327
4328
4329
4330
4331
4332
4333
4334
4335
4336
# File 'lib/polars/data_frame.rb', line 4312

def partition_by(groups, maintain_order: true, include_key: true, as_dict: false)
  if groups.is_a?(::String)
    groups = [groups]
  elsif !groups.is_a?(::Array)
    groups = Array(groups)
  end

  if as_dict
    out = {}
    if groups.length == 1
      _df.partition_by(groups, maintain_order, include_key).each do |df|
        df = _from_rbdf(df)
        out[df[groups][0, 0]] = df
      end
    else
      _df.partition_by(groups, maintain_order, include_key).each do |df|
        df = _from_rbdf(df)
        out[df[groups].row(0)] = df
      end
    end
    out
  else
    _df.partition_by(groups, maintain_order, include_key).map { |df| _from_rbdf(df) }
  end
end

#pipe(func, *args, **kwargs, &block) ⇒ Object

Note:

It is recommended to use LazyFrame when piping operations, in order to fully take advantage of query optimization and parallelization. See #lazy.

Offers a structured way to apply a sequence of user-defined functions (UDFs).

Examples:

cast_str_to_int = lambda do |data, col_name:|
  data.with_column(Polars.col(col_name).cast(:i64))
end

df = Polars::DataFrame.new({"a" => [1, 2, 3, 4], "b" => ["10", "20", "30", "40"]})
df.pipe(cast_str_to_int, col_name: "b")
# =>
# shape: (4, 2)
# ┌─────┬─────┐
# │ a   ┆ b   │
# │ --- ┆ --- │
# │ i64 ┆ i64 │
# ╞═════╪═════╡
# │ 1   ┆ 10  │
# │ 2   ┆ 20  │
# │ 3   ┆ 30  │
# │ 4   ┆ 40  │
# └─────┴─────┘

Parameters:

  • func (Object)

    Callable; will receive the frame as the first parameter, followed by any given args/kwargs.

  • args (Object)

    Arguments to pass to the UDF.

  • kwargs (Object)

    Keyword arguments to pass to the UDF.

Returns:



2315
2316
2317
# File 'lib/polars/data_frame.rb', line 2315

def pipe(func, *args, **kwargs, &block)
  func.call(self, *args, **kwargs, &block)
end

#pivot(on, index: nil, values: nil, aggregate_function: nil, maintain_order: true, sort_columns: false, separator: "_") ⇒ DataFrame

Create a spreadsheet-style pivot table as a DataFrame.

Examples:

df = Polars::DataFrame.new(
  {
    "foo" => ["one", "one", "two", "two", "one", "two"],
    "bar" => ["y", "y", "y", "x", "x", "x"],
    "baz" => [1, 2, 3, 4, 5, 6]
  }
)
df.pivot("bar", index: "foo", values: "baz", aggregate_function: "sum")
# =>
# shape: (2, 3)
# ┌─────┬─────┬─────┐
# │ foo ┆ y   ┆ x   │
# │ --- ┆ --- ┆ --- │
# │ str ┆ i64 ┆ i64 │
# ╞═════╪═════╪═════╡
# │ one ┆ 3   ┆ 5   │
# │ two ┆ 3   ┆ 10  │
# └─────┴─────┴─────┘

Parameters:

  • on (Object)

    Columns whose values will be used as the header of the output DataFrame

  • index (Object) (defaults to: nil)

    One or multiple keys to group by

  • values (Object) (defaults to: nil)

    Column values to aggregate. Can be multiple columns if the on argument contains multiple columns as well.

  • aggregate_function ("first", "sum", "max", "min", "mean", "median", "last", "count") (defaults to: nil)

    A predefined aggregate function str or an expression.

  • maintain_order (Object) (defaults to: true)

    Sort the grouped keys so that the output order is predictable.

  • sort_columns (Object) (defaults to: false)

    Sort the transposed columns by name. Default is by order of discovery.

  • separator (String) (defaults to: "_")

    Used as separator/delimiter in generated column names in case of multiple values columns.

Returns:



4001
4002
4003
4004
4005
4006
4007
4008
4009
4010
4011
4012
4013
4014
4015
4016
4017
4018
4019
4020
4021
4022
4023
4024
4025
4026
4027
4028
4029
4030
4031
4032
4033
4034
4035
4036
4037
4038
4039
4040
4041
4042
4043
4044
4045
4046
4047
4048
4049
4050
4051
4052
4053
4054
4055
4056
4057
# File 'lib/polars/data_frame.rb', line 4001

def pivot(
  on,
  index: nil,
  values: nil,
  aggregate_function: nil,
  maintain_order: true,
  sort_columns: false,
  separator: "_"
)
  index = Utils._expand_selectors(self, index)
  on = Utils._expand_selectors(self, on)
  if !values.nil?
    values = Utils._expand_selectors(self, values)
  end

  if aggregate_function.is_a?(::String)
    case aggregate_function
    when "first"
      aggregate_expr = F.element.first._rbexpr
    when "sum"
      aggregate_expr = F.element.sum._rbexpr
    when "max"
      aggregate_expr = F.element.max._rbexpr
    when "min"
      aggregate_expr = F.element.min._rbexpr
    when "mean"
      aggregate_expr = F.element.mean._rbexpr
    when "median"
      aggregate_expr = F.element.median._rbexpr
    when "last"
      aggregate_expr = F.element.last._rbexpr
    when "len"
      aggregate_expr = F.len._rbexpr
    when "count"
      warn "`aggregate_function: \"count\"` input for `pivot` is deprecated. Use `aggregate_function: \"len\"` instead."
      aggregate_expr = F.len._rbexpr
    else
      raise ArgumentError, "Argument aggregate fn: '#{aggregate_fn}' was not expected."
    end
  elsif aggregate_function.nil?
    aggregate_expr = nil
  else
    aggregate_expr = aggregate_function._rbexpr
  end

  _from_rbdf(
    _df.pivot_expr(
      on,
      index,
      values,
      maintain_order,
      sort_columns,
      aggregate_expr,
      separator
    )
  )
end

#plot(x = nil, y = nil, type: nil, group: nil, stacked: nil) ⇒ Vega::LiteChart Originally defined in module Plot

Plot data.

Returns:

  • (Vega::LiteChart)

Raises:

  • (ArgumentError)

#product ⇒ DataFrame

Aggregate the columns of this DataFrame to their product values.

Examples:

df = Polars::DataFrame.new(
  {
    "a" => [1, 2, 3],
    "b" => [0.5, 4, 10],
    "c" => [true, true, false]
  }
)
df.product
# =>
# shape: (1, 3)
# ┌─────┬──────┬─────┐
# │ a   ┆ b    ┆ c   │
# │ --- ┆ ---  ┆ --- │
# │ i64 ┆ f64  ┆ i64 │
# ╞═════╪══════╪═════╡
# │ 6   ┆ 20.0 ┆ 0   │
# └─────┴──────┴─────┘

Returns:



5096
5097
5098
# File 'lib/polars/data_frame.rb', line 5096

def product
  select(Polars.all.product)
end

#quantile(quantile, interpolation: "nearest") ⇒ DataFrame

Aggregate the columns of this DataFrame to their quantile value.

Examples:

df = Polars::DataFrame.new(
  {
    "foo" => [1, 2, 3],
    "bar" => [6, 7, 8],
    "ham" => ["a", "b", "c"]
  }
)
df.quantile(0.5, interpolation: "nearest")
# =>
# shape: (1, 3)
# ┌─────┬─────┬──────┐
# │ foo ┆ bar ┆ ham  │
# │ --- ┆ --- ┆ ---  │
# │ f64 ┆ f64 ┆ str  │
# ╞═════╪═════╪══════╡
# │ 2.0 ┆ 7.0 ┆ null │
# └─────┴─────┴──────┘

Parameters:

  • quantile (Float)

    Quantile between 0.0 and 1.0.

  • interpolation ("nearest", "higher", "lower", "midpoint", "linear") (defaults to: "nearest")

    Interpolation method.

Returns:



5127
5128
5129
# File 'lib/polars/data_frame.rb', line 5127

def quantile(quantile, interpolation: "nearest")
  lazy.quantile(quantile, interpolation: interpolation).collect(_eager: true)
end

#rechunk ⇒ DataFrame

Rechunk the data in this DataFrame to a contiguous allocation. This will make sure all subsequent operations have optimal and predictable performance.

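Examples:

A rough illustration (assuming vstack leaves the appended rows in a separate chunk, as it typically does):

df = Polars::DataFrame.new({"a" => [1, 2]}).vstack(
  Polars::DataFrame.new({"a" => [3, 4]})
)
df.n_chunks
# => 2
df.rechunk.n_chunks
# => 1
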
Returns:



5273
5274
5275
# File 'lib/polars/data_frame.rb', line 5273

def rechunk
  _from_rbdf(_df.rechunk)
end

#remove(*predicates, **constraints) ⇒ DataFrame

Remove rows, dropping those that match the given predicate expression(s).

The original order of the remaining rows is preserved.

Rows where the filter predicate does not evaluate to true are retained (this includes rows where the predicate evaluates as null).

Examples:

Remove rows matching a condition:

df = Polars::DataFrame.new(
  {
    "foo" => [2, 3, nil, 4, 0],
    "bar" => [5, 6, nil, nil, 0],
    "ham" => ["a", "b", nil, "c", "d"]
  }
)
df.remove(Polars.col("bar") >= 5)
# =>
# shape: (3, 3)
# ┌──────┬──────┬──────┐
# │ foo  ┆ bar  ┆ ham  │
# │ ---  ┆ ---  ┆ ---  │
# │ i64  ┆ i64  ┆ str  │
# ╞══════╪══════╪══════╡
# │ null ┆ null ┆ null │
# │ 4    ┆ null ┆ c    │
# │ 0    ┆ 0    ┆ d    │
# └──────┴──────┴──────┘

Discard rows based on multiple conditions, combined with and/or operators:

df.remove(
  (Polars.col("foo") >= 0) & (Polars.col("bar") >= 0),
)
# =>
# shape: (2, 3)
# ┌──────┬──────┬──────┐
# │ foo  ┆ bar  ┆ ham  │
# │ ---  ┆ ---  ┆ ---  │
# │ i64  ┆ i64  ┆ str  │
# ╞══════╪══════╪══════╡
# │ null ┆ null ┆ null │
# │ 4    ┆ null ┆ c    │
# └──────┴──────┴──────┘
df.remove(
  (Polars.col("foo") >= 0) | (Polars.col("bar") >= 0),
)
# =>
# shape: (1, 3)
# ┌──────┬──────┬──────┐
# │ foo  ┆ bar  ┆ ham  │
# │ ---  ┆ ---  ┆ ---  │
# │ i64  ┆ i64  ┆ str  │
# ╞══════╪══════╪══════╡
# │ null ┆ null ┆ null │
# └──────┴──────┴──────┘

Provide multiple constraints using *args syntax:

df.remove(
  Polars.col("ham").is_not_null,
  Polars.col("bar") >= 0
)
# =>
# shape: (2, 3)
# ┌──────┬──────┬──────┐
# │ foo  ┆ bar  ┆ ham  │
# │ ---  ┆ ---  ┆ ---  │
# │ i64  ┆ i64  ┆ str  │
# ╞══════╪══════╪══════╡
# │ null ┆ null ┆ null │
# │ 4    ┆ null ┆ c    │
# └──────┴──────┴──────┘

Provide constraint(s) using **kwargs syntax:

df.remove(foo: 0, bar: 0)
# =>
# shape: (4, 3)
# ┌──────┬──────┬──────┐
# │ foo  ┆ bar  ┆ ham  │
# │ ---  ┆ ---  ┆ ---  │
# │ i64  ┆ i64  ┆ str  │
# ╞══════╪══════╪══════╡
# │ 2    ┆ 5    ┆ a    │
# │ 3    ┆ 6    ┆ b    │
# │ null ┆ null ┆ null │
# │ 4    ┆ null ┆ c    │
# └──────┴──────┴──────┘

Remove rows by comparing two columns against each other:

df.remove(
  Polars.col("foo").ne_missing(Polars.col("bar"))
)
# =>
# shape: (2, 3)
# ┌──────┬──────┬──────┐
# │ foo  ┆ bar  ┆ ham  │
# │ ---  ┆ ---  ┆ ---  │
# │ i64  ┆ i64  ┆ str  │
# ╞══════╪══════╪══════╡
# │ null ┆ null ┆ null │
# │ 0    ┆ 0    ┆ d    │
# └──────┴──────┴──────┘

Parameters:

  • predicates (Array)

    Expression that evaluates to a boolean Series.

  • constraints (Hash)

    Column filters; use name: value to filter columns using the supplied value. Each constraint behaves the same as Polars.col(name).eq(value), and is implicitly joined with the other filter conditions using &.

Returns:



1577
1578
1579
1580
1581
1582
1583
1584
# File 'lib/polars/data_frame.rb', line 1577

def remove(
  *predicates,
  **constraints
)
  lazy
  .remove(*predicates, **constraints)
  .collect(_eager: true)
end

#rename(mapping, strict: true) ⇒ DataFrame

Rename column names.

Examples:

df = Polars::DataFrame.new(
  {
    "foo" => [1, 2, 3],
    "bar" => [6, 7, 8],
    "ham" => ["a", "b", "c"]
  }
)
df.rename({"foo" => "apple"})
# =>
# shape: (3, 3)
# ┌───────┬─────┬─────┐
# │ apple ┆ bar ┆ ham │
# │ ---   ┆ --- ┆ --- │
# │ i64   ┆ i64 ┆ str │
# ╞═══════╪═════╪═════╡
# │ 1     ┆ 6   ┆ a   │
# │ 2     ┆ 7   ┆ b   │
# │ 3     ┆ 8   ┆ c   │
# └───────┴─────┴─────┘

Parameters:

  • mapping (Hash)

    Key value pairs that map from old name to new name.

  • strict (Boolean) (defaults to: true)

    Validate that all column names exist in the current schema, and throw an exception if any do not. (Note that this parameter is a no-op when passing a function to mapping).

Returns:



1364
1365
1366
# File 'lib/polars/data_frame.rb', line 1364

def rename(mapping, strict: true)
  lazy.rename(mapping, strict: strict).collect(no_optimization: true)
end

#replace(column, new_col) ⇒ DataFrame

Replace a column by a new Series.

Examples:

df = Polars::DataFrame.new({"foo" => [1, 2, 3], "bar" => [4, 5, 6]})
s = Polars::Series.new([10, 20, 30])
df.replace("foo", s)
# =>
# shape: (3, 2)
# ┌─────┬─────┐
# │ foo ┆ bar │
# │ --- ┆ --- │
# │ i64 ┆ i64 │
# ╞═════╪═════╡
# │ 10  ┆ 4   │
# │ 20  ┆ 5   │
# │ 30  ┆ 6   │
# └─────┴─────┘

Parameters:

  • column (String)

    Column to replace.

  • new_col (Series)

    New column to insert.

Returns:



2055
2056
2057
2058
# File 'lib/polars/data_frame.rb', line 2055

def replace(column, new_col)
  _df.replace(column.to_s, new_col._s)
  self
end

#replace_column(index, series) ⇒ DataFrame Also known as: replace_at_idx

Replace a column at an index location.

Examples:

df = Polars::DataFrame.new(
  {
    "foo" => [1, 2, 3],
    "bar" => [6, 7, 8],
    "ham" => ["a", "b", "c"]
  }
)
s = Polars::Series.new("apple", [10, 20, 30])
df.replace_column(0, s)
# =>
# shape: (3, 3)
# ┌───────┬─────┬─────┐
# │ apple ┆ bar ┆ ham │
# │ ---   ┆ --- ┆ --- │
# │ i64   ┆ i64 ┆ str │
# ╞═══════╪═════╪═════╡
# │ 10    ┆ 6   ┆ a   │
# │ 20    ┆ 7   ┆ b   │
# │ 30    ┆ 8   ┆ c   │
# └───────┴─────┴─────┘

Parameters:

  • index (Integer)

    Column index.

  • series (Series)

    Series that will replace the column.

Returns:



1704
1705
1706
1707
1708
1709
1710
# File 'lib/polars/data_frame.rb', line 1704

def replace_column(index, series)
  if index < 0
    index = columns.length + index
  end
  _df.replace_column(index, series._s)
  self
end

#reverse ⇒ DataFrame

Reverse the DataFrame.

Examples:

df = Polars::DataFrame.new(
  {
    "key" => ["a", "b", "c"],
    "val" => [1, 2, 3]
  }
)
df.reverse
# =>
# shape: (3, 2)
# ┌─────┬─────┐
# │ key ┆ val │
# │ --- ┆ --- │
# │ str ┆ i64 │
# ╞═════╪═════╡
# │ c   ┆ 3   │
# │ b   ┆ 2   │
# │ a   ┆ 1   │
# └─────┴─────┘

Returns:



1329
1330
1331
# File 'lib/polars/data_frame.rb', line 1329

def reverse
  select(Polars.col("*").reverse)
end

#rolling(index_column:, period:, offset: nil, closed: "right", by: nil) ⇒ RollingGroupBy Also known as: groupby_rolling, group_by_rolling

Create rolling groups based on a time column.

Also works for index values of type :i32 or :i64.

Unlike a dynamic group by, the windows are determined by the individual values rather than by constant intervals. For constant intervals, use group_by_dynamic.

The period and offset arguments are created either from a timedelta, or by using the following string language:

  • 1ns (1 nanosecond)
  • 1us (1 microsecond)
  • 1ms (1 millisecond)
  • 1s (1 second)
  • 1m (1 minute)
  • 1h (1 hour)
  • 1d (1 day)
  • 1w (1 week)
  • 1mo (1 calendar month)
  • 1y (1 calendar year)
  • 1i (1 index count)

Or combine them: "3d12h4m25s" # 3 days, 12 hours, 4 minutes, and 25 seconds

In case of a group_by_rolling on an integer column, the windows are defined by (see the second example below):

  • "1i" # length 1
  • "10i" # length 10

Examples:

dates = [
  "2020-01-01 13:45:48",
  "2020-01-01 16:42:13",
  "2020-01-01 16:45:09",
  "2020-01-02 18:12:48",
  "2020-01-03 19:45:32",
  "2020-01-08 23:16:43"
]
df = Polars::DataFrame.new({"dt" => dates, "a" => [3, 7, 5, 9, 2, 1]}).with_column(
  Polars.col("dt").str.strptime(Polars::Datetime).set_sorted
)
df.rolling(index_column: "dt", period: "2d").agg(
  [
    Polars.sum("a").alias("sum_a"),
    Polars.min("a").alias("min_a"),
    Polars.max("a").alias("max_a")
  ]
)
# =>
# shape: (6, 4)
# ┌─────────────────────┬───────┬───────┬───────┐
# │ dt                  ┆ sum_a ┆ min_a ┆ max_a │
# │ ---                 ┆ ---   ┆ ---   ┆ ---   │
# │ datetime[μs]        ┆ i64   ┆ i64   ┆ i64   │
# ╞═════════════════════╪═══════╪═══════╪═══════╡
# │ 2020-01-01 13:45:48 ┆ 3     ┆ 3     ┆ 3     │
# │ 2020-01-01 16:42:13 ┆ 10    ┆ 3     ┆ 7     │
# │ 2020-01-01 16:45:09 ┆ 15    ┆ 3     ┆ 7     │
# │ 2020-01-02 18:12:48 ┆ 24    ┆ 3     ┆ 9     │
# │ 2020-01-03 19:45:32 ┆ 11    ┆ 2     ┆ 9     │
# │ 2020-01-08 23:16:43 ┆ 1     ┆ 1     ┆ 1     │
# └─────────────────────┴───────┴───────┴───────┘

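A sketch of the integer-index variant; with the defaults (offset -period, closed "right") each window is the half-open interval (idx - 3, idx]:

df = Polars::DataFrame.new({"idx" => [1, 2, 3, 5, 8], "a" => [1, 2, 3, 4, 5]}).set_sorted("idx")
df.rolling(index_column: "idx", period: "3i").agg(Polars.sum("a").alias("sum_a"))
# =>
# shape: (5, 2)
# ┌─────┬───────┐
# │ idx ┆ sum_a │
# │ --- ┆ ---   │
# │ i64 ┆ i64   │
# ╞═════╪═══════╡
# │ 1   ┆ 1     │
# │ 2   ┆ 3     │
# │ 3   ┆ 6     │
# │ 5   ┆ 7     │
# │ 8   ┆ 5     │
# └─────┴───────┘
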
Parameters:

  • index_column (Object)

    Column used to group based on the time window. Often of type Date/Datetime. This column must be sorted in ascending order; if not, the output will not make sense.

    In case of a rolling group by on indices, dtype needs to be one of :i32, :i64. Note that :i32 gets temporarily cast to :i64, so if performance matters use an :i64 column.

  • period (Object)

    Length of the window.

  • offset (Object) (defaults to: nil)

    Offset of the window. Default is -period.

  • closed ("right", "left", "both", "none") (defaults to: "right")

    Define whether the temporal window interval is closed or not.

  • by (Object) (defaults to: nil)

    Also group by this column/these columns.

Returns:



2480
2481
2482
2483
2484
2485
2486
2487
2488
# File 'lib/polars/data_frame.rb', line 2480

def rolling(
  index_column:,
  period:,
  offset: nil,
  closed: "right",
  by: nil
)
  RollingGroupBy.new(self, index_column, period, offset, closed, by)
end

#row(index = nil, by_predicate: nil, named: false) ⇒ Object

Note:

The index and by_predicate params are mutually exclusive. Additionally, to ensure clarity, the by_predicate parameter must be supplied by keyword.

When using by_predicate it is an error condition if anything other than one row is returned; more than one row raises TooManyRowsReturned, and zero rows will raise NoRowsReturned (both inherit from RowsException).

Get a row as tuple, either by index or by predicate.

Examples:

Return the row at the given index

df = Polars::DataFrame.new(
  {
    "foo" => [1, 2, 3],
    "bar" => [6, 7, 8],
    "ham" => ["a", "b", "c"]
  }
)
df.row(2)
# => [3, 8, "c"]

Get a hash instead with a mapping of column names to row values

df.row(2, named: true)
# => {"foo"=>3, "bar"=>8, "ham"=>"c"}

Return the row that matches the given predicate

df.row(by_predicate: Polars.col("ham") == "b")
# => [2, 7, "b"]

Parameters:

  • index (Object) (defaults to: nil)

    Row index.

  • by_predicate (Object) (defaults to: nil)

    Select the row according to a given expression/predicate.

  • named (Boolean) (defaults to: false)

    Return a hash instead of an array. The hash is a mapping of column name to row value. This is more expensive than returning an array, but allows for accessing values by column name.

Returns:



5494
5495
5496
5497
5498
5499
5500
5501
5502
5503
5504
5505
5506
5507
5508
5509
5510
5511
5512
5513
5514
5515
5516
5517
5518
5519
5520
5521
5522
5523
5524
5525
5526
5527
5528
# File 'lib/polars/data_frame.rb', line 5494

def row(index = nil, by_predicate: nil, named: false)
  if !index.nil? && !by_predicate.nil?
    raise ArgumentError, "Cannot set both 'index' and 'by_predicate'; mutually exclusive"
  elsif index.is_a?(Expr)
    raise TypeError, "Expressions should be passed to the 'by_predicate' param"
  end

  if !index.nil?
    row = _df.row_tuple(index)
    if named
      columns.zip(row).to_h
    else
      row
    end
  elsif !by_predicate.nil?
    if !by_predicate.is_a?(Expr)
      raise TypeError, "Expected by_predicate to be an expression; found #{by_predicate.class.name}"
    end
    rows = filter(by_predicate).rows
    n_rows = rows.length
    if n_rows > 1
      raise TooManyRowsReturned, "Predicate #{by_predicate} returned #{n_rows} rows"
    elsif n_rows == 0
      raise NoRowsReturned, "Predicate #{by_predicate} returned no rows"
    end
    row = rows[0]
    if named
      columns.zip(row).to_h
    else
      row
    end
  else
    raise ArgumentError, "One of 'index' or 'by_predicate' must be set"
  end
end

#rows(named: false) ⇒ Array

Convert columnar data to rows as Ruby arrays.

Examples:

df = Polars::DataFrame.new(
  {
    "a" => [1, 3, 5],
    "b" => [2, 4, 6]
  }
)
df.rows
# => [[1, 2], [3, 4], [5, 6]]
df.rows(named: true)
# => [{"a"=>1, "b"=>2}, {"a"=>3, "b"=>4}, {"a"=>5, "b"=>6}]

Parameters:

  • named (Boolean) (defaults to: false)

    Return hashes instead of arrays. The hashes are a mapping of column name to row value. This is more expensive than returning an array, but allows for accessing values by column name.

Returns:



5551
5552
5553
5554
5555
5556
5557
5558
5559
5560
# File 'lib/polars/data_frame.rb', line 5551

def rows(named: false)
  if named
    columns = self.columns
    _df.row_tuples.map do |v|
      columns.zip(v).to_h
    end
  else
    _df.row_tuples
  end
end

#rows_by_key(key, named: false, include_key: false, unique: false) ⇒ Hash

Convert columnar data to rows as Ruby arrays in a hash keyed by some column.

This method is like rows, but instead of returning rows in a flat list, rows are grouped by the values in the key column(s) and returned as a hash.

Note that this method should not be used in place of native operations, due to the high cost of materializing all frame data out into a hash; it should be used only when you need to move the values out into a Ruby data structure or other object that cannot operate directly with Polars/Arrow.

Examples:

Group rows by the given key column(s):

df = Polars::DataFrame.new(
  {
    "w" => ["a", "b", "b", "a"],
    "x" => ["q", "q", "q", "k"],
    "y" => [1.0, 2.5, 3.0, 4.5],
    "z" => [9, 8, 7, 6]
  }
)
df.rows_by_key(["w"])
# => {"a"=>[["q", 1.0, 9], ["k", 4.5, 6]], "b"=>[["q", 2.5, 8], ["q", 3.0, 7]]}

Return the same row groupings as hashes:

df.rows_by_key(["w"], named: true)
# => {"a"=>[{"x"=>"q", "y"=>1.0, "z"=>9}, {"x"=>"k", "y"=>4.5, "z"=>6}], "b"=>[{"x"=>"q", "y"=>2.5, "z"=>8}, {"x"=>"q", "y"=>3.0, "z"=>7}]}

Return row groupings, assuming keys are unique:

df.rows_by_key(["z"], unique: true)
# => {9=>["a", "q", 1.0], 8=>["b", "q", 2.5], 7=>["b", "q", 3.0], 6=>["a", "k", 4.5]}

Return row groupings as hashes, assuming keys are unique:

df.rows_by_key(["z"], named: true, unique: true)
# => {9=>{"w"=>"a", "x"=>"q", "y"=>1.0}, 8=>{"w"=>"b", "x"=>"q", "y"=>2.5}, 7=>{"w"=>"b", "x"=>"q", "y"=>3.0}, 6=>{"w"=>"a", "x"=>"k", "y"=>4.5}}

Return hash rows grouped by a compound key, including key values:

df.rows_by_key(["w", "x"], named: true, include_key: true)
# => {["a", "q"]=>[{"w"=>"a", "x"=>"q", "y"=>1.0, "z"=>9}], ["b", "q"]=>[{"w"=>"b", "x"=>"q", "y"=>2.5, "z"=>8}, {"w"=>"b", "x"=>"q", "y"=>3.0, "z"=>7}], ["a", "k"]=>[{"w"=>"a", "x"=>"k", "y"=>4.5, "z"=>6}]}

Parameters:

  • key (Object)

    The column(s) to use as the key for the returned hash. If multiple columns are specified, the key will be a tuple of those values; otherwise it will be the single key value.

  • named (Boolean) (defaults to: false)

    Return hashes instead of arrays. The hashes are a mapping of column name to row value. This is more expensive than returning an array, but allows for accessing values by column name.

  • include_key (Boolean) (defaults to: false)

    Include key values inline with the associated data (by default the key values are omitted as a memory/performance optimization, as they can be reconstructed from the key).

  • unique (Boolean) (defaults to: false)

    Indicate that the key is unique; this will result in a 1:1 mapping from key to a single associated row. Note that if the key is not actually unique the last row with the given key will be returned.

Returns:

  • (Hash)


5618
5619
5620
5621
5622
5623
5624
5625
5626
5627
5628
5629
5630
5631
5632
5633
5634
5635
5636
5637
5638
5639
# File 'lib/polars/data_frame.rb', line 5618

def rows_by_key(key, named: false, include_key: false, unique: false)
  key = Utils._expand_selectors(self, key)

  keys = key.size == 1 ? get_column(key[0]) : select(key).iter_rows

  if include_key
    values = self
  else
    data_cols = schema.keys - key
    values = select(data_cols)
  end

  zipped = keys.each.zip(values.iter_rows(named: named))

  # if unique, we expect to write just one entry per key; otherwise, we're
  # returning a list of rows for each key, so append into a hash of arrays.
  if unique
    zipped.to_h
  else
    zipped.each_with_object({}) { |(key, data), h| (h[key] ||= []) << data }
  end
end

#sample(n: nil, frac: nil, with_replacement: false, shuffle: false, seed: nil) ⇒ DataFrame

Sample from this DataFrame.

Examples:

df = Polars::DataFrame.new(
  {
    "foo" => [1, 2, 3],
    "bar" => [6, 7, 8],
    "ham" => ["a", "b", "c"]
  }
)
df.sample(n: 2, seed: 0)
# =>
# shape: (2, 3)
# ┌─────┬─────┬─────┐
# │ foo ┆ bar ┆ ham │
# │ --- ┆ --- ┆ --- │
# │ i64 ┆ i64 ┆ str │
# ╞═════╪═════╪═════╡
# │ 1   ┆ 6   ┆ a   │
# │ 2   ┆ 7   ┆ b   │
# └─────┴─────┴─────┘

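Sample a fraction of the rows instead of a fixed count (a sketch; which two rows are returned depends on the seed):

df.sample(frac: 0.667, seed: 0)
# => a DataFrame with 2 of the 3 rows
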
Parameters:

  • n (Integer) (defaults to: nil)

    Number of items to return. Cannot be used with frac. Defaults to 1 if frac is nil.

  • frac (Float) (defaults to: nil)

    Fraction of items to return. Cannot be used with n.

  • with_replacement (Boolean) (defaults to: false)

    Allow values to be sampled more than once.

  • shuffle (Boolean) (defaults to: false)

    Shuffle the order of sampled data points.

  • seed (Integer) (defaults to: nil)

    Seed for the random number generator. If set to nil (default), a random seed is used.

Returns:



5339
5340
5341
5342
5343
5344
5345
5346
5347
5348
5349
5350
5351
5352
5353
5354
5355
5356
5357
5358
5359
5360
5361
5362
5363
5364
5365
# File 'lib/polars/data_frame.rb', line 5339

def sample(
  n: nil,
  frac: nil,
  with_replacement: false,
  shuffle: false,
  seed: nil
)
  if !n.nil? && !frac.nil?
    raise ArgumentError, "cannot specify both `n` and `frac`"
  end

  if n.nil? && !frac.nil?
    frac = Series.new("frac", [frac]) unless frac.is_a?(Series)

    return _from_rbdf(
      _df.sample_frac(frac._s, with_replacement, shuffle, seed)
    )
  end

  if n.nil?
    n = 1
  end

  n = Series.new("", [n]) unless n.is_a?(Series)

  _from_rbdf(_df.sample_n(n._s, with_replacement, shuffle, seed))
end

#schema ⇒ Hash

Get the schema.

Examples:

df = Polars::DataFrame.new(
  {
    "foo" => [1, 2, 3],
    "bar" => [6.0, 7.0, 8.0],
    "ham" => ["a", "b", "c"]
  }
)
df.schema
# => {"foo"=>Polars::Int64, "bar"=>Polars::Float64, "ham"=>Polars::String}

Returns:

  • (Hash)


211
212
213
# File 'lib/polars/data_frame.rb', line 211

def schema
  columns.zip(dtypes).to_h
end

#select(*exprs, **named_exprs) ⇒ DataFrame

Select columns from this DataFrame.

Examples:

df = Polars::DataFrame.new(
  {
    "foo" => [1, 2, 3],
    "bar" => [6, 7, 8],
    "ham" => ["a", "b", "c"]
  }
)
df.select("foo")
# =>
# shape: (3, 1)
# ┌─────┐
# │ foo │
# │ --- │
# │ i64 │
# ╞═════╡
# │ 1   │
# │ 2   │
# │ 3   │
# └─────┘
df.select(["foo", "bar"])
# =>
# shape: (3, 2)
# ┌─────┬─────┐
# │ foo ┆ bar │
# │ --- ┆ --- │
# │ i64 ┆ i64 │
# ╞═════╪═════╡
# │ 1   ┆ 6   │
# │ 2   ┆ 7   │
# │ 3   ┆ 8   │
# └─────┴─────┘
df.select(Polars.col("foo") + 1)
# =>
# shape: (3, 1)
# ┌─────┐
# │ foo │
# │ --- │
# │ i64 │
# ╞═════╡
# │ 2   │
# │ 3   │
# │ 4   │
# └─────┘
df.select([Polars.col("foo") + 1, Polars.col("bar") + 1])
# =>
# shape: (3, 2)
# ┌─────┬─────┐
# │ foo ┆ bar │
# │ --- ┆ --- │
# │ i64 ┆ i64 │
# ╞═════╪═════╡
# │ 2   ┆ 7   │
# │ 3   ┆ 8   │
# │ 4   ┆ 9   │
# └─────┴─────┘
df.select(Polars.when(Polars.col("foo") > 2).then(10).otherwise(0))
# =>
# shape: (3, 1)
# ┌─────────┐
# │ literal │
# │ ---     │
# │ i32     │
# ╞═════════╡
# │ 0       │
# │ 0       │
# │ 10      │
# └─────────┘

Parameters:

  • exprs (Array)

    Column(s) to select, specified as positional arguments. Accepts expression input. Strings are parsed as column names, other non-expression inputs are parsed as literals.

  • named_exprs (Hash)

    Additional columns to select, specified as keyword arguments. The columns will be renamed to the keyword used.

Returns:



4563
4564
4565
# File 'lib/polars/data_frame.rb', line 4563

def select(*exprs, **named_exprs)
  lazy.select(*exprs, **named_exprs).collect(_eager: true)
end

#select_seq(*exprs, **named_exprs) ⇒ DataFrame

Select columns from this DataFrame.

This will run all expressions sequentially instead of in parallel. Use this when the work per expression is cheap.

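Examples:

A minimal sketch; the result is identical to the equivalent select call, only the execution strategy differs:

df = Polars::DataFrame.new(
  {
    "foo" => [1, 2, 3],
    "bar" => [6, 7, 8],
    "ham" => ["a", "b", "c"]
  }
)
df.select_seq("foo")
# =>
# shape: (3, 1)
# ┌─────┐
# │ foo │
# │ --- │
# │ i64 │
# ╞═════╡
# │ 1   │
# │ 2   │
# │ 3   │
# └─────┘
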
Parameters:

  • exprs (Array)

    Column(s) to select, specified as positional arguments. Accepts expression input. Strings are parsed as column names, other non-expression inputs are parsed as literals.

  • named_exprs (Hash)

    Additional columns to select, specified as keyword arguments. The columns will be renamed to the keyword used.

Returns:



# File 'lib/polars/data_frame.rb', line 4581

def select_seq(*exprs, **named_exprs)
  lazy
  .select_seq(*exprs, **named_exprs)
  .collect(_eager: true)
end

#set_sorted(column, descending: false) ⇒ DataFrame

Note:

This can lead to incorrect results if the data is NOT sorted! Use with care!

Flag a column as sorted.

This can speed up future operations.

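Examples:

A minimal sketch; the data must actually be sorted on the flagged column for the flag to be valid (the upsample example on this page uses the same pattern):

df = Polars::DataFrame.new({"time" => [1, 2, 3], "value" => [10, 20, 30]})
df = df.set_sorted("time")
# operations that can exploit sortedness on "time" may now be faster
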
Parameters:

  • column (Object)

    Column that is sorted.

  • descending (Boolean) (defaults to: false)

    Whether the column is sorted in descending order.

Returns:



# File 'lib/polars/data_frame.rb', line 6053

def set_sorted(
  column,
  descending: false
)
  lazy
    .set_sorted(column, descending: descending)
    .collect(no_optimization: true)
end

#shape ⇒ Array

Get the shape of the DataFrame.

Examples:

df = Polars::DataFrame.new({"foo" => [1, 2, 3, 4, 5]})
df.shape
# => [5, 1]

Returns:



# File 'lib/polars/data_frame.rb', line 90

def shape
  _df.shape
end

#shift(n, fill_value: nil) ⇒ DataFrame

Shift values by the given period.

Examples:

df = Polars::DataFrame.new(
  {
    "foo" => [1, 2, 3],
    "bar" => [6, 7, 8],
    "ham" => ["a", "b", "c"]
  }
)
df.shift(1)
# =>
# shape: (3, 3)
# ┌──────┬──────┬──────┐
# │ foo  ┆ bar  ┆ ham  │
# │ ---  ┆ ---  ┆ ---  │
# │ i64  ┆ i64  ┆ str  │
# ╞══════╪══════╪══════╡
# │ null ┆ null ┆ null │
# │ 1    ┆ 6    ┆ a    │
# │ 2    ┆ 7    ┆ b    │
# └──────┴──────┴──────┘
df.shift(-1)
# =>
# shape: (3, 3)
# ┌──────┬──────┬──────┐
# │ foo  ┆ bar  ┆ ham  │
# │ ---  ┆ ---  ┆ ---  │
# │ i64  ┆ i64  ┆ str  │
# ╞══════╪══════╪══════╡
# │ 2    ┆ 7    ┆ b    │
# │ 3    ┆ 8    ┆ c    │
# │ null ┆ null ┆ null │
# └──────┴──────┴──────┘

Parameters:

  • n (Integer)

    Number of places to shift (may be negative).

  • fill_value (Object) (defaults to: nil)

    Fill the resulting null values with this value.

Returns:



# File 'lib/polars/data_frame.rb', line 4381

def shift(n, fill_value: nil)
  lazy.shift(n, fill_value: fill_value).collect(_eager: true)
end

#shift_and_fill(periods, fill_value) ⇒ DataFrame

Shift the values by a given period and fill the resulting null values.

Examples:

df = Polars::DataFrame.new(
  {
    "foo" => [1, 2, 3],
    "bar" => [6, 7, 8],
    "ham" => ["a", "b", "c"]
  }
)
df.shift_and_fill(1, 0)
# =>
# shape: (3, 3)
# ┌─────┬─────┬─────┐
# │ foo ┆ bar ┆ ham │
# │ --- ┆ --- ┆ --- │
# │ i64 ┆ i64 ┆ str │
# ╞═════╪═════╪═════╡
# │ 0   ┆ 0   ┆ 0   │
# │ 1   ┆ 6   ┆ a   │
# │ 2   ┆ 7   ┆ b   │
# └─────┴─────┴─────┘

Parameters:

  • periods (Integer)

    Number of places to shift (may be negative).

  • fill_value (Object)

    Fill the resulting null values with this value.

Returns:



# File 'lib/polars/data_frame.rb', line 4414

def shift_and_fill(periods, fill_value)
  shift(periods, fill_value: fill_value)
end

#shrink_to_fit(in_place: false) ⇒ DataFrame

Shrink DataFrame memory usage.

Shrinks to fit the exact capacity needed to hold the data.

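Examples:

A minimal sketch showing both the copying and the in-place form:

df = Polars::DataFrame.new({"foo" => [1, 2, 3]})
compact = df.shrink_to_fit           # returns a shrunk copy; df is untouched
df.shrink_to_fit(in_place: true)     # shrinks df itself and returns it
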
Returns:



# File 'lib/polars/data_frame.rb', line 5809

def shrink_to_fit(in_place: false)
  if in_place
    _df.shrink_to_fit
    self
  else
    df = clone
    df._df.shrink_to_fit
    df
  end
end

#slice(offset, length = nil) ⇒ DataFrame

Get a slice of this DataFrame.

Examples:

df = Polars::DataFrame.new(
  {
    "foo" => [1, 2, 3],
    "bar" => [6.0, 7.0, 8.0],
    "ham" => ["a", "b", "c"]
  }
)
df.slice(1, 2)
# =>
# shape: (2, 3)
# ┌─────┬─────┬─────┐
# │ foo ┆ bar ┆ ham │
# │ --- ┆ --- ┆ --- │
# │ i64 ┆ f64 ┆ str │
# ╞═════╪═════╪═════╡
# │ 2   ┆ 7.0 ┆ b   │
# │ 3   ┆ 8.0 ┆ c   │
# └─────┴─────┴─────┘

Parameters:

  • offset (Integer)

    Start index. Negative indexing is supported.

  • length (Integer, nil) (defaults to: nil)

    Length of the slice. If set to nil, all rows starting at the offset will be selected.

Returns:



# File 'lib/polars/data_frame.rb', line 2089

def slice(offset, length = nil)
  if !length.nil? && length < 0
    length = height - offset + length
  end
  _from_rbdf(_df.slice(offset, length))
end

#sort(by, reverse: false, nulls_last: false) ⇒ DataFrame

Sort the DataFrame by column.

Examples:

df = Polars::DataFrame.new(
  {
    "foo" => [1, 2, 3],
    "bar" => [6.0, 7.0, 8.0],
    "ham" => ["a", "b", "c"]
  }
)
df.sort("foo", reverse: true)
# =>
# shape: (3, 3)
# ┌─────┬─────┬─────┐
# │ foo ┆ bar ┆ ham │
# │ --- ┆ --- ┆ --- │
# │ i64 ┆ f64 ┆ str │
# ╞═════╪═════╪═════╡
# │ 3   ┆ 8.0 ┆ c   │
# │ 2   ┆ 7.0 ┆ b   │
# │ 1   ┆ 6.0 ┆ a   │
# └─────┴─────┴─────┘

Sort by multiple columns.

df.sort(
  [Polars.col("foo"), Polars.col("bar")**2],
  reverse: [true, false]
)
# =>
# shape: (3, 3)
# ┌─────┬─────┬─────┐
# │ foo ┆ bar ┆ ham │
# │ --- ┆ --- ┆ --- │
# │ i64 ┆ f64 ┆ str │
# ╞═════╪═════╪═════╡
# │ 3   ┆ 8.0 ┆ c   │
# │ 2   ┆ 7.0 ┆ b   │
# │ 1   ┆ 6.0 ┆ a   │
# └─────┴─────┴─────┘

Parameters:

  • by (String)

    By which column to sort.

  • reverse (Boolean) (defaults to: false)

    Reverse/descending sort.

  • nulls_last (Boolean) (defaults to: false)

    Place null values last. Can only be used if sorted by a single column.

Returns:



# File 'lib/polars/data_frame.rb', line 1761

def sort(by, reverse: false, nulls_last: false)
  lazy
    .sort(by, reverse: reverse, nulls_last: nulls_last)
    .collect(no_optimization: true)
end

#sort!(by, reverse: false, nulls_last: false) ⇒ DataFrame

Sort the DataFrame by column in-place.

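Examples:

A minimal sketch; same semantics as sort, but the receiver is modified:

df = Polars::DataFrame.new({"foo" => [1, 2, 3]})
df.sort!("foo", reverse: true)
# df now holds the values of "foo" in descending order: 3, 2, 1
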
Parameters:

  • by (String)

    By which column to sort.

  • reverse (Boolean) (defaults to: false)

    Reverse/descending sort.

  • nulls_last (Boolean) (defaults to: false)

    Place null values last. Can only be used if sorted by a single column.

Returns:



# File 'lib/polars/data_frame.rb', line 1777

def sort!(by, reverse: false, nulls_last: false)
  self._df = sort(by, reverse: reverse, nulls_last: nulls_last)._df
end

#sql(query, table_name: "self") ⇒ DataFrame

Note:

This functionality is considered unstable, although it is close to being considered stable. It may be changed at any point without it being considered a breaking change.

Note:
  • The calling frame is automatically registered as a table in the SQL context under the name "self".
  • More control over registration and execution behaviour is available by using the SQLContext object.
  • The SQL query executes in lazy mode before being collected and returned as a DataFrame.

Execute a SQL query against the DataFrame.

Examples:

Query the DataFrame using SQL:

df1 = Polars::DataFrame.new(
  {
    "a" => [1, 2, 3],
    "b" => ["zz", "yy", "xx"],
    "c" => [Date.new(1999, 12, 31), Date.new(2010, 10, 10), Date.new(2077, 8, 8)]
  }
)
df1.sql("SELECT c, b FROM self WHERE a > 1")
# =>
# shape: (2, 2)
# ┌────────────┬─────┐
# │ c          ┆ b   │
# │ ---        ┆ --- │
# │ date       ┆ str │
# ╞════════════╪═════╡
# │ 2010-10-10 ┆ yy  │
# │ 2077-08-08 ┆ xx  │
# └────────────┴─────┘

Apply transformations to a DataFrame using SQL, aliasing "self" to "frame".

df1.sql(
  "
    SELECT
        a,
        (a % 2 == 0) AS a_is_even,
        CONCAT_WS(':', b, b) AS b_b,
        EXTRACT(year FROM c) AS year,
        0::float4 AS \"zero\",
    FROM frame
  ",
  table_name: "frame"
)
# =>
# shape: (3, 5)
# ┌─────┬───────────┬───────┬──────┬──────┐
# │ a   ┆ a_is_even ┆ b_b   ┆ year ┆ zero │
# │ --- ┆ ---       ┆ ---   ┆ ---  ┆ ---  │
# │ i64 ┆ bool      ┆ str   ┆ i32  ┆ f32  │
# ╞═════╪═══════════╪═══════╪══════╪══════╡
# │ 1   ┆ false     ┆ zz:zz ┆ 1999 ┆ 0.0  │
# │ 2   ┆ true      ┆ yy:yy ┆ 2010 ┆ 0.0  │
# │ 3   ┆ false     ┆ xx:xx ┆ 2077 ┆ 0.0  │
# └─────┴───────────┴───────┴──────┴──────┘

Parameters:

  • query (String)

    SQL query to execute.

  • table_name (String) (defaults to: "self")

    Optionally provide an explicit name for the table that represents the calling frame (defaults to "self").

Returns:



# File 'lib/polars/data_frame.rb', line 1849

def sql(query, table_name: "self")
  ctx = SQLContext.new(eager_execution: true)
  name = table_name || "self"
  ctx.register(name, self)
  ctx.execute(query)
end

#std(ddof: 1) ⇒ DataFrame

Aggregate the columns of this DataFrame to their standard deviation value.

Examples:

df = Polars::DataFrame.new(
  {
    "foo" => [1, 2, 3],
    "bar" => [6, 7, 8],
    "ham" => ["a", "b", "c"]
  }
)
df.std
# =>
# shape: (1, 3)
# ┌─────┬─────┬──────┐
# │ foo ┆ bar ┆ ham  │
# │ --- ┆ --- ┆ ---  │
# │ f64 ┆ f64 ┆ str  │
# ╞═════╪═════╪══════╡
# │ 1.0 ┆ 1.0 ┆ null │
# └─────┴─────┴──────┘
df.std(ddof: 0)
# =>
# shape: (1, 3)
# ┌──────────┬──────────┬──────┐
# │ foo      ┆ bar      ┆ ham  │
# │ ---      ┆ ---      ┆ ---  │
# │ f64      ┆ f64      ┆ str  │
# ╞══════════╪══════════╪══════╡
# │ 0.816497 ┆ 0.816497 ┆ null │
# └──────────┴──────────┴──────┘

Parameters:

  • ddof (Integer) (defaults to: 1)

    Degrees of freedom

Returns:



# File 'lib/polars/data_frame.rb', line 5003

def std(ddof: 1)
  lazy.std(ddof: ddof).collect(_eager: true)
end

#sum ⇒ DataFrame

Aggregate the columns of this DataFrame to their sum value.

Examples:

df = Polars::DataFrame.new(
  {
    "foo" => [1, 2, 3],
    "bar" => [6, 7, 8],
    "ham" => ["a", "b", "c"],
  }
)
df.sum
# =>
# shape: (1, 3)
# ┌─────┬─────┬──────┐
# │ foo ┆ bar ┆ ham  │
# │ --- ┆ --- ┆ ---  │
# │ i64 ┆ i64 ┆ str  │
# ╞═════╪═════╪══════╡
# │ 6   ┆ 21  ┆ null │
# └─────┴─────┴──────┘

Returns:



# File 'lib/polars/data_frame.rb', line 4876

def sum
  lazy.sum.collect(_eager: true)
end

#sum_horizontal(ignore_nulls: true) ⇒ Series

Sum all values horizontally across columns.

Examples:

df = Polars::DataFrame.new(
  {
    "foo" => [1, 2, 3],
    "bar" => [4.0, 5.0, 6.0]
  }
)
df.sum_horizontal
# =>
# shape: (3,)
# Series: 'sum' [f64]
# [
#         5.0
#         7.0
#         9.0
# ]

Parameters:

  • ignore_nulls (Boolean) (defaults to: true)

    Ignore null values (default). If set to false, any null value in the input will lead to a null output.

Returns:



# File 'lib/polars/data_frame.rb', line 4904

def sum_horizontal(ignore_nulls: true)
  select(
    sum: F.sum_horizontal(F.all, ignore_nulls: ignore_nulls)
  ).to_series
end

#tail(n = 5) ⇒ DataFrame

Get the last n rows.

Examples:

df = Polars::DataFrame.new(
  {
    "foo" => [1, 2, 3, 4, 5],
    "bar" => [6, 7, 8, 9, 10],
    "ham" => ["a", "b", "c", "d", "e"]
  }
)
df.tail(3)
# =>
# shape: (3, 3)
# ┌─────┬─────┬─────┐
# │ foo ┆ bar ┆ ham │
# │ --- ┆ --- ┆ --- │
# │ i64 ┆ i64 ┆ str │
# ╞═════╪═════╪═════╡
# │ 3   ┆ 8   ┆ c   │
# │ 4   ┆ 9   ┆ d   │
# │ 5   ┆ 10  ┆ e   │
# └─────┴─────┴─────┘

Parameters:

  • n (Integer) (defaults to: 5)

    Number of rows to return.

Returns:



# File 'lib/polars/data_frame.rb', line 2184

def tail(n = 5)
  _from_rbdf(_df.tail(n))
end

#to_a ⇒ Array

Returns an array representing the DataFrame, with each row as a hash of column name to value.

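Examples:

A minimal sketch; equivalent to rows(named: true):

df = Polars::DataFrame.new({"foo" => [1, 2], "bar" => ["a", "b"]})
df.to_a
# => [{"foo"=>1, "bar"=>"a"}, {"foo"=>2, "bar"=>"b"}]
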
Returns:



# File 'lib/polars/data_frame.rb', line 328

def to_a
  rows(named: true)
end

#to_csv(**options) ⇒ String

Write to comma-separated values (CSV) string.

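Examples:

A minimal sketch; to_csv forwards to write_csv, so with no file argument the CSV is returned as a string:

df = Polars::DataFrame.new({"foo" => [1, 2], "bar" => ["a", "b"]})
df.to_csv
# => "foo,bar\n1,a\n2,b\n"
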
Returns:



# File 'lib/polars/data_frame.rb', line 819

def to_csv(**options)
  write_csv(**options)
end

#to_dummies(columns: nil, separator: "_", drop_first: false, drop_nulls: false) ⇒ DataFrame

Get one hot encoded dummy variables.

Examples:

df = Polars::DataFrame.new(
  {
    "foo" => [1, 2],
    "bar" => [3, 4],
    "ham" => ["a", "b"]
  }
)
df.to_dummies
# =>
# shape: (2, 6)
# ┌───────┬───────┬───────┬───────┬───────┬───────┐
# │ foo_1 ┆ foo_2 ┆ bar_3 ┆ bar_4 ┆ ham_a ┆ ham_b │
# │ ---   ┆ ---   ┆ ---   ┆ ---   ┆ ---   ┆ ---   │
# │ u8    ┆ u8    ┆ u8    ┆ u8    ┆ u8    ┆ u8    │
# ╞═══════╪═══════╪═══════╪═══════╪═══════╪═══════╡
# │ 1     ┆ 0     ┆ 1     ┆ 0     ┆ 1     ┆ 0     │
# │ 0     ┆ 1     ┆ 0     ┆ 1     ┆ 0     ┆ 1     │
# └───────┴───────┴───────┴───────┴───────┴───────┘

Parameters:

  • columns (Array) (defaults to: nil)

    A subset of columns to convert to dummy variables. nil means "all columns".

  • separator (String) (defaults to: "_")

    Separator/delimiter used when generating column names.

  • drop_first (Boolean) (defaults to: false)

    Remove the first category from the variables being encoded.

  • drop_nulls (Boolean) (defaults to: false)

    If there are nil values in the series, a dummy column for them is not generated.

Returns:



# File 'lib/polars/data_frame.rb', line 5164

def to_dummies(columns: nil, separator: "_", drop_first: false, drop_nulls: false)
  if columns.is_a?(::String)
    columns = [columns]
  end
  _from_rbdf(_df.to_dummies(columns, separator, drop_first, drop_nulls))
end

#to_h(as_series: true) ⇒ Hash

Convert DataFrame to a hash mapping column name to values.

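Examples:

A minimal sketch of both forms:

df = Polars::DataFrame.new({"foo" => [1, 2], "bar" => ["a", "b"]})
df.to_h(as_series: false)
# => {"foo"=>[1, 2], "bar"=>["a", "b"]}
df.to_h   # the default maps each column name to its Series instead
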
Returns:

  • (Hash)


# File 'lib/polars/data_frame.rb', line 555

def to_h(as_series: true)
  if as_series
    get_columns.to_h { |s| [s.name, s] }
  else
    get_columns.to_h { |s| [s.name, s.to_a] }
  end
end

#to_hashes ⇒ Array

Convert every row to a hash.

Note that this is slow.

Examples:

df = Polars::DataFrame.new({"foo" => [1, 2, 3], "bar" => [4, 5, 6]})
df.to_hashes
# =>
# [{"foo"=>1, "bar"=>4}, {"foo"=>2, "bar"=>5}, {"foo"=>3, "bar"=>6}]

Returns:



# File 'lib/polars/data_frame.rb', line 574

def to_hashes
  rbdf = _df
  names = columns

  height.times.map do |i|
    names.zip(rbdf.row_tuple(i)).to_h
  end
end

#to_numo ⇒ Numo::NArray

Convert DataFrame to a 2D Numo array.

This operation clones data.

Examples:

df = Polars::DataFrame.new(
  {"foo" => [1, 2, 3], "bar" => [6, 7, 8], "ham" => ["a", "b", "c"]}
)
df.to_numo.class
# => Numo::RObject

Returns:

  • (Numo::NArray)


# File 'lib/polars/data_frame.rb', line 595

def to_numo
  out = _df.to_numo
  if out.nil?
    Numo::NArray.vstack(width.times.map { |i| to_series(i).to_numo }).transpose
  else
    out
  end
end

#to_s ⇒ String Also known as: inspect

Returns a string representing the DataFrame.

Returns:



# File 'lib/polars/data_frame.rb', line 320

def to_s
  _df.to_s
end

#to_series(index = 0) ⇒ Series

Select column as Series at index location.

Examples:

df = Polars::DataFrame.new(
  {
    "foo" => [1, 2, 3],
    "bar" => [6, 7, 8],
    "ham" => ["a", "b", "c"]
  }
)
df.to_series(1)
# =>
# shape: (3,)
# Series: 'bar' [i64]
# [
#         6
#         7
#         8
# ]

Parameters:

  • index (Integer) (defaults to: 0)

    Location of selection.

Returns:



# File 'lib/polars/data_frame.rb', line 630

def to_series(index = 0)
  if index < 0
    index = columns.length + index
  end
  Utils.wrap_s(_df.select_at_idx(index))
end

#to_struct(name) ⇒ Series

Convert a DataFrame to a Series of type Struct.

Examples:

df = Polars::DataFrame.new(
  {
    "a" => [1, 2, 3, 4, 5],
    "b" => ["one", "two", "three", "four", "five"]
  }
)
df.to_struct("nums")
# =>
# shape: (5,)
# Series: 'nums' [struct[2]]
# [
#         {1,"one"}
#         {2,"two"}
#         {3,"three"}
#         {4,"four"}
#         {5,"five"}
# ]

Parameters:

  • name (String)

    Name for the struct Series.

Returns:



# File 'lib/polars/data_frame.rb', line 5951

def to_struct(name)
  Utils.wrap_s(_df.to_struct(name))
end

#top_k(k, by:, reverse: false) ⇒ DataFrame

Return the k largest rows.

Non-null elements are always preferred over null elements, regardless of the value of reverse. The output is not guaranteed to be in any particular order; call sort after this function if you wish the output to be sorted.

Examples:

Get the rows which contain the 4 largest values in column b.

df = Polars::DataFrame.new(
  {
    "a" => ["a", "b", "a", "b", "b", "c"],
    "b" => [2, 1, 1, 3, 2, 1]
  }
)
df.top_k(4, by: "b")
# =>
# shape: (4, 2)
# ┌─────┬─────┐
# │ a   ┆ b   │
# │ --- ┆ --- │
# │ str ┆ i64 │
# ╞═════╪═════╡
# │ b   ┆ 3   │
# │ a   ┆ 2   │
# │ b   ┆ 2   │
# │ b   ┆ 1   │
# └─────┴─────┘

Get the rows which contain the 4 largest values when sorting on column b and a.

df.top_k(4, by: ["b", "a"])
# =>
# shape: (4, 2)
# ┌─────┬─────┐
# │ a   ┆ b   │
# │ --- ┆ --- │
# │ str ┆ i64 │
# ╞═════╪═════╡
# │ b   ┆ 3   │
# │ b   ┆ 2   │
# │ a   ┆ 2   │
# │ c   ┆ 1   │
# └─────┴─────┘

Parameters:

  • k (Integer)

    Number of rows to return.

  • by (Object)

    Column(s) used to determine the top rows. Accepts expression input. Strings are parsed as column names.

  • reverse (Object) (defaults to: false)

    Consider the k smallest elements of the by column(s) (instead of the k largest). This can be specified per column by passing a sequence of booleans.

Returns:



# File 'lib/polars/data_frame.rb', line 1910

def top_k(
  k,
  by:,
  reverse: false
)
  lazy
  .top_k(k, by: by, reverse: reverse)
  .collect(
    # optimizations=QueryOptFlags(
    #   projection_pushdown=False,
    #   predicate_pushdown=False,
    #   comm_subplan_elim=False,
    #   slice_pushdown=True
    # )
  )
end

#transpose(include_header: false, header_name: "column", column_names: nil) ⇒ DataFrame

Note:

This is a very expensive operation. Perhaps you can do it differently.

Transpose a DataFrame over the diagonal.

Examples:

df = Polars::DataFrame.new({"a" => [1, 2, 3], "b" => [1, 2, 3]})
df.transpose(include_header: true)
# =>
# shape: (2, 4)
# ┌────────┬──────────┬──────────┬──────────┐
# │ column ┆ column_0 ┆ column_1 ┆ column_2 │
# │ ---    ┆ ---      ┆ ---      ┆ ---      │
# │ str    ┆ i64      ┆ i64      ┆ i64      │
# ╞════════╪══════════╪══════════╪══════════╡
# │ a      ┆ 1        ┆ 2        ┆ 3        │
# │ b      ┆ 1        ┆ 2        ┆ 3        │
# └────────┴──────────┴──────────┴──────────┘

Replace the auto-generated column names with a list

df.transpose(include_header: false, column_names: ["a", "b", "c"])
# =>
# shape: (2, 3)
# ┌─────┬─────┬─────┐
# │ a   ┆ b   ┆ c   │
# │ --- ┆ --- ┆ --- │
# │ i64 ┆ i64 ┆ i64 │
# ╞═════╪═════╪═════╡
# │ 1   ┆ 2   ┆ 3   │
# │ 1   ┆ 2   ┆ 3   │
# └─────┴─────┴─────┘

Include the header as a separate column

df.transpose(
  include_header: true, header_name: "foo", column_names: ["a", "b", "c"]
)
# =>
# shape: (2, 4)
# ┌─────┬─────┬─────┬─────┐
# │ foo ┆ a   ┆ b   ┆ c   │
# │ --- ┆ --- ┆ --- ┆ --- │
# │ str ┆ i64 ┆ i64 ┆ i64 │
# ╞═════╪═════╪═════╪═════╡
# │ a   ┆ 1   ┆ 2   ┆ 3   │
# │ b   ┆ 1   ┆ 2   ┆ 3   │
# └─────┴─────┴─────┴─────┘

Parameters:

  • include_header (Boolean) (defaults to: false)

    If set, the column names will be added as the first column.

  • header_name (String) (defaults to: "column")

    If include_header is set, this determines the name of the column that will be inserted.

  • column_names (Array) (defaults to: nil)

    Optional generator/iterator that yields column names. Will be used to replace the columns in the DataFrame.

Returns:



# File 'lib/polars/data_frame.rb', line 1301

def transpose(include_header: false, header_name: "column", column_names: nil)
  keep_names_as = include_header ? header_name : nil
  _from_rbdf(_df.transpose(keep_names_as, column_names))
end

#unique(maintain_order: true, subset: nil, keep: "first") ⇒ DataFrame

Note:

Note that this fails if there is a column of type List in the DataFrame or subset.

Drop duplicate rows from this DataFrame.

Examples:

df = Polars::DataFrame.new(
  {
    "a" => [1, 1, 2, 3, 4, 5],
    "b" => [0.5, 0.5, 1.0, 2.0, 3.0, 3.0],
    "c" => [true, true, true, false, true, true]
  }
)
df.unique
# =>
# shape: (5, 3)
# ┌─────┬─────┬───────┐
# │ a   ┆ b   ┆ c     │
# │ --- ┆ --- ┆ ---   │
# │ i64 ┆ f64 ┆ bool  │
# ╞═════╪═════╪═══════╡
# │ 1   ┆ 0.5 ┆ true  │
# │ 2   ┆ 1.0 ┆ true  │
# │ 3   ┆ 2.0 ┆ false │
# │ 4   ┆ 3.0 ┆ true  │
# │ 5   ┆ 3.0 ┆ true  │
# └─────┴─────┴───────┘

Parameters:

  • maintain_order (Boolean) (defaults to: true)

    Keep the same order as the original DataFrame. This requires more work to compute.

  • subset (Object) (defaults to: nil)

    Subset to use to compare rows.

  • keep ("first", "last") (defaults to: "first")

    Which of the duplicate rows to keep (in conjunction with subset).

Returns:



# File 'lib/polars/data_frame.rb', line 5209

def unique(maintain_order: true, subset: nil, keep: "first")
  self._from_rbdf(
    lazy
      .unique(maintain_order: maintain_order, subset: subset, keep: keep)
      .collect(no_optimization: true)
      ._df
  )
end

#unnest(names) ⇒ DataFrame

Decompose a struct into its fields.

The fields will be inserted into the DataFrame on the location of the struct type.

Examples:

df = Polars::DataFrame.new(
  {
    "before" => ["foo", "bar"],
    "t_a" => [1, 2],
    "t_b" => ["a", "b"],
    "t_c" => [true, nil],
    "t_d" => [[1, 2], [3]],
    "after" => ["baz", "womp"]
  }
).select(["before", Polars.struct(Polars.col("^t_.$")).alias("t_struct"), "after"])
df.unnest("t_struct")
# =>
# shape: (2, 6)
# ┌────────┬─────┬─────┬──────┬───────────┬───────┐
# │ before ┆ t_a ┆ t_b ┆ t_c  ┆ t_d       ┆ after │
# │ ---    ┆ --- ┆ --- ┆ ---  ┆ ---       ┆ ---   │
# │ str    ┆ i64 ┆ str ┆ bool ┆ list[i64] ┆ str   │
# ╞════════╪═════╪═════╪══════╪═══════════╪═══════╡
# │ foo    ┆ 1   ┆ a   ┆ true ┆ [1, 2]    ┆ baz   │
# │ bar    ┆ 2   ┆ b   ┆ null ┆ [3]       ┆ womp  │
# └────────┴─────┴─────┴──────┴───────────┴───────┘

Parameters:

  • names (Object)

    Names of the struct columns that will be decomposed into their fields.

Returns:



# File 'lib/polars/data_frame.rb', line 5987

def unnest(names)
  if names.is_a?(::String)
    names = [names]
  end
  _from_rbdf(_df.unnest(names))
end

#unpivot(on, index: nil, variable_name: nil, value_name: nil) ⇒ DataFrame Also known as: melt

Unpivot a DataFrame from wide to long format.

Optionally leaves identifiers set.

This function is useful to massage a DataFrame into a format where one or more columns are identifier variables (index) while all other columns, considered measured variables (on), are "unpivoted" to the row axis leaving just two non-identifier columns, 'variable' and 'value'.

Examples:

df = Polars::DataFrame.new(
  {
    "a" => ["x", "y", "z"],
    "b" => [1, 3, 5],
    "c" => [2, 4, 6]
  }
)
df.unpivot(Polars.cs.numeric, index: "a")
# =>
# shape: (6, 3)
# ┌─────┬──────────┬───────┐
# │ a   ┆ variable ┆ value │
# │ --- ┆ ---      ┆ ---   │
# │ str ┆ str      ┆ i64   │
# ╞═════╪══════════╪═══════╡
# │ x   ┆ b        ┆ 1     │
# │ y   ┆ b        ┆ 3     │
# │ z   ┆ b        ┆ 5     │
# │ x   ┆ c        ┆ 2     │
# │ y   ┆ c        ┆ 4     │
# │ z   ┆ c        ┆ 6     │
# └─────┴──────────┴───────┘

Parameters:

  • on (Object)

    Column(s) or selector(s) to use as values variables; if on is empty, all columns that are not in index will be used.

  • index (Object) (defaults to: nil)

    Column(s) or selector(s) to use as identifier variables.

  • variable_name (Object) (defaults to: nil)

    Name to give to the variable column. Defaults to "variable".

  • value_name (Object) (defaults to: nil)

    Name to give to the value column. Defaults to "value".

Returns:



# File 'lib/polars/data_frame.rb', line 4103

def unpivot(on, index: nil, variable_name: nil, value_name: nil)
  on = on.nil? ? [] : Utils._expand_selectors(self, on)
  index = index.nil? ? [] : Utils._expand_selectors(self, index)

  _from_rbdf(_df.unpivot(on, index, value_name, variable_name))
end

#unstack(step:, how: "vertical", columns: nil, fill_values: nil) ⇒ DataFrame

Note:

This functionality is experimental and may be subject to changes without it being considered a breaking change.

Unstack a long table to a wide form without doing an aggregation.

This can be much faster than a pivot, because it can skip the grouping phase.

Examples:

df = Polars::DataFrame.new(
  {
    "col1" => "A".."I",
    "col2" => Polars.arange(0, 9, eager: true)
  }
)
# =>
# shape: (9, 2)
# ┌──────┬──────┐
# │ col1 ┆ col2 │
# │ ---  ┆ ---  │
# │ str  ┆ i64  │
# ╞══════╪══════╡
# │ A    ┆ 0    │
# │ B    ┆ 1    │
# │ C    ┆ 2    │
# │ D    ┆ 3    │
# │ E    ┆ 4    │
# │ F    ┆ 5    │
# │ G    ┆ 6    │
# │ H    ┆ 7    │
# │ I    ┆ 8    │
# └──────┴──────┘
df.unstack(step: 3, how: "vertical")
# =>
# shape: (3, 6)
# ┌────────┬────────┬────────┬────────┬────────┬────────┐
# │ col1_0 ┆ col1_1 ┆ col1_2 ┆ col2_0 ┆ col2_1 ┆ col2_2 │
# │ ---    ┆ ---    ┆ ---    ┆ ---    ┆ ---    ┆ ---    │
# │ str    ┆ str    ┆ str    ┆ i64    ┆ i64    ┆ i64    │
# ╞════════╪════════╪════════╪════════╪════════╪════════╡
# │ A      ┆ D      ┆ G      ┆ 0      ┆ 3      ┆ 6      │
# │ B      ┆ E      ┆ H      ┆ 1      ┆ 4      ┆ 7      │
# │ C      ┆ F      ┆ I      ┆ 2      ┆ 5      ┆ 8      │
# └────────┴────────┴────────┴────────┴────────┴────────┘
df.unstack(step: 3, how: "horizontal")
# =>
# shape: (3, 6)
# ┌────────┬────────┬────────┬────────┬────────┬────────┐
# │ col1_0 ┆ col1_1 ┆ col1_2 ┆ col2_0 ┆ col2_1 ┆ col2_2 │
# │ ---    ┆ ---    ┆ ---    ┆ ---    ┆ ---    ┆ ---    │
# │ str    ┆ str    ┆ str    ┆ i64    ┆ i64    ┆ i64    │
# ╞════════╪════════╪════════╪════════╪════════╪════════╡
# │ A      ┆ B      ┆ C      ┆ 0      ┆ 1      ┆ 2      │
# │ D      ┆ E      ┆ F      ┆ 3      ┆ 4      ┆ 5      │
# │ G      ┆ H      ┆ I      ┆ 6      ┆ 7      ┆ 8      │
# └────────┴────────┴────────┴────────┴────────┴────────┘

Parameters:

  • step (Integer)

    Number of rows in the unstacked frame.

  • how ("vertical", "horizontal") (defaults to: "vertical")

    Direction of the unstack.

  • columns (Object) (defaults to: nil)

    Column to include in the operation.

  • fill_values (Object) (defaults to: nil)

    Fill values that don't fit the new size with this value.

Returns:



# File 'lib/polars/data_frame.rb', line 4182

def unstack(step:, how: "vertical", columns: nil, fill_values: nil)
  if !columns.nil?
    df = select(columns)
  else
    df = self
  end

  height = df.height
  if how == "vertical"
    n_rows = step
    n_cols = (height / n_rows.to_f).ceil
  else
    n_cols = step
    n_rows = (height / n_cols.to_f).ceil
  end

  n_fill = n_cols * n_rows - height

  if n_fill > 0
    if !fill_values.is_a?(::Array)
      fill_values = [fill_values] * df.width
    end

    df = df.select(
      df.get_columns.zip(fill_values).map do |s, next_fill|
        s.extend_constant(next_fill, n_fill)
      end
    )
  end

  if how == "horizontal"
    df = (
      df.with_column(
        (Polars.arange(0, n_cols * n_rows, eager: true) % n_cols).alias(
          "__sort_order"
        )
      )
      .sort("__sort_order")
      .drop("__sort_order")
    )
  end

  zfill_val = Math.log10(n_cols).floor + 1
  slices =
    df.get_columns.flat_map do |s|
      n_cols.times.map do |slice_nbr|
        s.slice(slice_nbr * n_rows, n_rows).alias("%s_%0#{zfill_val}d" % [s.name, slice_nbr])
      end
    end

  _from_rbdf(DataFrame.new(slices)._df)
end

#update(other, on: nil, how: "left", left_on: nil, right_on: nil, include_nulls: false, maintain_order: "left") ⇒ DataFrame

Note:

This functionality is considered unstable. It may be changed at any point without it being considered a breaking change.

Note:

This is syntactic sugar for a left/inner join that preserves the order of the left DataFrame by default, with an optional coalesce when include_nulls: false.

Update the values in this DataFrame with the values in other.

Examples:

Update df values with the non-null values in new_df, by row index:

df = Polars::DataFrame.new(
  {
    "A" => [1, 2, 3, 4],
    "B" => [400, 500, 600, 700]
  }
)
new_df = Polars::DataFrame.new(
  {
    "B" => [-66, nil, -99],
    "C" => [5, 3, 1]
  }
)
df.update(new_df)
# =>
# shape: (4, 2)
# ┌─────┬─────┐
# │ A   ┆ B   │
# │ --- ┆ --- │
# │ i64 ┆ i64 │
# ╞═════╪═════╡
# │ 1   ┆ -66 │
# │ 2   ┆ 500 │
# │ 3   ┆ -99 │
# │ 4   ┆ 700 │
# └─────┴─────┘

Update df values with the non-null values in new_df, by row index, but only keeping those rows that are common to both frames:

df.update(new_df, how: "inner")
# =>
# shape: (3, 2)
# ┌─────┬─────┐
# │ A   ┆ B   │
# │ --- ┆ --- │
# │ i64 ┆ i64 │
# ╞═════╪═════╡
# │ 1   ┆ -66 │
# │ 2   ┆ 500 │
# │ 3   ┆ -99 │
# └─────┴─────┘

Update df values with the non-null values in new_df, using a full outer join strategy that defines explicit join columns in each frame:

df.update(new_df, left_on: ["A"], right_on: ["C"], how: "full")
# =>
# shape: (5, 2)
# ┌─────┬─────┐
# │ A   ┆ B   │
# │ --- ┆ --- │
# │ i64 ┆ i64 │
# ╞═════╪═════╡
# │ 1   ┆ -99 │
# │ 2   ┆ 500 │
# │ 3   ┆ 600 │
# │ 4   ┆ 700 │
# │ 5   ┆ -66 │
# └─────┴─────┘

Update df values including null values in new_df, using a full outer join strategy that defines explicit join columns in each frame:

df.update(new_df, left_on: "A", right_on: "C", how: "full", include_nulls: true)
# =>
# shape: (5, 2)
# ┌─────┬──────┐
# │ A   ┆ B    │
# │ --- ┆ ---  │
# │ i64 ┆ i64  │
# ╞═════╪══════╡
# │ 1   ┆ -99  │
# │ 2   ┆ 500  │
# │ 3   ┆ null │
# │ 4   ┆ 700  │
# │ 5   ┆ -66  │
# └─────┴──────┘

Parameters:

  • other (DataFrame)

    DataFrame that will be used to update the values

  • on (Object) (defaults to: nil)

    Column names that will be joined on. If set to nil (default), the implicit row index of each frame is used as a join key.

  • how ('left', 'inner', 'full') (defaults to: "left")
    • 'left' will keep all rows from the left table; rows may be duplicated if multiple rows in the right frame match the left row's key.
    • 'inner' keeps only those rows where the key exists in both frames.
    • 'full' will update existing rows where the key matches while also adding any new rows contained in the given frame.
  • left_on (Object) (defaults to: nil)

    Join column(s) of the left DataFrame.

  • right_on (Object) (defaults to: nil)

    Join column(s) of the right DataFrame.

  • include_nulls (Boolean) (defaults to: false)

    Overwrite values in the left frame with null values from the right frame. If set to false (default), null values in the right frame are ignored.

  • maintain_order ('none', 'left', 'right', 'left_right', 'right_left') (defaults to: "left")

    Which order of rows from the inputs to preserve. See DataFrame.join for details. Unlike join, this function preserves the left order by default.

Returns:



# File 'lib/polars/data_frame.rb', line 6170

def update(
  other,
  on: nil,
  how: "left",
  left_on: nil,
  right_on: nil,
  include_nulls: false,
  maintain_order: "left"
)
  Utils.require_same_type(self, other)
  lazy
  .update(
    other.lazy,
    on: on,
    how: how,
    left_on: left_on,
    right_on: right_on,
    include_nulls: include_nulls,
    maintain_order: maintain_order
  )
  .collect(_eager: true)
end

#upsample(time_column:, every:, by: nil, maintain_order: false) ⇒ DataFrame

Upsample a DataFrame at a regular frequency.

The every and offset arguments are created with the following string language:

  • 1ns (1 nanosecond)
  • 1us (1 microsecond)
  • 1ms (1 millisecond)
  • 1s (1 second)
  • 1m (1 minute)
  • 1h (1 hour)
  • 1d (1 day)
  • 1w (1 week)
  • 1mo (1 calendar month)
  • 1y (1 calendar year)
  • 1i (1 index count)

Or combine them: "3d12h4m25s" # 3 days, 12 hours, 4 minutes, and 25 seconds

Examples:

Upsample a DataFrame by a certain interval.

df = Polars::DataFrame.new(
  {
    "time" => [
      DateTime.new(2021, 2, 1),
      DateTime.new(2021, 4, 1),
      DateTime.new(2021, 5, 1),
      DateTime.new(2021, 6, 1)
    ],
    "groups" => ["A", "B", "A", "B"],
    "values" => [0, 1, 2, 3]
  }
).set_sorted("time")
df.upsample(
  time_column: "time", every: "1mo", by: "groups", maintain_order: true
).select(Polars.all.forward_fill)
# =>
# shape: (7, 3)
# ┌─────────────────────┬────────┬────────┐
# │ time                ┆ groups ┆ values │
# │ ---                 ┆ ---    ┆ ---    │
# │ datetime[ns]        ┆ str    ┆ i64    │
# ╞═════════════════════╪════════╪════════╡
# │ 2021-02-01 00:00:00 ┆ A      ┆ 0      │
# │ 2021-03-01 00:00:00 ┆ A      ┆ 0      │
# │ 2021-04-01 00:00:00 ┆ A      ┆ 0      │
# │ 2021-05-01 00:00:00 ┆ A      ┆ 2      │
# │ 2021-04-01 00:00:00 ┆ B      ┆ 1      │
# │ 2021-05-01 00:00:00 ┆ B      ┆ 1      │
# │ 2021-06-01 00:00:00 ┆ B      ┆ 3      │
# └─────────────────────┴────────┴────────┘

Parameters:

  • time_column (Object)

    Time column that will be used to determine a date range. Note that this column has to be sorted for the output to make sense.

  • every (String)

    Interval at which new rows are generated; uses the duration string language described above.

  • by (Object) (defaults to: nil)

    First group by these columns and then upsample for every group.

  • maintain_order (Boolean) (defaults to: false)

    Keep the ordering predictable. This is slower.

Returns:



# File 'lib/polars/data_frame.rb', line 2828

def upsample(
  time_column:,
  every:,
  by: nil,
  maintain_order: false
)
  if by.nil?
    by = []
  end
  if by.is_a?(::String)
    by = [by]
  end

  every = Utils.parse_as_duration_string(every)

  _from_rbdf(
    _df.upsample(by, time_column, every, maintain_order)
  )
end

#var(ddof: 1) ⇒ DataFrame

Aggregate the columns of this DataFrame to their variance value.

Examples:

df = Polars::DataFrame.new(
  {
    "foo" => [1, 2, 3],
    "bar" => [6, 7, 8],
    "ham" => ["a", "b", "c"]
  }
)
df.var
# =>
# shape: (1, 3)
# ┌─────┬─────┬──────┐
# │ foo ┆ bar ┆ ham  │
# │ --- ┆ --- ┆ ---  │
# │ f64 ┆ f64 ┆ str  │
# ╞═════╪═════╪══════╡
# │ 1.0 ┆ 1.0 ┆ null │
# └─────┴─────┴──────┘
df.var(ddof: 0)
# =>
# shape: (1, 3)
# ┌──────────┬──────────┬──────┐
# │ foo      ┆ bar      ┆ ham  │
# │ ---      ┆ ---      ┆ ---  │
# │ f64      ┆ f64      ┆ str  │
# ╞══════════╪══════════╪══════╡
# │ 0.666667 ┆ 0.666667 ┆ null │
# └──────────┴──────────┴──────┘

Parameters:

  • ddof (Integer) (defaults to: 1)

    Degrees of freedom

Returns:



# File 'lib/polars/data_frame.rb', line 5044

def var(ddof: 1)
  lazy.var(ddof: ddof).collect(_eager: true)
end

#vstack(df, in_place: false) ⇒ DataFrame

Grow this DataFrame vertically by stacking a DataFrame to it.

Examples:

df1 = Polars::DataFrame.new(
  {
    "foo" => [1, 2],
    "bar" => [6, 7],
    "ham" => ["a", "b"]
  }
)
df2 = Polars::DataFrame.new(
  {
    "foo" => [3, 4],
    "bar" => [8, 9],
    "ham" => ["c", "d"]
  }
)
df1.vstack(df2)
# =>
# shape: (4, 3)
# ┌─────┬─────┬─────┐
# │ foo ┆ bar ┆ ham │
# │ --- ┆ --- ┆ --- │
# │ i64 ┆ i64 ┆ str │
# ╞═════╪═════╪═════╡
# │ 1   ┆ 6   ┆ a   │
# │ 2   ┆ 7   ┆ b   │
# │ 3   ┆ 8   ┆ c   │
# │ 4   ┆ 9   ┆ d   │
# └─────┴─────┴─────┘

Parameters:

  • df (DataFrame)

    DataFrame to stack.

  • in_place (Boolean) (defaults to: false)

    Modify in place.

Returns:



# File 'lib/polars/data_frame.rb', line 3446

def vstack(df, in_place: false)
  if in_place
    _df.vstack_mut(df._df)
    self
  else
    _from_rbdf(_df.vstack(df._df))
  end
end

#width ⇒ Integer

Get the width of the DataFrame.

Examples:

df = Polars::DataFrame.new({"foo" => [1, 2, 3, 4, 5]})
df.width
# => 1

Returns:

  • (Integer)


# File 'lib/polars/data_frame.rb', line 117

def width
  _df.width
end

#with_column(column) ⇒ DataFrame

Return a new DataFrame with the column added or replaced.

Examples:

Added

df = Polars::DataFrame.new(
  {
    "a" => [1, 3, 5],
    "b" => [2, 4, 6]
  }
)
df.with_column((Polars.col("b") ** 2).alias("b_squared"))
# =>
# shape: (3, 3)
# ┌─────┬─────┬───────────┐
# │ a   ┆ b   ┆ b_squared │
# │ --- ┆ --- ┆ ---       │
# │ i64 ┆ i64 ┆ i64       │
# ╞═════╪═════╪═══════════╡
# │ 1   ┆ 2   ┆ 4         │
# │ 3   ┆ 4   ┆ 16        │
# │ 5   ┆ 6   ┆ 36        │
# └─────┴─────┴───────────┘

Replaced

df.with_column(Polars.col("a") ** 2)
# =>
# shape: (3, 2)
# ┌─────┬─────┐
# │ a   ┆ b   │
# │ --- ┆ --- │
# │ i64 ┆ i64 │
# ╞═════╪═════╡
# │ 1   ┆ 2   │
# │ 9   ┆ 4   │
# │ 25  ┆ 6   │
# └─────┴─────┘

Parameters:

  • column (Object)

    Series, where the name of the Series refers to the column in the DataFrame.

Returns:



# File 'lib/polars/data_frame.rb', line 3361

def with_column(column)
  lazy
    .with_column(column)
    .collect(no_optimization: true, string_cache: false)
end

#with_columns(*exprs, **named_exprs) ⇒ DataFrame

Add columns to this DataFrame.

Added columns will replace existing columns with the same name.

Examples:

Pass an expression to add it as a new column.

df = Polars::DataFrame.new(
  {
    "a" => [1, 2, 3, 4],
    "b" => [0.5, 4, 10, 13],
    "c" => [true, true, false, true]
  }
)
df.with_columns((Polars.col("a") ** 2).alias("a^2"))
# =>
# shape: (4, 4)
# ┌─────┬──────┬───────┬─────┐
# │ a   ┆ b    ┆ c     ┆ a^2 │
# │ --- ┆ ---  ┆ ---   ┆ --- │
# │ i64 ┆ f64  ┆ bool  ┆ i64 │
# ╞═════╪══════╪═══════╪═════╡
# │ 1   ┆ 0.5  ┆ true  ┆ 1   │
# │ 2   ┆ 4.0  ┆ true  ┆ 4   │
# │ 3   ┆ 10.0 ┆ false ┆ 9   │
# │ 4   ┆ 13.0 ┆ true  ┆ 16  │
# └─────┴──────┴───────┴─────┘

Added columns will replace existing columns with the same name.

df.with_columns(Polars.col("a").cast(Polars::Float64))
# =>
# shape: (4, 3)
# ┌─────┬──────┬───────┐
# │ a   ┆ b    ┆ c     │
# │ --- ┆ ---  ┆ ---   │
# │ f64 ┆ f64  ┆ bool  │
# ╞═════╪══════╪═══════╡
# │ 1.0 ┆ 0.5  ┆ true  │
# │ 2.0 ┆ 4.0  ┆ true  │
# │ 3.0 ┆ 10.0 ┆ false │
# │ 4.0 ┆ 13.0 ┆ true  │
# └─────┴──────┴───────┘

Multiple columns can be added by passing a list of expressions.

df.with_columns(
  [
    (Polars.col("a") ** 2).alias("a^2"),
    (Polars.col("b") / 2).alias("b/2"),
    (Polars.col("c").not_).alias("not c"),
  ]
)
# =>
# shape: (4, 6)
# ┌─────┬──────┬───────┬─────┬──────┬───────┐
# │ a   ┆ b    ┆ c     ┆ a^2 ┆ b/2  ┆ not c │
# │ --- ┆ ---  ┆ ---   ┆ --- ┆ ---  ┆ ---   │
# │ i64 ┆ f64  ┆ bool  ┆ i64 ┆ f64  ┆ bool  │
# ╞═════╪══════╪═══════╪═════╪══════╪═══════╡
# │ 1   ┆ 0.5  ┆ true  ┆ 1   ┆ 0.25 ┆ false │
# │ 2   ┆ 4.0  ┆ true  ┆ 4   ┆ 2.0  ┆ false │
# │ 3   ┆ 10.0 ┆ false ┆ 9   ┆ 5.0  ┆ true  │
# │ 4   ┆ 13.0 ┆ true  ┆ 16  ┆ 6.5  ┆ false │
# └─────┴──────┴───────┴─────┴──────┴───────┘

Multiple columns can also be added using positional arguments instead of a list.

df.with_columns(
  (Polars.col("a") ** 2).alias("a^2"),
  (Polars.col("b") / 2).alias("b/2"),
  (Polars.col("c").not_).alias("not c"),
)
# =>
# shape: (4, 6)
# ┌─────┬──────┬───────┬─────┬──────┬───────┐
# │ a   ┆ b    ┆ c     ┆ a^2 ┆ b/2  ┆ not c │
# │ --- ┆ ---  ┆ ---   ┆ --- ┆ ---  ┆ ---   │
# │ i64 ┆ f64  ┆ bool  ┆ i64 ┆ f64  ┆ bool  │
# ╞═════╪══════╪═══════╪═════╪══════╪═══════╡
# │ 1   ┆ 0.5  ┆ true  ┆ 1   ┆ 0.25 ┆ false │
# │ 2   ┆ 4.0  ┆ true  ┆ 4   ┆ 2.0  ┆ false │
# │ 3   ┆ 10.0 ┆ false ┆ 9   ┆ 5.0  ┆ true  │
# │ 4   ┆ 13.0 ┆ true  ┆ 16  ┆ 6.5  ┆ false │
# └─────┴──────┴───────┴─────┴──────┴───────┘

Use keyword arguments to easily name your expression inputs.

df.with_columns(
  ab: Polars.col("a") * Polars.col("b"),
  not_c: Polars.col("c").not_
)
# =>
# shape: (4, 5)
# ┌─────┬──────┬───────┬──────┬───────┐
# │ a   ┆ b    ┆ c     ┆ ab   ┆ not_c │
# │ --- ┆ ---  ┆ ---   ┆ ---  ┆ ---   │
# │ i64 ┆ f64  ┆ bool  ┆ f64  ┆ bool  │
# ╞═════╪══════╪═══════╪══════╪═══════╡
# │ 1   ┆ 0.5  ┆ true  ┆ 0.5  ┆ false │
# │ 2   ┆ 4.0  ┆ true  ┆ 8.0  ┆ false │
# │ 3   ┆ 10.0 ┆ false ┆ 30.0 ┆ true  │
# │ 4   ┆ 13.0 ┆ true  ┆ 52.0 ┆ false │
# └─────┴──────┴───────┴──────┴───────┘

Parameters:

  • exprs (Array)

    Column(s) to add, specified as positional arguments. Accepts expression input. Strings are parsed as column names, other non-expression inputs are parsed as literals.

  • named_exprs (Hash)

    Additional columns to add, specified as keyword arguments. The columns will be renamed to the keyword used.

Returns:



# File 'lib/polars/data_frame.rb', line 4695

def with_columns(*exprs, **named_exprs)
  lazy.with_columns(*exprs, **named_exprs).collect(_eager: true)
end

#with_columns_seq(*exprs, **named_exprs) ⇒ DataFrame

Add columns to this DataFrame.

Added columns will replace existing columns with the same name.

This will run all expressions sequentially instead of in parallel. Use this when the work per expression is cheap.

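Examples:

A minimal sketch; the result is identical to the equivalent with_columns call:

df = Polars::DataFrame.new({"a" => [1, 2, 3]})
df.with_columns_seq((Polars.col("a") ** 2).alias("a^2"))
# adds column "a^2" with values 1, 4, 9, as in the with_columns examples above
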
Parameters:

  • exprs (Array)

    Column(s) to add, specified as positional arguments. Accepts expression input. Strings are parsed as column names, other non-expression inputs are parsed as literals.

  • named_exprs (Hash)

    Additional columns to add, specified as keyword arguments. The columns will be renamed to the keyword used.

Returns:



# File 'lib/polars/data_frame.rb', line 4715

def with_columns_seq(
  *exprs,
  **named_exprs
)
  lazy
  .with_columns_seq(*exprs, **named_exprs)
  .collect(_eager: true)
end

#with_row_index(name: "index", offset: 0) ⇒ DataFrame Also known as: with_row_count

Add a column at index 0 that counts the rows.

Examples:

df = Polars::DataFrame.new(
  {
    "a" => [1, 3, 5],
    "b" => [2, 4, 6]
  }
)
df.with_row_index
# =>
# shape: (3, 3)
# ┌───────┬─────┬─────┐
# │ index ┆ a   ┆ b   │
# │ ---   ┆ --- ┆ --- │
# │ u32   ┆ i64 ┆ i64 │
# ╞═══════╪═════╪═════╡
# │ 0     ┆ 1   ┆ 2   │
# │ 1     ┆ 3   ┆ 4   │
# │ 2     ┆ 5   ┆ 6   │
# └───────┴─────┴─────┘

Parameters:

  • name (String) (defaults to: "index")

    Name of the column to add.

  • offset (Integer) (defaults to: 0)

    Start the row count at this offset.

Returns:



# File 'lib/polars/data_frame.rb', line 2347

def with_row_index(name: "index", offset: 0)
  _from_rbdf(_df.with_row_index(name, offset))
end

#write_avro(file, compression = "uncompressed", name: "") ⇒ nil

Write to Apache Avro file.

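Examples:

A minimal sketch; "file.avro" is an illustrative path:

df = Polars::DataFrame.new({"foo" => [1, 2, 3]})
df.write_avro("file.avro", "snappy")
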
Parameters:

  • file (String)

    File path to which the data should be written.

  • compression ("uncompressed", "snappy", "deflate") (defaults to: "uncompressed")

    Compression method. Defaults to "uncompressed".

  • name (String) (defaults to: "")

    Schema name. Defaults to empty string.

Returns:

  • (nil)


# File 'lib/polars/data_frame.rb', line 833

def write_avro(file, compression = "uncompressed", name: "")
  if compression.nil?
    compression = "uncompressed"
  end
  if Utils.pathlike?(file)
    file = Utils.normalize_filepath(file)
  end
  if name.nil?
    name = ""
  end

  _df.write_avro(file, compression, name)
end

#write_csv(file = nil, include_header: true, sep: ",", quote: '"', batch_size: 1024, datetime_format: nil, date_format: nil, time_format: nil, float_precision: nil, null_value: nil) ⇒ String?

Write to comma-separated values (CSV) file.

Examples:

df = Polars::DataFrame.new(
  {
    "foo" => [1, 2, 3, 4, 5],
    "bar" => [6, 7, 8, 9, 10],
    "ham" => ["a", "b", "c", "d", "e"]
  }
)
df.write_csv("file.csv")

Parameters:

  • file (String, nil) (defaults to: nil)

    File path to which the result should be written. If set to nil (default), the output is returned as a string instead.

  • include_header (Boolean) (defaults to: true)

    Whether to include header in the CSV output.

  • sep (String) (defaults to: ",")

    Separate CSV fields with this symbol.

  • quote (String) (defaults to: '"')

    Byte to use as quoting character.

  • batch_size (Integer) (defaults to: 1024)

    Number of rows that will be processed per thread.

  • datetime_format (String, nil) (defaults to: nil)

    A format string, with the specifiers defined by the chrono Rust crate. If no format is specified, the default fractional-second precision is inferred from the maximum timeunit found in the frame's Datetime columns (if any).

  • date_format (String, nil) (defaults to: nil)

    A format string, with the specifiers defined by the chrono Rust crate.

  • time_format (String, nil) (defaults to: nil)

    A format string, with the specifiers defined by the chrono Rust crate.

  • float_precision (Integer, nil) (defaults to: nil)

    Number of decimal places to write, applied to both :f32 and :f64 datatypes.

  • null_value (String, nil) (defaults to: nil)

    A string representing null values (defaulting to the empty string).

Returns:



# File 'lib/polars/data_frame.rb', line 759

def write_csv(
  file = nil,
  include_header: true,
  sep: ",",
  quote: '"',
  batch_size: 1024,
  datetime_format: nil,
  date_format: nil,
  time_format: nil,
  float_precision: nil,
  null_value: nil
)
  if sep.length > 1
    raise ArgumentError, "only single byte separator is allowed"
  elsif quote.length > 1
    raise ArgumentError, "only single byte quote char is allowed"
  elsif null_value == ""
    null_value = nil
  end

  if file.nil?
    buffer = StringIO.new
    buffer.set_encoding(Encoding::BINARY)
    _df.write_csv(
      buffer,
      include_header,
      sep.ord,
      quote.ord,
      batch_size,
      datetime_format,
      date_format,
      time_format,
      float_precision,
      null_value
    )
    return buffer.string.force_encoding(Encoding::UTF_8)
  end

  if Utils.pathlike?(file)
    file = Utils.normalize_filepath(file)
  end

  _df.write_csv(
    file,
    include_header,
    sep.ord,
    quote.ord,
    batch_size,
    datetime_format,
    date_format,
    time_format,
    float_precision,
    null_value,
  )
  nil
end

#write_database(table_name, connection = nil, if_table_exists: "fail") ⇒ Integer

Note:

This functionality is experimental. It may be changed at any point without it being considered a breaking change.

Write the data in a Polars DataFrame to a database.

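Examples:

A minimal sketch, assuming Active Record 7+ is loaded with an established connection; "events" is an illustrative table name:

df = Polars::DataFrame.new({"foo" => [1, 2, 3]})
df.write_database("events", if_table_exists: "replace")
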
Parameters:

  • table_name (String)

    Schema-qualified name of the table to create or append to in the target SQL database.

  • connection (Object) (defaults to: nil)

    An existing Active Record connection against the target database.

  • if_table_exists ('append', 'replace', 'fail') (defaults to: "fail")

    The insert mode:

    • 'replace' will create a new database table, overwriting an existing one.
    • 'append' will append to an existing table.
    • 'fail' will fail if the table already exists.

Returns:

  • (Integer)


# File 'lib/polars/data_frame.rb', line 1036

def write_database(table_name, connection = nil, if_table_exists: "fail")
  if !defined?(ActiveRecord)
    raise Error, "Active Record not available"
  elsif ActiveRecord::VERSION::MAJOR < 7
    raise Error, "Requires Active Record 7+"
  end

  valid_write_modes = ["append", "replace", "fail"]
  if !valid_write_modes.include?(if_table_exists)
    msg = "write_database `if_table_exists` must be one of #{valid_write_modes.inspect}, got #{if_table_exists.inspect}"
    raise ArgumentError, msg
  end

  with_connection(connection) do |connection|
    table_exists = connection.table_exists?(table_name)
    if table_exists && if_table_exists == "fail"
      raise ArgumentError, "Table already exists"
    end

    create_table = !table_exists || if_table_exists == "replace"
    maybe_transaction(connection, create_table) do
      if create_table
        mysql = connection.adapter_name.match?(/mysql|trilogy/i)
        force = if_table_exists == "replace"
        connection.create_table(table_name, id: false, force: force) do |t|
          schema.each do |c, dtype|
            options = {}
            column_type =
              case dtype
              when Binary
                :binary
              when Boolean
                :boolean
              when Date
                :date
              when Datetime
                :datetime
              when Decimal
                if mysql
                  options[:precision] = dtype.precision || 65
                  options[:scale] = dtype.scale || 30
                end
                :decimal
              when Float32
                options[:limit] = 24
                :float
              when Float64
                options[:limit] = 53
                :float
              when Int8
                options[:limit] = 1
                :integer
              when Int16
                options[:limit] = 2
                :integer
              when Int32
                options[:limit] = 4
                :integer
              when Int64
                options[:limit] = 8
                :integer
              when UInt8
                if mysql
                  options[:limit] = 1
                  options[:unsigned] = true
                else
                  options[:limit] = 2
                end
                :integer
              when UInt16
                if mysql
                  options[:limit] = 2
                  options[:unsigned] = true
                else
                  options[:limit] = 4
                end
                :integer
              when UInt32
                if mysql
                  options[:limit] = 4
                  options[:unsigned] = true
                else
                  options[:limit] = 8
                end
                :integer
              when UInt64
                if mysql
                  options[:limit] = 8
                  options[:unsigned] = true
                  :integer
                else
                  options[:precision] = 20
                  options[:scale] = 0
                  :decimal
                end
              when String
                :text
              when Time
                :time
              else
                raise ArgumentError, "column type not supported yet: #{dtype}"
              end
            t.column c, column_type, **options
          end
        end
      end

      quoted_table = connection.quote_table_name(table_name)
      quoted_columns = columns.map { |c| connection.quote_column_name(c) }
      rows = cast({Polars::UInt64 => Polars::String}).rows(named: false).map { |row| "(#{row.map { |v| connection.quote(v) }.join(", ")})" }
      connection.exec_update("INSERT INTO #{quoted_table} (#{quoted_columns.join(", ")}) VALUES #{rows.join(", ")}")
    end
  end
end

#write_delta(target, mode: "error", storage_options: nil, delta_write_options: nil, delta_merge_options: nil) ⇒ nil

Write DataFrame as delta table.

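Examples:

A minimal sketch; "/tmp/delta_table" is an illustrative target URI:

df = Polars::DataFrame.new({"foo" => [1, 2, 3]})
df.write_delta("/tmp/delta_table", mode: "overwrite")
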
Parameters:

  • target (Object)

    URI of a table or a DeltaTable object.

  • mode ("error", "append", "overwrite", "ignore", "merge") (defaults to: "error")

    How to handle existing data.

  • storage_options (Hash) (defaults to: nil)

    Extra options for the storage backends supported by deltalake-rb.

  • delta_write_options (Hash) (defaults to: nil)

    Additional keyword arguments passed when writing a Delta Lake table.

  • delta_merge_options (Hash) (defaults to: nil)

    Keyword arguments required to MERGE a Delta Lake table.

Returns:

  • (nil)


# File 'lib/polars/data_frame.rb', line 1165

def write_delta(
  target,
  mode: "error",
  storage_options: nil,
  delta_write_options: nil,
  delta_merge_options: nil
)
  Polars.send(:_check_if_delta_available)

  if Utils.pathlike?(target)
    target = Polars.send(:_resolve_delta_lake_uri, target.to_s, strict: false)
  end

  data = self

  if mode == "merge"
    if delta_merge_options.nil?
      msg = "You need to pass delta_merge_options with at least a given predicate for `MERGE` to work."
      raise ArgumentError, msg
    end
    if target.is_a?(::String)
      dt = DeltaLake::Table.new(target, storage_options: storage_options)
    else
      dt = target
    end

    predicate = delta_merge_options.delete(:predicate)
    dt.merge(data, predicate, **delta_merge_options)
  else
    delta_write_options ||= {}

    DeltaLake.write(
      target,
      data,
      mode: mode,
      storage_options: storage_options,
      **delta_write_options
    )
  end
end

#write_ipc(file, compression: "uncompressed", compat_level: nil, storage_options: nil, retries: 2) ⇒ nil

Write to Arrow IPC binary stream or Feather file.

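Examples:

A minimal sketch; "file.arrow" is an illustrative path:

df = Polars::DataFrame.new({"foo" => [1, 2, 3]})
df.write_ipc("file.arrow", compression: "zstd")
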
Parameters:

  • file (String)

    File path to which the data should be written.

  • compression ("uncompressed", "lz4", "zstd") (defaults to: "uncompressed")

    Compression method. Defaults to "uncompressed".

  • compat_level (Object) (defaults to: nil)

    Use a specific compatibility level when exporting Polars' internal data structures.

  • storage_options (Hash) (defaults to: nil)

    Options that indicate how to connect to a cloud provider.

    The cloud providers currently supported are AWS, GCP, and Azure, plus Hugging Face (hf://), which accepts an API key under the token parameter ({"token" => "..."}) or via the HF_TOKEN environment variable.

    If storage_options is not provided, Polars will try to infer the information from environment variables.

  • retries (Integer) (defaults to: 2)

    Number of retries if accessing a cloud instance fails.

Returns:

  • (nil)


# File 'lib/polars/data_frame.rb', line 873

def write_ipc(
  file,
  compression: "uncompressed",
  compat_level: nil,
  storage_options: nil,
  retries: 2
)
  return_bytes = file.nil?
  if return_bytes
    file = StringIO.new
    file.set_encoding(Encoding::BINARY)
  end
  if Utils.pathlike?(file)
    file = Utils.normalize_filepath(file)
  end

  if compat_level.nil?
    compat_level = true
  end

  if compression.nil?
    compression = "uncompressed"
  end

  if storage_options&.any?
    storage_options = storage_options.to_a
  else
    storage_options = nil
  end

  _df.write_ipc(file, compression, compat_level, storage_options, retries)
  return_bytes ? file.string : nil
end

#write_ipc_stream(file, compression: "uncompressed", compat_level: nil) ⇒ Object

Write to Arrow IPC record batch stream.

See "Streaming format" in https://arrow.apache.org/docs/python/ipc.html.

Examples:

df = Polars::DataFrame.new(
  {
    "foo" => [1, 2, 3, 4, 5],
    "bar" => [6, 7, 8, 9, 10],
    "ham" => ["a", "b", "c", "d", "e"]
  }
)
df.write_ipc_stream("new_file.arrow")

Parameters:

  • file (Object)

    Path or writable file-like object to which the IPC record batch data will be written. If set to nil, the output is returned as a binary string instead.

  • compression ('uncompressed', 'lz4', 'zstd') (defaults to: "uncompressed")

    Compression method. Defaults to "uncompressed".

  • compat_level (Object) (defaults to: nil)

    Use a specific compatibility level when exporting Polars' internal data structures.

Returns:



# File 'lib/polars/data_frame.rb', line 931

def write_ipc_stream(
  file,
  compression: "uncompressed",
  compat_level: nil
)
  return_bytes = file.nil?
  if return_bytes
    file = StringIO.new
    file.set_encoding(Encoding::BINARY)
  elsif Utils.pathlike?(file)
    file = Utils.normalize_filepath(file)
  end

  if compat_level.nil?
    compat_level = true
  end

  if compression.nil?
    compression = "uncompressed"
  end

  _df.write_ipc_stream(file, compression, compat_level)
  return_bytes ? file.string : nil
end

#write_json(file = nil) ⇒ Object

Serialize to JSON representation.

Examples:

df = Polars::DataFrame.new(
  {
    "foo" => [1, 2, 3],
    "bar" => [6, 7, 8]
  }
)
df.write_json
# => "[{\"foo\":1,\"bar\":6},{\"foo\":2,\"bar\":7},{\"foo\":3,\"bar\":8}]"

Parameters:

  • file (Object) (defaults to: nil)

    File path or writable file-like object to which the result should be written. If set to nil, the result is returned as a string instead.

Returns:

  • (Object)


# File 'lib/polars/data_frame.rb', line 653

def write_json(file = nil)
  if Utils.pathlike?(file)
    file = Utils.normalize_filepath(file)
  end
  to_string_io = !file.nil? && file.is_a?(StringIO)
  if file.nil? || to_string_io
    buf = StringIO.new
    buf.set_encoding(Encoding::BINARY)
    _df.write_json(buf)
    json_bytes = buf.string

    json_str = json_bytes.force_encoding(Encoding::UTF_8)
    if to_string_io
      file.write(json_str)
    else
      return json_str
    end
  else
    _df.write_json(file)
  end
  nil
end

#write_ndjson(file = nil) ⇒ Object

Serialize to newline delimited JSON representation.

Examples:

df = Polars::DataFrame.new(
  {
    "foo" => [1, 2, 3],
    "bar" => [6, 7, 8]
  }
)
df.write_ndjson
# => "{\"foo\":1,\"bar\":6}\n{\"foo\":2,\"bar\":7}\n{\"foo\":3,\"bar\":8}\n"

Parameters:

  • file (Object) (defaults to: nil)

    File path or writable file-like object to which the result should be written. If set to nil, the result is returned as a string instead.

Returns:

  • (Object)


# File 'lib/polars/data_frame.rb', line 692

def write_ndjson(file = nil)
  if Utils.pathlike?(file)
    file = Utils.normalize_filepath(file)
  end
  to_string_io = !file.nil? && file.is_a?(StringIO)
  if file.nil? || to_string_io
    buf = StringIO.new
    buf.set_encoding(Encoding::BINARY)
    _df.write_ndjson(buf)
    json_bytes = buf.string

    json_str = json_bytes.force_encoding(Encoding::UTF_8)
    if to_string_io
      file.write(json_str)
    else
      return json_str
    end
  else
    _df.write_ndjson(file)
  end
  nil
end

#write_parquet(file, compression: "zstd", compression_level: nil, statistics: false, row_group_size: nil, data_page_size: nil) ⇒ nil

Write to Apache Parquet file.
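
Examples:

A minimal usage sketch; the data, file names, and option values are illustrative:

df = Polars::DataFrame.new(
  {
    "foo" => [1, 2, 3],
    "bar" => [6, 7, 8]
  }
)
df.write_parquet("new_file.parquet")

# a higher zstd level trades write speed for smaller files;
# statistics: "full" also writes distinct-count statistics
df.write_parquet(
  "stats_file.parquet",
  compression: "zstd",
  compression_level: 10,
  statistics: "full"
)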

Parameters:

  • file (String, Pathname, StringIO)

    File path to which the file should be written.

  • compression ("lz4", "uncompressed", "snappy", "gzip", "lzo", "brotli", "zstd") (defaults to: "zstd")

    Choose "zstd" for good compression performance. Choose "lz4" for fast compression/decompression. Choose "snappy" for more backwards compatibility guarantees when you deal with older parquet readers.

  • compression_level (Integer, nil) (defaults to: nil)

    The level of compression to use. Higher compression means smaller files on disk.

    • "gzip" : min-level: 0, max-level: 10.
    • "brotli" : min-level: 0, max-level: 11.
    • "zstd" : min-level: 1, max-level: 22.
  • statistics (Boolean, "full") (defaults to: false)

    Write statistics to the parquet headers; this requires extra compute. Set to true to write min, max, and null-count statistics, or to "full" to also include distinct-count statistics.

  • row_group_size (Integer, nil) (defaults to: nil)

    Size of the row groups in number of rows. Defaults to 512^2 rows.

  • data_page_size (Integer, nil) (defaults to: nil)

    Size of the data page in bytes. Defaults to 1024^2 bytes.

Returns:

  • (nil)


# File 'lib/polars/data_frame.rb', line 980

def write_parquet(
  file,
  compression: "zstd",
  compression_level: nil,
  statistics: false,
  row_group_size: nil,
  data_page_size: nil
)
  if compression.nil?
    compression = "uncompressed"
  end
  if Utils.pathlike?(file)
    file = Utils.normalize_filepath(file)
  end

  if statistics == true
    statistics = {
      min: true,
      max: true,
      distinct_count: false,
      null_count: true
    }
  elsif statistics == false
    statistics = {}
  elsif statistics == "full"
    statistics = {
      min: true,
      max: true,
      distinct_count: true,
      null_count: true
    }
  end

  _df.write_parquet(
    file, compression, compression_level, statistics, row_group_size, data_page_size
  )
end