Class: CSV

Inherits:
Object
Extended by:
Forwardable
Includes:
Enumerable
Defined in:
lib/csv.rb,
lib/csv/row.rb,
lib/csv/table.rb,
lib/csv/version.rb

Overview

This class provides a complete interface to CSV files and data. It offers tools to enable you to read and write to and from Strings or IO objects, as needed.

The most generic interface of the class is:

csv = CSV.new(string_or_io, **options)

# Reading: IO object should be open for read
csv.read # => array of rows
# or
csv.each do |row|
  # ...
end
# or
row = csv.shift

# Writing: IO object should be open for write
csv << row

There are several specialized class methods for one-statement reading or writing, described in the Specialized Methods section.

If a String is passed into ::new, it is internally wrapped in a StringIO object.

options can be used to specify a particular CSV flavor (column separators, row separators, value quoting and so on) and to control data conversion; see the Data Conversion section for a description of the latter.
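
For example, a semicolon-separated flavor can be parsed by overriding :col_sep (a minimal sketch using made-up sample data):

CSV.parse("one;two;three", col_sep: ";")  #=> [["one", "two", "three"]]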

Specialized Methods

Reading

# From a file: all at once
arr_of_rows = CSV.read("path/to/file.csv", **options)
# iterator-style:
CSV.foreach("path/to/file.csv", **options) do |row|
  # ...
end

# From a string
arr_of_rows = CSV.parse("CSV,data,String", **options)
# or
CSV.parse("CSV,data,String", **options) do |row|
  # ...
end

Writing

# To a file
CSV.open("path/to/file.csv", "wb") do |csv|
  csv << ["row", "of", "CSV", "data"]
  csv << ["another", "row"]
  # ...
end

# To a String
csv_string = CSV.generate do |csv|
  csv << ["row", "of", "CSV", "data"]
  csv << ["another", "row"]
  # ...
end

Shortcuts

# Core extensions for converting one line
csv_string = ["CSV", "data"].to_csv   # to CSV
csv_array  = "CSV,String".parse_csv   # from CSV

# CSV() method
CSV             { |csv_out| csv_out << %w{my data here} }  # to $stdout
CSV(csv = "")   { |csv_str| csv_str << %w{my data here} }  # to a String
CSV($stderr)    { |csv_err| csv_err << %w{my data here} }  # to $stderr
CSV($stdin)     { |csv_in|  csv_in.each { |row| p row } }  # from $stdin

Data Conversion

CSV with headers

CSV allows you to specify column names for a CSV file, whether they appear in the data or are provided separately. If headers are specified, reading methods return an instance of CSV::Table, consisting of CSV::Row objects.

# Headers are part of data
data = CSV.parse(<<~ROWS, headers: true)
  Name,Department,Salary
  Bob,Engineering,1000
  Jane,Sales,2000
  John,Management,5000
ROWS

data.class      #=> CSV::Table
data.first      #=> #<CSV::Row "Name":"Bob" "Department":"Engineering" "Salary":"1000">
data.first.to_h #=> {"Name"=>"Bob", "Department"=>"Engineering", "Salary"=>"1000"}

# Headers provided by developer
data = CSV.parse('Bob,Engineering,1000', headers: %i[name department salary])
data.first      #=> #<CSV::Row name:"Bob" department:"Engineering" salary:"1000">

Typed data reading

CSV allows you to provide a set of data converters, i.e. transformations to try on input data. A converter can be a symbol taken from the keys of the CSV::Converters constant, or a lambda.

# Without any converters:
CSV.parse('Bob,2018-03-01,100')
#=> [["Bob", "2018-03-01", "100"]]

# With built-in converters:
CSV.parse('Bob,2018-03-01,100', converters: %i[numeric date])
#=> [["Bob", #<Date: 2018-03-01>, 100]]

# With custom converters:
CSV.parse('Bob,2018-03-01,100', converters: [->(v) { Time.parse(v) rescue v }])
#=> [["Bob", 2018-03-01 00:00:00 +0200, "100"]]

CSV and Character Encodings (M17n or Multilingualization)

This new CSV parser is m17n savvy. The parser works in the Encoding of the IO or String object being read from or written to. Your data is never transcoded (unless you ask Ruby to transcode it for you) and will literally be parsed in the Encoding it is in. Thus CSV will return Arrays or Rows of Strings in the Encoding of your data. This is accomplished by transcoding the parser itself into your Encoding.

Some transcoding must take place, of course, to accomplish this multiencoding support. For example, :col_sep, :row_sep, and :quote_char must be transcoded to match your data. Hopefully this makes the entire process feel transparent, since CSV’s defaults should just magically work for your data. However, you can set these values manually in the target Encoding to avoid the translation.

It’s also important to note that while all of CSV’s core parser is now Encoding agnostic, some features are not. For example, the built-in converters will try to transcode data to UTF-8 before making conversions. Again, you can provide custom converters that are aware of your Encodings to avoid this translation. It’s just too hard for me to support native conversions in all of Ruby’s Encodings.

Anyway, the practical side of this is simple: make sure IO and String objects passed into CSV have the proper Encoding set and everything should just work. CSV methods that allow you to open IO objects (CSV::foreach(), CSV::open(), CSV::read(), and CSV::readlines()) do allow you to specify the Encoding.

One minor exception comes when generating CSV into a String with an Encoding that is not ASCII compatible. There’s no existing data for CSV to use to prepare itself and thus you will probably need to manually specify the desired Encoding for most of those cases. It will try to guess using the fields in a row of output though, when using CSV::generate_line() or Array#to_csv().

I try to point out any other Encoding issues in the documentation of methods as they come up.

This has been tested to the best of my ability with all non-“dummy” Encodings Ruby ships with. However, it is brave new code and may have some bugs. Please feel free to report any issues you find with it.

Defined Under Namespace

Modules: MatchP

Classes: FieldInfo, MalformedCSVError, Row, Table

Constant Summary

DateMatcher =

A Regexp used to find and convert some common Date formats.

/ \A(?: (\w+,?\s+)?\w+\s+\d{1,2},?\s+\d{2,4} |
        \d{4}-\d{2}-\d{2} )\z /x
DateTimeMatcher =

A Regexp used to find and convert some common DateTime formats.

/ \A(?: (\w+,?\s+)?\w+\s+\d{1,2}\s+\d{1,2}:\d{1,2}:\d{1,2},?\s+\d{2,4} |
    \d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2} |
    # ISO-8601
    \d{4}-\d{2}-\d{2}
      (?:T\d{2}:\d{2}(?::\d{2}(?:\.\d+)?(?:[+-]\d{2}(?::\d{2})|Z)?)?)?
)\z /x
ConverterEncoding =

The encoding used by all converters.

Encoding.find("UTF-8")
Converters =

This Hash holds the built-in converters of CSV that can be accessed by name. You can select Converters with CSV.convert() or through the options Hash passed to CSV::new().

:integer

Converts any field Integer() accepts.

:float

Converts any field Float() accepts.

:numeric

A combination of :integer and :float.

:date

Converts any field Date::parse() accepts.

:date_time

Converts any field DateTime::parse() accepts.

:all

All built-in converters. A combination of :date_time and :numeric.

All built-in converters transcode field data to UTF-8 before attempting a conversion. If your data cannot be transcoded to UTF-8 the conversion will fail and the field will remain unchanged.

This Hash is intentionally left unfrozen and users should feel free to add values to it that can be accessed by all CSV objects.

To add a combo field, the value should be an Array of names. Combo fields can be nested with other combo fields.
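
For illustration, a user-defined converter and a combo referencing it could be registered like this (a sketch; :upcase and :all_upcase are hypothetical names, not part of the library):

CSV::Converters[:upcase]     = lambda { |f| f.respond_to?(:upcase) ? f.upcase : f }
CSV::Converters[:all_upcase] = [:all, :upcase]   # combo: built-in :all, then :upcase
CSV.parse("bob,100", converters: :all_upcase)    #=> [["BOB", 100]]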

{
  integer:   lambda { |f|
    Integer(f.encode(ConverterEncoding)) rescue f
  },
  float:     lambda { |f|
    Float(f.encode(ConverterEncoding)) rescue f
  },
  numeric:   [:integer, :float],
  date:      lambda { |f|
    begin
      e = f.encode(ConverterEncoding)
      e.match?(DateMatcher) ? Date.parse(e) : f
    rescue  # encoding conversion or date parse errors
      f
    end
  },
  date_time: lambda { |f|
    begin
      e = f.encode(ConverterEncoding)
      e.match?(DateTimeMatcher) ? DateTime.parse(e) : f
    rescue  # encoding conversion or date parse errors
      f
    end
  },
  all:       [:date_time, :numeric],
}
HeaderConverters =

This Hash holds the built-in header converters of CSV that can be accessed by name. You can select HeaderConverters with CSV.header_convert() or through the options Hash passed to CSV::new().

:downcase

Calls downcase() on the header String.

:symbol

Leading/trailing spaces are dropped, string is downcased, remaining spaces are replaced with underscores, non-word characters are dropped, and finally to_sym() is called.
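
For instance, with this converter a header such as "First Name!" becomes :first_name (a small sketch on made-up data):

CSV.parse("First Name!,Age\nBob,30", headers: true, header_converters: :symbol).headers
#=> [:first_name, :age]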

All built-in header converters transcode header data to UTF-8 before attempting a conversion. If your data cannot be transcoded to UTF-8 the conversion will fail and the header will remain unchanged.

This Hash is intentionally left unfrozen and users should feel free to add values to it that can be accessed by all CSV objects.

To add a combo field, the value should be an Array of names. Combo fields can be nested with other combo fields.

{
  downcase: lambda { |h| h.encode(ConverterEncoding).downcase },
  symbol:   lambda { |h|
    h.encode(ConverterEncoding).downcase.gsub(/[^\s\w]+/, "").strip.
                                         gsub(/\s+/, "_").to_sym
  }
}
DEFAULT_OPTIONS =

The options used when no overrides are given by calling code. They are:

:col_sep

","

:row_sep

:auto

:quote_char

'"'

:field_size_limit

nil

:converters

nil

:unconverted_fields

nil

:headers

false

:return_headers

false

:header_converters

nil

:skip_blanks

false

:force_quotes

false

:skip_lines

nil

:liberal_parsing

false

{
  col_sep:            ",",
  row_sep:            :auto,
  quote_char:         '"',
  field_size_limit:   nil,
  converters:         nil,
  unconverted_fields: nil,
  headers:            false,
  return_headers:     false,
  header_converters:  nil,
  skip_blanks:        false,
  force_quotes:       false,
  skip_lines:         nil,
  liberal_parsing:    false,
}.freeze
VERSION =

The version of the installed library.

"3.0.0"

Instance Attribute Summary

Class Method Summary

Instance Method Summary

Constructor Details

#initialize(data, col_sep: ",", row_sep: :auto, quote_char: '"', field_size_limit: nil, converters: nil, unconverted_fields: nil, headers: false, return_headers: false, write_headers: nil, header_converters: nil, skip_blanks: false, force_quotes: false, skip_lines: nil, liberal_parsing: false, internal_encoding: nil, external_encoding: nil, encoding: nil, nil_value: nil, empty_value: "") ⇒ CSV

This constructor will wrap either a String or IO object passed in data for reading and/or writing. In addition to the CSV instance methods, several IO methods are delegated. (See CSV::open() for a complete list.) If you pass a String for data, you can later retrieve it (after writing to it, for example) with CSV.string().

Note that a wrapped String will be positioned at the beginning (for reading). If you want it at the end (for writing), use CSV::generate(). If you want any other positioning, pass a preset StringIO object instead.

You may set any reading and/or writing preferences in the options Hash. Available options are:

:col_sep

The String placed between each field. This String will be transcoded into the data’s Encoding before parsing.

:row_sep

The String appended to the end of each row. This can be set to the special :auto setting, which requests that CSV automatically discover this from the data. Auto-discovery reads ahead in the data looking for the next "\r\n", "\n", or "\r" sequence. A sequence will be selected even if it occurs in a quoted field, assuming that you would have the same line endings there. If none of those sequences is found, if data is ARGF, STDIN, STDOUT, or STDERR, or if the stream is only available for output, the default $INPUT_RECORD_SEPARATOR ($/) is used. Obviously, discovery takes a little time, so set this manually if speed is important. Also note that IO objects should be opened in binary mode on Windows if this feature will be used, as the line-ending translation can cause problems with resetting the document position to where it was before the read ahead. This String will be transcoded into the data's Encoding before parsing.
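
As a sketch, a nonstandard row separator can be set explicitly:

CSV.parse("a,b|c,d", row_sep: "|")  #=> [["a", "b"], ["c", "d"]]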

:quote_char

The character used to quote fields. This has to be a single character String. This is useful for applications that incorrectly use ' as the quote character instead of the correct ". CSV will always consider a double sequence of this character to be an escaped quote. This String will be transcoded into the data's Encoding before parsing.
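
For example (a sketch for data that uses single quotes):

CSV.parse("'a,1',b", quote_char: "'")  #=> [["a,1", "b"]]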

:field_size_limit

This is a maximum size CSV will read ahead looking for the closing quote for a field. (In truth, it reads to the first line ending beyond this size.) If a quote cannot be found within the limit CSV will raise a MalformedCSVError, assuming the data is faulty. You can use this limit to prevent what are effectively DoS attacks on the parser. However, this limit can cause a legitimate parse to fail and thus is set to nil, or off, by default.

:converters

An Array of names from the Converters Hash and/or lambdas that handle custom conversion. A single converter doesn’t have to be in an Array. All built-in converters try to transcode fields to UTF-8 before converting. The conversion will fail if the data cannot be transcoded, leaving the field unchanged.

:unconverted_fields

If set to true, an unconverted_fields() method will be added to all returned rows (Array or CSV::Row) that will return the fields as they were before conversion. Note that :headers supplied by Array or String were not fields of the document and thus will have an empty Array attached.
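
A brief sketch of the effect:

row = CSV.parse_line("1,2", converters: :integer, unconverted_fields: true)
row                     #=> [1, 2]
row.unconverted_fields  #=> ["1", "2"]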

:headers

If set to :first_row or true, the initial row of the CSV file will be treated as a row of headers. If set to an Array, the contents will be used as the headers. If set to a String, the String is run through a call of CSV::parse_line() with the same :col_sep, :row_sep, and :quote_char as this instance to produce an Array of headers. This setting causes CSV#shift() to return rows as CSV::Row objects instead of Arrays and CSV#read() to return CSV::Table objects instead of an Array of Arrays.

:return_headers

When false, header rows are silently swallowed. If set to true, header rows are returned in a CSV::Row object with identical headers and fields (save that the fields do not go through the converters).

:write_headers

When true and :headers is set, a header row will be added to the output.

:header_converters

Identical in functionality to :converters save that the conversions are only made to header rows. All built-in converters try to transcode headers to UTF-8 before converting. The conversion will fail if the data cannot be transcoded, leaving the header unchanged.

:skip_blanks

When set to a true value, CSV will skip over any empty rows. Note that this setting will not skip rows that contain column separators, even if the rows contain no actual data. If you want to skip rows that contain separators but no content, consider using :skip_lines, or inspecting fields.compact.empty? on each row.
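
A sketch illustrating the caveat above:

CSV.parse("a,b\n\n,,\nc,d\n", skip_blanks: true)
#=> [["a", "b"], [nil, nil, nil], ["c", "d"]]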

:force_quotes

When set to a true value, CSV will quote all CSV fields it creates.

:skip_lines

When set to an object responding to match, every line matching it is considered a comment and ignored during parsing. When set to a String, it is first converted to a Regexp. When set to nil, no line is considered a comment. If the given object does not respond to match, an ArgumentError is raised.
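
For example, lines beginning with # can be treated as comments:

CSV.parse("#comment\na,b\n", skip_lines: /\A#/)  #=> [["a", "b"]]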

:liberal_parsing

When set to a true value, CSV will attempt to parse input not conformant with RFC 4180, such as double quotes in unquoted fields.
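
A sketch of the difference:

CSV.parse('a,b"c,d')                         # raises CSV::MalformedCSVError
CSV.parse('a,b"c,d', liberal_parsing: true)  #=> [["a", "b\"c", "d"]]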

:nil_value

TODO: WRITE ME.

:empty_value

TODO: WRITE ME.

See CSV::DEFAULT_OPTIONS for the default settings.

Options cannot be overridden in the instance methods for performance reasons, so be sure to set what you want here.

Raises:

  • (ArgumentError)


# File 'lib/csv.rb', line 901

def initialize(data, col_sep: ",", row_sep: :auto, quote_char: '"', field_size_limit:   nil,
               converters: nil, unconverted_fields: nil, headers: false, return_headers: false,
               write_headers: nil, header_converters: nil, skip_blanks: false, force_quotes: false,
               skip_lines: nil, liberal_parsing: false, internal_encoding: nil, external_encoding: nil, encoding: nil,
               nil_value: nil,
               empty_value: "")
  raise ArgumentError.new("Cannot parse nil as CSV") if data.nil?

  # create the IO object we will read from
  @io = data.is_a?(String) ? StringIO.new(data) : data
  @encoding = determine_encoding(encoding, internal_encoding)
  #
  # prepare for building safe regular expressions in the target encoding,
  # if we can transcode the needed characters
  #
  @re_esc   = "\\".encode(@encoding).freeze rescue ""
  @re_chars = /#{%"[-\\]\\[\\.^$?*+{}()|# \r\n\t\f\v]".encode(@encoding)}/
  @unconverted_fields = unconverted_fields

  # Stores header row settings and loads header converters, if needed.
  @use_headers    = headers
  @return_headers = return_headers
  @write_headers  = write_headers

  # headers must be delayed until shift(), in case they need a row of content
  @headers = nil

  @nil_value = nil_value
  @empty_value = empty_value
  @empty_value_is_empty_string = (empty_value == "")

  init_separators(col_sep, row_sep, quote_char, force_quotes)
  init_parsers(skip_blanks, field_size_limit, liberal_parsing)
  init_converters(converters, :@converters, :convert)
  init_converters(header_converters, :@header_converters, :header_convert)
  init_comments(skip_lines)

  @force_encoding = !!encoding

  # track our own lineno since IO gets confused about line-ends in CSV fields
  @lineno = 0

  # make sure headers have been assigned
  if header_row? and [Array, String].include? @use_headers.class and @write_headers
    parse_headers  # won't read data for Array or String
    self << @headers
  end
end

Instance Attribute Details

#col_sepObject (readonly)

The encoded :col_sep used in parsing and writing. See CSV::new for details.



# File 'lib/csv.rb', line 954

def col_sep
  @col_sep
end

#encodingObject (readonly)

The Encoding CSV is parsing or writing in. This will be the Encoding you receive parsed data in and/or the Encoding data will be written in.



# File 'lib/csv.rb', line 1027

def encoding
  @encoding
end

#field_size_limitObject (readonly)

The limit for field size, if any. See CSV::new for details.



# File 'lib/csv.rb', line 966

def field_size_limit
  @field_size_limit
end

#lineObject (readonly)

The line number of the last row read from this file. Fields with nested line-end characters will not affect this count.



# File 'lib/csv.rb', line 1033

def line
  @line
end

#linenoObject (readonly)

The line number of the last row read from this file. Fields with nested line-end characters will not affect this count.



# File 'lib/csv.rb', line 1033

def lineno
  @lineno
end

#quote_charObject (readonly)

The encoded :quote_char used in parsing and writing. See CSV::new for details.



# File 'lib/csv.rb', line 964

def quote_char
  @quote_char
end

#row_sepObject (readonly)

The encoded :row_sep used in parsing and writing. See CSV::new for details.



# File 'lib/csv.rb', line 959

def row_sep
  @row_sep
end

#skip_linesObject (readonly)

The regex marking a line as a comment. See CSV::new for details.



# File 'lib/csv.rb', line 969

def skip_lines
  @skip_lines
end

Class Method Details

.filter(input = nil, output = nil, **options) ⇒ Object

:call-seq:

filter( **options ) { |row| ... }
filter( input, **options ) { |row| ... }
filter( input, output, **options ) { |row| ... }

This method is a convenience for building Unix-like filters for CSV data. Each row is yielded to the provided block which can alter it as needed. After the block returns, the row is appended to output altered or not.

The input and output arguments can be anything CSV::new() accepts (generally String or IO objects). If not given, they default to ARGF and $stdout.

The options parameter is also filtered down to CSV::new() after some clever key parsing. Any key beginning with :in_ or :input_ will have that leading identifier stripped and will only be used in the options Hash for the input object. Keys starting with :out_ or :output_ affect only output. All other keys are assigned to both objects.

The :output_row_sep option defaults to $INPUT_RECORD_SEPARATOR ($/).
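
A sketch of typical use in a small script (reads from ARGF, writes to $stdout; option names follow the prefix convention described above):

CSV.filter(in_col_sep: ",", out_col_sep: "\t") do |row|
  row.map! { |field| field.to_s.strip }  # tidy each field before it is written
end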



# File 'lib/csv.rb', line 480

def self.filter(input=nil, output=nil, **options)
  # parse options for input, output, or both
  in_options, out_options = Hash.new, {row_sep: $INPUT_RECORD_SEPARATOR}
  options.each do |key, value|
    case key.to_s
    when /\Ain(?:put)?_(.+)\Z/
      in_options[$1.to_sym] = value
    when /\Aout(?:put)?_(.+)\Z/
      out_options[$1.to_sym] = value
    else
      in_options[key]  = value
      out_options[key] = value
    end
  end
  # build input and output wrappers
  input  = new(input  || ARGF,    in_options)
  output = new(output || $stdout, out_options)

  # read, yield, write
  input.each do |row|
    yield row
    output << row
  end
end

.foreach(path, **options, &block) ⇒ Object

This method is intended as the primary interface for reading CSV files. You pass a path and any options you wish to set for the read. Each row of the file will be passed to the provided block in turn.

The options parameter can be anything CSV::new() understands. This method also understands an additional :encoding parameter that you can use to specify the Encoding of the data in the file to be read. You must provide this unless your data is in Encoding::default_external(). CSV will use this to determine how to parse the data. You may provide a second Encoding to have the data transcoded as it is read. For example, encoding: "UTF-32BE:UTF-8" would read UTF-32BE data from the file but transcode it to UTF-8 before CSV parses it.
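
For example (a sketch with a placeholder path):

CSV.foreach("path/to/file.csv", encoding: "ISO-8859-1:UTF-8") do |row|
  # each row's fields arrive transcoded to UTF-8
end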



# File 'lib/csv.rb', line 519

def self.foreach(path, **options, &block)
  return to_enum(__method__, path, options) unless block_given?
  open(path, options) do |csv|
    csv.each(&block)
  end
end

.generate(str = nil, **options) {|csv| ... } ⇒ Object

:call-seq:

generate( str, **options ) { |csv| ... }
generate( **options ) { |csv| ... }

This method wraps a String you provide, or an empty default String, in a CSV object which is passed to the provided block. You can use the block to append CSV rows to the String and when the block exits, the final String will be returned.

Note that a passed String is modified by this method. Call dup() before passing if you need a new String.

The options parameter can be anything CSV::new() understands. This method understands an additional :encoding parameter when not passed a String to set the base Encoding for the output. CSV needs this hint if you plan to output non-ASCII compatible data.

Yields:

  • (csv)


# File 'lib/csv.rb', line 544

def self.generate(str=nil, **options)
  # add a default empty String, if none was given
  if str
    str = StringIO.new(str)
    str.seek(0, IO::SEEK_END)
  else
    encoding = options[:encoding]
    str      = String.new
    str.force_encoding(encoding) if encoding
  end
  csv = new(str, options) # wrap
  yield csv         # yield for appending
  csv.string        # return final String
end

.generate_line(row, **options) ⇒ Object

This method is a shortcut for converting a single row (Array) into a CSV String.

The options parameter can be anything CSV::new() understands. This method understands an additional :encoding parameter to set the base Encoding for the output. This method will try to guess your Encoding from the first non-nil field in row, if possible, but you may need to use this parameter as a backup plan.

The :row_sep option defaults to $INPUT_RECORD_SEPARATOR ($/) when calling this method.
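
For example (assuming the default $/ of "\n"):

CSV.generate_line(["one", nil, "three"])  #=> "one,,three\n"
CSV.generate_line(%w[a b], col_sep: ";")  #=> "a;b\n"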



# File 'lib/csv.rb', line 572

def self.generate_line(row, **options)
  options = {row_sep: $INPUT_RECORD_SEPARATOR}.merge(options)
  str = String.new
  if options[:encoding]
    str.force_encoding(options[:encoding])
  elsif field = row.find { |f| not f.nil? }
    str.force_encoding(String(field).encoding)
  end
  (new(str, options) << row).string
end

.instance(data = $stdout, **options) ⇒ Object

This method will return a CSV instance, just like CSV::new(), but the instance will be cached and returned for all future calls to this method for the same data object (tested by Object#object_id()) with the same options.

If a block is given, the instance is passed to the block and the return value becomes the return value of the block.
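
A small sketch of the caching behavior:

csv1 = CSV.instance($stderr)
csv2 = CSV.instance($stderr)
csv1.equal?(csv2)  #=> true, the same cached object is returned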



# File 'lib/csv.rb', line 440

def self.instance(data = $stdout, **options)
  # create a _signature_ for this method call, data object and options
  sig = [data.object_id] +
        options.values_at(*DEFAULT_OPTIONS.keys.sort_by { |sym| sym.to_s })

  # fetch or create the instance for this signature
  @@instances ||= Hash.new
  instance = (@@instances[sig] ||= new(data, options))

  if block_given?
    yield instance  # run block, if given, returning result
  else
    instance        # or return the instance
  end
end

.open(filename, mode = "r", **options) ⇒ Object

:call-seq:

open( filename, mode = "rb", **options ) { |faster_csv| ... }
open( filename, **options ) { |faster_csv| ... }
open( filename, mode = "rb", **options )
open( filename, **options )

This method opens an IO object, and wraps that with CSV. This is intended as the primary interface for writing a CSV file.

You must pass a filename and may optionally add a mode for Ruby’s open(). You may also pass an optional Hash containing any options CSV::new() understands as the final argument.

This method works like Ruby’s open() call, in that it will pass a CSV object to a provided block and close it when the block terminates, or it will return the CSV object when no block is provided. (Note: This is different from the Ruby 1.8 CSV library which passed rows to the block. Use CSV::foreach() for that behavior.)

You must provide a mode with an embedded Encoding designator unless your data is in Encoding::default_external(). CSV will check the Encoding of the underlying IO object (set by the mode you pass) to determine how to parse the data. You may provide a second Encoding to have the data transcoded as it is read just as you can with a normal call to IO::open(). For example, "rb:UTF-32BE:UTF-8" would read UTF-32BE data from the file but transcode it to UTF-8 before CSV parses it.

An opened CSV object will delegate to many IO methods for convenience. You may call:

  • binmode()

  • binmode?()

  • close()

  • close_read()

  • close_write()

  • closed?()

  • eof()

  • eof?()

  • external_encoding()

  • fcntl()

  • fileno()

  • flock()

  • flush()

  • fsync()

  • internal_encoding()

  • ioctl()

  • isatty()

  • path()

  • pid()

  • pos()

  • pos=()

  • reopen()

  • seek()

  • stat()

  • sync()

  • sync=()

  • tell()

  • to_i()

  • to_io()

  • truncate()

  • tty?()



# File 'lib/csv.rb', line 646

def self.open(filename, mode="r", **options)
  # wrap a File opened with the remaining +args+ with no newline
  # decorator
  file_opts = {universal_newline: false}.merge(options)

  begin
    f = File.open(filename, mode, file_opts)
  rescue ArgumentError => e
    raise unless /needs binmode/.match?(e.message) and mode == "r"
    mode = "rb"
    file_opts = {encoding: Encoding.default_external}.merge(file_opts)
    retry
  end
  begin
    csv = new(f, options)
  rescue Exception
    f.close
    raise
  end

  # handle blocks like Ruby's open(), not like the CSV library
  if block_given?
    begin
      yield csv
    ensure
      csv.close
    end
  else
    csv
  end
end

.parse(*args, &block) ⇒ Object

:call-seq:

parse( str, **options ) { |row| ... }
parse( str, **options )

This method can be used to easily parse CSV out of a String. You may either provide a block which will be called with each row of the String in turn, or just use the returned Array of Arrays (when no block is given).

You pass your str to read from, and an optional options containing anything CSV::new() understands.



# File 'lib/csv.rb', line 690

def self.parse(*args, &block)
  csv = new(*args)

  return csv.each(&block) if block_given?

  # slurp contents, if no block is given
  begin
    csv.read
  ensure
    csv.close
  end
end

.parse_line(line, **options) ⇒ Object

This method is a shortcut for converting a single line of a CSV String into an Array. Note that if line contains multiple rows, anything beyond the first row is ignored.

The options parameter can be anything CSV::new() understands.
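
For example:

CSV.parse_line("one,two,three")  #=> ["one", "two", "three"]
CSV.parse_line("1,2\n3,4")       #=> ["1", "2"]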



# File 'lib/csv.rb', line 710

def self.parse_line(line, **options)
  new(line, options).shift
end

.read(path, *options) ⇒ Object

Use to slurp a CSV file into an Array of Arrays. Pass the path to the file and any options CSV::new() understands. This method also understands an additional :encoding parameter that you can use to specify the Encoding of the data in the file to be read. You must provide this unless your data is in Encoding::default_external(). CSV will use this to determine how to parse the data. You may provide a second Encoding to have the data transcoded as it is read. For example, encoding: "UTF-32BE:UTF-8" would read UTF-32BE data from the file but transcode it to UTF-8 before CSV parses it.



# File 'lib/csv.rb', line 725

def self.read(path, *options)
  open(path, *options) { |csv| csv.read }
end

.readlines(*args) ⇒ Object

Alias for CSV::read().



# File 'lib/csv.rb', line 730

def self.readlines(*args)
  read(*args)
end

.table(path, **options) ⇒ Object

A shortcut for:

CSV.read( path, { headers:           true,
                  converters:        :numeric,
                  header_converters: :symbol }.merge(options) )
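
Assuming a file like the salary sample shown in the Overview, a sketch of the result:

data = CSV.table("path/to/file.csv")
data.headers      #=> [:name, :department, :salary]
data[0][:salary]  #=> 1000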


# File 'lib/csv.rb', line 741

def self.table(path, **options)
  read( path, { headers:           true,
                converters:        :numeric,
                header_converters: :symbol }.merge(options) )
end

Instance Method Details

#<<(row) ⇒ Object Also known as: add_row, puts

The primary write method for wrapped Strings and IOs, row (an Array or CSV::Row) is converted to CSV and appended to the data source. When a CSV::Row is passed, only the row’s fields() are appended to the output.

The data source must be open for writing.
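
A minimal sketch (assuming the default row separator of "\n"):

out = String.new
csv = CSV.new(out)
csv << ["a", "b"] << [1, 2]   # << returns self, so calls can be chained
out  #=> "a,b\n1,2\n"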



# File 'lib/csv.rb', line 1062

def <<(row)
  # make sure headers have been assigned
  if header_row? and [Array, String].include? @use_headers.class and !@write_headers
    parse_headers  # won't read data for Array or String
  end

  # handle CSV::Row objects and Hashes
  row = case row
        when self.class::Row then row.fields
        when Hash            then @headers.map { |header| row[header] }
        else                      row
        end

  @headers =  row if header_row?
  @lineno  += 1

  output = row.map(&@quote).join(@col_sep) + @row_sep  # quote and separate
  if @io.is_a?(StringIO)             and
     output.encoding != (encoding = raw_encoding)
    if @force_encoding
      output = output.encode(encoding)
    elsif (compatible_encoding = Encoding.compatible?(@io.string, output))
      @io.set_encoding(compatible_encoding)
      @io.seek(0, IO::SEEK_END)
    end
  end
  @io << output

  self  # for chaining
end

#convert(name = nil, &converter) ⇒ Object

:call-seq:

convert( name )
convert { |field| ... }
convert { |field, field_info| ... }

You can use this method to install a CSV::Converters built-in, or provide a block that handles a custom conversion.

If you provide a block that takes one argument, it will be passed the field and is expected to return the converted value or the field itself. If your block takes two arguments, it will also be passed a CSV::FieldInfo Struct, containing details about the field. Again, the block should return a converted field or the field itself.
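
A sketch combining a built-in converter with a custom block (conversion of a field stops once it is no longer a String):

csv = CSV.new("1,2.5,three")
csv.convert(:numeric)
csv.convert { |field| field.is_a?(String) ? field.upcase : field }
csv.shift  #=> [1, 2.5, "THREE"]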



# File 'lib/csv.rb', line 1110

def convert(name = nil, &converter)
  add_converter(:@converters, self.class::Converters, name, &converter)
end

#convertersObject

Returns the current list of converters in effect. See CSV::new for details. Built-in converters will be returned by name, while others will be returned as is.



# File 'lib/csv.rb', line 976

def converters
  @converters.map do |converter|
    name = Converters.rassoc(converter)
    name ? name.first : converter
  end
end

#eachObject

Yields each row of the data source in turn.

Support for Enumerable.

The data source must be open for reading.



# File 'lib/csv.rb', line 1141

def each
  if block_given?
    while row = shift
      yield row
    end
  else
    to_enum
  end
end

#force_quotes?Boolean

Returns true if all output fields are quoted. See CSV::new for details.

Returns:

  • (Boolean)


# File 'lib/csv.rb', line 1019

def force_quotes?()       @force_quotes       end

#header_convert(name = nil, &converter) ⇒ Object

:call-seq:

header_convert( name )
header_convert { |field| ... }
header_convert { |field, field_info| ... }

Identical to CSV#convert(), but for header rows.

Note that this method must be called before header rows are read to have any effect.
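
A small sketch:

csv = CSV.new("Name,Value\n1,2\n", headers: true)
csv.header_convert(:symbol)
row = csv.shift
row.headers  #=> [:name, :value]
row.fields   #=> ["1", "2"]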



# File 'lib/csv.rb', line 1125

def header_convert(name = nil, &converter)
  add_converter( :@header_converters,
                 self.class::HeaderConverters,
                 name,
                 &converter )
end

#header_convertersObject

Returns the current list of converters in effect for headers. See CSV::new for details. Built-in converters will be returned by name, while others will be returned as is.



# File 'lib/csv.rb', line 1007

def header_converters
  @header_converters.map do |converter|
    name = HeaderConverters.rassoc(converter)
    name ? name.first : converter
  end
end

#header_row?Boolean

Returns true if the next row read will be a header row.

Returns:

  • (Boolean)


# File 'lib/csv.rb', line 1167

def header_row?
  @use_headers and @headers.nil?
end

#headersObject

Returns nil if headers will not be used, true if they will but have not yet been read, or the actual headers after they have been read. See CSV::new for details.



# File 'lib/csv.rb', line 992

def headers
  @headers || true if @use_headers
end

#inspectObject

Returns a simplified description of the key CSV attributes in an ASCII compatible String.



# File 'lib/csv.rb', line 1353

def inspect
  str = ["<#", self.class.to_s, " io_type:"]
  # show type of wrapped IO
  if    @io == $stdout then str << "$stdout"
  elsif @io == $stdin  then str << "$stdin"
  elsif @io == $stderr then str << "$stderr"
  else                      str << @io.class.to_s
  end
  # show IO.path(), if available
  if @io.respond_to?(:path) and (p = @io.path)
    str << " io_path:" << p.inspect
  end
  # show encoding
  str << " encoding:" << @encoding.name
  # show other attributes
  %w[ lineno     col_sep     row_sep
      quote_char skip_blanks liberal_parsing ].each do |attr_name|
    if a = instance_variable_get("@#{attr_name}")
      str << " " << attr_name << ":" << a.inspect
    end
  end
  if @use_headers
    str << " headers:" << headers.inspect
  end
  str << ">"
  begin
    str.join('')
  rescue  # any encoding error
    str.map do |s|
      e = Encoding::Converter.asciicompat_encoding(s.encoding)
      e ? s.encode(e) : s.force_encoding("ASCII-8BIT")
    end.join('')
  end
end

#liberal_parsing?Boolean

Returns true if illegal input is handled. See CSV::new for details.

Returns:

  • (Boolean)


# File 'lib/csv.rb', line 1021

def liberal_parsing?()    @liberal_parsing    end

#readObject Also known as: readlines

Slurps the remaining rows and returns an Array of Arrays.

The data source must be open for reading.



# File 'lib/csv.rb', line 1156

def read
  rows = to_a
  if @use_headers
    Table.new(rows)
  else
    rows
  end
end

#return_headers?Boolean

Returns true if headers will be returned as a row of results. See CSV::new for details.

Returns:

  • (Boolean)


# File 'lib/csv.rb', line 999

def return_headers?()     @return_headers     end

#rewindObject

Rewinds the underlying IO object and resets CSV’s lineno() counter.



# File 'lib/csv.rb', line 1046

def rewind
  @headers = nil
  @lineno  = 0

  @io.rewind
end

#shiftObject Also known as: gets, readline

The primary read method for wrapped Strings and IOs, a single row is pulled from the data source, parsed and returned as an Array of fields (if header rows are not used) or a CSV::Row (when header rows are used).

The data source must be open for reading.
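
A small sketch of the behavior without headers:

csv = CSV.new("1,2,3\n4,5,6\n")
csv.shift  #=> ["1", "2", "3"]
csv.shift  #=> ["4", "5", "6"]
csv.shift  #=> nil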



# File 'lib/csv.rb', line 1178

def shift
  #########################################################################
  ### This method is purposefully kept a bit long as simple conditional ###
  ### checks are faster than numerous (expensive) method calls.         ###
  #########################################################################

  # handle headers not based on document content
  if header_row? and @return_headers and
     [Array, String].include? @use_headers.class
    if @unconverted_fields
      return add_unconverted_fields(parse_headers, Array.new)
    else
      return parse_headers
    end
  end

  #
  # it can take multiple calls to <tt>@io.gets()</tt> to get a full line,
  # because of \r and/or \n characters embedded in quoted fields
  #
  in_extended_col = false
  csv             = Array.new

  loop do
    # add another read to the line
    unless parse = @io.gets(@row_sep)
      return nil
    end

    if in_extended_col
      @line.concat(parse)
    else
      @line = parse.clone
    end

    begin
      parse.sub!(@parsers[:line_end], "")
    rescue ArgumentError
      unless parse.valid_encoding?
        message = "Invalid byte sequence in #{parse.encoding}"
        raise MalformedCSVError.new(message, lineno + 1)
      end
      raise
    end

    if csv.empty?
      #
      # I believe a blank line should be an <tt>Array.new</tt>, not Ruby 1.8
      # CSV's <tt>[nil]</tt>
      #
      if parse.empty?
        @lineno += 1
        if @skip_blanks
          next
        elsif @unconverted_fields
          return add_unconverted_fields(Array.new, Array.new)
        elsif @use_headers
          return self.class::Row.new(@headers, Array.new)
        else
          return Array.new
        end
      end
    end

    next if @skip_lines and @skip_lines.match parse

    parts =  parse.split(@col_sep_split_separator, -1)
    if parts.empty?
      if in_extended_col
        csv[-1] << @col_sep   # will be replaced with a @row_sep after the parts.each loop
      else
        csv << nil
      end
    end

    # This loop is the hot path of csv parsing. Some things may be non-dry
    # for a reason. Make sure to benchmark when refactoring.
    parts.each do |part|
      if in_extended_col
        # If we are continuing a previous column
        if part.end_with?(@quote_char) && part.count(@quote_char) % 2 != 0
          # extended column ends
          csv.last << part[0..-2]
          if csv.last.match?(@parsers[:stray_quote])
            raise MalformedCSVError.new("Missing or stray quote",
                                        lineno + 1)
          end
          csv.last.gsub!(@double_quote_char, @quote_char)
          in_extended_col = false
        else
          csv.last << part << @col_sep
        end
      elsif part.start_with?(@quote_char)
        # If we are starting a new quoted column
        if part.count(@quote_char) % 2 != 0
          # start an extended column
          csv << (part[1..-1] << @col_sep)
          in_extended_col =  true
        elsif part.end_with?(@quote_char)
          # regular quoted column
          csv << part[1..-2]
          if csv.last.match?(@parsers[:stray_quote])
            raise MalformedCSVError.new("Missing or stray quote",
                                        lineno + 1)
          end
          csv.last.gsub!(@double_quote_char, @quote_char)
        elsif @liberal_parsing
          csv << part
        else
          raise MalformedCSVError.new("Missing or stray quote",
                                      lineno + 1)
        end
      elsif part.match?(@parsers[:quote_or_nl])
        # Unquoted field with bad characters.
        if part.match?(@parsers[:nl_or_lf])
          message = "Unquoted fields do not allow \\r or \\n"
          raise MalformedCSVError.new(message, lineno + 1)
        else
          if @liberal_parsing
            csv << part
          else
            raise MalformedCSVError.new("Illegal quoting", lineno + 1)
          end
        end
      else
        # Regular ole unquoted field.
        csv << (part.empty? ? nil : part)
      end
    end

    # Replace tacked on @col_sep with @row_sep if we are still in an extended
    # column.
    csv[-1][-1] = @row_sep if in_extended_col

    if in_extended_col
      # if we're at eof?(), a quoted field wasn't closed...
      if @io.eof?
        raise MalformedCSVError.new("Unclosed quoted field",
                                    lineno + 1)
      elsif @field_size_limit and csv.last.size >= @field_size_limit
        raise MalformedCSVError.new("Field size exceeded",
                                    lineno + 1)
      end
      # otherwise, we need to loop and pull some more data to complete the row
    else
      @lineno += 1

      # save fields unconverted fields, if needed...
      unconverted = csv.dup if @unconverted_fields

      if @use_headers
        # parse out header rows and handle CSV::Row conversions...
        csv = parse_headers(csv)
      else
        # convert fields, if needed...
        csv = convert_fields(csv)
      end

      # inject unconverted fields and accessor, if requested...
      if @unconverted_fields and not csv.respond_to? :unconverted_fields
        add_unconverted_fields(csv, unconverted)
      end

      # return the results
      break csv
    end
  end
end

#skip_blanks?Boolean

Returns true if blank lines are skipped by the parser. See CSV::new for details.

Returns:

  • (Boolean)


# File 'lib/csv.rb', line 1017

def skip_blanks?()        @skip_blanks        end

#unconverted_fields?Boolean

Returns true if unconverted_fields() will be added to parsed results. See CSV::new for details.

Returns:

  • (Boolean)


# File 'lib/csv.rb', line 986

def unconverted_fields?() @unconverted_fields end

#write_headers?Boolean

Returns true if headers are written in output. See CSV::new for details.

Returns:

  • (Boolean)


# File 'lib/csv.rb', line 1001

def write_headers?()      @write_headers      end