Class: HexaPDF::Tokenizer

Inherits:
Object
Defined in:
lib/hexapdf/tokenizer.rb

Overview

Tokenizes the content of an IO object following the PDF rules.

See: PDF1.7 s7.2
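
Example (a minimal sketch with a hypothetical StringIO input; PDF names are assumed to map to Ruby symbols and dictionaries to Ruby hashes, as the parse helpers suggest):

require 'hexapdf'; require 'stringio'

io = StringIO.new("<< /Type /Catalog >> [1 2 3]".b)
tokenizer = HexaPDF::Tokenizer.new(io)
tokenizer.next_object   # => {Type: :Catalog}
tokenizer.next_object   # => [1, 2, 3]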

Direct Known Subclasses

Content::Tokenizer

Defined Under Namespace

Classes: Token

Constant Summary

TOKEN_DICT_START = Token.new('<<'.b)

TOKEN_DICT_END = Token.new('>>'.b)

TOKEN_ARRAY_START = Token.new('['.b)

TOKEN_ARRAY_END = Token.new(']'.b)

NO_MORE_TOKENS = ::Object.new
  This object is returned when there are no more tokens to read.

WHITESPACE = "\0\t\n\f\r "
  Characters defined as whitespace. See: PDF1.7 s7.2.2

DELIMITER = "()<>{}/[]%"
  Characters defined as delimiters. See: PDF1.7 s7.2.2

WHITESPACE_MULTI_RE = /[#{WHITESPACE}]+/

WHITESPACE_OR_DELIMITER_RE = /(?=[#{Regexp.escape(WHITESPACE)}#{Regexp.escape(DELIMITER)}])/


Constructor Details

#initialize(io) ⇒ Tokenizer

Creates a new tokenizer.



# File 'lib/hexapdf/tokenizer.rb', line 75

def initialize(io)
  @io = io
  @ss = StringScanner.new(''.b)
  @original_pos = -1
  self.pos = 0
end

Instance Attribute Details

#io ⇒ Object (readonly)

The IO object from which the tokens are read.



# File 'lib/hexapdf/tokenizer.rb', line 72

def io
  @io
end

Instance Method Details

#next_byte ⇒ Object

Reads the byte (an integer) at the current position and advances the scan pointer.



# File 'lib/hexapdf/tokenizer.rb', line 190

def next_byte
  prepare_string_scanner(1)
  @ss.pos += 1
  @ss.string.getbyte(@ss.pos - 1)
end
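
Example (a sketch with hypothetical input; bytes are returned as integers):

require 'hexapdf'; require 'stringio'

tokenizer = HexaPDF::Tokenizer.new(StringIO.new("AB".b))
tokenizer.next_byte   # => 65  ("A")
tokenizer.next_byte   # => 66  ("B")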

#next_object(allow_end_array_token: false, allow_keyword: false) ⇒ Object

Returns the PDF object at the current position. This is different from #next_token because references, arrays and dictionaries consist of multiple tokens.

If the allow_end_array_token argument is true, the ‘]’ token is permitted to facilitate the use of this method during array parsing. Similarly, if the allow_keyword argument is true, keyword tokens are returned as-is instead of raising an error.

See: PDF1.7 s7.3



# File 'lib/hexapdf/tokenizer.rb', line 166

def next_object(allow_end_array_token: false, allow_keyword: false)
  token = next_token

  if token.kind_of?(Token)
    case token
    when TOKEN_DICT_START
      token = parse_dictionary
    when TOKEN_ARRAY_START
      token = parse_array
    when TOKEN_ARRAY_END
      unless allow_end_array_token
        raise HexaPDF::MalformedPDFError.new("Found invalid end array token ']'", pos: pos)
      end
    else
      unless allow_keyword
        raise HexaPDF::MalformedPDFError.new("Invalid object, got token #{token}", pos: pos)
      end
    end
  end

  token
end
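
Example (a sketch with hypothetical input; literal strings are assumed to come back as Ruby strings and names as symbols):

require 'hexapdf'; require 'stringio'

tokenizer = HexaPDF::Tokenizer.new(StringIO.new("<< /Key (value) >> ]".b))
tokenizer.next_object                               # => {Key: "value"}
tokenizer.next_object(allow_end_array_token: true)  # => the ']' token
# Without allow_end_array_token: true, the trailing ']' would raise
# HexaPDF::MalformedPDFError.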

#next_token ⇒ Object

Returns a single token read from the current position and advances the scan pointer.

Comments and a run of whitespace characters are ignored. The value NO_MORE_TOKENS is returned if there are no more tokens available.



# File 'lib/hexapdf/tokenizer.rb', line 108

def next_token
  prepare_string_scanner(20)
  prepare_string_scanner(20) while @ss.skip(WHITESPACE_MULTI_RE)
  case (@ss.eos? ? -1 : @ss.string.getbyte(@ss.pos))
  when 43, 45, 46, 48..57 # + - . 0..9
    parse_number
  when 47 # /
    parse_name
  when 40 # (
    parse_literal_string
  when 60 # <
    if @ss.string.getbyte(@ss.pos + 1) != 60
      parse_hex_string
    else
      @ss.pos += 2
      TOKEN_DICT_START
    end
  when 62 # >
    unless @ss.string.getbyte(@ss.pos + 1) == 62
      raise HexaPDF::MalformedPDFError.new("Delimiter '>' found at invalid position", pos: pos)
    end
    @ss.pos += 2
    TOKEN_DICT_END
  when 91 # [
    @ss.pos += 1
    TOKEN_ARRAY_START
  when 93 # ]
    @ss.pos += 1
    TOKEN_ARRAY_END
  when 123, 125 # { }
    Token.new(@ss.get_byte)
  when 37 # %
    until @ss.skip_until(/(?=[\r\n])/)
      return NO_MORE_TOKENS unless prepare_string_scanner
    end
    next_token
  when -1 # we reached the end of the file
    NO_MORE_TOKENS
  else # everything else consisting of regular characters
    parse_keyword
  end
end
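
Example (a sketch with hypothetical input; the Ruby values shown for names, numbers and strings reflect the assumed behaviour of the parse helpers):

require 'hexapdf'; require 'stringio'

tokenizer = HexaPDF::Tokenizer.new(StringIO.new("% comment\n/Name 42 (str) <<".b))
tokenizer.next_token   # => :Name   (the comment is skipped)
tokenizer.next_token   # => 42
tokenizer.next_token   # => "str"
tokenizer.next_token   # => TOKEN_DICT_START
tokenizer.next_token   # => NO_MORE_TOKENS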

#next_xref_entry ⇒ Object

Reads the cross-reference subsection entry at the current position and advances the scan pointer.

See: PDF1.7 s7.5.4



# File 'lib/hexapdf/tokenizer.rb', line 200

def next_xref_entry
  prepare_string_scanner(20)
  unless @ss.skip(/(\d{10}) (\d{5}) ([nf])(?: \r| \n|\r\n)/)
    raise HexaPDF::MalformedPDFError.new("Invalid cross-reference subsection entry", pos: pos)
  end
  [@ss[1].to_i, @ss[2].to_i, @ss[3]]
end
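
Example (a sketch with a hypothetical entry line):

require 'hexapdf'; require 'stringio'

tokenizer = HexaPDF::Tokenizer.new(StringIO.new("0000000017 00000 n \n".b))
tokenizer.next_xref_entry   # => [17, 0, "n"]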

#peek_token ⇒ Object

Returns the next token but does not advance the scan pointer.



# File 'lib/hexapdf/tokenizer.rb', line 152

def peek_token
  pos = self.pos
  tok = next_token
  self.pos = pos
  tok
end
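
Example (a sketch with hypothetical input; names are assumed to map to Ruby symbols):

require 'hexapdf'; require 'stringio'

tokenizer = HexaPDF::Tokenizer.new(StringIO.new("/First /Second".b))
tokenizer.peek_token   # => :First
tokenizer.peek_token   # => :First  (position unchanged)
tokenizer.next_token   # => :First  (now the scan pointer advances)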

#pos ⇒ Object

Returns the current position of the tokenizer inside the IO object.

Note that this position might be different from io.pos since the latter could have been changed somewhere else.



# File 'lib/hexapdf/tokenizer.rb', line 86

def pos
  @original_pos + @ss.pos
end
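
Example (a sketch with hypothetical input; names are assumed to map to Ruby symbols):

require 'hexapdf'; require 'stringio'

tokenizer = HexaPDF::Tokenizer.new(StringIO.new("/Name 42".b))
tokenizer.pos          # => 0
tokenizer.next_token   # => :Name
tokenizer.pos          # => 5
# io.pos may be larger at this point because the tokenizer reads ahead in chunks.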

#pos=(pos) ⇒ Object

Sets the position at which the next token should be read.

Note that this does not change io.pos at the moment of invocation!



# File 'lib/hexapdf/tokenizer.rb', line 93

def pos=(pos)
  if pos >= @original_pos && pos <= @original_pos + @ss.string.size
    @ss.pos = pos - @original_pos
  else
    @original_pos = pos
    @next_read_pos = pos
    @ss.string.clear
    @ss.reset
  end
end
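
Example (a sketch with hypothetical input, re-reading the same token; names are assumed to map to Ruby symbols):

require 'hexapdf'; require 'stringio'

tokenizer = HexaPDF::Tokenizer.new(StringIO.new("/A /B /C".b))
tokenizer.next_token   # => :A
tokenizer.pos = 0
tokenizer.next_token   # => :A  (read again from the start)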

#scan_until(re) ⇒ Object

Utility method for scanning until the given regular expression matches.

If the end of the file is reached in the process, nil is returned. Otherwise the matched string is returned.



# File 'lib/hexapdf/tokenizer.rb', line 220

def scan_until(re)
  until (data = @ss.scan_until(re))
    return nil unless prepare_string_scanner
  end
  data
end
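
Example (a sketch with hypothetical input; as with StringScanner#scan_until, the returned data extends up to and including the match):

require 'hexapdf'; require 'stringio'

tokenizer = HexaPDF::Tokenizer.new(StringIO.new("1 0 obj\n<< >>\nendobj".b))
tokenizer.scan_until(/endobj/)   # => "1 0 obj\n<< >>\nendobj"
tokenizer.scan_until(/endobj/)   # => nil  (end of input reached)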

#skip_whitespace ⇒ Object

Skips all whitespace at the current position.

See: PDF1.7 s7.2.2
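
Example (a sketch with hypothetical input):

require 'hexapdf'; require 'stringio'

tokenizer = HexaPDF::Tokenizer.new(StringIO.new("    \n\t 42".b))
tokenizer.skip_whitespace
tokenizer.pos         # => 7  (positioned on the first non-whitespace byte)
tokenizer.next_byte   # => 52 ("4")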



# File 'lib/hexapdf/tokenizer.rb', line 211

def skip_whitespace
  prepare_string_scanner
  prepare_string_scanner while @ss.skip(WHITESPACE_MULTI_RE)
end