Class: Puppet::Pops::Parser::Lexer2

Inherits:
Object
Includes:
EppSupport, HeredocSupport, InterpolationSupport, LexerSupport, SlurpSupport
Defined in:
lib/puppet/pops/parser/lexer2.rb

Constant Summary

TOKEN_LBRACK =

All tokens have three slots: the token name (a Symbol), the token text (a String), and the token text length. All operator and punctuation tokens reuse frozen singleton arrays; tokens that require unique values create a unique array per token.

PERFORMANCE NOTES: This construct reduces the number of objects that need to be created for operators and punctuation. The length is pre-calculated for all singleton tokens; it is used both to signal the length of the token and to advance the scanner position (without having to advance it with a scan(regexp)).

[:LBRACK,       '['.freeze,   1].freeze
TOKEN_LISTSTART =
[:LISTSTART,    '['.freeze,   1].freeze
TOKEN_RBRACK =
[:RBRACK,       ']'.freeze,   1].freeze
TOKEN_LBRACE =
[:LBRACE,       '{'.freeze,   1].freeze
TOKEN_RBRACE =
[:RBRACE,       '}'.freeze,   1].freeze
TOKEN_SELBRACE =
[:SELBRACE,     '{'.freeze,   1].freeze
TOKEN_LPAREN =
[:LPAREN,       '('.freeze,   1].freeze
TOKEN_RPAREN =
[:RPAREN,       ')'.freeze,   1].freeze
TOKEN_EQUALS =
[:EQUALS,       '='.freeze,   1].freeze
TOKEN_APPENDS =
[:APPENDS,      '+='.freeze,  2].freeze
TOKEN_DELETES =
[:DELETES,      '-='.freeze,  2].freeze
TOKEN_ISEQUAL =
[:ISEQUAL,      '=='.freeze,  2].freeze
TOKEN_NOTEQUAL =
[:NOTEQUAL,     '!='.freeze,  2].freeze
TOKEN_MATCH =
[:MATCH,        '=~'.freeze,  2].freeze
TOKEN_NOMATCH =
[:NOMATCH,      '!~'.freeze,  2].freeze
TOKEN_GREATEREQUAL =
[:GREATEREQUAL, '>='.freeze,  2].freeze
TOKEN_GREATERTHAN =
[:GREATERTHAN,  '>'.freeze,   1].freeze
TOKEN_LESSEQUAL =
[:LESSEQUAL,    '<='.freeze,  2].freeze
TOKEN_LESSTHAN =
[:LESSTHAN,     '<'.freeze,   1].freeze
TOKEN_FARROW =
[:FARROW,       '=>'.freeze,  2].freeze
TOKEN_PARROW =
[:PARROW,       '+>'.freeze,  2].freeze
TOKEN_LSHIFT =
[:LSHIFT,       '<<'.freeze,  2].freeze
TOKEN_LLCOLLECT =
[:LLCOLLECT,    '<<|'.freeze, 3].freeze
TOKEN_LCOLLECT =
[:LCOLLECT,     '<|'.freeze,  2].freeze
TOKEN_RSHIFT =
[:RSHIFT,       '>>'.freeze,  2].freeze
TOKEN_RRCOLLECT =
[:RRCOLLECT,    '|>>'.freeze, 3].freeze
TOKEN_RCOLLECT =
[:RCOLLECT,     '|>'.freeze,  2].freeze
TOKEN_PLUS =
[:PLUS,         '+'.freeze,   1].freeze
TOKEN_MINUS =
[:MINUS,        '-'.freeze,   1].freeze
TOKEN_DIV =
[:DIV,          '/'.freeze,   1].freeze
TOKEN_TIMES =
[:TIMES,        '*'.freeze,   1].freeze
TOKEN_MODULO =
[:MODULO,       '%'.freeze,   1].freeze
TOKEN_NOT =
[:NOT,          '!'.freeze,   1].freeze
TOKEN_DOT =
[:DOT,          '.'.freeze,   1].freeze
TOKEN_PIPE =
[:PIPE,         '|'.freeze,   1].freeze
TOKEN_AT =
[:AT,           '@'.freeze,   1].freeze
TOKEN_ATAT =
[:ATAT,         '@@'.freeze,  2].freeze
TOKEN_COLON =
[:COLON,        ':'.freeze,   1].freeze
TOKEN_COMMA =
[:COMMA,        ','.freeze,   1].freeze
TOKEN_SEMIC =
[:SEMIC,        ';'.freeze,   1].freeze
TOKEN_QMARK =
[:QMARK,        '?'.freeze,   1].freeze
TOKEN_TILDE =

Lexed, but not an operator in Puppet.

[:TILDE,        '~'.freeze,   1].freeze
TOKEN_REGEXP =
[:REGEXP,       nil,   0].freeze
TOKEN_IN_EDGE =
[:IN_EDGE,      '->'.freeze,  2].freeze
TOKEN_IN_EDGE_SUB =
[:IN_EDGE_SUB,  '~>'.freeze,  2].freeze
TOKEN_OUT_EDGE =
[:OUT_EDGE,     '<-'.freeze,  2].freeze
TOKEN_OUT_EDGE_SUB =
[:OUT_EDGE_SUB, '<~'.freeze,  2].freeze
TOKEN_STRING =

Tokens that are always unique to what has been lexed

[:STRING, nil,          0].freeze
TOKEN_WORD =
[:WORD, nil,            0].freeze
TOKEN_DQPRE =
[:DQPRE,  nil,          0].freeze
TOKEN_DQMID =
[:DQMID,  nil,          0].freeze
TOKEN_DQPOST =
[:DQPOST, nil,          0].freeze
TOKEN_NUMBER =
[:NUMBER, nil,          0].freeze
TOKEN_VARIABLE =
[:VARIABLE, nil,        1].freeze
TOKEN_VARIABLE_EMPTY =
[:VARIABLE, ''.freeze,  1].freeze
TOKEN_HEREDOC =

HEREDOC has syntax as an argument.

[:HEREDOC, nil, 0].freeze
TOKEN_EPPSTART =

EPP_START is currently a marker token; it may later carry syntax.

[:EPP_START, nil, 0].freeze
TOKEN_EPPEND =
[:EPP_END, '%>', 2].freeze
TOKEN_EPPEND_TRIM =
[:EPP_END_TRIM, '-%>', 3].freeze
TOKEN_OTHER =

This is used for unrecognized tokens; the text will always be a single character. This particular instance is not used, but is kept here for documentation purposes.

[:OTHER,  nil,  0]
KEYWORDS =

Keywords are all singleton tokens with pre-calculated lengths. Booleans are pre-calculated (rather than evaluating the strings "false" and "true" repeatedly).

{
  "case"     => [:CASE,     'case',     4],
  "class"    => [:CLASS,    'class',    5],
  "default"  => [:DEFAULT,  'default',  7],
  "define"   => [:DEFINE,   'define',   6],
  "if"       => [:IF,       'if',       2],
  "elsif"    => [:ELSIF,    'elsif',    5],
  "else"     => [:ELSE,     'else',     4],
  "inherits" => [:INHERITS, 'inherits', 8],
  "node"     => [:NODE,     'node',     4],
  "and"      => [:AND,      'and',      3],
  "or"       => [:OR,       'or',       2],
  "undef"    => [:UNDEF,    'undef',    5],
  "false"    => [:BOOLEAN,  false,      5],
  "true"     => [:BOOLEAN,  true,       4],
  "in"       => [:IN,       'in',       2],
  "unless"   => [:UNLESS,   'unless',   6],
  "function" => [:FUNCTION, 'function', 8],
  "type"     => [:TYPE,     'type',     4],
  "attr"     => [:ATTR,     'attr',     4],
  "private"  => [:PRIVATE,  'private',  7],
}
KEYWORD_NAMES =

Reverse lookup of keyword name to string

{}
PATTERN_WS =
%r{[[:blank:]\r]+}
PATTERN_COMMENT =

The single line comment includes the line ending.

%r{#.*\r?}
PATTERN_MLCOMMENT =
%r{/\*(.*?)\*/}m
PATTERN_REGEX =
%r{/[^/\n]*/}
PATTERN_REGEX_END =
%r{/}
PATTERN_REGEX_A =

for replacement to ""

%r{\A/}
PATTERN_REGEX_Z =

for replacement to ""

%r{/\Z}
PATTERN_REGEX_ESC =

for replacement to "/"

%r{\\/}
PATTERN_CLASSREF =

The NAME and CLASSREF in 4x are strict. Each segment must start with a letter a-z and may not contain dashes (\w includes letters, digits and _).

%r{((::){0,1}[A-Z][\w]*)+}
PATTERN_NAME =
%r{((::)?[a-z][\w]*)(::[a-z][\w]*)*}
PATTERN_BARE_WORD =
%r{[a-z_](?:[\w-]*[\w])?}
PATTERN_DOLLAR_VAR =
%r{\$(::)?(\w+::)*\w+}
PATTERN_NUMBER =
%r{\b(?:0[xX][0-9A-Fa-f]+|0?\d+(?:\.\d+)?(?:[eE]-?\d+)?)\b}
STRING_BSLASH_BSLASH =

PERFORMANCE NOTE: Comparison against a frozen string is faster (than unfrozen).

'\\'.freeze
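
Taken together, a token is just a frozen three-slot array; a small illustration using the definitions above (outputs shown as comments):

name, text, length = TOKEN_FARROW
name                # => :FARROW
text                # => "=>"
length              # => 2 (also used to advance the scanner position)
KEYWORDS['true']    # => [:BOOLEAN, true, 4]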

Constants included from EppSupport

EppSupport::TOKEN_RENDER_EXPR, EppSupport::TOKEN_RENDER_STRING

Constants included from SlurpSupport

SlurpSupport::DQ_ESCAPES, SlurpSupport::SLURP_ALL_PATTERN, SlurpSupport::SLURP_DQ_PATTERN, SlurpSupport::SLURP_SQ_PATTERN, SlurpSupport::SLURP_UQ_PATTERN, SlurpSupport::SQ_ESCAPES, SlurpSupport::UQ_ESCAPES

Constants included from InterpolationSupport

InterpolationSupport::PATTERN_VARIABLE

Constants included from HeredocSupport

HeredocSupport::PATTERN_HEREDOC

Instance Attribute Summary

Instance Method Summary

Methods included from EppSupport

#fullscan_epp, #interpolate_epp, #scan_epp

Methods included from SlurpSupport

#slurp, #slurp_dqstring, #slurp_sqstring, #slurp_uqstring

Methods included from InterpolationSupport

#enqueue_until, #interpolate_dq, #interpolate_tail_dq, #interpolate_tail_uq, #interpolate_uq, #interpolate_uq_to, #transform_to_variable

Methods included from HeredocSupport

#heredoc, #heredoc_text

Methods included from LexerSupport

#assert_numeric, #followed_by, #format_quote, #lex_error, #lex_error_without_pos, #positioned_message

Constructor Details

#initialize ⇒ Lexer2

Returns a new instance of Lexer2.



# File 'lib/puppet/pops/parser/lexer2.rb', line 177

def initialize()
end

Instance Attribute Details

#locator ⇒ Object (readonly)



# File 'lib/puppet/pops/parser/lexer2.rb', line 175

def locator
  @locator
end

Instance Method Details

#clear ⇒ Object

Clears the lexer state. It is not required to call this, since the state will be garbage collected and the next lex call (lex_string, lex_file) resets the internal state.



# File 'lib/puppet/pops/parser/lexer2.rb', line 183

def clear()
  # not really needed, but if someone wants to ensure garbage is collected as early as possible
  @scanner = nil
  @locator = nil
  @lexing_context = nil
end

#emit(token, byte_offset) ⇒ Object

Emits (produces) a token [:tokensymbol, TokenValue] and moves the scanner’s position past the token.



# File 'lib/puppet/pops/parser/lexer2.rb', line 646

def emit(token, byte_offset)
  @scanner.pos = byte_offset + token[2]
  [token[0], TokenValue.new(token, byte_offset, @locator)]
end

#emit_completed(token, byte_offset) ⇒ Object

Emits the completed token of the form [:tokensymbol, TokenValue]. This method does not alter the scanner’s position.



# File 'lib/puppet/pops/parser/lexer2.rb', line 654

def emit_completed(token, byte_offset)
  [token[0], TokenValue.new(token, byte_offset, @locator)]
end

#enqueue(emitted_token) ⇒ Object

Allows subprocessors (for heredoc, etc.) to enqueue tokens that are tokenized by a different lexer instance.



# File 'lib/puppet/pops/parser/lexer2.rb', line 665

def enqueue(emitted_token)
  @token_queue << emitted_token
end

#enqueue_completed(token, byte_offset) ⇒ Object

Enqueues a completed token at the given offset



# File 'lib/puppet/pops/parser/lexer2.rb', line 659

def enqueue_completed(token, byte_offset)
  @token_queue << emit_completed(token, byte_offset)
end
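
This is equivalent to enqueue(emit_completed(token, byte_offset)). For example, the '%' branch of #lex_token below uses it to queue the EPP end token at the offset where '%>' began:

enqueue_completed(TOKEN_EPPEND, before)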

#fileObject

TODO: This method should not be used; callers should get the locator, since it is most likely required to compute line, position, etc. given offsets.



# File 'lib/puppet/pops/parser/lexer2.rb', line 229

def file
  @locator ? @locator.file : nil
end

#file=(file) ⇒ Object

Convenience method for compatibility with the older lexer. Use lex_file instead. (It is bad form to overload the assignment operator for something that is not really an assignment.)



# File 'lib/puppet/pops/parser/lexer2.rb', line 222

def file=(file)
  lex_file(file)
end

#fullscan ⇒ Object

Scans all of the content and returns it in an array. Note that the terminating [false, false] token is included in the result.



# File 'lib/puppet/pops/parser/lexer2.rb', line 254

def fullscan
  result = []
  scan {|token, value| result.push([token, value]) }
  result
end
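
A minimal usage sketch (the token stream shown is illustrative):

lexer = Puppet::Pops::Parser::Lexer2.new
lexer.lex_string('$x = 10')
lexer.fullscan.map(&:first)
# => [:VARIABLE, :EQUALS, :NUMBER, false]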

#initvars ⇒ Object



# File 'lib/puppet/pops/parser/lexer2.rb', line 242

def initvars
  @token_queue = []
  # NOTE: additional keys are used; :escapes, :uq_slurp_pattern, :newline_jump, :epp_*
  @lexing_context = {
    :brace_count => 0,
    :after => nil,
  }
end

#lex_file(file) ⇒ Object

Initializes lexing of the content of the given file. An empty string is used if the file does not exist.



# File 'lib/puppet/pops/parser/lexer2.rb', line 235

def lex_file(file)
  initvars
  contents = Puppet::FileSystem.exist?(file) ? Puppet::FileSystem.read(file) : ""
  @scanner = StringScanner.new(contents.freeze)
  @locator = Puppet::Pops::Parser::Locator.locator(contents, file)
end
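
A hedged sketch (the manifest path is only an example):

lexer = Puppet::Pops::Parser::Lexer2.new
lexer.lex_file('/tmp/site.pp')   # example path
lexer.fullscan                   # => [[false, false]] if the file does not exist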

#lex_string(string, path = '') ⇒ Object



# File 'lib/puppet/pops/parser/lexer2.rb', line 199

def lex_string(string, path='')
  initvars
  @scanner = StringScanner.new(string)
  @locator = Puppet::Pops::Parser::Locator.locator(string, path)
end

#lex_token ⇒ Object

This lexes one token at the current position of the scanner. PERFORMANCE NOTE: Any change to this logic should be performance measured.



# File 'lib/puppet/pops/parser/lexer2.rb', line 296

def lex_token
  # Using three char look ahead (may be faster to do 2 char look ahead since only 2 tokens require a third)
  scn = @scanner
  ctx = @lexing_context
  before = @scanner.pos

  # A look ahead of 3 characters is used since the longest operator ambiguity is resolved at that point.
  # PERFORMANCE NOTE: It is faster to peek once and use three separate variables for lookahead 0, 1 and 2.
  #
  la = scn.peek(3)
  return nil if la.empty?

  # Ruby 1.8.7 requires using offset and length (or integers are returned).
  # PERFORMANCE NOTE.
  # It is slightly faster to use these local variables than accessing la[0], la[1] etc. in ruby 1.9.3
  # But not big enough to warrant two completely different implementations.
  #
  la0 = la[0,1]
  la1 = la[1,1]
  la2 = la[2,1]

  # PERFORMANCE NOTE:
  # A case when, where all the cases are literal values is the fastest way to map from data to code.
  # It is much faster than using a hash with lambdas, hash with symbol used to then invoke send etc.
  # This case statement is evaluated for most character positions in puppet source, and great care must
  # be taken to not introduce performance regressions.
  #
  case la0

  when '.'
    emit(TOKEN_DOT, before)

  when ','
    emit(TOKEN_COMMA, before)

  when '['
    if (before == 0 || scn.string[locator.char_offset(before)-1,1] =~ /[[:blank:]\r\n]+/)
      emit(TOKEN_LISTSTART, before)
    else
      emit(TOKEN_LBRACK, before)
    end

  when ']'
    emit(TOKEN_RBRACK, before)

  when '('
    emit(TOKEN_LPAREN, before)

  when ')'
    emit(TOKEN_RPAREN, before)

  when ';'
    emit(TOKEN_SEMIC, before)

  when '?'
    emit(TOKEN_QMARK, before)

  when '*'
    emit(TOKEN_TIMES, before)

  when '%'
    if la1 == '>' && ctx[:epp_mode]
      scn.pos += 2
      if ctx[:epp_mode] == :expr
        enqueue_completed(TOKEN_EPPEND, before)
      end
      ctx[:epp_mode] = :text
      interpolate_epp
    else
      emit(TOKEN_MODULO, before)
    end

  when '{'
    # The lexer needs to help the parser since the technology used cannot deal with
    # lookahead of same token with different precedence. This is solved by making left brace
    # after ? into a separate token.
    #
    ctx[:brace_count] += 1
    emit(if ctx[:after] == :QMARK
      TOKEN_SELBRACE
    else
      TOKEN_LBRACE
    end, before)

  when '}'
    ctx[:brace_count] -= 1
    emit(TOKEN_RBRACE, before)

    # TOKENS @, @@, @(
  when '@'
    case la1
    when '@'
      emit(TOKEN_ATAT, before) # TODO: Check if this is good for the grammar
    when '('
      heredoc
    else
      emit(TOKEN_AT, before)
    end

    # TOKENS |, |>, |>>
  when '|'
    emit(case la1
    when '>'
      la2 == '>' ? TOKEN_RRCOLLECT : TOKEN_RCOLLECT
    else
      TOKEN_PIPE
    end, before)

    # TOKENS =, =>, ==, =~
  when '='
    emit(case la1
    when '='
      TOKEN_ISEQUAL
    when '>'
      TOKEN_FARROW
    when '~'
      TOKEN_MATCH
    else
      TOKEN_EQUALS
    end, before)

    # TOKENS '+', '+=', and '+>'
  when '+'
    emit(case la1
    when '='
      TOKEN_APPENDS
    when '>'
      TOKEN_PARROW
    else
      TOKEN_PLUS
    end, before)

    # TOKENS '-', '->', and epp '-%>' (end of interpolation with trim)
  when '-'
    if ctx[:epp_mode] && la1 == '%' && la2 == '>'
      scn.pos += 3
      if ctx[:epp_mode] == :expr
        enqueue_completed(TOKEN_EPPEND_TRIM, before)
      end
      interpolate_epp(:with_trim)
    else
      emit(case la1
      when '>'
        TOKEN_IN_EDGE
      when '='
        TOKEN_DELETES
      else
        TOKEN_MINUS
      end, before)
    end

    # TOKENS !, !=, !~
  when '!'
    emit(case la1
    when '='
      TOKEN_NOTEQUAL
    when '~'
      TOKEN_NOMATCH
    else
      TOKEN_NOT
    end, before)

    # TOKENS ~>, ~
  when '~'
    emit(la1 == '>' ? TOKEN_IN_EDGE_SUB : TOKEN_TILDE, before)

  when '#'
    scn.skip(PATTERN_COMMENT)
    nil

    # TOKENS '/', '/*' and '/ regexp /'
  when '/'
    case la1
    when '*'
      scn.skip(PATTERN_MLCOMMENT)
      nil

    else
      # regexp position is a regexp, else a div
      if regexp_acceptable? && value = scn.scan(PATTERN_REGEX)
        # Ensure an escaped / was not matched
        while value[-2..-2] == STRING_BSLASH_BSLASH # i.e. \\
          value += scn.scan_until(PATTERN_REGEX_END)
        end
        regex = value.sub(PATTERN_REGEX_A, '').sub(PATTERN_REGEX_Z, '').gsub(PATTERN_REGEX_ESC, '/')
        emit_completed([:REGEX, Regexp.new(regex), scn.pos-before], before)
      else
        emit(TOKEN_DIV, before)
      end
    end

    # TOKENS <, <=, <|, <<|, <<, <-, <~
  when '<'
    emit(case la1
    when '<'
      if la2 == '|'
        TOKEN_LLCOLLECT
      else
        TOKEN_LSHIFT
      end
    when '='
      TOKEN_LESSEQUAL
    when '|'
      TOKEN_LCOLLECT
    when '-'
      TOKEN_OUT_EDGE
    when '~'
      TOKEN_OUT_EDGE_SUB
    else
      TOKEN_LESSTHAN
    end, before)

    # TOKENS >, >=, >>
  when '>'
    emit(case la1
    when '>'
      TOKEN_RSHIFT
    when '='
      TOKEN_GREATEREQUAL
    else
      TOKEN_GREATERTHAN
    end, before)

    # TOKENS :, ::CLASSREF, ::NAME
  when ':'
    if la1 == ':'
      before = scn.pos
      # PERFORMANCE NOTE: This could potentially be speeded up by using a case/when listing all
      # upper case letters. Alternatively, the 'A', and 'Z' comparisons may be faster if they are
      # frozen.
      #
      if la2 >= 'A' && la2 <= 'Z'
        # CLASSREF or error
        value = scn.scan(PATTERN_CLASSREF)
        if value
          after = scn.pos
          emit_completed([:CLASSREF, value.freeze, after-before], before)
        else
          # move to faulty position ('::<uc-letter>' was ok)
          scn.pos = scn.pos + 3
          lex_error("Illegal fully qualified class reference")
        end
      else
        # NAME or error
        value = scn.scan(PATTERN_NAME)
        if value
          emit_completed([:NAME, value.freeze, scn.pos-before], before)
        else
          # move to faulty position ('::' was ok)
          scn.pos = scn.pos + 2
          lex_error("Illegal fully qualified name")
        end
      end
    else
      emit(TOKEN_COLON, before)
    end

  when '$'
    if value = scn.scan(PATTERN_DOLLAR_VAR)
      emit_completed([:VARIABLE, value[1..-1].freeze, scn.pos - before], before)
    else
      # consume the $ and let higher layer complain about the error instead of getting a syntax error
      emit(TOKEN_VARIABLE_EMPTY, before)
    end

  when '"'
    # Recursive string interpolation, 'interpolate' either returns a STRING token, or
    # a DQPRE with the rest of the string's tokens placed in the @token_queue
    interpolate_dq

  when "'"
    emit_completed([:STRING, slurp_sqstring.freeze, scn.pos - before], before)

  when '0', '1', '2', '3', '4', '5', '6', '7', '8', '9'
    value = scn.scan(PATTERN_NUMBER)
    if value
      length = scn.pos - before
      assert_numeric(value, length)
      emit_completed([:NUMBER, value.freeze, length], before)
    else
      # move to faulty position ([0-9] was ok)
      scn.pos = scn.pos + 1
      lex_error("Illegal number")
    end

  when 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm',
  'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z', '_'
    value = scn.scan(PATTERN_NAME)
    # NAME or false start because followed by hyphen(s), underscore or word
    if value && !scn.match?(/^-+\w/)
      emit_completed(KEYWORDS[value] || [:NAME, value.freeze, scn.pos - before], before)
    else
      # Restart and check entire pattern (for ease of detecting non allowed trailing hyphen)
      scn.pos = before
      value = scn.scan(PATTERN_BARE_WORD)
      # If the WORD continues with :: it must be a correct fully qualified name
      if value && !(fully_qualified = scn.match?(/::/))
        emit_completed([:WORD, value.freeze, scn.pos - before], before)
      else
        # move to faulty position ([a-z_] was ok)
        scn.pos = scn.pos + 1
        if fully_qualified
          lex_error("Illegal fully qualified name")
        else
          lex_error("Illegal name or bare word")
        end
      end
    end

  when 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M',
  'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z'
    value = scn.scan(PATTERN_CLASSREF)
    if value
      emit_completed([:CLASSREF, value.freeze, scn.pos - before], before)
    else
      # move to faulty position ([A-Z] was ok)
      scn.pos = scn.pos + 1
      lex_error("Illegal class reference")
    end

  when "\n"
    # If heredoc_cont is in effect there are heredoc text lines to skip over
    # otherwise just skip the newline.
    #
    if ctx[:newline_jump]
      scn.pos = ctx[:newline_jump]
      ctx[:newline_jump] = nil
    else
      scn.pos += 1
    end
    return nil

  when ' ', "\t", "\r"
    scn.skip(PATTERN_WS)
    return nil

  else
    # In case of unicode spaces of various kinds that are captured by a regexp, but not by the
    # simpler case expression above (not worth handling those special cases with better performance).
    if scn.skip(PATTERN_WS)
      nil
    else
      # "unrecognized char"
      emit([:OTHER, la0, 1], before)
    end
  end
end
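
The '[' disambiguation in the code above can be observed directly in the token stream; a small illustration (outputs abbreviated):

lexer = Puppet::Pops::Parser::Lexer2.new
lexer.lex_string('$a[1]')      # '[' immediately follows the variable
lexer.fullscan.map(&:first)    # => [:VARIABLE, :LBRACK, :NUMBER, :RBRACK, false]
lexer.lex_string('$a = [1]')   # '[' preceded by whitespace
lexer.fullscan.map(&:first)    # => [:VARIABLE, :EQUALS, :LISTSTART, :NUMBER, :RBRACK, false]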

#lex_unquoted_string(string, locator, escapes, interpolate) ⇒ Object

Lexes an unquoted string.

Parameters:

  • string (String)

    the string to lex

  • locator (Puppet::Pops::Parser::Locator)

    the locator to use (a default is used if nil is given)

  • escapes (Array<String>)

    array of character strings representing the escape sequences to transform

  • interpolate (Boolean)

    whether interpolation of expressions should be made or not.



# File 'lib/puppet/pops/parser/lexer2.rb', line 211

def lex_unquoted_string(string, locator, escapes, interpolate)
  initvars
  @scanner = StringScanner.new(string)
  @locator = locator || Puppet::Pops::Parser::Locator.locator(string, '')
  @lexing_context[:escapes] = escapes || UQ_ESCAPES
  @lexing_context[:uq_slurp_pattern] = (interpolate || !escapes.empty?) ? SLURP_UQ_PATTERN : SLURP_ALL_PATTERN
end

#regexp_acceptable? ⇒ Boolean

Answers, based on the token last lexed, whether it is acceptable to lex a regular expression at the current position. PERFORMANCE NOTE: It may be beneficial to turn this into a hash with a default value of true for missing entries. A case expression with literal values will, however, create a hash internally. Since a reference is always needed to the hash, this access is almost as costly as a method call.

Returns:

  • (Boolean)


# File 'lib/puppet/pops/parser/lexer2.rb', line 675

def regexp_acceptable?
  case @lexing_context[:after]

  # Ends of (potential) R-value generating expressions
  when :RPAREN, :RBRACK, :RRCOLLECT, :RCOLLECT
    false

  # End of (potential) R-value - but must be allowed because of case expressions
  # Called out here to not be mistaken for a bug.
  when :RBRACE
    true

  # Operands (that can be followed by DIV (even if illegal in grammar)
  when :NAME, :CLASSREF, :NUMBER, :STRING, :BOOLEAN, :DQPRE, :DQMID, :DQPOST, :HEREDOC, :REGEX
    false

  else
    true
  end
end
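
In effect this resolves the DIV/REGEX ambiguity of '/'; an illustrative sketch:

lexer = Puppet::Pops::Parser::Lexer2.new
lexer.lex_string('$x = 10 / 2')    # '/' follows :NUMBER, an operand
lexer.fullscan.map(&:first)        # => [:VARIABLE, :EQUALS, :NUMBER, :DIV, :NUMBER, false]
lexer.lex_string('$x =~ /foo/')    # '/' follows :MATCH
lexer.fullscan.map(&:first)        # => [:VARIABLE, :MATCH, :REGEX, false]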

#scan {|[false, false]| ... } ⇒ Object

A block must be passed to scan. It will be called with two arguments: a symbol for the token, and an instance of LexerSupport::TokenValue. PERFORMANCE NOTE: The TokenValue is designed to reduce the amount of garbage/temporary data and to convert the lexer’s internal tokens only on demand. It is slightly more costly to create an instance of a class defined in Ruby than an Array or Hash, but the gain is much bigger, since transformation logic is avoided for many of its members (most are never used; e.g. line/position information is in general only of value for error messages and for some expressions, which the lexer does not know about).

Yields:

  • ([false, false])


# File 'lib/puppet/pops/parser/lexer2.rb', line 268

def scan
  # PERFORMANCE note: it is faster to access local variables than instance variables.
  # This makes a small but notable difference since instance member access is avoided for
  # every token in the lexed content.
  #
  scn   = @scanner
  ctx   = @lexing_context
  queue = @token_queue

  lex_error_without_pos("Internal Error: No string or file given to lexer to process.") unless scn

  scn.skip(PATTERN_WS)

  # This is the lexer's main loop
  until queue.empty? && scn.eos? do
    if token = queue.shift || lex_token
      ctx[:after] = token[0]
      yield token
    end
  end

  # Signals end of input
  yield [false, false]
end
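
Streaming usage; note that the block also receives the [false, false] terminator:

lexer = Puppet::Pops::Parser::Lexer2.new
lexer.lex_string("notice('hi')")
lexer.scan do |token, value|
  next unless token   # token is false at end of input
  # token is a Symbol (:NAME, :LPAREN, ...), value a LexerSupport::TokenValue
end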

#string=(string) ⇒ Object

Convenience method for compatibility with the older lexer. Use lex_string instead, which allows passing the path without first having to call file= (which reads the file if it exists). (It is bad form to overload the assignment operator for something that is not really an assignment; overloading = also does not allow passing more than one argument.)



# File 'lib/puppet/pops/parser/lexer2.rb', line 195

def string=(string)
  lex_string(string, '')
end
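
An equivalent-usage sketch:

lexer = Puppet::Pops::Parser::Lexer2.new
lexer.string = '$x = 1'   # same as lexer.lex_string('$x = 1', '')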