Class: PuppetLint::Lexer
- Inherits: Object
- Defined in:
  lib/puppet-lint/lexer.rb,
  lib/puppet-lint/lexer/token.rb
Overview
Internal: The puppet-lint lexer. Converts your manifest into its tokenised form.
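Example (not part of the original docs): a minimal sketch of driving the lexer directly, assuming the gem can be loaded with require 'puppet-lint'.

require 'puppet-lint'

lexer  = PuppetLint::Lexer.new
tokens = lexer.tokenise("notify { 'hello': }")

# Each element is a PuppetLint::Lexer::Token carrying a type, value, line and column.
tokens.map { |t| [t.type, t.value] }
# => [[:NAME, "notify"], [:WHITESPACE, " "], [:LBRACE, "{"], [:WHITESPACE, " "],
#     [:SSTRING, "hello"], [:COLON, ":"], [:WHITESPACE, " "], [:RBRACE, "}"]]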
Defined Under Namespace
Classes: Token
Constant Summary
- KEYWORDS =
Internal: A Hash whose keys are Strings representing reserved keywords in the Puppet DSL.
{
  'class' => true, 'case' => true, 'default' => true, 'define' => true,
  'import' => true, 'if' => true, 'else' => true, 'elsif' => true,
  'inherits' => true, 'node' => true, 'and' => true, 'or' => true,
  'undef' => true, 'true' => true, 'false' => true, 'in' => true,
  'unless' => true,
}
- REGEX_PREV_TOKENS =
Internal: A Hash whose keys are Symbols representing token types which a regular expression can follow.
{ :NODE => true, :LBRACE => true, :RBRACE => true, :MATCH => true, :NOMATCH => true, :COMMA => true, :LBRACK => true, }
- KNOWN_TOKENS =
Internal: An Array of Arrays containing tokens that can be described by a single regular expression. Each sub-Array contains 2 elements, the name of the token as a Symbol and a regular expression describing the value of the token.
[
  [:TYPE, /\A(Integer|Float|Boolean|Regexp|String|Array|Hash|Resource|Class|Collection|Scalar|Numeric|CatalogEntry|Data|Tuple|Struct|Optional|NotUndef|Variant|Enum|Pattern|Any|Callable|Type|Runtime|Undef|Default)/],
  [:CLASSREF, /\A(((::){0,1}[A-Z][-\w]*)+)/],
  [:NUMBER, /\A\b((?:0[xX][0-9A-Fa-f]+|0?\d+(?:\.\d+)?(?:[eE]-?\d+)?))\b/],
  [:NAME, /\A(((::)?[a-z0-9][-\w]*)(::[a-z0-9][-\w]*)*)/],
  [:LBRACK, /\A(\[)/],
  [:RBRACK, /\A(\])/],
  [:LBRACE, /\A(\{)/],
  [:RBRACE, /\A(\})/],
  [:LPAREN, /\A(\()/],
  [:RPAREN, /\A(\))/],
  [:ISEQUAL, /\A(==)/],
  [:MATCH, /\A(=~)/],
  [:FARROW, /\A(=>)/],
  [:EQUALS, /\A(=)/],
  [:APPENDS, /\A(\+=)/],
  [:PARROW, /\A(\+>)/],
  [:PLUS, /\A(\+)/],
  [:GREATEREQUAL, /\A(>=)/],
  [:RSHIFT, /\A(>>)/],
  [:GREATERTHAN, /\A(>)/],
  [:LESSEQUAL, /\A(<=)/],
  [:LLCOLLECT, /\A(<<\|)/],
  [:OUT_EDGE, /\A(<-)/],
  [:OUT_EDGE_SUB, /\A(<~)/],
  [:LCOLLECT, /\A(<\|)/],
  [:LSHIFT, /\A(<<)/],
  [:LESSTHAN, /\A(<)/],
  [:NOMATCH, /\A(!~)/],
  [:NOTEQUAL, /\A(!=)/],
  [:NOT, /\A(!)/],
  [:RRCOLLECT, /\A(\|>>)/],
  [:RCOLLECT, /\A(\|>)/],
  [:IN_EDGE, /\A(->)/],
  [:IN_EDGE_SUB, /\A(~>)/],
  [:MINUS, /\A(-)/],
  [:COMMA, /\A(,)/],
  [:DOT, /\A(\.)/],
  [:COLON, /\A(:)/],
  [:AT, /\A(@)/],
  [:SEMIC, /\A(;)/],
  [:QMARK, /\A(\?)/],
  [:BACKSLASH, /\A(\\)/],
  [:TIMES, /\A(\*)/],
  [:MODULO, /\A(%)/],
  [:PIPE, /\A(\|)/],
]
- FORMATTING_TOKENS =
Internal: A Hash whose keys are Symbols representing token types which are considered to be formatting tokens (i.e. tokens that don’t contain code).
{ :WHITESPACE => true, :NEWLINE => true, :COMMENT => true, :MLCOMMENT => true, :SLASH_COMMENT => true, :INDENT => true, }
Instance Method Summary
- #get_string_segment(string, terminators) ⇒ Object
  Internal: Split a string on multiple terminators, excluding escaped terminators.
- #initialize ⇒ Lexer (constructor)
  A new instance of Lexer.
- #interpolate_string(string, line, column) ⇒ Object
  Internal: Tokenise the contents of a double quoted string.
- #new_token(type, value, length, opts = {}) ⇒ Object
  Internal: Create a new PuppetLint::Lexer::Token object, calculate its line number and column and then add it to the Linked List of tokens.
- #possible_regex? ⇒ Boolean
  Internal: Given the tokens already processed, determine if the next token could be a regular expression.
- #tokenise(code) ⇒ Object
  Internal: Convert a Puppet manifest into tokens.
- #tokens ⇒ Object
  Internal: Access the internal token storage.
Constructor Details
#initialize ⇒ Lexer
Returns a new instance of Lexer.
# File 'lib/puppet-lint/lexer.rb', line 29

def initialize
  @line_no = 1
  @column = 1
end
Instance Method Details
#get_string_segment(string, terminators) ⇒ Object
Internal: Split a string on multiple terminators, excluding escaped terminators.
string      - The String to be split.
terminators - The String of terminators that the String should be split on.
Returns an Array consisting of two Strings, the String up to the first terminator and the terminator that was found.
# File 'lib/puppet-lint/lexer.rb', line 309

def get_string_segment(string, terminators)
  str = string.scan_until(/([^\\]|^|[^\\])([\\]{2})*[#{terminators}]+/)
  begin
    [str[0..-2], str[-1,1]]
  rescue
    [nil, nil]
  end
end
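Example (not part of the original docs): a rough illustration of the split. Note that in practice interpolate_string passes a StringScanner here rather than a plain String, since the method calls scan_until on its argument; the snippet below assumes that calling convention.

require 'strscan'
require 'puppet-lint'

lexer = PuppetLint::Lexer.new
ss = StringScanner.new('Hello $name"')

lexer.get_string_segment(ss, '"$')  # => ["Hello ", "$"]  text up to the first unescaped terminator, plus the terminator
lexer.get_string_segment(ss, '"$')  # => ["name", "\""]   the scanner resumes where the previous call stopped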
#interpolate_string(string, line, column) ⇒ Object
Internal: Tokenise the contents of a double quoted string.
string - The String to be tokenised.
line   - The Integer line number of the start of the passed string.
column - The Integer column number of the start of the passed string.
Returns nothing.
# File 'lib/puppet-lint/lexer.rb', line 325

def interpolate_string(string, line, column)
  ss = StringScanner.new(string)
  first = true
  value, terminator = get_string_segment(ss, '"$')
  until value.nil?
    if terminator == "\""
      if first
        tokens << new_token(:STRING, value, value.size + 2, :line => line, :column => column)
        first = false
      else
        line += value.scan(/(\r\n|\r|\n)/).size
        token_column = column + (ss.pos - value.size)
        tokens << new_token(:DQPOST, value, value.size + 1, :line => line, :column => token_column)
      end
    else
      if first
        tokens << new_token(:DQPRE, value, value.size + 1, :line => line, :column => column)
        first = false
      else
        line += value.scan(/(\r\n|\r|\n)/).size
        token_column = column + (ss.pos - value.size)
        tokens << new_token(:DQMID, value, value.size, :line => line, :column => token_column)
      end
      if ss.scan(/\{/).nil?
        var_name = ss.scan(/(::)?([\w]+::)*[\w]+/)
        if var_name.nil?
          token_column = column + ss.pos - 1
          tokens << new_token(:DQMID, "$", 1, :line => line, :column => token_column)
        else
          token_column = column + (ss.pos - var_name.size)
          tokens << new_token(:UNENC_VARIABLE, var_name, var_name.size, :line => line, :column => token_column)
        end
      else
        contents = ss.scan_until(/\}/)[0..-2]
        if contents.match(/\A(::)?([\w-]+::)*[\w-]+(\[.+?\])*/)
          contents = "$#{contents}"
        end
        lexer = PuppetLint::Lexer.new
        lexer.tokenise(contents)
        lexer.tokens.each do |token|
          tok_col = column + token.column + (ss.pos - contents.size - 1)
          tok_line = token.line + line - 1
          tokens << new_token(token.type, token.value, token.value.size, :line => tok_line, :column => tok_col)
        end
      end
    end
    value, terminator = get_string_segment(ss, '"$')
  end
end
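Example (not part of the original docs): a sketch of the DQPRE/DQMID/DQPOST/STRING split this method produces, shown via #tokenise, which calls it for double quoted strings.

require 'puppet-lint'

tokens = PuppetLint::Lexer.new.tokenise('$msg = "Hello ${name}!"')
tokens.map { |t| [t.type, t.value] }
# => [[:VARIABLE, "msg"], [:WHITESPACE, " "], [:EQUALS, "="], [:WHITESPACE, " "],
#     [:DQPRE, "Hello "], [:VARIABLE, "name"], [:DQPOST, "!"]]
#
# A double quoted string containing no interpolation is emitted as a single
# :STRING token instead, and a string with several interpolations gets
# :DQMID tokens between the interpolated sections.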
#new_token(type, value, length, opts = {}) ⇒ Object
Internal: Create a new PuppetLint::Lexer::Token object, calculate its line number and column and then add it to the Linked List of tokens.
type   - The Symbol token type.
value  - The token value.
length - The Integer length of the token’s value.
opts   - A Hash of additional values required to determine line number and
         column:
           :line   - The Integer line number if calculated externally.
           :column - The Integer column number if calculated externally.
Returns the instantiated PuppetLint::Lexer::Token object.
# File 'lib/puppet-lint/lexer.rb', line 272

def new_token(type, value, length, opts = {})
  column = opts[:column] || @column
  line_no = opts[:line] || @line_no

  token = Token.new(type, value, line_no, column)
  unless tokens.last.nil?
    token.prev_token = tokens.last
    tokens.last.next_token = token
    unless FORMATTING_TOKENS.include?(token.type)
      prev_nf_idx = tokens.rindex { |r| ! FORMATTING_TOKENS.include? r.type }
      unless prev_nf_idx.nil?
        prev_nf_token = tokens[prev_nf_idx]
        prev_nf_token.next_code_token = token
        token.prev_code_token = prev_nf_token
      end
    end
  end

  @column += length
  if type == :NEWLINE
    @line_no += 1
    @column = 1
  end

  token
end
#possible_regex? ⇒ Boolean
Internal: Given the tokens already processed, determine if the next token could be a regular expression.
Returns true if the next token could be a regex, otherwise return false.
# File 'lib/puppet-lint/lexer.rb', line 246

def possible_regex?
  prev_token = tokens.reject { |r| FORMATTING_TOKENS.include? r.type }.last

  return true if prev_token.nil?

  if REGEX_PREV_TOKENS.include? prev_token.type
    true
  else
    false
  end
end
#tokenise(code) ⇒ Object
Internal: Convert a Puppet manifest into tokens.
code - The Puppet manifest to be tokenised as a String.
Returns an Array of PuppetLint::Lexer::Token objects.
Raises PuppetLint::LexerError if it encounters unexpected characters (usually the result of syntax errors).
# File 'lib/puppet-lint/lexer.rb', line 146

def tokenise(code)
  i = 0

  while i < code.size
    chunk = code[i..-1]

    found = false

    KNOWN_TOKENS.each do |type, regex|
      if value = chunk[regex, 1]
        length = value.size
        if type == :NAME
          if KEYWORDS.include? value
            tokens << new_token(value.upcase.to_sym, value, length)
          else
            tokens << new_token(type, value, length)
          end
        else
          tokens << new_token(type, value, length)
        end
        i += length
        found = true
        break
      end
    end

    unless found
      if var_name = chunk[/\A\$((::)?([\w]+::)*[\w]+(\[.+?\])*)/, 1]
        length = var_name.size + 1
        tokens << new_token(:VARIABLE, var_name, length)

      elsif chunk.match(/\A'(.*?)'/m)
        str_content = StringScanner.new(code[i+1..-1]).scan_until(/(\A|[^\\])(\\\\)*'/m)
        length = str_content.size + 1
        tokens << new_token(:SSTRING, str_content[0..-2], length)

      elsif chunk.match(/\A"/)
        str_contents = StringScanner.new(code[i+1..-1]).scan_until(/(\A|[^\\])(\\\\)*"/m)
        _ = code[0..i].split("\n")
        interpolate_string(str_contents, _.count, _.last.length)
        length = str_contents.size + 1

      elsif comment = chunk[/\A(#.*)/, 1]
        length = comment.size
        comment.sub!(/#/, '')
        tokens << new_token(:COMMENT, comment, length)

      elsif slash_comment = chunk[/\A(\/\/.*)/, 1]
        length = slash_comment.size
        slash_comment.sub!(/\/\//, '')
        tokens << new_token(:SLASH_COMMENT, slash_comment, length)

      elsif mlcomment = chunk[/\A(\/\*.*?\*\/)/m, 1]
        length = mlcomment.size
        mlcomment_raw = mlcomment.dup
        mlcomment.sub!(/\A\/\* ?/, '')
        mlcomment.sub!(/ ?\*\/\Z/, '')
        mlcomment.gsub!(/^ *\*/, '')
        tokens << new_token(:MLCOMMENT, mlcomment, length)
        tokens.last.raw = mlcomment_raw

      elsif chunk.match(/\A\/.*?\//) && possible_regex?
        str_content = StringScanner.new(code[i+1..-1]).scan_until(/(\A|[^\\])(\\\\)*\//m)
        length = str_content.size + 1
        tokens << new_token(:REGEX, str_content[0..-2], length)

      elsif eolindent = chunk[/\A((\r\n|\r|\n)[ \t]+)/m, 1]
        eol = eolindent[/\A([\r\n]+)/m, 1]
        indent = eolindent[/\A[\r\n]+([ \t]+)/m, 1]
        tokens << new_token(:NEWLINE, eol, eol.size)
        tokens << new_token(:INDENT, indent, indent.size)
        length = indent.size + eol.size

      elsif whitespace = chunk[/\A([ \t]+)/, 1]
        length = whitespace.size
        tokens << new_token(:WHITESPACE, whitespace, length)

      elsif eol = chunk[/\A(\r\n|\r|\n)/, 1]
        length = eol.size
        tokens << new_token(:NEWLINE, eol, length)

      elsif chunk.match(/\A\//)
        length = 1
        tokens << new_token(:DIV, '/', length)

      else
        raise PuppetLint::LexerError.new(@line_no, @column)
      end

      i += length
    end
  end

  tokens
end
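Example (not part of the original docs): two behaviours from the body above worth noting, keyword promotion of :NAME matches and the error raised on unexpected input.

require 'puppet-lint'

# A :NAME match listed in KEYWORDS is emitted under the upcased keyword type
# (:CLASS here) rather than as a plain :NAME token.
PuppetLint::Lexer.new.tokenise('class foo { }').map(&:type)
# => [:CLASS, :WHITESPACE, :NAME, :WHITESPACE, :LBRACE, :WHITESPACE, :RBRACE]

# Input that matches no rule raises a PuppetLint::LexerError constructed with
# the current line and column.
begin
  PuppetLint::Lexer.new.tokenise('`')   # a backtick matches nothing
rescue PuppetLint::LexerError => e
  puts "could not tokenise: #{e.inspect}"
end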
#tokens ⇒ Object
Internal: Access the internal token storage.
Returns an Array of PuppetLint::Lexer::Token objects.
# File 'lib/puppet-lint/lexer.rb', line 135

def tokens
  @tokens ||= []
end