jinja2.lexer
This module implements a combined Jinja/Python lexer. The `Lexer` class provided by this module performs the preprocessing Jinja needs: on the one hand it filters out invalid operators, such as the bitshift operators that are not allowed in templates; on the other hand it separates template code from Python code in expressions.
copyright:
license: BSD, see LICENSE for more details.
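The lexer's job of separating template text from expression code can be seen by tokenizing a small template. A minimal sketch using `get_lexer` and `Lexer.tokenize` (the template string and the variable `name` are just sample inputs):

```python
from jinja2 import Environment
from jinja2.lexer import get_lexer

# Obtain the (probably cached) lexer for a default environment.
env = Environment()
lexer = get_lexer(env)

# Each token has a lineno, a type, and a value; raw template text
# arrives as "data" tokens, expression parts as their own token types.
for token in lexer.tokenize("Hello {{ name }}!"):
    print(token.lineno, token.type, repr(token.value))
```

The output interleaves `data` tokens for the literal text with `variable_begin`, `name`, and `variable_end` tokens for the expression, which is exactly the template/Python separation described above.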
Functions
| Function | Description |
| --- | --- |
| `compile_rules(environment)` | Compiles all the rules from the environment into a list of rules. |
| `count_newlines(value)` | Count the number of newline characters in the string. |
| `describe_token(token)` | Returns a description of the token. |
| `describe_token_expr(expr)` | Like `describe_token` but for token expressions. |
| `get_lexer(environment)` | Return a lexer which is probably cached. |
| `implements_iterator(cls)` | |
| `intern(string) -> string` | "Intern" the given string. This enters the string in the (global) table of interned strings whose purpose is to speed up dictionary lookups. |
| `iteritems(d)` | |
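The `intern` listed here is a compatibility alias; in Python 3 the same functionality lives in `sys.intern`. A short illustration of what entering a string into the interned-strings table means:

```python
import sys

# Two equal strings built independently may be distinct objects,
# but interning guarantees a single shared object per value.
a = sys.intern("variable_begin")
b = sys.intern("variable" + "_begin")

# Identity (is), not just equality (==), now holds, which makes
# dictionary lookups on interned keys faster.
print(a is b)
```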
Classes
| Class | Description |
| --- | --- |
| `Failure(message[, cls])` | Class that raises a `TemplateSyntaxError` if called. |
| `LRUCache(capacity)` | A simple LRU Cache implementation. |
| `Lexer(environment)` | Class that implements a lexer for a given environment. |
| `Token` | Token class. |
| `TokenStream(generator, name, filename)` | A token stream is an iterable that yields `Token`s. |
| `TokenStreamIterator(stream)` | The iterator for token streams. |
| `deque` | `deque([iterable[, maxlen]])` -> deque object |
| `itemgetter` | `itemgetter(item, ...)` -> itemgetter object |
| `text_type` | Alias of `unicode`. |
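To make the `LRUCache` entry concrete, here is a minimal sketch of the LRU idea on top of `collections.OrderedDict`. This is an illustration only, not jinja2's actual implementation (the class name `SimpleLRUCache` is made up):

```python
from collections import OrderedDict

class SimpleLRUCache:
    """Illustrative LRU cache: evicts the least recently used entry."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._data = OrderedDict()

    def __getitem__(self, key):
        # Pop and re-insert so the key becomes the most recently used.
        value = self._data.pop(key)  # raises KeyError if missing
        self._data[key] = value
        return value

    def __setitem__(self, key, value):
        if key in self._data:
            self._data.pop(key)
        elif len(self._data) >= self.capacity:
            # Evict the least recently used entry (insertion-order head).
            self._data.popitem(last=False)
        self._data[key] = value

    def __contains__(self, key):
        return key in self._data
```

Usage: with capacity 2, inserting `a`, `b`, touching `a`, then inserting `c` evicts `b`, because `b` is the least recently used entry at that point.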
Exceptions
| Exception | Description |
| --- | --- |
| `TemplateSyntaxError(message, lineno[, name, ...])` | Raised to tell the user that there is a problem with the template. |
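A sketch of how `TemplateSyntaxError` surfaces in practice, using a bitshift operator, one of the operators the module description says is not allowed in templates (the exact error message is an implementation detail):

```python
from jinja2 import Environment, TemplateSyntaxError

env = Environment()
try:
    # "<<" is not a valid template operator, so compilation fails.
    env.from_string("{{ 1 << 2 }}")
except TemplateSyntaxError as exc:
    # The exception carries the line number and a message.
    print("line", exc.lineno, "-", exc.message)
```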