3. Writing Extensions
By writing extensions you can add custom tags to Jinja2. This is a non-trivial task and usually not needed, as the default tags and expressions cover all common use cases. The i18n extension is a good example of why extensions are useful; another is fragment caching.
When writing extensions you have to keep in mind that you are working with the Jinja2 template compiler, which does not validate the node tree you pass to it. If the AST is malformed, you will get all kinds of compiler or runtime errors that are horrible to debug. Always make sure you are using the nodes you create correctly. The API documentation below shows which nodes exist and how to use them.
3.1. Example Extension
The following example implements a cache tag for Jinja2 by using the Werkzeug caching contrib module:
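The example code itself is missing from this extract. The following is a sketch of such a fragment-cache extension, reconstructed along the lines of the official Jinja2 documentation; the cache backend is not hard-coded here, it is whatever object you wire into the environment (Werkzeug's SimpleCache in the usage snippet below):

```python
from jinja2 import nodes
from jinja2.ext import Extension


class FragmentCacheExtension(Extension):
    # a set of names that trigger the extension
    tags = {'cache'}

    def __init__(self, environment):
        super().__init__(environment)
        # register defaults on the environment; they can be
        # overridden after the environment is created
        environment.extend(
            fragment_cache_prefix='',
            fragment_cache=None,
        )

    def parse(self, parser):
        # the first token is the one that started the tag ('cache');
        # grab its line number for the nodes we create by hand
        lineno = next(parser.stream).lineno

        # parse a single expression that is used as the cache key
        args = [parser.parse_expression()]

        # if there is a comma, the user provided a timeout
        if parser.stream.skip_if('comma'):
            args.append(parser.parse_expression())
        else:
            args.append(nodes.Const(None))

        # parse the body of the cache block up to {% endcache %} and
        # drop the needle (the 'endcache' token itself)
        body = parser.parse_statements(('name:endcache',),
                                       drop_needle=True)

        # return a CallBlock node that calls our _cache_support
        # helper method on this extension at render time
        return nodes.CallBlock(
            self.call_method('_cache_support', args),
            [], [], body
        ).set_lineno(lineno)

    def _cache_support(self, name, timeout, caller):
        """Helper callback: look up the fragment, render on a miss."""
        key = self.environment.fragment_cache_prefix + name
        rv = self.environment.fragment_cache.get(key)
        if rv is not None:
            return rv
        rv = caller()
        self.environment.fragment_cache.set(key, rv, timeout)
        return rv
```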
And here is how you use it in an environment (note that werkzeug.contrib.cache was removed in Werkzeug 1.0; its successor is the cachelib package):
from jinja2 import Environment
from werkzeug.contrib.cache import SimpleCache
env = Environment(extensions=[FragmentCacheExtension])
env.fragment_cache = SimpleCache()
Inside the template it’s then possible to mark blocks as cacheable. The following example caches a sidebar for 300 seconds:
{% cache 'sidebar', 300 %}
<div class="sidebar">
...
</div>
{% endcache %}
3.2. Extension API
Extensions always have to extend the jinja2.ext.Extension class.
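Besides parse(), the Extension base class offers other hooks; one of them, preprocess(), lets an extension rewrite the template source before it is lexed at all. A minimal, hypothetical sketch (this extension is not part of Jinja2):

```python
from jinja2.ext import Extension


class CommentStripExtension(Extension):
    """Hypothetical example: turn HTML comments into Jinja comments
    before lexing, so they never reach the rendered output."""

    def preprocess(self, source, name, filename=None):
        # runs on the raw template source before tokenizing
        return source.replace('<!--', '{#').replace('-->', '#}')
```

Because preprocess() operates on plain text, such an extension needs no `tags` attribute and no node construction at all.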
3.3. Parser API
The parser passed to Extension.parse() provides ways to parse expressions of different types. The following methods may be used by extensions:
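The per-method reference is missing from this extract; assuming the current Jinja2 layout, the methods most often used by extensions include parse_expression(), parse_statements(), and parse_assign_target(). A small, hypothetical tag that exercises two of them (it simply mimics {% set %}):

```python
from jinja2 import nodes
from jinja2.ext import Extension


class LetExtension(Extension):
    """Hypothetical {% let x = expr %} tag demonstrating
    parse_assign_target() and parse_expression()."""

    tags = {'let'}

    def parse(self, parser):
        # consume the 'let' token and keep its line number
        lineno = next(parser.stream).lineno
        # parse a name that may be assigned to (left-hand side)
        target = parser.parse_assign_target()
        # require and consume the '=' token
        parser.stream.expect('assign')
        # parse an arbitrary expression for the value
        value = parser.parse_expression()
        # an Assign node behaves exactly like {% set %}
        return nodes.Assign(target, value).set_lineno(lineno)
```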
3.3.2. jinja2.lexer.TokenStream
class jinja2.lexer.TokenStream(generator, name, filename)
    A token stream is an iterable that yields Tokens. The parser, however, does not iterate over it but calls next() to go one token ahead. The current active token is stored as current.

    eos
        Are we at the end of the stream?

    expect(expr)
        Expect a given token type and return it. This accepts the same argument as jinja2.lexer.Token.test().

    next()
        Go one token ahead and return the old one.
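A short sketch of these attributes in action. It obtains a TokenStream via the environment's lexer; note that this is an internal API whose layout may differ across Jinja2 versions, and that in modern versions the advance operation is spelled with the next() builtin:

```python
from jinja2 import Environment

env = Environment()
# tokenize a tiny template to get a TokenStream
stream = env.lexer.tokenize("{{ foo }}")

# current holds the active token
assert stream.current.type == 'variable_begin'

# advancing returns the old token
old = next(stream)
assert old.type == 'variable_begin'

# expect() asserts a token type, returns the token, and advances
name_token = stream.expect('name')
assert name_token.value == 'foo'

# after the closing '}}' the stream is exhausted
stream.expect('variable_end')
assert stream.eos
```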
3.3.3. jinja2.lexer.Token
class jinja2.lexer.Token
    Token class. Members: test, test_any.

    lineno
        The line number of the token.

    type
        The type of the token. This string is interned so you may compare it with arbitrary strings using the is operator.

    value
        The value of the token.
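The test() and test_any() members listed above accept either a bare token type or a 'type:value' pair; a small sketch, assuming the current internal lexer API:

```python
from jinja2 import Environment

env = Environment()
stream = env.lexer.tokenize("{% endcache %}")
stream.expect('block_begin')
token = stream.current

# test() accepts a bare token type ...
assert token.test('name')
# ... or a 'type:value' pair
assert token.test('name:endcache')
assert not token.test('name:endfor')

# test_any() succeeds if any of the given expressions match
assert token.test_any('name:endfor', 'name:endcache')
```

The 'name:endcache' form is exactly what parse_statements() end tokens such as 'name:endcache' are matched against.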
There is also a utility function in the lexer module that can count newline characters in strings.
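The function's reference is missing from this extract; assuming the current Jinja2 layout, it is jinja2.lexer.count_newlines(), which counts all newline conventions alike:

```python
from jinja2.lexer import count_newlines

# \n, \r\n and \r each count as one newline
assert count_newlines("one\ntwo\r\nthree\rfour") == 3
assert count_newlines("no newlines here") == 0
```

Extensions such as i18n use this to keep line numbers accurate when they rewrite chunks of template source.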
3.4. AST
The AST (Abstract Syntax Tree) is used to represent a template after parsing. It's built from nodes that the compiler then converts into executable Python code objects. Extensions that provide custom statements can return nodes to execute custom Python code.
The list below describes all nodes that are currently available. The AST may change between Jinja2 versions but will stay backwards compatible.
For more information, have a look at the repr of jinja2.Environment.parse().
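A quick way to explore the node tree interactively; find() and find_all() on the returned Template node let you locate nodes of a given type:

```python
from jinja2 import Environment, nodes

env = Environment()
ast = env.parse("Hello {{ user.name }}!")

# the repr of the Template node shows the whole tree
print(repr(ast))

# search the tree for a node type
getattr_node = ast.find(nodes.Getattr)
assert getattr_node.attr == 'name'
name_node = ast.find(nodes.Name)
assert name_node.name == 'user'
```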