3. Writing Extensions

By writing extensions you can add custom tags to Jinja2. This is a non-trivial task and usually not needed, as the default tags and expressions cover all common use cases. The i18n extension is a good example of why extensions are useful; another is fragment caching.

When writing extensions you have to keep in mind that you are working with the Jinja2 template compiler which does not validate the node tree you are passing to it. If the AST is malformed you will get all kinds of compiler or runtime errors that are horrible to debug. Always make sure you are using the nodes you create correctly. The API documentation below shows which nodes exist and how to use them.

3.1. Example Extension

The following example implements a cache tag for Jinja2 by using the Werkzeug caching contrib module:
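A sketch of such an extension follows. Any cache object with get() and add() methods works here; note that werkzeug.contrib.cache was removed in Werkzeug 1.0, and the cachelib package is its successor:

```python
from jinja2 import nodes
from jinja2.ext import Extension


class FragmentCacheExtension(Extension):
    # a set of names that trigger the extension
    tags = {"cache"}

    def __init__(self, environment):
        super().__init__(environment)
        # add the defaults to the environment
        environment.extend(fragment_cache_prefix="", fragment_cache=None)

    def parse(self, parser):
        # the first token is the token that started the tag; since we only
        # listen to 'cache', this is a name token with 'cache' as value.
        # We keep its line number for the nodes we create by hand.
        lineno = next(parser.stream).lineno

        # parse a single expression that is used as the cache key
        args = [parser.parse_expression()]

        # if there is a comma, the user provided a timeout; otherwise
        # pass None as the second argument
        if parser.stream.skip_if("comma"):
            args.append(parser.parse_expression())
        else:
            args.append(nodes.Const(None))

        # parse the body of the cache block up to 'endcache' and drop
        # the needle (which is always 'endcache' here)
        body = parser.parse_statements(["name:endcache"], drop_needle=True)

        # return a CallBlock node that calls the _cache_support helper
        # method on this extension with the wrapped block as caller
        return nodes.CallBlock(
            self.call_method("_cache_support", args), [], [], body
        ).set_lineno(lineno)

    def _cache_support(self, name, timeout, caller):
        """Helper callback: look up the fragment, render on a miss."""
        key = self.environment.fragment_cache_prefix + name

        # try to load the block from the cache; if there is no fragment
        # in the cache, render it and store it
        rv = self.environment.fragment_cache.get(key)
        if rv is not None:
            return rv
        rv = caller()
        self.environment.fragment_cache.add(key, rv, timeout)
        return rv
```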

And here is how you use it in an environment:

from jinja2 import Environment
from werkzeug.contrib.cache import SimpleCache  # removed in Werkzeug 1.0; cachelib.SimpleCache is the successor

env = Environment(extensions=[FragmentCacheExtension])
env.fragment_cache = SimpleCache()

Inside the template it’s then possible to mark blocks as cacheable. The following example caches a sidebar for 300 seconds:

{% cache 'sidebar', 300 %}
<div class="sidebar">
    ...
</div>
{% endcache %}

3.2. Extension API

Extensions always have to extend the jinja2.ext.Extension class:
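A subclass only needs to override the hooks it cares about. As a minimal sketch (the class name and behavior are invented for illustration), an extension can use the preprocess() hook to rewrite the template source before it is lexed:

```python
import re

from jinja2.ext import Extension


class CollapseWhitespaceExtension(Extension):
    """Hypothetical extension: strips whitespace between HTML tags."""

    # no custom tags: this extension only rewrites the source
    tags = set()

    def preprocess(self, source, name, filename=None):
        # called with the raw template source before lexing
        return re.sub(r">\s+<", "><", source)
```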

3.3. Parser API

The parser passed to Extension.parse() provides ways to parse expressions of different types. The following methods may be used by extensions:
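For example, a hypothetical repeat tag (invented for illustration) can be built from just parse_expression() and parse_statements():

```python
from jinja2 import nodes
from jinja2.ext import Extension


class RepeatExtension(Extension):
    tags = {"repeat"}

    def parse(self, parser):
        # the first token is the 'repeat' name token itself
        lineno = next(parser.stream).lineno
        # one expression: how often to repeat the body
        count = parser.parse_expression()
        # everything up to {% endrepeat %}; drop_needle consumes the end tag
        body = parser.parse_statements(["name:endrepeat"], drop_needle=True)
        return nodes.CallBlock(
            self.call_method("_repeat", [count]), [], [], body
        ).set_lineno(lineno)

    def _repeat(self, count, caller):
        # caller() renders the wrapped block
        return caller() * count
```

Usage: {% repeat 3 %}ab{% endrepeat %} renders the body three times.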

3.3.2. jinja2.lexer.TokenStream

class jinja2.lexer.TokenStream(generator, name, filename)[source]

A token stream is an iterable that yields Tokens. The parser however does not iterate over it but calls next() to go one token ahead. The current active token is stored as current.

current

The current Token.

eos

True if the stream is at the end.

expect(expr)[source]

Expect a given token type and return it. This accepts the same argument as jinja2.lexer.Token.test().

look()[source]

Look at the next token.

next()

Go one token ahead and return the old one.

next_if(expr)[source]

Perform the token test and return the token if it matched. Otherwise the return value is None.

push(token)[source]

Push a token back to the stream.

skip(n=1)[source]

Go n tokens ahead.

skip_if(expr)[source]

Like next_if() but only returns True or False.
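Assuming current Jinja2 internals, the methods above can be exercised on a hand-built stream (extensions normally receive one as parser.stream):

```python
from jinja2.lexer import Token, TokenStream

# build a stream of three tokens: foo , 42
tokens = [
    Token(1, "name", "foo"),
    Token(1, "comma", ","),
    Token(1, "integer", "42"),
]
stream = TokenStream(iter(tokens), "<demo>", None)

stream.current.test("name:foo")  # type and value match -> True
stream.look()                    # peek at the comma without consuming
next(stream)                     # consume 'foo', returning the old token
stream.skip_if("comma")          # consume the comma -> True
stream.expect("integer")         # return the '42' token or raise
stream.eos                       # True: the stream is exhausted
```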

3.3.3. jinja2.lexer.Token

class jinja2.lexer.Token[source]

Token class.

Members: test, test_any
lineno

The line number of the token

type

The type of the token. This string is interned, so it can be compared to other interned token-type strings with the is operator.

value

The value of the token.

There is also a utility function in the lexer module that can count newline characters in strings:

3.3.4. jinja2.lexer.count_newlines

jinja2.lexer.count_newlines(value)[source]

Count the number of newline characters in the string. This is useful for extensions that filter a stream.
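For instance (a sketch):

```python
from jinja2.lexer import count_newlines

# \r\n, \r and \n each count as one newline
count_newlines("foo\nbar\r\nbaz")  # -> 2
```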

3.4. AST

The AST (Abstract Syntax Tree) is used to represent a template after parsing. It is built of nodes that the compiler then converts into executable Python code objects. Extensions that provide custom statements can return nodes to execute custom Python code.

The list below describes all nodes that are currently available. The AST may change between Jinja2 versions but will stay backwards compatible.

For more information have a look at the repr of jinja2.Environment.parse().
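For example, using find_all() to walk the tree of a small made-up template:

```python
from jinja2 import Environment, nodes

env = Environment()
tree = env.parse("{% for item in seq %}{{ item }}{% endfor %}")

# the root is always a nodes.Template; find_all() walks the tree
assert isinstance(tree, nodes.Template)

# 'item' appears as the loop target and in the output, 'seq' as the iterable
names = sorted(n.name for n in tree.find_all(nodes.Name))
```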

See jinja2.nodes module

exception jinja2.nodes.Impossible[source]

Raised if the node could not perform a requested action.