I admit I jumped on the pytest ship quite late, after a few years of relying on the good ol’ unittest module, which for the most part worked reasonably well for me and left little apparent incentive to switch to something else.

Over time I started using pytest more and more as I discovered quite a few interesting approaches it offers which are out of unittest’s reach.

Writing a test

A single test function can be divided into three steps:

  • Arrange: create the test conditions and initialize the environment in which you want to run the subject of the test
  • Act: execute the function you want to test
  • Assert: verify the function’s outcome (the three steps are sketched in code below)
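
A minimal sketch of the three steps (the Cart class is a made-up subject, defined inline to keep the example self-contained):

class Cart:
    """ A toy subject under test. """

    def __init__(self):
        self.items = []

    def add(self, price, quantity=1):
        self.items.append((price, quantity))

    def total(self):
        return sum(price * quantity for price, quantity in self.items)


def test_cart_total():
    # Arrange: create the conditions and the subject of the test
    cart = Cart()
    cart.add(10, quantity=2)

    # Act: execute the function under test
    total = cart.total()

    # Assert: verify the outcome
    assert total == 20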

The hardest part of writing a test is by far the Arrange one.

While you can get away with a limited setup in pure unit tests, as you move one step up the testing pyramid and write integration or service tests, creating the correct conditions can be non-trivial.

The risk here is cramming too many assertions into a single test function to avoid writing many tests that share the same (or similar) arrangement.

A slightly better solution is to share the arrangement between a few tests by calling a common function, but the extra level of indirection can make the tests less readable in the future, as sketched below.
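
Reusing the toy Cart from above, a shared arrangement helper might look like this (a sketch; make_cart is invented for illustration). Note how the values the assertions depend on are no longer visible in the test bodies:

def make_cart():
    # Shared arrangement: the prices and quantities the assertions
    # below depend on are hidden behind this call.
    cart = Cart()
    cart.add(10, quantity=2)
    cart.add(5)
    return cart


def test_total():
    cart = make_cart()
    assert cart.total() == 25  # why 25? You have to go read make_cart.


def test_item_count():
    cart = make_cart()
    assert len(cart.items) == 2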

Parametrize

There is one thing computers are good at: repeatedly executing the same code against different inputs.

What if we apply this to executing tests?

That’s the idea of parametrize:

import pytest


@pytest.mark.parametrize("test_input,expected", [("3+5", 8), ("2+4", 6), ("6*9", 42)])
def test_eval(test_input, expected):
    assert eval(test_input) == expected
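
Note that the last combination deliberately fails (eval("6*9") is 54, not 42); we will come back to it when discussing pytest.param at the end of the article.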

Thanks to how Python works, you can treat a function as an input object to another function and let the latter alter the former or execute it with additional parameters (the pattern implemented by Python decorators).

Parametrize uses this pattern to call our single test function with each set of inputs provided as arguments, effectively creating (and the test log reflects this) a list of test functions “generated” at runtime that share the same body.
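
To make the mechanism concrete, here is a toy decorator in the same spirit (purely an illustration of the pattern, not how parametrize is actually implemented):

def with_inputs(*cases):
    def decorator(func):
        def wrapper():
            # Run the decorated function once per input set.
            for args in cases:
                func(*args)
        return wrapper
    return decorator


@with_inputs(("3+5", 8), ("2+4", 6))
def check_eval(expression, expected):
    assert eval(expression) == expected


check_eval()  # runs the body twice, once per input set

Unlike this toy, parametrize reports each input set as a separate test rather than looping inside a single one.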

The bonus of this approach is that you can concisely define a set of inputs and expectations while keeping them clearly isolated from the test function body; by contrast, calling shared functions from multiple concrete test functions buries the conditions and expectations in the code, making it difficult to quickly isolate them.

A more complex example:

@pytest.mark.asyncio
@pytest.mark.parametrize(
    "headers,status_code",
    [
        (None, 422),
        ({"Authorization": "bla bla"}, 403),
        ({"Authorization": f"Token {LICENSE_API_TOKEN}"}, 200),
    ],
    ids=["no_header", "wrong_auth", "authenticated"],
)
async def test_clear_cache_auth(self, redis, headers, status_code):
    """ Cache is cleared if view is called with proper authentication token, is left intact otherwise. """
    async with httpx.AsyncClient(app=app, base_url="http://testserver") as client:
        cache = await redis(app)

        await cache.set("somekey", "someval")

        url = app.url_path_for("cache_clear")
        response = await client.post(url, headers=headers)
        assert response.status_code == status_code
        check_value = await cache.get("somekey")
        if status_code == 200:
            assert check_value is None
        else:
            assert check_value is not None

Caveats

There are two things one must be aware of when using parametrize:

  • each test function run is executed in isolation, as if they were different functions (as you would expect), so the arrange part is also executed multiple times. If it’s time consuming, this will make your test suite slower very quickly as you add input combinations
  • you might need to adapt the assertions depending on the expected outcome (if you are testing an email send function you may want to check the headers of the delivered email when the send is successful, which you can’t do when testing for send failures): this can get out of hand very quickly, creating a mess of conditions and assertions. I advise against doing more than a single if to distinguish the normal tested function flow from the error handling; take anything more as a signal that you are forcing too many unrelated assertions into a single test, and evaluate whether to create different test functions or move some of the assertions to other test functions (a possible split is sketched after the example below).

Don’t do this!:

import pytest
from datetime import datetime, timedelta

testdata = [
    (datetime(2001, 12, 12), datetime(2001, 12, 11), timedelta(1)),
    (datetime(2001, 12, 11), datetime(2001, 12, 12), timedelta(-1)),
]

@pytest.mark.parametrize("a,b,expected", testdata)
def test_timedistance_v0(a, b, expected):
    """ This function is intentionally bad. """
    diff = a - b
    if a < datetime(2001, 3, 31):
        a.tzinfo.dst() == 0
    elif a > datetime(2001, 10, 31):
        a.tzinfo.dst() == 0
    else:
        a.tzinfo.dst() == 1
    if b < datetime(2001, 3, 31):
        b.tzinfo.dst() == 0
    elif b > datetime(2001, 10, 31):
        b.tzinfo.dst() == 0
    else:
        b.tzinfo.dst() == 1
    assert diff == expected
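
One way to untangle it, sketched here: keep the arithmetic test down to its single assertion, and move the DST checks into a separate parametrized test with explicit timezone-aware inputs (zoneinfo requires Python 3.9+; Europe/Rome is an arbitrary choice):

import pytest
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

testdata = [
    (datetime(2001, 12, 12), datetime(2001, 12, 11), timedelta(1)),
    (datetime(2001, 12, 11), datetime(2001, 12, 12), timedelta(-1)),
]


@pytest.mark.parametrize("a,b,expected", testdata)
def test_timedistance(a, b, expected):
    # One flow, one assertion: no conditions needed.
    assert a - b == expected


@pytest.mark.parametrize(
    "moment,expected_dst",
    [
        (datetime(2001, 1, 15), timedelta(0)),        # winter: no DST
        (datetime(2001, 7, 15), timedelta(hours=1)),  # summer: DST active
    ],
    ids=["winter", "summer"],
)
def test_dst(moment, expected_dst):
    # The DST expectations get their own test, with inputs chosen for them.
    aware = moment.replace(tzinfo=ZoneInfo("Europe/Rome"))
    assert aware.dst() == expected_dst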

Interesting parametrize features

Parametrize allows for very complex scenarios and I encourage you to check the documentation for the details.

Still, there are a couple of features worth mentioning.

Assign ids to each combination

By default parametrize creates the test run name by combining the function name and the parameter set (into something like test_timedistance_v0[a0-b0-expected0]), which might not be self explanatory. To customize this you can use the ids argument to provide a terser name, such as test_timedistance_v1[forward].

Tests with automatic and manually specified ids:

import pytest
from datetime import datetime, timedelta

testdata = [
    (datetime(2001, 12, 12), datetime(2001, 12, 11), timedelta(1)),
    (datetime(2001, 12, 11), datetime(2001, 12, 12), timedelta(-1)),
]


@pytest.mark.parametrize("a,b,expected", testdata)
def test_timedistance_v0(a, b, expected):
    """ Tests with autogenerated ids. """
    diff = a - b
    assert diff == expected


@pytest.mark.parametrize("a,b,expected", testdata, ids=["forward", "backward"])
def test_timedistance_v1(a, b, expected):
    """ Tests with customized ids. """
    diff = a - b
    assert diff == expected

Generated test names:

$ pytest test_time.py --collect-only
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-0.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collected 4 items

<Module test_time.py>
  <Function test_timedistance_v0[a0-b0-expected0]>
  <Function test_timedistance_v0[a1-b1-expected1]>
  <Function test_timedistance_v1[forward]>
  <Function test_timedistance_v1[backward]>

======================== 4 tests collected in 0.12s ========================
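
A nice side effect of terse ids is that you can run a single combination from the command line by its full name (quoted because of the square brackets):

$ pytest "test_time.py::test_timedistance_v1[forward]"

They also pair well with the -k option, which filters tests by matching a substring of their name.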

Mark parameter combinations

One of the great pytest features is markers, which allow you to decorate test functions for different behavior (parametrize itself is a marker). You can use marks on a parametrize-decorated function as you normally would, but you can also decorate a single parameter set using the pytest.param function, which also lets you define a single id without defining one for each combination:

import pytest


@pytest.mark.parametrize(
    "test_input,expected",
    [
        ("3+5", 8),
        pytest.param("2+4", 6, id="basic_2+4"),
        pytest.param(
            "6*9", 42, marks=[pytest.mark.xfail], id="basic_6*9"
        ),
    ],
)
def test_eval(test_input, expected):
    assert eval(test_input) == expected
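
Here the 6*9 case (the deliberately failing one from the first example) is marked xfail, so pytest reports its failure as expected instead of failing the suite, and it gets a readable id at the same time.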

Take home

One of the signatures of pytest is that it completely embraces the dynamic nature of Python, which allows it to work on a “meta” level, enriching the test behaviors from outside the test functions, thus separating the boundaries very clearly and creating a test suite that is easier to understand and extend.

Parametrize is a perfect example of the pytest approach and I advise you to experiment with it, as it will greatly improve your test writing experience.