Adam Johnson


My Most Used Pytest Commandline Flags

2019-10-03

Look on ye Pytesters

Pytest is quickly becoming the “standard” Python testing framework. However, it can be overwhelming for new users.

pytest --help currently outputs 275 lines of command line flags and options. Where do you even begin?

I searched my ZSH history for my recent usage of Pytest. I found 184 unique invocations, from which I extracted the command line flags I used. Here are the top five flags, ordered by frequency.

1. -v

-v, or --verbose, increases the verbosity level. The default level outputs a reasonable amount of information to debug most test failures. However when there are many differences between actual and expected data, some get hidden. In such cases Pytest normally appends a message to the failure text such as:

...Full output truncated (23 lines hidden), use '-vv' to show

This tells us that to see the hidden lines, we need to rerun the tests at verbosity level 2. To do this we can pass -v twice as -v -v, or more concisely as -vv.

Another change from passing -v at least once is that the test names are output one per line as they run:

$ pytest -v
tests.py::test_parrot_status PASSED                                      [100%]

Sometimes I use this when looking for slow or hanging tests.

I use -v quite frequently and -vv occasionally. I never resorted to -vvv in my current history, though I think the extra output it provides is useful in certain cases.

2. --pdb

Passing --pdb makes Pytest start PDB, Python’s built-in debugger, when a test fails. Rather than seeing static failure output, you can interact directly with the objects in the test environment, right at the point of failure.

The Pytest documentation explains it very well. It’s also worth learning all the things PDB can do - it’s very powerful!
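For a taste of what PDB offers, here are a few of its commands (this is only a small sample - see the pdb documentation for the full set):

```
(Pdb) p parrot     # print an expression
(Pdb) pp parrot    # pretty-print it
(Pdb) l            # list the source around the current line
(Pdb) w            # show the current stack trace
(Pdb) u            # move up one stack frame
(Pdb) d            # move back down one
(Pdb) q            # quit the debugger (and the test run)
```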

This is my go-to method of fixing broken tests. I also use it when writing tests to find what data I should be expecting.

For example, take the following half-test. It is missing assertions on the state of parrot; instead, it has an always-failing assert 0:

def test_parrot_is_alive():
    parrot = get_parrot()
    assert 0

If you ran this test with pytest --pdb, it would fail, and Pytest would immediately open PDB at the failing assert. You could then use PDB’s “p” shortcut to print the current contents of parrot:

$ pytest tests.py --pdb
...
tests.py F
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> traceback >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

    def test_parrot_is_alive():
        parrot = get_parrot()
>       assert 0
E       assert 0

../../../tmp/test.py:15: AssertionError
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> entering PDB >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

>>>>>>>>>>>>>>>>>> PDB post_mortem (IO-capturing turned off) >>>>>>>>>>>>>>>>>>>
> /Users/chainz/tmp/test.py(15)test_parrot_is_alive()
-> assert 0
(Pdb) p parrot
Parrot(deceased=True, pining_for_fjords=False, species="Norwegian Blue")

Finishing the test could then be as easy as copying the output back into the test file, prefixed with assert parrot == . Unfortunately, in this case it seems the test has found a bug - deceased should be False!
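For illustration, assuming a Parrot dataclass with those three fields and the bug fixed, the completed test might look like this (get_parrot() here is a stand-in for the real code under test):

```python
from dataclasses import dataclass


@dataclass
class Parrot:
    deceased: bool
    pining_for_fjords: bool
    species: str


def get_parrot():
    # Stand-in for the real code: returns a healthy Norwegian Blue.
    return Parrot(deceased=False, pining_for_fjords=False, species="Norwegian Blue")


def test_parrot_is_alive():
    parrot = get_parrot()
    # The expected value, copied from the PDB output (with the bug fixed):
    assert parrot == Parrot(
        deceased=False, pining_for_fjords=False, species="Norwegian Blue"
    )
```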

3. -k

The -k option allows you to filter which tests to run, by matching their names against a “keyword expression”. The documentation describes the many types of keyword expression.

I found I’ve only use simple string expressions recently. These simply perform substring matches on test names. Often this means running all tests for a specific component, such as passing -k http to run all tests with “http” in their names.

I also use this as a quicker way of running a specific test. The definitive way to run a single test is by passing its “Node ID” as a positional argument:

$ pytest tests/creatures/birds/test_parrot.py::test_parrot_is_alive

This can be tedious to construct. I usually find it easier to filter by name with -k:

$ pytest -k test_parrot_is_alive

If your test suite has many tests with generic names like test_success, this is less useful. But maybe that’s an incentive to use more specific names!

4. -s

By default Pytest captures standard output while running tests. It’s only if a test fails that it shows the captured output.

This is great default behaviour, but I sometimes find it getting in the way of debugging. Often this means running PDB at a specific point with import pdb; pdb.set_trace() (or breakpoint() on Python 3.7+), or adding print() calls to trace execution. In such cases, the output capture needs disabling with -s.
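For example, a print() call like the one in this made-up test is hidden by the default capture (shown only if the test fails), but appears live when running pytest -s:

```python
# A passing test with debugging output. Under Pytest's default capture
# the print() output is hidden; running with -s shows it as it happens.
def test_parrot_squawk():
    parrot_sound = "squawk"
    print(f"parrot says: {parrot_sound}")  # visible only with -s
    assert parrot_sound == "squawk"
```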

5. -x

The -x, or --exitfirst, option stops the test run after the first failure. It’s equivalent to unittest’s -f, or --failfast, option.

I often use this when doing a sweeping refactor on a project. If my change has broken something fundamental, I don’t want to wait to see a bazillion identical failures. The first failure is normally enough to figure out what went wrong.

There’s also the --maxfail=num to stop after num failures, but I don’t recall using it. Essentially -x is a shortcut for --maxfail=1.

Fin

Hope this helps you up your Pytest game,

—Adam


Tags: django, pytest, python