My Most Used Pytest Commandline Flags

2019-10-03
Pytest is quickly becoming the “standard” Python testing framework. However, it can be overwhelming to new users.
pytest --help currently outputs 275 lines of command line flags and options.
Where do you even begin?
I searched my ZSH history for my recent usage of Pytest. I found 184 unique invocations, from which I extracted the command line flags I used. Here are the top five flags by frequency.
--verbose increases the verbosity level.
The default level outputs a reasonable amount of information to debug most test failures.
However when there are many differences between actual and expected data, some get hidden.
In such cases Pytest normally appends a message to the failure text such as:
...Full output truncated (23 lines hidden), use '-vv' to show
This tells us that to see the hidden lines, we need to rerun the tests with verbosity level 2. To do this we can pass -v twice, as -v -v, or more easily as -vv.
Another change from passing -v at least once is that the test names are output one per line as they run:

$ pytest -v
tests.py::test_parrot_status PASSED [100%]
Sometimes I use this when looking for slow or hanging tests.
I use -v quite frequently, and -vv occasionally. I never resorted to -vvv in my current history, though I think there is still extra output it provides that is useful in certain cases.
--pdb makes Pytest start PDB, Python’s built-in debugger, when a test fails.
Rather than seeing static failure output, you can directly interact with the objects in the test environment, right at the point of failure.
This is my go-to method of fixing broken tests. I also use it when writing tests to find what data I should be expecting.
For example, take the following half-test. It is missing assertions on the state of parrot. Instead we have an always-failing assertion.
If you ran this test with pytest --pdb, it would fail, and Pytest would immediately open PDB on the line of the failing assertion. You could then use PDB’s “p” shortcut to print the current contents of parrot. Finishing the test could then be as easy as copying the output back into the test file, prefixed with assert parrot ==.
Unfortunately in this case it seems the test has found a bug - deceased should be True.
The -k option allows you to filter which tests to run, by matching their names against a “keyword expression”.
The documentation describes the many types of keyword expression.
I found I’ve only used simple string expressions recently. These simply perform substring matches on test names.
Often this means running all tests for a specific component, such as passing
-k http to run all tests with “http” in their names.
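The substring behaviour can be sketched in plain Python (a simplified model; pytest’s real -k matching also understands and/or/not expressions and parametrized test IDs, and the test names here are invented):

```python
# Simplified model of -k's simple string expressions: a plain
# substring match against the collected test names.
test_names = [
    "test_http_get",
    "test_http_post",
    "test_parrot_status",
]
keyword = "http"
selected = [name for name in test_names if keyword in name]
print(selected)  # -> ['test_http_get', 'test_http_post']
```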
I also use this as a quicker way of running a specific test. The definitive way to run a single test is by passing its “Node ID” as a positional argument, e.g. pytest tests.py::test_parrot_status. This can be tedious to construct.
I usually find it easier to filter by name with -k.
If your test suite has many tests with generic names like
test_success, this is less useful.
But maybe that’s an incentive to use more specific names!
By default Pytest captures standard output while running tests. It’s only if a test fails that it shows the captured output.
This is great default behaviour but I sometimes find it getting in the way of debugging.
Often this means running PDB at a specific point with import pdb; pdb.set_trace() (or breakpoint() on Python 3.7+), or adding calls to print() to trace execution. In such cases, the output capture needs disabling with -s, the shortcut for --capture=no.
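For example, a test instrumented with a throwaway print() call (invented names; running pytest -s shows the trace even when the test passes):

```python
# Sketch: print() tracing inside the code under test. Without -s, pytest
# captures this output and only displays it if the test fails.
def squawk(volume):
    print(f"squawk called with volume={volume}")  # debug trace
    return "SQUAWK!" if volume > 10 else "squawk"

def test_squawk_loud():
    assert squawk(11) == "SQUAWK!"
```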
The --exitfirst option stops the test run after the first failure.
This is equivalent to unittest’s --failfast option.
I often use this when doing a sweeping refactor on a project. If my change has broken something fundamental, I don’t want to wait to see a bazillion identical failures. The first failure is normally enough to figure out what went wrong.
There’s also the --maxfail=num option, to stop after num failures, but I don’t recall using it.
-x is a shortcut for --exitfirst.
Hope this helps you up your Pytest game,
© 2019 All rights reserved.