Mike Slinn

Pytest and Visual Studio Code

Published 2023-08-20.

Python has several well-known testing frameworks. Popular choices for functional and unit testing include pytest, the Robot Framework, and unittest. Lettuce and Behave are popular for behavior-driven testing.

I decided to use pytest for a recent Python project.

I found that pytest’s test discovery mechanism was finicky: it demanded that Python packages and modules be set up before tests could be run. This is in sharp contrast to writing unit tests for Ruby with RSpec, for example, because Ruby modules are much more flexible, forgiving and malleable.

However, getting pytest to work in Visual Studio Code was nearly impossible; Microsoft's documentation leaves a lot unsaid. In this article, I share what I learned and how I made things work.

Pytest

You can run pytest from the command line in three ways.

Pytest Command

The most common way to run pytest is to type the pytest command, like the following, which runs all tests:

Shell
$ pytest

Pytest Module

The second way to run pytest is to invoke the pytest Python module, as follows:

Shell
$ python -m pytest

Running pytest as a Python module is nearly the same as running the pytest command, except that running it as a module adds the current directory to sys.path. This is standard Python behavior for modules invoked with -m, and it can be helpful.

I had best results running pytest tests from Visual Studio Code when pytest was invoked as a module.

Calling Pytest from Python

Your code can also call pytest programmatically. Lots of interesting and advanced development setups could be crafted using this approach. Baby steps.
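
For example, pytest exposes pytest.main(), which accepts the same arguments as the command line and returns an exit code. Here is a minimal sketch; the run_tests.py file name is my own invention, not part of this project:

run_tests.py
import sys

import pytest

# Run pytest programmatically, passing the same arguments that the
# command line accepts. pytest.main() returns an exit code;
# 0 means that all selected tests passed.
exit_code = pytest.main(["--nf", "--lf"])
sys.exit(int(exit_code))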

Help Message

The help message for pytest is:

Shell
$ pytest -h
usage: pytest [options] [file_or_dir] [file_or_dir] [...]
positional arguments:
  file_or_dir

general:
  -k EXPRESSION         Only run tests which match the given substring expression. An expression is a Python evaluatable expression where all names are substring-matched against test names and their parent classes. Example: -k 'test_method or test_other' matches all test functions and classes whose name contains 'test_method' or 'test_other', while -k 'not test_method' matches those that don't contain 'test_method' in their names. -k 'not test_method and not test_other' will eliminate the matches. Additionally keywords are matched to classes and functions containing extra names in their 'extra_keyword_matches' set, as well as functions which have names assigned directly to them. The matching is case-insensitive.
  -m MARKEXPR           Only run tests matching given mark expression. For example: -m 'mark1 and not mark2'.
  --markers             Show markers (builtin, plugin and per-project ones).
  -x, --exitfirst       Exit instantly on first error or failed test
  --fixtures, --funcargs
                        Show available fixtures, sorted by plugin appearance (fixtures with leading '_' are only shown with '-v')
  --fixtures-per-test   Show fixtures per test
  --pdb                 Start the interactive Python debugger on errors or KeyboardInterrupt
  --pdbcls=modulename:classname
                        Specify a custom interactive Python debugger for use with --pdb. For example: --pdbcls=IPython.terminal.debugger:TerminalPdb
  --trace               Immediately break when running each test
  --capture=method      Per-test capturing method: one of fd|sys|no|tee-sys
  -s                    Shortcut for --capture=no
  --runxfail            Report the results of xfail tests as if they were not marked
  --lf, --last-failed   Rerun only the tests that failed at the last run (or all if none failed)
  --ff, --failed-first  Run all tests, but run the last failures first. This may re-order tests and thus lead to repeated fixture setup/teardown.
  --nf, --new-first     Run tests from new files first, then the rest of the tests sorted by file mtime
  --cache-show=[CACHESHOW]
                        Show cache contents, don't perform collection or tests. Optional argument: glob (default: '*').
  --cache-clear         Remove all cache contents at start of test run
  --lfnf={all,none}, --last-failed-no-failures={all,none}
                        Which tests to run with no previously (known) failures
  --sw, --stepwise      Exit on test failure and continue from last failing test next time
  --sw-skip, --stepwise-skip
                        Ignore the first failing test but stop on the next failing test. Implicitly enables --stepwise.

reporting:
  --durations=N         Show N slowest setup/test durations (N=0 for all)
  --durations-min=N     Minimal duration in seconds for inclusion in slowest list. Default: 0.005.
  -v, --verbose         Increase verbosity
  --no-header           Disable header
  --no-summary          Disable summary
  -q, --quiet           Decrease verbosity
  --verbosity=VERBOSE   Set verbosity. Default: 0.
  -r chars              Show extra test summary info as specified by chars: (f)ailed, (E)rror, (s)kipped, (x)failed, (X)passed, (p)assed, (P)assed with output, (a)ll except passed (p/P), or (A)ll. (w)arnings are enabled by default (see --disable-warnings), 'N' can be used to reset the list. (default: 'fE').
  --disable-warnings, --disable-pytest-warnings
                        Disable warnings summary
  -l, --showlocals      Show locals in tracebacks (disabled by default)
  --no-showlocals       Hide locals in tracebacks (negate --showlocals passed through addopts)
  --tb=style            Traceback print mode (auto/long/short/line/native/no)
  --show-capture={no,stdout,stderr,log,all}
                        Controls how captured stdout/stderr/log is shown on failed tests. Default: all.
  --full-trace          Don't cut any tracebacks (default is to cut)
  --color=color         Color terminal output (yes/no/auto)
  --code-highlight={yes,no}
                        Whether code should be highlighted (only if --color is also enabled). Default: yes.
  --pastebin=mode       Send failed|all info to bpaste.net pastebin service
  --junit-xml=path      Create junit-xml style report file at given path
  --junit-prefix=str    Prepend prefix to classnames in junit-xml output

pytest-warnings:
  -W PYTHONWARNINGS, --pythonwarnings=PYTHONWARNINGS
                        Set which warnings to report, see -W option of Python itself
  --maxfail=num         Exit after first num failures or errors
  --strict-config       Any warnings encountered while parsing the `pytest` section of the configuration file raise errors
  --strict-markers      Markers not registered in the `markers` section of the configuration file raise errors
  --strict              (Deprecated) alias to --strict-markers
  -c FILE, --config-file=FILE
                        Load configuration from `FILE` instead of trying to locate one of the implicit configuration files.
  --continue-on-collection-errors
                        Force test execution even if collection errors occur
  --rootdir=ROOTDIR     Define root directory for tests. Can be relative path: 'root_dir', './root_dir', 'root_dir/another_dir/'; absolute path: '/home/user/root_dir'; path with variables: '$HOME/root_dir'.

collection:
  --collect-only, --co  Only collect tests, don't execute them
  --pyargs              Try to interpret all arguments as Python packages
  --ignore=path         Ignore path during collection (multi-allowed)
  --ignore-glob=path    Ignore path pattern during collection (multi-allowed)
  --deselect=nodeid_prefix
                        Deselect item (via node id prefix) during collection (multi-allowed)
  --confcutdir=dir      Only load conftest.py's relative to specified dir
  --noconftest          Don't load any conftest.py files
  --keep-duplicates     Keep duplicate tests
  --collect-in-virtualenv
                        Don't ignore tests in a local virtualenv directory
  --import-mode={prepend,append,importlib}
                        Prepend/append to sys.path when importing test modules and conftest files. Default: prepend.
  --doctest-modules     Run doctests in all .py modules
  --doctest-report={none,cdiff,ndiff,udiff,only_first_failure}
                        Choose another output format for diffs on doctest failure
  --doctest-glob=pat    Doctests file matching pattern, default: test*.txt
  --doctest-ignore-import-errors
                        Ignore doctest ImportErrors
  --doctest-continue-on-failure
                        For a given doctest, continue to run after the first failure

test session debugging and configuration:
  --basetemp=dir        Base temporary directory for this test run. (Warning: this directory is removed if it exists.)
  -V, --version         Display pytest version and information about plugins. When given twice, also display information about plugins.
  -h, --help            Show help message and configuration info
  -p name               Early-load given plugin module name or entry point (multi-allowed). To avoid loading of plugins, use the `no:` prefix, e.g. `no:doctest`.
  --trace-config        Trace considerations of conftest.py files
  --debug=[DEBUG_FILE_NAME]
                        Store internal tracing debug information in this log file. This file is opened with 'w' and truncated as a result, care advised. Default: pytestdebug.log.
  -o OVERRIDE_INI, --override-ini=OVERRIDE_INI
                        Override ini option with "option=value" style, e.g. `-o xfail_strict=True -o cache_dir=cache`.
  --assert=MODE         Control assertion debugging tools. 'plain' performs no assertion debugging. 'rewrite' (the default) rewrites assert statements in test modules on import to provide assert expression information.
  --setup-only          Only setup fixtures, do not execute tests
  --setup-show          Show setup of fixtures while executing tests
  --setup-plan          Show what fixtures and tests would be executed but don't execute anything

logging:
  --log-level=LEVEL     Level of messages to catch/display. Not set by default, so it depends on the root/parent log handler's effective level, where it is "WARNING" by default.
  --log-format=LOG_FORMAT
                        Log format used by the logging module
  --log-date-format=LOG_DATE_FORMAT
                        Log date format used by the logging module
  --log-cli-level=LOG_CLI_LEVEL
                        CLI logging level
  --log-cli-format=LOG_CLI_FORMAT
                        Log format used by the logging module
  --log-cli-date-format=LOG_CLI_DATE_FORMAT
                        Log date format used by the logging module
  --log-file=LOG_FILE   Path to a file when logging will be written to
  --log-file-level=LOG_FILE_LEVEL
                        Log file logging level
  --log-file-format=LOG_FILE_FORMAT
                        Log format used by the logging module
  --log-file-date-format=LOG_FILE_DATE_FORMAT
                        Log date format used by the logging module
  --log-auto-indent=LOG_AUTO_INDENT
                        Auto-indent multiline messages passed to the logging module. Accepts true|on, false|off or an integer.
  --log-disable=LOGGER_DISABLE
                        Disable a logger by name. Can be passed multiple times.

[pytest] ini-options in the first pytest.ini|tox.ini|setup.cfg|pyproject.toml file found:

  markers (linelist):   Markers for test functions
  empty_parameter_set_mark (string):
                        Default marker for empty parametersets
  norecursedirs (args): Directory patterns to avoid for recursion
  testpaths (args):     Directories to search for tests when no files or directories are given on the command line
  filterwarnings (linelist):
                        Each line specifies a pattern for warnings.filterwarnings. Processed after -W/--pythonwarnings.
  usefixtures (args):   List of default fixtures to be used with this project
  python_files (args):  Glob-style file patterns for Python test module discovery
  python_classes (args):
                        Prefixes or glob names for Python test class discovery
  python_functions (args):
                        Prefixes or glob names for Python test function and method discovery
  disable_test_id_escaping_and_forfeit_all_rights_to_community_support (bool):
                        Disable string escape non-ASCII characters, might cause unwanted side effects (use at your own risk)
  console_output_style (string):
                        Console output: "classic", or with additional progress information ("progress" (percentage) | "count" | "progress-even-when-capture-no" (forces progress even when capture=no))
  xfail_strict (bool):  Default for the strict parameter of xfail markers when not given explicitly (default: False)
  tmp_path_retention_count (string):
                        How many sessions should we keep the `tmp_path` directories, according to `tmp_path_retention_policy`.
  tmp_path_retention_policy (string):
                        Controls which directories created by the `tmp_path` fixture are kept around, based on test outcome. (all/failed/none)
  enable_assertion_pass_hook (bool):
                        Enables the pytest_assertion_pass hook. Make sure to delete any previously generated pyc cache files.
  junit_suite_name (string):
                        Test suite name for JUnit report
  junit_logging (string):
                        Write captured log messages to JUnit report: one of no|log|system-out|system-err|out-err|all
  junit_log_passing_tests (bool):
                        Capture log information for passing tests to JUnit report
  junit_duration_report (string):
                        Duration time to report: one of total|call
  junit_family (string):
                        Emit XML for schema: one of legacy|xunit1|xunit2
  doctest_optionflags (args):
                        Option flags for doctests
  doctest_encoding (string):
                        Encoding used for doctest files
  cache_dir (string):   Cache directory path
  log_level (string):   Default value for --log-level
  log_format (string):  Default value for --log-format
  log_date_format (string):
                        Default value for --log-date-format
  log_cli (bool):       Enable log display during test run (also known as "live logging")
  log_cli_level (string):
                        Default value for --log-cli-level
  log_cli_format (string):
                        Default value for --log-cli-format
  log_cli_date_format (string):
                        Default value for --log-cli-date-format
  log_file (string):    Default value for --log-file
  log_file_level (string):
                        Default value for --log-file-level
  log_file_format (string):
                        Default value for --log-file-format
  log_file_date_format (string):
                        Default value for --log-file-date-format
  log_auto_indent (string):
                        Default value for --log-auto-indent
  pythonpath (paths):   Add paths to sys.path
  faulthandler_timeout (string):
                        Dump the traceback of all threads if a test takes more than TIMEOUT seconds to finish
  addopts (args):       Extra command line options
  minversion (string):  Minimally required pytest version
  required_plugins (args):
                        Plugins that must be present for pytest to run

Environment variables:
  PYTEST_ADDOPTS        Extra command line options
  PYTEST_PLUGINS        Comma-separated plugins to load during startup
  PYTEST_DISABLE_PLUGIN_AUTOLOAD
                        Set to disable plugin auto-loading
  PYTEST_DEBUG          Set to enable debug tracing of pytest's internals

to see available markers type: pytest --markers
to see available fixtures type: pytest --fixtures
(shown according to specified file_or_dir or current dir if not specified; fixtures with leading '_' are only shown with the '-v' option)

Helpful Options

It is handy to run new and failing tests while skipping tests that passed on previous runs. The ‑‑nf and ‑‑lf options, respectively, invoke that behavior:

Shell
$ pytest --nf --lf

Setup

Configuration

Pytest needed to know that the source code for my Python project is stored in the src directory. I created pyproject.toml as follows to configure pytest:

pyproject.toml
[tool.pytest.ini_options]
addopts = [
    "--import-mode=importlib",
]
pythonpath = "src"

The way in which pytest discovers and imports tests is complex. TL;DR: specify the newest and best import mechanism, importlib, by using the ‑‑import-mode=importlib option as shown above.

To always run pytest with the ‑‑nf and ‑‑lf options, add those options to pyproject.toml:

pyproject.toml
[tool.pytest.ini_options]
addopts = [
    "--import-mode=importlib",
    "--lf",
    "--nf",
]
pythonpath = "src"

Test Directories

Flexibility generally adds complexity. Pytest allows for a lot of flexibility regarding where tests can reside. Unfortunately, Python’s package and module mechanism, in combination with the pytest test discovery mechanism, can lead to a frustrating experience when setting up a new project.

You can read about good practices for pytest in the official documentation. Those recommendations are intimidating for first-time pytest programmers. I did not follow all of them.

I found the simplest approach was to create a subdirectory for the bulk of the Python project code. If a file called __init__.py is created in that directory (use touch for this), the subdirectory becomes a package, a submodule of your Python program. For this type of pytest setup, each submodule needs a tests directory.

Here is how you could use touch to define an imaginary subdirectory/submodule called my_submodule:

Shell
$ mkdir -p src/my_submodule/
$ touch src/my_submodule/__init__.py

My project structure was as follows; the listing was produced by the tree command.

pytest project structure
$ tree -I .git -I __pycache__
.
├── README.md
├── dl
├── dl.config
├── pyproject.toml
├── requirements.txt
└── src
    ├── __main__.py
    └── dl
        ├── __init__.py
        ├── argument_parse.py
        ├── dl_config.py
        ├── media_file.py
        ├── remote.py
        ├── tests
        │   ├── __init__.py
        │   ├── test_dl_config.py
        │   └── test_util.py
        └── util.py

3 directories, 15 files 

src/dl/tests/__init__.py was an empty marker file, but the __init__.py file in the parent module defined imports for the submodule:

src/dl/__init__.py
from dl.argument_parse import ArgParse
from dl.dl_config import DLConfig
from dl.media_file import MediaFile
from dl.remote import Remote
from dl.util import *

Sample Test

Here is a sample pytest file containing two unit tests. It is simple and clear.

src/dl/tests/test_util.py
import pytest
from pathlib import Path
from dl import samba_mount, samba_parse

class TestUtil:
    def test_samba_parse(self):
        remote_drive, local_path = samba_parse('e:/media/renders')
        assert remote_drive=='e'
        assert local_path=='/media/renders'

    def test_samba_mount(self):
        samba_root = samba_mount('bear', 'e', False)
        assert samba_root==Path('/mnt/bear/e')
        assert samba_root.is_mount()
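
You can run just this test file, or a single test within it, by passing a file path or a -k expression to pytest. For example, given the project structure shown earlier:

Shell
$ python -m pytest src/dl/tests/test_util.py
$ python -m pytest -k test_samba_parse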

Secret Sauce

If you have been paying attention, you will realize that one way to run new and failing tests is to type the following from the command line. We will use this method of invocation in the next section to integrate with Visual Studio Code.

Shell
$ python -m pytest --nf --lf

Visual Studio Code Integration

Visual Studio Code has a Testing explorer and a Run and Debug explorer. I find the existence of two very similar explorers annoying, because you often want to debug a test. Yes, you can do that by pushing the right button in the Testing explorer ... if it works. It did not work for me.

Shopify Ruby, the latest and greatest Ruby extension for Visual Studio Code, uses the Run and Debug explorer to run and debug tests as well as entire programs. This makes sense to me.

Microsoft Pylance (for Python) follows a different convention, and uses the Run and Debug explorer to run and debug Python programs. You must use the Testing explorer for running or debugging Python unit tests. Annoying.

In the Testing explorer, I was able to see my dl module, but no tests were discovered. This was really annoying, and this is where more documentation from Microsoft would have been helpful. After wasting lots of time trying to solve this problem, I discovered a solution.
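
For reference, the Testing explorer relies on settings like the following in .vscode/settings.json. These are the standard settings from Microsoft's documentation; the "src" argument is my assumption, matching the project layout shown earlier, and these settings alone did not make discovery work for me:

.vscode/settings.json
{
  // Use pytest, not unittest, for test discovery and execution.
  "python.testing.pytestEnabled": true,
  "python.testing.unittestEnabled": false,
  // Tell the Testing explorer where to look for tests.
  "python.testing.pytestArgs": ["src"]
}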

Invoke Pytest As a Module from VSCode

Run and Debug configurations that invoke the pytest module work perfectly. Here is an example .vscode/launch.json that demonstrates this:

.vscode/launch.json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "PyTest All",
      "type": "python",
      "request": "launch",
      "module": "pytest",
      "justMyCode": true
    },
    {
      "args": ["--nf", "--lf"],
      "name": "PyTest New and Failing",
      "type": "python",
      "request": "launch",
      "module": "pytest",
      "justMyCode": true
    }
  ]
}
😁

You can set breakpoints in your code and debug your tests this way.


