⏱️ Speeding up your Python & Django test suite

Artwork by @vitzi_art

Less time waiting, more time hacking!

Yes yes, we all know. Writing tests and running them thoroughly against our code is important. None of us really enjoys doing it, but almost all of us see the benefits of the process. What isn’t as great about testing, though, is the waiting, the context switching and the loss of focus. At least for me, this distraction is a real drag, especially when I have to run a full test suite.

This is why I find it crucial to have a fine-tuned test suite that runs as fast as possible, and why I always put some effort into speeding up my test runs, both locally and in CI. While working on different Python / Django projects I’ve discovered some tips & tricks that can make your life easier. Plenty of them are mentioned in various docs, like the almighty Django documentation, but I think there’s some value in collecting them all in a single place.

As a bonus, I’ll be sharing some examples and tips for enhancing your test runs on GitHub Actions, as well as a case study to showcase the benefit of all these suggestions.

The quick wins

Running a part of the test suite

This first one is kind of obvious, but you don’t have to run the whole test suite every single time. You can run the tests in a single package, module, class, or even a single function, by passing its dotted path to the test command.

> python manage.py test package.module.class.function
System check identified no issues (0 silenced).
..
----------------------------------------------------------------------
Ran 2 tests in 6.570s

OK
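
For example, assuming a hypothetical app called myapp with its tests in myapp/tests/test_models.py, all of these are valid targets, from coarsest to finest:

> python manage.py test myapp
> python manage.py test myapp.tests.test_models
> python manage.py test myapp.tests.test_models.OrderTests
> python manage.py test myapp.tests.test_models.OrderTests.test_total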

Keeping the database between test runs

By default, Django creates a test database for each test run, which is destroyed at the end. This is a rather slow process, especially if you want to run just a few tests! With the --keepdb option, Django will skip destroying and recreating the database on every local run. This gives a huge speedup when running tests locally and is a pretty safe option to use in general.

> python manage.py test <path or nothing> --keepdb
Using existing test database for alias 'default'...          <--- Reused!
System check identified no issues (0 silenced).
..
----------------------------------------------------------------------
Ran 2 tests in 6.570s

OK
Preserving test database for alias 'default'...              <--- Not destroyed!

This is not as error-prone as it sounds, since every test usually takes care of restoring the state of the database, either by rolling back transactions or truncating tables. We’ll talk more about this later on.

If you see errors that may be related to the database not being recreated at the start of the run (like IntegrityError, etc.), you can always remove the flag for the next run. This will destroy the database and recreate it from scratch.

Running tests in parallel

By default, Django runs tests sequentially. However, whether you’re running tests locally or in your CI (GitHub Actions, Jenkins, etc.), more often than not you’ll have multiple cores available. To leverage them, you can use the --parallel flag. Django will create additional processes to run your tests, and additional databases to run them against.

You will see something like this:

> python3 manage.py test --parallel --keepdb
Using existing test database for alias 'default'...
Using existing clone for alias 'default'...        --
Using existing clone for alias 'default'...         |
Using existing clone for alias 'default'...         |
Using existing clone for alias 'default'...         |
Using existing clone for alias 'default'...         |
Using existing clone for alias 'default'...         |    => 12 processes!
Using existing clone for alias 'default'...         |
Using existing clone for alias 'default'...         |
Using existing clone for alias 'default'...         |
Using existing clone for alias 'default'...         |
Using existing clone for alias 'default'...         |
Using existing clone for alias 'default'...        -- 

< running tests > 

Preserving test database for alias 'default'...
... x10 ...
Preserving test database for alias 'default'...

On GitHub-hosted runners, this usually spawns 2-3 processes.

When running with --parallel, Django will try to pickle tracebacks from failures so it can display them at the end. You’ll have to add tblib as a dependency to make this work.
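
Both are one-liners: tblib is a plain pip install (or one more line in your test requirements), and --parallel also accepts an explicit process count if you’d rather control the concurrency yourself:

> pip install tblib
> python3 manage.py test --keepdb --parallel 4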

Caching your Python environment (CI)

Usually, when running tests in CI/CD environments, one step of the process is building the Python environment (creating a virtualenv, installing dependencies, etc.). A common practice to speed things up here is to cache this environment and keep it between builds, since it doesn’t change often. Keep in mind that you’ll need to invalidate the cache whenever your requirements change.

An example for GitHub Actions could be adding something like this to your workflow’s YAML file:

- name: Cache pip
  uses: actions/cache@v2
  with:
    # This path is specific to Ubuntu
    path: ${{ env.pythonLocation }}
    # Look to see if there is a cache hit for the corresponding requirements file
    key: ${{ env.pythonLocation }}-${{ hashFiles('requirements.txt','test_requirements.txt') }}

Make sure to include all your requirements files in hashFiles.

If you search online you’ll find various guides, including the official GitHub guide, advising you to cache just the downloaded packages (the pip cache). I prefer the method above, which caches the installed packages, since I saw no real speedup with those suggestions.
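
One detail worth knowing: the ${{ env.pythonLocation }} variable is populated by the actions/setup-python step, so the cache step must come after it. A minimal sketch of the surrounding job steps (the action versions and Python version are just examples):

- uses: actions/checkout@v2
- name: Set up Python
  uses: actions/setup-python@v2
  with:
    python-version: '3.9'
- name: Cache pip
  uses: actions/cache@v2
  with:
    path: ${{ env.pythonLocation }}
    key: ${{ env.pythonLocation }}-${{ hashFiles('requirements.txt','test_requirements.txt') }}
- name: Install dependencies
  run: pip install -r requirements.txt -r test_requirements.txt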

The slow but powerful

Prefer TestCase over TransactionTestCase

Django offers two different base classes for test cases: TestCase and TransactionTestCase. Actually, it offers a few more, but here we only care about these two.

But what’s the difference? Quoting the docs:

Django’s TestCase class is a more commonly used subclass of TransactionTestCase that makes use of database transaction facilities to speed up the process of resetting the database to a known state at the beginning of each test. A consequence of this, however, is that some database behaviors cannot be tested within a Django TestCase class. For instance, you cannot test that a block of code is executing within a transaction, as is required when using select_for_update(). In those cases, you should use TransactionTestCase.

TransactionTestCase and TestCase are identical except for the manner in which the database is reset to a known state and the ability for test code to test the effects of commit and rollback:

  • A TransactionTestCase resets the database after the test runs by truncating all tables. A TransactionTestCase may call commit and rollback and observe the effects of these calls on the database.
  • A TestCase, on the other hand, does not truncate tables after a test. Instead, it encloses the test code in a database transaction that is rolled back at the end of the test. This guarantees that the rollback at the end of the test restores the database to its initial state.

Almost every project has that one test that breaks with TestCase but works with TransactionTestCase. When engineers see this, they conclude that TransactionTestCase is the more reliable option and switch their base test classes over to it, without considering the performance impact. As we’ll see later on, that impact is far from negligible.

TL;DR:

You probably only need TestCase for most of your tests. Use TransactionTestCase wisely!

Some additional indicators that TransactionTestCase might be needed are:

  • emulating transaction errors
  • using on_commit hooks (see the sketch below)
  • firing async tasks (which by definition run outside the transaction)
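
To make the on_commit case concrete, here’s a minimal sketch (model-free, so it should drop into any project) of a test that only passes under TransactionTestCase. Inside a TestCase, the test code is wrapped in a transaction that is never committed, so the hook would never fire:

# tests.py
from unittest import mock

from django.db import transaction
from django.test import TransactionTestCase


class OnCommitHookTests(TransactionTestCase):
    def test_hook_fires_after_commit(self):
        callback = mock.Mock()
        with transaction.atomic():
            transaction.on_commit(callback)
            # Still inside the atomic block: nothing has fired yet.
            callback.assert_not_called()
        # No outer test transaction here, so the block really
        # committed and the hook has run.
        callback.assert_called_once()

If you’re on Django 3.2+, there’s also TestCase.captureOnCommitCallbacks(), which lets you test on_commit hooks without paying the TransactionTestCase cost.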

Try to use setUpTestData instead of setUp

Whenever you want to set up data for your tests, you usually override setUp. This runs before every test and creates the data you need.

However, if you don’t change the data in each test case, you can use setUpTestData instead. This runs once per test class, and the data it creates is shared by all the tests in that class. This is definitely faster, but if your tests alter the test data you can end up with weird cases (Django 3.2+ isolates these objects between tests, which makes this safer). Use with caution.
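
As a rough sketch (the Author model and app name are hypothetical), the change usually looks like this:

# myapp/tests.py
from django.test import TestCase

from myapp.models import Author


class AuthorTests(TestCase):
    @classmethod
    def setUpTestData(cls):
        # Runs once for the whole class instead of once per test,
        # so both tests below share a single INSERT.
        cls.author = Author.objects.create(name="Jane")

    def test_name(self):
        self.assertEqual(self.author.name, "Jane")

    def test_upper_name(self):
        self.assertEqual(self.author.name.upper(), "JANE")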

Finding slow tests with nose

Last but not least, remember that tests are code. So there’s always the chance that some tests are really slow simply because they weren’t developed with performance in mind. If this is the case, the best thing you can do is rewrite them. But pinpointing the slow tests in a big suite is not that easy.

Luckily, you can use django-nose together with nose-timer to find the slowest tests in your suite.

To do that:

  • Add django-nose and nose-timer to your requirements
  • In your settings.py, change the test runner and add some nose-specific arguments

Example settings.py:

TEST_RUNNER = 'django_nose.NoseTestSuiteRunner'
NOSE_ARGS = [
    '--nocapture',
    '--verbosity=2',
    '--with-timer',
    '--timer-top-n=10',
    '--with-id'
]

The above arguments will make nose output:

  • The name of each test
  • The time each test takes
  • The top 10 slowest tests in the end

Example output:

....
#656 test_function_name (path.to.test.module.TestCase) ... ok (1.3138s)
#657 test_function_name (path.to.test.module.TestCase) ... ok (3.0827s)
#658 test_function_name (path.to.test.module.TestCase) ... ok (5.0743s)
#659 test_function_name (path.to.test.module.TestCase) ... ok (5.3729s)
....
#665 test_function_name (path.to.test.module.TestCase) ... ok (3.1782s)
#666 test_function_name (path.to.test.module.TestCase) ... ok (0.7577s)
#667 test_function_name (path.to.test.module.TestCase) ... ok (0.7488s)

[success] 6.67% path.to.slow.test.TestCase.function: 5.3729s    ----
[success] 6.30% path.to.slow.test.TestCase.function: 5.0743s       |
[success] 5.61% path.to.slow.test.TestCase.function: 4.5148s       |
[success] 5.50% path.to.slow.test.TestCase.function: 4.4254s       |
[success] 5.09% path.to.slow.test.TestCase.function: 4.0960s       | 10 slowest
[success] 4.32% path.to.slow.test.TestCase.function: 3.4779s       |    tests
[success] 3.95% path.to.slow.test.TestCase.function: 3.1782s       |
[success] 3.83% path.to.slow.test.TestCase.function: 3.0827s       |
[success] 3.47% path.to.slow.test.TestCase.function: 2.7970s       |
[success] 3.20% path.to.slow.test.TestCase.function: 2.5786s    ---- 
----------------------------------------------------------------------
Ran 72 tests in 80.877s

OK

Now it’s much easier to spot the slow tests and debug why they take so long to run.

Case study

To showcase the value of each of these suggestions, we’ll be running a series of scenarios and measuring how much time we save with each improvement.

The scenarios

  • Locally
    • Run a single test with / without --keepdb, to measure the overhead of recreating the database
    • Run a whole test suite locally with / without --parallel, to see how much faster this is
  • On GitHub Actions
    • Run a whole test suite with no improvements
    • Add --parallel and re-run
    • Cache the python environment and re-run
    • Change the base test case to TestCase from TransactionTestCase

Locally

Performance of --keepdb

Let’s run a single test without --keepdb:

> time python3 manage.py test package.module.TestCase.test 
Creating test database for alias 'default'...
System check identified no issues (0 silenced).
.
----------------------------------------------------------------------
Ran 1 test in 4.297s

OK
Destroying test database for alias 'default'...

real    0m50.299s
user    1m0.945s
sys     0m1.922s

Now the same test with --keepdb:

> time python3 manage.py test package.module.TestCase.test --keepdb
Using existing test database for alias 'default'...
.
----------------------------------------------------------------------
Ran 1 test in 4.148s

OK
Preserving test database for alias 'default'...

real    0m6.899s
user    0m20.640s
sys     0m1.845s

Difference: 50 sec vs 7 sec or 7 times faster

Performance of --parallel

Without --parallel:

> python3 manage.py test --keepdb
...
----------------------------------------------------------------------
Ran 591 tests in 670.560s

With --parallel (concurrency: 6):

> python3 manage.py test --keepdb --parallel 6
...
----------------------------------------------------------------------
Ran 591 tests in 305.394s

Difference: 670 sec vs 305 sec or > 2x faster

On GitHub Actions

Without any improvements, the whole build took ~25 mins to run, with the test suite step alone taking 20 mins.

When running with --parallel, the whole build took ~17 mins to run (~30% less). Running the tests took 13 mins (vs 20 mins without --parallel, an improvement of ~35%).

By caching the Python environment, the Install dependencies step takes a few seconds to run instead of ~4 mins, bringing the build time down to 14 mins.

Finally, by changing the base test case from TransactionTestCase to TestCase, and fixing the 3 tests that genuinely required the latter, the time dropped again, to under 10 mins.

Neat, isn’t it?

Key takeaway: We managed to reduce the build time from ~25 mins to less than 10 mins, which is less than half of the original time.

Bonus: Using coverage.py in parallel mode

If you are using coverage.py, setting the --parallel flag is not enough: by default, coverage won’t measure the extra worker processes that Django spawns.

First, you will need to set parallel = True and concurrency = multiprocessing in your .coveragerc. For example:

# .coveragerc
[run]
branch = True
omit = */__init__*
       */test*.py
       */migrations/*
       */urls.py
       */admin.py
       */apps.py

# Required for parallel
parallel = true
# Required for parallel
concurrency = multiprocessing

[report]
precision = 1
show_missing = True
ignore_errors = True
exclude_lines =
    pragma: no cover
    raise NotImplementedError
    except ImportError
    def __repr__
    if self.logger.debug
    if __name__ == .__main__.:

Then, add a sitecustomize.py to your project’s root directory (where you’ll be running your tests from).

# sitecustomize.py
import coverage

coverage.process_startup()

Finally, you’ll need a few extra steps to run the tests with coverage and create a report.

# change the command to something like this
COVERAGE_PROCESS_START=./.coveragerc coverage run --parallel-mode --concurrency=multiprocessing --rcfile=./.coveragerc manage.py test --parallel
# combine individual coverage files 
coverage combine --rcfile=./.coveragerc
# and then create the coverage report
coverage report -m --rcfile=./.coveragerc

Enjoy your way faster test suite!

Sergios Aftsidis

Senior Backend Software Engineer @ ΟRFIUM

https://www.linkedin.com/in/saftsidis/

https://iamsafts.com/

https://github.com/safts