Essential Python Tools #1: On-demand pytest fixtures
Tags: Python, Software Engineering, Testing, Pytest, Essential Python Tools, Fixtures
I think the dependencies you use for development should be separate from the ones you use for testing. It’s quite easy to introduce dependencies in your code that rely on data or configuration created outside of your codebase. To get around this we can use an entirely separate set of test fixtures which are created before each test run, configured, tested against, and torn down when the test run is over.
So, we want to set up our dependencies:
import subprocess

import pytest
import pytest_postgresql.factories as pg_factories
import pytest_redis.factories as redis_factories

test_postgresql = pg_factories.postgresql_proc()

# pytest-redis needs to know where the redis-server binary lives.
redis_server_path = (
    subprocess.run(["which", "redis-server"], capture_output=True, check=True)
    .stdout.decode("utf-8")
    .strip("\n")
)
test_redisdb = redis_factories.redis_proc(executable=redis_server_path)
The above will create an on-demand instance of both redis and postgresql. Some extra configuration is required for redis: the factory needs a path to the redis binary. These factories will create accompanying pytest fixtures that are accessible in the normal way.
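For example, any test can request test_postgresql by name like any other pytest fixture. A minimal sketch (the test itself is hypothetical, and running() comes from the mirakuru executor that pytest-postgresql builds on):

from pytest_postgresql.executor import PostgreSQLExecutor

def test_postgres_is_up(test_postgresql: PostgreSQLExecutor) -> None:
    # The factory-created fixture hands back a handle to the live process.
    assert test_postgresql.running()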
Now that we’ve got our dependencies set up, we can use them to build out our database.
from sqlalchemy import create_engine
from sqlalchemy.engine import Engine
from pytest_postgresql.executor import PostgreSQLExecutor

@pytest.fixture(scope="session")
def _test_db(test_postgresql: PostgreSQLExecutor) -> Engine:
    """Internal fixture used within the `conftest.py`."""
    test_db_dsn = (
        f"postgresql://{test_postgresql.user}:{test_postgresql.password}"
        f"@127.0.0.1:{test_postgresql.port}/{test_postgresql.dbname}"
    )
    engine = create_engine(test_db_dsn, future=True)
    # This is where you'd set up your tables.
    ...
    # Return the engine to use in any downstream tests.
    return engine
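What setting up your tables looks like depends on your application. As one hedged possibility, if your models hang off a SQLAlchemy MetaData object (here a hypothetical myapp.models.metadata), the setup step can be a single call:

from myapp.models import metadata  # hypothetical: wherever your table definitions live

metadata.create_all(engine)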
We can do the same for redis.
from redis import Redis
from pytest_redis.executor import RedisExecutor

@pytest.fixture(scope="session")
def _test_redis(test_redisdb: RedisExecutor) -> Redis:
    """Internal fixture used within the `conftest.py`."""
    redis_db_dsn = f"redis://{test_redisdb.host}:{test_redisdb.port}"
    redis = Redis.from_url(redis_db_dsn)
    # This is where you'd set up redis if you needed to.
    ...
    # Return the client to use in any downstream tests.
    return redis
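If you want a quick smoke test that the plumbing works end to end, something like this (the test name and key are made up) will round-trip a value through the on-demand instance:

def test_redis_round_trip(_test_redis: Redis) -> None:
    # redis-py returns bytes, hence the b"ok" comparison.
    _test_redis.set("smoke-test", "ok")
    assert _test_redis.get("smoke-test") == b"ok"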
Now that all our application setup is done, we can use these two fixtures as dependencies for our application test fixture. With the dependency tree we’ve formed, we can be sure that all prerequisites will be up and ready before a test is run.
from fastapi import FastAPI

@pytest.fixture(scope="session")
def app(
    _test_redis: Redis,
    _test_db: Engine,
) -> FastAPI:
    # Do any other pre-setup tasks here.
    app = main()
    return app
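Downstream tests can now just ask for app. Here’s a sketch using FastAPI’s TestClient, assuming a hypothetical /health endpoint; swap in any cheap route your app actually serves:

from fastapi.testclient import TestClient

def test_app_boots(app: FastAPI) -> None:
    client = TestClient(app)
    response = client.get("/health")  # "/health" is a stand-in endpoint
    assert response.status_code == 200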
This new fixture is a child of both fixtures _test_redis and _test_db, which are in turn children of test_postgresql and test_redisdb. Here’s a graph, just so we’re all on the same page:
stateDiagram-v2
    test_redisdb --> _test_redis
    test_postgresql --> _test_db
    _test_redis --> app
    _test_db --> app
These dependency trees can, for better or worse, get arbitrarily complex; however, by breaking down the configuration steps you can turn a complex dependency tree into simple steps joined through the pytest fixture framework[1].
So we’ve talked about what to do, but what do you get for it? Well, with the ability to create on-demand dependencies you can run tests without having to worry about breaking your local development environment. Some things I like to do are:
- Test my database migrations. I find it’s best if these are tested by running them from base to head, back to base, and finally back to head (see the sketch after this list). This works similarly to a SYN-ACK handshake: you validate that your upgrade works, you validate that your downgrade works, and then you validate that you haven’t missed any resources in the downgrade.
- Testing anything that requires a fresh start of a dependency. Some configuration can’t be hot-changed; by setting up test fixtures and starting them just for a test, you don’t have to worry about changing them back.
- Each test run starts from a fresh state; use this to your advantage to dogfood any logic that is used to spawn new users or flows.
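As a sketch of that migration round trip, assuming your project uses Alembic with an alembic.ini at the repository root (and a fixture variant that hands back an empty database rather than one with tables pre-created):

from alembic import command
from alembic.config import Config
from sqlalchemy.engine import Engine

def test_migrations_round_trip(_test_db: Engine) -> None:
    # Point alembic at the on-demand database instead of your dev instance.
    config = Config("alembic.ini")
    config.set_main_option(
        "sqlalchemy.url",
        _test_db.url.render_as_string(hide_password=False),
    )
    command.upgrade(config, "head")    # base -> head: upgrades work
    command.downgrade(config, "base")  # head -> base: downgrades work
    command.upgrade(config, "head")    # and back: nothing missed on the way down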
Now, this isn’t a silver bullet without downsides: each test run is going to have some extra overhead. Once you end up with hundreds of SQL migrations you can have quite a wait, doubly so if you ingest data or do some other slow activity. There are mitigations against this, though; you could use memdisks[2] as a backing store. As with most things in computer science, there is almost never a free lunch.
The other big drawback here is the added complexity involved in onboarding new staff members. By adding a local dependency for everything, what used to be a docker compose up -d becomes a less than entertaining trudge through package managers.
[1] pytest-fixture-tools includes some tooling to generate these trees; once you get to a certain stage they are VERY handy.

[2] This is a pretty good rundown of what might be involved in improving the performance of your test suite.