20 Pytest Concepts with Before-and-After Examples
Table of contents
- 1. Setting up a Basic Test Function
- 2. Running Tests with pytest
- 3. Testing with Parameters
- 4. Testing Exceptions
- 5. Skipping Tests
- 6. Fixtures for Test Setup
- 7. Temporary Files and Directories
- 8. Running Tests in Parallel
- 9. Capturing Output
- 10. Mocking Functions with pytest-mock
- 11. Test Grouping (Test Classes)
- 12. Test Coverage with pytest-cov
- 13. Running Specific Tests
- 14. Marking Tests as Expected Failures
- 15. Running Tests Based on Markers
- 16. Fixture Scope (Session/Module/Class)
- 17. Testing for Warnings
- 18. Using the pytest Debugger (pdb)
- 19. Monkey Patching
- 20. Testing Database Transactions with Rollbacks
1. Setting up a Basic Test Function
Boilerplate Code:
def test_example():
    assert 1 == 1
Use Case: Create a simple test function to verify basic functionality in your code.
Goal: Write a basic pytest function to check if a condition holds true.
Sample Code:
def test_example():
    assert 1 == 1
Before Example:
You manually check conditions in your script by printing values, leading to messy and inefficient debugging.
if 1 == 1:
    print("Test passed")
After Example:
With pytest, you write concise test functions and automatically check assertions.
$ pytest
# Output: All tests passed.
Challenge: Write more complex test functions that include more than one assertion, as in the sketch below.
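For instance, a minimal sketch of a test with several related assertions (the is_even helper is hypothetical, invented just for illustration):
import pytest

def is_even(n):
    # Hypothetical helper under test.
    return n % 2 == 0

def test_is_even_multiple_assertions():
    # Several related checks grouped in one test function.
    assert is_even(2)
    assert not is_even(3)
    assert is_even(0)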
2. Running Tests with pytest
Boilerplate Code:
pytest
Use Case: Execute tests across your project in one go.
Goal: Use the pytest command to run all test cases in your project.
Sample Command:
pytest
Before Example:
You manually execute each function or file to check if it works, leading to inefficiency.
python test_example.py
# Manually running each script.
After Example:
With pytest, all tests are executed in a single command, and the results are consolidated.
$ pytest
# Output: 1 passed, 0 failed
Challenge: Set up pytest to run tests on multiple files in one go.
3. Testing with Parameters
Boilerplate Code:
import pytest

@pytest.mark.parametrize("input,expected", [(1, 1), (2, 4), (3, 9)])
def test_square(input, expected):
    assert input ** 2 == expected
Use Case: Test multiple sets of inputs in one function.
Goal: Use pytest's parametrize feature to check a function with multiple input-output pairs.
Sample Code:
@pytest.mark.parametrize("input,expected", [(1, 1), (2, 4), (3, 9)])
def test_square(input, expected):
    assert input ** 2 == expected
Before Example:
You write separate test functions for each input, duplicating your code.
def test_square_1():
    assert 1 ** 2 == 1

def test_square_2():
    assert 2 ** 2 == 4
After Example:
With parametrize, you test multiple inputs in one concise test function.
$ pytest
# Output: All parameterized tests passed.
Challenge: Use pytest.mark.parametrize to test various edge cases in your functions.
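A short sketch of edge-case parametrization (the chosen cases are arbitrary examples; the expected values follow directly from squaring):
import pytest

@pytest.mark.parametrize("value,expected", [
    (0, 0),            # zero edge case
    (-2, 4),           # negative input
    (10**6, 10**12),   # large input
])
def test_square_edge_cases(value, expected):
    assert value ** 2 == expected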
4. Testing Exceptions
Boilerplate Code:
import pytest

def test_raises_exception():
    with pytest.raises(ZeroDivisionError):
        1 / 0
Use Case: Verify that your code raises the correct exceptions in edge cases.
Goal: Use pytest.raises to ensure specific exceptions are triggered in your functions.
Sample Code:
def test_raises_exception():
    with pytest.raises(ZeroDivisionError):
        1 / 0
Before Example:
You manually check for exceptions using try-except blocks in your code.
try:
    1 / 0
except ZeroDivisionError:
    print("Caught ZeroDivisionError")
After Example:
With pytest, you assert that the correct exceptions are raised cleanly.
$ pytest
# Output: Test passed, ZeroDivisionError was raised.
Challenge: Write tests for a function that may raise multiple types of exceptions.
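One way to approach the challenge, as a sketch (the divide helper is hypothetical, invented for the example):
import pytest

def divide(a, b):
    # Hypothetical function that can fail in two different ways.
    if not isinstance(a, (int, float)) or not isinstance(b, (int, float)):
        raise TypeError("numeric arguments required")
    return a / b

def test_divide_by_zero():
    with pytest.raises(ZeroDivisionError):
        divide(1, 0)

def test_divide_bad_type():
    with pytest.raises(TypeError):
        divide("1", 2)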
5. Skipping Tests
Boilerplate Code:
import pytest

@pytest.mark.skip(reason="Not ready for this test yet")
def test_not_ready():
    assert 1 == 1
Use Case: Temporarily skip certain tests that are not ready or necessary.
Goal: Use pytest.mark.skip to skip specific tests without deleting or commenting them out.
Sample Code:
@pytest.mark.skip(reason="Not ready for this test yet")
def test_not_ready():
    assert 1 == 1
Before Example:
You comment out tests you don't want to run, making your code messy.
# def test_not_ready():
#     assert 1 == 1
After Example:
With pytest, you can skip tests easily and keep your code clean.
$ pytest
# Output: Test skipped with reason: Not ready for this test yet
Challenge: Try using pytest.mark.skipif to conditionally skip tests based on certain conditions.
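A minimal sketch of conditional skipping based on the Python version (the 3.12 threshold is an arbitrary example):
import sys
import pytest

@pytest.mark.skipif(sys.version_info < (3, 12), reason="requires Python 3.12+")
def test_new_feature():
    # Only runs on interpreters that meet the version condition.
    assert True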
6. Fixtures for Test Setup
Boilerplate Code:
import pytest

@pytest.fixture
def setup_data():
    return {"user": "admin", "password": "password123"}

def test_fixture_example(setup_data):
    assert setup_data["user"] == "admin"
Use Case: Provide reusable setup for tests that need to prepare some data or state.
Goal: Use pytest fixtures to share setup code across multiple test functions.
Sample Code:
@pytest.fixture
def setup_data():
    return {"user": "admin", "password": "password123"}

def test_fixture_example(setup_data):
    assert setup_data["user"] == "admin"
Before Example:
You copy-paste setup code in every test, making the code redundant.
def test_fixture_example():
    data = {"user": "admin", "password": "password123"}
    assert data["user"] == "admin"
After Example:
With fixtures, setup code is shared across test functions, reducing redundancy.
$ pytest
# Output: All tests passed using fixture data.
Challenge: Create a fixture that sets up a database connection and reuses it in multiple test cases.
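A sketch of the challenge using the standard-library sqlite3 module with an in-memory database (the users table is invented for the example):
import sqlite3
import pytest

@pytest.fixture
def db_connection():
    # In-memory database; closed automatically after each test.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    yield conn
    conn.close()

def test_insert_user(db_connection):
    db_connection.execute("INSERT INTO users VALUES (1, 'admin')")
    row = db_connection.execute("SELECT name FROM users WHERE id = 1").fetchone()
    assert row == ("admin",)

def test_table_starts_empty(db_connection):
    rows = db_connection.execute("SELECT * FROM users").fetchall()
    assert rows == []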
7. Temporary Files and Directories
Boilerplate Code:
def test_temp_file(tmpdir):
    temp_file = tmpdir.join("temp.txt")
    temp_file.write("Temporary data")
    assert temp_file.read() == "Temporary data"
Use Case: Create and use temporary files or directories during tests.
Goal: Use pytest's tmpdir fixture to handle temporary files and directories.
Sample Code:
def test_temp_file(tmpdir):
    temp_file = tmpdir.join("temp.txt")
    temp_file.write("Temporary data")
    assert temp_file.read() == "Temporary data"
Before Example:
You manually create temporary files, cluttering your project.
Creating temporary files manually.
After Example:
With tmpdir, pytest automatically creates and cleans up temporary files.
$ pytest
# Output: Test passed, temp file handled and cleaned.
Challenge: Write a test that creates a temporary log file, writes to it, and verifies the content.
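A sketch of the log-file challenge using the newer pathlib-based tmp_path fixture (the log lines are made up for illustration):
def test_temp_log_file(tmp_path):
    log_file = tmp_path / "app.log"
    log_file.write_text("INFO startup complete\nERROR something failed\n")
    contents = log_file.read_text()
    assert "ERROR something failed" in contents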
8. Running Tests in Parallel
Boilerplate Code:
pytest -n 4
Use Case: Run tests in parallel to save time on larger test suites.
Goal: Use pytest-xdist to execute tests across multiple CPUs concurrently.
Sample Command:
pytest -n 4
Before Example:
Tests run sequentially, causing long waiting times.
pytest
# Tests are executed one after another.
After Example:
With parallel execution, tests run faster and more efficiently.
$ pytest -n 4
# Output: All tests executed in parallel, total time reduced.
Challenge: Run tests in parallel across multiple files and measure the speed improvement.
9. Capturing Output
Boilerplate Code:
def test_output(capsys):
    print("Hello, pytest!")
    captured = capsys.readouterr()
    assert captured.out == "Hello, pytest!\n"
Use Case: Capture printed output during tests.
Goal: Use pytest's capsys fixture to capture and assert standard output and error streams.
Sample Code:
def test_output(capsys):
    print("Hello, pytest!")
    captured = capsys.readouterr()
    assert captured.out == "Hello, pytest!\n"
Before Example:
You manually check print outputs in the console during the test run.
print("Checking printed output.")
After Example:
With capsys, you capture and assert output directly in the test, ensuring it behaves as expected.
$ pytest
# Output: Test passed, output matched expectations.
Challenge: Use capsys to capture error streams as well, and verify that your error-handling code works as expected.
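A short sketch for the challenge, capturing stderr alongside stdout:
import sys

def test_error_output(capsys):
    print("normal output")
    print("something went wrong", file=sys.stderr)
    captured = capsys.readouterr()
    assert captured.out == "normal output\n"
    assert "something went wrong" in captured.err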
10. Mocking Functions with pytest-mock
Boilerplate Code:
import module  # placeholder for the module whose function is being mocked

def test_mocking(mocker):
    mocker.patch('module.function_name', return_value=42)
    assert module.function_name() == 42
Use Case: Replace parts of your code with mocks during testing to isolate specific functionality.
Goal: Use pytest-mock to mock out specific functions or methods during your test runs.
Sample Code:
def test_mocking(mocker):
    mocker.patch('module.function_name', return_value=42)
    assert module.function_name() == 42
Before Example:
You rely on real function calls during tests, which may produce unpredictable results.
# Calling actual functions may lead to side effects or reliance on external resources.
After Example:
With mocking, you isolate and control function outputs during tests, ensuring consistency.
$ pytest
# Output: Mock function returned expected value.
Challenge: Mock a function that reads from a file, and instead return a mock file object during your tests.
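One possible sketch for the challenge, patching builtins.open with unittest.mock.mock_open (the read_config helper is hypothetical):
from unittest import mock

def read_config(path):
    # Hypothetical function under test that reads from a file.
    with open(path) as f:
        return f.read()

def test_read_config_mocked(mocker):
    mocker.patch("builtins.open", mock.mock_open(read_data="key=value"))
    assert read_config("config.ini") == "key=value"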
11. Test Grouping (Test Classes)
Boilerplate Code:
class TestMathOperations:
    def test_addition(self):
        assert 1 + 1 == 2

    def test_multiplication(self):
        assert 2 * 2 == 4
Use Case: Organize related tests into groups using test classes.
Goal: Use classes to group tests that are related to similar functionality.
Sample Code:
class TestMathOperations:
    def test_addition(self):
        assert 1 + 1 == 2

    def test_multiplication(self):
        assert 2 * 2 == 4
Before Example:
Your tests are scattered and unorganized, making it hard to manage related functionality.
Separate test functions, no clear grouping.
After Example:
Tests are grouped into test classes, making them more structured and easier to manage.
$ pytest
# Output: Tests grouped under TestMathOperations passed.
Challenge: Try grouping tests for multiple mathematical operations (addition, multiplication, subtraction) into one class and run them together.
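A small sketch of the challenge, grouping three operations in one class:
class TestArithmetic:
    def test_addition(self):
        assert 2 + 3 == 5

    def test_subtraction(self):
        assert 5 - 3 == 2

    def test_multiplication(self):
        assert 2 * 3 == 6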
12. Test Coverage with pytest-cov
Boilerplate Code:
pytest --cov=my_module
Use Case: Measure how much of your code is executed during tests.
Goal: Use pytest-cov to track your test coverage and identify untested code paths.
Sample Command:
pytest --cov=my_module
Before Example:
You run tests but have no insight into how much of your code is actually tested.
$ pytest
# Output: Tests run, but no coverage info.
After Example:
With pytest-cov, you get a detailed report showing how much of your code is covered by tests.
$ pytest --cov=my_module
# Output: 85% test coverage for my_module.py
Challenge: Try reaching 100% coverage on a small Python script and check which lines were not covered (the --cov-report=term-missing option lists the missing line numbers).
13. Running Specific Tests
Boilerplate Code:
pytest -k "test_addition"
Use Case: Run a specific test or group of tests by name.
Goal: Use the -k option to run specific tests that match a certain string pattern.
Sample Command:
pytest -k "test_addition"
Before Example:
You run the entire test suite when only one or a few tests need to be verified.
$ pytest
# Running all tests when you only need one or two.
After Example:
You run only the tests that are relevant to your current work.
$ pytest -k "test_addition"
# Output: Only test_addition is executed.
Challenge: Use the -k option to run all tests related to a certain module or feature, like "login" or "database."
14. Marking Tests as Expected Failures
Boilerplate Code:
import pytest

@pytest.mark.xfail
def test_failure_case():
    assert 1 == 2
Use Case: Mark tests that are expected to fail under certain conditions.
Goal: Use pytest.mark.xfail to flag known failing tests without failing the entire test suite.
Sample Code:
@pytest.mark.xfail
def test_failure_case():
    assert 1 == 2
Before Example:
Failing tests stop your entire workflow and cause frustration.
$ pytest
# Output: Test fails and interrupts progress.
After Example:
With xfail, you allow known failures without breaking the test suite.
$ pytest
# Output: Test marked as expected failure (xfail), suite continues.
Challenge: Use xfail on tests that depend on external systems (e.g., API downtime) to prevent unnecessary failures.
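A sketch of the challenge (the external call is simulated here, and the reason string is just an example):
import pytest

@pytest.mark.xfail(reason="external API is known to be flaky", raises=ConnectionError)
def test_external_api():
    # Simulated external call that currently fails.
    raise ConnectionError("API unreachable")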
15. Running Tests Based on Markers
Boilerplate Code:
import pytest

@pytest.mark.slow
def test_heavy_computation():
    assert 2 ** 100 == 1267650600228229401496703205376
Use Case: Categorize tests with markers, like slow, database, or network.
Goal: Use pytest.mark to classify tests and run only specific categories.
Sample Code:
@pytest.mark.slow
def test_heavy_computation():
    assert 2 ** 100 == 1267650600228229401496703205376
Before Example:
You run all tests regardless of whether they are fast or slow.
$ pytest
# All tests run, including slow ones.
After Example:
You categorize slow tests and run them separately when needed.
$ pytest -m slow
# Output: Only tests marked as "slow" are executed.
Challenge: Create multiple test markers like network, database, and api to control which parts of the test suite run based on system dependencies.
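A sketch of the challenge with two custom markers (custom markers should also be registered under the markers setting in pytest.ini to avoid warnings):
import pytest

@pytest.mark.network
def test_download():
    # Would normally hit the network; trivial placeholder here.
    assert True

@pytest.mark.database
def test_query():
    # Would normally need a database; trivial placeholder here.
    assert True

# Run a single category with, e.g.: pytest -m network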
16. Fixture Scope (Session/Module/Class)
Boilerplate Code:
import pytest

@pytest.fixture(scope="module")
def setup_data():
    return {"data": "important_data"}
Use Case: Define fixture scope to control how often fixtures are created and destroyed.
Goal: Use pytest fixture scopes to optimize test performance by sharing setup across tests.
Sample Code:
@pytest.fixture(scope="module")
def setup_data():
    return {"data": "important_data"}

def test_1(setup_data):
    assert setup_data["data"] == "important_data"

def test_2(setup_data):
    assert setup_data["data"] == "important_data"
Before Example:
Fixtures are created and torn down for every test, causing inefficiency.
Setup and teardown happening for every test, even when not necessary.
After Example:
With fixture scope, the fixture is created only once per module, improving performance.
$ pytest
# Output: Fixture "setup_data" created only once per module.
Challenge: Experiment with the scope="session" option to share fixtures across the entire test session.
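A minimal sketch of a session-scoped fixture (the expensive setup is simulated with a plain dict):
import pytest

@pytest.fixture(scope="session")
def shared_config():
    # Created once for the whole test session, then reused by every test.
    return {"env": "test"}

def test_uses_config(shared_config):
    assert shared_config["env"] == "test"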
17. Testing for Warnings
Boilerplate Code:
import warnings
import pytest

def test_warning_case():
    with pytest.warns(UserWarning):
        warnings.warn("This is a warning", UserWarning)
Use Case: Capture and assert that certain warnings are raised.
Goal: Use pytest.warns to ensure your code triggers the expected warnings.
Sample Code:
def test_warning_case():
    with pytest.warns(UserWarning):
        warnings.warn("This is a warning", UserWarning)
Before Example:
You see warnings in the console but don't confirm if they are raised correctly.
Warnings in the output, but not programmatically captured.
After Example:
With pytest.warns, you explicitly check for expected warnings.
$ pytest
# Output: Warning captured and test passed.
Challenge: Write tests for deprecated functions to ensure they raise the correct deprecation warnings.
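A sketch of the challenge (old_function is hypothetical, standing in for a deprecated API):
import warnings
import pytest

def old_function():
    # Hypothetical deprecated function.
    warnings.warn("old_function is deprecated", DeprecationWarning)
    return 42

def test_old_function_warns():
    with pytest.warns(DeprecationWarning):
        assert old_function() == 42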
18. Using the pytest Debugger (pdb)
Boilerplate Code:
pytest --pdb
Use Case: Drop into a debugger when a test fails.
Goal: Use pdb in pytest to debug test failures interactively.
Sample Command:
pytest --pdb
Before Example:
You print variables and re-run tests manually to debug issues, wasting time.
print("debugging information")
After Example:
With pdb, you drop into an interactive debugger as soon as a test fails.
$ pytest --pdb
# Output: Interactive debugger starts when a test fails.
Challenge: Use the pytest debugger to step through a failing test, inspect variables, and fix the issue live.
19. Monkey Patching
Boilerplate Code:
import os

def test_monkeypatch(monkeypatch):
    monkeypatch.setattr('os.getcwd', lambda: "/mock/directory")
    assert os.getcwd() == "/mock/directory"
Use Case: Temporarily replace attributes, functions, or environments during tests.
Goal: Use monkeypatch to mock external dependencies or system calls (like os.getcwd()) in a controlled test environment.
Sample Code:
def test_monkeypatch(monkeypatch):
    monkeypatch.setattr('os.getcwd', lambda: "/mock/directory")
    assert os.getcwd() == "/mock/directory"
Before Example:
You rely on real system calls or functions, which makes your tests dependent on external resources, potentially leading to inconsistent results.
Calling real functions like os.getcwd() returns the actual current working directory, making it hard to test behavior with a mock directory.
After Example:
With monkeypatch, you mock specific functions and ensure tests run in a controlled, predictable environment.
$ pytest
# Output: os.getcwd() was successfully mocked to return "/mock/directory".
Challenge: Use monkeypatch to mock an API call or environment variable, and write a test to check how your code handles different mocked responses.
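A sketch of the environment-variable half of the challenge (API_URL and get_api_url are hypothetical names, invented for the example):
import os

def get_api_url():
    # Hypothetical helper that reads configuration from the environment.
    return os.environ.get("API_URL", "https://default.example.com")

def test_api_url_from_env(monkeypatch):
    monkeypatch.setenv("API_URL", "https://mock.example.com")
    assert get_api_url() == "https://mock.example.com"

def test_api_url_default(monkeypatch):
    monkeypatch.delenv("API_URL", raising=False)
    assert get_api_url() == "https://default.example.com"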
20. Testing Database Transactions with Rollbacks
Boilerplate Code:
import pytest

# "db" is a placeholder for your project's database interface.
@pytest.fixture
def db_transaction():
    db.begin_transaction()
    yield
    db.rollback_transaction()

def test_database(db_transaction):
    db.execute("INSERT INTO users VALUES (1, 'John')")
    result = db.fetch("SELECT * FROM users WHERE id=1")
    assert result == (1, 'John')
Use Case: Test database operations without leaving permanent changes.
Goal: Use a fixture that initiates and rolls back database transactions, ensuring the database stays clean after each test.
Sample Code:
@pytest.fixture
def db_transaction():
    db.begin_transaction()
    yield
    db.rollback_transaction()

def test_database(db_transaction):
    db.execute("INSERT INTO users VALUES (1, 'John')")
    result = db.fetch("SELECT * FROM users WHERE id=1")
    assert result == (1, 'John')
Before Example:
Running tests alters your database, and you have to manually clean it up after tests.
Database entries are inserted permanently, requiring manual cleanup.
After Example:
With transaction rollbacks, database changes made during tests are automatically undone, keeping your database clean.
$ pytest
# Output: Transaction rolled back after test, no permanent changes in the database.
Challenge: Implement rollbacks on a more complex database setup involving multiple tables with foreign key constraints, and test data consistency.
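A runnable sketch of the same idea using the standard-library sqlite3 module (the users/orders schema is invented for the example):
import sqlite3
import pytest

@pytest.fixture
def db():
    conn = sqlite3.connect(":memory:")
    conn.execute("PRAGMA foreign_keys = ON")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute(
        "CREATE TABLE orders (id INTEGER PRIMARY KEY, "
        "user_id INTEGER REFERENCES users(id))"
    )
    conn.commit()
    yield conn
    conn.rollback()  # undo anything the test inserted but never committed
    conn.close()

def test_insert_with_foreign_key(db):
    db.execute("INSERT INTO users VALUES (1, 'John')")
    db.execute("INSERT INTO orders VALUES (10, 1)")
    row = db.execute("SELECT user_id FROM orders WHERE id = 10").fetchone()
    assert row == (1,)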