20 Pytest Concepts with Before-and-After Examples

Anix Lynch
13 min read

1. Setting up a Basic Test Function πŸ› οΈ

Boilerplate Code:

def test_example():
    assert 1 == 1

Use Case: Create a simple test function to verify basic functionality in your code.

Goal: Write a basic pytest function to check if a condition holds true. 🎯

Sample Code:

def test_example():
    assert 1 == 1

Before Example:
You manually check conditions in your script by printing values, leading to messy and inefficient debugging. πŸ˜•

if 1 == 1:
    print("Test passed")

After Example:
With pytest, you write concise test functions and automatically check assertions. πŸ› οΈ

$ pytest
# Output: All tests passed.

Challenge: 🌟 Write a more complex test function that makes more than one assertion.
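
As a starting point for this challenge, here is a minimal sketch of a test with several assertions (the user dictionary is invented for illustration):

def test_user_record():
    user = {"name": "Ada", "age": 36}
    assert user["name"] == "Ada"
    assert user["age"] >= 18
    assert "email" not in user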


2. Running Tests with pytest πŸƒβ€β™‚οΈ

Boilerplate Code:

pytest

Use Case: Execute tests across your project in one go.

Goal: Use the pytest command to run all test cases in your project. 🎯

Sample Command:

pytest

Before Example:
You manually execute each function or file to check if it works, leading to inefficiency. πŸ€”

python test_example.py
# Manually running each script.

After Example:
With pytest, all tests are executed in a single command, and the results are consolidated. πŸƒβ€β™‚οΈ

$ pytest
# Output: 1 passed in 0.01s

Challenge: 🌟 Set up pytest to run tests on multiple files in one go.
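
For this challenge: pointing pytest at a directory makes it collect every file matching test_*.py or *_test.py. A sketch with an assumed tests/ layout:

tests/
    test_math.py
    test_strings.py

$ pytest tests/
# Output: both files are discovered and all their tests run in one go.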


3. Testing with Parameters πŸ§ͺ

Boilerplate Code:

import pytest

@pytest.mark.parametrize("input,expected", [(1, 1), (2, 4), (3, 9)])
def test_square(input, expected):
    assert input ** 2 == expected

Use Case: Test multiple sets of inputs in one function.

Goal: Use pytest’s parametrize feature to check a function with multiple input-output pairs. 🎯

Sample Code:

@pytest.mark.parametrize("input,expected", [(1, 1), (2, 4), (3, 9)])
def test_square(input, expected):
    assert input ** 2 == expected

Before Example:
You write separate test functions for each input, duplicating your code. 😩

def test_square_1():
    assert 1 ** 2 == 1
def test_square_2():
    assert 2 ** 2 == 4

After Example:
With parametrize, you test multiple inputs in one concise test function. πŸ§ͺ

$ pytest
# Output: All parameterized tests passed.

Challenge: 🌟 Use pytest.mark.parametrize to test various edge cases in your functions.
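
One possible take on this challenge, a minimal sketch covering zero, negative, and large inputs:

import pytest

@pytest.mark.parametrize("value,expected", [
    (0, 0),            # zero
    (-3, 9),           # negative input
    (10**6, 10**12),   # large value
])
def test_square_edge_cases(value, expected):
    assert value ** 2 == expected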


4. Testing Exceptions 🚨

Boilerplate Code:

import pytest

def test_raises_exception():
    with pytest.raises(ZeroDivisionError):
        1 / 0

Use Case: Verify that your code raises the correct exceptions in edge cases.

Goal: Use pytest.raises to ensure specific exceptions are triggered in your functions. 🎯

Sample Code:

def test_raises_exception():
    with pytest.raises(ZeroDivisionError):
        1 / 0

Before Example:
You manually check for exceptions using try-except blocks in your code. πŸ˜•

try:
    1 / 0
except ZeroDivisionError:
    print("Caught ZeroDivisionError")

After Example:
With pytest, you assert that the correct exceptions are raised cleanly. 🚨

$ pytest
# Output: Test passed, ZeroDivisionError was raised.

Challenge: 🌟 Write tests for a function that may raise multiple types of exceptions.
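
A sketch for this challenge, using a made-up divide() helper that can raise two different exceptions:

import pytest

def divide(a, b):
    if not isinstance(a, (int, float)) or not isinstance(b, (int, float)):
        raise TypeError("numbers only")
    return a / b  # raises ZeroDivisionError when b == 0

@pytest.mark.parametrize("a,b,exc", [
    (1, 0, ZeroDivisionError),
    ("x", 1, TypeError),
])
def test_divide_errors(a, b, exc):
    with pytest.raises(exc):
        divide(a, b)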


5. Skipping Tests ⏭️

Boilerplate Code:

import pytest

@pytest.mark.skip(reason="Not ready for this test yet")
def test_not_ready():
    assert 1 == 1

Use Case: Temporarily skip certain tests that are not ready or necessary.

Goal: Use pytest.mark.skip to skip specific tests without deleting or commenting them out. 🎯

Sample Code:

@pytest.mark.skip(reason="Not ready for this test yet")
def test_not_ready():
    assert 1 == 1

Before Example:
You comment out tests you don’t want to run, making your code messy. πŸ˜“

# def test_not_ready():
#     assert 1 == 1

After Example:
With pytest, you can skip tests easily and keep your code clean. ⏭️

$ pytest
# Output: Test skipped with reason: Not ready for this test yet

Challenge: 🌟 Try using pytest.mark.skipif to conditionally skip tests based on certain conditions.
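
A minimal skipif sketch for this challenge, assuming you want to gate a test on the Python version:

import sys
import pytest

@pytest.mark.skipif(sys.version_info < (3, 10), reason="requires Python 3.10+")
def test_new_syntax_feature():
    assert True  # placeholder body; only runs on Python 3.10 or newer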


6. Fixtures for Test Setup πŸ”§

Boilerplate Code:

import pytest

@pytest.fixture
def setup_data():
    return {"user": "admin", "password": "password123"}

def test_fixture_example(setup_data):
    assert setup_data["user"] == "admin"

Use Case: Provide reusable setup for tests that need to prepare some data or state.

Goal: Use pytest fixtures to share setup code across multiple test functions. 🎯

Sample Code:

@pytest.fixture
def setup_data():
    return {"user": "admin", "password": "password123"}

def test_fixture_example(setup_data):
    assert setup_data["user"] == "admin"

Before Example:
You copy-paste setup code in every test, making the code redundant. πŸ˜•

def test_fixture_example():
    data = {"user": "admin", "password": "password123"}
    assert data["user"] == "admin"

After Example:
With fixtures, setup code is shared across test functions, reducing redundancy. πŸ”§

$ pytest
# Output: All tests passed using fixture data.

Challenge: 🌟 Create a fixture that sets up a database connection and reuses it in multiple test cases.
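
One way to approach this challenge, sketched with the standard library's sqlite3 so it stays self-contained (a real project would connect to its own database):

import sqlite3
import pytest

@pytest.fixture
def db_connection():
    conn = sqlite3.connect(":memory:")  # throwaway in-memory database
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    yield conn
    conn.close()  # teardown runs after each test that used the fixture

def test_insert(db_connection):
    db_connection.execute("INSERT INTO users VALUES (1, 'admin')")
    row = db_connection.execute("SELECT name FROM users WHERE id = 1").fetchone()
    assert row == ("admin",)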


7. Temporary Files and Directories πŸ—ƒοΈ

Boilerplate Code:

def test_temp_file(tmpdir):
    temp_file = tmpdir.join("temp.txt")
    temp_file.write("Temporary data")
    assert temp_file.read() == "Temporary data"

Use Case: Create and use temporary files or directories during tests.

Goal: Use pytest's tmpdir fixture to handle temporary files and directories. 🎯

Sample Code:

def test_temp_file(tmpdir):
    temp_file = tmpdir.join("temp.txt")
    temp_file.write("Temporary data")
    assert temp_file.read() == "Temporary data"

Before Example:
You manually create temporary files, cluttering your project. 😬

# Creating temporary files by hand, then remembering to delete them yourself.

After Example:
With tmpdir, pytest automatically creates and cleans up temporary files. πŸ—ƒοΈ

$ pytest
# Output: Test passed, temp file handled and cleaned.

Challenge: 🌟 Write a test that creates a temporary log file, writes to it, and verifies the content.
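
A sketch for this challenge using tmp_path, the modern pathlib-based sibling of tmpdir (the log content is invented):

def test_log_file(tmp_path):
    log_file = tmp_path / "app.log"            # tmp_path is a pathlib.Path
    log_file.write_text("ERROR: disk full\n")
    assert "disk full" in log_file.read_text()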


8. Running Tests in Parallel ⚑

Boilerplate Code:

pytest -n 4

Use Case: Run tests in parallel to save time on larger test suites.

Goal: Use the pytest-xdist plugin (installed with pip install pytest-xdist) to execute tests across multiple CPUs concurrently. 🎯

Sample Command:

pytest -n 4

Before Example:
Tests run sequentially, causing long waiting times. ⏳

pytest
# Tests are executed one after another.

After Example:
With parallel execution, tests run faster and more efficiently. ⚑

$ pytest -n 4
# Output: All tests executed in parallel, total time reduced.

Challenge: 🌟 Run tests in parallel across multiple files and measure the speed improvement.


9. Capturing Output πŸ–₯️

Boilerplate Code:

def test_output(capsys):
    print("Hello, pytest!")
    captured = capsys.readouterr()
    assert captured.out == "Hello, pytest!\n"

Use Case: Capture printed output during tests.

Goal: Use pytest’s capsys fixture to capture and assert standard output and error streams. 🎯

Sample Code:

def test_output(capsys):
    print("Hello, pytest!")
    captured = capsys.readouterr()
    assert captured.out == "Hello, pytest!\n"

Before Example:
You manually check print outputs in the console during the test run. πŸ˜•

print("Checking printed output.")

After Example:
With capsys, you capture and assert output directly in the test, ensuring it behaves as expected. πŸ–₯️

$ pytest
# Output: Test passed, output matched expectations.

Challenge: 🌟 Use capsys to capture error streams as well, and verify that your error-handling code works as expected.
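
A sketch for this challenge: capsys.readouterr() also exposes an .err attribute for the standard error stream.

import sys

def test_error_output(capsys):
    print("something went wrong", file=sys.stderr)
    captured = capsys.readouterr()
    assert captured.err == "something went wrong\n"
    assert captured.out == ""  # nothing was written to stdout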


10. Mocking Functions with pytest-mock πŸ› οΈ

Boilerplate Code:

import module  # placeholder for the module whose function you want to mock

def test_mocking(mocker):
    mocker.patch('module.function_name', return_value=42)
    assert module.function_name() == 42

Use Case: Replace parts of your code with mocks during testing to isolate specific functionality.

Goal: Use the pytest-mock plugin (installed with pip install pytest-mock), which provides the mocker fixture, to mock out specific functions or methods during your test runs. 🎯

Sample Code:

def test_mocking(mocker):
    mocker.patch('module.function_name', return_value=42)
    assert module.function_name() == 42

Before Example:
You rely on real function calls during tests, which may produce unpredictable results. πŸ˜•

# Calling actual functions may lead to side effects or reliance on external resources.

After Example:
With mocking, you isolate and control function outputs during tests, ensuring consistency. πŸ› οΈ

$ pytest
# Output: Mock function returned expected value.

Challenge: 🌟 Mock a function that reads from a file, and instead, return a mock file object during your tests.
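
One sketch for this challenge: pytest-mock exposes mocker.mock_open, which builds a mock file object (the file name and contents here are invented):

def test_read_config(mocker):
    mocker.patch("builtins.open", mocker.mock_open(read_data="debug=true"))
    with open("config.ini") as f:  # never touches the disk
        assert f.read() == "debug=true"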



11. Test Grouping (Test Classes) πŸ§‘β€πŸ€β€πŸ§‘

Boilerplate Code:

class TestMathOperations:
    def test_addition(self):
        assert 1 + 1 == 2

    def test_multiplication(self):
        assert 2 * 2 == 4

Use Case: Organize related tests into groups using test classes.

Goal: Use classes to group tests for related functionality; note that pytest only collects classes whose names start with Test and that define no __init__ method. 🎯

Sample Code:

class TestMathOperations:
    def test_addition(self):
        assert 1 + 1 == 2

    def test_multiplication(self):
        assert 2 * 2 == 4

Before Example:
Your tests are scattered and unorganized, making it hard to manage related functionality. πŸ˜•

# Separate, unrelated test functions scattered across files, with no grouping.

After Example:
Tests are grouped into test classes, making them more structured and easier to manage. πŸ§‘β€πŸ€β€πŸ§‘

$ pytest
# Output: Tests grouped under TestMathOperations passed.

Challenge: 🌟 Try grouping tests for multiple mathematical operations (addition, multiplication, subtraction) into one class and run them together.


12. Test Coverage with pytest-cov πŸ“Š

Boilerplate Code:

pytest --cov=my_module

Use Case: Measure how much of your code is executed during tests.

Goal: Use pytest-cov to track your test coverage and identify untested code paths. 🎯

Sample Command:

pytest --cov=my_module

Before Example:
You run tests but have no insight into how much of your code is actually tested. πŸ˜•

$ pytest
# Output: Tests run, but no coverage info.

After Example:
With pytest-cov, you get a detailed report showing how much of your code is covered by tests. πŸ“Š

$ pytest --cov=my_module
# Output: 85% test coverage for my_module.py

Challenge: 🌟 Try reaching 100% coverage on a small Python script and check which lines were not covered.
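
A useful starting point for this challenge: pytest-cov's term-missing report lists exactly which line numbers were never executed.

$ pytest --cov=my_module --cov-report=term-missing
# Output: coverage table with a "Missing" column of untested line numbers.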


13. Running Specific Tests 🎯

Boilerplate Code:

pytest -k "test_addition"

Use Case: Run a specific test or group of tests by name.

Goal: Use the -k option to run specific tests that match a certain string pattern. 🎯

Sample Command:

pytest -k "test_addition"

Before Example:
You run the entire test suite when only one or a few tests need to be verified. πŸ€”

$ pytest
# Running all tests when you only need one or two.

After Example:
You run only the tests that are relevant to your current work. 🎯

$ pytest -k "test_addition"
# Output: Only test_addition is executed.

Challenge: 🌟 Use the -k option to run all tests related to a certain module or feature, like "login" or "database."
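
For this challenge, note that -k accepts boolean expressions, not just plain substrings:

$ pytest -k "login and not slow"
# Runs tests with "login" in the name, excluding any that also contain "slow".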


14. Marking Tests as Expected Failures ❌

Boilerplate Code:

import pytest

@pytest.mark.xfail
def test_failure_case():
    assert 1 == 2

Use Case: Mark tests that are expected to fail under certain conditions.

Goal: Use pytest.mark.xfail to flag known failing tests without failing the entire test suite. 🎯

Sample Code:

@pytest.mark.xfail
def test_failure_case():
    assert 1 == 2

Before Example:
A known, not-yet-fixed failure keeps the whole suite red, masking new regressions. 😑

$ pytest
# Output: 1 failed. The suite fails even though the failure is already known.

After Example:
With xfail, you allow known failures without breaking the test suite. ❌

$ pytest
# Output: Test marked as expected failure (xfail), suite continues.

Challenge: 🌟 Use xfail on tests that depend on external systems (e.g., API downtime) to prevent unnecessary failures.
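
A sketch for this challenge: xfail accepts raises=, reason=, and strict= arguments (ConnectionError is an assumed failure mode here):

import pytest

@pytest.mark.xfail(raises=ConnectionError, reason="external API is down")
def test_external_api():
    raise ConnectionError("service unavailable")  # stands in for a real API call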


15. Running Tests Based on Markers 🏷️

Boilerplate Code:

import pytest

@pytest.mark.slow
def test_heavy_computation():
    assert 2 ** 100 == 1267650600228229401496703205376

Use Case: Categorize tests with markers, like slow, database, or network.

Goal: Use pytest.mark to classify tests and run only specific categories. 🎯

Sample Code:

@pytest.mark.slow
def test_heavy_computation():
    assert 2 ** 100 == 1267650600228229401496703205376

Before Example:
You run all tests regardless of whether they are fast or slow. ⏳

$ pytest
# All tests run, including slow ones.

After Example:
You categorize slow tests and run them separately when needed. 🏷️

$ pytest -m slow
# Output: Only tests marked as "slow" are executed.

Challenge: 🌟 Create multiple test markers like network, database, and api to control which parts of the test suite run based on system dependencies.
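
When tackling this challenge, register your custom markers in pytest.ini so pytest doesn't warn about unknown marks. A minimal sketch:

# pytest.ini
[pytest]
markers =
    slow: marks tests as slow to run
    network: tests that need network access
    database: tests that hit a database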


16. Fixture Scope (Session/Module/Class) πŸ§‘β€πŸ”§

Boilerplate Code:

import pytest

@pytest.fixture(scope="module")
def setup_data():
    return {"data": "important_data"}

Use Case: Define fixture scope to control how often fixtures are created and destroyed.

Goal: Use pytest fixture scopes to optimize test performance by sharing setup across tests. 🎯

Sample Code:

@pytest.fixture(scope="module")
def setup_data():
    return {"data": "important_data"}

def test_1(setup_data):
    assert setup_data["data"] == "important_data"

def test_2(setup_data):
    assert setup_data["data"] == "important_data"

Before Example:
Fixtures are created and torn down for every test, causing inefficiency. πŸ˜•

# Setup and teardown run for every single test, even when a shared setup would do.

After Example:
With fixture scope, the fixture is created only once per module, improving performance. πŸ§‘β€πŸ”§

$ pytest
# Output: Fixture "setup_data" created only once per module.

Challenge: 🌟 Experiment with the scope="session" option to share fixtures across the entire test session.
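
A minimal sketch for this challenge: with scope="session", the fixture is built once and shared by every test in the run (the config dict is invented):

import pytest

@pytest.fixture(scope="session")
def shared_config():
    return {"env": "test"}  # created once for the entire test session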


17. Testing for Warnings ⚠️

Boilerplate Code:

import warnings
import pytest

def test_warning_case():
    with pytest.warns(UserWarning):
        warnings.warn("This is a warning", UserWarning)

Use Case: Capture and assert that certain warnings are raised.

Goal: Use pytest.warns to ensure your code triggers the expected warnings. 🎯

Sample Code:

def test_warning_case():
    with pytest.warns(UserWarning):
        warnings.warn("This is a warning", UserWarning)

Before Example:
You see warnings in the console but don't confirm if they are raised correctly. πŸ˜•

# Warnings show up in the console output but are never programmatically verified.

After Example:
With pytest.warns, you explicitly check for expected warnings. ⚠️

$ pytest
# Output: Warning captured and test passed.

Challenge: 🌟 Write tests for deprecated functions to ensure they raise the correct deprecation warnings.
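
A sketch for this challenge, with a made-up old_api() helper; pytest.warns also takes a match= pattern to check the warning message:

import warnings
import pytest

def old_api():
    warnings.warn("old_api is deprecated, use new_api", DeprecationWarning)

def test_old_api_warns():
    with pytest.warns(DeprecationWarning, match="deprecated"):
        old_api()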


18. Using the pytest Debugger (pdb) 🐞

Boilerplate Code:

pytest --pdb

Use Case: Drop into a debugger when a test fails.

Goal: Use pdb in pytest to debug test failures interactively. 🎯

Sample Command:

pytest --pdb

Before Example:
You print variables and re-run tests manually to debug issues, wasting time. 😩

print("debugging information")

After Example:
With pdb, you drop into an interactive debugger as soon as a test fails. 🐞

$ pytest --pdb
# Output: Interactive debugger starts when a test fails.

Challenge: 🌟 Use the pytest debugger to step through a failing test, inspect variables, and fix the issue live.
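
A handy combination for this challenge: -x stops the run at the first failure, and --pdb then drops you straight into the debugger at that point.

$ pytest -x --pdb
# Output: run halts on the first failing test and opens the interactive debugger.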


19. Monkey Patching 🐡

Boilerplate Code:

import os

def test_monkeypatch(monkeypatch):
    monkeypatch.setattr('os.getcwd', lambda: "/mock/directory")
    assert os.getcwd() == "/mock/directory"

Use Case: Temporarily replace attributes, functions, or environments during tests.

Goal: Use monkeypatch to mock external dependencies or system calls (like os.getcwd()) in a controlled test environment. 🎯

Sample Code:

def test_monkeypatch(monkeypatch):
    monkeypatch.setattr('os.getcwd', lambda: "/mock/directory")
    assert os.getcwd() == "/mock/directory"

Before Example:
You rely on real system calls or functions, which makes your tests dependent on external resources, potentially leading to inconsistent results. πŸ˜•

# os.getcwd() returns the real current working directory, so behavior with a mock directory can't be tested.

After Example:
With monkeypatch, you mock specific functions and ensure tests run in a controlled, predictable environment. 🐡

$ pytest
# Output: os.getcwd() was successfully mocked to return "/mock/directory".

Challenge: 🌟 Use monkeypatch to mock an API call or environment variable, and write a test to check how your code handles different mocked responses.
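
A sketch for the environment-variable half of this challenge (get_api_key() is a hypothetical helper; monkeypatch undoes the change after the test):

import os

def get_api_key():
    return os.environ.get("API_KEY", "missing")  # hypothetical code under test

def test_api_key(monkeypatch):
    monkeypatch.setenv("API_KEY", "test-key")
    assert get_api_key() == "test-key"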


20. Testing Database Transactions with Rollbacks πŸ—„οΈ

Boilerplate Code:

import pytest

# "db" stands in for your project's database handle (e.g., an SQLAlchemy session).

@pytest.fixture
def db_transaction():
    db.begin_transaction()
    yield
    db.rollback_transaction()

def test_database(db_transaction):
    db.execute("INSERT INTO users VALUES (1, 'John')")
    result = db.fetch("SELECT * FROM users WHERE id=1")
    assert result == (1, 'John')

Use Case: Test database operations without leaving permanent changes.

Goal: Use a fixture that initiates and rolls back database transactions, ensuring the database stays clean after each test. 🎯

Sample Code:

@pytest.fixture
def db_transaction():
    db.begin_transaction()
    yield
    db.rollback_transaction()

def test_database(db_transaction):
    db.execute("INSERT INTO users VALUES (1, 'John')")
    result = db.fetch("SELECT * FROM users WHERE id=1")
    assert result == (1, 'John')

Before Example:
Running tests alters your database, and you have to manually clean it up after tests. πŸ˜“

# Rows inserted by tests persist in the database and must be cleaned up by hand.

After Example:
With transaction rollbacks, database changes made during tests are automatically undone, keeping your database clean. πŸ—„οΈ

$ pytest
# Output: Transaction rolled back after test, no permanent changes in the database.

Challenge: 🌟 Implement rollbacks on a more complex database setup involving multiple tables with foreign key constraints, and test data consistency.
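
As a concrete, self-contained starting point for this challenge, here is the same rollback pattern sketched with the standard library's sqlite3 (a real setup would use your own database client):

import sqlite3
import pytest

@pytest.fixture
def db():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    conn.commit()
    yield conn
    conn.rollback()  # undo anything the test inserted but never committed
    conn.close()

def test_insert_rolls_back(db):
    db.execute("INSERT INTO users VALUES (1, 'John')")
    row = db.execute("SELECT * FROM users WHERE id = 1").fetchone()
    assert row == (1, "John")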

