How to make TDD click in practice

Pepijn Krijnsen

In my experience, almost nobody is good at explaining how to do test-driven development in practice. Developers who don't naturally "get it" tend to abandon it quickly. Developers who grasp it intuitively sometimes struggle to clearly explain their thinking and methodology.

Small code examples don't really help here either. Using TDD to solve FizzBuzz works, but it's not easier than just writing down the solution.

I'm someone who (A) was attracted to the idea of TDD as soon as I heard about it, and (B) spent years learning to apply it correctly. I hope to save you a little time by showing you my mistakes and what I should have been doing instead.

The TDD cycle: red, green, refactor

As the name suggests, test-driven development aims to drive the implementation of a system through tests. Production code is only written in response to a failing test (though not every part of your system necessarily needs to be developed through TDD). Tests are only written in response to a functional requirement.

With a requirement in hand we can identify a new behaviour that we need our system to have. We start by writing a test that confirms that (part of) this behaviour is in place. We run the test to confirm that it fails.
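To make this concrete, here is a minimal sketch in Python with pytest. Everything in it is invented for illustration: the requirement ("orders over 100 get a 10% discount"), the discounts module, and the calculate_discount function. The red step might look like this:

    # test_discounts.py
    # Hypothetical requirement: orders over 100 get a 10% discount.
    from discounts import calculate_discount

    def test_orders_over_100_get_a_10_percent_discount():
        assert calculate_discount(order_total=200.0) == 20.0

Running pytest at this point fails (the discounts module doesn't even exist yet), which is exactly the red we are after.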

Now that we have a failing test, we write the production code to make it pass. The code doesn't have to be good. It just needs to work. Run the test again to confirm that it now passes.
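Continuing the hypothetical sketch, the simplest implementation that could possibly work is enough for the green step:

    # discounts.py
    # Deliberately naive: just enough to make the current test pass.
    def calculate_discount(order_total: float) -> float:
        if order_total > 100.0:
            return order_total * 0.10
        return 0.0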

Finally, look at your test and look at your bad code. Make them better. After every small change that you make, run the tests again to verify that your change did not alter the behaviour of your system.
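In the sketch, refactoring might be as small as naming the magic numbers, re-running the test after the change to confirm the behaviour is unchanged:

    # discounts.py, after refactoring; the test still passes.
    DISCOUNT_THRESHOLD = 100.0
    DISCOUNT_RATE = 0.10

    def calculate_discount(order_total: float) -> float:
        if order_total > DISCOUNT_THRESHOLD:
            return order_total * DISCOUNT_RATE
        return 0.0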

So far, so good; but how does following this process help you in practice?

Three insights that transformed my TDD practice

Big names in TDD such as Kent Beck and Ian Cooper excel at summarising a core element of the practice in a single sentence. Personally, I found it difficult to incorporate such distilled wisdom into my own practice. Here is how I translated three common guidelines into practical advice.

Test observable behaviour

Distilled wisdom #1: Test the observable behaviour of your system, not its internal implementation (Resource 1).

Working test-first, I would write a bit of behaviour in response to a test and then, during refactoring, reason: "This bit of logic is not really central to the module's responsibilities. I'll move it into an internal method." Before long, my tests were calling internal methods everywhere.

I've learned that we shouldn't try to decide upfront which code is part of a module's interface and which isn't. Whether the code warrants testing is the only factor that matters.

Practical advice #1: If there is code we want to test, then that code must be part of a module's behaviour.
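A hypothetical sketch of the difference, using an invented ReportFormatter with an internal helper:

    # Hypothetical module with an internal helper.
    class ReportFormatter:
        def format(self, text: str) -> str:
            return self._normalise(text).upper()

        def _normalise(self, text: str) -> str:
            return " ".join(text.split())

    # Anti-pattern: the test reaches into the internal method.
    def test_normalise():
        assert ReportFormatter()._normalise("a  b") == "a b"

    # Better: if the normalisation warrants testing, promote it to
    # behaviour, e.g. a function with an interface of its own.
    def normalise_whitespace(text: str) -> str:
        return " ".join(text.split())

    def test_normalise_whitespace_collapses_runs_of_spaces():
        assert normalise_whitespace("a  b") == "a b"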

Once the module is in place and all tests are passing, you can decide to write a contract (an interface or protocol) that your new module implements. The tests can then be refactored to call the contract instead of the module directly.
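In Python such a contract could be a typing.Protocol; in other languages, an interface. A hypothetical sketch, reusing the invented discount example (recast as a class so the protocol has something to describe):

    from typing import Protocol

    # The contract, written after the module and its tests exist.
    class DiscountPolicy(Protocol):
        def calculate_discount(self, order_total: float) -> float: ...

    # The module we test-drove satisfies the contract structurally.
    class StandardDiscountPolicy:
        def calculate_discount(self, order_total: float) -> float:
            return order_total * 0.10 if order_total > 100.0 else 0.0

    # The test now depends on the contract, not the concrete module.
    def test_large_orders_are_discounted():
        policy: DiscountPolicy = StandardDiscountPolicy()
        assert policy.calculate_discount(order_total=200.0) == 20.0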

Types of complexity

Distilled wisdom #2: Don't mock too much (Resource 2).

Distilled wisdom #3: Don't copy computed values into tests in order to test them (Resource 3).

Generally speaking, modules have two axes of complexity: domain complexity (business rules) and technical complexity (algorithms).

Modules with high domain complexity tend to have many collaborators and little complex logic. Tests that exercise such modules tend to have long, expressive names that summarise the behaviour under test in natural(ish) language.

When I find myself using mocks extensively, it's a signal that the module under test is too complex along the domain axis.

Practical advice #2: Avoid extensive mocking in tests by reducing the amount of domain complexity in the module.
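A hypothetical sketch of what that reduction can look like. Suppose a reminder service talks to a repository, a mailer, and a clock, so every test needs three mocks. Extracting the decision itself into a pure function removes the need for mocks entirely:

    from dataclasses import dataclass
    from datetime import date

    # Hypothetical domain: deciding whether an invoice reminder is due.
    @dataclass
    class Invoice:
        due_date: date
        paid: bool

    # Pure decision logic, extracted from the orchestrating service.
    # It has no collaborators, so its tests need no mocks.
    def reminder_is_due(invoice: Invoice, today: date) -> bool:
        return not invoice.paid and today > invoice.due_date

    def test_unpaid_invoice_past_its_due_date_triggers_a_reminder():
        invoice = Invoice(due_date=date(2024, 1, 1), paid=False)
        assert reminder_is_due(invoice, today=date(2024, 2, 1))

The thin service that remains still coordinates the repository and the mailer, but it contains so little logic that it needs far fewer tests of its own.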

Modules with high technical complexity tend to have few collaborators and complex internal logic. Tests that exercise such modules naturally produce functional, side-effect-free interfaces.

When I find myself struggling to express behaviour in a test without copying computations or values from production into tests, it's a signal that the module is too complex along the technical axis.

Practical advice #3: Avoid complex logic inside tests by reducing the amount of technical complexity in the module.
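A hypothetical sketch of the smell and one way out: if the expected value can only be obtained by running the production computation and pasting its output, test simpler properties whose expected values you can state by hand.

    # Hypothetical module with a computation worth testing.
    def compound(principal: float, rate: float, years: int) -> float:
        return principal * (1 + rate) ** years

    # Smell: an expected value pasted from a previous run of compound()
    # itself, e.g. assert compound(1000.0, 0.05, 7) == 1407.10042265625
    # -- such a test can never disagree with the formula it checks.

    # Better: properties you can verify by hand.
    def test_one_year_of_interest_is_just_the_rate():
        assert compound(1000.0, rate=0.05, years=1) == 1050.0

    def test_zero_years_leaves_the_principal_untouched():
        assert compound(1000.0, rate=0.05, years=0) == 1000.0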

Why this changes everything

Test-driven development helps me to be a better developer. When I'm implementing, it reduces cognitive load and anxiety. When I'm refactoring, I can make changes confidently knowing my tests will catch any regressions. Seeing a test go from red to green makes me feel ever so slightly excited every time. I noticed these positive effects even when I was not practising TDD very well.

A good test-driven practice has positive effects beyond the developer's well-being. Writing tests in response to requirements, contracts (interfaces, protocols) in response to tests, and implementation in response to contracts leads to clean, maintainable design.

Resources

  1. Ian Cooper: TDD Revisited, from 15:53. TDD is a contract-first approach to testing; "behaviour" in this context means that contract.
  2. Kent Beck: Is TDD Dead? Part I, from 21:10: "My personal preference is I mock almost nothing. If I can't find a way to test efficiently with the real stuff, I find another way of creating a feedback loop for myself. (...) I just don't go very far down the mock path."
  3. Kent Beck: Canon TDD: "Mistake: copying actual, computed values & pasting them into the expected values of the test."