Static can be good

When dynamism starts to hurt
One of Python's greatest strengths is its dynamic typing. It's what enables fast iteration on ideas and significantly lowers the barrier to entry for anyone who's not already familiar with the concept of types.
As projects grow, this advantage can become a liability. To address this, Python added static type annotations in version 3.5 (PEP 484) and the `Protocol` construct in 3.8 (PEP 544). Protocols let Python developers build dependency inversion and implementation substitution into their projects. Both are essential for long-term maintainability.
Let's take a closer look at what a Protocol is and how it works.
What are protocols?
Python's `Protocol` construct shares some characteristics with the `Interface` construct found in several statically typed OO languages. Like all of Python's typing features, protocols are not enforced at runtime.
A class that implements a protocol makes a promise to implement the attributes defined by that protocol. Consider the following toy example.
```python
from typing import Protocol

class Writer(Protocol):
    def write(self, msg: str) -> None: ...

class ValidWriter:
    def write(self, msg: str) -> None:
        with open("my_file", "w") as mf:
            mf.write(msg)

mw: Writer = ValidWriter()
```
Although `ValidWriter` does not explicitly inherit from the `Writer` protocol, the line `mw: Writer = ValidWriter()` is considered correct by static type checkers. This is because `ValidWriter` implements a method `write(self, msg: str) -> None`, which matches all (one) of the methods of the `Writer` protocol.
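For contrast, a class whose method signature does not match the protocol fails the check. A minimal sketch continuing the example above; `BrokenWriter` is invented for illustration:

```python
class BrokenWriter:
    def write(self, msg: bytes) -> None:  # parameter type does not match Writer.write
        ...

bw: Writer = BrokenWriter()  # a static checker such as mypy flags this assignment
```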
Explicitly implementing a protocol may still be preferred. Starting the class definition with `class ValidWriter(Writer):` supports IDE hints and gives your AI assistant more context. Inheriting from a protocol signals that a class implements that protocol.
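For illustration, the same class declared with explicit inheritance looks like this; nothing else changes:

```python
class ValidWriter(Writer):  # explicit: the intent is visible at the definition
    def write(self, msg: str) -> None:
        with open("my_file", "w") as mf:
            mf.write(msg)
```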
Building a real-world example
Let's imagine we have a small program that goes through the following steps:
- Calls a weather forecast API
- Parses and formats the response
- Uses the response to fill an HTML template
- Sends an HTML email with today's weather forecast
These steps are orchestrated by a `JobRunner` dataclass in `main.py` that looks like this:
```python
from dataclasses import dataclass

from api_caller import MetIEAPICaller
# etc

@dataclass
class JobRunner:
    api_caller: MetIEAPICaller
    response_parser: ResponseParser
    templater: Templater
    email_sender: SMTPSender

    def run(self) -> None:
        raw = self.api_caller.fetch_forecast()
        parsed = self.response_parser.parse(raw)
        html = self.templater.render(parsed)
        self.email_sender.send(html)
```
These types are being imported from various modules that `main.py` depends on.
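For context, the wiring in `main.py` might look something like the sketch below; the module names for the parser, templater, and sender are assumptions made for illustration:

```python
# main.py -- hypothetical wiring; module names other than api_caller are assumptions
from api_caller import MetIEAPICaller
from response_parser import ResponseParser
from templater import Templater
from email_sender import SMTPSender

job = JobRunner(
    api_caller=MetIEAPICaller(),
    response_parser=ResponseParser(),
    templater=Templater(),
    email_sender=SMTPSender(),
)
job.run()
```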
Of these dependencies, the API caller is the one that benefits most from being typed as a protocol rather than a concrete class:
- When testing, depending on a protocol gives me an easy way to substitute the implementation for a mock without having to actually call the API.
- When I want to use a different API -- for example for a location that the current weather service does not cover -- I can plug in a different implementation and be safe in the knowledge that none of the rest of the program needs to change (a sketch of such a swap follows the module layout below).
To do proper dependency inversion, both the `main.py` script and the `api_caller` module should depend on another module that provides the protocol:
```python
# interfaces.py
from typing import Protocol

class APICaller(Protocol):
    def fetch_forecast(self) -> dict: ...

# ----
# main.py
from dataclasses import dataclass

from interfaces import APICaller
# etc

@dataclass
class JobRunner:
    api_caller: APICaller
    # rest of the class as before

# ----
# api_caller.py
from interfaces import APICaller

class MetIEAPICaller(APICaller):
    # defines at least fetch_forecast(self) -> dict
    ...
```
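This layout is what makes the swap mentioned in the list above cheap: a second caller only has to satisfy the protocol. A sketch, with `OpenMeteoAPICaller` and its internals invented for illustration:

```python
# api_caller_open_meteo.py -- hypothetical alternative implementation
from interfaces import APICaller

class OpenMeteoAPICaller(APICaller):
    def __init__(self, latitude: float, longitude: float) -> None:
        self.latitude = latitude
        self.longitude = longitude

    def fetch_forecast(self) -> dict:
        # call a different weather API for the given coordinates;
        # the returned dict shape is all that JobRunner cares about
        return {"temperature": 17, "rainfall": "none"}  # placeholder data

# In main.py, only the wiring changes:
# JobRunner(api_caller=OpenMeteoAPICaller(52.0, -7.3), ...)
```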
Python allows a simplification of this pattern. Instead of defining `APICaller` in an `interfaces` module, I can define it right inside `main.py`. This means that the `api_caller` module cannot import it: `main.py` depends on `api_caller`, not the other way around. As long as `api_caller.MetIEAPICaller` defines a method that matches all (one) of the methods of the protocol, it is considered to be implementing that protocol. From the type checker's point of view: it quacks, so it is treated like a duck.
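A sketch of that simplified layout, with the other `JobRunner` dependencies omitted for brevity:

```python
# main.py -- the protocol lives next to its consumer; no interfaces module
from dataclasses import dataclass
from typing import Protocol

from api_caller import MetIEAPICaller  # the concrete class, chosen at wiring time

class APICaller(Protocol):
    def fetch_forecast(self) -> dict: ...

@dataclass
class JobRunner:
    api_caller: APICaller  # depends on behaviour, not on MetIEAPICaller
    # other dependencies omitted for brevity

# api_caller.py never imports APICaller, yet MetIEAPICaller matches it
# structurally, so the assignment below type-checks.
runner = JobRunner(api_caller=MetIEAPICaller())
```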
Why protocols matter in practice
The ability to easily replace one implementation with another comes with three key benefits.
- Instead of relying on mocking frameworks or setting up a dedicated environment, you can substitute production components with simple, purpose-built test implementations (see the sketch after this list). This reduces test flakiness and speeds up feedback cycles.
- When behaviour is described via a protocol, changes to the implementation are isolated. You can refactor or optimize the underlying logic without touching the code that uses it. If the change you make violates the contract, your static type checker will let you know. This separation of concerns means fewer side effects and faster iteration.
- The protocol can be named after what it does, without having to worry about the how. This keeps the high-level code looking clean. The concrete class that implements the protocol, by contrast, contains a reference to the how: the `MetIEAPICaller` implementation uses met.ie, not some other weather service.
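The first benefit is easy to demonstrate. A minimal pytest-style sketch of a purpose-built test double; `FakeAPICaller`, `summarise`, and the canned data are invented for illustration:

```python
# test_forecast.py -- a sketch of substituting the API caller in tests
from interfaces import APICaller

class FakeAPICaller:
    """Test double: satisfies APICaller structurally, no network involved."""
    def fetch_forecast(self) -> dict:
        return {"temperature": 12, "rainfall": "light"}  # canned response

def summarise(api_caller: APICaller) -> str:
    # a hypothetical consumer typed against the protocol
    forecast = api_caller.fetch_forecast()
    return f"{forecast['temperature']} degrees, {forecast['rainfall']} rain"

def test_summarise_formats_forecast():
    assert summarise(FakeAPICaller()) == "12 degrees, light rain"
```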
Together, these benefits translate into faster development, safer changes, and fewer bugs.
Once your project hits a certain size, Protocols aren’t just useful—they're essential. Even a single well-placed Protocol can unlock cleaner structure, faster testing, and greater adaptability.
Further reading
- Fluent Python, 2nd ed. by Luciano Ramalho, pp. 468–485
- I want a new duck by Glyph
- Protocols and structural subtyping by Mypy
- Dynamic typing and Role Interface by Martin Fowler