Object Calisthenics: Rule 3

Leo Bcheche

Hi Dev, have you heard about Object Calisthenics before?

Object Calisthenics is a set of nine coding rules introduced by Jeff Bay in The ThoughtWorks Anthology. The term combines "Object", referring to object-oriented programming (OOP), and "Calisthenics", which means structured exercises — in this case, applied to code. These rules are meant to help developers internalize good OOP principles by practicing them at the micro level, directly in their day-to-day coding.

You’re not expected to follow every rule 100% of the time. Instead, the idea is to use them as a guide — a structured constraint — to help shift your coding habits and improve the design of your software. As you apply these rules consistently, you’ll begin to see the benefits: better encapsulation, cleaner abstractions, and more testable, readable, and maintainable code.

Seven of the nine rules focus directly on strengthening encapsulation — one of the core principles of OOP. Another encourages replacing conditionals with polymorphism. The final rule helps improve naming clarity by avoiding cryptic abbreviations. Together, they push developers to write code that is free from duplication and easier to reason about.

At first, following these rules may feel uncomfortable or even counterproductive. But that friction is exactly the point — it forces you to break old habits and rethink how your objects interact. Over time, these small design constraints train you to write code that is simpler, more focused, and easier to evolve.

After flattening our control flow and kicking out the else, it’s time to look at the data moving through our code. Rule 3 asks us to wrap primitives and strings in small, meaningful objects. It’s a simple idea with huge pay-offs for clarity, safety and domain expressiveness.


Wrap All Primitives and Strings

Never pass around naked int, float, str, bool (or None) that represent something richer in your domain. Encapsulate each value inside its own tiny class, often called a Value Object.


Why Use This Rule?

Primitives are ambiguous:

  • Is 42 an age, an inventory count, or the answer to life?

  • Is "BR" a country code, a language, or a stock ticker?

When everything is just an int or str, the compiler can’t protect you, and humans must remember what each value means.

Wrapping turns implicit knowledge into explicit, self-documenting types that can validate, format, and constrain themselves.

Example 1 – Money vs float

Let’s start with a classic case of primitive obsession: using raw types like float to represent domain concepts such as money. While it seems convenient at first, it leads to fragile code where any float is accepted, whether it represents a total, discount, tax rate, percentage, or even temperature. The compiler (and often even reviewers) have no way of knowing if we’re mixing concepts.

# ❌ Primitive obsession
def pay(invoice_total: float, discount: float) -> float:
    return invoice_total - discount

At a glance, this seems fine. But it’s dangerously permissive. You could accidentally pass discount=0.2 thinking it’s 20% off, and Python won’t blink. There's no validation, no encapsulation, no domain meaning.

Now let’s wrap this primitive in a value object that represents Money explicitly. This enforces type safety, enables domain rules (like preventing negative amounts), and gives semantic power to our code.

# ✅ Value object — we're defining a value object to avoid primitive obsession 
# (using raw floats or ints for money)

from dataclasses import dataclass  
# Import the dataclass decorator to simplify class definition

# Define an immutable (frozen) dataclass named 'Money' to represent monetary values
@dataclass(frozen=True)
class Money:
    cents: int  # Store the amount in cents as an integer to avoid floating point precision issues

    # This method runs after object initialization to enforce constraints
    def __post_init__(self):
        # Raise an error if a negative amount is passed, since money 
        # shouldn't be negative in this context
        if self.cents < 0:
            raise ValueError("Money cannot be negative")

    # Define how subtraction works between two Money objects
    def __sub__(self, other: "Money") -> "Money":
        # Return a new Money instance with the difference in cents
        return Money(self.cents - other.cents)

    # Define how a Money value is printed
    def __str__(self) -> str:
        # Format the cents into a standard currency string (e.g., $10.00)
        dollars = self.cents // 100
        cents = self.cents % 100
        return f"${dollars}.{cents:02d}"  # Always show two decimal places

# Define a function to calculate the final amount after applying a discount
def pay(invoice_total: Money, discount: Money) -> Money:
    # Use the overloaded subtraction operator to subtract discount from total
    return invoice_total - discount

Now we’ve made several improvements at once:

  • Money is clearly distinguished from generic floats.

  • We prevent illegal states (e.g. negative amounts) at object creation.

  • The subtraction is overloaded safely, and only works between two Money instances.

This small change creates a type boundary that protects your business logic. If someone tries to subtract a tax percentage or a distance from a Money, the operation fails fast at runtime — or even better, your IDE or type checker will warn you ahead of time.
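To see those guard rails in action, here is a quick standalone sketch (the Money class and pay function from above are repeated so the snippet runs on its own):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Money:
    cents: int  # amount stored in integer cents

    def __post_init__(self):
        # Reject illegal states at construction time
        if self.cents < 0:
            raise ValueError("Money cannot be negative")

    def __sub__(self, other: "Money") -> "Money":
        return Money(self.cents - other.cents)

    def __str__(self) -> str:
        return f"${self.cents // 100}.{self.cents % 100:02d}"

def pay(invoice_total: Money, discount: Money) -> Money:
    return invoice_total - discount

print(pay(Money(5_000), Money(500)))  # $45.00

# A discount larger than the total would produce a negative amount,
# which the constructor rejects before it can leak into business logic:
try:
    pay(Money(500), Money(5_000))
except ValueError as err:
    print(err)  # Money cannot be negative
```

Notice that the invariant lives in one place: any code path that produces a Money, including the overloaded subtraction, goes through the same validation.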

Example 2 – Email Address Validation

Email validation is one of those concerns that developers often handle just-in-time — typically by throwing a regex somewhere near a form submission or inside a service method. That approach works... until it doesn’t.

Using a bare string to represent an email means that every piece of code accepting or using that string needs to validate it, remember to validate it, and do so in the same way. This leads to duplication, inconsistency, and bugs when one place forgets the check.

# ❌ Bare string
def send_reset_link(email: str) -> None:
    ...

This function assumes that the caller is sending a valid email. But what if it came from user input? Or from a test fixture? Or from another API?
Without encapsulation, you have no guarantee that the string is even a proper email, and you’ll probably end up repeating the same regex validation all over your codebase.

Let’s wrap the string in a value object that takes care of validation once, at the moment of object creation. That way, any Email instance is guaranteed to be valid, and you can remove dozens of scattered if not re.match(...) checks.

# ✅ Encapsulated

import re  # Regular expressions module used for validating the email format
from dataclasses import dataclass  # Dataclass decorator for boilerplate reduction

# Define an immutable (frozen) value object for Email
@dataclass(frozen=True)
class Email:
    value: str  # The raw email string

    # A simple regular expression to validate basic email formats
    _PATTERN = re.compile(r"^[\w\.-]+@[\w\.-]+\.\w+$")

    # Called automatically after the instance is created
    def __post_init__(self):
        # If the provided value doesn't match the email pattern, raise an error
        if not self._PATTERN.match(self.value):
            raise ValueError(f"Invalid email: {self.value}")

# Function that expects a valid Email object instead of a plain string
def send_reset_link(email: Email) -> None:
    # Implementation goes here
    ...

Now the validation logic is centralized and enforced. No instance of Email can be created unless it meets the regex pattern. From this point forward:

  • Every Email you pass around is guaranteed to be valid.

  • You eliminate copy-pasted regex logic in dozens of places.

  • You improve readability by giving semantic weight to the parameter.

This approach also enables richer behavior later. Want to add domain-level checks, such as blocking disposable email domains? The Email value object is the natural home for that logic.
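As a quick sanity check, here is the class from above in a standalone snippet: a valid address constructs normally, while a bad one fails at creation time.

```python
import re
from dataclasses import dataclass

@dataclass(frozen=True)
class Email:
    value: str

    # Class-level (unannotated) attribute, so dataclass does not treat it as a field
    _PATTERN = re.compile(r"^[\w\.-]+@[\w\.-]+\.\w+$")

    def __post_init__(self):
        if not self._PATTERN.match(self.value):
            raise ValueError(f"Invalid email: {self.value}")

ok = Email("dev@example.com")  # constructs fine
print(ok.value)                # dev@example.com

try:
    Email("not-an-email")      # rejected at the boundary
except ValueError as err:
    print(err)                 # Invalid email: not-an-email
```

Any function that receives an Email can now skip its own validation entirely; the type itself is the proof.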

Example 3 – Strongly-Typed Identifiers

Let’s talk about one of the most subtle and dangerous forms of primitive obsession: using generic identifiers like str or UUID for multiple distinct entities.

Consider a typical function that links a user to a device. Both identifiers are just strings or UUIDs. What happens if you accidentally swap them? Nothing, until your production logs scream.

# ❌ Two UUIDs that could be swapped
def link_device_to_user(device_id: str, user_id: str) -> None:
    ...

This looks harmless, but it’s a footgun waiting to go off.
Since both arguments are plain strings (or even UUIDs), nothing prevents someone from calling:

link_device_to_user(user_id, device_id)  # arguments swapped!

Python won’t complain, your editor won’t warn you, and your unit tests might not even cover this path. It’s a recipe for data corruption and hours of debugging.

Now let’s fix this using strongly-typed value objects for each identifier. Even though under the hood they still wrap strings, Python (and tools like mypy, pyright, or IDEs) will now treat them as distinct types. This prevents dangerous mix-ups at the call site.

# ✅ Distinct types prevent mix-ups
from dataclasses import dataclass

# A value object representing a Device ID
@dataclass(frozen=True)
class DeviceId:
    value: str

# A value object representing a User ID
@dataclass(frozen=True)
class UserId:
    value: str

# Function now requires clearly distinct types
def link_device_to_user(device: DeviceId, user: UserId) -> None:
    ...

Here’s what this change gives you:

  • Clear intention at the call site:
    You can't accidentally pass a UserId into a DeviceId parameter anymore.

  • Safer refactors:
    If you ever rename or reorder parameters, your type checker will catch mismatches.

  • Extensibility:
    Later on, if you want to validate ID formats, log creation times, or embed related metadata, your value objects are ready to grow without breaking external code.

Now, this swapped call can no longer slip through without a warning:

link_device_to_user(UserId("abc123"), DeviceId("xyz789"))  # ❌ Invalid types

Your linter will scream, your IDE will underline it, and your team will thank you.
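There is a nice runtime side effect too: because each wrapper is its own class, dataclass equality keeps the two ID spaces apart even when the raw strings happen to collide. A small sketch:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DeviceId:
    value: str

@dataclass(frozen=True)
class UserId:
    value: str

# Dataclass-generated __eq__ compares the class as well as the fields,
# so the same raw string in different ID types never counts as equal:
print(DeviceId("abc123") == UserId("abc123"))    # False
print(DeviceId("abc123") == DeviceId("abc123"))  # True
```

So even in dynamically-typed code paths that a checker never sees, a UserId can’t silently masquerade as a DeviceId in comparisons or lookups.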


Why This Rule Matters

  • Expressiveness: Code reads like the domain: Money, Email, UserId.

  • Validation at the edge: Bad data is rejected once, not re-checked everywhere.

  • Behaviour lives with data: Formatting, arithmetic, equality, and serialization live inside the value object.

  • Refactor safety: Swapping parameter order or misusing a value becomes difficult or impossible.

Trade-Offs

  • More classes & files – Your project gains many small types. IDE navigation is essential.

  • Slight runtime overhead – A wrapper object adds memory, though usually negligible.

  • Learning curve – Teammates unfamiliar with value objects may resist at first.


Practical Tips

  1. Start at boundaries: wrap request/response DTOs, then work inward.

  2. Use @dataclass(frozen=True): quick, immutable, and hashable.

  3. Add semantics, not bloat: value objects should stay tiny—fields + invariants + helpers.

  4. Let IDEs help: leverage “Inline/Extract Class” refactors to migrate incrementally.

  5. Lean on typing: mypy, pyright, or Pyre will catch mix-ups between your new types.
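Tip 2 deserves a quick illustration: because frozen dataclasses are hashable, your value objects drop straight into sets and dict keys, which is handy for lookups and de-duplication. A minimal sketch:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class UserId:
    value: str

# frozen=True generates __eq__ and __hash__ from the fields,
# so equal IDs behave as the same key.
sessions = {UserId("u1"): "active", UserId("u2"): "idle"}
print(sessions[UserId("u1")])             # active
print(len({UserId("u1"), UserId("u1")}))  # 1
```

A plain mutable dataclass would raise TypeError here, since unhashable objects can’t be dict keys.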


Works Great With…

  • Rule #4 – First-Class Collections: wrap lists/dicts the same way you wrap primitives.

  • Domain-Driven Design value objects.

  • Type-Driven Development: richer static analysis and auto-completion.


Final Thoughts

Rule 3 turns invisible assumptions into concrete, enforceable code. By wrapping primitives and strings, you give names, constraints, and behaviour to your raw data, leading to safer, more self-explanatory systems. The extra lines you write today are paid back every time a bug is prevented rather than fixed.

Take a moment to scan your codebase. Where do anonymous strings or numbers hide meaning? Wrap one today, feel the difference tomorrow.

Happy refactoring!
