Testing Our Application

The PTSD of Long-Running Test Suites
In a previous life, I found myself staring at my terminal, watching a test suite crawl through its execution. Forty minutes. That's how long it took to run the entire test suite. Forty minutes of waiting, hoping that my small change didn't break anything, and praying that I wouldn't have to fix something and start the process all over again.
The experience left me with a mild form of developer PTSD 🥲. Every time I see a test run for more than a few seconds, I get flashbacks of those endless waits, the context switching as I tried to find something productive to do while the tests ran, and the frustration when a test failed at the 39-minute mark.
When we started building Pulse, I was determined not to repeat that mistake. We needed a testing strategy that would be thorough but fast, comprehensive but focused. And most importantly, it needed to give us confidence in our code without slowing down our development process.
Our Testing Philosophy
Our approach to testing is built around a few key principles:
Test each layer independently: By focusing on unit tests for each layer (exposition, service, model), we can catch most issues early and quickly.
Mock dependencies: We use mocking extensively to isolate the component being tested from its dependencies.
Focus on happy paths for end-to-end tests: We use scenario testing for the most common user flows, but we don't try to test every possible edge case at this level.
Keep tests fast: By using mocks and focusing on unit tests, we keep our test suite running quickly.
Let's dive into how we implement these principles in practice.
Testing Each Layer
Our application follows a three-layer architecture: exposition, service, and model. Each layer has its own testing approach.
Model Layer Testing
The model layer is responsible for data access and persistence. Our tests for this layer focus on:
CRUD operations
Query methods
Authorization scopes
Data validation
Custom queries and actions specific to business requirements
Here's a simplified example of a model repository test:
RSpec.describe WebhookRepository, type: :repository do
  # Set up a clean database context for each test
  let(:repo) { WebhookRepository.new }
  let(:webhook_data) do
    {
      name: "Test Webhook",
      url: "https://example.com/webhook",
      secret: "webhook_secret",
      events: ["user.created", "project.updated"]
    }
  end

  describe "#create" do
    it "creates a new webhook" do
      # Create a webhook and verify it was saved
      id = repo.create(webhook_data)
      webhook = repo.find(id)

      # Verify the webhook has the correct attributes
      expect(webhook.name).to eq(webhook_data[:name])
      expect(webhook.url).to eq(webhook_data[:url])
      expect(webhook.secret).to eq(webhook_data[:secret])
      expect(webhook.events).to eq(webhook_data[:events])
    end
  end

  describe "#find_by_event" do
    it "finds webhooks that subscribe to a specific event" do
      # Create a webhook that subscribes to user.created
      repo.create(webhook_data)

      # Create another webhook that subscribes to a different event
      repo.create(webhook_data.merge(
        name: "Another webhook",
        events: ["project.deleted"]
      ))

      # Find webhooks that subscribe to user.created
      webhooks = repo.find_by_event("user.created")

      # Verify only the first webhook is returned
      expect(webhooks.size).to eq(1)
      expect(webhooks.first.name).to eq("Test Webhook")
    end
  end
end
These tests run in a transaction that's rolled back at the end, ensuring they don't leave any data behind. This approach lets us test real database operations without writing cleanup code or worrying about tests polluting each other's data.
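Here's a minimal sketch of how that can be wired up, assuming a Sequel-style connection object named DB (the exact setup depends on your database layer):

# spec/spec_helper.rb -- wrap each repository test in a transaction
RSpec.configure do |config|
  config.around(:each, type: :repository) do |example|
    # Always roll back, so no test leaves data behind
    DB.transaction(rollback: :always, auto_savepoint: true) do
      example.run
    end
  end
end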
Service Layer Testing
The service layer contains our business logic. Tests for this layer focus on:
Business rules and validations
Orchestration of operations across multiple models
Error handling
Event publishing
Here's a simplified example of a service test:
RSpec.describe ProjectService, type: :service do
  let(:service) { ProjectService.new }
  let(:user_id) { 123 }
  let(:project_data) do
    {
      name: "New Project",
      description: "A test project",
      deadline: Date.today + 30
    }
  end

  describe "#create_project" do
    it "creates a project and assigns the creator as owner" do
      # Mock the repository to verify it's called correctly
      project_repo = instance_double(ProjectRepository)
      allow(ProjectRepository).to receive(:new).and_return(project_repo)

      # Expect the repository to be called with the right parameters
      expect(project_repo).to receive(:create).with(
        hash_including(project_data.merge(owner_id: user_id))
      ).and_return(42) # Return a project ID

      # Mock the project member repository
      member_repo = instance_double(ProjectMemberRepository)
      allow(ProjectMemberRepository).to receive(:new).and_return(member_repo)

      # Expect the member repository to add the creator as owner
      expect(member_repo).to receive(:create).with(
        hash_including(project_id: 42, user_id: user_id, role: "owner")
      )

      # Call the service method
      project_id = service.create_project(user_id, project_data)

      # Verify the result
      expect(project_id).to eq(42)
    end
  end
end
Notice how we mock the repository layer using RSpec's instance_double, allow, and expect methods. This approach allows us to test the service layer in isolation, without depending on actual database operations. By mocking the repositories, we can verify that the service calls them with the correct parameters and properly handles their responses, all without executing any real database queries.
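The same mocking approach makes error handling, one of this layer's stated focuses, easy to exercise. Here's a sketch that would slot into the describe block above, assuming the service simply lets repository errors propagate:

it "propagates repository failures" do
  project_repo = instance_double(ProjectRepository)
  allow(ProjectRepository).to receive(:new).and_return(project_repo)

  # Simulate a database failure without touching a real database
  allow(project_repo).to receive(:create)
    .and_raise(StandardError, "connection lost")

  expect {
    service.create_project(user_id, project_data)
  }.to raise_error(StandardError, "connection lost")
end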
Exposition Layer Testing
The exposition layer handles HTTP requests and responses. Tests for this layer focus on:
Route handling
Input ingestion
Response formatting
HTTP status codes
Here's an example of an exposition test:
RSpec.describe Expo::Projects, type: :exposition do
  describe "#show", as: :system do
    it "returns a specific project" do
      # Create a mock project record that the service will return
      project = {
        id: 1,
        type: "projects",
        attributes: {
          name: "Test Project",
          description: "A test project",
          created_at: "2023-01-01T00:00:00Z"
        }
      }

      # Mock the service to return our project when called
      expect_any_instance_of(Service::ProjectService).to receive(:show).with(
        1,
        included: []
      ).and_return(project)

      # Make the request
      get "/projects/1"

      # Verify the response
      expect(last_response.status).to eq 200
      expect(JSON.parse(last_response.body, symbolize_names: true)).to eq({ data: project })
    end
  end

  # More tests...
end
Again, we're using mocking to isolate the exposition layer from the service layer. We're testing that the exposition correctly calls the service and formats the response, without actually executing the service logic.
The Power of Mocking
Mocking is a key part of our testing strategy. By replacing real dependencies with test doubles, we can:
Isolate the component being tested: We can test a component without worrying about its dependencies.
Control the test environment: We can simulate different scenarios, including error conditions.
Speed up tests: We don't need to execute potentially slow operations like database queries or API calls.
RSpec provides several methods for creating test doubles:
double: Creates a basic test double
instance_double: Creates a test double for a specific class
allow: Sets up a method stub that returns a specific value
expect: Sets up a method stub and verifies that it was called
Here's an example of how we use these methods:
# Create a test double for an account
account = double(
  "account",
  {
    id: 1,
    type: "iam/accounts",
    email: "test@example.com",
    person: double(
      {
        first_name: "John",
        last_name: "Doe"
      }
    )
  }
)

# Set up a method stub
expect(Api.iam).to receive(:show_accounts).and_return(account)
This code creates a test double for an account and sets up a stub for the Api.iam.show_accounts method to return that account. When the code under test calls Api.iam.show_accounts, it will receive the test double instead of making a real API call.
It's worth noting that we use the WebMock gem to disable all real HTTP calls during tests. If any code attempts to make a real HTTP request that hasn't been explicitly stubbed, the test will fail with a clear error message. This ensures that:
All external dependencies are properly mocked
Tests don't depend on external services
Tests run consistently regardless of network conditions
We're explicitly aware of all external calls our code makes
This adds an extra layer of protection against tests that might silently pass while making real API calls, which could lead to inconsistent test results or even unintended side effects in production systems.
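The setup for this is tiny. A typical configuration looks something like this (allowing localhost so the Rack test client still works; adjust to your needs):

# spec/spec_helper.rb -- fail fast on any unstubbed HTTP request
require "webmock/rspec"

WebMock.disable_net_connect!(allow_localhost: true)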
Scenario Testing: The Happy Path
While unit tests are great for testing individual components, they don't test how those components work together in real-world scenarios. That's where our scenario tests come in.
We use scenario tests (sometimes called journey tests or end-to-end tests) to test the most common user flows through the application. These tests simulate a user interacting with the system and verify that the system behaves as expected.
To make these scenario tests readable and maintainable, we use helper methods defined in a separate file (typically steps.rb). These helpers abstract away the implementation details and provide a more declarative way to write tests. Here's an example of a scenario test built from these steps:
RSpec.describe "Simple form spec", type: :exposition, database: true do
include Journey::Steps
it "[SIMPLE FORM] creates a project, assign users, participate to project" do
project = nil
allow_any_instance_of(Model::Project::MemberRepository).to receive(:after_commit).and_yield
as_user(:system) do
project = create_project("example", "samples/journey/simple_form.yml")
expect_project_to_be_created(project)
first_agent_id = add_project_member(project.id, 1, %w[agent])
add_project_member(project.id, 2, %w[agent])
expect_project_to_have_members(project, 2)
update_member_role(project, first_agent_id, %w[agent control], 200)
expect_member_to_have_roles(project, 1, %w[agent control])
end
# More steps...
end
# More tests...
end
This test simulates a user creating a project, adding members to it, and updating member roles. It then switches to a different user and simulates that user creating a work unit, starting work on an activity, and completing the activity.
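To give an idea of what hides behind these steps, here's an illustrative sketch of two of them. This is not the actual implementation; the response shapes and status codes are assumptions:

# spec/journey/steps.rb -- illustrative sketch
require "ostruct"
require "yaml"

module Journey
  module Steps
    # Create a project from a YAML fixture and return a light struct,
    # so later steps can write project.id instead of digging into JSON
    def create_project(name, fixture_path)
      payload = YAML.load_file(fixture_path).merge("name" => name)
      post "/projects", JSON.generate(payload)
      OpenStruct.new(JSON.parse(last_response.body, symbolize_names: true)[:data])
    end

    def expect_project_to_be_created(project)
      expect(last_response.status).to eq(201)
      expect(project.id).not_to be_nil
    end
  end
end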
Importantly, we only test the happy path in these scenario tests. We don't try to test every possible error condition or edge case. Those are better handled by unit tests, which are faster and more focused.
Testing Helpers
To make our tests more readable and maintainable, we've created several testing helpers:
Role-Based Testing
We use the as: :role helper to run tests within the context of a specific user role:
context "as hr staff", as: :hr_staff do
subject { described_class.new(current_auth_context) }
describe "#scoped" do
it "returns only the current user's webhooks" do
webhooks = subject.index({})
expect(webhooks.size).to eq(2)
# More assertions...
end
end
# More tests...
end
This helper sets up the current_auth_context with the appropriate role and permissions, allowing us to test authorization rules.
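Under the hood, this kind of helper can be implemented with an RSpec metadata hook. Here's a sketch; AuthContext.for_role is a hypothetical constructor standing in for whatever builds the auth context in your framework:

module AuthContextHelper
  def current_auth_context
    @current_auth_context ||= AuthContext.for_role(
      RSpec.current_example.metadata[:as]
    )
  end
end

RSpec.configure do |config|
  # Include the helper in any example group tagged with `as:`
  config.include AuthContextHelper, :as
end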
Repository Testing
We use the type: :repository helper to run tests with a database connection:
RSpec.describe Model::WebhookRepository, type: :repository, as: :system do
  # Tests...
end
This helper wraps each test in a database transaction that's rolled back at the end, ensuring that tests don't interfere with each other.
We also have another helper, database: true, which can be used with any test type. While type: :repository provides repository-specific features like automatic transaction handling and repository helper methods, database: true simply ensures that the test runs within a database transaction, without adding the repository-specific helpers. This is useful for tests that need database access but aren't specifically testing repository functionality, such as service or exposition tests that need to interact with the database.
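For example, a service test that needs real persistence but none of the repository helpers might be tagged like this:

# Runs inside a rolled-back transaction, without the repository helpers
RSpec.describe Service::ProjectService, type: :service, database: true do
  # Tests that exercise real queries...
end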
Exposition Testing
We use the type: :exposition helper to run tests with a mocked HTTP server:
RSpec.describe Expo::Projects, type: :exposition do
  # Tests...
end
This helper sets up a Rack test client that allows us to make HTTP requests to our exposition layer.
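Wiring this up is mostly a matter of including Rack::Test and telling it which Rack application to drive. A sketch (the application constant is hypothetical):

require "rack/test"

# Rack::Test expects an `app` method returning the application under test
module ExpositionHelpers
  include Rack::Test::Methods

  def app
    Pulse::Application # hypothetical Rack entry point
  end
end

RSpec.configure do |config|
  config.include ExpositionHelpers, type: :exposition
end

This is what gives our exposition specs the get and last_response helpers used in the examples above.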
Event Bus Configuration
In our application, we use an event bus for asynchronous communication between services. In production, events are processed asynchronously, but this can make testing difficult because we can't easily predict when an event will be processed.
To solve this problem, we configure the event bus to run in immediate mode during tests:
Verse::Event::Dispatcher.event_mode = :immediate
This configuration is crucial for testing event-based systems. In production, events are typically processed asynchronously, which means they might be handled after the code that published them has finished executing. In tests, this would be problematic: the test's transaction is rolled back at the end, so an event still waiting in a queue would never be dispatched.
By setting the event mode to :immediate, we ensure that events are processed synchronously during the test, immediately after they're published. This makes the test behavior predictable and allows us to verify both the publishing and processing of events within the same test.
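In practice, this means a test can publish an event and assert on the subscriber's side effects right away. A sketch, reusing the service example from earlier (the event name and subscriber here are illustrative, not our actual API):

it "notifies subscribers when a project is created" do
  # With :immediate mode, the handler runs before create_project returns
  expect_any_instance_of(Service::WebhookService)
    .to receive(:trigger).with("project.created", anything)

  service.create_project(user_id, project_data)
end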
Keeping Tests Fast and Comprehensive
One of our key goals is to keep our test suite running quickly while maintaining high code coverage. We achieve this through several strategies:
Focusing on unit tests: Unit tests are faster than integration or end-to-end tests because they don't need to set up as much context.
Using mocks: By mocking dependencies, we avoid slow operations like database queries and API calls.
Running tests in transactions: By running tests in transactions that are rolled back at the end, we avoid the overhead of setting up and tearing down test data.
Limiting scenario tests: We only use scenario tests for the most common user flows, not for every possible edge case.
These strategies have paid off. Our entire test suite runs in just a few seconds, not forty minutes, while maintaining an impressive 96% code coverage across the platform. This high coverage gives us confidence that our tests are thorough, while the speed means we can run the tests frequently during development, catching issues early and maintaining our development velocity.
We use SimpleCov to track our code coverage and have integrated it into our CI pipeline to ensure coverage doesn't drop below our established thresholds. This helps us identify areas of the codebase that might need additional testing and ensures that new features are properly tested before they're merged.
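The configuration itself is short; a typical setup looks like this (the filter and threshold here are illustrative):

# spec/spec_helper.rb -- must run before the application code is loaded
require "simplecov"

SimpleCov.start do
  add_filter "/spec/"   # don't count the tests themselves
  minimum_coverage 96   # fail the run if coverage drops below the threshold
end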
Conclusion
Our testing approach is designed to give us confidence in our code without slowing down our development process. By testing each layer independently, using mocking to isolate components, focusing on happy paths for end-to-end tests, and keeping tests fast, we've created a test suite that's both thorough and efficient.
If you're struggling with a slow test suite, consider adopting some of these strategies. Your future self (and your team) will thank you.
This article is part of our series on microservice architecture. Stay tuned for more insights into how we've built a scalable, maintainable system.