Building Scalable Enterprise MCP Servers

Patrick Skinner
16 min read

Introduction

I've built several custom MCP servers, ranging from basic implementations to sophisticated setups capable of commanding satellite imaging through SkyFi's API using natural language. In this tutorial, I will share my in-depth development philosophy for creating production-ready, scalable MCP (Model Context Protocol) servers. We'll break down core concepts, detailed architecture, and essential best practices.

What is an MCP Server?

An MCP server is a backend service designed to interact with various external APIs, manage complex authentication workflows, and provide structured data transformations. MCP servers facilitate natural language interactions by exposing modular, reusable "tools" to language models (LMs), enabling dynamic, secure, and efficient automation of sophisticated workflows.

Why Use an MCP Server?

  • Simplified API Integration: Abstract complex external APIs into standardized, easy-to-consume tools.
  • Scalability and Security: Layered architecture ensures scalability, maintainability, and robust security.
  • Flexibility: Multi-method authentication and modular services provide flexibility for diverse enterprise environments.
  • Enhanced User Experience: Intuitive, structured responses and comprehensive error handling greatly improve user interactions.

If you want to read more about the original outline of what an MCP server is, straight from the team that created the MCP standard, check out this article from Anthropic: https://www.anthropic.com/news/model-context-protocol

Core Development Philosophy

1. Enterprise-First, Developer-Friendly

My MCP server approach prioritizes enterprise-grade scalability and robust security from inception, while keeping things simple and clear for developers working with basic use cases. We embed complexity only where it's needed, so developers can easily engage with foundational functionality without losing the extensibility required for advanced scenarios.

Not AI slop… I actually wrote that… πŸ‘†

Implementation Strategies:

  • Flexible Authentication: Support dual authentication modes, balancing simplicity through API key usage for straightforward integrations, alongside comprehensive OAuth 2.0 and SAML authentication mechanisms for secure, enterprise-wide deployments.
  • Scalable Deployment: Offer deployment strategies that scale seamlessly from straightforward Docker containers for quick starts and prototyping to Kubernetes orchestrations designed for robust, high-availability production environments.
  • Versatile Service Management: Accommodate both single-service instances for targeted applications and multi-tenant configurations suitable for large-scale, shared deployments.

2. Strict Layering with Clear Boundaries

The MCP server architecture is meticulously structured, with each layer clearly defined by distinct responsibilities, isolated interfaces, and explicit boundaries. This modular design promotes maintainability, simplifies troubleshooting, and ensures each component functions optimally within a cohesive system.

Architectural Layers:

  • MCP Transport Layer: Facilitates robust communication protocols, including standard IO (STDIO), Server-Sent Events (SSE), and streamable HTTP, ensuring reliable and efficient interactions.
  • FastMCP Server Layer: Integrates core framework features, dynamically filtering and exposing the appropriate tools and capabilities. The name "FastMCP" definitely sounds coined and branded, and it kind of is, but it's open source, so don't worry. FastMCP v1 and v2 take different approaches, and both are worth diving into. A minimal example of spinning up a FastMCP server over the transports above follows this list.
  • Service Layer: Encapsulates dedicated business logic, tailored specifically to the operational requirements of each service.
  • Data Processing Layer: Handles data validation, transformation, and ensures structured, human-readable responses, enhancing data integrity and clarity.
  • Authentication Layer: Implements rigorous multi-method authentication strategies to ensure secure access control tailored to diverse security demands.
  • Network Layer: Manages efficient and optimized HTTP client interactions with external APIs, ensuring performance and reliability.
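
To make the transport and FastMCP layers concrete, here is a minimal sketch of a server that picks its transport at startup. This assumes a FastMCP v2-style API; the exact transport names, run() arguments, and the MCP_TRANSPORT environment variable are illustrative and may differ in your version.

import os
from fastmcp import FastMCP

mcp = FastMCP("transport-demo")

@mcp.tool()
def ping() -> str:
    """Simple connectivity-check tool."""
    return "pong"

if __name__ == "__main__":
    # MCP_TRANSPORT is an illustrative variable, not part of the MCP spec
    transport = os.getenv("MCP_TRANSPORT", "stdio")
    if transport == "stdio":
        mcp.run()  # STDIO: ideal for local clients launched as subprocesses
    else:
        # SSE or streamable HTTP: suited to networked, multi-client deployments
        mcp.run(transport=transport, host="0.0.0.0", port=8000)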

3. Independent, Mountable Modules

Every service within an MCP server ecosystem is structured as a self-contained, mountable module, promoting ease of deployment, maintenance, and scaling. This modular independence enhances operational agility, facilitates parallel development, and enables focused security management.

Module Characteristics:

  • Dedicated Configuration: Each service module features independent, environment-based configuration factories (from_env) for consistent and flexible setup.
  • Robust Authentication Validation: Modules independently verify their authentication configurations (is_auth_configured), ensuring operational readiness and secure service access.
  • Isolated Client Management: Each module maintains its own dedicated client with optimized connection pooling, ensuring efficient and resilient API communications.
  • Clear Logging and Error Handling: Comprehensive and distinctly scoped logging and error handling namespaces are maintained for precise diagnostics and straightforward troubleshooting.
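
As a concrete (and hedged) sketch of what such a module might look like, the snippet below shows a service package exposing its own FastMCP sub-server, environment-based config, and scoped logger. The ServiceAConfig names mirror the configuration class shown later in this guide; the module-level wiring itself is an illustrative assumption, not a prescribed layout.

import logging
from fastmcp import FastMCP

from .config import ServiceAConfig  # dataclass with from_env() and is_auth_configured()

# Distinctly scoped logger namespace for precise diagnostics
logger = logging.getLogger("mcp-server.service_a")

# Each module owns its own FastMCP sub-server; the main server imports and mounts it
service_a_mcp = FastMCP("service-a")

config = ServiceAConfig.from_env()
if not config.is_auth_configured():
    logger.warning("Service A credentials missing; its tools will be filtered out.")

@service_a_mcp.tool(
    name="service_a_status",
    description="Reports whether this module is configured and ready.",
    tags=["service_a", "read"],
)
def service_a_status() -> str:
    return "configured" if config.is_auth_configured() else "not configured"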

Why this MCP Approach is a Scalable Enterprise Model

This MCP server approach is a scalable enterprise model because it builds in clear separation of concerns, modularity, and extensibility from the very beginning. Traditional monolithic systems (or just unguided AI slop) become increasingly hard to scale and maintain because of tightly coupled components and rigid architectures. MCP's layered, modular structure instead enables incremental scaling, targeted optimizations, and independent component upgrades. Its multi-method authentication strategy and well-defined interfaces also ensure secure integrations within complex enterprise ecosystems, accommodating evolving security requirements and diverse deployment scenarios. Approaches that lack this kind of separation and modular architecture tend to hit limits in scalability, maintainability, and adaptability.

Project Structure Template

A robust project structure is critical for maintainability, scalability, and clarity. Here's a detailed breakdown of the recommended project structure:

mcp-{service-name}/
β”œβ”€β”€ src/                            # Source code root
β”‚   └── mcp_{service_name}/
β”‚       β”œβ”€β”€ __init__.py             # Main entry point and CLI
β”‚       β”œβ”€β”€ exceptions.py           # Custom exception classes
β”‚       β”œβ”€β”€ servers/                # Contains server classes, middleware, and lifecycle management
β”‚       β”œβ”€β”€ {service_a}/            # Dedicated modules for service A
β”‚       β”‚   β”œβ”€β”€ __init__.py
β”‚       β”‚   β”œβ”€β”€ client.py           # API client for service A
β”‚       β”‚   β”œβ”€β”€ config.py           # Configuration model for service A
β”‚       β”‚   └── tools.py            # MCP tools for service A
β”‚       β”œβ”€β”€ {service_b}/            # Dedicated modules for service B
β”‚       β”‚   └── [same structure]
β”‚       β”œβ”€β”€ models/                 # Data models and validation schemas
β”‚       β”œβ”€β”€ preprocessing/          # Data transformation and preprocessing utilities
β”‚       └── utils/                  # Common utilities (logging, auth, networking)
β”œβ”€β”€ tests/                          # Comprehensive testing suite
β”‚   β”œβ”€β”€ unit/                       # Unit tests for isolated logic
β”‚   β”œβ”€β”€ integration/                # Integration tests for combined components
β”‚   └── fixtures/                   # Shared test data and mocks
β”œβ”€β”€ docs/                           # Documentation, guides, and references
β”œβ”€β”€ Dockerfile                      # Docker configuration for containerization
└── README.md                       # Project overview and quick start guide

Detailed Server Implementation

Main Server Class

The main server class acts as the central orchestrator. It is responsible for the server's lifecycle, configuring middleware, and, most importantly, dynamically filtering which tools are available based on the current context (e.g., user permissions, server configuration). This ensures that a language model is only exposed to the tools it is authorized and able to use.

import logging
from typing import Any
from fastmcp import FastMCP, MCPTool
from starlette.middleware import Middleware
from .context import MainAppContext
from .dependencies import get_available_services
from .{service_a} import service_a_mcp

logger = logging.getLogger("mcp-server.main")

class {ServiceName}MCP(FastMCP[MainAppContext]):
    """
    Custom FastMCP server with multi-service support and dynamic tool filtering.
    This class overrides the default tool discovery to apply business logic.
    """
    async def _mcp_list_tools(self) -> list[MCPTool]:
        """Filters and lists available tools based on application context."""
        req_context = self._mcp_server.request_context
        if req_context is None or req_context.lifespan_context is None:
            logger.warning("Lifespan context not available, returning no tools.")
            return []

        app_context = req_context.lifespan_context.get("app_lifespan_context")
        if not app_context:
            return []

        all_tools = await self.get_tools()
        filtered_tools = [
            tool_obj.to_mcp_tool(name=tool_name)
            for tool_name, tool_obj in all_tools.items()
            if self._should_include_tool(tool_name, tool_obj, app_context)
        ]
        return filtered_tools

    def _should_include_tool(self, tool_name: str, tool_obj: Any, context: MainAppContext) -> bool:
        """Applies multi-level filtering logic for tool inclusion."""
        # Filter 1: Exclude tools if they are explicitly disabled.
        if context.enabled_tools and tool_name not in context.enabled_tools:
            return False

        # Filter 2: Exclude tools with 'write' operations if in read-only mode.
        if context.read_only and "write" in tool_obj.tags:
            return False

        # Filter 3: Exclude tools if their parent service is not configured.
        tool_tags = tool_obj.tags
        for service_name in ["service_a", "service_b"]:
            if service_name in tool_tags:
                if not getattr(context, f"full_{service_name}_config"):
                    return False
        return True

    def http_app(self, middleware=None):
        """Defines a middleware pipeline for auth, logging, and request handling."""
        from .servers.middleware import UserTokenMiddleware

        auth_middleware = Middleware(UserTokenMiddleware, mcp_server_ref=self)
        final_middleware = [auth_middleware]
        if middleware:
            final_middleware.extend(middleware)

        return super().http_app(middleware=final_middleware)

Service Configuration

Each service module requires a detailed configuration class. Using a dataclass with a from_env class method allows for type-safe, validated configuration loaded securely from environment variables. This prevents misconfigurations and keeps secrets out of the codebase.

from __future__ import annotations
import os
from dataclasses import dataclass
from typing import Optional, List, Dict

@dataclass
class {ServiceName}Config:
    """Configuration for the {ServiceName} service, loaded from environment variables."""
    url: str                                  # Base URL of the service API

    # Authentication Methods (in order of precedence)
    oauth_client_id: Optional[str] = None     # OAuth client identifier
    oauth_client_secret: Optional[str] = None # OAuth client secret
    personal_token: Optional[str] = None      # Personal Access Token for enterprise
    api_key: Optional[str] = None             # Simple API key auth

    # Network settings
    ssl_verify: bool = True
    timeout: int = 30

    @classmethod
    def from_env(cls) -> {ServiceName}Config:
        """Constructs configuration from environment variables."""
        return cls(
            url=os.getenv("{SERVICE_NAME}_URL", ""),
            oauth_client_id=os.getenv("{SERVICE_NAME}_OAUTH_CLIENT_ID"),
            oauth_client_secret=os.getenv("{SERVICE_NAME}_OAUTH_CLIENT_SECRET"),
            personal_token=os.getenv("{SERVICE_NAME}_PERSONAL_TOKEN"),
            api_key=os.getenv("{SERVICE_NAME}_API_KEY"),
            ssl_verify=os.getenv("{SERVICE_NAME}_SSL_VERIFY", "true").lower() == "true",
            timeout=int(os.getenv("{SERVICE_NAME}_TIMEOUT", "30")),
        )

    def is_auth_configured(self) -> bool:
        """Checks if any valid authentication method is configured."""
        # Checks for OAuth, PAT, or API Key credentials.
        has_oauth = bool(self.oauth_client_id and self.oauth_client_secret)
        has_pat = bool(self.personal_token)
        has_api_key = bool(self.api_key)
        return has_oauth or has_pat or has_api_key

Service Client

A robust client class is essential for interacting with external APIs. It should encapsulate all logic for making requests, including handling authentication, retries, and error management. This isolates network concerns from the business logic within the tools themselves.

import httpx
import logging
from typing import Any, Optional
from .config import {ServiceName}Config
from ..exceptions import {ServiceName}APIError, {ServiceName}AuthenticationError

logger = logging.getLogger("mcp-server.{service_name}.client")

class {ServiceName}Client:
    """Asynchronous HTTP client for the {ServiceName} API."""

    def __init__(self, config: {ServiceName}Config, user_token: Optional[str] = None):
        self.config = config
        self.user_token = user_token
        self._client: Optional[httpx.AsyncClient] = None

    async def _ensure_client(self):
        """Initializes and configures the httpx.AsyncClient if not already present."""
        if self._client is None:
            headers = {"User-Agent": "MCP-{ServiceName}/1.0"}
            # Precedence: User-provided token > Server-configured token
            token = self.user_token or self.config.personal_token
            if token:
                headers["Authorization"] = f"Bearer {token}"
            elif self.config.api_key:
                headers["X-API-Key"] = self.config.api_key

            self._client = httpx.AsyncClient(
                base_url=self.config.url,
                headers=headers,
                verify=self.config.ssl_verify,
                timeout=self.config.timeout,
            )

    async def get(self, endpoint: str, params: Optional[dict] = None) -> Any:
        """Executes an authenticated GET request with error handling."""
        await self._ensure_client()
        try:
            response = await self._client.get(endpoint, params=params)
            response.raise_for_status()
            return response.json()
        except httpx.HTTPStatusError as e:
            if e.response.status_code == 401:
                raise {ServiceName}AuthenticationError("Authentication failed.")
            raise {ServiceName}APIError(f"API Error: {e.response.text}")
        except httpx.RequestError as e:
            raise {ServiceName}APIError(f"Network Error: {str(e)}")

    async def __aenter__(self):
        return self

    async def __aexit__(self, exc_type, exc, tb):
        if self._client:
            await self._client.aclose()

Authentication Architecture

Multi-Method Authentication

A scalable MCP server must support multiple authentication strategies to accommodate different environments. The architecture should follow a clear order of precedence, prioritizing the most secure and context-specific methods first.

  • Per-Request OAuth 2.0/PAT (Highest Priority): A token provided in the request header (Authorization: Bearer ...). This is ideal for multi-tenant systems where each user has their own credentials.
  • Server-Configured Personal Access Token (PAT): A single token configured via environment variables, suitable for self-hosted or single-user enterprise environments.
  • Server-Configured API Key/Token: A simpler static key for standard cloud integrations where OAuth is overkill.
  • Basic Authentication (Legacy): Username and password, included for compatibility with older systems.
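
A small helper can make this precedence order explicit in code. The sketch below is an assumption about how you might wire it up; in particular, the username/password fields for legacy basic auth are not part of the configuration class shown earlier in this guide.

from dataclasses import dataclass
from typing import Optional

@dataclass
class ResolvedAuth:
    method: str                    # "oauth", "pat", "api_key", or "basic"
    token: Optional[str] = None

def resolve_auth(request_bearer: Optional[str], config) -> Optional[ResolvedAuth]:
    """Returns the highest-priority credential available, or None if unauthenticated."""
    # 1. Per-request OAuth 2.0 / PAT from the Authorization header (highest priority)
    if request_bearer:
        return ResolvedAuth(method="oauth", token=request_bearer)
    # 2. Server-configured Personal Access Token
    if getattr(config, "personal_token", None):
        return ResolvedAuth(method="pat", token=config.personal_token)
    # 3. Server-configured API key
    if getattr(config, "api_key", None):
        return ResolvedAuth(method="api_key", token=config.api_key)
    # 4. Legacy basic auth (username/password fields are assumed, see note above)
    if getattr(config, "username", None) and getattr(config, "password", None):
        return ResolvedAuth(method="basic")
    return None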

Authentication Middleware

Middleware is the component that intercepts every incoming request to extract and validate authentication tokens. It systematically checks headers for credentials and attaches them to the request's state, making them available to service clients and tools securely without passing them as function arguments.

from starlette.middleware.base import BaseHTTPMiddleware, RequestResponseEndpoint
from starlette.requests import Request
from starlette.responses import Response

class UserTokenMiddleware(BaseHTTPMiddleware):
    """
    Middleware to extract Bearer or API key tokens from request headers
    and attach them to the request state for per-request authentication.
    """
    async def dispatch(self, request: Request, call_next: RequestResponseEndpoint) -> Response:
        auth_header = request.headers.get("Authorization")
        api_key_header = request.headers.get("X-API-Key")

        user_auth_token = None
        user_auth_type = None

        if auth_header and auth_header.startswith("Bearer "):
            user_auth_token = auth_header.split(" ")[1]
            user_auth_type = "bearer"
        elif api_key_header:
            user_auth_token = api_key_header
            user_auth_type = "api_key"

        request.state.user_auth_token = user_auth_token
        request.state.user_auth_type = user_auth_type

        response = await call_next(request)
        return response

Tool Development

Tool Organization

For clarity and maintainability, tools should be organized by feature domain, not by their technical operation (e.g., GET, POST). This makes the codebase intuitive and easier for developers to navigate.

# βœ… GOOD: Organization by feature domain
{service_name}/
β”œβ”€β”€ projects.py       # Tools related to project management (create, get, list)
β”œβ”€β”€ issues.py         # Tools for issue tracking (create, search, update)
└── users.py          # User management tools (get, invite)

# ❌ BAD: Organization by CRUD operation
{service_name}/
β”œβ”€β”€ read.py           # Contains get_project, get_issue, get_user
β”œβ”€β”€ create.py         # Contains create_project, create_issue

Tool Definition Standards

Every tool must be clearly defined with structured annotations, comprehensive docstrings, and consistent error handling. This metadata is what allows the language model to understand how and when to use the tool correctly.

import logging
from typing import Annotated
from fastmcp import MCPError, Context
from .{service_name}.context import {ServiceName}Context
# Adjust the relative import depth to wherever this tools module lives in your layout
from ..exceptions import {ServiceName}AuthenticationError, {ServiceName}NotFoundError
from ..preprocessing import ResponseFormatter  # assumed home of the formatter shown below

logger = logging.getLogger("mcp-server.{service_name}.tools")

@{service}_mcp.tool(
    name="{service}_{feature}_{action}",
    description="Performs a specific action on a feature within the service.",
    tags=["{service}", "read", "{feature_category}"] # Tags for filtering
)
def {feature}_{action}_tool(
    project_id: Annotated[str, "The unique identifier for the project. Example: 'PROJ-123'"],
    context: Annotated[{ServiceName}Context, Context] # Injects service-specific context
) -> str:
    """
    Retrieves details for a specific project.

    Args:
        project_id: The ID of the project to retrieve.

    Returns:
        A formatted string containing the project details.

    Raises:
        MCPError: If the project is not found or authentication fails.
    """
    try:
        # Business logic is handled in the client
        client = context.{service}_client
        project_data = client.get_project(project_id)

        # Response formatting is handled by a dedicated utility
        return ResponseFormatter.format_item_response(project_data, item_type="Project")

    except {ServiceName}AuthenticationError as e:
        raise MCPError(f"Authentication failed for {context.service_name}: {e}")
    except {ServiceName}NotFoundError as e:
        raise MCPError(f"Project not found: {project_id}")
    except Exception as e:
        logger.error(f"Unexpected error in get_project_tool: {e}", exc_info=True)
        raise MCPError(f"An unexpected error occurred: {e}")

Data Processing Standards

Structured and human-readable responses are crucial for a good user experience. A dedicated formatting class ensures that all tools return data in a consistent, predictable, and clear manner. This greatly improves the language model's ability to interpret results and present them to the end-user.

class ResponseFormatter:
    """A utility class for formatting API responses into human-readable strings."""

    @staticmethod
    def format_item_response(item: dict, item_type: str) -> str:
        """Formats a single dictionary item into a structured Markdown string."""
        if not item:
            return f"{item_type} not found."

        lines = [
            f"# {item_type}: {item.get('name', item.get('id', 'N/A'))}",
            f"**ID**: `{item.get('id', 'N/A')}`",
            f"**Status**: {item.get('status', 'N/A')}",
            f"**Link**: {item.get('url', 'No link available')}",
            "\n## Description",
            item.get('description', 'No description provided.'),
        ]
        return "\n".join(lines)

    @staticmethod
    def format_list_response(items: list[dict], item_type: str) -> str:
        """Formats a list of items into a summary table."""
        if not items:
            return f"No {item_type}s found."

        headers = ["ID", "Name", "Status"]
        rows = [[item.get('id', 'N/A'), item.get('name', 'N/A'), item.get('status', 'N/A')] for item in items]

        # Simple text-based table generation
        header_str = " | ".join(headers)
        divider_str = " | ".join(["---"] * len(headers))
        row_strs = [" | ".join(map(str, row)) for row in rows]

        return "\n".join([header_str, divider_str] + row_strs)

Environment Configuration

Standardizing environment variable names is a simple but powerful practice for maintaining clarity and avoiding configuration errors, especially in complex deployments with multiple services.

# Convention: {SERVICE_NAME}_{COMPONENT}_{SETTING}
# Examples:
GITHUB_API_URL="https://api.github.com"
GITHUB_PERSONAL_TOKEN="ghp_xxxxxxxx"
GITHUB_TIMEOUT="45"

SLACK_API_TOKEN="xoxb-xxxxxxxx"
SLACK_SSL_VERIFY="false"

User Experience Optimization

Progressive Disclosure Documentation

Structure documentation to cater to different user needs, from a quick start for beginners to advanced guides for expert users. This approach prevents information overload and helps users find what they need quickly.

  • Quick Start (30 seconds): A minimal set of commands to get the server running with basic functionality.
  • Standard Setup (5 minutes): Covers common production configurations, including API key authentication and Docker deployment.
  • Advanced Configuration (15+ minutes): Details enterprise features like OAuth setup, multi-tenant deployments, and custom middleware.

Error Message Standards

Provide error messages that are not only descriptive but also actionable. A good error message tells the user what went wrong, why it went wrong, and how to fix it.

class ErrorMessageTemplates:
    AUTHENTICATION_ERROR = """
    ❌ **Authentication Failed for {service_name}**

    **Cause:** The provided credentials (API Key or Token) are invalid or have insufficient permissions.
    **Solutions:**
    1. Verify your credentials in the `.env` file or request headers.
    2. Ensure the token has not expired and has the required scopes.
    3. Check the service's documentation for authentication help: {docs_url}
    """

    CONFIGURATION_ERROR = """
    ❌ **Configuration Error for {service_name}**

    **Missing Variable:** The `{missing_variable}` environment variable is not set.
    **To Fix:**
    - Set the variable in your terminal: `export {missing_variable}="value"`
    - Add it to your `.env` file: `{missing_variable}="value"`
    """

Testing Philosophy

Comprehensive Testing

A multi-layered testing strategy ensures reliability and stability. Each layer focuses on a different aspect of the system, from individual functions to the complete, integrated server.

tests/
β”œβ”€β”€ unit/            # Tests for individual functions and classes in isolation.
β”‚   β”œβ”€β”€ test_config.py
β”‚   └── test_formatter.py
β”œβ”€β”€ integration/     # Tests that verify interactions between components (e.g., client and a mock API).
β”‚   └── test_service_a_client.py
β”œβ”€β”€ mcp_protocol/    # End-to-end tests that check compliance with the MCP specification.
β”‚   └── test_tool_execution.py
└── fixtures/        # Reusable test data, mock responses, and helper factories.
    └── mock_api_responses.py

Mocking Strategy

Use mock factories to produce consistent and reliable mock objects for your tests. This ensures that tests are repeatable and not dependent on the state of external services.

from unittest.mock import Mock

# Exceptions come from the package under test (path per the project structure above)
from mcp_{service_name}.exceptions import {ServiceName}AuthenticationError, {ServiceName}NotFoundError

class ServiceMockFactory:
    """Creates standardized mock objects for service clients for use in unit tests."""

    @staticmethod
    def create_service_client_mock(service_name: str, is_auth_ok: bool = True):
        """Creates a mock service client with predefined success or failure responses."""
        mock_client = Mock()

        if not is_auth_ok:
            mock_client.get.side_effect = {ServiceName}AuthenticationError("Invalid token")
            return mock_client

        # Configure standard successful responses
        mock_client.get.return_value = {"id": "123", "name": "Test Item"}
        mock_client.post.return_value = {"id": "456", "status": "created"}

        # Configure common error scenarios
        mock_client.get_not_found = Mock(side_effect={ServiceName}NotFoundError("Item not found"))

        return mock_client
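
A hedged example of how these mocks might be consumed from pytest-style unit tests; the endpoints and assertions are illustrative only:

import pytest

def test_get_returns_item():
    client = ServiceMockFactory.create_service_client_mock("service_a")
    item = client.get("/items/123")  # the mock ignores the endpoint and returns canned data
    assert item["name"] == "Test Item"

def test_auth_failure_raises():
    client = ServiceMockFactory.create_service_client_mock("service_a", is_auth_ok=False)
    with pytest.raises(Exception):  # the service's AuthenticationError in a real test
        client.get("/items/123")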

Performance and Scalability

Caching and Connection Pooling

For high-throughput services, performance is critical. A properly configured HTTP client with effective connection pooling and caching reduces latency and minimizes the overhead of establishing new connections for every request.

import httpx

def create_optimized_http_client(base_url: str, verify: bool, timeout: int) -> httpx.AsyncClient:
    """
    Configures an optimized HTTP client with effective connection pooling
    and precise timeouts to ensure performance and reliability.
    """
    # Define connection limits to reuse connections efficiently
    limits = httpx.Limits(
        max_keepalive_connections=20,  # Max idle connections to keep open
        max_connections=100,           # Max total connections in the pool
    )

    # Configure granular timeouts; the `timeout` argument acts as the default
    # for any phase not set explicitly (including pool acquisition)
    timeouts = httpx.Timeout(
        timeout,       # Default for phases not overridden below
        connect=5.0,   # Time to establish a connection
        read=20.0,     # Time to wait for a chunk of the response
        write=10.0,    # Time to wait for a chunk of the request to be sent
    )

    return httpx.AsyncClient(
        base_url=base_url,
        limits=limits,
        timeout=timeouts,
        verify=verify,
        http2=True,  # Enable HTTP/2 multiplexing if the server supports it (requires the httpx[http2] extra)
    )
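
Connection pooling covers the transport side; for caching, even a tiny in-memory TTL cache in front of idempotent GET calls can cut latency noticeably. The sketch below is one possible approach under the assumption that a short-lived, per-process cache is acceptable; swap in Redis or a library like cachetools for anything shared or larger.

import time
from typing import Any, Optional

class TTLCache:
    """Minimal in-memory cache with per-entry expiry, for idempotent GET responses."""

    def __init__(self, ttl_seconds: float = 60.0):
        self.ttl = ttl_seconds
        self._store: dict[str, tuple[float, Any]] = {}

    def get(self, key: str) -> Optional[Any]:
        entry = self._store.get(key)
        if entry is None:
            return None
        expires_at, value = entry
        if time.monotonic() > expires_at:
            del self._store[key]   # Entry expired; drop it
            return None
        return value

    def set(self, key: str, value: Any) -> None:
        self._store[key] = (time.monotonic() + self.ttl, value)

# Inside a client's get(), the flow would look roughly like:
#   cached = cache.get(endpoint)
#   if cached is not None:
#       return cached
#   data = (await self._client.get(endpoint)).json()
#   cache.set(endpoint, data)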

Deployment Strategies

Container and Kubernetes-Ready Design

Design your application to be container-first using Docker. A multi-stage Dockerfile creates a small, secure, and efficient image. For scaling, use Kubernetes manifests to define deployments, services, and configurations, enabling automated scaling and high availability.

Secure Dockerfile:

# Stage 1: Builder - Installs dependencies into an in-project virtualenv
FROM python:3.11-slim AS builder
WORKDIR /app
RUN pip install poetry
# Keep the virtualenv at /app/.venv so the runtime stage can copy it
ENV POETRY_VIRTUALENVS_IN_PROJECT=true
COPY poetry.lock pyproject.toml ./
RUN poetry install --no-root --only main

# Stage 2: Runtime - Creates the final, minimal image
FROM python:3.11-slim
RUN useradd --create-home app
USER app
WORKDIR /home/app

# Copy dependencies from the builder stage and source code
COPY --from=builder /app/.venv ./.venv
COPY --chown=app:app src/ ./src/

# Put the copied virtualenv on PATH and define a health check
# (python:3.11-slim ships without curl, so use Python for the probe)
ENV PATH="/home/app/.venv/bin:$PATH"
HEALTHCHECK --interval=30s --timeout=10s --retries=3 \
  CMD python -c "import urllib.request; urllib.request.urlopen('http://localhost:8000/healthz')" || exit 1

EXPOSE 8000
CMD ["python", "-m", "src.mcp_{service_name}", "--transport", "streamable-http"]

Kubernetes Deployment YAML:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mcp-{service-name}
spec:
  replicas: 3
  selector:
    matchLabels:
      app: mcp-{service-name}
  template:
    metadata:
      labels:
        app: mcp-{service-name}
    spec:
      containers:
      - name: mcp-{service-name}
        image: your-registry/mcp-{service-name}:latest
        ports:
        - containerPort: 8000
        envFrom:
        - configMapRef:
            name: mcp-env-config
        - secretRef:
            name: mcp-api-secrets
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8000
          initialDelaySeconds: 30

Monitoring and Observability

Structured Logging and Metrics

Implement structured logging (e.g., JSON format) from the start. It allows for powerful filtering and querying in modern observability platforms. This is essential for debugging issues in a distributed, containerized environment.

import structlog
import sys

def configure_logging():
    """
    Sets up structured logging using structlog to output JSON logs.
    This enables easier parsing, searching, and analysis in production.
    """
    structlog.configure(
        processors=[
            structlog.stdlib.add_log_level,
            structlog.processors.TimeStamper(fmt="iso"),
            structlog.processors.JSONRenderer(),
        ],
        logger_factory=structlog.stdlib.LoggerFactory(),
        wrapper_class=structlog.stdlib.BoundLogger,
        cache_logger_on_first_use=True,
    )

    # Example usage:
    # log = structlog.get_logger("my_app")
    # log.info("server_started", port=8000, transport="http")
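
Logging covers the "what happened" side; for metrics, a sketch like the one below pairs counters and latency histograms with each tool call. It assumes the prometheus_client library and an arbitrary scrape port; any metrics backend works.

import time
from prometheus_client import Counter, Histogram, start_http_server

TOOL_CALLS = Counter(
    "mcp_tool_calls_total", "Number of MCP tool invocations", ["tool", "outcome"]
)
TOOL_LATENCY = Histogram(
    "mcp_tool_latency_seconds", "Tool execution latency in seconds", ["tool"]
)

def instrument_tool_call(tool_name: str, func, *args, **kwargs):
    """Wraps a synchronous tool call with a call counter and latency histogram."""
    start = time.perf_counter()
    try:
        result = func(*args, **kwargs)
        TOOL_CALLS.labels(tool=tool_name, outcome="success").inc()
        return result
    except Exception:
        TOOL_CALLS.labels(tool=tool_name, outcome="error").inc()
        raise
    finally:
        TOOL_LATENCY.labels(tool=tool_name).observe(time.perf_counter() - start)

# Expose metrics for Prometheus to scrape (port 9100 is an arbitrary choice):
# start_http_server(9100)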

Conclusion

Though this write-up is pretty brief in some areas, I hope it works as a guide for you (or your AI) to build scalable, secure, and user-friendly MCP servers. If you stick to these simple principles, your MCP server deployments will be capable of handling complex enterprise scenarios. Overall, I hope you enjoyed this guide. Feel free to hit me up on X if you have follow-up questions... PEACE!!! ✌️ https://x.com/PSkinnerTech

