Understanding the Model Context Protocol (MCP)

Rodolfo Yabut
5 min read

It’s March 2025, and I bet you’ve been seeing “MCP” all over your AI newsfeeds for the past few weeks. What the heck is it?

Model Context Protocol (MCP) is a new spec proposed by Anthropic that aims to standardize how large language models (LLMs) communicate with external tools and data sources. This post explores MCP's motivations, architecture, and ways it simplifies AI workflows. I'm learning about this topic alongside you and the spec is in its infancy, so consider this an introductory guide to help give you a high-level overview based on what I've discovered so far. It is by no means exhaustive.

In fact, I encourage you to read through the spec.

https://spec.modelcontextprotocol.io/specification/2024-11-05/


Why Do We Need MCP? Making the Case for a Standard Protocol

Integrating LLMs with external tools, APIs, and databases often presents significant challenges:

  1. Fragmented APIs: Each tool or service requires custom integration, increasing development time and maintenance complexity.

  2. Lack of Standardization: Without a unified protocol, developers must repeatedly address issues like communication, serialization, and error handling.

  3. Security Risks: Ad hoc integrations can expose vulnerabilities such as improper access control or data leaks.

  4. Vendor Lock-In: Proprietary APIs tie applications to specific LLM providers, limiting flexibility.

MCP addresses these issues by acting as a universal adapter that connects LLMs to external systems. It serves as an upstream layer that enforces data contracts between agents. And though you can build an MCP server in the same service as your business logic, I’m guessing production-grade remote MCP servers will look like BFFs (backends-for-frontends), query facades, or API gateways.


Core Concepts (EM-SEE-PEE)

MCP's Defining Feature is the Ability to Introspect Server Capabilities

Unlike REST APIs, which rely on fixed endpoints and external documentation such as OpenAPI schemas rather than built-in introspection, MCP follows GraphQL's approach of letting clients query the API itself. Instead of introspecting data schemas, though, MCP clients dynamically query the Context: the resources, tools, and prompts a server exposes.
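To make introspection concrete, here is a sketch of the JSON-RPC messages behind it. The `tools/list` method name comes from the MCP spec; the `calculate-bmi` tool and its parameters are just this post's hypothetical BMI example, and the response is abbreviated:

```typescript
// A client asks the server what tools it currently offers.
const listToolsRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/list",
  params: {}
};

// The server replies with its live capability list, including each
// tool's input schema, so the client discovers everything at runtime.
const listToolsResponse = {
  jsonrpc: "2.0",
  id: 1,
  result: {
    tools: [
      {
        name: "calculate-bmi",
        inputSchema: {
          type: "object",
          properties: {
            weightKg: { type: "number" },
            heightM: { type: "number" }
          }
        }
      }
    ]
  }
};

console.log(listToolsResponse.result.tools.map((t) => t.name));
```

Because the capability list is queried rather than hardcoded, a server can add or remove tools and a well-behaved client simply sees the updated list.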

Model

This refers to the LLM that consumes the Context in MCP.

Context

  • Resources: Read-only endpoints that allow LLMs to load data into their context without triggering side effects.

  • Tools: Executable actions that enable LLMs to perform computations or interact with external systems.

  • Prompts: Reusable prompt templates that standardize interactions between LLMs and tools.
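To make "reusable prompt templates" concrete, here is a sketch of the message structure a server returns when a client fetches a prompt (via the spec's `prompts/get` method). The `summarize-profile` prompt, its description, and its text are made up for illustration; the `role`/`content` shape mirrors the spec's prompt messages:

```typescript
// A hypothetical result for a "summarize-profile" prompt: a prompt is
// essentially a named, parameterized list of chat messages that the
// server fills in before returning.
const promptResult = {
  description: "Summarize a user's health profile",
  messages: [
    {
      role: "user",
      content: {
        type: "text",
        text: "Summarize this profile in two sentences: height 1.75 m, weight 70 kg."
      }
    }
  ]
};

console.log(promptResult.messages[0].content.text);
```

The point is that prompts live on the server, versioned alongside the tools they drive, instead of being copy-pasted into every client.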

Protocol

  • JSON-RPC 2.0 messaging

  • Remote & Local Communication

    • Server-Sent Events (SSE)

  • Local-only Communication

    • stdio (standard input/output)
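Whichever transport carries the bytes, the envelope is the same JSON-RPC 2.0 shape. Here is a sketch of a `tools/call` request (the method name is from the spec; the tool and arguments reuse this post's hypothetical BMI example):

```typescript
// The same request works over stdio (local) or SSE (remote) —
// the transport only moves the bytes, the envelope never changes.
const callToolRequest = {
  jsonrpc: "2.0",
  id: 2,
  method: "tools/call",
  params: {
    name: "calculate-bmi",
    arguments: { weightKg: 70, heightM: 1.75 }
  }
};

console.log(JSON.stringify(callToolRequest));
```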

Topology

MCP is built on a client-server architecture with clearly defined roles and supports bidirectional communication and introspection:

  • Hosts: Applications (e.g., IDEs or chat platforms) that integrate with LLMs and initiate connections to MCP servers. Example: Cursor.

  • Clients: Embedded within hosts, clients manage communication with MCP servers and handle routing, introspection queries, and result coordination. Example: Cursor Composer Agent-mode.

  • Servers: Expose tools, resources, and prompt interfaces that LLMs can invoke dynamically. These tools include agents, retrieval APIs, web search, and other services invoked by the client. So if you’re developing an agent and you want it to be usable by other agents or applications, you have to create an MCP server for it.

flowchart LR
    subgraph "Host (e.g., IDE or Chat App)"
        host1["Host UI / LLM Integration"]
        client1["MCP Client (e.g., Composer)"]
        host1 --> client1
    end

    subgraph "MCP Servers"
        server1["MCP Server (Tools: Agents, Search, etc)"]
    end

    client1 <-->|"Bidirectional Transport (JSON-RPC, SSE, stdio)"| server1

Example of the workflow

Consider an LLM tasked with calculating a user's BMI:

  • Step 1: The LLM queries a resource (users://{userId}/profile) to fetch the user's height and weight.

  • Step 2: It invokes a tool (calculate-bmi) to compute the BMI.

  • Step 3: The result is returned to the host application for display or further processing.

The Agent performs these tasks dynamically as needed since it has access to the Tools and Resources. It can then forward the output to another Agent that has its own set of tools.
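With hypothetical numbers plugged in, the three steps reduce to a small computation on the tool side (BMI = weight in kg divided by height in meters squared):

```typescript
// Step 1 (hypothetical): profile data fetched from the resource.
const weightKg = 70;
const heightM = 1.75;

// Step 2: what the calculate-bmi tool computes.
const bmi = (weightKg / (heightM * heightM)).toFixed(2);

// Step 3: this string is what gets returned to the host.
console.log(`BMI: ${bmi}`); // → "BMI: 22.86"
```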

MCP Server Exposing Resources and Tools

The following example demonstrates how an MCP server exposes resources and tools:

import { McpServer, ResourceTemplate } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({
  name: "Health App",
  version: "1.0.0"
});

// Define a resource for user profiles
server.resource(
  "user-profile",
  new ResourceTemplate("users://{userId}/profile", { list: undefined }),
  async (uri, { userId }) => ({
    contents: [
      {
        uri: uri.href,
        text: `Profile data for user ${userId}`
      }
    ]
  })
);

// Define a tool for BMI calculation
server.tool(
  "calculate-bmi",
  {
    weightKg: z.number(),
    heightM: z.number()
  },
  async ({ weightKg, heightM }) => ({
    content: [
      {
        type: "text",
        text: `BMI: ${(weightKg / (heightM * heightM)).toFixed(2)}`
      }
    ]
  })
);

// Start the server with a transport layer
const transport = new StdioServerTransport();
await server.connect(transport);

Biggest Benefits of MCP for LLM-Based Workflows

  1. Interoperability: MCP enables seamless switching between LLM providers without rewriting integrations.

  2. Extensibility: New tools and resources can be added without impacting existing clients.

  3. Security (draft): Built-in mechanisms for authentication, authorization, and rate limiting are being specified to protect sensitive data.

  4. Shared Standard: You don’t have to design all of this yourself, then convince a whole bunch of people to adopt it.


Roadmap and the Future

MCP provides an open standard for integrating LLMs with external tools, data sources, and other agents, and it promises to simplify interoperability between agentic systems. Keep in mind that the spec is still in its early days, and features such as authentication are still being drafted and formalized. You can find the roadmap here:

https://modelcontextprotocol.io/development/roadmap

I hope this gives you a clear understanding of what MCP is. I've been exploring MCP and have created a reference implementation for an MCP server that runs on Cursor. I plan to explore the specifications further and develop a client implementation too. I'll share more of my findings in a follow-up post. 🙂
