MCP Overview: A Deep Dive

Raghu Charan V

Anthropic is well known for its coding models and their benchmark performance. Models like Claude Sonnet and Claude Opus 4 power agentic coding assistants such as Cursor, Claude Code, and more. But there is a gap between an LLM and its access to external context. In 2023, OpenAI introduced function calling, which opened a new way to bring external data into an LLM as context. This led Anthropic to build the Model Context Protocol (MCP), which is designed to pull context from any resource through a client-server protocol.

MCP Architecture Overview:

MCP diagram from Anthropic

MCP Components

  1. MCP Client: The component inside a host application, such as a desktop app or IDE, that connects to MCP servers based on its configuration.

  2. MCP Protocol: The end-to-end protocol between the MCP client and server, built on JSON-RPC rather than REST, and designed to be secure, safe, and flexible. It supports transports such as stdio and SSE (Server-Sent Events).

  3. MCP Server: A lightweight server that exposes tools, resources, and prompts, the core features of MCP, to connected clients. A minimal client-side sketch follows below.
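To make the client side concrete, here is a minimal sketch of a Python MCP client that launches a server over the stdio transport and lists its tools, using the MCP Python SDK. The server command ("python server.py") is a placeholder for whichever server script you configure.

import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Placeholder: launch your own MCP server script over stdio
server_params = StdioServerParameters(command="python", args=["server.py"])

async def main():
    # Spawn the server process and open read/write streams to it
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            # Perform the MCP initialization handshake
            await session.initialize()
            # Ask the server which tools it exposes
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

asyncio.run(main())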

A code walk-through of configuring MCP servers.

MCP from Anthropic supports three core features:

Tools: These let the LLM take actions on external systems and data sources, or perform computation.

Below is an example of an MCP tool that executes shell commands based on user requirements. The LLM supplies a command derived from the user's query; the tool runs it as a subprocess and returns the command's output.

from mcp.server.fastmcp import FastMCP
import subprocess

# Initialize the MCP server
mcp = FastMCP("Cmd")

# Register the command-execution tool
@mcp.tool()
def run_command(command: str) -> str:
    """Execute a shell command and return its output.

    Args:
        command (str): the shell command to execute
    Returns:
        str: the command's output
    """
    # Note: executing arbitrary shell commands is dangerous; restrict this in production
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    return result.stdout if result.returncode == 0 else result.stderr

if __name__ == "__main__":
    # Run the server (stdio transport by default)
    mcp.run()
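As a rough sketch of how a client might invoke this tool, assuming the initialized ClientSession from the client example above, the call would look like this; the tool name "run_command" matches the function above, and the command string is only an illustration.

# Inside an initialized ClientSession (see the client sketch above)
result = await session.call_tool("run_command", arguments={"command": "echo hello"})
print(result.content)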

Resources: Data or content hosted on a server that the LLM can access through the client. A resource can be anything the server exposes that is useful as context during the LLM's runtime execution.

Resources represent any kind of data that an MCP server wants to make available to clients. This can include:

  • File contents

  • Database records

  • API responses

  • Live system data

  • Screenshots and images

  • Log files

Below is an example that lists the resources available on a server for a particular course.

from mcp.server import Server
from mcp import types

# The low-level Server API lets the handler return a custom resource list
app = Server("files")

@app.list_resources()
async def list_resources() -> list[types.Resource]:
    return [
        types.Resource(
            uri="files://opt/resources/course_overview.md",
            name="Course Overview",
            mimeType="text/markdown"
        ),
        types.Resource(
            uri="files://opt/resources/syllabus.pdf",
            name="Syllabus",
            mimeType="application/pdf"
        ),
        types.Resource(
            uri="files://opt/resources/lecture1.mp4",
            name="Lecture 1 - Introduction",
            mimeType="video/mp4"
        ),
        types.Resource(
            uri="files://opt/resources/code_samples.zip",
            name="Code Samples",
            mimeType="application/zip"
        ),
    ]
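Listing resources only advertises what is available; the server also needs a handler that returns the actual content when a client reads a URI. Below is a minimal sketch of such a handler for the same files server, under the assumption that the text resources live on disk under /opt/resources; the path-mapping logic is illustrative, not part of the MCP spec.

from pathlib import Path
from pydantic import AnyUrl

# Assumption: resource URIs map to files under /opt/resources on the server's disk
RESOURCE_ROOT = Path("/opt/resources")

@app.read_resource()
async def read_resource(uri: AnyUrl) -> str:
    # uri.path is e.g. "/resources/course_overview.md"; resolve it to a local file
    file_path = RESOURCE_ROOT / Path(str(uri.path or "")).name
    return file_path.read_text()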

Prompts: These are reusable prompt templates that a server exposes to clients. They are useful when working with domain-specific user perspectives, letting you reuse prompts across clients based on requirements.

For example, if we are working in Cursor, we can add specific user or system prompts based on our perspective and requirements, and Cursor will work on that basis.

Below is an example of a code-evaluate prompt that evaluates code based on the prompt's description.

from mcp import types

PROMPTS = {
    "code-evaluate": types.Prompt(
        name="code-evaluate",
        description="Evaluate the code for security conflicts, check any system imports, "
                    "and check that code quality, comments, and PEP 8 formatting are present.",
        arguments=[
            types.PromptArgument(
                name="code",
                description="Code to review",
                required=True
            )
        ]
    )
}
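A dictionary of types.Prompt objects is not exposed to clients on its own; the server still needs handlers that list the prompts and render one when asked. Below is a minimal sketch of that wiring, assuming the same low-level Server instance (app) and types import from the resources example; the way the code argument is embedded into the message text is an illustrative choice.

# Expose the prompts defined above to connected clients
@app.list_prompts()
async def list_prompts() -> list[types.Prompt]:
    return list(PROMPTS.values())

@app.get_prompt()
async def get_prompt(name: str, arguments: dict[str, str] | None = None) -> types.GetPromptResult:
    prompt = PROMPTS[name]
    code = (arguments or {}).get("code", "")
    # Render the prompt into a single user message containing the code to review
    return types.GetPromptResult(
        description=prompt.description,
        messages=[
            types.PromptMessage(
                role="user",
                content=types.TextContent(type="text", text=f"{prompt.description}\n\n{code}")
            )
        ]
    )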

There are more features in MCP, such as:

  • Sampling

  • Roots

  • Elicitation

These will be explored in upcoming blogs.

Thanks for reading.


Written by

Raghu Charan V

A professional AI engineer, developing agentic workflows and autonomous agents in action.