Why MCP Matters — And How to Build a Working MCP Server Yourself

Asif Siddique

Introduction

Ah, the Model Context Protocol (MCP)—because what the world truly needed was yet another acronym to spice up our tech conversations. Just when we thought we had enough protocols to keep our AI models busy, along comes MCP, promising to be the "USB-C port" for AI integrations. But don't worry, this isn't just another fleeting tech fad; it's here to standardize our AI's social life, ensuring they mingle seamlessly with every data source and tool in the digital playground. So, let's dive into the riveting world of MCP and discover how it's transforming AI interactions, one standardized handshake at a time.

But before we dive deeper into MCP, let’s quickly talk about LLMs — the brains behind the whole thing.

  • LLMs are advanced artificial intelligence models trained on vast amounts of data. By processing and learning from diverse datasets, they generate human-like responses, understand context, and perform a wide range of language-related tasks.

The thing with LLMs is — they’re super smart, no doubt. They can write like humans, understand context, even crack a decent joke. But there’s one big limitation: they don’t naturally have access to your world — your data, tools, or preferences. They’ve been trained on tons of past data, so unless someone gives them fresh info, they’re basically like a genius stuck in a time capsule — smart, but a little outdated.

The Next Iteration: Retrieval-Augmented Generation (RAG)

RAG is a system architecture that combines an information retrieval component with a generative LLM.
In simple terms, it lets you enhance or modify the output of an LLM by injecting real-time, contextual information from an external knowledge base or tool.
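
The retrieval half of RAG can be sketched in a few lines of plain JavaScript. This is a deliberately naive illustration (keyword-overlap scoring over an in-memory array; real systems use vector embeddings and a dedicated store), and all names here are made up for the example:

```javascript
// A toy RAG pipeline: retrieve relevant snippets, then inject them into the prompt.
// (Illustrative only — real systems use embeddings + a vector database.)
const knowledgeBase = [
  "Our standup meeting is every weekday at 9:30 AM.",
  "The production database is PostgreSQL 16.",
  "Support tickets are triaged in Linear.",
];

// Score each document by how many query words it contains
function retrieve(query, docs, topK = 2) {
  const words = query.toLowerCase().split(/\W+/).filter(Boolean);
  return docs
    .map((doc) => ({
      doc,
      score: words.filter((w) => doc.toLowerCase().includes(w)).length,
    }))
    .filter((d) => d.score > 0)
    .sort((a, b) => b.score - a.score)
    .slice(0, topK)
    .map((d) => d.doc);
}

// Build the augmented prompt the LLM actually sees
function buildPrompt(query, docs) {
  const context = retrieve(query, docs).join("\n");
  return `Context:\n${context}\n\nQuestion: ${query}`;
}

console.log(buildPrompt("When is the standup meeting?", knowledgeBase));
```

The LLM never "learns" the new facts; it simply answers the question with the retrieved context pasted in front of it.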

What is MCP?

MCP is an open protocol that standardizes how applications provide context to LLMs. Think of MCP like a USB-C port for AI applications. Just as USB-C provides a standardized way to connect your devices to various peripherals and accessories, MCP provides a standardized way to connect AI models to different data sources and tools.

Let’s break it down with a real-life example:

Imagine you're using a smart assistant like Siri or Alexa. Without any context, every time you say “schedule a meeting”, it might have to ask:

Which calendar? What time zone? Who's invited?

— every single time. Frustrating, right?

Now imagine if your assistant already knew:

  • You mean your work calendar.

  • You prefer meetings between 2–5 PM.

  • When you say “call,” you usually mean Zoom.

That’s exactly what MCP enables for LLMs.
It gives models structured, standardized context about your environment, preferences, and connected tools — so they don’t have to start from scratch each time.
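
To make the analogy concrete, here is a hypothetical sketch of the kind of structured context an MCP-connected assistant could draw on. The field names below are invented for illustration; they are not part of the MCP spec:

```javascript
// Hypothetical user context an assistant could obtain via MCP tools/resources.
// Field names are illustrative only — MCP standardizes how context is exchanged,
// not this particular schema.
const userContext = {
  defaultCalendar: "work",
  timeZone: "America/New_York",
  meetingWindow: { start: "14:00", end: "17:00" }, // prefers 2–5 PM
  callProvider: "zoom",
};

// With this context available, "schedule a call" needs no follow-up questions
function scheduleCall(title, ctx) {
  return {
    calendar: ctx.defaultCalendar,
    provider: ctx.callProvider,
    notBefore: ctx.meetingWindow.start,
    title,
  };
}

console.log(scheduleCall("Sync with design team", userContext));
```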

What’s The Difference Between MCP & RAG?

Both RAG and MCP aim to achieve the same goal — empowering LLMs with external context and tools so they can generate more accurate, focused, and useful responses. The key difference lies in how they achieve this: RAG is an architectural technique that retrieves and injects relevant data at runtime, while MCP is a standard protocol that defines how external systems should connect and provide structured context to the LLM.

Why MCP?

Large Language Models are smart — but only within their bubble. They don’t naturally know your data, your tools, or what’s going on in the outside world. That’s where MCP steps in.

It gives LLMs a standardized way to connect with tools and data sources, so you can actually build useful stuff — like smart agents and complex workflows — on top of them.

Bonus?
You get access to a growing ecosystem of ready-to-use integrations, and you're not locked into any specific LLM provider. Swap in whatever model you like, and it just works.

Digging Deep Into MCP

MCP is an open-source protocol introduced by Anthropic, with growing support from the AI community and clients such as Claude Desktop. Think of it like this: just as HTTP standardized how web browsers talk to servers (making the internet work smoothly), MCP aims to do the same — but for LLMs. It defines a clear, structured way to plug tools and data into language models, so instead of everyone hacking together custom solutions, there’s now a common language for integration.

Architecture

  • MCP Hosts:

    These are applications that act as the interface for LLMs, allowing them to request and use tools or data via MCP. Examples include Claude Desktop, Cursor, and other IDEs or AI tools that want to access data through MCP.

  • MCP Clients: Protocol clients that maintain 1:1 connections with servers.

  • MCP Servers: These are lightweight services that expose specific tools, actions, or datasets through the MCP protocol.

  • Tools/Services: These are the actual functionalities or databases (e.g., a calendar API, SQL database, or internal tool) that the MCP Server wraps and exposes.
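
Under the hood, MCP clients and servers exchange JSON-RPC 2.0 messages. A tool invocation, for example, looks roughly like this (payloads are simplified for illustration; see the MCP specification for the full schemas):

```javascript
// Simplified JSON-RPC 2.0 messages as exchanged between an MCP client and server.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "get-latest-news",
    arguments: { country: "us" },
  },
};

const response = {
  jsonrpc: "2.0",
  id: 1, // matches the request id
  result: {
    content: [{ type: "text", text: "Latest news for us: ..." }],
  },
};

console.log(JSON.stringify(request, null, 2));
```

Because every host, client, and server speaks this same wire format, any MCP server works with any MCP host without custom glue code.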

Building an MCP Server

In this example we’ll build a simple MCP server that fetches the latest news, and connect it to a host.

What we’ll be building

Many LLMs do not currently have the ability to fetch the latest news. Let’s use MCP to solve that!

We’ll build a server that exposes a tool: get-latest-news. Then we’ll connect the server to an MCP host (in this case, Claude for Desktop):

Without the MCP tool, if we ask Claude for the latest news, it can only answer from its training data and will say it has no access to real-time information:

Core MCP Concepts

MCP servers can provide three main types of capabilities:

  1. Resources: File-like data that can be read by clients (like API responses or file contents)

  2. Tools: Functions that can be called by the LLM (with user approval)

  3. Prompts: Pre-written templates that help users accomplish specific tasks
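
As a rough sketch, here is what results from each of the three capability types look like when returned to the client. The shapes are abbreviated from the MCP specification, so treat the details as illustrative:

```javascript
// Abbreviated result shapes for the three MCP capability types
// (illustrative; the MCP specification defines the full schemas).

// Resources: file-like data, addressed by URI
const resourceResult = {
  contents: [{ uri: "file:///logs/app.log", mimeType: "text/plain", text: "..." }],
};

// Tools: the result of a function call made by the LLM
const toolResult = {
  content: [{ type: "text", text: "Latest news for us: ..." }],
};

// Prompts: pre-written message templates for the user
const promptResult = {
  messages: [{ role: "user", content: { type: "text", text: "Summarize today's news." } }],
};

console.log(Object.keys(toolResult)); // tools return a "content" array
```

The tool-result shape is the one we’ll return from our own server below.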

Prerequisite Knowledge

This quickstart assumes you have familiarity with:

  • TypeScript

  • LLMs like Claude

Setting Up Environment

Let’s set up the project:

# Create a new directory for our project
mkdir news-fetcher
cd news-fetcher

# Initialize a new npm project
npm init -y

# Install dependencies
npm install @modelcontextprotocol/sdk zod

# Create our files
mkdir src
touch src/index.js

Update package.json to add "type": "module" (so we can use ES module imports) and a start script:

{
  "name": "news-mcp",
  "version": "1.0.0",
  "main": "src/index.js",
  "type": "module",
  "scripts": {
    "start": "node src/index.js"
  },
  "keywords": [],
  "author": "",
  "license": "ISC"
}

Building the server

Add these inside src/index.js:

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// NewsAPI key, supplied via the API_KEY environment variable
const API_KEY = process.env.API_KEY;
const NWS_API_BASE =
  "https://newsapi.org/v2/top-headlines?";
const USER_AGENT = "news-app/1.0";

// Create server instance
const server = new McpServer({
  name: "News",
  version: "1.0.0",
  capabilities: {
    resources: {},
    tools: {},
  },
});

Helper Functions

Next, let’s add our helper functions for querying and formatting the data:

// Generic helper for querying the NewsAPI endpoint
async function makeNWSRequest(url) {
  const headers = {
    "User-Agent": USER_AGENT,
    Accept: "application/json",
  };

  try {
    const response = await fetch(url, { headers });
    if (!response.ok) {
      throw new Error(`HTTP error! status: ${response.status}`);
    }
    return await response.json();
  } catch (error) {
    console.error("Error making NWS request:", error);
    return null;
  }
}

function formatNews(article) {
  return [
    `Title: ${article.title || "Unknown"}`,
    `Source: ${article.source?.name || "Unknown"}`,
    `Published At: ${article.publishedAt || "Unknown"}`,
    `Description: ${article.description || "Unknown"}`,
    `URL: ${article.url || "Unknown"}`,
    `Content: ${article.content || "Unknown"}`,
  ].join("\n");
}

Implementing the tool execution

The tool execution handler is responsible for actually executing the logic of each tool.

server.tool(
  "get-latest-news",
  "Get Latest News of a country",
  {
    country: z.string().describe("Country code to fetch news for"),
  },
  async ({ country }) => {
    // Default to "us" if no country code is supplied (NewsAPI uses ISO 3166-1 codes)
    const countryCode = (country || "us").toLowerCase();
    const newsUrl = `${NWS_API_BASE}country=${countryCode}&apiKey=${API_KEY}`;
    const newsResponse = await makeNWSRequest(newsUrl);

    if (!newsResponse || newsResponse.status !== "ok") {
      return {
        content: [
          {
            type: "text",
            text: `Error fetching news: ${
              newsResponse?.status || "Unknown error"
            }`,
          },
        ],
        isError: true,
      };
    }

    const formattedNews = newsResponse.articles.map(formatNews);
    const newsText =
      `Latest news for ${countryCode}:\n\n` + formattedNews.join("\n\n"); // Added extra newline for better readability

    return {
      content: [
        {
          type: "text",
          text: newsText,
        },
      ],
    };
  }
);

Running the server

Finally, connect the server to a stdio transport at the bottom of src/index.js:

const transport = new StdioServerTransport();
await server.connect(transport);

Adding MCP Configuration for the Client

To configure the MCP settings on Claude Desktop, follow these steps:

  1. Open the configuration file

    Use the following command in your terminal to open the config file.

code ~/Library/Application\ Support/Claude/claude_desktop_config.json

  2. Add the MCP server configuration

    Inside the JSON file, add the following configuration (or merge it into the existing structure if needed):

{
  "mcpServers": {
    "news": {
      "command": "node",
      "args": ["/PATH/TO/SERVER/FOLDER/news-mcp/src/index.js"]
    }
  }
}

  3. Save and restart Claude Desktop

After editing, save the file and restart Claude Desktop to apply the changes.

⚠️ Note:

The exact configuration structure might vary depending on the client.
Some clients may require a different key name, additional fields, or a different file path altogether.

Testing MCP tool with Client

Now, we should be able to see the hammer icon in Claude Desktop.

Clicking it reveals the available MCP tools:

As shown, the get-latest-news tool is now available. Let’s try asking for the latest news again.

This time, we receive a proper response — and it also clearly mentions which tool was used to fetch the information.

Conclusion

In this exploration, we’ve delved into the what, why, and how of the Model Context Protocol (MCP), and demonstrated a fundamental implementation with a simple news-fetcher tool integrated with our client. This setup serves as an introductory example of MCP’s capabilities, which extend far beyond this basic use case.

MCPs are rapidly gaining traction across industries, with organizations developing tailored MCPs for various tools. For instance, companies have created MCPs for platforms like WhatsApp, which can retrieve the latest chats, extract audio messages, and more directly from the client itself.

Major tech giants such as Google, Microsoft, IBM, AWS, and Cloudflare have also begun offering MCP servers, showcasing the growing adoption and potential of this protocol.

As the ecosystem continues to evolve, we can expect MCPs to become a standard approach for building and integrating tools, offering enhanced interoperability and more seamless experiences. The future of MCP is promising, and we are likely to see it become a cornerstone of how modern applications and services interact.
