🧠 Function Calling vs. Model Context Protocol (MCP): Building the Future of Intelligent AI Agents

ANURAG KULE
6 min read

As Large Language Models (LLMs) continue to mature, they are shifting from being just language generators to action-oriented agents capable of executing complex tasks. In this evolving landscape, two key paradigms have emerged for integrating LLMs with external systems:

  • Function Calling

  • Model Context Protocol (MCP)

In this blog, we'll dive deep into how these two approaches differ, where each shines, and how MCP is shaping the next generation of intelligent AI applications.

🔧 What is Function Calling?

Function Calling allows an LLM to translate natural language into structured API calls. It's a way to bridge the gap between user queries and backend logic.

✅ Key Characteristics:

  • Linear, Stateless: Each prompt results in an independent function call.

  • Predefined Functions: Developers must define the full list of callable functions at design time.

  • Predictable Flow: Ideal for straightforward tasks like weather retrieval, booking tickets, or database lookups.

πŸ” Example Use Case:

```json
{
  "name": "get_weather",
  "parameters": {
    "location": "Pune",
    "unit": "Celsius"
  }
}
```

An LLM can extract these parameters from user input and generate this call, but it cannot adapt if the toolset or flow changes at runtime.
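
To see what the model works from, here is a minimal sketch of how such a function might be declared for the LLM, in the style of OpenAI's tool-definition schema. The description strings and the unit enum are illustrative assumptions added here, not part of the example above.

```python
# Hedged sketch: an OpenAI-style tool definition for get_weather.
# The model never executes this; it only sees the schema and emits
# a structured call like the JSON above when a query matches.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the weather for a city.",  # illustrative
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {"type": "string", "description": "City name, e.g. Pune"},
                    "unit": {"type": "string", "enum": ["Celsius", "Fahrenheit"]},
                },
                "required": ["location"],
            },
        },
    }
]
```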

Let's understand function calling in an LLM 🤔

πŸ—οΈ Architecture Overview: Function Calling using LLM

🔄 Architecture Walkthrough with an Example

1. User → Query

  • The user starts by submitting a natural language query (e.g., "What's the weather in Pune tomorrow?").

2. Query → LLM (User Input)

  • The query is passed to the LLM, which interprets the intent and extracts relevant parameters.

3. LLM → Function Definition

  • The LLM matches the query to a predefined function schema (e.g., a get_weather(location, date) function).

4. Function Definition → Function Call

  • Using the extracted parameters, the LLM constructs a function call in a structured format (typically JSON).

5. Function Call → Tools & API

  • This structured call is routed to the appropriate external tool or API (like OpenWeather, Weatherstack, etc.).

6. Tools & API → Tool Output

  • The API processes the call and returns structured output (like weather data, price quotes, etc.).

7. Tool Output → LLM (Prompt)

  • The tool output is formatted into a prompt or data input and sent back to the LLM.

8. LLM → LLM Response (Generate)

  • The LLM generates a user-friendly response using both the tool output and the original context.

9. LLM Response → User

  • The final response is shown to the user (e.g., "The weather in Pune tomorrow is 28°C with clear skies").
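
The walkthrough above maps to a small orchestration loop in code. Below is a hedged sketch of steps 4 through 8, assuming the LLM has already emitted the structured call; the get_weather implementation is a local stand-in for a real weather API, not an actual integration.

```python
import json

# Steps 5-6 stand-in: a real system would call OpenWeather, Weatherstack, etc.
def get_weather(location: str, unit: str = "Celsius") -> dict:
    return {"location": location, "temperature": 28, "unit": unit, "sky": "clear"}

# The fixed registry of callable functions, defined at design time
# (the "predefined functions" constraint of function calling).
FUNCTIONS = {"get_weather": get_weather}

# Step 4: the structured call emitted by the LLM (see the JSON above).
llm_call = '{"name": "get_weather", "parameters": {"location": "Pune", "unit": "Celsius"}}'

# Steps 5-7: parse the call, dispatch it, and capture the tool output.
call = json.loads(llm_call)
tool_output = FUNCTIONS[call["name"]](**call["parameters"])

# Step 8: in a real system this output is appended to the conversation
# and sent back to the LLM, which generates the final user-facing reply.
print(json.dumps(tool_output))
```

Note how the dispatcher can only route to names already in FUNCTIONS: if the toolset changes at runtime, the registry (and usually the prompt) must be rebuilt, which is exactly the limitation MCP addresses.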

🌐 What is Model Context Protocol (MCP)?

MCP is a more powerful framework introduced to move beyond the constraints of stateless function calling. Think of MCP as a USB-C port for AI agents: a standard interface for connecting LLMs to dynamic tools, resources, and workflows.

🧩 Key Features:

  • Stateful Sessions: Maintains memory and task continuity across multiple turns.

  • Dynamic Tool Discovery: Tools don't need to be predefined; MCP can discover and invoke tools at runtime (see the server sketch after this list).

  • Multi-Step Workflow Support: Handles branching logic, intermediate steps, and long-running processes.

  • Additional Primitives:

    • Tools: Like function calls, but discoverable dynamically

    • Resources: External data sources or assets (e.g., documents, APIs, file systems)

    • Prompts: Context injection points that guide model behavior
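
To make the Tools primitive concrete, here is a minimal server sketch using the official MCP Python SDK (the `mcp` package and its FastMCP helper). The server name, tool body, and returned string are placeholder assumptions; a real tool would call an actual weather API.

```python
# Minimal MCP server sketch (pip install mcp). FastMCP derives the tool's
# schema from the function signature and docstring, which is what lets an
# MCP client discover the tool at runtime instead of hard-coding it.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("weather")  # placeholder server name

@mcp.tool()
def get_weather(location: str, date: str) -> str:
    """Return a weather summary for a location and date."""
    # Placeholder: a real implementation would query an external weather API.
    return f"Weather in {location} on {date}: 28°C, clear skies."

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio by default
```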

Let's understand the Model Context Protocol 🤔

πŸ—οΈ Architecture Overview: MCP

πŸ” Architecture Walkthrough with example:

1. User Query

  • A user sends a query (e.g., "What's the weather in Tokyo next week?").

2. MCP Client

  • The query is received by the MCP Client, which acts as an orchestrator between the user, the LLM, and tools.

  • The MCP Host (Claude) helps select the most appropriate MCP Tool for the job based on the query's context.

3. Tool Selection

  • The LLM (Claude) decides which tool is needed (e.g., a weather API).

  • This tool selection is returned to the MCP Client.

4. Tool Invocation via MCP Server

  • The MCP Client makes a tool call request to the MCP Server.

  • The MCP Server is responsible for:

    • Invoking the correct MCP Tool

    • Making requests to external APIs if needed

    • Returning the tool output to the MCP Client

5. Handling External APIs

  • If needed, the MCP Tool or Server communicates with external Web APIs (e.g., OpenWeatherMap).

  • The server handles the response (e.g., 200 OK) and prepares the tool output.

6. Response Back to LLM

  • Tool output is passed back to the MCP Client, then to the LLM for response generation.

  • The LLM generates the final output by combining its reasoning with the tool's result.

7. Final Response to User

  • The system sends the final LLM response (with tool-enhanced content) back to the user.
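
On the client side, the same Python SDK can sketch steps 2 through 6: connect to an MCP server, discover its tools at runtime, and invoke one. This is a hedged sketch; the server command and tool arguments are assumptions tied to the server example above, and in a real deployment an LLM host such as Claude would sit between list_tools() and call_tool() to choose the tool.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    # Launch and connect to the (hypothetical) weather server over stdio.
    server = StdioServerParameters(command="python", args=["weather_server.py"])
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Dynamic tool discovery: nothing about get_weather is hard-coded here.
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])
            # Tool invocation via the MCP Server (steps 4-6 of the walkthrough).
            result = await session.call_tool(
                "get_weather", {"location": "Tokyo", "date": "2025-05-18"}
            )
            print(result.content)

asyncio.run(main())
```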

📦 Key Components:

| Component | Role |
| --- | --- |
| MCP Client | Intermediary managing input/output and tool routing |
| MCP Host | LLM-powered assistant (e.g., Claude) that selects tools |
| MCP Server | Executes tool requests and manages tool/plugin calls |
| MCP Tool | Custom tool for specific tasks (e.g., weather, finance) |
| Web API | External service provider integrated into tools |

✅ Why is MCP Powerful? 🤔

  • Dynamic Tool Use: Tools are selected and used only when relevant.

  • Composable: New tools can be added without retraining the LLM.

  • Secure: Execution is separated from the LLM logic, reducing risk.

  • Scalable: Can serve many users and tasks simultaneously.

πŸ” Request Definition in MCP:

```python
request = {
    "tool_name": "get_weather_data",
    "input": {
        "location": "Pune",
        "date": "2025-05-11"
    }
}
```
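
In the client sketch earlier, a request like this corresponds to a single session.call_tool("get_weather_data", {"location": "Pune", "date": "2025-05-11"}) invocation: the MCP Server matches tool_name against its registered tools and routes the input payload to that tool's handler. (The tool name here differs from the earlier get_weather example; both are illustrative.)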

πŸ” Function Calling vs MCP – Side-by-Side

| Feature | Function Calling | Model Context Protocol (MCP) |
| --- | --- | --- |
| State Management | Stateless | Stateful |
| Tool Definition | Predefined only | Dynamic discovery at runtime |
| Workflow Complexity | Simple, linear | Complex, multi-step, conditional |
| Extensibility | Limited to defined API calls | Supports tools, resources, and custom prompt injection |
| Use Case | Basic automation | Full AI agent orchestration |

πŸ› οΈ When to Use What?

| Scenario | Best Approach |
| --- | --- |
| Querying a weather API | Function Calling |
| Multi-step process with conditional logic | MCP |
| Static chatbot with fixed tools | Function Calling |
| AI-powered assistant with evolving tools | MCP |

🧪 Final Thoughts

While Function Calling laid the groundwork for LLM interactivity, MCP unlocks full-fledged autonomy. It's a major leap forward in building agentic AI systems that can reason, adapt, and interact with the world intelligently.

Thanks for reading the article! 😊
