🧠 Function Calling vs. Model Context Protocol (MCP): Building the Future of Intelligent AI Agents


As Large Language Models (LLMs) continue to mature, they are shifting from being just language generators to action-oriented agents capable of executing complex tasks. In this evolving landscape, two key paradigms have emerged for integrating LLMs with external systems:
Function Calling
Model Context Protocol (MCP)
In this blog, we'll dive deep into how these two approaches differ, where each shines, and how MCP is shaping the next generation of intelligent AI applications.
🔧 What is Function Calling?
Function Calling allows an LLM to translate natural language into structured API calls. It's a way to bridge the gap between user queries and backend logic.
✅ Key Characteristics:
Linear, Stateless: Each prompt results in an independent function call.
Predefined Functions: Developers must define the full list of callable functions at design time.
Predictable Flow: Ideal for straightforward tasks like weather retrieval, booking tickets, or database lookups.
📌 Example Use Case:
```json
{
  "name": "get_weather",
  "parameters": {
    "location": "Pune",
    "unit": "Celsius"
  }
}
```
An LLM can extract these parameters from user input and generate this call, but it cannot adapt if the toolset or flow changes at runtime.
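To make the design-time contract concrete, here is a minimal sketch of registering that function with an LLM, assuming the OpenAI Python SDK's chat-completions tools interface (the model name and schema details are illustrative):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The full list of callable functions is declared up front, at design time.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the weather forecast for a city",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string"},
                "unit": {"type": "string", "enum": ["Celsius", "Fahrenheit"]},
            },
            "required": ["location"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[{"role": "user", "content": "What's the weather in Pune tomorrow?"}],
    tools=tools,
)
# The model replies with a structured tool call rather than free text.
print(response.choices[0].message.tool_calls)
```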
Let us understand function calling in LLMs 🤔
🏗️ Architecture Overview: Function Calling with an LLM
🔍 Architecture Walkthrough with Example
1. User → Query
- The user starts by submitting a natural language query (e.g., "What's the weather in Pune tomorrow?").
2. Query → LLM (User Input)
- The query is passed to the LLM, which interprets the intent and extracts relevant parameters.
3. LLM → Function Definition
- The LLM matches the query to a predefined function schema (e.g., a get_weather(location, date) function).
4. Function Definition → Function Call
- Using the extracted parameters, the LLM constructs a function call in a structured format (typically JSON).
5. Function Call → Tools & API
- This structured call is routed to the appropriate external tool or API (like OpenWeather, Weatherstack, etc.).
6. Tools & API → Tool Output
- The API processes the call and returns structured output (like weather data, price quotes, etc.).
7. Tool Output → LLM (Prompt)
- The tool output is formatted into a prompt or data input and sent back to the LLM.
8. LLM → LLM Response (Generate)
- The LLM generates a user-friendly response using both the tool output and the original context.
9. LLM Response → User
- The final response is shown to the user (e.g., "The weather in Pune tomorrow is 28°C with clear skies"). Steps 4-9 are shown in code in the sketch after this list.
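Continuing the sketch above, steps 4-9 boil down to executing the tool call the model emitted and feeding the result back for a final answer; get_weather here is a hypothetical stub standing in for a real weather API:

```python
import json

# Hypothetical stub; a real implementation would call a weather API (step 5).
def get_weather(location: str, unit: str = "Celsius") -> dict:
    return {"location": location, "temp": 28, "unit": unit, "sky": "clear"}

message = response.choices[0].message
if message.tool_calls:
    call = message.tool_calls[0]
    args = json.loads(call.function.arguments)   # step 4: structured call
    output = get_weather(**args)                 # steps 5-6: tool runs, returns output

    followup = client.chat.completions.create(   # steps 7-8: output goes back to the LLM
        model="gpt-4o",
        messages=[
            {"role": "user", "content": "What's the weather in Pune tomorrow?"},
            message,  # the assistant turn containing the tool call
            {"role": "tool", "tool_call_id": call.id, "content": json.dumps(output)},
        ],
    )
    print(followup.choices[0].message.content)   # step 9: final user-facing reply
```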
🚀 What is Model Context Protocol (MCP)?
MCP is a more powerful framework introduced to move beyond the constraints of stateless function calling. Think of MCP as a USB-C port for AI agents: a standard interface for connecting LLMs to dynamic tools, resources, and workflows.
🧩 Key Features:
Stateful Sessions: Maintains memory and task continuity across multiple turns.
Dynamic Tool Discovery: Tools don't need to be predefined; MCP can discover and invoke tools at runtime (see the sketch after this list).
Multi-Step Workflow Support: Handles branching logic, intermediate steps, and long-running processes.
Additional Primitives:
Tools: Like function calls, but discoverable dynamically
Resources: External data sources or assets (e.g., documents, APIs, file systems)
Prompts: Context injection points that guide model behavior
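As a rough illustration of dynamic discovery, this sketch assumes the official mcp Python SDK and a hypothetical local server script, weather_server.py, spoken to over stdio:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    # Launch a (hypothetical) MCP server as a subprocess and talk to it over stdio.
    params = StdioServerParameters(command="python", args=["weather_server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Nothing was predefined: the client asks the server what it offers.
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

            # Invoke a discovered tool by name with structured arguments.
            result = await session.call_tool(
                "get_weather", {"location": "Pune", "date": "2025-05-11"}
            )
            print(result.content)

asyncio.run(main())
```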
Let us understand the Model Context Protocol 🤔
🏗️ Architecture Overview: MCP
🔍 Architecture Walkthrough with Example:
1. User Query
- A user sends a query (e.g., "What's the weather in Tokyo next week?").
2. MCP Client
- The query is received by the MCP Client, which acts as an orchestrator between the user, the LLM, and tools.
- The MCP Host (Claude) helps select the most appropriate MCP Tool for the job based on the query's context.
3. Tool Selection
- The LLM (Claude) decides which tool is needed (e.g., a weather API).
- This tool selection is returned to the MCP Client.
4. Tool Invocation via MCP Server
- The MCP Client makes a tool call request to the MCP Server.
- The MCP Server is responsible for:
  - Invoking the correct MCP Tool
  - Making requests to external APIs if needed
  - Returning the tool output to the MCP Client
5. Handling External APIs
- If needed, the MCP Tool or Server communicates with external web APIs (e.g., OpenWeatherMap).
- The server handles the response (e.g., a 200 OK) and prepares the tool output.
6. Response Back to LLM
- Tool output is passed back to the MCP Client, then to the LLM for response generation.
- The LLM generates the final output by combining its reasoning with the tool's result.
7. Final Response to User
- The system sends the final LLM response (with tool-enhanced content) back to the user. A minimal server-side sketch follows this list.
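On the server side, the MCP Server and MCP Tool from this walkthrough can be sketched with the official Python SDK's FastMCP helper; the hard-coded weather data stands in for the external web API call in step 5:

```python
# weather_server.py - a minimal MCP server exposing one tool over stdio.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("weather")

@mcp.tool()
def get_weather(location: str, date: str) -> dict:
    """Return a weather forecast for a location and date."""
    # A real tool would call an external web API here and handle its response.
    return {"location": location, "date": date, "temp_c": 28, "sky": "clear"}

if __name__ == "__main__":
    mcp.run()  # defaults to the stdio transport
```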
📦 Key Components:
| Component | Role |
| --- | --- |
| MCP Client | Intermediary managing input/output and tool routing |
| MCP Host | LLM-powered assistant (e.g., Claude) that selects tools |
| MCP Server | Executes tool requests and manages tool/plugin calls |
| MCP Tool | Custom tool for specific tasks (e.g., weather, finance) |
| Web API | External service provider integrated into tools |
✅ Why is MCP Powerful? 🤔
Dynamic Tool Use: Tools are selected and used only when relevant.
Composable: New tools can be added without retraining the LLM.
Secure: Execution is separated from the LLM logic, reducing risk.
Scalable: Can serve many users and tasks simultaneously.
📝 Request Definition in MCP:
```python
request = {
    "tool_name": "get_weather_data",
    "input": {
        "location": "Pune",
        "date": "2025-05-11"
    }
}
```
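Assuming the client session from the earlier discovery sketch, a request shaped like this maps onto a single call_tool invocation:

```python
# Hypothetical: forwards the structured request through an open ClientSession.
result = await session.call_tool(request["tool_name"], request["input"])
```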
📊 Function Calling vs. MCP: Side-by-Side
| Feature | Function Calling | Model Context Protocol (MCP) |
| --- | --- | --- |
| State Management | Stateless | Stateful |
| Tool Definition | Predefined only | Dynamic discovery at runtime |
| Workflow Complexity | Simple, linear | Complex, multi-step, conditional |
| Extensibility | Limited to defined API calls | Supports tools, resources, and custom prompt injection |
| Use Case | Basic automation | Full AI agent orchestration |
🛠️ When to Use What?
| Scenario | Best Approach |
| --- | --- |
| Querying a weather API | Function Calling |
| Multi-step process with conditional logic | MCP |
| Static chatbot with fixed tools | Function Calling |
| AI-powered assistant with evolving tools | MCP |
🧪 Final Thoughts
While Function Calling laid the groundwork for LLM interactivity, MCP unlocks full-fledged autonomy. It's a major leap forward in building agentic AI systems that can reason, adapt, and interact with the world intelligently.
Thanks for reading the article! 🙏
