The Git Oracle: A Journey into the Model Context Protocol (MCP)


Prologue: Git Hell
"It's not that Git is hard. It's that we've made a mess, and now Git is telling us exactly how big a mess we've made." — Sarah, Lead Developer
Three monitors, seven terminal windows, and a whiteboard full of hastily scrawled branch names that looked more like a conspiracy theory than a development workflow. That was my Tuesday morning.
I was staring at what our team had lovingly named "The Merge From Hell" — a three-way collision between our main branch, the new payment system, and a UI redesign that touched every corner of our codebase. Git was dutifully showing me thousands of conflicts across 87 files.
"This is insane," I muttered, reaching for my fourth coffee. "There's got to be a better way."
Little did I know that this moment of frustration would lead us down a rabbit hole, into the creation of what we'd later call "The Git Oracle" — and our discovery of a powerful new protocol that would change how we think about AI development forever.
Act I: The Dream
"What if Git could just... understand what we want?"
The question came from Alex, our junior dev who'd spent the last three hours trying to cherry-pick specific changes from a massive commit. We were gathered in the conference room for what was supposed to be a quick standup, but had devolved into a Git therapy session.
"Imagine if we could just ask it: 'Show me who last modified this function and why.' Or 'Stage all the files related to the payment API but not the UI changes,'" Alex continued, gesturing wildly at his laptop.
"Or 'Help me understand this merge conflict,'" added Sarah, our lead developer, who'd been battling the same conflict resolution for days.
I leaned back in my chair, the beginnings of an idea forming. "What if we actually built that? An intelligent Git client. Something that combines Git's power with an LLM that actually understands what you're trying to do."
The room fell silent. Then smiles spread, the kind that appear when engineers sense an interesting problem.
"The Git Oracle," Raj, our infrastructure wizard, said with a grin. "I like it."
Twenty minutes and one whiteboard later, we had the beginnings of a plan. Build a new kind of Git client that would:
- Show the repository state clearly and intuitively
- Provide intelligent shortcuts for common Git tasks
- Allow natural language commands to perform Git operations
- Proactively analyze repositories to provide insights
"Shouldn't be too hard," I said with a confidence I didn't feel. "We just need to connect Git, a UI, and an LLM."
Sarah raised an eyebrow. "That's like saying we just need to connect the engine, wheels, and steering wheel to build a Formula 1 car."
She was right, of course. But none of us knew just how right until we started building.
Act II: The Wall
Two weeks later, we hit our first major roadblock.
"It's a mess," Raj declared, slapping his laptop shut during our update meeting. "We've got three separate systems that don't know how to talk to each other."
The prototype was a Frankenstein's monster. We had:
- A sleek Electron-based UI that could display Git status
- A Git service that could execute commands
- An LLM integration that could generate text
But they were disconnected islands. The UI couldn't discover what the Git service could do. The LLM couldn't trigger Git operations. The Git service couldn't tell the UI when things changed.
"We need a protocol," Sarah said quietly, scrolling through documentation on her laptop. "Not just random API calls between components."
"Like JSON-RPC?" Alex suggested.
"Deeper than that," Sarah replied. "We need something that defines not just how messages are formatted, but what kinds of interactions are possible. Something that handles discovery, capabilities, permissions..."
She turned her laptop around to show us a page titled "Model Context Protocol."
"I think this is what we need."
Act III: Resources - Seeing the Matrix
Three days later
"Okay, so MCP separates our application into a Client and a Server," I explained, sketching on the whiteboard. "The Client is our UI plus the LLM. The Server is our Git logic."
"And they communicate over a Transport layer using standard message patterns," Raj added. "But the cool part is the interaction patterns MCP defines. Let's start with Resources."
We gathered around Sarah's screen as she implemented the first MCP component in our Git Oracle.
```javascript
// Git Server exposing Resources
function handleResourcesList() {
  return [
    { uri: "git://status", name: "Repository Status" },
    { uri: "git://log?limit=20", name: "Recent Commits" },
    { uri: "git://branches", name: "All Branches" },
    { uri: "git://commit/{hash}", name: "Commit Details" },
    // More resources...
  ];
}

function handleResourcesRead(uri) {
  if (uri === "git://status") {
    return { text: runGitCommand("status") };
  } else if (uri.startsWith("git://log")) {
    // Parse the limit parameter and format the log output
    // ...
  }
  // Handle other resources...
}
```
"So the Client can discover what Resources the Server offers, then request specific ones?" Alex asked.
"Exactly," Sarah nodded. "Resources are how the Server exposes its data in a structured, discoverable way. The Client controls when to request them."
We launched the prototype, and suddenly our UI came alive. The Client simply called `resources/list` to discover what data was available, then `resources/read` to fetch what it needed. The repository status, branch list, commit history - all presented in clean, organized panels.
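Under the hood, the Client-side flow looked something like the sketch below. Here `mcp_session.sendRequest` and `ui.renderPanel` are hypothetical stand-ins for our request plumbing and panel rendering, not a specific SDK API.

```javascript
// Client-side sketch: mcp_session is a hypothetical helper that frames
// JSON-RPC requests to the Server; ui is our (equally hypothetical) UI layer.
async function refreshRepositoryPanels(mcp_session, ui) {
  // 1. Discover what the Server exposes
  const { resources } = await mcp_session.sendRequest({
    method: "resources/list",
    params: {}
  });

  // 2. Read the ones the UI cares about and render them
  for (const resource of resources) {
    const contents = await mcp_session.sendRequest({
      method: "resources/read",
      params: { uri: resource.uri }
    });
    ui.renderPanel(resource.name, contents);
  }
}
```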
"This is already better than most Git UIs I've used," Alex said, clicking through the interface. "But the user still has to figure out what to do with this information."
"That's where Prompts come in," Sarah smiled. "Ready for phase two?"
Act IV: Prompts - The Fast Lane
One week later
"I don't get why we need Prompts," Alex admitted during our daily standup. "Can't users just type what they want to the LLM?"
Raj pulled up a terminal. "Let me show you why. What's more effective: This..."
He typed out a long message to the LLM:
Generate a concise but descriptive Git commit message for these changes:
Then pasted in 200 lines of diff output.
"...or this?"
He clicked a button labeled "/commit-message" in our Git Oracle interface.
"Prompts are reusable templates for common LLM interactions," I explained. "The Server defines them, exposes them through prompts/list
, and the Client shows them as buttons or slash commands."
Sarah's implementation made it clear:
```javascript
// Server-side Prompt handling
function handlePromptsList() {
  return [
    {
      name: "commit-message",
      description: "Generate a commit message for staged changes",
      arguments: []
    },
    {
      name: "explain-commit",
      description: "Explain what a commit does",
      arguments: [{ name: "hash", required: true }]
    },
    {
      name: "analyze-branch",
      description: "Analyze a branch's changes",
      arguments: [{ name: "branch", required: true }]
    }
    // More prompts...
  ];
}

function handlePromptsGet(name, args) {
  if (name === "commit-message") {
    // Server fetches the diff (a Resource) and constructs the message
    const diff = runGitCommand("diff --staged");
    return {
      messages: [{
        role: "user",
        content: {
          type: "text",
          text: `Generate a concise but descriptive Git commit message for these changes:\n\n\`\`\`diff\n${diff}\n\`\`\``
        }
      }]
    };
  }
  // Handle other prompts...
}
```
"So the Server knows what data to gather and how to format the right question for the LLM," Alex nodded. "And the user just triggers the right prompt instead of writing everything from scratch."
"Exactly!" Raj closed his laptop with a flourish. "It's the difference between writing the same email template every day versus clicking a button that says 'Send Daily Update'."
Our prototype now had clickable commands for common Git workflows. Users could instantly generate commit messages, understand confusing code history, or analyze branch differences with a single click.
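For the curious, the Client side of a prompt click is only a few lines. In this sketch, `mcp_session` and `askLLM` are hypothetical stand-ins for the host's MCP plumbing and its LLM call:

```javascript
// Client-side sketch of what happens when the "/commit-message" button is
// clicked. askLLM and mcp_session are hypothetical placeholders.
async function runPrompt(mcp_session, askLLM, name, args = {}) {
  // Ask the Server to expand the prompt template into concrete messages
  const { messages } = await mcp_session.sendRequest({
    method: "prompts/get",
    params: { name, arguments: args }
  });

  // Forward those ready-made messages to the LLM and return its reply
  return askLLM(messages);
}

// e.g. runPrompt(session, askLLM, "commit-message");
//      runPrompt(session, askLLM, "explain-commit", { hash: "abc123" });
```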
"This is getting good," I said, watching Alex zip through a workflow that would have taken minutes before. "But the LLM is still just giving advice. It can't actually do anything to the repository."
Act V: Tools - Command and Control
Two weeks later
The tension in the room was palpable as we prepared for our first demo to the engineering team. Our Git Oracle had evolved from a prototype to something genuinely useful, and we were about to show off its most impressive feature yet: Tools.
"Think of Tools as actions the LLM can request," I explained to the packed conference room. "When you ask the Git Oracle to do something, the LLM figures out what Git commands are needed and calls them."
I typed into the chat: "Stage all JavaScript files that I've modified and commit them with a message explaining what I changed."
The LLM analyzed my request, then:
```
I'll help you stage and commit your JavaScript file changes.
First, I'll identify and stage the modified JavaScript files.
```
What happened next drew gasps. The UI showed the LLM making a tool call:
```
Tool: stage_files
Parameters: { "patterns": ["*.js"] }
```
Our Server received this via MCP, executed `git add *.js`, and returned the list of staged files. Then another tool call:
```
Tool: commit
Parameters: { "message": "Refactor authentication flow in JavaScript files for better error handling" }
```
The Server executed the commit and returned the result. All without me typing a single Git command.
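For reference, the "Tool: stage_files" line the UI displayed corresponds to a plain MCP `tools/call` request on the wire, roughly like this (the exact framing is shown for illustration):

```javascript
// Illustrative sketch of the JSON-RPC message behind the UI's tool-call display.
const toolCallRequest = {
  jsonrpc: "2.0",
  id: 7,
  method: "tools/call",
  params: {
    name: "stage_files",
    arguments: { patterns: ["*.js"] }
  }
};
```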
"The magic happens in three parts," Sarah explained to the amazed audience. "First, we define Tool schemas that tell the LLM what actions are available and how to call them. Second, the LLM decides which Tools to use based on user requests. Third, the Client routes those calls to our Server using MCP's standard format."
She briefly showed the code:
```javascript
// Server-side Tool definitions
const tools = [
  {
    name: "stage_files",
    description: "Stage files for commit",
    parameters: {
      type: "object",
      properties: {
        patterns: {
          type: "array",
          items: { type: "string" },
          description: "Glob patterns of files to stage"
        }
      },
      required: ["patterns"]
    }
  },
  // More tool definitions...
];

// Server-side Tool handling
function handleToolCall(name, params) {
  if (name === "stage_files") {
    const { patterns } = params;
    const results = [];
    for (const pattern of patterns) {
      const output = runGitCommand(`add ${pattern}`);
      results.push({ pattern, output });
    }
    return { results };
  }
  // Handle other tools...
}
```
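The third part, the Client routing tool calls, boiled down to a loop like the sketch below. Here `llm.chat` and `mcp_session.sendRequest` are hypothetical placeholders for the host's own LLM API and MCP plumbing:

```javascript
// Client-side sketch: loop until the LLM stops requesting tools.
// llm.chat and mcp_session.sendRequest are hypothetical placeholders.
async function handleUserRequest(llm, mcp_session, userMessage, tools) {
  const conversation = [{ role: "user", content: userMessage }];

  while (true) {
    // Give the LLM the conversation plus the Server's tool definitions
    const reply = await llm.chat(conversation, { tools });
    if (!reply.toolCall) return reply.text; // no more actions requested

    // Route the requested action to the Git Server via MCP
    const result = await mcp_session.sendRequest({
      method: "tools/call",
      params: { name: reply.toolCall.name, arguments: reply.toolCall.arguments }
    });

    // Feed the result back so the LLM can decide the next step
    conversation.push({ role: "assistant", content: reply.text });
    conversation.push({ role: "tool", content: JSON.stringify(result) });
  }
}
```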
"This is incredible," our CTO said from the back of the room. "The LLM is doing the translation from natural language to Git commands, but your server is executing the actual commands in a controlled way."
"Exactly," I nodded. "The LLM decides what to do, but the Server handles the execution within defined boundaries. It's a perfect separation of concerns."
The demo was a hit. But as we packed up, Sarah pulled me aside.
"We're still missing something," she said quietly. "We've got the LLM asking the Server to do things. But what if the Server needs to ask the LLM something?"
"Like what?" I asked.
"Like analyzing a complex merge conflict. Or suggesting an optimal branch strategy. Things where the Server's Git expertise meets the LLM's reasoning power."
I stared at her for a moment. "Can MCP do that?"
Her eyes gleamed. "That's what Sampling is for."
Act VI: Sampling - The Oracle Speaks
Three weeks later
It was 2 AM, and Raj, Sarah, and I were the only ones left in the office. The light from our screens cast long shadows as we hunched over keyboards, wrestling with our final and most complex MCP component: Sampling.
"Okay, explain this to me one more time," I said, rubbing my eyes. "How is Sampling different from just calling an LLM API from our Server?"
Raj pointed to his diagram. "With direct API calls, the Server would need an API key, would bypass user control, and wouldn't respect the user's context or preferences. With Sampling, the Server asks the Client to get an LLM completion on its behalf. The Client—under user control—decides whether to allow it, which model to use, and what context to include."
Sarah's implementation showed the difference:
```javascript
// Server-side Sampling request
async function analyzeMergeConflict(conflictData, mcp_session) {
  // Server prepares a request for the LLM
  const samplingRequest = {
    messages: [
      {
        role: "user",
        content: {
          type: "text",
          text: `Analyze this Git merge conflict and suggest resolution strategies:\n\n${conflictData}\n\nConsider the context of both branches.`
        }
      }
    ],
    modelPreferences: {
      intelligencePriority: 0.8, // Hint: use a smart model
      costPriority: 0.2          // Willing to spend some tokens on this
    },
    maxTokens: 1000
  };

  // Server sends this request TO THE CLIENT via MCP
  try {
    const response = await mcp_session.sendRequest({
      method: "sampling/createMessage",
      params: samplingRequest
    });

    // Server gets the LLM's response back from the Client
    const analysis = response.content.text;

    // Now the Server can use this analysis
    return {
      conflictAnalysis: analysis,
      suggestedActions: extractActionableSteps(analysis)
    };
  } catch (error) {
    console.error("Sampling request was denied or failed");
    return { error: "Unable to analyze conflict" };
  }
}
```
"The key thing," Sarah pointed out, "is that the Server doesn't talk directly to the LLM. It sends a structured request to the Client, which might ask the user for permission, choose the appropriate model, add context, and then return the result."
We integrated it into our Git Oracle, focusing on merge conflicts—the original problem that sparked our journey. When a user encounters a conflict, our Server analyzes the conflicting branches and uses Sampling to ask the LLM for insights.
The results were astonishing. The Git Oracle could now:
- Identify likely root causes of conflicts
- Suggest specific resolution strategies based on the code's intent
- Explain the history that led to the conflict
- Recommend ways to avoid similar conflicts in the future
It was 4 AM when we finally had it working. The three of us sat in silence, watching our creation suggest an elegant solution to a conflict that would have taken hours to resolve manually.
"We did it," Raj whispered. "The Git Oracle actually works."
I nodded, too tired to speak but feeling a deep sense of satisfaction. We had built something genuinely useful, and in the process, discovered the power of the Model Context Protocol.
Epilogue: Beyond Git
Six months later, the Git Oracle had become standard across our company. Merge conflicts were no longer dreaded. Complex operations were handled with simple natural language requests. Our productivity had skyrocketed.
But more importantly, we had discovered a pattern for building intelligent applications that went far beyond Git:
- Resources gave us a way to expose data in a discoverable, structured way
- Prompts provided user-triggered shortcuts for common LLM interactions
- Tools allowed the LLM to request specific actions
- Sampling enabled the backend to leverage LLM intelligence under user control
Together, these patterns—unified by the Model Context Protocol—created a framework for building applications where humans, AIs, and specialized systems could collaborate effectively.
As I looked around our weekly engineering meeting, I saw teams planning to apply these patterns to our code review process, our documentation system, and even our customer support portal.
"The Model Context Protocol isn't just about connecting an LLM to an application," I told the team. "It's about establishing clear responsibilities and communication patterns between humans, AIs, and specialized systems. It's about building intelligent applications that respect user control while leveraging AI capabilities."
Sarah nodded. "The Git Oracle was just the beginning. MCP gives us a blueprint for a whole new generation of intelligent tools."