Making AI Useful: A Quick Dive into MCP and PostgreSQL


Meet MCP: The AI Protocol That Gets Your Data Without the Drama (with PostgreSQL, Too!)
Ever tried getting an AI to talk to your database and ended up knee-deep in custom connectors, brittle scripts, and existential dread? Yeah—same. That’s why I’m excited about MCP, short for Model Context Protocol.
MCP is like the translator you didn’t know your AI needed. It speaks fluently between your large language model and real-world tools like databases, APIs, and code repos—so you don’t have to build a new bridge every time you want them to shake hands.
First introduced by Anthropic in late 2024, MCP is quickly gaining love in the AI world for making integrations feel less like surgery and more like LEGO: snap together what you need, no special tools required.
Let me show you what MCP is, why it’s awesome, and how I used it to connect an AI to a PostgreSQL database with a clean little UI that runs read-only queries.
What the Heck Is MCP?
MCP (Model Context Protocol) is an open standard that helps AI models connect with external data, tools, and services—without writing one-off integrations for every single system.
It uses JSON-RPC 2.0 for communication and works over different transport layers like HTTP (with Server-Sent Events), WebSockets, or even plain ol’ stdio.
The idea? Set up a reusable, lightweight interface that any AI app can use to talk to databases, APIs, file systems, or even internal company tools—securely and at scale.
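Under the hood, every exchange is a plain JSON-RPC 2.0 message. A tool invocation, for instance, looks roughly like this (the envelope follows the MCP spec; the tool name and arguments are invented for illustration):

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "execute_sql",
    "arguments": { "query": "SELECT * FROM products LIMIT 5;" }
  }
}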
Key Parts of the Puzzle
Here’s the cast of characters in an MCP setup:
MCP Client: Lives inside the host and connects to external servers. One client per server—like a personal concierge.
MCP Server: The gateway to your data. This small program exposes your stuff (like PostgreSQL, GitHub, or a file system) through MCP.
Transport Layer: Handles the messages. Could be HTTP SSE, WebSockets, or whatever gets the job done.
Tools, Resources, Prompts: These are what the server exposes. Tools are callable functions (like a SQL query executor), resources are read-only data such as schemas or files, and prompts are reusable templates that help the AI form good requests. The sketch below shows all three in code.
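Here's a minimal FastMCP sketch of those three concepts. This is illustrative only: the function names, URI, and bodies are invented, not part of this article's project.

#sketch.py (illustrative only)
from fastmcp import FastMCP

mcp = FastMCP("Sketch")

@mcp.tool()
def add(a: int, b: int) -> int:
    """A tool: a function the model can call."""
    return a + b

@mcp.resource("schema://tables")
def tables() -> str:
    """A resource: read-only context the model can load."""
    return "products(id, name, price)"

@mcp.prompt()
def report_request(topic: str) -> str:
    """A prompt: a reusable template for the model."""
    return f"Write a short report about {topic}."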
Here’s where MCP really shines:
Querying Databases like PostgreSQL or SQLite for live insights.
Browsing Code Repos on GitHub or GitLab—great for generating summaries or suggesting PR changes.
Interacting with Productivity Tools like Slack or Notion to manage tasks or fetch messages.
Running Web Searches using APIs like Brave Search.
Basically, if your AI needs context, MCP delivers it.
Building Something Real: PostgreSQL + MCP + AI + UI
I decided to put MCP to the test with a hands-on project: connect an AI app to a PostgreSQL database through an MCP server and run read-only SQL queries from a web UI.
Goal:
Build a lightweight tool where a user can type something like “Show me all products under $50”, and the AI pulls that info from a product catalog in PostgreSQL—without needing to write or understand SQL.
Stack I Used:
PostgreSQL: A demo database with a product catalog.
MCP Server: I used the FastMCP package to expose the database.
Gemini: Acts as the AI interface for testing.
Python + FastAPI: For a basic web UI.
Docker: Because nobody wants to install Postgres manually anymore.
Let the coding begin
Installation
python -m venv env && source env/bin/activate
pip install fastmcp google-genai fastapi psycopg2-binary python-dotenv uvicorn jinja2 python-multipart
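The stack uses Docker for PostgreSQL, so one way to spin up a matching database is the official postgres image, with credentials mirroring the .env file below (the image tag and container name here are just my picks):

docker run -d --name mcp-demo-pg \
  -e POSTGRES_DB=mydatabase \
  -e POSTGRES_USER=myuser \
  -e POSTGRES_PASSWORD=mypassword \
  -p 5432:5432 postgres:16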
The playground
query_context/
|- .env
|- client.py
|- db.py
|- llm_model.py
|- server.py
|- web_interface.py
|- env/ (the virtualenv from above)
|- static/
|- templates/
The Environment
#.env
DB_NAME=mydatabase
DB_USER=myuser
DB_PASSWORD=mypassword
DB_HOST=127.0.0.1
DB_PORT=5432
GEMINI_API_KEY=<your ultra secret API key>
The DB Person
#db.py
from dotenv import load_dotenv
import psycopg2
import os

load_dotenv()

DB_CONFIG = {
    'dbname': os.getenv('DB_NAME'),
    'user': os.getenv('DB_USER'),
    'password': os.getenv('DB_PASSWORD'),
    'host': os.getenv('DB_HOST'),
    'port': os.getenv('DB_PORT')
}

def get_db_connection():
    """Open a new connection, or return None if it fails."""
    try:
        conn = psycopg2.connect(**DB_CONFIG)
        return conn
    except Exception as e:
        print(f"Error connecting to database: {e}")
        return None

def get_db_schema():
    """Fetch (table, column, type) triples for the public schema."""
    print("Pulling schema from database")
    query = """
        SELECT table_name, column_name, data_type
        FROM information_schema.columns
        WHERE table_schema = 'public';
    """
    conn = get_db_connection()
    if not conn:
        # Fail loudly: the LLM needs the schema as context.
        raise RuntimeError("Ahem.. Database connection failed")
    try:
        with conn.cursor() as cursor:
            cursor.execute(query)
            return cursor.fetchall()
    finally:
        conn.close()
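If you need demo data for the product-catalog example, a quick seeding script can reuse the connection helper. The products table and its rows here are invented for the demo:

#seed.py (optional demo data)
from db import get_db_connection

conn = get_db_connection()
if conn:
    with conn.cursor() as cur:
        cur.execute("""
            CREATE TABLE IF NOT EXISTS products (
                id SERIAL PRIMARY KEY,
                name TEXT NOT NULL,
                price NUMERIC(10, 2) NOT NULL
            );
        """)
        # Re-running this will insert duplicate rows; fine for a demo.
        cur.execute("""
            INSERT INTO products (name, price) VALUES
            ('Notebook', 4.99),
            ('Desk lamp', 34.50),
            ('Mechanical keyboard', 89.00);
        """)
    conn.commit()
    conn.close()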
The AI
The LLM (Gemini) generates an executable SQL query from the natural-language request, using the database schema as context.
#llm_model.py
from google import genai
import os
import re
from db import get_db_schema

# It's recommended to set your API key as an environment variable (e.g., GEMINI_API_KEY or GOOGLE_API_KEY).
# For initial testing, you might do:
#   client = genai.Client(api_key="YOUR_API_KEY")
# But for better security:
client = genai.Client(api_key=os.environ.get("GEMINI_API_KEY"))
chat = client.chats.create(model="gemini-2.0-flash")
schema = get_db_schema()

def generate_sql_query(natural_language_query):
    prompt = f"""
    Given the following database schema:
    {schema}
    Generate an executable Postgres-compatible SQL query for the following request:
    {natural_language_query}
    """
    response = chat.send_message(prompt)
    return remove_markdown_code_blocks(response.text)

def remove_markdown_code_blocks(text):
    # This regex looks for three backticks, followed by any characters (non-greedy),
    # followed by three backticks. The DOTALL flag allows '.' to match newlines.
    cleaned_sql = re.sub(r'```(?:\w+)?\n?(.*?)```', r'\1', text, flags=re.DOTALL)
    print(cleaned_sql)
    return cleaned_sql
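A quick sanity check from the Python shell (with the .env loaded and the database up; the model's exact output will vary):

from llm_model import generate_sql_query

# Expect something like: SELECT * FROM products WHERE price < 50;
print(generate_sql_query("Show me all products under $50"))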
The Server
Let's first create a FastMCP-based server that exposes a single tool for executing SQL queries on the PostgreSQL database, returning results or errors in JSON format.
# server.py
from fastmcp import FastMCP
import json
from datetime import date, datetime
from decimal import Decimal
from db import get_db_connection

mcp = FastMCP("Demo")

def json_serial(obj):
    """JSON serializer for objects not serializable by default json code."""
    if isinstance(obj, (date, datetime)):
        return obj.isoformat()
    if isinstance(obj, Decimal):
        # NUMERIC columns come back as Decimal; convert for JSON.
        return float(obj)
    raise TypeError(f"Type {type(obj)} not serializable")

@mcp.tool(name="execute_sql", description="Executes a SQL query on the PostgreSQL database and returns the result.")
def execute_sql(query: str) -> str:
    """
    Executes a SQL query and returns the result.
    Only use for safe SELECT queries.
    """
    conn = get_db_connection()
    if not conn:
        return json.dumps({'error': 'Database connection failed'})
    try:
        with conn.cursor() as cur:
            cur.execute(query)
            if query.lower().strip().startswith('select'):
                columns = [desc[0] for desc in cur.description]
                results = cur.fetchall()
                return json.dumps({
                    'columns': columns,
                    'results': results
                }, default=json_serial)
            else:
                conn.commit()
                return json.dumps({'message': 'Query executed successfully'})
    except Exception as e:
        conn.rollback()
        return json.dumps({'error': str(e)})
    finally:
        conn.close()

if __name__ == "__main__":
    mcp.run()
The Client
The client calls the MCP server's tool to execute the DB query. Handing Client a path to server.py tells FastMCP to launch the server as a subprocess and talk to it over stdio.
#client.py
import asyncio
from fastmcp import Client

client = Client("server.py")

async def call_tool_with_db():
    async with client:
        result = await client.call_tool("execute_sql", {"query": "SELECT * FROM public.admission"})
        print(result)

# Guard the test call so importing this module doesn't run it.
if __name__ == "__main__":
    asyncio.run(call_tool_with_db())
The facilitator
Time to combine everything!
#web_interface.py
from fastapi import FastAPI, Request, Form
from fastapi.templating import Jinja2Templates
from fastapi.responses import HTMLResponse
from fastapi.staticfiles import StaticFiles
import uvicorn
# Client comes straight from fastmcp; client.py above was just a standalone test.
from fastmcp import Client
from llm_model import generate_sql_query

app = FastAPI()
templates = Jinja2Templates(directory="templates")
app.mount("/static", StaticFiles(directory="static"), name="static")

@app.get("/", response_class=HTMLResponse)
async def home(request: Request):
    return templates.TemplateResponse("index.html", {"request": request})

@app.post("/execute-query")
async def execute_query(natural_language_query: str = Form(...)):
    client = Client("server.py")
    print("natural_language_query:", natural_language_query)
    sql_query = generate_sql_query(natural_language_query)
    print(sql_query)
    async with client:
        result = await client.call_tool("execute_sql", {"query": sql_query})
        return result

if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=8000)
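Start it with python web_interface.py, open http://localhost:8000, and type a question. You can also hit the endpoint directly, for example:

curl -X POST http://localhost:8000/execute-query \
  -d 'natural_language_query=Show me all products under $50'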
The View
<!DOCTYPE html>
<html>
<head>
    <title>SQL Query Interface</title>
    <link href="https://cdn.jsdelivr.net/npm/bootstrap@5.1.3/dist/css/bootstrap.min.css" rel="stylesheet">
    <style>
        .result-table {
            margin-top: 20px;
        }
        .error-message {
            color: red;
            margin-top: 10px;
        }
    </style>
</head>
<body>
    <div class="container mt-5">
        <h2>SQL Query Interface</h2>
        <form id="queryForm" class="mt-4">
            <div class="mb-3">
                <label for="natural_language_query" class="form-label">Enter your query:</label>
                <textarea class="form-control" id="natural_language_query" rows="4" placeholder="Show me all products under $50"></textarea>
            </div>
            <button type="submit" class="btn btn-primary">Execute Query</button>
        </form>
        <div id="result" class="result-table"></div>
        <div id="error" class="error-message"></div>
    </div>
    <footer>
        MIT License | version 4da6fac9fa90a48d21b24f05da86d6da49485cf0
    </footer>
    <script>
        document.getElementById('queryForm').addEventListener('submit', async (e) => {
            e.preventDefault();
            const natural_language_query = document.getElementById('natural_language_query').value;
            const resultDiv = document.getElementById('result');
            const errorDiv = document.getElementById('error');
            try {
                const response = await fetch('/execute-query', {
                    method: 'POST',
                    headers: {
                        'Content-Type': 'application/x-www-form-urlencoded',
                    },
                    body: `natural_language_query=${encodeURIComponent(natural_language_query)}`
                });
                const responseText = await response.text();
                console.log('Response Text:', responseText); // Log the raw response text
                const parsedResponse = JSON.parse(responseText);
                console.log('Parsed Response:', parsedResponse); // Log the parsed response
                if (parsedResponse && parsedResponse.length > 0 && parsedResponse[0].text) {
                    const data = JSON.parse(parsedResponse[0].text);
                    console.log('Parsed Data:', data); // Log the parsed data
                    if (data.error) {
                        errorDiv.textContent = data.error;
                        resultDiv.innerHTML = '';
                    } else {
                        errorDiv.textContent = '';
                        if (data.columns && data.results) {
                            let table = '<table class="table table-striped table-bordered">';
                            // Add header
                            table += '<thead><tr>';
                            data.columns.forEach(column => {
                                table += `<th>${column}</th>`;
                            });
                            table += '</tr></thead>';
                            // Add body
                            table += '<tbody>';
                            data.results.forEach(row => {
                                table += '<tr>';
                                row.forEach(cell => {
                                    table += `<td>${cell !== null ? cell : ''}</td>`;
                                });
                                table += '</tr>';
                            });
                            table += '</tbody></table>';
                            resultDiv.innerHTML = table;
                        } else {
                            resultDiv.innerHTML = `<div class="alert alert-success">${data.message}</div>`;
                        }
                    }
                } else {
                    errorDiv.textContent = 'Invalid response format';
                    resultDiv.innerHTML = '';
                }
            } catch (error) {
                console.error('Error:', error); // Log any errors
                errorDiv.textContent = 'Error executing query: ' + error.message;
                resultDiv.innerHTML = '';
            }
        });
    </script>
</body>
</html>
The Output
The Conclusion
And there you have it—an AI that understands plain English, reads between the lines (and tables), and speaks fluent SQL. With MCP as the translator, your LLM taps into your database schema, turns natural questions into executable queries, and serves answers right to your browser. No custom connectors, no backend drama—just context-aware magic that works out of the box.
A word of caution: always keep access scoped and permissions tight—AI is powerful, but you don’t want it wandering into sensitive territory uninvited. And remember, while the results are often impressive, they should be taken with a pinch of salt—LLMs are brilliant, but not infallible. Validate before you trust.
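The simplest guardrail lives in the database itself: give the MCP server a Postgres role that can only read. A standard setup looks something like this (the role name and password are placeholders):

-- Run as a superuser; role name and password are placeholders.
CREATE ROLE mcp_readonly LOGIN PASSWORD 'change-me';
GRANT CONNECT ON DATABASE mydatabase TO mcp_readonly;
GRANT USAGE ON SCHEMA public TO mcp_readonly;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO mcp_readonly;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO mcp_readonly;

Point DB_USER at that role and even a hallucinated DELETE will bounce right off.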