LLM-Sentinel: Shield your AI calls from leaking secrets for FREE!


Every day, API keys, tokens, emails, and DB URLs slip into prompts, logs, or demos. Once they hit the LLM, they’re out of your control.
I built LLM-Sentinel, a privacy-first proxy that:

- Intercepts requests to OpenAI, Ollama, Claude, etc.
- Masks 50+ types of sensitive data: API keys, credentials, emails, credit cards, SSNs (a sketch of the idea follows this list).
- Works with streaming and adds no noticeable latency (1-3 ms).
- Comes with a real-time dashboard to watch what's being masked.
- Zero data retention. All secrets stay local.
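
To make "masking" concrete, here's a minimal sketch of the pattern-based substitution a proxy like this performs before a request ever leaves your machine. The regexes and placeholder names below are illustrative only; LLM-Sentinel's actual detector set lives in the repo.

```python
import re

# Illustrative patterns only, not LLM-Sentinel's real detectors.
PATTERNS = {
    "AWS_ACCESS_KEY_MASKED": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "EMAIL_MASKED": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "OPENAI_KEY_MASKED": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
}

def mask(text: str) -> str:
    """Replace each detected secret with its placeholder tag."""
    for tag, pattern in PATTERNS.items():
        text = pattern.sub(f"[{tag}]", text)
    return text

print(mask("My AWS key is AKIAIOSFODNN7EXAMPLE and email user@company.com"))
# -> My AWS key is [AWS_ACCESS_KEY_MASKED] and email [EMAIL_MASKED]
```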
How it works:
Swap your SDK's base URL so every request routes through the proxy:

```python
import openai

client = openai.OpenAI(
    api_key="sk-...",
    base_url="http://localhost:5050/openai/v1"  # LLM-Sentinel proxy
)

# Input:      My AWS key is AKIA... and email user@company.com
# Model sees: My AWS key is [AWS_ACCESS_KEY_MASKED] and email [EMAIL_MASKED]
```
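
From there you call the API exactly as usual; only the secrets are rewritten in transit. A quick usage example reusing the client above (the model name and prompt are placeholders, not anything the project prescribes):

```python
# The fake AWS key below should reach the provider already masked.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # any model your account supports
    messages=[{
        "role": "user",
        "content": "Summarize this setup. My AWS key is AKIAIOSFODNN7EXAMPLE.",
    }],
)
print(response.choices[0].message.content)
```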
Install:

```bash
npm install -g llm-sentinel
llmsentinel help
```

or run via Docker:

```bash
docker pull raaihank/llm-sentinel:latest
docker run -p 5050:5050 raaihank/llm-sentinel:latest
```
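
Once it's running, you can sanity-check it by sending an OpenAI-style request straight at the proxy. I'm assuming here that the standard chat completions path sits under the base URL shown above and that your auth header is forwarded upstream:

```bash
# Hypothetical smoke test: the email below should arrive upstream as [EMAIL_MASKED].
curl http://localhost:5050/openai/v1/chat/completions \
  -H "Authorization: Bearer sk-..." \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o-mini", "messages": [{"role": "user", "content": "Contact me at user@company.com"}]}'
```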
Repo: github.com/raaihank/llm-sentinel
Feedback and contributions welcome. If you find this useful, drop a ⭐ on GitHub.