The Tech World Is Changing in Real Time — and So Are Security Risks


Introduction
Technology has always evolved fast — but lately, it feels like we’re living through one of those inflection points that you usually only read about years later.
AI-assisted development tools, code generation platforms, and MCP (Model Context Protocol) integrations are now quietly making their way into our everyday work. One week you’re coding as usual; the next, you’re giving a prompt and getting an entire component scaffolded for you.
At first, I assumed that with AI in the mix, security risks would shrink — maybe the tools could automatically patch vulnerabilities before they even hit production. Turns out… not quite.
A Surprise Discovery
While integrating one such AI tool into my workflow, I noticed an unexpected security warning.
On further digging — I came across a term I’d never heard before: indirect prompt injection.
The last time I’d really kept up with security threats, the big names were:
SQL injection – manipulating database queries via malicious inputs.
DoS attacks – overwhelming systems until they crash.
Cross-Site Scripting (XSS) – injecting malicious code into trusted web pages.
Phishing – tricking users into giving away sensitive credentials.
Now, prompt injection joins that list — but instead of targeting a database or a browser, it manipulates the context an AI model uses to produce answers.
When AI Can See Everything
Here’s the part that really hit me — many AI integrations need access to your entire codebase to work effectively.
That means any sensitive files in your project are potentially in scope:
API keys
Auth tokens
Private config files
Database connection strings
Even if the tool promises your data is safe, it’s still on us as developers to keep secrets out of code, mask sensitive information, and control which files the AI can actually see.
Old Problems, New Forms
The more I experimented, the more I realised: security fundamentals haven’t changed — but the threats have adapted.
We still need to validate inputs, review outputs, and keep our code secure; the difference is, now these vulnerabilities might hide in an AI-generated snippet or a design-to-code conversion.
One of the most fascinating (and concerning) things I learned about was data poisoning attacks — where an AI model is deliberately fed malicious or misleading data during its training or fine-tuning phase. This can cause it to behave incorrectly, embed subtle biases, or even leak sensitive information.
In short: the AI’s knowledge itself can be compromised before you even use it.
We’re still doing the same jobs we’ve always done — but in a completely different landscape.
Defending Against New AI Threats
To navigate this new security landscape, I found a few practical steps worth considering:
Enable Prompt Shield (or equivalent safeguards)
Some AI platforms now provide a Prompt Shield or input sanitization layer that scans incoming prompts for signs of prompt injection and filters out malicious patterns before the AI processes them. If your tool supports this, turn it on — it’s your first line of defense.Guard Against Data Poisoning
While you can’t always control an AI’s original training data, you can vet the datasets you fine-tune or extend it with. Use trusted data sources, validate samples, and regularly monitor outputs for unexpected or suspicious behavior.Apply Least Privilege Access
Don’t give the AI blanket access to your repositories. Use selective file access and exclude sensitive directories.Scan AI Outputs Like Any Other Code
Treat AI-generated code the same way you’d treat code from an unknown contributor — review it, test it, and run it through static analysis tools before merging.Keep Secrets Out of Repos
Environment variables, secret managers, and .env files (properly gitignored) are still your best friends.
Conclusion
AI is changing how we build, test, and ship software. It’s helping me work faster, learn new frameworks, and experiment more freely. But whether the threat is SQL injection in 2005 or prompt injection in 2025, one rule hasn’t changed: security is still our responsibility.
And in this new AI era, staying aware might just be the most valuable skill we can have.
Subscribe to my newsletter
Read articles from Niriksha kamath directly inside your inbox. Subscribe to the newsletter, and don't miss out.
Written by
