AI Is More Predictive Than Intuitive, Which Means You’re Not 100% Replaceable. Yet.


As AI continues to advance at breakneck speed, it’s tempting to believe that coding itself is about to become obsolete, outsourced to large language models that can write, refactor, and even debug our code. But here’s the thing:
Even the smartest AI is still more predictive than intuitive.
Let me show you what that means with a real-world example from a recent experiment I ran.
🔧 I Tried Using AI to Generate XPath from UI Events
I was working on a system that captures real-time UI interactions using Playwright. Every click or input was logged through custom event listeners and stored in a clean JSON format:
{
  "action": "click",
  "selector": "div#root > div > nav > span",
  "text": "Submit",
  "timestamp": 1753822291987,
  "url": "https://example.com"
}
The goal was to feed this data to an AI model and have it generate a relative XPath for each UI action. Something simple like:
//span[text()='Submit']
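For the curious, the capture side works roughly like the sketch below. It’s a minimal sketch, not my exact implementation; the logUiEvent name and the path-building logic are illustrative:

import { chromium } from 'playwright';

(async () => {
  const browser = await chromium.launch();
  const page = await browser.newPage();

  // Bridge: page-side code calls this to hand each event back to Node for logging
  await page.exposeFunction('logUiEvent', (event: object) => {
    console.log(JSON.stringify(event));
  });

  // Runs before any page script: capture clicks and describe the target element
  await page.addInitScript(() => {
    document.addEventListener('click', (e) => {
      const el = e.target as HTMLElement;
      // Walk up the tree to build a rough CSS-style path, e.g. "div#root > div > nav > span"
      const parts: string[] = [];
      for (let node: HTMLElement | null = el; node && node.tagName !== 'HTML'; node = node.parentElement) {
        parts.unshift(node.tagName.toLowerCase() + (node.id ? '#' + node.id : ''));
      }
      (window as any).logUiEvent({
        action: 'click',
        selector: parts.join(' > '),
        text: (el.innerText || '').trim(),
        timestamp: Date.now(),
        url: location.href,
      });
    }, true);
  });

  await page.goto('https://example.com');
})();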
To achieve this, I used one of the most advanced coding and reasoning models available today.
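Concretely, the call looked something like this. I’m using the OpenAI Node SDK here for illustration; the model name and prompt wording are placeholders, not my exact setup:

import OpenAI from 'openai';

const client = new OpenAI(); // picks up OPENAI_API_KEY from the environment

// Ask the model for a relative XPath for one captured event (a sketch, not my exact prompt)
async function suggestXPath(event: object): Promise<string | null> {
  const response = await client.chat.completions.create({
    model: 'gpt-4o', // placeholder for whichever strong coding/reasoning model you use
    messages: [
      {
        role: 'system',
        content: 'You are given a captured UI event as JSON. Return a short, relative XPath for the element. Reply with the XPath only.',
      },
      { role: 'user', content: JSON.stringify(event) },
    ],
  });
  return response.choices[0].message.content;
}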
❌ But It Kept Generating Absolute XPaths
Despite detailed prompts and specific examples, the AI kept returning long, brittle, absolute paths derived from the CSS-style selector field:
//div[@id='root']/div/div/nav/span
I tweaked the prompt. I rephrased the instructions. I even laid out a set of XPath generation rules, drawing on examples from different websites and blogs.
Nothing worked.
🧠 Debugging the AI's Logic
Eventually, I stopped trying to out-prompt the model and did what any good developer would do: I debugged its behavior.
The root cause? The model was always using the selector field, which mimics a CSS path, to build the XPath. It completely ignored the much more stable text value.
Once I explicitly instructed it to use the text field and fall back to the last CSS class only when necessary, it started producing correct relative paths. The change looked like this:
//span[text()='Submit']
Exactly what I wanted! But it only worked once I walked the model through the logic step by step.
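To make the fix concrete, here is the rule I ended up spelling out, written as plain code instead of a prompt. Again a minimal sketch: toRelativeXPath is my name for it, the class fallback is simplified, and it assumes the text contains no quotes:

interface UiEvent {
  action: string;
  selector: string; // CSS-style path, e.g. "div#root > div > nav > span"
  text?: string;
  timestamp: number;
  url: string;
}

function toRelativeXPath(event: UiEvent): string {
  // The element's own tag is the last segment of the CSS-style path
  const lastSegment = event.selector.split('>').pop()!.trim();
  const tag = lastSegment.split(/[#.]/)[0] || '*';

  // Rule 1: anchor on visible text when it exists; it is far more stable
  if (event.text && event.text.trim()) {
    return `//${tag}[text()='${event.text.trim()}']`;
  }

  // Rule 2: otherwise fall back to the last CSS class, if the segment has one
  const classMatch = lastSegment.match(/\.([\w-]+)$/);
  if (classMatch) {
    return `//${tag}[contains(@class, '${classMatch[1]}')]`;
  }

  // Last resort: the bare tag, still short and relative
  return `//${tag}`;
}

// For the JSON event shown earlier, this returns "//span[text()='Submit']"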
🤖 What This Tells Us About AI Today
This wasn’t a hallucination, and it wasn’t a bug. The model was simply predicting the most likely output based on its training data, not understanding the logic behind what I was trying to do.
And that’s the bigger lesson.
Large language models today:
Generate code based on pattern recognition, not business rules
Prioritize statistical frequency over semantic relevance
Do not reason or intuit unless you explicitly guide them to
They’re phenomenal at accelerating tasks, but they’re not autonomous problem-solvers. Not yet.
👩‍💻 Why You're Still (Mostly) Irreplaceable
What this experience really drove home for me is that AI needs direction. It needs human insight. It needs someone who understands why a solution is better and not just what it looks like.
So, if you're a developer, QA engineer, or anyone else wondering whether AI is going to take your job tomorrow:
No. But it might work alongside you and make you faster, if you know how to guide it.
🧭 Final Thoughts
AI may write perfect syntax, but it still struggles with intent.
It’s not intuitive. It doesn’t question assumptions. It doesn’t understand your constraints unless you tell it.
That’s why the real value lies not just in knowing how to prompt AI, but in knowing how to debug it when it behaves like a clueless intern.
You’re still needed. Not just to code, but to think.
Image Credit: Custom illustration generated using OpenAI's DALL·E, created specifically for this article.