AI is not the problem; people are

In June 2015, Sam Altman (now CEO of OpenAI) said something like: “I think AI will lead to the end of the world. But in the meantime, there will be great companies…” [1]
Why on earth would a person who believes that the technology he is responsible for could be an existential risk to all of humanity continue developing it, and even promote it?
Well. On our earth. In our economic system.
AI: The Good, the Bad and the Ugly
AI comes with so many promises. We will be able to cure cancer. We will finally find “the answer to life, the universe, and everything.” We will “democratize” software development and the arts, and unicorns will start pooping candy.
That’s at least what some CEOs want us to believe. Surprisingly, those very CEOs were warning about AI.
Elon Musk once called AI the biggest existential threat [2][3], and now he leads xAI, whose Grok chatbot could, ironically, be quite threatening [4].
Sam Altman suggested AI was dangerous, but that doesn’t stop him from releasing ChatGPT 5 and suggesting it is like a “PhD in everything” [5].
(Image: ChatGPT 5 and math)
So, some people do believe that AI is almighty; it can outsmart each and every programmer. There’s no need to learn anything now; there’s AI that just “knows.” When presented with AI flaws and mistakes, they just brush it off: “A new model will solve it, piece of cake!” CEOs of companies selling either AI or AI-related products are happy to corroborate.
Another group jumps to the opposite extreme and claims that AI is dumb and can’t do anything useful. They are often branded as “change-resistant” or just “difficult.” Some companies even go so far as to call them “unacceptable.”
(Image source: Zapier new hiring policy)
Of course, both groups are right and wrong.
Indeed, AI can be (and is already) used to replace some white-collar workers. Indeed, not at all surprisingly, it leads to all sorts of errors and mistakes.
I am just a software engineer sitting in the middle of this mess, looking at it all with a mixture of awe, disgust, and laughter.
If one were to spend just a couple of weeks learning how LLMs (which is what “AI” apparently stands for now) work, they would see that there’s no way the CEOs' claims could be true. Similarly, white-collar workers can’t just relax and sleep well knowing that “AI won’t replace them.”
Because we let it happen
“Why are you doing this?”
“Because you let us.”
This chilling exchange appears in a couple of movies, most notably in the film “The Guests” (the basis for the “Speak No Evil” remake, which, arguably, lost most of the original’s charm). It is the question two people ask their future killer, right before being killed.
This dark, grim, and thought-provoking movie challenges the ideas of “tolerance” and “civility” - great qualities, but only until you meet someone who doesn’t share them. One can’t negotiate with a sadist or a bully.
How is this relevant, you ask?
Well, have a look at our two friends - Sam and Elon. Both know that AI could be risky, and both choose to ignore it and double down on AI development. Elon doesn’t shy away from skewing AI results to push his points of view [7] (Sam may do the same, just less blatantly). Companies push for “AI mandates” while demanding people return to the office to work on the very projects that will be used to make them redundant.
They are being bullies. And the worst part - we let them.
We let them by continuing to use and pay for their services. We let them by not resigning when they cross the line. We let them by not even speaking up because we are afraid of losing our jobs when times are already rough enough.
But make no mistake - like in the movie(s) “The Guests” and “Speak No Evil”, abuse starts small. It is death by a thousand concessions.
Believing lies is dangerous; telling lies is a crime
For any sane person, it should be obvious that LLMs can’t be used for “deep reasoning” (whatever that means nowadays), simply because, at their core, they just predict the next token based on the previous ones. Yet, somehow, people claim it is going to be possible, and somehow, people believe it.
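To make the “next-token prediction” point concrete, here is a deliberately toy sketch: a bigram model that, like an LLM (though vastly simpler - real models use neural networks over enormous contexts, not raw counts), produces its output purely by predicting which token most often followed the previous one in its training text. The training text and function names are made up for illustration.

```python
from collections import Counter, defaultdict

# Hypothetical tiny "training corpus" for illustration only.
training_text = "the cat sat on the mat the cat ate the fish".split()

# Count which token follows which: follow_counts["the"] -> {"cat": 2, ...}
follow_counts = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    follow_counts[prev][nxt] += 1

def predict_next(token):
    """Return the token most often seen after `token`, or None if unseen."""
    counts = follow_counts[token]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" - it follows "the" most often above
```

There is no understanding anywhere in this loop, only statistics over what came before - which is the author’s point: scaling the statistics up does not, by itself, turn prediction into reasoning.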
For a person who doesn’t know the subject in question (be it software engineering, healthcare, or law), LLM output is more than believable (which is not surprising - that is what it was created and optimised for in the first place). And since it is believable, it doesn’t take much effort to see why many choose to believe it is, indeed, capable of “creating”.
If you look at any company in isolation, replacing employees with AI may seem sensible right now. Models are cheap; they don’t complain, resign, or get sick (well, they have outages, but “who cares?”). So people are being let go, offered fewer perks or smaller salaries, while stakeholders get better revenues, at least for the next quarter.
However, no company lives in isolation. There’s no place in the world where having fewer high-paid employees could lead to higher revenues in the long run. Or, more simply, if no one can buy your product, it doesn’t matter how “efficient” the production pipeline is.
Yet, it doesn’t stop companies. Not at all. They use VC capital and burn millions of kilowatt-hours of energy like there’s no tomorrow. They are afraid that if they don’t do it, their competitors will, and in their minds, losing to a competitor is worse than not playing at all.
To make things worse, LLMs do not deliver what AI companies promise. It is not the LLMs’ fault - they are absolutely awesome in many areas, such as reviewing, planning, and autocorrecting. But apparently, thoughtful and responsible use of technology is not as exciting and does not attract huge investments.
Yet, I can understand the companies using AI.
What I can’t understand are the people who know (or should know) that LLMs can’t deliver (and even if they could, they probably shouldn’t, for our own sake), yet continue pushing ahead.
So, what do we do?
It may look grim, but it doesn’t have to be. What we need to do now is stop being NPCs in someone else’s game and start playing our own.
Each person has their own way, but here’s what I decided I will do:
- Learn how LLMs work and build one for myself (not to compete with GPT/DeepSeek/Whatever, of course, but to understand the technology, its strengths and limitations)
- Do not use paid products from companies involved in dodgy practices
- Do not work for companies that care more about themselves than about their customers and/or employees
- Plan ahead: What future do I want for myself? How can I get there?
And I am going to share my path, in the hope of inspiring (or warning) others along the way.
References
[1] https://siepr.stanford.edu/news/what-point-do-we-decide-ais-risks-outweigh-its-promise
[3] https://x.com/elonmusk/status/896166762361704450
[4] https://www.theguardian.com/technology/2025/jul/09/grok-ai-praised-hitler-antisemitism-x-ntwnfb
Written by

Alexander Pushkarev
With more than 10 years in IT, I have had the chance to work in different areas, such as development, testing, and management. I have worked with the PHP, Java, Python, and .NET platforms, on applications ranging from microservices to monolithic and monstrous desktop UI applications. I currently hold the position of Senior Software Engineer, but I prefer to consider myself a full-stack engineer. My passions are quality and efficiency. Agile fan and XP practitioner.