What are you actually excited about?

Disclaimer: in this post I use AI and LLM interchangeably. This is, of course, an oversimplification, but it reflects current terminology use.
I see quite a few developers (and tons of business people) who are excited by advances in AI and I just don’t get their excitement.
Yes, being able to generate a kind-of working application in a day or two seems like a nice thing, but as in any long-term game, we must ask ourselves: what's next?
I would argue that there's nothing good next. Not for experienced software engineers, at least: people who put time, effort, dedication, and money into becoming good at what they do.
To explain why, I suggest a short thought experiment: where would all of this go in the future?
In a nutshell, there are only three possible outcomes of AI adoption in programming:
AI is worse at writing code than developers are (aka “autocomplete”)
AI is on roughly the same level as developers (aka “synergy”)
AI is significantly better than developers (aka “replacement”)
It is also very probable that we will see all of them, each replacing the previous. So, let's try to analyse what is going to happen at each stage - and what it means for software engineers.
AI is worse at writing code than developers are (aka “autocomplete”)
Arguably, this is what we see now. We can use Copilot/Cursor/whatever to generate smaller pieces of code, or even bigger pieces - if we feel adventurous. The generated code is often not great, full of tech debt, to the point where many skilled developers avoid using code generation altogether.
Key features:
Negligible or net negative productivity boost due to code generation [1][2][3][4]
Some productivity boost due to automatic code review
Software engineers are still fully responsible for the code they produce (with or without AI) [2]
The last point is essential. Since software engineers remain fully responsible for the code they produce, it should be up to them whether and when they use AI. For some reason, some companies disagree and enforce AI mandates.
Some would argue that this is also the final stage (at least with the current LLM architecture and training pipeline). Their reasons are:
Despite appearing “smart”, LLMs are just statistical machines - they learn patterns that were created by someone else
- so, while they could, theoretically, combine or simplify existing approaches, they are unlikely to produce brand-new approaches
Training is the most expensive and lengthy operation - LLMs learn far more slowly than people do
- so they will inevitably struggle to keep up with new or rare stacks
In any case, the productivity boost is moderate at best, while the risks are significant:
Potential copyright violation
Poor review of generated code leading to security and other issues [2]
Software engineers becoming less hands-on and, therefore, less skilled, which will exacerbate the previous point even further [2][5]
So, it feels like the only sensible way to use AI here is as a “reviewer” or, sometimes, as a “wall to bounce ideas off”. It does quite a good job at that.
AI is on roughly the same level as developers (aka “synergy”)
This is what many people say the future will be. We would just write the “requirements” in plain English, and the LLM would transform them into “working code”.
That's definitely a possibility, but a subtle and brittle one. If we follow this route, we return to the “black box” problem we had 40 or so years ago: how do you know that the code indeed satisfies the requirements?
You could try old-school black-box testing, or try to use an LLM to self-test, but that raises even more questions:
Who watches the watchman? If we can't trust an LLM to write a working solution, what makes us trust the same technology to verify that a solution works?
Which LLMs could we trust?
Who owns (i.e., is responsible for) the problem if, or rather, when it doesn’t work?
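To make the black-box option concrete: you check the implementation only against its specified inputs and outputs, never by reading its source. A minimal sketch in Python - the `slugify` function and its spec are made up purely for illustration, not taken from any real project:

```python
# Black-box testing sketch: we verify a (hypothetically LLM-generated)
# function purely by its observable behaviour, without reviewing its code.

def slugify(title: str) -> str:
    # Imagine this body was generated by an LLM; the checks below never
    # inspect it, only its inputs and outputs.
    return "-".join(title.lower().split())

# The "requirements", expressed as input/expected-output pairs:
spec = [
    ("Hello World", "hello-world"),
    ("  Multiple   Spaces  ", "multiple-spaces"),
    ("already-slugged", "already-slugged"),
]

for given, expected in spec:
    actual = slugify(given)
    assert actual == expected, f"spec violated: {given!r} -> {actual!r}"

print("all spec cases pass")
```

The catch the post points at: such a spec only covers the cases someone thought to write down, so the question of who is responsible for the uncovered cases remains open.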
However, in my opinion, this is a very brittle stage, as it could easily collapse into the previous one (if you have to review the code line by line) or the next one (if AI becomes better than people at writing code, making review no longer effective).
In any case, I would suggest that this “bright” future has a probability of 33% at best - but who is it bright for?
To answer this question, we need to answer another: who is a Senior Software Engineer? I think we could reasonably say that Senior Engineers are the ones capable of, and responsible for, converting business expectations into concrete, working technical solutions.
So, that's not synergy, my friend, that's replacement. We could argue it's inevitable - similarly to the way death is inevitable if you have ever lived - but I still doubt it is something to be super excited about.
AI is significantly better than developers (aka “replacement”)
That’s game over.
If (or when) AI becomes better than people at writing software, you're no longer in control. You're at its mercy.
Never in our history have we interacted with an intelligence greater than ours. The biggest problem is that when we do, we will likely not even notice - just as my dog doesn't really appreciate my programming skills.
We will be on entirely different levels, at the mercy of an almighty (and not necessarily friendly) intelligence.
And if we observe the way humans tend to treat animals, we can reasonably expect that humans will not be treated particularly well either.
Hope that it all will “just work out” is not a good strategy, is it?
References
[1] https://hackaday.com/2025/07/11/measuring-the-impact-of-llms-on-experienced-developer-productivity/
[3] https://garymarcus.substack.com/p/sorry-genai-is-not-going-to-10x-computer
[4] https://leonfedden.co.uk/blog/2025/05/subtleties-of-ai-productivity-gains/
[5] https://mashable.com/article/mit-study-chatgpt-increases-productivity-decreases-inequality
Written by

Alexander Pushkarev
With more than 10 years in IT, I have had the chance to work in different areas, such as development, testing, and management. I have worked with the PHP, Java, Python, and .NET platforms, on applications ranging from microservices to monolithic and monstrous desktop UI applications. Currently, I hold the position of Senior Software Engineer, but I prefer to consider myself a full-stack engineer. My passions are quality and efficiency. Agile fan and XP practitioner.