Software engineering principles and AI

Francesc Travesa

If you have read any of my articles, you know what this blog is about. It's all about software development, good practices, and, well, "evangelizing" some of those practices. I always defend myself by saying that I did not invent them; smarter people than me wrote books, documented their experiences, and there are even studies, surveys, and data to back them up.

To save you the time of reading the rest of my blog posts, I can summarize it all in a few acronyms and concepts: DDD, SOLID, clean architecture, agile and DevOps practices (please do read the rest of my posts so I don't need to nuance what I actually mean by each of them).

Everything I stand for in software engineering and development, all these practices, those acronyms and words sometimes swung around like swords, share one underlying, fundamental principle. If you want to throw all of them away, you should at least keep this one:

Everything should be as easy to understand as possible.

That's why we decouple infrastructure and isolate business rules in a place where they can be easily understood. That's why we isolate a part of the system in a microservice, so we don't carry the cognitive load of things irrelevant to the task. That's why we leave things open for extension but closed for modification (the O in SOLID). That's why we refactor everything, back and forth, so the intent of what we want to do is clear. That's why we analyze the business processes deeply and use an adequate language to name things properly, so there's no ambiguity and the intent is clear.
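
To make that open-for-extension, closed-for-modification idea concrete, here is a minimal sketch in TypeScript. The names (ShippingPolicy, FlatRate, FreeAboveThreshold, checkoutTotal) are mine, purely for illustration: the business rule sits behind a small interface, so adding a new rule means adding a new class instead of editing the code that uses it.

```typescript
// Hypothetical example: the business rule (shipping cost) lives behind a
// small interface, isolated from the code that merely uses it.
interface Order {
  total: number;
  itemCount: number;
}

interface ShippingPolicy {
  cost(order: Order): number;
}

class FlatRate implements ShippingPolicy {
  cost(_order: Order): number {
    return 5;
  }
}

class FreeAboveThreshold implements ShippingPolicy {
  constructor(private threshold: number, private fallback: ShippingPolicy) {}
  cost(order: Order): number {
    return order.total >= this.threshold ? 0 : this.fallback.cost(order);
  }
}

// Adding a new policy means adding a new class (extension),
// not editing this checkout code (modification).
function checkoutTotal(order: Order, policy: ShippingPolicy): number {
  return order.total + policy.cost(order);
}

const policy = new FreeAboveThreshold(50, new FlatRate());
console.log(checkoutTotal({ total: 60, itemCount: 3 }, policy)); // 60: free shipping
```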

Then generative AI came along, and it's changing the way we work. It's undeniable. At first there was Copilot, but it was mainly with ChatGPT and its counterparts that AI usage became mainstream. The genie is out of the bottle and available to the world. This is like one giant socio-economic experiment without any safety controls.

Will the widespread adoption of generative AI affect the principles I stand for? The principle above, about making things as easy to understand as possible, takes for granted that whoever needs to understand something is a human. But if it's going to be a machine, then what? Does this mean we don't need to work by this principle anymore? How are things changing, and how will they change? And the most dreaded question: am I becoming obsolete?

The fear has been there since the beginning, as you could read everywhere: "AI is coming for your job". After using AI for a while, being a regular user and even somewhat dependent on it, I don't see how that's possible; the statement is just fear-mongering. But my conviction could be mere confirmation bias, because if I don't believe that, what's the alternative? To really think that AI will take my job, and then what would I do? Still, I'm confident I won't need to reinvent myself in such an extreme way, and I am going to explain why here.

In between "AI is going to revolutionize the world in ways we can't foresee" and "this is complete nonsense hype promoted by strong interests" there's an actual fact: we really do use generative AI and rely on it to do our day-to-day work as software engineers. So I am going to focus on that, on what is really changing and on where it can lead us. Change is really happening, and it doesn't necessarily mean it's for the better.

With the current iteration of generative AI

Since the media is echoing and amplifying the statements of some distinguished and select idiots of society behind the AI hype, I have to frame everything I am saying around the current capabilities of AI, not what "it may do one day" (as those idiots scare us into believing). It's already disruptive as it is now.

OK let’s unpack all of the questions and see if a conclusion is reachable.

First of all, I am worried that if we delegate all the work to AI, the fundamental principle that things should be as easy to understand as possible (and the never-ending list of coding patterns, architectural patterns, "modus operandi", etc. derived from it) won't matter or will become obsolete.

If you are one of my readers, I would like to assume that you positively know that generative AI has no underlying reasoning behind it (and also that you use it completely aware of that!). It just outputs the most probable answer for a given input. It's impressive how accurate it can be, yes, but it's just an "algorithm".

Now, let's say you want to debug a piece of code, extend something, or build a new feature. When do you think the output will be better: if the code you feed it is unintelligible to humans, or if it is "clean" and easy to follow even for a human?

If you want to get the best use out of generative AI, the clearer the input you give it, the higher the chances that the output is of high quality; you can also feed it a lot more input -a whole project- without "confusing" it. So I would say that following the "simplicity" principle is now more important than ever, if you want to get the most out of generative AI prompting.

I would also argue -but here I am no expert, I just assume it would be the case- that if AI is trained on high-quality pieces of code, the output should also be of higher quality. So if you really want to rely on AI, you'd better make sure that the code you put publicly out there is of high quality (because of course AI only trains on publicly available data with no copyright infringement… of course).

But those last two paragraphs beg a question: what is high-quality code? I've read discussions in the comments of other blogs claiming that "this can't be solved, software engineers can't agree on what good code is". Well, I disagree. It's hard to measure, but I believe good code is code that does exactly what it should do. Nothing more, nothing less.

That's the hard part: deciding what it should do. That means designing -probably iterating several times, as not even the customer really knows what they want- to translate intention down into code. This is the core principle of domain-driven design, and it is often ignored and displaced by secondary technical concerns like tactical patterns or clean architecture rules -it's in the name, for Smurfs' sake!-

And that’s the main reason I don’t think software engineers will become obsolete. We might get more productive thanks to AI, but our work is far from only coding. So yes, generative AI is a disruptive tool, but a tool can’t replace real expertise.

Trying to get rid of software engineers

Still, some knucklehead at company ACME decides that generative AI is enough for all the IT needs company ACME has.

I have to admit that this is a likely future scenario. The knucklehead being a knucklehead is a certainty, and filling all IT needs with generative AI might come. But yet again, it depends on the expertise of the person using generative AI.

At a competent company you might have something like this:

It doesn't matter whether there's an AI bot or not: there's real reasoning behind the actual pieces of work.

If the knucklehead wants to rely solely on AI, without the expertise, first of all this will go nowhere, as they won't even know what to prompt in the first place; and even then, the output will be doubtful at best.

In that scenario there has to be a company behind it, which is the basic setup for many companies that outsource IT services to a third party. Nothing new under the sun. You pay for the expertise. You need to pay taxes, so you hire a tax lawyer, or risk getting in trouble. You want to ask AI instead? Well, you risk getting in trouble.

AI will be helpful as long as the problem is generic enough, be it your tax situation or your IT needs. If you're in a somewhat dire situation, unless enough people were in that situation, asked in enough forums, got enough replies, and all of that was part of the model's training data, you will probably be out of luck. The same goes for a very specific IT need. Unless you already have the expertise.

And what would happen if the usage of generative AI spreads without human supervision? Either because generative AI can be connected through the Model Context Protocol to several places so it can, on its own, deploy things to the internet for real, or because the company behind it supports so many clients -since they use AI tools to power themselves and are developing a thousand million projects- but without human supervision, as there's not enough manpower?

That's an interesting scenario. I would say it is akin to having factories that mass-produce products. All code will start converging towards solving generic stuff. As long as your needs don't deviate from the norm -the only example I can think of right now is a typical e-commerce site selling products- AI might actually give you the best code, and it might get very good at it, just like a factory optimizes for a specific kind of product.

Now you need something specific? Shipping only at night, customizable products, etc. You will need the craftsman to help you, as the factory is not designed for that. And there's not enough training data on that specific stuff to get a non-flawed, randomly generated output.

Very generic problems are normally solved by libraries, frameworks, and even off-the-shelf products like CMSs. So a similar struggle was already in place: should we develop something in house or re-use an existing tool? (I wrote a post about that already.) I am afraid generative AI doesn't change the struggle; it's just that, in the hands of software engineers, developing in house can now be way faster.

Anyway, in this scenario, which is likely, I predict a world with lots and lots of very low-quality products generated by AI that all look alike and will start to converge. Customization on top of those might become impossible, as it will break existing functionality -AI will generate code, but it can't know if it's actually working or not! And randomly generated code will have a lower and lower probability of success in that case, so the only way forward will be for somebody with expertise to come and fix them -if possible, as it might mean starting from zero, having reached technical bankruptcy.

But there will also be properly "crafted" products solving the specific problems of a specific business: the ones done with human supervision (with or without generative AI). Those will be the minority, though. People will still be cheap, or unwilling, or short on time, or whatever the reason, and won't invest properly. So the amount of AI-generated code -without expertise- will be way bigger. (As is the amount of generated pictures, text, and someday music and more…)

If that crap generated by AI is taken as a role model to follow, it will be a problem. Even now, I would say only a small percentage of companies follow proper software engineering principles (the way I like them and describe in my blog posts). Many developers -most of the time not really engineers- are dazzled by the next framework or the next programming language, forgetting that what actually matters most is what you do before you start coding. So this will only get worse with this avalanche of low-quality, AI-generated random code, which is going to feed and train future models, making the problem even worse.

So, does it matter to follow proper principles in software engineering? I would say now more than ever! The problem is that AI can only see the output produced by following those principles; it can't really infer the intent and the meaning behind it. Those principles live as much outside the code as in it!

This fight about work quality is not only happening in software engineering, but in all professions and crafts where AI can just generate something. The difference in software engineering is that we are more open-minded about sharing our work. Open source is the norm, and intellectual property is not as important as showing off to the world: look how much engineering we did while engineering this! We are smart!

So generative AI adoption in software development doesn't carry as many moral concerns as writing a book, a song, or drawing something, which is only possible by basically stealing authors' work to train the models.

And the chances of there being slackers in software development, putting in the minimum effort and not caring about quality, are probably higher too.

When AI gets better

I don't think generative AI based on LLMs can get better. Machine learning algorithms will get better; research will happen and there will be breakthroughs, etc.

But LLMs and LRMs (where having the word "reasoning" in there is an insult to the truth) by design can't get better. Yes, it's impressive what they can do, and it's very interesting that they can do things they weren't trained for when the amount of training data is absurdly huge.

But that is a hint that the actual problem is probably easier than previously thought, not that those models are "smart". It doesn't mean that by throwing in more training data they will suddenly gain even more "skills" they weren't trained for.

That highly depends on the quality of the data too. I am no expert on this, but my "reasoning" skills make me side more with the people who write these kinds of articles: https://www.theguardian.com/commentisfree/2025/jun/10/billion-dollar-ai-puzzle-break-down, than with the people who have a vested interest in keeping the hype going (regardless of the consequences).

What I can see when I use generative AI is this lack of "reasoning". I run into dead ends quite often, and I see this is by design. Hallucinations are there by design. Changing that would mean fundamentally changing LLMs, and betting on other kinds of AI that could really "understand" concepts, not just generate what is probably a correct answer.

It doesn't need to get better, though. Change is already happening. I don't know what we will do when the bubble bursts and the business models of OpenAI and others turn out to be unsustainable (they survive only on investment money that keeps pouring in because investors believe the promise that AI will get better), and suddenly everything goes behind a big paywall. Better to start hosting LLMs in house, but that's only half of the equation; the other half is the training data…

And if someday in the future there are real AI agents -you know, agents that have agency, not what we have now!- maybe there will be no need for us humans to understand the code. But in that scenario, I would say machines will appreciate the beauty of simple code and solutions, of elegant design. The principles will remain, there to be followed, either by humans or by machines.

(I wrote this article myself, only occasionally asking ChatGPT for some idioms or words, as I sometimes feel there's an English idiom or expression that fits exactly what I want to say. But I didn't ask it to fully correct the post. I am leaving all my mistakes as a non-native speaker, typos, etc., here for the world to see.)
