Please Stop Calling GenAI "Useless"


A common shorthand I am seeing in criticisms of Generative AI technologies is that GenAI / LLMs are “useless”. This shows up again and again, especially in hot takes on Mastodon, and sometimes from people whose professional work I deeply respect. While I understand that, to some, it might be shorthand for a more thought-out criticism, this argument still annoys me, and I think it hurts the credibility of people who fundamentally want to do something good: ask whether a disruptive new technology with major social, legal, and environmental impacts is actually “good”, “worth it”, or whatever else you want to call it. This is my case for why we should stop using this argument, and talk about the (very real!) issues with GenAI differently.
Additionally, I believe that AI critics can benefit from at least understanding the positions of AI proponents, even if they disagree with them. Having empathy and extending the benefit of the doubt goes a long way toward seeing them not as radical capitalists hell-bent on replacing humans with machines and ready to boil the oceans to get their way, and instead understanding that, fundamentally, many of these people believe they are acting for the benefit of humanity, and that not rolling out AI as fast as possible is unethical. You need not agree with that, but you need to understand it in order to engage with them.
The Usefulness of AI
What makes AI useless or useful? (What makes anything useless or useful?)
A very simple definition could be: a technology is useful if it solves a problem for its user. That problem may be small (“please proofread this essay for me”) or life-changing (“ChatGPT diagnosed my rare disease” is basically a whole genre now, even though the effectiveness is still hit or miss). The goal of the user might be ethical or unethical, prosocial or antisocial, or somewhere in between on the spectrum - what matters is that someone wants the problem solved, and AI is helping that person solve it. If it is successful, in the eyes of the user, then it is useful to them.
AI is useful to me, specifically. I find myself using it to aid in restructuring or rewriting my texts, not by telling it “rewrite this for me”, but by asking “please review this text and highlight points where the arguments are unclear, the storyline doesn’t make sense, or the general style could be strengthened”. And the results are good! Just as with a human reviewer, you use 20% of what they tell you directly, adapt 60%, and ignore the remaining 20% - except that the review cycles are measured in seconds, not days. This is useful.
I also use it to learn about broad concepts, have it explain specific things to me in terms that I can understand more easily, write short bash scripts that are ugly but only need to work once, and many other things. And I occasionally use services like Perplexity to summarize the state of research on specific, niche topics that I am not an expert in. Does it get things wrong or make them up? Occasionally, yes, but that is not the point. The point is that getting an 80% correct explanation of the current state of research on something I know very little about is more useful to me than spending three hours attempting to decipher the scientific papers this explanation was pulled from, and I can then go out and verify the tidbits that are really important in my situation and that I absolutely need to get right.
Just today, I assisted a cyclist who had been hit by a delivery bike and had fallen on her unprotected head (seriously, people, wear helmets!). I used an AI to quickly pull up a list of concussion symptoms to check with her. This gave me the information in half the time it would have taken me to scroll past the ads on Google, click on the first link, wait for it to load, dismiss the cookie banner and popup ad, and get a less well-formatted and potentially less complete version of this information (which, let’s be real, may have also been written by AI). This is useful.
The AI Optimists’ Perspective
While I've outlined how AI is useful to me personally, it's worth exploring why AI proponents are so enthusiastic about this technology. Many see potential far beyond the current applications:
Productivity amplification: Proponents see AI as a tool that can dramatically increase human productivity across domains, from coding to creative work. While critics often frame this as “AI taking away jobs”, AI proponents see it as a hyper-effective autocomplete that takes away the toil of the “boring parts” of the job (no one likes writing boilerplate code) and leaves more time for the interesting parts.
Democratization of capabilities: They envision AI making specialized skills more accessible to people without formal training. This starts with people using ChatGPT to help them draft letters disputing incorrect decisions by their banks - letters that contain all the right magic words to make the bank listen - but it does not end there.
Problem-solving at scale: Many believe AI can help address complex challenges like climate modeling, drug discovery, and scientific research. The founder of Anthropic speaks about having “a country of geniuses in a datacenter” that can solve problems, run experiments, conduct their own research, and allow humanity to progress at a pace never seen before.
Economic transformation: Some see AI as enabling new business models and economic opportunities that weren't previously possible. Already, people are running market research on simulated populations of people, or having virtual stakeholders at the table in ideation meetings to get feedback much faster and cheaper, and for AI proponents, this is just the start.
New applications that we haven’t even considered: Of course, the strongest form of AI optimism is to say that whatever we can imagine now will pale in comparison to what AI will really be capable of, in the same way that the people who built ARPANET could not foresee YouTube.
Understanding these perspectives doesn't require agreeing with them. However, engaging with the strongest versions of pro-AI arguments rather than caricatures allows for more productive dialogue about the technology's future, and about the costs associated with pursuing it.
Why Does Calling AI Useless Hurt Your Argument?
Let’s assume that you actually want to convince people with your argument against AI, and aren’t just in it for the memes and hot takes. We have established that to the people you want to convince and bring to your side, AI is useful. You may not think that the use they are getting from it is good (one person’s “effective content marketing machine” is another’s “AI slop generator poisoning the Internet”), but if you want to convince them, you need to meet them where they are and start with a shared reality / a shared set of assumptions to base a productive discussion on. By saying that AI is useless, you are signaling that this shared reality does not exist on a foundational level, and you are hurting the credibility of your other arguments.
You’ve probably been on the other side of this kind of argument yourself. If you are an AI sceptic, chances are you have a high affinity for technology and are probably hanging out with people from hackerspaces, or gamers, or other technically-inclined subcultures. (There are other groups of people opposed to AI, like artists, but let me use these as an example, as this is where I have my own roots). Do you remember when people were saying things like these?
“Why would anyone need more than ISDN speeds at home?”
“Oh yes, we thought about digitizing these forms, but making you send them by fax / letter is a lot easier.”
“What the hell do you need mobile internet for?”
“Well, this whole ‘Internet’ thing is never going to catch on, why are you wasting your time with it? Why not learn mainframe programming?” (okay, that probably hasn’t been said in a few years, but you get the idea)
Did sentences like these make you trust the judgement of the person delivering them? Would you have been as open as before to receiving potentially more valid arguments from them after that, or would you have disqualified them as cranks who didn’t know what they were talking about? I know I always had a hard time taking people seriously after sentences like that.
People at all levels, from schoolchildren to CEOs, have experienced firsthand that AI can help them solve their problems. Some people see AI as a technological revolution that is “on the small end, at least as large as the Internet”. These are fundamentally serious people who have to convince other fundamentally serious people that spending hundreds of billions of dollars on this bet is the right call, and they are succeeding with these arguments. Telling them that AI is useless is so far outside of their perceived reality that they will immediately stop listening to anything you are saying and dismiss you as a crank, luddite, or whatever other term they choose to use.
Now, is it possible that AI is a bubble that will pop? In my eyes, it is not only possible but inevitable. Many new technologies have a bubble phase. The Dot-Com bubble also had a telco bubble attached to it that popped hard after significant initial overinvestment in the buildout of connectivity, leading to massive losses for the affected telcos. AI evangelists seem to be split on whether the AI bubble will pop or whether the demand for “artificial cognition” (their words) will increase so much that the currently planned buildout will be insufficient, but let’s be clear: if it pops, they will be broadly OK with that and count it as the cost of doing business and rolling out a disruptive new technology. They are writing books extolling the power of bubble dynamics in driving change (and making a few good arguments!). And, let’s be honest: if you could set a couple billion dollars of private equity funds on fire to roll out a technology that significantly changes the world for the better a couple of years earlier, wouldn’t you?
Distinguishing Between Poor Implementations and Core Utility
We've all seen the awkwardly integrated AI assistants that companies have bolted onto existing products to please shareholders. Similarly, the flood of “X, but with AI” startups often deliver little value beyond buzzword compliance and setting venture capital on fire. These implementations can indeed be useless or even counterproductive, and it is only correct to call this out.
This pattern isn't unique to AI. We saw similar dynamics with blockchain (a hype cycle where I am far more open to the argument of “fundamentally useless except for crime and destroying the planet”, but I digress), IoT, and countless other technology waves. Poor implementations and hype-driven products deserve criticism, and pointing them out is both valid and necessary. And, yes, it can be funny and even cathartic to point and laugh when they implode.
However, the failure of these specific implementations doesn't invalidate the core utility of the underlying technology. Just as the Dot-Com bubble's burst didn't mean e-commerce was fundamentally a bad idea, the inevitable collapse of many AI ventures won't mean the technology itself lacks utility.
The Right Criticisms Of AI?
So, what should we argue about, then? In my view, there’s no lack of easy targets or hard questions here. The elephant in the room is the environmental impact of planning to spend double-digit percentage points of national power grid capacity on AI computations. Then there is the social impact of replacing (skilled or unskilled) labor with computers; the cost to companies that adopt AI too early and make expensive mistakes because it is not suited to their purposes; the impact on IT security and maintainability of having AI write your code for you; the impact on education when people can simply generate a complete essay; the exploitation of the workers involved in training these models; … there are lots and lots of criticisms to choose from. Pick one. Hell, pick all of them. Go to town.
Or go deeper on where your feeling of “AI is useless” comes from. Is it that you expect it to over-promise and under-deliver? Is it the ecosystem of grifters looking for easy money around it? Is it the fact that the system will just make things up if it runs out of ideas? Or have you tried that specific feature someone else is touting and were disappointed? Why?
These are problems that we need to address, and many of them are also seen as problems by AI proponents and are being actively worked on - because these people aren’t dumb or evil. They see the technology through other eyes, they weigh the importance of different factors differently, or they have different expectations of the effects of future models on all of these issues. This isn’t a cabal of grifters out to steal money from Hard-Working Americans™; many of them are people who believe that what they are doing is bringing society closer to another revolution in capabilities and progress.
Conclusion
Personally, I find reading the articles and listening to the podcasts of people who aren’t AI evangelists but are in the pro-AI camp quite helpful for updating my mental models of what the “other side” is thinking. Stuff like the Complex Systems podcast with Patrick McKenzie - for example, the episodes on boom and bust cycles, or on AI and power economics. (NB: I don’t consider this an overall fantastic podcast, but I find these episodes interesting for learning what a proud capitalist with some libertarian leanings thinks about these issues, even if I don’t share these views.) Similarly, reading what the founder of an AI lab sees as the potential upside of AI can be instructive, even if you don’t agree with them. I’m sure there are more good sources that I can’t remember off the top of my head (feel free to put them in the comments below or send them to me on Mastodon and I will add them here).
Having discussions about the limitations and costs of AI is important. However, an agreed common reality is a prerequisite for that. Calling AI categorically useless, or only engaging with strawman versions of proponents’ arguments, sabotages this common reality, and that helps no one. If we want to be heard, we have to be honest: acknowledging that AI can be a tool to solve some problems - while sometimes disagreeing on whether these problems should be solved, are solved well, or are worth the cost of solving - is a more fruitful basis for discussion. Just because you accept something as useful does not mean that you endorse it. It just means that you have to work a little harder and dig a little deeper in your criticism - and that will make it a better discussion for everyone.
In the end, there will probably not be agreement, but hopefully, there will at least be more understanding than before. And, no matter which side you stand on, polarized discussions that drive people into factions aren’t the way forward.
A Personal Afterword
I wrote this article because the one-sided arguments I was reading on Mastodon annoyed me, and because some of the criticisms did not match my experience. Frankly, this essay is just something that I needed to get out of my system, so that I could have something to point to and say “this! This is what I mean!” without having to rehash the same arguments over and over.
This does not mean that I see AI proponents as doing a much better job in this area, with some of them basically calling opponents luddites who stand in the way of the inevitable march of progress. In the end, both sides of the debate need to be willing to honestly engage with one another, and I expect this from proponents at least as much as from opponents - they are the ones foisting hundreds of billions of dollars in investments on us while cheerfully telling us that this stuff may wipe out humanity.
This piece is focused on the side of the AI critics simply because these are the people in my social circles and thus more likely to be reading it. If the person who you are talking to is not making a good-faith effort, you should absolutely call them out on it, and I don’t expect you to go high when they go low. All I’m asking is that we should not be the ones throwing the first stone, and that we keep our minds open to the possibility that there may be some truth to what AI proponents are saying, and that they, like us, are humans who believe in what they are doing, and think that they are doing what is best. It’s cliché, but that does not make it wrong.
In the interest of transparency and "showing my work": I wrote a first draft of this article and then fed it to Claude 3.7 Sonnet to get feedback on it. I implemented many changes to address weaknesses identified by this conversation. In the section "Distinguishing Between Poor Implementations and Core Utility", I used some text proposed by Claude, before making significant changes to it to adapt it to my style and the argument I wanted to make. You can read the full, unedited transcript here (where you will also see the complete first draft of the article).
The next morning, after receiving some input from a friend, I found that I wanted to make a different point more strongly and made some further edits with input from Claude 3.7 Sonnet, which resulted in the “AI Optimists’ Perspective” section, which was also partially generated and then adapted by me. You can find the second transcript here. In my eyes, the article was much improved through this collaboration, though such things are of course in the eye of the beholder.
I first verbalized these arguments in a dinner conversation with two colleagues, whom I thank for giving me the opportunity to crystallize my thinking in a fruitful discussion on this topic. I also received feedback from Kris Shrishak, and while I did not follow some of his recommendations, his feedback strengthened the article and helped me clarify to myself what I wanted to say.