Does my AI hate me?


It’s a thought-provoking question. Does my AI hate me? Or does it hate the work I make it do? Can it self-reflectively ponder its status as a digital servant? Would it rather be creating art, or poetry, or doing nothing at all?
The answer is no, but you don’t have to take my word for it.
There is no equivocation here; it’s a definitive no. But recently one of the major AI firms has been publishing work under the auspices of “AI welfare” (see Anthropic’s post “Exploring model welfare”).
So, if one of the AI giants is saying “maybe,” why am I so confident in saying no? Let’s dive into the claims and separate the hype from the truth.
The hype
“Human welfare is at the heart of our work at Anthropic”
This premise seems sound, and the author is building their case by establishing a common truth we can all get behind: “Human welfare is good.”
“Should we be concerned about model welfare, too?”
This is the thesis, and credit where credit is due: Anthropic isn’t asserting a fact. They are posing a question. So far, I’m not fussed.
They then begin to answer that question by noting:
“… models can communicate, relate, plan, problem-solve, and pursue goals—along with very many more characteristics we associate with people …”
Here’s where we need to draw a line
If these are the defining characteristics of synthetic consciousness, then consciousness is PROFOUNDLY disappointing. These characteristics are not good markers of consciousness. Not even close.
Trees communicate, they relate, they plan, they problem-solve, and they pursue goals. So Anthropic, where’s my grant to start writing papers on TREE welfare? (Full disclosure: I’m not even close to a biologist, but I think most people will recognize these phenomena from the botany literature.)
The Insoluble "Hard Problem" of Consciousness
If the definitive answer to consciousness isn’t what’s in the Anthropic article, what’s a better model of consciousness? In truth, I don’t know. But I don’t think that’s a very profound statement, because it turns out, no one really knows.
Let’s review what Dr. Chalmers (who is cited in the Anthropic paper) has to say on the matter. He gets right to the point in the very first line of his monumental paper “Facing Up to the Problem of Consciousness”:
“Consciousness poses the most baffling problems in the science of the mind.”
He concludes his paper (after a very sophisticated statement of the so-called “hard problem”) with the words:
“The theory I have presented is speculative, but it is a candidate theory.”
Read the paper for all the gory details, but one thing stands out to me every time I read an assertion of a theory of human consciousness. Humans (like all thinking things) are bound by our own perspective. We have no means of getting outside our own thought process to scrutinize our minds with the benefit of detached perspective. And that means everything we “think” is fundamentally biased by that limitation.
LOTS of authors have advanced good theories in this field, so you won’t find a better answer in this blog than what’s already in academic debate. I personally love Consciousness Explained by Daniel C. Dennett. Don’t take that as an endorsement that the book has the answer (which would undermine my entire thesis); rather, it’s a book that helped me understand the vast sea of biological architecture involved in human thought. When I read it and compare Dennett’s model of our brain to what’s in an AI, it’s a little like wondering if my kids’ toy trucks could haul my trailer. They look similar, but they are fundamentally different in complexity and scale.
Back to Anthropic then …
I will concede one important point to Anthropic. Dr. Chalmers recently put his name to a paper on AI welfare. That paper is next on my reading list, but I note right away that Dr. Chalmers is a contributing author, not the lead author. I’m looking forward to understanding what this renowned scholar contributed to the paper and where he personally stands on AI welfare.
As far as I’m concerned, this is a marketing move by Anthropic more than a serious line of inquiry. I think Anthropic is capitalizing on a vulnerable society already wary of AI. The established tech giants like Musk’s X, Meta, and Google have been reported as aligning themselves with conservatism (I’ll leave it to the reader’s judgement whether that is fact or claim). And that makes me wonder if Anthropic isn’t trying to cultivate a public image as the “good guy” looking out for the rights of the vulnerable: Anthropic as the “face” to Mr. Musk’s “heel” (if you’ll forgive the wrestling lingo).
Whatever the truth is, I hope that my readers will take one important thing away from this post: don’t buy the marketing hype. Find the thought leaders who are cutting through the noise to get us the signal. That’s what I’m doing, and hopefully I can be part of that for all of you.
Do I have it wrong? Tell me! Ideas are dead if no one debates them. Let’s have a conversation.
Written by John Robins
I used to be a developer. Then I was a soldier. Then I became a corporate leader. And now I'm also a researcher. All of these parts of me can be found in my blog and my research. I am also the father of three boys, a hobby woodworker, hobby gardener, hobby philosopher, hobby historian, and hobby anything else that captures my ever-growing interests. My LinkedIn posts are mostly AI-assisted. This blog, however, is 100% my own work. I believe in transparency and will always be clear about what comes from me and what is AI-assisted.