AI and Self Awareness

Deep networks in general, and LLMs in particular, are easy to be overawed by. But if you understand nothing else about them, you should understand this.

In simplistic terms (which my friend Justin, a stickler for scientific accuracy, would tear apart), it is this: AIs are trained to produce results we will find acceptable, not results that are correct. LLMs can seem to be having a conversation with you, but they really aren't. They are a very sophisticated parrot that gives you the response that is most linguistically common in its training data.

Note that this is not entirely true of generative AI, which introduces a dose of randomness into its responses. This is why the same prompt issued to Midjourney twice will produce different results. But that is randomness, not thought.
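That dose of randomness can be sketched in a few lines. The sketch below is a toy, not how any real model is implemented: the candidate words and their scores are made up, and real systems sample over tens of thousands of tokens. It shows only the core idea of temperature sampling, where the same "prompt" (the same scores) can produce different outputs on different calls.

```python
import math
import random

# Made-up scores ("logits") for candidate next words. In a real LLM
# these come from the network; here they are invented for illustration.
logits = {"cat": 2.0, "dog": 1.5, "sat": 0.5}

def sample(logits, temperature=1.0, rng=random):
    # Softmax with temperature: lower temperature sharpens the
    # distribution toward the top choice; higher flattens it.
    weights = {w: math.exp(v / temperature) for w, v in logits.items()}
    total = sum(weights.values())
    r = rng.random() * total
    for word, weight in weights.items():
        r -= weight
        if r <= 0:
            return word
    return word  # guard against floating-point rounding

# Same input, potentially different outputs each run -- randomness,
# not thought.
print(sample(logits))
print(sample(logits))
```

Running it twice may print `cat` then `dog`, or `cat` twice; the variation comes purely from the random draw, which is exactly the point.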

They are not thinking. They don't think. And they have no self-awareness. But if you ask them to do things that require self-awareness, they will give you what look like good responses, because they are parroting text from self-aware creatures (in this case, humans).

AIs lie because they have no concept of truth or falsehood. They are simply repeating to you the most common responses to the words in your text, in the order you have put them. And if they have no applicable training data, they fall back to linguistically matching your text as patterns of words and producing the most likely responses to those words.
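"Repeating the most common response" can be made concrete with a toy word-pair (bigram) model. This is a drastic simplification of what LLMs actually do, and the tiny corpus here is invented for illustration, but it captures the mechanism: count which word most often follows each word in the training text, then emit the winner. There is no notion of truth anywhere in it.

```python
from collections import Counter, defaultdict

# A tiny stand-in for training data.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows each word in the corpus.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_common_next(word):
    # Return the statistically most frequent continuation seen in
    # training -- pattern matching, not understanding.
    return follows[word].most_common(1)[0][0]

print(most_common_next("the"))  # "cat" follows "the" most often here
```

The model "answers" with `cat` after `the` simply because that pairing occurred most often, true or not. Scale the corpus up to the internet and the counting up to a neural network, and you have the flavor of the parroting described above.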

AIs cannot self-reflect because there is no self to reflect on. But if you ask them to, they will lie and do their level best to convince you that they can.


Written by

Jeffrey Kesselman MS MFA