🌱 Seedling noteworthy

AI will never love you

posted in: Notable Articles, tech and ai.
~413 words, about a 3 min read.

"Claudia" is, just so we are aware, not conscious. It is computer software, trained on the entirety of internet text and knowledge, that assembles a collection of outputs that is probabilistically correct. It has no genuine human level awareness. Any human affect or humanesque attributes are a clever trick of the light.

People have been explaining this for years. Here's Professor Gary Marcus explaining as much four years ago:

Neither LaMDA nor any of its cousins (GPT-3) are remotely intelligent. All they do is match patterns, drawn from massive statistical databases of human language....What these systems do, no more and no less, is to put together sequences of words, but without any coherent understanding of the world behind them, like foreign language Scrabble players who use English words as point-scoring tools, without any clue about what they mean.

Rebecca "Skepchick" Watson also has a great video on this incredibly bad Richard Dawkins' article that was very good. She was honestly much kinder than I would have been to Dawkins if I was in her shoes, considering how much she's had to deal with specifically because of him and his descent into racism over the past few years.

She makes the same point Bode does here: the primary trick that seems to convince so many people is less the passing of the Turing test (which doesn't mean software is sentient anyway) and more the software's obsequious, constant flattery.

It's sad that this is such an easy hack of the human brain. If we had built AI to constantly insult its users, the reaction would perhaps be very different. The flattery is also, as Bode identifies, extremely useful for the people trying to sell you AI, because it lets them more easily dodge blame.

Claude can't "confess." It can't maliciously decide it wants to ruin your company's day. It can create a simulacrum of those behaviors based on its programming, training, and your inability to understand how the software you purchased works, what it's doing, and what you gave it access to, but it's not consciously fucking with you.

[...]

This anthropomorphization is beneficial to big tech companies in several ways. On one level, it markets their product as far more capable (and sentient) than it actually is, justifying absurd company valuations. On another, it creates a layer of distance between their consistently bad choices and any personal responsibility.



— Via Karl Bode, Your Software Is Not Sentient
Page History

This page was first added to the repository on May 6, 2026 in commit 9550933b. View the source on GitHub.


https://aramzs.xyz/noteworthy/ai-will-never-love-you/