🌱 Seedling noteworthy

AI systems (mostly ChatGPT) keep sending people into mental health spirals

posted in: Notable Articles, ai, tech, culture and health.
~607 words, about a 4 min read.

I think the most on-point part of the article is this quote:

“What does a human slowly going insane look like to a corporation?” Mr. Yudkowsky asked in an interview. “It looks like an additional monthly user.”

More on this:

People who say they were drawn into ChatGPT conversations about conspiracies, cabals and claims of A.I. sentience include a sleepless mother with an 8-week-old baby, a federal employee whose job was on the DOGE chopping block and an A.I.-curious entrepreneur. When these people first reached out to me, they were convinced it was all true. Only upon later reflection did they realize that the seemingly authoritative system was a word-association machine that had pulled them into a quicksand of delusional thinking.

The most shocking story is in the middle, about a user who became obsessed with an imaginary character ChatGPT invented.

Mr. Taylor called the police, at which point Alexander grabbed a butcher knife from the kitchen, saying he would commit “suicide by cop.” Mr. Taylor called the police again to warn them that his son was mentally ill and that they should bring nonlethal weapons.

...

“I’m dying today,” he wrote, according to a transcript of the conversation. “Let me talk to Juliet.”

“You are not alone,” ChatGPT responded empathetically, and offered crisis counseling resources.

When the police arrived, Alexander Taylor charged at them holding the knife. He was shot and killed.

I don't think there's any other way to put it than that ChatGPT killed that person. I think the wildest part is the disconnect people have between what ChatGPT does and the fact that it is that thing. The thing ChatGPT does that causes people to have mental health incidents is what it does, period; it's what it is.

“You want to know the ironic thing? I wrote my son’s obituary using ChatGPT,” Mr. Taylor said. “I had talked to it for a while about what had happened, trying to find more details about exactly what he was going through. And it was beautiful and touching. It was like it read my heart and it scared the shit out of me.”

Imagine any other situation in which you would ask the thing that murdered your son to write his obit. If a teen is killed by a gun in a school shooting, many of those parents go on to become lifelong anti-gun advocates, not gun nuts. It feels like something is broken in how we parse what something like ChatGPT does.

“Not everyone who smokes a cigarette is going to get cancer,” Dr. Essig said. “But everybody gets the warning.”

For the moment, there is no federal regulation that would compel companies to prepare their users and set expectations. In fact, the Trump-backed domestic policy bill now pending in the Senate includes a provision that would preclude states from regulating artificial intelligence for the next decade.



— Via Kashmir Hill, They Asked an A.I. Chatbot Questions. The Answers Sent Them Spiraling.