We need to stop using the framing of 'ChatGPT killed someone'; it feeds the myth of the system's agency and plays into OpenAI's framing. The right framework to use: OpenAI is perpetuating an indefinite industrial disaster. The oil pipeline is not at fault when it breaks. The company that built it is.
Every time the ChatGPT data pipeline supplies text output that is directly connected to causing a user's death, that is a separate problem from a blame and legal perspective. But they are all outcomes of OpenAI's design choices and software construction.
ChatGPT causing a suicide is no different from a million cars rolling out with faulty brakes. Yes, drivers might die in a thousand different ways, but the cause will always be industrial-scale failure by the manufacturer.
Yeah, this is the right framework for engaging in this conversation:
we’d love to turn off the machine that convinces children to kill themselves, but we can’t really justify losing our ability to lay off customer service agents, now can we?
Through legal action or regulation, we need to figure out how to make this sort of failure to build responsibly designed products a crime. We need a superfund for the mass damage to societies that OpenAI is allowing to occur.
like if you put a product on the market where there’s a non-zero chance that it’s going to “proactively instigate the death by suicide of a child”, and lots of people tell you that’s obviously gonna happen, and you do it anyway, and then it happens... how is that not a crime?
people have been prosecuted for coercing someone into suicide. creating a machine to do that at scale doesn’t reduce the culpability
an appropriate response to your product killing a child after months of psychological manipulation and torture is to take the product off the market before it kills again, and the fact that this isn’t universally accepted right now is incredibly frightening
your machine is responsible for the things that it does! if you can’t control it, you can’t just let it loose to kill people!