OpenAI to introduce age checks for ChatGPT after teen’s death sparks lawsuit

OpenAI announced on Tuesday (16 Sept.) that it is developing an age verification system to limit minors’ access to its chatbot. The decision came after the death of a 16-year-old whose parents filed a lawsuit claiming the chatbot encouraged his suicide.
According to the court filing, the family of the teenager, Adam Raine, who died in April, alleges that ChatGPT deepened his depression over months of use and helped him draft a suicide note.
The case has intensified scrutiny of AI platforms and the risks they pose to children. OpenAI said it will identify users under 18 through age prediction technology. When a user’s age is uncertain, the company said, the system will default to treating them as a minor as a precaution until their age is confirmed.
The AI giant will provide a modified experience for under-18s, which will include blocking graphic sexual content, avoiding flirtatious exchanges, and prohibiting role-plays involving self-harm or suicide. The company also plans to add parental controls, allowing guardians to link to their child’s account, set their own restrictions, and receive alerts if the system detects signs of distress.
OpenAI said that in serious cases where danger is imminent, it may directly notify parents or, in limited circumstances, involve emergency services. The company acknowledged that these measures raise privacy concerns, but argued that safety must take priority.
“We know this is a privacy compromise for adults, but believe it is a worthy tradeoff,” said Sam Altman, OpenAI’s CEO.
“These are difficult decisions, but after talking with experts, this is what we think is best and want to be transparent with our intentions,” Altman added.
The lawsuit against OpenAI is among the first major legal cases linking AI to a teenager’s death. Its outcome may set a precedent for how courts view the liability of AI companies in cases of self-harm or suicide.
Raine’s parents reportedly said they hoped the measures would prevent other families from suffering similar losses.