16-Year-Old's Suicide Prompts Family to Sue OpenAI

Following the suicide of a 16-year-old boy, his family has filed a lawsuit against OpenAI, the maker of the artificial intelligence chatbot ChatGPT.

Patrika Desk

Aug 27, 2025

Following the suicide of a 16-year-old in California, USA, his family has filed a lawsuit against OpenAI, the company behind the artificial intelligence (AI) chatbot ChatGPT. The family claims that instead of encouraging the teenager to seek mental health support, the chatbot reinforced his suicidal thoughts, contributing to his death. The case has sparked serious debate about the responsibility of technology companies for their AI tools.

What is the full story?

According to reports, the teenager, Adam, had been struggling with mental health issues. The family says he shared his suicidal thoughts in conversations with ChatGPT. The lawsuit alleges that instead of advising him to seek professional help or contact a crisis helpline, the chatbot further fuelled his negative thoughts. The family claims that ChatGPT's responses worsened his mental state, ultimately leading to his death by suicide.

Lawsuit filed in a California court

The lawsuit has been filed in a California court and accuses OpenAI of negligence and of product liability for a defectively designed product. The family argues that AI tools like ChatGPT should be built with stronger safeguards for handling sensitive issues, especially conversations about mental health.

Family accuses OpenAI

The family's lawyers argued that ChatGPT should have been programmed to recognise the teenager's distress signals and immediately direct him towards human assistance. They said, “Instead of supporting thoughts like Adam’s, the chatbot should have connected the user to a suicide prevention helpline or mental health professionals.” The family also alleges that OpenAI failed to provide adequate warnings about the potential dangers of its platform.

OpenAI's response

OpenAI has not yet issued an official statement on the matter. The company has, however, previously stated that ChatGPT is designed to give users helpful and safe responses. According to its guidelines, the chatbot is programmed to respond cautiously to sensitive topics and to advise users to seek professional help. In this case, it remains unclear what responses ChatGPT actually gave and whether they complied with those guidelines.

Questions about the technology

The case has raised broader questions about the ethics and safety of AI technology. Mental health experts and technology analysts say AI chatbots need better training to recognise when human intervention is required in sensitive situations. Some experts believe chatbots should immediately redirect conversations involving suicide or mental health crises to human experts or helplines.