
San Francisco, November 7:
Several families in the United States have filed a lawsuit against OpenAI, claiming that its AI tool, ChatGPT, provided harmful responses that allegedly influenced a teenager toward suicide. According to the lawsuit, the families argue that the AI "counselled" the teen with suggestions that worsened his mental state instead of offering safe or supportive guidance. The case has raised serious concerns about the safety of AI-generated content for vulnerable users, especially minors.
The families are demanding accountability and stricter regulation of AI technology to ensure mental health safeguards. They believe AI platforms must have stronger protection mechanisms to prevent potentially dangerous advice from reaching young users. The lawsuit has triggered a broader debate on the ethical use of AI, with calls for improved monitoring, age restrictions, and responsible development of conversational tools used by millions worldwide.
