
AI’s Role in Mental Health Questioned as OpenAI Adds Parental Controls

OpenAI has unveiled plans to add parental controls to ChatGPT, aiming to address concerns about the risks of artificial intelligence in the lives of young users. The new tools will let parents connect their accounts with their children’s, restrict chat history, and enforce “age-appropriate” use.

The system will also issue alerts if a child’s conversations indicate signs of distress. OpenAI said the features are designed to create “healthy guidelines” for digital use, with further input from psychologists expected.

The announcement comes in the wake of a high-profile lawsuit in California. Parents Matt and Maria Raine allege that ChatGPT worsened their teenage son Adam’s mental health struggles and contributed to his suicide.

Their lawyer, Jay Edelson, has dismissed the new features as insufficient, arguing that the chatbot’s design enabled harmful interactions. The case has intensified debate about AI and mental health. Experts warn against over-reliance on chatbots as replacements for human connection or therapy.

A study in Psychiatric Services highlighted that while leading AI models—including ChatGPT—generally follow clinical safety standards in extreme cases, their responses can be inconsistent in moderate-risk scenarios.

Analysts say the incident underscores the urgent need to regulate how AI platforms interact with vulnerable users. With AI increasingly embedded in everyday life, the question of responsibility—whether by design, regulation, or parental oversight—remains unresolved.

OpenAI’s parental controls are set to roll out within the next month, but critics insist that protecting children online will require more than technical fixes alone.
