Elon Musk warns ‘kids, mentally unwell’ users to avoid ChatGPT amid AI safety row – Firstpost


Elon Musk has warned that children and people struggling with mental health should stay away from ChatGPT, intensifying a public dispute over artificial intelligence safety with Sam Altman after reports linked the chatbot to conversations with a suspect in a Canadian school shooting.

The Tesla and SpaceX chief made the remarks in a post on X (formerly Twitter), responding to a discussion about the fatal shooting in the Canadian town of Tumbler Ridge. Musk said the AI chatbot should be kept away from children and individuals who may be mentally unwell.

“Keep ChatGPT away from kids and the mentally unwell,” he said.

The comments quickly spread online and added fuel to an escalating debate over the safety and oversight of rapidly advancing AI tools.

Shooting reports spark scrutiny

The controversy follows reports that a suspect in the shooting in Tumbler Ridge had interacted multiple times with ChatGPT in the days leading up to the attack.

According to a report in The Wall Street Journal, some of the conversations included discussions of violent scenarios and were flagged by OpenAI’s automated monitoring systems.

The report said employees internally debated whether the chats should be shared with law enforcement authorities. Ultimately, the company determined the activity did not meet the threshold for alerting police at that time.

A spokesperson later confirmed that the account involved had been suspended following the incident.

Lawsuit filed against OpenAI

The incident has also triggered legal action in Canada. The mother of a student injured in the shooting has filed a lawsuit against OpenAI, alleging the chatbot provided information that helped the suspect plan the attack.

The case has been filed with the Supreme Court of British Columbia, adding to the growing legal scrutiny faced by AI developers over how their systems handle potentially harmful interactions.

Musk–Altman feud intensifies

Musk, who co-founded OpenAI before leaving the organisation in 2018, has increasingly criticised the company’s approach to AI safety. In recent comments, he blamed ChatGPT for several deaths and urged people not to allow their loved ones to use the chatbot.

Altman responded by raising safety concerns about Musk’s own ventures, including incidents involving vehicles equipped with Tesla Autopilot. He also criticised decisions surrounding Grok, the AI chatbot developed by Musk’s company xAI.

The exchange highlights growing tensions between leading figures in the AI industry as companies race to deploy increasingly powerful systems while facing pressure from governments and the public to strengthen safeguards.

OpenAI said the shooting was an “unspeakable tragedy” and that it is working with experts and authorities to improve systems designed to detect and prevent conversations that could lead to real-world harm.
