eSafety Commissioner Julie Inman Grant to target AI chatbots in world-first online safety reform

Australian children will be prevented from having sexual, violent or harmful conversations with AI companions in a world-first move announced today.

eSafety Commissioner Julie Inman Grant has registered six new codes under the Online Safety Act, designed to limit the growing number of children accessing harmful content online.

She told 7.30 the legislative tweak would require tech companies “to embed the safeguards and use the age assurance” before AI chatbots were deployed, and that Australia would be the first country in the world to take such action. 

“We don’t need to see a body count to know that this is the right thing for the companies to do,” she added. 

Ms Inman Grant says Australian schools have been reporting that 10- and 11-year-old children are spending up to six hours per day on AI companions, “most of them sexualised chatbots”.



“I don’t want to see Australian lives ruined or lost as a result of the industry’s insatiable need to move fast and break things,” Ms Inman Grant said.

The six new codes apply to AI chatbot apps, social media platforms, app stores and technology manufacturers, which will be required to verify the ages of users attempting to access harmful content.

The codes have been drafted by industry, including organisations representing some of the largest tech companies in the world, such as Meta, Google and Yahoo.

Ms Inman Grant says the companies behind AI chatbots “know exactly what they’re doing”.


“They’re deliberately addictive by design,” she said.

“Mark Zuckerberg said this was a great antidote to loneliness.”

Last week, ChatGPT owner OpenAI rolled out new safeguards under which parents can be sent warnings if their children show signs of acute distress, such as turning to the chatbot with suicidal thoughts.

In August, an American family launched legal action against OpenAI after a teenage boy killed himself following “months of encouragement from ChatGPT”, the family’s lawyer said.

ABC
