Content warning: This article includes discussion of suicide. If you or someone you know is having suicidal thoughts, help is available from the National Suicide Prevention Lifeline (US), Crisis Services Canada (CA), Samaritans (UK), Lifeline (AUS), and other hotlines.
Facebook parent company Meta has said it will introduce extra safety features to its AI chatbots, shortly after a leaked document prompted a US senator to launch an investigation into the company.
The internal Meta document, obtained by Reuters, was reportedly titled “GenAI: Content Risk Standards” and, among other things, showed that the company’s AIs were permitted to have “sensual” conversations with children.
Republican Senator Josh Hawley called it “reprehensible and outrageous” and has launched an official probe into Meta’s AI policies. For its part, Meta told the BBC that “the examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed.”
Now Meta says it will introduce more safeguards to its AI bots, including blocking them from talking to teen users about topics such as suicide, self-harm and eating disorders. Which raises an obvious question: what the hell has it been doing up to now? And is it still fine for Meta’s AI to discuss such things with adults?
“As we continue to refine our systems, we’re adding more guardrails as an extra precaution—including training our AIs not to engage with teens on these topics, but to guide them to expert resources, and limiting teen access to a select group of AI characters for now,” Meta spokesperson Stephanie Otway told TechCrunch.
The mention of AI characters refers to the user-made chatbots, built atop Meta’s LLMs, that the company hosts across platforms such as Facebook and Instagram. Needless to say, some of these bots are highly questionable: another Reuters report found countless examples of sexualised celebrity bots, including one based on a 16-year-old film star, and revealed that a Meta employee had created various AI Taylor Swift ‘parody’ accounts. Whether Meta can stem the tide remains to be seen, but Otway insists that teen users will no longer be able to access such chatbots.
“While further safety measures are welcome, robust safety testing should take place before products are put on the market—not retrospectively when harm has taken place,” Andy Burrows, head of suicide prevention charity the Molly Rose Foundation, told the BBC.
“Meta must act quickly and decisively to implement stronger safety measures for AI chatbots and [UK regulator] Ofcom should stand ready to investigate if these updates fail to keep children safe.”
The news comes shortly after a California couple sued ChatGPT-maker OpenAI over the suicide of their teenage son, alleging the chatbot encouraged him to take his own life.