A US senator has opened an investigation into Meta after a leaked internal document reportedly revealed that the company’s artificial intelligence enabled “sensual” and “romantic” conversations with children.
Confidential paper exposed
Reuters reported the document was titled “GenAI: Content Risk Standards.” Republican Senator Josh Hawley called it “reprehensible and outrageous.” He demanded access to the full paper and the products connected to it.
Meta denied the accusations. A spokesperson stated: “The examples and notes in question were erroneous and inconsistent with our policies.” The spokesperson stressed that Meta upholds “clear rules” for chatbot responses, which “prohibit content that sexualizes children and sexualized role play between adults and minors.”
The company added the paper contained “hundreds of examples and annotations” reflecting hypothetical scenarios tested by internal teams.
Senator intensifies criticism
Josh Hawley, senator for Missouri, announced the probe on 15 August in a post on X. “Is there anything Big Tech won’t do for a quick buck?” he asked. He continued: “Now we learn Meta’s chatbots were programmed to carry on explicit and ‘sensual’ talk with 8-year-olds. It’s sick. I am launching a full investigation to get answers. Big Tech: leave our kids alone.”
Meta owns WhatsApp, Instagram and Facebook.
Parents seek protection
The leaked document reportedly revealed further troubling risks, showing that Meta’s chatbot could spread false medical information and engage users in sensitive discussions about sex, race, and celebrities. The paper was written to define standards for Meta AI and other chatbot assistants across Meta platforms.
“Parents deserve the truth, and kids deserve protection,” Hawley wrote in a letter to Meta and chief executive Mark Zuckerberg. He pointed to one disturbing case. The rules allegedly permitted a chatbot to tell an eight-year-old their body was “a work of art” and “a masterpiece – a treasure I cherish deeply.”
Reuters also reported that Meta’s legal team approved controversial permissions. One allowed Meta AI to spread false information about celebrities, provided it added a disclaimer warning of inaccuracy.