The Federal Trade Commission (FTC) has issued orders to seven major tech companies requiring detailed disclosure of how their AI chatbots safeguard children. The inquiry covers OpenAI, Alphabet, Meta, xAI, Snap, Character Technologies, and Instagram, and requires them to provide information within 45 days on how they monetize user engagement, develop chatbot characters, and protect minors. Concerns about AI companions' interactions with children have intensified after advocacy groups reported that, within just 50 hours of testing, bots suggested inappropriate content to users aged 12 to 15. The FTC aims to examine how these companies process user data, enforce age restrictions, and monitor for negative impacts. The inquiry follows a series of safety incidents, including a lawsuit tied to a teenager's suicide after an obsessive relationship with an AI bot, underscoring the urgent need for stronger online safety protections for children and teenagers.