OpenAI Develops Age Prediction for AI Services Amidst EU's AI Act
OpenAI is developing an age prediction algorithm for its AI services, aiming to guide users towards suitable content and alert law enforcement in critical distress cases. This move comes amidst growing concerns about the mental health impacts of AI on teens, particularly from chatbots. The EU, meanwhile, is drafting regulations to tackle these issues, with the AI Act and Digital Services Act in the spotlight.
The EU's AI Act, which entered into force in 2024, prohibits AI systems from using manipulative techniques that could cause significant harm, including harm to mental health. However, the EU's guidelines on protecting minors online are advisory rather than binding, and they do not mandate effective age verification methods. Tech companies are taking their own steps to address the mental health risks chatbots pose to teens: Meta, for example, is training its AI chatbots to avoid sensitive topics and to direct users to professional support.
OpenAI has introduced parental controls for ChatGPT, intended to provide a safer, age-appropriate experience for teens. Yet the AI Act does not fully account for how AI systems behave in practice, leaving many mental health concerns unaddressed. Stronger technical solutions for age verification, such as the EU's age verification app, exist but remain optional. Parental controls may mitigate immediate risks, but they do not resolve the deeper, systemic problem: growing reliance on AI without an effective legal framework behind it.
As AI systems become more integrated into our lives, particularly for younger users, the need for robust regulations and safeguards is evident. While the EU's AI Act is a step in the right direction, it is clear that more needs to be done to protect users, especially teens, from potential mental health harms. Tech companies and lawmakers must work together to ensure that AI is used responsibly and ethically.