
Anthropic now lets kids use its AI technology – within limits

AI startup Anthropic is changing its policies to allow minors to use its generative AI systems – at least in certain circumstances.

In a post published Friday on its official blog, Anthropic announced that it will begin allowing teens and tweens to use third-party apps (though not necessarily its own apps) powered by its AI models, provided the developers of those apps implement specific safety features and disclose to users which Anthropic technologies they are leveraging.

In a support article, Anthropic lists several safety measures that developers building AI-powered apps for minors should include, such as age verification systems, content moderation and filtering, and educational resources on the “safe and responsible” use of AI for minors. The company also says it may make available “technical measures” intended to tailor AI product experiences to minors, such as a “child safety system prompt” that developers targeting minors would be required to implement.

Developers using Anthropic’s AI models will also be required to comply with “applicable” child safety and data privacy regulations, such as the Children’s Online Privacy Protection Act (COPPA), the U.S. federal law that protects the online privacy of children under 13. Anthropic says it will “periodically” audit apps for compliance, suspending or terminating the accounts of developers who repeatedly violate the requirements, and will require developers to “clearly state” on public-facing sites or in documentation that they are in compliance.

“There are certain use cases where AI tools can provide significant benefits to younger users, such as test preparation or tutoring assistance,” Anthropic writes in its article. “With this in mind, our updated policy allows organizations to integrate our API into their products aimed at minors.”

Anthropic’s policy change comes as kids and teens increasingly turn to generative AI tools for help not only with schoolwork but also with personal issues, and as rival generative AI vendors, including Google and OpenAI, explore more use cases aimed at children. This year, OpenAI formed a new team to study child safety and announced a partnership with Common Sense Media to collaborate on child-friendly AI guidelines. And Google has made its Bard chatbot, since rebranded as Gemini, available to teens in English in certain regions.

According to a survey from the Center for Democracy and Technology, 29% of children say they have used generative AI like OpenAI’s ChatGPT to manage anxiety or mental health issues, 22% for problems with friends, and 16% for family conflicts.

Last summer, schools and colleges rushed to ban generative AI applications – particularly ChatGPT – over fears of plagiarism and misinformation. Since then, some have reversed those bans. But not everyone is convinced of generative AI’s positive potential, as evidenced by surveys such as one from the UK Safer Internet Centre, which found that more than half of children (53%) say they have seen people their age use generative AI in a negative way, for example by creating believable false information or images designed to upset someone (including pornographic deepfakes).

Calls for guidelines on children’s use of generative AI are growing.

Last year, the United Nations Educational, Scientific and Cultural Organization (UNESCO) pushed governments to regulate the use of generative AI in education, including imposing age limits for users and safeguards for data protection and user privacy. “Generative AI can be a tremendous opportunity for human development, but it can also cause harm and bias,” said Audrey Azoulay, UNESCO’s director-general, in a press release. “It cannot be integrated into education without public engagement and without the necessary safeguards and regulations from governments.”

Source: TechCrunch
