Internet users are getting younger; now the UK is weighing whether AI can help protect them

Artificial intelligence is in the crosshairs of governments concerned about how it could be used for fraud, disinformation and other malicious activity online. Now, in the UK, a regulator is preparing to explore how AI is being used to combat some of these same problems, particularly around content harmful to children.

Ofcom, the regulator responsible for enforcing the UK’s Online Safety Act, has announced plans to launch a consultation on how AI and other automated tools are used today, and could be used in the future, to proactively detect and remove illegal content online, specifically to protect children from harmful content and to identify child sexual abuse material that was previously hard to detect.

These tools would be part of a wider set of proposals Ofcom is putting together focused on children’s online safety. Consultations on the full set of proposals will begin in the coming weeks, with the consultation on AI coming later this year, Ofcom said.

Mark Bunting, a director of Ofcom’s online safety group, says his interest in AI starts with a look at how well it is being used as a screening tool today.

“Some services are already using these tools to identify and shield children from this content,” he said in an interview with TechCrunch. “But there isn’t much information about how accurate and effective those tools are. We want to look at ways in which we can ensure that industry is assessing (this) when they use them, making sure that risks to free expression and privacy are being managed.”

A likely outcome will be Ofcom recommending how and what platforms should assess, which could lead not only to platforms adopting more sophisticated tooling, but also potentially to fines if they fail to make improvements, whether in blocking content or in creating better ways to keep younger users from seeing it.

“As with many online safety regulations, it is up to businesses to ensure they are taking the appropriate steps and using the appropriate tools to protect users,” he said.

There will be both critics and supporters of these measures. AI researchers are discovering ever more sophisticated ways to use AI to detect, for example, deepfakes, as well as to verify users online. Yet many skeptics believe that AI detection is far from infallible.

Ofcom announced the consultation on AI tools at the same time as it published its latest research into how children engage online in the UK, which found that, overall, there are more young children online than ever before, so much so that Ofcom is now breaking down activity among younger and younger age groups.

Nearly a quarter, or 24%, of all 5- to 7-year-olds now own their own smartphone, and when you include tablets, that figure rises to 76%, according to the survey of UK parents. This same age group is also using media significantly more on these devices: 65% have made voice and video calls (up from 59% just a year ago), and half of the children (up from 39% a year ago) watch streamed media.

Age restrictions on some mainstream social media apps may be getting lower and lower, but whatever the limits, in the UK they do not appear to be enforced anyway. Some 38% of 5- to 7-year-olds use social media, Ofcom found. Meta’s WhatsApp, at 37%, is the most popular app among them. And in perhaps the first instance of Meta’s flagship image app being relieved to be less popular than ByteDance’s viral sensation, TikTok was found to be used by 30% of 5- to 7-year-olds, with Instagram at “only” 22%. Discord rounds out the list but is significantly less popular, at just 4%.

About a third, or 32%, of children of this age go online on their own, and 30% of parents said they were OK with their minor children having social media profiles. YouTube Kids remains the most popular network among young users, at 48%.

Gaming, a perennial favorite among children, is now an activity for 41% of 5- to 7-year-olds, with 15% of that age group playing shooter games.

While 76% of parents surveyed said they had spoken to their young children about online safety, there are question marks, Ofcom points out, over the gap between what a child sees and what that child might report. In researching older children aged 8 to 17, Ofcom asked them directly. The study found that 32% of those children said they had seen worrying content online, but only 20% of their parents said they reported anything.

Even accounting for some inconsistencies in reporting, “the research suggests a disconnect between older children’s exposure to potentially harmful content online and what they share with their parents about their online experiences,” Ofcom writes. And disturbing content is just one of the challenges: deepfakes are also a problem. Among children aged 16 to 17, Ofcom said, 25% said they were not confident they could distinguish fake content from the real thing online.
