Artificial intelligence is in the crosshairs of governments concerned about how it could be misused for fraud, disinformation and other malicious activity online. Now in the UK, a regulator is preparing to explore how AI can be used to combat some of these same harms, particularly content that is harmful to children.
Ofcom, the regulator responsible for enforcing the UK's Online Safety Act, announced plans to launch a consultation on how AI and other automated tools are used today, and may be used in the future, to proactively detect and remove illegal content online, particularly to protect children from harmful content and to identify child sexual abuse material that was previously difficult to detect.
These tools would be part of a wider set of proposals Ofcom is putting together focused on children’s online safety. Consultations on the overall proposals will begin in the coming weeks, with the consultation on AI following later this year, Ofcom said.
Mark Bunting, director of Ofcom’s online safety group, says the regulator’s interest in AI starts with a look at how well it is used as a screening tool today.
“Some services are already using these tools to identify and protect children from this content,” he said in an interview with TechCrunch. “But there isn’t much information about the accuracy and effectiveness of these tools. We want to look at ways in which we can ensure that industry is assessing [this] when they use them, making sure that risks to free expression and privacy are being managed.”
A likely outcome is that Ofcom will recommend how, and what, platforms should assess, which could lead not only to platforms adopting more sophisticated tools, but also to fines if they fail to deliver improvements, whether by blocking content or by creating better ways to keep younger users from seeing it.
“As with many online safety regulations, it is up to businesses to ensure they are taking the appropriate steps and using the appropriate tools to protect users,” he said.
There will be both critics and supporters of these measures. AI researchers are finding ever more sophisticated ways to use AI to detect, for example, deepfakes, as well as to verify users online. Yet there are just as many skeptics who note that AI detection is far from foolproof.
Ofcom announced the consultation on AI tools at the same time as it published its latest research into how children engage online in the UK, which found that, overall, more young children are connected than ever before, so much so that Ofcom is now breaking out activity among increasingly younger age groups.
Nearly a quarter, 24%, of all children aged 5 to 7 now own their own smartphone, and when tablets are included that figure rises to 76%, according to a survey of UK parents. The same age group is also using media significantly more on these devices: 65% have made voice and video calls (up from 59% just a year ago), and half (up from 39% a year ago) watch streamed media.
Age restrictions on some mainstream social media apps are creeping lower, but whatever the limits, in the UK they don’t appear to be enforced anyway. According to Ofcom, around 38% of children aged 5 to 7 use social media. Meta’s WhatsApp, at 37%, is the most popular app among them. And in perhaps the first instance of Meta’s flagship image app being relieved to be less popular than ByteDance’s viral sensation, TikTok was found to be used by 30% of 5- to 7-year-olds, with Instagram at “only” 22%. Discord rounds out the list but is significantly less popular, at just 4%.
About a third, 32%, of children this age go online on their own, and 30% of parents said they were OK with their underage children having social media profiles. YouTube Kids remains the most popular network among young users, at 48%.
Games, a perennial favorite among children, are now played by 41% of children aged 5 to 7, with 15% of this age group playing shooter games.
While 76% of parents surveyed said they had spoken to their young children about online safety, there is, Ofcom points out, a gap between what a child sees and what that child might report. In researching older children aged 8 to 17, Ofcom asked them directly. It found that 32% of the children said they had seen worrying content online, but only 20% of their parents said they had reported anything.
Even accounting for some inconsistencies in reporting, “the research suggests a disconnect between older children’s exposure to potentially harmful content online and what they share with their parents about their online experiences,” Ofcom writes. And disturbing content is only one of the challenges: deepfakes are also a problem. Among children aged 16 to 17, Ofcom said, 25% were not confident they could tell fake from real online.