Government & Policy

Ofcom to push for better age verification, filters and 40 other checks in new online child safety code



Ofcom is cracking down on Instagram, YouTube and around 150,000 other web services to improve child safety online. A new Children’s Safety Code from the U.K. internet regulator will push tech firms to run better age checks, filter and downrank content, and apply around 40 other steps to assess harmful content around subjects like suicide, self-harm and pornography, and to reduce under-18s’ access to it. The Code is currently in draft form and open for feedback until July 17; enforcement is expected to kick in next year, after Ofcom publishes the final version in the spring. Firms will then have three months to complete their inaugural child safety risk assessments.

The Code is significant because it could force a step-change in how Internet companies approach online safety. The government has repeatedly said it wants the U.K. to be the safest place to go online in the world. Whether it will be any more successful at preventing digital slurry from pouring into kids’ eyeballs than it has actual sewage from polluting the country’s waterways remains to be seen. Critics of the approach suggest the law will burden tech firms with crippling compliance costs and make it harder for citizens to access certain types of information.

Meanwhile, failure to comply with the Online Safety Act can have serious consequences for in-scope web services large and small, with fines of up to 10% of global annual turnover for violations, and even criminal liability for senior managers in certain scenarios.

The guidance puts a big focus on stronger age verification. Following on from last year’s draft guidance on age assurance for porn sites, age verification and estimation technologies deemed “accurate, robust, reliable and fair” will be applied to a wider range of services as part of the plan. Photo-ID matching, facial age estimation and reusable digital identity services are in; self-declaration of age and contractual restrictions on the use of services by children are out.

That suggests Brits may need to get accustomed to proving their age before they access a range of online content — though how exactly platforms and services will respond to their legal duty to protect children will be for private companies to decide: that’s the nature of the guidance here.

The draft proposal also sets out specific rules on how content is handled. Suicide, self-harm and pornography content — deemed the most harmful — will have to be actively filtered (i.e. removed) so minors do not see it. Ofcom wants other types of content such as violence to be downranked and made far less visible in children’s feeds. Ofcom also said it may expect services to act on potentially harmful content (e.g. depression content). The regulator told TechCrunch it will encourage firms to pay particular attention to the “volume and intensity” of what kids are exposed to as they design safety interventions. All of this demands services be able to identify child users — again pushing robust age checks to the fore.

Ofcom previously named child safety as its first priority in enforcing the UK’s Online Safety Act — a sweeping content moderation and governance rulebook that touches on harms as diverse as online fraud and scam ads; cyberflashing and deepfake revenge porn; animal cruelty; and cyberbullying and trolling, as well as regulating how services tackle illegal content like terrorism and child sexual abuse material (CSAM).

The Online Safety Bill passed last fall, and now the regulator is busy with the process of implementation, which includes designing and consulting on detailed guidance ahead of its enforcement powers kicking in once parliament approves the Codes of Practice it’s cooking up.

With Ofcom estimating around 150,000 web services in scope of the Online Safety Act, scores of tech firms will, at the least, have to assess whether children are accessing their services and, if so, take steps to identify and mitigate a range of safety risks. The regulator said it’s already working with some larger social media platforms where safety risks are likely to be greatest, such as Facebook and Instagram, to help them design their compliance plans.

Consultation on the Children’s Safety Code

In all, Ofcom’s draft Children’s Safety Code contains more than 40 “practical steps” the regulator wants web services to take to ensure child protection is enshrined in their operations. A wide range of apps and services are likely to fall in-scope — including popular social media sites, games and search engines.

“Services must prevent children from encountering the most harmful content relating to suicide, self-harm, eating disorders, and pornography. Services must also minimise children’s exposure to other serious harms, including violent, hateful or abusive material, bullying content, and content promoting dangerous challenges,” Ofcom wrote in a summary of the consultation.

“In practice, this means that all services which do not ban harmful content, and those at higher risk of it being shared on their service, will be expected to implement highly effective age-checks to prevent children from seeing it,” it added in a press release Monday. “In some cases, this will mean preventing children from accessing the entire site or app. In others it might mean age-restricting parts of their site or app for adults-only access, or restricting children’s access to identified harmful content.”

Ofcom’s current proposal suggests that almost all services will have to take mitigation measures to protect children. Only those deploying age verification or age estimation technology that is “highly effective” and used to prevent children from accessing the service (or the parts of it where content poses risks to kids) will not be subject to the children’s safety duties.

Those who find — on the contrary — that children can access their service will need to carry out a follow-on assessment known as the “child user condition”. This requires them to assess whether “a significant number” of kids are using the service and/or are likely to be attracted to it. Those that are likely to be accessed by children must then take steps to protect minors from harm, including conducting a Children’s Risk Assessment and implementing safety measures (such as age assurance, governance measures, safer design choices and so on) — as well as applying an ongoing review of their approach to ensure they keep up with changing risks and patterns of use. 

Ofcom does not define what “a significant number” means in this context, but its guidance warns that “even a relatively small number of children could be significant in terms of the risk of harm. We suggest service providers should err on the side of caution in making their assessment.” In other words, tech firms may not be able to eschew child safety measures by arguing there aren’t many minors using their stuff.
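The assessment flow Ofcom describes can be sketched roughly as follows. This is an illustrative interpretation of the draft Code only, not Ofcom’s official test; the function and field names are hypothetical.

```python
# Illustrative sketch of the in-scope assessment flow described above.
# An interpretation of the draft Code, NOT Ofcom's official test;
# all function and field names are hypothetical.

def child_safety_duties_apply(service: dict) -> bool:
    """Rough decision flow for whether a service faces children's safety duties."""
    # Exemption: only services using "highly effective" age assurance to keep
    # children off the service (or its risky parts) escape the duties.
    if service.get("highly_effective_age_assurance"):
        return False
    # Otherwise, the "child user condition": is a significant number of
    # children using the service, or is the service likely to attract them?
    # Ofcom says providers should err on the side of caution, so unknowns
    # default to True here.
    return (
        service.get("significant_child_users", True)
        or service.get("likely_to_attract_children", True)
    )
```

Note the cautious defaults: per Ofcom’s “err on the side of caution” framing, a service that can’t evidence an answer would be treated as in scope.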

Nor is there a simple one-shot fix for services that fall in scope of the child safety duty. Multiple measures are likely to be needed, combined with ongoing assessment of efficacy.

“There is no single fix-all measure that services can take to protect children online. Safety measures need to work together to help create an overall safer experience for children,” Ofcom wrote in an overview of the consultation, adding: “We have proposed a set of safety measures within our draft Children’s Safety Codes, that will work together to achieve safer experiences for children online.” 

Recommender systems, reconfigured

Under the draft Code, any service that operates a recommender system — a form of algorithmic content sorting that tracks user activity — and is at “higher risk” of showing harmful content must use “highly effective” age assurance to identify which of its users are children. It must then configure its recommender algorithms to filter out the most harmful content (i.e. suicide, self-harm, porn) from the feeds of users identified as children, and reduce the “visibility and prominence” of other harmful content.
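In feed-ranking terms, that two-tier treatment might look something like the sketch below. This is a hypothetical illustration of the kind of logic the draft Code describes, not anything Ofcom has specified; the category labels and the downranking factor are assumptions.

```python
# Hypothetical sketch of the two-tier feed treatment the draft Code
# describes for identified child users. Category labels and the 0.1
# downranking factor are illustrative assumptions, not Ofcom's spec.

PRIMARY_PRIORITY = {"suicide", "self_harm", "eating_disorder", "pornography"}
PRIORITY = {"violence", "harmful_challenge", "abuse", "harassment"}

def rank_feed_for_child(items: list[dict]) -> list[dict]:
    """Filter out primary priority content entirely; downrank priority content."""
    ranked = []
    for item in items:
        if item["category"] in PRIMARY_PRIORITY:
            continue  # filtered: a child must not encounter this at all
        score = item["score"]
        if item["category"] in PRIORITY:
            score *= 0.1  # downranked: reduced "visibility and prominence"
        ranked.append({**item, "score": score})
    return sorted(ranked, key=lambda i: i["score"], reverse=True)
```

For example, a self-harm post would be dropped from a child’s feed entirely, while a violent post would survive but sink far down the ranking.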

Under the Online Safety Act, suicide, self harm, eating disorders and pornography are classed “primary priority content”. Harmful challenges and substances; abuse and harassment targeted at people with protected characteristics; real or realistic violence against people or animals; and instructions for acts of serious violence are all classified “priority content.” Web services may also identify other content risks they feel they need to act on as part of their risk assessments.

In the proposed guidance, Ofcom also wants children to be able to provide negative feedback directly to the recommender feed, so the system can better learn what content they don’t want to see.

Content moderation is another big focus in the draft Code. The regulator highlighted research showing that content harmful to children is available at scale on many services, which it said suggests services’ current moderation efforts are insufficient.

Its proposal recommends all “user-to-user” services (i.e. those allowing users to connect with each other, such as via chat functions or through exposure to content uploads) must have content moderation systems and processes that ensure “swift action” is taken against content harmful to children. Ofcom’s proposal does not contain any expectations that automated tools are used to detect and review content. But the regulator writes that it’s aware large platforms often use AI for content moderation at scale and says it’s “exploring” how to incorporate measures on automated tools into its Codes in the future.

“Search engines are expected to take similar action,” Ofcom also suggested. “And where a user is believed to be a child, large search services must implement a ‘safe search’ setting which cannot be turned off and must filter out the most harmful content.”

“Other broader measures require clear policies from services on what kind of content is allowed, how content is prioritised for review, and for content moderation teams to be well-resourced and trained,” it added.

The draft Code also includes measures it hopes will ensure “strong governance and accountability” around children’s safety inside tech firms. “These include having a named person accountable for compliance with the children’s safety duties; an annual senior-body review of all risk management activities relating to children’s safety; and an employee Code of Conduct that sets standards for employees around protecting children,” Ofcom wrote.

Facebook- and Instagram-owner Meta was frequently singled out by ministers during the drafting of the law for having a lax attitude to child protection. The largest platforms may be likely to pose the greatest safety risks — and therefore have “the most extensive expectations” when it comes to compliance — but there’s no free pass based on size.

“Services cannot decline to take steps to protect children merely because it is too expensive or inconvenient — protecting children is a priority and all services, even the smallest, will have to take action as a result of our proposals,” it warned.

Other proposed safety measures Ofcom highlights include suggesting services provide more choice and support for children and the adults who care for them — such as by having “clear and accessible” terms of service; and making sure children can easily report content or make complaints.

The draft guidance also suggests children be provided with support tools that enable them to have more control over their interactions online — such as an option to decline group invites; block and mute user accounts; or disable comments on their own posts.

The UK’s data protection authority, the Information Commissioner’s Office, has expected compliance with its own Age Appropriate Design Code (aka the Children’s Code) since September 2021, so there may be some overlap between the two regimes. Ofcom, for instance, notes that service providers may already have assessed children’s access for data protection compliance purposes, adding that they “may be able to draw on the same evidence and analysis for both.”

Flipping the child safety script?

The regulator is urging tech firms to be proactive about safety issues, saying it won’t hesitate to use its full range of enforcement powers once they’re in place. The underlying message to tech firms: get your house in order sooner rather than later, or risk costly consequences.

“We are clear that companies who fall short of their legal duties can expect to face enforcement action, including sizeable fines,” it warned in a press release.

The government is rowing hard behind Ofcom’s call for a proactive response, too. Commenting in a statement today, the technology secretary Michelle Donelan said: “To platforms, my message is engage with us and prepare. Do not wait for enforcement and hefty fines — step up to meet your responsibilities and act now.”

“The government assigned Ofcom to deliver the Act and today the regulator has been clear; platforms must introduce the kinds of age-checks young people experience in the real world and address algorithms which too readily mean they come across harmful material online,” she added. “Once in place these measures will bring in a fundamental change in how children in the UK experience the online world.

“I want to assure parents that protecting children is our number one priority and these laws will help keep their families safe.”

Ofcom said it wants its enforcement of the Online Safety Act to deliver what it couches as a “reset” for children’s safety online — saying it believes the approach it’s designing, with input from multiple stakeholders (including thousands of children and young people), will make a “significant difference” to kids’ online experiences.

Fleshing out its expectations, it said it wants the rulebook to flip the script on online safety so children will “not normally” be able to access porn and will be protected from “seeing, and being recommended, potentially harmful content”.

Beyond age checks and content management, it also wants the law to ensure kids won’t be added to group chats without their consent; and wants it to make it easier for children to complain when they see harmful content, and to be “more confident” that their complaints will be acted on.

As it stands, the opposite looks closer to what UK kids currently experience online, with Ofcom citing research over a four-week period in which a majority (62%) of children aged 13-17 reported encountering online harm and many saying they consider it an “unavoidable” part of their lives online.

Exposure to violent content begins in primary school, Ofcom found, with children who encounter content promoting suicide or self-harm characterizing it as “prolific” on social media; and frequent exposure contributing to a “collective normalisation and desensitisation”, as it put it. So there’s a huge job ahead for the regulator to reshape the online landscape kids encounter.

As well as the Children’s Safety Code, its guidance for services includes a draft Children’s Register of Risk, which it said sets out more information on how risks of harm to children manifest online; and draft Harms Guidance, which sets out examples of the kinds of content it considers harmful to children. Final versions of all its guidance will follow the consultation process, which is a legal duty on Ofcom. It also told TechCrunch that it will be providing more information and launching some digital tools to further support services’ compliance ahead of enforcement kicking in.

“Children’s voices have been at the heart of our approach in designing the Codes,” Ofcom added. “Over the last 12 months, we’ve heard from over 15,000 youngsters about their lives online and spoken with over 7,000 parents, as well as professionals who work with children.

“As part of our consultation process, we are holding a series of focused discussions with children from across the UK, to explore their views on our proposals in a safe environment. We also want to hear from other groups including parents and carers, the tech industry and civil society organisations — such as charities and expert professionals involved in protecting and promoting children’s interests.”

The regulator recently announced plans to launch an additional consultation later this year, which it said will look at how automated tools, including AI technologies, could be deployed in content moderation processes to proactively detect illegal content and the content most harmful to children — such as previously undetected CSAM and content encouraging suicide and self-harm.

However, there is no clear evidence today that AI will be able to improve detection efficacy of such content without causing large volumes of (harmful) false positives. It thus remains to be seen whether Ofcom will push for greater use of such tech tools given the risks that leaning on automation in this context could backfire.

In recent years, a multi-year push by the Home Office geared towards fostering the development of so-called “safety tech” AI tools — specifically to scan end-to-end encrypted messages for CSAM — culminated in a damning independent assessment which warned such technologies aren’t fit for purpose and pose an existential threat to people’s privacy and the confidentiality of communications.

One question parents might have: what happens on a kid’s 18th birthday, when the Code no longer applies? If all these protections wrapping kids’ online experiences end overnight, there could be a risk of (still) young people being overwhelmed by sudden exposure to harmful content they’ve been shielded from until then. That sort of shock transition could itself create a new online coming-of-age risk for teens.

Ofcom told us future proposals for larger platforms could be introduced to mitigate this sort of risk.

“Children are accepting this harmful content as a normal part of the online experience — by protecting them from this content while they are children, we are also changing their expectations for what’s an appropriate experience online,” an Ofcom spokeswoman responded when we asked about this. “No user, regardless of their age, should accept to have their feed flooded with harmful content. Our phase 3 consultation will include further proposals on how the largest and riskiest services can empower all users to take more control of the content they see online. We plan to launch that consultation early next year.”
