Ofcom lights the path for age assurance
Both clarifying and infuriating in equal measure, Ofcom's guidance sets a standard
Ofcom is wasting no time in cranking out voluminous guidance in its new role as enforcer of the Online Safety Act (OSA). A few weeks ago this included no fewer than 295 pages across four documents on the topic of age assurance alone.
That level of detail seems unhelpful at first glance,1 but much of it is surprisingly clear (with some notable exceptions, see below) and in fact useful for baselining which age assurance methods work well enough to be considered mainstream.
The Key Bits
If you run an online service and wonder whether this applies to you, the short answer is: Yes, it probably does. (If you’re not sure, see So What Next below.)
Here’s what Ofcom expects: implement effective age assurance to protect under-18s from harms. The shortlist of methods considered capable of being effective includes: open banking checks, photo-ID matching (against a selfie), facial age estimation (FAE), mobile network operator checks, credit card checks, email address analysis (with ownership verification), and digital IDs.
(Definitely not effective are: self-declared age, payment methods (like debit cards) that are available to teens, and terms of service or contractual restrictions.)
The criteria to determine effectiveness are: accuracy (produces the correct answer in test conditions), robustness (works across real-world conditions, not easy to circumvent), reliability (reproducible, trustworthy result), and fairness (minimal bias or discrimination).
Additionally the method should be: accessible (eg, by offering a variety of methods), and interoperable (where available, to reduce burden).
Interestingly, privacy and data security do not figure prominently in the upfront principles—a refreshing sign, perhaps, that in a country with a mature data privacy law such things no longer need to be spelled out. Buried deeper in the guidance is a reminder that any approach needs to “follow a data protection by design approach,” including performing a Data Protection Impact Assessment (DPIA), being transparent with users, keeping records, and ensuring third-party providers comply.
The Omissions
While Ofcom’s approach is relatively tech-neutral and future-proof, there are gaps that could create serious implementation challenges.
No clear benchmarks for accuracy. Ofcom fails to define how accurate an age check must be given the harms being managed. For age estimation in particular, the key trade-off is always what ratio of false negatives (older users wrongly rejected) to false positives (under-18s wrongly allowed in) is appropriate for a given risk level. In fact, in contrast with the Age Appropriate Design Code (AADC), Ofcom does not seem to embrace risk-based assessments at all, meaning low-risk services (say, a football club website) might be held to the same standard as higher-risk ones (say, a gambling service).2
(Ofcom does helpfully codify the best practice for age estimation, long used in facial age estimation for verifying adulthood: the challenge approach, where, for example, the estimation threshold is set at 25 so that very few under-18s are wrongly waved through, and adults aged 18-25 who fall below the threshold are offered a ‘hard’ age verification method instead.)
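In code, the challenge approach boils down to a simple two-step check. The sketch below is only illustrative: the threshold values and the estimate_age / hard_verify helpers are my placeholders, not anything Ofcom specifies.

```python
# Illustrative sketch (not Ofcom's specification) of the 'challenge age'
# approach described above. The two helpers are stand-ins: a real service
# would call a facial age estimation (FAE) provider and a 'hard'
# verification flow (photo-ID match, open banking check, etc.).

CHALLENGE_AGE = 25   # estimation threshold, set well above 18 to absorb estimation error
ACCESS_AGE = 18      # the age the service actually needs to establish


def estimate_age(selfie: bytes) -> float:
    """Placeholder for an FAE provider call; returns an estimated age."""
    raise NotImplementedError


def hard_verify(minimum_age: int) -> bool:
    """Placeholder for a 'hard' check, eg photo-ID matching or open banking."""
    raise NotImplementedError


def user_is_adult(selfie: bytes) -> bool:
    """Return True if the user can be treated as 18 or over."""
    if estimate_age(selfie) >= CHALLENGE_AGE:
        # Comfortably above the challenge age: very few under-18s land here.
        return True
    # Below the challenge age: most likely an adult aged 18-25 who was
    # wrongly rejected by estimation, so offer a 'hard' verification route.
    return hard_verify(minimum_age=ACCESS_AGE)
```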
Ignoring under-18 age bands. The biggest difficulty in age assurance isn’t confirming whether someone is over 18 (that’s a solved problem); it’s telling a 10-year-old from a 13-year-old from a 16-year-old, for services that tailor experiences by age group. Most of Ofcom’s ‘approved’ methods (banking, mobile checks, credit cards) don’t work for those age bands. Facial age estimation is less reliable for mid-teen ages (though getting better all the time), yet Ofcom sidesteps this issue entirely.
No support for parent-provided age. Ofcom flatly ignores parental attestation of age (combined with proof of adulthood), even though this is a crucial tool3 for aligning age assurance with the AADC and the OSA’s broader child protection goals. Parents may have been massively complicit in helping kids lie about their age online to date, but that was in contexts where the perceived stakes were lower and where the alternative (being truthful) blocked access rather than adapting the service.
Weak stance on interoperability and age tokens. Ofcom mentions age token reuse in passing, but doesn’t establish a clear framework. Its guidance suggests operators “should ensure” age checks can be reused across services, but does not address the legal and technical barriers that keep platforms from really working on interoperability. Given how critical sharing age signals is for making the internet safer at scale (which I outlined in detail here and here and here), this is a massive missed opportunity.
So what next?
To work out whether they are in scope of the new age assurance requirements, all user-to-user and search services must carry out a children’s access assessment (CAA) by 16 April. But Ofcom has essentially pre-empted the result: “Unless they are already using highly effective age assurance and can evidence this, […] most of these services will need to conclude that they are likely to be accessed by children.”
The CAA has two stages: (1) are children able to access my service? (2) are a significant number of children using it? The answer to (1) is always yes if you don’t already have ‘effective’ age assurance in place. So any service age-gated in the most common way (self-declared age, eg to comply with COPPA) fails the first test, ie is considered accessible by children.
The answer to (2) is yes if either (a) the number is ‘significant’ or (b) the service is attractive to kids or teens.
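Stripped down to its bare logic, the assessment reads roughly like the sketch below. This is my paraphrase, not Ofcom’s wording, and the boolean inputs are precisely the judgment calls the guidance leaves to the operator.

```python
# Rough decision sketch of the children's access assessment (CAA) as
# described above. The inputs are the operator's own evidence-based
# judgments; Ofcom does not supply them.

def likely_to_be_accessed_by_children(
    has_highly_effective_age_assurance: bool,  # stage 1, with evidence to back it up
    significant_number_of_child_users: bool,   # stage 2(a): the 'significant' test
    content_attractive_to_children: bool,      # stage 2(b): see the content list below
) -> bool:
    # Stage 1: are children able to access the service at all?
    if has_highly_effective_age_assurance:
        return False  # out of scope, provided the age assurance can be evidenced
    # Stage 2: are a significant number of children using it,
    # or is the service of a kind likely to attract them?
    return significant_number_of_child_users or content_attractive_to_children
```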
What does ‘significant’ mean? This is where Ofcom’s lawyers really sharpened their pencils…
the reference to a ‘significant number’ includes a reference to a number which is significant in proportion to the total number of UK users. In considering whether a service has a significant number of users who are children, services must base their assessment on evidence about who actually uses the service rather than who the intended users of the service are.
And furthermore:
A significant number of users who are children means a number or proportion that is material in the nature and context of your service. Even a relatively small number or percentage of children could be a significant number.
This circular definition manages to be both exhaustively detailed and completely open-ended.
To answer 2(b), Ofcom helpfully provides a list of content types it considers inherently appealing to children. If your service offers one of these, it will probably be deemed to attract a ‘significant’ number of under-18s:
Entertainment & popular culture (eg, music, videos, humour/funny content, influencers, celebrities, film, TV, books, comics, animation)
Creative activities (eg, art, music, singing, photography, videography, drawing, painting, cooking, drama and acting, crafts, creative writing, beauty and makeup and fashion)
Games and sports
Making connections, friendships, dating, and relationships
Self-improvement, lifestyle and careers
Health, challenges and support
Education, learning and knowledge
Current affairs and engaging in civil activity
So, in essence: most of the internet. Or at least the attention internet.4
The burden of proof for these tests falls on the operator, who has to produce evidence both that (a) its service is not of a kind likely to attract a significant number of kids, and that (b) it does not in fact have a significant number of kids.
If a service determines it is in scope, it will need to conduct a children’s risk assessment by July 2025 (guidance for which is due to be updated in April), and then “implement measures to protect children on their services, in line with our Protection of Children Codes to address the risks of harm identified,” including age checks to prevent harm to under-18s.
Ofcom’s framework is ambitious, vague, and highly burdensome, especially for platforms that don’t directly target children but fall into its sweeping definitions. But if nothing else, it creates an unprecedented testbed for age verification technologies. The real-world data on effectiveness and user behaviour that emerges from this massive experiment may finally move us beyond the heated debate about age assurance and toward evidence-based solutions.
1. In keeping with December’s drop of 2,400 pages of Codes of Practice, guidance and risk registers relating to the rollout of the OSA.
2. With the exception of pornography, which is subject to the separate Part 5 duties, requiring mandatory, immediate (January 2025) age assurance measures; no allowance for adapting the measures to the risk level; and mandatory record-keeping and public reporting. That said, the guidance makes no distinction between the age assurance standards themselves for different services, including pornography.
3. A helpful description of this approach is outlined in section 3 of CommonSense Media’s 2024 review of the U.S. landscape for age assurance.
4. So, I guess we can safely say that content types which Ofcom may accept are not inherently attractive to kids include: corporate websites, scientific journals, politics, news (so long as not entertainment or celebrity-related), classic literature, wine reviews, home improvement, legal case law, art auctions, healthcare, luxury yachts, etc?