What's behind the ICO's updated opinion on age assurance?
Looks like we're not done tweaking the Children's Code. And that's a good thing.
The UK’s data protection authority, the Information Commissioner’s Office (ICO), recently “renewed its 2021 age assurance Opinion” to reflect technology developments and to start aligning its Age-Appropriate Design Code (aka the Children’s Code) with upcoming legislation, specifically the Online Safety Act (OSA).
This won’t be the last update on this topic. In fact, the ICO says so: this opinion too will be replaced “in due course” by full guidance on age assurance. There are two main reasons for this. First, the state of the art in age assurance tech is shifting rapidly, while the pragmatic considerations—such as reliability, privacy, access, inclusion—are becoming more widely understood. Second, the ICO is heading into a (friendly) potential jurisdictional tussle with its sibling, Ofcom, which has been named as the enforcement authority for the sometimes-overlapping OSA.
In any case, the ICO’s guidance is broadly helpful to operators. It confirms the centrality of taking a risk-based approach to compliance, which is critical for untangling the complexity of (and addressing the confusion around) age assurance. And it contributes to the socialisation of an age assurance taxonomy that really ought to underpin every practical conversation among operators and policymakers around the best-fit solutions for particular situations.
Age Assurance 2024-style
Let’s break down what the ICO’s new paper means for operators specifically.
From the Code’s perspective, your service is either likely to be accessed by children, or not. If it is not, your obligations under the Code pretty much stop here. If it is, then you need to decide whether it creates a high privacy risk or not. High risk includes:
large-scale profiling—for automated decision-making, to create segments, to infer interests or behaviours
invisible processing—working with kids’ data obtained from third parties such as list brokers
behavioural advertising & targeting—or any personalised marketing that uses kids’ personal information (PI) including persistent identifiers like IP addresses or cookie IDs
sharing/revealing location or health data—or anything else that could put them at risk of physical or developmental harm
use of innovative technologies—such as AI or wearables. Note this last one is vague and appears to be a catch-all for future risks, yet to be articulated.
If your service is high risk, you need to: (a) complete a Data Protection Impact Assessment (DPIA) to work out what the risks are and how to mitigate them; and (b) apply an age assurance mechanism that is accurate enough to stop kids encountering the risks you’ve identified and that complies with the GDPR’s other obligations (lawful basis, fairness, purpose limitation, data minimisation, etc).
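To make the triage concrete, here is a minimal sketch of the decision flow in Python. The category names and return values are purely illustrative assumptions for this post, not an official ICO checklist:

```python
# Illustrative sketch of the Children's Code risk triage described above.
# Category names are invented for this example, not ICO terminology.

HIGH_RISK_PROCESSING = {
    "large_scale_profiling",
    "invisible_processing",
    "behavioural_advertising",
    "location_or_health_sharing",
    "innovative_technologies",  # the vague catch-all: AI, wearables, etc.
}

def childrens_code_obligations(likely_accessed_by_children: bool,
                               processing_activities: set[str]) -> list[str]:
    """Return the compliance steps implied by the risk-based approach."""
    if not likely_accessed_by_children:
        return []  # the Code's age assurance duties pretty much stop here
    if processing_activities & HIGH_RISK_PROCESSING:
        return [
            "complete a DPIA",
            "apply age assurance accurate enough for the identified risks",
            "meet GDPR basics: lawful basis, fairness, minimisation, ...",
        ]
    return ["self-declaration (age gate) may be sufficient"]
```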
This is where the simplified age assurance taxonomy comes in handy. The ICO differentiates between self-declaration, age estimation and age verification.1 They do different things and are useful for different purposes.
Self-declaration—ie an age gate—may be sufficient if your service presents a low risk to children, ie you’re definitely not doing any of the high risk stuff above. Most child-directed content, games and services ought to fall into this category. Those on the margins or where some higher risks exist should consider a combination approach (see ‘waterfall’ below).
The ICO says that ideally self-declaration would be used where the incentive for children to lie at the age gate is also low. Unfortunately, ‘low privacy risk’ does not often overlap with ‘low attraction for kids’—quite the contrary: some of the most popular games in the world, eg Fortnite, Roblox and Among Us, are also some of the most privacy-protective.
Even where a simple age gate is enough, the ICO requires operators to make it hard to circumvent: ask for age neutrally (without revealing the qualifying age), and take technical measures to prevent users from back-buttoning to change their answer.
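As a rough illustration, a neutral, hard-to-circumvent age gate might look something like this. The session store, threshold and function names are assumptions for the sketch, not a prescribed implementation:

```python
# Sketch of a neutral age gate. Two things to note: the prompt never
# reveals the qualifying age (we ask for a date of birth, not "are you
# over 13?"), and a failed attempt is remembered so the user cannot
# simply hit back and enter a different date of birth.

from datetime import date

MINIMUM_AGE = 13  # example threshold; set per your own risk assessment

def passes_age_gate(session: dict, dob: date) -> bool:
    # Block back-button retries; in practice, persist this flag beyond
    # the session (eg per device) so retries are harder.
    if session.get("age_gate_failed"):
        return False
    today = date.today()
    age = today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))
    if age < MINIMUM_AGE:
        session["age_gate_failed"] = True
        return False
    return True
```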
Age estimation via algorithmic methods (including AI-based, such as facial age estimation) can be particularly useful for initial account creation or onboarding for services that present mixed risks, where you want to categorise users into adult/teen/kid, for example, and then use a harder age verification method to gate more mature experiences.
Both the ICO and Ofcom have explicitly highlighted facial age estimation as an effective age assurance method. In order to match the required accuracy to the level of risk, they recommend using a ‘Challenge’ buffer such as that applied in retail outlets selling alcohol (‘if you look under 25, show me ID’). Depending on your tolerance for letting in someone under 18, you may want to set the facial age estimation cut-off at 21 or at 25. Leading provider Yoti has done excellent research on the effectiveness of this approach.
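The buffer logic itself is trivial to express; the judgment call is where to set the cut-off. A sketch, with the numbers as assumptions (your real thresholds should come from your DPIA and your vendor’s published accuracy data):

```python
# 'Challenge 25'-style buffer for facial age estimation: only pass users
# whose estimated age clears the access age plus a safety margin, and
# escalate everyone else to a harder check.

ACCESS_AGE = 18       # the age the experience actually requires
CHALLENGE_BUFFER = 7  # pass only at 25+, like retail Challenge 25

def gate_with_estimation(estimated_age: float) -> str:
    if estimated_age >= ACCESS_AGE + CHALLENGE_BUFFER:
        return "allow"  # comfortably above the line, even allowing for estimation error
    return "escalate_to_id_check"  # borderline: fall back to hard verification
```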
Note that if you use an age estimation method that processes biometric data or uses machine learning, you must complete a DPIA to make sure you know the data privacy implications. If done right, your provider will process only biometric data (which relates to an individual), but not ‘special category biometric data’ which “uniquely identifies a natural person.”2 You don’t need to (and definitely don’t want to!) identify a person if you are only looking to estimate or verify age.
Age verification using hard identifiers (which is what many unhelpfully believe = age assurance) is applicable when you really want to keep kids out of your service entirely.3 If you’re operating a gambling, alcohol or adult content site, then your focus will be to make this as painless and privacy-preserving as possible (and there are lots of companies developing new solutions here). The same applies if you need to confirm someone is an adult for purposes of verified parental consent under GDPR Article 8.
Scanning IDs, possibly combined with selfie comparison, can be very effective, but it is also the most privacy-invasive method and gives you access to far more information than you need to prove age. Your provider can address the privacy issue by collecting/providing only what you need (ie, age), but make sure to do due diligence on how that works.
Centralised database checks, such as via open banking or mobile network operator databases, may be viable. Note that using a payment card (which was popularised for verifying parental consent under COPPA in the US) is only viable for high-risk services if the payment service in question is restricted to over-18s and the provider can evidence that it has done appropriate checks to enforce that.
Note that both the above methods potentially fail the fairness test by being exclusive, ie not everyone has a bank account or a mobile phone. You’ll need to take access and inclusion into account when assessing appropriateness. The most common solution is to offer the user a choice of methods.
Recently, Ofcom has also called out facial age estimation as a viable method for ‘hard’ age verification, provided the ‘challenge’ buffer is set high enough to ensure very high accuracy.
Finally, the ICO recognises that sometimes the best-fit approach is a combination of methods, what it calls ‘waterfalls’. For many services, multiple layers of age assurance will become standard. It’s the only way to take into account the multiplicity of content types, features and monetisation methods within one service, and it is the most user-friendly way to implement the risk-based approach to keeping children safe.
An example might be to use facial age estimation at the point of account creation, to let users into the service with a broad age range tag (kid/teen/adult) that can be used to segment the content they see along a relatively low-risk spectrum. Then, before you allow them to switch on voice chat services, or to access a mature content section, you might require an ID-scan-based age check.
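In pseudocode-ish Python, that first example might be wired up like this. The band boundaries and feature names are invented for illustration:

```python
# Waterfall sketch: facial age estimation at sign-up assigns a broad
# band; higher-risk features additionally require a hard, ID-based check.

from enum import Enum

class AgeBand(Enum):
    KID = "kid"
    TEEN = "teen"
    ADULT = "adult"

def band_from_estimate(estimated_age: float) -> AgeBand:
    if estimated_age < 13:
        return AgeBand.KID
    return AgeBand.TEEN if estimated_age < 18 else AgeBand.ADULT

def can_use_feature(band: AgeBand, feature: str, id_verified_adult: bool) -> bool:
    if feature in {"voice_chat", "mature_content"}:
        return id_verified_adult  # hard check required, whatever the band says
    return True  # low-risk content is segmented by band elsewhere
```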
Another example would be to allow simple self-declaration of age, but then to monitor all accounts for content or behavioural clues suggesting that the user’s age does not match what they told you. In that case, you might challenge the user to provide proof of age before letting them continue. This is what social media platforms like TikTok have been doing for some time.
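And a sketch of that second pattern, with the signals and threshold entirely made up for illustration; real systems combine many content and behavioural models:

```python
# Signal-based re-verification: if enough observed signals contradict a
# user's self-declared age, challenge them for proof of age.

UNDERAGE_SIGNALS = {"school_chat_language", "kid_content_affinity", "user_report"}

def needs_age_challenge(declared_adult: bool, observed: set[str],
                        threshold: int = 2) -> bool:
    return declared_adult and len(observed & UNDERAGE_SIGNALS) >= threshold
```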
Note that if you do apply a waterfall approach, the ICO requires you to implement an appeals process so that users who fail a second test can prove their age and maintain access to the service. These are still mostly a heavy manual lift, but if you ask the team at Epic’s Kids Web Services nicely, they may be able to help. ;-)
What about the OSA?
Just when you thought you didn’t have to worry about age assurance because your service is not ‘likely to be accessed by children’ under the Children’s Code, along comes the Online Safety Act. The OSA requires operators of user-to-user services, search services and pornography sites to implement ‘appropriate’ age assurance. For porn sites this effectively means hard age verification. Everyone else in scope of the regulation (which includes the major social media platforms) will have to carry out a Children’s Access Assessment to determine whether they are “likely to be accessed by children” under Ofcom’s own criteria.4 More detail on how these assessments will work is expected in spring 2024, and duties under the OSA are expected to become enforceable in summer 2025.
So what next?
If you might be a service that is ‘likely to be accessed by children’ under the Children’s Code, you should already have implemented appropriate age assurance, but it’s worth testing your approach against the clarified guidance above. If you’re confident that you are not, but you are a user-to-user service or search service, then look out for the OSA’s guidance on children’s access assessments later this year, as you will have to take that test to confirm that you are not.5 To keep you busy in the interim (on the basis that you also operate in the EU), shift your attention to the Digital Services Act and the separate age assurance and content moderation obligations it contains.
1. On this definition, the ICO and Ofcom are synchronised so far: both agree the umbrella term is ‘age assurance’.
2. This distinction was articulated by the ICO and is specific to the UK. Note that if you want to use this type of age assurance in the US, the legal landscape is rather more complicated: your legal team will have to get comfortable with the distinction under Illinois’ Biometric Information Privacy Act (BIPA) between a “biometric identifier”, which includes a scan of face geometry, and “biometric information”, which is any information based on a biometric identifier used to identify an individual.
3. Well, “entirely” is clearly relative. One critical misconception is that hard ID verification is somehow foolproof compared to, say, facial age estimation. If you apply a Challenge 25 buffer to facial age estimation, you can get to fewer than 0.01% of under-18s slipping through. That is vastly more reliable than a human looking at a physical ID, and in most cases still more reliable than a digital verification scan of a physical ID. Sadly, the arms race between fake ID generators and the technology to identify them is in full swing.
4. Under the OSA, your service will be considered ‘likely to be accessed by children’ if: (a) children can access it; and (b) there is a significant* number of children using it, or it is likely to attract a significant* number of children.
*‘Significant’ means significant in proportion to your total UK users, based on actual numbers, not intended audience.
5. Note that if you are in scope and fail to take the assessment, your service will automatically be classed as ‘likely to be accessed by children’.