We are seeing a record number of events on the topic of age assurance. First, we had a 2-day multi-stakeholder dialogue organised by the Centre for Information Policy Leadership (CIPL) and WeProtect Global Alliance, hosted by TikTok in London. Next week is the 5-day (!) Global Age Assurance Standards Summit organised by the Age Check Certification Scheme (ACCS) in Manchester. The week after, Europe’s telecom ministers will meet at the Council of the European Union, where they are expected to issue a declaration encouraging the use of digital ID schemes to age-verify minors.
The new urgency has been triggered by new regulations coming into force (like the DSA and the OSA), growing regulator assertiveness (ICO/TikTok, DPC/TikTok, France’s ARCOM pushing to enforce age verification, the upcoming EU Code of Conduct on age-appropriate design), and ham-fisted efforts by US state legislators to age-verify teens out of social media.
At these events, most stakeholder groups are well represented: digital platforms, social media, telcos, think tanks, civil society, academics, technology providers, regulators and trade associations. Notably absent tend to be video game publishers and consoles, one large digital platform named after a fruit, and privacy regulators from the post-Brexit EU, with the exception of Ireland’s hardworking DPC; the absentees sadly include some that are very active on kids’ privacy, like France’s CNIL.
The discussion tends to feature a familiar back-and-forth of arguments:
Keep kids out of porn, out of danger; we need better age assurance methods.
Don’t forget inclusion and accessibility; most methods don’t work in [pick a country in the Global South].
Approach the problem from the perspective of children’s rights, i.e. make sure they have access to services, and build on their right to wellbeing and development, rather than just thinking about how to block them.
Parents are overwhelmed, anxious and not digitally savvy enough — we can’t expect them to do more.
Parents know their kids best, and should have an ultimate right to override operator restrictions if they so choose; they need granular controls.
Self-declaration is ineffective, but age verification is by definition privacy-invasive, so we can’t have both privacy and reliable age information.
This time around, though, it feels like there is momentum behind more industry collaboration and some kind of interoperability, in particular the idea of solving for age assurance at the platform- or app store-level. Some indicia from the past year:
The FTC’s 2023 consent decree with Microsoft requires Xbox to inform video game publishers through an API that a user is a child.
In his Senate testimony, Meta’s Mark Zuckerberg explicitly called on app stores to become gatekeepers for parental consent.
The FTC’s recent broad consultation on COPPA asks about the benefits of potential platform-based consent mechanisms, and how to encourage their development.
Spanish data protection authority AEPD has published an ambitious proposal for a new age verification system that relies on dedicated apps on smartphones interfacing with digital wallets or national digital ID schemes.1
The problem with the debate on age assurance is that it tends to look for a one-size-fits-all solution, which is why these discussions always end up going in circles. They unhelpfully conflate:
Use case 1: verifying that someone is an adult so that they can access adult services.
This is mostly solved. Adults have IDs, credit cards, social security numbers, etc. There are lots of vendors that can age-verify adults for you. The remaining snag here is anonymity. Facial age estimation works very well for adults, and is privacy-preserving. Expect more innovation around how to share your age without sharing your identity, as age verification laws for porn kick in.
Use case 2: determining the age of a child or teen in order to deliver an age-appropriate experience (and work out if parental consent is required under privacy laws).
This is universally done today with simple age gates (self-declaration), which until now have satisfied most laws but have the enormous flaw that kids lie about their age, sometimes with their parents’ complicity. Raising the bar on verifying kids’ ages necessarily pushes against reasonable bounds of privacy (requiring kids’ IDs such as passports or birth certificates), access (many kids don’t have documentation), and user friction (driving users to other platforms, which may be less safe). This is the one worth thinking creatively about — see below.
Use case 3: verifying the relationship between a child and a parent or guardian in order to enable the parent to provide consent.
This remains unsolved, and both COPPA and the GDPR today only require verifying that the parent is an adult. Some day, digital ID schemes (if accepted by the population) could address this by mapping verified relationships.
Use case 4: making it easier for parents to manage the risks their kids are exposed to by having consolidated, interoperable parental controls.
An understandable response to parents’ frustration and consent fatigue, but truly difficult to implement in practice. It’s clearly desirable to improve the way parents help kids and teens manage their online risks, and we should continue to invest in this, but ultimately it has little to do with age assurance (though having correct ages is a precondition for any parental controls solution to work).
Only the first two actually relate to age assurance, and only the second is actually in need of a new solution. Looking for the perfect method of age assurance that preserves privacy, works everywhere and for everyone, and is reliable is IMHO a waste of time.2 Fact is, there will be (and should be!) many methods of verifying age for different contexts. Kids, teens and adults are age gated or age verified many times per year, and that information is sitting in operators’ systems.3
The most valuable first step we can take to improve age assurance is to make existing age data reusable and interoperable. A ‘Universal Age API’ could allow platforms and operators to exchange age information, along with the method used to obtain it, and the recipient would be able to rely on that age, or combine it with their own age information to increase its confidence score.4
A Universal Age API could be immensely powerful if key industry players got on board. Imagine if Apple, Google, Microsoft, video game consoles, mobile operators, and the largest game platforms all agreed to contribute the ages they have to such a collaboration. Every matching age would increase the reliability of an individual’s age information; every hard-verified age would override self-declared age if it conflicts. Very very quickly, operators would have much more reliable age information for nearly every connected child or teen on the planet.5
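To make the idea concrete, here is a minimal sketch of what a single age signal exchanged over such an API might look like. The structure, field names and method taxonomy below are my own illustrative assumptions, not an existing standard.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class Method(Enum):
    """Illustrative taxonomy of age-assurance methods (not a published standard)."""
    SELF_DECLARED = "self_declared"
    FACIAL_ESTIMATION = "facial_estimation"
    PARENT_PROVIDED = "parent_provided"
    ID_DOCUMENT = "id_document"

@dataclass
class AgeSignal:
    """One age assertion, as it might travel over a Universal Age API."""
    subject: str       # pseudonymous identifier, e.g. a salted hash of an email or phone number
    birth_year: int    # or an age band, if that is all the supplier holds
    method: Method     # how the supplier obtained the age
    obtained_on: date  # when the age was obtained or last reconfirmed
    supplier: str      # the operator vouching for this signal
```

The recipient would treat this purely as a signal to be weighed against whatever age information it already holds, which is where the standards question below comes in.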
To enable a Universal Age API, we need to solve for:
Standards (not many)
Legal / regulatory clarity for both the supplier and the recipient of age information.
Privacy-enhancing technical methods for sharing age information
We need some standards because whenever age information is shared via the API, it would come with a critical bit of metadata: the method(s) of verification. This label would be based on an agreed taxonomy of methods, or perhaps translated into a reliability score (if we can get those standardised); for example, self-declaration would score low, while a parent-provided age or an ID-document check would score high. This lets the recipient determine whether the reliability is sufficient for its purpose, and whether the provided age should override a less reliable, conflicting age in its own systems. Each time an age is confirmed via another method (or reconfirmed via the same method), its reliability increases.
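As a thought experiment, here is how a recipient might combine several such signals into a single age and confidence score. The reliability weights and the combination rule are placeholders I have made up; pinning them down is precisely what a standards process would need to do.

```python
# Placeholder reliability weights per method; a real taxonomy and scoring
# scheme would have to be agreed by a standards body, not invented here.
BASE_RELIABILITY = {
    "self_declared": 0.2,
    "facial_estimation": 0.6,
    "parent_provided": 0.8,
    "id_document": 0.9,
}

def resolve_age(signals: list[tuple[int, str]]) -> tuple[int | None, float]:
    """Pick the birth year backed by the most reliable evidence.

    `signals` is a list of (birth_year, method) pairs gathered from suppliers.
    A hard-verified age beats a conflicting self-declared one, and every
    additional agreeing signal nudges the confidence up.
    """
    best_year, best_score = None, 0.0
    for i, (year, method) in enumerate(signals):
        score = BASE_RELIABILITY.get(method, 0.1)
        # Agreement from other, independent signals increases confidence.
        for j, (other_year, other_method) in enumerate(signals):
            if i != j and other_year == year:
                score = min(1.0, score + 0.5 * BASE_RELIABILITY.get(other_method, 0.1))
        if score > best_score:
            best_year, best_score = year, score
    return best_year, best_score

# A hard-verified 2008 birth year outweighs a self-declared 2004 one.
print(resolve_age([(2004, "self_declared"), (2008, "id_document"), (2008, "parent_provided")]))
```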
We need regulators to participate because in the US, Europe and the UK it’s not sufficiently clear how to exchange age information compliantly. In the US, operators (rightly or wrongly) believe they have to age-screen users directly, as they won’t otherwise be able to prove to the FTC that they have done it to a reasonable standard. In Europe, there are even more questions. Does a supplier need a legal basis to share the age of a user, if properly pseudonymised? Can this be done under the safeguarding exemption? Who is the controller of the data? How do we give the supplier comfort that it won’t be liable if the age it shared is wrong? Regulators need to confirm that the recipient can rely on age information provided by another operator, subject to making its own assessment of its reliability in relation to the risk of harms.6
The technologies for holding and sharing age information in a privacy-preserving way exist. Kids Web Services’ ParentGraph — which aggregates and shares adult verification status for parents providing consent under privacy laws — does this in a very simple way, by storing only a hashed email address with the verification status and method. The status is returned whenever a query arrives for the same email address, hashed with the same one-way cryptographic hash function. A similar approach (based on a hashed email or mobile number) could work with kids’ age signals.
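A minimal sketch of that hashed-lookup pattern (the hashing scheme here is my assumption for illustration; KWS hasn’t published its exact implementation, and in production you would want a salted or keyed hash rather than a bare SHA-256 of the email):

```python
import hashlib

# Illustrative in-memory store: hashed identifier -> (birth_year, method).
# In practice this would sit with the supplier or a trusted intermediary.
AGE_SIGNALS: dict[str, tuple[int, str]] = {}

def _key(email: str) -> str:
    """One-way hash of a normalised email; only the digest is ever stored."""
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

def record_age(email: str, birth_year: int, method: str) -> None:
    """The supplier stores an age signal keyed by the hashed email."""
    AGE_SIGNALS[_key(email)] = (birth_year, method)

def lookup_age(email: str) -> tuple[int, str] | None:
    """A recipient hashes the same email the same way and checks for a match."""
    return AGE_SIGNALS.get(_key(email))

record_age("kid@example.com", 2011, "parent_provided")
print(lookup_age(" Kid@Example.com "))  # -> (2011, 'parent_provided')
```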
The euCONSENT project has also worked on privacy-preserving ways to share age information between websites, using a token placed on the user’s device, which has the added advantage of working for services where users are not logged in.
Here too, it would be game-changing if regulators were to outline minimum privacy and security standards for any party involved in such an exchange, whether it happens directly between supplier and recipient or via a trusted third party.
Everything else can wait, frankly. If we enable the private, secure reuse of existing age information, we can (a) stop the endless age gating and age verifying that create trust issues, privacy risks and security risks (and are ineffective); (b) start building up a more reliable marker of age for every internet user, helping undo a decade of lying in age gates; (c) focus on building new safeguarding tools, adaptive age-appropriate services (like K-iD enables) and interoperability for parental controls, on top of a new base of more reliable age information.
1. While I applaud the AEPD’s initiative, their approach won’t get far. It has fallen into the trap of trying to solve two very different problems with one solution: allowing adults to access pornography without compromising their privacy, and adapting content and service access for children based on age. The result is unwieldy and impractical: it’s not easily accessible unless you have a smartphone; it requires the user to have either a digital wallet or a national digital ID; it focuses on content blocking rather than digital literacy and empowerment for kids (which means kids won’t adopt it); and it enforces 2-way anonymity (to protect naughty adults), which won’t work for account-based services that require a real name, like Meta.
2. Though almost-perfect solutions are totally technically feasible, at least on paper: you just need a universal digital ID that uses zero-knowledge proofs to let its holder securely reveal only the relevant attribute (e.g. age), is built on a decentralised architecture, is overseen by trusted parties, is recognised by all governments and platforms, and ideally works on feature phones as well as smartphones. Let’s go!
3. In fact, age data exchanges already happen, but they are broken. Video game consoles, for example, regularly provide age or age-band information to video game publishers. But this information is mostly ignored, because the publishers don’t know whether they can fulfil their legal obligations by relying on it, and they’re not sure how to resolve conflicts if the age they receive contradicts the self-declared age of the user.
4. The age data would not need to be held by a single trusted third-party operator — it could be federated across multiple platforms. To let each operator check all previous ages obtained for a particular user, each record could be stored on a blockchain, ensuring immutability and auditability. To prevent bad actors from using the database to identify kids, access would be granted at first only to named platforms, and later to trade association members vetted according to a standard process. Probably 90% of the digital places where kids go can be covered this way, and we can then work on how to enable the remaining 10%.
5. This is urgent not least because of the massively underreported problem of sequential harms from incorrect age data. A whole generation of teens who lied about their age when they were 10 or 11 to access social media are now 16 or 17, but the platforms think they are 19 or 20, i.e. full-blown adults. It is estimated that between one-third and half of teens are incorrectly aged on social media. Given that the platforms have implemented positive safeguards that trigger below certain age thresholds (like 16 or 18), millions of kids are bypassing them and being exposed to harms. Ofcom has done excellent research on this problem.
6. Regulators might argue that receiving age information from a partner in this context is no different from receiving it from a contracted processor, such as an age verification vendor. But that obscures the fact that in a contracted processor relationship, the operator has (presumably) done due diligence on the provider’s processes and has contractual guarantees in place. We need regulatory guidance that outlines how an operator can rely on age information provided via a standards-based Universal Age API (in the absence of a contract or due diligence).