It's OK to ask 'How old are you?'
Why protecting minors online doesn't have to compromise privacy or access
Last week Florida and South Carolina began enforcing new age verification laws intended to prevent minors from accessing pornography sites, bringing to 19 the number of US state laws mandating age verification. (Tennessee’s equivalent law was blocked at the last minute.) Next week the Supreme Court will hear an appeal against a similar Texas statute.
Predictably, this triggered a reaction from techno-libertarians and free speech absolutists, who generate unnecessary polarisation around the debate by refusing to engage with the solutions. I was particularly dismayed to see 404 Media, one of my favourite investigative outfits — great at untangling complex topics at the confluence of technology and society — blurt out a post that parrots these points without journalistic skepticism or a modicum of research.
I’m not above slagging off bad laws or regressive policies when I see them. But one thing our team did really well at SuperAwesome was to find practicable solutions at the nexus of regulation (imperfect as it is!) and technology (that works in the wild!). Protecting children and teens from harmful content with age checks may be messy, but it’s not impossible.1
It’s a double shame that we’re arguing about age assurance for over/under-18s, which is effectively a solved problem, rather than putting effort into devising industry-wide approaches to making the internet safer across all ages under 18.
Age verification facts
I won’t restate the arguments, which are summarised here. But to find common ground on age assurance for those who care about privacy, access and protecting minors, I propose the following baseline:
Hopefully we all agree that children and teens are being harmed by unfettered access to pornography and other adult content. If you need more, Common Sense Media’s 2023 report is enlightening.
In the offline world, we control minors’ access to inappropriate products and services (alcohol, porn mags, gambling, guns, etc) all the time. We do this by asking them to produce ID to prove their age. That system is not perfect, but it does reduce harm.
Many of the new laws in US states and elsewhere2 to restrict access by minors are poorly written and difficult to implement. This is a sad fact of the modern legislative sausage factory.
If we agree that it makes sense to gate some content on the internet from access by minors, then we need a way to verify adulthood.
Contrary to what critics repeat ad nauseam, age verification that is effective, private and safe is not only possible, but already prevalent. This is especially true for the simple requirement to prove someone is over/under the age of 18 (or 21).
Let’s address the typical challenges to age verification:
It requires unsavoury companies to collect personal information from users.
Not really. Most of the laws require operators to obtain proof of age using a reasonably reliable method. In practice, this nearly always means engaging an established third party to determine age and provide the operator with nothing more than a confirmation that the threshold has been met. If the service provider is credible (eg, a member of the AVPA), it will operate with strict privacy standards and will not store or retain users’ personal information. The purveyors of porn don’t want your name and address; they just want your eyeballs.
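To make the flow concrete, here is a minimal sketch in Python of the separation of concerns described above. All names here (`VerificationProvider`, `check_age`, `AgeCheckResult`) are invented for illustration; this is not any real provider's API.

```python
# Sketch of the third-party age-check flow: the operator never sees the
# user's documents or exact age, only a pass/fail confirmation.
from dataclasses import dataclass


@dataclass(frozen=True)
class AgeCheckResult:
    """What the operator receives: a pass/fail flag and nothing else."""
    over_threshold: bool
    provider_signature: str  # lets the operator show a check took place


class VerificationProvider:
    """Stands in for an accredited third party (eg, an AVPA member)."""

    def check_age(self, id_document: bytes, threshold: int) -> AgeCheckResult:
        age = self._estimate_age(id_document)  # runs on the provider's side
        result = AgeCheckResult(
            over_threshold=(age >= threshold),
            provider_signature="sig:demo",
        )
        # Crucially: the document and the exact age are discarded here;
        # only the boolean result leaves the provider.
        del id_document, age
        return result

    def _estimate_age(self, id_document: bytes) -> int:
        # Placeholder for OCR or facial estimation on the provider's side.
        return int(id_document.decode())


provider = VerificationProvider()
result = provider.check_age(b"34", threshold=18)
print(result.over_threshold)  # the operator learns only True or False
```

The point of the sketch is the boundary: everything sensitive stays inside the provider, and the operator's integration surface is a single boolean plus a signature.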
It impinges on the freedom of adults to access certain content if they can’t prove their age.
It’s trivial to implement a waterfall approach to age verification, giving users a choice that is likely to cover 99%+ of cases: start with facial age estimation (which just about everyone can use), and fall back to ID scans in the small number of cases where estimation is wrong or unavailable. Access concerns are minimal and manageable. And frankly, if 1% of adults can’t access porn but we protect most kids, surely that is a reasonable trade-off?
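The waterfall itself is a simple pattern: try the least intrusive method first and only fall through when it can't give a confident answer. The sketch below uses invented stub methods and a fixed example year; real providers would plug in actual estimation and document checks.

```python
# Hedged sketch of a verification waterfall. Each method returns True/False
# when confident, or None to mean "couldn't tell, try the next method".
from typing import Callable, Optional


def verify_over_18(user: dict,
                   methods: list[Callable[[dict], Optional[bool]]]) -> bool:
    """Run each method in order; the first confident answer wins."""
    for method in methods:
        outcome = method(user)
        if outcome is not None:
            return outcome
    return False  # no method produced an answer: deny by default


# Stub methods, for illustration only.
def facial_age_estimate(user: dict) -> Optional[bool]:
    est = user.get("estimated_age")
    if est is None or abs(est - 18) < 3:  # too close to the threshold to call
        return None
    return est >= 18


def id_document_scan(user: dict) -> Optional[bool]:
    dob_year = user.get("id_birth_year")
    # 2025 is a fixed example year, not a real clock lookup.
    return None if dob_year is None else (2025 - dob_year >= 18)


methods = [facial_age_estimate, id_document_scan]
print(verify_over_18({"estimated_age": 34}, methods))  # True
print(verify_over_18({"estimated_age": 19, "id_birth_year": 2006},
                     methods))  # also True, via the ID-scan fallback
```

Note the design choice: a borderline facial estimate returns `None` rather than guessing, so the more intrusive ID scan is only ever reached by the small population the first method can't handle.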
It creates databases of personal information that could be hacked.
None of the reputable service providers are storing copies of IDs or of personal information. By design, they are deleted immediately after the age check. Storing them would be colossally stupid and risky. Obviously there might be dodgy or incompetent providers that get this wrong, but then let’s legislate to put those companies out of business, and mandate use of accredited service providers.
It doesn’t work for kids and teens.
Definitely a legitimate issue. We have not yet solved age assurance at every age level (though real innovation is coming). Facial age estimation gets less accurate the younger the subject, and the younger the user, the less likely they are to have ID. That is the next challenge to solve, but it is not a reason to avoid dealing with over/under-18 safety online.
All the bits exist to enable privacy-preserving access control to protect minors on the internet. But to make them a practical reality we need to de-escalate the rhetoric and focus, together, on improving the laws and improving the tech.
Improving the laws
Most laws requiring age verification spill too much ink on trying to pinpoint acceptable methods and not enough on providing a framework (of privacy, security, effectiveness) within which companies can innovate. The best overview of all the laws and related issues comes courtesy of the Centre for Information Policy Leadership (CIPL).
The laws can be improved by focusing on (a) outcomes; and (b) minimum standards, whilst leaving the development of solutions to the market. They should mandate use of third-party age verification services that have a minimum accreditation (like the UK’s Age Check Certification Scheme (ACCS)); stipulate a minimum standard of effectiveness (eg, based on NIST evaluations or similar); and require standard privacy and security practices (immediate deletion of personal data). More than a dozen providers already meet these criteria.
Improving the tech
Huge strides have been made in the last 10 years around age verification methods and technology. Facial age estimation now achieves accuracy in the high nineties (percent) for over/under-18 checks. Machine learning has dramatically increased the scope and accuracy of document ID checks, and enabled very effective protection against cheating. In fact, cheating age checks is now harder online than in the liquor store or at the nightclub velvet rope.
There are still gaps. It’s inconvenient for consumers to verify their age repeatedly across different services. It’s expensive for operators to pay third-party providers on a per-verification basis. As in the real world, teens will constantly look for ways around such systems and the cat-and-mouse game of fake IDs and detection tools will continue. None of these issues are reasons not to do it.
The technical solutions to most of these gaps exist. But to be effective, they require more dialogue and more collaboration among stakeholders than we’ve seen to date. We’ve talked here a lot about the idea of a Universal Age API that aggregates already-existing signals to establish an age + confidence score, which can then be attached to a device or a user. The largest platforms already have age signals for pretty much all internet users.
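The "age + confidence score" idea can be illustrated with a toy aggregation. The signal names, reliabilities and weighting scheme below are entirely invented for the example; a real Universal Age API would need agreed standards for both.

```python
# Illustrative sketch: combine several pre-existing age signals, each with
# its own reliability, into a single age estimate plus a confidence score.
signals = [
    {"source": "app_store_account", "age": 34, "reliability": 0.9},
    {"source": "facial_estimate",   "age": 31, "reliability": 0.7},
    {"source": "self_declared",     "age": 40, "reliability": 0.3},
]


def aggregate(signals: list[dict]) -> tuple[float, float]:
    total_weight = sum(s["reliability"] for s in signals)
    # Reliability-weighted average of the individual age estimates.
    age = sum(s["age"] * s["reliability"] for s in signals) / total_weight
    # Confidence grows with the combined weight of evidence, capped at 1.
    confidence = min(1.0, total_weight / 2.0)
    return round(age, 1), confidence


age, confidence = aggregate(signals)
print(age, confidence)  # 33.8 0.95
```

The score could then be attached to a device or account and checked against a threshold, so that individual publishers never need to see the underlying signals at all.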
We already know how to exchange such information privately. Platform provider Kids Web Services has been doing this for years to enable verified parents (ie, adults) to share their status across websites. Bandio leverages zero-knowledge proof technology from the Aleo Network Foundation to enable kids to share their age safely with web publishers. Child safety and privacy tools vendor k-ID combines parent-affirmed age confirmations with age gates in video games. Given the number of new digital ID initiatives — some commercial, some via governments — aggregators like Ver-iD are emerging to make it easier for publishers to support them all.
I have no doubt that we will have viable digital ID infrastructure in future, enabling granular age assurance, and there is no technical reason why it should not be compatible with privacy, security and access.
The real focus of debate ought to be around what we do with that capability. How do we adapt content and services to be appropriate for all ages? How do we improve digital services to make them less liable to cause harm? How do we balance access to information against protection from harmful content?
That is what we should be arguing about.
To be clear, I’m not taking a view here on what content should or shouldn’t be available to users of different ages, or whether 18 is the right cut-off for porn, or 16 for social media. I think that debate is premature, given that half or more of young internet users are surfing the web as adults most of the time...
I believe that if we can find a private, safe way to share age information on the internet, most platforms will adapt their content for the appropriate audience. In most cases (with some glaring exceptions to be addressed in legislation), this is just good business sense.
Europe risks fragmenting into different, impractical approaches as well, given that each country seems set on creating its own standards for age assurance. Australia's new social media law has generated plenty of controversy, but at least the government is officially running tests of age verification approaches (results due in 2025).