Lying about your age? This AI will see right through it
Lying about your age has been a part of growing up for a long time. Teenagers are famously skilled at slipping past bouncers or pasting together fake IDs to get into clubs or buy booze. The internet makes it even easier. Any 12-year-old with basic math skills can set up a social media account or watch some porn simply by selecting the right year in a drop-down menu. You don’t even need to lie to an actual person.
In recent years, governments and tech companies alike have begun looking to artificial intelligence for better solutions than those the average web platform now employs, which amount to little more than a mechanical honor system. Two leaders in the nascent age verification industry, FaceTec and Yoti, say their tools offer more assurance than filling in a form with an easily spoofable date of birth and are more secure than offline ID checks. Just look into your device’s camera and let the program “read” your face. Then, poof! The AI will decide how old you are.
“It’s even more privacy preserving than me, in a bar, asking to see your ID,” said Julia Dawson, the chief policy and regulatory officer at Yoti, which claims to have conducted more than 570 million age checks for its clients. “It’s more privacy preserving than me asking you to upload something because nothing is retained.”
Yoti’s biggest-name client is none other than Instagram, where it performs age verification checks to ensure that users are 13 or older. In a description of the “facial age estimation technology” it supplies for Instagram, the company explains that users’ facial images are “used for the purpose of estimating age. Once a result is given, the image is instantly deleted.”
Alberto Lima, the senior vice president of operations in Europe, the Middle East and Africa at FaceTec, another face-based age authentication tool that checks a billion faces every year, says that his company’s tool is rights-respecting — not just of underage users, but of adults, too. “We don’t receive data from our customers and users,” he said. “They license our software and install it inside their network. There is no subprocessing from FaceTec.”
But there are questions about whether the technology really works or whether these kinds of age verification requirements are simply destined to put people’s privacy rights at undue risk. When France’s data regulator analyzed a range of current age verification methods in 2022, it concluded that no single method could be sufficiently reliable, offer the same level of quality for the whole population and maintain privacy and security all at once.
Over the years, companies and governments alike have attempted to integrate actual ID checks at the virtual doors of big social media sites — South Korea famously introduced a “real-name registration” policy that required citizens to upload their state-issued IDs to major social media sites but later scrapped the rule when it led to a series of massive data breaches. The U.K. is the latest to dabble in requiring hard age checks from platforms promoting adult content through the country’s online safety bill, though whether these provisions will remain in the law’s final version is yet to be seen.
Although states and companies may still be tempted to go this route, the risks of collecting and storing people’s personal data at this scale are now well known. While many governments have gone all in on using people’s biometric data — images of people’s faces, fingerprints or irises are popular identifiers — as verification tools for state services, this kind of territory is increasingly fraught due to privacy concerns.
The Electronic Frontier Foundation says it “opposes mandated age verification laws, no matter how well intentioned they may be.” The reason? The fear that we’re starting down a slippery slope to an Orwellian world where citizens are made to hand over their biometric details every time they enter a store, park their cars or engage in a myriad of other mundane acts in their everyday lives.
That argument resonates with those who say introducing age checks can be problematic — but other researchers believe that simply saying no to biometrics and face-based checks by default is holding back safety at a time when it’s needed the most.
Children need to be treated “in a way that respects their age and maturity, their role and capacity in child rights, that is respectful of the privacy of everybody,” said Sonia Livingstone, a professor of social psychology at the London School of Economics who studies child privacy rights in the digital age. “And I don’t think we found that yet.” Livingstone participated in the EU’s consent initiative, which tried to find a way of developing a child rights-respecting way of introducing age assurance checks across Europe.
“We’re past the point of imagining that this is a future world,” she said. “We are already in the land of age assurances. The question is not, should it be happening, but how should we regulate it to be rights-respecting?”
Livingstone recognizes that “the privacy community is, of course, deeply skeptical, but on the other hand, the public is completely used to giving their biometrics.” She believes checks should be done by trusted third parties, not by platforms themselves or by governments. Leaving the checks to platforms or states, she said, “introduces a kind of structural weakness.”
Michael Veale, an associate professor in internet law at University College London, expressed skepticism about technology made by companies like Yoti. The rise of age assurance — shorthand for the product that both Yoti and FaceTec are selling — has led to what Veale calls “a lot of snake oil” being sold.
“The idea that from magic, limited telemetry signals, such as how people move their mouths or type, you can magically identify and spit out fairly well who is a child and who is not a child,” Veale said, “that technology doesn’t really exist, is what I would say. It doesn’t do what people think it does.”
Veale points to problems of inaccuracy and bias and to concerns about overreach in data collection. “Companies like Yoti have been collecting biometric data from kids to create this technology,” said Veale. That concerns him. “Kids have a right to privacy and that is not overridden by the parents.”
Yoti, for its part, says that it doesn’t process what’s called in the U.K. and Europe “special category” data. “We’re not uniquely identifying or authenticating any individual,” said Dawson, Yoti’s chief policy officer, pointing to guidance from the U.K.’s Information Commissioner’s Office. “They basically revised their whole guidance to say that with facial age estimation, there is no unique identification or recognition.” Dawson explained that Yoti instantly deletes users’ images and only analyzes pixels with an AI model trained on an earlier set of facial images and ages, so it can estimate how old people of different ages look.
“This is facial analysis,” Dawson said. “It’s detecting a face and analyzing it. It’s not recognizing anyone. I think a lot of people are concerned about live facial recognition. This is not that.”
FaceTec’s Lima also says that face analysis technology isn’t only about how accurately it identifies the age of someone who puts their face through the system. “This is always also a deterrent,” he said. “If a minor sees that they need to do a selfie, probably some of them will back up from doing that.”
But when people do use it, what happens when the software gets their age wrong? It’s hard to imagine how a tool like this could achieve perfect performance, especially when trying to determine the ages of adolescents, who often look much older or much younger than they actually are. Yoti’s Dawson disputes concerns about inaccuracies in age verification software, pointing to the company’s internal research suggesting that it’s relatively accurate regardless of the age, gender or race of the person whose face is being analyzed. “We keep looking at how can we be very honest about how good it is for each of the demographics, and keep improving on it,” she said.
FaceTec’s Lima says that an AI-driven approach is preferable to the human-based alternative of checking IDs. “The human is not as good as the AI at determining if that face is the face on the ID document,” he said.
“We’re operating across many, many countries,” Dawson said. And it’s not just the tech world where Yoti and other services are becoming an intrinsic part of our lives. They’re also in supermarkets in Estonia and made a deal recently with a retailer in the Nordic countries. “Pretty much anywhere in the world where people have a face,” Dawson said, “you can assess age.”
CORRECTION [03/29/2023 9:38 AM EDT]: The original version of this story incorrectly quoted FaceTec’s Alberto Lima. The story has been updated to reflect that FaceTec clients install its software on their own network, not on FaceTec’s.