Instagram is testing new ways to verify the age of people who use its service, including a face-scanning artificial intelligence tool, having mutual friends vouch for a user's age, or uploading an ID.
But the tools won’t be used, at least not yet, to block kids from the popular photo- and video-sharing app. The current test only involves verifying that a user is 18 years of age or older.
The use of face-scanning AI, especially on teens, raised some alarm bells Thursday, given the checkered track record of Instagram's parent company, Meta, when it comes to protecting user privacy. Meta emphasized that the technology used to verify ages cannot recognize a person's identity, only estimate their age. Once the age verification is complete, Meta said, both it and Yoti, the AI contractor it partnered with to perform the scans, will delete the video.
Meta, which owns Facebook and Instagram, said that starting Thursday, if someone tries to edit their date of birth on Instagram from under 18 to 18 or older, they’ll need to verify their age using one of these methods.
Meta continues to face questions about the negative effects of its products, especially Instagram, on some teens.
Kids technically have to be at least 13 years old to use Instagram, similar to other social media platforms. But some get around this by lying about their age or by having one of their parents do it. Meanwhile, teens ages 13 to 17 have additional restrictions on their accounts — for example, adults they aren’t connected with can’t message them — until they turn 18.
The use of uploaded IDs is not new, but the other two options are. “We’re giving people a variety of options to verify their age and see what works best,” said Erica Finkle, Meta’s director of data governance and public policy.
To use the face-scan option, the user must upload a selfie video. That video is then sent to Yoti, a London-based startup that uses people's facial features to estimate their age. Finkle said Meta is not yet trying to use the technology to identify users under 13 because it doesn't keep data on that age group, which would be necessary to properly train the AI system. But if Yoti predicts a user is too young for Instagram, they will be asked to prove their age or their account will be deleted, she said.
“It never uniquely recognizes anyone,” said Julie Dawson, director of policy and regulation at Yoti. “And the image is instantly deleted once we’ve done it.”
Yoti is one of several biometric companies capitalizing on a push in the UK and Europe for stronger age verification technology to prevent children from accessing pornography, dating apps and other adult-oriented internet content — not to mention age-restricted goods such as alcohol in physical stores.
Yoti has been working with several large UK supermarkets on face-scanning cameras at self-checkout counters. It has also started age-verifying users of the youth-oriented French video chat app Yubo.
While Instagram is likely to make good on its promise to delete an applicant's facial images and not attempt to use them to recognize individual faces, normalizing face scanning raises other societal concerns, said Daragh Murray, a senior lecturer at the University of Essex School of Law.
“It’s problematic because there are a lot of known biases in trying to identify things like age or gender,” Murray said. “Basically, you’re looking at a stereotype and people differ a lot.”
A 2019 study by a US agency found that facial recognition technology often performs unevenly based on a person's race, gender, or age. The National Institute of Standards and Technology found higher error rates for the youngest and oldest people. There is no such benchmark yet for age-estimating facial analysis, but Yoti's own published analysis of its results reveals a similar trend, with slightly higher error rates for women and for people with darker skin tones.
Meta’s face-scanning move is a departure from what some of its tech competitors are doing. Microsoft said on Tuesday it would stop providing customers with facial analysis tools that “claim to infer” emotional states and identity attributes such as age or gender, citing concerns about “stereotyping, discrimination or unfair denial of services.”
Meta itself announced last year that it would shut down Facebook's facial recognition system and delete the faceprints of more than a billion people after years of scrutiny from courts and regulators. But at the time, it indicated it would not give up face analysis entirely, moving away from the broad-based tagging of social media photos that helped popularize commercial use of facial recognition toward "narrower forms of personal authentication."