Could the latest viral app have taught us something valuable about how we think about privacy?
Unless you’ve been living under a rock, you’ll have heard about FaceApp, a smartphone application that went viral for the second time in two years this summer. The clever photo manipulator developed by a Russian startup promises to take users on a journey into the future by showing them what they might look like when they grow old.
It’s all fun and games – until you start thinking about the implications of the facial recognition technology the app uses to scan people’s faces. Such data could be exploited to identify people and used for future surveillance. Where is it stored and what happens to it? Who is it shared with and for what purpose?
The app was met with an unusual amount of skepticism and elicited fierce reactions from privacy-conscious users – which also went viral. The drama that ensued serves as a great lesson on how we view technologies like facial recognition in relation to privacy.
So what has the FaceApp incident taught us about valuing our privacy, and how should we think about facial recognition technology in general? Let’s dive into it.
Is FaceApp a real threat to our privacy?
The short answer is: no, the AI-powered photo manipulator app is not a major privacy concern. As far as researchers can tell, FaceApp is not guilty of the serious privacy breaches it has been accused of – no evidence has been found of the app uploading a user’s entire camera roll to the cloud, nor of any user data being transferred to Russia (a major point of concern for American users). The startup also claims that it does not share any data with third parties.
However, FaceApp does process and temporarily store photos in the cloud, promising to delete “most images” within 48 hours.
The company’s terms of service have also raised some eyebrows: they include an irrevocable license to use your user content and any name, username or likeness provided in connection with it. It’s broad language that serves to protect the company from lawsuits, but it’s definitely something to think about before starting to use the app.
We could just say: alright, we overreacted, FaceApp seems fairly safe, let’s just move on. But we’d be drawing the wrong conclusion from an episode in the history of tech and privacy that could serve as a very important lesson to us all. Let’s take a slight detour and look at the wider context of facial recognition and privacy to better understand what’s at stake.
Facial recognition is our reality
FaceApp is not the first and certainly not the last facial recognition software that has raised privacy concerns.
Today, facial recognition is everywhere. It’s become so commonplace that we’ve accepted it as a part of our daily lives to a point where we hardly ever question it.
Private companies and governmental organizations use surveillance systems that are backed by sophisticated facial recognition software. “Walking around anywhere can get your face included in facial-recognition databases. How that information can be mined, manipulated, bought, or sold is minimally regulated—in the United States and elsewhere,” writes author Tiffany C. Li in an article in The Atlantic.
Law enforcement has been using facial recognition software to identify and track down suspected criminals ever since the technology first emerged. The trouble is, studies suggest that only a fifth of matches are correct, raising serious questions of privacy and human rights.
Always ahead of the game when it comes to privacy breaches, Facebook has been applying facial recognition to images for years. They’ve built up a large database of facial data from photos uploaded to the platform in which users have tagged others – identifying them and giving Facebook the data it needs to create facial profiles and start tagging people automatically. What the company is using all of that amassed data for, we can only guess.
It was Apple’s Face ID, released in 2017, that took the presence of facial recognition technology in our lives to a whole new level. Many of us rely on it to unlock our smartphones more than a hundred times a day, make payments and create animated emojis using our facial data. There’s been a report of the FBI forcing a suspect to unlock their phone using facial recognition to gain access to their private information – a tactic that could be a cause for concern.
There are countless recent examples of facial recognition technology being installed and utilized in a variety of contexts.
Several US airports have rolled out facial recognition solutions to simplify the check-in process: airlines like JetBlue and Delta scan faces and use the data to verify boarding passes. The data is then passed on to Customs and Border Protection – yet another largely unregulated way for facial data to end up in the hands of the government.
Retailers and other businesses are also starting to use facial data as a marketing tool. The leading American pharmacy chain, Walgreens, is piloting a line of smart coolers that scan shoppers’ faces and attempt to guess their age and gender so they can provide tailored product recommendations. What happens to the data being collected, nobody knows. Whether such solutions, developed under the aegis of personalization, are truly necessary – and whether they’re worth potentially jeopardizing our privacy (or that of our customers) – is up for debate.
What can we learn from the FaceApp incident?
The moral of the FaceApp fiasco is this: you should always be skeptical. This time, no one’s information was stolen and no one got hurt. However, that doesn’t mean you can blindly trust obscure apps developed in all corners of the world from now on.
Think before you download: do your research, gather as much information as you can, and, if an app does raise privacy concerns, make an informed decision on whether it’s worth potentially sacrificing your personal data for a few hours of fun.
The backlash generated by this particular app can be seen as a huge step forward in privacy awareness. It shows that more and more people are thinking critically and questioning technologies that Big Tech (and Smaller Tech) would have us accept without further ado.
As for what we can do to solve the matter of facial recognition and privacy… We can wait for the law and our regulatory systems to catch up to the rapid development of technology, or we can cast our vote by starting to use services that make a point of respecting our privacy.
Want to enjoy an online presence without sacrificing your personal data? Create a free account on Idka today! Be social, stay private®