Facial recognition technology was once no more than a futuristic concept, unfathomable technology wielded for invasive surveillance in dystopian fiction such as Orwell’s 1984. Now, this form of biometric security is a well-established reality employed by entities ranging from governing bodies to educational institutions.
Although techniques and systems differ depending on the developer, the basic functionality remains more or less the same: An algorithm translates the key features of an individual’s face, captured via a photo or video, into a mathematical representation that can then be compared against the faces already held in a database. Unlike a regular photograph, this facial data is unique to its owner: it consists of measurable details that distinguish one specific face from all others.
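As a rough illustration of how this matching works, the sketch below compares tiny made-up face “embeddings” by distance. Everything here is invented for the example (the names, the four-number vectors, the threshold); real systems use vectors with hundreds of dimensions produced by a neural network, but the compare-against-a-database logic is the same in spirit:

```python
import math

# A face template is a fixed-length vector of numbers ("embedding")
# produced by the recognition model. These vectors are invented for
# illustration; real systems use 128-512 dimensions.
enrolled = {
    "alice": [0.11, 0.62, 0.40, 0.87],
    "bob":   [0.95, 0.13, 0.30, 0.22],
}

def euclidean(a, b):
    """Distance between two embeddings; smaller means more alike."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(probe, database, threshold=0.35):
    """Return the closest enrolled identity, or None if nobody in the
    database is within the match threshold."""
    best_name, best_dist = None, float("inf")
    for name, template in database.items():
        dist = euclidean(probe, template)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= threshold else None

# A new photo of the same person yields a slightly different vector,
# but it still lands closest to their enrolled template.
probe = [0.13, 0.60, 0.42, 0.85]
print(identify(probe, enrolled))  # alice
```

A face that resembles nobody in the database falls outside the threshold and returns no match, which is also where the system’s failure modes live: the threshold decides how similar is “similar enough.”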
Privacy intrusion and usage
The latest smartphones include facial recognition technology as standard. People have a choice when it comes to their mobile purchase, though: if they aren’t comfortable with the idea of a sophisticated algorithm “saving” their facial features, they can look elsewhere. But when various organizations use the technology in public spaces, we are not given the option to decline observation.
In countries where facial recognition software is widespread, such as the United Kingdom, China, and Singapore, Orwell’s dystopian vision of a “Big Brother” state where privacy is a privilege and not a right is coming to fruition.
It’s no secret that government agencies possess centralized databases containing anywhere from several thousand to millions of identities, including data from facial recognition software. The stated purpose of these vast archives is better security: monitoring and identifying terrorists and criminals.
However, not every individual on these databases falls into either of those categories, and there is always the possibility of exploitation. Powerful equipment and technologies can be misused without repercussion, and both government and private entities have a long tradition of using the vast resources at their disposal to cross privacy lines.
One blatant example of facial recognition technology being manipulated for illicit purposes is in China, where it has been employed for racial profiling against the Uighurs, an ethnic minority group that has been subjected to harassment and forcible internment by the Chinese government.
A newer phenomenon is the spread of facial recognition technology to other industries and sectors. In airports across the United States, facial recognition is used to identify passengers in place of boarding passes. President Trump issued an executive order requiring all international travellers arriving in the US to pass through a facial recognition check, regardless of nationality.
The capabilities of facial recognition technology can vary dramatically, as with any artificial intelligence (AI) system. For example, some are incapable of identifying a match under poor visual conditions, such as low lighting or awkward angles. These systems are not infallible, and so-called “false positives” in which a face is incorrectly matched occur.
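The trade-off behind those false positives can be sketched numerically. In the toy example below (all numbers invented for illustration), a bystander’s face data sits close enough to a watchlisted face that a loosely tuned threshold flags a match, while a stricter threshold avoids the false positive at the cost of potentially missing genuine matches:

```python
import math

def euclidean(a, b):
    """Distance between two face embeddings; smaller means more alike."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Hypothetical embeddings: "suspect" is on a watchlist; "bystander" is an
# unrelated person who merely looks somewhat similar, e.g. under poor
# lighting or an awkward camera angle.
watchlist = {"suspect": [0.20, 0.80, 0.10]}
bystander = [0.35, 0.70, 0.25]

dist = euclidean(bystander, watchlist["suspect"])  # roughly 0.23

print(dist <= 0.40)  # loose threshold: flags the bystander (false positive)
print(dist <= 0.15)  # strict threshold: no match, but real matches may be missed
```

Every deployed system sits somewhere on this curve: loosen the threshold and innocent people get flagged; tighten it and the system quietly fails at its stated purpose. Neither setting makes the technology infallible.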
Incidents of misidentification are not uncommon, and when facial recognition is being implemented as a matter of course throughout various government agencies, particularly law enforcement, these mistakes can have devastating consequences. A false positive can lead to an innocent individual being arrested, and there are proven biases against racial minorities, especially when the technology is outdated.
Academic institutions have toyed with the idea of using this form of surveillance to increase security, despite the cost to student privacy. One school in New York attempted to implement a facial recognition system for the entire student body. The program was halted due to warranted concerns over the privacy of students.
These unfortunate trends warn of a future in which ceaseless scrutiny and privacy violations, with data collected without permission or knowledge, are all but impossible to avoid. When facial recognition technology seeps into all aspects of life, from travelling to working to attending school, ordinary individuals have no control over the biometric data being gathered and shared with unknown entities.
Beyond the unsettling reality of all-encompassing surveillance in regions where this technology is commonplace lies the sheer lack of transparency about what is done with biometric data and how it is stored. Information can be distributed among various institutions and governments without the owner’s knowledge or permission.
Regulations concerning who may access this data, and how much personal detail is revealed during a search, remain an enigma for many of these centralized databases. The last line of defence for privacy crumbles entirely if a person’s biometrics are stored alongside other identifying information such as birth dates or names, and the risk of misuse increases accordingly.
The laws that regulate how law enforcement and businesses may apply facial recognition technology vary dramatically. All countries within the European Union fall under the General Data Protection Regulation (GDPR), which prohibits disclosure of individuals’ personal data to third parties without a lawful basis such as consent.
However, in the United States, the law changes from state to state: Texas, Illinois, and Washington are among the few states with biometric-specific statutes. In some states, corporations are legally obliged to provide, on demand, any and all personal information they have collected about a person (including facial recognition data). In others, there are no statutes specific to biometric data collection at all.
If highly-regulated procedures such as search warrants and arrests are frequently and aggressively abused, what expectation can the average citizen have for one of the most powerful types of surveillance that exist today?
A convenient workaround for entities that seek to implement facial recognition software without respecting privacy boundaries is hiring a company to provide and manage the technology.
Clearview AI is one such firm, and it has grown immensely popular among its clients thanks to its simplicity of use and a gargantuan database of more than three billion images. Users can snap a photo of any person with their phone and search for a match in that database.
If a match is found, the app discloses a compilation of publicly available information about the person online, which can include sensitive details such as home addresses and phone numbers. Even though that information already exists online, the fact that someone (a police officer, for instance) can snap a photo of you in a public space and instantly view it all is disconcerting, to say the least. It also raises obvious ethical dilemmas: matches can be searched for at any provocation, including peaceful activism, or used for racial profiling.
Since Clearview opened its doors in 2017, several incidents have brought to light how dangerous it is for such companies to operate under the radar with regard to their clientele and data collection techniques. An investigation by the New York Times revealed that the organization had built its enormous database by scraping, mining photos from public platforms such as social media accounts.
The images gathered for each individual may include those taken without consent or images that were taken down by the subject. It remains unclear if data on minors can be collected, which is sufficient reason to shut down Clearview and other entities that use scraping tactics to fatten their databases.
A second investigation revealed that Clearview’s claim that its facial recognition technology was used primarily by vetted law enforcement agencies within North America was false. Documents revealed a client list that included the United Arab Emirates, Saudi Arabia, and American retailers such as Macy’s and Best Buy.
Although Hoan Ton-That, the CEO of Clearview, stated that he was unsure about selling the software to nations that are “governed terribly,” this assurance rings hollow, as neither the UAE nor Saudi Arabia is known for its attention to human rights.
Internal exploitation of the technology can occur as well: a New York Times report detailed how Clearview’s leaders allowed associates to use the facial recognition software, and one billionaire used it to investigate a man dating his daughter.
If the application eventually becomes available to the general public, there is no limit to who can use and abuse facial recognition software to uncover sensitive data about anyone simply by taking a photo of them without their permission.
Furthermore, biometric data can be a veritable goldmine for cybercriminals. In the event that a breach occurred and private information was compromised, the damage could be irreparable.
It’s a misconception that advanced software such as facial recognition tech cannot be compromised.
Biometrics are not immune to “spoofing” by hackers: fingerprints can be transferred to silicone fingers, and facial recognition software can be tricked with a photo of the owner. That raises further questions: Will victims be informed if their data is stolen, and how will the damages be addressed?
Clearview AI serves yet again as a prime example of a facial recognition organization caught handling personal data carelessly. A misconfigured server, caught by Dubai-based cybersecurity firm SpiderSilk, could have allowed anyone to access the company’s internal files, including source code and cloud storage. Among the files exposed were videos from a camera in a New York residential building that clearly captured residents entering and leaving.
Fighting facial recognition
Almost predictably, declining aggressive forms of surveillance can be penalized. One story gained notoriety in the United Kingdom after London police questioned and eventually fined a man who had deliberately covered his face to shield himself from public street surveillance. Contrary to the story circulated by mainstream media, the man was fined for swearing at the officers rather than for obscuring his face.
Nonetheless, it’s not uncommon for UK police to target individuals attempting to retain their privacy with “random” stop-and-searches. The message is clear: Challenging statewide surveillance draws unwanted attention, even if the protest is subtle and well within the individual’s legal rights.
The potential for gross privacy intrusion posed by widespread use of facial recognition technology has prompted action from activists worldwide. One such campaign, known as “Ban Facial Recognition,” is a protest against facial recognition technology in the US.
The movement is led by Fight for the Future, a technology advocacy group that supports technology as a means of empowerment rather than a tool for structural inequality and tyranny. This small body of activists and experts works tirelessly to spur people into action to defend and achieve basic rights, aiming for policy victories that make a genuine impact, level the playing field, and halt corruption and abuse.
Ban Facial Recognition began in 2019 by raising awareness through an interactive map showing which states and areas in the US were equipped with facial recognition cameras. Earlier this year, a decision by the University of California, Los Angeles (UCLA) to trial facial recognition technology as a campus security measure inadvertently drew attention to the campaign, mobilizing students from approximately 12 universities into action; students at more than 30 other institutions joined in with online activism.
After learning of UCLA’s initiative, Fight for the Future swiftly acted to illustrate the likelihood of false positives by comparing the data gathered by the software against a database of mugshots. Out of over 400 UCLA students and staff members, 58 people were wrongly matched to a mugshot, and the bulk of the misidentifications were of minorities.
The demonstration was successful, kicking off protests by college students throughout the US, demanding that academic administrations reconsider using facial recognition technology on campus. As a result, five colleges, including UCLA, have abandoned plans to move forward with facial recognition technology on campus. Ideally, more universities will follow suit.
Fight for the Future inspires more and more people to join the quest to ban facial recognition. And if we can’t avoid it altogether, we can at least work towards reducing and limiting its use to protect the best interests of those under surveillance.
Brad Smith is a technology expert at TurnOnVPN, a non-profit promoting a safe and free internet for all. He writes about his dream of a free internet and unravels the horror behind big tech.