Facial recognition has evolved considerably in the past several years. However, many have expressed concerns about the technology’s potential for misuse, its invasion of privacy, and its racial and gender biases.
Believing that facial recognition is too invasive, the company’s founders created a technology that deliberately blurs people’s faces while tracking them. And because facial recognition is prone to false positives when observing people of colour, the Traces AI platform aims to reduce bias without sacrificing the effectiveness of its tracking technology.
“The way our technology works, we actually blur faces of the people before sending it to the cloud,” co-founder Kostya Shysh told TechCrunch. “We’re doing it intentionally as one of the safety mechanisms to protect from racial and gender biases as well.”
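In code, that on-device anonymisation step might look something like the minimal Python sketch below. Everything here is illustrative, not Traces AI’s implementation: the face bounding boxes are assumed to come from an on-device detector, and a simple box blur stands in for whatever obfuscation the platform actually applies before frames reach the cloud.

```python
# Illustrative sketch only (not Traces AI's actual code): blur face
# regions in a frame *before* it leaves the device, so that no usable
# facial detail is ever uploaded to the cloud.

def blur_region(frame, box, k=3):
    """Box-blur the pixels inside box=(x, y, w, h) in place.
    frame is a 2-D list of grayscale values; k is the blur radius."""
    x, y, w, h = box
    src = [row[:] for row in frame]  # copy so the blur reads original values
    for r in range(y, y + h):
        for c in range(x, x + w):
            neigh = [src[rr][cc]
                     for rr in range(max(0, r - k), min(len(src), r + k + 1))
                     for cc in range(max(0, c - k), min(len(src[0]), c + k + 1))]
            frame[r][c] = sum(neigh) // len(neigh)
    return frame

def anonymise(frame, face_boxes):
    """Blur every detected face region before the frame is uploaded."""
    for box in face_boxes:
        blur_region(frame, box)
    return frame
```

A real pipeline would operate on colour video frames and a learned face detector, but the ordering is the point: detection and blurring happen locally, and only the anonymised frame is transmitted.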
Instead, the California-based startup utilises a variety of other attributes to identify a person.
“It’s a combination of different parameters from the visuals,” co-founder Veronika Yurchuk explained. “We can use your hairstyle, whether you have a backpack, your type of shoes and the combination of your clothing.”
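The kind of attribute-combination matching Yurchuk describes could be sketched as follows. The attribute names, the equal weighting and the 0.75 threshold are all illustrative assumptions, not details of the Traces AI model, which presumably learns such features from the visuals rather than taking hand-labelled dictionaries.

```python
# Hypothetical sketch: identify a person by the combination of visual
# attributes (hairstyle, backpack, shoes, clothing) rather than by face.

def match_score(observed, target):
    """Fraction of the target's known attributes that the observation matches."""
    hits = sum(1 for key, value in target.items() if observed.get(key) == value)
    return hits / len(target)

def best_match(observations, target, threshold=0.75):
    """Return the observation that best fits the description, if any."""
    scored = [(match_score(obs, target), obs) for obs in observations]
    score, obs = max(scored, key=lambda pair: pair[0])
    return obs if score >= threshold else None
```

Because the score is built from transient traits like clothing rather than biometrics, the same person would need to be re-described after changing outfits; that trade-off is exactly what makes the approach less invasive.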
However, removing facial recognition from a security system could be a drawback in certain circumstances, such as a city-wide manhunt; the TechCrunch article asks why one should “forego a sophisticated technology just to stave off dystopia?”
While the AI startup’s founders understand the merits of facial recognition, they believe that an alternative is needed for the sake of privacy.
“The biggest concern [detractors] have is, ‘Okay, you want to ban the technology that is actually protecting people today, and will be protecting this country tomorrow?’” Kostya said. “And, that’s hard to argue with, but what we are actually trying to do is propose an alternative that will be very effective but less invasive of privacy.”
Kostya also noted that facial recognition has already drawn criticism from major government entities in the United States. San Francisco has banned the technology from being used in the city, and Detroit’s Project Green Light received backlash over racial biases and the potential dangers of feeding deeply personal information into government systems.
“Government seems to be trending more and more towards biometrics — which is using your body as your ID — so the more facial recognition technology that we get into and the more they’re storing our data, the more opportunity there is for our personal information to get hacked as we go further and further down this road,” David Dudenhoefer, chairman of the 13th Congressional Republican District Committee, told Metro Times about the Detroit facial recognition initiative.
“It’s not so much the concerns that I have about the current administration or the current body of police commissioners that are discussing this. It’s 20 years down the road when a future administration, a future body of police commissioners, or city council reaches the conclusion that what they’ve done with this technology doesn’t go far enough and they want to go a few steps further.”
In addition to protecting the public’s privacy, the founders stated that their technology may be a better fit for closed venues with limited information about their patrons.
“You can actually give them a verbal description, so if you say, ‘it’s a missing 10-year-old boy, and he had blue shorts and a white t-shirt,’ that will be enough information for us to start a search,” Kostya explained, using an example of a missing child in an amusement park.
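Before a system could search on a description like that, the free-form wording would have to become a structured query. The tiny vocabulary and parsing rule below are purely hypothetical, sketching one way such a conversion might work; Traces AI has not described its actual approach.

```python
# Hypothetical sketch: turn a verbal description such as "blue shorts
# and a white t-shirt" into a structured attribute query. The colour and
# garment vocabularies here are illustrative assumptions.

COLOURS = {"blue", "white", "red", "black", "green"}
GARMENTS = {"shorts", "t-shirt", "jacket", "dress", "jeans"}

def parse_description(text):
    """Map e.g. 'blue shorts and a white t-shirt' to {'shorts': 'blue', ...}."""
    words = text.lower().replace(",", " ").split()
    query, colour = {}, None
    for word in words:
        if word in COLOURS:
            colour = word                 # remember the colour just seen
        elif word in GARMENTS and colour:
            query[word] = colour          # attach it to the next garment
            colour = None
    return query
```

The resulting dictionary could then be scored against people observed on camera, attribute by attribute.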
According to TechCrunch, the company plans to pursue the US, the UK and several Asian countries. While the US and the UK are obvious choices given the large presence of security cameras in their cities, the founders also believe that countries such as Japan and Singapore, where face masks are often worn in public, could benefit from the Traces AI platform.