The use of facial recognition technology has been controversial, with critics warning that it is prone to misuse and reinforces existing biases. Cities across the United States have banned facial recognition software, and in the past year companies including IBM, Microsoft, and Amazon suspended sales of facial recognition software to police. Electrical engineering and computer science professor Vir Phoha agrees with taking a hard look at the technology and holding it back until proper safeguards against unintentional misuse are in place, but he still believes it can be beneficial.
On Amazon, IBM, and Microsoft suspending sales of facial recognition technology to police, he says, “My first reaction was that they did the right thing. At the same time, once I thought about it, it is a very good technology. It has a lot of potential, but it is a double-edged sword. You use it properly and it can do great things, and if you don’t use it properly, it can hurt you.”
Phoha has done extensive research on artificial intelligence, machine learning, and security. He says many questions about facial recognition should start with the humans who build the systems.
“There are many ways to do face recognition. One is geometric: you look at points, for example the distance between the eyes or the length of the nose,” said Phoha. “There are multiple other ways, such as making a base model, looking at variations, and storing the variations as a template for a user. There are methods that involve learning and associating specific face types with specific genders, histories, or behaviors. There is learning involved. If you use machine learning or artificial intelligence, any learning can be biased by the people who build those algorithms. Unconsciously, people who build those algorithms may be bringing in their own biases in regard to gender, race, and age.”
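To make the geometric approach Phoha describes concrete, here is a minimal sketch of how such a system might build and compare templates from landmark distances. The landmark names, coordinates, and matching tolerance are all hypothetical illustrations, not any real system's values; production systems use many more measurements or learned models.

```python
import math

def dist(a, b):
    """Euclidean distance between two (x, y) landmark points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def geometric_template(landmarks):
    """Build a simple geometric template from facial landmarks.

    `landmarks` maps names like 'left_eye' to (x, y) pixel coordinates.
    Ratios are stored instead of raw distances so the template does not
    change when the face appears larger or smaller in the image.
    """
    eye_dist = dist(landmarks["left_eye"], landmarks["right_eye"])
    nose_len = dist(landmarks["nose_bridge"], landmarks["nose_tip"])
    mouth_w = dist(landmarks["mouth_left"], landmarks["mouth_right"])
    return {
        "nose_to_eye_ratio": nose_len / eye_dist,
        "mouth_to_eye_ratio": mouth_w / eye_dist,
    }

def matches(template_a, template_b, tol=0.05):
    """Declare a match when every feature ratio agrees within `tol`."""
    return all(abs(template_a[k] - template_b[k]) <= tol for k in template_a)

# Hypothetical landmark coordinates for two images of the same face,
# taken at different scales.
enrolled = geometric_template({
    "left_eye": (120, 95), "right_eye": (180, 95),
    "nose_bridge": (150, 100), "nose_tip": (150, 140),
    "mouth_left": (130, 165), "mouth_right": (170, 165),
})
probe = geometric_template({
    "left_eye": (240, 190), "right_eye": (360, 190),
    "nose_bridge": (300, 200), "nose_tip": (300, 281),
    "mouth_left": (261, 330), "mouth_right": (340, 330),
})
print(matches(enrolled, probe))  # True: the ratios agree within tolerance
```

The template-and-variations and learning-based methods Phoha mentions replace these hand-picked ratios with statistical models trained on face data, which is exactly where the human biases he describes can enter.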
An algorithm that reflects these biases can have destructive effects. Numerous studies have shown that facial recognition software misidentifies Black and Brown faces at much higher rates. A Commerce Department test of facial recognition software found that error rates for African men and women were twice as high as those for Eastern Europeans. Such errors can lead to wrongful arrests.
“If you say 10% more of a specific racial group have been convicted of a crime compared to a majority race, then a random person from that racial group who is completely innocent – their chance of being labeled as a criminal could be 10% higher just due to this underlying statistic being part of the algorithm,” said Phoha.
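A toy numerical sketch can illustrate the mechanism Phoha describes. The score formula, weights, rates, and threshold below are invented purely for the example; the point is that when a model's output leans on a group-level conviction rate, two equally innocent people can receive different outcomes based only on that statistic.

```python
# Toy illustration of base-rate bias. All numbers are hypothetical,
# chosen only to show the mechanism, not drawn from any real system.

def risk_score(individual_evidence, group_conviction_rate):
    """A naive score that mixes person-specific evidence with a
    group-level statistic learned from historical data."""
    return 0.7 * individual_evidence + 0.3 * group_conviction_rate

THRESHOLD = 0.5  # score above which the system flags someone

# Two equally innocent people: identical individual evidence, but
# their groups carry different historical conviction rates.
innocent_evidence = 0.55
for group, rate in [("group_a", 0.30), ("group_b", 0.40)]:  # 10-point gap
    score = risk_score(innocent_evidence, rate)
    print(f"{group}: score={score:.3f} flagged={score > THRESHOLD}")

# Output:
# group_a: score=0.475 flagged=False
# group_b: score=0.505 flagged=True
```

The person from group_b is flagged and the person from group_a is not, even though the individual evidence against them is identical; the underlying statistic alone tips the decision.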
Phoha says it will be an ongoing fight to combat inherent biases in algorithms.
“It is good technology but we must make sure there are safeguards. Enough science should be there to make sure the algorithms that are built are impartial,” said Phoha. “In replicating human capabilities, humans have bias.”
Software that attempts to identify people based on their facial structure can easily be misconfigured.
“Facial structure can be very different for differing ethnicities,” said Phoha. “People who are biased without knowing they are biased, that implicit bias will be translated into the data.”
If the technology is going to move forward, Phoha and many other experts believe it is an area where sociology, psychology, machine learning, computer science, and artificial intelligence need to come together.
“The science will be a mess if we don’t consider all these factors. We want an equitable society,” said Phoha. “The potential for misuse is very high. Social justice, empathy, and equity should be part of research in this area. We do not want a society where any groups are marginalized for any reason.”