A Pivotal Time for Facial Recognition

Last year in this blog we looked at the state of facial recognition and concluded that, for its most controversial applications (especially police surveillance), it simply wasn’t accurate enough. Add in the tendency of the algorithms to exhibit ethnic and gender bias, and it was clear that the use of facial recognition technology (FRT) needed to be controlled and regulated before it was deployed in the wild, and especially before it was used in public spaces.

Since September, when we published that blog, the world has been turned upside down. We have witnessed a pandemic and a global outpouring of anger at the murder of George Floyd. Some argue that the riots we have seen prove that we need FRT more than ever (although, ironically, the necessary face masks have presented their own challenges to the technology). But, based on a number of examples where FRT has been abused and misused, the case remains for FRT to be regulated and controlled as quickly as possible. Let’s look at some of those examples.

At the start of this year, a company called Clearview AI started boasting about its technology, which was being used by a number of law enforcement agencies in the US, including the FBI. The company had scraped more than 3 billion pictures from various social media platforms (Facebook, Venmo, YouTube, etc.) without the users’ permission, and then used that dataset to train its facial recognition system. Fast forward to this month, and a number of governments, including Canada’s, have started to investigate the company’s ethical practices. As a result, Clearview AI no longer offers its services in Canada. There are also challenges in the EU to the legality of its system.

The use of facial recognition systems like Clearview AI’s by police forces can have dramatic consequences. Last month it was widely reported that Robert Williams, an innocent Black man, was arrested in front of his children and locked up for 30 hours. His arrest was based on an incorrect match from CCTV footage of a thief stealing a watch from a store. The prosecutor has since apologized for the error. This case also highlighted a previous misidentification by FRT of Michael Oliver, who was accused of stealing a mobile phone from a car. A simple comparison of the images shows that the thief and Oliver are different people, yet he was charged with a felony count of larceny. The case was dismissed as soon as it reached court.

Many of the underlying datasets used to train facial recognition systems are themselves inherently flawed. Until recently, a popular dataset called “Labeled Faces in the Wild” was 83% white and nearly 78% male; neither figure is representative of the global population. Other datasets have been shown to include images labeled with derogatory slurs and offensive terms, as evidenced in a paper published earlier this month.
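As a practical aside, this kind of imbalance is easy to measure before a dataset is ever used for training. Below is a minimal sketch of such an audit, assuming a hypothetical metadata.csv file with one row per image and hypothetical “gender” and “skin_type” columns (real datasets organise and label their metadata differently):

```python
# Minimal demographic audit sketch. Assumes a hypothetical metadata.csv
# with one row per image and hypothetical "gender" and "skin_type" columns;
# real dataset metadata will use different files and labels.
import csv
from collections import Counter

def audit(path, columns):
    """Print the share of each value found in the given metadata columns."""
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    total = len(rows)
    for col in columns:
        counts = Counter(row[col] for row in rows)
        print(f"--- {col} ---")
        for value, count in counts.most_common():
            print(f"  {value}: {count} ({count / total:.1%})")

if __name__ == "__main__":
    audit("metadata.csv", ["gender", "skin_type"])
```

A check like this won’t fix a biased dataset, but it does make the skew visible before a model is trained on it.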

Closer to home (literally), Amazon’s Ring doorbell system has been used by police forces to identify thieves, and in apartment blocks, FRT systems can be used to identify “abnormal behavior” in elevators.

It is the enthusiasm to use these systems, even when they are clearly flawed, that is the key problem at the moment. Too few checks are made to assess the accuracy and the ethical implications of using FRT. We should all remember the old adage: “just because you can do it, doesn’t mean you should do it”.

Tags: Artificial intelligence, Facial recognition