Facial recognition uses AI-based machine vision to try to match an image of a stranger’s face with one in its database of known people. In theory the technology could be really useful, allowing employees automatic access to offices, criminals to be tracked down, and lost children to be found. And whilst facial recognition has been making the news quite a lot recently, it has been, as they say, for all the wrong reasons.
The key issues are poor accuracy and bias. The systems can work really well in controlled environments, such as offices with consistent lighting, but perform poorly in the wild where the cameras have to cope with different angles, different lighting and people who are not aware they are being photographed. Bias creeps in if the data set used to train the AI is not representative of the population it will be used on.
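At its core, this kind of matching works by reducing each face to a numeric "embedding" vector and comparing a new face against a database of known vectors, accepting the closest match only if it clears a similarity threshold. The sketch below illustrates the idea in miniature; the vectors, names and threshold are all invented for illustration, and real systems use learned embeddings hundreds of dimensions long.

```python
import math

def cosine_similarity(a, b):
    # Similarity between two embedding vectors: 1.0 means identical direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def identify(probe, database, threshold=0.95):
    # Return the best-matching name, or None if no match clears the threshold.
    # Lowering the threshold catches more true matches but also produces
    # more false accusations -- the accuracy trade-off described above.
    best_name, best_score = None, threshold
    for name, embedding in database.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# A toy "database" of known people (embeddings are made up).
known_faces = {
    "Alice": [0.9, 0.1, 0.3],
    "Bob":   [0.2, 0.8, 0.5],
}

print(identify([0.88, 0.12, 0.31], known_faces))  # very close to Alice's vector
print(identify([0.50, 0.50, 0.50], known_faces))  # ambiguous: no confident match
```

The threshold is where the controversy lives: in a controlled office setting the embeddings of the same person photographed twice are very similar, but poor lighting or odd angles push them apart, so deployments "in the wild" either miss people or misidentify them.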
What all this means is that when it is used by police forces, innocent people, usually from minorities, can suffer as a result. AI experts are highlighting the issues by campaigning against Amazon after research showed its facial recognition system was biased against women of colour. More benign uses are also causing concern: the New York Times carried out an interesting experiment using public webcam data to identify and track people. They collected public images of those who worked near Bryant Park in New York (available on their employers’ websites, for the most part) and ran one day of footage through a facial recognition service. It detected 2,750 faces in a nine-hour period, and, as an example, they were able to identify one person as Professor Richard Madonna.
Also in New York, facial recognition is being used to identify and track tenants and students. In London, the developers of an area north of Kings Cross Station had installed facial recognition technology across the whole site without telling anyone. After the newspapers highlighted this, the company faced an ICO investigation because it could not provide any legitimate justification, and following the damaging revelations it has committed not to use the technology again.
Liberty, the human rights organisation, has called facial recognition ‘the arsenic in the water of democracy’ and is bringing legal cases against UK police forces. But it has a huge challenge on its hands, as the UK’s Home Secretary has recently backed plans for the police to use the technology.
On the upside, some big tech firms have rejected the use of their technology by the police, and San Francisco has become the first city to ban facial recognition technology from being used in public places. And in the UK, one citizen is taking the police to court for capturing his image using facial recognition technology.
What’s clear is that the technology is far from perfect right now, yet organisations, public bodies and governments are continuing to deploy it without necessarily fully considering the consequences. In an ideal world there would be global standards on its permitted use (just as there are with chemical weapons), but these will be extremely difficult to achieve, considering the different attitudes to it around the world. Someone, or some nation, needs to take the lead in regulating the use of facial recognition so that citizens can trust it, or at least have recourse when it is used abusively. The campaigning work of the AI experts mentioned earlier, organisations such as Big Brother Watch, and, ironically, AI technology itself (using adversarial systems) are currently our best hopes for maintaining one of our basic freedoms.