With Great Power Comes Great Responsibility

If you have ever read comic books, you will know that the best ever piece of advice to come out of the world of Marvel Comics was when Peter Parker's (aka Spider-Man) Uncle Ben told him that 'with great power comes great responsibility' (Winston Churchill said something very similar, but it is the Spider-Man version we all remember!). With Artificial Intelligence, that 'great responsibility' has never mattered more, as the technology's capabilities become ever more powerful.

We have recently seen both deliberate and accidental abuses of AI's power. There have been cases of companies and countries using facial recognition to track their customers and citizens without consent. And we have seen local authorities in the UK using algorithms to make decisions about their citizens' welfare benefits, without really understanding how those algorithms work or the risks they carry.

Right now, it's clear that the technology is moving much faster than any regulations or laws can keep up with. That's not surprising given the rigour and robustness that need to go into creating new laws and regulations that will affect people's lives in important ways. But is there anything that can be done in the meantime to eliminate, or at least reduce, the abuses of AI's power?

A number of 'AI ethical frameworks' have already been published by various bodies, including private companies, research institutions and public sector organisations. Anna Jobin, Marcello Ienca and Effy Vayena from the Health Ethics & Policy Lab at ETH Zurich in Switzerland recently surveyed the global landscape of ethical AI frameworks. They found 84 documents containing ethical principles or guidelines for AI, most of which (88%) had been released after 2016. Interestingly, they found broad agreement on what the key principles are (transparency, justice and fairness, non-maleficence, responsibility and privacy), but substantial divergence over how those principles should be interpreted; why they are deemed important; what issue, domain or actors they pertain to; and how they should be implemented. One of the biggest challenges is finding harmonisation across different geographies and cultures.

Another approach is to try to control the issue at source. Hannah Fry, an associate professor in the mathematics of cities at University College London, advocates that developers of AI (specifically mathematicians and computer engineers) should sign the equivalent of a doctor's Hippocratic Oath. Whereas for doctors the ethical issues are right in front of them, for mathematicians the issues are generally abstract and one or more steps removed from the work they do. Modelling a dataset of geographical disease spread is methodologically no different from modelling the spread of riots, but the two are ethically a world apart.

The concept could work, but there will be challenges in defining who should be included and who should not. Mathematicians are not regulated or qualified in the way that doctors are, and AI is becoming more democratised every day. If a reasonably intelligent person (and in some cases, a child) can build a simple algorithm on a laptop, it becomes difficult to know where to draw the line on who should take the oath. Hannah Fry will address these questions when she delivers the 2019 Royal Institution Christmas Lectures this month.

As with all difficult problems, it is better to talk about them than not. The more the abuses of AI are highlighted, the greater the demand for regulation will be. Whether that regulation takes the form of an oath, international law or something in between remains to be seen.
