While AI’s role in society has been a focal point of debate in recent years, the outbreak of COVID-19 has given the world a demonstration of the benefits AI could potentially deliver. From autonomous vehicles that deliver needed supplies without human contact, to AI sensing mechanisms that enforce social distancing, to machine-learning-powered detection software that helps identify outbreaks, the reach of AI into our daily lives is expanding continuously.
However, alongside this increased deployment of AI systems, the debate around the safety of AI usage, its potential biases, and questions about its integrity has come into sharp focus. Specific AI-powered technologies, such as facial recognition, have become flashpoints in the political discussion on surveillance and bias in society. In a recent survey carried out in the UK by BCS, the Chartered Institute for IT, a majority of the general public responded that they distrusted any institution, from education to social services, that would deploy AI to make decisions. In this environment, much of the discussion revolves around how much trust the public can place in AI, as well as how well we need to understand how AI will be deployed in order to minimize risks to society.
Several analogies have been used to describe how integrated AI will be in our future lives; one such comparison is to electricity: that AI’s impact on society will be similar to the way electricity transformed our entire way of life. This is an apt comparison, not only because it shows us the potential scale of the transformation AI can bring, but because it points us toward a way to resolve the tension between AI’s growth and the mistrust it currently engenders.
Although it is difficult for us now to imagine a world without electrical power, the first commercial generators were installed less than a century and a half ago. Pioneered by Thomas Edison in the US, electrical power was hailed as the herald of a modern age of light, but at first the technology engendered both admiration and fear in the populace. In the United States, the rollout of the technology in the 1880s and 1890s was marked by an unorganized welter of competing companies focused on market growth above all else. The infamous War of the Currents, waged between Edison and his main competitor Westinghouse, was predicated on each party showing the public that the other’s system (alternating current for Westinghouse, direct current for Edison) was dangerous to human life. The laissez-faire regulatory ecosystem of the time, combined with the newness of the industry, ensured that there were no safety standards for the equipment. Together with a few high-profile and gruesome deaths resulting from electrical accidents, the result was public fear and distrust of the new technology.
Beginning at the turn of the 20th century, however, an awareness emerged that standards were required not only to make the technology safe, but also to foster public trust in that safety. High-profile accidents and deaths had caused incidents such as the “Electric Wire Panic” of 1889 in New York City, where residents, frightened by the perceived dangers of electricity, chopped down power poles and threw out telephones for fear of electrocution. By 1895, several electrical installation standards had been codified in the US, and these were consolidated into a single nationwide code by 1897. Efforts were also made at the international level, through governmental cooperation with private organizations, to define the standards and frameworks that eventually produced our routine, modern world reliant on electrical power.
As it stands now, AI could be seen as being in a similar stage of development to electrical power in its first few decades. Even as AI stands ready to transform our world, with implementations across industries and integrations into many technologies, we are only at the beginning of forming a common vision for safe development at a global level that we can all subscribe to. Various organizations around the world, some governmental, some private, others mixed institutions, have promulgated differing visions of what safe, responsible, and trustworthy AI means. There are now almost a hundred of these frameworks: for example, the OECD Principles on AI issued in May 2019, or the Toronto Declaration issued by Amnesty International and Access Now and endorsed by Human Rights Watch and the Wikimedia Foundation. Many of these approaches are summarized in the graphic below, but they all address different facets of AI technology and place different emphasis on which aspects of the technology should be monitored and regulated. Given that local development of AI will have global impacts, it is desirable that we strive towards a common framework.
Just as unified frameworks helped ease the adoption of electricity in the previous century and allowed the public to come to rely on a previously distrusted technology, so can unified principles help AI adoption by harmonizing standards for testing, transparency, auditing, and demonstrating compliance. Another way we can make progress towards this common framework is by working with future scenarios that take the human factor into account. While the technological aspects of trustworthy AI are important, for its uptake and adoption to succeed we must also train non-technologists in diverse, vital fields, such as medicine and law, on the socio-economic, business, regulatory, and ethical aspects of AI systems. Working with future scenarios is a proven way for non-specialists to envision and understand the impact of transformative technologies such as AI, and to disseminate that knowledge among their peers.
By working towards a global framework for principled AI, and by training with future scenarios that make the technology more approachable and understandable, we can take a big step forward in making AI not just safe but also trusted by the public. The gains that AI can achieve on behalf of society are possible only if it is seen as a trustworthy technology, and a unified standard in which the general populace can place its trust will go a long way towards making that vision a reality. If AI is truly to become a background technology of the 21st century, as electricity was of the 20th, then we must work towards making it safe, comprehensible, and trusted.
Claudia Olsson founded Stellar Capacity with the vision to help humans become more digital, and digital to become more human. She was named a Young Global Leader by the World Economic Forum, served on its Future Council on Values, Ethics and Innovation, and is the author of “Sweden 2030 - an integrated, smart and competitive digital society” for the Digitalization Commission at the Government of Sweden. Read more about her and our team here.
Peng Wu is the Program Director for Research at Stellar Capacity with a background in transportation, maritime logistics, and urban planning. He specializes in urban development, social sustainability, and is passionate about how better leadership can leverage technology to build a better tomorrow for all.
Would you like to develop your digital leadership and learn how to navigate digital change? Sign up for the Stellar Executive Program at www.stellarcapacity.com. We also offer a diverse range of courses, trainings, seminars, and tailor-made programs for organizations and individuals pursuing digital transformation.