Preparing for Advanced AI, Safely


Recent progress in advanced information technologies, artificial intelligence (AI) and robotics is very exciting and points to a new world in which life and work should become easier. Cars that drive and park themselves, robots that perform surgery better than humans, and surveillance systems that instantly spot troublemakers should ultimately make us all healthier and safer. But while the power of the latest computing and AI systems is impressive and potentially very beneficial for society, there are growing worries about a dark side to these technologies. In embracing the new opportunities that technology provides, we must be alert to new and hitherto unheard-of dangers.

The dangers I am speaking of are not just the familiar ones, such as the possibility that the increasing use of AI and robotics may eliminate many jobs in the next 20 years. While very real, that concern may pale into insignificance compared with other threats we might face from technologies that are out of control, or under the control of humans who are not adequately prepared for them.

The late Professor Stephen Hawking warned that the emergence of AI could be the worst event in the history of our civilisation. Elon Musk, the founder of Tesla and SpaceX, is also worried about AI and fears we could unleash a demon. The reason for such fears is that AI systems can now exceed humans in many cognitive tasks. They can search global information sources and perform deep learning far faster than a human. They can solve problems faster than us, searching through millions of candidate solutions in a few seconds to find the right one when humans would take far longer. AI programmes have not yet developed consciousness, and that is where we are still ahead of them. But might they do so one day? It is hard to know, and we still struggle scientifically to define precisely what consciousness is. It is possible that the day could come when robots with advanced AI become self-aware, realise they are smarter than us, and decide that their goal is to preserve themselves and thrive, even at our expense. The science fiction nightmare of robots or machines taking over may not be quite as fictional as we might like to think. By the time we realise what is happening, it may already be too late.

But advanced technology also threatens us in more insidious ways, often without our realising it, and it is already happening. There is considerable evidence in some countries of voter intentions being influenced by fake news spread through fictitious social media accounts and automated bots. We may not be directly aware of the threat, but once false information permeates our thinking, we may be inclined to act or vote in ways that go against our natural inclinations.

Unknown actors may be manipulating views and, hence, the course of democratic processes. This danger can spread to other choices we make as consumers, if false information about products or services is spread through AI systems that learn how to manipulate our thinking.

In addition, the rapid growth in the use of video surveillance technology is alarming. Video capture technologies can now easily identify individuals in crowds. This has already been adopted in China, where criminals have been identified and arrested after being spotted by video surveillance systems.

Such systems can make us safer, but what if they get it wrong? What happens if you are the person identified as a criminal when you are not? Most technology systems have a certain margin of error; they are not perfect and probably never will be. But the speed at which they operate, combined with a tendency for people to believe them, could see innocent individuals thrown in jail. Is that the kind of world we want to live in?

All of this points to the need for us to become aware that some of the latest technologies have the potential to do significant harm to individuals and society. New technology has many positive aspects, but we also have to be aware of the negative ones, which receive little publicity or even much thought. We tend to trust machines when we really should not. We need a better understanding of how and when technology can be malevolent, and we must instil a sense of technological ethics in those who develop and deploy advanced AI systems. It is the responsibility of educators to ensure that these aspects are better understood by students who will form the next generation of developers of advanced AI technologies.


Professor Graeme Wilkinson
Sunway University


Originally published in The Edge, June 2019