Advances in artificial intelligence, synthetic biology, nanotechnology and robotics promise a new start for the human race – if it learns to understand the risks, that is
If an open letter bearing his name is anything to go by, Professor Stephen Hawking has mixed views on artificial intelligence (AI).
“The potential benefits are huge,” suggested the letter, signed by 150 luminaries and released in January by the Future of Life Institute (which works “to mitigate existential risks facing humanity”).
“Everything that civilisation has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools that AI may provide, but the eradication of war, disease and poverty would be high on anyone’s list. Success in creating AI would be the biggest event in human history.
Unfortunately,” added the letter (endorsed by entrepreneur Elon Musk, among others), “it might also be the last, unless we learn how to avoid the risks”.
It is human nature to focus on the positive aspects of new technology, but we ignore the potential – perhaps catastrophic – downsides at our peril.
“People should be scared,” says Professor Huw Price of Cambridge University’s Faculty of Philosophy, one of the three founders of the Centre for the Study of Existential Risk. “As all risk managers know, the allocation of resources into what gets studied is not always logical. As far as we know, these events are not so unlikely that they can be dismissed, and they are not called ‘catastrophic’ for no reason.”
Advanced AI is not the only threat on the horizon: advanced forms of synthetic biology, nanotechnology weaponry and robot warriors are all on the verge of becoming reality.
“The great bulk of existential risk in the foreseeable future is anthropogenic – that is, arising from human activity,” explains Professor Nick Bostrom, a philosopher at Oxford University’s St Cross College and editor of the book Global Catastrophic Risks. “In particular, most of the biggest existential risks seem to be linked to potential future technological breakthroughs that may radically expand our ability to manipulate the external world or our own biology. As our powers expand, so will the scale of their potential consequences – intended and unintended, positive and negative.”
As an example, Bostrom cites the advanced forms of synthetic biology, nanotechnology weaponry and machine superintelligence that might be developed this century.
“This new class of technology is greatly reducing the number of people it would take to wipe out our species,” says Price. “However, because nanotechnology is perhaps not as dramatic as nuclear war, it has not yet received the attention it deserves. Yet, this will be the situation for a long time; our technology is not going to get less powerful.”
However, existential risks have so far barely been studied, perhaps because they are so vast and hard to grasp. “We therefore know little about how big various risks are, what factors influence the level of risk, how different risks affect one another, how we could most cost-effectively reduce risk or what are the best methodologies for researching existential risk,” says Bostrom.
“We need to be investigating, mitigating and managing these risks,” says Price.
“We should be developing a serious plan. So far, many of the people concerned about these risks have come from outside science, or from Hollywood, and this has contributed to the sense that the risks are flaky. However, they are a real danger. We need to bring their study into the realm of serious analysis. We all live in an environment with long-tail risks, and we all know that sooner or later a long-tail risk will get us. What about when that risk could wipe us all out? That is the reality we face.”
Cause for concern
Risk managers should be concerned about existential risk because they are human beings with moral responsibilities, agrees Bostrom.
“Their greatest opportunities to do something helpful with regard to existential risks might lie outside their day-to-day practice,” he says.
For example, risks from future technologies might be studied by means of theoretical modelling to determine their capabilities, what kinds of safeguards are needed and the strategic context in which they might be used.
“We are not trying to say that we can change the world,” says Price. “However, what we can do is shift the problem from a predominantly bad outcome to a predominantly good one. In many ways, this is akin to putting on a seatbelt. It might not be the whole answer, but it definitely makes things better.”