Why am I interested in AI governance? One reason is that artificial intelligence may pose an existential risk to humanity. Toby Ord does an excellent job of setting out the case for existential risk from AI in his book The Precipice. In this post I'm going to summarise a little of his argument about why we should take AI risk seriously.
I hope this will provide a useful background against which to set my next post, on how law might help in dealing with the problem of aligning AI goals with humanity's interests.
What is an existential risk?
Ord's book is about existential risks to humanity. He defines existential risks as those that, should they eventuate, would permanently destroy humanity's long-term potential. Outcomes that fit this description include:
human extinction;
unrecoverable collapse of civilisation; and
the permanent lock-in of a dystopian society.
And Ord addresses a number of risks that could bring about these outcomes, including:
risks of natural disasters ranging from comet or asteroid strike to volcanic eruption;
risks from weapons of mass destruction, especially nuclear weapons;
climate risks;
biosecurity and pandemic risks; and
risks associated with the development of artificial intelligence.
A speculative, but high-severity, risk
Of all these risks, Ord sees the case for existential risk from AI as the 'most speculative'. But, notwithstanding its speculativeness, it is a case for a very high level of risk. A speculative case for such a high level of risk, he argues, may be more important than a robust case for a very low-probability risk.
So even speculative AI risks, on this account, are worth taking seriously.
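To make that comparison concrete, here is a minimal back-of-the-envelope sketch in expected-loss terms. The numbers are purely hypothetical and are not Ord's estimates; the point is only that even a heavily discounted speculative probability can remain far larger than a robust but tiny one, given that the severity (the permanent loss of humanity's long-term potential) is the same in both cases.

```latex
% Hypothetical numbers, for illustration only -- not Ord's estimates.
% Severity is identical in both cases (permanent loss of humanity's
% long-term potential), so comparing expected losses reduces to
% comparing probabilities.
%
% Speculative case: a 10% risk estimate that we credit only 10% of
% the time, i.e. an effective probability of 0.10 x 0.10.
% Robust case: a well-established one-in-a-million risk.
\[
  \underbrace{0.10 \times 0.10}_{\text{discounted speculative risk}} = 10^{-2}
  \;\gg\;
  \underbrace{10^{-6}}_{\text{robust low-probability risk}}
\]
```

Even after a tenfold discount for uncertainty, the speculative risk in this toy comparison is four orders of magnitude larger than the robust one.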
Expert opinion on AI risk
The weight of expert opinion is another reason to take AI risks seriously. Ord's review of expert opinion shows that many (though not all) experts working in the field of AI risk are very concerned about it.
One of the most striking pieces of evidence for this is a poll of leading AI researchers conducted by the Future of Life Institute in 2016.
One question in the poll asked researchers by what year there would be a greater than 50% probability of human-level AI. The distribution of answers was very wide, with some researchers pushing that date out by more than a hundred years. But the median answer was 2045! So a significant portion of experts consider there to be a meaningful possibility that a general, human-level (or better) artificial intelligence (AGI) will be developed within the coming decades.
Another question revealed that 50% of respondents put the probability of the long-term impact of AI being "extremely bad (e.g. human extinction)" at 5% or more.
I hope to do a deeper review of some of those experts' opinions, and the results of the FLI survey, in another post.
Suffice it to say, for now, that many AI researchers take seriously the possibility that an artificial general intelligence, matching or surpassing human intelligence, will:
be developed within the next 50 years; and
be extremely bad for humanity.