The folks at Gizmodo are on the case, writing How We Can Prepare for Catastrophically Dangerous AI–and Why We Can’t Wait.
There are many extreme scenarios that come to mind. Armed with superhuman powers, an ASI [artificial superintelligence] could destroy our civilization, either by accident, misintention, or deliberate design. For instance, it could turn our planet into goo after a simple misunderstanding of its goals (the allegorical paperclip scenario is a good example), remove humanity as a troublesome nuisance, or wipe out our civilization and infrastructure as it strives to improve itself even further, a possibility AI theorists refer to as recursive self-improvement. Should humanity embark upon an AI arms race with rival nations, a weaponized ASI could get out of control, either during peacetime or during war. An ASI could intentionally end humanity by destroying our planet’s atmosphere or biosphere with self-replicating nanotechnology. Or it could launch all our nuclear weapons, spark a Terminator-style robopocalypse, or unleash some powers of physics we don’t even know about. Using genetics, cybernetics, nanotechnology, or other means at its disposal, an ASI could reengineer us into blathering, mindless automatons, thinking it was doing us some sort of favor in an attempt to pacify our violent natures. Rival ASIs could wage war against themselves in a battle for resources, scorching the planet in the process.
As the article notes, a new AI called AlphaZero taught itself to play chess, Go, and shogi in three days, then beat AlphaGo (the first AI to defeat a human world-champion Go player) as well as two other top programs purpose-built for chess and shogi.
The secret ingredient: “reinforcement learning,” in which the program learns from experience by playing millions of games against itself. This works because AGZ (AlphaGo Zero) is rewarded for the most useful actions, i.e., devising winning strategies. The AI does this by considering the most probable next moves and calculating the probability of winning for each of them. AGZ could do this in 0.4 seconds using just one neural network. (The original AlphaGo used two separate networks: one selected next moves, while the other calculated the probabilities.) AGZ needed only 4.9 million games to master Go, compared to 30 million for its predecessor.
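The core idea of learning from self-play can be illustrated with something far simpler than AGZ's deep networks. The sketch below is a hypothetical toy, not AGZ's actual algorithm: a tabular learner for tic-tac-toe that plays both sides against itself and nudges the value of each move it made toward the game's final outcome, so moves that tend to produce wins get chosen more often.

```python
import random
from collections import defaultdict

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
         (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' for a win, 'draw' for a full board, None otherwise."""
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return 'draw' if ' ' not in board else None

def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell == ' ']

class SelfPlayAgent:
    """Tabular Monte Carlo learner: plays itself and updates each
    (state, move) value toward the reward the mover eventually received."""
    def __init__(self, epsilon=0.1, alpha=0.5):
        self.q = defaultdict(float)   # (board, move) -> estimated value
        self.epsilon = epsilon        # exploration rate during training
        self.alpha = alpha            # learning step size

    def choose(self, board, explore=True):
        moves = legal_moves(board)
        if explore and random.random() < self.epsilon:
            return random.choice(moves)          # occasionally try something new
        return max(moves, key=lambda m: self.q[(board, m)])

    def train_game(self):
        board, player, history = (' ',) * 9, 'X', []
        while winner(board) is None:
            move = self.choose(board)
            history.append((board, move, player))
            board = board[:move] + (player,) + board[move + 1:]
            player = 'O' if player == 'X' else 'X'
        result = winner(board)
        for state, move, mover in history:
            # +1 if the mover went on to win, -1 if to lose, 0 for a draw
            reward = 0.0 if result == 'draw' else (1.0 if result == mover else -1.0)
            key = (state, move)
            self.q[key] += self.alpha * (reward - self.q[key])

random.seed(0)
agent = SelfPlayAgent()
for _ in range(20000):
    agent.train_game()

# Evaluate the greedy (no-exploration) policy as X against a random opponent.
def play_vs_random(agent):
    board, player = (' ',) * 9, 'X'
    while winner(board) is None:
        if player == 'X':
            move = agent.choose(board, explore=False)
        else:
            move = random.choice(legal_moves(board))
        board = board[:move] + (player,) + board[move + 1:]
        player = 'O' if player == 'X' else 'X'
    return winner(board)

results = [play_vs_random(agent) for _ in range(200)]
not_lost = sum(r != 'O' for r in results) / len(results)
print(f"win-or-draw rate vs random: {not_lost:.2f}")
```

The reward signal here is exactly the one the passage describes: nothing but the final result of each game, propagated back over the moves that produced it. AGZ replaces the lookup table with a single neural network that generalizes across board positions, which is what lets the same recipe scale from tic-tac-toe to Go.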
Go is not the real world; it's a closed environment with very few, very clear rules. But the trajectory of AI is clearly toward learning to do more and more, faster and faster.