The doomsayers believe that humanity will be overwhelmed by machines of its own creation that – like Terminator's Skynet – become ever more clever and reach a singularity. They're wrong.
The singularity – or, to give it its proper title, the technological singularity – is an idea that has taken on a life of its own; more of a life, I suspect, than the thing it predicts ever will. It's a Thing for techno-utopians: wealthy middle-aged men who regard the singularity as their best chance of immortality. They are Singularitarians, some seemingly prepared to go to extremes to stay alive for long enough to benefit from a benevolent super-intelligent AI – a man-made god that grants transcendence.
And it's a thing for the doomsayers, the techno-dystopians: apocalypsarians who are equally convinced that a super-intelligent AI will have no interest in curing cancer or old age, or ending poverty, but will – malevolently or maybe just accidentally – bring about the end of human civilisation as we know it. History and Hollywood are on their side. From the Golem to Frankenstein's monster, Skynet and the Matrix, we are fascinated by the old story: man plays god and then things go horribly wrong.
The singularity is basically the idea that as soon as AI exceeds human intelligence, everything changes. There are two central planks to the hypothesis. One is that as soon as we succeed in building AI as smart as humans, it rapidly reinvents itself to be even smarter, starting a chain reaction of smarter AI inventing even smarter AI, until even the smartest humans cannot possibly comprehend how the result works. The other is that, from the moment of the singularity onwards, the future of humanity is in some sense out of our control.
So should we be worried or optimistic about the technological singularity? I think we should be a little worried – cautious and prepared may be a better way of putting it – and at the same time a little optimistic (that's the part of me that would like to live in Iain M Banks' Culture). But I don't believe we need to be obsessively worried by a hypothesised existential risk to humanity. Why? Because, for the risk to become real, a sequence of things all need to happen: a sequence of big ifs.
If we succeed in building human equivalent AI and if that AI acquires a full understanding of how it works, and if it then succeeds in improving itself to produce super-intelligent AI, and if that super-AI, accidentally or maliciously, starts to consume resources, and if we fail to pull the plug, then, yes, we may well have a problem. The risk, while not impossible, is improbable.
By worrying unnecessarily we're falling into a trap: the fallacy of privileging the hypothesis. And, perhaps worse, we're taking our eyes off other risks we should really be worrying about, such as man-made climate change or bioterrorism. Let me illustrate what I mean. Consider the possibility that we invent faster-than-light (FTL) travel some time in the next 100 years. Suppose I then worry you by outlining all sorts of nightmare scenarios that might follow. At the end of it you'll be thinking: my god, never mind climate change, we need to stop all FTL research right now.
A human-equivalent AI would need to be a generalist, like humans. It would need to be able to learn, most likely by developing over the course of some years, and then to generalise what it has learned – in the same way we learned as toddlers that wooden blocks could be stacked, banged together or stood on to reach a bookshelf. It would need to understand meaning and context, be able to synthesise new knowledge, have intentionality and – in all likelihood – be self-aware, so that it understands what it means to have agency in the world.
So we don't need to be obsessing about the risk of super-intelligent AI, but I do think we need to be cautious and prepared. In a Guardian podcast last week, the philosopher Nick Bostrom explained that there are two big problems, which he calls competency and control. The first is how to make super-intelligent AI; the second is how to control it (ie, to mitigate the risks). He says hardly anyone is working on the control problem, whereas loads of people are going hell for leather on the first. On this I agree 100%, and I'm one of the small number of people working on the control problem.
In 2010 I was part of a group that drew up a set of principles of robotics – principles that apply equally to AI systems. I strongly believe science and technology research should be undertaken within a framework of responsible innovation, and have argued we should be thinking about subjecting robotics and AI research to ethical approval, in the same way we do for human subject research. And recently I've started work towards making ethical robots. This is not just to mitigate future risks, but because the kind of not-very-intelligent robots we make in the very near future will need to be ethical as well as safe. We should be worrying about present-day AI rather than future super-intelligent AI.
theguardian.com