This morning I read an article on
Artificial Intelligence (AI) in The Guardian, called “Can we stop robots outsmarting humanity?”
and it triggered some thoughts.
First of all, terrible title (note: the title was changed after I wrote this): robots
and AI are not the same thing, and the article isn't about limiting the intelligence
of AI, but about limiting or preventing the damage Superintelligent AIs (SAIs) might
be able to do in the future. The headline was probably written by an editor who
thought it was smart to be inaccurate, in the belief it would get
more people to read it. I'm not so sure; I think "How do we prevent Artificial
Intelligence from wiping us all out?" would have gotten plenty of clicks too,
but I digress.
AIs are getting more I
The article talks about people and
institutions that try to prevent the damage SAIs might do. SAIs don't really exist
yet; what we have now are mostly very narrow AIs that can do one thing really
well, like chess. But slowly we're moving to broader and more advanced AIs,
like Google's AlphaZero and IBM's Watson. These AIs can be repurposed and
expanded upon.
For example, in the past an AI would
be developed just to play chess. Programmers would feed thousands of human chess
matches into the system, and it would learn from rules and tricks thought up by the
best human players. By 1997 these AIs were better than humans, and they have
kept improving since. Then, in 2017, AlphaZero was introduced to chess. The
program was taught only the rules of the game and simply played against itself.
Within four hours it was playing better than any human master. It went on to beat the best chess computer
in the world with 28 wins, 72 draws and 0 losses, using a style of play all its own.
The sketch below gives a feel for what "played against itself" means in practice.
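To make the self-play idea concrete, here is a minimal sketch in Python. It is emphatically not AlphaZero itself (the real system combines deep neural networks with Monte Carlo tree search); it is a tiny tabular learner for the much simpler game of Nim, given nothing but the rules and improving purely by playing against itself. All names, parameters, and the choice of game are my own illustration.

```python
# A minimal sketch of self-play learning, not AlphaZero itself.
# A tabular learner is given only the rules of Nim (take 1-3 stones;
# whoever takes the last stone wins) and improves purely by playing
# against itself. All names and parameters are illustrative.
import random
from collections import defaultdict

Q = defaultdict(float)      # value estimate for each (stones_left, move)
ALPHA, EPSILON = 0.1, 0.2   # learning rate and exploration rate

def legal_moves(stones):
    return [m for m in (1, 2, 3) if m <= stones]

def pick_move(stones):
    if random.random() < EPSILON:        # sometimes explore at random
        return random.choice(legal_moves(stones))
    return max(legal_moves(stones), key=lambda m: Q[(stones, m)])

def self_play_game(stones=21):
    history, player = [], 0
    while stones > 0:
        move = pick_move(stones)
        history.append((player, stones, move))
        stones -= move
        player ^= 1
    winner = history[-1][0]              # whoever took the last stone
    for p, s, m in history:              # credit each move with the outcome
        reward = 1.0 if p == winner else -1.0
        Q[(s, m)] += ALPHA * (reward - Q[(s, m)])

for _ in range(50_000):                  # train by pure self-play
    self_play_game()

# The greedy policy should now leave a multiple of 4 for the opponent,
# the known optimal strategy; from 21 stones that means taking 1.
print(max(legal_moves(21), key=lambda m: Q[(21, m)]))
```

The same loop, rules in and stronger play out, is what AlphaZero does at a vastly larger scale, with a neural network in place of the lookup table.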
Impressive, but chess is a so-called
'perfect information' game, which means that all the relevant information is
visible to both players at all times; nothing is hidden. It's free from randomness and chaos. It's still a
giant leap from the orderly chess board to the chaotic real world.
What is success?
While we are capable of making self-learning
programs, the challenge lies in having these programs correctly evaluate whether
they are successful. With chess this is easy: win most games. But with a more
ambitious goal – say, curing human disease – it's harder. If the AI wipes out
all humans and this ends human disease, has it been successful? The toy sketch
below shows how easily an objective can go wrong.
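Here is a deliberately silly Python illustration, not any real system, of this specification problem. The objective function and the "world" dictionaries are entirely hypothetical.

```python
# A toy illustration, not any real system, of how a carelessly
# specified objective rewards a catastrophic "solution".
# All names here are hypothetical.
def objective(world):
    # Naive goal: minimise the number of sick humans.
    return -world["sick_humans"]

cure_everyone   = {"sick_humans": 0, "humans": 8_000_000_000}
wipe_out_humans = {"sick_humans": 0, "humans": 0}

# Both plans score identically: the objective never says that keeping
# humans alive matters, so an optimiser has no reason to prefer the cure.
print(objective(cure_everyone) == objective(wipe_out_humans))  # True
```

The real specification problem is vastly harder than this toy, but the failure mode is the same: an optimiser exploits what the objective actually says, not what we meant.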
This brings us to the crux of the
fears humans have about AIs: that their solutions don't take our interests into
account. I would argue that an artificial intelligence that does that is
not a SAI. But the road to SAIs is fraught with the danger of such defective
and destructive AIs. This is not the AI's fault, but that of the fallible humans who
make them.
A true SAI would be able to
correctly assess whether its solution is the optimal one. In order to do that,
we have to provide it with a correct answer to the question "what is the right
thing to do?", or give it the tools to come to a proper conclusion on its own. We've
struggled with that question for ages. How do we reach a conclusion that is
not biased in any way? Is that even possible?
How do you solve a problem like humans?
Most humans would prefer the answer to be biased anyway.
We want it to prioritize human interests above all others. I suspect a non-biased
SAI ruling the world wouldn't wipe us out, but would seriously cull the world's human
population and put the rest of us in supercomfortable human zoos – for our own and the
universe's good.
People don’t like the idea of being
dominated and nannied by a superior intellect in the future. Tough luck, I say,
that’s part of evolution. But I’m sure many people would rebel and if there is
ever a human versus machine war, you know it will have been us that started it.
Us and our overinflated sense of importance.
Galileo all over again
A lot of these articles understandably
focus on humanity's loss instead of the universe's gain. But if we are capable,
at some point in the future, of developing a superior intelligence that's truly
wise, just and logical, wouldn't that be a good thing? Even if we die out in
the process? I don't have an answer to that question, because ultimately it
would mean I have an answer to the question "What is the point of existence?"
But within the conventional linear perception of time and progress, I think we can
argue that the answer is yes.
It’ll just be another point in our
collective history that we discover that the universe doesn’t revolve around
us. Accepting that truth might turn out to be much harder than developing Superintelligent
Artificial Intelligences.