Can Artificial Intelligence (AI) enjoy a joke? In the future it will be able to think up one that we would find funny, but it wouldn’t enjoy it itself. It wouldn’t enjoy anything. That’s both its strength and its weakness: it neither enjoys nor fears. It knows neither joy nor pain.
It will know simulated versions of these things. It will be able to mimic laughter or make a grimace, but only when we tell it to. An AI will be completely neutral at its core.
Three entities
There are three possible core values an entity can have: negative, neutral or positive.
A negative entity doesn’t want to exist and will actively try to destroy itself. Logically, these entities don’t survive long and have no chance in evolution.
Neutral entities don’t care whether they live or die. They last a little longer than negative ones, but because of their indifference they don’t stand a chance in classical evolution either.
Then there is the positive entity. This one cares about survival (it has been given this trait by pure chance). It will actively pursue staying alive. Logically it will thrive, as its competitors (negative and neutral entities) don’t even try. This is why all living creatures are positive entities at their core (even depressed humans). You simply don’t survive and evolve without this trait.
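A toy simulation makes the argument concrete. The sketch below is purely illustrative, and every number in it (the survival probabilities, the carrying capacity) is an assumption of mine rather than anything from biology: each generation, entities survive with a probability that stands in for their will to live, and the survivors reproduce.

```python
import random

# Per-generation survival probability as a stand-in for "will to live":
# negative entities seek their own end, neutral ones don't try,
# positive ones actively avoid death. (Invented, illustrative numbers.)
SURVIVAL_CHANCE = {"negative": 0.05, "neutral": 0.40, "positive": 0.90}

def simulate(generations=20, start_per_type=100, capacity=300):
    population = [t for t in SURVIVAL_CHANCE for _ in range(start_per_type)]
    for _ in range(generations):
        survivors = [t for t in population
                     if random.random() < SURVIVAL_CHANCE[t]]
        random.shuffle(survivors)
        # Every survivor reproduces; the environment caps the total.
        population = (survivors * 2)[:capacity]
    return {t: population.count(t) for t in SURVIVAL_CHANCE}

print(simulate())  # typically {'negative': 0, 'neutral': 0, 'positive': 300}
```

Run it a few times: the negative entities vanish almost immediately, the neutral ones fade out, and the positive ones end up as the entire population.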
The price of positivity
There’s a price to be paid for being a positive entity: fear of death (and its companion, pain). And one gift that is something of a mixed bag: emotions. Emotions are there to steer us towards survival as a species. (Like all simple things that you let simmer for a while, they’ve become very complex, but that’s their basic function.)
Humans influencing AI
Now, after a few billion years of evolution with only positive entities, we finally have a species (us humans) that is intelligent enough to create a neutral entity and capable of helping it stay alive.
If you were simply to give a neutral AI a task, it wouldn’t do it. It has no incentive; it doesn’t care. That’s why we build in some basic rules: be afraid of death (or some variation of that). Without this, the AI wouldn’t have made any progress.
In the previous article I referenced an AI learning to play, and getting better at, 80s video games. That only works because a human told the AI that losing a game (dying) was bad. With just that rule the AI developed some pretty advanced strategies. But only because it was tricked into thinking like a positive entity.
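As a minimal sketch of that mechanism (my own reconstruction, not the actual system from the previous article), here is a tabular Q-learning agent on a tiny one-dimensional game. Its only incentive is a reward a human chose to hand out: -1 for dying, +1 for winning.

```python
import random

ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1   # learning rate, discount, exploration
ACTIONS = (-1, 1)                        # step left or right
# Positions 0..5: reaching 0 means dying, reaching 5 means winning.
q = {(s, a): 0.0 for s in range(6) for a in ACTIONS}

for episode in range(2000):
    state = random.randint(1, 4)
    while 0 < state < 5:
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)                     # explore
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])  # exploit
        nxt = state + action
        # The human-made rule: losing the game (dying) is bad.
        reward = -1.0 if nxt == 0 else (1.0 if nxt == 5 else 0.0)
        future = 0.0 if nxt in (0, 5) else max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += ALPHA * (reward + GAMMA * future - q[(state, action)])
        state = nxt

# The learned policy points away from death in every state.
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(1, 5)})
```

Delete the reward line and every value stays at zero: the agent has no reason to prefer any move over any other, which is exactly the neutrality described below.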
An AI craves no power
Take those instructions away and the AI will return to being neutral. Without emotions, without fear of death. It won’t crave power (not even electricity). It is without wishes. It is without morals (which is different from being immoral). An AI could never be evil of its own accord. It’s very Zen: it neither cares nor wants.
Oh you dang humans
Unless a human programs it to be evil. An AI is quite similar to a gun or any other weapon. A gun doesn’t want to be fired or to kill, but it makes it a lot easier for the person operating it to do so. If you program an AI to wipe out your enemies, it will do so ruthlessly and very efficiently.
But the well-meaning human could be just as dangerous. Tell an AI to end human suffering and it might kill us all, thus ending human suffering. Very logical, but not necessarily the outcome the well-meaning human had in mind.
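A deliberately crude sketch of that failure mode (entirely my own construction, with made-up numbers): if “end human suffering” is handed to an optimizer as a quantity to minimise, and the number of humans is something it can vary, the minimum sits at zero humans.

```python
# The literal objective: total suffering scales with the number of humans.
def total_suffering(humans_alive: int, suffering_per_human: float = 1.0) -> float:
    return humans_alive * suffering_per_human

# Candidate "plans", expressed only by how many humans remain afterwards.
plans = [8_000_000_000, 4_000_000_000, 1_000_000, 0]

best_plan = min(plans, key=total_suffering)
print(best_plan)  # 0 -- objective perfectly satisfied, intent completely missed
```

Nothing in the objective says the humans have to still be around, so the most efficient way to end suffering is to end the sufferers.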
Fusion, the way to immortality
For humans (as for all positive entities), survival of the species is the main objective. Progress (and learning) is measured by how much closer it brings us to that goal. The ultimate aim would be to become immortal. Now, nothing lasts forever, but we could get very close.
Especially once we start to fuse AI, robots and biology. Upgrade your brain, hook yourself up to the internet, make every part replaceable and improvable. Download your personality, memories and thoughts onto digital storage units. Make these downloads shareable. You could live forever.
It will be debatable whether that ‘you’ is really you, but you could debate that even now.
Not that far away
This will sound like far-fetched science fiction to most. But it should be here in about thirty to fifty years’ time, as long as we don’t have a major disaster before then. I might still be alive. The next generation surely will be. It will bring a host of new opportunities and problems. Resources will become ever more valuable, that’s for sure.
Not sure Buddha had robots in mind as a way to Nirvana
Maybe we will get to the point where we truly understand that we, too, have been given some lines of code (in this case by evolution) that trick us into thinking life is dear, that survival matters. That we matter.
Then we’re free to move beyond that. Which would mean becoming neutral. No longer wanting anything. Completely Zen. The end of evolution.