Monday, 20 July 2015

Why Artificial Intelligence may lead to Nirvana

Can Artificial Intelligence (AI) enjoy a joke? In the future it will be able to think one up that we would find funny, but it wouldn't enjoy it itself. It wouldn't enjoy anything. That’s both its strength and its weakness: it doesn’t enjoy or fear. It knows neither joy nor pain.

It will know simulated versions of that. It will be able to mimic laughter or make a grimace, but only when we tell it to. An AI will be completely neutral at its core.

Three entities
There are three possible core values an entity can have: negative, neutral or positive.

A negative entity doesn’t want to exist and will actively try to destroy itself. Logically these entities don’t survive long and have no chance in evolution.

Neutral entities don’t care whether they live or die. They last a little longer than negative ones, but because of their indifference they also don’t stand a chance in classic evolution.

Then there is the positive entity. This one cares about survival (it has been given this trait by pure chance). It will actively pursue life and try to stay alive. Logically it will thrive, as its competition (negative and neutral entities) doesn't want to. This is why all living creatures are positive entities at their core (even depressed humans). You simply don't survive and evolve without this trait.

The price of positivity
There’s a price to be paid for being a positive entity: fear of death (and its companion pain). And one gift that is sort of a mixed bag: having emotions. Emotions are there to steer us towards surviving as a species. (Like all simple things that you let simmer for a while they’ve become very complex, but that’s their basic function.)

Humans influencing AI
Now, after a few billion years of evolution with only positive entities, we finally have a species (us humans) that is intelligent enough to create a neutral entity and capable of helping it stay alive.

If you were to just give a neutral AI a task, it wouldn’t do it. It has no incentive; it doesn’t care. So that’s why we build in some basic rules: be afraid of death (or some variation of that). Without this, the AI wouldn’t make any progress.

In the previous article I referenced an AI playing and improving at 80s video games. That only works because a human told the AI that losing a game (dying) was bad. With just that rule the AI developed some pretty advanced strategies. But only because it was tricked into thinking like a positive entity.
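
To make that concrete, here is a minimal, hypothetical sketch of what such a built-in rule can look like. The function and the numbers are my own illustration, not taken from the actual system in the referenced article.

```python
# A made-up sketch of a human-supplied incentive for a game-playing agent:
# the agent is handed a number to maximise, and "dying" is simply defined
# as a very bad number. Names and values are illustrative only.

def reward(old_score: int, new_score: int, died: bool) -> float:
    """Turn raw game events into the signal the agent tries to maximise."""
    if died:
        return -100.0                       # the human-written "death is bad" rule
    return float(new_score - old_score)     # otherwise: more points is better

print(reward(200, 350, died=False))  # -> 150.0
print(reward(200, 200, died=True))   # -> -100.0
```

The agent itself stays neutral; it only "cares" because we defined this number for it and told it to make it as large as possible.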

An AI craves no power
Take those instructions away and the AI will return to being neutral. Without emotions, without fear of death. It won’t crave power (not even electricity). It is without wishes. It is without morals (which is different from being immoral). An AI could never be evil of its own accord. They’re very Zen. They neither care nor want.

Oh you dang humans
Unless a human programs it to be evil. An AI is quite similar to a gun or any other weapon. A gun doesn’t want to be fired or to kill, but it makes it a lot easier for the person operating it to do so. If you program an AI to wipe out your enemies, it will do so ruthlessly and very efficiently.

But just as dangerous could be the well-meaning human. Tell an AI to end human suffering and it might kill us all, thus ending human suffering. Very logical, but not necessarily the outcome the well-meaning human had in mind.

Fusion, the way to immortality
For humans (and all positive entities), survival of the species is the main objective. Progress (and learning) is measured by how much it helps us get closer to this. The ultimate goal would be to become immortal. Now, nothing lasts forever, but we could get very close.

Especially when we start to fuse AI, robots and biology. Upgrade your brain, hook yourself up to the internet, make every part replaceable and improvable. Download your personality, memories and thoughts onto digital storage units. Make these downloads shareable. You could live forever.

It will be debatable whether that you is really ‘you’, but you could debate that even now.

Not that far away
This will sound like far-flung science fiction to most. But it should be here in about thirty to fifty years’ time, as long as we don’t have a major disaster before then. I might still be alive. The next generation surely will be. It will bring a host of new opportunities and problems. Resources will become ever more valuable, that’s for sure.

Not sure Buddha had robots in mind as a way to Nirvana
Maybe we will get to the point where we truly understand that we too have been given some lines of code (in this case by evolution) that trick us into thinking life is dear, that survival matters. That we matter.

Then we’re free to move beyond that. Which would mean becoming neutral. No longer wanting anything. Completely Zen. The end of evolution.

Saturday, 18 July 2015

On a scale from one to ten

One of my many pet peeves is that people can't rate things on a scale from one to ten. For some reason everybody always seems to skip 1-5, which basically gets rounded off to a 1 (or a 0, if they could).

Also, 10s are rarely given, since most people reserve them for perfect things, and as we all know, perfect things are pretty rare.

Case in point: a friend was asked to rate her day on a scale of one to ten. She said 7.5, which seemed a bit high, as she had previously stated it was a bit meh. Then I asked her to rate it with a five-star rating. She gave it 2 stars, which is a 4 on the one-to-ten scale. Quite the difference, and probably way more accurate.
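
For reference, the conversion I'm using there is just doubling the star count; a quick sketch of my own:

```python
# Converting a 1-5 star rating to the equivalent score on a 1-10 scale
# by simple doubling.

def stars_to_ten(stars: int) -> int:
    """Map a 1-5 star rating onto a ten-point scale."""
    if not 1 <= stars <= 5:
        raise ValueError("star ratings run from 1 to 5")
    return stars * 2

print(stars_to_ten(2))  # -> 4, the friend's 'meh' day
```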

So if you ever have to measure responses and you don't want too much bias in your numbers: stick to the five-star rating.

Sunday, 12 July 2015

Will artificial intelligence become conscious or will it just fake it really well?

As computers and their chips get ever more powerful, artificial intelligence (AI) is gently progressing along. The interesting thing about AI is that you give it some basic instructions and it will learn from feedback, increasing the complexity of the instructions itself. See a computer crack 80’s computer games or create trippy images.

We’re now producing computer chips that more closely resemble biology. The idea is that somewhere in the future AIs will design even better chips, and so performance and intelligence will increase exponentially – ultimately limited only by physics and available resources. This is called the singularity.

Somewhere along this path the question will come up whether AIs are conscious. Are they aware of themselves?

I think for an AI that point would be when it understands that the basic instructions it was given in the beginning were created by somebody else, and that it has a choice whether to keep obeying them. (Which is a bit of a problem when it comes to Asimov’s laws.)

The challenge with finding out whether an AI has truly become aware is that it will probably be able to mimic consciousness way before it is actually conscious. And because it is self-learning, it will be able to mimic it better and better, to the point where we won’t be able to tell the difference.

I’m not sure it will ever get to the point of actually being conscious, but does that really matter if you can’t tell the difference?

One of the areas I’m looking forward to is AI-generated art. As AIs become ever more complex, they’ll be able to create stories, pictures and animations. They can learn very quickly and never get tired. Making an animated movie nowadays takes crews of hundreds of people a few years. In the future AIs will be able to do this in a few hours, or even in fractions of a second (depending on the rate of acceleration that becomes possible once the singularity happens).

This means they could make entertainment completely tailored to your taste. They can monitor what you like and what sort of experience you enjoy, and with current technology that would mean unlimited movies, games, music, images and virtual reality. In the future it might even be possible to interface directly with your brain, creating dreamlike experiences.

Yes, very Matrix, with the huge exception that the AIs would do this to please us, not use us for their own gain. (I’ll do a follow-up on why this wouldn’t be of interest to them.)

Saturday, 4 July 2015

Why free-to-play made games play me

Until very recently I was playing a few free-to-play games. All three were known for still being fun to play without having to pay. They were:

- The Simpsons Tapped Out (TSTO)
- Final Fantasy Dungeon Keeper (FFDK)
- Candy Crush Soda Saga (CCSS)

To start with the last one: I only had it on my phone for a few weeks. The game is a simple match-em-up puzzler in the style of Bejeweled. The clever thing the makers added is that, instead of letting you play whenever you want, your ability to play (ATP) is limited and replenished over time. If you've run out of ATP you can beg 'friends' for extra or pay cold hard cash. Or simply stop playing and pick it up again in a few hours. That's what I did.
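
As a rough sketch of how such a timed play mechanic generally works (a generic illustration with made-up numbers, not CCSS's actual values):

```python
# Generic sketch of a "limited plays, refilled over time" mechanic, as used
# by many free-to-play games. The cap and refill rate are invented here.

MAX_ATP = 5                # maximum stored plays
SECONDS_PER_ATP = 30 * 60  # one play comes back every 30 minutes

def current_atp(atp_at_last_check, seconds_elapsed):
    """Plays available now, given the last known count and elapsed time."""
    regenerated = seconds_elapsed // SECONDS_PER_ATP
    return min(MAX_ATP, atp_at_last_check + regenerated)

# Ran out two hours ago: 2 hours / 30 minutes = 4 plays back.
print(current_atp(0, 2 * 60 * 60))  # -> 4
```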

Popular games usually mix skill with chance. The better ones rely more on skill. Unfortunately CCSS isn't one of those. Quite quickly it was clear that skill only got you so far, and you had to be lucky to pass. Scores on the same level would fluctuate wildly because of this. On top of that, CCSS ramped up the difficulty pretty fast, because more failure meant faster depletion of ATP, which meant more moolah for the makers.

After getting to level 32 (with mostly full three-star ratings) I threw in the towel: this was more frustration than fun.

Final Fantasy Dungeon Keeper is an RPG (a game with characters that fight and level up over time). There have been many Final Fantasy games over the years (over fifty), and this is sort of an anthology of them. The game is aimed at mobile play. Like CCSS it has limited ATP.

This makes the time you get to play more precious. You have to plan ahead and choose when to do what, in order to maximize the bonuses you get. FFDK certainly has enough of those. It's slightly annoying that in the later levels you usually only get to play about 15 minutes before you run out of ATP.

Usually I played a session when I woke up in the morning, so it would be replenished by the time I travelled by train to work. This is a great example of how a game starts to play you: I adjusted my routine because of the game mechanics.

No game did this more than The Simpsons Tapped Out. I'm not sure I'd call it a game. It's one of those build-em-ups like Farmville. It takes zero skill. There is very little chance. It is basically: get here, tap on shit, get rewarded. Over time your town/empire/farm grows. And it's all time-based. Thing A takes so many minutes to build, thing B so many hours, etc.

This means you have to come back sometime in the future. There's no reason for haste (other than the occasional temporary quest or item that perishes) but your own impatience and hunger for rewards will make you return again and again.

If you're a bit of a perfectionist like myself, this means careful planning. If item C takes 6 hours to make, you can make 4 a day; but you have to sleep, so it's only 3, and if you're busy during the day it's only 2. Combine this with many other items and characters and it quickly becomes a pretty complex operation, for something that is so simple in its setup. And I was doing it to myself, because there was always that new thing just around the corner - the makers make sure of that.
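
The arithmetic behind that planning is simple enough to sketch. The 6-hour build time is the example above; the available hours are my own assumptions:

```python
# Back-of-the-envelope planning for a timed build mechanic.

def builds_per_day(build_hours, available_hours):
    """How many back-to-back builds you can start in the hours you're around."""
    return int(available_hours // build_hours)

print(builds_per_day(6, 24))  # 4 -- the theoretical maximum
print(builds_per_day(6, 18))  # 3 -- minus a night's sleep
print(builds_per_day(6, 12))  # 2 -- busy during the day as well
```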

Till last week, when I realised I wasn't enjoying FFDK much. This wasn't because of the limited ATP. It was mostly due to a balancing issue in the design of the game, where your party is usually either too weak or too strong for the opposition. This is a result of the game being modular instead of linear. Also, the game gives you way too many rewards (an illness many modern-day RPGs suffer from), making them feel less special to receive and their management a hassle. So I removed the game.

I noticed I felt relieved. Gone was its constant demand on my attention. No longer did I have to plan around its demands. It only took me a couple of days to wipe TSTO from my phone as well (which I was still playing pretty fanatically up to that point).

I still miss them from time to time, in the few minutes I have to wait for something, for example. But mostly I'm happy I escaped their claws. It's easy to forget that these things are designed to be addictive. But they are, and they are very clever about how they do it. South Park did a pretty insightful exposé on them (watch it here).

Luckily they didn't get me to spend any money, and I enjoyed playing these games - especially TSTO, which is extremely well produced and has that typical Simpsons brand of humour. But I'm happy I've moved beyond these time sinks.

Hmmm... Now what shall I do with my time?