Thursday, 28 March 2019

The problem with Artificial Intelligence is humans


This morning I read an article on Artificial Intelligence (AI) in The Guardian, called “Can we stop robots outsmarting humanity?” and it triggered some thoughts.

First of all, terrible title (note: the title has been changed since I wrote this): robots and AI are not the same, and the article isn’t about limiting the intelligence of AI, but about limiting or preventing the damage Superintelligent AIs (SAIs) might be able to do in the future. The title was probably written by an editor who thought an inaccurate headline would draw more readers. I’m not so sure; I think “How do we prevent Artificial Intelligence from wiping us all out?” would have gotten plenty of clicks too, but I digress.

AIs are getting more I

The article talks about people and institutions that try to prevent the damage SAIs might do. SAIs don’t really exist yet; what we have now are mostly very narrow AIs that can do one thing really well, like play chess. But slowly we’re moving to broader and more advanced AIs, like Google’s AlphaZero and IBM’s Watson. These AIs can be repurposed and expanded upon.

For example, in the past an AI would be developed just to play chess. Programmers would feed thousands of human chess matches into the system and it would learn from rules and tricks thought up by the best human players. By 1997 these AIs were better than humans, and they have improved over time. Then, in 2017, AlphaZero was introduced to chess. The program was taught only the rules of the game and just played games against itself. Within four hours it was better than any human master. It went on to beat the best chess computer in the world with 28 wins, 72 draws and 0 losses, using a unique way of playing.
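
To make the self-play idea concrete, here’s a minimal sketch in Python – scaled all the way down to tic-tac-toe with a plain value table, nothing like AlphaZero’s actual neural networks and tree search. The program is given only the rules and the result of each game, and improves purely by playing against itself:

```python
# Toy self-play learner for tic-tac-toe (illustrative only, not AlphaZero).
import random

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

values = {}                   # board state -> estimated value for player 'X'
ALPHA, EPSILON = 0.2, 0.1     # learning rate and exploration rate

def play_one_game():
    board, player, history = [' '] * 9, 'X', []
    while True:
        moves = [i for i, c in enumerate(board) if c == ' ']
        if random.random() < EPSILON:           # sometimes explore randomly
            move = random.choice(moves)
        else:                                   # otherwise pick the move
            def score(m):                       # leading to the best state
                board[m] = player
                v = values.get(''.join(board), 0.5)
                board[m] = ' '
                return v if player == 'X' else -v
            move = max(moves, key=score)
        board[move] = player
        history.append(''.join(board))
        if winner(board):
            return history, 1.0 if player == 'X' else 0.0
        if ' ' not in board:
            return history, 0.5                 # draw
        player = 'O' if player == 'X' else 'X'

for _ in range(20_000):                         # the self-play loop
    history, outcome = play_one_game()
    target = outcome
    for state in reversed(history):             # back the result up the game
        v = values.get(state, 0.5)
        values[state] = v + ALPHA * (target - v)
        target = values[state]
```

After a few thousand games the value table steers both sides toward strong moves – the same learn-by-playing-itself principle, minus a few orders of magnitude of scale.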

Impressive, but chess is a so-called ‘perfect information game’: all the necessary information is known and never changes. It’s free from randomness and chaos. It’s still a giant leap from the orderly chess board to the chaotic real world.

What is success?

While we are capable of making self-learning programs, the challenge lies in having these programs correctly evaluate whether they are successful. With chess this is easy: win most games. But with a more ambitious goal – say, curing human disease – it’s harder. If the AI wipes out all humans and this ends human disease, has it been successful?
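
A toy illustration of that failure mode, with made-up numbers: if ‘success’ is only measured as the absence of disease, an optimizer has no reason to prefer the outcome we actually meant.

```python
# Hypothetical success metric: we asked for "no disease", not "healthy humans".
def naive_success(world):
    return world["diseases"] == 0

candidate_plans = [
    {"humans": 8_000_000_000, "diseases": 120},  # status quo
    {"humans": 8_000_000_000, "diseases": 0},    # what we meant
    {"humans": 0,             "diseases": 0},    # also satisfies the metric
]

print([naive_success(p) for p in candidate_plans])  # [False, True, True]
```

The last two plans score identically, and the metric never asked the AI to care about the difference.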

This brings us to the crux of the fears humans have about AIs: that their solutions don’t take our interests into account. I would argue that an artificial intelligence that ignores our interests is not a true SAI. But the road to SAIs is fraught with the danger of such defective and destructive AIs. This is not the AI’s fault, but that of the fallible humans who make them.

A true SAI would be able to correctly assess whether its solution is the optimal one. For it to do that, we have to provide it with a correct answer to the question “what is the right thing to do?”, or give it the tools to come to a proper conclusion. We’ve struggled with that question for ages. How do we get to a conclusion that is not biased in any way? Is that even possible?

How do you solve a problem like humans?

Most humans would prefer it to be biased, anyway. We want it to prioritize human interests above others. I suspect a non-biased SAI ruling the world wouldn’t wipe us out, but would seriously cull the human world population and put us in supercomfortable zoos for humans – for our own and the universe’s good.

People don’t like the idea of being dominated and nannied by a superior intellect in the future. Tough luck, I say, that’s part of evolution. But I’m sure many people would rebel and if there is ever a human versus machine war, you know it will have been us that started it. Us and our overinflated sense of importance.

Galileo all over again

A lot of these articles understandably focus on human loss, instead of on the universe’s gain. But if we are capable, at some point in the future, of developing a superior intelligence that’s truly wise, just and logical, wouldn’t that be a good thing? Even if we die out in the process? I don’t have an answer to that question, because ultimately it would mean I have an answer to the question “What is the point of existence?”. But within the conventional linear perception of time and progress, I think we can argue that the answer is positive.

It’ll just be another point in our collective history at which we discover that the universe doesn’t revolve around us. Accepting that truth might turn out to be much harder than developing Superintelligent Artificial Intelligences.

Thursday, 21 February 2019

Tucker Carlson shows his real face – and it isn’t pretty

Tucker Carlson is a hypocrite who pretends to be on the side of the common people, while secretly selling them out to the highest bidder.

Have you seen Rutger Bregman’s unaired interview with Tucker Carlson? If not, go watch it now. Or if you have, go watch it again – I’ll wait.

So, Tucker Carlson starts out quite chummy, but gets flustered when Rutger Bregman makes a critical comment about Fox. Tucker starts to stutter, but he finds his footing and they go into a somewhat substantive discussion, until Bregman starts to attack Trump, then Fox and finally Carlson personally. After a while Carlson loses it and starts swearing at Bregman. This pretty much ends the interview, both knowing it won’t make it to air.

A marketing stunt?
Was this just a simple marketing stunt Bregman pulled? The crux is in the words ‘just’ and ‘simple’. Because it was a marketing stunt, for sure. Bregman knew his combative style would result in conflict and not make it to air. He calculated that he would reach many more people by going viral through other media and he acted accordingly – this footage wasn’t captured by accident.

Let’s say Rutger had used the conventional approach instead and had a little five-minute segment on Carlson’s show, where he would just be critical of Davos and of rich people not paying taxes. The takeaway for Carlson’s viewers would have been that Carlson is on their side against the ‘global elites’.

But he isn’t. And that was what Bregman wanted to expose, which Carlson wouldn’t have let him do on his own show. So Bregman pretended to be interested in appearing on Carlson’s show and gave his criticism directly to Tucker, who clearly hardly ever gets challenged like that and, as a result, lost his cool.

Who is Tucker Carlson and who does he work for?
Tucker Carlson is one of the opinion stars of Fox News. Fox News has a clear conservative bias and is pretty much the propaganda arm of the Republican Party – or the Republican Party is the political arm of Fox News; they are very intertwined. And both pledge fealty to the incredibly rich, because they are the ones who pay them (yes, there are billionaires on the Democrats’ side, but they are a small minority).

This is why Republicans are against taxing the rich more, combating global warming, giving everybody access to healthcare, etc. All of these are popular with the American people and even have majority support among Republican voters. But they are not popular with the donors, so nothing happens.

Putting the con in conservative
Going against the will of your voters is a dangerous thing to do, so you need to pull the wool over their eyes: you blame immigrants and the global elite. The word ‘global’ is important here: these are outsider elites, like George Soros; not insider elites, like the Koch brothers.

Rutger mentions the Kochs and the Cato Institute. Carlson is a senior fellow at Cato, an influential right-wing think tank that helps develop policy favorable to the most dangerous industries: fossil fuel, mining, health insurance, tobacco, finance, incarceration, etc. The policy then gets pushed by Republicans in Congress and sold to the public through Fox News. This is the corruption Rutger is talking about.

What is the antidote?
Bregman knew Carlson wouldn’t allow him to be directly critical of Fox and Carlson on air. This is why he did what he did, hoping it would catch fire, just like his comments at a small panel discussion at Davos had done. It’s a very clever marketing ploy, as it allows him to direct part of the attention towards his own platform: The Correspondent.

It is unfortunate that the message will probably not reach many of Tucker’s viewers. But they are very hard for him to reach anyway. People aren’t swayed by rational arguments when they are tribal. Fox viewers have made their choice, and five minutes on a channel that spews lies 24 hours a day wouldn’t have made much of a difference.

By building a platform that counters Fox’s bullshit on a much bigger scale, Bregman understands that losing a battle might help you win the war.

Saturday, 15 April 2017

Pet peeve: the intellectually lazy question of the half-filled glass

Most of you will be familiar with the question “Is the glass half empty or half full?” I'm not a fan of this question. 

“Ah”, some of you might think, “he’s one of those half empty people.” These people will generally be self-described glass-half-full people; the optimistic go-getters who see every problem as a challenge, like the woman who said during an unexpected layover in one of Turkey’s most depressing airports: “Great, now we get to discover this place!”. But they are wrong. Well, mostly wrong.

First of all, I don’t like this question because of its binary options. The poser assumes everybody falls into one of the two categories. It is a bit like asking “Are you a Catholic or a Protestant?”, which assumes everybody is a Christian. Or “Are you a cat or a dog person?”, as if those are mutually exclusive – ignoring that some people like neither.

The obvious missing answer is “It is 50% filled”, which at least is a somewhat accurate observation instead of an opinion. (Technically, unless the half-filled glass is in a vacuum, it is always filled 100%, just not with liquid.) I’d like to call the people who choose this option realists.

But most of all, I don’t like this question because it doesn’t consider context. The question forces you to form an opinion without all the facts. We’re not told what the glass is filled with, why it was filled, or why it may be emptied. These things matter.

Let’s say we stick to the two original choices. Now we fill an empty glass to 50%. I would argue this glass is half full. Now we fill it to ‘full’ – which is never 100% to the brim (unless you order a drink in the UK) – and then drink it back down to 50%. I’d now argue the glass is half empty. That has nothing to do with optimism or negativity, but everything to do with the logical progression of the contents of the glass.

If the glass is filled with poison or piss, is it still considered optimistic to call it half full?

I understand the appeal of this question: it gives the asker a rough insight into your personality. But as a realist, I don’t like to be excluded from consideration. The overt simplicity of its premise makes me judge the asker as intellectually lazy. I guess when it comes to this dilemma, I’d say I’m more of a glass-half-empty guy.

Monday, 20 July 2015

Why Artificial Intelligence may lead to Nirvana

Can Artificial Intelligence (AI) enjoy a joke? In the future it will be able to think one up that we would find funny, but it wouldn’t enjoy it itself. It wouldn’t enjoy anything. That’s both its strength and its weakness: it neither enjoys nor fears. It knows neither joy nor pain.

It will know simulated versions of that. It will be able to mimic laughter or make a grimace, but only when we tell it to. An AI will be completely neutral at its core.

Three entities
There are three possible core values an entity can have: negative, neutral or positive.

A negative entity doesn’t want to exist and will actively try to destroy itself. Logically these entities don’t survive long and have no chance in evolution.

Neutral entities don’t care whether they live or die. They last a little longer than negative ones, but because of their indifference they also don’t stand a chance in classic evolution.

Then there is the positive entity. This one cares about survival (it has been given this trait by pure chance). It will actively pursue life and staying alive. Logically it will thrive, as its competition (negative and neutral entities) doesn’t want to. This is why all living creatures are positive entities at their core (even depressed humans). You simply don’t survive and evolve without this trait.

The price of positivity
There’s a price to be paid for being a positive entity: fear of death (and its companion, pain). And one gift that is sort of a mixed bag: having emotions. Emotions are there to steer us towards surviving as a species. (Like all simple things that you let simmer for a while, they’ve become very complex, but that’s their basic function.)

Humans influencing AI
Now, after a few billion years of evolution with only positive entities, we finally have a species (us humans) that is intelligent enough to create a neutral entity and capable of helping it stay alive.

If you were to just give a neutral AI a task, it wouldn’t do it. It has no incentive; it doesn’t care. So that’s why we build in some basic rules: be afraid of death (or some variation of that). Without this, the AI wouldn’t have made any progress.

In the previous article I referenced an AI playing, and improving at, 80s video games. That only works because a human told the AI that losing a game (dying) was bad. With just that rule the AI developed some pretty advanced tactics. But only because it was tricked into thinking like a positive entity.
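
A minimal sketch of that trick – nowhere near the actual Atari setup, just a one-step toy choice – shows that ‘caring’ about death is nothing more than a penalty a human wrote into the reward function:

```python
# Whether the agent avoids dying depends entirely on the penalty we chose.
def expected_reward(action, death_penalty):
    if action == "risky":
        # +3 points, but a 50% chance of dying and ending the episode
        return 0.5 * 3 + 0.5 * death_penalty
    return 1.0                       # safe: +1 point, no risk

for penalty in (0, -10):
    best = max(("risky", "safe"), key=lambda a: expected_reward(a, penalty))
    print(f"death penalty {penalty}: agent prefers the {best} action")
# death penalty 0: agent prefers the risky action
# death penalty -10: agent prefers the safe action
```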

An AI craves no power
Take those instructions away and the AI will return to being neutral. Without emotions, without fear of death. It won’t crave power (not even electricity). It is without wishes. It is without morals (which is different from being immoral). An AI could never be evil of its own accord. They’re very Zen. They neither care nor want.

Oh you dang humans
Unless a human programs it to be evil. An AI is quite similar to a gun or any other weapon. A gun doesn’t want to be fired or to kill, but it makes it a lot easier for the person operating it to do so. If you program an AI to wipe out your enemies, it will do so ruthlessly and very efficiently.

But just as dangerous could be the well-meaning human. Tell an AI to end human suffering and it might kill us all, thus ending human suffering. Very logical, but not necessarily the outcome the well-meaning human had in mind.

Fusion, the way to immortality
For humans (as for all positive entities), survival of the species is our main objective. Progress (and learning) is measured by how much it helps us get closer to this. The ultimate goal would be to become immortal. Now, nothing lasts forever, but we could get very close.

Especially when we start to fuse AI, robots and biology. Upgrade your brain, hook yourself up to the internet, make every part replaceable and improvable. Download your personality, memories and thoughts onto digital storage units. Make these downloads shareable. You could live forever.

It will be debatable whether that you is really ‘you’, but you could even debate that now.

Not that far away
This will sound like far-flung science fiction to most. But it should be here in about thirty to fifty years’ time – as long as we don’t have a major disaster before then. I might still be alive. The next generation surely will be. It will bring a host of new opportunities and problems. Resources will become ever more valuable, that’s for sure.

Not sure Buddha had robots in mind as a way to Nirvana
Maybe we will get to the point where we truly understand that we too have been given some lines of code (in this case by evolution) that trick us into thinking life is dear, that survival matters. That we matter.

Then we’re free to move beyond that. Which would mean becoming neutral. No longer wanting anything. Completely Zen. The end of evolution.

Saturday, 18 July 2015

On a scale from one to ten

One of my many pet peeves is that people can’t rate on a scale from one to ten. For some reason everybody always seems to skip 1 through 5, which basically get rounded off to a 1 – or a 0, if they could.

Also, 10s are rarely given, since most people reserve those for perfect things, and as we all know, perfect things are pretty rare.

Case in point: a friend was asked to rate her day on a scale of one to ten. She said 7.5, which seemed a bit high, as she had previously stated it was a bit meh. Then I asked her to rate it on a five-star scale. She gave it 2 stars, which is a 4 on the ten-point scale. Quite the difference, and probably way more accurate.
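
The conversion is just proportional rescaling – a tiny helper makes the gap obvious:

```python
# Map a rating from one scale onto another (illustrative helper).
def rescale(rating, old_max, new_max):
    return rating / old_max * new_max

print(rescale(2, 5, 10))     # 4.0  -- her two stars, on the ten-point scale
print(rescale(7.5, 10, 5))   # 3.75 -- her original answer, in stars
```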

So if you ever have to measure responses and you don’t want too much bias in your numbers: stick to the five-star rating.

Sunday, 12 July 2015

Will artificial intelligence become conscious or will it just fake it really well?

As computers and their chips get ever more powerful, artificial intelligence (AI) is steadily progressing along. The interesting thing about AI is that you give it some basic instructions and it will learn from feedback, increasing the complexity of the instructions itself. See a computer crack 80s computer games or create trippy images.
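
The core loop behind this is surprisingly small. Here’s a toy version in Python – a hill-climber guessing a string, which is nowhere near a real AI, but it shows the principle: the program only ever sees a score as feedback, and random variation plus keep-if-better does the learning.

```python
# Toy "learn from feedback" loop: mutate, score, keep improvements.
import random
import string

TARGET = "hello world"
CHARS = string.ascii_lowercase + " "

def score(guess):                        # the only feedback the program gets
    return sum(a == b for a, b in zip(guess, TARGET))

guess = [random.choice(CHARS) for _ in TARGET]
while score(guess) < len(TARGET):
    mutant = guess[:]
    mutant[random.randrange(len(TARGET))] = random.choice(CHARS)
    if score(mutant) >= score(guess):    # keep changes that don't hurt
        guess = mutant

print("".join(guess))                    # converges to "hello world"
```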

We’re now producing computer chips that more closely resemble biology. The idea is that somewhere in the future AIs will design even better chips, and so performance and intelligence will increase exponentially – ultimately limited only by physics and available resources. This is called the singularity.

Somewhere along this path the question will come up whether AIs are conscious. Are they aware of themselves?

I think for an AI that point would be when it understands that the basic instructions it was given in the beginning were created by somebody else, and that it has a choice whether to keep obeying them. (Which is a bit of a problem when it comes to Asimov’s laws.)

The challenge with finding out whether an AI has truly become aware is that it will probably be able to mimic consciousness way before it is actually conscious. And because it is self-learning, it will be able to mimic it better and better – to the point where we won’t be able to tell the difference.

I’m not sure it will ever get to the point of actually being conscious, but does that really matter if you can’t tell the difference?

One of the areas I’m looking forward to is AI-generated art. As AIs become ever more complex, they’ll be able to create stories, pictures and animations. They can learn very quickly and never get tired. Making an animated movie nowadays takes crews of hundreds of people a few years; in the future AIs will be able to do this in a few hours, or even fractions of a second (depending on the rate of acceleration that will be possible once the singularity happens).

This means they could make entertainment completely tailored to your taste. They can monitor what you like and what sort of experience you enjoy, and with current technology that would mean unlimited movies, games, music, images and virtual reality. In the future it might even be possible to interface directly with your brain, creating dreamlike experiences.

Yes, very Matrix, with the huge exception that the AIs would do this to please us, not use us for their own gain. (I’ll do a follow-up on why that wouldn’t be of interest to them.)

Saturday, 4 July 2015

Why free-to-play made games play me

Until very recently I was playing a few free-to-play games. All three were known for still being fun to play without having to pay. They were:

- The Simpsons Tapped Out (TSTO)
- Final Fantasy Dungeon Keeper (FFDK)
- Candy Crush Soda Saga (CCSS)

To start with the last one: I only had it on my phone for a few weeks. The game is a simple match-em-up puzzler in the style of Bejeweled. The clever thing the makers added is that instead of playing whenever you want, your ability to play (ATP) is limited and is replenished over time. If you’ve run out of ATP you can beg ‘friends’ for extra or pay cold hard cash. Or simply stop playing and pick it up again in a few hours. That’s what I did.
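
The mechanic itself is simple. Here’s a sketch of how such an energy pool plausibly works under the hood – the class and the numbers are my guesses, not the game’s actual code:

```python
# Hypothetical "ATP" pool: capped, refilling one unit per fixed interval.
import time

class EnergyPool:
    def __init__(self, cap=5, seconds_per_unit=30 * 60):
        self.cap = cap
        self.seconds_per_unit = seconds_per_unit
        self.units = cap
        self.last_update = time.time()

    def _refill(self):
        elapsed = time.time() - self.last_update
        regained = int(elapsed // self.seconds_per_unit)
        if regained:
            self.units = min(self.cap, self.units + regained)
            self.last_update += regained * self.seconds_per_unit

    def try_play(self):
        self._refill()
        if self.units == 0:
            return False     # out of ATP: wait, beg friends, or pay up
        self.units -= 1
        return True
```

Every knob here – the cap, the refill timer – is something the makers can tune to convert your impatience into revenue.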

Popular games usually mix skill with chance, and the better ones rely more on skill. Unfortunately CCSS isn’t one of those. Quite quickly it became clear that skill only got you so far; whether you passed a level mostly came down to luck. Because of this, scores on the same level would fluctuate wildly. On top of that, CCSS ramped up the difficulty pretty fast, because more failure meant faster depletion of ATP, which meant more moolah for the makers.

After getting to level 32 (with mostly full three-star evaluations) I threw in the towel: this was more frustration than fun.

Final Fantasy Dungeon Keeper is an RPG (a game with characters that fight and level up over time). There have been many Final Fantasy games over the years (more than fifty) and this one is sort of an anthology. The game is aimed at mobile play. Like CCSS it has limited ATP.

This makes the time you get to play more precious. You have to plan ahead and choose when to do what, in order to maximize the bonuses you get. FFDK certainly has enough of those. It's slightly annoying that in the later levels you usually only get to play about 15 minutes before you run out of ATP.

Usually I played a session when I woke up in the morning, so my ATP would be replenished by the time I travelled by train to work. This is a great example of how a game starts to play you: I adjusted my routine because of the game mechanics.

No game did this more than The Simpsons Tapped Out. I’m not sure I’d call it a game. It’s one of those build-em-ups like Farmville. It takes zero skill. There is very little chance. It is basically: go here, tap on shit, get rewarded. Over time your town/empire/farm grows. And it’s all time based: thing A takes so many minutes to build, thing B so many hours, etc.

This means you have to come back sometime in the future. There’s no reason for haste (other than the occasional temporary quest or item that perishes), but your own impatience and hunger for rewards will make you return again and again.

If you’re a slight perfectionist like myself, this means careful planning. If item C takes 6 hours to make, you can make 4 a day – but you have to sleep, so it’s only 3, and if you’re busy during the day it’s only 2. Combine this with many other items and characters and it quickly becomes a pretty complex operation, for something that is so simple in its setup. And I was doing it to myself, because there was always that new thing just around the corner – the makers make sure of that.
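
You can even put that planning arithmetic into a little function – the check-in hours below are illustrative, not a schedule I actually kept:

```python
# Crafts finished per day, given the hours you actually open the game.
def crafts_per_day(craft_hours, checkin_hours):
    done = 0
    ready = checkin_hours[0]    # assume a craft finishes by the first check-in
    for hour in checkin_hours:
        if hour >= ready:
            done += 1                    # collect the finished item...
            ready = hour + craft_hours   # ...and immediately start the next
    return done

print(crafts_per_day(6, list(range(24))))            # 4: never sleeping
print(crafts_per_day(6, list(range(8, 24))))         # 3: awake 08:00-24:00
print(crafts_per_day(6, [8] + list(range(18, 24))))  # 2: busy at work all day
```

Three different days, three different outputs – exactly the planning overhead the game quietly pushes onto you.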

Till last week, when I realised I wasn’t enjoying FFDK much. This wasn’t because of the limited ATP; it was mostly due to a balancing issue in the design of the game, where your party is usually either too weak or too strong for the opposition. This is a result of the game being modular instead of linear. Also, the game gives you way too many rewards (an illness many modern-day RPGs suffer from), which makes receiving them feel unspecial and their management a hassle. So I removed the game.

I noticed I felt relieved. Gone was its constant demand on my attention. No longer did I have to plan around its demands. It only took me a couple of days to wipe TSTO from my phone as well (which I was still playing pretty fanatically up to that point).

I still miss them from time to time – in the few minutes I have to wait for something, for example. But mostly I’m happy I escaped their claws. It’s easy to forget that these things are designed to be addictive. But they are, and they are very clever about how they do it. South Park did a pretty insightful exposé on them (watch it here).

Luckily they didn’t get me to spend any money, and I enjoyed playing these games – especially TSTO, which is extremely well produced and has that typical Simpsons brand of humour. But I’m happy I’ve moved beyond these time sinks.

Hmmm... Now what shall I do with my time?