Is a self-aware robot like Chappie possible? Yes, and soon, says scientist. (Full interview)


I was fortunate enough to be invited to a screening and press conference for the movie Chappie. It is a great movie about a police robot named Chappie who is loaded with a program that makes him self-aware.

At the press conference, Neill Blomkamp, the film’s director, screenwriter, and the originator of its idea, said he wasn’t sure whether humanity could ever produce the type of artificial intelligence (AI) seen in Chappie. I was curious about that myself.

To answer that question, I interviewed the founder and director of the Visual and Autonomous Exploration Systems Research Laboratory at Caltech and the University of Arizona, Dr. Wolfgang Fink. He is working on building an autonomous “planetary field geologist” robot for NASA.

 


How would you define an autonomous exploration system?

A truly autonomous exploration system is not controlled at all by humans. It may be given a charter, for example, “Go out there and explore a planet and then report back once you see something exciting.” At that point, the system not only needs mobility and sensor capabilities, but also the capability to digest the sensor data and, by itself, reason over that data to come up with areas or objects of interest. Once it figures out what and where the regions of interest are, it would then move itself into those locations for close follow-up examinations. That is how it would go about exploring another planet or another environment.

For example, take a planetary lander like the Phoenix mission on Mars. The fact that it deploys a robotic arm to an object on the ground and does an examination is automation in robotics. The intent to deploy the arm to that particular object is autonomy. In other words, that intent, so far, has always come from humans. Humans always tell the robots what to do, and that is the part where my lab tries to make a difference, so that the robot or the system comes up with the decision of where to go next and what to explore.
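
To make the automation/autonomy split concrete, here is a minimal Python sketch of such a sense-reason-act loop. The region names and interest scores are hypothetical placeholders, not anything from Dr. Fink’s lab:

```python
# Minimal sketch of a sense-reason-act loop, separating automation
# (executing the examination) from autonomy (forming the intent).
# All region names and scores are hypothetical placeholders.

def sense() -> dict[str, float]:
    # Digest sensor data into an interest score per visible region.
    return {"rock_field": 0.2, "layered_outcrop": 0.9, "dust_plain": 0.1}

def choose_target(scores: dict[str, float]) -> str:
    # Autonomy: the system itself forms the intent, deciding what to examine.
    return max(scores, key=scores.get)

def examine(region: str) -> None:
    # Automation: carrying out the scripted deployment once the intent exists.
    print(f"Driving to {region} and deploying the robotic arm...")

examine(choose_target(sense()))  # no human had to pick the layered outcrop
```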

So, coming up with the decision, or the reasoning aspect, would you call that artificial intelligence?

I would not, and that is very important. The term artificial intelligence has referred, over the last several decades, to largely rule-based systems, so-called Mamdani-type rules. What that means is: if you encounter a certain type of situation, then react a certain way. If you encounter another situation, react another way. So if you have thousands of these rules in a system, then it looks from the outside as if the system were intelligent. The problem with such an approach is that, while it is applicable and sufficient for certain tasks, especially tasks where we know everything that could happen and there are no surprises, it does not lend itself to surprises or a heavily changing, dynamic environment.

In other words, if you encounter a situation for which you do not have a rule, you do not know how to react to it. So then the system falls flat.
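
As a minimal sketch of what such a Mamdani-style rule-based system looks like, and how it falls flat on a surprise, here is a toy example; the situations and actions are hypothetical, not drawn from any real system:

```python
# Toy Mamdani-style rule-based controller and its failure mode:
# every behavior is a hand-written situation -> action rule.

RULES = {
    "illegally_parked_car": "issue_ticket",
    "suspect_draws_gun": "draw_gun",
    # ... in practice, thousands of hand-written rules ...
}

def react(situation: str) -> str:
    # Look up the scripted response for a known situation.
    if situation not in RULES:
        # No matching rule: the system "falls flat" on a genuine surprise.
        raise LookupError(f"no rule for {situation!r}")
    return RULES[situation]

print(react("illegally_parked_car"))       # -> issue_ticket
try:
    react("child_runs_into_street")        # a surprise the rule book never anticipated
except LookupError as err:
    print(err)
```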

So in this case with Chappie, for example, in the movie, you now have a system that has been instilled with the capability of modifying itself, modifying its thinking, learning over time, and being influenced by its environment. That is then the mark of an autonomous system, because at this point you cannot really predict anymore what trajectory or development the system will take in the future.

Let’s say you had two identical Chappies, which would not be a problem. You give them the same outfit, the same program, and everything. However, if you put them in two different environments, they will develop completely differently from each other.
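
A toy illustration of that divergence, assuming a crude online-learning rule; nothing here models Chappie’s actual software:

```python
# Two agents with identical starting "programs" diverge once the
# environment is allowed to rewrite their own parameters.

class Agent:
    def __init__(self) -> None:
        self.aggression = 0.5  # both copies start from the identical program

    def experience(self, event: str) -> None:
        # The environment modifies the agent's own parameters over time.
        if event == "hostile":
            self.aggression += 0.1
        elif event == "nurturing":
            self.aggression -= 0.1

chappie_a, chappie_b = Agent(), Agent()
for _ in range(3):
    chappie_a.experience("hostile")
    chappie_b.experience("nurturing")

# Same starting point, different environments, different outcomes (~0.8 vs ~0.2).
print(chappie_a.aggression, chappie_b.aggression)
```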

So you are saying that Chappie is more along the lines of autonomous reasoning rather than artificial intelligence.

Yes. We have a nice comparison with the police robots, and Chappie was one of those. The police robots, while they are highly automated, and many people would also call them autonomous, are more governed by an artificial intelligence, because they have the law-and-order book internalized. They know that if someone is wrongly parked, they give them a ticket; if someone pulls a gun, they pull a gun. So they work according to certain schemes and a certain set of actions.

However, Chappie, in severe contrast to this, is not governed by the artificial intelligence rules. He is basically developing over time, being taught like a child, and actually developing a personality. The culmination is that, at some point, he says, “I am Chappie.” That is a huge statement, because it means that at that point he becomes self-aware. That is sort of the highest order and the true mark of an autonomous system: to be self-aware, to have self/non-self discrimination. That is the core of an autonomous system.

So you are working on autonomous scientists.

Right.

Do you believe it is possible to have a safe autonomous peacekeeper?

That is a loaded question. For certain tasks, let’s say patrolling, maybe containing situations, you may very well have these types of systems. As far as situational awareness, and the use of judgment… Let’s say you have civilians involved and you have to make a judgment. You cannot go according to a script anymore. That is where you have to be situation-aware and self-aware in order to make appropriate and adequate decisions, which is different from a rule-based or scripted system, which may react the wrong way. It may follow through on a script no matter what and put children and other people in harm’s way. Whereas a situational system, which would be an autonomous system, would have to weigh the pros and cons and may come up with a vastly different decision.

Once you have a self-aware and situation-aware system, as I said before, you lose control over that system, and that is why such a system would not lend itself to these law-enforcement types of operations, because you still need to have some control from the human side. Said another way, if you can still know how a system came up with a certain course of action, based on an algorithm, let’s say, then you are still in control of that system even if it is weaponized. If, however, you no longer have a way of figuring out how the system makes its decisions, because it has evolved by self-modifying, then at that point you have lost control of that system, and then you have something that may potentially turn against you. Not necessarily out of ill intent, but it may not have ethical or moral values. It may simply think you are obsolete or that there is no need for you to be there.

Since you are working on autonomous systems, do you believe that one of your systems could become self aware?

We are not at that point. What we are working towards is the aspect of a system making its own decision about where to go next, what to explore next, and what to find interesting. In other words, we are mimicking, if you will, a planetary field geologist who looks at the evidence in front of them. Based on the evidence, they form a working hypothesis, and corroborating that working hypothesis triggers certain additional steps, which then have to be taken by the system to corroborate the hypothesis and to explore. So basically, what we are trying to do, if you want to sum it up in lay terms, is instill the quality of curiosity into a robotic system. That is not at the level of self-awareness.
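
Here is a minimal sketch of that hypothesis-driven curiosity loop, with a made-up anomaly score and toy observations; it illustrates the idea only, not Dr. Fink’s actual planetary-field-geologist software:

```python
# Toy "curious" explorer: score surprise, form a working hypothesis,
# then trigger follow-up steps to corroborate it.

from dataclasses import dataclass

@dataclass
class Observation:
    site: str
    mineral_signal: float  # hypothetical spectrometer reading

def anomaly(obs: Observation, baseline: float) -> float:
    # Score how surprising an observation is relative to expectations.
    return abs(obs.mineral_signal - baseline)

def explore(observations: list[Observation], baseline: float = 1.0):
    # 1. Look at the evidence and pick the most interesting site.
    target = max(observations, key=lambda o: anomaly(o, baseline))
    # 2. Form a working hypothesis about it.
    hypothesis = f"{target.site} shows unusual mineralogy"
    # 3. Trigger the follow-up steps needed to corroborate the hypothesis.
    steps = [f"move to {target.site}", "take close-up measurements", "re-test hypothesis"]
    return hypothesis, steps

obs = [Observation("crater_rim", 1.1), Observation("dark_outcrop", 3.7)]
print(explore(obs))  # the system itself decides the dark outcrop deserves a closer look
```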

Do you think self awareness is possible?

Yes, I believe it is possible, but not with the current AI approaches. It would have to be vastly different from that.

What are your feelings about the technological singularity? Some people believe we are within 20 years of reaching it.

The singularity would basically be a breakthrough synthetic system, if you will. Do you mean in terms of whether it is a threat to humanity?

Some people predict AI will exceed human capacity within 20 years. Do you think we are within that time frame, and should we worry about that?

It is a complex question. Let me try to break it down to some extent. There are systems which are faster than humans. They can go into places we cannot go, radiation environments, space, and so forth. That is all hardware. That can happen and happens already.

There are systems that can calculate quicker than humans. That is a given too.

We have systems that play chess better than humans, and that is only a number-crunching exercise. Not to diminish those systems, but just to bring it down to the basics.

So, while that is impressive, it is not a threat to humanity. Where it becomes dicey is when you get a system that can move about and can take action, weaponized or not, and is able to react to its environment based on a non-deterministic algorithm. Meaning, it is not scripted. You cannot predict how the system is going to react. If you have such a system, which I think may well happen in our lifetime, then, yes, it is something that is a threat to humanity. Especially if you can’t get control of it.

Of course, I need to say a few more things to that effect. It is important that the system can move about. Obviously, if it were just sitting in your laptop and were intelligent, that would not do anything. At the same time, you get other questions. Can it somehow replicate itself? That may be another quality that comes about. Or it may have access to weapons or other means that are a threat to humanity, and you cannot figure out anymore how the system thinks. Intelligent thinking does not need to occur the way we think in order to be intelligent. Many people go down the road of trying to mimic the human brain. They try to figure out how the brain is anatomically wired and they try to replicate that. Personally, I do not necessarily subscribe to this approach. A system can look vastly different from a human brain and still exhibit all of the reasoning qualities of a human being, yet be completely foreign. So it doesn’t have to look like a human, it doesn’t have to think like a human, and it doesn’t have to have a brain like a human, yet it has its own course of action and its own reasoning, which we may or may not be able to figure out.

An abbreviated version of this interview can be seen on The Huffington Post.

The following video was made by Neill Blomkamp several years ago. You can see the similarities. I remember watching it then and being blown away by the CGI.
