
Thinking about robots thinking

By Sarah E M Wiseman, on 15 June 2012

Robot image from Mark Strozier

“How do we know we are even thinking?” The debate was getting existential. On the panel, we had heard from two roboticists, Alan Winfield and Murray Shanahan, and a legal expert, Lilian Edwards. I could tell that the talk “Can Robots think? Or at least pretend to?” was going to be interesting.

The debate began with Murray Shanahan outlining the possible ways we might try to get robots thinking. At first it might seem that the only way to do it would be to mimic the human brain, either by modelling it approximately or by going so far as to create an entire artificial human brain. Given the limits of today’s technology, that is quite a way off, but creating an artificial mouse brain in the near future is a real possibility.

We don’t have to go down the route of mimicking the way humans think, though. We might instead approach the problem from a completely different viewpoint and “re-invent” thinking, in much the same way that the Wright brothers’ plane doesn’t exactly copy how birds fly. Can you imagine flying off on holiday in a plane that flapped its wings?

Next, Alan Winfield considered whether we needed robots to actually think, or whether merely appearing to think would be enough. I suspect this concept was the reason the questions after the discussion took a fairly philosophical turn towards our own thought and consciousness.

This question certainly got me thinking about humans and thought. If we were able to produce a robot complex enough to act exactly as a human would, how would that be different from a human? Isn’t acting human precisely what we ourselves do? What would separate this robot from us? And, interestingly, what rights could this robot expect to have in the world?

This is where the third speaker, Lilian Edwards, came in. Lilian spoke about robots in terms of the law. If a robot were truly able to “think”, who would become responsible for its actions? If that robot were to commit a crime, or hurt somebody, who would be accountable? For this reason, Lilian and Alan have been working on a set of principles to govern this murky legal area.

In the discussion, we were first shown the Three Laws of Robotics penned by Isaac Asimov in his short story “Runaround”. Although those laws were fictional, introduced for storytelling purposes, such rules are important and necessary for a society with increasingly intelligent robots.

The principles that Lilian and Alan have focussed on, however, say more about responsibility: a person must be held accountable for each robot. The rules also address the purposes of robots. The first states that “robots should not be designed solely or primarily to kill or harm humans, except in the interests of national security.” The clarification at the end was added because much of the money in robotics at present comes from organisations with national security in mind, so many robots today are being created solely for the purpose of killing.

With this in mind, some of the audience questions at the end of the session revolved around how safe it is to create robots capable of this. If one of these robots went wrong, who knows what the consequences might be? Admittedly, the questions were beginning to sound as though the audience had watched and read too many sci-fi horror stories, but questions like these are clearly important to address.

Marvin the paranoid android

The discussion, however, did not focus solely on the possible negative consequences, from a human perspective, of giving robots thought; the panel also briefly discussed the moral implications for the robots themselves.

Alan Winfield asked whether this line of investigation should continue, given that an inevitable consequence of the research is that we will eventually build something capable of feeling pain and suffering. It is clear that, before this happens, more needs to be done to clarify the ethical implications of the field.

This discussion, like any good debate, has left me with more questions than I went in with. The question is not simply how we can make robots think, but why. And when we do, how can we ensure the safety of both humans and the robots themselves?

This is not just an incredibly complex computer science problem, but one with implications for law and philosophy. My mind is still melting from trying to work out how I know that I am thinking. A rather interesting way to spend an hour in Cheltenham!
