The European: You’re a roboticist, but you’ve also said that your ultimate interest is human nature. Why do you think that we can best examine ourselves by building humanoid androids?
Ishiguro: Robotics is my way of trying to understand the human self. To learn about ourselves, we have traditionally relied on psychology or biology. But that’s a bottom-up approach with a focus on micro-interpretations about human nature. We also need a more holistic understanding of the human. We need to treat humans as complicated systems. Robots are a tool for that.
The European: It seems like a bit of a stretch to compare complex robotic systems to complex evolutionary processes and to complex species…
Ishiguro: We’re talking about different kinds of complex systems. Traditionally, by complex system we mean a chaotic system. That’s a very mathematical understanding of systems. By contrast, humans are quite social. We cannot describe ourselves by numbers alone, but that doesn’t mean that humans aren’t complex creatures with complex social systems. The reason why we have been building humanoid robots is to hold the mirror up to ourselves. We need a deeper definition of humanity, and those robots also illustrate the limits of defining ourselves in relation to our bodies.
The European: But aren’t interactions between humans and your robots based on a fundamental deceit? The robot might simulate empathy or understanding, but it does not care about its human partner.
Ishiguro: My response is: What is the meaning of understanding? When young children say, “I understand,” they are usually just repeating something. We often talk about understanding, but we rarely define what we mean by it. Or take “Watson,” the computer that beat human “Jeopardy!” champions: Did that computer really understand the questions it answered? Most people say no, but they would also say that they themselves have understood something when they can give a correct answer to it.
The European: Do we have to re-define something like intelligence?
Ishiguro: Has intelligence ever been defined?
The European: One of the first tests for artificial intelligence comes from Alan Turing, who proposed this: if you chat through a computer and cannot distinguish whether the answers are typed by a person or generated by software, that has to count as intelligence…
Ishiguro: It shows that we’ve defined knowledge, but it’s very hard to define intelligence. That’s the most important aspect of our work: We need better definitions of thinking, feeling, and intelligence. Some robots might be able to pass the Turing test, but we probably would not call them intelligent, because they still require remote controls and don’t operate autonomously.
The European: How might we go about finding those definitions?
Ishiguro: My approach is to ask: Do people accept this robot? In the near future, it will be possible for people to go on walks alongside robots, and we will see many robots in society. We know from studies in Denmark and Japan that the elderly sometimes prefer to interact with mechanical-looking robots rather than interact with other humans. Many of them are a bit weak and are regularly at the receiving end of services. Because of their dependency, they might be hesitant to ask others for more help. That problem doesn’t arise with robots. Humans are very different in their needs and desires, so we’ll hopefully see a large variation among robots as well. Eventually we will cease to ask which is which and just accept robots as human-like partners. If we can bring about that situation, we will have started to rethink human nature.
The European: Let’s imagine a situation in which terminally ill patients are cared for by robots. Would you be comfortable with robots making the choice of whether to end life support? That’s a medical decision, but it also seems to have a significant moral component. Can robots acquire that kind of moral intuition?
Ishiguro: I would ask, “What is morality?” That’s one of the mysteries of human society. During World War II, Japanese morality held that battling other countries was right. Today, we have a totally different set of morals. Morality is a living creature; it is always changing. We don’t know yet what kind of morals will arise in the future. My hope is that we will preserve at least some of today’s moral norms.
The European: Arguably, there’s a very strict distinction today between human and non-human life, and it seems that the future you imagine requires at least a few more nuances around the edges of the moral universe.
Ishiguro: Human rights are something given to individuals by society. If we wanted, we could decide to grant human rights to robots as well. There’s a precedent for this: In New Zealand, chimpanzees now enjoy basic rights that other animals don’t. If we accept interactions with androids as normal, I imagine that we will also grant them certain human rights. Consider this thought experiment: A mother loves her daughter very much. One day, the daughter dies in an accident, and the mother decides to create an android that resembles her daughter. She loves that android very much, too. But then a thief enters her house at night and tries to break the android, and the mother kills him. Should she be acquitted because she protected the android, or not? You see, judgment is not so easy. If the relationship between a human and a robot is very intimate and human-like, don’t these questions deserve some consideration?
The European: What do you mean by “human-like”? Intuitively, you seem to suggest that there’s an unspecified essence to our humanity.
Ishiguro: That’s a very good and difficult question. Let’s consider tele-operated androids: Because humans remotely control them, their behavior might appear very human-like and we might hesitate to break them. Yet those androids are obviously not human. They cannot operate autonomously. But we can ask: What are the minimal cognitive and emotional requirements for being human? Again, there’s a great deal of variation among people in terms of height or intelligence, but we all recognize each other as human. So we cannot say that human rights are defined in relation to our intelligence – they are defined in the context of our relationships.
The European: What makes us human is being recognized as human by others?
Ishiguro: Yes, but it is not quite that simple. External relationships are one aspect, but we also need some minimal internal functions to pass the autonomy threshold, like minimal emotive capacities.
The European: In science fiction movies of the past, the future was often imagined as a world filled with androids – but a lot of the technological innovations that have actually happened weren’t foreseen. I wonder whether our visions of the future are too anthropocentric.
Ishiguro: The human brain has a very strong innate capacity to recognize humans, and it is very attuned to interactions with other human beings. That’s why we expect to see a future filled with humanoids and androids: It’s self-explanatory. Cell phones aren’t as intuitive. You probably have to read the user manual before you can use one, or someone has to teach you how to use the new technology. That’s not the case for humanoid robots: As long as they can speak, you can intuitively interact with them.
The European: Where do you see robotics heading?
Ishiguro: Over time, very simple robots will acquire more human-like features and become more accepted and prevalent in society. When the first cars were introduced, they were very simple but extraordinary machines. Now, cars are much more complex but very widespread. We can expect a similar development with robots.
The European: Which science fiction account offers a particularly compelling vision of the future?
Ishiguro: I really like the Robin Williams movie “Bicentennial Man,” about a robot called Andrew who tries to become human. It takes him two hundred years. I love that movie.
The European: Do you worry about the abuse of powerful robotic technology or about unintended consequences?
Ishiguro: No, because as humans we never stop developing new technologies. They help us expand human possibilities. But we cannot simply pause to compare different societies and say, “this one is better than that one.” We cannot compare today’s society to the society of a hundred years ago and say which is objectively better or happier. Yet the inability to make those judgments has not prevented us from developing new technologies. They allow us to explore different aspects of human possibility. Some of them will be beneficial, and some will be dangerous.
The European: Why are so many people skeptical? Is it simply a fear of the unknown?
Ishiguro: In the early days of the mobile phone, many people said that the radio waves would influence our brains and that mobile phones were very dangerous. They were seriously discussing that influence, but today nobody talks about that anymore. Whenever there’s a new technology, many people worry. After a few years, they adapt or forget.