Am I My Android?
Hiroshi Ishiguro interviewed by Alun Anderson.
If my experience is anything to go by, when you meet an android you should prepare for reactions outside your control. Walking across Ishiguro’s lab, I can see android Repliee Q2 in the distance. Originally created from a body cast of a TV announcer, she seems nothing more than a shop-window mannequin. Close up, I can see that her skin is silicone rubber and I imagine the steel skeleton and networks of pneumatic actuators that lie beneath it. But as Ishiguro switches on the control computers, I am in for a surprise. Repliee Q2 comes to life: she breathes, fidgets, gestures, blinks and looks around her – movements copied from video analysis of the real person’s behaviour.
Then she makes eye contact with me and I unconsciously drop my eyes, move to a more correct social distance, and blurt out an instinctive "excuse me" for staring at her. And while the incongruity of apologising to a brainless android is flashing through my mind, I also notice I have begun mirroring her body posture.
My reaction draws a laugh from Ishiguro. "From our experiments we have found something quite surprising," he says. "If a human subconsciously recognises an android as a human, he or she will treat it as a social partner even while consciously recognising it as a robot."
So what are the key triggers for this subconscious process? Eye movements are important, but finding a fuller answer is one goal of Ishiguro’s android science. That will make it possible to build androids that aren’t necessarily more accurate copies of humans, but ones better able to elicit the reactions they need to fit smoothly into a human-robot society.
Hiroshi Ishiguro made waves last year when he built a robot twin of himself. He had previously built equally realistic android copies of his daughter and of a TV announcer. Less publicly, he is working on a raft of other ideas, including sensor networks to give robots better data about the world. So where is robotics headed? Even Ishiguro doesn’t know yet, but he loves exploring as many ideas as Japan will fund – and being surprised as often as possible. Alun Anderson talked to him.
Why do you build robots?
These are not commercial products, they are testbeds for understanding the important factors in human-robot interaction. After this work on my "twin" I want to go back to a simpler kind of truly human-friendly robot, but to do that I need to find the principles of human-robot interaction. This is the approach we have named "android science". If we improve human-robot interaction, robots will be able to integrate into human society as partners and have natural social relations with humans. Our brains are designed for understanding other humans, not for manipulating keyboards. We will be able to get more information more easily from a human-like robot because it will tap into our innate abilities.
For robots to integrate into human society, do they have to look exactly like humans?
No. Humans have a strong capacity to accept new kinds of intelligent creature, so robots do not have to be human-like androids. But it depends very much on the situation, the purpose and even the culture you come from. Japanese people, for example, like the idea of a human-like android as a companion and we are quite serious about developing helper robots for old people.
Tell me about your most famous robot, your "twin", Geminoid HI-1.
Previously I built the "female" android, Repliee Q2, modelled on a TV announcer. Back then, the fundamental issue was what provides human-like appearance, human-like movements and so on (see "Double take"). But people expect a human-like robot to be able to carry on a conversation. We don’t have the artificial intelligence to make that possible now so I decided to develop a tele-interaction robot that I can control in real time.
With Geminoid HI-1, I talk from my lab here in Osaka University and my words come out from my "twin" in a lecture room at the Advanced Telecommunications Research Institute on the other side of town. I sit at this computer and in front of me are two screens showing the view from the android’s perspective. I can see the lecture theatre and look down at the android’s legs. It’s programmed with my characteristic gestures and I can control particular movements, so people in the lecture theatre hear my voice and see my android body move.
There are two big advantages of giving lectures remotely. I don’t have to travel and I can have a cigarette whenever I like without anyone knowing.
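A minimal sketch of such a tele-operation loop, written in Python, might look like this. The class names, message flow and gesture vocabulary are illustrative assumptions; Geminoid HI-1’s actual control software has not been published.

    from dataclasses import dataclass
    from queue import Queue

    @dataclass
    class OperatorCommand:
        gesture: str        # a named gesture from the operator's repertoire
        audio_chunk: bytes  # the operator's voice, replayed by the android

    @dataclass
    class AndroidFrame:
        camera_view: bytes  # video from the android's perspective

    def operator_tick(to_android: Queue, from_android: Queue) -> None:
        """One cycle of tele-operation: show the operator the android's
        viewpoint, then relay voice and a gesture command back."""
        if not from_android.empty():
            frame = from_android.get()
            render(frame)  # in the real setup, two screens at the console
        to_android.put(OperatorCommand(gesture="nod", audio_chunk=b"hello"))

    def render(frame: AndroidFrame) -> None:
        print(f"operator sees {len(frame.camera_view)} bytes of video")

    # Simulated exchange: the android sends one frame, the operator responds.
    uplink, downlink = Queue(), Queue()
    downlink.put(AndroidFrame(camera_view=b"\x00" * 1024))
    operator_tick(uplink, downlink)
    print(uplink.get())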
Has your twin surprised you?
When I am tele-operating, it is very interesting what happens. The true situation is that I am controlling a robot, but I feel something. It is as though I am there. If I look at the robot’s body from this viewpoint, I feel the robot’s body is my body. If someone walks up and touches the robot’s face, I feel something.
Does that feel strange?
It is natural. The question is very philosophical. My body is over there, but I think my brain is here – a kind of mind-body separation. Can my mind and body physically be in different places? Until we built this android we had no way to do this, or to investigate the mind-body issue experimentally.
Would it upset you if someone else operated your twin’s body?
No. In the beginning, my students were much better at operating my body than I was. Once I had developed a copy of myself, a new issue emerged: what is the difference between my observations of myself and other people’s observations? When I looked at my copy and it was not moving, the experience was familiar because it was like looking in a mirror. But when the students programmed in my characteristic behaviours, I could not recognise myself as I never had the experience of watching myself carefully. My understanding of myself is different from your understanding of me, but I can still maintain my identity.
If robots are to integrate into society, surely you’ll need massive artificial intelligence inside their heads as well as making them more human-like?
Not inside their heads. I’m running two kinds of project: one is the robot, the other is the sensor network. There are many artificial intelligence studies in computer vision but they are not close to providing robots with human-level perception. Our idea of a robot brain takes a different, distributed-cognition approach. If we distribute many sensors in the environment to form a network, then the robot can get the necessary information without any complicated perception. This carpet you are standing on contains pressure sensors in a network that can track human feet and tell the robot where you are. There are also omnidirectional cameras placed around the room which provide 360-degree visual information to the sensor network, helping to track people and recognise their gestures. The network provides the robots with perception, while the role of the robot is representation, to interact effectively with humans.
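As a rough illustration of how such a pressure-sensing floor could localise a person, the network might take the pressure-weighted centroid of the activated cells in the carpet’s sensor grid. This Python sketch is an illustrative assumption, not the lab’s actual algorithm:

    import numpy as np

    def locate_person(pressure_grid, cell_size_m=0.1, noise_threshold=5.0):
        """Estimate a person's (x, y) position on the carpet as the
        pressure-weighted centroid of cells above a noise threshold."""
        grid = np.asarray(pressure_grid, dtype=float)
        active = np.where(grid > noise_threshold, grid, 0.0)
        total = active.sum()
        if total == 0.0:
            return None  # nobody standing on the carpet
        rows, cols = np.indices(grid.shape)
        y = (rows * active).sum() / total * cell_size_m
        x = (cols * active).sum() / total * cell_size_m
        return x, y

    # Two footprints in neighbouring cells yield a single estimate.
    carpet = np.zeros((10, 10))
    carpet[3, 4], carpet[3, 5] = 40.0, 35.0
    print(locate_person(carpet))  # about (0.45, 0.30) in metres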
So while the conventional sci-fi robot would look at me from inside "itself", your robots will look at me from every angle, even through the floor, while tracking everyone else in the room too.
Exactly. This framework is not limited by the physical restrictions of the robot’s body. There is a perceptual information infrastructure that is part of the total system, which we call the network robot. I don’t know when we will have many robots in our society, but they could be the result of integrating ubiquitous computing and robots.
I’ve heard criticism of this "network" approach because you would need to build sensor nets wherever robots operate. Is it widely accepted?
I am open to different ideas, and recently the Japanese government has become very encouraging of the robot network approach. Sensor networks will allow robots to move easily among crowds without running into people, which is a hard problem to solve with just local sensors on the robot. Building sensor networks into big public spaces would not be difficult. Such robots might work in shopping malls directing people or explaining new products. They might push your luggage trolley at the airport. Everyone expects better service but it’s hard to recruit people into these jobs in Japan. Different kinds of robots will fuel many possibilities. The key task is to find the killer application.
Hiroshi Ishiguro is visiting head of the department of communication robots at ATR Intelligent Robotics and Communication Laboratories near Kyoto, Japan, and a professor in Osaka University’s department of adaptive machine systems.
Interview: Am I My Android? Hiroshi Ishiguro interviewed by Alun Anderson. New Scientist, 28 July 2007, Vol. 195, No. 2674, pp. 46-47.
Photo Caption: Like looking in the mirror, but when Ishiguro operates his "twin" from a distance his mind melds with the machine.