Social Robots Are Coming

COSMOS | October 2014

Geminoid F with experimental visual artist Mari Velonaki, who is researching human reactions to lifelike robots

A new breed of automatons – friendly, engaging and even soft to touch – will soon be among us. And they are moving into the real world at remarkable speed, as Wilson da Silva discovers.



THERE IS SOMETHING UNNERVING about Geminoid F. She looks like a Japanese woman in her 20s, about 165 cm tall with long dark hair, brown eyes and soft pearly skin. She breathes, blinks, smiles, sighs, frowns, and speaks in a soft, considered tone.


But the soft skin is made of silicone, and underneath it lie urethane foam flesh, a metal skeleton and a plastic head. Her movements are powered by pressurised gas, and an air compressor is hidden behind her seat. She sits with her lifelike hands folded casually on her lap. She – one finds it hard to say “it” – was recently on loan to the Creative Robotics Lab at the University of New South Wales in Sydney, where robotics researcher David Silvera-Tawil set her up for a series of experiments.


“For the first three or four days I would get a shock when I came into the room early in the morning,” he says. “I’d feel that there was someone sitting there looking at me. I knew there was going to be a robot inside, and I knew it was not a person. But it happened every time!”

Her soft skin is made of silicone, and underneath it lie urethane foam flesh, a metal skeleton and a plastic head. Her movements are powered by pressurised gas, and an air compressor is hidden behind her seat.

The director of the lab, Mari Velonaki, an experimental visual artist turned robotics researcher, has been collaborating with Geminoid F’s creator, Hiroshi Ishiguro, who has pioneered the design of lifelike androids at his Intelligent Robotics Laboratory at Osaka University. Their collaboration seeks to understand “presence” – the feeling we have when another human is in our midst. Can this sensation be reproduced by robots?


Velonaki has also experienced the shock of encountering Geminoid F. “It’s not about repulsion,” she says. “It’s an eerie, funny feeling. When you’re there at night, and you switch off the pneumatics … it’s strange. I’ve worked with many robots; I really like them. But there’s a moment when there’s an element of … strangeness.”

This strangeness has also been observed with hyper-real video and movie animations. It even has a name, “the uncanny valley”: the sense of disjunction we experience when the impression that something is alive and human does not entirely match the evidence of our senses (see Infobox 1 at the bottom of this article).


Facial expressions of Geminoid F: neutral, angry and smiling

For all her disturbing attributes, Geminoid F’s human-like qualities are strictly skin deep. She is actually a fancy US$100,000 puppet, who is partly driven by algorithms that move her head and face in lifelike ways, and partly guided by operators from behind the scenes. They oversee her questions and answers to ensure they’re relevant. Geminoid F is not meant to be smart. She’s been created to help establish the etiquette of human-robot relations.


Which is why those studying her are cross-disciplinary types. “We hope that collaborations between artists, scientists and engineers can get us closer to a goal of building robots that interact with humans in more natural, intuitive and meaningful ways,” says Silvera-Tawil.

It is hoped Geminoid F will help pave the way for robots to take their first steps out of the fields and factory cages to work alongside us. In the near future her descendants – some human-like, others less so – will be looking after the elderly and teaching children.


It will happen sooner than you think.



RODNEY BROOKS HAS BEEN CALLED “the bad boy of robotics”. More than once he has turned the field upside down, bulldozing shibboleths with new approaches that have turned out to be prophetic and influential.


Born in Adelaide, he moved to the US in 1977 for his PhD and by 1984 he was on the faculty at the Massachusetts Institute of Technology. There he created insect-like robots that, with very little brainpower, could navigate over rough terrain and climb steps. At the time the dominant paradigm was that robot mobility required massive processing power and a highly advanced artificial intelligence.

“As the people of retirement age increase, there’ll be fewer people to take care of them, and I really think we’re going to have to have robots to help us. I think we’ll all come to rely on robots in our daily lives.”

Brooks reasoned that insects had puny brains yet could move and navigate, so he created simple independent ‘brains’ for each of the six legs of his robots, each following basic commands (always stay upright, irrespective of the direction of motion), while a simple overseer brain coordinated their collective movement. His work spawned what is now known as behaviour-based robotics, used today in mining field robots and bomb disposal robots.
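To get a feel for how little intelligence this requires, here is a minimal sketch of the idea in Python – hypothetical code, not Brooks’s actual software: each leg runs its own trivial rule, and a lightweight overseer merely nudges the legs into an alternating gait.

```python
# A toy illustration of behaviour-based control in the spirit of Brooks's
# insect robots. All names and thresholds here are invented for the sketch.

class Leg:
    def __init__(self, leg_id):
        self.leg_id = leg_id
        self.lifted = False   # a planted leg supports the body

    def reflex(self, body_tilt):
        # Local rule: if the body tilts too far, plant this leg to stay
        # upright, irrespective of where the robot is headed.
        if abs(body_tilt) > 0.1:
            self.lifted = False

    def swing(self):
        # Local rule: lift and swing the leg forward when asked.
        self.lifted = True


class Overseer:
    """A simple coordinating 'brain' that alternates two groups of legs."""

    def __init__(self, legs):
        self.legs = legs
        self.phase = 0

    def tick(self, body_tilt):
        # Each leg applies its own stay-upright reflex first...
        for leg in self.legs:
            leg.reflex(body_tilt)
        # ...then the overseer asks alternate legs to swing, but only
        # while the body is stable.
        if abs(body_tilt) <= 0.1:
            for i, leg in enumerate(self.legs):
                if i % 2 == self.phase:
                    leg.swing()
        self.phase = 1 - self.phase  # alternate the two leg groups


legs = [Leg(i) for i in range(6)]
walker = Overseer(legs)
for tilt in [0.0, 0.05, 0.2, 0.0]:   # simulated tilt-sensor readings
    walker.tick(tilt)
```

No leg knows anything about terrain or destination; stable walking emerges from the interplay of local reflexes and a thin layer of coordination.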


But it is the work he began in the 1990s – developing humanoid robots and exploring human-robot interactions – that may be an even greater game changer. First he created Cog, a humanoid robot of exposed wires, mechanical arms and a head with camera eyes, programmed to respond to humans. Cog’s intelligence grew in the same way a child’s does – by interacting with people. The Cog experiment fathered social robotics, in which autonomous machines interact with humans by using social cues and responding in ways people intuitively understand.


Australian roboticist Rodney Brooks with Baxter, designed to work with humans

Brooks believes robots are about to become more commonplace, with “social robots” leading the way. Consider the demographics – the proportion of adults of working age in the US and Europe is around 80%, a figure that has remained largely unchanged for 40 years. But over the next 40 years this will fall to 69% in the US and 64% in Europe as the boomers retire. “As the people of retirement age increase there’ll be fewer people to take care of them, and I really think we’re going to have to have robots to help us,” Brooks says.


“I don’t mean companions – I mean robots doing things, like getting groceries from the car, up the stairs into the kitchen. I think we’ll all come to rely on robots in our daily lives.”


In the 1990s, he and two of his MIT graduate students, Colin Angle and Helen Greiner, founded iRobot Corp, maker of the Roomba robot vacuum cleaner. It was the first company to bring robots to the masses – 12 million of its products have been sold worldwide and more than 1 million are now sold every year.


The company began by developing military robots for bomb disposal work. Known as PackBots, they’re rovers on caterpillar tracks, packed with sensors and fitted with a versatile arm. They’ve since been adapted for emergency rescue, handling hazardous materials or working alongside police hostage-rescue teams to locate snipers in city environments. More than 5,000 have been deployed worldwide. They were the first to enter the damaged Fukushima nuclear plant in 2011 – although they failed in their bid to vent explosive hydrogen from the facility.


With the success of the Roomba, iRobot has since launched other domestic lines: the floor-mopping Braava, the gutter-cleaning Looj, and Mirra for pools. Its latest offering is the tall, free-standing RP-VITA, a telemedicine health care robot approved by the US Food and Drug Administration in 2013. It drives itself to pre-operative and post-surgical patients within a hospital, allowing doctors to assess them remotely.

Baxter is cute too. Its face is an electronic screen, dominated by big, expressive cartoon eyes. When confused, it raises an eyebrow and shrugs.

Other companies have sprung up in the past 15 years, manufacturing robots that run across rocky terrain, manoeuvre in caves and underwater, or can be thrown into hostile situations to provide intelligence.


Robot skills have grown thanks to advances in natural language processing, artificial speech, vision and machine learning, and the proliferation of fast, inexpensive computing aided by access to the internet and big data. Computers can now tackle problems that, until recently, only people could handle. It’s a self-reinforcing loop – as machines understand the real world better, they learn faster.


Robots that can interact with ordinary people are the next step. This is where Brooks comes in. “We have enough understanding of human-computer interaction and human-robot interaction to start building robots that can really interact with people,” he says. “An ordinary person, with no programming knowledge, can show it how to do something useful.”


Rethink Robotics' Baxter at work

In 2008 Brooks founded another company, Rethink Robotics, which has done exactly that – created a collaborative robot that can safely work elbow to elbow with humans. Baxter requires no programming and learns on the job, much as humans do. If you want it to pick an item from a conveyor belt, scan it and place it with others in a box, you grasp its mechanical hand and guide it through the entire routine.


It works out what you mean it to do and goes to work. Baxter is cute too. Its face is an electronic screen, dominated by big, expressive cartoon eyes. When its sonar detects someone entering a room, it turns and looks at them, raising its virtual eyebrows. When Baxter picks something up, it looks at the arm it’s about to move, signalling to co-workers what it’s going to do. When Baxter is confused, it raises an eyebrow and shrugs.
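Conceptually, this kind of teaching by demonstration boils down to recording the arm’s joint positions as waypoints while a person guides it, then replaying the routine. The sketch below is a toy illustration under that assumption – the class and function names are hypothetical, not Rethink Robotics’ actual software.

```python
# A toy sketch of kinesthetic "teach by demonstration": record the arm's
# joint positions while a person guides it, then replay the routine.
# Hypothetical throughout -- not Rethink Robotics' actual interface.

from dataclasses import dataclass, field
from typing import List


@dataclass
class Arm:
    """A stand-in for a 7-joint robot arm."""
    joints: List[float] = field(default_factory=lambda: [0.0] * 7)

    def read_joints(self) -> List[float]:
        return list(self.joints)      # sample the current pose

    def move_to(self, pose: List[float]) -> None:
        self.joints = list(pose)      # command the arm to a pose


def record_demonstration(arm: Arm, guided_poses: List[List[float]]) -> List[List[float]]:
    """Log a waypoint each time the operator drags the arm to a new pose."""
    trajectory = []
    for pose in guided_poses:         # stand-in for gravity-compensated dragging
        arm.joints = pose
        trajectory.append(arm.read_joints())
    return trajectory


def replay(arm: Arm, trajectory: List[List[float]]) -> None:
    """Execute the learned routine by stepping through the waypoints."""
    for waypoint in trajectory:
        arm.move_to(waypoint)


# Teach once by guiding the arm through pick, scan and place poses...
arm = Arm()
routine = record_demonstration(arm, [[0.1] * 7, [0.4] * 7, [0.8] * 7])
# ...then the robot repeats the routine on its own.
replay(arm, routine)
```

A production system would generalise between waypoints rather than replay them verbatim, but the record-and-repeat loop is the core of why no programming knowledge is needed.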


Baxter, priced at an affordable $25,000, is aimed at small to medium businesses for whom robots have been prohibitively expensive until now.

While robots are big business today, generating $29 billion in annual sales, the market is still dominated by old-school industrial machines – disembodied arms reliant on complex and rigid programming. These automatons haven’t really changed much from those that began to appear on factory floors in the 1960s. They are stand-alone machines stuck in cages, hardware-based and unsafe for people to be around. Nevertheless, 1.35 million now operate worldwide, with 162,000 new ones sold every year. They’re used for welding, painting, assembly, packaging, product inspection and testing – all accomplished with speed and precision 24 hours a day. But Baxter and his ilk are starting to shake up the field.

“In the new style of robots there’s a lot of software with common-sense knowledge built in,” says Brooks.



Launched in 2012, Baxter is used in 18 countries, in applications such as manufacturing, health care and education. Rethink Robotics’ backers include Amazon’s Jeff Bezos, whose own company is a big user of robots to handle goods in its warehouses. When Google revealed in December 2013 that it had acquired eight robotics companies, it sent a thunderbolt through the field.


Google created a division led by Andy Rubin, the man who spearheaded Android, the world’s most widely used smartphone software, and who began his career as a robotics engineer. Only a month later, Google shelled out $650 million to buy DeepMind Technologies, a secretive artificial intelligence company in London developing general-purpose learning algorithms.


“As of 2014, things are finally changing,” says Dennis Hong, who heads the Robotics and Mechanisms Laboratory at the University of California, Los Angeles. “The fact that Google bought these companies shows that, finally, it’s time for the robotics business to really start.”



WHERE DOES THAT LEAVE SOCIAL ROBOTICS? Since Baxter came on the scene, “everybody’s saying they’ve got collaborative robots,” chuckles Brooks. “But some of them are just dressed up, old-style interfaces. Industrial robots have not been made easy to use because it’s engineers who use them, and they like that complexity. We made them popular by making them easy to use.”


But as people and money flood into the field, artificial intelligence with social smarts is developing fast, says Brooks. Take Google’s self-driving car: the original algorithms were found to be useless in traffic. The cars would become trapped at four-way stop-sign intersections because they couldn’t read other drivers’ intentions. The solution came in part from incorporating social skills into the algorithm.

“We’d like to have robots that can guide a child towards long-term educational goals ... and basically grow and develop with the child.”

Brooks hopes that Baxter will become smart and cheap enough that researchers will develop applications beyond manufacturing. Updates to its operating system already allow the latest model Baxters to be twice as accurate and operate three times faster than earlier models. Brian Scassellati, who studied under Brooks and is now a professor of computer science at Yale University, also believes robots are about to leave the factory and enter homes and schools. “We’re entering a new era ... something we saw with computers 30 years ago. Robotics is following that same curve.


“They are going to have a very important impact on populations that need a little bit of extra help, whether that’s children learning a new language, adults who are ageing and forgetful, or children with autism spectrum disorder who are struggling to learn social behaviour,” he says. In 2012, Scassellati’s Social Robotics Lab began a five-year, $10 million US National Science Foundation program with Stanford University, MIT and the University of Southern California to develop a new breed of “socially assistive” robots designed to help young children learn to read, overcome cognitive disabilities and perform physical exercises.


The many faces of MIT's Leonardo, a highly expressive robot research platform

“At the end of five years, we’d like to have robots that can guide a child towards long-term educational goals ... and basically grow and develop with the child,” he says.


Despite the progress in human-robot interaction that has led to machines such as Baxter, Scassellati’s challenge is still daunting. It requires robots to detect, analyse and respond to children in a classroom; to adapt to their interactions, taking into account each child’s physical, social and cognitive differences; and to develop learning systems that achieve targeted lesson goals over weeks and months.


To try to achieve this, robots will be deployed in schools and homes for up to a year, with the researchers monitoring their work and building a knowledge base. Early indications are that real gains can be made in education, says Velonaki. Another of her collaborators, cognitive psychologist Katsumi Watanabe of the University of Tokyo, has tested how autistic children interact, over several days, with three types of robot: a fluffy toy that talks and reacts; a humanoid with cables and wires visible; and a lifelike android. Children usually prefer the fluffy toy to start with, but as they interact with the humanoid, and later the android, they grow in confidence and interaction skills – and have been known to interact with the android’s human operators when they emerge from behind the controls.


“By the time they go to the android, they’re almost ready to interact with a real human,” she says.



THE NUMBER ONE FEAR people have of smart, lifelike humanoid robots is not that they’re creepy – but that they will take people’s jobs. And according to some economists and social researchers, we are right to worry.


Erik Brynjolfsson and Andrew McAfee of MIT’s Center for Digital Business say that, even before the global financial crisis in 2008, a disturbing trend was visible. From 2000 to 2007, US GDP and productivity rose faster than they had in any decade since the 1960s – yet employment growth slowed to a crawl. They believe this was due to automation, and that the trend will only accelerate as big data, connectivity and cheaper robots become more commonplace.


“The pace and scale of this encroachment into human skills is relatively recent and has profound economic implications,” they write in their book, Race Against the Machine.


Economic historian Carl Benedikt Frey and artificial intelligence researcher Michael Osborne at the University of Oxford agree. They estimate that 47% of American jobs could be replaced “over the next decade or two”, including “most workers in transportation and logistics ... together with the bulk of office and administrative support workers, and labour in production occupations”.


Perhaps unsurprisingly, the robot industry takes the opposite view: that the widespread introduction of robots in the workplace will create jobs – specifically, jobs that would otherwise go offshore to developing countries. And they may have a point.


In June 2014, for example, the European Union launched the US$3.6 billion Partnership for Robotics in Europe initiative, known as SPARC. The EU calls it the world’s largest robotics research program and expects it will “create more than 240,000 jobs”.


Universal Robots' UR5 works alongside people

Job growth is certainly happening at Denmark’s Universal Robots, which also makes collaborative robots. The company has grown 40-fold in the last four years, employs 110 people and is taking on another 50 in 2014. Its robots – UR5 and UR10 – look like disembodied arms with cameras attached. They are operated by desktop controllers and taught tasks using tablet computers. They are not as social as Baxter, but they are able to work alongside humans.


“The more a company is allowed to automate, the more successful and productive it is, allowing it to employ more people,” chief executive Enrico Krog Iversen told The Financial Times in May 2014. But the jobs they will be doing will change, he argues. “People themselves need to be upgraded so they can do something value-creating.”


That’s been true for some robot customers. Both Universal Robots and Rethink Robotics say their customers have hired more people as output in small companies has increased.


Brooks believes the fear that robots are going to take away all the jobs is overplayed. The reality could be the opposite, he argues. It’s not only advanced Western economies that are faced with a shrinking human workforce as their populations age. Even China is facing a demographic crisis: the proportion of working-age adults there will drop to 67% by 2050, he says. By the time we’re old and infirm, we could all be reliant on robots. Says Brooks, “I’m not worried about them taking jobs, I’m worried we’re not going to have enough smart robots to help us.”

[INFOBOX 1]

The Uncanny Valley



Rachael, the Nexus 7 Replicant in Blade Runner

IN THE MOVIE ALIEN, Ash is one of the crew: apparently a loyal company man, he is later revealed to be an android with a secret agenda. In Blade Runner, ‘replicants’ are worker androids of synthetic flesh who follow their programming, but begin to deviate at the end of their functional lives. In Moon, Gerty is all gears and switches, with a smiley face and a soothing voice to ease his interactions with humans, but is entirely focussed on what’s best for the mission.


They are all robots. But what is a robot? It’s probably better to start by asking, “What is a human?” For when electro-mechanical machines start resembling – and, more disturbingly, behaving like – humans, we have to define what they are by, first, what they are not. They are clearly not human.


But what is it to be human?


For the 19th century American essayist Ralph Waldo Emerson, humans were “an intelligence served by organs”; for the 18th century US polymath and scientist Benjamin Franklin, “a tool-making animal”. But if your organs are electrical and your intelligence driven by software, what then?


We crossed Franklin’s line with the tool-making robots of the 1960s – they are mindless and hardly sentient, but they both use and make tools. The Roman philosopher Seneca the Elder, on the other hand, saw humans as “a reasoning animal” – but a lot of what we see in advanced artificial intelligence looks eerily like reasoning.


Even as the gamut of human capabilities is increasingly replicated, mimicked or even matched by automatons with no blood coursing through their veins and no heart beating in their chests, we still believe we know what it is to be human. But when they look and act like us, and we begin to interact with them as if they were people, a vague discomfort creeps in. If it walks and talks like a man, is it not a man?


This is ‘the uncanny valley’. The term was coined by Japanese roboticist Masahiro Mori in 1970; his thesis is that, as the appearance of a robot is made more and more humanoid, a human observer’s emotional response becomes increasingly positive and empathic. But only up to a point. When the robot becomes too human-like, the response can quickly turn to repulsion.



“There are cute stuffed animals – and then there are those scary dolls few would want on their sofa,” Mori told The Japan Times. “With robots, it’s the same. As their design gets closer and closer to looking like humans, most people begin to feel more and more scared of them. To a certain degree, we feel empathy and attraction to a human-like object; but one tiny design change, and suddenly we are full of fear and revulsion.”


The theory has long had currency in cognitive science, backed by anecdotal instances in real life. In Pixar’s groundbreaking 1988 short film Tin Toy, the animated human baby who plays with the title character provoked strong negative reactions from test audiences, leading the film company to take the uncanny valley thesis seriously. Sony Pictures Imageworks encountered a similar problem with The Polar Express in 2004; despite the film grossing well, many critics said its mannequin-like human characters – which some compared to zombies – “gave them the creeps”.


It wasn’t until 2009 that the uncanny valley effect was properly tested. A study led by cognitive scientist Ayse Pinar Saygin of the University of California, San Diego scanned the brains of 20 subjects with no experience working with robots. They were shown 12 videos of the humanoid robot Repliee Q2 – another of Ishiguro’s creations – performing ordinary actions such as waving, nodding, taking a drink of water and picking up a piece of paper.

[INFOBOX 2]

Google’s Robot Army


Google has snapped up eight robotics companies in the past year, representing some of the leading players in the burgeoning field.

Boston Dynamics' BigDog running along the beach
  • Boston Dynamics: creators of impressive quadruped robots such as Cheetah (clocked running on a treadmill at 45 km/h) and BigDog, a mechanical pack mule that can carry loads across rough terrain where wheels fear to roll – both developed for the US Defense Advanced Research Projects Agency (DARPA). It was founded in 1992 by Marc Raibert, a former professor at MIT’s Artificial Intelligence Lab (AIL).

  • Meka Robotics: another MIT AIL spin-off, based in San Francisco, founded in 2006 by Aaron Edsinger and Jeff Weber. Its aim is to create agile bipedal robots that can run quickly over uneven ground, and it has developed adaptable and responsive hardware ranging from humanoid heads and hands to grippers, arms and manipulators.

  • Schaft: a Japanese company spun off from the University of Tokyo, which recently won DARPA’s US$2 million Robotics Challenge with a rescue robot that could drive a vehicle, walk on uneven ground, climb a ladder, clear debris, open a door, cut through a wall, open a valve and use a hose. It scored 27 out of 32 points and beat the Boston Dynamics team by a large margin.

  • Redwood Robotics: a robot arm specialist that aims “to do for robotics what the Apple II did for computers” – get the hardware out of factories and into homes.

  • Bot & Dolly: creates robotic systems to control cameras in movies such as Gravity.

  • Autofuss: the design studio arm of Bot & Dolly, which develops robotic cameras, motion design, animation and live-action production.

  • Holomni: a design company that specialises in mobility for robots, and has developed controllable caster wheels that can turn through 360 degrees with extreme precision.

  • Industrial Perception: considered a leader in computer vision, it produces 3D visual perception systems that enable rapid image recognition, depth perception and action. Watching its robot unload a truck is a little eerie, especially the way it can throw a heavy box onto a conveyor belt with the casual abandon of a practiced workman.

Wilson da Silva is a science writer in Sydney, and the founding editor-in-chief of COSMOS. An abridged version of this story was republished in The Best Australian Science Writing 2015.

© 2019-20 Wilson da Silva. All rights reserved.