Researchers tortured robots to test the limits of human empathy

Mack DeGeurin
Boston Dynamics' Spot getting kicked in a routine test.

In 2015, a jovial three-foot-tall robot with pool noodles for arms set out on what seemed like a simple mission. Relying on the kindness of strangers, the machine, called “hitchBOT,” would spend months hitchhiking across the continental United States. It made it just 300 miles. Two weeks into the road trip, hitchBOT was found abandoned on the streets of Philadelphia, its head severed and its spaghetti arms ripped from its bucket-shaped body.

“It was quite a setback, and we didn’t really expect it,” hitchBOT co-creator Frauke Zeller told CNN at the time.

hitchBOT’s untimely dismemberment isn’t a unique case. For years, humans have relished opportunities to kick, punch, trip, crush, and run over anything remotely resembling a robot. That penchant for machine violence could shift from funny to genuinely concerning as a new wave of humanoid robots is built to work alongside people in manufacturing facilities. But a growing body of research suggests we may be more likely to feel bad for our mechanical assistants, and even take it easy on them, if they express sounds of human-like pain. In other words, hitchBOT may have fared better if it had been programmed to beg for mercy.

Humans feel guilty when robots cry

Radboud University Nijmegen researcher Marieke Wieringa recently carried out a series of experiments examining how people reacted when asked to violently shake a test robot. In some cases, participants would shake the robot and nothing would happen. Other times, the robot would emit a pitiful crying sound from a pair of small speakers or enlarge its “eyes” to convey sadness. The researchers say participants were more likely to feel guilty when the robot gave these emotion-like responses. In another experiment, participants were given the choice between performing a boring task and giving the robot a solid shake. They were more than willing to shake the robot when it was unresponsive. When it cried out, however, most opted to complete the boring task instead.


“Most people had no problem shaking a silent robot, but as soon as the robot began to make pitiful sounds, they chose to do the boring task instead,” Wieringa said in a statement. Wieringa will defend the research as part of her PhD thesis at Radboud University in November.

Those findings build on previous research showing that we may treat robots more kindly when they appear to exhibit a range of human-like tendencies. Participants in one study, for example, were less inclined to strike a robot with a hammer if the robot had a backstory describing its supposed personality and experiences. In another case, test subjects were friendlier to humanoid-shaped robots after using a VR headset to “see” from the machine’s perspective. Other research suggests humans may be more willing to empathize with or trust robots that appear capable of recognizing emotional states.

“If a robot can pretend to experience emotional distress, people feel guiltier when they mistreat the robot,” Wieringa added.

The many ways humans have abused robots

Humans have a long history of taking out our frustrations on inanimate objects. Long before modern conceptions of robots, people attacked parking meters, furiously shook vending machines, and smacked broken toaster ovens, attributing human-like hostility to everyday objects in a phenomenon the writer Paul Hellweg refers to as “resistentialism.” As machines became more complex, so too did our methods for destroying them. That penchant for robot destruction was perhaps best encapsulated by the popular 2000s television show BattleBots, in which hastily cobbled-together robots were sliced, shredded, and lit on fire before cheering crowds.

Now, with more consumer-grade robots roaming around in the real world, some of those exuberant attacks are taking place on city streets. Autonomous vehicles operated by Waymo and Cruise have been vandalized and had their tires slashed in recent months. One Waymo vehicle was even set ablaze and destroyed earlier this year.


In San Francisco, local residents reportedly knocked over an egg-shaped Knightscope K5 patrol robot and smeared it with feces after a local animal shelter deployed it to monitor unhoused people. Knightscope previously told Popular Science an intruder fleeing a healthcare center intentionally ran over one of its robots with his vehicle. Food delivery robots currently operating in several cities have also been kicked over and vandalized. More recently, a roughly $3,000 AI-powered sex robot shown off at a tech fair in Austria had to be sent away for repairs after attendees reportedly left it “heavily soiled.”

But possibly the most famous examples of sustained robot abuse come from the now Hyundai-owned Boston Dynamics. The company has created what many consider some of the most advanced quadruped and bipedal robots in the world, in part by subjecting them to countless hours of attack. Popular YouTube videos show Boston Dynamics engineers kicking its Spot robot and harassing its Atlas humanoid robot with weighted medicine balls and a hockey stick.

Research into why people seem to enjoy abusing robots has produced mixed results. In higher-stakes cases like autonomous vehicles and factory robots, the machines can serve as reminders of potential job losses or other economic hardships in a world marked by automation. In other cases, though, researchers like Italian Institute of Technology cognitive neuroscientist Agnieszka Wykowska say the non-humanness of machines can trigger an odd, almost tribal out-group response.

“You have an agent, the robot, that is in a different category than humans,” Wykowska said during a 2019 interview with the New York Times. “So you probably very easily engage in this psychological mechanism of social ostracism because it’s an out-group member. That’s something to discuss: the dehumanization of robots even though they’re not humans.”

Either way, our apparent propensity for messing with robots could get more complicated as they become more integrated into public life. Humanoid robot makers like Figure and Tesla envision a world where upright, bipedal machines work side by side with humans in factories, perform chores, and maybe even look after our children. All of those predictions, it’s worth noting, remain very much theoretical. The success or failure of those machines, however, may ultimately depend in part on tricking human psychology into pitying a machine the way we would a person.
