How UConn Researchers are Teaching Robots to Think Like Humans

There’s a great scene in the movie “Iron Man” where Robert Downey Jr.’s character Tony Stark (aka Iron Man) is crawling across his lab, desperately trying to reach the small arc reactor he needs to keep his heart beating and stay alive.

Weakened by a run-in with arch villain Obadiah Stane, Stark can’t reach the gizmo where it sits on a tabletop. Defeated, he rolls onto his back, exhausted and pondering his inevitable doom.

But the very moment that we think our intrepid hero’s a goner, a metallic hand appears at Stark’s shoulder, holding the lifesaving device. “Good boy,” Stark says weakly as he takes the device from his robot assistant, Dum-E.

And just like that, our hero is saved.

From the dutiful shuffling of C-3PO to the terrorizing menace of The Terminator, Hollywood has made millions tantalizing audiences with far-out robot technology. Scenes like the one in “Iron Man” make for good entertainment, but they also are based, to some degree, in reality.

Dum-E’s interaction with Stark is an example of collaborative robotics, in which robots with advanced artificial intelligence, or A.I., not only work alongside humans but also anticipate our actions and even grasp what we need.

Collaborative robotics represents the frontier of robotics and A.I. research today. And it’s happening at UConn.

Three thousand miles away from the klieg lights of Hollywood, Ashwin Dani, director of UConn’s Robotics and Controls Lab, or RCL, stands in the stark fluorescent light of his Storrs office staring at a whiteboard covered in hastily scrawled diagrams and mathematical equations.

Here, in the seemingly unintelligible mishmash of numbers and figures, are the underlying mathematical processes that are the lifeblood of collaborative robotics.

If robots are going to interact safely and appropriately with humans in homes and factories across the country, they need to learn how to adapt to the constantly changing world around them, says Dani, a member of UConn’s electrical and computer engineering faculty.

“We’re trying to move toward human intelligence. We’re still far from where we want to be, but we’re definitely making robots smarter,” he explains.

All of the subconscious observations and moves we humans take for granted when we interact with others and travel through the world have to be taught to a robotic machine.

When you think about it, simply getting a robot to pick up a cup of water (without crushing it) and move it to another location (without spilling its contents or knocking things over) is an extraordinarily complex task. It requires visual acuity, a knowledge of physics, fine motor skills, and a basic understanding of what a cup looks like and how it is used.

“We’re teaching robots concepts about very specific situations,” says Harish Ravichandar, the senior Ph.D. student in Dani’s lab and a specialist in human-robot collaboration. “Say you’re teaching a robot to move a cup. Moving it once is easy. But what if the cup is shifted, say, 12 inches to the left? If you ask the robot to pick up the cup and the robot simply repeats its initial movement, the cup is no longer there.”

Repetitive programs that work so well for assembly-line robots are old school. If anything on the line changes, the line has to shut down and the robot has to be reprogrammed to account for the change, an inefficient process that costs manufacturers money. A collaborative robot, by contrast, constantly processes new information coming in through its sensors and quickly determines what it needs to do to safely and efficiently complete a task. Hence the thinking robot this team is trying to create.

While the internet is filled with mesmerizing videos of robots doing backflips, jumping over obstacles, and even making paper airplanes, the UConn team’s effort at controlling robots through advanced artificial intelligence is far less flashy but potentially far more important.

Every move the UConn team wants its test robot to make starts here, says Dani, with control theory, engineering, whiteboards, and math.

“We’re writing algorithms and applying different aspects of control theory to take robot intelligence to a higher level,” says Ravichandar. “Rather than programming the robot to make one single movement, we are teaching the robot that it has an objective — reaching for and grabbing the cup. If we succeed, the robot should be able to make whatever movements are necessary to complete that task no matter where the cup is. When it can do that, now the robot has learned the task of picking something up and moving it somewhere else. That’s a very big step.”
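The difference Ravichandar describes can be illustrated with a toy sketch. The code below is purely hypothetical, not the lab’s actual algorithm: it contrasts replaying a recorded motion with a simple proportional controller that recomputes its path from wherever the cup currently sits, so the same code succeeds even after the cup is moved.

```python
import numpy as np

def replay_trajectory(waypoints):
    """Old-school approach: blindly repeat a recorded motion.
    Fails if the cup is no longer where it was during recording."""
    return list(waypoints)

def reach_goal(start, cup_position, step_size=0.1, tolerance=1e-3):
    """Objective-based approach: step toward the cup wherever it is."""
    position = np.asarray(start, dtype=float)
    goal = np.asarray(cup_position, dtype=float)
    path = [position.copy()]
    while np.linalg.norm(goal - position) > tolerance:
        # Move a fraction of the remaining distance each step.
        position += step_size * (goal - position)
        path.append(position.copy())
    return path

# The same controller reaches the cup in its original spot...
path_a = reach_goal(start=(0.0, 0.0), cup_position=(5.0, 3.0))
# ...and also after the cup has been shifted to the left.
path_b = reach_goal(start=(0.0, 0.0), cup_position=(4.0, 3.0))
```

Because the goal is an input rather than something baked into the motion, no reprogramming is needed when the world changes, which is the essence of the objective-based learning the team describes.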

While most of us are familiar with the robots of science fiction, actual robots have existed for centuries. Leonardo da Vinci wowed friends at a Milan pageant in 1495 when he unveiled a robotic knight that could sit, stand, lift its visor, and move its arms. It was a marvel of advanced engineering, using an elaborate pulley and cable system and a controller in its chest to manipulate and power its movements.

But it wasn’t until Connecticut’s own Joseph Engelberger introduced the first industrial robotic arm, the 2,700-pound Unimate #001, in 1961 that robots became a staple in modern manufacturing.

Unimates were first called into service in the automobile industry, and today, automobile manufacturers like BMW continue to be progressive leaders using robots on the factory floor. At a BMW plant in Spartanburg, South Carolina, for example, collaborative robots help glue down insulation and water barriers on vehicle doors while their human counterparts hold the material in place.

The advent of high-end sensors, better microprocessors, and cheaper and easily programmable industrial robots is transforming industry today, with many mid-size and smaller companies considering automation and the use of collaborative robots.

Worldwide use of industrial robots is expected to increase from about 1.8 million units at the end of 2016 to 3 million units by 2020, according to the International Federation of Robotics. China, South Korea, and Japan use the most industrial robots, followed by the United States and Germany.

Anticipating further growth in industrial robotics, the Obama administration created the national Advanced Robotics Manufacturing Institute, bringing together the resources of private industry, academia, and government to spark innovations and new technologies in the fields of robotics and artificial intelligence. UConn’s Robotics and Controls Lab is a member of that initiative, along with the United Technologies Research Center, UTC Aerospace Systems, and ABB US Corporate Research in Connecticut.

Manufacturers see real value in integrating collaborative robots into their production lines. The biggest concern, clearly, is safety.

There have been 39 incidents of robot-related injuries or deaths in the U.S. since 1984, according to the federal Occupational Safety and Health Administration. To be fair, none of those incidents involved collaborative robots and all of them were later attributed to human error or engineering issues.

The first human known to have been killed by a robot was Robert Williams in 1979. Williams died when he got tired of waiting for a part and climbed into a robot’s work zone in a storage area in a Ford Motor plant in Flat Rock, Michigan. He was struck on the head by the robot’s arm and died instantly. The most recent incident happened in January 2017, when an employee at a California plastics plant entered a robot’s workspace to tighten a loose hose and had his sternum fractured when the robot’s arm suddenly swung into action.

“When you have a human and a robot trying to do a joint task, the first thing you need to think about of course is safety,” says Dani. “In our lab, we use sensors that, along with our algorithms, not only allow the robot to figure out where the human is but also allow it to predict where the human might be a few seconds later.”
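A rough sketch of what such short-horizon prediction can look like, using a hypothetical constant-velocity model that is far simpler than the lab’s actual algorithms: given two recent sensor readings of a person’s position, extrapolate where they might be a few seconds later.

```python
def predict_position(p_prev, p_now, dt, horizon):
    """Constant-velocity extrapolation: assume the person keeps
    moving at the speed observed between the last two readings."""
    vx = (p_now[0] - p_prev[0]) / dt
    vy = (p_now[1] - p_prev[1]) / dt
    return (p_now[0] + vx * horizon, p_now[1] + vy * horizon)

# Two readings taken 0.5 s apart; where might the worker be in 2 s?
future = predict_position((1.0, 0.0), (1.2, 0.1), dt=0.5, horizon=2.0)
# The robot can then plan its motion to keep clear of that region.
```

Real systems fuse many sensor readings and richer motion models, but the principle is the same: react not just to where the person is, but to where they are headed.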

One way to do that is to teach robots the same assembly steps taught to their human counterparts. If the robot knows the order of the assembly process, it can anticipate its human partners’ next moves, thereby reducing the possibility of an incident, Dani says. Knowing the process would also allow robots to help humans assemble things more quickly if they can anticipate an upcoming step and prepare a part for assembly, thus improving factory efficiency.

“Humans are constantly observing and predicting each other’s movements. We do it subconsciously,” says Ravichandar. “The idea is to have robots do the same thing. If the robot sees its human partner performing one step in an assembly process, it will automatically move on to prepare for the next step.”
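A minimal sketch of that idea, assuming a hypothetical fixed assembly sequence (the step names and the hard-coded list are invented for illustration; the real system works from sensor data): once the robot recognizes which step its human partner is performing, it can look up the next one and prepare for it.

```python
# A hypothetical, known-in-advance assembly sequence.
ASSEMBLY_STEPS = ["fetch bracket", "align bracket", "insert bolts", "tighten bolts"]

def next_step(observed_step):
    """Given the step the human is performing, return the step
    the robot should prepare for next (or None at the end)."""
    i = ASSEMBLY_STEPS.index(observed_step)
    if i + 1 < len(ASSEMBLY_STEPS):
        return ASSEMBLY_STEPS[i + 1]
    return None

# Seeing the human align the bracket, the robot stages the bolts.
prepared = next_step("align bracket")
```

Knowing the process order is what lets the robot both stay out of the way and have the right part ready, the two benefits Dani describes.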

Which brings us back to the whiteboards. And the math.

Failure is always an option. But when the math finally works, Ravichandar says, the success is exhilarating.

“Once you have the math figured out, it’s the best feeling because you know what you want the robot to do is going to work,” Ravichandar says with an excited smile.

“Implementing it is a whole other challenge,” he adds quickly, his passion for his work undiminished. “Things never work the first time. You have to constantly debug the code. But when you finally see the robot move, it is great because you know you have translated this abstract mathematical model into reality and actually made a machine move. It doesn’t get any better than that.”

With an eye on developing collaborative robotics that will assist with manufacturing, Dani and his team spent part of the past year teaching their lab’s test robot to identify tools laid out on a table so it can differentiate between a screwdriver, for example, and a crescent wrench, even when the tools’ initial positions are rearranged. Ultimately, they hope to craft algorithms that will help the robot work closely with a human counterpart on basic assembly tasks.

Another member of the team, Ph.D. candidate Gang Yao, is developing programs that help a robot track objects it sees with its visual sensors. Again, a robot has to learn things we humans take for granted, such as telling the difference between a bird and a drone flying above the trees.

Building advanced artificial intelligence doesn’t happen overnight. Ravichandar has been working on his projects for more than three years. It is, as they say, a process. Yet the team has learned to appreciate even the smallest of advances, and late last year, Ravichandar flew to California to present some of the lab’s work to an interested team at Google.

“C-3PO is a protocol droid with general artificial intelligence,” says Ravichandar. “What we are working on is known as narrow artificial intelligence. We are developing skills for the robot one task at a time and designing algorithms that guarantee that whatever obstacles or challenges the robot encounters, it will always try to figure out a safe way to complete its given task as efficiently as it can. With generalized intelligence, a robot brings many levels of specific intelligence together and can access those skills quickly on demand. We’re not at that point yet. But we are at a point where we can teach a robot a lot of small things.”

Inevitably, as robots gain more and more human characteristics, people tend to start worrying about how much influence robots may have on our future.

Robots certainly aren’t going away. Saudi Arabia recently granted a robot named Sophia citizenship. Tesla’s Elon Musk and DeepMind’s Mustafa Suleyman are currently leading a group of scientists calling for a ban on autonomous weapons, out of concern for the eventual development of robots designed primarily to kill.

Although it doesn’t apply directly to their current research, Dani and Ravichandar say they are well aware of the ethical concerns surrounding robots with advanced artificial intelligence.

Ravichandar says the problem is known in the field as “value alignment,” where developers try to make sure the robot’s core values are aligned with those of humans. One way of doing that, Ravichandar says, is to build in a safety mechanism: making sure the robot understands that the best solution it can come up with for a problem might not always be the right answer.

“The time is coming when we will need to have consensus on how to regulate this,” says Ravichandar. “Like any technology, you need to have regulations. But I think it’s absolutely visionary to inject humility into robots, and that’s happening now.”

That’s good news for the rest of us, because killer robots certainly are not the droids we’re looking for.


originally written by Colin Poitras