Title: Professor, University of California at Berkeley
Location: Berkeley, Calif.
Primary Focus: For the past several years, Abbeel has focused on artificial intelligence as a professor at the University of California at Berkeley. Until a few months ago, he was a researcher at Elon Musk's OpenAI lab. More recently, he co-founded Embodied Intelligence with three researchers from OpenAI and Berkeley. The company teaches robots how to pick parts and build assemblies using virtual reality and artificial intelligence.
Modern: There are plenty of tech startups out there, but it isn’t every day that The New York Times writes a story about one of them less than a month after launch. Can you get us up to speed on what you’re doing?
Abbeel: We’re trying to take robots to the next level in terms of their capabilities. It’s only recently that artificial intelligence, or AI, has advanced enough to make any of this possible. Traditional robot programming is very time-consuming, and it’s very limited in its capabilities. You program a specific action or series of actions, but the robot is limited to those activities or range of motions. The robot can’t go beyond them and perform new tasks without new programming.
We’re trying to go beyond this by using completely different technologies. For instance, the objective could be to teach the robot how to pick a specific part from a jumble of other parts. We both know the part is not always going to be in the same place or in the same position every time, and the clutter around it will be different every time, too. So, we’re using virtual reality, or VR, to first teach the robot the basic action, and then using AI to allow the robot to learn on its own how to pick that same part each time no matter what surrounds it. We call that the environment.
Modern: How exactly do VR and AI work together? It’s so different from how robots work today.
Abbeel: Robots need to be smarter than they are. They need to learn how to react to situations, not just perform the same action again and again. They need to adapt to their environment. So, we’re giving robots senses. That might be in the form of audio and/or tactile sensors. It can also be eyes, as in a vision system. Let’s assume we have a robot with a vision system. We start with a person wearing an off-the-shelf, commercial VR system. The person holds a teaching pendant and performs the picking action. The pendant tracks the motion of the person and feeds it to a neural net, which learns from these demonstrations how the robot’s motion should depend on the configuration of the environment, which will be different every time. At this point, the person needs to demonstrate the action 100 to 1,000 times, depending on the environment, for the neural net to collect enough information about the motion. By the way, it isn’t difficult to perform an action 100 or more times in an hour, so this doesn’t take much time compared to traditional programming. And we usually have an 80% to 90% success rate.
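The demonstration-to-policy loop Abbeel describes is, at its core, supervised learning from demonstrations (often called behavioral cloning). The sketch below is a minimal illustration of that idea, not Embodied Intelligence's actual system: it assumes a simple linear policy fit by least squares in place of their neural net, and all names and synthetic data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def collect_demonstration(rng):
    """One hypothetical VR demonstration: observed part pose -> demonstrated motion."""
    part_pose = rng.uniform(-1.0, 1.0, size=3)  # e.g. x, y, rotation of the part
    # The human's demonstrated motion depends on where the part is (plus noise).
    gripper_action = 0.8 * part_pose + 0.05 * rng.normal(size=3)
    return part_pose, gripper_action

# Step 1: a person demonstrates the pick many times (100 to 1,000 in the interview).
demos = [collect_demonstration(rng) for _ in range(500)]
X = np.array([obs for obs, _ in demos])  # environment configurations
Y = np.array([act for _, act in demos])  # demonstrated motions

# Step 2: fit a policy mapping observation -> action. A linear least-squares
# fit stands in here for the neural net used in the real system.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Step 3: at run time, the robot sees a new, never-demonstrated configuration
# and computes its own motion from the learned policy.
new_part_pose = np.array([0.3, -0.6, 0.1])
predicted_action = new_part_pose @ W
```

The point of the toy example is the structure: the pendant supplies (observation, action) pairs, and the learned mapping generalizes to part positions it was never explicitly shown.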
Modern: What exactly is the robot learning from this?
Abbeel: Using AI, we are teaching the robot to learn how to move as a function of the environment that it is working in. Using machine learning, the robot is able to adapt to the current conditions and pick the part every time regardless of anything else that might be in the way or just plain different. That’s what’s new here.
Modern: Is this all laboratory work?
Abbeel: Not at all. This is all real-world work. We work closely with quite a few companies to solve their problems. It’s about picking a specific part or item in environments that even people have challenges working within. Pharma, as well as other industries with shipping and warehousing operations, is very interested in this right now. They see the potential for giving robots a completely different set of skills, and how this can do much more than just improve their own operating efficiencies. At the same time, the technology is moving beyond the armies of PhDs who today program these tasks by hand. We are creating an AI layer that allows robots to learn new tasks in their environment without the need for human programming.