San Francisco-based Rapid Robotics continues to evolve.
The company's message remains the same: solve the labor crisis and help manufacturing companies thrive.
The vision, specifically of the machine variety, is where the biggest change has occurred.
RaaS to RapidID
When Rapid Robotics was founded in 2019, it was primarily a robotics-as-a-service (RaaS) provider and integrator for companies looking to automate portions of their manufacturing processes. But by 2023, spurred in large part by the COVID-19 pandemic, the company shifted its focus from hardware to software.
“The pandemic gave us the ability to deploy all these robots because everybody needed them,” said Kim Losey, CEO of Rapid Robotics. “We figured out how to use primarily cobots at that time to do all kinds of machine tending tasks, from pad printing PCR chips for COVID-19 tests, to packing pharmaceuticals into boxes, dispensing pills into jars and putting embalming fluid in boxes.”
Losey noted that the market and demand for automation helped drive revenue for Rapid, but created other issues that made her and the rest of the company rethink how they approached their underlying hypothesis of solving the labor crisis.
“We proved speed, we proved service and support, and we proved that we could do the custom engineering and design required to make them successful,” she said. “What we didn't do was focus enough on the technology that would really allow us to scale beyond deploying hundreds of robots to deploying thousands or tens of thousands of robots. That was really the shift. It’s not so much a shift in vision, but more a shift in how we do it.”
That shift from “Rapid 1.0,” as CTO Tom Hummel called it, saw Rapid Robotics stop deploying robots in 2023 (although the company still supports those customers) and focus its efforts on its RapidID software and the underlying machine vision, machine learning and AI technologies to “give robots human-like instinct.”
Palletizing served as the start for RapidID
“The initial application for this to test out the infrastructure was palletizing,” Hummel said. “Boxes were actually the first things onboarded. Then we tried other simple shapes, like cans, other types of boxes, but it’s all really customer driven.”
As Rapid added customers, it also added to its internal database of objects and shapes. Hummel noted that one of the beauties of the system is that Rapid Robotics doesn’t need a massive object database before RapidID goes to market. Onboarding objects, in the grand scheme of this extensive and ever-evolving technology, is actually the “easy” part, he said.
“That's the beauty of modern machine learning approaches,” Hummel added. “The modern way to do it now is called object intelligence. Instead of figuring out what I need to pick in a bin of objects, let's learn everything about this object first. Once I know everything about that object, then I can make a decision around it. And it's far easier to learn about an object than it is to teach a robot to grasp it.”
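Hummel’s description of object intelligence suggests a model-first workflow: learn an object once at onboarding, store what was learned, and make pick decisions by reading from that stored model rather than reasoning from scratch in the bin. The sketch below illustrates that idea in Python; the class and field names are hypothetical and not drawn from RapidID itself.

```python
from dataclasses import dataclass, field


@dataclass
class ObjectModel:
    """Everything the system learns about an object at onboarding time (hypothetical fields)."""
    name: str
    dimensions_mm: tuple[float, float, float]  # length, width, height
    weight_g: float
    grasp_points: list[tuple[float, float, float]] = field(default_factory=list)


class ObjectLibrary:
    """Central registry: onboard an object once, reuse that knowledge for every pick."""

    def __init__(self) -> None:
        self._models: dict[str, ObjectModel] = {}

    def onboard(self, model: ObjectModel) -> None:
        # "Onboarding" is the easy part: store what was learned about the object.
        self._models[model.name] = model

    def plan_pick(self, name: str) -> tuple[float, float, float]:
        # Decisions are made against the stored model, not re-learned per pick.
        model = self._models[name]
        if not model.grasp_points:
            raise ValueError(f"No grasp points onboarded for {name}")
        return model.grasp_points[0]


library = ObjectLibrary()
library.onboard(ObjectModel("shipping_box_small", (300.0, 200.0, 150.0), 450.0,
                            grasp_points=[(150.0, 100.0, 150.0)]))
print(library.plan_pick("shipping_box_small"))
```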
This major company overhaul, from robot hardware to a machine vision software platform, coincided with the emergence of higher-quality but cheaper 3D scanners, cameras and AI.
“If we go back almost five years to when Rapid started, 3D vision wasn't what it is today,” Losey said. “I don't even know that we could have done this and if we could have achieved it. The technology just wasn't there.”
With this perfect storm of cheaper but better technology and more mature AI, machine learning and machine vision, Rapid Robotics placed RapidID front and center.
“Advanced 3D vision cameras were probably 10 times more expensive, five times at least, more expensive than they are today,” Losey said. “Generative AI really wasn't as prolific as it is today. It's very difficult to be able to use traditional machine learning to be able to handle that kind of variability. We believe this convergence of opportunity all at one time… brought everything all together that made RapidID possible.”
Shape matching, completion create the vision
“We perform things called shape matching,” Hummel said. “If we know that we have a bin full of ‘Object As’ and ‘Object Bs’ because we've deployed this system at a customer site, we know that we're only grabbing Object A and Object B. We're only finding these two objects.”
The overarching object library that Rapid maintains gives the machine learning system context for making inferences. And because the system is trained on each customer’s specific objects, deployments of the platform go more quickly.
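One way to read Hummel’s shape-matching example is as a constrained match: compare a scanned shape descriptor only against the handful of objects known to be onboarded at that customer site. The rough sketch below assumes a simple feature-vector descriptor and a distance threshold; it is an illustration of the constrained-search idea, not a description of how RapidID actually represents shapes.

```python
import math

# Hypothetical shape descriptors (e.g., normalized dimensions plus curvature stats)
# for the only objects deployed at this customer site.
SITE_OBJECTS = {
    "object_a": [0.30, 0.20, 0.15, 0.05],
    "object_b": [0.12, 0.12, 0.25, 0.40],
}


def euclidean(a: list[float], b: list[float]) -> float:
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


def match_shape(observed: list[float],
                candidates: dict[str, list[float]] = SITE_OBJECTS,
                max_distance: float = 0.2) -> str | None:
    """Return the best-matching known object, or None if nothing is close enough.

    Constraining the search to the site's onboarded objects keeps matching fast
    and avoids confusing a scan with objects that cannot be present in the bin.
    """
    best_name, best_dist = None, float("inf")
    for name, descriptor in candidates.items():
        dist = euclidean(observed, descriptor)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= max_distance else None


# A scan whose descriptor sits closest to object_a.
print(match_shape([0.29, 0.21, 0.16, 0.06]))  # -> "object_a"
```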
“Pick-and-place is never really just pick-and-place,” he said. “There's always some intermediate step or some wrinkle that requires a little bit more attention.”
As the technology continues to evolve, Rapid is keeping its eye on different industries that are ripe for automation. Losey specifically highlighted food and beverage and consumer packaged goods as prime examples that could benefit from RapidID.
“Automation was always standard for the upstream process in these industries, but not so standard for the downstream process,” she said. “I think that's where advances in vision and perception are really going to help make that more digestible.”
3D-printed grippers add customization for customers
Along with the machine vision and AI technology behind RapidID, the company also 3D prints custom grippers for its deployments, achieving the precision required to identify, pick and place objects into and out of boxes.
“3D printing technology allows us to create the geometries that are specifically required for that task,” Hummel said. “The only real expense is the engineering time to make that, but we've managed to be pretty good at constraining that amount of time.”
The end result is end-of-arm tooling (EOAT) combined with a software platform that can identify and grasp objects to within a few millimeters. That level of precision lets the system work in a variety of environments, including for a current Rapid customer that handles stuffed animals.
“The grippers usually only have to handle 10 or less things,” Hummel said. “It basically means that we can really tune the gripper in a reasonable amount of time, like a week or two, to do that task very successfully.”
Whether RapidID is grasping stuffed animals, cans, embalming fluid or anything in between, the simple overarching message remains the same.
“Stuff goes in boxes, travels in boxes and out of boxes all around the world every day and on pallets,” Losey said. “It's not that different, putting something into a box, taking something out of a box, sorting something. It's all very similar. That's where we're thinking about RapidID and the platform is what makes that possible.”
Want to learn more about machine vision? This article was featured in the August 2024 Robotics 24/7 Special Focus Issue titled “Machine vision to increase robot precision.”