The need for order-fulfillment automation is rising rapidly year over year, and the COVID-19 pandemic has only added urgency to this trend. Customers expect ultra-fast deliveries, while e-commerce retailers struggle with a serious shortage of workers and rising labor costs. Machine vision, artificial intelligence, and mobile robots are among the technologies helping warehouses, “dark stores,” and distribution centers consolidate orders and quickly, efficiently, and flawlessly prepare them for delivery.
When it comes to warehouse automation, the sorting of parcels for shipment is a key part of successful order fulfillment. Robotics can help optimize the flow of goods, keep track of all parcels, and prevent chaos and lost parcels.
Orders often need to be sorted according to specific criteria and placed into different containers for delivery. This task requires the right machine vision system that can provide high-quality 3D data to recognize each parcel, pick it, and place it into a destination bin based on the logistics service provider or other sorting methods.
Machine vision tradeoffs in parcel sorting
There are a number of systems to choose from for accurate object recognition and precise robot navigation. For instance, structured light 3D scanners are very popular, as they can provide submillimeter resolution and high accuracy. However, they have a major limitation—they can be used only for scanning static scenes.
If the sensor or the scene moves during the scanning process, the 3D scan will be distorted. This means that to use a structured light vision system for sortation, the parcels must sit on a carrier that can be stopped for scanning, such as a container or pallet, so that their coordinates can be captured and sent to the robot for picking and sorting.
But what happens when parcels are placed on a moving conveyor belt from which they need to be picked? How can parcels be captured in motion?
Traditionally, vision experts would tend to opt for time-of-flight (ToF) systems, which offer very fast scanning speed and data acquisition. However, they also have a limitation: Their scanning speed comes at the cost of lower resolution and higher noise levels.
It is this trade-off between the quality and speed of a vision system that forces integrators to accept compromises that have a negative impact on the whole automation system. Limiting automation to static scenes decreases efficiency and productivity, prolongs cycle times, and in the end shrinks the range of tasks that can be performed by a robot.
Parallel Structured Light: a novel approach
To solve this challenge, Photoneo has developed a novel 3D approach. Its patented “Parallel Structured Light” changes how a scene can be captured.
Parallel Structured Light combines the unique qualities of ToF and structured light systems, according to the company. It is able to capture fast-moving objects while providing their 3D reconstruction in unprecedented resolution and accuracy, all without motion artifacts.
The approach enables 3D scanning of objects moving at up to 89 mph (143.2 km/h), providing a resolution of 0.9 megapixels and an accuracy of 300 to 1,250 µm. For static scenes, the figures improve further, to 2 megapixels and an accuracy of 150 to 900 µm, respectively.
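As a quick sanity check on the quoted figures, the speed conversion can be verified with simple arithmetic (the mile-to-kilometer factor is exact by international definition):

```python
# Verify that 89 mph corresponds to the quoted 143.2 km/h.
# 1 mile = 1.609344 km (exact, by international definition).
mph = 89
kph = mph * 1.609344
print(round(kph, 1))  # 143.2
```

The quoted pairing of 89 mph with 143.2 kph is therefore internally consistent.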
This new machine vision technology enables new types of robotic applications across all sectors, including order fulfillment for warehouses, e-commerce, and its hybrid forms like q-commerce (quick commerce) or e-grocery.
Sorting of parcels in motion
The Parallel Structured Light technology could transform the automation of sortation by allowing for the critical aspect of motion. For an illustration, let’s take a look at a Photoneo implementation for a major e-commerce retailer in Central Europe that fulfills more than 100,000 SKUs daily.
It starts with parcels moving on a conveyor belt toward a robot to be picked and sorted into different containers according to the delivery carrier. When they come within reach of a Parallel Structured Light 3D camera installed above the conveyor belt, it scans them in motion.
Specially designed software then localizes each parcel individually. The localization system defines the right gripping point for the robotic arm equipped with a custom-made vacuum gripper and navigates the robot to make a perfect pick in motion, without the need to stop.
Simultaneously, a scanner reads the QR code of each parcel and sends the data to the system so that the robot can sort the parcels accordingly. All this happens without human intervention, but the whole process can be visualized and controlled through a human-machine Web interface.
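The control flow just described (localize each parcel, decode its label, route it to a carrier-specific container) can be sketched in a few lines. This is a minimal illustration only: the names (`Parcel`, `carrier_to_bin`, `sort_parcel`) and the routing table are assumptions for the sketch, not Photoneo's actual software or API.

```python
# Minimal sketch of the sort-in-motion decision logic described above.
# All names and values here are illustrative assumptions, not vendor code.
from dataclasses import dataclass


@dataclass
class Parcel:
    qr_payload: str    # carrier code decoded from the parcel's QR label
    grip_point: tuple  # (x, y, z) picked by the localization software


# Hypothetical routing table: carrier code -> destination container number
carrier_to_bin = {"DHL": 1, "UPS": 2, "GLS": 3}


def sort_parcel(parcel: Parcel) -> int:
    """Return the destination bin; -1 routes unreadable labels to manual handling."""
    return carrier_to_bin.get(parcel.qr_payload, -1)


belt = [
    Parcel("DHL", (0.10, 0.42, 0.05)),
    Parcel("GLS", (0.31, 0.40, 0.07)),
    Parcel("???", (0.55, 0.41, 0.04)),  # QR code could not be decoded
]
print([sort_parcel(p) for p in belt])  # [1, 3, -1]
```

In a real cell the gripping point would drive the robot's motion planner and the fallback branch would divert the parcel to a manual-handling lane rather than return a sentinel value.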
The Parallel Structured Light technology enables a continuous workflow without interruption, resulting in higher levels of productivity and throughput compared with standard machine vision technologies that require pauses to make each scan.
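The throughput advantage of a continuous workflow can be made concrete with a back-of-envelope model. The cycle times below are assumed round numbers for illustration, not measured figures from the deployment described above:

```python
# Back-of-envelope throughput comparison: stop-and-scan vs. scan-in-motion.
# Both cycle times are illustrative assumptions, not vendor data.
handle_s = 2.0      # assumed robot pick-and-place time per parcel, seconds
scan_pause_s = 1.0  # assumed belt stoppage per scan in a static-scene setup, seconds

stop_and_scan = 3600 / (handle_s + scan_pause_s)  # parcels per hour with pauses
in_motion = 3600 / handle_s                       # parcels per hour, no pauses

print(round(stop_and_scan), round(in_motion))  # 1200 1800
```

Even a one-second pause per scan cuts hourly throughput by a third in this toy model, which is the intuition behind the productivity claim.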
Whatever state-of-the-art system the customer chooses, fully automated sorting of parcels can increase throughput, decrease error rates, and reduce labor costs. Robotics and machine vision can also improve productivity and efficiency and ultimately shrink the space required for an application.
About the authors:
Svorad Stolc is chief technology officer of Photoneo s.r.o.'s Sensors business unit, and Andrea Pufflerova is public relations specialist at the Bratislava, Slovakia-based company, which raised $21 million in Series B funding last month.