Machine vision is a mature technology with established incumbents. However, significant advancements in chipsets, software, and standards are bringing deep learning innovation into the machine vision sector.
According to a recent analysis by global tech market advisory firm ABI Research, total shipments of machine vision sensors and cameras will reach 16.9 million by 2025, creating an installed base of 94 million machine vision systems in industrial manufacturing. Of that installed base, 11% will be deep learning-based.
Machine vision systems are a staple in production lines for barcode reading, quality control, and inventory management. “These solutions often have long replacement cycles and are less prone to disruption. Due to the increasing demand for automation, machine vision is finding its way into new applications,” said Lian Jye Su, Principal Analyst at ABI Research. “Robotics, for example, is a new growth area for machine vision: Collaborative robots rely on machine vision for guidance and object classification, while mobile robots rely on machine vision for SLAM [Simultaneous Localization and Mapping] and safety.”
A different breed from conventional machine vision technology, deep learning-based machine vision is data-driven and takes a statistical approach, allowing the machine vision model to improve as more data is gathered for training and testing. Major machine vision vendors have realized the potential of deep learning-based machine vision. Cognex, for example, acquired SUALAB, a leading South Korean developer of deep learning vision software for industrial applications, and Zebra Technologies acquired Cortexica Vision Systems Ltd., a London-headquartered developer of business-to-business (B2B) AI-based computer vision solutions.
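To make the data-driven nature of this approach concrete, the sketch below shows a small defect classifier whose accuracy depends entirely on the labeled inspection images it is trained on. It is a minimal illustration, assuming a PyTorch-style workflow; the network layout, class labels (“ok”/“defect”), and the placeholder batch are hypothetical, not drawn from the report.

```python
# Minimal sketch: a data-driven defect classifier that improves as more
# labeled inspection images are gathered. PyTorch is assumed; the class
# names and the placeholder batch below are illustrative only.
import torch
import torch.nn as nn

class DefectClassifier(nn.Module):
    """Small CNN mapping a grayscale inspection image to ok/defect logits."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

def train_step(model, optimizer, images, labels):
    """One gradient update; model quality is driven by the data it sees."""
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    model = DefectClassifier()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    # Placeholder batch standing in for real labeled inspection images.
    images = torch.randn(8, 1, 64, 64)
    labels = torch.randint(0, 2, (8,))
    print("training loss:", train_step(model, optimizer, images, labels))
```

In production, the same loop would simply be re-run as new labeled images arrive from the line, which is what lets the statistical model keep improving over time.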
At the same time, chipset vendors are launching new chipsets and software stacks to facilitate the implementation of deep learning-based machine vision. Xilinx, a Field Programmable Gate Array (FPGA) vendor, partnered closely with camera sensor manufacturer Sony and camera vendors such as Framos and IDS Imaging to incorporate its Versal ACAP System on Chip (SoC). Intel, on the other hand, offers OpenVINO, which lets developers deploy pre-trained deep learning-based machine vision models through a common API and run inference on a range of computing architectures. Another FPGA vendor, Lattice Semiconductor, focuses on low-power Artificial Intelligence (AI) for embedded vision through its sensAI stack, which offers hardware accelerators, software tools, and reference designs. These technology stacks aim to ease development and deployment challenges and create platform stickiness.
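As a rough illustration of the “common API” idea, the sketch below loads a pre-trained model and runs it through OpenVINO’s Python runtime, where only the device name changes between targets. The model path, input shape, and device string are placeholders, and the interface shown follows the openvino.runtime style, which may vary between OpenVINO releases.

```python
# Minimal sketch: running a pre-trained vision model through OpenVINO's
# Python API so the same code can target CPU, GPU, or other accelerators.
# The IR model path and input shape are placeholders.
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("defect_classifier.xml")          # pre-trained IR model (placeholder)
compiled = core.compile_model(model, device_name="CPU")   # swap "CPU" for "GPU", etc.

frame = np.zeros((1, 3, 224, 224), dtype=np.float32)      # stand-in for a camera frame
results = compiled([frame])                               # run inference
logits = results[compiled.output(0)]
print("predicted class:", int(np.argmax(logits)))
```

The point of such stacks is that the model and application code stay the same while the deployment target changes, which is also what creates the platform stickiness mentioned above.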
On the standards front, vendors are bringing 10GigE (Gigabit Ethernet) and 25GigE cameras into industrial applications. Continual upgrades to video capture and compression technologies also deliver better image and video quality for deep learning-based machine vision models, helping to futureproof machine vision systems. “Therefore, when choosing machine vision systems, end implementers need to understand their machine vision requirements, consider integration with their backend system, and identify the right ecosystem partners. Deployment flexibility and future upgradability and scalability will be crucial as machine vision technology continues to evolve and improve,” concludes Su.
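A back-of-the-envelope calculation shows why these higher-bandwidth interfaces matter for image quality. The camera parameters below are illustrative, not taken from the report: a 12 MP sensor at 12-bit depth and 60 frames per second already saturates a 1GigE link but fits comfortably within 10GigE.

```python
# Back-of-the-envelope sketch: uncompressed bandwidth of an industrial camera
# versus common GigE Vision link speeds. All camera parameters are illustrative.
def raw_bandwidth_gbps(width: int, height: int, bit_depth: int, fps: int) -> float:
    """Uncompressed video bandwidth in gigabits per second."""
    return width * height * bit_depth * fps / 1e9

bw = raw_bandwidth_gbps(width=4096, height=3000, bit_depth=12, fps=60)
print(f"required: {bw:.1f} Gbit/s")   # roughly 8.8 Gbit/s for this example
for name, capacity_gbps in [("1GigE", 1), ("10GigE", 10), ("25GigE", 25)]:
    status = "fits" if bw <= capacity_gbps else "exceeds link"
    print(f"{name}: {status}")
```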
These findings are from ABI Research’s Machine Vision in Industrial Applications Application Analysis report. This report is part of the company’s Artificial Intelligence and Machine Learning research service, which includes research, data, and analyst insights. Based on extensive primary interviews, Application Analysis reports present in-depth analysis of key market trends and factors for a specific technology.