The vast scope of robotics innovation is enabling machines to not only relieve humans of tedious tasks but also perform at superhuman levels. From picking operations in warehouses to complex surgical procedures, robots are becoming more dexterous and autonomous. And with artificial intelligence, they can operate more consistently and with fewer errors over time. Vision technology is enabling a host of new robotics applications.
Robots and drones might be able to spot anomalies or defects, but they need contextual judgment to know what to do next. Similarly, can robotic vision discern aesthetically pleasing scenes and then snap a photo?
That's precisely what a graduate student at Cornell University is working on. Hadi AlZayer came up with the idea during a nature walk. It led to AutoPhoto, a robotic system developed with other researchers at the Cornell Ann S. Bowers College of Computing and Information Science. The system allowed a robot to survey an interior space and take photographs that people would find pleasing.
The system used AI and machine learning to build what Cornell researchers called a “learned aesthetic” model. A robot would visually document a space by taking “smart” photographs.
With AutoPhoto, a robot could travel around a real estate property to promote it for sale. Or, it could observe dangerous areas in an industrial setting or gather surveillance data for security purposes.
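The core loop described above can be sketched in a few lines. This is a hypothetical illustration, not Cornell's actual AutoPhoto code: the robot samples candidate camera poses, scores each with an aesthetic model, and photographs from the highest-scoring view. The `aesthetic_score` function here is a simple stand-in for the learned model.

```python
# Hypothetical sketch of AutoPhoto-style viewpoint selection (not the
# researchers' implementation). A stand-in scoring function replaces
# the learned aesthetic model.
from dataclasses import dataclass

@dataclass
class Pose:
    x: float
    y: float
    yaw: float  # heading in degrees

def aesthetic_score(pose: Pose) -> float:
    """Stand-in for a learned aesthetic model: favors views near an
    assumed room center, facing an assumed feature wall (illustrative)."""
    center_penalty = (pose.x - 2.0) ** 2 + (pose.y - 1.5) ** 2
    heading_bonus = 1.0 - abs(pose.yaw - 90.0) / 180.0
    return heading_bonus - 0.1 * center_penalty

def best_viewpoint(candidates: list[Pose]) -> Pose:
    """Greedy selection: evaluate every candidate pose, pick the best."""
    return max(candidates, key=aesthetic_score)

candidates = [Pose(0.0, 0.0, 0.0), Pose(2.0, 1.5, 90.0), Pose(4.0, 3.0, 180.0)]
print(best_viewpoint(candidates))  # the pose the robot would photograph from
```

A real system would pair this selection step with navigation and collision avoidance; the sketch only covers the choose-the-best-view decision.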
Robotic vision on the farm
Moving outdoors, robotic vision systems could assist with agricultural tasks such as pruning apple trees, estimating fruit yields, thinning out crops, or picking mushrooms.
Cornell University researchers have developed technology that could make a big difference in the growing of grapes. Powdery mildew is a harmful fungus that invades the leaves of wine and table grapes. It leaves white spores on leaves and fruit.
To counter the fungus, growers must precisely and consistently apply fungicides. The whole process of finding and treating such leaves to kill the fungus is laborious and costly.
The Cornell scientists developed robot prototypes capable of scanning grape leaf samples automatically, using a robotic camera they named “BlackBird.” The robot can gather information “at a scale of 1.2 micrometers per pixel—equivalent to a regular optical microscope. For each 1-cm leaf sample being examined, the robot provides 8,000 by 5,000 pixels of information.”
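A quick sanity check shows the quoted figures are consistent: at 1.2 micrometers per pixel, an 8,000 x 5,000 pixel scan covers roughly 9.6 mm x 6 mm, on the order of the 1-cm leaf sample.

```python
# Sanity check of the BlackBird figures quoted above: resolution
# times pixel count gives the physical area covered by one scan.
UM_PER_PIXEL = 1.2
width_px, height_px = 8000, 5000

width_mm = width_px * UM_PER_PIXEL / 1000.0    # 9.6 mm
height_mm = height_px * UM_PER_PIXEL / 1000.0  # 6.0 mm
megapixels = width_px * height_px / 1e6        # 40.0 MP per scan

print(f"{width_mm} mm x {height_mm} mm, {megapixels} MP")
```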
In addition, researchers at Penn State have developed a machine vision system that allows robots to see objects while on the move. It uses a powerful LED flash that rapid-fires on a target so that an image can be captured without blur or other disturbances, such as vibration from passing over rough terrain.
The researchers stated that the technology could expedite some processes in challenging agricultural environments.
“Artificial intelligence does well with images that are really rich with information, so the important thing is capturing high-quality images,” said Omeed Mirbod, a doctoral student in agricultural and biological engineering. “For agriculture, we need images that are invariant to outdoor lighting conditions.”
“If you capture an image in which a fruit is very saturated with light due to the sun, and then capture another one in shadow where there is little sunlight, the artificial intelligence that you're training to detect the fruit might struggle to identify it,” he said.
Chess player a first move to smarter robots
Machine vision can also help robots with more entertaining things, like chess. Even games can develop and demonstrate capabilities that lead to more demanding use cases for robotics and AI.
Many innovations begin with the interest of a single individual with unique curiosity. That's what happened at the Rose-Hulman Institute of Technology in Terre Haute, Ind., where a mechanical engineering student and chess aficionado saw a relationship between AI, robots, and problem solving.
Using trial and error and “principled AI,” Josh Eckels put a robot to work to find solutions to chess problems, with the intent that the underlying technology could solve problems in other scenarios.
His system uses a camera mounted about 36 in. above a chessboard to watch the pieces and their movements. The camera captures each move and feeds it into an AI-based chess engine that determines the next best move.
But the model is physical: a gripper then moves the piece per the engine's direction, and the robot waits for its human opponent's next move (see video below).
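The vision-to-engine handoff can be sketched as a board-state diff. This is a hypothetical illustration, not Eckels' actual code: assume the overhead camera yields a map of occupied squares and piece colors before and after the human's move, and diff the two maps to recover the move that gets fed to the engine.

```python
# Hypothetical sketch (not the Rose-Hulman implementation): recover the
# human's move by diffing two camera-derived occupancy maps of
# square -> piece color ('w' or 'b').
def detect_move(before: dict[str, str], after: dict[str, str]) -> tuple[str, str]:
    """Infer (from_square, to_square). Covers plain moves and captures;
    castling and en passant would need extra cases in a real system."""
    sources = [sq for sq in before if sq not in after]                 # vacated
    targets = [sq for sq, c in after.items() if before.get(sq) != c]   # new occupant
    if len(sources) != 1 or len(targets) != 1:
        raise ValueError("ambiguous board change; rescan the frame")
    return sources[0], targets[0]

# Plain pawn push: e2 -> e4
print(detect_move({"e2": "w", "e7": "b"}, {"e4": "w", "e7": "b"}))  # ('e2', 'e4')
# Capture: the white piece on e4 takes on d5
print(detect_move({"e4": "w", "d5": "b"}, {"d5": "w"}))             # ('e4', 'd5')
```

The recovered move would then be passed to a chess engine for a reply, and the engine's answer sent to the gripper.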
The innovation behind this approach isn't just about chess; it's about how cognitive AI can guide a robot's physical reactions.
Vision provides power for robotics
University researchers are imaginatively exploring ways in which machine vision can inform robotic movement, manipulation, and task planning, from interior photography and crop tending to chess.
This is just a sampling of the ways in which researchers are combining vision technology with robotics to produce new capabilities. Despite their use in many manufacturing, logistics, and warehouse tasks, vision-guided robots are by no means mature, with many more applications to come.
Follow Robotics 24/7 on LinkedIn