At its fall GPU Technology Conference, or GTC, this week, NVIDIA Corp. announced several new technologies intended to help robotics developers, including enhancements to the Omniverse simulation environment, the Jetson AGX Orin computer, integration with the Robot Operating System, and the NVIDIA DRIVE platform for autonomous vehicle development.
For instance, the Jetson AGX Orin features high-speed interfaces, fast memory bandwidth, and multimodal sensor support, which NVIDIA said enable it to feed multiple concurrent artificial intelligence application pipelines.
“As robotics and embedded computing transform manufacturing, healthcare, retail, transportation, smart cities, and other essential sectors of the economy, the demand for processing continues to surge,” said Deepu Talla, vice president and general manager of embedded and edge computing at NVIDIA. “Jetson AGX Orin addresses this need, enabling the 850,000 Jetson developers and over 6,000 companies building commercial products on it to create and deploy autonomous machines and edge AI applications that once seemed impossible.”
NVIDIA develops synthetic data generation
Since its launch late last year, Omniverse has been downloaded 70,000 times by designers at 500 companies, said NVIDIA. The virtual-world simulation and collaboration platform for 3D workflows can be used in product design, testing, and training. The Santa Clara, Calif.-based company said it has continued to refine its approach to generating synthetic data, which it said enables developers to scale applications more rapidly than they could with annotated real-world data.
“Adopting a data-centric approach to model development when using synthetic data is a very iterative process,” said NVIDIA. “Trained models are evaluated, and improvements in the dataset are identified. New datasets are then generated, and a new cycle of training is initiated. This process of generating data, training the model, evaluating the model, and generating more data is continued until the model performs as desired.”
“Because the data in each iteration is being generated in simulation as opposed to being collected in the real-world and subsequently labeled, the speed of model training is greatly accelerated,” the company said. “These datasets which can be generated at scale are output in a format that can be directly used by the training tools. This eliminates the need for another step of data pre-processing.”
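The generate-train-evaluate cycle NVIDIA describes can be sketched in a few lines. This is an illustrative stand-in, not NVIDIA's Replicator or Isaac Sim API; the generator, trainer, and evaluator below are deliberately trivial placeholders for the real components.

```python
import random

def generate_synthetic_dataset(size, seed):
    """Stand-in for a simulator: returns pre-labeled (sample, label) pairs."""
    rng = random.Random(seed)
    return [(rng.random(), rng.random() > 0.5) for _ in range(size)]

def train_model(dataset):
    """Stand-in trainer: the 'model' is just the fraction of positive labels."""
    positives = sum(1 for _, label in dataset if label)
    return positives / len(dataset)

def evaluate(model, dataset):
    """Stand-in metric: accuracy of always predicting the majority class."""
    majority = model >= 0.5
    correct = sum(1 for _, label in dataset if label == majority)
    return correct / len(dataset)

def data_centric_loop(target_accuracy=0.9, max_iterations=5):
    """Generate data, train, evaluate; regenerate a larger dataset until
    the model performs as desired or the iteration budget runs out."""
    dataset_size = 1000
    accuracy = 0.0
    for iteration in range(max_iterations):
        dataset = generate_synthetic_dataset(dataset_size, seed=iteration)
        model = train_model(dataset)
        accuracy = evaluate(model, dataset)
        if accuracy >= target_accuracy:
            break
        dataset_size *= 2  # refine/grow the dataset and iterate again
    return accuracy
```

Because each iteration's data is generated (and labeled) in simulation rather than collected and hand-annotated, the bottleneck in this loop is compute, not labeling time.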
Omniverse and robotics
Among its announcements at GTC, NVIDIA unveiled Omniverse Replicator, a synthetic-data generation engine for Isaac Sim for robotics and NVIDIA DRIVE for autonomous vehicles. It demonstrated a mobile robot trained with a model built in Replicator.
The company generated more than 90,000 images for the demonstration. It also used the new Omniverse Farm to manage the GPU (graphics processing unit) compute resources that created the dataset. Despite the limitations of lidar, the robot was able to avoid colliding with a forklift.
“Improving performance for challenging AI-based computer vision applications requires large and diverse datasets that replicate the inherent distribution of the target domain,” said NVIDIA. “The new NVIDIA Omniverse Replicator for Isaac Sim is a powerful tool to generate production-quality synthetic datasets to train robust deep-learning perception models.”
“The goal is for the robot to not know whether it is inside a simulation or the real world,” said Jensen Huang, founder and CEO of NVIDIA. Replicator simulates the sensors, generates data that is automatically labeled, and, with a domain randomization engine, creates rich and diverse training data sets, he explained.
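The domain randomization idea Huang describes can be illustrated with a small sketch: each generated frame varies scene parameters so the trained model sees a wide distribution, and labels come for free because the simulator knows ground truth. The parameter names below are hypothetical illustrations, not Replicator's actual API.

```python
import random

def randomize_scene(rng):
    """Sample one randomized scene configuration with an automatic label."""
    scene = {
        "light_intensity": rng.uniform(100.0, 1000.0),    # illustrative, lux
        "floor_texture": rng.choice(["concrete", "wood", "tile"]),
        "forklift_x": rng.uniform(-5.0, 5.0),             # meters
        "forklift_y": rng.uniform(-5.0, 5.0),
        "camera_height": rng.uniform(0.2, 1.5),
    }
    # The label requires no human annotation: the simulator knows
    # the forklift's true position when it renders the frame.
    scene["label_forklift_pos"] = (scene["forklift_x"], scene["forklift_y"])
    return scene

def generate_randomized_dataset(num_frames, seed=42):
    """Produce num_frames randomized, auto-labeled scene configurations."""
    rng = random.Random(seed)
    return [randomize_scene(rng) for _ in range(num_frames)]
```

A perception model trained across this kind of variation is less likely to latch onto incidental details of any one environment, which is what makes the sim-to-real transfer Huang describes plausible.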
Omniverse Enterprise is now available, starting at $9,000 a year. NVIDIA's GTC session recordings and resources are also available online to registrants.
For 10 announcements from NVIDIA GTC, see the slideshow at top right.