
Did you know robots can now be simulated and trained in the cloud? NVIDIA CEO Jensen Huang has described general-purpose robots as the next trillion-dollar industry. As a global leader in AI, 3D graphics, and simulation, NVIDIA has introduced an integrated solution that leverages its core technologies to provide a more efficient environment for robot development.
From Tesla Bot to Boston Dynamics' Atlas, from Amazon's warehouse robots to restaurant delivery bots, robotics is entering daily life faster than ever. According to a MarketsandMarkets report, the global service robotics market is projected to grow from USD 47.1 billion in 2024 to USD 98.65 billion in 2029, a compound annual growth rate (CAGR) of 15.9%. The main driving force behind this boom is the maturing of AI and deep learning. In the past, developing a robotic application could take years; now, with accelerated platforms such as NVIDIA Isaac, engineers can turn their ideas into reality within months.
In 2022, NVIDIA launched NVIDIA Omniverse, a platform integrating RTX rendering, OpenUSD and generative AI technologies. Developers can use NVIDIA Omniverse to create digital twins of robots — virtual replicas that enable robot training and testing in simulated environments, significantly reducing development time and costs.
For autonomous robotics applications, NVIDIA also introduced the Isaac platform, which combines software, hardware, and simulation tools to support training, simulation, deployment, and optimization — forming a key ecosystem for robotics development.
Leadtek's NVIDIA DLI-certified workshop, "NVIDIA Isaac for Accelerated Robotics," merges Omniverse and Isaac to teach the foundational concepts of robot development. Participants will learn to assemble and operate robots in Omniverse's virtual environment, integrate AI models, and fine-tune them to enable autonomous behaviors.
A key challenge in robotics is ensuring precise and efficient robot motion. The open-source NVIDIA Isaac ROS was designed for this purpose. Built on the widely used ROS 2 (Robot Operating System) framework, it greatly simplifies hardware control and has become a core building block for modern robotics. Isaac ROS packages are CUDA-accelerated, allowing real-time, GPU-based processing of visual data and decision-making for fast, responsive robot behavior.
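At its heart, ROS 2 (and therefore Isaac ROS) organizes a robot as nodes that exchange messages over named topics. The sketch below is a toy, framework-free illustration of that publish/subscribe pattern; `ToyBus` and the topic names are invented for this example and are not the rclpy or Isaac ROS API.

```python
# Toy illustration of the publish/subscribe pattern that ROS 2 is built on.
# Not the real rclpy API: nodes publish messages on named topics, and any
# subscriber to that topic is invoked with the message.
from collections import defaultdict

class ToyBus:
    """Minimal in-process stand-in for the ROS 2 topic graph."""
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subs[topic].append(callback)

    def publish(self, topic, msg):
        for cb in self._subs[topic]:
            cb(msg)

# A "perception" node publishes a detection; a "control" node reacts to it.
bus = ToyBus()
log = []
bus.subscribe("/detections", lambda msg: log.append(f"grasp target at {msg}"))
bus.publish("/detections", (0.4, 0.1, 0.2))
print(log[0])  # -> grasp target at (0.4, 0.1, 0.2)
```

In real ROS 2 the bus is the DDS middleware and callbacks run in node executors, but the topic-and-callback shape is the same one the workshop's control exercises build on.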
In this workshop, students begin by assembling a robotic arm from provided OpenUSD files: attaching grippers, adding actuators, and placing the completed robot in a virtual environment with Isaac Sim. The course also covers OpenUSD fundamentals and key simulation details, building a solid foundation for later testing and motion studies.
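OpenUSD composes a robot as a hierarchy of "prims" that can reference other USD files, which is what makes this kind of modular assembly possible. The hypothetical `.usda` fragment below sketches the idea of mounting a gripper onto an arm; the prim names, transform values, and `gripper.usd` path are illustrative, not the course materials.

```usda
#usda 1.0
(
    defaultPrim = "RobotArm"
)

def Xform "RobotArm" (
    kind = "assembly"
)
{
    def Xform "Link0"
    {
        double3 xformOp:translate = (0, 0, 0)
        uniform token[] xformOpOrder = ["xformOp:translate"]
    }

    def Xform "Gripper" (
        # References pull a separately authored gripper asset into this stage.
        prepend references = @./gripper.usd@
    )
    {
        # Mount the gripper 0.45 m up the arm's local Z axis (illustrative).
        double3 xformOp:translate = (0, 0, 0.45)
        uniform token[] xformOpOrder = ["xformOp:translate"]
    }
}
```

Because each part lives in its own file, swapping a gripper or actuator model means changing one reference rather than rebuilding the scene.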

The curriculum further demonstrates how to link ROS 2 with the virtual environment to achieve real-time robot control — including grasping, picking, and placing actions — and perform Software-in-the-Loop (SIL) simulations to enhance system stability and development efficiency.
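A grasp/pick/place cycle like the one above is often driven as a small state machine, which is also what makes Software-in-the-Loop testing convenient: the same sequencing logic runs against a simulated backend. The sketch below is a framework-free illustration; the state names and the success/failure model are assumptions, not the workshop's code.

```python
# Minimal pick-and-place sequencer, as one might exercise it in a
# software-in-the-loop test: each state is "executed" by a backend
# callable that reports success or failure.
SEQUENCE = ["approach", "grasp", "lift", "move", "place", "retreat"]

def run_pick_and_place(sim_step):
    """Drive the sequence; sim_step(state) -> bool simulates each action."""
    for state in SEQUENCE:
        if not sim_step(state):
            return f"failed at {state}"
    return "done"

# A trivial simulated backend that always succeeds...
print(run_pick_and_place(lambda state: True))      # -> done
# ...and one where the grasp slips, so the run aborts early.
print(run_pick_and_place(lambda s: s != "grasp"))  # -> failed at grasp
```

In an SIL setup the `sim_step` backend would be Isaac Sim answering over ROS 2 topics, but the sequencing code under test stays identical when it later drives real hardware.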
By working through module assembly and motion simulation, students will not only understand how robots operate but also gain hands-on experience in AI-powered intelligent control.
As an NVIDIA DLI course, AI integration is a key highlight. Students will use NVIDIA's Isaac Lab toolkit to deploy an autonomous mobile robot (AMR). Through map-based analysis and path planning, the robot gains AI-driven decision-making capabilities — understanding natural language commands (e.g., "Move to the next warehouse") and detecting obstacles via cameras to automatically stop or reroute without human intervention.
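The map-based planning described above can be boiled down to searching an occupancy grid for a collision-free route, and refusing to move when none exists. The toy breadth-first-search planner below illustrates that idea; it is not Isaac Lab's planner, and the grid layout is invented for the example.

```python
# Toy occupancy-grid path planner: BFS finds a shortest 4-connected route
# around obstacles, or returns None so the robot stops instead of colliding.
from collections import deque

def plan_path(grid, start, goal):
    """Return the list of cells from start to goal (1 = obstacle), or None."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}           # visited set + parent pointers
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:           # reconstruct the route by walking parents
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None                    # goal unreachable: stop rather than reroute

warehouse = [
    [0, 0, 0],
    [1, 1, 0],   # a wall of obstacles forces a detour
    [0, 0, 0],
]
print(plan_path(warehouse, (0, 0), (2, 0)))
# -> [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0)]
```

Real AMR stacks add costmaps, smoothing, and continuous replanning as cameras report new obstacles, but the core decision, detour or stop, is the same.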
To further improve AI accuracy and adapt to different environments, the course demonstrates how to train robots in Isaac Lab using Cosmos World Foundation Models (WFMs) to generate physics-accurate synthetic data. Using GR00T workflows, participants train the robot's digital twin on large volumes of precise synthetic data, refining its algorithms to shorten training time while improving stability and reliability.
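Much of the value of synthetic data comes from domain randomization: varying object poses, lighting, and layout across many samples so the trained policy generalizes. The sketch below only generates the randomized parameters that would drive a renderer; the parameter names and ranges are invented for illustration, and Cosmos WFMs operate at a far higher level, producing physics-accurate video rather than parameter lists.

```python
# Toy domain-randomization sketch: emit reproducible, randomized scene
# parameters (object pose, lighting) of the kind a synthetic-data
# renderer would consume to produce training images.
import random

def synth_grasp_samples(n, seed=0):
    """Generate n randomized (object_xy, yaw_deg, lighting) samples."""
    rng = random.Random(seed)  # seeded so a dataset can be regenerated exactly
    samples = []
    for _ in range(n):
        samples.append({
            "object_xy": (round(rng.uniform(-0.3, 0.3), 3),
                          round(rng.uniform(-0.3, 0.3), 3)),
            "yaw_deg": round(rng.uniform(0.0, 360.0), 1),
            "lighting": rng.choice(["dim", "neutral", "bright"]),
        })
    return samples

dataset = synth_grasp_samples(1000)
print(len(dataset))  # -> 1000
```

Because the generator is cheap and seeded, a policy can be trained on far more varied scenes than physical data collection could ever provide, which is the point of the synthetic-data workflow the course teaches.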
This course is ideal for professionals in robotics or smart manufacturing, covering both fundamental theory and complete practical workflows. Participants should have basic familiarity with the Linux command line and Python programming to facilitate smooth model deployment and execution.
The course provides access to a preconfigured NVIDIA DLI cloud environment. Students can practice directly through a browser on any macOS or Windows laptop or desktop without requiring a dedicated GPU — lowering the entry barrier for learning.
For extended practice after the course, instructors recommend using a system equipped with an NVIDIA RTX GPU, which allows repeated robot operation and AI fine-tuning in Omniverse to strengthen technical skills.