A major challenge in robotics is training robots to perform new tasks without the massive effort of collecting and labeling datasets for every new task and environment. Recent research efforts from NVIDIA aim to solve this challenge through the use of generative AI, world foundation models (WFMs) like NVIDIA Cosmos, and data generation blueprints such as NVIDIA Isaac GR00T-Mimic and GR00T-Dreams.
Robots must perceive and interpret their 3D environments to act safely and effectively. This is especially critical for tasks such as autonomous navigation, object manipulation, and teleoperation in unstructured or unfamiliar spaces. Advances in robotic perception increasingly focus on integrating 3D scene understanding, generalizable object tracking, and persistent spatial memory, using robust…
This edition of NVIDIA Robotics Research and Development Digest (R2D2) explores several contact-rich manipulation workflows for robotic assembly tasks from NVIDIA Research and how they can address key challenges with fixed automation, such as robustness, adaptability, and scalability. Contact-rich manipulation refers to robotic tasks that involve continuous or repeated physical contact…
Robotic arms are used today for assembly, packaging, inspection, and many other applications. However, they are still preprogrammed to perform specific, often repetitive tasks. To meet the growing need for adaptability across environments, robotic arms need perception, enabling them to make decisions and adjust their behavior based on real-time data. This leads to more flexibility across tasks in collaborative…
Welcome to the first edition of the NVIDIA Robotics Research and Development Digest (R2D2). This technical blog series will give developers and researchers deeper insight into, and access to, the latest physical AI and robotics research breakthroughs across various NVIDIA Research labs. Developing robust robots presents significant challenges, such as: … We address these challenges through…