From humanoids to policy, explore the work NVIDIA is bringing to the robotics community.
Originally published on July 29, 2024, this post was updated on October 8, 2024. Robots need to be adaptable, readily learning new skills and adjusting to their surroundings. Yet traditional training methods can limit a robot's ability to apply learned skills in new situations. This is often due to the gap between perception and action, as well as the challenges in transferring skills across…
As Moore's law slows down, it becomes increasingly important to develop other techniques that improve the performance of a chip at the same technology process node. Our approach uses AI to design smaller, faster, and more efficient circuits to deliver more performance with each chip generation. Vast arrays of arithmetic circuits have powered NVIDIA GPUs to achieve unprecedented acceleration…
Using video games as a medium for training AI has become a popular method within the AI research community. These autonomous agents have had great success in Atari games, StarCraft, Dota, and Go. But while these advancements have been popular in AI research, the agents do not generalize beyond a very specific set of tasks, unlike humans, who continuously learn from open-ended tasks.
MLPerf benchmarks are developed by a consortium of AI leaders across industry, academia, and research labs, with the aim of providing standardized, fair, and useful measures of deep learning performance. MLPerf Training focuses on measuring the time to train a range of commonly used neural networks across a set of benchmark tasks. Lower training times are important to speed time to deployment…
A critical question to ask when designing a machine learning–based solution is, "What's the resource cost of developing this solution?" There are typically many factors that go into an answer: time, developer skill, and computing resources. It's rare that a researcher can maximize all these aspects, so optimizing the solution development process is critical. This problem is further aggravated in…
Since the first successes of deep learning, designing neural network architectures with desirable performance criteria for a given task (for example, high accuracy or low latency) has been a challenging problem. Some call it alchemy and some intuition, but the task of discovering a novel architecture often involves a tedious and costly trial-and-error search of an exponentially large…
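To make "exponentially large" concrete: if each layer of a network can use one of a handful of candidate operations, the number of possible architectures grows as the per-layer choice count raised to the depth. A minimal back-of-the-envelope sketch in Python; the operation names and layer count here are illustrative assumptions, not details from the post:

```python
import random

# Hypothetical search space: each layer picks one of four operations.
ops = ["conv3x3", "conv5x5", "maxpool", "skip"]
n_layers = 12

# 4^12 = 16,777,216 distinct architectures, before even choosing widths.
search_space_size = len(ops) ** n_layers
print(f"search space size: {search_space_size:,}")

# Naive trial-and-error amounts to sampling candidates one at a time
# and training each to see how it performs.
candidate = [random.choice(ops) for _ in range(n_layers)]
print("random candidate:", candidate)
```

Even this toy space is far too large to enumerate, which is why the trial-and-error approach the post describes becomes so costly.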
Recent developments in artificial intelligence and autonomous learning have shown impressive results in tasks like board games and computer games. However, the applicability of learning techniques remains mainly limited to simulated environments. One of the major causes of this inapplicability to real-world scenarios is the general sample inefficiency and the inability to guarantee the safe…
Deep neural networks (DNNs) have been successfully applied to volume segmentation and other medical imaging tasks. They are capable of achieving state-of-the-art accuracy and can augment the medical imaging workflow with AI-powered insights. However, training robust AI models for medical imaging analysis is time-consuming and tedious, and it requires iterative experimentation with parameter…
This post is Part 4 of the Deep Learning in a Nutshell series, in which I'll dive into reinforcement learning, a type of machine learning in which agents take actions in an environment aimed at maximizing their cumulative reward. Deep Learning in a Nutshell posts offer a high-level overview of essential concepts in deep learning. The posts aim to provide an understanding of each concept rather…
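For a concrete picture of "maximizing cumulative reward," here is a minimal sketch of tabular Q-learning, one classic reinforcement learning algorithm. All names and hyperparameters below are illustrative assumptions, not taken from the series:

```python
import random

n_states, n_actions = 16, 4
alpha, gamma, epsilon = 0.1, 0.99, 0.1  # learning rate, discount factor, exploration rate

# Q[s][a] estimates the cumulative discounted reward of taking action a in state s.
Q = [[0.0] * n_actions for _ in range(n_states)]

def choose_action(state):
    # Epsilon-greedy: explore occasionally, otherwise pick the best-known action.
    if random.random() < epsilon:
        return random.randrange(n_actions)
    return max(range(n_actions), key=lambda a: Q[state][a])

def update(state, action, reward, next_state):
    # One-step temporal-difference update toward reward + discounted best future value.
    best_next = max(Q[next_state])
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])
```

Each update nudges the agent's value estimates toward the immediate reward plus the discounted value of the best follow-up action, which is exactly the "cumulative reward" the agent is trying to maximize.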
Today OpenAI, a non-profit artificial intelligence research company, launched OpenAI Gym, a toolkit for developing and comparing reinforcement learning algorithms. It supports teaching agents everything from walking to playing games like Pong or Go. OpenAI researcher John Schulman shared some details about his organization, and how OpenAI Gym will make it easier for AI researchers to design…
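For a sense of what the toolkit looks like in practice, here is a minimal agent-environment loop using the classic Gym API from this era (later Gym and Gymnasium releases changed the reset and step signatures); the environment name and step count are illustrative:

```python
import gym

env = gym.make("CartPole-v0")
observation = env.reset()
for _ in range(1000):
    action = env.action_space.sample()  # random policy as a placeholder agent
    observation, reward, done, info = env.step(action)
    if done:
        observation = env.reset()  # start a new episode when this one ends
env.close()
```

The appeal of the toolkit is this uniform interface: any algorithm that can consume observations and emit actions can be compared across the same suite of environments.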