Training – NVIDIA Technical Blog News and tutorials for developers, data scientists, and IT admins 2025-07-08T01:00:00Z http://www.open-lab.net/blog/feed/ Ashraf Eassa <![CDATA[Leading MLPerf Inference v3.1 Results with NVIDIA GH200 Grace Hopper Superchip Debut]]> http://www.open-lab.net/blog/?p=70450 2023-09-22T16:17:33Z 2023-09-09T16:00:00Z AI is transforming computing, and inference is how the capabilities of AI are deployed in the world's applications. Intelligent chatbots, image and video...]]> AI is transforming computing, and inference is how the capabilities of AI are deployed in the world's applications. Intelligent chatbots, image and video...NVIDIA Jetson Orin modules.

AI is transforming computing, and inference is how the capabilities of AI are deployed in the world's applications. Intelligent chatbots, image and video synthesis from simple text prompts, personalized content recommendations, and medical imaging are just a few examples of AI-powered applications. Inference workloads are both computationally demanding and diverse, requiring that platforms be…

Source

]]>
1
Michelle Horton <![CDATA[Take a Free NVIDIA Technical Training Course]]> http://www.open-lab.net/blog/?p=69704 2023-08-24T19:18:25Z 2023-08-18T16:28:26Z Join the free NVIDIA Developer Program and enroll in a course from the NVIDIA Deep Learning Institute.]]> Join the free NVIDIA Developer Program and enroll in a course from the NVIDIA Deep Learning Institute.A view of the back of a man sitting at a desk and working on his laptop.

Join the free NVIDIA Developer Program and enroll in a course from the NVIDIA Deep Learning Institute.

Source

]]>
0
Gwena Cunha Sergio <![CDATA[Sparsity in INT8: Training Workflow and Best Practices for NVIDIA TensorRT Acceleration]]> http://www.open-lab.net/blog/?p=64658 2023-06-09T20:26:40Z 2023-05-16T16:00:00Z The training stage of deep learning (DL) models consists of learning numerous dense floating-point weight matrices, which results in a massive amount of...]]> The training stage of deep learning (DL) models consists of learning numerous dense floating-point weight matrices, which results in a massive amount of...

The training stage of deep learning (DL) models consists of learning numerous dense floating-point weight matrices, which results in a massive amount of floating-point computations during inference. Research has shown that many of those computations can be skipped by forcing some weights to be zero, with little impact on the final accuracy. In parallel to that, previous posts have shown that…
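
To make the idea concrete (this is an illustrative sketch, not the TensorRT workflow from the post), the snippet below applies the 2:4 structured-sparsity pattern used by sparse Tensor Cores to a PyTorch weight matrix, zeroing the two smallest-magnitude weights in every group of four:

```python
import torch

def prune_2_4(weight: torch.Tensor) -> torch.Tensor:
    """Zero the two smallest-magnitude values in every group of 4 along the input dim."""
    out_features, in_features = weight.shape
    assert in_features % 4 == 0, "2:4 sparsity needs the input dim to be a multiple of 4"
    groups = weight.reshape(out_features, in_features // 4, 4)
    # Keep the two largest-magnitude weights per group of four, zero the rest
    keep = groups.abs().topk(k=2, dim=-1).indices
    mask = torch.zeros_like(groups, dtype=torch.bool).scatter_(-1, keep, True)
    return (groups * mask).reshape(out_features, in_features)

w = torch.randn(8, 16)
w_sparse = prune_2_4(w)
print((w_sparse == 0).float().mean().item())  # ~0.5: half the weights are now zero
```

In practice the pruned network is then fine-tuned so the remaining weights can recover most of the lost accuracy before the sparse model is deployed for inference.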

Source

]]>
0
Michelle Horton <![CDATA[Top MLOps Sessions at NVIDIA GTC 2023]]> http://www.open-lab.net/blog/?p=61275 2023-06-09T22:39:40Z 2023-02-23T21:17:28Z Discover how to build a robust MLOps practice for continuous delivery and automated deployment of AI workloads at scale. ]]> Discover how to build a robust MLOps practice for continuous delivery and automated deployment of AI workloads at scale. A collage of 4 illustrations of: a city with vehicles with object detection, a person interacting with a virtual assistant, a wirelessly connected city, and a robotic hand holding an object.

Discover how to build a robust MLOps practice for continuous delivery and automated deployment of AI workloads at scale.

Source

]]>
0
Tanya Lenz <![CDATA[Upcoming Event: Deep Learning Framework Sessions at GTC 2022]]> http://www.open-lab.net/blog/?p=54612 2022-09-15T19:33:09Z 2022-09-14T20:00:00Z Join us for these GTC 2022 sessions to learn about optimizing PyTorch models, accelerating graph neural networks, improving GPU performance, and more.]]> Join us for these GTC 2022 sessions to learn about optimizing PyTorch models, accelerating graph neural networks, improving GPU performance, and more.

Join us for these GTC 2022 sessions to learn about optimizing PyTorch models, accelerating graph neural networks, improving GPU performance, and more.

Source

]]>
0
Michelle Horton <![CDATA[Upcoming Event: Deep Learning Sessions at GTC 2022]]> http://www.open-lab.net/blog/?p=54052 2023-06-12T09:02:56Z 2022-09-07T16:00:00Z Join our deep learning sessions at GTC 2022 to learn about real-world use cases, new tools, and best practices for training and inference.]]> Join our deep learning sessions at GTC 2022 to learn about real-world use cases, new tools, and best practices for training and inference.

Join our deep learning sessions at GTC 2022 to learn about real-world use cases, new tools, and best practices for training and inference.

Source

]]>
0
Markel Ausin <![CDATA[NVIDIA AI Platform Delivers Big Gains for Large Language Models]]> http://www.open-lab.net/blog/?p=51198 2023-03-14T23:23:58Z 2022-07-28T18:35:00Z As the size and complexity of large language models (LLMs) continue to grow, NVIDIA is today announcing updates to the NeMo framework that provide training...]]> As the size and complexity of large language models (LLMs) continue to grow, NVIDIA is today announcing updates to the NeMo framework that provide training...

As the size and complexity of large language models (LLMs) continue to grow, NVIDIA is today announcing updates to the NeMo framework that provide training speed-ups of up to 30%. These updates, which include two trailblazing techniques and a hyperparameter tool to optimize and scale training of LLMs on any number of GPUs, offer new capabilities to train and deploy models using the NVIDIA AI…

Source

]]>
0
Charu Chaubal <![CDATA[Optimizing Enterprise IT Workloads with NVIDIA-Certified Systems]]> http://www.open-lab.net/blog/?p=47985 2023-06-12T20:33:38Z 2022-05-12T23:03:13Z GPU-accelerated workloads are thriving across all industries, from the use of AI for better customer engagement and data analytics for business forecasting to...]]> GPU-accelerated workloads are thriving across all industries, from the use of AI for better customer engagement and data analytics for business forecasting to...

GPU-accelerated workloads are thriving across all industries, from the use of AI for better customer engagement and data analytics for business forecasting to advanced visualization for quicker product innovation. One of the biggest challenges with GPU-accelerated infrastructure is choosing the right hardware systems. While the line of business cares about performance and the ability to use a…

Source

]]>
0
Ashraf Eassa <![CDATA[Saving Time and Money in the Cloud with the Latest NVIDIA-Powered Instances]]> http://www.open-lab.net/blog/?p=44315 2023-07-05T19:28:41Z 2022-03-01T19:13:57Z AI is transforming every industry, enabling powerful new applications and use cases that simply weren't possible with traditional software. As AI continues to...]]> AI is transforming every industry, enabling powerful new applications and use cases that simply weren't possible with traditional software. As AI continues to...

AI is transforming every industry, enabling powerful new applications and use cases that simply weren't possible with traditional software. As AI continues to proliferate, and with the size and complexity of AI models on the rise, significant advances in AI compute performance are required to keep up. That's where the NVIDIA platform comes in. With a full-stack approach spanning chips…

Source

]]>
1
James Sohn <![CDATA[Building a Question and Answering Service Using Natural Language Processing with NVIDIA NGC and Google Cloud]]> http://www.open-lab.net/blog/?p=24231 2022-08-21T23:41:07Z 2021-03-04T01:10:12Z Enterprises across industries are leveraging natural language processing (NLP) solutions, from chatbots to audio transcription, to improve customer engagement,...]]> Enterprises across industries are leveraging natural language processing (NLP) solutions, from chatbots to audio transcription, to improve customer engagement,...

Enterprises across industries are leveraging natural language processing (NLP) solutions, from chatbots to audio transcription, to improve customer engagement, increase employee productivity, and drive revenue growth. NLP is one of the most challenging tasks for AI because it must understand the underlying context of text without explicit rules in human language. Building an AI-powered solution…
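
As a minimal illustration of the extractive question-answering task itself (the post builds its service with NGC containers on Google Cloud; the model name below is only an illustrative assumption), a pretrained Transformer can answer questions against a passage of text:

```python
from transformers import pipeline

# Hypothetical model choice for illustration; any SQuAD-style QA checkpoint works
qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

context = (
    "NVIDIA NGC hosts GPU-optimized containers, pretrained models, and Helm charts "
    "that can be deployed on Google Cloud to serve NLP workloads."
)
result = qa(question="Where can NGC containers be deployed?", context=context)
print(result["answer"], round(result["score"], 3))
```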

Source

]]>
1
Sanjay Dulepet <![CDATA[Deploying a Scalable Object Detection Inference Pipeline: Optimization and Deployment, Part 3]]> http://www.open-lab.net/blog/?p=20870 2022-08-21T23:40:39Z 2020-12-18T18:43:45Z This post is the third in a series on Autonomous Driving at Scale, developed with Tata Consultancy Services (TCS). The previous posts provided a general...]]> This post is the third in a series on Autonomous Driving at Scale, developed with Tata Consultancy Services (TCS). The previous posts provided a general...

This post is the third in a series on Autonomous Driving at Scale, developed with Tata Consultancy Services (TCS). The previous posts provided a general overview of deep learning inference for object detection and covered the object detection inference process and object detection metrics. In this post, we conclude with a brief look at the optimization techniques and deployment of an end-to-end…
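
One representative optimization step (a hedged sketch, not the exact pipeline from the series) is exporting a trained PyTorch detector to ONNX so it can be compiled into a TensorRT engine for deployment:

```python
import torch
import torchvision

# Placeholder detector; the series uses its own trained object detection model
model = torchvision.models.detection.retinanet_resnet50_fpn(weights="DEFAULT").eval()
dummy = torch.randn(1, 3, 800, 800)

torch.onnx.export(
    model,
    dummy,
    "detector.onnx",
    opset_version=17,
    input_names=["images"],
)
# The ONNX graph can then be built into an optimized engine offline, for example:
#   trtexec --onnx=detector.onnx --fp16 --saveEngine=detector.engine
```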

Source

]]>
1
Maggie Zhang <![CDATA[Getting the Most Out of the NVIDIA A100 GPU with Multi-Instance GPU]]> http://www.open-lab.net/blog/?p=21816 2023-07-27T19:58:45Z 2020-12-01T00:30:40Z With the third-generation Tensor Core technology, NVIDIA recently unveiled A100 Tensor Core GPU that delivers unprecedented acceleration at every scale for AI,...]]> With the third-generation Tensor Core technology, NVIDIA recently unveiled A100 Tensor Core GPU that delivers unprecedented acceleration at every scale for AI,...

With third-generation Tensor Core technology, NVIDIA recently unveiled the A100 Tensor Core GPU, which delivers unprecedented acceleration at every scale for AI, data analytics, and high-performance computing. Along with the great performance increase over prior-generation GPUs comes another groundbreaking innovation, Multi-Instance GPU (MIG). With MIG, each A100 GPU can be partitioned into up to seven…
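
As a small sketch of how a workload targets one of those slices (the UUID below is a placeholder; an administrator first enables MIG mode and creates the GPU instances, for example with nvidia-smi), a process simply restricts CUDA visibility to a single MIG device:

```python
import os

# Placeholder MIG device identifier; list the real ones with `nvidia-smi -L`
os.environ["CUDA_VISIBLE_DEVICES"] = "MIG-GPU-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/1/0"

import torch

print(torch.cuda.device_count())   # 1: the MIG slice appears as a single CUDA device
x = torch.randn(1024, 1024, device="cuda")
print((x @ x).sum().item())
```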

Source

]]>
11
Sanjay Dulepet <![CDATA[Deploying a Scalable Object Detection Inference Pipeline, Part 1]]> http://www.open-lab.net/blog/?p=19956 2022-08-21T23:40:36Z 2020-08-28T16:29:32Z This post is the first in a series on Autonomous Driving at Scale, developed with Tata Consultancy Services (TCS). In this post, we provide a general overview...]]> This post is the first in a series on Autonomous Driving at Scale, developed with Tata Consultancy Services (TCS). In this post, we provide a general overview...

This post is the first in a series on Autonomous Driving at Scale, developed with Tata Consultancy Services (TCS). In this post, we provide a general overview of deep learning inference for object detection. The next posts cover the object detection inference process and metrics, as well as optimization techniques and the deployment of an end-to-end inference pipeline.

Source

]]>
0
Adolf Hohl <![CDATA[Validating Distributed Multi-Node Autonomous Vehicle AI Training with NVIDIA DGX Systems on OpenShift with DXC Robotic Drive]]> http://www.open-lab.net/blog/?p=19146 2022-08-21T23:40:25Z 2020-07-30T00:32:18Z Deep neural network (DNN) development for self-driving cars is a demanding workload. In this post, we validate DGX multi-node, multi-GPU, distributed training...]]> Deep neural network (DNN) development for self-driving cars is a demanding workload. In this post, we validate DGX multi-node, multi-GPU, distributed training...

Deep neural network (DNN) development for self-driving cars is a demanding workload. In this post, we validate DGX multi-node, multi-GPU, distributed training running on Red Hat OpenShift in the DXC Robotic Drive environment. We used OpenShift 3.11, also part of the Robotic Drive containerized compute platform, to orchestrate and execute the deep learning (DL) workloads.
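
As a hedged, framework-level sketch of the kind of data-parallel training job being validated (toy model and data; the real workloads are the DNNs described in the post), each GPU runs one PyTorch DistributedDataParallel process and gradients are all-reduced over NCCL:

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")            # one process per GPU, NCCL for comms
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = DDP(torch.nn.Linear(1024, 1024).cuda(local_rank), device_ids=[local_rank])
    opt = torch.optim.SGD(model.parameters(), lr=0.01)

    for _ in range(10):
        x = torch.randn(64, 1024, device=local_rank)
        loss = model(x).pow(2).mean()
        opt.zero_grad()
        loss.backward()                                # gradients averaged across all ranks
        opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Launched with something like `torchrun --nnodes=2 --nproc_per_node=8 train.py`, the orchestration platform (OpenShift here) is responsible for scheduling one such process group across the participating DGX nodes.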

Source

]]>
0
Michał Szołucha <![CDATA[Case Study: ResNet50 with DALI]]> http://www.open-lab.net/blog/?p=15089 2023-07-05T19:40:54Z 2019-07-02T13:00:47Z Let's imagine a situation. You buy a brand-new, cutting-edge, Volta-powered DGX-2 server. You've done your math right, expecting a 2x performance increase...]]> Let's imagine a situation. You buy a brand-new, cutting-edge, Volta-powered DGX-2 server. You've done your math right, expecting a 2x performance increase...

Let's imagine a situation. You buy a brand-new, cutting-edge, Volta-powered DGX-2 server. You've done your math right, expecting a 2x performance increase in ResNet50 training over the DGX-1 you had before. You plug it into your rack cabinet and run the training. That's when an unpleasant surprise pops up. Even though your math is correct, the speedup you're getting is lower than expected. Why?
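
A common reason, and the motivation for DALI in this case study, is that the CPU-side input pipeline can no longer feed that many GPUs. The sketch below shows the shape of a DALI pipeline that moves JPEG decoding and augmentation onto the GPU (API names as in recent DALI releases; the dataset path is a placeholder):

```python
from nvidia.dali import pipeline_def, fn, types

@pipeline_def(batch_size=256, num_threads=8, device_id=0)
def train_pipeline(data_dir):
    jpegs, labels = fn.readers.file(file_root=data_dir, random_shuffle=True)
    images = fn.decoders.image(jpegs, device="mixed")        # JPEG decode on the GPU
    images = fn.random_resized_crop(images, size=(224, 224))
    images = fn.crop_mirror_normalize(
        images,
        dtype=types.FLOAT,
        output_layout="CHW",
        mean=[0.485 * 255, 0.456 * 255, 0.406 * 255],
        std=[0.229 * 255, 0.224 * 255, 0.225 * 255],
    )
    return images, labels

pipe = train_pipeline("/path/to/imagenet/train")
pipe.build()
images, labels = pipe.run()
```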

Source

]]>
0
Prethvi Kashinkunti <![CDATA[Creating an Object Detection Pipeline for GPUs]]> http://www.open-lab.net/blog/?p=14734 2022-08-21T23:39:30Z 2019-06-19T17:00:13Z Earlier this year in March, we showed retinanet-examples, an open source example of how to accelerate the training and deployment of an object detection...]]> Earlier this year in March, we showed retinanet-examples, an open source example of how to accelerate the training and deployment of an object detection...

Earlier this year in March, we showed retinanet-examples, an open source example of how to accelerate the training and deployment of an object detection pipeline for GPUs. We presented the project at NVIDIA's GPU Technology Conference in San Jose. This post discusses the motivation for this work, a high-level description of the architecture, and a brief look under the hood at the optimizations we…

Source

]]>
0
Hitoshi Harada <![CDATA[Labellio: Scalable Cloud Architecture for Efficient Multi-GPU Deep Learning]]> http://www.open-lab.net/blog/parallelforall/?p=5709 2022-08-21T23:37:36Z 2015-08-10T13:00:41Z Labellio is the world's easiest deep learning web service for computer vision. It aims to provide a deep learning environment for image data where non-experts...]]> Labellio is the world's easiest deep learning web service for computer vision. It aims to provide a deep learning environment for image data where non-experts...

Labellio is the world's easiest deep learning web service for computer vision. It aims to provide a deep learning environment for image data where non-experts in deep learning can experiment with their ideas for image classification applications. Watch our video embedded here to see how easy it is. The challenges in deep learning today are not just in configuring hyperparameters or…

Source

]]>
1
Mark Ebersole http://www.open-lab.net/blog/parallelforall <![CDATA[Learn GPU Programming in Your Browser with NVIDIA Hands-On Labs]]> http://www.open-lab.net/blog/parallelforall/?p=4066 2022-08-21T23:37:28Z 2014-11-12T22:04:02Z As CUDA Educator at NVIDIA, I work to give access to massively parallel programming education & training to everyone, whether or not they have access to...]]> As CUDA Educator at NVIDIA, I work to give access to massively parallel programming education & training to everyone, whether or not they have access to...Qwiklabs Logo

As CUDA Educator at NVIDIA, I work to give access to massively parallel programming education & training to everyone, whether or not they have access to GPUs in their own machines. This is why, in partnership with qwikLABS, NVIDIA has made the hands-on content we use to train thousands of developers at the Supercomputing Conference and the GPU Technology Conference online and accessible from…

Source

]]>
1