BERT – NVIDIA Technical Blog News and tutorials for developers, data scientists, and IT admins 2025-07-08T01:00:00Z http://www.open-lab.net/blog/feed/ Tanay Varshney <![CDATA[An Introduction to Large Language Models: Prompt Engineering and P-Tuning]]> http://www.open-lab.net/blog/?p=63707 2023-11-28T19:18:25Z 2023-04-26T16:00:00Z

ChatGPT has made quite an impression. Users are excited to use the AI chatbot to ask questions, write poems, imbue a persona for interaction, act as a personal assistant, and more. Large language models (LLMs) power ChatGPT, and these models are the topic of this post. Before considering LLMs more carefully, we would first like to establish what a language model does. A language model gives…
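The teaser cuts off at the definition, but the core idea is that a language model assigns a probability to the next token given the tokens so far. A minimal bigram-counting sketch of that idea (a toy illustration only; the corpus and function names are invented and no NVIDIA library is involved):

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    # Count token-to-token transitions over whitespace-split sentences.
    counts = defaultdict(Counter)
    for sentence in corpus:
        tokens = ["<s>"] + sentence.split()
        for prev, cur in zip(tokens, tokens[1:]):
            counts[prev][cur] += 1
    return counts

def next_token_probs(counts, prev):
    # Normalize the transition counts into a probability distribution.
    total = sum(counts[prev].values())
    return {tok: c / total for tok, c in counts[prev].items()}

model = train_bigram(["the cat sat", "the dog sat"])
print(next_token_probs(model, "the"))  # {'cat': 0.5, 'dog': 0.5}
```

An LLM replaces the count table with a neural network and conditions on far more context, but the output is the same kind of distribution over next tokens.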

Ashraf Eassa <![CDATA[Setting New Records in MLPerf Inference v3.0 with Full-Stack Optimizations for AI]]> http://www.open-lab.net/blog/?p=62958 2023-07-05T19:23:50Z 2023-04-05T19:10:55Z

The most exciting computing applications currently rely on training and running inference on complex AI models, often in demanding, real-time deployment scenarios. High-performance, accelerated AI platforms are needed to meet the demands of these applications and deliver the best user experiences. New AI models are constantly being invented to enable new capabilities…

Shashank Gaur <![CDATA[Topic Modeling and Image Classification with Dataiku and NVIDIA Data Science]]> http://www.open-lab.net/blog/?p=62857 2023-11-03T07:15:04Z 2023-04-04T18:30:00Z

The Dataiku platform for everyday AI simplifies deep learning. Use cases are far-reaching, from image classification to object detection and natural language processing (NLP). Dataiku helps you with labeling, model training, explainability, model deployment, and centralized management of code and code environments. This post dives into high-level Dataiku and NVIDIA integrations for image…

Ashraf Eassa <![CDATA[The Full Stack Optimization Powering NVIDIA MLPerf Training v2.0 Performance]]> http://www.open-lab.net/blog/?p=49597 2023-07-05T19:27:00Z 2022-06-30T18:00:00Z

MLPerf benchmarks are developed by a consortium of AI leaders across industry, academia, and research labs, with the aim of providing standardized, fair, and useful measures of deep learning performance. MLPerf Training focuses on measuring the time to train a range of commonly used neural networks. Lower training times are important to speed time to deployment…

James Sohn <![CDATA[Developing a Question Answering Application Quickly Using NVIDIA Riva]]> http://www.open-lab.net/blog/?p=24073 2023-03-22T01:16:51Z 2021-11-09T16:14:38Z

Sign up for the latest Speech AI news from NVIDIA. There is a high chance that you have asked your smart speaker a question like, "How tall is Mount Everest?" If you did, it probably said, "Mount Everest is 29,032 feet above sea level." Have you ever wondered how it found an answer for you? Question answering (QA) is loosely defined as a system consisting of information retrieval (IR)…
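The teaser introduces the information-retrieval half of a QA system. As a toy illustration of that step, word-overlap ranking can stand in for a real retriever (the passages, question, and function names here are invented for this sketch; production systems like Riva use learned models instead):

```python
import re

def retrieve(question, passages):
    # IR step: rank candidate passages by word overlap with the
    # question and return the best match. Real systems use learned
    # dense or sparse retrievers rather than raw overlap.
    q_words = set(re.findall(r"\w+", question.lower()))

    def overlap(passage):
        return len(q_words & set(re.findall(r"\w+", passage.lower())))

    return max(passages, key=overlap)

passages = [
    "Mount Everest is 29,032 feet above sea level.",
    "The Nile is about 6,650 km long.",
]
print(retrieve("How tall is Mount Everest?", passages))
# Mount Everest is 29,032 feet above sea level.
```

A reading-comprehension model would then extract the answer span from the retrieved passage, completing the QA pipeline.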

Purnendu Mukherjee <![CDATA[Real-Time Natural Language Processing with BERT Using NVIDIA TensorRT (Updated)]]> http://www.open-lab.net/blog/?p=34688 2023-06-12T21:08:51Z 2021-07-20T13:00:00Z

This post was originally published in August 2019 and has been updated for NVIDIA TensorRT 8.0. Join the NVIDIA Triton and NVIDIA TensorRT community to stay current on the latest product updates, bug fixes, content, best practices, and more. Large-scale language models (LSLMs) such as BERT, GPT-2, and XL-Net have brought exciting leaps in accuracy for many natural language processing…

Gorkem Batmaz https://twitter.com/gorkembatmaz <![CDATA[Enabling Predictive Maintenance Using Root Cause Analysis, NLP, and NVIDIA Morpheus]]> http://www.open-lab.net/blog/?p=31337 2022-08-21T23:51:33Z 2021-05-10T16:00:00Z

Predictive maintenance is used for early fault detection, diagnosis, and prediction when maintenance is needed in various industries including oil and gas, manufacturing, and transportation. Equipment is continuously monitored to measure things like sound, vibration, and temperature to alert and report potential issues. To accomplish this in computers, the first step is to determine the root cause…

Weiwei Guo <![CDATA[Achieving High-Quality Search and Recommendation Results with DeepNLP]]> http://www.open-lab.net/blog/?p=23855 2024-10-28T18:28:06Z 2021-02-04T23:25:32Z

Speech and natural language processing (NLP) have become the foundation for most of the AI development in the enterprise today, as textual data represents a significant portion of unstructured content. As consumer internet companies continue to improve the accuracy of conversational AI, search, and recommendation systems, there is an increasing need for processing rich text data efficiently and…

James Sohn <![CDATA[Deploying a Natural Language Processing Service on a Kubernetes Cluster with Helm Charts from NVIDIA NGC]]> http://www.open-lab.net/blog/?p=22018 2022-08-21T23:40:46Z 2020-11-11T22:39:07Z

Conversational AI solutions such as chatbots are now deployed in the data center, on the cloud, and at the edge to deliver lower latency and high quality of service while meeting an ever-increasing demand. The strategic decision to run AI inference on any or all these compute platforms varies not only by the use case but also evolves over time with the business. Hence…

Peng Xu <![CDATA[Adding External Knowledge and Controllability to Language Models with Megatron-CNTRL]]> http://www.open-lab.net/blog/?p=21265 2023-03-22T01:09:01Z 2020-10-06T13:00:00Z

Large language models such as Megatron and GPT-3 are transforming AI. We are excited about applications that can take advantage of these models to create better conversational AI. One main problem that generative language models have in conversational AI applications is their lack of controllability and consistency with real-world facts. In this work, we try to address this by making our large…

Meghana Ravikumar <![CDATA[Efficient BERT: Finding Your Optimal Model with Multimetric Bayesian Optimization, Part 3]]> http://www.open-lab.net/blog/?p=19520 2022-08-21T23:40:31Z 2020-08-18T17:35:00Z

This is the third post in this series about distilling BERT with multimetric Bayesian optimization. Part 1 discusses the background for the experiment and Part 2 discusses the setup for the Bayesian optimization. In my previous posts, I discussed the importance of BERT for transfer learning in NLP, and established the foundations of this experiment's design. In this post, I discuss the model…

Meghana Ravikumar <![CDATA[Efficient BERT: Finding Your Optimal Model with Multimetric Bayesian Optimization, Part 2]]> http://www.open-lab.net/blog/?p=19510 2022-08-21T23:40:31Z 2020-08-18T17:30:00Z

This is the second post in this series about distilling BERT with multimetric Bayesian optimization. Part 1 discusses the background for the experiment and Part 3 discusses the results. In my previous post, I discussed the importance of the BERT architecture in making transfer learning accessible in NLP. BERT allows a variety of problems to share off-the-shelf, pretrained models and moves NLP…
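Multimetric optimization means trading off competing objectives, for example accuracy against model size, and reporting the Pareto-efficient configurations rather than a single best trial. A small sketch of that filtering step (the trial numbers below are invented for illustration and are not from the series):

```python
def pareto_front(points):
    # points: (accuracy, size) pairs; maximize accuracy, minimize size.
    # A point survives if no other point is at least as accurate AND
    # at least as small, with a strict improvement in one metric.
    front = []
    for acc, size in points:
        dominated = any(
            (a >= acc and s <= size) and (a > acc or s < size)
            for a, s in points
        )
        if not dominated:
            front.append((acc, size))
    return front

# Hypothetical (accuracy, parameter-count-in-millions) trials.
trials = [(0.91, 110), (0.89, 66), (0.84, 66), (0.90, 40)]
print(pareto_front(trials))  # [(0.91, 110), (0.90, 40)]
```

The Bayesian optimizer proposes new configurations; the Pareto front is what you report at the end, letting you pick the accuracy/size trade-off that fits your deployment budget.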

Meghana Ravikumar <![CDATA[Efficient BERT: Finding Your Optimal Model with Multimetric Bayesian Optimization, Part 1]]> http://www.open-lab.net/blog/?p=19499 2022-08-21T23:40:28Z 2020-08-18T17:25:00Z

This is the first post in a series about distilling BERT with multimetric Bayesian optimization. Part 2 discusses the setup for the Bayesian experiment, and Part 3 discusses the results. You've all heard of BERT: Ernie's partner in crime. Just kidding! I mean the natural language processing (NLP) architecture developed by Google in 2018. That's much less exciting, I know. However…

Akhil Docca <![CDATA[Accelerating AI and ML Workflows with Amazon SageMaker and NVIDIA NGC]]> http://www.open-lab.net/blog/?p=19448 2022-10-20T21:49:02Z 2020-08-07T19:33:05Z

AI is going mainstream and is quickly becoming pervasive in every industry, from autonomous vehicles to drug discovery. However, developing and deploying AI applications is a challenging endeavor. The process requires building a scalable infrastructure by combining hardware, software, and intricate workflows, which can be time-consuming as well as error-prone. To accelerate the end-to-end AI…

Ivan Goldwasser <![CDATA[Optimizing NVIDIA AI Performance for MLPerf v0.7 Training]]> http://www.open-lab.net/blog/?p=19195 2023-07-05T19:38:22Z 2020-07-29T17:00:00Z

MLPerf is an industry-wide AI consortium that has developed a suite of performance benchmarks covering a range of leading AI workloads that are widely in use today. The latest MLPerf v0.7 training submission includes vision, language, recommenders, and reinforcement learning. NVIDIA submitted MLPerf v0.7 training results for all eight tests and the NVIDIA platform set records in all…

Akhil Docca <![CDATA[Accelerating AI Training with MLPerf Containers and Models from NVIDIA NGC]]> http://www.open-lab.net/blog/?p=19139 2023-07-05T19:37:55Z 2020-07-29T17:00:00Z

The MLPerf consortium mission is to "build fair and useful benchmarks" to provide an unbiased training and inference performance reference for ML hardware, software, and services. MLPerf Training v0.7 is the third instantiation for training and continues to evolve to stay on the cutting edge. This round consists of eight different workloads that cover a broad diversity of use cases…

David Williams <![CDATA[Training and Fine-tuning BERT Using NVIDIA NGC]]> http://www.open-lab.net/blog/?p=17909 2022-08-21T23:40:09Z 2020-06-16T17:25:49Z

Imagine an AI program that can understand language better than humans can. Imagine building your own personal Siri or Google Search for a customized domain or application. Google BERT (Bidirectional Encoder Representations from Transformers) provides a game-changing twist to the field of natural language processing (NLP). BERT runs on supercomputers powered by NVIDIA GPUs to train its…

Mohammad Shoeybi <![CDATA[State-of-the-Art Language Modeling Using Megatron on the NVIDIA A100 GPU]]> http://www.open-lab.net/blog/?p=17320 2023-04-04T17:01:46Z 2020-05-14T13:00:46Z

Recent work has demonstrated that larger language models dramatically advance the state of the art in natural language processing (NLP) applications such as question-answering, dialog systems, summarization, and article completion. However, during training, large models do not fit in the available memory of a single accelerator, requiring model parallelism to split the parameters across multiple…
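Model parallelism splits a layer's parameters across devices so that each device holds only a shard. A pure-Python sketch of one flavor, sharding the output rows of a linear layer (the shapes and device count are arbitrary; a real implementation like Megatron's runs each shard on its own GPU and gathers the slices with a collective operation):

```python
def matvec(W, x):
    # Dense y = W @ x with W stored as a list of rows.
    return [sum(w * v for w, v in zip(row, x)) for row in W]

def parallel_matvec(W, x, n_devices):
    # Shard the output rows of W across "devices"; each device
    # computes its slice of y, and the slices are concatenated
    # (an all-gather across GPUs in practice).
    per = (len(W) + n_devices - 1) // n_devices
    shards = [W[i * per:(i + 1) * per] for i in range(n_devices)]
    out = []
    for shard in shards:
        out.extend(matvec(shard, x))
    return out

W = [[1, 0], [0, 1], [2, 2], [3, -1]]
x = [4, 5]
print(parallel_matvec(W, x, 2))  # [4, 5, 18, 7]
```

Because each device stores only `len(W) / n_devices` rows, the memory per device shrinks proportionally, which is what lets models that exceed a single accelerator's memory train at all.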
