Adi Renduchintala – NVIDIA Technical Blog
News and tutorials for developers, data scientists, and IT admins
Feed: http://www.open-lab.net/blog/feed/ (last updated 2025-06-12)

Introducing the Nemotron-H Reasoning Model Family: Throughput Gains Without Compromise
http://www.open-lab.net/blog/?p=101373 (published 2025-06-06)

As large language models increasingly take on reasoning-intensive tasks in areas like math and science, their output lengths are getting significantly longer—sometimes spanning tens of thousands of tokens. This shift makes efficient throughput a critical bottleneck, especially when deploying models in real-world, latency-sensitive environments. To address these challenges and enable the…
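A back-of-the-envelope sketch of why output length becomes the bottleneck: at a fixed per-request decode rate, generation time grows linearly with the number of output tokens, so reasoning traces tens of thousands of tokens long dominate end-to-end latency. The numbers below are illustrative assumptions, not benchmarks of any model discussed in the post.

```python
def decode_time_seconds(output_tokens: int, tokens_per_second: float) -> float:
    """Time to generate a response of output_tokens at a given decode rate."""
    return output_tokens / tokens_per_second

# A short chat answer vs. a long reasoning trace at the same (assumed) rate.
short_answer = decode_time_seconds(500, tokens_per_second=100.0)       # 5.0 s
long_reasoning = decode_time_seconds(30_000, tokens_per_second=100.0)  # 300.0 s
```

The 60x gap in generation time is why throughput gains, rather than raw quality alone, are the focus of the model family introduced here.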

Source

Scalable Federated Learning with NVIDIA FLARE for Enhanced LLM Performance
http://www.open-lab.net/blog/?p=78348 (published 2024-02-29)

In the ever-evolving landscape of large language models (LLMs), effective data management is a key challenge. Data is at the heart of model performance. While most advanced machine learning algorithms are data-centric, necessary data can’t always be centralized. This is due to various factors such as privacy, regulation, geopolitics, copyright issues, and the sheer effort required to move vast…
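The core idea behind training on data that cannot be centralized is that each site trains locally and only model updates leave the site, to be aggregated into a global model. A minimal sketch of federated averaging (FedAvg), the standard aggregation step, is below; this is a generic illustration with made-up numbers, not the NVIDIA FLARE API.

```python
def federated_average(site_weights, site_sizes):
    """Weighted average of per-site model parameters by local dataset size.

    site_weights: list of parameter vectors (one per site), same length each.
    site_sizes:   number of local training examples at each site.
    """
    total = sum(site_sizes)
    dim = len(site_weights[0])
    return [
        sum(w[i] * n / total for w, n in zip(site_weights, site_sizes))
        for i in range(dim)
    ]

# Two hypothetical sites; raw data never leaves either one.
site_a = [1.0, 2.0]   # parameters after local training at site A (100 examples)
site_b = [3.0, 4.0]   # parameters after local training at site B (300 examples)
global_update = federated_average([site_a, site_b], site_sizes=[100, 300])
# global_update == [2.5, 3.5]  (site B's larger dataset pulls the average)
```

In practice a framework like FLARE handles the orchestration, secure communication, and repetition of this local-train/aggregate loop over many rounds.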

Source

Adapting LLMs to Downstream Tasks Using Federated Learning on Distributed Datasets
http://www.open-lab.net/blog/?p=67237 (published 2023-07-10)

Large language models (LLMs), such as GPT, have emerged as revolutionary tools in natural language processing (NLP) due to their ability to understand and generate human-like text. These models are trained on vast amounts of diverse data, enabling them to learn patterns, language structures, and contextual relationships. They serve as foundational models that can be customized to a wide range of…
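One common way such a foundation model is customized to a downstream task is parameter-efficient fine-tuning with a low-rank (LoRA-style) update: the pretrained weight stays frozen and only two small factors are trained per task. The sketch below is a hypothetical toy illustration of that idea, not code from the post.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2                      # hidden size and (much smaller) adapter rank

W = rng.normal(size=(d, d))      # frozen pretrained weight
A = rng.normal(size=(d, r))      # trainable down-projection
B = np.zeros((r, d))             # trainable up-projection, zero-initialized

def adapted_forward(x):
    """Forward pass with the low-rank task adapter added to the frozen W."""
    return x @ (W + A @ B)

x = rng.normal(size=(1, d))
# With B initialized to zero, the adapter starts as an exact no-op,
# so fine-tuning begins from the pretrained model's behavior.
assert np.allclose(adapted_forward(x), x @ W)
```

Because only A and B (d*r + r*d values instead of d*d) are trained and exchanged, this style of adapter also keeps the per-round communication in a federated setting small.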

Source
