Harry Kim – NVIDIA Technical Blog
News and tutorials for developers, data scientists, and IT admins

NVIDIA Dynamo Accelerates llm-d Community Initiatives for Advancing Large-Scale Distributed Inference (May 21, 2025)
http://www.open-lab.net/blog/?p=100638

The introduction of the llm-d community at Red Hat Summit 2025 marks a significant step forward in accelerating generative AI inference innovation for the open source ecosystem. Built on top of vLLM and Inference Gateway, llm-d extends the capabilities of vLLM with Kubernetes-native architecture for large-scale inference deployments. This post explains key NVIDIA Dynamo components that…

NVIDIA Dynamo Adds GPU Autoscaling, Kubernetes Automation, and Networking Optimizations (May 20, 2025)
http://www.open-lab.net/blog/?p=100047

At NVIDIA GTC 2025, we announced NVIDIA Dynamo, a high-throughput, low-latency open-source inference serving framework for deploying generative AI and reasoning models in large-scale distributed environments. The latest v0.2 release of Dynamo adds GPU autoscaling, Kubernetes deployment automation, and networking optimizations. In this post, we walk through these features and how they can help you get more out of your GPU investments.

NVIDIA Dynamo, A Low-Latency Distributed Inference Framework for Scaling Reasoning AI Models (March 18, 2025)
http://www.open-lab.net/blog/?p=95274

NVIDIA announced the release of NVIDIA Dynamo at GTC 2025. NVIDIA Dynamo is a high-throughput, low-latency open-source inference serving framework for deploying generative AI and reasoning models in large-scale distributed environments. The framework boosts the number of requests served by up to 30x when running the open-source DeepSeek-R1 model on NVIDIA Blackwell. NVIDIA Dynamo is compatible…

Measuring Generative AI Model Performance Using NVIDIA GenAI-Perf and an OpenAI-Compatible API (August 1, 2024)
http://www.open-lab.net/blog/?p=85839

NVIDIA offers tools like Perf Analyzer and Model Analyzer to help machine learning engineers measure and balance the trade-off between latency and throughput, which is crucial for optimizing ML inference performance. Model Analyzer has been adopted by leading organizations such as Snap to identify optimal configurations that enhance throughput and reduce deployment costs. However…
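The kind of latency/throughput summary such tools report can be sketched in a few lines of plain Python. This is an illustrative example only, not GenAI-Perf output or its API; the latency samples and helper name `summarize` are hypothetical:

```python
# Minimal sketch of latency/throughput metrics from one benchmark run.
# All numbers below are hypothetical, not real GenAI-Perf results.
import statistics

def summarize(latencies_s, wall_time_s):
    """Summarize per-request latencies (seconds) from a load-test run."""
    ordered = sorted(latencies_s)

    def pct(p):
        # Nearest-rank percentile over the sorted samples.
        idx = min(len(ordered) - 1, int(round(p / 100 * (len(ordered) - 1))))
        return ordered[idx]

    return {
        "p50_s": pct(50),
        "p99_s": pct(99),
        "mean_s": statistics.mean(ordered),
        # Requests completed per second of wall-clock time.
        "throughput_rps": len(ordered) / wall_time_s,
    }

# Hypothetical run: 8 requests completed in 2 seconds of wall time.
stats = summarize([0.10, 0.12, 0.11, 0.50, 0.13, 0.12, 0.11, 0.14], 2.0)
print(stats["throughput_rps"])  # 4.0 requests per second
```

Raising concurrency typically increases `throughput_rps` while pushing the tail (`p99_s`) up; plotting the two against each other is how the trade-off the post describes is usually evaluated.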
