Hayden Wolff – NVIDIA Technical Blog
News and tutorials for developers, data scientists, and IT admins (feed: http://www.open-lab.net/blog/feed/)

Building AI Agents with NVIDIA NIM Microservices and LangChain
Published 2024-08-07 | http://www.open-lab.net/blog/?p=86543

NVIDIA NIM, part of NVIDIA AI Enterprise, now supports tool calling for models such as Llama 3.1. It also integrates with LangChain to provide a production-ready solution for building agentic workflows. NIM microservices deliver optimized performance for open-source models such as Llama 3.1 and are available to test for free from the NVIDIA API Catalog in LangChain applications.
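As a rough illustration of what that integration looks like, the sketch below binds a placeholder tool to a NIM-hosted Llama 3.1 model through LangChain's ChatNVIDIA client. It assumes the langchain-core and langchain-nvidia-ai-endpoints packages are installed and an NVIDIA_API_KEY is set for the API Catalog; the model name and the get_weather tool are illustrative, not taken from the post.

```python
# Minimal sketch, assuming langchain-core and langchain-nvidia-ai-endpoints are
# installed and NVIDIA_API_KEY is set for the NVIDIA API Catalog.
from langchain_core.tools import tool
from langchain_nvidia_ai_endpoints import ChatNVIDIA

@tool
def get_weather(city: str) -> str:
    """Return a canned weather report for a city (placeholder tool)."""
    return f"It is sunny in {city}."

# Model name is illustrative; any tool-calling-capable model on the catalog works.
llm = ChatNVIDIA(model="meta/llama-3.1-70b-instruct")
llm_with_tools = llm.bind_tools([get_weather])

response = llm_with_tools.invoke("What's the weather in Santa Clara?")
print(response.tool_calls)  # the model should request the get_weather tool
```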

A Simple Guide to Deploying Generative AI with NVIDIA NIM
Published 2024-06-02 | http://www.open-lab.net/blog/?p=83163

Whether you’re working on-premises or in the cloud, NVIDIA NIM microservices give enterprise developers easy-to-deploy, optimized AI models from the community, partners, and NVIDIA. Part of NVIDIA AI Enterprise, NIM offers a secure, streamlined path to iterate quickly and build world-class generative AI solutions. Using a single optimized container…
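As a hedged sketch of what running that single optimized container implies in practice: once a NIM container is up and serving its OpenAI-compatible API (commonly on port 8000), any OpenAI-style client can query it. The endpoint, port, and model name below are assumptions for illustration, not details from the post.

```python
# Minimal sketch, assuming a NIM container is already running locally and
# exposing its OpenAI-compatible API at http://localhost:8000/v1.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-used-locally")

completion = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",   # illustrative model name
    messages=[{"role": "user", "content": "Summarize what NVIDIA NIM provides."}],
    max_tokens=128,
)
print(completion.choices[0].message.content)
```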

RAG 101: Retrieval-Augmented Generation Questions Answered
Published 2023-12-18 | http://www.open-lab.net/blog/?p=75743

Data scientists, AI engineers, MLOps engineers, and IT infrastructure professionals must consider a variety of factors when designing and deploying a RAG pipeline, from core components such as the LLM to evaluation approaches. The key point is that RAG is a system, not just a model or set of models. This system consists of several stages, which were discussed at a high level in RAG 101…
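To make the "system, not just a model" point concrete, here is a deliberately toy sketch of the stages a RAG pipeline chains together: indexing documents, retrieving context for a query, and generating an answer grounded in that context. Word overlap stands in for embedding similarity, and generate only assembles the augmented prompt a real LLM would receive; none of these function names come from the post.

```python
# Toy sketch of a RAG pipeline's stages; word overlap stands in for embeddings,
# and generate() only builds the augmented prompt a real LLM would receive.

def ingest(docs: list[str]) -> list[set[str]]:
    """Index documents; a real pipeline would embed them into a vector database."""
    return [set(doc.lower().split()) for doc in docs]

def retrieve(query: str, index: list[set[str]], docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query and keep the top k."""
    words = set(query.lower().split())
    ranked = sorted(range(len(docs)), key=lambda i: len(words & index[i]), reverse=True)
    return [docs[i] for i in ranked[:k]]

def generate(query: str, context: list[str]) -> str:
    """Assemble the augmented prompt; a real system would send this to an LLM."""
    return "Answer using only this context:\n" + "\n".join(context) + f"\n\nQuestion: {query}"

docs = [
    "NIM microservices package optimized inference engines in a container.",
    "A vector database stores embeddings for similarity search.",
    "Retrieval-augmented generation grounds LLM answers in retrieved documents.",
]
index = ingest(docs)
query = "What does retrieval-augmented generation do?"
print(generate(query, retrieve(query, index, docs)))
```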

RAG 101: Demystifying Retrieval-Augmented Generation Pipelines
Published 2023-12-18 | http://www.open-lab.net/blog/?p=75493

Large language models (LLMs) have impressed the world with their unprecedented capabilities to comprehend and generate human-like responses. Their chat functionality provides fast, natural interaction between humans and large corpora of data. For example, they can summarize and extract highlights from data, or replace complex query languages such as SQL with natural language.
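As one hedged example of the "natural language instead of SQL" idea, the sketch below asks a chat model to translate a question into SQL against an assumed table schema. The endpoint URL, API key placeholder, model name, and schema are illustrative assumptions, not details from the post.

```python
# Hedged sketch of natural-language-to-SQL with a chat LLM; the endpoint,
# API key, model name, and table schema are illustrative assumptions.
from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",  # assumed API Catalog endpoint
    api_key="YOUR_NVIDIA_API_KEY",
)

schema = "orders(order_id, customer_id, total, created_at)"
question = "What was the total revenue last month?"

completion = client.chat.completions.create(
    model="meta/llama-3.1-70b-instruct",
    messages=[
        {"role": "system",
         "content": f"Translate the user's question into SQL for the table {schema}. Return SQL only."},
        {"role": "user", "content": question},
    ],
)
print(completion.choices[0].message.content)  # expected output: a SELECT statement
```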
