With emerging use cases such as digital humans, agents, podcasts, images, and video generation, generative AI is changing the way we interact with PCs. This paradigm shift calls for new ways of interfacing with and programming generative AI models. However, getting started can be daunting for PC developers and AI enthusiasts. Today, NVIDIA released a suite of NVIDIA NIM microservices on…
NVIDIA recently announced a new generation of PC GPUs, the GeForce RTX 50 Series, alongside new AI-powered SDKs and tools for developers. Powered by the NVIDIA Blackwell architecture, fifth-generation Tensor Cores, and fourth-generation RT Cores, the GeForce RTX 50 Series delivers breakthroughs in AI-driven rendering, including neural shaders, digital human technologies, geometry, and lighting.
The NVIDIA RTX AI for Windows PCs platform offers a thriving ecosystem of thousands of open-source models for application developers to leverage and integrate into Windows applications. Notably, llama.cpp is one popular tool, with over 65K GitHub stars at the time of writing. Originally released in 2023, this open-source repository is a lightweight, efficient framework for large language model…
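As a rough illustration of the kind of local inference llama.cpp supports, the sketch below uses the llama-cpp-python bindings to load a quantized GGUF checkpoint and generate a completion. The model path, sampling parameters, and the assumption of a GPU-enabled build are illustrative placeholders, not details taken from the post.

# Minimal local-inference sketch with llama-cpp-python (assumes the package
# is installed and a GGUF model file has been downloaded separately).
from llama_cpp import Llama

# n_gpu_layers=-1 offloads all layers to the GPU when the bindings were built
# with CUDA support; on a CPU-only build the setting has no effect.
llm = Llama(
    model_path="models/llama-3.1-8b-instruct.Q4_K_M.gguf",  # placeholder path
    n_gpu_layers=-1,
)

out = llm(
    "Explain in one sentence why running an LLM locally can be useful:",
    max_tokens=64,
    temperature=0.7,
)
print(out["choices"][0]["text"])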
Today’s large language models (LLMs) achieve unprecedented results across many use cases. Yet, because foundation models are general-purpose, application developers often need to customize and tune them to work well for their specific use cases. Full fine-tuning, which updates all of a model’s weights, requires a large amount of data and compute infrastructure.
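Since full fine-tuning updates every weight, a common lighter-weight alternative is parameter-efficient fine-tuning such as LoRA. The sketch below shows the general pattern using the Hugging Face transformers and peft libraries; the base model name, target modules, and hyperparameters are illustrative assumptions, not the specific recipe from the post.

# Parameter-efficient fine-tuning sketch with LoRA (Hugging Face peft).
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Placeholder base checkpoint; any causal LM from the Hub could be used.
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B")

# LoRA inserts small trainable adapter matrices into selected projection
# layers, so training updates only a fraction of the parameters instead of
# all model weights.
lora_cfg = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()  # typically well under 1% of parameters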
Meta’s Llama collection of large language models includes the most popular foundation models in the open-source community today, supporting a variety of use cases. Millions of developers worldwide are building derivative models and integrating them into their applications. With Llama 3.1, Meta is launching a suite of LLMs as well as a suite of trust and safety models…
NVIDIA today launched the NVIDIA RTX AI Toolkit, a collection of tools and SDKs that Windows application developers can use to customize, optimize, and deploy AI models in their applications. It’s free to use, doesn’t require prior experience with AI frameworks and development tools, and delivers the best AI performance for both local and cloud deployments. The wide availability of generative…
NVIDIA is announcing the Generative AI on RTX PCs Developer Contest, designed to inspire innovation within the developer community. Build and submit your next innovative generative AI project on an RTX-powered Windows PC, and you could win an RTX 4090 GPU, a full GTC in-person conference pass, and other great prizes.
Large language models (LLMs) are fundamentally changing the way we interact with computers. These models are being incorporated into a wide range of applications, from internet search to office productivity tools. They are advancing real-time content generation, text summarization, customer service chatbots, and question-answering use cases. Today, LLM-powered applications are running…
Generative AI and large language models (LLMs) are changing human-computer interaction as we know it. Many use cases would benefit from running LLMs locally on Windows PCs, including gaming, creativity, productivity, and developer experiences. This post discusses several NVIDIA end-to-end developer tools for creating and deploying both text-based and visual LLM applications on NVIDIA RTX AI-ready…
Large language models (LLMs) are a class of generative AI models, built on transformer networks and trained on very large datasets, that can recognize, summarize, translate, predict, and generate language. LLMs hold the promise of transforming society as we know it, yet training these foundation models is incredibly challenging. This blog articulates the basic principles behind LLMs…
Large language models (LLMs) are incredibly powerful, capable of answering complex questions, performing feats of creative writing, developing and debugging source code, and much more. You can build sophisticated LLM applications by connecting them to external tools, for example reading data from a real-time source, or enabling an LLM to decide what action to take given a user’s…
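To make the tool-connection idea concrete, here is a schematic, self-contained sketch of the pattern: the application asks the model to emit a JSON "tool call", then dispatches that call to an ordinary Python function and feeds the result back. The tool name, the fake model output, and the return value are all invented for illustration; a real application would wire this to an actual LLM and data source.

import json

def get_current_temperature(city: str) -> float:
    # Stand-in for a real-time data source (e.g., a weather API).
    return {"Santa Clara": 21.5}.get(city, 0.0)

# Registry mapping tool names the model may emit to real Python callables.
TOOLS = {"get_current_temperature": get_current_temperature}

# In practice this JSON string would be produced by the LLM, which has been
# prompted to either answer directly or request a tool call.
model_output = '{"tool": "get_current_temperature", "arguments": {"city": "Santa Clara"}}'

call = json.loads(model_output)
result = TOOLS[call["tool"]](**call["arguments"])
print(f"Tool result to append to the conversation: {result}")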
Generative AI is primed to transform the world’s industries and to solve today’s most important challenges. To help enterprises take advantage of generative AI, NVIDIA has launched NVIDIA AI Foundations and the NVIDIA NeMo framework, powered by NVIDIA DGX Cloud. NVIDIA AI Foundations is a family of cloud services that provides enterprises with a simplified…
Recent advances in large language models (LLMs) have fueled state-of-the-art performance for NLP applications such as virtual scribes in healthcare, interactive virtual assistants, and many more. To simplify access to LLMs, NVIDIA has announced two services: NeMo LLM for customizing and using LLMs, and BioNeMo, which expands scientific applications of LLMs for the pharmaceutical and…
As the size and complexity of large language models (LLMs) continue to grow, NVIDIA is today announcing updates to the NeMo framework that provide training speed-ups of up to 30%. These updates, which include two trailblazing techniques and a hyperparameter tool to optimize and scale LLM training on any number of GPUs, offer new capabilities to train and deploy models using the NVIDIA AI…