The launch of the NVIDIA Blackwell platform ushered in a new era of improvements in generative AI technology. At its forefront is the newly launched GeForce RTX 50 series GPUs for PCs and workstations that boast fifth-generation Tensor Cores with 4-bit floating point compute (FP4), a must-have for accelerating advanced generative AI models like FLUX from Black Forest Labs. As the latest image…
With emerging use cases such as digital humans, agents, podcasts, images, and video generation, generative AI is changing the way we interact with PCs. This paradigm shift calls for new ways of interfacing with and programming generative AI models. However, getting started can be daunting for PC developers and AI enthusiasts. Today, NVIDIA released a suite of NVIDIA NIM microservices on…
Text-to-image diffusion models can generate diverse, high-fidelity images based on user-provided text prompts. They operate by mapping a random sample from a high-dimensional space, conditioned on a user-provided text prompt, through a series of denoising steps. This results in a representation of the corresponding image. These models can also be used for more complex tasks such as image…
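The sampling process described above can be sketched as a simple loop: start from random noise and repeatedly apply a denoising step conditioned on the text prompt. This is a minimal toy illustration of the control flow only; `denoise_step` stands in for a trained neural network, and the step count and shapes are arbitrary assumptions.

```python
import numpy as np

NUM_STEPS = 50  # assumed number of denoising steps (real samplers vary)

def denoise_step(x, t, text_embedding):
    # Hypothetical denoiser: in a real diffusion model this is a trained
    # network that predicts the noise present in x at timestep t,
    # conditioned on the text embedding. Here we merely shrink the sample
    # toward zero to show the iterative structure.
    predicted_noise = x * (t / NUM_STEPS)
    return x - predicted_noise / NUM_STEPS

def generate(text_embedding, shape=(64, 64, 3), seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(shape)       # random high-dimensional sample
    for t in range(NUM_STEPS, 0, -1):    # series of denoising steps
        x = denoise_step(x, t, text_embedding)
    return x                             # representation of the image

img = generate(text_embedding=None)
```

Real implementations (DDPM, DDIM, and their distilled variants) follow this same skeleton, differing mainly in how the denoiser is trained and how each step combines the noise prediction with the current sample.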
Generative AI, the ability of algorithms to process various types of inputs (such as text, images, audio, video, and code) and generate new content, is advancing at an unprecedented rate. While this technology is making significant strides across multiple industries, one sector that stands to benefit immensely is the Architecture, Engineering, and Construction (AEC) industry.
Generative AI models have a variety of uses, such as helping write computer code, crafting stories, composing music, generating images, producing videos, and more. And, as these models continue to grow in size and are trained on even more data, they are producing even higher-quality outputs. Building and deploying these more intelligent models is incredibly compute-intensive…
At Google I/O 2024, Google announced Firebase Genkit, a new open-source framework for developers to add generative AI to web and mobile applications using models like Google Gemini and Google Gemma. With Firebase Genkit, you can build apps that integrate intelligent agents, automate customer support, use semantic search, and convert unstructured data into insights. Genkit also includes a developer UI…
Text-to-image diffusion models have been established as a powerful method for high-fidelity image generation based on given text. Nevertheless, diffusion models do not always grant the desired alignment between the given input text and the generated image, especially for complicated idiosyncratic prompts that are not encountered in real life. Hence, there is growing interest in efficiently fine…
Speakers from NVIDIA, Meta, Microsoft, OpenAI, and ServiceNow will be talking about the latest tools, optimizations, trends, and best practices for large language models (LLMs).
Visual generative AI is the process of creating images from text prompts. The technology is based on vision-language foundation models that are pretrained on web-scale data. These foundation models are used in many applications by providing a multimodal representation. Examples include image captioning and video retrieval, creative 3D and 2D image synthesis, and robotic manipulation.
At CES, NVIDIA shared that SDXL Turbo, LCM-LoRA, and Stable Video Diffusion are all being accelerated by NVIDIA TensorRT. These enhancements allow GeForce RTX GPU owners to generate images in real-time and save minutes generating videos, vastly improving workflows. SDXL Turbo achieves state-of-the-art performance with a new distillation technology, enabling single-step image…
Register for expert-led technical workshops at NVIDIA GTC and save with early bird pricing through February 7, 2024.
In the realm of generative AI, building enterprise-grade large language models (LLMs) requires expertise in collecting high-quality data, setting up the accelerated infrastructure, and optimizing the models. Developers can begin with pretrained models and fine-tune them for their use case, saving time and getting their solutions to market faster. Developers need an easy way to try out models…
Stable Diffusion is an open-source generative AI image-based model that enables users to generate images with simple text descriptions. Gaining traction among developers, it has powered popular applications like Wombo and Lensa. End users typically access the model through distributions that package it together with a user interface and a set of tools. The most popular distribution is the…
Large language models (LLMs) are becoming an integral tool for businesses to improve their operations, customer interactions, and decision-making processes. However, off-the-shelf LLMs often fall short in meeting the specific needs of enterprises due to industry-specific terminology, domain expertise, or unique requirements. This is where custom LLMs come into play.
NVIDIA will present 19 research papers at SIGGRAPH, the year's most important computer graphics conference.
Foundation models are AI neural networks trained on massive unlabeled datasets to handle a wide variety of jobs from translating text to analyzing medical images.
Generative AI is primed to transform the world's industries and to solve today's most important challenges. To enable enterprises to take advantage of the possibilities with generative AI, NVIDIA has launched NVIDIA AI Foundations and the NVIDIA NeMo framework, powered by NVIDIA DGX Cloud. NVIDIA AI Foundations are a family of cloud services that provide enterprises with a simplified…
NVIDIA T4 was introduced four years ago as a universal GPU for use in mainstream servers. T4 GPUs achieved widespread adoption and are now the highest-volume NVIDIA data center GPU. T4 GPUs were deployed into use cases for AI inference, cloud gaming, video, and visual computing. At the NVIDIA GTC 2023 keynote, NVIDIA introduced several inference platforms for AI workloads…
Learn how AI is boosting creative applications for creators during NVIDIA GTC 2023, March 20-23.
Autonomous vehicles (AVs) must be able to safely handle any type of traffic scenario that could be encountered in the real world. This includes hazardous near-accidents, where an unexpected maneuver by other road users in traffic could lead to collision. However, developing and testing AVs in these types of scenarios is challenging. Real-world collision data is sparse…
See how recent breakthroughs in generative AI are transforming media, content creation, personalized experiences, and more.
]]>