2024 was another landmark year for developers, researchers, and innovators working with NVIDIA technologies. From groundbreaking developments in AI inference to empowering open-source contributions, these blog posts highlight the breakthroughs that resonated most with our readers.

NVIDIA NIM Offers Optimized Inference Microservices for Deploying AI Models at Scale
Introduced in…
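Once deployed, a NIM microservice exposes an OpenAI-compatible HTTP API. As a minimal sketch of how such a deployment is typically queried, the snippet below posts a chat completion request; the endpoint URL and model name are assumptions that depend on which NIM container you actually run.

```python
import requests

# Assumed local deployment: NIM containers typically serve an
# OpenAI-compatible API under /v1 on the published port.
NIM_URL = "http://localhost:8000/v1/chat/completions"

resp = requests.post(
    NIM_URL,
    json={
        # Example model identifier; use the model served by your NIM.
        "model": "meta/llama-3.1-8b-instruct",
        "messages": [
            {"role": "user", "content": "Summarize NVIDIA NIM in one sentence."}
        ],
        "max_tokens": 64,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```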
Today, over 80% of internet traffic is video. This content is generated by and consumed across various devices, including IoT gadgets, smartphones, computers, and TVs. As pixel density and the number of connected devices grow, continued investment in fast, efficient, high-quality video encoding and decoding is essential. The latest NVIDIA data center GPUs, such as the NVIDIA L40S and NVIDIA…
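To ground the decoding side, here is a hedged sketch that drives a GPU decode pass from Python through FFmpeg's CUDA hardware acceleration path (NVDEC). It assumes an FFmpeg build with CUDA support on the PATH; the input file name is hypothetical.

```python
import subprocess

# Decode-only benchmark: NVDEC decodes the stream on the GPU and the
# frames are discarded, isolating decode throughput from file output.
subprocess.run(
    [
        "ffmpeg",
        "-hwaccel", "cuda",                # decode on the GPU via NVDEC
        "-hwaccel_output_format", "cuda",  # keep decoded frames in GPU memory
        "-i", "input.mp4",                 # hypothetical source file
        "-f", "null", "-",                 # no output; decode and drop frames
    ],
    check=True,
)
```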
Capturing video footage and playing games at 8K resolution with 60 frames per second (FPS) is now possible, thanks to advances in camera and display technologies. Leading multimedia companies, including RED Digital Cinema, Nikon, and Canon, have already introduced 8K60 cameras for both the consumer and professional markets. On the display side, with the newest HDMI 2.1 standard…
As we approach the end of another exciting year at NVIDIA, it's time to look back at the most popular stories from the NVIDIA Technical Blog in 2023. Groundbreaking research and developments in fields such as generative AI, large language models (LLMs), high-performance computing (HPC), and robotics are leading the way in transformative AI solutions and capturing the interest of our readers.
Robots are increasing in complexity, with a higher degree of autonomy, a greater number and diversity of sensors, and more sensor fusion-based algorithms. Hardware acceleration is essential to run these increasingly complex workloads, enabling robotics applications that can run larger workloads with more speed and power efficiency. The mission of NVIDIA Isaac ROS has always been to empower…
The most exciting computing applications currently rely on training and running inference on complex AI models, often in demanding, real-time deployment scenarios. High-performance, accelerated AI platforms are needed to meet the demands of these applications and deliver the best user experiences. New AI models are constantly being invented to enable new capabilities…
The business applications of GPU-accelerated computing are set to expand greatly in the coming years. One of the fastest-growing trends is the use of generative AI for creating human-like text and all types of images. Driving the explosion of market interest in generative AI are technologies such as transformer models that bring AI to everyday applications, from conversational text to…
NVIDIA T4 was introduced four years ago as a universal GPU for mainstream servers. It achieved widespread adoption and is now the highest-volume NVIDIA data center GPU, deployed for use cases spanning AI inference, cloud gaming, video, and visual computing. At the NVIDIA GTC 2023 keynote, NVIDIA introduced several inference platforms for AI workloads…
AV1 is the new gold standard video format, offering superior efficiency and quality compared to the older H.264 and H.265 formats. It is the most recent royalty-free video codec standardized by the Alliance for Open Media. The NVIDIA Ampere architecture introduced hardware-accelerated AV1 decoding; the NVIDIA Ada Lovelace architecture supports both AV1 encoding and decoding.
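As a concrete illustration of hardware AV1 encoding on Ada Lovelace, the sketch below invokes FFmpeg's av1_nvenc encoder from Python. It assumes an FFmpeg build with NVENC enabled and an Ada Lovelace or newer GPU; the file names and bitrate are hypothetical.

```python
import shutil
import subprocess

# Require ffmpeg on PATH; av1_nvenc also requires an NVENC-enabled
# FFmpeg build and an Ada Lovelace (or newer) GPU.
assert shutil.which("ffmpeg"), "ffmpeg not found on PATH"

subprocess.run(
    [
        "ffmpeg", "-y",
        "-i", "input.mp4",     # hypothetical source clip
        "-c:v", "av1_nvenc",   # hardware AV1 encoder (Ada Lovelace NVENC)
        "-preset", "p5",       # NVENC speed/quality preset (p1 fastest .. p7 best)
        "-b:v", "4M",          # example target bitrate
        "output_av1.mp4",
    ],
    check=True,
)
```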
Learn about the newest CUDA features such as release compatibility, dynamic parallelism, lazy module loading, and support for the new NVIDIA Hopper and NVIDIA Ada Lovelace GPU architectures.
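Of these, lazy module loading is straightforward to try: it is opt-in through the CUDA_MODULE_LOADING environment variable, which must be set before the CUDA driver initializes. Below is a minimal sketch using Numba as the kernel launcher; Numba and a CUDA-capable GPU are assumed, and the kernel itself is an arbitrary placeholder workload.

```python
import os

# Opt in to lazy module loading (CUDA 11.7+/12.x). The variable must be
# set before CUDA initialization, hence before any CUDA work begins.
os.environ.setdefault("CUDA_MODULE_LOADING", "LAZY")

import numpy as np
from numba import cuda

@cuda.jit
def scale(out, arr, factor):
    i = cuda.grid(1)
    if i < arr.size:
        out[i] = arr[i] * factor

n = 1 << 20
arr = np.arange(n, dtype=np.float32)
out = np.zeros_like(arr)

# With lazy loading enabled, compiled modules are loaded on first use
# rather than eagerly, reducing startup time and memory footprint.
scale[(n + 255) // 256, 256](out, arr, 2.0)
print(out[:4])  # [0. 2. 4. 6.]
```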
NVIDIA announces the newest CUDA Toolkit software release, 12.0. This is the first major release in many years, focusing on new programming models and on accelerating CUDA applications through new hardware capabilities. For more information, watch the YouTube Premiere webinar, CUDA 12.0: New Features and Beyond. You can now target architecture-specific features and instructions…
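Architecture-specific targeting is primarily an nvcc/NVRTC compiler feature (for example, building for sm_90a to reach Hopper-only instructions). As a hedged, Python-visible analogue, the sketch below JIT-compiles a raw kernel with an explicit NVRTC architecture flag via CuPy; the compute_90 target is an assumption and must match the GPU you run on.

```python
import cupy as cp

source = r'''
extern "C" __global__
void axpy(const float* x, float* y, float a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}
'''

# Assumed target: a Hopper-class GPU (compute_90). Change or drop the
# flag for other architectures; a module built for a newer architecture
# than the installed GPU will fail to load.
axpy = cp.RawKernel(source, "axpy",
                    options=("--gpu-architecture=compute_90",))

n = 1 << 20
x = cp.arange(n, dtype=cp.float32)
y = cp.ones(n, dtype=cp.float32)
axpy(((n + 255) // 256,), (256,), (x, y, cp.float32(2.0), n))
print(y[:4])  # [1. 3. 5. 7.]
```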