As many enterprises move to running AI training or inference on their data, the data and the code need to be protected, especially for large language models (LLMs). Many customers can't risk placing their data in the cloud because of data sensitivity. Such data may contain personally identifiable information (PII) or company proprietary information, and the trained model has valuable intellectual…
NVIDIA and Red Hat have partnered to bring continued improvements to the precompiled NVIDIA Driver introduced in 2020. Last month, NVIDIA announced that the open GPU driver modules will become the default recommended way to enable NVIDIA graphics hardware. Today, NVIDIA announced that Red Hat is now compiling and signing the NVIDIA open GPU kernel modules to further streamline the usage for…
Confidential and self-sovereign AI is a new approach to AI development, training, and inference where the user's data is decentralized, private, and controlled by the users themselves. This post explores how the capabilities of Confidential Computing (CC) are expanded through decentralization using blockchain technology. The problem being solved is most clearly shown through the use of…
Edgeless Systems introduced Continuum AI, the first generative AI framework that keeps prompts encrypted at all times with confidential computing by combining confidential VMs with NVIDIA H100 GPUs and secure sandboxing. The launch of this platform underscores a new era in AI deployment, where the benefits of powerful LLMs can be realized without compromising data privacy and security.
Join the webinar on June 11 with NVIDIA and Super Protocol to learn about the benefits of Confidential Computing for Web3 AI.
NVIDIA launched the initial release of the Confidential Computing (CC) solution in private preview for early access in July 2023 through NVIDIA LaunchPad. Confidential Computing can be used in virtualized environments and provides the highest level of security with the best performance possible in the industry today. The NVIDIA H100 Tensor Core GPU was the first GPU to introduce support for CC.
The latest release of CUDA Toolkit, version 12.4, continues to push accelerated computing performance using the latest NVIDIA GPUs. This post explains the new features and enhancements included in this release: CUDA and the CUDA Toolkit software provide the foundation for all NVIDIA GPU-accelerated computing applications in data science and analytics, machine learning…
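After upgrading the toolkit, a quick sanity check is to confirm which CUDA version the installed driver actually supports before relying on 12.4 features. Below is a minimal sketch using the pynvml bindings (the nvidia-ml-py package); the package choice and the 12.4 threshold are illustrative assumptions, not something prescribed by the post.

```python
# Minimal sketch: query the installed driver and its supported CUDA version via NVML.
# Assumes the nvidia-ml-py package (imported as pynvml) and an NVIDIA driver are present;
# the 12.4 comparison below is illustrative only.
import pynvml

pynvml.nvmlInit()
try:
    driver = pynvml.nvmlSystemGetDriverVersion()
    cuda = pynvml.nvmlSystemGetCudaDriverVersion()  # e.g. 12040 means CUDA 12.4
    major, minor = cuda // 1000, (cuda % 1000) // 10
    print(f"driver {driver}, supports CUDA {major}.{minor}")
    if (major, minor) < (12, 4):
        print("warning: driver predates CUDA 12.4; some toolkit features may be unavailable")
finally:
    pynvml.nvmlShutdown()
```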
Join us on March 20 for Cybersecurity Developer Day at GTC to gain insights on leveraging generative AI for cyber defense.
Discover how generative AI is powering cybersecurity solutions with enhanced speed, accuracy, and scalability.
On Sept. 13, connect with the winning multilingual recommender systems Kaggle Grandmaster team of KDD'23.
Hardware virtualization is an effective way to isolate workloads in virtual machines (VMs) from the physical hardware and from each other. This offers improved security, particularly in a multi-tenant environment. Yet, security risks such as in-band attacks, side-channel attacks, and physical attacks can still happen, compromising the confidentiality, integrity, or availability of your data and…
On July 26, connect with NVIDIA CUDA product team experts on the latest CUDA Toolkit 12.
The latest release of CUDA Toolkit 12.2 introduces a range of essential new features, modifications to the programming model, and enhanced support for hardware capabilities accelerating CUDA applications. Now generally available from NVIDIA, CUDA Toolkit 12.2 includes many new capabilities, both major and minor. The following post offers an overview of many of the key…
Watch on-demand as experts deep dive into CUDA 12.2, including support for confidential computing.
Some years ago, Jensen Huang, founder and CEO of NVIDIA, hand-delivered the world's first NVIDIA DGX AI system to OpenAI. Fast forward to the present, and OpenAI's ChatGPT has taken the world by storm, highlighting the benefits and capabilities of artificial intelligence (AI) and how it can be applied across every industry and business, from small companies to large enterprises. Now, have you ever stopped to think…
Rapid digital transformation has led to an explosion of sensitive data being generated across the enterprise. That data has to be stored and processed in data centers on-premises, in the cloud, or at the edge. Examples of activities that generate sensitive and personally identifiable information (PII) include credit card transactions, medical imaging or other diagnostic tests, insurance claims…
Confidential computing is a way of processing data in a protected zone of a computer's processor, often inside a remote edge or public cloud server, and proving that no one viewed or altered the work.
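The "proving" part is attestation: before any sensitive data is released, the client checks cryptographic evidence that the remote environment is exactly what it claims to be. The sketch below is a deliberately simplified, hypothetical illustration of that verify-before-send pattern using a pre-shared HMAC key; real confidential-computing attestation relies on hardware-rooted certificate chains and signed measurement reports, not a shared secret.

```python
import hashlib
import hmac

# Toy illustration only: all names and keys here are hypothetical, and a real
# trusted execution environment proves its identity with hardware-signed
# evidence rather than an HMAC over a pre-shared key.
TRUSTED_KEY = b"pre-shared-verification-key"
EXPECTED_MEASUREMENT = hashlib.sha256(b"approved-workload-image").hexdigest()

def verify_attestation(report: dict) -> bool:
    """Accept the remote environment only if its reported measurement matches
    the expected value and the report's signature checks out."""
    mac = hmac.new(TRUSTED_KEY, report["measurement"].encode(), hashlib.sha256)
    return (hmac.compare_digest(mac.hexdigest(), report["signature"])
            and report["measurement"] == EXPECTED_MEASUREMENT)

def send_if_trusted(report: dict, sensitive_payload: bytes) -> None:
    """Release data only after the environment proves what it is running."""
    if verify_attestation(report):
        print(f"releasing {len(sensitive_payload)} bytes to the attested environment")
    else:
        raise RuntimeError("attestation failed: data stays local")

if __name__ == "__main__":
    measurement = hashlib.sha256(b"approved-workload-image").hexdigest()
    signature = hmac.new(TRUSTED_KEY, measurement.encode(), hashlib.sha256).hexdigest()
    send_if_trusted({"measurement": measurement, "signature": signature}, b"example sensitive data")
```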
NVIDIA is now publishing Linux GPU kernel modules as open source with a dual GPL/MIT license, starting with the R515 driver release. You can find the source code for these kernel modules on the NVIDIA/open-gpu-kernel-modules GitHub page. This release is a significant step toward improving the experience of using NVIDIA GPUs in Linux, for tighter integration with the OS, and for developers to…
Today during the 2022 NVIDIA GTC Keynote address, NVIDIA CEO Jensen Huang introduced the new NVIDIA H100 Tensor Core GPU based on the new NVIDIA Hopper GPU architecture. This post gives you a look inside the new H100 GPU and describes important new features of NVIDIA Hopper architecture GPUs. The NVIDIA H100 Tensor Core GPU is our ninth-generation data center GPU designed to deliver an…
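For code that targets Hopper-specific features, a common first step is confirming at runtime that a Hopper-class device (compute capability 9.x) is actually present. The following is a minimal sketch using the pynvml bindings; the package choice and the capability check are assumptions made for illustration.

```python
# Minimal sketch: list GPUs and flag Hopper-class devices (compute capability 9.x) via NVML.
# Assumes the nvidia-ml-py package (imported as pynvml) and an NVIDIA driver are installed.
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        major, minor = pynvml.nvmlDeviceGetCudaComputeCapability(handle)
        tag = "Hopper-class" if major == 9 else "pre-Hopper"
        print(f"GPU {i}: {name}, compute capability {major}.{minor} ({tag})")
finally:
    pynvml.nvmlShutdown()
```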