Hardware / Semiconductor – NVIDIA Technical Blog News and tutorials for developers, data scientists, and IT admins 2025-08-19T20:50:11Z http://www.open-lab.net/blog/feed/ Ashkan Seyedi <![CDATA[Scaling AI Factories with Co-Packaged Optics for Better Power Efficiency]]> http://www.open-lab.net/blog/?p=104786 2025-08-15T19:14:23Z 2025-08-18T16:00:00Z

As artificial intelligence redefines the computing landscape, the network has become the critical backbone shaping the data center of the future. Large language model training performance is determined not only by compute resources but by the agility, capacity, and intelligence of the underlying network. The industry is witnessing the evolution from traditional, CPU-centric infrastructures toward…

Source

Berkin Kartal <![CDATA[Using CI/CD to Automate Network Configuration and Deployment]]> http://www.open-lab.net/blog/?p=103912 2025-08-07T19:07:28Z 2025-07-30T16:44:54Z

Continuous integration and continuous delivery/deployment (CI/CD) is a set of modern software development practices used for delivering code changes more reliably and often. While CI/CD is widely adopted in the software world, it's becoming more relevant for network engineers, particularly as networks become automated and software-driven. In this post, I briefly introduce CI/…

Source

Sophia Schuur <![CDATA[Automating Network Design in NVIDIA Air with Ansible and Git]]> http://www.open-lab.net/blog/?p=103452 2025-07-24T18:32:17Z 2025-07-18T21:57:07Z

At its core, NVIDIA Air is built for automation. Every part of your network can be coded, versioned, and set to trigger automatically. This includes creating the topology, configuring the network, and validating its setup. Automation reduces manual error, speeds up testing, and brings the same rigor to networking that modern DevOps teams apply to software development. Let's discuss the basic…

Source

Nidhi Bhatia <![CDATA[Think Smart and Ask an Encyclopedia-Sized Question: Multi-Million Token Real-Time Inference for 32X More Users]]> http://www.open-lab.net/blog/?p=102927 2025-07-24T18:33:17Z 2025-07-08T01:00:00Z

Modern AI applications increasingly rely on models that combine huge parameter counts with multi-million-token context windows. Whether it is AI agents following months of conversation, legal assistants reasoning through gigabytes of case law as big as an entire encyclopedia set, or coding copilots navigating sprawling repositories, preserving long-range context is essential for relevance and…

Source

Jason Perlow <![CDATA[How Early Access to NVIDIA GB200 Systems Helped LMArena Build a Model to Evaluate LLMs]]> http://www.open-lab.net/blog/?p=102053 2025-06-26T18:55:16Z 2025-06-18T16:00:00Z

LMArena at the University of California, Berkeley is making it easier to see which large language models excel at specific tasks, thanks to help from NVIDIA and Nebius. Its rankings, powered by the Prompt-to-Leaderboard (P2L) model, collect votes from humans on which AI performs best in areas such as math, coding, or creative writing. “We capture user preferences across tasks and apply…
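Turning pairwise human votes into a ranking can be sketched with a plain Elo update. This is only an illustration of the arena-style idea; LMArena's actual Prompt-to-Leaderboard model is more sophisticated and task-conditioned.

```python
# Minimal Elo-style ranking from pairwise "which model won?" votes.
# Model names and the K-factor are illustrative choices.

def expected_score(ra: float, rb: float) -> float:
    """Probability the first rating beats the second under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rb - ra) / 400))

def update_elo(ratings: dict, winner: str, loser: str, k: float = 32.0) -> None:
    ea = expected_score(ratings[winner], ratings[loser])
    delta = k * (1.0 - ea)       # big upsets move ratings more
    ratings[winner] += delta
    ratings[loser] -= delta

ratings = {"model_a": 1000.0, "model_b": 1000.0}
for _ in range(20):              # model_a wins 20 straight votes
    update_elo(ratings, "model_a", "model_b")
```

The symmetric update conserves total rating, so the leaderboard reflects only relative preference strength.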

Source

Sophia Schuur <![CDATA[Advantages of External File Uploads for Scalable, Custom Network Topologies in NVIDIA Air]]> http://www.open-lab.net/blog/?p=101034 2025-06-12T18:50:50Z 2025-06-02T21:32:30Z

NVIDIA Air offers the unique ability to simulate anything from a small network to an entire data center. Before you start configuration, routing, or management, consider the topology first. A network topology is the layout or structure of how devices connect and communicate within a network. It describes both the physical arrangement and the logical flow of data.
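A topology in this sense is just a description of devices and links, which can be expressed as data. The sketch below builds a two-tier leaf-spine (Clos) fabric as an adjacency map; the device names and the dict-of-lists format are illustrative, not NVIDIA Air's actual topology schema.

```python
# Hypothetical sketch: a leaf-spine topology as an adjacency map,
# where every leaf switch has one uplink to every spine switch.
from itertools import product

def build_leaf_spine(num_spines: int, num_leaves: int) -> dict:
    spines = [f"spine{i:02d}" for i in range(1, num_spines + 1)]
    leaves = [f"leaf{i:02d}" for i in range(1, num_leaves + 1)]
    links = {node: [] for node in spines + leaves}
    for spine, leaf in product(spines, leaves):
        links[spine].append(leaf)   # physical arrangement: who cables to whom
        links[leaf].append(spine)
    return links

topology = build_leaf_spine(num_spines=2, num_leaves=4)
```

Because the layout is plain data, it can be versioned in Git and fed to whatever tool renders or simulates it.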

Source

Joe DeLaere <![CDATA[Integrating Semi-Custom Compute into Rack-Scale Architecture with NVIDIA NVLink Fusion]]> http://www.open-lab.net/blog/?p=100146 2025-05-29T17:31:00Z 2025-05-19T04:54:31Z

Data centers are being re-architected for efficient delivery of AI workloads. This is a hugely complicated endeavor, and NVIDIA is now delivering AI factories based on the NVIDIA rack-scale architecture. To deliver the best performance for the AI factory, many accelerators need to work together at rack-scale with maximal bandwidth and minimal latency to support the largest number of users in the…

Source

Berkin Kartal <![CDATA[AI Fabric Resiliency and Why Network Convergence Matters]]> http://www.open-lab.net/blog/?p=98574 2025-05-29T19:05:04Z 2025-05-14T16:20:00Z

High-performance computing and deep learning workloads are extremely sensitive to latency. Packet loss forces retransmission or stalls in the communication pipeline, which directly increases latency and disrupts the synchronization between GPUs. This can degrade the performance of collective operations such as all-reduce or broadcast, where every GPU's participation is required before progressing.
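The "every GPU's participation" point is worth making concrete: a synchronizing collective cannot complete until its slowest participant arrives, so a single delayed or retransmitted packet stalls the whole step. A toy sketch with illustrative per-GPU step times in milliseconds:

```python
# The completion time of a synchronizing collective (e.g. all-reduce)
# is gated by the straggler, not the average participant.

def allreduce_step_time(per_gpu_ms):
    """Time until the barrier releases: the slowest GPU's arrival."""
    return max(per_gpu_ms)

healthy = allreduce_step_time([1.0, 1.1, 0.9, 1.0])
# One GPU stuck retransmitting after packet loss delays everyone:
stalled = allreduce_step_time([1.0, 1.1, 0.9, 25.0])
slowdown = stalled / healthy
```

Three of the four GPUs were just as fast in both cases, yet the step is over 20x slower, which is why fast network convergence after a failure matters so much.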

Source

Sophia Schuur <![CDATA[Connect Simulations with the Real World Using NVIDIA Air Services]]> http://www.open-lab.net/blog/?p=99778 2025-05-29T19:05:06Z 2025-05-13T18:00:00Z

NVIDIA Air enables cloud-scale efficiency by creating identical replicas of real-world data center infrastructure deployments. With NVIDIA Air, you can spin up hundreds of switches and servers and configure them with a single script. One of the many advantages of NVIDIA Air is the ability to connect your simulations with the real world. Enabling an external connection in your environment can…

Source

Weiji Chen <![CDATA[New NVIDIA NV-Tesseract Time Series Models Advance Dataset Processing and Anomaly Detection]]> http://www.open-lab.net/blog/?p=99642 2025-05-29T19:05:21Z 2025-05-06T16:22:57Z

Time-series data has evolved from a simple historical record into a real-time engine for critical decisions across industries. Whether it's streamlining logistics, forecasting markets, or anticipating machine failures, organizations need more sophisticated tools than traditional methods can offer. NVIDIA GPU-accelerated deep learning is enabling industries to gain real-time analytics.
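To ground the task the excerpt names, here is the classic baseline that deep-learning detectors like NV-Tesseract improve on: flag points that deviate sharply from a rolling window. This is a stdlib sketch of the problem, not NV-Tesseract's method.

```python
# Baseline time-series anomaly detection: rolling z-score.
from statistics import mean, stdev

def zscore_anomalies(series, window=5, threshold=3.0):
    """Return indices whose value is > threshold sigmas from the
    mean of the preceding window."""
    hits = []
    for i in range(window, len(series)):
        hist = series[i - window:i]
        mu, sigma = mean(hist), stdev(hist)
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            hits.append(i)
    return hits

signal = [10.0, 10.1, 9.9, 10.0, 10.2, 10.1, 50.0, 10.0]
anomalies = zscore_anomalies(signal)  # the spike at index 6
```

Learned models earn their keep where this baseline fails: seasonal patterns, multivariate signals, and anomalies that are contextual rather than large.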

Source

Emily Sakata <![CDATA[Announcing NVIDIA Secure AI General Availability]]> http://www.open-lab.net/blog/?p=99064 2025-05-15T19:08:42Z 2025-04-23T22:23:11Z

As many enterprises move to running AI training or inference on their data, the data and the code need to be protected, especially for large language models (LLMs). Many customers can't risk placing their data in the cloud because of data sensitivity. Such data may contain personally identifiable information (PII) or company proprietary information, and the trained model has valuable intellectual…

Source

Ashraf Eassa <![CDATA[NVIDIA Blackwell Delivers Massive Performance Leaps in MLPerf Inference v5.0]]> http://www.open-lab.net/blog/?p=98367 2025-04-23T19:41:12Z 2025-04-02T18:14:48Z

The compute demands for large language model (LLM) inference are growing rapidly, fueled by the combination of growing model sizes, real-time latency requirements, and, most recently, AI reasoning. At the same time, as AI adoption grows, the ability of an AI factory to serve as many users as possible, all while maintaining good per-user experiences, is key to maximizing the value it generates.

Source

Dave Salvator <![CDATA[NVIDIA Blackwell Ultra for the Era of AI Reasoning]]> http://www.open-lab.net/blog/?p=96761 2025-03-20T22:34:30Z 2025-03-19T18:00:15Z

For years, advancements in AI have followed a clear trajectory through pretraining scaling: larger models, more data, and greater computational resources lead to breakthrough capabilities. In the last 5 years, pretraining scaling has increased compute requirements by an incredible 50 million times. However, building more intelligent systems is no longer just about pretraining bigger models.

Source

Leigh Engel <![CDATA[Simplify System Memory Management with the Latest NVIDIA GH200 NVL2 Enterprise RA]]> http://www.open-lab.net/blog/?p=96079 2025-04-23T02:45:13Z 2025-02-13T21:26:30Z

NVIDIA Enterprise Reference Architectures (Enterprise RAs) can reduce the time and cost of deploying AI infrastructure solutions. They provide a streamlined approach for building flexible and cost-effective accelerated infrastructure while ensuring compatibility and interoperability. The latest Enterprise RA details an optimized cluster configuration for systems integrated with NVIDIA GH200…

Source

Allison Ding <![CDATA[Get Started with GPU Acceleration for Data Science]]> http://www.open-lab.net/blog/?p=95894 2025-04-23T02:52:30Z 2025-02-06T23:07:48Z

In data science, operational efficiency is key to handling increasingly complex and large datasets. GPU acceleration has become essential for modern workflows, offering significant performance improvements. RAPIDS is a suite of open-source libraries and frameworks developed by NVIDIA, designed to accelerate data science pipelines using GPUs with minimal code changes.

Source

Taylor Allison <![CDATA[Accelerating AI Storage by up to 48% with NVIDIA Spectrum-X Networking Platform and Partners]]> http://www.open-lab.net/blog/?p=95432 2025-04-23T02:48:15Z 2025-02-04T15:00:00Z

AI factories rely on more than just compute fabrics. While the East-West network connecting the GPUs is critical to AI application performance, the storage fabric, which connects high-speed storage arrays, is equally important. Storage performance plays a key role across several stages of the AI lifecycle, including training checkpointing, inference techniques such as retrieval-augmented generation…

Source

Annamalai Chockalingam <![CDATA[New AI SDKs and Tools Released for NVIDIA Blackwell GeForce RTX 50 Series GPUs]]> http://www.open-lab.net/blog/?p=95526 2025-06-10T18:46:37Z 2025-01-30T14:00:00Z

NVIDIA recently announced a new generation of PC GPUs, the GeForce RTX 50 Series, alongside new AI-powered SDKs and tools for developers. Powered by the NVIDIA Blackwell architecture, fifth-generation Tensor Cores, and fourth-generation RT Cores, the GeForce RTX 50 Series delivers breakthroughs in AI-driven rendering, including neural shaders, digital human technologies, geometry and lighting.

Source

Sophia Schuur <![CDATA[Simulate Real-World Data Centers in the Cloud with NVIDIA Air]]> http://www.open-lab.net/blog/?p=92749 2025-07-25T02:22:39Z 2024-12-12T22:18:40Z

The advent of AI has introduced a new type of data center, the AI factory, purpose-built from the ground up to handle AI workloads. AI workloads can significantly vary in scope and scale, but in every case, the network is key to ensuring high performance and faster time to value. To accelerate time to AI and offer enhanced return on investment, NVIDIA Air enables organizations to build…

Source

Leigh Engel <![CDATA[Deploying NVIDIA H200 NVL at Scale with New Enterprise Reference Architecture]]> http://www.open-lab.net/blog/?p=93686 2024-12-12T19:35:14Z 2024-12-12T00:40:45Z

Last month at the Supercomputing 2024 conference, NVIDIA announced the availability of NVIDIA H200 NVL, the latest NVIDIA Hopper platform. Optimized for enterprise workloads, NVIDIA H200 NVL is a versatile platform that delivers accelerated performance for a wide range of AI and HPC applications. With its dual-slot PCIe form factor and 600W TGP, the H200 NVL enables flexible configuration options…

Source

Rob Nertney <![CDATA[Exploring the Case of Super Protocol with Self-Sovereign AI and NVIDIA Confidential Computing]]> http://www.open-lab.net/blog/?p=91216 2025-06-25T17:52:12Z 2024-11-14T22:01:38Z

Confidential and self-sovereign AI is a new approach to AI development, training, and inference where the user's data is decentralized, private, and controlled by the users themselves. This post explores how the capabilities of Confidential Computing (CC) are expanded through decentralization using blockchain technology. The problem being solved is most clearly shown through the use of…

Source

Sophia Schuur <![CDATA[Protect Your Network with Secure Boot in SONiC]]> http://www.open-lab.net/blog/?p=91056 2024-10-31T19:07:37Z 2024-10-29T22:01:56Z

NVIDIA technology helps organizations build and maintain secure, scalable, and high-performance network infrastructure. Advances in AI, with NVIDIA at the forefront, contribute every day to security advances. One way NVIDIA has taken a more direct approach to network security is through a secure network operating system (NOS). A secure NOS is a specialized type of…
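The check at the heart of secure boot is simple to state: refuse to load an image whose digest does not match the trusted value recorded when the image was signed. Real secure boot verifies an asymmetric signature chained to a hardware root of trust; the hash-only sketch below, with made-up image names, just illustrates the gate.

```python
# Conceptual secure-boot gate: compare a candidate image's digest
# against a trusted value before allowing it to run.
import hashlib

def image_digest(image: bytes) -> str:
    return hashlib.sha256(image).hexdigest()

def verify_image(image: bytes, trusted_digest: str) -> bool:
    """True only if the image is bit-for-bit what was signed."""
    return image_digest(image) == trusted_digest

firmware = b"sonic-nos-build-4.2"          # hypothetical image contents
trusted = image_digest(firmware)           # recorded at signing time
tampered = firmware + b"-backdoor"         # any modification changes the hash
```

Because any single-bit change produces a completely different SHA-256 digest, a tampered image cannot pass the gate.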

Source

Scot Schultz <![CDATA[Advancing Performance with NVIDIA SHARP In-Network Computing]]> http://www.open-lab.net/blog/?p=90863 2024-10-31T18:36:43Z 2024-10-25T20:39:38Z

AI and scientific computing applications are great examples of distributed computing problems. The problems are too large and the computations too intensive to run on a single machine. These computations are broken down into parallel tasks that are distributed across thousands of compute engines, such as CPUs and GPUs. To achieve scalable performance, the system relies on dividing workloads…
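The idea behind in-network computing such as SHARP is that reductions form a tree: instead of every endpoint shipping its full result to one host, switches combine partial results on the way up, so each hop carries one aggregated value. A pure-Python sketch of that tree reduction (the fanout and data are illustrative):

```python
# Tree reduction, as a hierarchy of switches would perform it:
# each level combines `fanout` inputs into one partial result.

def tree_reduce(values, fanout=2):
    level = list(values)
    while len(level) > 1:
        level = [sum(level[i:i + fanout])
                 for i in range(0, len(level), fanout)]
    return level[0]

partials = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
total = tree_reduce(partials)
```

With 8 inputs and fanout 2 the reduction finishes in 3 levels instead of 7 sequential additions at a single host, and no single link ever carries more than one combined value, which is the bandwidth saving in-network aggregation provides.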

Source

Max Bazalii <![CDATA[Building AI Agents to Automate Software Test Case Creation]]> http://www.open-lab.net/blog/?p=90387 2025-05-29T21:54:37Z 2024-10-24T16:00:00Z

In software development, testing is crucial for ensuring the quality and reliability of the final product. However, creating test plans and specifications can be time-consuming and labor-intensive, especially when managing multiple requirements and diverse test types in complex systems. Many of these tasks are traditionally performed manually by test engineers. This post is part of the…

Source

Ivan Goldwasser <![CDATA[NVIDIA Grace CPU Delivers World-Class Data Center Performance and Breakthrough Energy Efficiency]]> http://www.open-lab.net/blog/?p=90087 2025-07-23T00:00:11Z 2024-10-09T19:00:00Z

NVIDIA designed the NVIDIA Grace CPU to be a new kind of high-performance data center CPU, one built to deliver breakthrough energy efficiency and optimized for performance at data center scale. Accelerated computing is enabling giant leaps in performance and energy efficiency compared to traditional CPU computing. To deliver these speedups, full-stack innovation at data center scale is…

Source

Jialin Song <![CDATA[Using Generative AI Models in Circuit Design]]> http://www.open-lab.net/blog/?p=88462 2024-09-19T19:34:20Z 2024-09-06T16:30:00Z

Generative models have been making big waves in the past few years, from intelligent text-generating large language models (LLMs) to creative image and video-generation models. At NVIDIA, we are exploring using generative AI models to speed up the circuit design process and deliver better designs to meet the ever-increasing demands for computational power. Circuit design is a challenging…

Source

Ashraf Eassa <![CDATA[NVIDIA Blackwell Platform Sets New LLM Inference Records in MLPerf Inference v4.1]]> http://www.open-lab.net/blog/?p=87957 2024-09-05T17:57:17Z 2024-08-28T15:00:00Z

Large language model (LLM) inference is a full-stack challenge. Powerful GPUs, high-bandwidth GPU-to-GPU interconnects, efficient acceleration libraries, and a highly optimized inference engine are required for high-throughput, low-latency inference. MLPerf Inference v4.1 is the latest version of the popular and widely recognized MLPerf Inference benchmarks, developed by the MLCommons…

Source

Scot Schultz <![CDATA[Optimize Large-Scale AI Workloads with NVIDIA Spectrum-X]]> http://www.open-lab.net/blog/?p=87888 2024-10-11T20:02:14Z 2024-08-27T16:00:00Z

In today's rapidly evolving technological landscape, staying ahead of the curve is not just a goal; it's a necessity. The surge of innovations, particularly in AI, is driving dramatic changes across the technology stack. One area witnessing profound transformation is Ethernet networking, a cornerstone of digital communication that has been foundational to enterprise and data center…

Source

Brian Slechta <![CDATA[NVIDIA NVLink and NVIDIA NVSwitch Supercharge Large Language Model Inference]]> http://www.open-lab.net/blog/?p=87063 2024-08-22T18:25:32Z 2024-08-12T14:00:00Z

Large language models (LLMs) are getting larger, increasing the amount of compute required to process inference requests. To meet real-time latency requirements for serving today's LLMs, and to do so for as many users as possible, multi-GPU compute is a must. Low latency improves the user experience. High throughput reduces the cost of service. Both are simultaneously important. Even if a large…

Source

Vijay Thakkar <![CDATA[Next Generation of FlashAttention]]> http://www.open-lab.net/blog/?p=85219 2024-07-25T18:19:05Z 2024-07-11T17:46:06Z

NVIDIA is excited to collaborate with Colfax, Together.ai, Meta, and Princeton University on their recent achievement to exploit the Hopper GPU architecture and Tensor Cores and accelerate key Fused Attention kernels using CUTLASS 3. FlashAttention-3 incorporates key techniques to achieve 1.5–2.0x faster performance than FlashAttention-2 with FP16, up to 740 TFLOPS. With FP8…
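A core trick the FlashAttention family builds on is the "online" softmax: the running max and normalizer are updated tile by tile, so attention scores never need to be materialized all at once. A scalar sketch of that streaming pass (FlashAttention-3 itself operates on tiles with Hopper-specific pipelining, which this does not attempt to show):

```python
# Online softmax: one streaming pass computes the max and the
# normalizer together, rescaling the partial sum when the max grows.
import math

def online_softmax_stats(scores):
    m = float("-inf")   # running max, for numerical stability
    d = 0.0             # running sum of exp(score - m)
    for s in scores:
        m_new = max(m, s)
        d = d * math.exp(m - m_new) + math.exp(s - m_new)
        m = m_new
    return m, d

scores = [0.5, 2.0, -1.0, 3.0]
m, d = online_softmax_stats(scores)
probs = [math.exp(s - m) / d for s in scores]
```

The single-pass result matches the usual two-pass softmax exactly, which is what lets the kernel fuse softmax into the attention matmuls without extra memory traffic.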

Source

Sophia Schuur <![CDATA[Exploring SONiC on NVIDIA Air]]> http://www.open-lab.net/blog/?p=84372 2024-09-05T19:00:43Z 2024-06-24T16:00:00Z

Testing out networking infrastructure and building working PoCs for a new environment can be tricky at best and downright dreadful at worst. You may run into licensing requirements you don't meet, or pay pricey fees for advanced hypervisor software. Proprietary network systems can cost hundreds or thousands of dollars just to set up a test environment to play with. You may even be stuck testing on…

Source

Moon Chung <![CDATA[Video: Talk to Your Supply Chain Data Using NVIDIA NIM]]> http://www.open-lab.net/blog/?p=84090 2024-10-28T21:57:31Z 2024-06-17T19:13:29Z

NVIDIA operates one of the largest and most complex supply chains in the world. The supercomputers we build connect tens of thousands of NVIDIA GPUs with hundreds of miles of high-speed optical cables. We rely on hundreds of partners to deliver thousands of different components to a dozen factories to build nearly three thousand products. A single disruption to our supply chain can impact our…

Source

Babak Hejazi <![CDATA[Introducing Grouped GEMM APIs in cuBLAS and More Performance Updates]]> http://www.open-lab.net/blog/?p=83888 2024-07-16T17:19:07Z 2024-06-12T20:30:00Z

The latest release of the NVIDIA cuBLAS library, version 12.5, continues to deliver functionality and performance to deep learning (DL) and high-performance computing (HPC) workloads. This post provides an overview of the following updates on cuBLAS matrix multiplications (matmuls) since version 12.0, and a walkthrough: Grouped GEMM APIs can be viewed as a generalization of the batched…
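The generalization is easy to state: batched GEMM requires every problem in the batch to share one shape, while a grouped GEMM lets each problem bring its own shapes. A plain-Python reference of the semantics (cuBLAS executes the group concurrently on the GPU; this loop only defines the result):

```python
# Reference semantics of a grouped GEMM: a list of independent
# matmuls, each with its own M, N, K.

def matmul(a, b):
    rows, inner, cols = len(a), len(b), len(b[0])
    assert len(a[0]) == inner, "inner dimensions must agree"
    return [[sum(a[i][k] * b[k][j] for k in range(inner))
             for j in range(cols)] for i in range(rows)]

def grouped_gemm(groups):
    """groups: list of (A, B) pairs; shapes may differ per group."""
    return [matmul(a, b) for a, b in groups]

results = grouped_gemm([
    ([[1, 2]], [[3], [4]]),                 # 1x2 @ 2x1 -> 1x1
    ([[1, 0], [0, 1]], [[5, 6], [7, 8]]),   # 2x2 @ 2x2 -> 2x2
])
```

A batched GEMM is then just the special case where every `(A, B)` pair in the group shares the same dimensions.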

Source

Scott Ciccone <![CDATA[Spotlight: Cisco Enhances Workload Security and Operational Efficiency with NVIDIA BlueField-3 DPUs]]> http://www.open-lab.net/blog/?p=83704 2024-07-16T17:23:43Z 2024-06-10T20:24:54Z

As cyberattacks become more sophisticated, organizations must constantly adapt with cutting-edge solutions to protect their critical assets. One such solution is Cisco Secure Workload, a comprehensive security solution designed to safeguard application workloads across diverse infrastructures, locations, and form factors. Cisco recently announced version 3.9 of the Cisco Secure Workload…

Source

Shashank Verma <![CDATA[Seamlessly Deploying a Swarm of LoRA Adapters with NVIDIA NIM]]> http://www.open-lab.net/blog/?p=83606 2024-06-13T19:06:00Z 2024-06-07T16:00:00Z

The latest state-of-the-art foundation large language models (LLMs) have billions of parameters and are pretrained on trillions of tokens of input text. They often achieve striking results on a wide variety of use cases without any need for customization. Despite this, studies have shown that the best accuracy on downstream tasks can be achieved by adapting LLMs with high-quality…
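The adapter technique behind the title, LoRA, keeps the base weight W frozen and adds a small low-rank update, y = Wx + B(Ax), so many task-specific adapters can share one base model. A tiny numeric sketch (the 2x2 matrices and rank-1 adapter are illustrative):

```python
# LoRA forward pass: frozen base weight plus a low-rank update B @ A.

def matvec(mat, vec):
    return [sum(m * v for m, v in zip(row, vec)) for row in mat]

def lora_forward(W, A, B, x):
    """y = W x + B (A x); A is r x d_in, B is d_out x r, rank r small."""
    base = matvec(W, x)                  # shared, frozen computation
    update = matvec(B, matvec(A, x))     # cheap per-adapter computation
    return [b + u for b, u in zip(base, update)]

W = [[1.0, 0.0], [0.0, 1.0]]   # frozen 2x2 base weight
A = [[1.0, 1.0]]               # rank-1 adapter, 1x2
B = [[0.5], [0.0]]             # 2x1
y = lora_forward(W, A, B, [2.0, 3.0])
```

Because only A and B differ between tasks, swapping adapters at serving time means loading a few small matrices rather than a whole model, which is what makes deploying a swarm of them practical.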

Source

Sheilah Kirui <![CDATA[RAPIDS on Databricks: A Guide to GPU-Accelerated Data Processing]]> http://www.open-lab.net/blog/?p=82441 2024-05-30T19:55:56Z 2024-05-14T20:30:00Z

In today's data-driven landscape, maximizing performance and efficiency in data processing and analytics is critical. While many Databricks users are familiar with using GPU clusters for machine learning training, there's a vast opportunity to leverage GPU acceleration for data processing and analytics tasks as well. The Databricks Data Intelligence Platform empowers users to manage both small…

Source

]]>
0
William Hill <![CDATA[NVIDIA TensorRT 10.0 Upgrades Usability, Performance, and AI Model Support]]> http://www.open-lab.net/blog/?p=82402 2024-05-30T19:55:57Z 2024-05-14T15:00:00Z NVIDIA today announced the latest release of NVIDIA TensorRT, an ecosystem of APIs for high-performance deep learning inference. TensorRT includes inference...]]> NVIDIA today announced the latest release of NVIDIA TensorRT, an ecosystem of APIs for high-performance deep learning inference. TensorRT includes inference...

NVIDIA today announced the latest release of NVIDIA TensorRT, an ecosystem of APIs for high-performance deep learning inference. TensorRT includes inference runtimes and model optimizations that deliver low latency and high throughput for production applications. This post outlines the key features and upgrades of this release, including easier installation, increased usability…

Source

]]>
0
Chintan Patel <![CDATA[Leverage Mixture of Experts-Based DBRX for Superior LLM Performance on Diverse Tasks]]> http://www.open-lab.net/blog/?p=81586 2024-06-07T21:13:14Z 2024-04-30T17:21:35Z This week's model release features DBRX, a state-of-the-art large language model (LLM) developed by Databricks. With demonstrated strength in programming and...]]> This week's model release features DBRX, a state-of-the-art large language model (LLM) developed by Databricks. With demonstrated strength in programming and...

This week's model release features DBRX, a state-of-the-art large language model (LLM) developed by Databricks. With demonstrated strength in programming and coding tasks, DBRX is adept at handling specialized topics and writing specific algorithms in languages like Python. It can also be used for text completion tasks and few-turn interactions. DBRX's long-context abilities can be used in RAG…

Source

]]>
0
Hainan Xu <![CDATA[Turbocharge ASR Accuracy and Speed with NVIDIA NeMo Parakeet-TDT]]> http://www.open-lab.net/blog/?p=80732 2024-08-12T16:06:21Z 2024-04-18T20:03:54Z NVIDIA NeMo, an end-to-end platform for developing multimodal generative AI models at scale anywhere, on any cloud and on-premises, recently released...]]> NVIDIA NeMo, an end-to-end platform for developing multimodal generative AI models at scale anywhere, on any cloud and on-premises, recently released...

NVIDIA NeMo, an end-to-end platform for developing multimodal generative AI models at scale anywhere, on any cloud and on-premises, recently released Parakeet-TDT. This new addition to the NeMo ASR Parakeet model family boasts better accuracy and 64% greater speed over the previously best model, Parakeet-RNNT-1.1B. This post explains Parakeet-TDT and how to use it to generate highly accurate…

Source

]]>
0
Amanda Saunders <![CDATA[Develop Custom Enterprise Generative AI with NVIDIA NeMo]]> http://www.open-lab.net/blog/?p=80360 2025-02-17T05:27:49Z 2024-03-27T20:00:00Z Generative AI is transforming computing, paving new avenues for humans to interact with computers in natural, intuitive ways. For enterprises, the prospect of...]]> Generative AI is transforming computing, paving new avenues for humans to interact with computers in natural, intuitive ways. For enterprises, the prospect of...

Generative AI is transforming computing, paving new avenues for humans to interact with computers in natural, intuitive ways. For enterprises, the prospect of generative AI is vast. Businesses can tap into their rich datasets to streamline time-consuming tasks, from text summarization and translation to insight prediction and content generation. But they must also navigate adoption challenges.

Source

]]>
0
Ashraf Eassa <![CDATA[NVIDIA H200 Tensor Core GPUs and NVIDIA TensorRT-LLM Set MLPerf LLM Inference Records]]> http://www.open-lab.net/blog/?p=80197 2024-11-14T15:53:12Z 2024-03-27T15:29:05Z Generative AI is unlocking new computing applications that greatly augment human capability, enabled by continued model innovation. Generative AI...]]> Generative AI is unlocking new computing applications that greatly augment human capability, enabled by continued model innovation. Generative AI...An image of an NVIDIA H200 Tensor Core GPU.

Generative AI is unlocking new computing applications that greatly augment human capability, enabled by continued model innovation. Generative AI models, including large language models (LLMs), are used for crafting marketing copy, writing computer code, rendering detailed images, composing music, generating videos, and more. The amount of compute required by the latest models is immense and…

Source

]]>
0
Joe DeLaere <![CDATA[New Architecture: NVIDIA Blackwell]]> http://www.open-lab.net/blog/?p=80556 2024-04-09T23:45:09Z 2024-03-25T17:17:20Z Learn how the NVIDIA Blackwell GPU architecture is revolutionizing AI and accelerated computing.]]> Learn how the NVIDIA Blackwell GPU architecture is revolutionizing AI and accelerated computing.Image of the NVIDIA Blackwell GPU.

Learn how the NVIDIA Blackwell GPU architecture is revolutionizing AI and accelerated computing.

Source

]]>
0
Krishna Vasudevan <![CDATA[Simplifying Cumulus Linux Migrations]]> http://www.open-lab.net/blog/?p=78853 2024-04-09T23:45:33Z 2024-03-07T17:55:29Z Migrating between major versions of software can present several challenges to the infrastructure management teams: Data format changes Feature deprecations...]]> Migrating between major versions of software can present several challenges to the infrastructure management teams: Data format changes Feature deprecations...Decorative image of a web of green light on a dark background.

Migrating between major versions of software can present several challenges to infrastructure management teams, including data format changes and feature deprecations. These challenges can prevent users from adopting the newer versions, so they miss out on newer, more powerful features. Effective planning and thorough testing are essential to overcoming these challenges and ensuring a smooth transition between Cumulus Linux 3.7.x and 4.x…

Source

]]>
0
Brian Sparks <![CDATA[Benchmarking NVIDIA Spectrum-X for AI Network Performance, Now Available from Supermicro]]> http://www.open-lab.net/blog/?p=77990 2024-10-11T20:02:16Z 2024-02-22T17:54:05Z NVIDIA Spectrum-X is swiftly gaining traction as the leading networking platform tailored for AI in hyperscale cloud infrastructures. Spectrum-X networking...]]> NVIDIA Spectrum-X is swiftly gaining traction as the leading networking platform tailored for AI in hyperscale cloud infrastructures. Spectrum-X networking...

NVIDIA Spectrum-X is swiftly gaining traction as the leading networking platform tailored for AI in hyperscale cloud infrastructures. Spectrum-X networking technologies help enterprise customers accelerate generative AI workloads. NVIDIA announced significant OEM adoption of the platform in a November 2023 press release, along with an update on the NVIDIA Israel-1 Supercomputer powered by Spectrum…

Source

]]>
2
Tanya Lenz <![CDATA[Featured Developer Sessions at NVIDIA GTC 2024]]> http://www.open-lab.net/blog/?p=77823 2024-02-22T20:00:34Z 2024-02-15T21:00:00Z Advances in AI are rapidly transforming every industry. Join us in person or virtually to learn about the latest technologies, from retrieval-augmented...]]> Advances in AI are rapidly transforming every industry. Join us in person or virtually to learn about the latest technologies, from retrieval-augmented...

Advances in AI are rapidly transforming every industry. Join us in person or virtually to learn about the latest technologies, from retrieval-augmented generation to OpenUSD.

Source

]]>
0
Chintan Patel <![CDATA[Generate Code, Answer Queries, and Translate Text with New NVIDIA AI Foundation Models]]> http://www.open-lab.net/blog/?p=77364 2025-08-07T22:02:47Z 2024-02-05T18:48:17Z This week's Model Monday release features the NVIDIA-optimized Code Llama, Kosmos-2, and SeamlessM4T, which you can experience directly from your browser....]]> This week's Model Monday release features the NVIDIA-optimized Code Llama, Kosmos-2, and SeamlessM4T, which you can experience directly from your browser....

This week's Model Monday release features the NVIDIA-optimized Code Llama, Kosmos-2, and SeamlessM4T, which you can experience directly from your browser. With NVIDIA AI Foundation Models and Endpoints, you can access a curated set of community and NVIDIA-built generative AI models to experience, customize, and deploy in enterprise applications. Meta's Code Llama 70B is the latest…

Source

]]>
0
Chintan Shah <![CDATA[Announcing NVIDIA Metropolis Microservices for Jetson for Rapid Edge AI Development]]> http://www.open-lab.net/blog/?p=76670 2024-06-17T16:38:04Z 2024-01-25T18:30:00Z NVIDIA Metropolis Microservices for Jetson has been renamed to Jetson Platform Services, and is now part of NVIDIA JetPack SDK 6.0. Building vision AI...]]> NVIDIA Metropolis Microservices for Jetson has been renamed to Jetson Platform Services, and is now part of NVIDIA JetPack SDK 6.0. Building vision AI...

NVIDIA Metropolis Microservices for Jetson has been renamed to Jetson Platform Services, and is now part of NVIDIA JetPack SDK 6.0. Building vision AI applications for the edge often comes with notoriously long and costly development cycles. At the same time, quickly developing edge AI applications that are cloud-native, flexible, and secure has never been more important. Now…

Source

]]>
1
Taylor Allison <![CDATA[Simplifying Network Operations for AI with NVIDIA Quantum InfiniBand]]> http://www.open-lab.net/blog/?p=76977 2024-02-08T18:51:59Z 2024-01-23T18:00:00Z A common technological misconception is that performance and complexity are directly linked. That is, the highest-performance implementation is also the most...]]> A common technological misconception is that performance and complexity are directly linked. That is, the highest-performance implementation is also the most...Photo of a person standing at a computer terminal in a data center.

A common technological misconception is that performance and complexity are directly linked. That is, the highest-performance implementation is also the most challenging to implement and manage. When considering data center networking, however, this is not the case. InfiniBand is a protocol that sounds daunting and exotic in comparison to Ethernet, but because it is built from the ground up…

Source

]]>
0
Krishna Vasudevan <![CDATA[Automating Data Center Networks with NVIDIA NVUE and Ansible]]> http://www.open-lab.net/blog/?p=75093 2023-12-14T19:27:28Z 2023-12-11T18:30:00Z Data center automation dates to the early days of the mainframe, with operational efficiency topping the list of its benefits. Over the years, technologies have...]]> Data center automation dates to the early days of the mainframe, with operational efficiency topping the list of its benefits. Over the years, technologies have...

Data center automation dates to the early days of the mainframe, with operational efficiency topping the list of its benefits. Over the years, technologies have changed both inside and outside the data center. As a result, tools and approaches have evolved as well. The NVIDIA NVUE Collection and Ansible aim to simplify your network automation journey by providing a comprehensive list of…
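As a hedged sketch of what declarative switch automation looks like in practice, a minimal playbook might push interface state through the NVUE API. The module name (`nvidia.nvue.config`), host name, and data layout below are assumptions for illustration, not taken from this post; consult the collection's own documentation for the real schema.

```yaml
# Hypothetical playbook: declare desired interface state on a leaf switch
# via the NVUE Ansible collection. Names and structure are illustrative.
- name: Configure leaf switch interface with NVUE
  hosts: leaf01
  gather_facts: false
  tasks:
    - name: Set interface description and bring the link up
      nvidia.nvue.config:
        data:
          - set:
              interface:
                swp1:
                  description: "uplink to spine01"
                  link:
                    state:
                      up: {}
```

The point of the pattern is that the playbook is version-controlled intent: rerunning it converges the switch to the declared state rather than replaying imperative CLI commands.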

Source

]]>
0
Brian Sparks <![CDATA[Networking for Data Centers and the Era of AI]]> http://www.open-lab.net/blog/?p=71474 2023-11-02T18:14:42Z 2023-10-12T16:30:00Z Traditional cloud data centers have served as the bedrock of computing infrastructure for over a decade, catering to a diverse range of users and applications....]]> Traditional cloud data centers have served as the bedrock of computing infrastructure for over a decade, catering to a diverse range of users and applications....

Traditional cloud data centers have served as the bedrock of computing infrastructure for over a decade, catering to a diverse range of users and applications. However, data centers have evolved in recent years to keep up with advancements in technology and the surging demand for AI-driven computing. This post explores the pivotal role that networking plays in shaping the future of data centers…

Source

]]>
0
Berkin Kartal <![CDATA[Comparing Solutions for Boosting Data Center Redundancy]]> http://www.open-lab.net/blog/?p=70873 2023-10-19T19:05:58Z 2023-09-29T19:46:58Z In today's data center, there are many ways to achieve system redundancy from a server connected to a fabric. Customers usually seek redundancy to increase...]]> In today's data center, there are many ways to achieve system redundancy from a server connected to a fabric. Customers usually seek redundancy to increase...Picture of an aisle in a data center, with servers on either side.

In today's data center, there are many ways to achieve system redundancy from a server connected to a fabric. Customers usually seek redundancy to increase service availability (such as achieving end-to-end AI workloads) and find system efficiency using different multihoming techniques. In this post, we discuss the pros and cons of the well-known proprietary multi-chassis link aggregation…

Source

]]>
0
Joanne Chang <![CDATA[Webinar: Boost Model Performance with NVIDIA TAO Toolkit on STM32 MCUs]]> http://www.open-lab.net/blog/?p=69521 2023-08-24T18:03:39Z 2023-08-16T16:37:51Z On Aug. 29, learn how to create efficient AI models with NVIDIA TAO Toolkit on STM32 MCUs.]]> On Aug. 29, learn how to create efficient AI models with NVIDIA TAO Toolkit on STM32 MCUs.Promo card for the webinar.

On Aug. 29, learn how to create efficient AI models with NVIDIA TAO Toolkit on STM32 MCUs.

Source

]]>
0
Jess Nguyen <![CDATA[ICYMI: Unlocking the Power of GPU-Accelerated DataFrames in Python]]> http://www.open-lab.net/blog/?p=68916 2023-08-24T18:03:51Z 2023-08-04T16:00:00Z Read this tutorial on how to tap into GPUs by importing cuDF instead of pandas, with only a few code changes.]]> Read this tutorial on how to tap into GPUs by importing cuDF instead of pandas, with only a few code changes.An illustration with 3 different colored squares labeled GPUs in a row.

Read this tutorial on how to tap into GPUs by importing cuDF instead of pandas, with only a few code changes.

Source

]]>
0
Joel Lashmore <![CDATA[GPUs for ETL? Run Faster, Less Costly Workloads with NVIDIA RAPIDS Accelerator for Apache Spark and Databricks]]> http://www.open-lab.net/blog/?p=67503 2023-11-10T01:27:07Z 2023-07-17T18:08:30Z We were stuck. Really stuck. With a hard delivery deadline looming, our team needed to figure out how to process a complex extract-transform-load (ETL) job on...]]> We were stuck. Really stuck. With a hard delivery deadline looming, our team needed to figure out how to process a complex extract-transform-load (ETL) job on...Stylized image of a computer chip.

We were stuck. Really stuck. With a hard delivery deadline looming, our team needed to figure out how to process a complex extract-transform-load (ETL) job on trillions of point-of-sale transaction records in a few hours. The results of this job would feed a series of downstream machine learning (ML) models that would make critical retail assortment allocation decisions for a global retailer.

Source

]]>
0
Rob Armstrong <![CDATA[CUDA Toolkit 12.2 Unleashes Powerful Features for Boosting Applications]]> http://www.open-lab.net/blog/?p=67705 2024-08-28T17:39:00Z 2023-07-06T19:16:56Z The latest release of CUDA Toolkit 12.2 introduces a range of essential new features, modifications to the programming model, and enhanced support for hardware...]]> The latest release of CUDA Toolkit 12.2 introduces a range of essential new features, modifications to the programming model, and enhanced support for hardware...CUDA abstract image.

The latest release of CUDA Toolkit 12.2 introduces a range of essential new features, modifications to the programming model, and enhanced support for hardware capabilities accelerating CUDA applications. Now out through general availability from NVIDIA, CUDA Toolkit 12.2 includes many new capabilities, both major and minor. The following post offers an overview of many of the key…

Source

]]>
0
Jess Nguyen <![CDATA[ICYMI: Exploring Challenges Posed by Biased Datasets Using RAPIDS cuDF]]> http://www.open-lab.net/blog/?p=67283 2023-07-13T19:00:23Z 2023-06-28T19:25:19Z Read about an innovative GPU solution that solves limitations using small biased datasets with RAPIDS cuDF.]]> Read about an innovative GPU solution that solves limitations using small biased datasets with RAPIDS cuDF.Several graph illustrations representing data science.

Read about an innovative GPU solution that solves limitations using small biased datasets with RAPIDS cuDF.

Source

]]>
0
Steve Lee <![CDATA[Decentralizing AI with a Liquid-Cooled Development Platform by Supermicro and NVIDIA]]> http://www.open-lab.net/blog/?p=65800 2023-06-09T20:19:29Z 2023-05-31T16:00:00Z AI is the topic of conversation around the world in 2023. It is rapidly being adopted by all industries including media, entertainment, and broadcasting. To be...]]> AI is the topic of conversation around the world in 2023. It is rapidly being adopted by all industries including media, entertainment, and broadcasting. To be...Photo of hardware system from Supermicro.

AI is the topic of conversation around the world in 2023. It is rapidly being adopted by all industries including media, entertainment, and broadcasting. To be successful in 2023 and beyond, companies and agencies must embrace and deploy AI more rapidly than ever before. The capabilities of new AI programs like video analytics, ChatGPT, recommenders, speech recognition, and customer service are…

Source

]]>
0
Anthony Agnesina <![CDATA[AutoDMP Optimizes Macro Placement for Chip Design with AI and GPUs]]> http://www.open-lab.net/blog/?p=62681 2024-02-27T00:58:34Z 2023-03-27T13:00:00Z Most modern digital chips integrate large numbers of macros in the form of memory blocks or analog blocks, like clock generators. These macros are often much...]]> Most modern digital chips integrate large numbers of macros in the form of memory blocks or analog blocks, like clock generators. These macros are often much...AutoDMP

Most modern digital chips integrate large numbers of macros in the form of memory blocks or analog blocks, like clock generators. These macros are often much larger than standard cells, which are the fundamental building blocks of digital designs. Macro placement has a tremendous impact on the landscape of the chip, directly affecting many design metrics, such as area and power consumption.

Source

]]>
0
Kevin Deierling <![CDATA[Explainer: What Is a SmartNIC?]]> http://www.open-lab.net/blog/?p=54463 2024-06-05T21:51:08Z 2022-11-02T19:00:00Z A SmartNIC is a programmable accelerator that makes data center networking, security and storage efficient and flexible.]]> A SmartNIC is a programmable accelerator that makes data center networking, security and storage efficient and flexible.

A SmartNIC is a programmable accelerator that makes data center networking, security and storage efficient and flexible.

Source

]]>
0
Yam Gellis <![CDATA[Calculating and Synchronizing Time with the Precision Timing Protocol on the NVIDIA Spectrum Switch]]> http://www.open-lab.net/blog/?p=54221 2023-06-12T09:01:16Z 2022-09-09T15:46:47Z PTP uses an algorithm and method for synchronizing clocks on various devices across packet-based networks to provide submicrosecond accuracy. NVIDIA Spectrum...]]> PTP uses an algorithm and method for synchronizing clocks on various devices across packet-based networks to provide submicrosecond accuracy. NVIDIA Spectrum...

PTP uses an algorithm and method for synchronizing clocks on various devices across packet-based networks to provide submicrosecond accuracy. NVIDIA Spectrum supports PTP in both one-step and two-step modes and can serve either as a boundary or a transparent clock. Here's how the switch calculates and synchronizes time in one-step mode when acting as a transparent clock. Later in this post…
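The core of the one-step transparent-clock behavior is residence-time accounting: the switch timestamps a PTP event message on ingress and adds the measured residence time to the header's correctionField as the packet egresses, with no follow-up message needed. A toy Python sketch of that arithmetic (names simplified; real PTP encodes correctionField as nanoseconds scaled by 2^16):

```python
def update_correction_field(correction_ns: int, ingress_ns: int, egress_ns: int) -> int:
    """Add the switch residence time to a PTP message's correctionField.

    A one-step transparent clock timestamps the packet on ingress, then
    writes the updated correction into the header on the fly as the packet
    leaves, so downstream clocks can subtract switch delay from the path.
    """
    residence_ns = egress_ns - ingress_ns
    return correction_ns + residence_ns

# Example: a packet that spent 1500 ns inside the switch
updated = update_correction_field(correction_ns=250,
                                  ingress_ns=1_000_000,
                                  egress_ns=1_001_500)
print(updated)  # 1750
```

A two-step clock performs the same accounting but carries the correction in a separate follow-up message instead of rewriting the original packet in flight.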

Source

]]>
0
Dave Salvator <![CDATA[Extending NVIDIA Performance Leadership with MLPerf Inference 1.0 Results]]> http://www.open-lab.net/blog/?p=30931 2023-09-19T16:28:44Z 2021-04-22T17:22:00Z Inference is where we interact with AI. Chat bots, digital assistants, recommendation engines, fraud protection services, and other applications that you use...]]> Inference is where we interact with AI. Chat bots, digital assistants, recommendation engines, fraud protection services, and other applications that you use...

Inference is where we interact with AI. Chat bots, digital assistants, recommendation engines, fraud protection services, and other applications that you use every day are all powered by AI. Those deployed applications use inference to get you the information that you need. Given the wide array of usages for AI inference, evaluating performance poses numerous challenges for developers and…

Source

]]>
3
Nefi Alarcon <![CDATA[NVIDIA Announces More NGC-Ready Systems]]> https://news.www.open-lab.net/?p=12113 2023-09-18T17:21:33Z 2018-11-21T03:37:35Z NVIDIA is committed to making it easier for developers to deploy software from our NGC container registry. As part of that commitment, last week we announced...]]> NVIDIA is committed to making it easier for developers to deploy software from our NGC container registry. As part of that commitment, last week we announced...

NVIDIA is committed to making it easier for developers to deploy software from our NGC container registry. As part of that commitment, last week we announced our NGC-Ready program, which expands the places users of powerful systems with NVIDIA GPUs can deploy GPU-accelerated software with confidence. Today, we're announcing several new NGC-Ready systems from even more of the world's leading…

Source

]]>
0
Mark Harris <![CDATA[Accelerating Hyperscale Data Center Applications with NVIDIA M40 and M4 GPUs]]> http://www.open-lab.net/blog/parallelforall/?p=6092 2023-09-18T17:40:03Z 2015-11-10T14:02:16Z The internet has changed how people consume media. Rather than just watching television and movies, the combination of ubiquitous mobile devices, massive...]]> The internet has changed how people consume media. Rather than just watching television and movies, the combination of ubiquitous mobile devices, massive...

The internet has changed how people consume media. Rather than just watching television and movies, the combination of ubiquitous mobile devices, massive computation, and available Internet bandwidth has led to an explosion in user-created content: users are re-creating the Internet, producing exabytes of content every day. Periscope, a mobile application that lets users broadcast video…

Source

]]>
1
Brad Nemire <![CDATA[GPUs Dominate ISC��15 Student Cluster Contest]]> http://news.www.open-lab.net/?p=6095 2023-09-18T17:43:55Z 2015-07-17T21:58:48Z Using NVIDIA Tesla K80s, China's Tsinghua University team and JMI University in India both took top honors at the popular student contest. At the International...]]> Using NVIDIA Tesla K80s, China's Tsinghua University team and JMI University in India both took top honors at the popular student contest. At the International...

Using NVIDIA Tesla K80s, China's Tsinghua University team and JMI University in India both took top honors at the popular student contest. At the International Supercomputing Conference (ISC) in Frankfurt, Germany, China's Tsinghua University team collected their fifth student challenge gold cup (and second ISC win). The popular student contest brings together university teams from…

Source

]]>
0