Magnum IO GPUDirect Storage
A Direct Path Between Storage and GPU Memory
As datasets increase in size, the time spent loading data can impact application performance. GPUDirect® Storage creates a direct data path between local or remote storage, such as NVMe or NVMe over Fabrics (NVMe-oF), and GPU memory. By enabling a direct-memory access (DMA) engine near the network adapter or storage, it moves data into or out of GPU memory without burdening the CPU.
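For a concrete picture of that path, the sketch below reads a file straight into GPU memory with the synchronous cuFile API from libcufile. It is a minimal illustration, not a complete program: error checking is omitted, and the file name input.bin, the 1 MiB transfer size, and the build line are placeholder assumptions.

    /* Minimal sketch: read a file directly into GPU memory with cuFile.
       Assumes a GDS-capable setup; error handling omitted for brevity.
       Build (paths may vary): gcc gds_read.c -lcufile -lcudart */
    #define _GNU_SOURCE            /* for O_DIRECT */
    #include <fcntl.h>
    #include <unistd.h>
    #include <string.h>
    #include <stdio.h>
    #include <cuda_runtime.h>
    #include <cufile.h>

    int main(void) {
        const size_t size = 1 << 20;              /* 1 MiB, placeholder */
        void *devPtr = NULL;

        cuFileDriverOpen();                       /* initialize the GDS driver */

        /* O_DIRECT lets GDS DMA between storage and GPU memory without
           staging the data in a CPU bounce buffer. */
        int fd = open("input.bin", O_RDONLY | O_DIRECT);

        CUfileDescr_t descr;
        memset(&descr, 0, sizeof(descr));
        descr.type = CU_FILE_HANDLE_TYPE_OPAQUE_FD;
        descr.handle.fd = fd;
        CUfileHandle_t fh;
        cuFileHandleRegister(&fh, &descr);

        cudaMalloc(&devPtr, size);
        cuFileBufRegister(devPtr, size, 0);       /* optional: pin for repeated IO */

        /* DMA from storage into GPU memory */
        ssize_t n = cuFileRead(fh, devPtr, size, 0 /* file offset */, 0 /* buffer offset */);
        printf("read %zd bytes into GPU memory\n", n);

        cuFileBufDeregister(devPtr);
        cuFileHandleDeregister(fh);
        cudaFree(devPtr);
        close(fd);
        cuFileDriverClose();
        return 0;
    }

The cuFileBufRegister call is optional; it pre-pins the GPU buffer so that repeated transfers to the same allocation skip per-call registration.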
Key Features of v1.7
The following features have been added in v1.7:
- Support for the cuFileStreamRegister, cuFileStreamDeregister, cuFileReadAsync, and cuFileWriteAsync APIs is complete, enabling the use of CUDA streams with cuFile APIs (see the sketch after this list).
- cuFile APIs can be used with system memory.
- cuFile APIs can now be used with non-O_DIRECT file descriptors.
- Thread pool support is enabled by default and is required for the cuFile APIs that support CUDA streams.
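The sketch below shows how those pieces fit together: a CUDA stream is registered once with cuFileStreamRegister, an asynchronous read is enqueued with cuFileReadAsync, and the result becomes valid after the stream synchronizes. It assumes fh and devPtr were set up as in the earlier example and omits error checking; the transfer size is a placeholder.

    /* Sketch of the v1.7 stream APIs; fh and devPtr as in the earlier
       example, error checking omitted. */
    #include <cuda_runtime.h>
    #include <cufile.h>

    void read_on_stream(CUfileHandle_t fh, void *devPtr) {
        cudaStream_t stream;
        cudaStreamCreate(&stream);

        /* One-time registration lets cuFile set up per-stream resources,
           serviced by the thread pool that v1.7 enables by default. */
        cuFileStreamRegister(stream, 0);

        /* The async variants take pointers so the values can be resolved
           when the operation actually executes in stream order. */
        size_t size = 1 << 20;                    /* placeholder size */
        off_t file_off = 0, buf_off = 0;
        ssize_t bytes_read = 0;

        cuFileReadAsync(fh, devPtr, &size, &file_off, &buf_off,
                        &bytes_read, stream);

        cudaStreamSynchronize(stream);            /* bytes_read valid here */

        cuFileStreamDeregister(stream);
        cudaStreamDestroy(stream);
    }

Because the read executes in stream order, it can be interleaved with kernels and other asynchronous work on the same stream without blocking the CPU.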
Software Download
GPUDirect Storage v1.7 Release
NVIDIA Magnum IO GPUDirect® Storage (GDS) is now part of CUDA.
See https://docs.nvidia.com/gpudirect-storage/index.html for more information.
Resources
- Read the blog: Accelerating IO in the modern data center: Magnum IO storage partnerships
- NVIDIA Magnum IO™ SDK
- Read the blog: Optimizing data movement in GPU applications with the NVIDIA Magnum IO developer environment
- Read the blog: Accelerating IO in the modern data center: Magnum IO Architecture
- Watch the webinar: NVIDIA GPUDirect Storage: Accelerating the data path to the GPU
- NVIDIA-Certified Systems configuration guide
- NVIDIA-Certified Systems
- Contact us at gpudirectstorageext@nvidia.com