NVIDIA DeepStream SDK

DeepStream’s multi-platform support gives you a faster, easier way to develop vision AI applications and services. You can even deploy them on-premises, on the edge, and in the cloud with the click of a button.

Get Started | Try on LaunchPad

What is NVIDIA DeepStream?

There are billions of cameras and sensors worldwide, capturing an abundance of data that can be used to generate business insights, unlock process efficiencies, and improve revenue streams. Whether it’s at a traffic intersection to reduce vehicle congestion, health and safety monitoring at hospitals, surveying retail aisles for better customer satisfaction, or at a manufacturing facility to detect component defects, every application demands reliable, real-time Intelligent Video Analytics (IVA).

NVIDIA’s DeepStream SDK is a complete streaming analytics toolkit based on GStreamer for AI-based multi-sensor processing, video, audio, and image understanding. It’s ideal for vision AI developers, software partners, startups, and OEMs building IVA apps and services. Developers can now create stream processing pipelines that incorporate neural networks and other complex processing tasks such as tracking, video encoding/decoding, and video rendering. DeepStream pipelines enable real-time analytics on video, image, and sensor data.

DeepStream is also an integral part of NVIDIA Metropolis, the platform for building end-to-end services and solutions that transform pixel and sensor data into actionable insights.

Key Benefits


Powerful and Flexible SDK

DeepStream SDK is suitable for a wide range of use cases across a broad set of industries.


Multiple Programming Options

Create powerful vision AI applications using C/C++, Python, or Graph Composer’s simple and intuitive UI.


Real-Time Insights

Understand rich and multi-modal real-time sensor data at the edge.


Managed AI Services

Deploy AI services in cloud native containers and orchestrate them using Kubernetes.


Reduced TCO

Increase stream density by training, adapting, and optimizing models with the TAO Toolkit and deploying them with DeepStream.

Unique Capabilities

Enjoy Seamless Development From Edge to Cloud

Developers can build seamless streaming pipelines for AI-based video, audio, and image analytics using DeepStream. It ships with 30+ hardware-accelerated plug-ins and extensions that optimize pre- and post-processing, inference, multi-object tracking, message brokers, and more. DeepStream also offers some of the world's best-performing real-time multi-object trackers.

DeepStream is built for both developers and enterprises and offers extensive support for popular object detection and segmentation models, including state-of-the-art SSD, YOLO, FasterRCNN, and MaskRCNN. You can also integrate custom functions and libraries.
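
For example, a common way to hook such custom logic into a pipeline is a GStreamer pad probe that walks the batch metadata DeepStream attaches after inference. The sketch below uses the pyds Python bindings and follows the pattern from the DeepStream Python sample applications; the probe name and the element it attaches to (nvdsosd) are illustrative choices, not requirements.

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds  # DeepStream Python bindings

def osd_sink_pad_buffer_probe(pad, info, u_data):
    """Illustrative pad probe: walk the batch metadata attached upstream by
    nvinfer and count detected objects per frame."""
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        num_objects = 0
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            # obj_meta.class_id, obj_meta.confidence, and obj_meta.rect_params
            # are available here for custom per-object logic.
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            num_objects += 1
            try:
                l_obj = l_obj.next
            except StopIteration:
                break
        print(f"Frame {frame_meta.frame_num}: {num_objects} objects detected")
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK

# Usage (assuming 'osd' is the nvdsosd element in your pipeline):
# osd.get_static_pad("sink").add_probe(
#     Gst.PadProbeType.BUFFER, osd_sink_pad_buffer_probe, 0)
```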

DeepStream introduces new REST APIs for different plug-ins that let you create flexible applications that can be deployed as SaaS and controlled from an intuitive interface. This means it’s now possible to add or delete streams and modify regions of interest through a simple interface such as a web page.
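
As a rough sketch of that workflow, the snippet below posts a stream-add request to a running DeepStream application's REST server using Python's requests library. The host, port, endpoint path, and payload fields here are illustrative placeholders rather than the exact DeepStream REST API schema; refer to the DeepStream REST API documentation for the actual routes and JSON format.

```python
import requests

# Hypothetical endpoint and payload, shown only to illustrate the pattern of
# adding a stream at runtime over REST. See the DeepStream REST API docs for
# the actual routes and JSON schema.
DEEPSTREAM_REST_URL = "http://localhost:9000"  # assumed host/port of the app's REST server

new_stream = {
    "camera_id": "cam-01",                      # illustrative field names
    "uri": "rtsp://192.168.1.10:554/stream1",
}

resp = requests.post(f"{DEEPSTREAM_REST_URL}/stream/add", json=new_stream, timeout=5)
resp.raise_for_status()
print("Stream add response:", resp.json())
```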

Learn more


Get Cloud-Native

Cloud-native technologies give you the flexibility and agility needed for rapid product development and continuous product improvement over time. Organizations can build applications that are resilient and manageable, enabling faster deployments.

Developers can use the DeepStream Container Builder tool to build high-performance, cloud-native AI applications with NVIDIA NGC containers. The generated containers are easily deployed at scale and managed with Kubernetes and Helm Charts.



Learn more

Build End-to-End AI Solutions

Speed up overall development efforts and unlock greater real-time performance by building an end-to-end vision AI system with NVIDIA Metropolis. Start with production-quality vision AI models, adapt and optimize them with TAO Toolkit, and deploy using DeepStream.

Get incredible flexibility, from rapid prototyping to full production-level solutions, and choose your inference path. With native integration with NVIDIA Triton™ Inference Server, you can deploy models in native frameworks such as PyTorch and TensorFlow for inference. Using NVIDIA TensorRT™ for high-throughput inference, with options for multi-GPU, multi-stream, and batching support, also helps you achieve the best possible performance.
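
In practice, the choice of inference path usually comes down to which inference plug-in the pipeline uses. The minimal sketch below shows the switch between the two; the config file names and the USE_TRITON flag are illustrative placeholders.

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# Pick an inference path for the pipeline:
#  - "nvinfer" runs the model in-process with TensorRT.
#  - "nvinferserver" delegates inference to Triton Inference Server,
#    which can load models in native frameworks such as PyTorch or TensorFlow.
USE_TRITON = False  # illustrative switch, not a DeepStream setting

if USE_TRITON:
    infer = Gst.ElementFactory.make("nvinferserver", "primary-inference")
    infer.set_property("config-file-path", "config_infer_triton.txt")   # placeholder config
else:
    infer = Gst.ElementFactory.make("nvinfer", "primary-inference")
    infer.set_property("config-file-path", "config_infer_primary.txt")  # placeholder config

# 'infer' can now be added to a pipeline and linked between nvstreammux
# and the downstream tracking/OSD elements.
```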



Learn more

Access Reference Applications

DeepStream SDK is bundled with 30+ sample applications designed to help users kick-start their development efforts. Most samples are available in C/C++, Python, and Graph Composer versions and run on both NVIDIA Jetson™ and dGPU platforms. Reference applications can be used to learn about the features of the DeepStream plug-ins or as templates and starting points for developing custom vision AI applications.



DeepStream also now offers integration with Basler cameras for industrial inspection and lidar support for a wide range of applications.

Learn more

Work With Graph Composer

Graph Composer gives DeepStream developers a powerful, low-code development option. A simple and intuitive interface makes it easy to create complex processing pipelines and quickly deploy them using Container Builder.

Graph Composer abstracts much of the underlying DeepStream, GStreamer, and platform programming knowledge required to create the latest real-time, multi-stream vision AI applications. Instead of writing code, users interact with an extensive library of components, configuring and connecting them using the drag-and-drop interface.



Learn more

Accelerate DeepStream Apps With NVIDIA AI Enterprise

NVIDIA AI Enterprise is an end-to-end, secure, cloud-native suite of AI software. It delivers key benefits including validation and integration for NVIDIA AI open-source software, and access to AI solution workflows to accelerate time to production.

Global enterprise support is included with NVIDIA AI Enterprise to help you develop applications powered by DeepStream and manage the lifecycle of AI applications. This helps ensure that your business-critical projects stay on track.

Learn more

Explore Multiple Programming Options

C/C++

Create applications in C/C++, interact directly with GStreamer and DeepStream plug-ins, and use reference applications and templates.




Learn more about C/C++

Python

DeepStream pipelines can be constructed using Gst Python, the GStreamer framework's Python bindings. The source code for the bindings and Python sample applications is available on GitHub.
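
Below is a minimal sketch of that approach: a file-based decode → nvstreammux → nvinfer → nvdsosd pipeline assembled with Gst.parse_launch. The sample stream and inference config paths are assumptions based on a default DeepStream installation; adjust them to match your setup.

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

# filesrc -> h264parse -> nvv4l2decoder -> nvstreammux -> nvinfer -> nvdsosd -> sink
pipeline = Gst.parse_launch(
    "filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264 ! "
    "h264parse ! nvv4l2decoder ! m.sink_0 "
    "nvstreammux name=m batch-size=1 width=1280 height=720 ! "
    "nvinfer config-file-path=/opt/nvidia/deepstream/deepstream/samples/configs/"
    "deepstream-app/config_infer_primary.txt ! "
    "nvvideoconvert ! nvdsosd ! fakesink sync=false"
)

loop = GLib.MainLoop()
bus = pipeline.get_bus()
bus.add_signal_watch()
bus.connect("message::eos", lambda *args: loop.quit())    # stop at end of stream
bus.connect("message::error", lambda *args: loop.quit())  # stop on pipeline errors

pipeline.set_state(Gst.State.PLAYING)
try:
    loop.run()
finally:
    pipeline.set_state(Gst.State.NULL)
```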


Learn more about Python

Graph Composer

Graph Composer is a low-code development tool that enhances the DeepStream user experience. Using a simple, intuitive UI, processing pipelines are constructed with drag-and-drop operations.


Learn More About Graph Composer

Improve Accuracy and Real-Time Performance

DeepStream offers exceptional throughput for a wide variety of object detection, image processing, and instance segmentation AI models. The following table shows end-to-end application performance, from data ingestion, decoding, and image processing to inference, with multiple 1080p/30fps streams as input. Note that running on the DLAs on Jetson devices frees up the GPU for other tasks. For performance best practices, watch this video tutorial.


Application | Models | Tracker | Infer Resolution | Precision | Jetson Orin NX (GPU / DLA1 / DLA2) | Jetson AGX Orin (GPU / DLA1 / DLA2) | T4 | A2 | A10 | A30 | A100 | H100 | L40 | L4 | RTX* (A6000)
People Detect | PeopleNet-ResNet34 (Version 2.6) | No Tracker | 960x544 | INT8 | 141 / 65 / 65 | 456 / 130 / 130 | 420 | 233 | 993 | 1440 | 2336 | 3492 | 1969 | 745 | 1432
People Detect | PeopleNet-ResNet34 (Version 2.6) | NvDCF | 960x544 | INT8 | 131 / 65 / 65 | 418 / 130 / 130 | 418 | 229 | 957 | 1375 | 2048 | 3196 | 1946 | 738 | 1375
License Plate Recognition | TrafficCamNet + LPDNet + LPRNet | NvDCF | 960x544 / 640x480 / 96x48 | INT8 / INT8 / FP16 | 143 / - / - | 379 / - / - | 455 | 290 | 1155 | 1301 | 2059 | 2531 | 2323 | 762 | 1482
3D Body Pose Estimation | PeopleNet-ResNet34 + BodyPose3D | NvDCF | 960x544 / 192x256 | INT8 / FP16 | 32 / - / - | 62 / - / - | 91 | 59 | 143 | 167 | 187 | 207 | 144 | 169 | 144
Action Recognition | ActionRecognitionNet (3DConv) | No Tracker | 224x224x3x32 | FP16 | 36 / - / - | 122 / - / - | 134 | 72 | 1154 | 2598 | 2583 | 3181 | 2304 | 2476 | 1327

*RTX GPU performance is reported only for the flagship product(s). All SKUs support DeepStream.


The DeepStream SDK lets you apply AI to streaming video while simultaneously optimizing video decode/encode, image scaling and conversion, and edge-to-cloud connectivity for complete end-to-end performance optimization.


To learn more about DeepStream performance, check the documentation.

Read Customer Stories


OneCup AI

OneCup AI’s computer vision system tracks and classifies animal activity using NVIDIA pretrained models, TAO Toolkit, and DeepStream SDK, significantly reducing their development time from months to weeks.


Learn more about OneCup AI

KoiReader

KoiReader developed an AI-powered machine vision solution using NVIDIA developer tools including DeepStream SDK to help PepsiCo achieve precision and efficiency in dynamic distribution environments.



Learn more about KoiReader

Trifork

Trifork jumpstarted their AI model development with NVIDIA DeepStream SDK, pretrained models, and TAO Toolkit to develop their AI-based baggage tracking solution for airports.



Learn more about Trifork

General FAQ

Is the DeepStream SDK open source?
DeepStream is a closed-source SDK. Note that the sources for all reference applications and several plug-ins are available.

What can I build with the DeepStream SDK?
The DeepStream SDK can be used to build end-to-end AI-powered applications to analyze video and sensor data. Some popular use cases are retail analytics, parking management, managing logistics, optical inspection, robotics, and sports analytics.

Can I run models in their native frameworks, such as TensorFlow or PyTorch?
Yes, that’s now possible with the integration of the Triton Inference Server. Also, with DeepStream 6.1.1, applications can communicate with independent or remote instances of Triton Inference Server using gRPC.

Which neural networks does DeepStream support out of the box?
DeepStream supports several popular networks out of the box. For instance, DeepStream supports MaskRCNN. Also, DeepStream ships with examples to run the popular YOLO models, FasterRCNN, SSD, and RetinaNet.

Does DeepStream support the NVIDIA Ampere architecture?
Yes, DeepStream 6.0 or later supports the Ampere architecture.

Does DeepStream support audio?
Yes, audio is supported with DeepStream SDK 6.1.1. To get started, download the software and review the reference audio and Automatic Speech Recognition (ASR) applications. Learn more by reading the ASR DeepStream Plugin documentation.

Build high-performance vision AI apps and services using DeepStream SDK.

Get Started