Being able to predict extreme weather events is essential as such conditions become more common and destructive. Subseasonal climate forecasting, predicting weather two or more weeks in the future, underpins proactive decision making and risk management across sectors that are sensitive to weather fluctuations. It can help farmers better choose which crops to grow and manage their water…
As part of continued efforts to ensure NVIDIA Omniverse is a developer-first platform, NVIDIA will be deprecating the Omniverse Launcher on Oct. 1. Doing so will enable a more open, integrated, and efficient development experience. Removing the Launcher will streamline how developers access essential tools and resources on the platforms they already use and trust.
NVIDIA today released developer previews of NVIDIA Isaac Sim and NVIDIA Isaac Lab, reference robotics simulation and learning frameworks. Now available on GitHub, these releases offer early access to cutting-edge capabilities for building, training, and testing AI-powered robots in physics-based simulation environments. Isaac Sim is a reference application built on NVIDIA…
Generalist robots have arrived, powered by advances in mechatronics and robot AI foundation models. But a key bottleneck remains: robots need vast training data for skills like assembly and inspection, and manual demonstrations aren't scalable. The NVIDIA Isaac GR00T-Dreams blueprint, built on NVIDIA Cosmos, solves this challenge by generating massive synthetic trajectory data from just a single…
Modern products often consist of millions of parts and require intricate design and collaboration. The industrial world faces significant challenges in managing this complexity, with traditional visualization tools failing to render these large, multi-CAD assemblies with the true-to-life realism required to fully benefit from digital twins. To address these struggles…
Autonomous vehicle (AV) stacks are evolving from a hierarchy of discrete building blocks to end-to-end architectures built on foundation models. This transition demands an AV data flywheel to generate synthetic data and augment sensor datasets, address coverage gaps, and, ultimately, build a validation toolchain to safely develop and deploy autonomous vehicles. In this blog post…
Announced at COMPUTEX 2025, the NVIDIA Omniverse Blueprint for AI factory digital twins has expanded to support OpenUSD schemas. The blueprint features new tools to simulate more aspects of data center design across power, cooling, and networking infrastructure. Engineering teams can now design and test entire AI factories in a realistic virtual world, helping to catch issues early so they can…
Join us at GTC Paris on June 10th and choose from six full-day, instructor-led workshops.
Universal Scene Description (OpenUSD) offers a powerful, open, and extensible ecosystem for describing, composing, simulating, and collaborating within complex 3D worlds. From handling massive datasets and automating workflows for digital twins to enabling real-time rendering for games and streamlining industrial operations in manufacturing and energy, it is transforming how industries work with…
AI has become nearly synonymous with innovation. As it rushes onto the world stage, AI is seeding inspiration in creators and problem-solvers of all stripes, from artists to more traditional industrial inventors. One of the world's leading AI-first artists, Alexander Reben, has spent his career integrating AI into different artistic mediums. His current work explores AI and robotics and…
Humans know more about deep space than about Earth's deepest oceans. But scientists have plans to change that, with the help of AI. "We have better maps of Mars than we do of our own exclusive economic zone," said Nick Rotker, chief BlueTech strategist at MITRE, a US government-sponsored nonprofit research organization. "Around 70% of the Earth is covered in water and we've explored…
At GTC 2025, a panel of industry leaders from across the tech ecosystem shared how they're using AI to mitigate and prepare customers for the increasingly disruptive impact of climate change. Tenika Versey, the global head of sustainable futures for the NVIDIA Inception program, led a panel that included Colin le Duc, founding partner at Generation Investment Management, Suzanne DiBianca…
Industrial enterprises are embracing physical AI and autonomous systems to transform their operations. This involves deploying heterogeneous robot fleets that include mobile robots, humanoid assistants, intelligent cameras, and AI agents throughout factories and warehouses. To harness the full potential of these physical AI-enabled systems, companies rely on digital twins of their facilities…
Welcome to the first edition of the NVIDIA Robotics Research and Development Digest (R2D2). This technical blog series will give developers and researchers deeper insight and access to the latest physical AI and robotics research breakthroughs across various NVIDIA Research labs. Developing robust robots presents significant challenges. We address these challenges through…
Kit SDK 107.0 is a major release whose primary updates target robotics development.
Large ensembles are essential for predicting rare, high-impact events that cannot be fully understood through historical data alone. By simulating thousands of potential scenarios, they provide the statistical depth necessary to assess risks, prepare for extremes, and build resilience against once-in-a-century disasters. Global insurance group AXA is conducting simulations with cutting-edge…
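To make that statistical argument concrete, here is a minimal sketch, assuming a toy lognormal loss model and illustrative numbers rather than AXA's actual simulations, of why small ensembles cannot resolve once-in-a-century tail probabilities:

```python
# Toy Monte Carlo ensemble: estimate the probability that annual losses
# exceed a hypothetical "once-in-a-century" threshold. The distribution
# and threshold are illustrative assumptions, not a real risk model.
import numpy as np

rng = np.random.default_rng(42)
threshold = 5.0e9  # hypothetical catastrophic-loss threshold, in dollars

for n_members in (100, 1_000, 100_000):
    losses = rng.lognormal(mean=20.0, sigma=1.0, size=n_members)
    p_exceed = (losses > threshold).mean()
    print(f"{n_members:>7} members -> P(loss > threshold) ~ {p_exceed:.4f}")
```

With only 100 members, a roughly 1% tail event shows up once or not at all, so the estimate is dominated by noise; only the large ensembles pin it down.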
In the United Arab Emirates (UAE), extreme weather events disrupt daily life, delaying flights, endangering transportation, and complicating urban planning. High daytime temperatures limit human activity outdoors, while dense nighttime fog is a frequent cause of severe and often fatal car crashes. Meanwhile, 2024 saw the heaviest precipitation event in the country in 75 years…
The wireless industry stands at the brink of a transformation, driven by the fusion of AI with advanced 5G and upcoming 6G technologies that promise unparalleled speeds, ultra-low latency, and seamless connectivity for billions of AI-powered endpoints. 6G specifically will be AI-native, enabling integrated sensing and communications, supporting immersive technologies like extended reality and…
NVIDIA DGX Cloud Serverless Inference is an auto-scaling AI inference solution that enables application deployment with speed and reliability. Powered by NVIDIA Cloud Functions (NVCF), DGX Cloud Serverless Inference abstracts multi-cluster infrastructure setups across multi-cloud and on-premises environments for GPU-accelerated workloads. Whether managing AI workloads…
The world of robotics is undergoing a significant transformation, driven by rapid advancements in physical AI. This evolution is accelerating time to market for new robotic solutions, enhancing confidence in their safety capabilities, and helping power physical AI in factories and warehouses. Announced at GTC, Newton is an open-source, extensible physics engine developed…
Physical AI models enable robots to autonomously perceive, interpret, reason, and interact with the real world. Accelerated computing and simulations are key to developing the next generation of robotics. Physics plays a crucial role in robotic simulation, providing the foundation for accurate virtual representations of robot behavior and interactions within realistic environments.
With recent advancements in generative AI and vision foundation models, vision language models (VLMs) present a new wave of visual computing in which models are capable of highly sophisticated perception and deep contextual understanding. These intelligent solutions offer a promising means of enhancing semantic comprehension in XR settings. By integrating VLMs, developers can significantly improve how XR…
As recently announced at MWC Barcelona, developers can now stream augmented reality (AR) experiences built with NVIDIA Omniverse to the Apple iPad. Omniverse, a platform for real-time collaboration and simulation, enables developers to create and stream detailed datasets with high visual quality. Built on Universal Scene Description (OpenUSD), Omniverse enables seamless compatibility across 3D tools…
Learn how to adopt and evolve OpenUSD for the world's physical and industrial AI data pipelines and workflows.
Explore the future of extended reality and learn how spatial computing is reshaping immersive development and industry workflows.
Universal Scene Description (OpenUSD) is an open, extensible framework and ecosystem with APIs for composing, editing, querying, rendering, collaborating, and simulating within 3D virtual worlds. This post explains how you can start using OpenUSD today with your existing assets and tools and what steps you can take to iteratively up-level your USD workflows. For an interactive…
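As a first step, here is a minimal sketch of authoring and querying a stage with OpenUSD's Python API, assuming the open-source usd-core package (pip install usd-core); the file and prim names are illustrative:

```python
# Create a stage, define a small prim hierarchy, author an attribute,
# and traverse it back; the same traversal works on assets from any tool.
from pxr import Usd, UsdGeom

stage = Usd.Stage.CreateNew("hello_world.usda")
xform = UsdGeom.Xform.Define(stage, "/World")           # transformable root prim
sphere = UsdGeom.Sphere.Define(stage, "/World/Sphere")  # sphere beneath it
sphere.GetRadiusAttr().Set(2.0)                         # author an attribute value
stage.SetDefaultPrim(xform.GetPrim())
stage.Save()

for prim in stage.Traverse():
    print(prim.GetPath(), prim.GetTypeName())
```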
AI-driven flood modeling and 3D visualization tools are transforming how communities prepare for and respond to climate risks. In this NVIDIA GTC 2024 session, Guy Schumann and Guillaume Gallion from RSS-Hydro explore how next-generation geospatial applications and high-fidelity visualizations, including NVIDIA Omniverse, can enhance disaster resilience by providing dynamic tools for decision…
Take the three self-paced courses at no cost through the NVIDIA Deep Learning Institute (DLI).
As robotics and autonomous vehicles advance, accelerating development of physical AI, which enables autonomous machines to perceive, understand, and perform complex actions in the physical world, has become essential. At the center of these systems are world foundation models (WFMs): AI models that simulate physical states through physics-aware videos, enabling machines to make accurate decisions and…
Kit 106.5 adds USDz export support, an improved new-project flow, and a preview of the new RTX real-time mode.
Spatial computing experiences are transforming how we interact with data, connecting the physical and digital worlds through technologies like extended reality (XR) and digital twins. These advancements are enabling more intuitive and immersive ways to analyze and understand complex datasets. This post explains how developers can now engage with Universal Scene Description (OpenUSD)-based…
Training physical AI models used to power autonomous machines, such as robots and autonomous vehicles, requires huge amounts of data. Acquiring large sets of diverse training data can be difficult, time-consuming, and expensive. Data is often limited due to privacy restrictions or concerns, or simply may not exist for novel use cases. In addition, the available data may not apply to the full range…
NVIDIA 6G Developer Day 2024 brought together members of the 6G research and development community to share insights and learn new ways of engaging with NVIDIA 6G research tools. More than 1,300 academic and industry researchers from across the world attended the virtual event. It featured presentations from NVIDIA, ETH Zürich, Keysight, Northeastern University, Samsung, Softbank…
As global electricity demand continues to rise, traditional sources of energy are increasingly unsustainable. Energy providers are facing pressure to reduce reliance on fossil fuels while ensuring a fully supplied and stable grid. In this context, solar energy has emerged as a vital renewable resource, being one of the most abundant clean energy sources available. However…
As enterprises increasingly integrate AI into their industrial operations to deliver more automated and autonomous facilities, more operations teams are becoming centralized in remote operations centers. From these centers, teams monitor, operate, and provide expert guidance to distributed production sites. A new generation of 3D remote monitoring solutions, powered by advancements in…
Today, brands and their creative agencies are under huge strain to create and deliver high-quality, accurate product images at scale, from campaign key visuals to packshots for e-commerce. Audience-targeted content, such as personalized and localized visual variations, adds further layers of complexity to production. Production costs, short timelines, resources…
Everything that is manufactured is first simulated with advanced physics solvers. Real-time digital twins (RTDTs) are the cutting edge of computer-aided engineering (CAE) simulation, because they enable immediate feedback in the engineering design loop. They empower engineers to innovate freely and rapidly explore new designs by experiencing in real time the effects of any change in the simulation.
Dale Durran, a professor in the Atmospheric Sciences Department at the University of Washington, introduces a breakthrough deep learning model that combines atmospheric and oceanic data to set new climate and weather prediction accuracy standards. In this NVIDIA GTC 2024 session, Durran presents techniques that reduce reliance on traditional parameterizations, enabling the model to bypass…
When interfacing with generative AI applications, users have multiple communication options: text, voice, or digital avatars. Traditional chatbot or copilot applications have text interfaces where users type in queries and receive text-based responses. For hands-free communication, speech AI technologies like automatic speech recognition (ASR) and text-to-speech (TTS) facilitate…
Programming robots for real-world success requires a training process that accounts for unpredictable conditions, different surfaces, variations in object size, shape, texture, and more. Consequently, physically accurate simulations are vital for training AI-enabled robots before deployment. Crafting physically accurate simulation requires advanced programming skills to fine-tune algorithms…
Robotics could make everyday life easier by taking on repetitive or time-consuming tasks. At NVIDIA GTC 2024, researchers from Stanford University unveiled BEHAVIOR-1K, a major benchmark designed to train robots to perform 1,000 real-world-inspired activities, such as folding laundry, cooking breakfast, and cleaning up after a party. Using OmniGibson, a cutting-edge simulation environment for…
NVIDIA has built three computers and accelerated development platforms to enable developers to create physical AI.
Physical AI-powered robots need to autonomously sense, plan, and perform complex tasks in the physical world. These include transporting and manipulating objects safely and efficiently in dynamic and unpredictable environments. Robot simulation enables developers to train, simulate, and validate these advanced systems through virtual robot learning and testing. It all happens in physics…
The integration of robotic surgical assistants (RSAs) in operating rooms offers substantial advantages for both surgeons and patient outcomes. Currently operated through teleoperation by trained surgeons at a console, these surgical robot platforms provide augmented dexterity that has the potential to streamline surgical workflows and alleviate surgeon workloads. Exploring visual behavior cloning…
Reality capture creates highly accurate, detailed, and immersive digital representations of environments. Innovations in site scanning and accelerated data processing, and emerging technologies like neural radiance fields (NeRFs) and Gaussian splatting, are significantly enhancing the capabilities of reality capture. These technologies are revolutionizing interactions with and analyses of the…
Producing commercials is resource-intensive, requiring physical locations and various props and setups to display products in different settings and environments for more accurate consumer targeting. This traditional process is not only expensive and time-consuming but can also be destructive to the physical environment. It leaves you with no way to capture a new angle after you return home.
Gaming has always pushed the boundaries of graphics hardware. The most popular games typically required robust GPU, CPU, and RAM resources on a user's PC or console; that is, until the advent of GeForce NOW and cloud gaming. Today, with the power of interactive streaming from the cloud, any user on almost any device can play the latest and greatest of today's games. However…
The journey to 6G has begun, offering opportunities to deliver a network infrastructure that is performant, efficient, resilient, and adaptable. 6G networks will be significantly more complex than their predecessors and will rely on a variety of new technologies, especially AI and machine learning (ML). To advance these new technologies and optimize network performance and efficiency…
Accelerate your OpenUSD workflows with this free curriculum for developers and 3D practitioners.
Generative physical AI models can understand and execute actions with fine or gross motor skills within the physical world. Understanding and navigating the 3D space of the physical world requires spatial intelligence. Achieving spatial intelligence in physical AI involves converting the real world into AI-ready virtual representations that the model can understand.
Originally published on July 29, 2024, this post was updated on October 8, 2024. Robots need to be adaptable, readily learning new skills and adjusting to their surroundings. Yet traditional training methods can limit a robot's ability to apply learned skills in new situations. This is often due to the gap between perception and action, as well as the challenges in transferring skills across…
NVIDIA announced new USD-based generative AI and NVIDIA-accelerated development tools built on NVIDIA Omniverse at SIGGRAPH 2024. These advancements will expand adoption of Universal Scene Description (OpenUSD) to robotics, industrial design, and engineering, so developers can quickly build highly accurate virtual worlds for the next evolution of AI. OpenUSD is an open-source framework and…
Developers from advertising agencies to software vendors are empowering global brands to deliver hyperpersonalization for digital experiences and visual storytelling with product configurator solutions. Integrating NVIDIA Omniverse with OpenUSD and generative AI into product configurators enables solution providers and software developers to deliver interactive, ray-traced…
Complimentary training sessions on OpenUSD, Digital Humans, LLMs, and more, with hands-on labs for Full Conference and Experience attendees.
The world's energy system is increasingly complex and distributed due to growing renewable energy generation, decentralization of energy resources, and decarbonization of heavy industries. Energy producers are challenged to optimize operational efficiency and costs within hybrid power plants generating both renewable and carbon-based electricity. Grid operators have less time to dispatch energy…
Large-scale, use-case-specific synthetic data has become increasingly important in real-world computer vision and AI workflows. That's because digital twins are a powerful way to create physics-based virtual replicas of factories, retail spaces, and other assets, enabling precise simulations of real-world environments. NVIDIA Isaac Sim, built on NVIDIA Omniverse, is a fully extensible…
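As an illustration of the kind of synthetic data workflow Isaac Sim enables, here is a hedged sketch written in the style of the omni.replicator.core API; it runs only inside Omniverse's embedded Python, and the specific prims, counts, and writer settings are illustrative:

```python
# Sketch of Replicator-style domain randomization: spawn labeled objects,
# randomize their poses each frame, and write annotated RGB frames.
import omni.replicator.core as rep

with rep.new_layer():
    camera = rep.create.camera(position=(0, 0, 10))
    render_product = rep.create.render_product(camera, (1024, 1024))
    parts = rep.create.cube(count=10, semantics=[("class", "part")])

    with rep.trigger.on_frame(num_frames=100):  # randomize each captured frame
        with parts:
            rep.modify.pose(
                position=rep.distribution.uniform((-5, -5, 0), (5, 5, 0)),
            )

    writer = rep.WriterRegistry.get("BasicWriter")
    writer.initialize(output_dir="_out_synthetic", rgb=True,
                      bounding_box_2d_tight=True)
    writer.attach([render_product])
```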
SyncTwin GmbH, a company that builds software to optimize production, intralogistics, and assembly, is on a mission to unlock industrial digital twins for small and medium-sized businesses (SMBs). While SyncTwin has helped major global companies like BMW minimize costs and downtime in their factories with digital twins, it is now shifting its focus to enable manufacturing businesses…
As vision AI complexity increases, streamlined deployment solutions are crucial to optimizing spaces and processes. NVIDIA accelerates development, turning ideas into reality in weeks rather than months with NVIDIA Metropolis AI workflows and microservices. In this post, we explore Metropolis microservices features. Managing and automating infrastructure with AI is…
The era of AI robots powered by physical AI has arrived. Physical AI models understand their environments and autonomously complete complex tasks in the physical world. Many of the complex tasks, like dexterous manipulation and humanoid locomotion across rough terrain, are too difficult to program and rely on generative physical AI models trained using reinforcement learning (RL) in simulation.
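For orientation, the simulation-in-the-loop RL pattern looks roughly like the sketch below, written against the generic Gymnasium API rather than Isaac Lab's own interfaces; the random policy stands in for a trained one:

```python
# Generic RL interaction loop: observe, act, step the simulation, reset
# on episode end. A real setup swaps in a physics sim and learned policy.
import gymnasium as gym

env = gym.make("CartPole-v1")
obs, info = env.reset(seed=0)

for step in range(1_000):
    action = env.action_space.sample()  # placeholder for a trained policy
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:         # episode over: reset the simulation
        obs, info = env.reset()
env.close()
```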
With AI introducing an unprecedented pace of technological innovation, staying ahead means keeping your skills up to date. The NVIDIA Developer Program gives you the tools, training, and resources you need to succeed with the latest advancements across industries. We're excited to announce the following five new technical courses from NVIDIA. Join the Developer Program now to get hands-on…
NVIDIA Omniverse is a platform that enables you to build applications for complex 3D and industrial digitalization workflows based on Universal Scene Description (OpenUSD). The platform's modular architecture breaks down into core technologies and services, which you can directly integrate into tools and applications, customizing as needed. This approach simplifies integration…
With the growing emphasis on environmental, social, and governance (ESG) investments and initiatives, manufacturers are looking for new ways to increase energy efficiency and sustainability across their operations. One area of opportunity in electronics manufacturing is the performance of run-in test rooms, which are essential for ensuring the reliability, quality, and safety of the world's…
Manufacturers face increased pressures to shorten production cycles, enhance productivity, and improve quality, all while reducing costs. To address these challenges, they're investing in industrial digitalization and AI-enabled digital twins to unlock new possibilities from planning to operations. Developers at Pegatron, an electronics manufacturer based in Taiwan, used NVIDIA AI…
This post is the first in a series on building multi-camera tracking vision AI applications. In this part, we introduce the overall end-to-end workflow, focusing on building and deploying the multi-camera tracking system. The second part covers fine-tuning AI models with synthetic data to enhance system accuracy. Large areas like warehouses, factories, stadiums, and airports are typically…
AI is rapidly changing industrial visual inspection. In a factory setting, visual inspection is used for many issues, including detecting defects and missing or incorrect parts during assembly. Computer vision can help identify problems with products early on, reducing the chances of them being delivered to customers. However, developing accurate and versatile object detection models remains…
With automotive consumers increasingly seeking more seamless, connected driving experiences, the industry has increased its focus on connectivity, advanced camera systems, and the in-vehicle experience. Continental, a leading German automotive technology company and innovator for automotive display solutions, is developing AI-powered virtual factory solutions to address these shifts and…
With NVIDIA AI, NVIDIA Omniverse, and the Universal Scene Description (OpenUSD) ecosystem, industrial developers are building virtual factory solutions that accelerate time-to-market, maximize production capacity, and cut costs through optimized processes for both brownfield and greenfield developments. Companies such as Delta Electronics, FoxConn, Pegatron, and Wistron have developed…
Today, NVIDIA and the Alliance for OpenUSD (AOUSD) announced the AOUSD Materials Working Group, an initiative for standardizing the interchange of materials in Universal Scene Description, known as OpenUSD. As an extensible framework and ecosystem for describing, composing, simulating, and collaborating within 3D worlds, OpenUSD enables developers to build interoperable 3D workflows…
We are excited to be back in person at GTC this year at the San Jose Convention Center. With thousands of developers, industry leaders, researchers, and partners in attendance, GTC gives you a unique opportunity to network with legends in technology and AI, and to experience NVIDIA CEO Jensen Huang's keynote live on stage at the SAP Center. Past GTC alumni? Get 40%…
Learn how synthetic data is supercharging 3D simulation and computer vision workflows, from visual inspection to autonomous machines.
Gain a foundational understanding of USD, the open and extensible framework for creating, editing, querying, rendering, collaborating, and simulating within 3D worlds.
Developers and enterprises can now deploy lifelike virtual and mixed reality experiences with Varjo's latest XR-4 series headsets, which are integrated with NVIDIA technologies. These XR headsets match the resolution that the human eye can see, providing users with realistic visual fidelity and performance. The latest XR-4 series headsets support NVIDIA Omniverse and are powered by NVIDIA…
HOMEE AI, an NVIDIA Inception member based in Taiwan, has developed an "AI-as-a-service" spatial planning solution to disrupt the $650B global home decor market. They're helping furniture makers and home designers find new business opportunities in the era of industrial digitalization. Using NVIDIA Omniverse, the HOMEE AI engineering team developed an enterprise-ready service to deliver…
Discover why OpenUSD is central to the future of 3D development with Aaron Luk, a founding developer of Universal Scene Description.
Railroad simulation is important in modern transportation and logistics, providing a virtual testing ground for the intricate interplay of tracks, switches, and rolling stock. It serves as a crucial tool for engineers and developers to fine-tune and optimize railway systems, ensuring efficiency, safety, and cost-effectiveness. Physically realistic simulations enable comprehensive scenario…
Convai is a versatile developer platform for designing characters with advanced multimodal perception abilities. These characters are designed to integrate seamlessly into both the virtual and real worlds. Whether you're a creator, game designer, or developer, Convai enables you to quickly modify a non-playable character (NPC), from backstory and knowledge to voice and personality.
Much of the communication between drivers goes beyond turn signals and brake lights. Motioning another car to proceed, looking over to see if another driver is paying attention, even the friendly Jeep wave: all rely on human-based communication rather than vehicle technology. As autonomous vehicles (AVs) must coexist with human drivers for the foreseeable future, they must be able to interpret…
In the fourth installment of this series on the superpowers of OpenUSD, learn how any digital content creation tool can be connected to USD. OpenUSD's data source interoperability allows data from different tools to be used in the same scene or project.
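A minimal sketch of that interoperability, assuming the usd-core Python package and hypothetical asset file names: content exported to USD from different tools is composed into a single stage via references:

```python
# Reference USD assets authored in different applications into one scene.
from pxr import Usd

stage = Usd.Stage.CreateNew("assembly.usda")

robot = stage.DefinePrim("/World/Robot")
robot.GetReferences().AddReference("robot_from_cad_tool.usd")      # hypothetical CAD export
factory = stage.DefinePrim("/World/Factory")
factory.GetReferences().AddReference("factory_from_dcc_tool.usd")  # hypothetical DCC export

stage.Save()
```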
NVIDIA researchers took the stage at the SIGGRAPH Asia Real-Time Live event in Sydney to showcase generative AI integrated into an interactive texture painting workflow, enabling artists to paint complex, non-repeating textures directly on the surface of 3D objects. Rather than generating complete results with only high-level user guidance, this prototype shows how AI can function as a brush in…
From monotonous highways to routine neighborhood trips, driving is often uneventful. As a result, much of the training data for autonomous vehicle (AV) development collected in the real world is heavily skewed toward simple scenarios. This poses a challenge to deploying robust perception models. AVs must be thoroughly trained, tested, and validated to handle complex situations…
From last-minute cut-ins to impromptu U-turns, human drivers can be incredibly unpredictable. This unpredictability stems from the complex nature of human decision-making, which is influenced by multiple factors and varies across different operational design domains (ODDs) and countries, making it difficult to emulate in simulation. Yet, autonomous vehicle (AV) developers need to confidently…
For manufacturing and industrial enterprises, efficiency and precision are essential. To streamline operations, reduce costs, and enhance productivity, companies are turning to digital twins and discrete-event simulation. Discrete-event simulation enables manufacturers to optimize processes by experimenting with different inputs and behaviors that can be modeled and tested step by step.
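Here is a minimal discrete-event simulation sketch using the SimPy library (generic Python tooling, not any NVIDIA product), with illustrative arrival and processing times for a single-machine station:

```python
# Parts arrive at random intervals and queue for one machine; tweaking the
# rates lets you test process changes step by step. Times are illustrative.
import random
import simpy

def part(env, name, machine):
    arrive = env.now
    with machine.request() as req:                       # queue for the machine
        yield req
        yield env.timeout(random.expovariate(1 / 4.0))   # ~4 min processing
    print(f"{name} done at t={env.now:5.1f}, time in system {env.now - arrive:5.1f}")

def source(env, machine):
    for i in range(10):
        env.process(part(env, f"part-{i}", machine))
        yield env.timeout(random.expovariate(1 / 3.0))   # ~3 min between arrivals

random.seed(0)
env = simpy.Environment()
machine = simpy.Resource(env, capacity=1)
env.process(source(env, machine))
env.run()
```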
Synthetic data can play a key role when training perception AI models that are deployed on autonomous mobile robots (AMRs). This process is becoming increasingly important in manufacturing. For an example of using synthetic data to generate a pretrained model that can detect pallets in a warehouse, see Developing a Pallet Detection Model Using OpenUSD and Synthetic Data.
Data is the lifeblood of AI systems, which rely on robust datasets to learn and make predictions or decisions. For perception AI models specifically, it is essential that data reflects real-world environments and incorporates a wide array of scenarios. This includes edge use cases for which data is often difficult to collect, such as street traffic and manufacturing assembly lines.
NVIDIA announced major updates to the NVIDIA Isaac Robotics platform today at ROSCon 2023. The platform delivers performant perception and high-fidelity simulation to robotics developers worldwide. These updates include the releases of NVIDIA Isaac ROS 2.0 and NVIDIA Isaac Sim 2023.1, along with perception and simulation upgrades that simplify building and testing performant AI-based robotic applications…
Photogrammetry is the process of capturing images and stitching them together to create a digital model of the physical world.
Take this free self-paced course to learn how to leverage NVIDIA Omniverse Kit to easily build apps on the Omniverse platform.
Sensor simulation is a critical tool to address the gaps in real-world data for autonomous vehicle (AV) development. However, it is only effective if sensor models accurately reflect the physical world. Sensors can be either passive, such as cameras, or active, sending out either an electromagnetic wave (lidar, radar) or an acoustic wave (ultrasonic) to generate the sensor output.
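As a worked example of the physics an active sensor model must honor, a lidar return's range follows directly from its round-trip time of flight, r = c·t/2:

```python
# Convert a lidar pulse's round-trip time of flight into a range estimate.
C = 299_792_458.0  # speed of light in a vacuum, m/s

def lidar_range_m(time_of_flight_s: float) -> float:
    return C * time_of_flight_s / 2.0  # halve it: the pulse travels out and back

print(f"{lidar_range_m(667e-9):.1f} m")  # a ~667 ns round trip is ~100 m
```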
Custom schemas in Universal Scene Description, known as OpenUSD or USD, are pivotal for developers seeking to represent and encode sophisticated virtual worlds. By formalizing data models, schemas enable the interpretation of raw data by USD-compliant runtimes. Whether underpinning physics simulations or expanding digital twins, custom schemas provide the foundation for creativity, fidelity…
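As a hedged illustration, and a common prototyping step before authoring a formal schema, a custom data model can be sketched with namespaced attributes; the demo: namespace and payload property here are hypothetical, not part of any shipped schema:

```python
# Prototype a custom data model with a namespaced attribute on a prim.
from pxr import Sdf, Usd

stage = Usd.Stage.CreateNew("robot_data.usda")
prim = stage.DefinePrim("/World/Robot")
attr = prim.CreateAttribute("demo:payloadKg", Sdf.ValueTypeNames.Float)
attr.Set(12.5)  # runtimes that understand the model can interpret this value
stage.Save()
```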
Moment Factory is a global multimedia entertainment studio that combines specializations in video, lighting, architecture, sound, software, and interactivity to create immersive experiences for audiences around the world. At NVIDIA GTC 2024, Moment Factory will showcase digital twins for immersive location-based entertainment with Universal Scene Description (OpenUSD). To see the latest…
Developing extended reality (XR) applications can be extremely challenging. Users typically start with a template project and adhere to pre-existing packaging templates for deploying an app to a headset. This approach creates a distinct bottleneck in the asset iteration pipeline. Updating assets inside an XR experience becomes completely dependent on how fast the developer can build, package…
The latest release of NVIDIA Omniverse delivers an exciting collection of new features based on Omniverse Kit 105, making it easier than ever for developers to get started building 3D simulation tools and workflows. Built on Universal Scene Description, known as OpenUSD, and NVIDIA RTX and AI technologies, Omniverse enables you to create advanced, real-time 3D simulation applications for…
This post was updated January 16, 2024. Recent years have witnessed a massive increase in the volume of 3D geospatial data being generated. This data provides rich real-world environmental and contextual information, spatial relationships, and real-time monitoring capabilities for industrial applications. It can enhance the realism, accuracy, and effectiveness of simulations across various…
A new paradigm for data modeling and interchange is unlocking possibilities for 3D workflows and virtual worlds.
Smart cities are the future of urban living. Yet they can present various challenges for city planners, most notably in the realm of transportation. To be successful, various aspects of the city, from environment and infrastructure to business and education, must be functionally integrated. This can be difficult, as managing traffic flow alone is a complex problem full of challenges such as…
Imagine you are a robotics or machine learning (ML) engineer tasked with developing a model to detect pallets so that a forklift can manipulate them. You are familiar with traditional deep learning pipelines, you have curated manually annotated datasets, and you have trained successful models. You are ready for the next challenge, which comes in the form of large piles of densely stacked…
Robotics simulation enables virtual training and programming that can use physics-based digital representations of environments, robots, machines, objects, and other assets.
Siloed data has long been a challenge in architecture, engineering, and construction (AEC), hindering productivity and collaboration. However, innovative new solutions are transforming the way that architects, engineers, and construction managers work together on building information modeling (BIM) workflows, offering new possibilities for real-time collaboration. The new NVIDIA Omniverse…
Join this AMA on June 28 and ask our experts how to build an AI-powered extension for NVIDIA Omniverse using ChatGPT.
Embedded edge AI is transforming industrial environments by introducing intelligence and real-time processing to even the most challenging settings. Edge AI is increasingly being used in agriculture, construction, energy, aerospace, satellites, the public sector, and more. With the NVIDIA Jetson edge AI and robotics platform, you can deploy AI and compute for sensor fusion in these complex…
]]>