Rich Harang – NVIDIA Technical Blog News and tutorials for developers, data scientists, and IT admins 2025-06-12T18:48:31Z http://www.open-lab.net/blog/feed/ Rich Harang <![CDATA[Securely Deploy AI Models with NVIDIA NIM]]> http://www.open-lab.net/blog/?p=101701 2025-06-12T18:48:31Z 2025-06-11T11:00:00Z Imagine you��re leading security for a large enterprise and your teams are eager to leverage AI for more and more projects. There��s a problem, though. As...]]>

Imagine you’re leading security for a large enterprise and your teams are eager to leverage AI for more and more projects. There’s a problem, though. As with any project, you must balance the promise and returns of innovation with the hard realities of compliance, risk management, and security posture mandates. Security leaders face a crucial challenge when evaluating AI models such as those…

Source

]]>
Rich Harang <![CDATA[Structuring Applications to Secure the KV Cache]]> http://www.open-lab.net/blog/?p=99425 2025-05-15T19:08:32Z 2025-04-29T22:43:01Z When interacting with transformer-based models like large language models (LLMs) and vision-language models (VLMs), the structure of the input shapes the...]]>

When interacting with transformer-based models like large language models (LLMs) and vision-language models (VLMs), the structure of the input shapes the model’s output. But prompts are often more than a simple user query. In practice, they optimize the response by dynamically assembling data from various sources such as system instructions, context data, and user input.

Source

]]>
Rich Harang <![CDATA[Defining LLM Red Teaming]]> http://www.open-lab.net/blog/?p=96239 2025-04-23T02:37:15Z 2025-02-25T18:49:26Z There is an activity where people provide inputs to generative AI technologies, such as large language models (LLMs), to see if the outputs can be made to...]]>

There is an activity where people provide inputs to generative AI technologies, such as large language models (LLMs), to see if the outputs can be made to deviate from acceptable standards. This use of LLMs began in 2023 and has rapidly evolved to become a common industry practice and a cornerstone of trustworthy AI. How can we standardize and define LLM red teaming?

Source

]]>
Rich Harang <![CDATA[Agentic Autonomy Levels and Security]]> http://www.open-lab.net/blog/?p=96341 2025-04-23T02:36:53Z 2025-02-25T18:45:05Z Agentic workflows are the next evolution in AI-powered tools. They enable developers to chain multiple AI models together to perform complex activities, enable...]]>

Agentic workflows are the next evolution in AI-powered tools. They enable developers to chain multiple AI models together to perform complex activities, enable AI models to use tools to access additional data or automate user actions, and enable AI models to operate autonomously, analyzing and performing complex tasks with a minimum of human involvement or interaction. Because of their power…

Source

]]>
Rich Harang <![CDATA[NVIDIA Presents AI Security Expertise at Leading Cybersecurity Conferences]]> http://www.open-lab.net/blog/?p=89054 2024-09-19T19:29:43Z 2024-09-18T17:03:46Z Each August, tens of thousands of security professionals attend the cutting-edge security conferences Black Hat USA and DEF CON. This year, NVIDIA AI security...]]>

Each August, tens of thousands of security professionals attend the cutting-edge security conferences Black Hat USA and DEF CON. This year, NVIDIA AI security experts joined these events to share our work and learn from other members of the community. This post provides an overview of these contributions, including a keynote on the rapidly evolving AI landscape…

Source

]]>
Rich Harang <![CDATA[Defending AI Model Files from Unauthorized Access with Canaries]]> http://www.open-lab.net/blog/?p=85254 2025-02-04T19:45:15Z 2024-07-11T19:06:21Z As AI models grow in capability and cost of creation, and hold more sensitive or proprietary data, securing them at rest is increasingly important....]]>

As AI models grow in capability and cost of creation, and hold more sensitive or proprietary data, securing them at rest is increasingly important. Organizations are designing policies and tools, often as part of data loss prevention and secure supply chain programs, to protect model weights. While security engineering discussions focus on prevention (How do we prevent X?), detection (Did X…

Source

]]>
1
Rich Harang <![CDATA[Best Practices for Securing LLM-Enabled Applications]]> http://www.open-lab.net/blog/?p=73609 2024-07-08T20:07:28Z 2023-11-15T18:00:00Z Large language models (LLMs) provide a wide range of powerful enhancements to nearly any application that processes text. And yet they also introduce new risks,...]]>

Large language models (LLMs) provide a wide range of powerful enhancements to nearly any application that processes text. And yet they also introduce new risks, including: This post walks through these security vulnerabilities in detail and outlines best practices for designing or evaluating a secure LLM-enabled application. Prompt injection is the most common and well-known…

Source

]]>
0
Rich Harang <![CDATA[NVIDIA AI Red Team: Machine Learning Security Training]]> http://www.open-lab.net/blog/?p=71491 2024-07-08T20:05:26Z 2023-10-19T20:26:15Z At Black Hat USA 2023, NVIDIA hosted a two-day training session that provided security professionals with a realistic environment and methodology to explore the...]]>

At Black Hat USA 2023, NVIDIA hosted a two-day training session that provided security professionals with a realistic environment and methodology to explore the unique risks presented by machine learning (ML) in today’s environments. In this post, the NVIDIA AI Red Team shares what was covered during the training and other opportunities to continue learning about ML security.

Source

]]>
5
Rich Harang <![CDATA[Securing LLM Systems Against Prompt Injection]]> http://www.open-lab.net/blog/?p=68819 2024-07-08T20:08:30Z 2023-08-03T18:43:12Z Prompt injection is a new attack technique specific to large language models (LLMs) that enables attackers to manipulate the output of the LLM. This attack is...]]>

Prompt injection is a new attack technique specific to large language models (LLMs) that enables attackers to manipulate the output of the LLM. This attack is made more dangerous by the way that LLMs are increasingly being equipped with “plug-ins” for better responding to user requests by accessing up-to-date information, performing complex calculations, and calling on external services through…

Source

]]>
0
Rich Harang <![CDATA[Improving Machine Learning Security Skills at a DEF CON Competition]]> http://www.open-lab.net/blog/?p=57692 2024-07-09T16:36:32Z 2022-11-30T21:00:00Z Machine learning (ML) security is a new discipline focused on the security of machine learning systems and the data they are built upon. It exists at the...]]>

Machine learning (ML) security is a new discipline focused on the security of machine learning systems and the data they are built upon. It exists at the intersection of the information security and data science domains. While the state-of-the-art moves forward, there is no clear onboarding and learning path for securing and testing machine learning systems. How, then…

Source

]]>
0
���˳���97caoporen����