Posts by Joseph Lucas
Cybersecurity
Apr 29, 2025
Structuring Applications to Secure the KV Cache
When interacting with transformer-based models like large language models (LLMs) and vision-language models (VLMs), the structure of the input shapes the...
11 MIN READ
Cybersecurity
Dec 16, 2024
Sandboxing Agentic AI Workflows with WebAssembly
Agentic AI workflows often involve the execution of large language model (LLM)-generated code to perform tasks like creating data visualizations. However, this...
7 MIN READ
Cybersecurity
Jul 11, 2024
Defending AI Model Files from Unauthorized Access with Canaries
As AI models grow in capability and cost of creation, and hold more sensitive or proprietary data, securing them at rest is increasingly important...
6 MIN READ
Data Science
Jun 27, 2024
Secure LLM Tokenizers to Maintain Application Integrity
This post is part of the NVIDIA AI Red Team’s continuing vulnerability and technique research. Use the concepts presented to responsibly assess and increase...
6 MIN READ
Cybersecurity
Oct 19, 2023
NVIDIA AI Red Team: Machine Learning Security Training
At Black Hat USA 2023, NVIDIA hosted a two-day training session that provided security professionals with a realistic environment and methodology to explore the...
4 MIN READ
Data Science
Oct 04, 2023
Analyzing the Security of Machine Learning Research Code
The NVIDIA AI Red Team is focused on scaling secure development practices across the data science and AI ecosystems. We participate in open-source security...
12 MIN READ