Security, Privacy, and Safety for Agentic AI Systems

UW Security and Privacy Research Lab

Advances in language and other machine learning models have enabled a new agentic computing paradigm, in which systems integrate one or more AI agents that promise to take actions on behalf of users based on their natural language instructions. Although these systems can enable many exciting use cases and have already begun to transform the computing landscape, there are serious security, privacy, and safety risks to consider and mitigate. Building on its long history of studying and improving security and privacy for emerging technologies, the UW Security and Privacy Research Lab (with collaborators) is teaching and conducting research in this space.

Courses

CSE 599R: Agentic Systems Security

Instructor: Franziska Roesner

Spring 2026

This PhD-level course explores security challenges in agentic AI systems — software that takes actions on users' behalf based on natural language commands. The course covers traditional systems security principles, identifies vulnerabilities in current agentic systems, and examines defenses and secure system designs.

Projects

Towards Automating Data Access Permissions in AI Agents

Yuhao Wu, Ke Yang, Franziska Roesner, Tadayoshi Kohno, Ning Zhang & Umar Iqbal

IEEE Symposium on Security & Privacy, May 2026

We propose automated permission management for AI agents, arguing that conventional permission models are inadequate for the agentic paradigm. Through a user study, we identify factors influencing users' permission decisions and develop an ML-based prediction model.

Agentic Browsers and the Same-Origin Policy

Franziska Roesner & David Kohlbrenner

Agents in the Wild Workshop @ ICLR, April 2026

We investigate how emerging agentic browsers handle the same-origin policy, finding that prompt injections can leverage browser agents to circumvent cross-origin protections. We demonstrate a full proof-of-concept attack on ChatGPT Atlas and identify preconditions for attacks on several other agentic browsers.

IsolateGPT: An Execution Isolation Architecture for LLM-Based Systems

Yuhao Wu, Franziska Roesner, Tadayoshi Kohno, Ning Zhang & Umar Iqbal

Network and Distributed System Security Symposium (NDSS), February 2025

We propose IsolateGPT, an architecture that brings execution isolation to LLM-based systems with third-party apps, protecting against many security, privacy, and safety issues.

LLM Platform Security: Applying a Systematic Evaluation Framework to OpenAI's ChatGPT Plugins

Umar Iqbal, Tadayoshi Kohno & Franziska Roesner

AAAI Conference on AI, Ethics, and Society (AIES), October 2024

We propose a framework for analyzing the security, privacy, and safety of third-party integrated LLM platforms. Applying it to OpenAI's plugin ecosystem, we develop an attack taxonomy exploring how platform stakeholders could exploit their capabilities, and uncover plugins that concretely demonstrate these risks.

Acknowledgments

This work has been supported in part by the U.S. National Science Foundation and by gifts from Microsoft. The website was designed in part by Claude Code.