UW Security and Privacy Research Lab
Advances in language and other machine learning models have enabled a new agentic computing paradigm, in which systems integrate one or more AI agents that promise to take actions on behalf of users based on their natural language instructions. Although these systems can enable many exciting use cases and have already begun to transform the computing landscape, there are serious security, privacy, and safety risks to consider and mitigate. Building on its long history of studying and improving security and privacy for emerging technologies, the UW Security and Privacy Research Lab (with collaborators) is teaching and conducting research in this space.
Spring 2026
This PhD-level course explores security challenges in agentic AI systems — software that takes actions on users' behalf based on natural language commands. The course covers traditional systems security principles, identifies vulnerabilities in current agentic systems, and examines defenses and secure system designs.
IEEE Symposium on Security & Privacy, May 2026
We propose automated permission management for AI agents, arguing that conventional permission models are inadequate for the agentic paradigm. Through a user study, we identify factors influencing users' permission decisions and develop an ML-based prediction model.
Agents in the Wild Workshop @ ICLR, April 2026
We investigate how emerging agentic browsers handle the same-origin policy, finding that prompt injections can leverage browser agents to circumvent cross-origin protections. We demonstrate a full proof-of-concept attack on ChatGPT Atlas and identify preconditions for attacks on several other agentic browsers.
Network and Distributed System Security Symposium (NDSS), February 2025
We propose IsolateGPT, an architecture that brings execution isolation to LLM-based systems with third-party apps, protecting against many security, privacy, and safety issues.
AAAI Conference on AI, Ethics, and Society (AIES), October 2024
We propose a framework for analyzing the security, privacy, and safety of third-party integrated LLM platforms. Applying it to OpenAI's plugin ecosystem, we develop an attack taxonomy exploring how platform stakeholders could exploit their capabilities, and uncover plugins that concretely demonstrate these risks.