AI agents are reshaping how people and organizations interact with technology. Whether recommending a workout class, supporting a customer service call, or summarizing internal code, AI agents are already embedded across industries and workflows. These systems use real-time data and external feedback to act on behalf of users, bringing autonomy, personalization, and scale to how information is processed and decisions are made.
As organizations and policymakers grapple with how to govern these emerging systems, one thing is clear: Regulation must be grounded in context. The risks and benefits of AI agents vary not by the label “AI,” but by how, why, and where the system is used and what kind of data is involved.
In our new paper, “AI Agents, Privacy, and the Importance of Context in Data Regulation,” we show how many AI agent applications already operate within today’s legal frameworks, including sector-specific privacy laws, state statutes, and long-standing oversight from agencies like the Federal Trade Commission.
We draw on real-world use cases from retail, customer support, fitness, enterprise software, and more to illustrate how context determines whether and how AI agents raise new privacy considerations. These examples demonstrate that AI agents are not inherently riskier than the systems they often replace. In fact, when implemented responsibly, they can strengthen privacy by embedding constraints and automating controls.
To guide policymakers as more complex AI agent use cases emerge, we propose six core principles for regulating AI agents. These principles focus on risk, user experience, transparency, and alignment with existing laws. They offer a foundation for developing rules that are precise, proportional, and adaptable.
We also preview areas where future regulatory questions are likely to arise, including how data provenance, lineage, and explainability should shape the design and oversight of next-generation AI agents. This is an area of active exploration within the Data & Trust Alliance, and one we look forward to advancing with our members and partners.
The most common AI agents deployed today operate with clear data boundaries and serve specific user needs. But as their capabilities grow, especially in more sensitive contexts, governance frameworks will need to evolve.
A one-size-fits-all approach will not work. Only by taking context into account can we turn abstract concerns into actionable standards, ensuring strong protections in high-risk areas while avoiding unnecessary barriers in low-risk applications. By focusing on purpose, data sensitivity, and potential harm, policymakers can promote responsible innovation and build trust in AI systems.
We welcome continued dialogue with policymakers, industry leaders, and the broader public as AI agents evolve. By centering context, we can build a regulatory approach that is both adaptable and trustworthy.
For questions or to learn more about our upcoming work on data provenance, transparency, and contextual governance, contact the D&TA Team.