AI Agents, Privacy, and the Importance of Context in Data Regulation
Project launched: 08.06.25
Page updated: 08.06.25

AI agents are reshaping how people and organizations interact with technology. Whether recommending a workout class, supporting a customer service call, or summarizing internal code, AI agents are already embedded across industries and workflows. These systems use real-time data and external feedback to act on behalf of users—bringing autonomy, personalization, and scale to the way information is processed and decisions are made.

As organizations and policymakers grapple with how to govern these emerging systems, one thing is clear: Regulation must be grounded in context. The risks and benefits of AI agents vary not by the label “AI,” but by how, why, and where the system is used and what kind of data is involved.

A Contextual Approach to Regulating AI Agents

In our new paper, AI Agents, Privacy, and the Importance of Context in Data Regulation, we show how many AI agent applications already operate within today’s legal frameworks, including sector-specific privacy laws, state statutes, and long-standing oversight from agencies like the Federal Trade Commission.

We draw on real-world use cases from retail, customer support, fitness, enterprise software, and other sectors to illustrate how context determines whether and how AI agents raise new privacy considerations. These examples demonstrate that AI agents are not inherently riskier than the systems they often replace. In fact, when implemented responsibly, they can offer stronger privacy protections by embedding constraints and automating controls.

To guide policymakers as more complex AI agent use cases emerge, we propose six core principles for regulating AI agents. These principles focus on risk, user experience, transparency, and alignment with existing laws. They offer a foundation for developing rules that are precise, proportional, and adaptable.

We also preview areas where future regulatory questions are likely to arise, including how data provenance, lineage, and explainability should shape the design and oversight of next-generation AI agents. This is an area of active exploration within the Data & Trust Alliance, and one we look forward to advancing with our members and partners.

Aligning Governance with the Reality of AI Agents

The most common AI agents deployed today operate with clear data boundaries and serve specific user needs. But as their capabilities grow, especially in more sensitive contexts, governance frameworks will need to evolve.

A one-size-fits-all approach will not work. We can only turn abstract concerns into actionable standards by taking context into account. Doing so ensures strong protections in high-risk areas, while avoiding unnecessary barriers in low-risk applications. By focusing on purpose, data sensitivity, and potential harm, policymakers can promote responsible innovation and build trust in AI systems.

As AI agents become more integrated into the technologies we use every day, it’s essential that our regulatory frameworks keep pace without overcorrecting. This paper emphasizes a practical, risk-based approach that starts with how AI agents are actually being used. Context matters, and by grounding regulation in real-world applications, we can promote innovation while protecting consumers.
— Jeff Brueggeman, Vice President of Global Public Policy, AT&T; D&TA Policy Committee Co-Chair
Responsibly governing agentic AI will require moving beyond abstract concerns and focusing on how these tools function in real environments. This paper provides a roadmap for aligning regulatory approaches with actual deployment contexts, ensuring meaningful guidelines without stifling progress. It underscores why policymakers should prioritize targeted, adaptive guardrails that evolve alongside the technology.
— Ritika Gunner, General Manager, Data and AI, IBM
Regulating AI effectively means understanding the environment in which it operates. This work highlights how existing laws already apply to many AI agent use cases—and where thoughtful, contextual updates may be needed as new technologies evolve. Our goal is to support policymakers in building agile, forward-looking frameworks that strengthen trust and safeguard individuals.
— JoAnn Stonier, Fellow of Data and AI, Mastercard; D&TA Policy Committee Co-Chair
The agentic AI revolution is here, and Salesforce is bringing the digital workforce to innovate alongside employees. But we know that we must lead with trust, and clear, interoperable regulatory frameworks ensure that we can deliver for our customers while prioritizing the responsible development and deployment of AI. This work provides the basis for a discussion about what guardrails already exist and what future regulations must consider as we build a trust-first, agent-empowered future.
— Sabastian Niles, President and Chief Legal Officer, Salesforce

We welcome continued dialogue with policymakers, industry leaders, and the broader public as AI agents evolve. By centering context, we can build a regulatory approach that is both adaptable and trustworthy.

For questions or to learn more about our upcoming work on data provenance, transparency, and contextual governance, contact the D&TA Team.