A Policy Roadmap for AI Governance: Promoting Innovation and Competition While Prioritizing Safety and Security

By Camille Stewart Gloster, Data & Trust Alliance

If wielded responsibly, AI can foster inclusion, economic growth, greater safety, and well-functioning societies, not least through its capacity to enhance and democratize accuracy, consistency, and creativity. To realize that potential, however, all major stakeholders—AI developers, deployers, governments, and civil society—must play an active part in ensuring these systems are trustworthy.


‘Policy Recommendations for the Responsible Use of Artificial Intelligence,’ released today by the Data & Trust Alliance (D&TA), provides a roadmap for policymakers building AI governance. These recommendations, developed by our consortium of leading business deployers of data and AI, seek to promote innovation and competition while proactively prioritizing safety and security for all stakeholders. We will build upon this initial roadmap by identifying areas deserving more detail and developing a cross-sector perspective to help inform AI governance efforts. D&TA created the Policy Committee to bridge policy and practice by leveraging the experience and expertise of member companies to find alignment and refine a cohesive perspective to inform global policy development.

Everyone has a role to ensure AI is used responsibly—companies, governments and developers. At the heart of these recommendations is the commitment to manage risks—not restrict use—and spark continued investment and innovation.
— JOANN STONIER, Mastercard Fellow and Co-Chair of the D&TA Policy Committee

The policy recommendations were developed by data, AI, ethics, compliance, data privacy, policy and legal experts from Alliance companies including AARP, General Motors, Howso, Humana, IBM, Mastercard, Meta, NFL, Nielsen, Nike, Transcarent, Walmart and Warby Parker.

The report outlines a series of recommendations across a range of topics, including:

  1. Transparency and Explainability: AI models should be understandable and their decision-making processes transparent to users, commensurate with the risk of the intended uses and purposes of the AI system.

  2. Fairness and Non-Discrimination: Measures should be implemented to prevent biases and discrimination in AI systems, promoting equitable outcomes for all users.

  3. Privacy and Security: User data should be protected with robust privacy and security measures, ensuring that data is handled responsibly throughout its lifecycle.

  4. Accountability and Governance: Existing sector-specific regulatory authorities – those best able to regulate AI use in their fields – should apply effective existing regulations.

  5. Harmonization: Where possible, policymakers should strive for consistency and harmonization in definitions and frameworks in order to reduce friction and enhance the innovative potential of AI systems, particularly for low-risk use cases.

  6. Education and Workforce Development: Increased investment in AI education at all levels is necessary to better prepare present and future workers for AI-related job opportunities and to build consumer fluency with AI systems, enhancing trust in those systems and encouraging critical evaluation of AI outputs.

Artificial intelligence has the power to speed innovation and make roadways safer. As part of the Data & Trust Alliance, we’ve developed policy guidelines that will serve an important role in encouraging the safe and responsible deployment of AI.
— DAVID STRICKLAND, VP, Global Regulatory Affairs at General Motors and Co-Chair of the D&TA Policy Committee
Commentary from our member companies

Glen Tullman, CEO, Transcarent: “AI is quickly changing our economy, aspects of our everyday lives, and our health and care. We are only at the beginning, we are uncovering new and innovative ways to apply AI, and it will constantly evolve. It’s important to advance quickly and responsibly, protecting safety, privacy, and trust, while establishing precedents for future success. As we work to transform the consumer health and care experience with generative AI at Transcarent, we are proud to work with the Data & Trust Alliance on developing policy recommendations that ensure innovation can thrive while protecting the safety and earning the trust of the public.”

Karthik Rao, CEO, Nielsen: “I am pleased that D&TA's policy paper recognizes that the responsible development and adoption of AI should build on empirical data sources for both developers and deployers. Reliable data throughout the value chain will give society visibility into, and assurance of, the operation and accuracy of AI. I also strongly agree with D&TA that parties utilizing AI data sources must respect the underlying intellectual property rights of hard-working creators and innovators in the development and deployment of AI.”

Nick Clegg, President, Global Affairs, Meta: “As policymakers and regulators around the world think through the rules that will govern AI, they must reflect the nuances that exist across the sectors where it will be deployed. At the same time, they must regulate in a way that preserves the benefits of open source innovation. The Data & Trust Alliance is a crucial partner in providing a cross-industry perspective on these issues and their policy recommendations are a meaningful step forward for industry and government alike.”

Rob Thomas, Senior Vice President Software and Chief Commercial Officer, IBM: “IBM is proud to offer our guidance in D&TA’s policy recommendations to help AI creators and users deploy ethical and trustworthy technology. Our company has long committed to building and using trustworthy AI, and we’ve also worked alongside policymakers and stakeholders to encourage smart, effective AI regulations that protect citizens while promoting innovation. AI must be governed responsibly to earn trust — and corporations and governments have critical roles to play.”

“CEOs formed the Data & Trust Alliance to develop responsible data and AI practices. These are organizations and practitioners who share a commitment to earning trust with stakeholders as they deploy AI systems in the real world,” said Jon Iwata, Executive Director of the Data & Trust Alliance. “Our policy recommendations reflect this pragmatic perspective.”

We hope these recommendations are constructive and will help encourage critical discussion. We invite like-minded organizations to work with us to put forward further policy recommendations and to engage governments and civil society in advancing trustworthy AI. If you support these recommendations, please reach out and share them with relevant parties.

Camille Stewart Gloster is the Chief AI Policy Strategist at D&TA. Camille is a leader on emerging technology and cybersecurity strategy and policy. She was the first Deputy National Cyber Director over Technology & Ecosystem Security at the White House and has held leadership roles at large organizations including Google, Department of Homeland Security, and Deloitte.