09.26.24
Blog Post
Harnessing the Promise of AI: Mitigating Potential Harms Through a Risk-Based Approach

By Camille Stewart Gloster, Data & Trust Alliance

Striking the right balance between innovation and safety is crucial for efforts to regulate Artificial Intelligence (AI). The principles outlined by the Data & Trust Alliance (D&TA), including the Consequential Decision Approach to regulation, can serve as a powerful framework for mitigating potential harms. This approach recognizes the need for human oversight at critical decision points, ensuring that human context, values, and ethics remain central to the decision-making process.


The D&TA approach has several benefits:

  1. Clear and easy to implement: If applied consistently, this approach promotes harmonized regulation across jurisdictions and leverages existing legal frameworks. This makes policy easier to implement and enables companies of all sizes to keep pace with a shifting regulatory environment.

  2. Innovation-friendly: By focusing on consequences rather than restricting specific technologies, this regulatory model encourages innovation. AI developers can experiment and explore new frontiers, knowing that applications with high-risk consequences and minimal human intervention will need additional safeguards and face regulation.

  3. Adaptability: This approach allows regulators to respond to new developments without overhauling entire systems or imposing rigid restrictions on all AI applications.

  4. Balanced and proportional: Not all AI applications carry the same level of risk. Regulating AI based on the likelihood that its recommendations cause harm and the degree of human oversight allows for a more proportional response, preventing the kind of overregulation that could stymie growth in low-risk areas.

Read the paper ->
Smart regulation that focuses on the highest-risk uses of AI and holds companies accountable for the AI they create and deploy is the best way to protect consumers while unlocking AI’s massive upside potential.
Rob Thomas, Senior Vice President, Software and Chief Commercial Officer, IBM

“These principles reflect the alignment of companies that often have divergent views on Artificial Intelligence policy. One of the strengths of D&TA is the ability to drive consensus on policy through a lens of trust—informed by how policy translates to business practice. This paper will be a helpful tool as companies, government, and civil society continue to come together to create an AI governance structure that unlocks the value of AI while mitigating risk,” remarks Saira Jesani, Executive Director of the Data & Trust Alliance.

The future of AI holds immense promise, but realizing this potential depends on how effectively we can manage its risks. These principles help ensure that regulation is clear and risk-based, so AI development can maximize benefits while minimizing harms.

Just as technology evolves and adapts, so too must the regulations around it, especially with respect to artificial intelligence. The Data & Trust Alliance’s paper on consequential decisions goes beyond simply asking policymakers to take a risk-based approach to AI regulation, and instead provides a helpful framework to ensure that guardrails are appropriately tailored to high-risk use cases and mitigate potentially harmful uses of AI, while still allowing innovation in this technology to flourish.
David Strickland, VP of Global Regulatory Affairs and Public Policy, General Motors
Advances in AI technology make what was once impossible, possible. As the technology evolves, so do its risks and the need for updated regulations. We are proud to coauthor this paper with leaders across diverse industries and hopeful that its insights will support policymakers as they establish guidelines that both encourage innovation and reduce potential harms that could directly affect people’s lives.
Dena Mendelsohn, Privacy Officer and Senior Compliance Manager, Transcarent