By Camille Stewart Gloster, Data & Trust Alliance
Striking the right balance between innovation and safety is crucial for efforts to regulate Artificial Intelligence (AI). The principles outlined by the Data & Trust Alliance (D&TA), including the Consequential Decision Approach to regulation, can serve as a powerful framework for mitigating potential harms. This approach recognizes the need for human oversight at critical decision points, ensuring that human context, values, and ethics remain central to the decision-making process.
The D&TA approach has several benefits:
Clear and easy to implement: If applied consistently, this approach promotes harmonized regulation across jurisdictions and leverages existing legal frameworks. This makes policy easier to implement and enables companies of all sizes to keep pace with a shifting regulatory environment.
Innovation-friendly: By focusing on consequences rather than restricting specific technologies, this regulatory model encourages innovation. AI developers can experiment and explore new frontiers, knowing that applications with high-risk consequences and minimal human intervention will require additional safeguards and be subject to regulation.
Adaptable: This approach allows regulators to respond to new developments without overhauling entire systems or imposing rigid restrictions on all AI applications.
Balanced and proportional: Not all AI applications carry the same level of risk. Regulating AI based on the likelihood that its recommendations cause harm and the degree of human oversight involved allows for a more proportional response, preventing the kind of overregulation that could stymie growth in low-risk areas.
“These principles reflect the alignment of companies that often have divergent views on Artificial Intelligence policy. One of the strengths of D&TA is the ability to drive consensus on policy through a lens of trust—informed by how policy translates to business practice. This paper will be a helpful tool as companies, government, and civil society continue to come together to create an AI governance structure that unlocks the value of AI while mitigating risk,” remarks Saira Jesani, Executive Director of the Data & Trust Alliance.
The future of AI holds immense promise, but realizing this potential depends on how effectively we can manage its risks. These principles keep regulation clear and risk-based, helping AI development maximize benefits while minimizing harms.