AI Regulation: Nurturing Responsible and Inclusive Innovation
AI Regulation presents a unique set of challenges and opportunities as the technology continues to shape various aspects of society. AI has the potential to improve efficiency, decision-making and safety across multiple industries. However, there are concerns about accountability, transparency, bias and the potential erosion of human agency, all of which can exacerbate existing power imbalances and social inequalities.
AI implementation in education can lead to bias when the training data is skewed or discriminatory, resulting in unequal opportunities. For example, research found gender bias in an AI-powered grading system used by a US university: female students received lower grades despite academic performance similar to that of male students, raising concerns about gender-based inequity in education. In addition, the use of predictive policing algorithms in schools can contribute to the over-policing and criminalization of students of color. AI Regulation will therefore be critical in educational settings.
AI Regulation will also play a role in healthcare, where AI algorithms, such as diagnostic systems or predictive models, can be biased if the training data is skewed or does not represent diverse populations. For example, if the data primarily includes medical records from certain demographics, the algorithm may not perform well for underrepresented groups, leading to disparities in diagnoses and treatments.
AI Regulation also matters in the justice system, where AI systems used in risk assessment tools or facial recognition have been found to exhibit biased outcomes, such as higher false positive rates for certain racial or ethnic groups.
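To make these disparate-outcome concerns concrete, here is a minimal audit sketch, not drawn from any of the systems mentioned above: it computes false positive rates per demographic group for a binary classifier, the kind of check regulators could require of grading, diagnostic or risk-assessment tools. All group names and records below are hypothetical.

```python
from collections import defaultdict

def false_positive_rate_by_group(records):
    """Compute the false positive rate for each demographic group.

    Each record is a (group, predicted_positive, actually_positive) triple;
    a false positive is a positive prediction for a truly negative case.
    """
    counts = defaultdict(lambda: {"fp": 0, "negatives": 0})
    for group, predicted, actual in records:
        if not actual:
            counts[group]["negatives"] += 1
            if predicted:
                counts[group]["fp"] += 1
    return {g: c["fp"] / c["negatives"]
            for g, c in counts.items() if c["negatives"] > 0}

# Hypothetical audit data: (group, predicted_positive, actually_positive).
records = [
    ("group_a", True, False), ("group_a", False, False),
    ("group_a", False, False), ("group_a", True, True),
    ("group_b", True, False), ("group_b", True, False),
    ("group_b", False, False), ("group_b", True, True),
]
print(false_positive_rate_by_group(records))
# {'group_a': 0.333..., 'group_b': 0.666...} -- a disparity worth investigating
```

A large gap between groups, as in this toy output, is exactly the kind of signal the justice-system findings above describe.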
Striking the right balance between the freedom of AI innovation and the responsibility to protect human agency, through AI Regulation, becomes a delicate task, especially considering the complex social dynamics involved.
AI Regulation in the United States
In the United States, the pace of AI regulation has been relatively slow, with no comprehensive federal regulation dedicated solely to AI. However, various governmental bodies have taken steps to regulate specific aspects of AI. For instance, the Federal Trade Commission (FTC) has been actively involved in consumer protection related to AI, issuing guidance on AI use in advertising and marketing and taking enforcement actions against biased algorithms. The National Institute of Standards and Technology (NIST) has also developed a framework to address AI risks, emphasizing the importance of transparency, explainability and accountability in AI development and deployment.
To further advance AI regulation, the National Artificial Intelligence Initiative (NAII) was established to support AI research and development while promoting responsible innovation. The Biden-Harris Administration has made efforts to prioritize responsible AI development and citizens' rights by outlining principles in the Blueprint for an AI Bill of Rights. Congress is currently considering AI-related bills covering data privacy, algorithmic bias and cybersecurity.
While policymakers and government agencies are making progress, they are still grappling with the complexities of AI. Understanding the intricate nature of AI involves recognizing its interconnection with human culture, identity and human rights, which opens up the potential to harness AI for positive societal impact.
To enhance policymakers' understanding and inform decision-making processes, the establishment of an International AI Regulation Agency is proposed. This agency could serve as a global governance board, studying the global effects of AI and promoting international cooperation on AI regulation, taking inspiration from the Intergovernmental Panel on Climate Change.
The Future of Global AI Governance
In that direction, The Future of Global AI Governance Report offers a unique perspective on global AI governance, merging legal expertise from Dentons, AI insight from VERSES and socio-technical standards guidance from the Spatial Web Foundation. It introduces innovative socio-technical standards that aim to reshape the global dialogue on AI governance, addressing challenges like interoperability, explainability and AI's rapid advancement. The report also suggests an AI rating system to evaluate intelligence and autonomy levels, proposing governance frameworks for each level and tackling fundamental questions surrounding AI governance.
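The report describes its rating system only at a conceptual level. As a purely hypothetical sketch of how tiered governance might be encoded, the snippet below maps invented autonomy levels to escalating oversight requirements; neither the levels nor the requirements are taken from the report.

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    """Hypothetical autonomy tiers for an AI rating system."""
    ASSISTIVE = 1   # a human initiates and reviews every action
    SUPERVISED = 2  # acts independently, but a human approves outcomes
    AUTONOMOUS = 3  # acts and adapts without routine human review

# Hypothetical mapping from autonomy tier to governance requirements.
GOVERNANCE_REQUIREMENTS = {
    AutonomyLevel.ASSISTIVE: ["transparency report"],
    AutonomyLevel.SUPERVISED: ["transparency report", "bias audit"],
    AutonomyLevel.AUTONOMOUS: ["transparency report", "bias audit",
                               "independent certification", "shutdown control"],
}

def required_controls(level: AutonomyLevel) -> list[str]:
    """Look up the oversight controls a system at this tier must satisfy."""
    return GOVERNANCE_REQUIREMENTS[level]

print(required_controls(AutonomyLevel.SUPERVISED))
# ['transparency report', 'bias audit']
```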
The report states that to enable this “smarter” infrastructure for the 21st century, it is crucial to establish an AI governance framework based on socio-technical standards that balance the following core needs (a hypothetical sketch of how they might be encoded follows the list):
- A shared understanding of meaning and context between humans and AIs.
- Explainability of AI systems, enabled by the explicit modeling of their decision-making processes.
- Interoperability of models and data that enables universal interaction and collaboration across organizations, networks, and borders.
- Compliance with diverse local, regional, national and international regulatory demands, cultural norms and ethics.
- Authentication and credentialing, to ensure compliance and potential control over critical activities, with privacy, security, identity and transparency embedded by design.
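One illustrative way to imagine these needs in practice, not part of any published Spatial Web standard, is a machine-readable manifest that each AI system publishes to declare how it meets them; the schema and all values below are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemManifest:
    """Hypothetical machine-readable declaration of the five core needs."""
    system_id: str
    context_ontology: str       # shared meaning: vocabulary humans and AIs share
    decision_model_uri: str     # explainability: explicit decision-making model
    data_formats: list[str]     # interoperability: formats others can consume
    jurisdictions: list[str]    # compliance: regulatory regimes honored
    credentials: list[str] = field(default_factory=list)  # authentication

manifest = AISystemManifest(
    system_id="example-diagnostic-agent",
    context_ontology="https://example.org/clinical-terms",   # hypothetical URI
    decision_model_uri="https://example.org/models/triage",  # hypothetical URI
    data_formats=["FHIR", "JSON-LD"],
    jurisdictions=["EU", "US"],
    credentials=["issuer:example-certifier"],
)
print(manifest.system_id, manifest.jurisdictions)
```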
In this respect, smarter governance in AI can be achieved through the implementation of socio-technical standards and intelligent strategies. These standards act as a guiding force, steering AI towards its full potential by facilitating the creation of autonomous systems that are both explainable and interoperable, resulting in enhanced safety, fairness, and alignment with values.
In contrast to the World Wide Web's standards, which often disregarded privacy and security, the Spatial Web standards introduce secure and verifiable communication protocols that minimize the security risks associated with AI by granting data and device access only to authorized entities, with the ability to manage that access as needed.
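The report and standards documents describe these protocols at a high level; the following is a minimal sketch, assuming a credential-based model in which access is denied by default and granted only to entities presenting a credential from a trusted issuer with the right scope. The credential format, issuers and scopes are invented for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Credential:
    """Hypothetical verifiable credential presented by a requesting entity."""
    holder: str
    scope: str    # e.g. "sensor:read" or "device:control"
    issuer: str

TRUSTED_ISSUERS = {"example-registry"}  # hypothetical trust anchor

def authorize(credential: Credential, requested_scope: str) -> bool:
    """Grant access only when the credential comes from a trusted issuer
    and covers the requested scope; everything else is denied by default."""
    return (credential.issuer in TRUSTED_ISSUERS
            and credential.scope == requested_scope)

cred = Credential(holder="agent-42", scope="sensor:read", issuer="example-registry")
print(authorize(cred, "sensor:read"))     # True: trusted issuer, matching scope
print(authorize(cred, "device:control"))  # False: scope was never granted
```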
It's crucial to avoid a future where AI development is solely influenced by opaque large language models or a race towards a centralized superintelligence controlled by a single entity. A more natural evolution of AI might involve the emergence of agile and self-governing "Intelligent Agents" capable of cultivating general and potentially superintelligent abilities. These agents can engage in activities such as information sharing, action clarification, inquiry and even demonstrating curiosity, aligning with a distributed and organic AI progression.
In June 2023, a groundbreaking research paper by VERSES researchers introduced a notable advancement in the design of explainable AI using active inference. The paper outlines methods for developing AI systems that are auditable and explainable, presenting an architecture that equips AI systems with human-like introspection abilities for decision-making. This approach involves two core capabilities: "self-modeling," for internal representations, and "self-access," enabling AI to analyze internal states and decision-making processes. These capabilities lead to a deeper understanding of AI decision-making, continuous improvement, improved outcomes, and the ability to provide thorough explanations for actions.
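The paper presents this architecture conceptually; the snippet below is a loose, hypothetical sketch of the two capabilities, not the authors' implementation. The agent keeps an explicit record of each decision's basis ("self-modeling") and exposes a query over that record ("self-access"); the toy rule stands in for the paper's active-inference machinery.

```python
from dataclasses import dataclass, field

@dataclass
class DecisionRecord:
    """One entry in the agent's explicit model of its own decision-making."""
    observation: str
    action: str
    rationale: str

@dataclass
class IntrospectiveAgent:
    history: list[DecisionRecord] = field(default_factory=list)

    def decide(self, observation: str) -> str:
        # Toy rule standing in for an active-inference decision process.
        action = "escalate" if "anomaly" in observation else "proceed"
        rationale = f"chose '{action}' because the observation was '{observation}'"
        # Self-modeling: record the internal basis for the decision.
        self.history.append(DecisionRecord(observation, action, rationale))
        return action

    def explain(self, index: int = -1) -> str:
        # Self-access: inspect and report on a past internal state.
        return self.history[index].rationale

agent = IntrospectiveAgent()
agent.decide("sensor anomaly detected")
print(agent.explain())
# chose 'escalate' because the observation was 'sensor anomaly detected'
```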
With a focus on adapting to dynamic job markets, safeguarding psychological well-being and upholding core human values, these insights help inform public discourse, enrich educational frameworks and guide governance and policymaking. Policymakers can prioritize the contextual and societal impact of AI, mitigate bias, and promote interdisciplinary research and education. Collaborative efforts among stakeholders, along with inclusive outreach, are vital to ensure accountability, inclusivity and effective human-robot interaction in AI development.
In line with the Future of Global AI Governance Report, to ensure the effectiveness of AI policies and governance, it is crucial to establish comprehensive frameworks covering essential aspects such as privacy, security, transparency, explainability and accountability. Regular monitoring and evaluation of these frameworks, combined with active participation from diverse communities of users, will help maintain an equitable and responsible AI ecosystem.
In conclusion, AI Regulation is a complex task that requires policymakers to strike a delicate balance between fostering AI advancements and safeguarding against potential risks. By adopting responsible and inclusive practices, policymakers can harness AI's potential for positive societal impact while mitigating its potential negative consequences. Collaborative efforts between stakeholders and international cooperation will play a crucial role in effectively regulating AI in a timely and efficient manner.
Written by Dr. Inês Hipólito, PhD
Dr. Hipólito, an internationally renowned researcher in cognitive neuroscience and AI, is an AI Ethicist at VERSES. She champions the integration of ethical considerations, inclusivity and environmental responsibility in AI design.