In the second part of the webinar series The Future of Global AI Governance, “A New Pathway for Policymakers,” industry leaders delved into the complexities and nuances of standards, ethics and the ever-evolving landscape of artificial intelligence. Combining the AI acumen of VERSES, the legal expertise of Dentons, the world’s largest law firm, and guidance on socio-technical standards from the Spatial Web Foundation, the webinar panelists offered key insights during this critical period of AI development.
Gabriel René, Founder and CEO of VERSES; Philippe Sayegh, Chief Adoption Officer at VERSES; George Percivall, Engineering Fellow with the Spatial Web Foundation; and Peter Stockburger, Managing Partner at Dentons Venture Technology and Emerging Growth Group, discussed recent global developments in AI policy, including the White House executive order, the UK AI Safety Summit and the G7’s recommended code of conduct.
The experts emphasized the significance of governments taking AI technology seriously and the need for a comprehensive AI policy at a global scale. They discussed the challenges in achieving a global consensus on the matter, particularly in balancing local specificities with a global approach. The group also highlighted the distinction between conversations around principles and standards in AI policy and the importance of focusing on the algorithms themselves.
Global developments in AI policy have been numerous of late, including the executive order, safety summit and code of conduct mentioned above. Collectively, the group agreed on the significance of this worldwide governmental recognition of the technology.
“I think the top line here is that governments are taking AI technology very seriously,” said René. “I don't think we've ever seen this kind of mobilization in government and regulation at this scale before and so quickly.”
The panelists noted the need for a comprehensive global AI policy and the challenges in achieving a global consensus on the matter. The speakers discussed how the White House's executive order calls for developing global technical standards around AI and how NIST promotes consensus industry standards. They also emphasized the need to encode societal ethics, values, laws and rules into AI by creating socio-technical standards, which will allow the translation of human values into something machines can understand.
As technology and artificial intelligence assume increasingly prominent roles in our daily lives, the question becomes how to leverage technology itself to regulate AI effectively. The interplay between technological advancement and regulatory measures sheds light on the symbiotic relationship necessary to balance innovation and responsible governance.
The group discussed the success of the European Union's Flying Forward 2020 project, in which self-governing drones using spatial web standards and VERSES’ cognitive computing platform successfully flew their missions in compliance with regional laws and policies. HSML (Hyperspace Modeling Language) and HSTP (Hyperspace Transaction Protocol) enable compliance to be built into the network, allowing machines to align with our laws and principles in both the digital and the physical world. The process begins with syntactic parsing and pattern recognition, identifying time, activities, users, domains and space in unstructured legal text.
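The parsing step described above can be illustrated with a minimal sketch. The patterns, category names and example rule below are hypothetical, chosen only to show how unstructured legal text might be tagged with the time, activity, user, domain and space elements the panel mentions; a production system would rely on far richer NLP models and the actual HSML vocabulary rather than simple regular expressions.

```python
import re

# Hypothetical patterns for the five element types the panel mentions.
# A real system would use trained language models, not hand-written regexes.
PATTERNS = {
    "time":     r"between \d{1,2}:\d{2} and \d{1,2}:\d{2}",
    "activity": r"\b(?:flying|fly|operate|land)\b",
    "user":     r"\b(?:operator|pilot|drone)\b",
    "domain":   r"\b(?:airspace|aviation)\b",
    "space":    r"\b(?:within \d+ meters?|restricted zone)\b",
}

def parse_rule(text: str) -> dict:
    """Tag a legal sentence with the structured elements found in it."""
    text_lower = text.lower()
    return {
        label: re.findall(pattern, text_lower)
        for label, pattern in PATTERNS.items()
    }

# An invented drone-policy sentence, loosely inspired by the project above.
rule = ("The drone operator may fly only between 08:00 and 18:00 "
        "and must not operate within 150 meters of a restricted zone "
        "in controlled airspace.")
tags = parse_rule(rule)
```

Once a rule is decomposed this way, each tagged element can in principle be mapped onto a machine-readable model so that compliance checks run inside the network itself, which is the role the panel describes for HSML and HSTP.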
“What’s interesting in the socio-technical standards is we are in a position to define how much autonomy we want to give to the agents,” Sayegh explained. “The key here is to be able to govern the network. It is something that we need to start working on sooner [rather] than later. If we don't embed a governance capability at the modeling language level, at the algorithm level, it's going to be complicated to change trajectories or backtrack if we want to govern things at scale.”
A standards-based approach to AI governance is worth exploring, both for regulating AI and for the role technology can play in resolving the ambiguity inherent in law and spoken language. The specifications and standards that emerge from the technology challenge need to be tested and refined, and grassroots efforts should be involved in that process. The public policy debate is polarized, and this group of industry leaders suggests a bigger-tent perspective that considers the long arc of AI development.
“The challenge of actually translating legislation into machine readable code is an issue that crops up in my head as an attorney,” Stockburger stated. “We have laws that are written in an ambiguous way: be reasonably safe, don’t engage in unfair business practices.”
“These general concepts that are often played out in courts are highly circumstantial,” he continued. “But what does the future look like in terms of writing legislation to govern AI?”
As the conversation continued, Percivall offered an interoperability perspective. Meeting societal expectations, he said, depends on “the results of the testing that we really need to do and demonstrate and get people’s feedback in these times now about implementing these directives, seeing the testing, understanding the standards that came out so we can move forward with an industry that works together.”
The panel of AI experts emphasized the importance of socio-technical standards in AI, not only in regulations but also in the crucial realm of simulation. René outlined the necessity of standards for creating simulation tools. Like those governing electricity, these standards provide a common language for diverse stakeholders. They enable simulations at various scales, from local to global, facilitating comprehensive testing and learning from potential mistakes in a controlled environment.
René emphasized the importance of looking at the five- to 10-year horizon for artificial intelligence deployment.
“You want to invest in the simulations of this,” René said. “So you need tools that operate on the standards. Companies like VERSES are building those tools, companies like Dentons can build those tools, anyone can build those tools as the standards come out.”
These simulations prove valuable because errors surface in a simulated environment rather than in real-world scenarios.
There is an inherent tension between the drive for innovation and the imperative of ensuring safety in developing and deploying AI technologies. The consensus is that licensing is essential for technologies with global implications. Parallels can be drawn between the evolution of the World Wide Web and the need for an authority, whether governmental or industry-driven, to track and certify the production and distribution of AI technologies, and likewise with the FAA’s role in aviation. The shift toward autonomous systems necessitates reevaluating mechanisms for managing risk, going beyond the capabilities of existing aviation authorities, for example. It is important to explore, with nuance, where licensing or certification should apply and how each could foster innovation without compromising safety.
From global policy considerations to the intricate dance between technology and regulation, this webinar series offers a unique lens through which to view the challenges and opportunities inherent in shaping the future of artificial intelligence.
Challenges in AI governance will continue to emerge in uncharted territories. The need for new solutions while balancing innovation with safety, privacy and ethical considerations remains a pressing concern.
The Future of Global AI Governance webinar series encapsulates the depth and breadth of challenges and opportunities in AI governance. Part two of the series provided a holistic view, from the intricacies of licensing to the development of socio-technical standards and the ethical considerations embedded in responsible AI.
As society hurtles toward an AI-powered future, these discussions serve as beacons guiding the development of robust, adaptable and ethical frameworks that will define the relationship between humanity and artificial intelligence.
As the journey continues, the importance of ongoing dialogue, collaboration and adaptation cannot be overstated. The complexities of AI governance necessitate a multifaceted approach that considers technological advancements, ethical considerations and global collaboration.
By fostering an environment where diverse perspectives converge, we pave the way for a future where AI augments our capabilities and aligns with our shared values and aspirations. We are headed into an era where responsible AI governance is not just a goal but a shared commitment to building a future where technology serves humanity in the best possible way.
For more information on AI governance, or to watch Part I and Part II of The Future of Global AI Governance webinar series, visit: www.verses.ai/ai-governance.