When J. Robert Oppenheimer, otherwise known as the ‘father of the atomic bomb’, witnessed the explosive force of the first atomic bomb test at the Trinity site in New Mexico in 1945, he later recalled a line from the Bhagavad Gita, a sacred Hindu scripture: “Now I am become Death, the destroyer of worlds”. The genesis of nuclear weapons, whilst a perceived victory for the US in the arms race and in science and defense innovation, meant that humankind had created its own ‘destroyer’, the means of its own annihilation. In a statement signed by AI experts ranging from Google’s DeepMind, Anthropic and OpenAI to university scholars and governmental officials, it was declared that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”.[1] Placing the threat of AI in parallel with the threat of nuclear war underscores the catastrophic potential of AI if it is maliciously mishandled and left ungoverned at the global and regional levels, where cooperation and understanding between nations must be nurtured and maintained.
While there are still many unknowns in the future development and use of AI, and many dismiss the notion of a ‘singularity’, an all-powerful AI surpassing human intelligence to ultimately destroy us, as too futuristic and intangible, AI is likely to challenge the global order and geopolitics as we know them; indeed, it already has.[2] As the threat of a ‘new’ arms race looms over geopolitics, a global, multi-stakeholder governance approach is desperately needed: one that includes voices and opinions from marginalized groups and the so-called ‘Global South’; places the onus of accountability and transparency on tech companies and AI developers to mitigate bias and injustice; and upholds and monitors principles of safety and security, non-discrimination and human rights, privacy, accountability and explainability, and awareness. The range of actors involved in the AI game is continuously expanding. Lessons from the nuclear nonproliferation regime point to establishing and nurturing a global regime of norms and values before it is too late.
What is the current lay of the AI land?
In 2018, the UK announced in its AI Sector Deal that it would commit nearly £1 billion to boost its position as a global “leader” in AI, paving the way to safety and stability with “democratic values”, ensuring that any “international agreements embed our ethical values” and “setting an example… with the government leading from the front”.[3] Five core principles spanning safety, transparency, and accountability were set out to guide the AI ecosystem’s development whilst protecting privacy, data, and the consumer. Progress on the UK’s plan to boost investment, ensure AI benefits across sectors and regions, and govern AI ‘effectively’ has included the Department of Health and Social Care committing £1.4 million towards AI research on racial and ethnic inequalities, the Centre for Data Ethics and Innovation creating a ‘Roadmap to Assurance’ for AI, and the UK government’s sustained engagement in multilateral forums such as the OECD and UNESCO.[4] Largely missing from documents such as the ‘Roadmap to Assurance’ is the ethical dimension of AI, notably the “ethical and societal impacts of AI amongst all those developing and using AI”.[5]
Like the UK, the Biden administration’s non-binding ‘AI Bill of Rights’ promotes five guiding principles for the development of AI. In the EU, the ‘AI Act’ is billed as the “first-ever comprehensive legal framework on AI worldwide”, helping to ensure that “AI systems respect fundamental rights, safety and ethical principles” by adopting a risk-based approach to classifying AI systems and permitting the development of those with certain characteristics.[6] While the EU champions precaution and regulation, the US focuses on technological development, innovation, and economic competition. China maintains state-led control and oversight of AI alongside its assertive geopolitical moves and investments. Much like the context prior to the founding of the nuclear nonproliferation regime, competing states with competing interests are deploying and using AI at such different speeds and for such different purposes that some analysts have argued AI “cries out for governance, not merely to address the challenges and risks but to ensure we harness its potential in ways that leave no one behind”.[7] I would add “and ensure no harm is done to mankind, whether in the name of scientific and technological advancement, or due to malicious incentives”.
What can we learn from the Nuclear Nonproliferation regime?
Whereas the proliferation and eventual use of nuclear weapons would almost certainly lead to human destruction, some posit that “the trajectory of AI is unlikely to lead to either utopia or apocalypse” because the technology offers benefits as well as harms, carrying both “rights supporting and oppressive potential”.[8] Nonetheless, the harmful potential of AI and its proliferation in the wrong hands should sound similar alarm bells. Although the nuclear and AI arms races differ, the proliferation of AI brings a new level of threat: AI models and code can be copied and deployed far more easily and quickly than a nuclear weapon can be built. Zimmer and Rodehau-Noack write that this drastically and dangerously alters the potential harm from the proliferation of AI, “particularly because - in contrast to the strict governmental oversight of nuclear weapons - AI development is highly commercialized and privatized”.[9] In this sense, attempting to predict dystopian futures of singularity and an evil AI taking over the human race is futile and unproductive, because the world is already experiencing something akin to an “AI nuclear winter” and disruption to political, social and ecological systems.[10] During the nuclear arms race, nation-states developed weapons capable of destroying one another (and the world) as a final resort; the world lived in fear but would remain relatively undamaged unless the US or the USSR pressed the nuclear button. With the AI arms race, whatever the final end-goal of nation-states or tech companies, the world will inevitably change along the way.
What would Global AI Governance look like and what purpose would it serve?
‘Global governance’ can be defined as a framework of norms, standards, institutions, and rules guiding interaction between states to foster cooperation and prosperity and to resolve contention. ‘Global AI governance’ would therefore seek to build a multi-stakeholder environment where a range of voices collaborate and cooperate on AI topics to benefit and protect humanity. As the UNESCO ‘Recommendation on the ethics of artificial intelligence’ stipulates, the guiding principles of such a framework would include: proportionality and ‘do no harm’; safety and security; fairness and non-discrimination; sustainability; rights to privacy and data protection; human oversight and determination; transparency and explainability; responsibility and accountability; awareness and literacy; and multi-stakeholder and adaptive governance.[11] Multilateral cooperation would comprise collaboration on research and knowledge, governance and regulation, and universal norms. Importantly, the regime would have to be “anticipatory, … responsive and agile, … enforceable where necessary, … whilst avoiding duplication”.[12] ‘Anticipatory’ because the AI landscape is constantly changing, and calling out the risks and potential impacts of actions will be crucial for establishing protective and preventative measures. ‘Enforceable’ because it is far easier to agree that AI should be trained to be non-biased and non-discriminatory than to achieve this in practice, when humans feed these biases into AI, consciously or not.
‘Avoiding duplication’ relates to the ability of such a global AI regime to draw principles and values from existing foundational frameworks such as international human rights (IHR), the UN’s Sustainable Development Goals (SDGs) and the UN’s Guiding Principles on Business and Human Rights (UNGP). Human rights undoubtedly relate to the AI conversation because while AI can be a rights enabler, it can also limit a person’s right to healthcare, education, housing, protection, and much more. While IHR requires states to protect human rights within their borders and beyond, the UNGP requires the same of businesses. Much like the scope of IHR, AI will impact an individual’s social, civil, political, economic, and cultural rights, as well as likely impacting marginalized or protected groups such as women, children, ethnic minorities, refugees and those with disabilities in different and disproportionate ways.[13]
Some argue that a human rights and norm-based AI framework should not replace or ‘undermine’ existing frameworks, given their applicability and the fact that these are cornerstone frameworks that currently govern the world and are respected by most states. In fact, IHR would provide an already-defined and agreed-upon set of norms and standards, a “shared language” to encourage cooperation (the development of which is often difficult and faces high barriers to entry) and an “architecture for convening, deliberation and enforcement”.[14] IHR can “provide an aspirational, positive roadmap that can help guide decision-making, including the balancing of trade-offs”.[15] Such a framework would make upholding human rights both the primary guiding principle for all AI development and use and a key desired end-goal. Currently, this notion is challenged by relentless geopolitical quests for power and competition. A paper from Chatham House helpfully draws attention to the difference between ethical and human-rights-based approaches to governing AI, and why a combination of both is necessary and reinforces the end-goal through a human-centric lens.[16] While ethical principles have often supported the formation of rules and regulations, ethics itself is “a branch of philosophy, not a system of norms: multiple versions are possible” and, because of its “malleability”, accountability and responsibility are difficult to enforce.[17] Consequently, human rights “crystallize a set of ethical values into international norms” with an “agreed blueprint for the protection of human values and the common good that has proven itself capable of adaptation to new circumstances”.[18] In this sense, ethics and human rights reinforce one another: the former is a guiding compass on what is deemed right and wrong, while the latter establishes norms and behaviors around these principles. IHR norms have the benefit of universality and therefore broad applicability, but the numerous grey areas in AI’s use and impact will put that applicability to the test.
Promoting a ‘multi-stakeholder’ approach
There are many national and global level discussions on AI that are already starting to produce guiding documents and principles. Interestingly, some posit that “none of the existing initiatives can address the challenges of maximizing the opportunities of AI while identifying and minimizing the risks alone”.[19] This may be because while existing frameworks focus on the use of AI, “ethical questions regarding AI systems pertain to all stages of the AI system lifecycle”, ranging from research and design to deployment, maintenance, financing, monitoring, and termination.[20] Guiding principles that protect and enshrine human rights in the development of AI are useful, but the forums in which these were decided and drafted often do not include technology companies and AI developers, the people and organizations who can ultimately provide transparency into the design of such systems. Lessons from the phenomenon of social media show us that regulation may come from governments, but understanding algorithms, data privacy, and potential harms has to be a conversation that includes technology companies. AI developers and designers could provide information about how they built their datasets, the assumptions made during the coding of systems, quality assurance processes, and other topics.
Another guiding framework to help deploy and use AI for a ‘positive’ impact is the notion of human-centered AI (HCAI), which refers to “the development of AI technologies that prioritize human needs, values and capabilities at the core of their design and operation”.[21] AI developers’ north star would be enhancing human capabilities, excellence, and wellbeing rather than scaling AI to replace or diminish human participation in society and the workforce. An example given by the Interaction Design Foundation is that of healthcare: while the traditional and current use of AI seeks to optimize efficiency in something like diagnostic procedures, HCAI may do this and additionally provide patient mental health support, thus making the process more holistic and personal.[22] The principles of HCAI are similar to those of the human rights and ethics-based approach outlined in UNESCO’s recommendation; crucially, however, they highlight where technology companies and AI developers can fill the gaps: empathy and understanding, ethical consideration and bias mitigation, user involvement in the design process, accessibility and inclusivity, transparency and explainability, continuous feedback and improvement, and balance between automation and human control.[23] A multi-stakeholder approach would therefore factor in technology companies as one of many voices, albeit an important one.
What’s missing from the current discourse?
The Bletchley Declaration, signed by countries at the AI Safety Summit in 2023, acknowledged the role that ‘all’ actors have to play in securing the safety of AI, including “nations, international fora and other initiatives, companies, civil society, and academia”.[24] Interestingly, there is also a commitment to “engage and involve a broad range of partners” due to the importance of “inclusive AI and bridging the digital divide”, and supporting “sustainable growth and [addressing] the development gap”.[25] As with other international fora, engaging the so-called ‘Global South’ has often been an afterthought rather than a prerequisite to fostering global collaboration and cooperation. Some efforts have arisen from G20 collaboration, such as the 2023 New Delhi Leaders’ Declaration, the BRICS Institute for Future Networks, and China’s Global AI Governance Initiative, which seeks to “promote equal rights and opportunities for all nations” in AI development.[26]
While the ‘Global South’ may face structural limitations to developing AI systems (especially at the speed and scale of the ‘Global North’) due to a lack of talent and capacity, data, models and tools, and technical infrastructure, engaging it in international discourse on the establishment of AI norms and standards should proceed in parallel with strategic investment in AI capability projects.[27] This notion is reiterated by an article from the World Economic Forum, which states that “governments should utilize the expertise and capacity of global technology providers as well as local development communities to co-design the roadmap in meeting the key requirements for responsible deployment of AI”.[28] Nurturing an international forum where ‘Global South’ countries are included in discussions on ethical and human-rights-based AI from the start ensures universal agreement on principles, and therefore a higher likelihood of universally safe use and deployment. As many digital development projects looking to leapfrog the ‘Global South’ out of poverty have found, a one-size-fits-all approach is unproductive and potentially harmful. Each country has its own social realities and values, cultural contexts, and economic and political forces. Bringing the ‘Global South’ into the conversation will therefore create a more holistic and ‘future-ready’ AI framework; ensure the dispersion of AI resources and knowledge is equitable and designed to benefit all countries rather than an ‘elite’ few; and ensure that investments in AI, wherever they may be, are carried out with an understanding of local contexts to boost communities’ standards of living and ‘leave no one behind’.
The Road Ahead
The lessons and legacies of the nuclear nonproliferation regime and the birth of nuclear weapons underscore the importance of establishing a human-centric, global, and inclusive framework that brings nation-states, civil society, academia, and technology companies together. With a shared goal of advancing mankind and scientific discovery, but not at the expense of people (especially the marginalized and vulnerable), we have the opportunity to establish universal norms that listen to differing global voices and political systems yet are united by shared values.
As the nations at the forefront of the AI arms race, and arguably those with the most technologically advanced capabilities, the US and China should engage in bilateral discussions with the primary goal of promoting dialogue and transparency. Governments around the world should boost public education on the role of human rights in AI deployment and use, and promote the development of AI that benefits society as a whole and supports the UN’s SDGs. Alongside this, newly established AI frameworks and commitments should be made actionable and enforceable, and binding where realistic at this moment in time, working in harmony to avoid inefficient duplication and to reinforce one another’s goals. For example, the Hiroshima AI Process, whose Friends Group was launched at the OECD Ministerial Council Meeting in May 2024 with the support of 49 countries and regions, should now focus its weight on bringing more countries within its scope and ensuring interoperability between its forum and other like-minded AI initiatives.[29]
A multi-stakeholder, global AI governance regime should be established to bridge discussions and motivate information-sharing, boost cooperation, mitigate risks, and minimize the ‘grey’ areas. Tech companies and AI developers should continue to explore HCAI and promote a human-centered framework within their organizations, including for the identification of biases and the continuous improvement of systems. Ultimately, all initiatives should seek to keep the dialogue on AI flowing and growing among all parties as new stakeholders and voices are brought into the ever-changing landscape.
Nathalie Balabhadra is a Cybersecurity Consultant at Wavestone, with a background in global affairs, international security, and cybersecurity. She holds an undergraduate degree in Economics and Politics from the University of Edinburgh and a Master’s in Global Thought from Columbia University in New York. She won the University of Edinburgh’s Russell Keat Award for ‘Most Distinguished Dissertation in Politics, 2021’ and has published pieces on nuclear nonproliferation and human rights in forums such as the Royal United Services Institute (RUSI) and E-International Relations; an article on data justice in digital development in ICTworks; and articles on cybersecurity topics (identity and access management, AI regulation, Zero Trust security).
Bibliography
Canales, Maria Paz and Ian Barber. “What would a human rights-based approach to AI governance look like?”. Global Partners Digital (September 2023). https://www.gp-digital.org/what-would-a-human-rights-based-approach-to-ai-governance-look-like/.
Centre for AI Safety. “Statement on AI Risk”. Open Letter. https://www.safe.ai/work/statement-on-ai-risk#open-letter.
Department for Science, Innovation & Technology, UK Government. “The Bletchley Declaration by Countries Attending the AI Safety Summit, 1-2 November 2023”. (November 2023). https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023.
European Commission. “AI Act”. https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai.
Habuka, Hiroki. “Shaping Global AI Governance: Enhancements and Next Steps for the G7 Hiroshima AI Process”. Centre for Strategic & International Studies (May 2024). https://www.csis.org/analysis/shaping-global-ai-governance-enhancements-and-next-steps-g7-hiroshima-ai-process.
HM Government. “National AI Strategy”. (September 2021). https://assets.publishing.service.gov.uk/media/614db4d1e90e077a2cbdf3c4/National_AI_Strategy_-_PDF_version.pdf.
Interaction Design Foundation. “Human-centred AI (HCAI)”. https://www.interaction-design.org/literature/topics/human-centered-ai.
Jones, Kate. “AI governance and human rights”. Chatham House (January 2023). https://www.chathamhouse.org/2023/01/ai-governance-and-human-rights/03-governing-ai-why-human-rights.
Kaspar, Lea, Maria Paz Canales, and Michaela Nakayama Shapiro. “Navigating the Global AI Governance Landscape”. Global Partners Digital (October 2023). https://www.gp-digital.org/navigating-the-global-ai-governance-landscape/.
Kerry, Cameron F., Joshua P. Meltzer, Andrea Renda, and Andrew W. Wyckoff. “Should the UN govern global AI?”. Brookings (February 2024). https://www.brookings.edu/articles/should-the-un-govern-global-ai/.
Leverhulme Centre for the Future of Intelligence. “Comment on the UK National AI Strategy”. http://lcfi.ac.uk/news-and-events/news/2021/sep/23/comment-uk-national-ai-strategy/.
One World Trust. “Global Governance of Artificial Intelligence”. https://www.oneworldtrust.org/global-governance-of-artificial-intelligence.html.
UK Government. “National AI Strategy - AI Action Plan”. (July 2022). https://www.gov.uk/government/publications/national-ai-strategy-ai-action-plan/national-ai-strategy-ai-action-plan.
UNESCO. “Recommendation on the Ethics of Artificial Intelligence”. (2021). https://unesdoc.unesco.org/ark:/48223/pf0000380455.
Wright, Nicholas. “Three Distinct Artificial Intelligence Challenges for the United Nations.” Our World (2018), https://ourworld.unu.edu/en/three-distinct-artificial-intelligence-challenges-for-the-un.
Yu, Danni, Hannah Rosenfeld, and Abhishek Gupta. “The ‘AI divide’ between the Global North and the Global South”. World Economic Forum (January 2023). https://www.weforum.org/agenda/2023/01/davos23-ai-divide-global-north-global-south/.
Zimmer, Daniel and Johanna Rodehau-Noack. “Today’s AI threat: More like nuclear winter than nuclear war”. Bulletin of the Atomic Scientists (February 2024). https://thebulletin.org/2024/02/todays-ai-threat-more-like-nuclear-winter-than-nuclear-war/.
[1] “Statement on AI Risk,” Open Letter, Centre for AI Safety, https://www.safe.ai/work/statement-on-ai-risk#open-letter.
[2] Nicholas Wright, “Three Distinct Artificial Intelligence Challenges for the United Nations”, Our World, 2018, https://ourworld.unu.edu/en/three-distinct-artificial-intelligence-challenges-for-the-un.
[3] “National AI Strategy,” HM Government, September 2021, https://assets.publishing.service.gov.uk/media/614db4d1e90e077a2cbdf3c4/National_AI_Strategy_-_PDF_version.pdf.
[4] “National AI Strategy - AI Action Plan,” UK Government, July 2022, https://www.gov.uk/government/publications/national-ai-strategy-ai-action-plan/national-ai-strategy-ai-action-plan.
[5] “Comment on the UK National AI Strategy,” Leverhulme Centre for the Future of Intelligence, http://lcfi.ac.uk/news-and-events/news/2021/sep/23/comment-uk-national-ai-strategy/.
[6] “AI Act,” Shaping Europe’s Digital Future, European Commission, https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai.
[7] Cameron F. Kerry, Joshua P. Meltzer, Andrea Renda, and Andrew W. Wyckoff, “Should the UN govern global AI?” Commentary, Brookings, February 2024, https://www.brookings.edu/articles/should-the-un-govern-global-ai/.
[8] Maria Paz Canales and Ian Barber, “What would a human rights-based approach to AI governance look like?” Global Partners Digital, September 2023, https://www.gp-digital.org/what-would-a-human-rights-based-approach-to-ai-governance-look-like/.
[9] Daniel Zimmer and Johanna Rodehau-Noack, “Today’s AI threat: More like nuclear winter than nuclear war,” Bulletin of the Atomic Scientists, February 2024, https://thebulletin.org/2024/02/todays-ai-threat-more-like-nuclear-winter-than-nuclear-war/.
[10] Zimmer and Rodehau-Noack, “Today’s AI threat: More like nuclear winter than nuclear war”.
[11] “Recommendation on the Ethics of Artificial Intelligence”, UNESCO, 2021, https://unesdoc.unesco.org/ark:/48223/pf0000380455.
[12] “Global Governance of Artificial Intelligence,” One World Trust, https://www.oneworldtrust.org/global-governance-of-artificial-intelligence.html.
[13] Maria Paz Canales and Ian Barber, “What would a human rights-based approach to AI governance look like?”.
[14] Nicholas Wright, “Three Distinct Artificial Intelligence Challenges for the United Nations”.
[15] Nicholas Wright, “Three Distinct Artificial Intelligence Challenges for the United Nations”.
[16] Kate Jones, “AI governance and human rights,” Chatham House, January 2023, https://www.chathamhouse.org/2023/01/ai-governance-and-human-rights/03-governing-ai-why-human-rights.
[17] Kate Jones, “AI governance and human rights”.
[18] Kate Jones, “AI governance and human rights”.
[19] Cameron F. Kerry, Joshua P. Meltzer, Andrea Renda, and Andrew W. Wyckoff, “Should the UN govern global AI?”.
[20] “Recommendation on the Ethics of Artificial Intelligence”, UNESCO.
[21] “Human-centred AI (HCAI),” Interaction Design Foundation, https://www.interaction-design.org/literature/topics/human-centered-ai.
[22] “Human-centred AI (HCAI),” Interaction Design Foundation.
[23] “Human-centred AI (HCAI),” Interaction Design Foundation.
[24] “The Bletchley Declaration by Countries Attending the AI Safety Summit, 1-2 November 2023,” Policy Paper, Department for Science, Innovation & Technology, UK Government, November 2023, https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023.
[25] “The Bletchley Declaration by Countries Attending the AI Safety Summit, 1-2 November 2023,” Policy Paper, Department for Science, Innovation & Technology, UK Government.
[26] Lea Kaspar, Maria Paz Canales, and Michaela Nakayama Shapiro, “Navigating the Global AI Governance Landscape,” Global Partners Digital, October 2023, https://www.gp-digital.org/navigating-the-global-ai-governance-landscape/.
[27] Danni Yu, Hannah Rosenfeld, and Abhishek Gupta, “The ‘AI divide’ between the Global North and the Global South,” World Economic Forum, January 2023, https://www.weforum.org/agenda/2023/01/davos23-ai-divide-global-north-global-south/.
[28] Danni Yu, Hannah Rosenfeld, and Abhishek Gupta, “The ‘AI divide’ between the Global North and the Global South”.
[29] Hiroki Habuka, “Shaping Global AI Governance: Enhancements and Next Steps for the G7 Hiroshima AI Process,” Centre for Strategic & International Studies, May 2024, https://www.csis.org/analysis/shaping-global-ai-governance-enhancements-and-next-steps-g7-hiroshima-ai-process.