In a digital age of intense growth both in artificial intelligence and in the ramifications of the technology's implementation, there is an unfulfilled need for a worldwide regulatory body to help unify the globe in the pursuit of safe and ethical technological advancement. While artificial intelligence may represent a new digital and economic revolution capable of unprecedented growth and prosperity, it also presents harmful possibilities, such as job displacement and weapons-development programs, that could adversely affect the lives of hundreds of thousands, if not millions, of global citizens. Over four thousand American jobs were lost in May due to advances in artificial intelligence, and a Goldman Sachs report published in March estimated that close to three hundred million jobs could be lost globally to automation. That same report, however, highlights that the use of AI across a variety of economic sectors, such as health care, education, and other services, could boost global GDP by 7%.

On the national-security front, there are also reports that nations such as the United States and China are actively exploring the use of AI in the command-and-control systems that manage nuclear weapons. The consequences of a software error in such a system could be immense, especially considering how young AI remains in its application to military technologies and unmanned vehicles or systems. Early warning systems and automated "dead hand" mechanisms, which can launch nuclear weapons automatically or falsely signal the start of a nuclear conflict, have been responsible for a number of close calls since the Cold War. From these examples it is easy to see just how beneficial, and how destructive, AI and its use around the world could become.
To that end, a worldwide regulatory body should be put in place to regulate and control the potential uses and effects of AI in both the civilian and military sectors.
Highlighting the potential dangers of an unregulated global AI industry does not mean that nations are standing idly by as the future rushes beyond their reach. A wide variety of state-level regulations have been put into place around the world governing the use of AI by corporations. Nations like Israel and Japan have either published draft policies on AI regulation or have signaled their intention to wait and see how the technology progresses rather than risk stifling innovation in the field. The European Union, meanwhile, has moved rapidly on the issue, with the European Parliament voting to approve the AI Act in June and the legislation taking full effect just last week. The AI Act is the world's first comprehensive regulation of AI, assigning risk levels to certain AI uses and technologies while also governing their use in security-oriented national functions such as law enforcement and border management. In doing so, the European Union has provided a useful framework and a strong example of the first steps that a worldwide regulatory body overseeing the diffusion and production of artificial intelligence could undertake.
There may be fears in the halls of power around the globe that rogue states could defy international regulation or norms on AI, or that a nation could develop autonomous weapons systems or other harmful AI-enabled capabilities to gain a strategic edge over its geopolitical competitors. It is certainly true that a rogue actor could use these technologies for coercive or aggressive ends, much as pariah states such as North Korea already deploy cyber and other digital weapons. The world's current superpowers, the United States and China, could likewise engage in an arms race over the strategic and military uses of AI. But both of these globally harmful scenarios could be made less likely if those superpowers and the majority of the world came together in something akin to the Paris Agreement, which over 195 parties have joined since its signing opened in 2016. Such talks, or a resulting agreement, could establish guardrails against the economic damage of millions of jobs being lost to automation. They could also serve as the origin point for a new international organization, one that adopts regulatory strategies from existing global bodies like the International Civil Aviation Organization or the International Atomic Energy Agency.
Only by protecting against both the harmful economic effects of AI and its applications in modern military technology can the world hope to harness AI as a force for good for all of humanity. Hopefully, the world can come together to create not just internationally recognized norms and guidelines for AI, but globally protective safeguards that reduce tensions and the risk of AI-enhanced conflict.
Justin Lee is a junior pursuing a degree in Government & International Politics at George Mason University.
Works Cited
Hatzius, J., Pierdomenico, G., Kodnani, D., & Briggs, J. (2023, March 26). The Potentially Large Effects of Artificial Intelligence on Economic Growth. key4biz.it. https://www.key4biz.it/wp-content/uploads/2023/03/Global-Economics-Analyst_-The-Potentially-Large-Effects-of-Artificial-Intelligence-on-Economic-Growth-Briggs_Kodnani.pdf
Klimentov, M. (2023, September 3). AI regulation around the world, from China to Brazil. The Washington Post. https://www.washingtonpost.com/world/2023/09/03/ai-regulation-law-china-israel-eu/
Satariano, A. (2023, December 8). E.U. agrees on landmark artificial intelligence rules. The New York Times. https://www.nytimes.com/2023/12/08/technology/eu-ai-act-regulation.html
Schumann, A. (2022, October 14). The Soviet false alarm incident and Able Archer 83. Center for Arms Control and Non-Proliferation. https://armscontrolcenter.org/the-soviet-false-alarm-incident-and-able-archer-83/
Xiang, L. (2019, October 1). Artificial intelligence and its impact on weaponization and arms control. In L. Saalman (Ed.), The impact of artificial intelligence on strategic stability and nuclear risk: Volume II. East Asian perspectives (pp. 13–19). Stockholm International Peace Research Institute. http://www.jstor.org/stable/resrep24532.8