May 3, 2026
International Affairs Forum

Around the World, Across the Political Spectrum

When “Responsible” Stopped Being Enough


By Delaney Sparacio

In February 2023, the United States arrived at the REAIM (Responsible AI in the Military Domain) Summit in The Hague, Netherlands, with the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy: non-binding, broadly worded, and deliberately easy to sign. It called on endorsing states to maintain human judgment over their systems, test those systems, hold senior officials accountable for high-consequence uses, and work to minimize unintended bias. By late 2024, nearly sixty countries had signed on. As diplomatic documents go, it looked like a success.

Three years later, the U.S. Department of Defense has been rebranded the Department of War. Its 2026 AI strategy declares this the year to raise the bar for military AI dominance. The Pentagon has demanded that frontier AI companies grant access to their models for all lawful purposes, and when one of them, Anthropic, declined to drop its restrictions on autonomous weapons and domestic mass surveillance, the administration designated the company a supply-chain risk and ordered every federal agency to stop using its products. A federal judge has paused that order while the lawsuits continue. The Chairman of the Joint Chiefs of Staff, Dan Caine, now describes autonomous weapons as an important function of future warfare. The Political Declaration is not gone. But the world it was written for is.

The Comfort of Voluntary Norms

The Declaration’s greatest strength was always its greatest weakness: it asked very little. That is how you get fifty-eight signatures from countries as different as Singapore, Malawi, Israel, and Liberia. You write commitments that almost any state running a defense ministry can claim it already meets. Auditable systems, trained personnel, review of high-consequence applications: boxes nearly every signatory can claim to have checked already.

The Nuclear Non-Proliferation Treaty did not come from nothing. The taboo against chemical weapons took decades to consolidate. Norms have to start somewhere, and a low bar that attracts adherents often matters more than a high standard that forecloses consensus.

The trouble is that military AI is not patient. The technology is moving from slideshows into procurement contracts faster than the diplomatic process can keep up. In July 2025, the Pentagon awarded contracts of up to two hundred million dollars each to four AI companies to put frontier models into intelligence analysis, operational planning, and cyber operations. By early 2026, large language models were reportedly used in the operation that captured Venezuelan President Nicolás Maduro. Lawmakers have asked whether AI was involved in a strike on an Iranian school. Cases like these keep accumulating. Voluntary guidelines start to feel thin once the systems they govern start killing people.

The “All Lawful Uses” Problem

Here is the line that should worry anyone who took the Declaration seriously. The Pentagon’s standard contracting language now requires AI vendors to allow their models to be used “for all lawful purposes.” On its face, this sounds reasonable. The military is supposed to operate within the law. But “lawful” is a vague standard where AI this powerful is concerned.

U.S. policy on autonomous weapons is governed largely by Department of Defense Directive 3000.09, which the Department itself can modify. The Pentagon insists it does not currently use AI for fully autonomous weapons or for domestic mass surveillance. “All lawful uses” is a moving target whenever the rule-writer is also the buyer. When Anthropic refused to grant unrestricted access on the grounds that frontier AI is not reliable enough to power fully autonomous weapons, the company was not invoking a treaty. It was invoking its own terms of service.

In 2026, the most concrete restraint on a particular military application of AI in the United States is the usage policy of a private company: a document that can be rewritten, litigated around, or swept aside by a Defense Production Act invocation. That is no place for serious arms control to live.

What the Declaration Could Still Do

None of this means the Political Declaration was a mistake, but it does mean the Declaration is incomplete. The original framing of endorsing, sharing best practices, and building capacity made sense for 2023. For 2026, it needs three upgrades, and none of them is impossible.

The first is verification. Right now, endorsing a declaration costs a state nothing and reveals nothing. A modest reporting mechanism, even an annual statement on how each country implements the ten measures, would convert the Declaration from a press release into a paper trail. States that implement seriously would have something to point to. States that endorsed for the optics would have to choose between embarrassment and improvement.

The second is scope. The Declaration covers “military AI capabilities,” a deliberately capacious phrase. But the highest-stakes category, systems that select and engage targets without further human intervention, deserves rules of its own. The UN Secretary-General called in 2023 for a legally binding ban, by 2026, on lethal autonomous weapons that operate outside meaningful human control. We are now in 2026. The deadline did not magically produce a treaty, but it should at least produce a negotiation. The U.S. has so far preferred the responsibility track, partly because it preserves operational flexibility. The cost is that other countries read the U.S. position as permission.

The third upgrade is the hardest: the U.S. government has to decide whether it actually wants the architecture it built. “Military AI dominance” and “responsible military use of AI” are not strictly contradictory, but they pull in different directions when a vendor says no. The Anthropic episode is the test case. A government committed to the Declaration would have treated a private red line on autonomous weapons, imposed by an American company on its own technology, as a reasonable contribution to responsible use. Instead, the company was branded a national security risk.

The Argument for Allies

There is a tempting story in which AI companies are simply self-interested, their ethics statements marketing, and policymakers should ignore them. Some of that is true some of the time. But red lines imposed by vendors are one of the few sources of friction in a system that is otherwise sprinting. They cost the companies revenue, they survive lawsuits, and they are written in plain English a senator can read. If you actually believe the principles in the Political Declaration, you should be glad someone is enforcing them, even imperfectly.

The alternative is a world in which the only meaningful constraints on military AI are the ones states impose on themselves, in policy documents that states can rewrite, in declarations that states can ignore. We have a name for that world. It is the one we live in. It is also the one that produced the Political Declaration in the first place.

Back to The Hague

In 2023, the simple story about the Political Declaration was that it represented American leadership on a hard problem. The harder story, in 2026, is that leadership is not a singular event. The countries that signed the Declaration did so on the promise that the U.S. would keep building on it, toward verification, toward narrower binding rules on the highest-stakes applications, toward a posture in which “responsible” is something a defense department actually has to demonstrate, not just claim.

That promise is the one currently in trouble. Renaming the Pentagon the Department of War is a press release, not a policy. But publicity shapes policy. So does the spectacle of the U.S. government punishing an AI company for refusing to be used in autonomous weapons. So does the slow drift from “responsible use” to “dominance.” The next country deciding whether to endorse the Declaration notices all of it.

Military AI will be one of the defining international issues of the next decade. The U.S. wrote the document that begins to govern it. The hard part is making the document mean something when the technology, the politics, and the contracts get uncomfortable, which will be the work of the next few years.

“Responsible” was the right word in The Hague. It is still the right word now. Whether anyone is willing to do the work it implies is a different question.


Delaney Sparacio is a Political Science and Public Policy student at the University of California, Berkeley. Her academic interests include social policy, gender rights, environmental protectionism, and the intersection of governance and justice. She currently writes for NextDem's Party Playbook, is writing an Honors Thesis on climate governance, and is the author of "The 2021 Capitol Raid: The Federal Response as Depicted Through Agencies" in the California Legal Studies Journal. She is a copyeditor for Berkeley Political Review and Policy Review @ Berkeley.
