ATLANTISCH PERSPECTIEF
Future-proofing AI arms control
Mahmoud Javadi
AI arms control is a necessity. Two key initiatives, REAIM and the Political Declaration, originating from the transatlantic community, aim to achieve this goal globally. However, their progress may encounter challenges, including the potential return of Donald Trump to the White House. To future-proof military AI governance, the transatlantic community—especially Europeans—must invest in the complementarity of these initiatives. Concurrently, this effort could be in vain if Europe does not take bolder diplomatic and other steps to ensure that the next U.S. administration remains engaged in current AI arms control efforts.
Until very recently, the use of artificial intelligence (AI) in warfare was considered a concept of the future. However, we now find ourselves in that future. Since Russia’s aggression against Ukraine in February 2022, there has been a notable increase in the deployment of military AI technologies for both defensive and offensive purposes. Furthermore, the hideous actions of Hamas on October 7, 2023, and subsequent escalations across the Middle East have prompted Israel and the United States to use military AI assets not only against legitimate targets but also against innocent civilians.
While the salience of military AI gradually grows in mainstream discourse, its integration into national military arsenals is rapidly progressing. This is an inevitable trend. Armies are increasingly drawn to the advantages of military AI for their national deterrence and defense posture. Nevertheless, the (mis)use of AI in the Ukraine and Gaza theaters has already shown that employing AI in conflicts can foreseeably lead to profound consequences for any nation involved in armed engagements, regardless of scale. Now consider an AI-armed great power’s military conflict with a near-peer or peer competitor: an apocalyptic scenario for humanity.
To avert this doomsday scenario, and to mitigate even the less conspicuous risks of military AI misuse, countries must aim to put guardrails around AI and steer it in the right direction. This is similar to other advanced military technologies whose lifecycle, from development to use, has been subject to varying degrees of arms control regimes. To this end, two concurrent efforts have come into existence since February 2023 to shape AI arms control in the long run: (1) the Global Summit on Responsible Artificial Intelligence in the Military Domain (REAIM) and (2) the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy (hereafter the Political Declaration).
While these initiatives originate from the Euro-Atlantic space—the Netherlands and the United States, respectively—both aim to influence global military AI governance with stakeholders across the globe. The logic is straightforward. Establishing a robust global governance framework for the life cycle of military AI technologies is essential, and relying solely on national or even regional efforts falls short in confronting the multifaceted challenges posed by these transformative technologies, often equated in significance to nuclear weapons.
REAIM and the Political Declaration are not inherently contradictory, although each has distinct motives and features that set them apart at first glance. A deeper look, however, reveals that they are complementary. This complementarity is particularly crucial for the successful conclusion of any AI arms control regime negotiation. For such a regime to gain traction and be sustained, it requires significant political will and political force from key stakeholders, among them the countries of the Euro-Atlantic area. Until a universal AI arms control accord is concluded, the transatlantic community bears a cardinal global responsibility. In the short term, it needs to capitalize on the complementarity of REAIM and the Political Declaration. Without this synergy, future-proofing AI arms control negotiations and their outcome becomes significantly more onerous. In addition, Europeans have a special responsibility to direct these future-proofing efforts against the backdrop of a possible Trump presidency.
Why an AI Arms Control Regime?
The development, deployment, and recent use of military AI have raised profound ethical, strategic, and security concerns, thus necessitating an AI arms control regime. The integration of AI into weapons systems can vastly enhance their lethality, precision, and decision-making speed. Without comprehensive and universally enforced rules and regulations, the unchecked proliferation of AI-powered weaponry could lead to calamitous consequences, including the escalation of conflicts and an arms race that destabilizes global security. Even in a world where revisionist and rogue states may refuse to adhere to such a control regime, the establishment of AI arms control measures is critical for several reasons.
Firstly, an AI arms control regime can set international norms and standards, fostering a global understanding of the responsible use of AI in military contexts. With clear guidelines, countries that are committed to ethical practices can lead by example, applying pressure on non-compliant states through diplomatic, economic, and, where necessary, military means. This normative framework can also help to isolate, stigmatize and securitize states that refuse to adhere to the agreed-upon standards, potentially reducing their influence and deterring others from following suit.
Secondly, a well-designed AI arms control regime can facilitate transparency and confidence-building measures among nations. Mechanisms such as data sharing, verification protocols, and mutual inspections can help to ensure compliance and build trust, reducing the likelihood of misunderstandings and miscalculations that could lead to unintended escalation. Enhanced transparency can also aid in identifying and addressing violations, enabling the international community to respond collectively and effectively to breaches of the regime.
Furthermore, an AI arms control regime is essential for addressing the ethical implications of autonomous weapons systems. AI can make life-and-death decisions at speeds and with a level of complexity beyond human capabilities, raising serious moral and legal questions about accountability and the laws of armed conflict. An internationally acknowledged arms control regime can help ensure that AI is used in ways that are consistent with international law, preventing the deployment of systems that could indiscriminately target civilians or commit other violations of human rights.
Beyond the values an AI arms control regime can bring, the existence of such a regime can provide a basis for collective action against non-compliant nations. It can justify strict export controls, embargoes, and other measures by the international community against nations that pose a threat due to their misuse of AI in weaponry. In other words, an AI arms control regime is the first priority, ahead of other measures, and it can be realized through multilateral efforts.
Until early 2023, the world lacked collective action against the reckless proliferation of military AI. In February of that year, however, signs of collective action for AI arms control emerged through two parallel initiatives launched by the Netherlands and the United States. Each initiative has a unique set of incentives and features. Yet both are crucial for sustaining efforts towards an AI arms control regime down the line.
Existing frameworks and reasons behind their initiation
The necessity of establishing a regulatory framework for military AI prompted Dutch diplomats and bureaucrats in February 2023 to convene a variety of state and non-state stakeholders to initiate a foundational dialogue on the definition and governance of military AI. This marked the launch of the REAIM process. However, the Biden administration saw an opportunity to advance its own military AI agenda during the REAIM Summit.
The United States asserts that emerging disruptive technologies like AI are crucial for defining military superiority in geopolitical rivalries. Thus, AI governance in the military sphere requires American leadership. In the words of US Secretary of State Blinken in 2022, “We also have to be the ones who are at the table who are helping to shape the rules, the norms, the standards by which technology is used. If we’re not, if the United States isn’t there, then someone else will be, and these rules are going to get shaped in ways that don’t reflect our values and don’t reflect our interests.”
With this attitude, the United States introduced the Political Declaration in February 2023, which advocates a top-down approach by addressing sovereign states as its primary audience. It outlines ten foundational measures for developing, deploying, and using military AI capabilities, accompanied by six pledges that endorsing countries—fifty-four as of July 2024—should undertake to further the goals of the Political Declaration.
Washington has maintained the momentum generated by the Political Declaration in its first year by convening a plenary meeting in March 2024, attended by delegates from sixty endorsing and observer nations. This gathering led to the formation of three working groups focused on distinct aspects of military AI security and ethics: Assurance, Accountability, and Oversight. Notwithstanding these endeavors, the cold shoulder given to the Political Declaration by China and its like-minded states, largely influenced by the rivalry between Washington and Beijing, could bar the Political Declaration’s universalization. Thus, REAIM could provide a viable alternative built on more robust foundations.
REAIM aims to involve a diverse range of state and non-state stakeholders in collaboratively developing a framework for military AI governance. The first Summit, held in The Hague, welcomed 2,000 attendees from 100 countries, among them 80 government representatives. Of these, 57 endorsed REAIM’s joint statement, the ‘Call to Action.’ It delineates nine measures, primarily focused on encouraging both state and non-state actors to contribute ideas and solutions within the Summit’s thematic framework for the second edition of the Summit in Seoul in September 2024. It also underscores REAIM’s aspiration to foster a bottom-up approach, offering a forum for all stakeholders to shape the discourse on responsible military AI.
Unlike the Political Declaration, REAIM’s inclusivity has already ensured participation from Beijing and garnered support from many states in the Global South who might otherwise have advocated for a complete ban on military AI—an approach that Washington and many of its European allies do not favor.
Future-proofing military AI governance
To disempower and discipline revisionist states in their reckless development and deployment of military AI technologies, establishing rules and regulations needs to be seen as the foremost priority. These measures enable responsible nations to progressively harmonize their policies, leverage their capabilities, and enhance their statecraft tools, such as technology export controls and financial embargoes against sovereign states that intend to misuse AI technologies. This long-term strategy will be secured if responsible states underscore the value of the complementarity between REAIM and the Political Declaration.
The transatlantic community, recognizing its global responsibility to protect the current world order and its underpinnings, must take a leading role. A crucial step in this direction is to make military AI governance a ‘depoliticized priority’ on the global agenda. This can be achieved by initially rallying support for REAIM. However, the process should aim at strengthening the Political Declaration over time.
The reason is straightforward: it is ultimately the responsibility of sovereign nations to negotiate AI arms control and adhere to its outcomes. The Political Declaration moves in this direction; nonetheless, this objective requires having a variety of state and non-state stakeholders on board. This is a task that can be facilitated by REAIM, which not only aims to be as inclusive as possible but also attempts to create a unified understanding of what military AI is and how it should be governed.
The interlinkage between REAIM and the Political Declaration will lead to an AI arms control regime only if the initial steps are future-proofed. Potential overhauls, if not upheavals, in U.S. domestic politics following the presidential election in November 2024, as well as increasing mistrust among major powers, present significant short-term challenges to the complementarity of these two initiatives. The latter struggle is apparently inevitable; the former, however, can be mitigated and even avoided.
For REAIM and the Political Declaration, future-proofing is akin to Trump-proofing. Conventional wisdom holds that if the next U.S. administration continues to support multilateralism, prioritizing REAIM can leverage substantial indirect backing for the Political Declaration. Yet, if the White House under Trump’s leadership neglects multilateral efforts toward governing military AI, the transatlantic community can sustain the momentum generated by both initiatives through REAIM.
It goes without saying that the latter is neither an ideal nor a likely scenario. Washington’s negligence towards AI arms control is precisely what revisionist states would take advantage of. Without a robust regime, responsible nations would lack the collective framework and appropriate tools to deter the malicious use of military AI technologies. And REAIM is not designed for such deterrence by any means.
The transatlantic community, and the world in general, witnessed how global governance fared during Donald Trump’s presidency. Therefore, instead of contemplating how to establish an AI arms control regime without the Political Declaration and Washington’s leadership, incumbent leaders in the transatlantic community and the architects of both initiatives, particularly the Europeans, need to seriously consider how to make the current trajectory of AI arms control appealing to Trump and his administration.
The Path Forward
Ironically, the best strategy for making AI arms control efforts resilient to Trump’s toxic influence is to involve him directly—‘Trump-on-boarding.’ American leadership is indispensable in any AI arms control regime. However, a major risk is that not all stakeholders, particularly revisionist powers, share this perspective. Additionally, with Donald Trump in power, there is a high chance that Washington might neglect any control regime, arguing that it would limit the defense posture of the US and the transatlantic community overall.
In the short term, Europeans must consider two concurrent strategies. They need to intensify their diplomatic efforts to remind the future US administration—regardless of its political ideology—that Washington’s leadership in these initiatives would align with both the ‘responsible competition’ approach championed by Democrats and the ‘Make America Great Again’ (MAGA) agenda promoted by Donald Trump and his entourage.
After the November 2024 elections, if Democrats remain in power, Europeans will need less diplomatic effort to ensure Washington stays on course. However, if a new Trump administration takes office, Europe will need to engage diplomatically to sustain existing military AI governance initiatives. Recent history indicates that Europeans have little leverage to influence a Trump administration, and this diplomatic approach may not guarantee the future-proofing, a.k.a. Trump-proofing, of AI arms control efforts.
Despite this inherent limitation, Europe still has leverage to convince a Trump administration to adhere to the current trajectory of military AI governance efforts. Both the Trump and Biden administrations have progressively invested in export control regimes for critical technologies, including AI, directed at nations of significant concern. This course of action will certainly be pursued more vigorously under the new administration. Europeans need to support the US export control regime through both words and actions. Currently, the Netherlands is the only European country that has joined Washington in this endeavor. The new Dutch government is well-positioned to lead bilateral and multilateral efforts, through the new EU institutional cycle (2024-2029), to Europeanize the American export control regimes for critical technologies.
This course of action does not necessarily guarantee the future-proofing or Trump-proofing of military AI governance. However, it remains a viable, and potentially the only, option for Europeans in the short run to ensure the longevity of both REAIM and the Political Declaration while utilizing their complementarity.
This work has been funded by the REMIT project, which has received funding from the European Union’s Horizon Europe research and innovation programme under grant agreement No 101094228.
Header photo: Wikimedia Commons
Mahmoud Javadi is an AI Governance Researcher at Erasmus University Rotterdam (EUR) in the Netherlands. His X/Twitter account is @mahmoudjavadi2.