Emerging Technologies and Intelligence and Security Services: A Balancing Act

Willemijn Aerdts, Ernst Dijxhoorn
Technological innovation, particularly Artificial Intelligence (AI), has produced a paradigm shift in intelligence by enhancing collection and analysis capabilities, predictive intelligence, and operational efficiency. To avoid a capability gap vis-à-vis opponents, regulation is needed that simultaneously gives the services the powers they need to detect and avert threats, protects civil liberties and privacy of the public, makes effective oversight of the services possible, and that can adapt in a timely fashion to emerging technologies. Part of the answer in dealing with this challenge lies in having an open discussion on the needs of both civilian and military services and what we expect from them as a society, and part in designing flexible regulation. 

OpenAI’s launch of ChatGPT in 2022 made Generative AI available to the wider public, reignited debate on the responsible use of AI, and put the spotlight on a technological arms race already well under way.[1] It demonstrated that break-out moments for emerging technology, for instance in quantum computing, can come unexpectedly. The huge disagreement among experts over when Artificial General Intelligence (AGI) will arrive shows that the only safe prediction about emerging technologies is that change is accelerating.[2] For the (Dutch) intelligence community and lawmakers this creates two interlocked problems. First, their use of new technologies leads to legal and civil rights questions, as technological developments outpace legislation. Second, they are confronted with adversaries that use the full range of possibilities offered by emerging technologies, without regard for international law or civil liberties. To prevent an intelligence capability gap from emerging, legislation regulating intelligence and security services must protect civil liberties and make effective oversight possible while at the same time giving the services powers that enable them to use technological innovations where necessary to detect threats.

Break-out moments for emerging technology, for instance in quantum computing, can come unexpectedly.


The primary functions of intelligence and security services are to avoid strategic surprise and to provide warning of severe threats, long-term expertise and knowledge on national security matters, and timely and actionable intelligence to the government.[3] Services rely on information from human sources (HUMINT), signals intelligence (SIGINT), imagery intelligence (IMINT), measurement and signature intelligence (MASINT), geospatial intelligence (GEOINT) and open-source intelligence (OSINT).[4] A surge in sensors and imagery, combined with AI that makes large datasets increasingly amenable to analysis, enhances the identification and prioritization of targets across the ‘INTs’.[5] One of the first ways services used AI was by developing machine-learning models, a subset of AI, to forecast future events, including potential security threats, by identifying patterns and anomalies in large data sets.[6] AI can also be used to: enable biometric analysis such as facial recognition to identify individuals or members of a group, making surveillance across divergent sensors and datasets possible; translate and understand communications and documents in other languages, reducing agencies’ dependence on having sufficient staff who speak foreign languages; and decipher coded messages or encrypted communications. AI can identify anomalies that point towards cyberattacks and respond instantly to malware or interference, and can be used to detect deepfakes and counter disinformation campaigns.[7]
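The anomaly-flagging idea mentioned above can be illustrated with a minimal, purely hypothetical sketch (the data, function name and threshold are invented for illustration; operational systems use far more sophisticated models). It flags values that deviate sharply from a robust statistical baseline:

```python
from statistics import median

def flag_anomalies(series, threshold=3.5):
    """Return indices whose modified z-score exceeds `threshold`.

    The score is based on the median absolute deviation (MAD),
    which, unlike the mean and standard deviation, is not itself
    distorted by the outliers we are trying to find.
    """
    med = median(series)
    mad = median(abs(v - med) for v in series)
    # If MAD is zero (all values identical), nothing is anomalous.
    return [i for i, v in enumerate(series)
            if mad and 0.6745 * abs(v - med) / mad > threshold]

# Hypothetical daily message volumes on a monitored channel:
volumes = [102, 98, 110, 105, 97, 101, 540, 99, 103]
print(flag_anomalies(volumes))  # → [6], the day with the sudden spike
```

The 0.6745 factor rescales the MAD so that, for normally distributed data, the score is comparable to an ordinary z-score; a cut-off around 3.5 is a common rule of thumb for this modified z-score.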

These developments also underline the value of intelligence diplomacy.[8] Intelligence sharing is already crucial in combating global threats, but the rapid evolution of technology means that sharing both emerging technology and the big data sets needed to train AI systems is necessary to bolster the capacity of allies. Especially for larger, technologically advanced states like the US, a collaborative approach to intelligence that helps partners build capabilities gives them an advantage over more isolated adversaries. This includes datasets that services have already collected but that, without AI, would be too vast to distil usable intelligence from. It is therefore of vital importance for services to be able to access, acquire and retain (bulk) data to be effective and not miss threats.[9] However, not knowing what actionable intelligence an ally might extract from these datasets with the help of AI makes sharing large data sets less appealing, and sometimes sharing is restricted by law.[10]

While AI can minimize human error by automating tasks, including administrative and organizational processes like compliance and accountability, espionage remains a human activity. As Bill Burns stated: “The defining test for intelligence has always been to anticipate and help policymakers navigate profound shifts in the international landscape.”[11] Successful human-AI cooperation can expand, refine and speed up intelligence collection and processing, and thereby boost the capacity of human analysts to produce better strategies, analyses and intelligence products for decision-makers.

Adversarial AI and the capability gap

This AI-enabled boost in capacity is needed because technologically capable state and non-state adversaries develop, or have access to, offensive AI-enabled capabilities. These actors do not necessarily feel bound by the same (international) legal framework within which Western (or Dutch) intelligence and security services operate. For non-state actors, cheap and easily distributed AI can help overcome some of the asymmetries with state actors, for instance by amplifying autonomous systems, cyber operations, and disinformation campaigns, and by enhancing their intelligence-gathering and analysis capabilities.[12]

The US and China are already in open competition over the strategic technologies of the future.[13] Both view global leadership in AI as a vital national interest, invest in it accordingly, and impose export restrictions.[14] Chinese government policies enhance the state’s ability to gather mass data from surveillance systems, data from Chinese companies, and data ‘harvested from around the world’ in what the head of MI6 called ‘data traps’,[15] undoubtedly complemented by data obtained through hacking, as the malware recently detected on a Dutch armed forces computer shows.[16] This not only enables the Chinese government to build better systems; the information AI infers from the data can also be weaponized against opponents.[17]

There is an imbalance: AI developed in the EU and US has little access to Chinese data, while Chinese AI benefits from more open economies and is at the same time not bound by the privacy regulations that restrict US and EU governments and companies. Certain countries now also gather all the information they can lay their hands on, even if they cannot directly use or decrypt it, for later use and analysis. Already in 2013, former senior American officials reported that intellectual property was being stolen on an unprecedented scale, pointing fingers not only at China and Russia but also at India and Venezuela.[18] According to the Center for Strategic and International Studies (CSIS), AI already confronts the US intelligence community with a critical challenge in maintaining a competitive advantage in strategic intelligence.[19] By not investing enough in innovative technologies, or by overly restricting their use, we risk creating a growing ‘capability gap’ in intelligence and national security vis-à-vis our state and non-state adversaries.

Current legal constraints  

Because of technological developments and society’s increased production of, and reliance on, data, the design and organization of intelligence and security services has changed. Increasingly, they can be seen as data-driven organizations, which has implications for their set of legal powers and for oversight of the intelligence and security services. Over the last couple of decades we have already seen regulation adapt to new technology, especially communications and data technology – from analogue to digital, from telegraph to telecom, and from telecom to the internet. RUSI argued that, in the UK context, an agile approach to new AI capabilities within the existing oversight regime is essential to ensure the ‘intelligence community can adapt in response to the rapidly evolving technological environment and threat landscape.’[27] We see the same in the Netherlands, where the proposed ‘Temporary act pertaining to investigations by the General Intelligence and Security Service (AIVD) and the Defence Intelligence and Security Service (MIVD) of countries with an offensive cyber programme’ introduces dynamic oversight of specific powers of the services.[28]

Services are currently working with a law that was designed ten years ago.

National legislative processes also have their constraints, both in scope and in pace. For example, the ‘new’ Dutch Intelligence Act (Wiv 2017) entered into force on May 1, 2018, replacing the 2002 law (Wiv 2002). Before entering into force, the law was adopted by the House of Representatives (Tweede Kamer) in February 2017, passed the Senate (Eerste Kamer) in July 2017, and was rejected in an advisory referendum in March 2018 (the rejection did not prevent the Act from coming into force, but some extra guarantees were introduced, including an evaluation of the Act within two years).[29] Drafting the Act started in 2014, meaning that the services are currently working with a law that was designed ten years ago. Given the increasing speed of change, it was impossible to foresee then the implications of emerging technologies for the collection, storage, and analysis of large data sets. This illustrates the challenges legislators will face ever more often as new technologies develop, for instance when quantum computing has a breakout moment.

Furthermore, the outcomes of legislative processes are static. After the Wiv 2017 came into force, it turned out that one of the special powers granted in the Act, the tapping of internet cables, could not effectively be used due to one oversight body’s restrictive interpretation of an amendment parliament had made to the law. After a 2021 evaluation of the law,[30] a temporary act was passed to allow for a legislative fix. That act is currently being scrutinized by the Senate, so that ten years after the first draft, almost six years after coming into force, and three years after being reviewed, the law still does not allow the services to use the full set of necessary powers they were granted.

To avoid a lacuna in legislation that either allows too much intrusion or prevents services from using the tools they need to do their work, both international and domestic regulation should be able to adapt to innovative technologies. That task is difficult because the implications for the storage of data, the sharing of data with international partners, or the interpretation by oversight bodies depend on ‘legisprudence’ that could not take into account technologies that had not yet been developed. Looking at the purpose of using a technology, rather than the system used, might help, but society must then agree on what a proper purpose is. The work and organizational culture of intelligence services is traditionally secretive and not conducive to interaction with citizens, civil society, or political actors beyond warning them about threats. And although understanding of the threat level is key, encouraging public debate and engagement on the use of emerging technologies in intelligence gathering and analysis helps in understanding societal concerns and the expectations that citizens can and should have of the services.


The pace of developments in the field of AI has shown the unpredictability of technological breakthroughs. This poses a challenge for intelligence and security services, for lawmakers seeking to regulate the powers of the services, and for oversight bodies. Emerging technology raises crucial questions about legal frameworks and civil rights, especially as ever-faster-evolving technology vastly outpaces the legislative process and existing rules do not always suffice. At the same time, intelligence and security services face adversaries who develop and exploit these technologies with little consideration for international law or civil liberties. To protect our society and the values our liberal democracy is based on, we cannot permit a technological capability gap vis-à-vis our adversaries to emerge. Sitting on our hands while opponents develop AI-enhanced data-gathering and analysis systems, surveillance and offensive cyber capabilities, and the ability to influence our society with deepfakes and disinformation is not an option.

Therefore, it is essential to develop regulation that both safeguards civil liberties with robust oversight and empowers intelligence and security agencies to use technological innovations to counter these technologically enhanced threats. To start bridging these divergent interests within our society, we propose an ongoing societal discussion about national security, the role of the services, and their powers, with the aim of designing flexible regulation and oversight mechanisms that will accommodate and incorporate new technological possibilities while protecting privacy and civil liberties. This debate should take place not only when something goes wrong, when new legislation is proposed, or a breakout moment of technology forces us to rethink national security; there should be a continuous debate among civil society, government, politicians and citizens about how much security we want, from what, for whom, and at what cost.

Header photo: / Boykov




[2] Mustafa Suleyman, The Coming Wave: AI, Power and the 21st Century’s Greatest Dilemma, London: Bodley Head, 2023.

[3] Lowenthal, Intelligence, 2017, pp. 2-5.

[4] What is Intelligence?


[6] Forecasting Significant Societal Events Using the EMBERS Streaming Predictive Analytics System; fp_20201130_uncomfortable_ground_truths.pdf



[9] Willemijn Aerdts & Ludo Block, ‘Use of bulk data by intelligence and security services: caught between a rock and a hard place?’, in: Bart van der Sloot & Sasha van Schendel (eds), The Boundaries of Data, forthcoming 2024, pp. 65-82.

[10] In the proposed Temporary Intelligence Act in the Netherlands, for example, this difficulty is overcome by introducing oversight before datasets are shared. In this proposed Act, the oversight body has five days to prevent the intended data sharing if it deems this unlawful (see Article 6 sub 7).


[12] For example, Project Maven is intended to incorporate computer vision and AI algorithms into intelligence collection cells that would comb through footage from uninhabited aerial vehicles and automatically identify hostile activity for targeting. In this capacity, AI is intended to automate the work of human analysts who currently spend hours sifting through drone footage for actionable information, potentially freeing analysts to make more efficient and timely decisions based on the data. Kelley M. Sayler, ‘Artificial Intelligence and National Security’ (2020), p. 10.


[14] DUN22612




[18] ‘Fighting China’s hackers’, The Economist, May 25, 2013; ‘Unusual suspects’, The Economist, May 25, 2013.


[20] Rebecca Stark, ‘China’s Use of Artificial Intelligence in Their War Against Xinjiang’, 29 Tul. J. Int’l & Compar. L. 153, 170-72 (2021).

[21] Jack McDonald, ‘Autonomous agents and command responsibility’, in: James Gow, Ernst Dijxhoorn, Guglielmo Verdirame & Rachel Kerr (eds.), Routledge Handbook of War, Law and Technology (1st ed.), Abingdon: Routledge, 2019, pp. 141-153, p. 149.



[24] Dapo Akande, Antonio Coco & Talita de Souza Dias, ‘Drawing the Cyber Baseline: The Applicability of Existing International Law to the Governance of Information and Communication Technologies’, International Law Studies Vol. 99, no. 4 (2022), pp. 4-36, p. 10.

[25] James Gow and Ernst Dijxhoorn, ‘Obvious and Non-Obvious: the Changing Character of Warfare’, in: James Gow, Ernst Dijxhoorn, Guglielmo Verdirame & Rachel Kerr (eds.), Routledge Handbook of War, Law and Technology (1st ed.), Abingdon: Routledge, 2019, p. 13.



[28] Dossier Tijdelijke wet onderzoeken AIVD en MIVD naar landen met een offensief cyberprogramma, bulkdatasets en overige specifieke voorzieningen.

[29] Willemijn Aerdts, Diensten met geheimen: hoe de AIVD en MIVD Nederland veilig houden, 2023, p. 50.

[30] Evaluatie Wet op de inlichtingen- en veiligheidsdiensten 2017, Commissie Jones-Bos, January 2021.

Willemijn Aerdts is a lecturer at Leiden University in the Institute of Security and Global Affairs (ISGA) and a senator.

Ernst Dijxhoorn is an assistant professor at Leiden University in the Institute of Security and Global Affairs (ISGA).