Editor's note: The IAPP is policy neutral. We publish contributed opinion and analysis pieces to enable our members to hear a broad spectrum of views in our domains.

A bit of trivia to start this week's column: 9 May marks Europe Day and the anniversary of the Schuman Declaration, delivered in 1950. The Declaration laid the foundation of the European Union as we know it today. Its authors may not have known at the time that they were starting a project that would live on seven decades later, though that was probably the hope. Who could have guessed that something that started as a coal and steel community would later be home to policies on cybersecurity and artificial intelligence?

Cybersecurity is one of the essential elements of a safe AI system, and indeed the EU AI Act requires high-risk systems to follow a cybersecurity-by-design approach. The AI Act is horizontal and, as such, must coexist with other legislation, including cybersecurity laws. The Cyber Resilience Act, the Cybersecurity Act and the Critical Entities Resilience Directive are all explicitly listed in the EU's new AI law, not to mention adjacent laws that include cybersecurity provisions, such as the EU General Data Protection Regulation.

Where cybersecurity crosses over into AI, standards will play a big role as a compliance tool. Take the AI Act and the Cyber Resilience Act: both rely on standards, both set horizontal rules, both belong to the New Legislative Framework safety rulebook and both provide for a presumption of conformity. The European standardization bodies are developing new standards or, where possible, leveraging existing ones, with delivery expected in August this year.

All in all, this also illustrates the life cycle approach to risk management noticeable across EU digital policy instruments. Where AI and cybersecurity meet, risk management will run through threat modeling and threat detection, exposure and vulnerability analysis, testing, evaluation and red teaming, across the entire AI life cycle of design and development, training, evaluation and retirement.
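To make the life cycle framing concrete, here is a minimal sketch, in Python, of how a team might track which risk-management activities have been performed in each phase. The phase and activity names are lifted from the paragraph above; the register structure itself is an illustrative assumption, not something the AI Act or any standard prescribes.

```python
from dataclasses import dataclass, field

# Illustrative only: neither the AI Act nor the Cyber Resilience Act
# prescribes this schema. Phase and activity names follow the column.
LIFECYCLE_PHASES = ["design_and_development", "training", "evaluation", "retirement"]
RISK_ACTIVITIES = ["threat_modeling", "threat_detection", "exposure_analysis",
                   "vulnerability_analysis", "testing", "evaluation", "red_teaming"]

@dataclass
class RiskRegister:
    """Records which risk-management activities were performed per phase."""
    done: dict[str, set[str]] = field(default_factory=dict)

    def record(self, phase: str, activity: str) -> None:
        if phase not in LIFECYCLE_PHASES or activity not in RISK_ACTIVITIES:
            raise ValueError(f"unknown phase or activity: {phase}/{activity}")
        self.done.setdefault(phase, set()).add(activity)

    def gaps(self) -> dict[str, list[str]]:
        """Activities not yet performed, listed for each life cycle phase."""
        return {phase: sorted(set(RISK_ACTIVITIES) - self.done.get(phase, set()))
                for phase in LIFECYCLE_PHASES}

register = RiskRegister()
register.record("design_and_development", "threat_modeling")
register.record("training", "red_teaming")
print(register.gaps()["training"])  # risk work still open during training
```

A real compliance register would tie each activity to evidence and to the clauses of the relevant harmonized standards; the point of the sketch is simply that a life cycle view turns risk management into a coverage question.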

The intersection of cybersecurity and AI also creates AI-specific cybersecurity challenges, and the relationship cuts both ways: advanced general-purpose AI models can create adaptive malware just as they can detect it.

In a 2024 update to its "Foresight Cybersecurity Threats for 2030" report, the European Union Agency for Cybersecurity, ENISA, found "Abuse of AI" was seen as increasingly likely, pointing to the potential for AI-driven cyberattacks, alongside a steady trend of AI being used as a tool to disrupt or enhance cyberattacks. Experts consulted by ENISA raised concerns at the time, such as "the potential blind spots introduced by probabilistic threat detection methods, the public's over-reliance on AI, and the need for strategic reframing in threat assessment."

Additionally, the participants proposed four novel threats, including the risk of overreach in trust toward algorithms, stating "People will 'believe' the computer too much," and the emergence of metaphysical relationships with AI, characterized by a growing number of people who identify with AI.

It is often said that cybersecurity is about people, processes and technology, and that it is a team sport. It seems no different when cybersecurity meets AI.

Isabelle Roccia, CIPP/E, is the managing director, Europe, for the IAPP.

This article originally appeared in the Europe Data Protection Digest, a free weekly IAPP newsletter. Subscriptions to this and other IAPP newsletters can be found here.