
When Cybersecurity Meets AI: Navigating Europe’s Digital Future

Kyle Dane Ballogan

"Hard Power of AI" session at the World Economic Forum Annual Meeting 2024 in Davos-Klosters, Switzerland, 18 January. Image sourced from World Economic Forum via Flickr.


As artificial intelligence (AI) rapidly transforms many aspects of our lives, it is also redrawing the traditional boundaries of cybersecurity. Consequently, European governments must regulate AI tools more effectively to ensure their responsible use across sectors.


AI’s Role in Cybersecurity Today

Currently, AI offers promising improvements in cybersecurity. For example, the automated biometric recognition systems used in intelligence work are a product of AI. In some nations, AI underpins reliable security systems such as licence-plate recognition, video surveillance, information retrieval, threat detection and security warnings, and predictive analytics for businesses and governments.


However, AI also poses significant risks, making it a potent weapon for cyberattacks. For instance, deepfake technology has been exploited in several high-profile security incidents. In 2023, TV services in the UAE, UK, and Canada had broadcasts about the Israel-Hamas conflict interrupted by AI-generated content. Similarly, in the Philippines, the President was recently the target of a malicious deepfake purporting to show him instructing the military to attack China. And in South Korea, schools are currently dealing with a deepfake pornography crisis.


In Europe, the weaponization of AI-generated content has been particularly damaging. For example, an AI-generated recording of a supposed phone call between Michal Šimečka of Progressive Slovakia and Monika Tódová, a well-known journalist, in which the pair appeared to discuss manipulating the Slovak elections, alarmed thousands of social network users. Likewise, an AI-generated video of Valerii Zaluzhny, then Commander-in-Chief of the Armed Forces of Ukraine, calling for a coup against President Zelensky was distributed by Russian Telegram channels to spread disinformation about the war in Ukraine. Even military organizations like NATO are not immune from attacks by groups such as APT29, which mimics legitimate web services to target companies and IT service providers. Indeed, AI has become an integral tool for cybercriminals and state-sponsored hackers targeting governments and vulnerable people. These incidents highlight the growing need for regional collaboration to combat AI-driven threats.


Implications for Europe

The advancement of AI in cybersecurity will have wide-ranging impacts on European policy and security. European security spending could total US$84 billion by 2027. This trend underscores the growing demand for additional security services and skilled personnel to defend against AI-driven cyber threats. Moreover, EU intelligence agencies and security alliances like NATO will likely accelerate their investment in AI-powered drones and similar weaponry to strengthen their posture against adversaries. Indeed, all agencies and departments, both within and beyond national and regional security communities, will have to allocate resources and conduct upskilling initiatives to become and remain cybersafe.


The European Parliament is already beginning to direct and regulate these developments through the AI Act. In response, tech companies and cybersecurity startups may coalesce into a competitive European industry by 2026, filling skills shortages in the tech sector and meeting the demand for cyber expertise. Nevertheless, European governments must prioritise not just AI-focused cybersecurity collaboration but also the development of stringent regulations and educational initiatives for all stakeholders, including members of the public.


AI and Cybersecurity Legislation

First, lawmakers need to leverage legal tools to strengthen cyber laws aimed at safeguarding the government itself from cyber threats, including but not limited to insider threats, automated attacks, hacktivists, and state-sponsored attackers. Additionally, the EU should revise its Blueprint in line with current developments in cybersecurity. Furthermore, should the EU expedite the finalization of its Code of Practice by May 2025, drafters must draw on existing resources and international standards such as the UNESCO Recommendation on AI Ethics, the UN Resolution on AI, and the OECD AI Principles.


Governance and sound legislation are sound measures for meeting the challenges AI poses to Europe's future. Yet given the cross-border nature of cyber threats, their effectiveness hinges on strengthened global collaboration and standardization.


Experts and Dialogues

Listening to experts, academics, and civil rights groups is a sine qua non in tackling the ethical complexities of AI technologies. Collaboration and proactive investment with experts who truly understand, implement, and develop AI will strengthen both the content and the enforcement of AI laws. Furthermore, policymakers should work harder through the AI Pact to collaborate with and regulate tech companies, thereby securing governmental cybersecurity oversight without constraining AI's potential for innovation. Finally, authorities must convene roundtable discussions, risk assessments and exercises, and EU-wide conferences to address sector-specific issues and the burdens that costs, excessive bureaucracy, and potential overregulation place on companies.


Public Awareness of AI

Ultimately, a key strategy for combating AI-driven cyber threats is enhancing public awareness and education. Once the public sufficiently understands AI, that knowledge can develop into a unified approach to cyber hygiene. AI is already in widespread public use across demographics and for diverse needs, but that use is often neither safe nor informed. With the emergence of large language models (LLMs) and self-learning systems, governments could partner with AI-driven platforms to oversee and regulate generative AI in order to protect public users. The EU could further nurture public interest in and use of AI by establishing a Cybersecurity Academy to train and upskill cybersecurity practitioners within the region.


Overall, effectively regulating AI will require a long-standing, diligent, and collective commitment from governments, industry experts, and the public. Successfully navigating the AI landscape demands great caution. Accordingly, European leaders in cybersecurity and emerging technologies must move beyond the traditional security mindset of mere protection towards cyber resilience, while responsibly balancing innovation with protection.



Kyle Dane Ballogan is a Humanities and Social Sciences (2021) and Political Sciences (2024) graduate in the Republic of the Philippines. His research and academic interests focus on international relations, statecraft, and international law, with a special reference to grand strategy, diplomacy, foreign policy, and public international law.

 
 
 


Young Australians in International Affairs is a registered charity with the Australian Charities and Not-for-Profits Commission.


© 2025 Young Australians in International Affairs Ltd

