
Europe Sees More Hacktivism, GDPR Echoes, and New Security Laws Ahead for 2024

An evolving geopolitical landscape and shifting regulatory requirements have transformed Europe’s cybersecurity environment over the past year, bringing new challenges for safeguarding critical infrastructure and sensitive data.

The Ukraine war and the conflict in Gaza have led to a rise in hacktivism, and ransomware gangs have been quick to capitalize on new critical vulnerabilities to gain initial access to many organizations. This is exacerbated by threat actors having greater access to various means of automation, be it readily available command-and-control (C2) toolkits, generative AI (genAI) to support their spear-phishing efforts, or commercially available ransomware from the Dark Web.

The conflict in Ukraine dominated the early part of the year, with the threat of nation-state cyberattacks and counterattacks potentially spilling out of the theater of war into the wider European cyber ecosystem. “Critical infrastructure will remain a target for both ‘propaganda’ and genuine disruption purposes,” says Gareth Lindahl-Wise, CISO at Ontinue. “Sensitive data will continue to be actively sought for operational military advantage, criminal extortion purposes, and also for nation-state and commercial advantage.”

The European Union Agency for Cybersecurity (ENISA), the EU agency dedicated to achieving a high common level of cybersecurity across Europe, recorded approximately 2,580 incidents between July 2022 and June 2023. That number does not include the 220 incidents that specifically targeted two or more EU Member States, according to ENISA spokesperson Laura Heuvinck. “In most cases, top threats may be motivated by a combination of intentions such as financial gain, disruption, espionage, destruction, or ideology in the case of hacktivism,” Heuvinck says.

EU Pushes Forward With Security Rules

On the data regulatory front, the European Union remains extremely active.

The General Data Protection Regulation (GDPR) — a comprehensive data protection law implemented by the EU in May 2018 — has pushed security teams to better understand the data they hold, where it resides, how it is secured, and who it is shared with. “Outside of the ‘consent’ and ‘right to use’ elements, these should have been core basics for data security from the get-go,” Lindahl-Wise says. “There is a danger that commercially sensitive yet non-PII data is left as a poor relative in prioritization.”
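The data-mapping discipline Lindahl-Wise describes can begin with something as simple as an automated first pass over shared storage. The following Python sketch is a hypothetical illustration (the directory path and regex patterns are assumptions, not part of any specific GDPR tooling) of how a team might flag files that appear to contain personal data such as email addresses or phone numbers.

import re
from pathlib import Path

# Hypothetical illustration only: a first-pass scan for likely PII
# (email addresses and phone-like numbers) across a directory tree.
# Real GDPR data-mapping exercises need far broader coverage and legal review.

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d ().-]{7,}\d")

def scan_for_pii(root: str) -> dict:
    """Return {file_path: {pattern_name: match_count}} for readable files under root."""
    findings = {}
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file; skip rather than abort the whole scan
        counts = {
            "email": len(EMAIL_RE.findall(text)),
            "phone": len(PHONE_RE.findall(text)),
        }
        if any(counts.values()):
            findings[str(path)] = counts
    return findings

if __name__ == "__main__":
    # "./shared-drive" is a placeholder path used only for illustration.
    for file_path, counts in scan_for_pii("./shared-drive").items():
        print(f"{file_path}: {counts}")

In practice, regex scanning only surfaces the most obvious patterns; mature data-mapping programs layer on classification tooling, data loss prevention, and legal review.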

The new European Union directive, NIS 2 Directive 2022/2555, aims to improve the security and resilience of network and information systems across the EU. Affected organizations (providers of what are considered “essential services,” such as energy and drinking water suppliers, financial and healthcare institutions, internet service providers, transportation operators, and public administration bodies, to name a few) are legally obligated to implement “appropriate and proportionate technical, operational, and organizational safeguards” to manage and mitigate cybersecurity risk. Organizations have until October 2024 to comply.

While GDPR has led to increased scrutiny of data privacy and data processing — who is using our data, where, and for what purpose — NIS2 is driving European organizations to significantly step up their cyber maturity, says Max Heinemeyer, chief product officer at Darktrace, noting that NIS2 has been a major topic at various European security conferences this year. “Organizations are feeling the pressure to act and keep up with compliance,” Heinemeyer says.

In early December, the European Commission, Council, and Parliament announced they had reached an agreement on the text of the Cyber Resilience Act. This means that while there are still details to hammer out during the legislative process, the Act is expected to become law and take effect in early 2024. The CRA, which aims to safeguard consumers and businesses using digital products, will introduce a new set of cybersecurity obligations, such as mandatory security updates for a minimum of five years and the disclosure of actively exploited, unpatched vulnerabilities to government agencies.

Securing AI and ML

The EU has reacted to potential cybersecurity risks from AI and machine learning with the European Artificial Intelligence Act. While the Act still needs to go through several rounds of legislative proceedings before it becomes law, there is agreement around its broad outlines. The proposed rules will restrict the use of automatic face recognition technologies, prohibit various ways in which AI can be used, place high-risk products running AI under scrutiny, and impose transparency and oversight requirements on AI models. Cybersecurity is an important element of the Act’s requirements to ensure that AI systems are trustworthy.

The AI Act would be the first comprehensive regulation of AI technology, and similar to how GDPR set a standard for data protection, it would set a high standard of AI regulation for other countries to follow. However, there are concerns that AI regulation would be difficult to comply with and could hamper innovation in Europe, says Ron Moscona, a partner at the international law firm Dorsey & Whitney. While EU regulation of the development and distribution of AI software will affect developers and providers operating in the EU, it could be largely ignored by companies, research institutions, and state agencies in other countries.

“The result can mean that whilst local technology development is hampered in Europe as a result of demanding regulations, it will continue to develop elsewhere relatively unchecked, and it will be very difficult to rely on local regulations to stop non-compliant AI software generated around the world from finding its way to European markets and users,” says Moscona.

Other AI, Cybersecurity Initiatives

Other efforts include the creation of the European Cybersecurity Skills Academy and the European Cybersecurity Competence Center, as well as the development of European Cyber Security Schemes, a comprehensive certification framework. “These initiatives mainly focus on such aspects as supply chain security, transparency, security by design, and skill building and training,” says Jochen Michels, head of public affairs in Europe for Kaspersky.

ENISA is working on mapping the AI cybersecurity ecosystem and providing security recommendations for the challenges it foresees. The agency has also published the Artificial Intelligence and Cybersecurity Research report, which aims to identify the need for research on cybersecurity uses of AI and on securing AI itself. A security risk assessment should take into account the design of the system and its intended purpose, says ENISA’s Heuvinck. Cybersecurity and data protection are important in every part of the AI ecosystem to create trustworthy technology.

There are two different aspects to consider about the cybersecurity impact of AI. On one hand, AI systems themselves can be exploited to manipulate expected outcomes. On the other hand, AI techniques can be used to support security operations — as in ENISA’s Open Cyber Situational Awareness Machine, which automatically gathers, classifies, and presents information related to cybersecurity and cyber incidents from open sources. For this to work, organizations need to be able to assess AI’s impact, as well as monitor and control it, with a view to making AI secure and robust.
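As a rough illustration of the gather-classify-present pattern that tools like the Open Cyber Situational Awareness Machine embody, the Python sketch below pulls headlines from an RSS feed and tags them with simple keyword matching. It is not ENISA's tool; the feed URL, keyword lists, and categories are placeholder assumptions.

import urllib.request
import xml.etree.ElementTree as ET

# Illustrative sketch only -- not ENISA's OpenCSAM. The feed URL, keywords,
# and categories below are placeholder assumptions.

FEEDS = ["https://example.org/security-news.rss"]  # hypothetical RSS source

CATEGORIES = {
    "ransomware": ["ransomware", "extortion"],
    "hacktivism": ["hacktivist", "ddos", "defacement"],
    "vulnerability": ["cve", "zero-day", "exploit", "patch"],
}

def classify(title: str) -> list:
    """Tag a headline with every category whose keywords appear in it."""
    lowered = title.lower()
    tags = [cat for cat, words in CATEGORIES.items()
            if any(word in lowered for word in words)]
    return tags or ["uncategorized"]

def gather(feed_url: str) -> list:
    """Fetch an RSS feed and return (headline, tags) pairs."""
    with urllib.request.urlopen(feed_url, timeout=10) as resp:
        root = ET.fromstring(resp.read())
    return [(item.findtext("title", default=""),
             classify(item.findtext("title", default="")))
            for item in root.iter("item")]  # standard RSS 2.0 item elements

if __name__ == "__main__":
    for url in FEEDS:
        for headline, tags in gather(url):
            print(f"[{', '.join(tags)}] {headline}")

Production systems of this kind replace the keyword lists with trained classifiers and add deduplication and source scoring, but the basic pipeline shape is the same.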

“Cybersecurity is a given if we want to guarantee the trustworthiness, reliability, and robustness of AI systems, while additionally allowing for increased user acceptance, reliable deployment of AI systems, and regulatory compliance,” Heuvinck says.
