How the EU AI Act Will Affect Businesses, Cybersecurity

Experts are ringing alarm bells over the risks that unfettered development of artificial intelligence (AI) technology could pose to humanity. Enter the European Union (EU), already a leader in data protection and privacy rights, where the EU Parliament has agreed on a law governing AI technology.

Jonathan Dambrot, CEO of Cranium, says it’s not surprising that the EU, once again, has taken the lead on tech regulation.

“We saw this with GDPR and data privacy, and now we’re seeing the same with AI,” he says.

While the text of the so-called AI Act will likely undergo further refinements and modifications, steady progress on the law indicates governments are stepping up to the challenge of harnessing — or attempting to harness — a technology that has come to dominate headlines in a few short months.

“As businesses navigate this landscape, it is crucial to understand the context of existing regulations, such as the GDPR, and the key elements of the upcoming AI Act,” says Kyle Kappel, US leader for cyber at KPMG.

From his perspective, compliance with these regulations means putting into practice more robust data management, including careful handling of user information.

“Businesses should also be prepared to ensure explainability of AI decisions, document AI behavior, and potentially undergo external testing to address concerns like bias,” he adds.

Compliance with evolving AI regulations will likely drive businesses to establish cohesive data and machine learning operations (MLOps) practices, treating the regulations as interconnected components.
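
What might "documenting AI behavior" look like in practice? One common starting point, sketched below in Python, is a machine-readable model card kept alongside each deployed model. The fields and file name shown are illustrative assumptions, not anything the AI Act or GDPR prescribes.

```python
# Illustrative sketch only: a minimal, machine-readable "model card" a team
# might store next to each deployed model to document intended use, data
# sources, limitations, and bias checks. The schema is hypothetical.
from dataclasses import asdict, dataclass, field
import json


@dataclass
class ModelCard:
    model_name: str
    version: str
    intended_use: str
    training_data_sources: list[str]
    known_limitations: list[str] = field(default_factory=list)
    bias_evaluations: list[str] = field(default_factory=list)


card = ModelCard(
    model_name="fraud-screening",
    version="2.3.1",
    intended_use="Flag suspicious transactions for human review",
    training_data_sources=["internal transaction logs, 2019-2023"],
    known_limitations=["Lower precision on low-volume merchant categories"],
    bias_evaluations=["Disparate impact check across customer age bands"],
)

# Persisting the card with the model artifact gives auditors one place to look.
with open("model_card.json", "w") as f:
    json.dump(asdict(card), f, indent=2)
```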

A Double-Edged Sword

Craig Jones, vice president of security operations at Ontinue, says the new regulatory environment could be a double-edged sword.

“While it might stimulate more robust, ethical, and secure AI applications in cybersecurity, it has the potential to curb experimental approaches and slow down the speed of innovation,” he says.

From Jones’ perspective, it’s a tightrope walk between ensuring responsible AI use and maintaining a vibrant, dynamic research and development ecosystem.

“On the upside, the [AI] Act provides a regulatory safety net that seeks to ensure ethical and safe AI applications, which can instill more public trust in these technologies,” he says. “It also raises the bar for AI transparency and accountability.”

The downside is that it could temper the pace of AI innovation, making the EU less attractive for AI startups and entrepreneurs.

“The balance between transparency and protection of proprietary algorithms also poses a complex challenge,” Jones notes.

Global Impact on AI Regulation

Chris Vaughan, vice president of technical account management at Tanium, says the AI Act will force many commercial organizations to work within the EU framework.

“It is a powerful and well-established marketplace that many companies wish to conduct business within,” he says. “To do so, they must be compliant with EU law. This instantly creates a global impact.”

Cranium’s Dambrot agrees that the EU’s decision will “absolutely” have a global impact, just like the GDPR did.

“People are more afraid of AI than [of losing] their privacy, historically. The need for the EU, US, China, and every major power to regulate will be important for the adoption of AI universally,” he says. “With the EU AI Act, Europe is leaning in and taking a first-mover advantage in these regulations.”

If there’s no comprehensive framework or guidance, Dambrot adds, then US companies are going to have competing compliance pressures at the state and federal levels.

“Although the precedent for privacy is for states to take the lead, my hope is that there be a comprehensive AI regulation, like the EU AI Act, to help regulate the responsible and safe use of AI,” Dambrot says.

This will make it easier for both US and foreign AI developers to serve clients securely and navigate compliance, he notes.

“It’s really interesting when you see major tech players, like OpenAI, say to Congress, ‘Please regulate us,'” Dambrot adds.

US Faces Challenging Regulatory Hurdles

Not everyone is so sure the US will act with speed, however, including Mike Britton, CISO of Abnormal Security. Britton says the federal government will face several roadblocks in following suit. For starters, the US lags when it comes to privacy and regulation in general.

“It’s complicated for a variety of reasons, including the fact that privacy is not a fundamental right in the US like it is in Europe,” he says.

Another big challenge for US lawmakers: Privacy regulations are implemented around specific types of information — HIPAA for healthcare, GLBA for financial services, and COPPA for protection of children’s privacy.

“There is currently very little desire to harmonize these various privacy laws since agencies and organizations have already laid claim to the regulations that govern these areas,” Britton says.

Finally, he points out that “Big Tech” has been extremely successful in lobbying for self-regulation and a laissez-faire approach to regulating technology.

“I imagine they will push hard to do the same here,” Britton says. “On the bright side, the White House recently released an AI Bill of Rights, which shows that there is some consideration being given to the issue.”

Impact of AI on Cybersecurity

Dambrot predicts AI will worm its way into almost every cyber function, from incident response and security operations centers to third-party risk and other applications. CISOs who had not prioritized AI before this year must now play catch-up, he warns.

“Technology such as ChatGPT is now at a point where it can rewrite malware — meaning traditional detection programs are unable to identify it,” Tanium’s Vaughan explains.

Vaughan predicts cybersecurity and AI innovation will compete in a game of cat and mouse to see which can develop quicker.

“For example, as malware, phishing, and cyberattacks evolve, the defensive counterparts must develop alongside,” he says. “We also need protection against the malicious use of AI technology, such as deepfakes. We have enough problems with online harassment with real images — permitting fake images into the mix could have catastrophic results.”

How Privacy Requirements Affect AI

The EU legislation focuses on aspects of AI that can harm individuals, which could affect how the technology progresses.

“AI innovations may become more difficult,” Vaughan says. “AI algorithms are based on data, which must be sourced from somewhere.”

So far there have been few — if any — requirements for AI developers to reveal where they got their data or how they used it to teach their AI systems.

However, with the new EU legislation, innovators will have to openly state the origin of their data and provide details on how they used it to train their AI algorithms. This is to ensure transparency and accountability in the development of AI technology.
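
In practical terms, that kind of disclosure starts with recording provenance when data is ingested. The Python sketch below illustrates one possible approach; the schema, field names, and file names are hypothetical, not drawn from the legislation.

```python
# Illustrative sketch: logging the provenance of each training dataset at
# ingestion time, so "where did the data come from and how was it used?"
# can be answered later. The schema here is a hypothetical example.
import hashlib
import json
from datetime import datetime, timezone


def record_provenance(path: str, source: str, license_terms: str,
                      used_for: str, log_path: str = "provenance.jsonl") -> None:
    """Append one audit entry for a dataset file to a JSON Lines log."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    entry = {
        "file": path,
        "sha256": digest,  # ties the record to the exact bytes trained on
        "source": source,
        "license": license_terms,
        "used_for": used_for,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }
    with open(log_path, "a") as log:
        log.write(json.dumps(entry) + "\n")


# Example call (assumes a local reviews.csv exists; all values are illustrative):
record_provenance(
    path="reviews.csv",
    source="public product-review corpus",
    license_terms="CC BY 4.0",
    used_for="fine-tuning sentiment classifier v4",
)
```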

“There are some unintended consequences. Consider the right to be forgotten,” Dambrot says. “If individuals can demand that their [personally identifiable information], which may have been included in training a model, be removed, then what’s the security impact to that model?”
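
The naive remedy is to drop the individual’s records and retrain from scratch, as in the hypothetical Python sketch below (the column names and model choice are illustrative). More efficient “machine unlearning” that avoids full retraining remains an open research area, which is part of why the impact Dambrot describes is hard to bound.

```python
# Illustrative sketch of the naive answer to a right-to-be-forgotten request:
# remove the individual's rows and retrain from scratch. Note the retrained
# model's behavior may shift in ways that need re-validation, which is the
# security impact Dambrot raises.
import pandas as pd
from sklearn.linear_model import LogisticRegression


def retrain_without_user(df: pd.DataFrame, user_id: str) -> LogisticRegression:
    remaining = df[df["user_id"] != user_id]      # honor the erasure request
    X = remaining[["feature_a", "feature_b"]]     # hypothetical feature columns
    y = remaining["label"]
    return LogisticRegression().fit(X, y)
```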

Adds Vaughan: “This creates additional red tape for businesses but ultimately protects people. A slight delay in innovation is a worthy sacrifice for safety.”
