What Does the EU’s AI Act Mean for Startups?

The EU Parliament passed the much-awaited AI Act last week, making it the first major jurisdiction in the world to introduce comprehensive rules for the sector, but compliance could prove tough for startup companies new to regulation. 

While aspects of the AI Act are open to interpretation and have yet to be implemented, some founders worry the measures could hurt smaller companies and hamper investment and innovation, putting Europe further behind the U.S. and China in the AI race.

The new rules, which aim to restrict uses of AI considered high-risk, such as deepfakes and facial recognition software in public areas, will apply to all firms that deploy AI in the EU’s 27 member states. Altogether, the bloc represents about 20% of the global economy.

Also read: EU Parliament Finally Adopts the AI Act 

AI Act: The knock-on effects

“While the Act is quite progressive and likely to have a knock-on effect in other regions, there are concerns on how it might impact innovation in the EU,” Nitish Mittal, partner at U.S. tech group Everest, told MetaNews.

Mittal, who leads Everest’s digital transformation and IT services in Europe, the United Kingdom, and Ireland, said that over the last few decades, Europe appears “to have slipped behind the U.S. & China in terms of technology innovation.”

But he also noted that the EU anticipated such weaknesses and began preparing for them before the law, which is expected to be fully implemented over the next two years, comes into force later this year.

“The EU does recognise some of these challenges and is trying to implement some avenues for helping startups and the innovation around AI,” Mittal said.

In late January, the bloc announced a range of measures aimed at boosting innovation for European startups developing what it calls “trustworthy” AI that “respects EU values and rules.”

It said the firms will have “privileged access to supercomputers” and that the EU itself will build “AI Factories” to make sure the AI infrastructure is available for startups to buy and upgrade.

An AI-powered robot operated by an engineer. Image credits: EU Commission

Risky business

Even before the European Parliament, the main legislative body in the EU, voted in favor of the AI Act, the law faced criticism from startup founders working with generative models.

In October, Cedric O, co-founder of French AI startup Mistral, said the law would “kill” his firm. The entrepreneur worried that the law placed excessive scrutiny on large language models, even if they were not being used for sensitive purposes such as hiring, Sifted reported.

Jonas Andrulis, CEO of Aleph Alpha, the German rival to U.S. ChatGPT creator OpenAI, said classifying “general purpose AI” like LLMs as high-risk could have unintended consequences. His comments were echoed by Peter Sarlin, CEO of Finland’s Silo AI.

“If we are sort of generalizing across generative AI technology, and saying that all use cases that utilize generative pre-trained transformers (GPTs) are high-risk, then I think we will also be regulating quite a lot of use cases that aren’t actually high-risk,” Sarlin said at the time.

It wasn’t only entrepreneurs raising concerns about the AI Act. A U.S. State Department analysis in October warned that some rules in the law were based on “vague or undefined” terms, according to Bloomberg.

The analysis said Europe’s AI Act would benefit the largest tech firms that have the financial clout to train AI models and machine learning systems. Smaller firms are likely to suffer losses, it added.

The AI Act outlines different risk categories for AI use, ranging from “low risk” to “high risk” and “unacceptable risk.” AI applications considered a threat to individual rights, such as social scoring or facial recognition software in public places, will be banned outright.

Sensitive “high-risk” use cases that will be allowed include things like border management, education, and recruitment. Companies that use such technologies will be required to disclose more information about the data used to train their systems.

Adjusting to the AI Act

“While the Act encourages ethical artificial intelligence development, it also introduces specific requirements and obligations, especially for high-risk AI systems,” Michael Borrelli, CEO of London-based AI & Partners, told MetaNews.

Borrelli, whose firm helps companies with regulatory compliance in Europe, added that the new rules could necessitate adjustments in how startups operate and innovate.

“The need to comply with these regulations may initially pose challenges but ultimately aims to foster a safer and more reliable AI ecosystem, potentially enhancing the growth and global competitiveness of European start-ups,” he explained.

One of the major issues raised by startup founders is that the new legislation could treat all generative AI models as high-risk, even when the firms behind them face very different challenges.

Nitish Mittal, the Everest Group partner, was keen to emphasize that certain sectors categorized as high-risk “will likely need more safeguards and understanding” on how this will apply to their companies and the “measures they need to take.”

“Every organization will need to take a harder and closer look at the data and all aspects around it,” Mittal said.

“For instance, who owns the data that they use, are they using it to train their models, how do they work with partners as well as clients, etc.,” he added.

Competing with the U.S.

Europe lags behind the U.S. in the number of large generative AI firms, but it continues to foster an active ecosystem of smaller players. Notable names include Mistral AI, Oslo-based Iris AI, and Amsterdam-based Zeta Alpha.

The AI Act recognizes this difference and speaks directly to the European startup community. As Michael Borrelli highlighted, the law mandates priority access to AI regulatory sandboxes for small and medium businesses, including startups.

It also offers measures to support innovation, such as organizing awareness and “training activities tailored to SMEs and reducing fees for conformity assessments proportionately to their size,” he said.

But venture funds are less likely to invest in startups classified by the AI Act as high-risk, according to a 2023 survey of 14 European VCs by the Initiative for Applied AI. Eleven of the funds said they were less likely to invest in companies with a high-risk rating, and eight said such a rating would negatively impact a startup’s valuation.

For Borrelli, the fact that the U.S. is still working out its legal framework for AI – currently a less centralized approach that varies by state – means the EU’s AI Act, which provides a harmonized set of rules for the entire European market, has the upper hand.

“This harmonization can offer a clear and consistent regulatory environment for startups, potentially making it easier to scale across the EU without navigating disparate regulations,” he said.

But he warned that the collective regulatory approach could also “slow down [the] rapid scaling” of artificial intelligence products.

“The stringent requirements for high-risk AI systems and the focus on ethical AI development require European startups to invest more in compliance and ethical considerations than their US counterparts,” Borrelli explained.
