
AI Act Gains Momentum With Full Endorsement From EU Countries



On Friday, Europe moved a step closer to adopting rules governing the use of artificial intelligence (AI) and AI systems such as Microsoft-backed ChatGPT, after European Union (EU) countries endorsed a political deal reached in December.

First proposed by the European Commission three years ago, the rules aim to set a global standard for a technology used across a wide range of industries, from banking and retail to cars and airlines.

A Historic Act

Thierry Breton, the EU's industry chief, called the Artificial Intelligence (AI) Act historic and a world first. According to him, member states endorsed the political agreement reached in December, recognizing the perfect balance struck by the negotiators between innovation and safety.

The regulations also set parameters for the use of AI for military, crime and security purposes.

The endorsement on Friday, Feb. 2, had been expected after France, the last holdout, dropped its opposition to the AI Act.

France secured stringent conditions in exchange for its support, including balancing transparency requirements with the protection of business secrets and minimizing the administrative burden on high-risk AI systems.

EU diplomatic officials also said the objective is to foster the development of competitive AI models within the bloc. The officials spoke on condition of anonymity because they were not authorized to comment publicly on the issue.

On Deepfakes

A significant concern is that artificial intelligence (AI) has fueled the spread of deepfakes: authentic-looking yet artificially generated videos created by AI algorithms trained on vast amounts of online content.

Deepfakes often circulate on social media, blurring the line between fact and fiction in public life.

Margrethe Vestager, the European Union's (EU) digital chief, emphasized the need for the new regulations, pointing to the recent surge of fabricated sexually explicit images of pop singer Taylor Swift circulating on social media.

According to her, what happened to Taylor Swift tells it all: the harm that AI can trigger if badly used, the responsibility of platforms, and why it is so important to enforce tech regulation.

According to sources, French AI start-up Mistral, founded by former Meta and Google AI researchers, and Germany's Aleph Alpha have been actively lobbying their respective governments on the matter.

Germany Backed the Regulations

The CCIA, a tech lobbying group that counts Alphabet's Google, Amazon, Apple, and Meta Platforms among its members, warned of roadblocks ahead.

Boniface de Champris, CCIA Europe's Senior Policy Manager, said that many of the new AI rules remain unclear and could slow the development and roll-out of innovative AI applications in Europe. He added that proper implementation of the act will be key to ensuring that the AI rules do not overburden companies in their quest to innovate and compete in a thriving, highly dynamic market.

For the AI Act to become law, the next steps are a vote by a key committee of EU lawmakers on Feb. 13 and a European Parliament vote in March or April.

The AI Act will likely be signed into law before the summer and should take effect in 2026, although some parts of the legislation will kick in earlier.
