AFP and Other Media Giants Call for Global AI Regulation to Protect Integrity

Several top global news and publishing organizations, including Agence France-Presse (AFP), European Pressphoto Agency, Getty Images, and others, have signed an open letter addressing policymakers and industry leaders. They are urging the establishment of a regulatory framework for generative AI models to preserve public trust in media and protect the integrity of content.

The letter, entitled “PRESERVING PUBLIC TRUST IN MEDIA THROUGH UNIFIED AI REGULATION AND PRACTICES,” outlines specific principles for the responsible growth of AI models and raises concerns about the potential risks if appropriate regulations are not implemented swiftly.

Proposed Regulations

Among the proposed principles for regulation, the letter emphasizes:

Transparency: The disclosure of training sets used in the creation of generative AI models, enabling scrutiny of potential biases or misinformation.

Intellectual Property Protection: Safeguarding the rights of content creators, whose work is often utilized without compensation in training AI models.

Collective Negotiation: Allowing media companies to collectively negotiate with AI model developers over the use of proprietary intellectual property.

Identification of AI-Generated Content: Mandating clear, specific, and consistent labeling of AI-generated outputs and interactions.

Misinformation Control: Implementing measures to restrict bias, misinformation, and abuse of AI services.

Concerns and Risks

The letter details potential hazards if regulations are not promptly put in place. These include erosion of public trust in media, violations of intellectual property rights, and the undermining of traditional media business models.

Generative AI models are capable of producing and distributing synthetic content on a scale previously unseen, potentially leading to the distortion of facts and the propagation of biases. Additionally, the letter highlights the financial impact on media companies, which may see their content disseminated without attribution or remuneration, threatening the sustainability of the industry.

A Call for Global Standards

The signatories are not only seeking immediate regulatory and industry action but also expressing support for consistent global standards applicable to AI development and deployment. While recognizing the potential benefits of generative AI technology, the letter emphasizes the necessity of responsible growth to protect democratic values and media diversity.

While the letter applauds some efforts already made within the AI community and by various governments to address these concerns, the signatories collectively call for further dialogue and faster regulatory progress. They express eagerness to be part of the solution, ensuring that AI applications continue to thrive while respecting the rights of media companies and individual journalists.

U.S. Government’s Recent Initiatives in AI Regulation

Global concerns regarding AI, encompassing privacy, security, copyright, and misuse, have prompted recent regulatory initiatives by U.S. governmental bodies.

On July 13, 2023, the U.S. Federal Trade Commission (FTC) opened an investigation into ChatGPT over consumer protection concerns. OpenAI, the company behind ChatGPT, received a 20-page demand for records from the FTC, which is examining in particular whether OpenAI's practices around its AI models have been unfair or deceptive, potentially causing harm to people's reputations.

On July 26, 2023, the U.S. Securities and Exchange Commission (SEC) proposed new rules to address potential conflicts of interest arising from investment advisers and broker-dealers using predictive data analytics and AI. SEC Chair Gary Gensler has even warned that AI could contribute to the next financial crisis.

Image source: Shutterstock
