
Google, Meta, OpenAI Join Other Industry Giants Against AI Child Abuse Images


To combat the spread of child sexual abuse material (CSAM), a coalition of top generative AI developers, including Google, Meta, and OpenAI, has pledged to enforce guardrails around the emerging technology.

The group was brought together by two nonprofit organizations: children's tech group Thorn and New York-based All Tech is Human. Formerly known as the DNA Foundation, Thorn was launched in 2012 by actors Demi Moore and Ashton Kutcher.

The collective pledge was announced Tuesday along with a new Thorn report advocating a “Safety by Design” principle in generative AI development that would prevent the creation of child sexual abuse material (CSAM) across the entire lifecycle of an AI model.

“We urge all companies developing, deploying, maintaining, and using generative AI technologies and products to commit to adopting these Safety by Design principles and demonstrate their dedication to preventing the creation and spread of CSAM, AIG-CSAM, and other acts of child sexual abuse and exploitation,” Thorn said in a statement.

AIG-CSAM is AI-generated CSAM, which the report illustrates can be relatively easy to create.

Image showing how AI can distort a picture. Image: Thorn

Thorn develops tools and resources focused on defending children from sexual abuse and exploitation. In its 2022 impact report, the organization said more than 824,466 files containing child abuse material were found. Last year, Thorn said more than 104 million files of suspected CSAM were reported in the U.S. alone.

Already a problem online, deepfake child pornography skyrocketed after generative AI models became publicly available, with stand-alone AI models that don’t need cloud services being circulated on dark web forums.

Generative AI, Thorn says, makes creating volumes of content easier now than ever before. A single child predator could create massive volumes of child sexual abuse material (CSAM), including adapting original images and videos into new content.

“An influx of AIG-CSAM poses significant risks to an already taxed child safety ecosystem, exacerbating the challenges faced by law enforcement in identifying and rescuing existing victims of abuse, and scaling new victimization of more children,” Thorn notes.

Thorn’s report outlines a series of principles the generative AI developers would follow to prevent their technology from being used to create child pornography, including responsibly sourcing training datasets, incorporating feedback loops and stress-testing strategies, employing content history or “provenance” with adversarial misuse in mind, and responsibly hosting their respective AI models.

Others signing onto the pledge include Microsoft, Anthropic, Mistral AI, Amazon, Stability AI, Civitai, and Metaphysic, each releasing separate statements today.

"Del našega etosa pri Metaphysicu je odgovoren razvoj v svetu umetne inteligence, res, gre za opolnomočenje, a gre za odgovornost," je povedal vodja marketinga pri Metaphysicu Alejandro Lopez Dešifriraj. »Hitro spoznamo, da začeti in razvijati to pomeni dobesedno zaščititi najbolj ranljive v naši družbi, to so otroci, in na žalost najtemnejši del te tehnologije, ki se uporablja za gradivo o spolni zlorabi otrok v obliki globoko lažne pornografije. , to se je zgodilo.”

Launched in 2021, Metaphysic came to prominence last year after it was revealed several Hollywood stars, including Tom Hanks, Octavia Spencer, and Anne Hathaway, were using Metaphysic Pro technology to digitize characteristics of their likeness in a bid to retain ownership over the traits necessary to train an AI model.

OpenAI declined to comment further on the initiative, instead providing Decrypt with a public statement from its child safety lead, Chelsea Carlson.

“We care deeply about the safety and responsible use of our tools, which is why we’ve built strong guardrails and safety measures into ChatGPT and DALL-E,” Carlson said in a statement. “We are committed to working alongside Thorn, All Tech is Human and the broader tech community to uphold the Safety by Design principles and continue our work in mitigating potential harms to children.”

Decrypt reached out to the other members of the coalition but did not immediately hear back.

“At Meta, we’ve spent over a decade working to keep people safe online. In that time, we’ve developed numerous tools and features to help prevent and combat potential harm—and as predators have adapted to try and evade our protections, we’ve continued to adapt too,” Meta said in a prepared statement.

“Across our products, we proactively detect and remove CSAE material through a combination of hash-matching technology, artificial intelligence classifiers, and human reviews,” Google’s vice president of trust and safety solutions, Susan Jasper, wrote in a post. “Our policies and protections are designed to detect all kinds of CSAE, including AI-generated CSAM. When we identify exploitative content we remove it and take the appropriate action, which may include reporting it to NCMEC.”
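For context on the hash-matching approach Jasper mentions, platforms generally fingerprint uploaded media and compare those fingerprints against a database of known, previously verified material, escalating any match to human reviewers. The snippet below is a minimal, hypothetical sketch of that triage flow; it is not Google's implementation, and the hash value and function names are invented for illustration.

```python
import hashlib

# Hypothetical fingerprints of known, previously verified material,
# e.g. shared through an industry hash-exchange program (value invented).
KNOWN_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}


def fingerprint(file_bytes: bytes) -> str:
    """Compute an exact-match fingerprint (SHA-256 hex digest) of an upload."""
    return hashlib.sha256(file_bytes).hexdigest()


def screen_upload(file_bytes: bytes) -> str:
    """Triage an uploaded file against the known-hash list.

    Exact hashes only catch byte-identical copies; production systems layer
    on perceptual hashing and ML classifiers, and route every hit to human
    reviewers before any report (e.g. to NCMEC) is filed.
    """
    if fingerprint(file_bytes) in KNOWN_HASHES:
        return "block_and_queue_for_human_review"
    return "allow"


if __name__ == "__main__":
    print(screen_upload(b"example upload bytes"))  # prints "allow"
```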

In October, UK watchdog group the Internet Watch Foundation warned that AI-generated child abuse material could ‘overwhelm’ the internet.

Edited by Ryan Ozawa.
