
Boffins urge AI regulation to head off future threats


A group of 24 AI luminaries have published a paper and open letter calling for stronger regulation of, and safeguards for, the technology, before it harms society and individuals.

“For AI to be a boon, we must reorient; pushing AI capabilities alone is not enough,” the group urged in their document.

Led by two of the three so-called “godfathers of AI,” Geoffrey Hinton and Yoshua Bengio, the group said that AI progress has been “swift and, to many, surprising.”

There’s no reason to suppose the pace of AI development will slow down, the group argued, meaning a point has been reached at which regulation is both required and possible – an opportunity they argue could pass.

“Climate change has taken decades to be acknowledged and confronted; for AI, decades could be too long,” the letter asserts. “Without sufficient caution, we may irreversibly lose control of autonomous AI systems, rendering human intervention ineffective.”

Future development of autonomous AI is the focal point of the letter. Such systems, the boffins argue, could be designed with malicious intent or equipped with harmful capabilities, making them potentially more dangerous than the many nation-state actors currently threatening sensitive systems.

Further, bad AI could “amplify social injustice, erode social stability, and weaken our shared understanding of reality that is foundational to society,” the authors wrote.

To forestall the worst possibilities, the letter urges companies researching and implementing AI to adopt “safe and ethical objectives”. The authors suggest tech companies and private funders of AI research should allocate at least a third of their R&D budgets to safety.

The authors urge governments to act, too, pointing out that while there are no regulatory or governance frameworks in place to address AI risks, governments already regulate pharmaceuticals, financial systems, and nuclear energy.

Governments should ensure they have insight into AI development through regulations like model registration, whistleblower protection, incident reporting standards and monitoring of model development and supercomputer usage, the letter-writers argue.

Governments should be given access to AI systems prior to their deployment “to evaluate them for dangerous capabilities” like self-replication, which the authors argue could make an autonomous AI an unstoppable threat. In addition, developers of cutting-edge “frontier AI” models should be held legally accountable for harms inherent in their models if those issues “can be reasonably foreseen or prevented.”

Regulators should also give themselves the authority to “license [AI] development, pause development in response to worrying capabilities, mandate access controls, and require information security measures robust to state-level hackers, until adequate protections are ready,” the group asserts.

“There is a responsible path, if we have the wisdom to take it,” Hinton, Bengio and their colleagues wrote.

Meta’s AI boss disagrees

The call for better AI risk management comes just a week before the world’s first summit on AI safety, to be held at the UK’s Bletchley Park in November. Governments, tech leaders, and academics will be in attendance to discuss the very threats the paper and open letter caution about.

One of the participants at the Bletchley summit will be Yann LeCun, the third of the three AI godfathers who won the Turing Award in 2019 for their research into neural networks, and whose name is conspicuously absent from the risk management paper published today.

In contrast to Bengio and Hinton, the latter of whom left Google in May and expressed regret for his contributions to the AI field and the harm they could cause, LeCun continues his work in the private tech industry as chief AI scientist at Facebook parent company Meta, which has gone all-in on AI development of late.

LeCun even got into a debate with Bengio on Facebook earlier this month.

The Meta exec claimed that a “silent majority” of AI scientists don’t believe in AI doomsday scenarios and believe that the tech needs open, accessible platforms to become “powerful, reliable and safe.”

Bengio, in contrast, said he thinks something with as much potential as AI needs regulation lest it fall into the wrong hands.

“Your argument of allowing everyone to manipulate powerful AIs is like the libertarian argument that everyone should be allowed to own a machine-gun … From memory, you disagreed with such policies,” Bengio said in a response to LeCun’s Facebook post. “Do governments allow anyone to build nuclear bombs, manipulate dangerous pathogens, or drive passenger jets? No. These are heavily regulated by governments.”

LeCun didn’t respond to questions from The Register, but he did speak to The Financial Times last week, making points that now read like an anticipatory response to the claims in the academic-authored AI risk management paper.

“Regulating research and development in AI is incredibly counterproductive,” LeCun told the FT, adding that those calling for it “want regulatory capture under the guise of AI safety.”

LeCun dismissed the possibility that AI could threaten humanity as “preposterous,” arguing that AI models don’t even understand the world, can’t plan, and can’t really reason.

“We do not have completely autonomous, self-driving cars that can train themselves to drive in about 20 hours of practice, something a 17-year-old can do,” LeCun argued. Attempting to control a rapidly evolving technology like AI is comparable to trying to regulate the early days of the internet, which only flourished because it remained open, the Meta man argued.

It’s worth noting that the authors of the paper and open letter published today make no claims that the current generation of AI is capable of the threats they predict. Rather, they want regulations imposed before such issues emerge.

“In 2019, GPT-2 could not reliably count to ten. Only four years later, deep learning systems can write software, generate photorealistic scenes on demand, advise on intellectual topics, and combine language and image processing to steer robots,” the group of 24 authors noted.

“We must anticipate the amplification of ongoing harms, as well as novel risks, and prepare for the largest risks well before they materialize.” ®
