AI will help create bioweapons within 3 years, says expert

AI systems are rapidly improving and will accelerate scientific discoveries – but the technology could also give criminals the power to create bioweapons and dangerous viruses in as little as two to three years, according to Anthropic CEO Dario Amodei.

Anthropic, founded by former OpenAI employees, prides itself on being safety-oriented and is best known for its large language model (LLM) chatbot Claude. Over the past six months the startup has reportedly been working with biosecurity experts to study how neural networks could be used to create weapons in the future.

On Tuesday the head of the AI biz warned a US Senate technology subcommittee that regulation is desperately needed to tackle the misuse of powerful models for harmful purposes in science and engineering, such as cyber security, nuclear technology, chemistry, and biology.

“Whatever we do, it has to happen fast. And I think to focus people’s minds on the biorisks, I would really target 2025, 2026, maybe even some chance of 2024. If we don’t have things in place that are restraining what can be done with AI systems, we’re going to have a really bad time,” he testified at the hearing.

“Today, certain steps in the use of biology to create harm involve knowledge that cannot be found on Google or in textbooks and requires a high level of specialized expertise,” Amodei said in his opening statement to the senators.

“The question we and our collaborators studied is whether current AI systems are capable of filling in some of the more difficult steps in these production processes. We found that today’s AI systems can fill in some of these steps – but incompletely and unreliably. They are showing the first, nascent signs of risk. 

“However, a straightforward extrapolation of today’s systems to those we expect to see in two to three years suggests a substantial risk that AI systems will be able to fill in all the missing pieces, if appropriate guardrails and mitigations are not put in place. This could greatly widen the range of actors with the technical capability to conduct a large-scale biological attack.”

You can see where he’s coming from. Though the fundamental principles of modern nuclear weapons are publicly known and documented, actually engineering the devices – from producing the fuel and other materials at the heart of them, to designing the conventional explosives that trigger them, to miniaturizing them – is difficult and some of the steps remain highly classified. The same goes for biological weapons: there are steps that relatively few people know, and there is a danger a future ML model will be able to fill in those gaps for a wider audience.

Although the timescale seems dramatic, it’s not so far-fetched. Folks have taken to asking chatbots for instructions on how to create weapons such as pipe bombs and napalm, as well as for drug recipes and other nefarious material. The bots are supposed to have guardrails that prevent them from revealing that kind of information – a lot of which can be found through web searches or libraries, admittedly. However, there is a realistic risk that chatbots could make that sensitive info more easily accessible or understandable for curious netizens.

These models are trained on large amounts of text, including papers from scientific journals and textbooks. As they become more advanced, they could get better at gleaning insights from today’s knowledge to come up with discoveries – even dangerous ones – or provide answers that until now have been kept tightly under wraps for security reasons.

Collaborations Pharmaceuticals, based in North Carolina, previously raised concerns that the same technology used to develop drugs could also be repurposed to create biochemical weapons.

LLMs therefore pose a potential threat to national security, as foreign adversaries or terrorists could use this knowledge to carry out large-scale attacks. Bear in mind, though, it’s just information – actually obtaining the material, handling it, and processing it to pull off an assault would be tricky.

The dangers are further heightened by the release of open source models that are becoming more and more powerful. Senator Richard Blumenthal (D-CT) noted that a group of developers had used the code for Stability AI’s Stable Diffusion models to create a text-to-image system tailored to generating sexual abuse material, for example.

Let’s hear from one of the granddaddies

Yoshua Bengio, a pioneering researcher in neural networks and the scientific director of the Montreal Institute for Learning Algorithms, agreed. Bengio is often named as one of the three “Godfathers of AI” alongside Geoff Hinton, a computer science professor at the University of Toronto, and Yann LeCun, chief AI scientist at Meta.

He urged lawmakers to pass legislation moderating the capabilities of AI models before they can be released more widely to the public. 

“I think it’s really important because if we put something out there that is open source and can be dangerous – which is a tiny minority of all the code that is open source – essentially we’re opening all the doors to bad actors,” Bengio said during the hearing. “As these systems become more capable, bad actors don’t need to have very strong expertise, whether it’s in bioweapons or cyber security, in order to take advantage of systems like this.”

“I think it’s really important that the government come up with some definition, which is going to keep moving, but makes sure that future releases are going to be carefully evaluated for that potential before they are released,” he declared.

“I’ve been a staunch advocate of open source for all my scientific career. Open source is great for scientific progress, but as Geoff Hinton, my colleague, was saying: if nuclear bombs were software, would you allow open source of nuclear bombs?”

“When you control a model that you’re deploying, you have the ability to monitor its usage,” Amodei said. “It might be misused at one point, but then you can alter the model, you can revoke a user’s access, you can change what the model is willing to do. When a model is released in an uncontrolled manner, there’s no ability to do that. It’s entirely out of your hands.”

Although companies like Meta have tried to limit the potential risks of their systems by prohibiting developers from using them in harmful ways, such license terms are not a very effective method of preventing misuse. Who is responsible if something goes wrong?

“It’s not completely clear where the liability should lie,” said Stuart Russell, a professor of computer science at the University of California, Berkeley, who also testified at the hearing.

“To continue the nuclear analogy, if a corporation decided they wanted to sell a lot of enriched uranium in supermarkets, and someone decided to take that enriched uranium and buy several pounds of it and make a bomb, wouldn’t we say that some liability resides with the company that decided to sell the enriched uranium?

“They could put advice on it that says ‘do not use more than three ounces of this in one place or something’, but no one is going to say that absolved them from liability … The open source community has got to start thinking about whether they should be liable for putting stuff out there that is ripe for misuse.”

Leaders in the open source AI community, however, seem to disagree. On Wednesday, a report backed by GitHub, Hugging Face, EleutherAI, and others argued that open source AI projects should not be subject to the same regulatory scrutiny under the EU’s AI Act as products and services built by private companies.

You can watch a replay of the hearing here. ®
