DARPA launches contest to build AI software defenders

Black Hat In a surprise announcement at the opening Black Hat keynote today, DARPA unveiled what it’s calling an AI Cyber Challenge (AIxCC). That’s a two-year competition to build protective machine-learning systems that can safeguard software and thus critical infrastructure.

The contest, which begins Wednesday, will pit teams against each other to build models that can identify risks within code, block attacks against those vulnerabilities, and fix the underlying flaws. The hope is that models capable of protecting applications in general will, by extension, be able to defend critical IT infrastructure at the software level.
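To make that task concrete, here's a minimal sketch of the kind of find-and-flag loop contestants will presumably be automating, assuming an OpenAI-style chat API. The model name, prompt, and `review` helper are our own illustration, not anything DARPA has specified:

```python
# Hypothetical sketch of an AI code-review loop; not DARPA's harness.
# Assumes the openai package (>=1.0) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

VULNERABLE_SNIPPET = """
#include <string.h>
void greet(const char *name) {
    char buf[16];
    strcpy(buf, name);   /* classic unbounded copy */
}
"""

def review(code: str) -> str:
    """Ask the model to flag vulnerabilities in a snippet and propose a fix."""
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You are a security reviewer. Identify any "
                        "vulnerability in the code and suggest a patch."},
            {"role": "user", "content": code},
        ],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    print(review(VULNERABLE_SNIPPET))
```

A competition-grade system would of course go further, verifying the model's finding against the running program and applying the patch automatically, but the find-then-fix shape is the same.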

Anthropic, OpenAI, Google, and Microsoft have pledged to provide advice and software for participants to use, and the Open Source Security Foundation (OpenSSF) is also on the team.

More background details, rules, and deadlines can be found here.

“The push for this comes from within DARPA, and when we approached people like Anthropic and others about this, they said they were thinking exactly the same thing,” AIxCC program manager Perri Adams told The Register in Las Vegas earlier today. She added this is all coming together “in a climate in which we’re seeing fantastic AI technology.”

“There’s a lot of people who see that AI has enormous potential to secure code and that was a really fantastic opportunity,” she added. “What we’re focused on is trying to secure as broad a swath of software as possible. So, we’re trying to model challenges on general purpose software because that is what we find in critical infrastructure systems.”

Perri Adams, DARPA’s AIxCC program manager, tells Black Hat attendees there’s millions on the table

DARPA, the research nerve-center of the US military, is inviting those keen to take part to register for either a self-funded Open Track or a Funded Track; the latter will accept up to seven selected small businesses, each of which will be given as much as $1 million to compete. Registration for the Funded Track closes September 19, while Open Track contestants have until December 15 to sign up.

In spring 2024, teams will compete in a series of trials to determine eligibility for the semi-final competition, in which the top 20 teams will face off at next year's DEF CON conference. The top five teams there will each take home $2 million in prize money.

The following year, at DEF CON 2025, the final five will compete for a $4 million top prize, with $3 million for second place and $1.5 million for third. To be eligible, each team will need at least one member who is a US citizen or permanent resident.

That's $18.5 million in prizes all told (five semi-final awards of $2 million each, plus $4 million, $3 million, and $1.5 million in the final), on top of up to $7 million in funding for the small-business entrants.

“When people hear security and AI, all the synapses start firing, but this is not focused on ‘is the model secure,’ but ‘let’s assume that we have this amazing tool called AI, now how do we apply it across broad amounts of software’,” Omkhar Arasaratnam, OpenSSF’s general manager, told us.

“If you’re a software engineer and you’re using zlib on your phone or in a Linux distribution on your desktop, it’s still zlib and we have to ensure the same security properties apply.”

That approach is embedded in AIxCC: safeguarding code wherever it may be, or so it seems to us.

So many black boxes

AI was a running theme today. Another Black Hat keynote speaker, Maria Markstedter, founder of Arm code specialist Azeria Labs, warned that the future of securing machine-learning technologies looks uncertain for two reasons: the industry's desire to move fast and break things, and a lack of internal technical detail for security professionals to work with.

That is to say, some organizations are rushing models into real-world scenarios in ways that could put people at risk, while how those models are trained and deployed is kept secret or is simply hard to follow. None of which is a great situation for infosec researchers, nor for end users and administrators.

“The corporate AI arms race has begun,” Markstedter opined. “At the forefront of this is, of course, Microsoft.”

She pointed out that in February, Microsoft CEO Satya Nadella boasted his corporation was going to move fast in this area, and she added that Redmond exec Sam Schillace was even quoted as saying it would be an “absolutely fatal error in this moment to worry about things that can be fixed later.” Anyone who has applied Windows patches will know this all too well.

Let's hope ML doesn't start off as smartphones did. Early handsets, Markstedter said, ran everything with root access, were riddled with critical bugs, offered very little in the way of remediation, and had no sandboxing of data.

Change came because security folks could take the handsets apart, physically and code-wise, see how they worked, find flaws, and show manufacturers where they needed to up their game. The same won’t be possible with black-box AI products, she suggested.

To make matters worse, with training data lifted wholesale from the web, today's big AI stacks are "like a dummy who believes everything they read on the internet," she said. In addition, research has shown that poisoning training datasets can significantly degrade the accuracy and behavior of ML systems.
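The simplest form of that attack, label flipping, is easy to demonstrate on a toy classifier. The sketch below is our own illustration of the general class of attack Markstedter referenced, not a reproduction of any specific paper, and assumes scikit-learn and NumPy are installed:

```python
# Toy label-flipping demo of training-data poisoning.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic binary classification task with a clean held-out test set.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def accuracy_with_poison(fraction: float) -> float:
    """Flip the labels of a random fraction of training rows, then score."""
    rng = np.random.default_rng(0)
    y_bad = y_tr.copy()
    idx = rng.choice(len(y_bad), size=int(fraction * len(y_bad)), replace=False)
    y_bad[idx] = 1 - y_bad[idx]  # binary labels, so 1 - y flips them
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_bad)
    return model.score(X_te, y_te)

for frac in (0.0, 0.1, 0.3):
    print(f"poisoned {frac:.0%} of labels -> test accuracy "
          f"{accuracy_with_poison(frac):.3f}")
```

Test accuracy drops as the poisoned fraction grows; real-world attacks on web-scraped training sets are subtler, but the mechanism is the same.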

“We don’t know what that situation might look like right now,” she warned. “The biggest problem isn’t the existence of these challenges; our problem is that we don’t have enough people with the skills to address them.” ®
