Better Phishing, Easy Malicious Implants: How AI Could Change Cyberattacks

Artificial intelligence and machine learning (AI/ML) models have already shown some promise in sharpening phishing lures, creating synthetic profiles, and producing rudimentary malware, but even more innovative uses in cyberattacks will likely arrive in the near future.

Malware developers have already started toying with code generation using AI, with security researchers demonstrating that a full attack chain could be created. 

The Check Point Research team, for example, used current AI tools to create a complete attack campaign, starting with a phishing email generated by OpenAI’s ChatGPT that urges a victim to open an Excel document. The researchers then used the Codex AI programming assistant to create an Excel macro that executes code downloaded from a URL and a Python script to infect the targeted system. 

Each step required multiple iterations to produce acceptable code, but the eventual attack chain worked, says Sergey Shykevich, threat intelligence group manager at Check Point Research.

“It did require a lot of iteration,” he says. “At every step, the first output was not the optimal output — if we were a criminal, we would have been blocked by antivirus. It took us time until we were able to generate good code.”

Over the past six weeks, ChatGPT — a large language model (LLM) based on the third iteration of OpenAI’s generative pre-trained transformer (GPT-3) — has spurred a variety of what-if scenarios, both optimistic and fearful, for the potential applications of artificial intelligence and machine learning. The dual-use nature of AI/ML models has left businesses scrambling to find ways to improve efficiency using the technology, while digital-rights advocates worry over the impact the technology will have on organizations and workers. 

Cybersecurity is no different. Researchers and cybercriminal groups have already experimented with using GPT technology for a variety of tasks. Purportedly novice malware authors have used ChatGPT to write malware, although developers’ attempts to use the ChatGPT service to produce applications, while sometimes successful, often yield code with bugs and vulnerabilities.

Yet AI/ML is influencing other areas of security and privacy as well. Generative neural networks have been used to create photos of synthetic humans that appear authentic but depict no real person, as a way to enhance profiles used for fraud and disinformation. A related model, the generative adversarial network (GAN), can create fake video and audio of specific people; in one case, fraudsters used the technique to convince accountants and human resources departments to wire $35 million to the criminals’ bank account.

These AI systems will only improve over time, raising the specter of a variety of enhanced threats capable of fooling existing defensive strategies.

Variations on a (Phishing) Theme

For now, cybercriminals often use the same or a similar template to create spear-phishing email messages or construct landing pages for business email compromise (BEC) attacks, but using a single template across a campaign increases the chance that defensive software will detect the attack.

One early use of LLMs like ChatGPT, then, will likely be producing more convincing phishing lures — with greater variability, in a variety of languages, and with content that dynamically adjusts to the victim’s profile.

To demonstrate the point, Crane Hassold, a director of threat intelligence at email security firm Abnormal Security, requested that ChatGPT generate five variations on a simple phishing email request. The five variations differed significantly from each other but kept the same content — a request to the human resources department about what information a fictional company would require to change the bank account to which a paycheck is deposited. 

Fast, Undetectable Implants

While a novice programmer may be able to create a malicious program using an AI coding assistant, errors and vulnerabilities still get in the way. AI systems’ coding capabilities are impressive, but ultimately, they do not rise to the level of being able to create working code on their own.

Still, advances could change that in the future, just as malware authors used automation to create a vast number of variants of viruses and worms to escape detection by signature-scanning engines. Similarly, attackers could use AI to quickly create implants that exploit the latest vulnerabilities before organizations can patch.

“I think it is a bit more than a thought experiment,” says Check Point’s Shykevich. “We were able to use those tools to create workable malware.”

Passing the Turing Test?

Perhaps the best application of AI systems is the most obvious: the ability to function as artificial humans. 

Already, many of the people who interact with ChatGPT and other AI systems — including some purported experts — believe that the machines have gained some form of sentience. Perhaps most famously, Google fired a software engineer, Blake Lemoine, who claimed that the company’s LLM, dubbed LaMDA, had reached consciousness.

“People believe that these machines understand what they are doing, conceptually,” says Gary McGraw, co-founder and CEO at the Berryville Institute of Machine Learning, which studies threats to AI/ML systems. “What they are doing is incredible, statistical predictive auto-associators. The fact that they can do what they do is mind boggling — that they can have that much cool stuff happening. But it is not understanding.”

While these auto-associative systems do not have sentience, they may be good enough to fool workers at call centers and support lines, a group that often represents the last line of defense against account takeover, a common cybercrime.

Slower Than Predicted

Yet while cybersecurity researchers have quickly developed some innovative proof-of-concept attacks, threat actors will likely hold back. Although ChatGPT’s technology is “absolutely transformative,” attackers will only adopt ChatGPT and other forms of artificial intelligence and machine learning if it offers them a faster path to monetization, says Abnormal Security’s Hassold. 

“AI cyberthreats have been a hot topic for years,” Hassold says. “But when you look at financially motivated attackers, they do not want to put a ton of effort or work into facilitating their attacks, they want to make as much money as possible with the least amount of effort.”

For now, attacks conducted by humans require less effort than attempting to create AI-enhanced attacks, such as deepfakes or GPT-generated text, he says.

Defense Should Ignore the AI Fluff

For now, just because cyberattackers make use of the latest artificial intelligence systems does not mean their attacks are harder to detect. Malicious content produced by AI/ML models is typically icing on the cake: it makes text or images appear more human, but by focusing on the technical indicators, cybersecurity products can still recognize the threat, Hassold stresses.

“The same sort of behavioral indicators that we use to identify malicious emails are all still there,” he says. “While the email may look more legitimate, the fact that the email is coming from an email address that does not belong to the person who is sending it or that a link may be hosted on a domain that has been recently registered — those are indicators that will not change.”
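To make one of those indicators concrete, the short Python sketch below flags links whose sending or hosting domain was registered only recently — the kind of behavioral signal Hassold describes. It is not drawn from Abnormal Security’s products; it assumes the third-party python-whois package and uses a hypothetical 30-day threshold chosen purely for illustration.

```python
# Illustrative sketch (not from the article): flag a recently registered domain,
# one of the behavioral indicators described above.
# Assumes the third-party "python-whois" package (pip install python-whois).
from datetime import datetime, timezone

import whois


def domain_age_days(domain: str):
    """Return the domain's age in days, or None if WHOIS data is unavailable."""
    record = whois.whois(domain)
    created = record.creation_date
    if isinstance(created, list):  # some registrars return multiple creation dates
        created = min(created)
    if created is None:
        return None
    if created.tzinfo is None:
        created = created.replace(tzinfo=timezone.utc)
    return (datetime.now(timezone.utc) - created).days


def looks_recently_registered(domain: str, threshold_days: int = 30) -> bool:
    """Treat domains younger than the threshold as a phishing risk signal."""
    age = domain_age_days(domain)
    return age is not None and age < threshold_days


if __name__ == "__main__":
    # A long-lived domain should not trip the check.
    print(looks_recently_registered("example.com"))
```

In practice, a check like this would be one weak signal among many (sender/display-name mismatch, reply-to anomalies, link reputation), combined before any verdict is reached.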

Similarly, processes in place to double-check requests to change a bank account for payment or paycheck remittance would defeat even the most convincing deepfake impersonation, unless the threat group had access to or control over the additional layers of security that have grown more common.
