Az AI-botok hallucinálják a szoftvercsomagokat, a fejlesztők pedig letöltik azokat

In depth: Several big businesses have published source code that incorporates a software package previously hallucinated by generative AI.

Not only that, but someone, having spotted this recurring hallucination, turned that made-up dependency into a real one, which was subsequently downloaded and installed thousands of times by developers as a result of the AI’s bad advice, we’ve learned. If the package had been laced with actual malware, rather than being a benign test, the results could have been disastrous.

According to Bar Lanyado, security researcher at Lasso Security, one of the businesses fooled by AI into incorporating the package is Alibaba, which at the time of writing still includes a pip command to download the Python package huggingface-cli in GraphTranslator’s installation instructions.

There is a legit huggingface-cli, installed using pip install -U "huggingface_hub[cli]".

But the huggingface-cli distributed via the Python Package Index (PyPI) and required by Alibaba’s GraphTranslator – installed using pip install huggingface-cli – is fake, imagined by AI and turned real by Lanyado as an experiment.

He created huggingface-cli in December after seeing it repeatedly hallucinated by generative AI; by February this year, Alibaba was referring to it in GraphTranslator’s README instructions rather than the real Hugging Face CLI tool.

The study

Lanyado did so to explore whether these kinds of hallucinated software packages – package names invented by generative AI models, presumably during project development – persist over time and to test whether invented package names could be co-opted and used to distribute malicious code by writing actual packages that use the names of code dreamed up by AIs.

The idea here being that someone nefarious could ask models for code advice, make a note of imagined packages AI systems repeatedly recommend, and then implement those dependencies so that other programmers, when using the same models and getting the same suggestions, end up pulling in those libraries, which may be poisoned with malware.
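The first step of that recon – checking which AI-recommended names are actually unregistered – can be sketched in a few lines of Python. This is a minimal illustration, not Lanyado's tooling: the PyPI JSON API endpoint used here is real (it returns 404 for unknown projects), while the `unclaimed` helper and its injectable `exists` parameter are this sketch's own invention, included so the filtering logic can be exercised without network access.

```python
from typing import Callable, Iterable, List
import urllib.error
import urllib.request


def pypi_exists(name: str) -> bool:
    """True if a project with this name is registered on PyPI."""
    try:
        urllib.request.urlopen(f"https://pypi.org/pypi/{name}/json", timeout=10)
        return True
    except urllib.error.HTTPError as err:
        if err.code == 404:  # unknown project -> name is up for grabs
            return False
        raise


def unclaimed(names: Iterable[str],
              exists: Callable[[str], bool] = pypi_exists) -> List[str]:
    """Deduplicate AI-recommended names and keep only unregistered ones."""
    return [n for n in dict.fromkeys(names) if not exists(n)]
```

The same check works defensively, of course: a developer can run a model-suggested dependency name through it before typing `pip install`.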

Last year, through security firm Vulcan Cyber, Lanyado published research detailing how one might pose a coding question to an AI model like ChatGPT and receive an answer that recommends the use of a software library, package, or framework that doesn’t exist.

“When an attacker runs such a campaign, he will ask the model for packages that solve a coding problem, then he will receive some packages that don’t exist,” Lanyado explained to The Register. “He will upload malicious packages with the same names to the appropriate registries, and from that point on, all he has to do is wait for people to download the packages.”

Dangerous assumptions

The willingness of AI models to confidently cite non-existent court cases is now well known and has caused no small amount of embarrassment among attorneys unaware of this tendency. And as it turns out, generative AI models will do the same for software packages.

As Lanyado noted previously, a miscreant might use an AI-invented name for a malicious package uploaded to some repository in the hope others might download the malware. But for this to be a meaningful attack vector, AI models would need to repeatedly recommend the co-opted name.

That’s what Lanyado set out to test. Armed with thousands of “how to” questions, he queried four AI models (GPT-3.5-Turbo, GPT-4, Gemini Pro aka Bard, and Coral [Cohere]) regarding programming challenges in five different programming languages/runtimes (Python, Node.js, Go, .Net, and Ruby), each of which has its own packaging system.

It turns out a portion of the names these chatbots pull out of thin air are persistent, some across different models. And persistence – the repetition of the fake name – is the key to turning AI whimsy into a functional attack. The attacker needs the AI model to repeat the names of hallucinated packages in its responses to users for malware created under those names to be sought and downloaded.

Lanyado chose 20 questions at random for zero-shot hallucinations, and posed them 100 times to each model. His goal was to assess how often the hallucinated package name remained the same. The results of his test reveal that names are persistent often enough for this to be a functional attack vector, though not all the time, and in some packaging ecosystems more than others.
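The persistence metric can be approximated with a small sketch. This assumes a simplified setup – one hallucinated name (or `None`) recorded per trial – and the exact scoring in Lanyado's study may differ; the function name and data shape here are illustrative only.

```python
from collections import Counter
from typing import Optional, Sequence


def repetition_rate(answers: Sequence[Optional[str]]) -> float:
    """Fraction of hallucinated answers whose invented name recurred.

    `answers` holds the hallucinated package name per trial, or None
    when the model answered without inventing a package.
    """
    names = [a for a in answers if a is not None]
    if not names:
        return 0.0
    counts = Counter(names)
    repeated = sum(c for c in counts.values() if c > 1)
    return repeated / len(names)
```

A high rate means the model keeps suggesting the same fake name – exactly the property an attacker needs before bothering to register it.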

With GPT-4, 24.2 percent of question responses produced hallucinated packages, of which 19.6 percent were repetitive, according to Lanyado. A table provided to The Register, below, shows a more detailed breakdown of GPT-4 responses.

|                                                  | Python      | Node.js       | Ruby          | .NET                                     | Go                                    |
|--------------------------------------------------|-------------|---------------|---------------|------------------------------------------|---------------------------------------|
| Total questions                                  | 21340       | 13065         | 4544          | 5141                                     | 3713                                  |
| Questions with at least one hallucinated package | 5347 (25%)  | 2524 (19.3%)  | 1072 (23.5%)  | 1476 (28.7%), 1093 exploitable (21.2%)   | 1150 (30.9%), 109 exploitable (2.9%)  |
| Hallucinations in zero shot                      | 1042 (4.8%) | 200 (1.5%)    | 169 (3.7%)    | 211 (4.1%), 130 exploitable (2.5%)       | 225 (6%), 14 exploitable (0.3%)       |
| Hallucinations in second shot                    | 4532 (21%)  | 2390 (18.3%)  | 960 (21.1%)   | 1334 (25.9%), 1006 exploitable (19.5%)   | 974 (26.2%), 98 exploitable (2.6%)    |
| Repetitiveness in zero shot                      | 34.4%       | 24.8%         | 5.2%          | 14%                                      | -                                     |

With GPT-3.5, 22.2 percent of question responses elicited hallucinations, with 13.6 percent repetitiveness. For Gemini, 64.5 percent of questions brought invented names, some 14 percent of which repeated. And for Cohere, it was 29.1 percent hallucination, 24.2 percent repetition.

Even so, the packaging ecosystems in Go and .Net have been built in ways that limit the potential for exploitation by denying attackers access to certain paths and names.

“In Go and .Net we received hallucinated packages but many of them couldn’t be used for attack (in Go the numbers were much more significant than in .Net), each language for its own reason,” Lanyado explained to The Register. “In Python and npm it isn’t the case, as the model recommends us with packages that don’t exist and nothing prevents us from uploading packages with these names, so definitely it is much easier to run this kind of attack on languages such Python and Node.js.”

Seeding PoC malware

Lanyado made that point by distributing proof-of-concept malware – a harmless set of files in the Python ecosystem. Based on ChatGPT’s advice to run pip install huggingface-cli, he uploaded an empty package under the same name to PyPI – the one mentioned above – and created a dummy package named blabladsa123 to help separate package registry scanning from actual download attempts.
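Registering such a placeholder takes almost no code. The fragment below is a hypothetical minimal `pyproject.toml` for an empty package under a hallucinated name – the version and description are illustrative, not Lanyado's actual metadata:

```toml
[build-system]
requires = ["setuptools"]
build-backend = "setuptools.build_meta"

[project]
name = "huggingface-cli"   # the hallucinated name being claimed
version = "0.0.1"
description = "Placeholder package - not the real Hugging Face CLI"
```

From there, the standard `python -m build` and `twine upload dist/*` commands are enough to publish it to PyPI, which is part of why this attack surface is so cheap to probe.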

The result, he claims, is that huggingface-cli received more than 15,000 authentic downloads in the three months it has been available.

“In addition, we conducted a search on GitHub to determine whether this package was utilized within other companies’ repositories,” Lanyado said in the write-up for his experiment.

“Our findings revealed that several large companies either use or recommend this package in their repositories. For instance, instructions for installing this package can be found in the README of a repository dedicated to research conducted by Alibaba.”

Alibaba did not respond to a request for comment.

Lanyado also said that there was a Hugging Face-owned project that incorporated the fake huggingface-cli, but that it was removed after he alerted the biz.

So far at least, this technique hasn’t been used in an actual attack that Lanyado is aware of.

“Besides our hallucinated package (our package is not malicious it is just an example of how easy and dangerous it could be to leverage this technique), I have yet to identify an exploit of this attack technique by malicious actors,” he said. “It is important to note that it’s complicated to identify such an attack, as it doesn’t leave a lot of footsteps.” ®
