
Large language models hallucinating non-existent developer packages could fuel supply chain attacks

By admin

Sep 30, 2024



Large Language Models (LLMs) have a serious “package hallucination” problem that could lead to a wave of maliciously coded packages in the supply chain, researchers have discovered in one of the largest and most in-depth studies of the problem to date.

The scale is striking: across 30 different tests covering two of the most popular programming languages, Python and JavaScript, the researchers experimentally generated 2.23 million code samples, using 16 different LLMs for Python and 14 for JavaScript, and found that 440,445 of them (19.7%) contained references to packages that were hallucinated.
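The risk stems from hallucinated names that do not exist in any registry: an attacker who registers such a name can wait for developers to install LLM-suggested code verbatim. As a purely illustrative defence, not part of the study's methodology, the minimal sketch below checks whether a suggested Python package name actually exists on PyPI before it is installed, using the public PyPI JSON API; the example package names are hypothetical.

```python
# Minimal sketch: vet LLM-suggested package names against PyPI before installing.
# The PyPI JSON endpoint (https://pypi.org/pypi/<name>/json) returns 404 for
# packages that have never been published -- a likely sign of hallucination.
# Illustrative only; not a method described in the study.
import urllib.error
import urllib.request


def package_exists_on_pypi(name: str) -> bool:
    """Return True if `name` is a published PyPI package, False if unknown."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:  # never published -> possibly hallucinated
            return False
        raise  # other HTTP errors are genuine failures, not an answer


if __name__ == "__main__":
    # Hypothetical list of packages an LLM-generated snippet asks you to install.
    suggested = ["requests", "definitely-not-a-real-pkg-12345"]
    for pkg in suggested:
        verdict = "exists" if package_exists_on_pypi(pkg) else "NOT FOUND (possible hallucination)"
        print(f"{pkg}: {verdict}")
```

A check like this only confirms that a name is registered; it cannot tell a legitimate package from a malicious one squatting on a previously hallucinated name, so it is a first filter rather than a complete safeguard.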

The multi-university study, first published in June but recently updated, also generated “a staggering 205,474 unique examples of hallucinated package names, further underscoring the severity and pervasiveness of this threat.”


