How do attackers use AI hallucinations to create malicious code libraries?
Attackers exploit AI hallucinations by repeatedly prompting AI tools such as ChatGPT until the tools recommend code libraries that do not exist. The attacker then publishes malicious packages under those exact names to open source repositories such as npm. Later, when other developers receive the same hallucinated recommendation and search for the library, they find the attacker's package and install it. The technique works like a Trojan horse, letting malware slip into development pipelines: under tight deadlines and with little time for validation, developers unknowingly integrate the malicious library into their products, potentially exposing thousands of downstream customers, much as in the SolarWinds supply chain attack.
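One practical defense this suggests: before adding an AI-recommended dependency, check whether the package actually exists and how established it is. Below is a minimal TypeScript sketch (Node 18+, which provides a global fetch) that queries the public npm registry for a package's creation date and recent download count. The package name, the 30-day and 100-download thresholds, and the exact response field names are illustrative assumptions, not part of the original answer.

// Sketch: vet an AI-suggested package name against the npm registry before
// installing it. A hallucinated name that an attacker later registers will
// typically be brand new with near-zero downloads -- both are red flags.

interface PackageCheck {
  exists: boolean;
  createdAt?: string;      // ISO date the package first appeared (assumed field)
  weeklyDownloads?: number;
}

async function vetPackage(name: string): Promise<PackageCheck> {
  // Package metadata from the public npm registry.
  const meta = await fetch(`https://registry.npmjs.org/${name}`);
  if (meta.status === 404) return { exists: false };

  const metaJson = await meta.json();
  const createdAt: string | undefined = metaJson?.time?.created;

  // Recent download counts from the npm downloads API.
  const dl = await fetch(`https://api.npmjs.org/downloads/point/last-week/${name}`);
  const weeklyDownloads = dl.ok ? (await dl.json()).downloads : undefined;

  return { exists: true, createdAt, weeklyDownloads };
}

// Usage: flag anything that is missing, very new, or rarely downloaded.
// "some-ai-suggested-lib" is a hypothetical package name.
vetPackage("some-ai-suggested-lib").then((r) => {
  const ageDays = r.createdAt
    ? (Date.now() - Date.parse(r.createdAt)) / 86_400_000
    : Infinity;
  if (!r.exists) {
    console.log("Package does not exist -- likely a hallucinated name.");
  } else if (ageDays < 30 || (r.weeklyDownloads ?? 0) < 100) {
    console.log("Package is very new or rarely used -- review before installing.");
  } else {
    console.log("Package looks established; still review its source.");
  }
});

The thresholds here are arbitrary; the point is that a package created only recently, with almost no downloads, matching a name an AI tool "invented" deserves manual review rather than automatic installation.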
From: AI-Driven Code Library Threats: Understanding NPM Malware Risks for Developers (Semperis)