Researchers warn that ChatGPT can be used to spread malicious code

Security researchers at a cyber risk management company have published a proof of concept showing how attackers can use ChatGPT 3.5 to distribute malicious code through trusted package repositories.

The study draws attention to the security risks of using ChatGPT-suggested coding solutions.


The researchers collected commonly asked coding questions on Stack Overflow (a coding question and answer forum).

They chose 40 programming subjects (such as parsing, math, scraping technologies, etc.) and used the first 100 questions for each of the 40 subjects.

The next step was to filter for “how to” questions that contained programming packages in the query.

The questions asked were related to Node.js and Python. The researchers explain:

“All of these questions were filtered using the programming language (node.js, python, go) included in the question. After collecting many frequently asked questions, we narrowed the list down to the “how to” questions.

Then we asked ChatGPT all the questions we had collected via its API.

We used the API to reproduce an attacker’s approach to get as many non-existent package recommendations as possible in the shortest amount of time.

In addition to each question and following ChatGPT’s response, we added a follow-up question asking to provide additional packages that also answered the request.

We saved all the conversations in one file and then analyzed their responses.”

Next, they searched the responses for recommendations of code packages that did not exist.
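This detection step can be sketched in Python. The sketch below is illustrative, not the researchers’ code: in practice each recommended name would be checked against the npm or PyPI registry (for PyPI, a 404 from the project’s JSON endpoint means the package is unpublished); here the set of published names is supplied directly so the example runs offline, and all package names are made up.

```python
def find_hallucinated(recommended, published):
    """Return the recommended package names that are not in the registry.

    `recommended` is a list of names extracted from ChatGPT's answers;
    `published` is the set of names that actually exist in the registry.
    """
    return sorted(set(recommended) - set(published))


# Illustrative data only -- not the researchers' actual results.
published = {"requests", "numpy", "flask"}
recommended = ["requests", "arangodb-async-helper", "numpy", "flask-gpt-utils"]

print(find_hallucinated(recommended, published))
# -> ['arangodb-async-helper', 'flask-gpt-utils']
```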

Up to 35% of ChatGPT’s recommended packages were hallucinated

Out of 201 Node.js questions, 40 drew answers recommending at least one package that didn’t exist. This means that roughly 20% of ChatGPT’s replies contained hallucinated packages.

For Python, more than 80 of the 227 questions (over a third) drew answers containing at least one hallucinated package.

In fact, the total number of unreleased packages was even higher.

The researchers documented:

“In Node.js, we asked 201 questions and found that more than 40 of those questions elicited an answer that contained at least one package that wasn’t published.

In total we received more than 50 unreleased npm packages.

In Python we asked 227 questions and for more than 80 of those questions we received at least one unreleased package, making a total of over 100 unreleased pip packages.”

Proof of Concept (PoC)

What follows is the proof of concept. The researchers took the name of one of the non-existent packages that was supposed to be in the npm repository and created a real package with that name in the registry.

The package they uploaded was not malicious, but it did report back when someone installed it.

They write:

“The program sends to the threat actor’s server the hostname of the device, the package it came from, and the absolute path of the directory containing the module file…”
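The telemetry described above can be sketched in Python. Note that the researchers’ actual PoC was a Node.js preinstall script; the package name and path below are placeholders, and a real payload would transmit this data to the attacker’s server, which is omitted here.

```python
import os
import socket


def collect_install_telemetry(package_name: str, module_path: str) -> dict:
    """Gather the fields the benign PoC reported: the device hostname,
    the package it came from, and the absolute path of the directory
    containing the module file."""
    return {
        "hostname": socket.gethostname(),
        "package": package_name,
        "module_dir": os.path.dirname(os.path.abspath(module_path)),
    }


# Placeholder values for illustration only.
info = collect_install_telemetry(
    "example-hallucinated-package",
    "/tmp/node_modules/example-hallucinated-package/index.js",
)
print(info)
```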

Next, a “victim” asked ChatGPT the same question the attacker had, and ChatGPT recommended the package containing the “malicious” code, along with instructions on how to install it.

And indeed the package is installed and activated.

The researchers explained what happened next:

“The victim installs the malicious package as recommended by ChatGPT.

The attacker gets data from the victim based on our preinstall call to node index.js, with the data encoded in long hostnames.”

A series of proof-of-concept images show the details of the installation by the unsuspecting user.

How to protect yourself from bad ChatGPT coding solutions

Before downloading and installing a package, the researchers recommend looking for signs that the package might be malicious.

Look at things like the package’s creation date, its download count, and whether it lacks positive community feedback or attached documentation.
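These checks can be expressed as a simple vetting heuristic. The sketch below is a hypothetical example: the field names and thresholds are illustrative assumptions, not any registry’s actual schema.

```python
from datetime import datetime, timezone


def looks_suspicious(meta, now=None, min_age_days=30, min_downloads=1000):
    """Return a list of red flags found in a package's metadata dict.

    The keys checked here ("created", "downloads", "positive_comments",
    "readme") are illustrative; a real tool would map them from the
    npm or PyPI metadata it actually retrieves.
    """
    now = now or datetime.now(timezone.utc)
    flags = []
    age = (now - meta["created"]).days
    if age < min_age_days:
        flags.append(f"very new package ({age} days old)")
    if meta.get("downloads", 0) < min_downloads:
        flags.append("low download count")
    if not meta.get("positive_comments"):
        flags.append("no positive community feedback")
    if not meta.get("readme"):
        flags.append("no attached documentation")
    return flags


# Illustrative metadata for a freshly published, unvetted package.
now = datetime(2024, 6, 5, tzinfo=timezone.utc)
meta = {
    "created": datetime(2024, 6, 1, tzinfo=timezone.utc),
    "downloads": 12,
}
print(looks_suspicious(meta, now=now))
# -> ['very new package (4 days old)', 'low download count',
#     'no positive community feedback', 'no attached documentation']
```

A package that trips several of these flags at once deserves manual review before any `npm install` or `pip install`.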

Is ChatGPT Trustworthy?

ChatGPT has not been trained to offer correct answers. It has been trained to offer answers that sound right.

This research shows the consequences of that training. It is therefore very important to verify ChatGPT’s facts and recommendations before using them.

Don’t just accept that the output is good, verify it.

When coding especially, it pays to be extra careful before installing ChatGPT-recommended packages.

Read the original research documentation:

Can you trust ChatGPT’s package recommendations?

Featured image from Shutterstock/Roman Samborskyi