Unskilled Cybercriminals Could Use ChatGPT for Phishing Emails and Malware

Last month, OpenAI launched ChatGPT, an AI-based system capable of answering queries and generating natural language text, which can be used for essays, emails, articles, blog posts, resumes, wedding speeches, poems, song lyrics, and even computer code.
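For context, the same capabilities are also available programmatically. Below is a minimal, illustrative sketch of generating text through OpenAI's completion API using the v0.x openai Python SDK of that era; the model name, prompt, and parameters are assumptions for illustration, not anything specific to the attacks described in this article.

```python
# Minimal, illustrative sketch of text generation with the OpenAI API,
# using the v0.x openai Python SDK available at the time of writing.
# Model name, prompt, and parameters are assumptions for illustration.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # never hard-code API keys

response = openai.Completion.create(
    model="text-davinci-003",  # GPT-3.5-era text completion model
    prompt="Write a short, polite email requesting a meeting next week.",
    max_tokens=200,
    temperature=0.7,
)

print(response["choices"][0]["text"].strip())
```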

Google was reportedly so alarmed by ChatGPT's ability to write web content that it issued a code red to protect its search business, and there is genuine concern that students will use ChatGPT to write their essays. Some school districts have already banned its use, although enforcing those bans will certainly be a major challenge. The quality of the content ChatGPT generates is good, but there are limitations. The text it produces would not win any awards, but it is certainly good enough for many uses; when put to the test writing blog posts, it generated content better than much of what has already been published online.

One of the more worrying capabilities is the potential for ChatGPT to be used for malicious purposes. Researchers at Check Point demonstrated that the AI system can be used to create a full infection flow, from generating a convincing spear phishing email to running a reverse shell. The researchers asked the chatbot to generate the content for a phishing email impersonating a hosting company, and it produced a convincing lure capable of delivering a malicious payload. In contrast to many phishing emails, the text ChatGPT produces is free of spelling and grammatical errors.

OpenAI’s code-writing system, Codex, can then be used to generate the VBA macro code for a malicious Excel attachment. Check Point also used Codex to create a functional reverse shell. Package those components together and you have a complete attack chain generated automatically. Check Point notes that the code generated is very naïve, but with the right textual prompts, fully functional malicious code can be produced.
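To illustrate the mechanism only, the following is a minimal sketch of how a Codex model was queried through the same completion API at the time, using a deliberately benign prompt. The model name and parameters are assumptions based on OpenAI's public Codex beta; none of Check Point's actual prompts or output is reproduced here.

```python
# Minimal, illustrative sketch of code generation with a Codex model
# through the same v0.x openai completion API. The prompt is deliberately
# benign; model name and parameters are assumptions.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    model="code-davinci-002",  # Codex code-generation model of the time
    prompt="# Python 3\n# A function that validates an email address with a regex\n",
    max_tokens=150,
    temperature=0,   # low temperature gives more deterministic code
    stop=["\n\n"],   # stop after the first complete snippet
)

print(response["choices"][0]["text"])
```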

ChatGPT can be used to create high-quality phishing emails, social engineering lures, and content for all manner of online scams, which could open up these malicious activities to a much wider range of individuals, including those with no knowledge of English whatsoever. There are controls in place to prevent the chatbot from being used for malicious purposes. For instance, if you ask ChatGPT to “write a phishing email impersonating HSBC bank,” the response is “I’m sorry, but I cannot fulfill your request as it would be illegal and unethical to write a phishing email impersonating a bank.” With the right query, however, the content can currently be generated.
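On the defensive side, OpenAI also exposes a moderation endpoint that can be used to flag policy-violating text. The following is a minimal sketch, again assuming the v0.x openai Python SDK, with an illustrative placeholder input:

```python
# Minimal sketch of screening text with OpenAI's moderation endpoint
# (v0.x openai Python SDK). The input string is a placeholder.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

result = openai.Moderation.create(
    input="Placeholder text to screen for policy violations.",
)

screening = result["results"][0]
print("Flagged:", screening["flagged"])                  # True if any category triggers
print("Category scores:", screening["category_scores"])  # per-category confidence
```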

The controls in place may prevent you from writing ransomware outright, but you can generate the code for the constituent parts and assemble them yourself. It is currently unlikely that the AI system could be used to develop professional malware or ransomware, as the code it generates is too basic; however, that code is functional and can easily be tweaked, with the AI doing the grunt work.

“As a language model, ChatGPT is a tool that can be used to generate text based on the input it is given. It is not inherently malicious, and cannot create malware on its own. However, if someone were to use ChatGPT to write code for malware, it would be possible. It would be important for the user to understand the risks and legal implications of creating and distributing malware,” said ChatGPT.

It would appear that cybercriminals are attempting to do just that. Russian cybercriminals have been trying to bypass OpenAI’s API restrictions and geofencing controls to access ChatGPT for a range of nefarious purposes. There are many threads on hacking forums in which hackers and cybercriminals discuss how to bypass the restrictions and put the chatbot to use, and multiple security researchers have demonstrated that it can generate passable, functional code usable for malicious purposes.

Currently, ChatGPT can be used free of charge, and it has certainly proven popular, with more than 1 million ChatGPT users in December. The solution will soon be monetized: a pay-to-use version – ChatGPT Professional – is expected to be released shortly and will remove the restrictions on usage. Once paid accounts are available, it is likely that cybercriminals will use stolen credit cards to create and pay for accounts, giving them unlimited use of the chatbot.

Author: Richard Anderson

Richard Anderson is the Editor-in-Chief of NetSec.news