Thanks to the widespread use of ChatGPT and chatbots' ability to generate text on request, a new wave of malicious emails has begun to circulate, and the fact that they are written by chatbots makes them seem more credible. Cyber criminals' new malicious campaigns gain credibility this way, and the emails carry fewer and fewer of the telltale flaws that would reveal them as fake.
With ChatGPT, attackers can produce flawless phishing emails without any spelling or grammatical errors. Well-written phishing emails scam victims more effectively, as they feed the air of legitimacy that the spoofed sender is trying to project.
Phishing sites impersonating OpenAI
Attackers are also impersonating OpenAI itself, which may lead you to click a link thinking it goes to the official ChatGPT website. There you create an account by entering your name, contact details and other information. If the site is in fact malicious, the information you enter is likely to be stolen and exploited.
Alternatively, you may receive an email from someone claiming to be a ChatGPT staff member, stating that your account requires some form of verification. In reality, all they are trying to do is steal your credentials.
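One practical precaution before entering credentials is to check whether a link actually points to an official domain rather than a lookalike. Below is a minimal sketch in Python, assuming openai.com and chatgpt.com as the legitimate domains (a simplified allowlist for illustration; always verify the current official domains yourself):

```python
from urllib.parse import urlparse

# Illustrative allowlist: the domains OpenAI actually serves ChatGPT from.
# Verify this list against official OpenAI announcements before relying on it.
OFFICIAL_DOMAINS = {"openai.com", "chatgpt.com"}

def is_official_link(url: str) -> bool:
    """Return True only if the URL's hostname is an official domain
    or a genuine subdomain of one (e.g. chat.openai.com)."""
    host = (urlparse(url).hostname or "").lower()
    return any(
        host == domain or host.endswith("." + domain)
        for domain in OFFICIAL_DOMAINS
    )

print(is_official_link("https://chat.openai.com/auth/login"))    # True
print(is_official_link("https://openai.com.account-verify.io"))  # False
```

Note the second example: comparing full hostnames (not just searching for "openai.com" as a substring) is what catches lookalike domains such as openai.com.account-verify.io, where the trusted name is only a prefix of an attacker-controlled domain.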
Fake ChatGPT apps
Although ChatGPT is free to use unless you want to try GPT-4, cyber criminals can also exploit the ChatGPT name to promote malicious applications. Malicious apps are nothing new and have been used for years to deploy malware, steal data, and monitor device activity.
Since around February 2023, there has been a surge in fake ChatGPT apps built to spread Windows and Android malware. The criminals' main hook is OpenAI's ChatGPT Plus fee: they trick users into believing they can sign up for a free version of the premium tool, then steal their credentials or deploy malware. It is important to do background research on any software program to check whether it has a positive reputation, and never to download apps that promise premium features for free, especially when they are hosted on unreliable sources.
Fake browser extensions
As in the previous case, fake and malicious browser extensions are also used to install malware and steal data. There are legitimate extensions focused on ChatGPT (such as Merlin and Enhanced ChatGPT), but not every extension you see in your browser's extension store is safe.
For example, an extension called "Chat GPT for Google" began circulating in March 2023, riding the explosion of interest in OpenAI's tools. As it spread, the fake ChatGPT extension stole the Facebook account information of thousands of users.
Malware created by ChatGPT
ChatGPT can, in fact, be used to create malware. Given the chatbot's enormous range of capabilities, and how much help it can offer at the programming level, it was inevitable that sooner or later it would be turned to the dark side.
So far, no seriously dangerous malware such as ransomware has been identified as having been built with OpenAI's chatbot. But the tool's ability to write even simple malware opens a door for those who want to engage in cybercrime but have little or no technical background.