What if ChatGPT were used for malicious purposes?

Machine learning and AI have transformative potential, but they also bring new risks and challenges. Organizations must carefully manage issues related to data retention and ownership, transparency and privacy, access control, and other unintended consequences. By addressing these concerns and implementing protective measures, businesses can harness the power of these technologies while keeping their sensitive information secure. But it’s crucial for security teams to be proactive and informed as the machine learning and AI landscape continues to evolve.

ChatGPT, why such enthusiasm?

ChatGPT (Generative Pre-trained Transformer) is an automated chat tool by OpenAI that answers user-supplied questions. It is very popular because it offers users an interactive, personalized experience while requiring almost no setup. Since its first version was released to the public on November 30, 2022, ChatGPT has delighted students lacking inspiration and won over employees and technology enthusiasts alike. But it has also been embraced by another set of users who intend to exploit it for malicious purposes.

When hackers exploit ChatGPT for malicious purposes…

Given the success and capabilities of ChatGPT, it did not take long for cybercriminals to take an interest in it. According to Forbes, cybercriminals have started using it to develop tools capable of hacking into victims’ computers. According to one cybersecurity specialist who has been monitoring hackers’ forums, attackers are looking to push ChatGPT to its limits in order to find out how far the technology can be leveraged for malicious purposes.

Some hackers have already succeeded in using chatbots to impersonate users and trap unsuspecting targets. For example, cybercriminals have been known to pose as young women in order to convince victims to send them suggestive photos. These photos are in turn used to blackmail the victim.

ChatGPT can play a role in illegal actions

Despite the best intentions of its creators, the OpenAI tool can therefore be exploited for malicious purposes.

To take another example, cybercriminals who are not proficient in English can use AI models to generate more convincing phishing emails, making it easier to impersonate authoritative figures or organizations. In other cases, the model can even provide functional code, which attackers then use to execute their attacks.

One particularly concerning misuse involves ChatGPT's potential to assist developers in writing ransomware, malware designed to penetrate an organization's information systems. Once inside, hackers can use ransomware to extract and encrypt sensitive data, demanding payment for its release.

Hacking websites without any difficulty

An investigation carried out by Cybernews demonstrated that hackers could use ChatGPT to illegally break into a website. In just five steps, ChatGPT explained how to probe the website for vulnerabilities and which techniques the hacker could use to achieve their goals. By following ChatGPT's step-by-step instructions, and asking a few additional questions, the journalists managed to hack the website without any difficulty.

In a study published in February, cybersecurity company Cyberhaven observed that 2.3% of workers who use ChatGPT share confidential information in their prompts. This information may include the organization’s proprietary data, customer data, confidential computer code, health-related data, and so forth.

And this sharing of information can have consequences. Ideas from a company's five-year plan pasted into a prompt could, for example, later resurface when a competitor brainstorms with ChatGPT.

In the case of personal use, someone who uses ChatGPT to write an email announcing, for example, a health problem to their loved ones could one day see this illness associated with their name in a text generated for an unknown user. OpenAI removes, in theory, personal information from the data used to train its models. But errors can always happen, and no company is safe from a data leak.
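One practical safeguard against this kind of leakage is to strip obviously sensitive substrings from a prompt before it ever leaves the organization. The sketch below is purely illustrative: the pattern names and regular expressions are assumptions for the example, not any vendor's API, and a real data-loss-prevention tool would use far more robust detection.

```python
import re

# Hypothetical patterns for this sketch; real DLP tooling is far more thorough.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\+?\d[\d\s().-]{7,}\d\b"),
    "API_KEY": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{16,}\b"),
}

def redact_prompt(prompt: str) -> str:
    """Replace likely-sensitive substrings with labeled placeholders
    before the prompt is sent to an external chatbot."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt
```

A call such as redact_prompt("Contact jane.doe@example.com about the key sk-abcdefghijklmnop") would return the text with both the email address and the key replaced by placeholders, so the confidential values never reach the model.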

Mitigating these risks requires cybersecurity solutions

ChatGPT therefore introduces numerous security risks. To guard against these new threats and potential misuse, it is important to adopt good cybersecurity practices.