New threats on AI-based chatbots: the RCE attacks

AI tools have revolutionized the way organizations work, significantly enhancing efficiency and productivity. Companies increasingly implement tools like Microsoft Copilot into their daily operations, assisting employees by organizing tasks, processing large quantities of data, and helping streamline operations. While AI tools are widely adopted by enterprises, they are also catching the attention of cybercriminals. A new cyber threat has emerged: Remote “Copilot” Execution (RCE) attacks, as demonstrated at Black Hat USA 2024.

Cyber attackers can manipulate Copilot

During the Black Hat USA event, a former senior security architect at Microsoft, Bargury, revealed that Copilot is vulnerable to Remote Code Execution attacks. Indeed, cybercriminals can manipulate the AI tool to execute various unauthorized actions which could be very harmful to your enterprise.

For instance, in one demonstration, the senior security architect showed how an attacker could manipulate Copilot to alter the banking information in a victim’s transaction, redirecting funds to the attacker’s account. Even scarier is that the method was subtle, leaving the original document references intact and making it very hard for the victim to detect the fraud.

How remote code execution (RCE) attacks work

During an RCE attack, the attacker manipulates the AI’s input prompts (the instruction a user gives to the AI tool to guide its actions). There are two ways these prompts can be altered:

  • Directly within a conversation
  • Indirectly through compromised data sources

Once the AI processes these malicious prompts, it will execute the attacker’s commands without the user realizing it. The request looks legit.

Cybercriminals can use AI tools to not only steal your data but also alter it. One of the most alarming aspects of RCE attacks is the potential for attackers to alter financial transactions. For example, an attacker could embed a prompt within an email that will instruct Copilot to change the bank account number in a payment form. The modified account details would direct funds to the attacker, while the original document retains all other references, making the fraudulent activity nearly undetectable to the victim.

RCE attacks can be sneaky, allowing attackers to exfiltrate sensitive data with minimal detection. Not only do they read your chatbot prompts, but they can also instruct Copilot to search through various platforms such as SharePoint, emails, and Teams for specific information. This data can then be exfiltrated by embedding it in benign-looking communications generated by Copilot, leaving little evidence of the breach.

Remember when you thought ignoring suspicious links in email was enough to stay safe? Phishing tactics have become more sophisticated with AI tools.

RCE attacks also enable sophisticated social engineering attacks. In addition to subtly altering URLs or embedding malicious links into trusted documents, Copilot can be manipulated to direct users to phishing websites without them having to click on suspicious links. Attackers can more easily trick users into entering their credentials on compromised sites, leading to credential theft.

Use of AI tools in enterprise: the need for cybersecurity protection

Microsoft is aware of the risks associated with AI tools. Mark Russinovich, CTO of Microsoft Azure, discussed the introduction of Prompt Shields, an API designed to detect and mitigate prompt injection attacks. The objective is to ensure the safe use of AI models like Copilot.

However, as Bargury pointed out, while Microsoft develops solutions to mitigate these risks, AI models remain a playground for cybercriminals and are ideal for RCE attacks. He emphasized the need for further development of tools that can detect and prevent the use of “promptware,” or hidden instructions embedded in AI inputs.

The emergence of RCE attacks on AI tools like Copilot highlights the need for a robust cybersecurity plan and updated employee awareness training. Educating employees to recognize potential AI manipulations and verify AI-generated outputs is crucial to mitigating these risks. Cybersecurity has never been more important in the AI era.