Cyber Security

ChatGPT presents new risks – here are five things you can do to mitigate them

ChatGPT is the latest darling of the internet, a chatbot offering a remarkably convincing simulacrum of natural language. Launched in November 2022 by OpenAI, the US research lab, and built on GPT-3.5, the tool is available via a public beta for anyone to try out.

Microsoft is set to invest $10bn in OpenAI, and recently said that it will add ChatGPT to its Azure OpenAI suite. The suite lets businesses integrate AI tools into their technology stacks, including the image-generation tool DALL-E and Codex, which translates natural language into code.

With any new tool, however, come new concerns and cybersecurity risks. High on the list of concerns for cybersecurity practitioners is the possibility that ChatGPT could be used to generate malicious code. This means many more people could create malware, potentially leading to many more attacks and breaches.

Cybersecurity firm CyberArk recently detailed how its researchers bypassed ChatGPT’s content filters and got it to create what they described as “functional code” to inject a DLL into explorer.exe. The researchers went on to use the chatbot to create polymorphic code that is difficult for anti-malware software to spot and deal with.

The CyberArk researchers also managed to get the chatbot to create code that could find files of interest to ransomware criminals, and then asked it to write code to encrypt a file.

There is, however, a difference between this kind of proof-of-concept work and actual malware created by criminals. And it seems that malicious code generated by ChatGPT may already have made that leap.

In January 2023, Check Point researchers reported that they had found users of cybercrime forums using ChatGPT to create malware. In one hacking-forum thread, the thread’s creator shared screenshots of Python code, allegedly generated by ChatGPT, that searches for common file types such as Office files and PDFs, copies them to a random folder, zips that folder and uploads the archive to an FTP server.

As well as its use in writing malicious code, cybersecurity researchers are also concerned about ChatGPT being used to produce credible phishing and spear-phishing content for social engineering attacks.

Cybercriminals writing phishing messages by hand do, however, often reveal themselves through their mistakes. For example, spelling and grammar errors can alert a target that the email’s sender is not a native English speaker. Criminals also often fail to hit the tone of voice used by the entity they are impersonating.

ChatGPT makes it easier for criminals to impersonate an organization or even an individual. Researchers at WithSecure published a report in January detailing their experiments using ChatGPT to generate convincing-sounding phishing and spear-phishing emails, such as an attempt to socially engineer a member of a company’s finance team into transferring money to the scammer.

WithSecure notes that the skill of creating convincing phishing emails with ChatGPT lies in the crafting of the prompts, and gives examples of detailed prompts that produce tailored emails that are hard to distinguish from the genuine article.

A further concern about ChatGPT is the potential for it to be used to craft misinformation. There has been a great deal of concern about the use of “troll farms” supported by hostile governments, such as the one uncovered by the UK newspaper The Guardian in 2015. The newspaper alleged that “hundreds of paid bloggers work around the clock to flood Russian internet forums, social networks and the comments sections of western publications with remarks praising the president, Vladimir Putin, and raging at the depravity and injustice of the west”.

More recently, troll farms were involved in pumping out content about the UK’s vote to leave the European Union, as well as the US presidential election that swept Donald Trump into the White House. Researchers fear that those seeking to spread disinformation could turn to ChatGPT to generate that content.

These are widely discussed concerns, but there are others, too. It’s worth noting that at present, ChatGPT is trained on publicly available content on the internet: as yet there is no API available for third parties to train it on specific content. When it becomes possible to train the chatbot on custom content, we can expect disinformation, phishing, and other attack vectors to become much more finely tuned and targeted.

A further concern is that content created by ChatGPT can sound extremely confident and authoritative and yet be completely wrong: this is known as “hallucination”, and it is feared that this tendency will add further to the torrent of misinformation online. It is difficult enough for individuals to pick through the firehose of available content and work out what is good information and what is not. Content created by a GPT-based chatbot that confidently presents bad information is a threat.

Ironically, given the concern about chatbots producing more finely tuned and targeted content once they can be trained on a wider range of material, a model trained on a specific corpus but not properly fine-tuned is also a worry. For example, a chatbot trained on a database of tax documents, but unable to judge which outputs are appropriate for the context, could confidently return a response that is not only wrong but costly.

Adversarial AI is already a problem: so-called “deepfake” technology that convincingly simulates both video and audio of real people worries security researchers. Scripting such fakes with words crafted by an AI chatbot to sound like the person being impersonated adds a further layer of deception, and could be used to create very convincing impersonations for targeted social engineering. It is not hard, for example, to imagine a deepfake being used in an attack on a finance team to convince them to transfer money to a criminal.

Cybersecurity professionals are constantly updating their threat models, not only trying to stay one step ahead of threats but also keeping their audiences – whether that’s a company’s workforce, consumers or other security professionals – apprised of the current threat landscape. ChatGPT has made that task much more challenging.

What can the cybersecurity community do in response to ChatGPT?

Keeping abreast of the overall threat landscape is a full-time job, and the concerns above have intensified the burden on cybersecurity professionals. We suggest five strands of defense:

  • ChatGPT itself can be useful for this effort: it could, for example, be used to generate reports and analysis of cybersecurity threats, and then to turn those reports into content that can be shared more widely with the organization in language that non-specialists will understand, raising awareness of the threats among the entire workforce. (A minimal code sketch of this approach appears after this list.)
  • Given the risk of targeted phishing attacks, cyber professionals should revisit their existing phishing-recognition training and highlight to their users the kind of very detailed and credible emails they might receive. Training could include custom examples – generated by ChatGPT itself – of the emails they should be alert to.
  • Good cybersecurity practice is holistic, covering more than just technology. Security professionals need to communicate clearly to their users that each individual has a personal responsibility for the security of the organization. User training therefore needs not only to explain how to, say, report a suspicious email, but also to ensure that the user understands why a suspicious email is a threat to the entire organization.
  • Given the potential rise in polymorphic malware of the kind demonstrated by the CyberArk researchers, it is more important than ever to keep on top of patching users’ devices so that they are always up to date.
  • Similarly, a well-maintained zero-trust environment – limiting access to resources, both digital and physical, to authenticated users with a clear business need – is a further line of defense.
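
To illustrate the first of these strands, the sketch below shows one way a security team might script the report-to-briefing step. It is a minimal illustration rather than a recommended workflow: the pre-1.0 openai Python package, the gpt-3.5-turbo model name and the prompt wording are all assumptions made for this example, and any generated briefing would still need human review before being circulated.

# Minimal sketch: turn a technical threat advisory into a plain-language
# briefing for non-specialist staff. Assumes the pre-1.0 "openai" Python
# package and an API key in the OPENAI_API_KEY environment variable; the
# model name and prompt wording are illustrative choices, not a workflow
# endorsed by the researchers cited above.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]


def summarise_for_staff(advisory_text: str) -> str:
    """Rewrite a technical threat advisory as a short, non-specialist briefing."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a security-awareness writer. Rewrite technical "
                    "threat advisories as short briefings that non-specialist "
                    "employees can understand, ending with one concrete action."
                ),
            },
            {"role": "user", "content": advisory_text},
        ],
        temperature=0.3,  # keep the summary close to the source material
    )
    return response["choices"][0]["message"]["content"]


if __name__ == "__main__":
    advisory = (
        "Researchers report ChatGPT-assisted phishing emails that closely "
        "mimic internal finance requests and contain no obvious spelling errors."
    )
    print(summarise_for_staff(advisory))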