AI against AI cyberthreats


  • Firms face constant, alarming cyberattacks, with generative AI increasing their pace and scale.
  • An unsuspecting consumer, employee, or supply chain vulnerability can be exploited by sophisticated code.
  • Adversaries can use AI to run prolific phishing campaigns. Other threat vectors include AI-powered DDoS attacks, which become more powerful, precise, and fast.
  • Using AI against AI makes sense, and is transitioning from talk to implementation.
  • Companies also need to invest in training right across the organization so that zero-trust becomes the norm.

Firms face constant, alarming cyberattacks — from AI-driven spear phishing to the 2023 polymorphic malware attack on ICBC Financial Services, halted only after the firm disconnected its systems and pumped in $9 billion from its parent company to settle trades with BNY Mellon.

The annual cost of cyberattacks, including theft, lost productivity, and reputational harm, is estimated to reach $10.5 trillion globally by 2025. Generative AI will further increase the pace, scale, and effectiveness of current threats.

Figure 1. More malware attacks every year

Source: AV-TEST Institute

It takes just one weak link…

In enterprise security, we often think about the organization’s broader threat perimeter. But most attacks come by way of a single access point — an unsuspecting consumer, employee, or supply chain vulnerability exploited by code. AI compounds these vulnerabilities, evolving alongside enterprise defenses and lending legitimacy to social engineering operations.

Many attacks against organizations can be traced back to a single click, credential, or one-off mistake that gave intruders access.

The result is data loss, reputational damage, regulatory penalties, and compromised commercially sensitive information.

Talking about the ransomware attack on ICBC, Oz Alashe, founder of CybSafe, a British cybersecurity and data analytics firm, said: “With the rising severity, sophistication, and frequency of cyberattacks, often involving human error, companies urgently need to rethink their approach [to defense].” At ICBC, the LockBit ransomware group, linked to Russia and known for similar attacks on the UK’s Royal Mail and aircraft manufacturer Boeing, found a small vulnerability that gave the adversaries a chance to navigate their way through the network and carry out their attack.

This raises urgent questions: What can firms do to keep critical assets and data safe, given AI’s ability to evolve, learn, and adapt to threat detection systems? What happens when even AI-aware employees can’t tell the difference between a family member and an AI-generated fake that knows very specific information about them?


How adversaries use AI

Adversaries use AI to enhance the sophistication of their malware and exploits. They write code faster, and that code is more capable than anything seen before. Polymorphic malware, for example, can assess the usage patterns of antivirus and antimalware tools and generate code that evades detection, using an encryption key to change its shape and signature with each iteration.
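The evasion trick described above can be illustrated with a toy sketch: re-encrypting the same payload with a fresh key produces a new file hash every time, which is exactly what defeats signature-based scanners. The payload bytes and the simple XOR cipher here are illustrative stand-ins, not real malware or a real cipher.

```python
import hashlib
import os

def xor_encrypt(payload: bytes, key: bytes) -> bytes:
    """XOR the payload with a repeating key (toy stand-in for a real cipher)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(payload))

payload = b"...benign stand-in for a malicious payload..."

# Each "generation" re-encrypts the same payload with a fresh random key,
# so the hash a signature-based scanner matches against changes every time.
signatures = set()
for _ in range(3):
    key = os.urandom(16)
    variant = xor_encrypt(payload, key)
    signatures.add(hashlib.sha256(variant).hexdigest())
    # The behavior is unchanged: decrypting recovers the original payload.
    assert xor_encrypt(variant, key) == payload

print(len(signatures))  # three distinct signatures for one unchanged payload
```

Defenses therefore shift from matching static signatures to analyzing behavior, which is where ML-based detection comes in.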

Bad actors also use AI for advanced behavioral analytics. AI and ML aggregate vast amounts of data. For example, phishing campaigns are prolific, and are more effective when the recipient can be convinced it’s real. “When an adversary can take everyone’s Facebook page, all their social media accounts, data that’s on the internet, public-facing websites about who’s who in the organization, profit and loss statements from that company, then it’s possible to get an incredibly scary, accurate picture of exactly who everyone is in the company,” says Amber Boyle, a cybersecurity expert with Infosys Consulting. “When that information is then coupled with AI and ML — creating deepfakes to replicate someone’s voice — or to know where the water cooler is down the hall and what the context is behind a major deal – you can imagine how much more effective those precision attacks will be.”

Besides malware (including ransomware) and phishing, other attack methods include:

  • Distributed denial of service (DDoS) attacks
  • Man-in-the-middle (MITM) attacks
  • SQL injection
  • Zero-day exploits
  • Watering hole attacks
  • Web shell attacks
  • Domain name system (DNS) poisoning
  • Port scanning
  • Cross-site scripting
  • Rootkits

All these attacks can be AI-powered. For instance, DDoS attacks, which seek to deny authorized users access to a firm’s website and server by flooding targeted websites with fictitious traffic, are often used as cover for other attacks such as disabling security features.

With AI, these attacks are more powerful, sharper, and faster, and the source of the attack becomes harder to trace.

An AI-enabled attack machine does not tire and rarely makes errors. Worryingly, it can also predict the defense strategy a targeted firm might enact.
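On the defensive side, even a crude rate-based detector illustrates the principle behind spotting a traffic flood; production AI systems learn baselines rather than relying on the fixed thresholds sketched here. The IP addresses, window size, and per-window limit are all hypothetical.

```python
from collections import Counter

# Toy request log of (source_ip, timestamp_in_seconds); hypothetical data.
requests = [("10.0.0.5", t) for t in range(0, 300)]          # 1 req/s: normal
requests += [("203.0.113.9", t // 50) for t in range(3000)]  # 50 req/s: flood-like

WINDOW = 10           # seconds per window (assumed; tune per service)
MAX_PER_WINDOW = 100  # requests allowed from one source per window (assumed)

def flagged_sources(log):
    """Flag sources whose request count in any window exceeds the threshold."""
    counts = Counter((ip, ts // WINDOW) for ip, ts in log)
    return {ip for (ip, _), n in counts.items() if n > MAX_PER_WINDOW}

print(flagged_sources(requests))  # → {'203.0.113.9'}
```

An ML-based system replaces the hard-coded threshold with a learned per-source baseline, which is what makes it harder for attackers to stay just under a fixed limit.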

More ways adversaries use AI and ML:

  • Generate new variants of older malware
  • Create phishing and spam content based on successful campaign training sets
  • Help phishers and scammers detect recurring patterns in malicious content
  • Detect vulnerable points in enterprise networks and target entry points for spyware, phishing, or DDoS attacks
  • Build artificial hackers to execute personalized attacks
  • Improve malware targeting by profiling potential victims using publicly available, harvested, or extracted data
  • Allow botnet nodes to learn collectively and share intelligence to identify the most effective attack methods

Using AI against AI makes sense

Firms should put AI and ML at the forefront of mitigating attacks. For example, cybersecurity professionals review logs and events, the incidents that occur day in and day out, to triage the most dangerous for the firm. With AI, they can identify significant logs and act on them.
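The triage idea can be sketched as a rarity score over event types: routine events score low, while rare ones bubble to the top of the analyst's queue. The event names and the simple negative-log-probability scoring are illustrative assumptions; production systems use far richer models and features.

```python
from collections import Counter
import math

# Hypothetical event log: most entries are routine, a few are rare.
events = (["login_ok"] * 500 + ["file_read"] * 300
          + ["config_change"] * 5 + ["priv_escalation"] * 1)

def triage(log, top_n=2):
    """Score each event type by rarity (-log probability) and return the
    top_n most anomalous types for an analyst to review first."""
    counts = Counter(log)
    total = len(log)
    scored = {event: -math.log(n / total) for event, n in counts.items()}
    return sorted(scored, key=scored.get, reverse=True)[:top_n]

print(triage(events))  # → ['priv_escalation', 'config_change']
```

The payoff is prioritization: instead of reading 800 log lines, the analyst starts with the handful the model considers most unusual.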

AI can automate threat hunting and enhance security execution, including threat and malware detection, vulnerability detection, patch deployment, and security countermeasures and controls.

AI supports malware detection in the following ways:

  • Detects, analyzes, and prevents evolving malware variants.
  • Identifies features, such as accessed application programming interfaces and consumed bandwidth.
  • Predicts future threats leveraging inference techniques based on known malware characteristics.
  • Performs behavior analysis and classifies malware before it executes using ML models.
  • Discovers relevant properties of malware samples with unsupervised auto-encoders.
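As a minimal sketch of the feature-based, pre-execution classification listed above: a trained ML model would learn its decision boundary from labeled samples, whereas here fixed, hand-picked weights and a threshold stand in. The feature names, weights, and threshold are all hypothetical.

```python
# Hypothetical per-sample features: count of suspicious API calls and
# bandwidth consumed. A trained model would learn these weights from data.
WEIGHTS = {"suspicious_api_calls": 0.8, "bandwidth_mb_per_min": 0.05}
THRESHOLD = 2.0  # assumed decision threshold

def classify(sample: dict) -> str:
    """Score a sample's features before execution; label it malware
    if the weighted score crosses the threshold."""
    score = sum(WEIGHTS[feature] * value for feature, value in sample.items())
    return "malware" if score >= THRESHOLD else "benign"

print(classify({"suspicious_api_calls": 4, "bandwidth_mb_per_min": 20}))  # → malware
print(classify({"suspicious_api_calls": 0, "bandwidth_mb_per_min": 10}))  # → benign
```

The point of the sketch is the shape of the pipeline (features in, label out before execution), not the specific weights, which in practice come from supervised training on known malware characteristics.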

AI and ML solutions not only provide real-time alerts and advanced detection but also detect old, inactive, or anomalous machines in botnets.

This kind of threat detection is transitioning from talk to implementation. “No firm is going to select a tool or solution five years from now that doesn’t have some heavy reliance on AI or ML for improving and evolving defense strategies,” says Boyle. “Adversaries are using it, and so firms can’t afford to not use it themselves. It’s a cat and mouse game, and firms can’t afford to be behind the learning curve.”

What firms can do right now

Businesses are aware of the threat from AI and automation, but many don’t know how to leverage the same technology to improve their security.

Good security starts with a focus on fundamentals. Security professionals must prioritize understanding basics, such as asset management, data protection, and access controls.

Chief information security officers (CISOs) and security professionals should consider the following questions:

  • Do we have a complete inventory of our assets?
  • Do we know where our data is stored?
  • Are we controlling who has access to that data?
  • Do we have strong access controls in place?
  • Are we protecting data in a zero-trust environment, where the assumption is that the network is already penetrated?
  • How can we train our employees to recognize advanced AI attacks?

The last point is important. Few firms ensure employees are aware of how advanced and convincing AI cyberthreats have become. The click-through rate on phishing attacks is high (19.8%, according to some studies), and when those attacks are coupled with AI, the situation gets worse.

Companies need to invest in training right across the organization. This includes simulated phishing attacks to increase awareness of just how sophisticated AI has become. When people become aware, zero-trust becomes the norm, and verification of all data and knowledge becomes the de facto posture.

C-suite vigilance

The average time to intrusion is shrinking. According to Infosys Consulting, bad actors need as little as 10 minutes to take over a network, putting data and systems at risk.

Given this, the cyber C-suite can’t afford not to leverage AI and ML. The technology defends against unauthorized access to networks, helping to triage events and take proactive action in real time.

If CISOs don’t use AI to fight AI attacks, frontline defenders are at a disadvantage.

Real-time threat detection through AI is not a nice-to-have extra; it’s a vital tool in the cybersecurity professionals’ armory.
