AI in cyber attacks: how hackers use AI in phishing and malware

Artificial intelligence is not just a tool for defenders — cybercriminals are increasingly leveraging AI to craft more convincing phishing campaigns, develop smarter malware, and automate attacks at scale. We analyze the latest threats and how to defend against them.

The rise of AI-powered cyber attacks

The rapid advancement of artificial intelligence has not only transformed the technology sector — it has fundamentally changed the playing field for cybercriminals. Where attackers once relied on manual techniques and relatively simple scripts, they now have access to powerful AI tools that make their attacks more efficient, scalable, and harder to detect.

In 2025 and 2026, we are seeing a clear shift: AI is no longer a future threat but a present-day weapon in the hacker's arsenal. From AI-generated phishing emails that are nearly indistinguishable from legitimate ones, to self-learning malware that adapts to security measures — threats are evolving faster than ever.

AI-powered phishing: the end of typos

Traditional phishing emails were often recognizable by poor grammar, unnatural language, and generic content. Those days are over. Using large language models (LLMs) like GPT-4 and open-source alternatives, attackers now generate flawless, contextually relevant phishing messages in any language — including perfect localized content for any target region.

AI makes it possible to personalize phishing at industrial scale. By combining public information from LinkedIn, social media, and company websites, AI can craft a convincing tailored message for each target within seconds. This has been dubbed 'spear phishing as a service': what once took hours of manual research is now automated in seconds.

Additionally, AI models are being deployed to respond to victim replies in real time. Chatbots carry on full conversations on behalf of the attacker, making social engineering attacks more convincing than ever. The victim believes they are talking to a colleague or IT administrator, when in fact they are communicating with an AI.

Deepfakes and voice cloning: the next generation of social engineering

AI-generated deepfakes pose a growing threat. Criminals use voice cloning technology to impersonate a CEO or CFO and authorize payments over the phone. In multiple documented cases, companies have been defrauded of millions by a phone call that sounded exactly like their own executive.

Video deepfakes are being used in so-called 'CEO fraud' scenarios and even in fake video conferences. With just a few minutes of existing audio material — for example from a YouTube video, podcast, or voicemail — AI can generate a convincing voice clone. This makes the traditional method of 'just call to verify' increasingly unreliable.

AI-developed malware: smarter, faster, harder to detect

AI is not only used for social engineering — it is also transforming malware development itself. Attackers use AI to generate polymorphic malware: malicious code that automatically changes its own structure with each infection, making it far harder for traditional signature-based antivirus software to detect.

Furthermore, AI assists in finding vulnerabilities. Automated tools based on machine learning scan application code and networks for security flaws, far faster than a human hacker could. While ethical hackers use the same tools for defense, criminals leverage them to discover and exploit zero-day vulnerabilities more quickly.

A particularly concerning trend is the emergence of AI agents that autonomously execute entire attack chains: reconnaissance, initial access, lateral movement, and data exfiltration — all orchestrated by an AI system that adapts to the target's security environment.

FraudGPT and WormGPT: AI tools for criminals

On the dark web, AI tools specifically designed for cybercriminals are readily available. FraudGPT and WormGPT are examples of so-called 'jailbroken' or purpose-trained models that have no ethical restrictions. These tools generate phishing emails, malware code, exploits, and social engineering scripts on demand — without the safety guardrails that mainstream AI services enforce.

These services are offered as 'Malware-as-a-Service' (MaaS) with subscription models, customer support, and even tutorials. This significantly lowers the barrier to cybercrime: even individuals without technical expertise can now launch sophisticated attacks using AI-powered tools.

How organizations can protect themselves

The rise of AI-powered attacks demands a fundamentally different approach to cybersecurity. Traditional defenses are no longer sufficient. Organizations must adapt their strategy:

  • Implement AI-powered defense: use machine learning-based email filters and endpoint detection that can identify AI-generated phishing and polymorphic malware based on behavior rather than signatures.
  • Conduct advanced security awareness training that specifically addresses AI-driven threats, including deepfakes, AI phishing, and voice cloning.
  • Establish strict verification procedures for financial transactions and sensitive actions — do not rely solely on phone or email confirmation. Use out-of-band verification across multiple channels.
  • Implement Zero Trust architecture: continuously verify every user, device, and session, regardless of their location in the network.
  • Deploy User and Entity Behavior Analytics (UEBA) to detect anomalous user behavior that may indicate AI-automated attacks.
  • Keep your incident response plan up to date with AI-powered attack scenarios and practice them regularly.
  • Monitor the dark web for leaked credentials and corporate information that could serve as input for AI-driven attacks.
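To make the behavior-analytics idea behind UEBA concrete, here is a deliberately minimal sketch — not a production detector — that flags users whose activity volume deviates sharply from their own historical baseline. All user names, counts, and the threshold are illustrative assumptions; real UEBA platforms model many more signals (login times, geolocation, resource access patterns) than a single event count:

```python
from statistics import mean, pstdev

def anomaly_score(history, current):
    """Z-score of the current event count against the user's own baseline."""
    mu = mean(history)
    sigma = pstdev(history) or 1.0  # avoid division by zero for flat baselines
    return (current - mu) / sigma

def flag_anomalies(activity, threshold=3.0):
    """Return users whose current activity sits `threshold` std-devs above baseline."""
    return [user for user, (history, current) in activity.items()
            if anomaly_score(history, current) > threshold]

# Illustrative data: daily file-access counts per user (baseline days, today)
activity = {
    "alice": ([40, 35, 42, 38, 41], 44),   # normal day-to-day variation
    "bob":   ([12, 15, 11, 14, 13], 480),  # sudden spike: possible automated exfiltration
}
print(flag_anomalies(activity))  # → ['bob']
```

The design point is that each user is compared against their *own* baseline rather than a global average — exactly the property that helps surface an AI-automated attack moving at machine speed through an account whose normal usage is modest.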

The future: an AI arms race

We are at the beginning of an AI arms race in cybersecurity. As defenders build better AI detection systems, attackers will refine their AI models to evade them. This dynamic makes it essential for organizations not to stand still, but to continuously invest in both technology and knowledge.

The good news is that AI is also an enormously powerful defensive weapon. AI-driven security operations centers (SOCs), automated threat detection, and intelligent incident response systems make it possible to respond to attacks faster and more effectively than ever before. The key is for organizations not to fall behind in the AI race.

One thing is certain: cybersecurity will never be the same again. The organizations that invest now in understanding and defending against AI-powered threats will be best protected tomorrow. Don't wait until you become the target of an AI attack — prepare today.

Want to protect your organization?

Our cybersecurity experts can help you assess and strengthen your security posture before attackers strike.
