By security practitioners, for security practitioners novacoast federal | Apex Program | novacoast | about innovate
By security practitioners, for security practitioners

AI-Written Malware – An Emerging Cybersecurity Threat

The new dawn of AI-assisted coding has yielded a major advantage for cyberattackers. The ease and speed with which malicious code can be generated and complex payloads orchestrated—the heavy lifting performed by AI—is creating an even riskier landscape for defenders.

What many feared has actually come to pass: we’re observing AI-written and AI-generated malware in the wild. Hackers are developing malware using AI tools to create, modify, or obfuscate their malicious code. It’s an emerging but very active threat. 

Malicious actors have adopted AI to enhance their cyber attacks, which has presented new challenges to I.T. Teams, specifically threat hunters.

Here, we’re looking at AI malware and APTs in the wild and defending against them.

AI-Generated Malware in the Wild

In June 2024, HP discovered a phishing campaign with the typical invoice-based lure and an encrypted HTML attachment; this is basically HTML smuggling used to avoid detection. The malware payload is standard, but the twist comes after a closer look. The malware comes in the form of an AI-generated dropper. Analysts say this is a leap by evolutionary standards that will likely lead to more AI-generated payloads in the future.

AI-generated malware can adapt and improve autonomously, making it challenging to detect. The machine learning algorithms employed here let malicious malware evolve based on the surroundings, bypassing any security measures encountered and creating a dynamic threat landscape.

This type of malware works in real-time and can make decisions on the fly. It adjusts its attack vectors and implements personalized approaches to its attack to increase its success.

Threat Actors’ Use of AI

Security researchers see threat groups taking advantage of LLMs (large language models) to generate new malicious JavaScript code variants and in ways that better evade detection. While LLMs are still struggling to create malware from scratch, cybercriminals are using them to rewrite code and obfuscate current malware, making them more challenging to detect.

Researchers think that, with enough transformations, this method may degrade the performance of malware classification applications over time, rendering malicious code harmless.

While LLM providers are implementing tighter security protocols to keep them from going rogue and creating unintended output, some threat actors advise AI tools such as WormGPT, which can automate the process of creating convincing and detailed phishing emails to prospective targets and create unique malware.

Types of AI-Backed Attacks

With bad actors honing their AI skills and relying heavily on them to develop new or improved malware, many attacks actively use AI and machine learning to target victims. These can include:

  • AI-Infused Social Engineering Attacks can use AI algorithms to assist in creative concepts, research, or launching an attack. Bad actors define these attacks to achieve their goals of manipulating human behavior that fulfills a specific purpose. Examples include sharing sensitive data, transferring ownership of high-value items, transferring funds, or granting access to a device, application, database, or system.
  • Prompt Injection Attacks on AI Models began in 2024, and threat actors are still leveraging them in 2025. These attacks exploit vulnerabilities in AI Models such as ChatGPT, Google’s Gemini, and Microsoft’s CoPilot, which allow hidden webpage content to manipulate AI responses. In addition, these attacks demonstrate how adversaries can use AI to generate evasive malware and conduct sophisticated phishing campaigns.
  • Deepfake Scams are where cybercriminals leverage AI to create deepfake videos and audio that impersonate trusted individuals or executives. The aim is to deceive employees or partners into sharing confidential data or transferring funds.
  • AI-generated Malware Variants help threat actors create polymorphic malware that can change its code to evade detection by traditional antivirus solutions, which leads to an increase in successful infections.
  • Autonomous Malware is a recent development that makes malware capable of making decisions without human input. The malware allows for more adaptive and persistent threats.

Incidents using these AI tactics underscore threat actors, escalating the use of AI to create advanced malware and raising the necessity for organizations to implement enhanced security measures and proactive defense strategies to mitigate evolving threats like these.

Recent Attacks Using AI

Analysts observe increased AI use by cybercriminals to develop sophisticated malware that can evade usual security defenses. Some of the more notable AI-generated malware include:

  • JarkaStealer malware targets the Python Package Index as part of supply chain attacks. The attackers uploaded malicious packages containing the JarkaStealer. Malware developers created the JarkaStealer to exfiltrate sensitive data from compromised systems and distribute it through packages that appeared as legitimate tools. AI Chatbots helped promote the offering.
  • Morris II Worm is a worm introduced by researchers to target Generative AI (also called GenAI) infrastructure using adversarial self-replicating prompts. It could insert prompts into inputs processed by GenAI models, thus prompting malicious activities and replication, exploiting the interconnected nature of GenAI apps.
  • Egan Ransomware (Evolutional Generative Adversarial Network) was a method of mutating ransomware files while preserving their original functionality. The mutation allowed the ransomware to evade detection by AI-powered antivirus systems, highlighting the challenges in defending against AI-enhanced malware.  
  • FunkSec FunkLocker Ransomware is new but is actively attacking multinational targets. Though it only appeared in October 2024, it had racked up 85 victims by the end of December. Analysts are paying close attention to this group since the FunkLocker ransomware code is at least partially AI-generated. The group spawns legitimate Windows processes to run system reconnaissance. Specifically, it leverages PowerShell commands that look for specific system policies in the Windows Registry, including internet settings, system certificates, the current control set code integrity settings, and more.
  • CloudSorcerer/EastWind is an advanced persistent threat (APT) campaign that used the public cloud infrastructure to perform large-scale data exfiltration and surveillance. These cyber criminals employed sophisticated phishing campaigns aimed at infiltrating government and private sector organizations and critical infrastructure.

    The increase in AI-assisted attacks underscores the need for enhanced security measures and proactive defense strategies to mitigate these evolving threats.

    Defending Against AI-Generated Malware Attacks

    All organizations currently face the challenge of being prepared for AI-generated malware-based cyberattacks. Defending systems and network infrastructure from attacks from AI-generated malware requires businesses to implement a multi-layered cybersecurity strategy.

    Here are a few steps to enhance organizational security:

    1. AI-Powered Threat Detection
      • Deploy AI-driven security solutions that leverage machine learning to detect evolving malware patterns.
      • Organizations should also implement behavioral analysis to identify anomalies and possible AI-generated threat
    2. Endpoint Protection and Network Security
      • Implement Endpoint Detection and Response (EDR) solutions and next-generation antivirus solutions.
      • Leverage Zero Trust Architecture and Identity solutions to limit access based on user behavior and authentication.
    3. Advanced Email and Phishing Protection
      • Deploy AI-based email security tools that detect and block AI-generated phishing emails.
      • Train employees on how to recognize AI-enhanced deepfake and phishing scams.
    4. Threat Intelligence and Real-Time Monitoring
      • Leverage Cyber Threat Intelligence to stay ahead of AI-driven attack tactics.
      • Partner with reputable and trusted cybersecurity vendors for real-time threat monitoring and response.
    5. Strong Identity and Acess Management (IAM)
      • Implement multi-factor authentication (MFA) and passwordless authentication for all critical systems.
      • Use privileged access management (PAM) to limit high-level account access.

    By proactively implementing these standards, organizations can strengthen their defenses against AI-generated malware and minimize cybersecurity risks.

    Previous Post

    Weekly Top 10: 03.17.2025: Meta Warns of Vulnerability in FreeType; ObscureBat Loader Cisco Vulnerability Leads to DoS of BGP Routers, and More.

    Next Post

    Weekly Top 10: 03.24.2025: Semrush Impersonation Scam Hits Google Ads; Detecting and Mitigating Apache Tomcat, VSCode Extensions Found Downloading Early-Stage Ransomware, and More.

    Innovate uses cookies to give you the best online experience. If you continue to use this site, you agree to the use of cookies. Please see our privacy policy for details.