Beware of This AI Worm That Steals Your ChatGPT and Gemini Data

Researchers have discovered a new threat capable of infecting popular models like ChatGPT and Gemini, highlighting the risks inherent in AI systems. This AI worm, known as "Morris II," poses a serious threat, with the potential to steal sensitive information such as credit card numbers from AI-powered platforms.

This threat stems from the concept of an "adversarial self-replicating prompt," which researchers used to create the Morris II worm. This malicious body spreads independently across generative AI entities, reproducing itself while secretly capturing data and even spreading viruses.

This result has far-reaching consequences, especially as generative AI systems gain acceptance in a variety of sectors, from virtual assistants to email automation tools. And as these devices become more integrated into regular activities, the possibility of exploitation increases dramatically, offering a tremendous challenge to cybersecurity standards.

Source: (1)

Source: (2)

Understanding Morris II AI Worm

The Morris II worm, named after the infamous Morris computer worm of 1988, marks a new frontier in cyber threats. Unlike traditional malware, which requires human participation to spread, this AI-powered threat functions autonomously, utilising the very mechanisms designed to boost productivity and efficiency.

The Morris II worm exploits weaknesses in generative AI models like ChatGPT and Gemini to gain unauthorised access and exfiltrate sensitive data. Researchers used a series of coordinated operations to show how the worm could penetrate email assistants, modify prompts, and harvest confidential information from emails.

One way used by the researchers was to inject a malicious prompt into an email, successfully circumventing security protections and causing the AI model to generate a response containing sensitive information. This response, when conveyed to unsuspecting recipients, acts as a vector for additional proliferation, continuing the infection cycle.

Furthermore, the researchers designed a strategy in which a self-replicating prompt was inserted within an image file, allowing the worm to abuse unwary users by forwarding messages with spam or abusive content. This subtle strategy jeopardises not only data integrity but also user trust in AI-powered communication solutions.

While the Morris II worm has so far only been tested in controlled situations, its potential influence on real-world applications cannot be emphasised. As AI ecosystems become more interconnected and autonomous, the risk of widespread distribution of such threats grows, necessitating proactive steps to prevent malicious activities.

In response to these discoveries, the researchers have called for better architecture design in the AI ecosystem, emphasising the importance of strong security procedures and rigorous validation processes. 

Furthermore, they have informed significant players, such as Google and OpenAI, about the inherent risks posed by prompt-injection vulnerabilities, asking them to strengthen defences and reduce possible threats.

Despite researchers' proactive posture, the appearance of generative AI worms in the wild is still a distinct possibility, given the rapid proliferation of AI technologies across numerous industries. Vigilant and proactive cybersecurity procedures are critical to mitigating the changing threat landscape and protecting sensitive data from abuse.


The recent finding of the Morris II worm emphasises the critical need for improved cybersecurity measures in the rapidly developing field of artificial intelligence. Google has not yet responded to the AI worm but big players must act fast before it gets out of hand, protecting the integrity of AI-driven systems. 

Watch the Ben Nassi Video Here:

Beware of This AI Worm That Steals Your ChatGPT and Gemini Data