The cybersecurity world is holding its breath for Claude Mythos. As Anthropic’s frontier model remains behind the locked doors of Project Glasswing, a sense of anxiety has set in. With the model reportedly able to chain 32-step cyberattacks and unearth decades-old zero-day vulnerabilities, regulators are scrambling and tech giants are racing to patch infrastructure before its release.
The flaw in this narrative is that it assumes the adversary is waiting for Mythos before starting their AI revolution. They are not. While the industry fixates on a hypothetical future model, adversaries have already operationalized an arsenal of open-source and unrestricted AI tools, changing the rules of engagement today.
The Open-Source Arsenal
The fixation on proprietary models ignores the capabilities already available in the open-source community. Threat actors are using jailbroken versions of Llama and purpose-built models like WormGPT and FraudGPT, tools designed specifically to bypass ethical guardrails. These aren’t future threats; they are active subscriptions available today, allowing even low-skill attackers to generate convincing, localized phishing lures and sophisticated malware code at a scale no human team can match.
Real-World Examples
AI-Driven Deepfake Attacks
We have already seen cases where attackers used AI-generated voice cloning to imitate executives, convincing employees to transfer millions of dollars. These vishing (voice phishing) attacks don’t require the Mythos model; they require basic, widely available generative AI that can be trained on just a few seconds of public audio from a YouTube interview or a quarterly earnings call.
Rapid-Fire Smishing
Attackers are now using AI to manage thousands of simultaneous SMS-based phishing conversations. If a target replies with a question, the AI provides a technically plausible response in real time, building trust until the victim clicks a malicious link. This level of interaction was previously impossible at scale; now it is fully automated.
AI-Orchestrated Data Espionage
In late 2025, security researchers at Google’s Threat Intelligence Group (GTIG) identified a new class of AI-native malware dubbed QUIETVAULT. Unlike traditional stealers that just grab everything and run, QUIETVAULT uses an integrated LLM to act as an on-site data triager.
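To make the "on-site data triager" pattern concrete, here is a minimal, hypothetical sketch of the idea. This is not QUIETVAULT's actual code: where an AI-native stealer would hand the scoring step to an embedded LLM, a simple pattern-based scorer stands in for that model call. The point it illustrates is the behavior shift, exfiltrating a ranked shortlist instead of grabbing everything.

```python
import re

# Hypothetical sketch of the "on-site data triager" pattern.
# An AI-native stealer would delegate scoring to an embedded LLM;
# this simple regex-based scorer stands in for that model call.
SENSITIVE_PATTERNS = {
    "credential": re.compile(r"(?i)\b(password|api[_-]?key|secret)\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def triage(documents: dict[str, str], top_n: int = 3) -> list[str]:
    """Score each captured file and return only the highest-value names,
    mimicking how a triager exfiltrates a shortlist instead of everything."""
    scores = {}
    for name, text in documents.items():
        scores[name] = sum(
            len(pattern.findall(text)) for pattern in SENSITIVE_PATTERNS.values()
        )
    ranked = sorted(scores, key=scores.get, reverse=True)
    return [name for name in ranked[:top_n] if scores[name] > 0]
```

The same scoring logic, pointed the other direction, is essentially what defensive DLP tooling does, which is part of why this class of malware is hard to distinguish from legitimate automation by behavior alone.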
How Attackers Use AI as an Operational Accelerator
To combat this, we must stop looking at AI as merely a new tool and start seeing it as an operational accelerator. Based on the examples above and what we have seen in the field, adversaries have shifted their approach in several ways:
- Automated Exploit Generation: Attackers are using AI to analyze new software patches the moment they are released. By comparing the patched and unpatched code, AI can identify the vulnerability and generate a working exploit in minutes. This effectively eliminates the “patch window” that security teams used to rely on.
- The Adversary-in-the-Loop Model: We are seeing the rise of autonomous attack agents. These agents handle the grunt work of a breach like scanning networks, trying credentials, and escalating privileges without human intervention. A human attacker only steps in to make the final decision once the AI has already opened the door.
- AI-Powered Data Parsing: In traditional breaches, it took weeks for attackers to sort through stolen data. Today, they run exfiltrated databases through LLMs to instantly identify the highest-value targets, credentials, and sensitive personal information, allowing them to pivot to the next phase of the attack almost instantly.
- Hyper-Localized Social Engineering: AI has removed the language barrier. A threat actor in Eastern Europe can now send a perfectly written, culturally nuanced text message in Spanish to a target in Bogota, or in Japanese to a target in Tokyo.
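The "Automated Exploit Generation" point above hinges on a simple first step: diffing the patched and unpatched code to find exactly what the vendor changed. A minimal sketch of that step, using Python's `difflib` on two hypothetical versions of a function (the snippets are invented for illustration):

```python
import difflib

# Hypothetical before/after snippets of a patched function; in a real
# pipeline these would come from diffing two release artifacts.
unpatched = """\
def read_packet(buf, length):
    data = buf[:length]
    return parse(data)
"""

patched = """\
def read_packet(buf, length):
    if length > len(buf):
        raise ValueError("length exceeds buffer")
    data = buf[:length]
    return parse(data)
"""

def changed_lines(old: str, new: str) -> list[str]:
    """Return lines added by the patch -- for a security fix, these usually
    point straight at the condition the code failed to check before."""
    diff = difflib.unified_diff(old.splitlines(), new.splitlines(), lineterm="")
    return [line[1:].strip() for line in diff
            if line.startswith("+") and not line.startswith("+++")]
```

Here the added lines immediately reveal a missing bounds check. An AI-assisted pipeline automates everything after this point, reasoning from the added guard back to the input that would have violated it, which is why the patch window has collapsed from weeks to minutes.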
Prioritizing Defense Against Today’s Operational AI Threats
It is natural to focus on the arrival of frontier models; the risks they pose are significant and deserve attention. The danger of fixating solely on those future threats, however, is that we overlook a reality already in effect: while we wait to see how models like Mythos change the landscape, adversaries are already using open-source and unrestricted AI tools to rewrite the rules of engagement.
Modern defense requires a shift in perspective. Rather than preparing for one specific high-profile model, we must focus on attack infrastructure, the one element that stays constant. Whether an adversary uses an autonomous agent or a simple script enhanced by a basic LLM, they must still move across the network and contact malicious infrastructure.
Operationalizing Defense for the AI Era
Most security stacks operate like a Swiss cheese model: layers of defense exist, but threats slip through the holes where those layers fail to overlap. The approach required here is fundamentally different. In an era where AI-driven adversaries move laterally through an environment in seconds, a fragmented defense is no defense at all.
Effective protection requires visibility across the entire environment, including the network, endpoint, identity, and cloud. A modern approach to combat AI attacks can be grouped into three main pillars:
- Continuous, Real-Time Visibility: Rather than waiting for a specific model to trigger a breach, security teams must be able to identify the earliest indicators of risk. By continuously analyzing telemetry and measuring the compromise state 24/7, organizations can stop chasing hypothetical future threats and start defending against the reality of today.
- Unified Visibility: Bridging the gap between disparate environments is the only way to eliminate blind spots. By correlating live network activity against known malicious infrastructure and identity behaviors, it becomes possible to expose the lateral movement that autonomous agents rely on to stay hidden.
- Machine-Speed Detection and Response: When an adversary uses AI, human triage becomes a bottleneck. Modern defense relies on automated threat response and host isolation to block confirmed malicious activity in real time, stopping intrusions and data exfiltration before they cause damage.
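The pillars above can be sketched as a single correlate-and-respond loop. This is a deliberately minimal illustration, not a product implementation: the indicator set and the isolation decision are placeholders, where a real deployment would consume live threat-intelligence feeds and call an EDR isolation API.

```python
from dataclasses import dataclass

# Minimal sketch of the correlate-and-respond loop: match live network
# telemetry against known-malicious infrastructure, then isolate at
# machine speed. Indicators and isolation are placeholders; a real
# deployment would use live intel feeds and an EDR isolation API.
KNOWN_BAD_INFRA = {"203.0.113.50", "198.51.100.7"}  # documentation-range IPs

@dataclass
class ConnEvent:
    host: str
    dest_ip: str

def respond(events: list[ConnEvent]) -> list[str]:
    """Return hosts to isolate: any host seen contacting known-bad
    infrastructure is cut off without waiting for a human analyst."""
    isolated = []
    for event in events:
        if event.dest_ip in KNOWN_BAD_INFRA and event.host not in isolated:
            isolated.append(event.host)
    return isolated
```

The design choice worth noting is that the decision keys on infrastructure, not on the attacker's tooling: the same loop fires whether the connection came from an autonomous agent or a hand-run script, which is exactly the constant the strategy above relies on.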
The emergence of these frontier models is a reflection of the speed and sophistication we are already seeing in the field. The tools will keep evolving, but the strategy for defending against them is grounded in the fact that you cannot manage what you cannot see. By prioritizing unified visibility and a responsive posture, you can move away from speculation and focus on the facts. Navigating this landscape isn’t about waiting for the next breakthrough. It’s about ensuring your defenses are just as active and agile as the threats they face every day.