Live Training | What's New in Lumu Defender

Already have an account? Sign in

Sign in

3 Game-Changing Cybersecurity Trends From DEFCON 33

Fresh from DEFCON 33, Lumu’s Mario Lobo identifies a revolutionary shift in AI-driven attacks that is changing the rules for cyber defenders and their tools.
DEFCON 33

Table of Contents

I’ve just spent two weeks immersed in the global cybersecurity community at BSides Las Vegas and DEFCON 33. After eighteen years in this industry, I can tell you the atmosphere this year was different. The usual buzz around a specific, clever exploit was replaced by something deeper: a fundamental revolution in how attacks are conceived, automated, and launched.

While the keynotes at these premier conferences offer great insights, the real story unfolds in spaces where people congregate, such as the Red Team Village, the Blue Team Village, and (especially this year) the AI Village. It’s in these focused environments, through hushed conversations and impromptu demos, that you see trends come to life. This is where the theoretical becomes all too real, such as the new AI-driven technique that bypasses financial controls or exfiltrates healthcare data.

The volume of information can be overwhelming, so my goal is simple: to cut through the noise. Here are the three critical, correlated trends from these conferences that will define the cybersecurity battlefield for the next eighteen months.

Trend 1: The AI Arms Race Is No Longer Theoretical

Last year, conversations about AI in cybersecurity felt like exciting thought experiments. This year, the experiments are over. AI is rapidly driving a new arms race, and both attackers and defenders are building their arsenals.

The central theme humming through the conference was this stark reality: security operations and attack campaigns are now happening at machine speed.

For Defenders: The Rise of the AI Analyst

The most promising development for defense is the emergence of AI agents that function like seasoned security analysts.

In a standout presentation at BSides, Oudy Even Haim demonstrated how cognitive frameworks use Large Language Models (LLMs) to connect disparate alerts, reconstruct complex attack timelines, and prioritize threats by business impact. This is about moving beyond simple automation to augment human intuition with the scale and speed of a machine.

In another session, How LLMs Helped Catch Lazarus and Stop a Crypto Backdoor, we heard about how LLMs are helping security teams. It enables teams not just to respond but also to contextualize with threat intelligence in real time.

For Attackers: A New Wave of AI Attacks

The flip side is sobering: attackers are adopting AI even faster to make their operations more evasive and efficient.

Frameworks like AIMaL (Artificially Intelligent Malware Launcher), from researchers Natyra & Endrit Shaqiri, use AI to mutate malware in real time. This creates a constantly shifting threat designed to slip past modern EDR and XDR controls. It’s no longer static malware; it’s malware that thinks.

Other workshops showed how attackers can deploy a sophisticated, AI-enhanced Command and Control (C2) environment in under an hour using tools like Mythic C2, Nemesis, and RAGnarok.

Even social engineering is being industrialized. The talk Automating Phishing Infrastructure Development Using AI Agents by Fred Heiding showed how a generative AI model could create a convincing, large-scale phishing campaign from a single prompt: the company’s name. The AI can even generate scripts for a follow-up vishing (voice phishing) attack to trick employees into giving up credentials.

Trend 2: Cracks in the New AI Foundation

Every new tech wave follows the same cycle: innovation moves at light speed while security cleans up the mess later. The AI boom is no exception. The rush to build has created frameworks that prioritize functionality over resilience, and the DEFCON community is already finding the cracks. Two in particular stood out.

The New Social Engineering: Tricking the AI

The first crack is a vulnerability in Large Language Models (LLMs) to prompt injection. This is a cybersecurity exploit where malicious actors trick an LLM into executing malicious commands, almost like social engineering but for LLMs. Prompt injection can manipulate the LLMs into performing actions they shouldn’t, like revealing sensitive information or generating harmful content.

A talk by Junki Yuasa, Krity Kharbanda, and Yoshiki Kitamura highlighted prompt injection’s growing impact as a security issue across a variety of applications.

In another session, two tools that automatize the generation of diverse prompt injection payloads were demonstrated by their creator Sasuke Kondo.  These were called MPIT (Matrix Prompt Injection Tool) and ShinoLLM.

Our AI Foundation Is Being Built on Quicksand

Secondly, we are already standardizing the central workings of the AI ecosystem: a talk from Srajan Gupta and Vinay Kumar showed just how dangerous this might be.

They took a close look at the Model Context Protocol (MCP), which is quickly becoming the default way to connect AI agents to data and tools. They revealed a series of critical vulnerabilities beneath MCP’s slick, user-friendly facade.

This isn’t a flaw in one application, it’s a fundamental weakness in the connective tissue that the entire AI industry is starting to rely on. We’re building skyscrapers on a foundation we haven’t properly inspected.

Trend 3: Hiding in Plain Sight With Novel C2 Channels

As defenders build higher walls with Zero Trust and continuous monitoring, attackers are getting more creative. This year, the focus was on novel Command and Control (C2) channels — the secret communication pathways used to control compromised systems and exfiltrate data. The new approach is not just to be stealthy but to hide in plain sight by making malicious traffic look like normal business activity.

Abusing Your Everyday Cloud Services

The presentations at DEFCON 33 showed that attackers use the very cloud services we trust as their personal C2 infrastructure.

  • Terada Yu, demonstrated how to turn Microsoft Outlook itself into a C2 channel, using standard email protocols (SMTP/IMAP) to bypass heavily restricted environments.
  • Ariel Kalman showed how attackers can leverage Google Cloud’s Identity-Aware Proxy (IAP) — a security feature! — to exfiltrate data and sidestep traditional network controls.
  • Emre Odaman and Anıl Çelik unveiled GlytchC2, a tool that can exfiltrate data of any kind by encoding it into the video feed of live-streaming platforms like YouTube or Twitch.

The genius of these methods is that the traffic is flowing through services your organization has likely already approved. This makes it very difficult to detect.

Living Off the Land 2.0: Turning Your Tools Against You

Living off the Land (LotL) is when an attacker uses a target’s own legitimate tools against them. Now this technique is evolving in an unexpected direction. It’s no longer just about using PowerShell or system binaries: attackers are using the very tools that were designed to defend you.

In his talk Who Scans the Scanner? Lucas Carmo showed how a vulnerability in a Trend Micro security platform could be weaponized. This tool, meant for protection, was turned into an attacker’s accomplice for internal reconnaissance.

Similarly, a talk by Quentin Roland and Wilfried Bécard demonstrated how Group Policy Objects (GPOs), a tool for Windows administrators, can be abused for a surprising range of offensive C2 actions.

In this new landscape, it is clear that even your most trusted security and administrative tools can be turned against you.

The Battlefield Has Changed, Has Your Strategy Kept Up?

Reflecting on DEFCON 33, one conclusion is inescapable: the line between legitimate and malicious activity has never been blurrier. To connect the dots, attackers are using AI to build adaptive weapons (Trend 1), exploiting foundational flaws in new platforms (Trend 2), and hiding their communications inside your trusted services (Trend 3).

The common thread through all of this is the network. These advanced threats are designed to blend in, making them invisible to endpoint security alone and raising a fundamental question: How do you stop an attack you can’t see?

What you need is to be able to observe, analyze, and react to what is happening on your network. The one certainty in cybersecurity is that no matter what door or window attackers come in through, they need to use the network. An effective NDR solution, like Lumu Defender, allows you to spot the telltale signs of hostile activity amongst the ocean of legitimate noise. It can react to suspicious activity in milliseconds, reducing the window of opportunity for attackers.

The challenges are significant, but you don’t have to face them blind. Contact Lumu today to learn how to defend against this new generation of threats.

Summarize this post


Your FREE compromise assessment is just a few clicks away

Share this post

Subscribe to Our Blog

Get the latest cybersecurity articles and insights straight from the experts.

RELATED POSTS

EDR Evasion
Trends

Why EDR Evasion is the New Threat Standard

Reading Time: 4 mins48% of ransomware attacks successfully evade EDR. Threat actors like Qilin are exploiting the ‘tuning gap’ in managed security. We look at how to regain the upper hand.

Join our pre-day 
workshop waitlist

By clicking “Submit Request” you agree to the Lumu Terms of Service and Privacy Policy.