Flash Talk: Forrester on SIEMs, Alert Fatigue, and Prioritizing Cybersecurity Projects

Forrester industry analyst Joseph Blankenship has established himself as one of the most recognized and influential voices in the cybersecurity industry. His experience on the frontlines of product leadership at some of the world’s most innovative vendors, including McAfee (Intel Security), Vigilar (one of the earliest managed security services companies), and IBM (ISS), has helped inform his broad perspective on the evolving cybersecurity landscape, and it is one of the reasons so many organizations are keen to solicit his expert guidance as they prioritize their budgets and strategic initiatives.

So we were thrilled to have Joseph join our most recent Flash Talk on the topic of “SIEMs, Alert Fatigue, and Prioritizing Cybersecurity Projects.” What follows is a condensed version of the talk, highlighting some of the key discussion points between Lumu CEO & Founder Ricardo Villadiego and Joseph. The entire half-hour discussion can be viewed here.

Simplifying Cybersecurity

Ricardo Villadiego (RV): Security teams are demanding simpler yet more effective ways to approach the breach problem. How can this be done?

Joseph Blankenship (JB): One thing I think we’ve really got to focus on and address is the issue of complexity. Year after year in the security technographics survey that we do with users and buyers of security technologies all over the world, one of the top concerns they report is the complexity of the systems they use. It’s not only the complexity of the IT systems the business uses to operate, but also the complexity of the security apparatus we use to monitor and secure those systems. Because we’ve got all of this complexity built in, we also build vulnerabilities into the system. And then we add security controls on top of that, which makes our job as security professionals that much harder to do.

So one of the things I believe we’ve got to do as security leaders is rationalize our technology investments and consolidate some of our security tooling. Instead of having myriad vendors doing slightly different things, we really have to evaluate some of those technology decisions and make these things easier to use. Rationalizing those investments means investing in technologies that allow security teams to deliver the best prevention and detection possible, where they’re needed most in the enterprise.

Alert Fatigue

RV: What is the root cause of the growing alert fatigue issue? How did we get here and what can we do to fix it?

JB: I think a lot of it has to do with the fact that we made a decision a long time ago in security that may not have been the best decision. We said we wanted to bring in a lot of data, and then we were going to find the needles in the haystack. What we figured out was that in order to build those haystacks, we need lots and lots of data. So we turned security into this big data problem, and we decided that all data was important data, and that we have to go and try to mine all of this data to find the things that are important. One of the things that ended up happening is that, without the ability to adequately tune this technology, it would throw a lot of alerts at a security analyst, who then has to figure out which signals to respond to.

The other thing that happened is we were getting duplicate alerts from different technologies. We had the promise of things like SIEMs that were supposed to deduplicate these things and help us correlate all of these alerts, but it never really worked the way that it should. It was never really that easy to do, especially at scale. So we have poorly tuned technologies, and all of these technologies also have different interfaces, so it becomes really hard for the security analyst to keep up with all of these things.

What this really kind of tells us is that without the ability to successfully correlate these alerts at scale, we don’t really know where to focus our efforts. What we need to do is to get really high fidelity signal to our analysts. We have to give them the confidence that the alert that they see is actually something that’s important—that it’s a true positive and not a false positive. We’ve really got to pivot away from this big data problem and focus more on high fidelity alerts, where we have high confidence that something is happening and there’s a potential for a breach.
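
To make that deduplication-and-fidelity idea concrete, here is a minimal sketch in Python. Everything in it is an assumption made for illustration: the alert fields, the host/indicator fingerprint, and the 0.8 confidence threshold are hypothetical, not any particular SIEM’s schema. It simply shows how duplicate alerts from different tools might be collapsed into a single high-confidence signal for an analyst:

```python
from collections import defaultdict

# Hypothetical alert records; the field names are illustrative only.
alerts = [
    {"source": "EDR",  "host": "web-01", "indicator": "198.51.100.7", "confidence": 0.95},
    {"source": "SIEM", "host": "web-01", "indicator": "198.51.100.7", "confidence": 0.60},
    {"source": "IDS",  "host": "db-02",  "indicator": "203.0.113.9",  "confidence": 0.30},
]

def fingerprint(alert):
    # Collapse duplicates: alerts about the same host/indicator pair
    # raised by different tools describe one underlying event.
    return (alert["host"], alert["indicator"])

def correlate(alerts, threshold=0.8):
    grouped = defaultdict(list)
    for alert in alerts:
        grouped[fingerprint(alert)].append(alert)
    # Surface only groups where at least one tool reports high confidence,
    # so the analyst sees one high-fidelity signal instead of N raw alerts.
    return [
        {
            "fingerprint": key,
            "sources": [a["source"] for a in group],
            "confidence": max(a["confidence"] for a in group),
        }
        for key, group in grouped.items()
        if max(a["confidence"] for a in group) >= threshold
    ]

for incident in correlate(alerts):
    print(incident)  # one consolidated, high-confidence incident per group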

Visibility and Zero Trust

RV: What is the role of visibility in a Zero Trust cybersecurity model?

JB: One of the reasons we put the ring of visibility and analytics around all of these things in the Zero Trust extended model is that most of our security tools are really focused on what’s coming at us externally, and that’s what they’re reporting on. Security operations had very little information about what was actually happening inside their networks and systems. So one of the things we realized we had to do was shine a light on where things are actually happening: where our assets are, where all of our data is stored, where it’s processed, and where our users are operating. We need visibility into that environment: what assets are attached to networks, how data is transacted, and how users behave inside the systems.

So it became really important that we bake that into Zero Trust, because we’re going to say, “Hey, we’re going to build this set of prevention techniques in order to protect data and to protect systems.” We also have to have visibility into the way all of those things are connected, and then be able to give security operators visibility into anything that’s happening inside the systems. So we made it a key part of the model to empower security teams to detect and respond to threats inside the environment, and not just to what they’re observing externally.

The Role of the Modern SIEM

RV: Where do SIEMs fit into the context of modern cybersecurity architectures? 

JB: SIEMs have been evolving for quite a long time. We had this period a little over a decade ago where the SIEM sort of plateaued and became like a storage facility: we just dumped logs into it so we could meet compliance requirements. Then at a certain point we realized, “Hey, we really want to try to do threat detection with these things. So let’s start pumping tons and tons of data into them. Let’s start feeding threat intelligence into them.” But the SIEM was really not built to handle that kind of data problem. Like I said before, we started treating security as this big data problem, and the SIEM manufacturers responded by building much more scalable infrastructure to be able to take all this stuff in. The problem, however, was that the rules-based SIEM still wasn’t well suited for all of that.

We haven’t even covered SIEM as a standalone capability here at Forrester in over a decade. When I started about five years ago, I began looking at the SIEM space as being a little more evolutionary, where we’d gone from the rules-based SIEM to adding in behavioral capabilities: things like user behavior analytics, or network analysis to get visibility into what’s going on inside a network. We also started adding elements of automation. So now you’ve got all of these different capabilities being brought together in what we describe as the ‘Security Analytics Platform’ market.

So I think that was one evolution. I think the next evolution will be to take the SIEM out of the data center and out of the on-premises deployment it’s been locked into for a long time, and move to a more cloud-focused delivery model. Instead of attempting to analyze all the data as some big data analytics problem, we’ll be able to get far more selective about the types of data we bring in, the scrutiny we apply to that data, and how we prioritize high fidelity data, and maybe we archive some of this other stuff. Then, if we need to go and conduct a threat hunt or go back and do forensics, we can make that archived data available. But for our actual focus on threat detection and response, we need to be super focused on the things we know are high fidelity alerts that come with a lot of context. And then we can apply automation to augment our human analysts.
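
As a rough illustration of that selective, tiered model, here is a short Python sketch. The source names, confidence field, and 0.8 cutoff are all assumptions made for the example, not any vendor’s actual pipeline; it simply routes events treated as high fidelity to a real-time detection tier and archives everything else for later hunting or forensics:

```python
from typing import Iterable

# Hypothetical set of sources we treat as high fidelity; illustrative only.
HIGH_FIDELITY_SOURCES = {"edr_detection", "network_compromise_signal"}

def route_event(event: dict) -> str:
    """Send high-fidelity events to the real-time detection tier;
    archive everything else cheaply for later hunting or forensics."""
    if event.get("source") in HIGH_FIDELITY_SOURCES and event.get("confidence", 0) >= 0.8:
        return "detect"   # analyzed immediately, with full context
    return "archive"      # retained for threat hunting / forensics

def ingest(events: Iterable[dict]) -> dict:
    tiers = {"detect": [], "archive": []}
    for event in events:
        tiers[route_event(event)].append(event)
    return tiers

events = [
    {"source": "edr_detection", "confidence": 0.9, "host": "web-01"},
    {"source": "dns_log", "confidence": 0.1, "host": "web-01"},
]
print({tier: len(batch) for tier, batch in ingest(events).items()})
```

The design point the sketch tries to capture is the one from the answer above: detection gets a small, trusted stream it can act on, while the bulk of the data stays cheap to keep and is only pulled forward when a hunt or investigation needs it.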

We’d like to thank Joseph for joining us for this illuminating Flash Talk. We’re already looking forward to hosting the next one and would love to hear your suggestions for topics and/or speakers.

Watch the rest of the 30-minute Flash Talk for more questions and more of Joseph’s enlightening answers.
