As new threats hit enterprise systems and light up enterprise security dashboards, security analysts need to make swift and accurate decisions so that they can respond in the best way possible. Yet so many alerts arrive at any given time that focusing on the ones that matter can seem impossible for the typical security team.
Consider the findings reported in this InfoSecurity magazine story: the typical security team today is deluged with 174,000 security alerts a week but is equipped and staffed to deal with only 12,000 of them. If security teams want any degree of success, they had better pick the right 12,000 alerts to investigate.
We all know that isn’t reality. The reality is that security teams are not going to find the right 12,000 alerts to investigate. They are going to exhaust themselves chasing false positives. At the same time, they learn only after the fact that they missed critical notifications that, had they been adequately investigated, could have stopped a breach. Over time, teams become so fatigued that they burn out, and performance drops even further.
Eventually, they will likely leave the organization for employment elsewhere, and the organization loses talent and experience it will have a very tough time replacing. Meanwhile, business leaders lose confidence that these teams can deliver value. Situations like this rarely improve without dramatic intervention.
Of course, enterprises could try to fix the situation by hiring more security analysts. The problem here is that those skilled workers don’t exist in great enough numbers (at least not at salaries most organizations are willing to pay) for every organization to hire.
Fortunately, it’s a situation that can be resolved if security teams and enterprise technology leaders focus on the everyday conditions that exacerbate alert fatigue. Based on my conversations with CISOs, here are five common conditions that make alert fatigue worse than it needs to be:
Poorly tuned security and monitoring systems. Technology teams in general are inundated with alerts. Operations teams get system and infrastructure alerts, and application owners are likewise pinged constantly about the status of their applications. Still, it is security teams that are hit with the most alerts. These alerts pound the screens of security analysts and threat hunters at such a rate that many meaningful alerts simply get overlooked. Yet hidden within this sea of alerts are genuine indications that systems have been compromised.
Poorly tuned systems are one of the primary reasons for alert overload. When systems are tuned poorly, they can issue thousands of meaningless alerts triggered by misconfigured security rulesets that don’t account for the context of the environment, the nature of the risk, or the magnitude of the issue, if there’s an issue at all.
While false positives are created by systems that are tuned too broadly and flag events that shouldn’t trigger alerts, false negatives are attacks that monitoring tools miss because the rules are written too tightly. Staff who worry most about missing attacks will err on the side of tuning their systems too broadly, generating false positives, while those who feel overrun by alerts will tune theirs too tightly, risking false negatives. Ideally, enterprises should work with their monitoring tool vendors to tune their systems correctly, identify the systems and networks that need the closest monitoring, and adjust accordingly. Focusing adequately on this challenge alone can dramatically improve both the quantity and the quality of the alerts with which teams must contend.
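To make the idea concrete, here is a minimal sketch of what context-aware tuning can look like in practice. The rule logic, field names, allowlist, and threshold are hypothetical illustrations, not drawn from any particular product:

```python
# Minimal sketch of context-aware alert tuning.
# Assumption: each event is a dict with "source_ip", "event", and "count_per_min" fields.
from typing import Iterable, List

# Environmental context: hosts that legitimately generate noisy traffic,
# such as authorized vulnerability scanners.
KNOWN_SCANNERS = {"10.0.5.20", "10.0.5.21"}

# Tuned threshold: tight enough to catch brute forcing, loose enough to
# ignore ordinary failed-login noise from users mistyping passwords.
FAILED_LOGIN_THRESHOLD = 25  # failed logins per minute per source


def should_alert(event: dict) -> bool:
    """Return True only for events worth an analyst's attention."""
    if event["source_ip"] in KNOWN_SCANNERS:
        return False  # expected noise from known scanners, suppress
    if event["event"] == "failed_login":
        return event["count_per_min"] >= FAILED_LOGIN_THRESHOLD
    return True  # anything not explicitly tuned still surfaces


def triage(events: Iterable[dict]) -> List[dict]:
    """Filter a stream of raw events down to actionable alerts."""
    return [e for e in events if should_alert(e)]
```

The point of the sketch is simply that a rule which knows its environment (the allowlist) and the magnitude of the behavior (the threshold) produces far fewer meaningless alerts than one that fires on every failed login.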
Never-ending tool sprawl. Another challenge that exacerbates alert fatigue is tool sprawl. Tool sprawl is essentially the result of organizations buying tool after tool without investing the time, training, and personnel needed to manage them properly. Enterprises invest in security tools hoping they will solve their problems, but too often assume the tools can simply be deployed and forgotten.
This creates a mess when one considers that organizations have cloud and on-premises security tools, IPS/IDS systems, data protection systems, behavioral analysis systems, authentication systems, network and application monitors, and so on, all monitoring their environments. That’s a lot of dashboards to watch, far more than most enterprises can successfully monitor.
Additionally, if systems aren’t appropriately tuned, as discussed in the previous section, it’s not just the multitude of dashboards that teams must contend with but also the flood of alerts from all of these systems. Often this isn’t because the tools can’t do their job; it’s because employees either don’t have the training they need or there aren’t enough of them to manage the tools properly.
Organizations should take a hard look at which security tools are truly necessary and which can reasonably be retired. For the tools that remain, make sure staff are fully trained in their use. This is an area where enterprises can learn to work smarter, not harder.
The lack of skilled security personnel. Enterprises face, and will likely continue to face, a significant challenge filling the security seats they need to fill. A recent study from (ISC)², the security professionals association, showed just how difficult it is: according to its 2019 Cybersecurity Workforce Study, the trained cybersecurity workforce needs to grow by 145% to meet demand. The study pegged the current cybersecurity workforce at 2.8 million professionals, against a need for more than 4 million.
This is a long-term problem, and organizations will need to get more from the staff and security technology investments they’ve already made, increasing productivity wherever they can. For certain functions, it may make sense to leverage service providers to access the necessary expertise.
Complex, hybrid infrastructure. Another common reason security teams suffer from alert fatigue is the complexity of today’s environments. Most mid-size to large enterprises have systems on-premises and in various types of clouds, along with virtualization and containers. These environments are rapidly changing and evolving, and it’s all security teams can do to keep up with the change.
According to research firm ESG’s paper, Exploring Hybrid Cloud Adoption and the Complexity of Securing East-West Traffic, 88% of survey respondents said they are using both infrastructure-as-a-service and on-premises infrastructure. ESG doesn’t expect that mix to change any time soon.
Of course, there is little an enterprise can do about the complexity of its environment; with the pace of digital transformation, environments will likely only grow more complex. Enterprises can, however, take steps to normalize their environments and automate monitoring and management as much as possible, which will help security teams better manage their way through the complexity.
Too much reliance on manual analysis. Much of the work within SOCs is still manual. In many cases, security analysts are still trying to correlate alert events by hand, searching across multiple dashboards and logs. This is exactly the kind of work that artificial intelligence and machine learning are very good at today.
Wherever possible, security teams should seek out tools that can scour their alerts and surface the ones that matter; instead of trying to connect the dots manually, rely on machine learning to correlate the alerts that belong together. This way, the picture of what is happening comes into clear focus for analysts, and they can better determine which events most need investigating.
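As a rough illustration of what this kind of correlation involves, here is a minimal sketch that groups related alerts with density-based clustering. The alerts, features, and parameters are hypothetical, and a real system would use far richer features (user, process, destination, technique) than host and time:

```python
# Minimal sketch of ML-assisted alert correlation using density-based clustering.
# Assumptions: alerts are (timestamp_in_minutes, host) pairs and scikit-learn is available.
import numpy as np
from sklearn.cluster import DBSCAN

alerts = [
    (0.0, "web-01"), (2.0, "web-01"), (3.5, "web-01"),  # likely one incident
    (1.0, "db-02"),                                      # isolated event
    (60.0, "web-01"),                                    # same host, but far apart in time
]

hosts = sorted({h for _, h in alerts})
host_index = {h: i for i, h in enumerate(hosts)}

# Encode each alert as (time, host); weight the host dimension heavily so that
# alerts only cluster when they share a host AND are close together in time.
X = np.array([[t, host_index[h] * 1000.0] for t, h in alerts])

labels = DBSCAN(eps=10.0, min_samples=2).fit_predict(X)  # eps ~ a 10-minute window
for (t, h), label in zip(alerts, labels):
    print(f"{h} @ {t:>5.1f} min -> cluster {label}")  # -1 means unclustered noise
```

Run as written, the three early web-01 alerts land in one cluster while the isolated db-02 alert and the much later web-01 alert are left as noise, which is the grouping an analyst would otherwise have to assemble by hand.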
With the correct analysis tools, which today means leveraging machine learning, security analysts are given the context of an attack, including who is attacking, what is happening behind the attack, and the potential business impact of the incident. Bringing this all together typically requires detail about why an incident matters: rapid evaluation not only of who is launching the attack (such as IP address and geographic location) but also of the criticality of the systems being targeted, the type of system, whether the data is sensitive or falls under regulatory compliance, and other business context.
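A minimal sketch of that enrichment step might look like the following. The asset inventory, field names, and classifications are hypothetical stand-ins; a real pipeline would pull this context from a CMDB, a geo-IP service, and a data-classification catalog:

```python
# Minimal sketch of enriching a raw alert with business context.
# Assumption: alerts are dicts keyed by "host"; the inventory below is illustrative only.
ASSET_INVENTORY = {
    "db-02": {"criticality": "high", "data_class": "regulated-PII"},
    "web-01": {"criticality": "medium", "data_class": "public"},
}


def enrich(alert: dict) -> dict:
    """Attach business-impact context to a raw alert."""
    asset = ASSET_INVENTORY.get(
        alert["host"], {"criticality": "unknown", "data_class": "unknown"}
    )
    return {
        **alert,
        "asset_criticality": asset["criticality"],    # how much the target matters
        "data_classification": asset["data_class"],   # regulatory exposure
    }


print(enrich({"host": "db-02", "source_ip": "203.0.113.7", "event": "sql_injection_attempt"}))
```

The same raw alert reads very differently once an analyst can see that the target holds regulated data on a high-criticality system.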
Machine learning can also help identify historical patterns that rules-based systems and human analysts easily overlook. Such patterns can help indicate who may be behind an attack, whether an incident is part of a concerted campaign, and what the potential impact of an attack might be.
When alerts are coming in at a relentless pace, analysts can’t be expected to do it all on their own. They need the help machine learning and behavioral analytics can provide, not by making decisions for security teams, but by clearing away the noise so that security operations can focus on the alerts and incidents that matter and improve their effectiveness.
And that’s what all of this is truly about: helping security analysts and security operations professionals focus on what matters most and clearing away the alert clutter.
George V. Hulme is an internationally recognized information security and business technology writer. For more than 20 years Hulme has written about business, technology, and IT security topics. From March 2000 through March 2005, as senior editor at InformationWeek magazine, he covered the IT security and homeland security beats. His work has appeared in CSOOnline, ComputerWorld, Network Computing, Government Computer News, Network World, San Francisco Examiner, TechWeb, VARBusiness, and dozens of other technology publications.