Preempting Cybersecurity Risks: When Recency Bias Is Good
When it comes to implementing an investment strategy, bias is usually seen as a bad thing. Portfolio managers, analysts, and traders, like anyone else, are susceptible to unconscious cognitive biases that skew their forecasts. If those biases make their way into investment theses and quantitative models, they can wreak havoc on fund performance.
In investing, this is particularly true of recency bias, the tendency to give recent events more weight than historical ones. It’s one of the reasons investment teams analyze years of historical data, surfacing both subtle and pronounced trends across economic and market cycles that can span 30 years or more.
But there are situations where investment managers can benefit from recency bias. Those benefits are usually captured by the technology, information security, and operations teams that support the investment team, which generate better predictions by giving more weight to recent events.
For example, retailers like Walmart typically forecast demand for their goods by looking at trends over three or four years, noting peaks on weekends and seasonal patterns. However, when COVID arrived, those normal trends were severely disrupted. To generate more accurate predictions, retailers had to focus on demand data after March 2020, which reflected the changing conditions.
With the amount of change happening in the world of cybersecurity, even apart from COVID, you can understand why focusing on recent events might be useful for making predictions. The tools, applications, and hardware that companies use change regularly; the average server hardware refresh cycle is about four years, for example. Bad actors change their approaches as new vulnerabilities develop and conditions shift. When the pandemic lockdown saw a significant increase in people working from home, there was a corresponding spike in bot attacks on remote endpoints.
An IT problem that happened recently is a much better indicator of what’s happening today than one that happened two years ago. A malicious IP address detected yesterday is, all else being equal, more likely to still be malicious today.
Taking this a step further, partnering AI with human brilliance makes it possible to spot meaningful patterns and anomalies in large amounts of data and highly changeable environments. Those patterns can be used to predict—and ultimately prevent—security issues. People supply the ingenuity, and AI supplies the speed and power.
How algorithmic models handle recency
When developing an algorithmic model for predictions, the best practice is to account for the relative importance of recent events. Data scientists might do that by training the algorithm only on recent data in situations where older data isn’t as relevant.
They might also give different weight to subsets of the data based on which trends are most important for the time period and outcome they want to predict.
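As a rough sketch of both approaches (the data, model choice, and 30-day half-life below are hypothetical, not a description of any production pipeline), the snippet trains a simple classifier once on only the most recent window and once on the full history with exponentially decaying sample weights, so newer observations count for more:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical daily security telemetry: one feature vector and a
# binary "bad outcome" label per day, ordered oldest to newest.
n_days = 730
X = rng.normal(size=(n_days, 4))
y = (X[:, 0] + rng.normal(scale=0.5, size=n_days) > 0).astype(int)

# Approach 1: train only on the most recent 90 days.
model_recent = LogisticRegression().fit(X[-90:], y[-90:])

# Approach 2: train on the full history, but weight each day with an
# exponential decay (half-life of ~30 days) so recent days dominate.
age_in_days = np.arange(n_days)[::-1]   # 0 = today (last row), 729 = oldest (first row)
weights = 0.5 ** (age_in_days / 30)
model_weighted = LogisticRegression().fit(X, y, sample_weight=weights)
```

The half-life is itself a modeling decision: shorten it and the model forgets old behavior faster; lengthen it and long-running patterns keep more influence.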
Behind the scenes at Agio
Here at Agio, we track each company’s attack trends within our security information and event management (SIEM) system. Events are scored on severity (how dangerous are the consequences of the event?) and fidelity (how confident are we that we have correctly detected a malicious event?). We then look for patterns in the risk scores of those events, as well as in their timing and volume.
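The exact scoring logic stays behind the scenes, but as a simplified illustration of the idea, a combined risk score might multiply severity by fidelity so that only events that are both dangerous and confidently detected rise to the top of the queue (the event names and weights below are made up):

```python
from dataclasses import dataclass

@dataclass
class SiemEvent:
    name: str
    severity: float   # 1-10: how dangerous the consequences would be
    fidelity: float   # 0-1: confidence that the detection is truly malicious

def risk_score(event: SiemEvent) -> float:
    # Dangerous-but-uncertain and confident-but-minor events both score
    # lower than events that are dangerous *and* confidently detected.
    return event.severity * event.fidelity

events = [
    SiemEvent("failed-login burst", severity=3, fidelity=0.9),
    SiemEvent("ransomware signature", severity=9, fidelity=0.95),
    SiemEvent("external port scan", severity=4, fidelity=0.4),
]
for event in sorted(events, key=risk_score, reverse=True):
    print(f"{event.name}: {risk_score(event):.2f}")
```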
Algorithms are then tested through a process of model validation to determine which of these data points are best at predicting the outcome. One way to do this is by splitting up the historical data, training the model on part of the data, and testing on another part.
This cross-validation shows how accurately the model can predict from data it hasn’t seen before—exactly the situation it faces once it’s live in production. If it predicts well on held-out historical data, it can be trusted to predict similarly on new data. That’s how we start to catch problems and breaches before they happen, while also identifying which events are noise that can be safely ignored.
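A minimal sketch of that validation step might look like the following, using a time-ordered split so the model is always tested on events that come after the data it was trained on; the features and labels here are synthetic stand-ins, not real client telemetry:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import TimeSeriesSplit, cross_val_score

rng = np.random.default_rng(1)

# Synthetic stand-in for scored SIEM events, ordered oldest to newest;
# the columns could be risk score, event count in the last hour, and so on.
X = rng.normal(size=(1000, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.7, size=1000) > 0).astype(int)

# Each fold trains on earlier events and tests on later ones, which
# mirrors how the model is used once it's live.
cv = TimeSeriesSplit(n_splits=5)
scores = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=cv)
print("accuracy per fold:", np.round(scores, 3))
```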
And the more data we can analyze for patterns, the more accurate and useful our predictions become. Because we monitor security events for hundreds of companies and thousands of users and devices, we are far more likely to encounter and identify an attack on one client, and then predict and prevent it for the rest of our clients.
The Upshot Is That Trends Matter
Recent events can help us make better predictions. But the larger principle to point out here is that, in a world of evolution and crisis, trends matter. Recognizing patterns, whether they’re recent or long-standing, steady or volatile, provides crucial intelligence for businesses.
And as there is more trend data to track and analyze—and as trends need to be identified sooner and with greater sensitivity—humans can’t do it alone.
Even when we don’t know the time frame of relevant events, we can use data science to identify normal variations and anomalies that would be imperceptible to human beings. A human analyst would not notice that an indicator fluctuated by 15% instead of 10%, but an AI model would catch the difference and issue an alert.
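One minimal way to formalize that idea (an illustration only, not our actual detection model) is a rolling z-score that compares each new reading against its recent baseline and flags deviations that are statistically unusual even when they look small on a dashboard:

```python
import numpy as np

def rolling_zscore_alerts(series, window=30, threshold=4.0):
    """Flag points that deviate from the trailing-window baseline by more
    than `threshold` standard deviations."""
    series = np.asarray(series, dtype=float)
    alerts = []
    for i in range(window, len(series)):
        baseline = series[i - window:i]
        mu, sigma = baseline.mean(), baseline.std()
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            alerts.append(i)
    return alerts

# An indicator that normally wobbles by a couple of percent takes a 15% jump:
rng = np.random.default_rng(2)
indicator = 100 * (1 + rng.normal(scale=0.02, size=200))
indicator[180] *= 1.15   # the kind of shift a person scanning a dashboard might miss
print("anomalous indices:", rolling_zscore_alerts(indicator))
```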
In fact, it would be nearly impossible to take full advantage of recent-event analysis in the way we’re describing without data science and automation. Because it’s human nature to fall prey to cognitive distortions, the only way to incorporate recent trends intelligently and purposefully is to use machine learning (ML) and automation.
More behind the scenes at Agio
Our current SIEM solution employs supervised, unsupervised, and adaptive ML to interpret the data and find patterns that humans couldn’t detect. Then, through human feedback, it can predict future outcomes, produce higher-fidelity detections, and reduce false positives. Any automated alert that triggers in our SIEM platform has gone through a data enrichment process that correlates contextual information from multiple events and determines whether, viewed holistically, it warrants a threat response.
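As a conceptual sketch of that enrichment step (with made-up field names and thresholds, not our SIEM’s actual logic), correlation can mean escalating only when several related events from the same source add up to convincing evidence:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Alert:
    source_ip: str
    event_type: str
    fidelity: float          # 0-1 detection confidence

def enrich_and_triage(alerts, combined_threshold=1.5):
    # Group alerts by source and escalate only when the combined evidence
    # from multiple, different behaviors crosses a threshold, rather than
    # reacting to each alert in isolation.
    by_source = defaultdict(list)
    for alert in alerts:
        by_source[alert.source_ip].append(alert)

    escalations = []
    for ip, group in by_source.items():
        combined = sum(a.fidelity for a in group)
        behaviors = {a.event_type for a in group}
        if combined >= combined_threshold and len(behaviors) > 1:
            escalations.append((ip, sorted(behaviors), round(combined, 2)))
    return escalations

alerts = [
    Alert("203.0.113.7", "port-scan", 0.7),
    Alert("203.0.113.7", "credential-stuffing", 0.9),
    Alert("198.51.100.2", "port-scan", 0.5),   # isolated, low-confidence noise
]
print(enrich_and_triage(alerts))   # only 203.0.113.7 warrants a response
```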
ML still relies on human input to produce the best results. We can gain more efficiency by leveraging automation to resolve routine issues and compress the number of incident tickets through alert grouping. This automation allows our analysts to focus more on issues that require a human decision and to inform the ML feedback loop.
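Alert grouping itself can be illustrated with a simple time-window rule (again a hypothetical sketch, not our ticketing integration): repeated alerts of the same type on the same host collapse into one ticket instead of generating a ticket apiece.

```python
from datetime import datetime, timedelta

def group_into_tickets(alerts, window=timedelta(hours=1)):
    # Alerts with the same (host, rule) key that arrive within `window` of
    # the previous one fold into the same ticket instead of opening a new one.
    tickets = []
    open_tickets = {}   # (host, rule) -> (ticket, time of last folded alert)
    for alert in sorted(alerts, key=lambda a: a["time"]):
        key = (alert["host"], alert["rule"])
        if key in open_tickets and alert["time"] - open_tickets[key][1] <= window:
            ticket = open_tickets[key][0]
            ticket["count"] += 1
        else:
            ticket = {"host": alert["host"], "rule": alert["rule"], "count": 1}
            tickets.append(ticket)
        open_tickets[key] = (ticket, alert["time"])
    return tickets

alerts = [
    {"host": "ws-042", "rule": "disk-space-low", "time": datetime(2023, 5, 1, 9, 0)},
    {"host": "ws-042", "rule": "disk-space-low", "time": datetime(2023, 5, 1, 9, 20)},
    {"host": "db-01",  "rule": "failed-login",   "time": datetime(2023, 5, 1, 9, 30)},
    {"host": "ws-042", "rule": "disk-space-low", "time": datetime(2023, 5, 1, 9, 45)},
]
print(group_into_tickets(alerts))   # two tickets instead of four alerts
```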
Organizations and IT service providers that aren’t investing in data science and adopting time-based pattern recognition, along with anomaly-based long-range detection, are missing a significant opportunity to improve their IT experience and increase their protection from cyberattacks.
To learn more about how humans and AI are partnering to achieve better IT and security, read 3 Pitfalls of IT Support (and the AI Innovations That Eliminate Them).