We can rebuild him, we have the technology: AI will help security teams smack pesky anomalies

Big data, smart machines and analytics, with a human behind the wheel

Analysis With highly targeted cyber attacks the new normal, companies are finding the once-hidden Security Operations Centre (SOC) is the part of their setup they really count on.

SOCs have existed in a variety of guises for decades, emerging in recent years as a natural consequence of centralising security monitoring across organisations that have become increasingly geographically and technologically complex.

It made sense to put expertise in one place, or virtualise it across regions, to respond to the growth in security threats. But although SOCs have been hugely successful, their reliance on conventional tools and products is proving a challenge in a world where known security threats are giving way to novel and unknown patterns.

Although many successful security compromises are built from a toolkit of relatively simple techniques and common weaknesses, the chances of new attack patterns combining these with an unknown vulnerability have risen dramatically.

It's difficult to estimate the scale of this phenomenon, but an unknown threat might be anything from an unexpected insider attack to one that abuses internal credentials or exploits one or more zero-day software flaws. Recent examples from a SOC perspective include the WannaCry and NotPetya attacks of 2017, which affected thousands of organisations across numerous sectors.

Faced with the likelihood of compromise, a SOC's effectiveness is now measured in terms of response, mitigation, and clean-up. Reacting in minutes or even seconds makes the difference, as does the quality of security response plans. Detection is no longer the only game; SOCs need to respond quickly or find themselves coping with a mess in the long run.

Now for AI

Faced with this growing reality, it's not surprising that technologies flying under the AI banner have arrived like a saviour on the back of promises that are not always well-understood or explained.

The traditional SOC depends upon layers of security sensors and systems, all of which generate information, often in the form of logs and events. For years, the answer to processing this was to channel as much as possible into repositories such as Security Information and Event Management (SIEM) systems. But as the volume of event and log data has grown, so has the complexity of analysis and decision making, in turn slowing threat detection and response.
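To picture what a SIEM-style pipeline actually does with that deluge, here's a minimal sketch of the first step: normalising heterogeneous log records into one common schema before they are stored and queried. The field names and sources are illustrative assumptions, not any particular vendor's format.

```python
# Minimal sketch: normalise raw log records from different sensors into a
# single schema. Field names here are illustrative, not a real SIEM's format.
from datetime import datetime, timezone

def normalise(raw: dict, source: str) -> dict:
    """Map one raw log record onto a common event schema."""
    return {
        "timestamp": raw.get("time") or datetime.now(timezone.utc).isoformat(),
        "source": source,  # e.g. "firewall", "proxy", "ad"
        "user": raw.get("user") or raw.get("account") or "unknown",
        "host": raw.get("host", "unknown"),
        "action": raw.get("action", "unknown"),
        "severity": int(raw.get("severity", 0)),
    }

# Two hypothetical records from different sensors end up queryable together
fw = {"time": "2017-06-27T10:02:11Z", "host": "fw01", "action": "deny", "severity": 5}
px = {"user": "alice", "host": "proxy02", "action": "GET"}
print(normalise(fw, "firewall"))
print(normalise(px, "proxy"))
```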

But response times are now everything as organisations must accept they will be targeted and breached. This leaves them sitting uneasily between over-reactive detection, which generates too many false positives, and under-active detection, which leads to false negatives. It hasn't helped, of course, that the people needed to detect, triage, respond to and, where necessary, escalate threat events require an expanding suite of skills that are in perennially short supply.

AI – more specifically machine learning and big data analytics – has felt like a way out of this morass because it promises to do things that humans using SIEM have found difficult, namely correlate and spot anomalies quickly and in an automated way. The problem is that AI isn't a standardised technology so much as a set of concepts and algorithms, which makes it hard to distinguish what's hype and what's not.
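What "correlate" means in practice can be shown with a toy example: stitching together related events – here, repeated failed logins by the same account inside a short window – so they surface as one incident rather than a scatter of alerts. The window, threshold, and field names are assumptions for illustration, not any product's rules.

```python
# Toy correlation rule: flag any user with several failed logins inside a
# short window. Window, threshold, and field names are illustrative.
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)
THRESHOLD = 3

def correlate_failed_logins(events):
    """Yield (user, burst_times) when failures cluster inside WINDOW."""
    by_user = defaultdict(list)
    for e in events:
        if e["action"] == "login_failed":
            by_user[e["user"]].append(e["ts"])
    for user, times in by_user.items():
        times.sort()
        for i, start in enumerate(times):
            burst = [t for t in times[i:] if t < start + WINDOW]
            if len(burst) >= THRESHOLD:
                yield user, burst
                break

events = [{"user": "bob", "action": "login_failed",
           "ts": datetime(2017, 6, 27, 9, m)} for m in (0, 1, 3)]
print(list(correlate_failed_logins(events)))  # one incident, not three alerts
```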

"If I were to reinvent it I'd call it 'augmented intelligence'," suggests Ian Glover, president of global ethical pen-testing body CREST, who worries that a useful concept is in danger of being misunderstood.

"What's actually happening in relation to SOCs, is big data combined with AI to help with analytics," he says.

In addition to response, AI's other important benefit is the ability to learn more quickly – or at all – which goes back to the issue of unknown threats. Attacks evolve, employing different MOs as they look for and exploit weaknesses, but their development is always gradual. If AI analytics can be used to understand the deeper patterns behind these small changes, the defenders have found a way of evolving with them.

Anomaly response

At its heart, SOC security rests on identifying the anomaly – data that stands out as unusual or unexpected. Everything – tools, processes, the human response – is predicated on this. The limitation is that anomalies not only vary from network to network, device to device and system to system, but are also inevitable. Most unusual behaviour by individual users, or the application and protocol traffic they generate, turns out to be completely innocent. Conversely, ordinary-looking traffic can hide anomalous activity, as is the case where attackers abuse stolen privileged credentials. Conventional perimeter security finds this sort of compromise very hard to spot because it appears completely legitimate.

Using AI effectively in the SOC depends upon identifying anomalies in a sophisticated way using baselining. In theory, doing this requires the baselining of multiple data points and not simply one user or resource. The power of AI is that there is no theoretical limit to the number of data points that can be used to define a baseline and what is deviating from it.
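As a concrete illustration of baselining across multiple data points, the toy sketch below learns a per-user baseline over several features at once and scores how far one day's activity deviates on every axis. The features, numbers, and scoring scheme are invented for the example, not a description of any product.

```python
# Toy multi-feature baseline: score one day's activity against per-user
# history across several data points at once. Features, numbers, and the
# scoring scheme are invented for illustration.
import statistics

FEATURES = ["logins", "bytes_out_mb", "hosts_touched", "after_hours_events"]

def build_baseline(history):
    """Mean and spread per feature, learned from past observations."""
    model = {}
    for f in FEATURES:
        values = [day[f] for day in history]
        model[f] = (statistics.mean(values), statistics.pstdev(values) or 1.0)
    return model

def anomaly_score(observation, model):
    """Sum of absolute z-scores: distance from the baseline on every axis."""
    return sum(abs(observation[f] - mean) / dev
               for f, (mean, dev) in model.items())

history = [{"logins": 9 + d % 3, "bytes_out_mb": 40, "hosts_touched": 3,
            "after_hours_events": 0} for d in range(30)]
model = build_baseline(history)

today = {"logins": 10, "bytes_out_mb": 900, "hosts_touched": 25,
         "after_hours_events": 6}
print(anomaly_score(today, model))  # large score => worth a look
```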

What AI brings is the ability to learn, that is to constantly adjust these parameters over time. "AI is going to be used to work out whether those anomalies are things we should be concerned about. But if all of a sudden we're seeing anomalies and they're all OK, then the AI system should feed back into the analytics system to say that this is an expected behaviour," says Glover.
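That feedback loop can be sketched in a few lines: once analysts have signed off the same class of anomaly enough times, the detector treats it as expected behaviour and stops raising it. The fingerprinting scheme and vote threshold here are illustrative assumptions.

```python
# Sketch of analyst feedback folding back into detection: anomalies that
# humans repeatedly clear become "expected" and stop generating alerts.
from collections import Counter

class FeedbackDetector:
    def __init__(self, benign_after=3):
        self.benign_votes = Counter()
        self.benign_after = benign_after   # analyst sign-offs needed

    def fingerprint(self, anomaly):
        # Collapse an anomaly to a coarse signature, e.g. (user, action)
        return (anomaly["user"], anomaly["action"])

    def should_alert(self, anomaly):
        return self.benign_votes[self.fingerprint(anomaly)] < self.benign_after

    def mark_benign(self, anomaly):
        """Analyst feedback: this anomaly was expected behaviour."""
        self.benign_votes[self.fingerprint(anomaly)] += 1

det = FeedbackDetector()
a = {"user": "backup-svc", "action": "bulk_read"}
for _ in range(3):
    det.mark_benign(a)          # analyst clears the same alert three times
print(det.should_alert(a))      # False: now treated as expected behaviour
```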

What none of this can replace is the role of the human decision makers, which in SOCs comprise layers of skills from detection, response, and mitigation right up to skilled forensics. AI can alert any one of these layers to the problem, but it is not yet capable of telling them what to do about it.

AI does not mean some magical transformation in which machines take over the job of defending networks from other, malevolent machines. It's a tool: humans remain the decision makers, using the analytics provided by machine intelligence to make better and quicker decisions.

"AI on its own is not the answer," says Glover. "We should be using learning systems to feed back into the inference engines and analytics." Conceptually, "data analytics and the AI allow analysis of data to be conducted faster. The final triage of invoking a cyber-response plan would go through the SOC managers."

New-world SOCs

None of this really explains how AI can get to grips with the data problem that SIEM has struggled with – namely that simply adding more sensors and security layers risks creating more alerts that, in turn, confront SOCs with a greater number of situations to evaluate and possibly act on. How does a security manager in this world determine which alerts are worth following and which are phantoms unless they have some kind of reference point?

This is where User Behaviour Analytics (UBA) and, more recently, User and Entity Behavior Analytics (UEBA) have staked their claim. In UEBA, what matters first is not simply the idea of a baseline – a version of the "normal" from which an anomaly can be discerned – but that this is based on behaviour associated with network users and accounts.

Again, this is not a brand-new idea – user monitoring has been around for years – but for SOCs its power is enhanced by harnessing it to machine learning. Unlike rule-based systems, UEBA baselining with machine learning can adjust its worldview of a user's behaviour, understanding it in greater depth as that behaviour changes over time. It's a principle that can be extended to applications and devices if need be.
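The difference from a static rule is easiest to see in a toy model that updates itself with every observation: a gradual drift in a user's behaviour is absorbed into the baseline, while a sudden jump is still flagged. The smoothing factor and tolerance below are illustrative assumptions, not a UEBA vendor's algorithm.

```python
# Minimal sketch of an adaptive per-user baseline. Each observation nudges
# the model, so slow drift stops alerting while a sudden jump still does.
# Alpha (forgetting rate) and tolerance are invented for illustration.

class UserBaseline:
    def __init__(self, alpha=0.1, tolerance=3.0):
        self.alpha = alpha          # how quickly the model forgets the past
        self.tolerance = tolerance  # how many "typical deviations" to allow
        self.mean = None
        self.dev = 1.0

    def observe(self, value):
        """Return True if value is anomalous, then fold it into the model."""
        if self.mean is None:
            self.mean = value
            return False
        anomalous = abs(value - self.mean) > self.tolerance * self.dev
        # Exponentially weighted updates: the baseline drifts with the user
        self.dev = (1 - self.alpha) * self.dev + self.alpha * abs(value - self.mean)
        self.mean = (1 - self.alpha) * self.mean + self.alpha * value
        return anomalous

b = UserBaseline()
for mb in [50, 51, 52, 53, 54, 55]:   # slow drift upward: model adapts
    print(mb, b.observe(mb))          # no alerts
print(900, b.observe(900))            # exfiltration-sized jump: flagged
```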

This should be the perfect job for machine learning: in this scenario, security analytics simply becomes the appliance of algorithms to what is just another big-data challenge. And machine learning, after all, thrives on big data. But while the concept is sound, it's early days for such systems, and the proof of their effectiveness will be in the security they deliver in tackling real-world incidents.

The SOC defenders have time to perfect UEBA, but – perhaps, given the changing times and escalating threats – not as much as some of them would like. ®
