
The machine fights back: AI that fights cyber-threats on behalf of humans

Darktrace AI learns organisation's 'pattern of life'

Paid feature AI does more than recommend TV shows and validate our bank transfers. Since 2016, it has also been working behind the scenes within the security operations centre (SOC). The reason is simple: when it comes to spotting and neutralising digital threats, humans need help. As we grapple with a torrent of online threats, machines are increasingly fighting our battles for us.

Human operators are good at reasoning and making judgement calls, but they face challenges. For one thing, they aren't always around. SOCs can't always afford to employ people around the clock, and those that do might only support a skeleton crew out of hours. As we know, online ne'er-do-wells know no such restrictions.

Even when human analysts are on site, they can't always respond quickly enough. There's simply too much happening on the average enterprise network. As online crime increases, criminal groups such as initial access brokers are building businesses based on attack volume. They rattle the doors on networks around the world at an alarming rate, vacuuming up compromised network access and selling it on to the highest bidder.

Combine this increase in criminal activity with the growing volume of telemetry generated by modern networks, and human analysts face a deluge of incident data. This information overload was a problem a decade ago, and it's getting worse. The explosion of IoT devices is creating a bigger attack surface with more endpoints, more traffic, and more alerts to sift through.

Existing approaches aren't sustainable

SOCs have responded to this by trying to automate their responses. Security orchestration, automation, and response (SOAR) platforms try to make analysts' lives easier by mapping out automated incident response playbooks that coordinate activities between security appliances.

SOAR is useful up to a point, but it's effectively a complicated flowchart with triggers. It looks only at the signals it's given and has a limited set of responses based on what it sees. Its inputs are rigid: signatures based on indicators of compromise, along with predefined YARA rules to deal with particular kinds of malware.
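To make that rigidity concrete, a signature-driven playbook of this kind can be reduced to a lookup against known indicators. The snippet below is a minimal, hypothetical sketch; the indicator lists, action names, and playbook function are invented for illustration and are not taken from any real SOAR product.

# Hypothetical sketch of a signature-driven, SOAR-style playbook.
# Indicators, actions, and names are illustrative only.

KNOWN_BAD_HASHES = {"44d88612fea8a8f36de82e1278abb02f"}   # example IoC list
KNOWN_BAD_DOMAINS = {"malicious.example.com"}

def playbook(alert: dict) -> str:
    """Return a canned response based solely on predefined indicators."""
    if alert.get("file_hash") in KNOWN_BAD_HASHES:
        return "quarantine_file"
    if alert.get("dest_domain") in KNOWN_BAD_DOMAINS:
        return "block_domain"
    # Anything the signatures don't cover falls through untouched,
    # which is exactly the rigidity described above.
    return "no_action"

print(playbook({"file_hash": "44d88612fea8a8f36de82e1278abb02f"}))  # quarantine_file
print(playbook({"dest_domain": "new-attacker.example.net"}))         # no_action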

This empirical approach makes it difficult for traditional security automation systems to accommodate changing conditions. These include shifts in the company's own infrastructure and behaviours along with evolving attack techniques, tools, and procedures.

Systems taking a traditional approach to security automation also struggle with nuance. They don't understand context, which changes based on a wide range of factors, including the people and devices involved in an incident, the location, and even the time of day.

An alternative in autonomous response

In 2016, UK company Darktrace set out to change this with a different approach. Instead of traditional automated security, it used artificial intelligence (AI) to develop an autonomous response.

The distinction between these two terms is key. Rather than following a predefined set of steps to handle known conditions, Darktrace's Antigena tool uses native AI that adapts to new conditions as they happen.

To do this, Darktrace began from the opposite direction to automated systems, with no rules or signatures at all. It uses AI to learn what it calls an organisation's 'pattern of life'.

This is the baseline of normal behaviour across the company. Antigena learns this over time by watching all aspects of a company's digital activities. As it watches, it creates a statistical model of what's usual.

This model has to encompass a wide range of activities to capture an incident's context. What might be normal behaviour for one employee might be unusual for another. Each employee leaves a digital footprint across their company's infrastructure, which Antigena uses to build a baseline understanding of context. It takes thousands of data points into account.
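In broad terms, a behavioural baseline of this kind can be thought of as per-entity statistics against which each new observation is scored. The sketch below is a deliberately simplified illustration of that idea using a rolling mean and standard deviation; it is not Darktrace's model, and the class, feature, and thresholds are all invented.

# Simplified, hypothetical illustration of per-entity baselining.
# Not Darktrace's model; names and thresholds are invented.
from collections import defaultdict
from statistics import mean, pstdev

class Baseline:
    """Tracks a per-entity history of a numeric feature and flags outliers."""
    def __init__(self, min_history: int = 20, z_threshold: float = 3.0):
        self.history = defaultdict(list)
        self.min_history = min_history
        self.z_threshold = z_threshold

    def observe(self, entity: str, value: float) -> bool:
        """Record a new observation and return True if it looks anomalous."""
        past = self.history[entity]
        anomalous = False
        if len(past) >= self.min_history:
            mu, sigma = mean(past), pstdev(past)
            if sigma > 0 and abs(value - mu) / sigma > self.z_threshold:
                anomalous = True
        past.append(value)
        return anomalous

# Example feature: bytes downloaded per hour by one user.
baseline = Baseline()
for hour_bytes in [4_800_000, 5_000_000, 5_200_000] * 10:   # typical activity
    baseline.observe("alice", hour_bytes)
print(baseline.observe("alice", 5_200_000))    # False: within the normal range
print(baseline.observe("alice", 900_000_000))  # True: far outside the baseline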

A baseline for the whole infrastructure

Darktrace tracks digital activities everywhere to get a comprehensive understanding of normal behaviour. That means watching what happens on a company's own network, through integrations with vendors including Check Point, Cisco, Fortinet, and Palo Alto Networks.

It also means watching employees' email interactions, learning from their content and the metadata they create. The AI proves especially useful here given email's popularity as an attack vector.

For example, when a senior executive at F1 racing team McLaren received a phishing email, Antigena spotted deviations from the norm. These included the sender, which was unusual for the company and especially for that recipient. A URL hidden in the email raised the autonomous response system's suspicions still further. That enabled it to save the company's infrastructure from compromise.
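For illustration only, a check like that can be imagined as combining a few rarity signals into a suspicion score. The feature names, weights, and threshold below are invented; real systems weigh far more signals than this.

# Hypothetical scoring of an inbound email against simple rarity features.
# Feature names, weights, and the threshold are invented for illustration.

def email_anomaly_score(sender_seen_before: bool,
                        sender_known_to_recipient: bool,
                        contains_hidden_link: bool) -> float:
    """Combine a few hand-picked signals into a 0-1 suspicion score."""
    score = 0.0
    if not sender_seen_before:
        score += 0.4   # nobody at the company has corresponded with this sender
    if not sender_known_to_recipient:
        score += 0.2   # unusual for this particular recipient
    if contains_hidden_link:
        score += 0.4   # link text that hides its true destination
    return score

score = email_anomaly_score(sender_seen_before=False,
                            sender_known_to_recipient=False,
                            contains_hidden_link=True)
print(score >= 0.7)  # True: hold the message and lock the link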

Beyond email, Darktrace's self-learning AI also watches what happens in a customer's cloud operations and hybrid infrastructure.

That cloud monitoring includes watching activities in SaaS applications. In one case, an IT admin with a grudge downloaded sensitive files from their SaaS account and tried to transfer them to a home server from the company's computers. They used an IT-approved file transfer account, expecting their files to fly under the radar. However, Antigena judged the unusually large SaaS downloads to be abnormal and blocked the admin from uploading the files.

The disgruntled employee tried to steal the data using their corporate cloud account and then via a remote endpoint connected to the VPN, but Darktrace said that it blocked all those attempts, too.

A proportional response

Automated tools that don't understand context risk compensating with aggressive tactics that disrupt business processes.

Conversely, putting suspicious activity in context allows an autonomous system to respond proportionately. Understanding the nuances of an incident enables it to judge which actions will contain the risk while maintaining normal operations.

In some cases, an incident might require no autonomous response at all. Or the AI might simply neutralise an attack by converting attachments to harmless file types.

In others, it might do no more than hold back an email that represents a localised attack. In the case of the McLaren executive, Darktrace's system double-locked the link to stop anyone following it and moved the email to the executive's junk folder.

At the other end of the spectrum, Antigena might quarantine a critical server beaconing to a destination it has never contacted before, choking off all its traffic to save the rest of the organisation.
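One way to picture this spectrum of graduated actions is as a function mapping severity and context to the least disruptive response that still contains the risk. The thresholds, action names, and context fields below are invented for illustration; this is not Antigena's actual decision logic.

# Hypothetical sketch of graduated, context-aware response selection.
# Thresholds, action names, and context fields are invented.

def choose_action(severity: float, asset_is_critical: bool, vector: str) -> str:
    """Pick the least disruptive action that still contains the risk."""
    if severity < 0.3:
        return "log_only"                      # no autonomous response needed
    if vector == "email":
        if severity < 0.6:
            return "convert_attachments"       # neutralise, deliver the rest
        return "hold_email_and_lock_links"     # localised attack, hold it back
    if severity < 0.7 and not asset_is_critical:
        return "block_specific_connections"    # surgical block, work continues
    return "quarantine_device"                 # e.g. critical server beaconing out

print(choose_action(0.2, False, "network"))    # log_only
print(choose_action(0.8, False, "email"))      # hold_email_and_lock_links
print(choose_action(0.9, True, "network"))     # quarantine_device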

Autonomous response throughout the attack chain

Antigena's autonomous response doesn't come switched on out of the box. As it learns more about a company's pattern of life, it begins spotting anomalous behaviour and recommending mitigation measures in what's called human confirmation mode. Customers only turn on active mode, which provides full hands-off autonomous response, once they trust the system.
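The operational difference between the two modes amounts to where a recommended action is routed. The sketch below is a hypothetical illustration of that routing; the mode names, function, and callbacks are invented, not Antigena's interface.

# Hypothetical sketch of the two operating modes described above:
# in confirmation mode a recommended action is queued for an analyst,
# in active mode it is executed directly. All names are invented.
from enum import Enum

class Mode(Enum):
    HUMAN_CONFIRMATION = "human_confirmation"
    ACTIVE = "active"

def handle_recommendation(mode: Mode, action: str, execute, queue) -> None:
    if mode is Mode.ACTIVE:
        execute(action)                 # hands-off autonomous response
    else:
        queue(action)                   # analyst reviews and approves

handle_recommendation(Mode.HUMAN_CONFIRMATION, "quarantine_device",
                      execute=lambda a: print("executing", a),
                      queue=lambda a: print("queued for analyst:", a))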

Running in human confirmation mode, the system has identified attacks and recommended mitigation steps at successive stages of the attack chain. The following stages illustrate what an autonomous response would look like in a typical ransomware scenario.

Initiation

In one ransomware attack, the initial victim, which Darktrace calls 'patient zero', downloaded an executable from a server that no other device on the company's network had ever contacted. The system raised a flag and logged the issue, but as it wasn't in active mode, the attack proceeded to the next stage.
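A first-contact check of this general shape could be sketched as follows. This is an invented illustration under simple assumptions, not the product's detection logic; the server list, file extensions, and function name are all hypothetical.

# Hypothetical first-contact check: flag executable downloads from servers
# no device on the network has ever talked to. Illustrative names only.

seen_servers: set[str] = {"updates.vendor.example", "cdn.partner.example"}

def check_download(server: str, filename: str) -> str:
    is_new_server = server not in seen_servers
    is_executable = filename.lower().endswith((".exe", ".dll", ".msi"))
    seen_servers.add(server)
    if is_new_server and is_executable:
        return "flag: executable from never-before-seen server"
    return "ok"

print(check_download("updates.vendor.example", "patch.msi"))  # ok
print(check_download("203.0.113.7", "loader.exe"))            # flag: executable ...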

C2

Typically the next stage in a ransomware infection is communication with the command-and-control (C2) server for further instructions. In this case, the infected device beaconed to the malicious server with GET requests and connected to other external machines presenting self-signed SSL certificates, which criminals often use and which are a classic sign of C2 activity. Darktrace explains that Antigena would have blocked all traffic from the device at this point to protect the rest of the network.
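A rough heuristic for this stage combines the regularity of the beacons with the rarity of the destination and the quality of its certificate. The sketch below is a simplified, hypothetical check with invented thresholds and field names.

# Hypothetical C2 heuristic: regular beacons to a rare external host
# presenting a self-signed TLS certificate. Thresholds are invented.
from statistics import pstdev

def looks_like_beaconing(intervals_s: list[float],
                         dest_is_rare: bool,
                         cert_self_signed: bool) -> bool:
    """Near-constant connection intervals to a rare host with a dodgy cert."""
    if len(intervals_s) < 5 or not dest_is_rare:
        return False
    jitter = pstdev(intervals_s)
    return jitter < 2.0 and cert_self_signed

print(looks_like_beaconing([60.1, 59.8, 60.0, 60.2, 59.9],
                           dest_is_rare=True, cert_self_signed=True))   # True
print(looks_like_beaconing([12, 340, 5, 900, 44],
                           dest_is_rare=True, cert_self_signed=True))   # False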

Lateral movement

Lateral movement is where the ransomware spreads through the infrastructure to establish a foothold on other devices and find data to target. In this case, the pwned endpoint began polling internal devices on RDP and SMB ports to identify vulnerabilities before connecting via SMB to dozens of destination devices and infecting them with malicious files. Had Antigena been running in active mode, it would first have blocked the specific SMB requests. Had the lateral movement attempts continued, it would once again have choked off all traffic from the device.
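The tell-tale here is fan-out: one internal device suddenly talking to many peers on RDP (port 3389) or SMB (port 445). The sketch below is an invented illustration of that heuristic; the threshold and data shapes are assumptions, not the product's implementation.

# Hypothetical lateral-movement heuristic: one internal device suddenly
# connecting to many peers on RDP (3389) or SMB (445) ports.
from collections import defaultdict

LATERAL_PORTS = {445, 3389}
FANOUT_THRESHOLD = 20          # distinct internal targets in one time window

def detect_fanout(connections: list[tuple[str, str, int]]) -> set[str]:
    """connections: (source_ip, dest_ip, dest_port) tuples seen in a window."""
    targets = defaultdict(set)
    for src, dst, port in connections:
        if port in LATERAL_PORTS:
            targets[src].add(dst)
    return {src for src, dsts in targets.items() if len(dsts) >= FANOUT_THRESHOLD}

window = [("10.0.0.5", f"10.0.0.{i}", 445) for i in range(10, 40)]
print(detect_fanout(window))   # {'10.0.0.5'}: block its SMB requests first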

Data encryption

Encryption is still the endgame for large numbers of ransomware infections. In one case when ransomware hit an electronics manufacturer, Antigena noticed a machine on the network accessing hundreds of Dropbox-related files on SMB shares and then encrypting them. It was able to block all unusual connections for five minutes, giving it time to quarantine the malicious machine from the rest of the network for 24 hours. The zero-day malware scrambled just four documents before the AI cut it off.
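The escalation described in that anecdote, a short block on unusual connections followed by a longer quarantine, can be sketched roughly as below. The five-minute and 24-hour durations come from the example above; the write-rate threshold and function are invented for illustration.

# Hypothetical sketch of the encryption-stage escalation: a burst of file
# modifications over SMB first triggers a short block, then a quarantine.
WRITE_BURST_THRESHOLD = 50     # files modified per minute (invented)

def respond(files_modified_per_minute: int, already_blocked: bool) -> str:
    if files_modified_per_minute < WRITE_BURST_THRESHOLD:
        return "no_action"
    if not already_blocked:
        return "block_unusual_connections_for_5_minutes"
    return "quarantine_device_for_24_hours"

print(respond(10, already_blocked=False))   # no_action
print(respond(400, already_blocked=False))  # block_unusual_connections_for_5_minutes
print(respond(400, already_blocked=True))   # quarantine_device_for_24_hours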

Autonomous response is a promising technology that offers the chance to catch attacks at machine speed and at scale. This is becoming increasingly important thanks to a long-predicted trend: experts believe that cutting-edge attacks will become increasingly automated, and perhaps autonomous.

In a survey of over 300 C-suite executives that Darktrace conducted with MIT Technology Review, over two-thirds said they expect AI to be a tool in impersonation and targeted phishing attacks. Over half fretted about more effective ransomware using autonomous techniques. As attackers threaten to use AI for evil, perhaps it's just as well that autonomous algorithms are taking them on.

Sponsored by Darktrace.
