AI defenders ready to foil AI-armed attackers

Operational AI cybersecurity systems have been gaining valuable experience that will enable them to defend against AI-armed opponents.

Sponsored Feature For some time now, warnings about cybercriminals' use of AI have been sounding in specialist and mainstream media alike – with the set-to between AI-armed attackers and AI-protected defenders envisaged in vivid gladiatorial terms.

Too often overlooked in all this conjecture is the fact that defensive AI has already been proving its efficacy against conventional cyber-attack types like malware and ransomware, especially in targeted sectors such as energy, healthcare and retail. And in doing so, AI security systems are being trained by experiences that will prove of high value as threat actors scale up the intelligence quotient of their attacks.

But even as its successes mount, AI in cybersecurity must move beyond outdated perceptions that might prevent it from gaining the mainstream adoption critical for organisations to protect themselves against weaponised AI offensives when those kick off at scale.

Those companies themselves also have to adjust as AI takes on a centralised and fully integrated role in enterprise-wide IT operations. The adoption trend seems assured: according to Forrester, the AI software market will be worth $64bn in 2025, with cybersecurity the fastest-growing AI software category within it.

The increasing importance of AI will bring new challenges, however, Forrester cautions. AI no longer occupies specialist, self-contained domains, for example. And that means AI-driven enterprises now need to manage a complex technology across the breadth of their IT infrastructure, practices and processes, business models, and workforce skills.

Not if but when…

For the time being, tangible signs that threat actors are deploying AI at scale remain relatively sporadic compared with non-AI attack trends. This might indicate that digital wrongdoers are in no rush to bring AI into their attack plans. One reason could be that they are doing well enough out of non-AI assault methods. Another could be that using AI does not (as yet) promise hackers a high enough return on investment for their labours.

There's evidence that this situation is changing, however. Threat actors will take advantage of ready-developed tools as they become available, according to experts at Darktrace – OpenAI's ChatGPT being the most recent much-publicised example. Darktrace found that the average linguistic complexity of phishing emails has risen by 17 percent since ChatGPT's November 2022 launch.

ChatGPT has punted AI further into the public consciousness, according to a more recent Darktrace survey published in its 'Generative AI: Impact on Email Cyber-Attacks' whitepaper, which found that 35 percent of its UK respondents had tried ChatGPT (or other chatbots) for themselves.

Dire expectations about the implications for cyber defence emerged in parallel, with 82 percent of global employees expressing concern that hackers can use AI to create scam emails indistinguishable from genuine communication.

Darktrace researchers also observed a 135 percent increase in 'novel social engineering' email attacks across thousands of its active customers between January and February 2023, a period that loosely corresponded with the widespread adoption of ChatGPT, which had launched two months earlier.

These novel social engineering attacks use sophisticated linguistic techniques, including increased text volume, punctuation and sentence length, with no malicious links or attachments. The trend suggests that AI is giving threat actors a means to craft sophisticated, targeted attacks quickly and at scale.
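The signals involved are mostly measurable surface features of the text. As a rough illustration only (our own construction, not Darktrace's detection logic), the Python sketch below computes a handful of the stylometric features mentioned above from an email body; the feature set and example message are purely hypothetical.

```python
# Illustrative sketch: simple stylometric signals of the kind described
# above (text volume, punctuation density, sentence length, link presence).
# Not a real detector - just a way to see what such features look like.
import re

def linguistic_features(email_body: str) -> dict:
    """Compute simple surface-level features for an email body."""
    sentences = [s for s in re.split(r"[.!?]+", email_body) if s.strip()]
    words = email_body.split()
    punctuation = re.findall(r"[,;:!?()]", email_body)
    return {
        "word_count": len(words),
        "sentence_count": len(sentences),
        "avg_sentence_length": len(words) / max(len(sentences), 1),
        "punctuation_per_100_words": 100 * len(punctuation) / max(len(words), 1),
        "contains_links": bool(re.search(r"https?://", email_body)),
    }

sample = ("Dear colleague, following yesterday's board review, could you "
          "confirm the attached schedule before close of business? "
          "Kind regards, Finance.")
print(linguistic_features(sample))
```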

Another key lesson for IT security teams is that AI-armed attacks will not necessarily supplant other threat types, says Germaine Tan, VP of Cyber Risk Management at Darktrace.

"Threats will continue to come from every direction, in any form," Tan warns, "so it's imperative that security teams have access to AI-grade support tools. There's simply too much going on out there for even well-resourced security teams to contend with. Added to which their organisations' digital estates continue to expand: remote working, mobile devices, web conferencing, hybrid cloud – you name it. It's in helping them to monitor activity across these new-model estates, that Darktrace AI offers a unique advantage to stretched cyber-security resources."

Integrating AI into the security regimen

If AI-backed cyber defences are to deliver optimum effectiveness, close integration between the technology and security teams is required, advises Tan, with no conceptual divide between their respective activities.

"Achieving this goes beyond investing pervasive trust in AI functionality," he maintains. "It means ensuring that AI tools are embedded into standard work procedures, and not seen as something apart from, or different to, the standard security assets that cyber professionals have become accustomed to use. To achieve this, our perceptions of what AI is, what it's capable of delivering, and how it delivers it, have to be clarified and updated throughout the organisation."

To begin with, we should always define and redefine what is meant by 'AI', Tan says: "While standard definitions of AI exist, of course, they are not always easily market-testable against a security solution that is promoted and sold as being 'AI-powered' or 'AI-enabled'."

For instance, automation in cybersecurity is sometimes communicated as being 'AI', and it becomes difficult for security teams to differentiate: "In automation, what you are looking for is very well-scoped. You already know what you're looking for – you're just accelerating the process with rules and signatures. True AI, on the other hand, is dynamic. You should no longer need to define activities that deserve your attention – the AI highlights and prioritises this for you."

This is not to suggest that AI replaces or supersedes automation, Tan adds: "Not every process needs AI. Some processes will simply need automation…"

When dealing with known threats, such as recognised malware and malicious hosting sites, automation is great at keeping track. But when it comes to those 'unknown unknowns', such as zero-day attacks, insider and IoT threats, and supply chain compromises, AI is best equipped to detect and respond to these hazards as soon as they emerge.

Tan adds: "The distinction we make is that, automation helps you to quickly make a decision you already know you will make – whereas true AI helps you make a better decision."

Supplements, not supplants

Next, the erroneous notion that implementing AI in cybersecurity means wholesale replacement of existing systems must be debunked. Rather, the imperative should be that AI integrates with non-AI security solutions seamlessly and effectively, adding value to their operations and functionality.

In a similar vein, we must counter the notion that AI is about reducing headcount and replacing the requirement for human cybersecurity expertise. Quite the opposite is the case, as Tan points out: "In cybersecurity, AI absolutely needs human input and interaction, and works optimally when working alongside flesh-and-blood cyber practitioners."

Tan adds: "On top of which, we find that by creating an opportunity to work directly with high-level, innovative technology like Darktrace AI, our customers retain their cyber expertise, and security team personnel are more likely to stay in the job. In that way our products help an organisation's valuable talent retention." (Gartner has highlighted how, with plenty of market opportunities for cybersecurity professionals, talent churn will jeopardise business effectiveness.)

Darktrace has created the Cyber AI Loop to describe its approach to cybersecurity. Each of its four product families – Darktrace PREVENT, DETECT, RESPOND and HEAL – covers a key aspect of an organisation's cybersecurity posture. Each feeds back into a continuous, virtuous cycle, reinforcing the others' capabilities – and this cycle augments human inputs at every stage of an incident lifecycle.

Fourth on Darktrace's AI change agenda is demystification – moving away from seeing AI as an arcane technology that happens inside an unseen black box that's the preserve of AI adepts. It's a misconception that's far removed from the application of AI in 2023, Tan says.

"We have to start regarding AI in terms of a mature, work-a-day and totally practical technology, rather than through out-dated jargon and buzzwords," she insists. "Yes, it is complex – so AI needs to produce outputs that are clear and easy to understand in order to be useful. During a cyber incident, human teams need to quickly comprehend what's happened. When did it happen? What devices are affected? What should I deal with first? What does it mean for our business?"

To this end, Darktrace applies an additional layer of AI on top of its initial findings, autonomously investigating in the background and reducing a mass of individual security events to a shortlist of overall cyber incidents that warrant human review: "This generates natural-language incident reports with all the relevant information for all members of a cybersecurity team to make better judgements quickly," says Tan.
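A stripped-down sketch of that event-to-incident reduction might look like the following. The grouping heuristic (per-device time windows), the event types and the report wording are our assumptions for illustration, not Darktrace's actual method.

```python
# Toy sketch: collapse many low-level security events into a short list of
# human-readable incidents by grouping related events per device and hour.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Event:
    device: str
    kind: str       # e.g. "unusual_login", "data_egress"
    timestamp: int  # epoch seconds

def events_to_incidents(events: list[Event], window: int = 3600) -> list[str]:
    """Bucket events per device into hourly windows; each bucket with more
    than one event becomes a single incident line for human review."""
    buckets: dict[tuple[str, int], list[Event]] = defaultdict(list)
    for e in events:
        buckets[(e.device, e.timestamp // window)].append(e)
    reports = []
    for (device, _), group in buckets.items():
        if len(group) > 1:
            kinds = ", ".join(sorted({e.kind for e in group}))
            reports.append(f"{device}: {len(group)} related events ({kinds}) - review first")
    return reports

events = [Event("laptop-07", "unusual_login", 1000),
          Event("laptop-07", "data_egress", 1200),
          Event("printer-02", "port_scan", 5000)]
print(events_to_incidents(events))  # one incident for laptop-07; the lone printer event stays noise
```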

Making the explainable attainable

The ultimate aim is to make sure that Darktrace's AI is accessible and assessable – that customers understand what it is doing, that it is trustworthy, and that its functions are explainable – in other words, what the AI's 'thought processes' are.

Tan adds: "Explainable AI helps customers feel trust toward the AI. But it also empowers them to take this AI to the business and say, 'this is what we found', and 'this is what the AI has said', discovering insights about their own digital estate. We want to bring the customer in, be open, have full disclosure, that's important to get customer acceptance, buy in. And with better understanding they use the AI better."

With more and more user organisations leaning toward AI, the question is when this once esoteric technology will become mainstream. Tan is reluctant to predict: "Actually, I'm not altogether sure when a technology qualifies as being mainstream. But I am sure that AI will be useful to the mainstream in many different ways – is that going mainstream? Maybe. Will it become ubiquitous? Highly likely."

Sponsored by Darktrace.
