How to Use Machine Learning and AI in Cyber Security


Cyber criminals are constantly seeking new ways to perpetrate a breach, but thanks to artificial intelligence (AI) and its subset, machine learning, it's becoming possible to fight off these attacks automatically.

The secret is in machine learning's ability to monitor network traffic and learn what's normal within a system, using this information to flag up any suspicious activity. As the technology's name suggests, it's able to use the vast amounts of security data collected by businesses every day to become more effective over time. At the moment, when the machine spots an anomaly, it sends an alert to a human, usually a security analyst, who decides whether action needs to be taken. But some machine learning systems are already able to respond themselves, for example by restricting access for certain users.
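
As a rough illustration of the approach described above, the sketch below uses the open-source scikit-learn library's IsolationForest to learn a baseline from historical network-flow records and flag traffic that doesn't fit. The feature choice, the figures and the alerting step are invented for illustration; real systems work on far richer telemetry.

```python
# Minimal sketch: learn "normal" network behaviour from historical flow
# records, then flag anomalous traffic for an analyst to review.
# Assumes scikit-learn and numpy are installed; all numbers are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical features per connection: bytes out, bytes in,
# duration in seconds, and number of distinct ports contacted.
baseline = np.column_stack([
    rng.normal(5_000, 500, 500),
    rng.normal(20_000, 2_000, 500),
    rng.normal(12, 2, 500),
    rng.integers(1, 5, 500),
])

model = IsolationForest(contamination=0.01, random_state=42)
model.fit(baseline)  # learn what "normal" looks like

new_traffic = np.array([
    [5_500, 21_000, 11.0, 3],      # close to the baseline
    [900_000, 1_200, 300.0, 150],  # huge upload, many ports: far outside it
])

for features, verdict in zip(new_traffic, model.predict(new_traffic)):
    if verdict == -1:  # -1 means the model treats this record as an outlier
        print(f"ALERT for analyst review: {features}")
```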


The human element

While talk of AI and automation often brings with it fears of mass redundancy, in the sphere of security, machine learning is being used across several different areas to complement, rather than replace, traditional measures such as firewalls.

Despite their increasing ability to perform without human intervention, the systems aren't meant to replace security analysts. On the contrary, they're intended to crunch vast amounts of data to free up analysts for more complex tasks.

However, according to Moonpig’s head of cyber security, Tash Norris, AI-driven data analysis can also help counter analysts' natural biases. Speaking as part of the IT Pro Panel earlier this year, she said that “analysts will naturally look for correlations they've seen before, or that they expect to see”.

The panellists agreed that the most sensible place to deploy AI and machine learning is in the broad category of detection and response, including functions such as security information and event management (SIEM), security orchestration, automation and response (SOAR), and endpoint detection and response (EDR). By automating these more manual processes, staff can be freed up to work on more dangerous threats, using AI as a force multiplier that extends the capabilities of a security team.
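
In practice, a SOAR-style automation of this kind can be as simple as a playbook that handles routine, high-confidence detections automatically and queues everything else for a person. The sketch below is a hypothetical illustration of that idea; the alert fields, thresholds and response actions are assumptions, not any particular product's API.

```python
# Hypothetical SOAR-style playbook: auto-contain routine, high-confidence
# detections and escalate anything ambiguous to a human analyst.
from dataclasses import dataclass

@dataclass
class Alert:
    source: str        # e.g. "EDR" or "SIEM"
    category: str      # e.g. "known_malware", "unusual_login"
    confidence: float  # detector's confidence, 0.0 to 1.0
    asset: str         # affected host or account

def handle(alert: Alert) -> str:
    # Routine, high-confidence detections are contained automatically.
    if alert.category == "known_malware" and alert.confidence >= 0.95:
        return f"auto-contain: isolate host {alert.asset}"
    # Everything else goes to a person, with the context attached.
    return f"escalate to analyst: {alert.category} on {alert.asset}"

print(handle(Alert("EDR", "known_malware", 0.99, "laptop-042")))
print(handle(Alert("SIEM", "unusual_login", 0.60, "j.smith")))
```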

Dave Palmer, director of technology at Darktrace, says: "Having machine learning allows companies to prioritise more effectively. We don't take human risk decision-making out, but we allow tactical fire-fighting so security teams can do the work on their own timescales."

The Cambridge-based AI startup has recently collaborated with Microsoft to provide AI-enhanced cyber security to organisations transitioning to the cloud. The partnership focuses on addressing security challenges in the “critical areas” of email security and data integration, as well as simplifying and streamlining security workflows. This includes hosting Antigena Email, which uses Darktrace’s artificial intelligence to stop the most advanced email threats, on Microsoft Azure, with the product also listed on the Azure Marketplace.

Dan Fein, Darktrace’s director of email security products, warned that the company sees “attackers impersonate CEOs or compromise vendors’ accounts to send out targeted, topical emails that look legitimate” on a daily basis.

Stuart Laidlaw, CEO of UK cybersecurity startup Cyberlytic, also advocates using machine learning to reduce a security analyst's workload. "It's about cutting through the noise: these guys are swamped in their day jobs and they can't respond to everything. We use machine learning to do the triage."
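Triage of this sort often amounts to scoring incoming alerts by how likely an analyst is to judge them genuine, then working the queue in that order. The sketch below is a hypothetical example using scikit-learn's logistic regression trained on past analyst verdicts; the features and labels are invented for illustration and do not reflect Cyberlytic's actual approach.

```python
# Hypothetical triage sketch: rank new alerts by the probability that an
# analyst would judge them a genuine incident, based on past verdicts.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented features per alert: severity (1-5), asset criticality (1-5),
# and whether the source IP appears on a threat-intelligence list (0/1).
past_alerts = np.array([
    [5, 5, 1], [4, 3, 1], [2, 1, 0], [1, 2, 0], [3, 4, 1], [1, 1, 0],
])
analyst_verdicts = np.array([1, 1, 0, 0, 1, 0])  # 1 = genuine incident

model = LogisticRegression().fit(past_alerts, analyst_verdicts)

new_alerts = np.array([[5, 4, 1], [2, 2, 0], [3, 1, 1]])
scores = model.predict_proba(new_alerts)[:, 1]  # probability of "genuine"

# Highest-risk alerts first, so analysts work the likeliest incidents first.
for score, alert in sorted(zip(scores, new_alerts.tolist()), reverse=True):
    print(f"{score:.2f}  {alert}")
```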


Introducing machine learning

As the range of use cases continues to grow, how can companies start to introduce the technology? It's relatively simple: when used for anomaly detection, it's not necessary to train the machine learning system to a great extent initially.

"You provide it with a stream of data and flag up things that look unusual," says Steven Murdoch, a security architect at the VASCO Innovation Center in Cambridge. "This can then be used for intrusion protection."

Machine learning is also available at low cost: as with cloud services, the products can often be used on a free trial basis. In addition, says Laidlaw, companies such as Amazon Web Services (AWS) offer an AI component. "Some solutions just plug in and you can throw a couple of data scientists at it to discover anomalies."

Palmer advises: "Get a feel for how it fits into your business. AI as a field is very inclusive; books and training courses are available online."

However, as with any new technology, there are potential pitfalls to take into account. Some experts are sceptical about machine learning's potential, pointing out that cyber criminals can use the same technology to attack companies. In addition, it may be possible to trick the machine learning systems used for security by feeding them misleading data.

At the same time, the technology itself has limitations. Charl van der Walt, chief security strategy officer at SecureData, says many cyber-attacks won't fit the patterns machine learning is trained to recognise. "The adversary is agile and is changing all the time. So, it's hard to find data sets where there is an adversarial pattern."

Using data to make accurate predictions is the number one challenge, says Dr Yifeng Zeng, head of the machine intelligence research group at Teesside University. In addition, he says: "Using machine learning, companies claim they can deal with previous attacks, but how will they deal with new ones? The important thing about cyber security is predicting a future attack. So, how do we use the previous data to identify unexpected patterns?"


The future

Despite the challenges, cyber security experts are predicting a bright future for machine learning. As the technology improves, it's possible that programs will emerge which understand when they are under attack and can take measures to protect themselves.

Meanwhile, according to Palmer: "The ways human beings respond to different types of attacks and how they investigate them is something machines can study. They could, for example, make suggestions such as, 'people in your situation took these steps next', acting as a coach or sounding board in a contextually useful way."

In addition, it has been suggested that machine learning systems will soon be deployed to deceive the adversary, rather than just to predict what's bad. "This entails artificially reshaping your environment to make it a moving target and encouraging adversaries to be chasing lots of red herrings," according to Van der Walt. This could include creating fake targets for the adversary, such as files and systems that look real but aren't. "That's a different way of thinking about machine learning: deception as a defensive strategy."
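
One very simple form of this kind of deception is a honeytoken: a decoy file that no legitimate process should ever touch, watched for any sign of access. The sketch below is a hypothetical, standard-library-only illustration of the principle; the filename and polling approach are assumptions, and real deception platforms go considerably further.

```python
# Hypothetical honeytoken sketch: plant a decoy file that nothing legitimate
# should read, then poll its metadata and raise an alert if it is touched.
import os
import time

DECOY = "passwords_backup.txt"  # invented, deliberately tempting filename

# Plant the decoy with plausible-looking but fake contents.
with open(DECOY, "w") as f:
    f.write("admin:hunter2\nroot:changeme\n")

baseline = os.stat(DECOY)

while True:
    time.sleep(10)
    current = os.stat(DECOY)
    # A change in access or modification time suggests someone opened the file.
    # Note: some filesystems defer atime updates, so real tools hook the OS
    # rather than polling like this.
    if (current.st_atime != baseline.st_atime
            or current.st_mtime != baseline.st_mtime):
        print(f"ALERT: decoy file {DECOY} was accessed")
        baseline = current
```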

Back in the present day, how can AI and machine learning form part of a company's cyber security strategy? The technology has a lot of potential, but it can't be a company's only method of security; it's one part of an overall defence. For now, Laidlaw advises: "Know where your crown jewels are, and protect what is most valuable, using AI as part of that."

