How AI is helping in the fight against coronavirus and cybercrime – Software Testing News

Matt Walmsley is an EMEA Director at Vectra, a cybersecurity company providing an AI-based network detection and response solution for cloud, SaaS, data centre and enterprise infrastructures.

With the spread of the coronavirus, cybercriminals have become more active and more dangerous, leaving some IT infrastructures at risk. That is why Vectra is offering AI-powered cybersecurity to help secure data centres and protect an organisation's network.

In this interview, Matt explains why data centres represent such a valuable target for cybercriminals and how, despite the vast security measures put in place by enterprises, attackers are still able to infiltrate them. He also explains the storyline of an attack targeting a data centre and how AI-powered cybersecurity can help security teams detect anomalous behaviours before it's too late.

What's your current role?

I'm an EMEA Director at Vectra. I've been here for five years, since we started the business, and I'm predominantly doing thought leadership, technical marketing and communicating information. I spend most of my time thinking about how we put AI to use in the pursuit of, in our case, cybersecurity, and a big part of that is cloud and data centre.

To get to the core of what you do: you talk about cloud, data centres and AI, but which one is the core driver for your business? Which types of devices is the Vectra AI integrated into, and in which sectors is it applied?

Our perspective on this, as experts in cybersecurity and AI, is a combination of those two practices: machine learning applied to cybersecurity use cases. So, we're using it in an applied manner, i.e. to solve a particular set of complex tasks. In fact, if you look at the way AI is used in the majority of cases today, it's used in a focused manner to do a specific set of tasks.

In cybersecurity practice, using AI to hunt and detect advanced attackers that are active inside the cloud, inside your data and inside your network means doing things at a scale and speed that human beings just aren't able to match. In that sense, cybersecurity is like healthcare: if we find an issue early and resolve it early, we'll have a far more positive outcome than if we leave it festering until there's nothing to be done.

You talk a lot about its rapid ability to scale to bigger projects. In relation to your work, do you see AI as a way to solve problems in the future, or do you think there's a long way to go with it? Is AI the future, or do you think humans managing AI is the future?

AI is becoming an increasingly important part of our lives. In cybersecurity practice, it's going to be a fundamental set of tools, but it won't replace human beings. Nobody's building a Skynet for cybersecurity that sorts it all out and turns the tables back on us. What we're doing is building tools at sizes and scales to do tasks that human beings can't do. So it's really about augmenting human ability in the context of cybersecurity, and for us, it's a touchstone of our business and a fundamental building block for cybersecurity operations now and in the future.

There's a massive skills gap in our industry, so automating some cybersecurity tasks with AI is actually a very rational solution to the immediate skills and resource shortage. But it can also do things humans can't do. It's not just taking the weight off your shoulders; it's going to do things like spotting the really weak signals of a threat actor hiding in an encrypted web session. It's impossible to do that by hand, looking at the packets, the bits and bytes; you need a deep neural net that can look at really subtle temporal changes. AI does it faster and more broadly, and it does things we are just not capable of doing at the same level of competency.
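Vectra's actual models aren't public, but detection in encrypted sessions generally works on timing and size metadata rather than payloads, which stay opaque. The sketch below (all names and thresholds are hypothetical, for illustration only) derives the kind of temporal features a deep model would consume, such as the tell-tale regularity of machine-driven beaconing:

```python
# Hypothetical sketch: derive temporal features from packet metadata.
# Encrypted payloads can't be read, but inter-arrival times and packet
# sizes still leak the "rhythm" of a hidden command-and-control tunnel.

def temporal_features(packets):
    """packets: list of (timestamp_seconds, size_bytes) tuples."""
    if len(packets) < 2:
        return {"mean_gap": 0.0, "mean_size": 0.0, "beacon_like": False}
    gaps = [b[0] - a[0] for a, b in zip(packets, packets[1:])]
    mean_gap = sum(gaps) / len(gaps)
    # Very low variance in the gaps suggests automated beaconing --
    # one weak signal a neural net would weigh alongside many others.
    var_gap = sum((g - mean_gap) ** 2 for g in gaps) / len(gaps)
    mean_size = sum(p[1] for p in packets) / len(packets)
    return {
        "mean_gap": mean_gap,
        "mean_size": mean_size,
        "beacon_like": var_gap < 0.01 * mean_gap ** 2,
    }

# A metronomic 5-second beacon stands out from bursty human browsing.
trace = [(i * 5.0, 120) for i in range(10)]
print(temporal_features(trace)["beacon_like"])  # True
```

A real detector would learn these thresholds from data rather than hard-code them; the point is only that useful signal survives encryption.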

It's optimistic that it's going to have such a dramatic effect on our working processes. In terms of data centres, how is AI working to protect them?

The data centre is changing and, as you've seen, it's becoming increasingly hybrid. There's more going out to the cloud, even though people still have private data centres and clouds. One of the main challenges a security team has with a data centre is that, as workloads are created, moved or flexed, it's really hard to keep track of them.

Security teams usually have incomplete information: they won't know which VM you have just spun up or what it's running. They don't know all of those things. They are meant to be agile for the business, but that agility comes with an information cost: I have imperfect information; I never quite know what I've got.

I'll give you an example. I was at a very large international financial services provider, talking to their CCO. He had had their cloud provider in to tell him where they were with licensing and usage, and what he thought he had to cover and what the business actually had were about ten times apart. So there were ten times more workloads out there than he and his security team even knew about.

So how can AI help us with that?

Well, if we integrate AI and allow it to monitor virtual conversations, it can automatically watch, use past observations to spot new workloads coming in, and see how they interact with other entities. It's those behaviours that are the signal that tells the AI where to look to find attackers. So it's not about putting software inside the workload; it's about monitoring how it interacts with other devices.

In doing so, we can then quickly tell the security team: here are all the workloads we're seeing, here are the ones with behaviours that are consistent with active attackers, and then we can score and prioritise them. What we're doing is automating the monitoring of attacker behaviours, so as a security team you're getting more signal, less noise and less ambiguity. It's not just headline malware or exploits, which are the ways people get into systems.
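The scoring-and-prioritising step described above can be sketched very simply. This is not Vectra's algorithm; it is a minimal illustration (behaviour names and severity weights are invented) of turning observed behaviours into a ranked worklist for a security team:

```python
# Hypothetical sketch: score workloads by the attacker-consistent
# behaviours observed on them, weighted by severity, and rank them
# so the security team sees the most urgent workloads first.

SEVERITY = {"recon_scan": 2, "lateral_movement": 5, "data_staging": 8}

def prioritise(observations):
    """observations: {workload: [behaviour, ...]}.
    Returns (workload, score) pairs, highest score first."""
    scores = {
        wl: sum(SEVERITY.get(b, 1) for b in behaviours)
        for wl, behaviours in observations.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

obs = {
    "vm-web-01": ["recon_scan"],
    "vm-db-02": ["lateral_movement", "data_staging"],
}
print(prioritise(obs))  # [('vm-db-02', 13), ('vm-web-01', 2)]
```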

What else do you see in threat actor events?

Exactly what happens next in an advanced attack. An advanced attack can play out over days, weeks or months. The attacker has got to get inside: he's had to get a credential, had to exploit a user; he's got to do research and reconnaissance; he's going to move around the environment and escalate. We call that lateral movement. Then he'll make a play for the digital jewels, which could be in the data centre or in the cloud.

So, if you can spot those behaviours that are consistent with an attacker, you've got multiple opportunities to find the attacker across the life cycle of the attack. To use that healthcare analogy again, find it early and it will be much easier and faster to close it down. If you only find them when they are running for the default gateway with a big chunk of data, doing a big exfiltration, you are already breached and that's a bit too late.

Using AI is basically like being the drone in the sky looking over the totality of the digital enterprise, watching the individual devices and how the accounts are talking to each other, looking for the very subtle, hard-to-spot but robust signs of the attackers. That's what we're doing. I can see why AI speeds that up so efficiently.

Is there a specific method or security process that Vectra's cybersecurity software implements to help protect large data centres?

That's quite an insightful question, because not all AI is built the same. AI is quite a nebulous term; it doesn't tell you what algorithmic approach people are taking. I can't give you a definitive answer for a definitive technology, but I can give you a methodology.

The methodology starts with the understanding that the attacker's behaviour must manifest somewhere. If I got inside your organisation and I wanted to scan and look for devices, there are only so many techniques available to me to do that. That's behaviour, and we have the tools and protocols to spot it. So we look at how we can spot the malicious use of those legitimate tools or procedures, these TTPs (tactics, techniques and procedures).
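Device scanning is a good concrete example of a TTP that must manifest in observable behaviour: the protocols are legitimate, but the breadth of contact gives it away. A minimal sketch (thresholds and addresses are invented for illustration, not taken from any real product) of spotting scan-like fan-out:

```python
# Hypothetical sketch: flag scan-like behaviour, i.e. one source
# touching an unusually large number of distinct hosts. The traffic
# itself uses legitimate protocols; the TTP shows up in its breadth.

from collections import defaultdict

def scan_suspects(connections, threshold=20):
    """connections: iterable of (src, dst) pairs observed on the network.
    Returns the set of sources contacting >= threshold distinct hosts."""
    fanout = defaultdict(set)
    for src, dst in connections:
        fanout[src].add(dst)
    return {src for src, dsts in fanout.items() if len(dsts) >= threshold}

conns = [("10.0.0.5", f"10.0.1.{i}") for i in range(25)]   # sweeping a subnet
conns += [("10.0.0.7", "10.0.1.1"), ("10.0.0.7", "10.0.1.2")]  # normal chatter
print(scan_suspects(conns))  # {'10.0.0.5'}
```

In practice the threshold would itself be learned per environment rather than fixed, since "normal" fan-out differs between, say, a monitoring server and a desktop.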

How does that whole process start?

It starts with a security research team looking for evidence that attackers really do use these behaviours, because it may just be a premise and may not be accurate. Once we've done that, we bring in a data scientist to work with the team.

So, let's find some examples of this behaviour manifesting in a benign way and, from an attacker, in a malicious way, and let's also look at some regular, known-good data. The data scientist looks at that data, does a lot of analysis and tries to understand it. They look at the attributes, what they call features, and do feature selection: which attributes would be useful for building a model to find this behaviour. There are various ways you can look at the data and separate the customer's infrastructure and all of the different structures inside it. Then they'll go off and build a model and train it with the data. Once we've got it to an effective level of performance and we're happy with it, we release it into our Cognito NDR (network detection and response) platform, and that goes off and looks for that individual behaviour.
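The workflow described above, labelling benign and malicious examples, selecting features, then training and applying a model, can be sketched with a deliberately tiny classifier. The feature names and numbers here are invented for illustration; real pipelines use far richer features and models:

```python
# Hypothetical sketch of the supervised workflow: label examples of a
# behaviour as benign or malicious, "train" a minimal nearest-centroid
# model over two features, then classify a new observation.

def centroid(rows):
    """Element-wise mean of equal-length feature tuples."""
    n = len(rows)
    return tuple(sum(r[i] for r in rows) / n for i in range(len(rows[0])))

def train(benign, malicious):
    return {"benign": centroid(benign), "malicious": centroid(malicious)}

def classify(model, x):
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(model, key=lambda label: dist(model[label], x))

# Invented features: (connections per minute, failed-login ratio).
benign = [(2, 0.0), (3, 0.1), (1, 0.0)]
malicious = [(40, 0.8), (55, 0.9), (60, 0.7)]
model = train(benign, malicious)
print(classify(model, (50, 0.85)))  # malicious
```

The "release" step then amounts to shipping the trained model so it can score live traffic without further labelling.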

Remote Desktop Protocol (RDP) recon will be completely different from the model that's looking for hidden HTTPS command-and-control behaviours. So, there are different behaviours, different data structures and different algorithmic approaches. However, some of those attacks manifest in the same way in everybody's network, so we can pre-train those algorithms.

Are they aware of those behaviours?

Yes. It's like it's been to school: it's got its certification and it's ready to go as soon as you turn it on. There's no need for it to learn anything else; it's already trained and knows what to look for. It's a complex job, but we've already done the training.

But there are some things that we could never know in advance. For example, I'll never know how your data centre is architected or what the IP ranges are; there's no way of me knowing that in advance, and there are a lot of things we can only learn through observation.

So, we call that an unsupervised model, and we blend these techniques: some of them are supervised, some are unsupervised, and some use deep recurrent neural networks. Sometimes, when we're looking at a really challenging data problem, we just can't figure it out: what are the attributes? What are the features? We know the signal is in there.
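The unsupervised side described above, learning what is locally normal purely through observation and flagging departures from it, can be sketched as follows. The host and service names are invented for illustration; real systems model far more than host-to-service pairs:

```python
# Hypothetical sketch of the unsupervised idea: nothing about this
# data centre's layout is known in advance, so a baseline of "normal"
# is learned by watching traffic, and live events are compared to it.

def learn_baseline(events):
    """events: iterable of (host, service) pairs observed in training."""
    baseline = {}
    for host, service in events:
        baseline.setdefault(host, set()).add(service)
    return baseline

def anomalies(baseline, events):
    """Return live events whose (host, service) pair was never observed."""
    return [(h, s) for h, s in events if s not in baseline.get(h, set())]

training = [("app-01", "db"), ("app-01", "cache"), ("batch-02", "db")]
baseline = learn_baseline(training)
live = [("app-01", "db"), ("app-01", "rdp")]  # RDP from an app server is new
print(anomalies(baseline, live))  # [('app-01', 'rdp')]
```

The blend the interview describes is pre-trained supervised models for attacker behaviours that look the same everywhere, plus unsupervised baselines like this for facts that are unique to each environment.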

But what is it?

We can't figure it out, so we get a neural net to work that out for itself, once again doing things at a scale humans could not manage effectively. We've got thirty patents pending now on the different algorithms, the "brains" we've built, that do that monitoring and detection.

Do you think there are any precautions people should take to avoid cybercrime during coronavirus?

Our piece of the security puzzle is: how do I quickly find someone who has already penetrated my organisation? So we're not the technology that stops known threats coming in. Think of healthcare, which adopted this really quickly. In healthcare, the WHO recently called out a massive spike in COVID-related phishing.

That's the threat landscape; that's what's happening out there and what the threat actors are doing. We are actually inside healthcare, and we did not see a particularly large spike in post-intrusion behaviour, so we did not see evidence that more attackers were getting into these organisations; they had all done a reasonable job of keeping the wolf from the door.

But what we did see, because we were watching everything, were changes in how users were working. We saw a rapid pivot to using external services, generally services associated with cloud adoption, particularly collaboration tools, and we saw a lot of data moving out to those, which created compliance challenges.

What do you mean?

Sensitive data suddenly being sent to a third party. That's not to berate health organisations during a really challenging time; their priority was obviously making sure clinical services were delivered. But in doing so, they also opened up the attack surface, and the potential for attackers to get in increased.

It's important to maintain visibility so you can understand your attack surface, and you can then put in the appropriate procedures, policies and controls to minimise your risk.
