- March 31, 2021
- 3 minute read
We’ve all seen this scene in some movie or TV show: a hacker sits in a shadowy room, busily typing on his keyboard. Suspenseful music plays in the background, the camera pans around him in a slow movement, and within the space of a few clicks – voilà! – our protagonist has broken into the highly-secured target he was trying to penetrate.
“I’m in,” he says.
This may make for great TV, but the reality of data breaches is, well… not quite as exciting.
The fact is that the biggest, most harmful attacks don’t happen in minutes. Rather, they unfold over months. They aren’t executed in a few clicks, but through a long process of exploration and exploitation.
According to IBM’s Cost of a Data Breach Report, the average time to detect and contain a breach is 280 days. That’s over 9 months. Detecting and containing a breach caused by a malicious attack takes even longer: 315 days on average.
A Breach is Not an Event; It’s a Process
The most important thing to understand about data breaches is that a breach is not a singular event; rather, it is an ongoing process with multiple steps.
The first step is usually infiltration – the step by which the attacker gains a foothold in the network. Infiltration can happen in many ways: targeted credential theft, exploitation of vulnerable web applications, third-party credential theft, malware, and more. But this is just the first step, and there is a long way to go.
The next step is usually reconnaissance. This is where attackers try to understand what the network architecture looks like, what access their stolen credentials grant them, and where sensitive data is stored. Compare this to a thief breaking into an unfamiliar house in the middle of the night. The first thing they do is look around to learn the layout of the house and where the valuables are kept. Cyberattacks are no different.
Once attackers are done with basic reconnaissance, they will usually attempt lateral movement in the network – that is, move into higher-tier resources with better access, perform privilege escalation to gain wider permissions, acquire sensitive data, and finally – exfiltrate it outside the network.
These steps take weeks or months to progress, carried out through a painstaking trial-and-error process as attackers strive to identify sensitive resources and expand within the network.
Usually in the case of a data breach we hear only of the first and last steps – infiltration into the network, and data exfiltration – but in-between them is a whole world of activity.
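The progression described above can be expressed as an ordered sequence of stages. The following is a minimal sketch; the stage names follow the text of this post, not any specific framework, and the helper function is purely illustrative:

```python
from enum import IntEnum

class BreachStage(IntEnum):
    """Ordered stages of a breach as described above; an attack
    moves forward through these, often over weeks or months."""
    INFILTRATION = 1
    RECONNAISSANCE = 2
    LATERAL_MOVEMENT = 3
    PRIVILEGE_ESCALATION = 4
    DATA_ACQUISITION = 5
    EXFILTRATION = 6

def progressed(earlier: BreachStage, later: BreachStage) -> bool:
    """True if the second observation represents a later stage,
    i.e. the attack has advanced along the chain."""
    return later > earlier

# The stages we usually hear about are the two ends of the chain:
print(progressed(BreachStage.INFILTRATION, BreachStage.EXFILTRATION))  # True
```

Modeling the stages as an ordered type makes the key point explicit: everything between the first and last stage is observable activity, even if it rarely makes the headlines.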
How is it, then – you might ask – if a data breach is made up of so many individual steps, that these steps are not detected and immediately identified as the malicious exploits they are?
The answer is that they are detected. But the main problem of cloud security today is not one of detection. It’s correlation.
Modern security systems detect a lot. In fact, they probably detect too much: according to a study by IT security firm Bricata, the average SOC receives over 10,000 alerts each day from an ever-growing array of monitoring and detection products.
However, in spite of these massive numbers of alerts, there are a number of reasons why malicious activity still goes undetected:
Too many logs: when you have too many logs, it’s impossible to know which alerts matter, and which do not. Identifying a malicious event in a sea of false positives is like trying to find a needle in a haystack.
Low-risk alerts: while many events are being detected, most of them are medium- and low-risk alerts that do not seem worth investigating.
Lack of context: looking at an individual activity in isolation, it’s impossible to tell whether that activity is legitimate or not. That administrator logging in in the middle of the night – is it because he is sleepless, or did someone steal his credentials? That DevOps engineer invoking an API call she has never used before – is she working on something new, or is a hacker trying something shady? Without context it is impossible to tell.
Stretching over time: going back to our original point – data breaches take a long time to unfold, which means the alerts related to one will likewise be detected over an extended period. When events are detected in sequence, it is easy to tell that they are related. But what happens when they are detected months apart?
Given these realities, it is unrealistic to expect security managers to be able to connect a random event to another event they spotted weeks or months ago. The answer, therefore, is to use automated tools which not only detect individual events, but also correlate them into a logical sequence that shows how they are related.
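To make the correlation idea concrete, here is a deliberately simplified sketch, assuming alerts tagged with an identity and a kill-chain stage. It groups alerts by the identity involved and flags identities whose alerts advance through the chain in order, even when the alerts are months apart. The stage names, thresholds, and alert format are all hypothetical; real products use far richer models:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical kill-chain stage order, following the steps described earlier.
STAGES = ["infiltration", "reconnaissance", "lateral_movement",
          "privilege_escalation", "exfiltration"]
RANK = {stage: i for i, stage in enumerate(STAGES)}

def correlate(alerts):
    """Group low-level alerts by identity and return identities whose
    alerts progress through 3 or more kill-chain stages in order."""
    by_identity = defaultdict(list)
    for alert in alerts:
        by_identity[alert["identity"]].append(alert)

    suspicious = {}
    for identity, events in by_identity.items():
        events.sort(key=lambda a: a["time"])  # time span can be months
        highest, chain = -1, []
        for event in events:
            rank = RANK.get(event["stage"], -1)
            if rank > highest:  # the attack advanced a stage
                highest = rank
                chain.append(event["stage"])
        if len(chain) >= 3:  # arbitrary threshold: 3+ stages = likely campaign
            suspicious[identity] = chain
    return suspicious

# Individually, each of these alerts looks like routine noise:
alerts = [
    {"identity": "svc-backup", "time": datetime(2021, 1, 4),  "stage": "infiltration"},
    {"identity": "svc-backup", "time": datetime(2021, 2, 19), "stage": "reconnaissance"},
    {"identity": "alice",      "time": datetime(2021, 2, 20), "stage": "reconnaissance"},
    {"identity": "svc-backup", "time": datetime(2021, 3, 28), "stage": "exfiltration"},
]
print(correlate(alerts))
# {'svc-backup': ['infiltration', 'reconnaissance', 'exfiltration']}
```

No single alert here is alarming on its own – which is exactly the point. Only when the three “svc-backup” events, spread across three months, are viewed as one sequence does the attack pattern emerge, while the lone “alice” alert stays below the threshold.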
Cyberattacks take time, and the bigger and more complex the network, the more time they take. Over such a drawn-out period, it is impossible to keep track of individual events and connect them manually. Rather, you need automated tools that will do it for you: track separate activities over long time spans, alert you to the aggregate threat of the event sequence, and automatically respond to break the attack kill chain as it progresses – before it’s too late.