
If I knew then what I know now – Zero Day Vulnerabilities and the fear of the unknown

Whenever we hear of a new Zero Day threat, working out when Day Zero actually was is always an inexact process. The first the general public knows of it is when patches are released, which means somebody fell victim to the exploit days or weeks before. Sometimes we get more colour from whichever researcher first analysed the threat, but it’s what was going on before Day Zero that’s the real concern. How long has the exploitation of these vulnerabilities been going on? It’s a troubling ‘known unknown’. It leaves us feeling like we are swimming in the ocean knowing there are Great White sharks out of sight below us – it won’t be long before one of them gets hungry!

So, what can you do to counteract the threat of something when you don’t even know what that something looks like?

Think that it won’t happen to you? Think again – Solorigate and Hafnium showed us that a deliberately engineered zero-day threat can be unleashed at any time, while the more traditional ‘exploit-that-we-only-just-discovered’ vulnerabilities will always be in endless supply, as built-in functions get misused and abused when they fall into the wrong hands.

Trying to defend against future threats that have yet to take shape is a particularly challenging problem. So far, the answer from the Cyber Security Industry is the Security Controls Framework. OK, it may not be the answer you wanted but, to be fair, the question being asked is a tough one!

The great thing about a Security Controls Framework, for example the NIST CSF, is that it gives you all the answers for mitigating, counteracting and remediating the full technicolour spectrum of cyber threats. On the downside, the full array of tools, processes, best practices and controls you need to operate is vast and not always congruous with your optimum business operations.

As a good analogy for the need for comprehensive, layered security, and the consequences of leaving gaps, think of the Star Wars Death Star. It was the most formidably defended battle station ever built, with not just laser guns but squadrons of TIE fighters to intercept and destroy attackers before they even got close, backed up with forcefields for protection, just in case! And yet the infamous Thermal Exhaust Port left one remaining weak spot – a Zero Day vulnerability that was exploited to devastating effect.

Meanwhile, back in the Real World, we have all been under attack from a deadly invisible enemy for the last two years. It was a Zero Day virus with no vaccine, so in the early days we had no protection, while treatments were far less effective than needed. As such, much emphasis was placed on Contact Tracing strategies, sometimes known as Test, Track and Trace. The thinking behind this was that, if infected or even potentially infected individuals could be identified promptly, they could be told to isolate, thereby cutting off the otherwise exponential spread of the disease.

In many respects the same rules apply to cyber breaches, especially when we are dealing with Zero Day malware, where our automated defences are blind to the never-before-seen toxic infection.

Early detection and containment are critical in limiting the depth of any incursion and the opportunity for data theft or disruption. The latest data from the Verizon Data Breach Investigations Report puts the Time To Detect for a breach at around 160 days, while exfiltration of data usually happens within the first few days. You could be the victim of a smash-and-grab data theft months before you have any idea that your systems have been compromised.

In the IT World, our Track and Trace capability is known as Change Control. The Change part refers to the need for sufficiently comprehensive visibility of all change within our systems. Once we have this, we can expose what are known as Indicators of Compromise. When dealing with Ransomware, APTs or Trojan malware, these could be new or modified files on our systems, or other changes such as registry settings, new processes, open network ports and so on.
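
To make that visibility concrete, here is a minimal sketch in Python of the simplest form of change detection: snapshot a baseline of file hashes, then re-scan later and diff the results. The scanned path and baseline filename are illustrative assumptions, not taken from any particular product.

```python
# A minimal change-visibility sketch: hash every file under a directory,
# save the result as a baseline, then diff a later scan against it.
# The scanned path ("/etc") and "baseline.json" are illustrative only.
import hashlib
import json
import os

def scan(root):
    """Walk a directory tree and return {path: sha256-hex} for every file."""
    hashes = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    hashes[path] = hashlib.sha256(f.read()).hexdigest()
            except OSError:
                continue  # unreadable file; a real agent would log this
    return hashes

if __name__ == "__main__":
    baseline = scan("/etc")  # snapshot of a known-good state
    with open("baseline.json", "w") as f:
        json.dump(baseline, f)

    # ...later: re-scan and classify what has changed since the baseline
    current = scan("/etc")
    added = [p for p in current if p not in baseline]
    removed = [p for p in baseline if p not in current]
    modified = [p for p in current if p in baseline and current[p] != baseline[p]]
    print(f"added: {added}\nremoved: {removed}\nmodified: {modified}")
```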

But to leverage this new visibility of change, we also need the Control element. Our IT systems change a great deal, and frequently: every time we patch our systems, we see the same kinds of changes as we would in a cyber-attack. The only difference is that our changes are for good, where the others are for bad. This is why we need Change CONTROL – we need to be able to distinguish between intended, positive changes and unexpected, unwanted ones.
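
As a sketch of what that distinction might look like in practice, detected changes can be reconciled against the set of paths approved under planned change records; anything left over is what deserves attention. The RFC identifiers and file paths below are invented for illustration.

```python
# A sketch of the Control side: reconcile observed changes against paths
# approved under planned change records (e.g. exported from an ITSM tool).
# RFC IDs and file paths are hypothetical.
APPROVED_CHANGES = {
    "RFC-1042": {"/etc/ssh/sshd_config", "/usr/sbin/sshd"},  # this month's patch
}

def triage(observed_paths):
    """Split observed changes into expected (covered by an RFC) and suspect."""
    approved = {p for paths in APPROVED_CHANGES.values() for p in paths}
    expected = [p for p in observed_paths if p in approved]
    suspect = [p for p in observed_paths if p not in approved]
    return expected, suspect

expected, suspect = triage(["/usr/sbin/sshd", "/tmp/.hidden/dropper"])
print("Expected (planned change):", expected)  # ['/usr/sbin/sshd']
print("Suspect (investigate):", suspect)       # ['/tmp/.hidden/dropper']
```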

The only way to get the necessary level of forensic detail and, crucially, the context of changes is to take a FIM (File Integrity Monitoring) approach. It’s not enough just to know that, say, a system file has changed on a device. We need to know when it changed, the before and after of how it changed, and who made the change. Was it planned, related to an RFC approved in our ITSM system? Ideally, we would also reference file-reputation data: for example, is the file signed and known to be part of an official publisher’s patch? Has this same file been seen on other systems?
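
Pulling those questions together, a FIM change record might carry context along these lines. This is a hypothetical structure: the field names and the suspicion rule are assumptions made for illustration, not the schema of any real product.

```python
# A hypothetical FIM change record capturing the who/when/what context
# described above. All field names are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ChangeEvent:
    path: str
    observed_at: datetime            # when the change was detected
    changed_by: str                  # account that made the change, if known
    hash_before: str                 # file state prior to the change
    hash_after: str                  # file state after the change
    rfc_id: Optional[str] = None     # approved change record, if any
    signed_by: Optional[str] = None  # code-signing publisher, if the file is signed
    seen_on_hosts: int = 1           # prevalence across the wider estate

    def is_suspicious(self):
        """Unplanned, unsigned and rarely seen: worth a closer look."""
        return (self.rfc_id is None
                and self.signed_by is None
                and self.seen_on_hosts < 3)

event = ChangeEvent(
    path="/usr/sbin/sshd",
    observed_at=datetime.now(timezone.utc),
    changed_by="root",
    hash_before="9f2c...",  # truncated for readability
    hash_after="41ab...",
)
print(event.is_suspicious())  # True: no RFC, no signature, seen on one host
```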

Of course, Change Control isn’t just about providing a sensitive breach detection mechanism. It also helps us maintain our defences against attacks before they happen, identifying configuration decay and allowing us to remediate before there is a problem. But when all our other security measures have failed to prevent malicious activity, FIM and Change Control provide a Last Line of Defence.

We may never know what we don’t know, but by accepting that new threats are inevitable, and that the unthinkable could happen to us, we can put in place the additional measures we need to counteract any adversary. Even those that don’t exist – yet.