Ben Desjardins, VP of product strategy at cyber security firm RSA Security, talks to TEISS about the importance of quantifying cyber security risks.
“Great usability: isn’t that the key to cyber security?” I asked Ben Desjardins when we met recently. “Not at the expense of security itself”, he answered. “Making usability of security products or features a trade-off for making them truly secure is a dangerous message. Of course, we all want to move towards frictionless security that is not disruptive to employees or customers, but people need to know that the security is there.”
The key to good cyber security, Ben tells me, is that it is adaptive. So, for instance, when someone is undertaking a low risk activity there might be no need for any additional authentication. But when a computer user is undertaking a high risk activity you need to be confident you know exactly who they are – and they need to know that you are potentially monitoring them.
If you are going to do that though you need rules to differentiate between low and high risk activities:
- Is this activity happening at the normal time, from the usual location and with the device they normally use?
- How sensitive is the information that they are accessing?
- Are they behaving in a way you would expect them to?
Asking these questions will help you to quantify cyber security risk at any one moment. And quantification is at the heart of cyber risk management.
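Those rules can be sketched, very roughly, in code. The factors, weights and thresholds below are invented for illustration – they are not RSA’s model – but they show how answers to the three questions combine into a single score that drives an adaptive authentication decision:

```python
# Toy adaptive-authentication risk score. The weights and thresholds
# are assumptions for illustration, not any vendor's actual model.

def risk_score(usual_time: bool, usual_location: bool,
               usual_device: bool, data_sensitivity: int,
               behaviour_anomaly: bool) -> int:
    """Return a 0-10 risk score; data_sensitivity runs 0 (public) to 3 (secret)."""
    score = 0
    score += 0 if usual_time else 1        # odd hour of access
    score += 0 if usual_location else 2    # unfamiliar location
    score += 0 if usual_device else 2      # unfamiliar device
    score += data_sensitivity              # more sensitive data, more risk
    score += 2 if behaviour_anomaly else 0 # unexpected behaviour
    return score

def auth_required(score: int) -> str:
    """Map the score onto a step-up authentication decision."""
    if score <= 2:
        return "none"              # frictionless: low risk activity
    if score <= 5:
        return "2fa"               # step-up: ask for a second factor
    return "block-and-review"      # high risk: verify identity, monitor

# A login from the usual device and place, touching public data:
print(auth_required(risk_score(True, True, True, 0, False)))   # none
# An unusual device and location, accessing sensitive records (score 9):
print(auth_required(risk_score(True, False, False, 3, True)))  # block-and-review
```

The design point is that the answer is graded, not binary: most activity passes with no friction at all, and step-up authentication only appears where the quantified risk justifies it.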
Quantification of cyber security
To be efficient, any security process needs to be able to adapt to any particular set of circumstances. How it adapts should be a function of the level of risk at that moment. That could be internal risk: for instance, “Is personal data being accessed?” Or it could be external risk: for instance, “Is our organisation at risk from a new threat we have just learned about?” Unless you can quantify the risk that surrounds a particular set of circumstances, you won’t know how to prioritise a wide array of incidents or identify the most effective way to react.
The process that Ben uses to quantify cyber risk is “FAIR”, which stands for Factor Analysis of Information Risk. FAIR is a framework for understanding the factors that contribute to information risk. It employs four steps that help organisations quantify information risk at a granular level. Put simply, these steps are:
- Identify the risk scenario components: the asset at risk, its value and the threat actors that may cause it damage
- Evaluate how frequent damaging incidents are likely to be, how strong the defences around the asset are, and thus the probability of a damaging incident
- Evaluate the probable size of the loss caused by damage to the asset
- Quantify the risk based on likelihood and impact
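The four steps above reduce, in their simplest form, to multiplying likelihood by impact. The sketch below is a minimal, illustrative version of that FAIR-style calculation – the scenario and all the figures are invented, and a real FAIR analysis would use ranges and distributions rather than single point estimates:

```python
# Minimal sketch of FAIR-style quantification: annualised risk as
# loss event frequency x probable loss magnitude. All numbers invented.

def loss_event_frequency(threat_event_frequency: float,
                         vulnerability: float) -> float:
    """Step 2: expected damaging incidents per year =
    attack attempts per year x probability an attempt succeeds."""
    return threat_event_frequency * vulnerability

def annualised_risk(lef: float, loss_magnitude: float) -> float:
    """Step 4: quantify risk as likelihood x impact (expected annual loss)."""
    return lef * loss_magnitude

# Step 1 (scenario): a customer database, targeted by external attackers.
# Step 2: roughly 20 serious attempts a year, of which 5% succeed.
lef = loss_event_frequency(20, 0.05)   # 1.0 damaging incident per year
# Step 3: a breach of this asset would cost roughly £250,000.
print(annualised_risk(lef, 250_000))   # 250000.0 expected annual loss in £
```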
This approach provides a granular quantification of risk and enables you to prioritise your risks based on their likely financial impact. It also helps you translate certain actions and incidents into financial terms. For instance, the management of an organisation could calculate that a change from a situation where 25% of employees use two-factor authentication (2FA) to one where 50% of employees use it would have the effect of reducing financial exposure by, say, £1,000.
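A worked version of that 2FA example might look like the following. The figures are illustrative, and the model makes a simplifying assumption that account-takeover exposure scales with the share of accounts not yet protected by 2FA:

```python
# Illustrative calculation behind the 2FA example. All figures are
# invented; the model assumes exposure scales with unprotected accounts.

def exposure(n_employees: int, takeover_prob: float,
             loss_per_takeover: float, twofa_coverage: float) -> float:
    """Expected annual loss from takeovers of accounts without 2FA."""
    unprotected = n_employees * (1 - twofa_coverage)
    return unprotected * takeover_prob * loss_per_takeover

before = exposure(200, 0.01, 2_000, 0.25)  # 25% of staff on 2FA: £3,000
after = exposure(200, 0.01, 2_000, 0.50)   # 50% of staff on 2FA: £2,000
print(before - after)                      # 1000.0 -> £1,000 less exposure
```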
This is an important approach because, as Ben puts it, “I can control risks to a degree [by changing process, technology and people], but I can’t control threats [as they are on the whole external actors].”
It is important to be pragmatic though. The expectation shouldn’t be to quantify risk down to the nearest £, but to have a rough idea that will enable you to prioritise your defensive investments. Quantification means you can put a financial value on risk; and by using financial triggers to make security decisions, you have a decision-making tool with some logic behind it.
The approach can be used in several different ways. Take the automation of cyber security as an example. There is a lot of discussion of the benefits of automation and machine learning within cyber security. But these techniques are not free: automating security operations needs to be cost-effective.
The question to ask here is “how much money would I save through automating?” The benefit might be fewer false alarms and more continuous operations as a result. The requirement is to look for recurring incidents where it is cost-effective to automate the response. Once you know where automation is most cost-effective you can address how you are going to automate: for instance, by developing detailed playbooks, digitising manual processes or even using security automation tools such as the RSA NetWitness Platform, which leverages automation capabilities through its partnership with Demisto.
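That “how much would I save?” question is a back-of-envelope calculation. A minimal sketch, with invented parameters (incident volume, analyst time and rates will of course differ by organisation):

```python
# Back-of-envelope test for "is this response worth automating?"
# All parameters are illustrative assumptions.

def annual_manual_cost(incidents_per_year: int, minutes_per_incident: float,
                       analyst_rate_per_hour: float) -> float:
    """Annual cost of handling a recurring incident type by hand."""
    return incidents_per_year * (minutes_per_incident / 60) * analyst_rate_per_hour

def worth_automating(incidents_per_year: int, minutes_per_incident: float,
                     analyst_rate_per_hour: float,
                     automation_cost_per_year: float):
    """Return (annual saving, whether automation pays for itself)."""
    saving = annual_manual_cost(incidents_per_year, minutes_per_incident,
                                analyst_rate_per_hour) - automation_cost_per_year
    return saving, saving > 0

# A recurring phishing-triage playbook: 600 incidents a year,
# 20 analyst-minutes each at £50/hour, versus £6,000/year to automate.
saving, worthwhile = worth_automating(600, 20, 50, automation_cost_per_year=6_000)
print(saving, worthwhile)  # 4000.0 True
```

High-volume, repetitive incident types clear this bar easily; rare, novel incidents usually do not, which is exactly why they stay with human analysts.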
Another technique is to use a “nodal” view so you can see the whole network and what is happening within it. Having a full view of the attack is critical for understanding the steps needed for remediation. Combined with good data visualisation, this gives you the ability to orchestrate your security capability.
[Image: RSA NetWitness Respond nodal network diagram]
[Image: RSA NetWitness Respond nodal malware data diagram]
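The idea behind a nodal view can be illustrated with a toy example: given which hosts talk to which, and one host known to be compromised, enumerate everything the attacker can reach. A real product builds this graph from live network data; the topology and hostnames below are invented:

```python
# Toy "nodal" view of an attack: breadth-first search over observed
# connections from a known-compromised host. Topology is invented.
from collections import deque

network = {
    "laptop-42":     ["file-server", "mail-server"],
    "file-server":   ["db-server"],
    "mail-server":   [],
    "db-server":     ["backup-server"],
    "backup-server": [],
    "hr-system":     ["db-server"],
}

def blast_radius(graph: dict, compromised: str) -> list:
    """Return every host reachable from the compromised node."""
    seen, queue = {compromised}, deque([compromised])
    while queue:
        for neighbour in graph.get(queue.popleft(), []):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(neighbour)
    return sorted(seen - {compromised})

print(blast_radius(network, "laptop-42"))
# ['backup-server', 'db-server', 'file-server', 'mail-server']
```

Seeing the full reach of an attack in one view – rather than one alert at a time – is what makes it possible to scope remediation correctly.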
The ultimate aim of this approach is to deliver a cyber security ROI, where “R” is the value of the reduction in risk. One caution though: security professionals accept it is impossible to keep 100% of threats out. That means that risk can never be totally eliminated. And that in turn means a shift from “protect” to “detect and respond”. In other words, the quantification of cyber security needs to contain two parts: the value achieved through reducing risk; and the cost necessary to clear up any damage that is caused.
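The two-part quantification described above can be written out as a simple formula: the value of the risk reduction, net of the expected clean-up cost of the incidents that still get through, set against the cost of the control. The figures below are invented for illustration:

```python
# Sketch of the two-part cyber security ROI described above.
# All figures are illustrative assumptions.

def security_roi(risk_reduction_value: float, residual_incident_rate: float,
                 cleanup_cost_per_incident: float, control_cost: float) -> float:
    """ROI where 'R' is the value of risk reduced, net of the
    clean-up cost of incidents the control does not prevent."""
    residual_cleanup = residual_incident_rate * cleanup_cost_per_incident
    net_benefit = risk_reduction_value - residual_cleanup
    return (net_benefit - control_cost) / control_cost

# A control cutting expected loss by £50,000/year, costing £20,000/year,
# with 2 incidents a year still expected at £5,000 clean-up each:
print(security_roi(50_000, 2, 5_000, 20_000))  # 1.0 -> the control pays for itself twice over
```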
Future of cyber technology: AI and IoT
The conversation moves on to the future of cyber security. I ask Ben whether AI is more of a threat to cyber security than it is a benefit. With machine learning, I argue, you might get into a position where you don’t know what decisions an AI machine took or why it took them.
“That’s true”, Ben agrees. “But AI doesn’t have to mean autonomous machines. Unsupervised, autonomous, machines might well make mistakes and then compound them with further mistakes because they have not been programmed well. But AI machines can be supervised; they can still learn over time but a human can guide them along safe paths. For example an AI machine could analyse security data but then a human being would undertake an investigation based on the analysis. This would mean that a machine finds a suspicious pattern but a human identifies the appropriate response needed for that pattern.”
Alternatively, supervised AI machines can be told to “look for a particular thing” rather than simply searching for patterns and connections.
In the future though we may get more confident in the abilities of AI cyber security machines. If we fully automate the process, humans will lose control and all visibility of what is going on. That could be fine, but if that route is chosen then there must be a full catalogue of each step taken by the autonomous machines so that forensic analysis (and potential reprogramming of the machine) can take place at a later date.
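That “full catalogue of each step” is essentially an append-only audit trail around every automated action. A minimal sketch – the action names and log structure here are invented, and a production system would persist the log somewhere tamper-evident rather than in memory:

```python
# Sketch of a "full catalogue of each step": an append-only audit log
# wrapping every action an autonomous responder takes, so a human can
# reconstruct its decisions later. Names and structure are invented.
import time

audit_log = []

def audited(action_name: str):
    """Decorator that records each automated action, its inputs and outcome."""
    def wrap(fn):
        def inner(*args, **kwargs):
            result = fn(*args, **kwargs)
            audit_log.append({
                "time": time.time(),
                "action": action_name,
                "inputs": {"args": args, "kwargs": kwargs},
                "outcome": result,
            })
            return result
        return inner
    return wrap

@audited("quarantine_host")
def quarantine_host(host: str) -> str:
    # Placeholder for the real response action.
    return f"{host} isolated from network"

quarantine_host("laptop-42")
print(audit_log[-1]["outcome"])  # laptop-42 isolated from network
```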
In the meantime though, security analysts will need to develop better skills around evidence, ensuring they are able to analyse the whole history behind a breach as well as having the confidence to declare when an incident is serious enough to respond to.
Another technology we need to understand is robotics, and in particular the “Industrial Internet of Things” (IIoT), factory robots that are connected to the internet. There are a number of threats that come with this technology:
- A vulnerable device may be connected into the depths of the network and allow hackers a way in to critical information
- A vulnerable device may be shut down by hackers, impacting operations
- A vulnerable device might be used as a weapon, for instance becoming part of a massive botnet or even mixing factory chemicals in a dangerous combination
“The universe of exploits continues to expand”, Ben says resignedly. And it isn’t just IIoT that is a headache for security professionals. Blockchain is also changing the threat landscape, adding anonymity and speed for hackers, while at the same time having questionable use as a defensive technology. And at some stage we are going to be faced with Quantum Computing which could rip apart current security protocols (although in the meantime we have to contend with the simpler but no less problematic issue of constantly increasing processing power).
Cyber security will always require a balance between defences and operational efficiency, whether the threat is quantum computing, AI, or the IoT. And critical to achieving that balance is the process of quantifying risk. Without it we cannot hope to prioritise our risks and invest in cyber defences cost effectively. But with it we can deliver high fidelity security.