Our latest blog takes a look at threat modeling and security testing within virtualised environments.
The continued deployment of Virtualisation within existing network architectures, and the resulting collapse of network zones onto single physical servers, is likely to introduce radical changes to current architectural and security models, increasing the threat to the confidentiality, integrity and availability of the data held by such systems. Recent experience has already shown that Virtualisation can introduce single points of failure within networks, and that successful attacks can yield access to data across multiple zones that would historically have been physically segregated.
Dealing with this change will require a corresponding change to the architectural and security models used, together with a full understanding of the associated risks and threats that Virtualisation brings.
The purpose of this post is to set out areas that will need to be explored in order to gain assurance that Virtual Environments do not introduce or aggravate potential security vulnerabilities and flaws that can be exploited by an attacker to gain access to confidential information, affect data integrity or reduce the availability of a service.
Virtualisation raises lots of questions, including:
- Will the Virtual Environment breach the security controls that protect the existing physical estate? If so, how?
- What additional controls will be required?
- Does the proposed change exceed risk appetite?
Threat Modeling
To explore these questions, a comprehensive threat modeling exercise should be undertaken to assess the level of risk and the threats associated with Virtualisation. This exercise should be tailored to the environment and business market that you operate in (for example, financial organisations will need to be aware of regulatory requirements such as PCI DSS, which could affect their use of virtualisation).
A detailed threat model will aid the development of a robust architectural model as well as feed into any assurance work conducted. Such activity should be completed at the design stage or, failing that, at the latest prior to any deployment, as reviewing potential threats after deployment can result in costly redesign and implementation work.
Any risks and potential vulnerabilities identified during the threat analysis phase should be mitigated with appropriate security controls and built into the design prior to implementation. Security testing should then be arranged to verify that the risks have been effectively mitigated.
Of course, in some instances there will be environments where threat modeling is not part of usual business security practice. Where this is the case, any team conducting assurance activity should complete a tactical threat modeling exercise as part of their engagement, to inform the direction and context of any testing and recommendations. A tactical threat modeling exercise is one where less time and effort is applied, with the focus on a 'how would we attack this system' basis. Such an approach takes into account possible attack scenarios and is likely to form the basis of a penetration testing scoping exercise, as sketched below.
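To make this concrete, the sketch below shows one way the output of a tactical threat model might be captured and ranked to drive test scoping. The scenario names and the 1–5 scoring scale are illustrative assumptions, not a prescribed methodology.

```python
from dataclasses import dataclass

@dataclass
class AttackScenario:
    """One 'how would we attack this system' entry in a tactical threat model."""
    name: str
    likelihood: int  # 1 (rare) to 5 (almost certain) - illustrative scale
    impact: int      # 1 (negligible) to 5 (severe)   - illustrative scale

    @property
    def risk(self) -> int:
        # Simple likelihood x impact ranking to prioritise testing effort
        return self.likelihood * self.impact

# Example scenarios for a virtualised estate (names are hypothetical)
scenarios = [
    AttackScenario("Guest-to-hypervisor escape from a low-trust VM", 2, 5),
    AttackScenario("Snapshot rollback reintroduces unpatched services", 4, 3),
    AttackScenario("Virtualisation admin abuses cross-zone privileges", 3, 4),
]

# Highest-risk scenarios first: these drive the penetration test scope
for s in sorted(scenarios, key=lambda sc: sc.risk, reverse=True):
    print(f"{s.risk:>2}  {s.name}")
```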
Technology and functionality change fast, and this can alter the attack surface of an organisation. Threat assessments should therefore be an ongoing activity; even a regular tactical threat assessment that takes into account how changes in the deployed technology affect the effectiveness of existing controls will aid organisational understanding of the threats posed, and may help to avoid a costly breach or loss of data.
Example Questions
Questions that should be asked as part of a threat modeling exercise include:
How will Virtualisation impact the patch management process?
- Consideration will be needed of how patching and virus control operate in a centralised environment. Virtualisation introduces the risk of out-of-sync Virtual Machines existing within the network. For example, VM snapshot/rollback functionality adds a new capability to undo changes back to a "known good" state, but within a server environment, when someone rolls back changes they may also roll back security patches or configuration changes that have been made; a simple detection sketch follows below.
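As a minimal sketch of how such patch drift might be detected, the script below compares each VM's reported patch date against the current baseline. The inventory records, field names and dates are assumptions for illustration; in practice this data would come from your patch management tooling or the virtualisation platform's API.

```python
from datetime import date

# Hypothetical inventory export - in practice this would come from your
# patch management tooling or the virtualisation platform's API
CURRENT_BASELINE = date(2024, 6, 1)  # date of the latest approved patch set

vm_inventory = [
    {"name": "web-01", "last_patched": date(2024, 6, 1),  "restored_from_snapshot": False},
    {"name": "db-02",  "last_patched": date(2024, 2, 14), "restored_from_snapshot": True},
    {"name": "app-03", "last_patched": date(2024, 5, 20), "restored_from_snapshot": False},
]

# Flag any VM whose patch level predates the baseline - snapshot rollbacks
# are a common cause, since they silently undo patches applied after the
# snapshot was taken
for vm in vm_inventory:
    if vm["last_patched"] < CURRENT_BASELINE:
        cause = " (restored from snapshot)" if vm["restored_from_snapshot"] else ""
        print(f"OUT OF SYNC: {vm['name']} last patched {vm['last_patched']}{cause}")
```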
Is there sufficient segregation of administrative duties?
- Layer 2 devices are now virtualised commodities; this could lead to a new breed of 'virtualisation administrators' who straddle the roles of traditional network and security engineers and thus hold the keys to the virtual kingdom. SAN virtualisation admins will be responsible for assigning storage groups to specific VMs, so they may have rights across that network divide as well, essentially making them network/server/storage admins from a rights perspective; see the audit sketch below.
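One way to surface this is to audit role assignments for accounts that straddle domains. The sketch below checks a set of assignments against "toxic" role combinations; the account names, role names and combinations are hypothetical and would need to reflect your own platform's roles.

```python
# Hypothetical role assignments exported from the virtualisation platform
# and supporting systems - account and role names are illustrative
role_assignments = {
    "alice":   {"network_admin"},
    "bob":     {"server_admin", "storage_admin"},
    "charlie": {"network_admin", "server_admin", "storage_admin"},  # keys to the kingdom
}

# Combinations of roles that should not be held by a single account
TOXIC_COMBINATIONS = [
    {"network_admin", "server_admin"},
    {"server_admin", "storage_admin"},
]

for account, roles in role_assignments.items():
    for combo in TOXIC_COMBINATIONS:
        if combo <= roles:  # account holds every role in the toxic combination
            print(f"Segregation-of-duties violation: {account} holds {sorted(combo)}")
```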
Is there a suitable security model in place?
- Where multiple user groups or zones, or differing risk appetites, exist within an environment, separate security models should be created to contain breaches within one zone and help protect against known attack types. No single security model should be applied across all groups or zones; a cross-zone rule check is sketched below.
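As an illustration, the check below flags firewall rules that bridge zones, which is where a single breach can spread. The zone names, risk ratings and rule format are assumptions for the sketch, not a real rule-set schema.

```python
# Hypothetical zone risk ratings and firewall rules - names are illustrative
zone_risk = {"dmz": "high", "internal": "medium", "pci": "restricted"}

rules = [
    {"id": 1, "src_zone": "dmz",      "dst_zone": "internal", "port": 443},
    {"id": 2, "src_zone": "internal", "dst_zone": "pci",      "port": 1433},
    {"id": 3, "src_zone": "dmz",      "dst_zone": "dmz",      "port": 80},
]

# Any rule that crosses a zone boundary deserves explicit review, since a
# breach in one zone should be contained rather than handed a path onwards
for rule in rules:
    if rule["src_zone"] != rule["dst_zone"]:
        print(f"Rule {rule['id']}: crosses {rule['src_zone']} ({zone_risk[rule['src_zone']]}) "
              f"-> {rule['dst_zone']} ({zone_risk[rule['dst_zone']]}) on port {rule['port']}")
```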
Is there suitable resiliency built in to the environment?
- What level of resiliency is required in terms of disaster recovery and denial-of-service mitigation? Are those measures in place and, more importantly, are they effective?
- Is there sufficient Disaster Recovery built into the environment? A basic availability probe is sketched below.
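Effectiveness is best proven by exercising the controls rather than reviewing documentation. As a trivial starting point, the probe below checks that both the primary and DR tiers answer at all; the hostnames and ports are placeholders, and a real DR test would deliberately fail the primary over.

```python
import socket

# Placeholder endpoints - substitute the real primary and DR service addresses
ENDPOINTS = {
    "primary": ("app.example.internal", 443),
    "dr":      ("app-dr.example.internal", 443),
}

def reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Attempt a TCP connection; a failed connect suggests the tier is down."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for tier, (host, port) in ENDPOINTS.items():
    status = "up" if reachable(host, port) else "DOWN"
    print(f"{tier}: {host}:{port} is {status}")
```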
How does the virtual environment interact with existing network architectures and authentication mechanisms?
- Does this introduce a weakness to the environment that could be utilised by a malicious party?
Has a design review been completed?
- Virtualised environments are complex; as such, an effective and detailed review of the proposed design should be conducted during the design stage to aid the development of a more robust and secure system. The output from threat modeling should be incorporated into this review to ensure that a suitable design is implemented.
Are you outsourcing any component of the virtualised environment?
- Third-party relationships have become a major focus over the last few years, with several high-profile data loss incidents in the media. The contract with the vendor should include appropriate clauses to ensure data security, while formally enabling testing and remediation activities. More specifically, the contract should facilitate regular security testing.
- Does the outsource contract allow you to define the physical location of your data? The jurisdiction can affect not only the threat model, but also your handling of risks.
To Close
Any security testing conducted against the virtual environment should follow detailed, industry-standard methodologies (OWASP, CREST, OSSTMM) and comprise both infrastructure testing/build reviews and security device configuration reviews (e.g. firewall rule sets). However, specific testing of the ability to break out of the virtual environment will also need to be included, as access to the restricted hardware layer will result in data leakage and potentially the ability to compromise any other attached Virtual Machines (gaining unauthorised access to user data and systems).
Essentially, many of the areas of risk specific to virtualisation arise from the capabilities it provides. For example, virtualisation allows systems from different networks to be hosted on a single physical server. This has benefits in terms of reduced costs and datacentre space requirements, but it also introduces a new perimeter between networks: the Virtualisation hypervisor. If there is a security issue affecting the hypervisor, it can allow attackers to jump from one network zone to another, effectively bypassing existing security controls such as firewalls.