No! Not Casper, not that friendly GHOST!

Last year (2014) we saw a couple of big exploits make the headlines, and security teams around the world are still picking up the pieces left by Heartbleed and Shellshock.

So where are we this year? We are not even 10% into the new year, and already contenders are popping up trying to make a name for themselves. The newest vulnerability to get the brand treatment is GHOST.

A not so friendly GHOST

GHOST is a buffer overflow in the GNU C Library (otherwise known as glibc), specifically in its __nss_hostname_digits_dots() function.

Because this function is reached via the gethostbyname*() functions, which are routinely passed hostnames from untrusted sources, it is exploitable both locally and remotely. Successful exploitation allows arbitrary code execution, resulting in unauthorised access. The full advisory can be read here; the vulnerability has been assigned CVE-2015-0235.

Who can I call? GHOSTbusters?

No working exploit has yet been publicly disclosed. However, the technical explanation in the advisory is likely detailed enough for a working exploit to be developed.

GHOST’s impact should therefore be considered critical and warrants early remediation.

As per the advisory, the disclosure of this vulnerability was coordinated with several vendors in order to allow time for security-related patches to be issued.

Vulnerable versions range from glibc-2.2 up to and including glibc-2.17; the bug was fixed upstream in glibc-2.18. However, many long-term support and server-grade distributions shipped with earlier versions and remain vulnerable until patched. For example:

Debian 7 (wheezy)
Red Hat Enterprise Linux 6 & 7
CentOS 6 & 7
Ubuntu 12.04 LTS
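As a first triage step, the glibc version a host reports (for example via `ldd --version`) can be compared against the affected range. Note that distributions backport fixes without changing the upstream version number, so a version inside the range flags a host as a candidate for patching rather than proving it vulnerable. A minimal sketch, with illustrative version strings:

```python
def parse_glibc(version):
    """Reduce a glibc version string such as '2.12' to a comparable tuple."""
    return tuple(int(part) for part in version.split(".")[:2])

def in_ghost_range(version):
    # GHOST was introduced in glibc-2.2 and fixed upstream in glibc-2.18,
    # so 2.2 <= version < 2.18 marks a candidate for patching. Distributions
    # backport fixes, so this check alone does not prove vulnerability.
    return (2, 2) <= parse_glibc(version) < (2, 18)

print(in_ghost_range("2.12"))  # True  (e.g. the Red Hat Enterprise Linux 6 base version)
print(in_ghost_range("2.19"))  # False (a release that already contains the fix)
```

Hosts flagged by a check like this should be confirmed against vendor advisories, since a backported fix leaves the version number unchanged.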

Incident Response – keep cool in a crisis

SC Magazine recently published an article by our CEO, David Stubley on the topic of how to keep cool in a crisis.

“Learn from the misfortunes of your peers and prepare to defend against repeat use of the same cyber-attack techniques as part of your defence planning,” advises David Stubley.

The full article can be found here and a link to our incident response services can be found here.

Forensic vs Tactical


A key consideration for any organisation responding to an incident is whether to take a forensically sound approach to data acquisition and interrogation. The purpose of forensics is to obtain legally admissible evidence from computers and digital storage media. Organisations should therefore decide at an early stage whether they may wish to take the case to court or involve law enforcement. Should this be the case, they should use an approach that meets the evidential handling requirements of the legal jurisdiction in which the incident occurred. In the UK the foundations for this approach have been well documented by the Association of Chief Police Officers (ACPO). More information can be found on their website:

http://www.acpo.police.uk/documents/crime/2011/201110-cba-digital-evidence-v5.pdf

Taking a forensically sound approach can limit the options available to you in responding to an incident. If your organisation only needs to understand the facts around an incident and has no requirement to involve law enforcement, a more tactical approach can be taken.

Taking a tactical approach broadens the tools and overall options available as part of an incident response. It will enable your organisation to gain a rapid understanding of the size and complexity of the event.

Cyber Security breakfast meetups for SMEs

Our CEO David Stubley will be presenting at the upcoming Cyber Security breakfast meetups for SMEs (Edinburgh) event on the 29th January at 8am.

During this talk David will explore the question “What is Cyber Security?” Using real-life case studies, David will provide insight into the current and future threats faced by UK businesses.

The event will be held at New Register House in Edinburgh, for more information and to book a place, click here.

ScotlandIS Ecommerce Masterclass: Security and Legal

Our CEO David Stubley will be presenting at the upcoming Ecommerce Masterclass: Security and Legal event being held by ScotlandIS. At the event David will cover the current threat landscape and provide real-life examples of online attacks.

The event will be held at the Bonham Hotel in Edinburgh on the 29th January 2015 at 2pm.

Matthew Godfrey-Faussett, Partner, IT & Healthcare at Pinsent Masons, will also present on the legal aspects associated with information security and data protection.

For further information and to book a place, please visit ScotlandIS.

Threat Modeling and Security Testing within Virtualised Environments

Our latest blog takes a look at threat modeling and security testing within virtualised environments.

The continued deployment of Virtualisation within existing network architectures, and the resulting collapse of network zones on to single physical servers, is likely to introduce radical changes to current architectural and security models, increasing the threat to the confidentiality, integrity and availability of the data held by such systems. Recent experience has already shown that Virtualisation can introduce single points of failure within networks, and that successful attacks can provide access to data across multiple zones that would historically have been physically segregated.

Dealing with this change will require a corresponding change to the architectural and security models used, and a full understanding of the associated risks and threats that Virtualisation brings.

The purpose of this post is to set out areas that will need to be explored in order to gain assurance that Virtual Environments do not introduce or aggravate potential security vulnerabilities and flaws that can be exploited by an attacker to gain access to confidential information, affect data integrity or reduce the availability of a service.

Virtualisation raises lots of questions, including:

  • Will the Virtual Environment breach existing security controls that protect the existing physical estate? If so, how?
  • What additional controls will be required?
  • Does the proposed change exceed risk appetite?

Threat Modeling

To explore these questions, a comprehensive threat modeling exercise should be undertaken to look at the level of risk and the threats associated with Virtualisation. This exercise should be tailored to the environment and business market you operate in (for example, financial organisations will need to be aware of regulatory requirements such as PCI DSS, which could impact the use of virtualisation).

A detailed threat model will aid in the development of a robust architectural model as well as feed into any assurance work conducted. Such activity should be completed at the design stage or, failing that, at the latest prior to any deployment, as reviewing potential threats after deployment can result in costly redesign and implementation work.

Any risks and potential vulnerabilities identified during the threat analysis phase should be mitigated with appropriate security controls and built into the design prior to implementation. Security testing should then be arranged to verify that the risks have been effectively mitigated.
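The cycle described above (identify a risk, design a mitigating control, then verify it through testing) can be tracked with a simple traceability check, so that no identified threat reaches implementation without both a control and a planned test. The threat names and controls below are purely illustrative:

```python
# Hypothetical threat register for a virtualised environment; the
# entries are illustrative, not an exhaustive threat model.
threat_register = {
    "VM escape via hypervisor flaw": {
        "control": "hypervisor patch baseline and host hardening",
        "test": "virtual environment breakout testing",
    },
    "snapshot rollback reverts security patches": {
        "control": "patch audit after any snapshot restore",
        "test": "patch-level verification",
    },
    "over-privileged virtualisation administrators": {
        "control": "separation of administrative duties",
        "test": "privilege and role review",
    },
}

def unmitigated(register):
    # Flag any threat that lacks either a mitigating control or a test
    # planned to verify that the control is effective.
    return [threat for threat, entry in register.items()
            if not entry.get("control") or not entry.get("test")]

print(unmitigated(threat_register))  # [] -> every threat has a control and a test
```

A register like this also gives the later assurance phase a ready-made scope: each "test" entry becomes an item in the testing plan.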

Of course, in some instances there will be environments where threat modeling is not part of usual business security practice. Where this is the case, any team conducting assurance activity should complete a tactical threat modeling exercise as part of their engagement, to inform the direction and context of any testing and recommendations. A tactical threat modeling exercise is one where less time and effort is applied and the focus is on ‘how would we attack this system’. Such an approach takes into account possible attack scenarios and is likely to form the basis of a penetration testing scoping exercise.

Technology and functionality change fast, and this can change the attack surface of an organisation. Threat assessments should therefore be an ongoing activity. Even a regular tactical threat assessment, taking into account how changes in the technology deployed affect the effectiveness of existing controls, will aid organisational understanding of the threats posed and may help to avoid a costly breach or loss of data.

Example Questions

Questions that should be asked as part of a threat modeling exercise include:

How will Virtualisation impact the patch management process?

  • Consideration will be needed of how patching and virus control operate in a centralised environment. Virtualisation introduces the risk of out-of-sync Virtual Machines existing within the network. For example, VM snapshot/rollback functionality adds a new capability to undo changes back to a “known good” state; but within a server environment, when someone rolls back changes they may also roll back security patches or configuration changes made since the snapshot was taken.
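The rollback risk described above can be caught with a simple inventory comparison: any snapshot taken before the most recent patch run will silently revert those patches if it is ever restored. A sketch, with hypothetical VM names and dates:

```python
from datetime import date

# Hypothetical inventory: when each VM's current snapshot was taken,
# and when security patches were last applied to it.
vm_inventory = {
    "web01": {"snapshot_taken": date(2015, 1, 5),  "last_patched": date(2015, 1, 27)},
    "db01":  {"snapshot_taken": date(2015, 1, 28), "last_patched": date(2015, 1, 27)},
}

def rollback_risks(inventory):
    # A snapshot that predates the last patch run would undo those
    # patches if restored, leaving the VM out of sync with the estate.
    return [name for name, vm in inventory.items()
            if vm["snapshot_taken"] < vm["last_patched"]]

print(rollback_risks(vm_inventory))  # ['web01']
```

In practice the same comparison could be driven from the hypervisor's snapshot metadata and the patch management system's records, rather than a hand-built dictionary.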

Is there sufficient segregation of administrative duties?

  • Layer 2 devices are now virtualised commodities; this could lead to a new breed of ‘virtualisation administrators’ who straddle the roles of traditional network and security engineers, and thus hold the keys to the virtual kingdom. SAN virtualisation admins will be responsible for assigning storage groups to specific VMs, so they may have rights across that network divide as well, essentially making them network/server/storage admins from a rights perspective.

Is there a suitable security model in place?

  • Where multiple user groups or zones, or different risk appetites, exist within an environment, separate security models should be created to contain breaches within one zone and help protect against known attack types.
    No single security model should be applied across all groups or zones.

Is there suitable resiliency built in to the environment?

  • What level of resiliency is required in terms of disaster recovery and denial-of-service mitigation? Are those measures in place and, more importantly, are they effective?
  • Is there sufficient Disaster Recovery built into the environment?

How does the virtual environment interact with existing network architectures and authentication mechanisms?

  • Does this introduce a weakness to the environment that could be utilised by a malicious party?

Has a design review been completed?

  • Virtualised environments are complex; as such, an effective and detailed review of the proposed design should be conducted during the design stage to aid the development of a more robust and secure system. The output from threat modeling should be incorporated into this review to ensure that a suitable design is implemented.

Are you outsourcing any component of the virtualised environment?

  • Third-party relationships have become a major focus over the last few years, with several high-profile data loss incidents in the media. The contract with the vendor should include appropriate clauses to ensure data security, while formally enabling testing and remediation activities. More specifically, the contract should facilitate regular security testing.
  • Does the outsource contract allow you to define the physical location of your data? The jurisdiction can affect not only the threat model, but also your handling of risks.

To Close

Any security testing conducted against the virtual environment should follow detailed, industry-standard methodologies (OWASP, CREST, OSSTMM) and comprise both infrastructure testing / build reviews and security device configuration reviews (i.e. firewall rule sets). However, specific testing aimed at breaking out of the virtual environment will also need to be included, as the ability to access the restricted hardware layer will result in data leakage and potentially the ability to compromise any other attached Virtual Machines (gaining unauthorised access to user data and systems).

Essentially, many of the areas of risk specific to virtualisation arise from the capabilities it provides. For example, virtualisation allows systems from different networks to be hosted on a single physical server. This can have benefits in terms of reduced costs and datacentre space requirements, but it also introduces a new perimeter between networks: the hypervisor. If a security issue affects the hypervisor, it can allow attackers to jump from one network zone to another, effectively bypassing existing security controls such as firewalls.