Archives for 2013

Cloud Security

On the 24th September I had the opportunity to speak at the Cloud Security Alliance Symposium, a free event in support of the Cloud Security Alliance EMEA Congress 2013, hosted in Edinburgh. My talk focused on real life examples of cloud security issues and on internal research that we at 7 Elements had been working on. Our earlier paper on cloud security issues can be found here. This blog post covers some of the themes discussed during my talk.


Cloud Basics

What is the Cloud? Well, in short, it is a great marketing gimmick. There is no single thing called the ‘Cloud’. The Cloud is a term used to describe multiple service offerings such as Software as a Service (SaaS), Platform as a Service (PaaS) and Infrastructure as a Service (IaaS). All of these are characterised by on-demand provisioning, the ability to scale rapidly, and payment only for the amount of resource required at any given point.


Key Risks

What are the key risks presented by using the Cloud? For me, the key risks and some of the issues that an organisation should explore when looking at the Cloud break down as follows:

Cloud Security Risks


Legal Jurisdiction

As an organisation you should be aware of how legal requirements to disclose data may be affected by the geography of where the data is stored. If you are based in the UK and use a US based Cloud provider, consider the impact on your organisation if the US courts enforce disclosure of your sensitive data. Where the Cloud is used to store or process sensitive personal data, there may be an impact on your compliance with the relevant regulation (in the UK, the Data Protection Act), which you will need to fully understand and mitigate.


Geographical Location

Different geographical locations mean different legal jurisdictions, which will have an impact on your legal and regulatory requirements within each of those regions. This may restrict the type of data that can be stored or processed or limit how the data in question can be transferred between locations. The ability to encrypt data will also be impacted within certain locations due to export restrictions.


Access to Data

Many Cloud services are based on the use of shared services or multi-tenancy solutions. The benefit to the end user is reduced costs, but this can also lead to security concerns. The data may be at risk of attack from another user of the same Cloud service due to the architecture in use. Consideration should be given to how the Cloud provider has limited the possibility of data compromise.


Data Destruction

With the Cloud, you can grow and shrink your resource requirement. When the data on disks is no longer needed, it will need to be destroyed. You will need to gain assurance that the data has been destroyed in compliance with your organisation’s standards, that the next user of that environment will not accidentally gain access to it, and that you have met any regulatory requirements.


Data Availability

The Cloud sells itself as always being there. The data is ‘in the Cloud’, so you will always have access to it. However, the Cloud has its own implications for your organisational business continuity plans and disaster recovery approach. Consideration should be given to scenarios where the Cloud provider fails, or where your ability to connect to the Internet fails, either of which may render the data unavailable.


Economic Denial of Service

What controls do you have in place to protect against unauthorised provisioning of cloud instances? Based upon a simple example of an attacker gaining access to an organisation’s provisioning capability (a real life example of gaining access is included later in the blog), we have estimated that an attacker could cause an individual organisation £14,000 of costs in a single day. More on this will follow in a separate blog on Economic Denial of Service.
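As a back-of-envelope illustration of how such a figure can arise (the hourly rate and instance count below are assumptions for illustration, not real provider pricing):

```python
# Illustrative economic denial of service cost estimate. The hourly
# rate and instance count are assumed values, not real 2013 AWS pricing.
hourly_rate_gbp = 1.50   # assumed cost per instance-hour
instances = 400          # assumed number of instances an attacker can spin up
hours = 24               # left running for a full day

daily_cost = hourly_rate_gbp * instances * hours
print(f"Estimated cost: £{daily_cost:,.0f} in a single day")
```

At these assumed rates the bill lands in the same region as the £14,000 estimate above.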


Cloud Security

The talk then moved on to real life examples of cloud based security issues.


Geographical Location

The first focused on the geographical location for data. Did you know that internally created or ‘private’ cloud installations can be configured to automatically connect to the public cloud if capacity is reached?

The following example shows a private cloud, configured to do just this:


Server instantiated on Eucalyptus.
Number of instances running: 1
Auto-scaling successful
Instantiated web server: instanceID i-35AE00C1
Number of servers: 2
Cloud bursting successful
Instantiated web server on EC2: instanceID i-32CB323A
Number of servers: 3


The issue here is that the data is now outside of the organisation’s boundary and is stored on Amazon EC2. In this scenario there would be no prior warning and no assessment of the data that is now in the public cloud, which could lead to information disclosure or a breach of data handling requirements.


Access to Data

We then looked at the issue of who has access to your data. A recent article outlined how Dropbox was accessing uploaded Word documents. The researcher discovered that his documents were being opened by Dropbox-owned Amazon EC2 instances automatically, ten minutes after they had been uploaded, although other file types were not being accessed. The following screenshot shows the EC2 IP addresses accessing the documents:




More on this issue can be found in our previous blog post here.


At the end of the talk, I then provided a live demo of how easy it was to identify valid Amazon EC2 and S3 access and secret key values and use these to enumerate running cloud instances.


$ ruby enumerate.rb
Enumerating AWS account AKWWR5MSIHCI7FH3HIAA
EC2 Instances
[*] i-828b26e4 / running /

S3 Buckets
– /home/data/_backups/20130902.database.sql.tgz
– /home/data/_backups/20130901.database.sql.tgz


An individual who has access to these credentials could choose to start new cloud instances (potentially leading to an economic denial of service), stop current services (a more conventional denial of service) or, more importantly, access the data currently stored within those instances. This issue will be covered in more detail in our next Cloud Security blog.
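Identifying candidate key material in the first place is largely a matter of pattern matching. A heuristic sketch follows; the regular expressions are illustrative and will produce false positives:

```python
import re

# AWS access key IDs are 20 upper-case alphanumeric characters;
# secret keys are 40 characters of base64-style text. Heuristics only.
ACCESS_KEY_RE = re.compile(r"\b[A-Z0-9]{20}\b")
SECRET_KEY_RE = re.compile(r"\b[A-Za-z0-9/+]{40}\b")

def find_candidate_keys(text):
    """Return candidate access key IDs and secret keys found in text."""
    return ACCESS_KEY_RE.findall(text), SECRET_KEY_RE.findall(text)

# Example: scanning a leaked config fragment (secret value is dummy data)
leaked = "aws_key = AKWWR5MSIHCI7FH3HIAA secret = " + "x" * 40
access, secret = find_candidate_keys(leaked)
print(access, secret)
```

Any match found in public code or configuration repositories should be treated as compromised and rotated.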


As we have seen, in many ways the Cloud is no different from the wider challenge of managing an organisation’s data securely. However, with its unique opportunities come unique risks. As such, we need to understand those risks, assess the data that we wish to put into the Cloud, and understand how important that data is to the business in terms of confidentiality, integrity and availability.



Has Dropbox Been Opening Your Private Documents?

While testing a new service called HoneyDocs, a service that allows the creation of documents that send a call back with a unique tracking code notifying you that the document was viewed or opened, Daniel McCauley discovered that his documents were being opened by Dropbox-owned Amazon EC2 instances.

(Yes, HoneyDocs will also know when someone is accessing your documents as well 🙂 )

The issue was addressed by Andrew Bortz (Security Expert at Dropbox) on Hacker News, who explained that the Dropbox team disabled the loading of external resources. This renders this method of discovering whether Dropbox is opening your files useless, but doesn’t prevent Dropbox (or any other third party) from reading them.

The Dropbox team has downplayed the issue, reasoning that generating thumbnails of files for user browsing requires opening them. The fact remains that nothing prevents a cloud-based provider from accessing the resources stored with it.

Although not new, I feel this news hasn’t been given proper media attention or online discussion, as it affects a large number of internet users. Dropbox is one of the most popular cloud-based file storage services, if not the most popular, and its users might not fully understand the extent of the privacy and security issues.

Here at 7 Elements we have discussed cloud-based security issues before, but we still feel it is important to keep users informed about being and staying safe online.

What can I do to protect my files?

Using verified software that encrypts your files is the only sure way of ensuring that no one else has access to them in this cloud-focused world. For an extra layer of security, be sure to encrypt the files’ names as well, not just their contents.
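As a sketch of that approach in Python, here is client-side encryption of both a file’s name and its contents before upload, using the third-party cryptography library (our choice for illustration; any well-reviewed encryption tool will do):

```python
# Client-side encryption sketch: encrypt name and contents before upload,
# so the provider only ever sees ciphertext. Requires: pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # keep this key away from the provider
fernet = Fernet(key)

name = "2013-accounts.xlsx".encode()
contents = b"confidential figures"

enc_name = fernet.encrypt(name)      # ciphertext file name
enc_contents = fernet.encrypt(contents)

# ...upload enc_name / enc_contents to the cloud provider...

# Only the key holder can recover the originals:
assert fernet.decrypt(enc_name) == name
assert fernet.decrypt(enc_contents) == contents
```

The key never leaves your machine, so the provider cannot open the files even if it wants to generate thumbnails.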

Cloud Security Alliance Symposium

Our CEO, David Stubley will be presenting on the subject of Cloud Security at the Cloud Security Alliance Symposium on the 24th September. Further information can be found here.

CVE-2013-5669 Thecus Plain Text Admin Password

Advisory Information

Title: Thecus NAS Server N8800 Firmware 5.03.01 plain text administrative password
Date published: August 2013
Ref: CVE-2013-5669 CWE-319

Advisory Summary

The Network Attached Storage (NAS) Administration Web Page for Thecus NAS Server N8800 transmits passwords in cleartext, which allows remote attackers to sniff the administrative password.



Affected Software

NAS Server N8800 Firmware 5.03.01

Description of Issue

The Thecus NAS Server N8800 sends NAS administrative authentication credentials in plaintext across the network. The credentials may be disclosed to attackers with the ability to intercept network traffic, which may enable them to gain unauthorised access to the NAS administrative interface.


There is no exploit code required.

CVE-2013-5668 Thecus Domain Administrator Password Disclosure

Advisory Information

Title: Thecus NAS Server N8800 Firmware 5.03.01
Date published: August 2013
Ref: CVE-2013-5668 CWE-317

Advisory Summary

The Domain Administrator Password within the ADS/NT Support page is disclosed due to clear text storage of sensitive information within the GUI.



Affected Software

NAS Server N8800 Firmware 5.03.01

Description of Issue

The Domain Administrator Password within the ADS/NT Support page is disclosed due to clear text storage of sensitive information within the GUI. Any user who has access to this page is able to retrieve the ADS/NT administrator ID and password. This could enable an attacker to gain access to the domain hosting the storage server.


Attackers can use a browser to exploit these issues.

CVE-2013-5667 Thecus OS Command Injection

Advisory Information

Title: Thecus NAS Server N8800 Firmware 5.03.01 get_userid OS Command Injection
Date published: August 2013
Ref: CVE-2013-5667 CWE-78

Advisory Summary

A lack of input validation allows an attacker to execute OS commands directly on the operating system.



Affected Software

NAS Server N8800 Firmware 5.03.01

Description of Issue

The application accepts user input through the get_userid parameter that can be used to create OS commands that are redirected to the operating system. An attacker can use this flaw to execute arbitrary commands.
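This class of flaw, and the standard defence, can be sketched in Python (the parameter value below is a hypothetical payload, not the actual Thecus exploit string):

```python
import shlex
import subprocess

user_id = "admin; cat /etc/shadow"   # hostile value for a user id parameter

# Vulnerable pattern (do NOT do this): interpolating the value into a
# shell string lets ';' end the intended command and start another:
#   subprocess.run("some_lookup " + user_id, shell=True)

# Safer: pass arguments as a list, so the whole value is a single argv
# entry and shell metacharacters are never interpreted:
result = subprocess.run(["echo", user_id], capture_output=True, text=True)
print(result.stdout.strip())         # the ';' is just text, not a command

# If a shell string is unavoidable, quote the value first:
quoted = shlex.quote(user_id)        # "'admin; cat /etc/shadow'"
```

Combined with strict input validation, this removes the path from user input to the operating system.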


Standard request:




Command Injection PoC:

1. Write value for user admin to /tmp


2. Display value of /tmp




Apache Struts 2 Exploit – have you patched?


In early July and then in mid July, the Apache Struts project released information on two new vulnerabilities in Struts 2. These vulnerabilities appear to have gone under the radar in terms of patching urgency, and active exploitation is now happening in the wild. They appear to have gone unnoticed because they have received little media attention, and the vulnerability description perhaps doesn’t do justice to the potential scope of the issue:

Apache Struts 2 before allows remote attackers to execute arbitrary OGNL code

Of course, the vulnerabilities will have been dealt with through organisations’ patching programs, if they were picked up. But have you assessed the potential impact correctly and taken steps to remediate this issue?

So what’s the true potential impact of the vulnerability if it were exploited?

The vulnerability alone may not be hugely significant, but when put into the context of an attack it can have much greater consequences. The vulnerability allows for some post-exploitation techniques to be utilised, such as installing backdoors and JSP post-exploitation tool kits. This allows for more elaborate and complex attacks to occur.

The true impact of the exploitation of this vulnerability when combined with post-exploitation tool kits could be full compromise of a system with the ability for that system to be used for onward compromise of connected hosts.

How easy is it to use?

7 Elements has discovered an exploit kit designed to target Struts 2 vulnerabilities. Whilst the exploit code itself required reasonable skill to construct, the toolkit enables attackers with a low level of skill to attack Struts 2 vulnerabilities. The Chinese-built toolkit essentially allows an attacker to enter their target via copy and paste, press a button and view the results.

Struts2 exploit code


(It comes with instructions on how to use the tool in a blog post that can be easily translated via online translators. As such, this isn’t just available to Chinese speaking script kiddies!)


The exploit code makes use of the following arbitrary code execution vulnerabilities within Struts 2:

– 2010 S2-005: (CVE-2010-1870)

– 2011 S2-009: (CVE-2011-3923)

– 2013 S2-013: (CVE-2013-1966)

– 2013 S2-016: (CVE-2013-2251)


The vulnerabilities exist due to a lack of proper input validation: end user input is not validated before being used by the web application. If an attacker can embed malicious commands in the affected parameters, the system may execute those commands on behalf of the web application, resulting in remote code execution. A more detailed explanation of the inner workings of the exploit can be found here.
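A minimal allowlist check illustrates the missing validation step (an illustrative Python sketch, not the actual Struts fix):

```python
import re

# Accept only values matching a strict pattern before the input reaches
# any interpreter such as OGNL, SQL or a shell. Allowlist, not blocklist.
VALID_PARAM = re.compile(r"[A-Za-z0-9_-]{1,32}")

def validate(value):
    if not VALID_PARAM.fullmatch(value):
        raise ValueError("rejected: disallowed characters in parameter")
    return value

print(validate("redirect_target"))   # accepted
try:
    validate("${#context['...']}")   # OGNL-style payload
except ValueError as err:
    print(err)                       # rejected
```

Rejecting anything outside a narrow known-good character set stops payloads that a blocklist of known-bad strings would miss.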

The option to choose which of the four Struts 2 vulnerabilities to exploit indicates that all are still useful to an attacker, and shows that vulnerabilities from three years ago are still being found.

What can you do?

Confirm that your current patching process has identified the need to apply this patch and has correctly triaged the effort and priority. If you are yet to deploy a patch, then Apache has “strongly recommended” that Struts 2 users upgrade to the latest release, which will address the current vulnerabilities.

If historically your organisation has chosen to not patch and deploy other mitigation techniques such as filtering, revisit these controls to ensure that they are still effective and provide the desired protection from the current Apache Struts 2 Exploit.

Root Cause Analysis

Security Testing Root Cause Analysis:  A New Way of Reporting

At 7 Elements we have introduced an additional way of reporting on the findings from our security tests: Root Cause Analysis. Whilst root cause analysis is not a new concept, it has not to date been readily applied to security testing output. We feel it is time for a change.

Current Practice

At present it is standard practice across the industry to report on the findings from security tests individually.  This is of course necessary so that the extent of each vulnerability and the risk it poses can be understood and appropriate remediation applied. However, this only enables organisations to view one vulnerability at a time and thus results in organisations tackling vulnerabilities on a case by case basis.

What is Security Testing Root Cause Analysis?

The causes of vulnerabilities can often be attributed to a single technical cause, i.e. the root cause.  Frequently, vulnerabilities share the same root cause. 7 Elements has developed a new technique that enables us to identify the common technical root causes of vulnerabilities for both Application and Infrastructure findings.  In addition to reporting on individual vulnerabilities, 7 Elements also reports on the number of vulnerabilities attributed to common technical root causes, with individual narrative against each root cause.
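In practice the grouping amounts to counting findings by their technical cause. A small sketch with wholly illustrative findings:

```python
from collections import Counter

# Hypothetical test findings tagged with a technical root cause; the
# names and causes here are illustrative, not taken from a real report.
findings = [
    ("Reflected XSS",        "missing output encoding"),
    ("Stored XSS",           "missing output encoding"),
    ("SQL injection",        "missing input validation"),
    ("OS command injection", "missing input validation"),
    ("Outdated OpenSSH",     "no patch management process"),
    ("Outdated Apache",      "no patch management process"),
    ("Outdated PHP",         "no patch management process"),
]

by_cause = Counter(cause for _, cause in findings)
for cause, count in by_cause.most_common():
    print(f"{count} finding(s) <- {cause}")
```

Here seven findings collapse to three root causes; fixing the patching process alone would clear three of them.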


Example of RCA

Why should we use Root Cause Analysis?

The identification of the root causes of vulnerabilities enables organisations to take a more strategic view of their vulnerability management and information security practices. It allows organisations to understand not only where they may have gaps, but also why. This enables organisations to take remediation action that tackles a root cause and thus remediates multiple vulnerabilities in one strategic step. Through our root cause analysis, organisations frequently find that the majority of their vulnerabilities have only a couple of root causes. As a result, by tackling one root cause an organisation is able to tackle multiple vulnerabilities with only one remediation action.

Prevention is better than cure, though. By tackling root causes, organisations are able to take preventative action to stop vulnerabilities from arising in the same way in the future and, more importantly, potentially remediate issues that have yet to be identified.

If you would like to know more about how we approach testing and the additional value that we deliver, then please get in touch with our team.


Digital Technology Awards 2013

Last night the team from @7elements attended the ScotlandIS Digital Technology Awards 2013 to find out if we were successful in the ‘Best Newcomer’ category.

With nerves increasing, thankfully our category was first up, and the shortlisted companies were announced in reverse order for the award.

The evening was great fun, and well put together by ScotlandIS. Well done to all of the other winners and all of the shortlisted companies. A full list for each category can be found here.


Puppet Vulnerability

This week has seen a timely reminder of the importance of effective patch management in information security with the release of a security advisory about a remote code execution Puppet Vulnerability. Organisations need to ensure that all services and technology platforms are covered, not just the major players.

Would you say ‘yes’ if asked whether you have an effective patch management process? For many, ‘yes’ means being aware of the need to patch and taking steps to maintain patching levels on core technology platforms such as Microsoft and Oracle. But what about the other key enabling technology in use within the organisation?

Puppet Labs[1] provides IT automation software that enables organisations to standardise builds and deployments and manage compliance activity through centralised patch management. On Tuesday they released information on a remote code execution vulnerability:

When making REST api calls, the puppet master takes YAML from an untrusted
client, deserializes it, and then calls methods on the resulting object. A YAML
payload can be crafted to cause the deserialization to construct an instance of
any class available in the ruby process, which allows an attacker to execute
code contained in the payload.[2]

What does this mean? Well, a malicious individual with internal network access could attack and gain remote access to the ‘Puppet Master’.

As the ‘Puppet Master’ is the central server that manages all functions and controls the remote machines, gaining remote access to this device could potentially enable an attacker to make changes on all devices within the environment under control of the master. It would even be possible to create new accounts on all of the remote machines that are managed, thereby giving the attacker legitimate credentials on all of these devices.[3]
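The underlying class of bug, deserializing attacker-controlled data into live objects, is not unique to Ruby or YAML. A Python sketch using pickle (an analogous unsafe deserialiser) shows why loading untrusted serialized data is dangerous:

```python
import os
import pickle

class Payload:
    def __reduce__(self):
        # __reduce__ tells pickle how to "reconstruct" the object: by
        # calling a function of the attacker's choosing on load. Here a
        # harmless stand-in (os.path.basename) rather than os.system.
        return (os.path.basename, ("/attacker/ran-this",))

blob = pickle.dumps(Payload())   # what an attacker would send over the wire
result = pickle.loads(blob)      # deserialising runs the attacker's call
print(result)                    # 'ran-this': proof the chosen function ran
```

The fix in both ecosystems is the same in spirit: never feed untrusted data to a deserialiser that can construct arbitrary objects; use a safe, data-only format instead.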

The ‘Puppet Master’ also functions as a certificate authority by default. So any compromise could also have an impact on the integrity of those certificates.

All in all, quite a headache if this were to be realised. It is a good example of why an organisation should take steps to identify all of its vendors, have a process in place to collate all relevant security advisories, and be able to assess and implement updates in a timely and controlled manner.

Puppet Labs has issued updated software to address this specific vulnerability and details can be found here.