Archives for 2015

Keeping cool in a crisis – Incident Response

Back in January 2015, SC Magazine published my article on keeping cool in a crisis. With the ever-growing list of breached organisations, maybe it is time to revisit that advice.

Cyber-Attacks

In today’s world it is inevitable that organisations will suffer cyber-attacks. When an organisation is attacked their incident management procedures will be key in sustaining the company through the crisis. However, with large scale breaches continuing to cost organisations and individuals dearly as well as hit the headlines, more could be done to improve incident management procedures.

Preparation

Preparation is key to any planned response but it can be difficult for organisations to anticipate what will be required in the event of an incident. For many organisations, incident response procedures plan to tackle scenarios identified through business continuity risks or following internal incidents. Procedures are often completed or reviewed as part of an annual business planning process by those with a focus on the business. This results in an introspective focus that can leave incident management procedures lacking.

An introspective focus does not effectively anticipate the full suite of scenarios that an organisation may face in responding to an incident. Such an internal emphasis does not take into account the evolving threat landscape or the changing external environment in which the organisation operates. Without placing incident response measures in this dynamic external context, organisations may find their response measures are lacking in the face of current attacks.

Learning from others

Of course, gaining information about factors external to your organisation, such as threats, can seem an insurmountable challenge, but organisations have an opportunity to review the breaches suffered by their competitors or by other organisations similar to their own.

Groups conducting attacks, whether for financial gain or other motives, will frequently use the same methods of compromise. This fact has clearly been demonstrated in the recent attacks on the electronic point of sale systems in the US retail sector and the on-going use of targeted phishing emails to gain access to corporate networks. There are also previous attack trends of utilising SQL injection or memory scraping malware as attack methods to draw upon as examples of attack methodologies being reused. The use of similar methods by attackers means that organisations have an opportunity to identify attack approaches and vulnerabilities that could be applicable to them. Organisations should therefore look to use the experiences of others within their sector to enhance their own incident management procedures.

While it is accepted that the full details of an incident will not be publicly available, many industries have information sharing forums and employees build up relationships with their counterparts in other organisations. It is likely that an organisation will be able to garner sufficient information to identify the vulnerabilities exploited by attackers and the key attack vectors. This information can be used to review the incident and determine if the organisation is itself vulnerable to such an attack. In short, organisations should conduct a post-incident review of incidents that impact other organisations.

Using the information available, an organisation can identify potential attack scenarios and whether they are likely to be breached as a result. By playing out these scenarios within the context of their own environment, organisations will be able to identify if they have compensating controls in place or where they may be required. Once compensating controls are in place organisations can then test their effectiveness in the context of these scenarios and therefore gain assurance that they are not exposed to the attacks their peers have suffered.

This process may be assisted by experts such as security testers, ordinarily external to the incident response planning process. Penetration testers can provide insight into the scenario planning and assessment process. By the very nature of their jobs, penetration testers are often skilled at identifying and understanding attack vectors. By using such experts, organisations will be able to add more rigour to their assessment of scenarios as well as challenge preconceptions. Ultimately this will result in a more resilient approach to incident response.

In summary

Reviewing the incidents of others will enable organisations to anticipate the types of attacks they may be vulnerable to and prepare for them, ultimately keeping cool in a crisis.

By keeping abreast of the threat landscape, spotting trends within relevant industries and reacting to the external environment, organisations will be able to plan effectively for incidents, if not reduce the likelihood of a successful attack. Should an attack occur, organisations will have more resilient incident response measures in place with which to tackle these anticipated threats. By learning from others’ misfortunes organisations may be able to avoid the pain of going through a similar experience.

Click here to find out more about our approach to incident response.

Cryptic message of the day

MjAxNS0wNy0yM1QwMDowMTowMCswMTowMCAweDM3CTB4NDUgCTB4NWYgCTB4
MzUgCTB4NTkJMHg1MgkweDUzCTB4NWYJMHg0ZgkweDRjCTB4NDQJMHg1Zgkw
eDU0CTB4NGYJMHg0NAkweDQxCTB4NTk=

OpenSSL Vulnerability Notice, Patch Now!

On the 24th of June 2015, Adam Langley and David Benjamin (Google/BoringSSL) reported a vulnerability that allows attackers to cause specific checks on untrusted certificates to be bypassed. By bypassing checking of the CA (certificate authority) flag, attackers could use a valid leaf certificate to act as a CA and clients would “validate” an invalid certificate.

Today the OpenSSL project released an advisory and patches that remediate this problem in the OpenSSL certificate verification process (affecting versions 1.0.2b, 1.0.2c, 1.0.1n and 1.0.1o). The flaw exists within the verification process: if the first attempt to build a certificate chain failed, OpenSSL would attempt to find an alternative chain, and certain checks (such as the CA flag) could be bypassed during that second attempt.

OpenSSL have advised that: “This issue will impact any application that verifies certificates including SSL/TLS/DTLS clients and SSL/TLS/DTLS servers using client authentication.”

Detection

To detect which version of OpenSSL you have running, use the following command:

$ openssl version -a

Remediation

The patch, developed by the BoringSSL project, was published today. 7 Elements advise upgrading as soon as possible:

OpenSSL 1.0.2b/1.0.2c users should upgrade to 1.0.2d
OpenSSL 1.0.1n/1.0.1o users should upgrade to 1.0.1p
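The detection and remediation steps above can be combined into a small script. This is an illustrative sketch only: the helper name and version table are ours, built from the upgrade advice in this post, and the input is whatever `openssl version` prints.

```python
# Map each affected OpenSSL release to the patched release named above.
AFFECTED = {
    "1.0.2b": "1.0.2d",
    "1.0.2c": "1.0.2d",
    "1.0.1n": "1.0.1p",
    "1.0.1o": "1.0.1p",
}

def upgrade_advice(version_output: str) -> str:
    """Take the output of `openssl version` (e.g. 'OpenSSL 1.0.2c 12 Jun 2015')
    and say whether this advisory applies."""
    parts = version_output.split()
    version = parts[1] if len(parts) > 1 else ""
    if version in AFFECTED:
        return f"VULNERABLE: upgrade {version} to {AFFECTED[version]}"
    return f"not affected by this advisory: {version}"
```

Running this across an estate's `openssl version` output gives a quick view of which hosts still need the patch.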

Additionally, 7 Elements remind readers that support for OpenSSL versions 1.0.0 and 0.9.8 will end at the end of this year (31 December 2015).

Running unsupported software is discouraged and is usually symptomatic of a weak information security posture. Regular patching policies are key to protecting infrastructure assets within organisations.

If you would like support in completing any assurance activity then please get in touch using:
contact-us@7elements.co.uk.

References:

https://www.openssl.org/news/vulnerabilities.html
https://www.openssl.org/news/secadv_20150709.txt
http://cxsecurity.com/issue/WLB-2015070040
https://grahamcluley.com/2015/07/openssl-vulnerability-revealed/

Securing Server Message Block (SMB) Against Null Session Enumeration

Null session functionality within the SMB protocol enables anonymous access to hidden administrative shares on a system. Once a user is connected to a share through a null session they can enumerate information about the system and environment.

Information that can be gained includes (but is not limited to):

  • Users and groups
  • Operating system information
  • Password policies
  • Privileges
  • Available shares

Easy-to-use tools are freely available that can automate the enumeration and gathering of this data, providing an attacker with a wealth of information that may aid an internal attack. For example, the enumeration of identified user accounts, combined with details of the password policy in use, allows an attacker to conduct specifically targeted password guessing attacks, increasing the overall likelihood of success and resulting in account compromise.

Exposure to null sessions can be tested by issuing the following from a command line:

net use \\remote_IP_address\ipc$ "" /u:""

In this example the ipc$ share is a common default share that is often in use. Other options include admin$ and C$.
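To test a batch of hosts it can help to script the check. The sketch below only builds the command strings (the helper names are ours); the Windows `net use` form mirrors the example above, and `rpcclient` from the Samba suite is a common equivalent on Linux.

```python
def net_use_command(host: str, share: str = "ipc$") -> str:
    """Windows: attempt a null session against a hidden share."""
    return f'net use \\\\{host}\\{share} "" /u:""'

def rpcclient_command(host: str) -> list:
    """Linux: the Samba rpcclient equivalent (empty user, no password)."""
    return ["rpcclient", "-U", "", "-N", host]
```

Feeding the second list to `subprocess.run` and inspecting the exit code gives a quick, scriptable null-session check across a range of hosts.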

Taking action to disable null sessions can be an important step in hardening the overall security posture of an organisation. It should be noted that current operating systems limit access by default, with older operating systems providing configuration options to apply additional security controls.

In order to restrict or disable null sessions the following steps can be taken:


Local Based:

Edit the following registry key and then set the 'Value' accordingly:

Key Name: HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\LSA

Value Name: RestrictAnonymous

Type: DWORD

Value: x

Setting the value to 2 disables anonymous access and therefore requires an authenticated account to communicate with the service.

Setting the value to 1 allows anonymous access but will deny enumeration of user accounts and admin shares.
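For repeatable deployment, the same change can be captured in a .reg file. This sketch uses the most restrictive value of 2; substitute 00000001 if anonymous access must remain available:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\LSA]
"RestrictAnonymous"=dword:00000002
```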

Once the value has been changed, verify the change has taken effect by rebooting the device and attempting to initiate a null session.


GPO Based:

This can also be achieved through group policy, under Computer Configuration > Windows Settings > Security Settings > Local Policies > Security Options, by enabling the following settings:

Network access: Do not allow anonymous enumeration of SAM accounts

Network access: Do not allow anonymous enumeration of SAM accounts and shares

Note: Disabling anonymous access could have a negative impact on functionality that relies on it. 7 Elements recommends that consideration be given to the potential impact, and testing should be carried in a non-production environment before making changes to production systems.

Securi-Tay IV, a field trip

7 Elements are pleased to convey our experiences of Securi-Tay, the fourth annual security conference held at Abertay University. With a graduate and a senior tester attending, we split up to cover as many talks as possible. We have written an overview of a subset of the great talks we heard, in no particular order.

Gavin Millard, Tenable Network Security: The Five Stages of Security Grief

The first talk was by Gavin Millard, EMEA Technical Director of Tenable Network Security. Gavin spoke about the five stages of security grief: the process of establishing what stage people are at in identifying information security issues and deciding how the problems identified are dealt with. Gavin drew parallels with the Kübler-Ross model of grief (Elisabeth Kübler-Ross, 1969), applying it to the anxiety of not knowing what actions to take to protect an organisation from pernicious threat actors. The talk focused on elements of the human condition and the coping mechanisms often employed before an organisation moves through the five stages of security grief, and outlined how security consultants can assist organisations in progressing through to the final acceptance stage.

Dr Greg Fragkos, Virtual terminals and POS security; How I had a chance to become a billionaire.

In this talk Dr Fragkos highlighted the inherent security flaws of point-of-sale (POS) devices and virtual terminals, in spite of their use of strong encryption and secure communication channels to connect to remote authorisation servers. The talk focused on the ease of committing fraud, for example using a card to purchase items seemingly legitimately without the transaction ever leaving the local payment device.

Dr Fragkos provided some helpful advice for consumers. If you have any item containing RFID technology it should be kept in an RFID blocking container.

Read more about measures against RFID skimming: http://en.wikipedia.org/wiki/RFID_skimming

Freaky Clown, Portcullis Computer Security: Robbing Banks and other fun tales

This talk highlighted the lack of a culture of challenge within the office workplace. The talk covered many techniques a social engineer uses to prepare for a test, such as using publicly available resources like Google Maps to investigate and pinpoint flawed security implementations, or gaps in the physical security meant to protect digital and physical access to restricted areas within a business. This included images of incorrectly installed magnetic locks on doors, spoofing heat-based movement sensors, and methods for bypassing controlled access at receptions.

7 Elements have also witnessed a lack of challenge culture and flawed implementations of corporate access control mechanisms. Often the security deficit stems from problems with the technology in use: instead of fixing problems with the automated access control mechanism, the technology is sometimes simply deactivated or only partially implemented so that business efficiency goals are met first.

Graham Sunderland, Portcullis Computer Security: We don’t take kindly to your types around here!

This talk discussed and demonstrated relatively unknown pitfalls that may inadvertently be introduced into code. The problem stems from a lack of focus on security considerations when coding in object-oriented programming languages such as C++ and PHP. The focus of the talk was the serialisation and deserialisation of objects, and several common development-pattern vulnerabilities were demonstrated.

Lewis Arden, Leeds Beckett University: Creating vulnerable systems containing dynamically allocated vulnerabilities

Drawing upon the needs of students studying the “Computer Forensics and Security” undergraduate course, Lewis identified the opportunity to create systems containing dynamically allocated vulnerabilities. Popular vulnerable systems such as Metasploitable2 and OWASP BWA have many online walkthroughs explaining how to exploit their vulnerabilities. While walkthroughs help students build an exploitation methodology, vulnerabilities exploited on static systems cannot serve to evaluate student learning. Dynamic allocation of vulnerabilities allows each assigned box to have its own specific set of vulnerabilities. This approach should stop students from sharing solutions and instead encourage the sharing of methodologies, establishing a cooperative learning environment. The tool is to be launched soon at http://z.cliffe.schreuders.org/index.htm.

Barry Myles, SDR for security testers

Barry Myles’ presentation was severely hit by the live-demo gods’ wrath. During his presentation, Barry showed how he replaced the remote control of an array of radio-controlled power plugs. Unfortunately, some of the demos he had planned were not shown and others did not work as planned. However, the presentation was a great introduction to Software Defined Radio and has been responsible for the acquisition of additional research hardware (aka Toys for InfoSec boys!). I believe there is a lot of security research to be done on these types of devices, as they are usually built and developed without security in mind.

Kevin Sheldrake and Steve Wilson, Embedded Tool Kit

This presentation was divided in two and felt like it went by really fast! Steve Wilson spoke first and focused on the hardware side of security testing embedded devices, touching briefly on the required hardware and showing the process of testing a TP-Link wireless extender (TP-WA850). Kevin Sheldrake spoke about some tools he has been developing to assist in debugging the software running on these embedded devices: bps (a non-interactive debugger), cliapi (a command-line utility that runs functions in executables and libraries) and jackal (an SSL certificate cloning utility for MitM attacks).
Their tools can be found at http://rtfc.org.uk.

Steve Lord, Anonaflops: It’s part in my downfall

Steve Lord kept us engaged through what was quite a technical presentation by using down-to-earth, real-world examples without making them overly complicated, as is usually the case in the InfoSec world. He knows his subject and was able to dispel the usual misconceptions about anonymity, privacy and free access to information. During his talk he debunked the Anonabox project and reviewed a much better concept called Cloak.

Javvad Malik, How to hack your career path and stand out

Javvad’s presentations are always fun to watch. He is a gifted speaker and focuses on the personal growth side of being an InfoSec professional. This presentation was no different: he introduced what he called the Personal OSI Model, a collection of items someone should take into account in order to improve their professional career. I found his “Skills vs Reputation” discussion with Steve Lord, on how to weigh up a prospective employer, the most thought-provoking part.

Rory McCune, Secure and “Modern” Software Deployment

Modern software is complex and has a lot of dependencies. Nowadays, dependencies are usually installed automatically from repositories and are almost always implicitly trusted by default.
Rory McCune’s presentation described a couple of scenarios showing different ways the deployment process can be abused by attackers to gain access to unsuspecting users’ machines. The attackers he described ranged from high-end nation-state-sponsored groups to modest ones with limited resources.

Dr Jessica Barker, Social Security

Independent information security expert Dr Jessica Barker tackled the usual InfoSec adage “It’s the user’s fault!”. She challenged this view using Rosenthal’s Pygmalion effect, and illustrated the usual victim-blaming approach with the Golem effect. She hopes these insights will help us as InfoSec professionals improve the way we handle the ever-important task of user education, often cited as InfoSec’s biggest problem.

Stephen Tomkinson, Abusing Blu-ray Players

Stephen Tomkinson’s presentation on abusing Blu-ray players was a very nice example of good, relevant research sponsored by an employer. He showed different types of attacks on Blu-ray players, ranging from network attacks to physical disc attacks, which would allow an attacker to gain a strong foothold on your local network.
The research included the development of a new tool that will surely lead other researchers to find further vulnerabilities in this ever more connected world.

Cyber Essentials


7 Elements achieves Cyber Essentials Certification body status.

7 Elements are now a certification body able to deliver Cyber Essentials (CE) and Cyber Essentials Plus (CE+) engagements for organisations aiming to meet this standard.

The move comes as 7 Elements looks to expand its service offering with a cost-effective security solution for all clients, which now includes conducting this government-approved assessment. The CE and CE+ scheme has been developed as a method to significantly reduce business vulnerability at an achievable cost. More information on the scheme can be found here.

Passphrase Guidance

A secure and functionally usable form of password authentication is the passphrase: a combination of words entered as a password. Recent attacks that have resulted in password leaks provide a wealth of knowledge about common password patterns, and passphrases offer a more secure yet user-friendly alternative to traditional passwords.

A well-formed passphrase can be far more secure overall than other password authentication alternatives. This additional security stems from maximising memorability and cracking complexity while minimising guessability, observability and recordability. Passphrases increase usability because they can hold special or unique significance for the user, are more memorable, and their length and form provide significant complexity. Passwords are much easier to crack than passphrases, regardless of the hashing algorithm used, and cracking passwords is becoming progressively easier as attackers harness cloud computing and new hardware. The approximate time required to brute-force a complex eight-character password lies between seconds and days for most hashing algorithms, whereas each additional passphrase character increases that time exponentially. Passphrases are therefore computationally far more challenging to crack than passwords.

Passphrase Generation Recommendations

The following factors are recommended in the generation of secure passphrases.

• Use three or more uncommon words, for example “steep alphabet dawn win”.

• The phrase should not be common, for example a well-known saying or from a film or book.

• Use spaces or special characters between words to further enhance the security. For example, “steep-alphabet-dawn-win” or “steep!alphabet-dawn!win”.

• As with passwords, do not enforce excessive expiry of passphrases to avoid user fatigue that may result in users employing insecure coping strategies that ultimately degrade and diminish security. Enforcing new and distinct passphrase selection every three months should meet the needs of a risk averse organisation.

• A generous limit of 32 characters will give users the freedom to create more secure passphrase word combinations, while not placing excessive storage or computational demands on existing systems.

• As with complex passwords, secure passphrases should consist of at least two of the following elements; however, users should be free to choose from any of these categories:

  • Uppercase letters
  • Lowercase letters
  • Numbers
  • Punctuation marks
  • Mathematical or other conventional symbols
  • Spaces
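The generation recommendations above can be sketched in a few lines of Python. The word list here is a tiny illustrative sample; a real implementation would draw from a large dictionary (such as a diceware list), and `secrets` is used rather than `random` so the selection is cryptographically strong.

```python
import secrets

# Tiny illustrative word list; use a large dictionary in practice.
WORDS = ["steep", "alphabet", "dawn", "win", "gravel", "mimic", "orbit", "plume"]

def generate_passphrase(word_count: int = 4, separator: str = "-") -> str:
    """Pick uncommon words at random and join them with a separator."""
    return separator.join(secrets.choice(WORDS) for _ in range(word_count))
```

With a dictionary of a few thousand words, four randomly chosen words already give a very large search space, in line with the brute-force argument above.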

Passphrase Security Augmentation Elements

In developing a passphrase policy it is crucial that the system is practical for users. This can be achieved by ensuring that verification methods impose a minimal burden on users. To assist in this the following factors should be considered in developing a passphrase policy.

  • Memorability: Passphrases must not be made so complex as to be difficult to recall.
  • Guessability: Passphrases should be hard to guess. This means family, colleagues, friends and social engineers should not be able to guess passphrases by exploiting the varying degrees of intimacy with a passphrase holder. Passphrases should not contain meaningful dates, pet names, addresses, hobbies, interests or otherwise.
  • Observability: Passphrases should be easy to enter. If a passphrase is overly time-consuming to enter, this enhances the ability of shoulder surfers to accurately observe its entry.
  • Recordability: Passphrase entry must be secure. Users should be naturally wary of highly observable key press combinations; for instance, the passphrase “qwe rty uiop” is highly recordable due to the sequential means of entry on standard keyboards. Characters should be obfuscated immediately as they are typed into the passphrase field, to prevent screen recorders from capturing passphrase input. The workstations in use must also be secure to ensure keyloggers are not in operation.

For more guidance on passwords, please visit the following link.

Password Guidance

Most organisations utilise passwords as a method of authenticating users as part of their access control solution for their systems. 7 Elements have often found that poor password policy or insufficient policy enforcement can be a severe point of failure in an otherwise secure system. For password authentication to be effective, the security provided by passwords must remain robust against persistent attacks from both human and computer sources. Organisations can take steps to ensure that the passwords used to access their systems are sufficiently strong by employing a robust password policy. This guidance lays out some key steps organisations can take to develop a robust password policy and therefore help ensure that strong passwords are used on their systems.

Password Formation Guidance

To ensure that users have strong passwords the following basic guidelines on how passwords are formed may be used as part of a robust password policy. A robust password policy should stipulate that a password has the following properties.

• Passwords should be a minimum of nine characters long. They should also be sufficiently complex to offset the likelihood of a successful brute force attack or guessing of the password.

• Passwords should not contain personal information such as names, addresses, birthdays, car registrations, ID numbers etc.

• Complex passwords should consist of at least four of the following elements; however, users should be free to choose from any of these categories:

  • Uppercase letters
  • Lowercase letters
  • Numbers
  • Punctuation marks
  • Mathematical or other conventional symbols
  • Spaces

• Use of common passwords should be banned. Common passwords can be compiled from the many repositories of passwords released after major account hacks.

• A history of old password hashes should be kept. This should be used to prevent users from re-using their previous passwords.

• Accounts should be locked out after a number of failed access attempts. This is ordinarily set to three attempts. This helps to reduce the likelihood of a successful brute force attack against accounts.

• Passwords should be changed at regular intervals. However, organisations should be aware that constantly enforcing password changes may cause users to develop password generation fatigue. This may result in users employing insecure coping strategies, such as writing passwords down or using non-complex passwords. This could eventually degrade the security of password authentication.

• Password reuse should be prohibited, so that a single user does not use the same password across multiple accounts. Furthermore, passwords across accounts should not be similar permutations of one another.
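The formation rules above can be enforced programmatically at the point of password selection. A minimal sketch follows; the function name is ours, and the common-password set is an illustrative subset of the leaked-password repositories mentioned above.

```python
import string

COMMON_PASSWORDS = {"password", "123456", "qwerty"}  # illustrative subset

def meets_policy(password: str) -> bool:
    """Check the basic formation guidance: nine or more characters,
    not a known common password, and at least four character classes."""
    if len(password) < 9 or password.lower() in COMMON_PASSWORDS:
        return False
    classes = [
        any(c.isupper() for c in password),              # uppercase letters
        any(c.islower() for c in password),              # lowercase letters
        any(c.isdigit() for c in password),              # numbers
        any(c in string.punctuation for c in password),  # punctuation/symbols
        any(c == " " for c in password),                 # spaces
    ]
    return sum(classes) >= 4
```

In practice the common-password set would be loaded from a published leaked-password list rather than hard-coded.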

Password Security Augmentation Elements

In developing a password policy it is crucial that the system is practical for users. This can be achieved by ensuring that verification methods impose a minimal burden on users. To assist in this the following factors should be considered in developing a password policy.

• Memorability: Passwords must not be so complex as to be difficult to recall.

• Guessability: Passwords should be hard to guess. This means family, colleagues, friends and social engineers should not be able to guess passwords by exploiting the varying degrees of intimacy with a password holder. Passwords should not contain meaningful dates, pet names, addresses, hobbies, interests or otherwise.

• Observability: Passwords should be easy to enter. If a password is overly time-consuming to enter, this enhances the ability of shoulder surfers to accurately observe its entry.

• Recordability: Password entry must be secure. Users should be naturally wary of highly observable key press combinations; for instance, the password “qwerty” is highly recordable due to the sequential means of entry on standard keyboards. Characters should be obfuscated immediately as they are typed into the password field, to prevent screen recorders from capturing password input. The workstations in use must also be secure to ensure keyloggers are not in operation.

• Complexity: A minimum password length combined with relative complexity is essential. Passwords do not need to be overly complicated to be memorable, but should instead be fortified through the discussed elements of augmentation to resist current and emerging password-guessing and hash-cracking techniques.

For a more robust approach to password management, take a look at our guidance on using a passphrase.

BitTorrent Distributed Denial of Service

We recently worked with a client that had suffered a denial of service attack on one of their websites. They wondered if we could tell them what had happened and how to stop it from happening again. So, time to start digging through logs to work out what was going on. It turned out to be a Distributed Denial of Service (DDoS) attack using BitTorrent clients against the target site. BitTorrent DDoS attacks are a pretty under-reported attack vector that I hadn’t come across before in the real world. The following blog takes a look at the log analysis and investigation that led to this understanding.

DDoS attacks like this are often carried out using bots: infected computers on the Internet, all under the control of a malicious individual or group. These bots are then used to perform activities, such as requesting web pages repeatedly. This causes the victim site to be quickly overwhelmed with requests, resulting in one of the following:

  • Exceeding the available network bandwidth.
  • Exceeding the application processing capacity.

Either event would deny access to legitimate users of the site.

However, in this case, the means of delivery was via BitTorrent clients and not infected machines.

Initial Log Analysis

A quick review of the logs showed that a large volume of traffic had been directed at the site from multiple sources within a very short space of time. We had requests for ‘announce.php’ coming from over 10,000 unique hosts in a specific geo location.

Here is an example of one of the logs captured by Apache (sanitised):

[source_ip] - - [date_time] "GET /announce.php?info_hash=%25B56%25B51u%252DU%2504Eb%2513%25EA%253BVFBu%257E%2512%25BA
&peer_id=%252DSD0100%252D%25127%25F9%25AD%25ABd%252D%25CFN%2593%25FC%2521
&ip=192.168.1.31&port=11706&uploaded=1043070976&downloaded=1043070976&left=0
&numwant=200&key=6511&compact=1 HTTP/1.0" 301 655 "-" "-"

Looking at announce.php and its parameters (uploaded, downloaded and left) in more detail, we suspected this was BitTorrent traffic due to the format of the request. With a bit of googling this assumption was confirmed. In discussion with the client it was quickly established that their main corporate site was not acting as part of a peer-to-peer network and that this traffic was the source of the DDoS attack.
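The same check can be applied across the whole log set by testing each request path for tracker-style parameters. A sketch follows; the helper name is ours, and the parameter names come from the BitTorrent tracker protocol.

```python
from urllib.parse import urlparse, parse_qs

# Parameters a BitTorrent client sends to a tracker's announce URL.
TRACKER_PARAMS = {"info_hash", "peer_id", "uploaded", "downloaded", "left"}

def looks_like_announce(request_path: str) -> bool:
    """Return True if a logged GET path resembles a tracker announce."""
    parsed = urlparse(request_path)
    if "announce" not in parsed.path:
        return False
    return TRACKER_PARAMS.issubset(parse_qs(parsed.query))
```

Counting the unique source IPs of matching requests would reproduce the 10,000-host figure from the initial analysis.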

BitTorrent Analysis

All of the traffic appeared to be coming from the same geographic region, so we came up with two possible scenarios. One, that DNS cache poisoning was being carried out to send legitimate BitTorrent traffic to our victim server. Or two, that our victim server had been named as the tracker server in a .torrent file. Knowing which scenario was in play would dictate the best mitigation option, so back to the logs.

In their current format, our client’s logs were not helping us figure out which scenario was the most likely, as they were missing the ‘Host’ header of each request. As the attack was ongoing, we quickly updated the Apache logging configuration by editing the `apache2.conf` file to add the `%{Host}i` directive to the log format.

Apache logs are easily configured to provide additional details such as this, and for the small additional storage overhead it is definitely worthwhile having this pre-configured to save time during later analysis.
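As an illustrative sketch (the format name `combined_host` is our own choice), the change amounts to appending `%{Host}i` to the stock combined log format in `apache2.conf`:

```
LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\" \"%{Host}i\"" combined_host
CustomLog ${APACHE_LOG_DIR}/access.log combined_host
```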

With the new log configuration, we can now see the domain that is being requested (marked in grey):

[source_ip] - - [date_time] "GET /announce.php?info_hash=%25B56%25B51u%252DU%2504Eb%2513%25EA%253BVFBu%257E%2512%25BA
&peer_id=%252DSD0100%252D%25127%25F9%25AD%25ABd%252D%25CFN%2593%25FC%2521
&ip=192.168.1.31&port=11706&uploaded=1043070976&downloaded=1043070976&left=0
&numwant=200&key=6511&compact=1 HTTP/1.0" 301 655 "-" "-" "[client.domain]"

We have removed our client’s domain name here, but the Host header matched our client’s domain name, which indicates that this wasn’t DNS poisoning but rather that our client’s server had been listed as a BitTorrent tracker in a .torrent file. If this had been DNS cache poisoning we would have seen the domain name of the site the BitTorrent clients were actually trying to reach.

Remediation

Defending against the attack was easy enough once we understood what was going on. As our client isn’t actually running a BitTorrent server we simply dropped any requests for announce.php at the firewall and reduced the TCP timeout value on the Apache box to flush inactive connections.

Conclusion

What was especially interesting in our scenario is that the systems involved in the DDoS attack were not ‘bots’; these systems hadn’t been compromised at all. To perform this attack, the malicious actor doesn’t need any technical skills beyond creating a torrent file, and the attack is very low risk because, once the .torrent file is created, no further interaction is required. While an attack like this doesn’t generate a massive amount of traffic, with enough peers having downloaded the torrent file the attack can be big enough to cause a denial of service.

No! Not Casper, not that friendly GHOST!

Last year (2014) we saw a couple of big exploits that made the headlines and security teams all around the world are still picking up the pieces left by Heartbleed and ShellShock.

So where are we this year? We are not even 10% into the new year and already contenders are popping up trying to make their name. The newest vulnerability to get the brand treatment is GHOST.

A not so friendly GHOST

GHOST is a buffer overflow that affects the GNU C Library (otherwise known as glibc), specifically the __nss_hostname_digits_dots() function of glibc.

Because this function is reached through the gethostbyname*() family of functions, which applications frequently call on externally supplied hostnames, it is exploitable both locally and remotely. Successful exploitation of this vulnerability allows for arbitrary code execution, resulting in unauthorised access. The full advisory can be read here and has been tagged with the corresponding CVE-2015-0235.

Who can I call? GHOSTbusters?

No working exploit has yet been disclosed. However, the technical explanation in the advisory could be sufficient to shed light on the matter and allow for a working version of the exploit to be developed.

GHOST’s impact should therefore be considered as critical and warrant early remediation.

As per the advisory, the disclosure of this vulnerability has been coordinated with several vendors in order to allow time to issue security related patches.

Vulnerable versions of glibc range from glibc-2.2 up to, but not including, glibc-2.18, where the bug was fixed. However, many long-term-support and server-grade distributions still ship vulnerable versions. For example:

Debian 7 (wheezy)
Red Hat Enterprise Linux 6 & 7
CentOS 6 & 7
Ubuntu 12.04 LTS
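A rough first-pass triage of the version range above can be scripted (the helper name is ours). Note that distributions backport fixes, so a version test alone cannot prove a patched system is vulnerable; confirm against your vendor's advisory.

```python
def glibc_vulnerable(version: str) -> bool:
    """Return True if a glibc version string falls in the range the
    advisory describes as vulnerable: 2.2 <= version < 2.18."""
    major, minor = (int(x) for x in version.split(".")[:2])
    return (2, 2) <= (major, minor) < (2, 18)
```

On a live system the version can be read with `ldd --version` and the trailing number of the first line fed to this check.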