Security consultants tend to live a bit like nomads: wandering from office to office, plying their trade along the way. We get to see lots of different organisations, and a surprising variety of datacentres. Mostly these are full of shiny racks of new equipment, but every now-and-then we’ll see something out of the ordinary. Some form of legacy system: mysterious, ancient boxes, left to their own affairs in the darkest corner of the room.

Of course, whilst these systems are simply interesting museum-pieces to us, to the poor soul who is tasked with owning the risk, they are something much more ominous. Legacy systems that dramatically outlive their intended lifespan tend to do so for one glaring reason: they are important to the organisation, and both difficult and expensive to replace.

To add to this growing pile of legacy, this week Oracle announced the end of Java support for the Windows XP platform [1]. This isn’t much of a surprise, considering that Microsoft had already dropped support for XP in recent months [2]. What is notable, however, is the dramatically different threat landscape. Almost no-one is interested in your old PDP-11 or Cray supercomputer, but Windows XP is quite a different kettle of fish.

Windows XP is the doyenne of the exploit writers. Not only does it lack the exploit-mitigation protections that later versions of Windows have built in (making it easier to get an exploit working reliably), but it still has a large enough install-base to warrant the effort of writing one. On top of this, in recent months Java has been found to be riddled with flaws: the April patch bundle [3] contained no fewer than 37 discrete flaws, 4 of which were rated with the highest severity possible under the CVSS scheme.

So here you have a perfect storm: a critical system, relatively easy to exploit, with no patches available. Given that you can no longer patch these servers, our recommendation to you is first to have a nice cup of tea, and then to apply some common sense:

  1. We know you would have already replaced these systems if you could have done so, but if there is any possibility of you rethinking this stance, now is a good time. The situation will not get better with time.
  2. Segregate all your legacy systems behind an internal firewall if possible. Keep them away from the general-use LAN segments and especially from desktops.
  3. Restrict remote access methods to essential users and enforce strong cryptography.
  4. Implement file-system fingerprinting to detect any unauthorised changes to the host.
  5. Make sure anti-malware and anti-virus applications are up-to-date.
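Recommendation 4 above, file-system fingerprinting, can be sketched in a few lines. This is a minimal illustration rather than a production tripwire (the hashing scheme and layout are our own assumptions); a real deployment would store the baseline off-host so an intruder cannot rewrite it.

```python
# Minimal file-system fingerprinting sketch: take a SHA-256 baseline of
# every file under a directory, then re-scan and report differences.
import hashlib
import os

def fingerprint(root):
    """Map each file path under root to the SHA-256 of its contents."""
    prints = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as fh:
                prints[path] = hashlib.sha256(fh.read()).hexdigest()
    return prints

def diff(baseline, current):
    """Return (modified, added, removed) path lists relative to baseline."""
    modified = sorted(p for p in baseline if p in current and baseline[p] != current[p])
    added = sorted(p for p in current if p not in baseline)
    removed = sorted(p for p in baseline if p not in current)
    return modified, added, removed
```

In practice the baseline would be taken immediately after a known-good build, with any later difference raising an alert.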


Oh, and one last happy thought before you go: you are prepared for when Windows Server 2003 goes end-of-life next year [4], aren’t you?

  1. http://java.com/en/download/help/sysreq.xml
  2. http://windows.microsoft.com/en-gb/windows/end-support-help
  3. https://blogs.oracle.com/security/entry/april_2014_critical_patch_update
  4. http://support.microsoft.com/lifecycle/search/default.aspx?alpha=Windows+Server+2003+R2

Call Me, Maybe?

Posted: 27 June 2014 in Uncategorized


There’s one thing to be said for the world of Information Security, and it’s that it rarely stands still for a moment. New products and technologies are released with relentless regularity, each with its own particular set of security challenges first to understand, then to protect against. Never a quiet moment.

As new technologies are introduced, old ones are often superseded and relegated to the “legacy” bucket. But just because they are no longer the latest hot topic, it doesn’t mean that they don’t still pose a significant risk to the organisation.

One such technology is the traditional telephone, or as it likes to be formally addressed, the Public Switched Telephone Network (PSTN). Back in the day, the media was awash with stories of hacking attacks that were launched over the telephone network. In fact, the high-profile hack that led to the drafting of the UK Computer Misuse Act (CMA) was itself delivered over the telephone, using a modem.

The Internet has changed all of this, though. As the greatest source of exposure to external threats for many organisations, it rightly takes the majority of the security focus. But in this shift, a lot of organisations seem to have forgotten about the PSTN. This is a bit of a problem, as unfortunately the attackers haven’t!

The fact is that the legacy telephone system remains a rich target for an attacker. Dozens of critical devices are still installed with a remote administrative interface connected to an old-school telephone line: burglar alarms, door entry systems, the PBX itself, video conferencing, SANs, heavy machinery such as lifts, and so on. Any of these could be available, and often all that is required is for an attacker to connect to the right telephone number, then enter the default credentials for the device.

There was a time when most organisations would regularly get their external telephone connectivity security tested as part of a “war dialling” exercise, but this seems to be a rarity these days. Maybe it’s time for you to get a bit more old-school?



by Martin O’Neal

The Wrath of Zeus

Posted: 3 June 2014 in Uncategorized

At the moment there is a lot of fuss in the media about the GameOver Zeus malware, and how there are only two weeks until the impending destruction of mankind as we know it. Cue melodramatic music and a peal of thunder.

Firstly, we don’t think that this is a panic-stations situation. This particular malware has been tracked in the wild for several years already, so it isn’t a new threat. Obviously the way that it is packaged and deployed is updated regularly, though, so it may not be immediately detected by your antivirus systems.

Secondly, the malware itself is typically delivered through a misleading email, which encourages the recipient either to open an attachment or to visit a phishing site. The important part is that it requires human intervention to activate and install it.

The recommendations for coping with this are the normal advice that users and administrators should follow on a daily basis anyway:

  • For corporate environments, block dangerous attachments (executables etc.) before they reach the desktop.
  • Ensure that your antivirus is installed, correctly configured and the signatures are up-to-date.
  • Do not open emails, attachments or click on links that look in any way suspicious.
  • If you think you have inadvertently installed any malware, don’t use your computer for anything sensitive, like online banking, until you can get it checked thoroughly and, if necessary, cleaned.

Additional information about GameOver Zeus is available here: http://www.us-cert.gov/ncas/alerts/TA14-150A

When our clients approach us with a new application or a technology refresh project, we often see an initial reluctance for an external infrastructure vulnerability assessment to be performed alongside it. This is often because the client feels safe in the belief that their infrastructure is protected by network-level devices such as firewalls or intrusion detection systems, or that modern software and servers are secure out of the box. Well, that’s not always the case…

Acme Corp CMS Assessment

Acme Corp approached Corsaire to conduct an application assessment on their new content management system and an external infrastructure assessment on a single IP address. The application server would be located in the client’s DMZ and protected by a firewall only allowing HTTP and HTTPS. Corsaire was provided with the following URL which would be in scope for this project:

https://cms.acmecorp.com/application/cms

Apart from some low hanging fruit the CMS application was found to be secure. Everyone is happy! Go team!

Is Infrastructure always Boring?

So I get given the infrastructure component of the assessment and my worst fears are soon realised: only ports 80 and 443 are exposed to the Internet. I quietly work through our external infrastructure methodology and only find low-risk SSL issues and some default Apache pages. Default pages are always a good indicator of a lack of server hardening, so I decide to have a further poke around. I manage to find a default configuration file with some information disclosure, but still nothing to write home about. As always with any assessment, an understanding of the environment is essential. This involves reading any documentation you can find, including installation, configuration and development guides from the supplier. The documentation included a typical configuration scenario which, when compared with the findings from the application assessment, was probably the configuration of this CMS system.


In this typical scenario the CMS application and API are hosted on the same server. This is never a good idea. Separation of services people! Anyway, this got me thinking about how the API is configured and how the CMS application interacts with it. Could I connect to this? Oh this is getting interesting!

Pew! Pew! Pew!

Trawling through the documentation, it was determined that the API could be configured as another virtual host served by the same Apache instance as the CMS application! Oh goody: if they have done this, then all that is required is the correct domain name in the request to potentially start interacting with the API! The default setup from the documentation didn’t work, but by using the information revealed in the default configuration file I managed to get a hit! Bingo! I have direct access to the API using https://cms-api.acmecorp.com, bypassing any security provided by the CMS application. Oh dear, they are still using default credentials. Pew, pew, pew! I now have full control of the CMS and its content.
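The virtual-host trick above boils down to sending the same server a request that names a different host. As a rough sketch (the domain is the hypothetical one from this write-up, and a real probe would go over TLS with a matching SNI value), the raw request can be built like this:

```python
# Build a raw HTTP/1.1 request aimed at a suspected co-hosted virtual
# host: the TCP connection goes to the in-scope server's IP, but the
# Host header names the API vhost instead of the CMS application.
def build_vhost_probe(vhost, path="/"):
    request = (
        "GET {path} HTTP/1.1\r\n"
        "Host: {vhost}\r\n"
        "Connection: close\r\n"
        "\r\n"
    ).format(path=path, vhost=vhost)
    return request.encode("ascii")

probe = build_vhost_probe("cms-api.acmecorp.com", "/api/")
```

If the server replies with the API rather than the CMS, the co-hosting guess is confirmed.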


While the application in scope was secure, the infrastructure and server configuration were still in a default state and had not been hardened. The API was assumed to be accessible internally only, so the default credentials were not changed. Without the infrastructure component of this assessment, access to the API would probably not have been found.

Lessons to be learnt?

  • Never underestimate the importance of infrastructure assessments when deploying a new application
  • Always harden all servers irrespective of their network location
  • Always restrict access to any service unless explicitly required
  • Ensure separation of services to reduce exposure and risk
  • Understand the environment and RTFM!


Written by

Anthony Dickinson
Security Consultant

ICO Report

Posted: 16 May 2014 in Uncategorized


The recent Information Commissioner’s Office (ICO) report makes for interesting reading. Although it is relatively high-level, and targeted more towards managers, the report and supporting guidance provide some clarity for InfoSec professionals too.

Firstly, by targeting the management audience, they are re-emphasising that the responsibility for ensuring that personal data is appropriately protected is company-wide; not just the domain of a few industry specialists.

Secondly, the announcement sets out its stall by making immediate reference to punitive fines. In recent years the ICO has increasingly chosen to make an example of high-profile failures, and the fines have been significant. They are making it clear that there is no change of tack.

Thirdly, the small number of issues listed by the ICO are all basic security measures. Beyond their relative simplicity, they share a common thread: all of them could allow DPA-protected data to be gathered en masse. For example, the report specifically singles out per-user flaws like Cross-Site Scripting (XSS) as something they are less interested in.

So in summary, the report is not really saying anything new, but it does spell out very clearly the current ICO approach to policy:

  • DPA resources must be protected as appropriate
  • Responsibility lies with management
  • Breaches will result in fines


  1. http://ico.org.uk/news/latest_news/2014/top-it-data-security-threats-revealed-and-what-organisations-must-do-to-stop-them-12052014
  2. http://ico.org.uk/news/latest_news/2014/~/media/documents/library/Data_Protection/Research_and_reports/protecting-personal-data-in-online-services-learning-from-the-mistakes-of-others.pdf


The main threats that organisations and their customers are facing today are the same ones that have always been around: ignorance, apathy and poverty. And the best thing that any organisation can do to reduce the impact of these is to simply get the basics right. But you won’t be able to do anything if you don’t know your threats, don’t have the appetite to address them, or don’t have the budget to pay for the solutions.

Security for the business itself, or for its customers, is all about gaining a good understanding of the risks, and then building appropriate processes to ensure that they are balanced against the effort and cost of addressing them. Everything else is really just window dressing. For example, that shiny new security appliance that you were looking at last week (available in suitably bold primary colours) will not make your organisation secure. There are no magic bullets, only good sense and hard work.

And now we get to the nub of the problem, the typical board of corporateville. These busy people can, quite literally, talk for days about the colour of the latest product packaging (mauve or taupe, darling?), but when it comes to where those pesky credit card numbers get stored after you have taken your client’s money, then they tend to be far less talkative. Until things go wrong.

Increasingly, the legislation and regulation that cover security are being given real teeth, to punish those who flout them. Punitive fines, suspension of trading facilities, and ultimately, members of the board can go to prison. And what would any busy person (upon finding themselves staring down the barrel of a punitive deadline) be looking for in their hour of need? You’ve got it; their gut instinct will be to bite the hand off the first magic-bullet solution that comes along. If you are the person responsible for security, the trick is to make sure that the particular bullet is one of your choosing (magic or otherwise).

The real problem with all this, I would say, is that the attention span of the typical board is about three weeks, starting from the last high-profile security event (be it a failed audit, a rogue employee, or a successful hack etc). And the biggest challenge is seizing this slim window of opportunity, and using it to your maximum advantage. If you don’t get your plans in front of the board, and budgets signed-off in these three weeks, then you might as well keep your pipe and slippers to hand, because you won’t be doing anything more interesting in the near future.

So to summarise, if you are apathetic, simply go back to your mochaccino now (this next bit isn’t for you). For everyone else, start your preparation today; profile your organisation and understand the real risks. Then pull together some sensible solutions and ballpark budgets to address them.

And finally, the next time that your organisation is struck by a compelling event, you can simply set out your stall. You’ll be thinking comprehensive solution, they’ll be thinking magic-bullet, and everyone should end up living happily ever after. Well, everyone except the VP of hospitality who (after reading an article in an in-flight magazine about security) was hankering after a puce-coloured security appliance for the datacentre…


Written by
Martin O’Neal
Security Consultant

To give you some context, I’m writing this blog in the days immediately following the Heartbleed event. I’m not going to comment on the event itself, as it has been done to death elsewhere, but I am going to focus on the reaction of the affected organisations.

Events like this, whilst rare, cannot be avoided entirely by any organisation that uses computers attached to the Internet. Software has flaws. Some flaws will be catastrophic. It’s a given. However, knowing this, you can prepare for the worst.

Firstly, many organisations had no idea what their attack surface was. Sure, it’s fairly trivial to point at the corporate web site. But many simply weren’t aware that there was a vulnerable SSL component in their other internet-facing applications, like their email servers. Understanding your attack surface simply has to be the first step in protecting it. Otherwise, how can you protect that which you are not aware of?
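Mapping an attack surface starts with something as simple as knowing which services are actually listening. The sketch below is a toy, synchronous check of a handful of assumed TLS-capable ports; a real inventory would cover every routable address and every port.

```python
# Toy attack-surface check: report which of the given ports accept a TCP
# connection on a host. Ports such as 443, 465, 993 and 995 are common
# TLS-wrapped services (HTTPS, SMTPS, IMAPS, POP3S).
import socket

def open_ports(host, ports, timeout=0.5):
    found = []
    for port in ports:
        try:
            # create_connection raises OSError if nothing is listening.
            with socket.create_connection((host, port), timeout=timeout):
                found.append(port)
        except OSError:
            pass
    return found
```

Anything this turns up that isn’t in your asset register is exactly the kind of surprise Heartbleed exposed.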

Secondly, whatever incident response process they had seemed simply to crumble. The flaw itself, whilst devastating, was thankfully easy to patch or otherwise mitigate. After that, it was simply a matter of regenerating the certificates and finally forcing any authentication credentials to be expired and changed. Yet it was clear that many high-profile organisations had taken no action long after the vulnerability was being actively exploited in the wild. Objectively, at this point, any services that were deemed to hold confidential data should simply have been taken offline to protect the brand. Decisions like this need to be pre-made by executives, and solid processes need to be established in advance, so that they can be carried out swiftly by the staff who administer the systems.

The reality is that this isn’t going to be a unique event. There will inevitably be others to come.

If your organisation does not understand your attack-surface and does not have a comprehensive incident response process already in place, then now is the time to act. You may be shutting the stable door after this particular horse has bolted, but rest assured that there are plenty of other horses you can still protect.

Comfort Eating for Web Applications

Cookies were devised to enable websites and applications to store small amounts of information in the browser. This enables the otherwise ‘stateless’ HTTP protocol used by browsers to be augmented to let the site enhance the user experience; for example, remembering who the user is and what they were doing, so that the site can tailor the content they receive.

While there are potentially a vast number of uses for cookies, both benign and nefarious, an exceptionally common one is the storage and transmission of session data, enabling web applications to provide a full experience for users. Session data provided via cookies allows applications to support continuous authentication: users can access content and functionality without having to re-authenticate every time they request a different piece of content or a feature, which would be necessary if the server had no way of reliably keeping track of requests received from the user’s browser.

The value allocated to the user within the cookie can encode the difference in the privileges and access rights accorded to an authenticated administrative user as opposed to a standard user, such as within a banking application. The value indicates the access rights, based on the stateful session. Staying within the realm of banking applications, it is important that the privileges accorded to a user do not become corrupted. It is equally important that the privileges of two standard users do not become crossed. In either situation, corruption of the relevant privileges could result in financial loss to both the bank and the end users.

It is equally important to remember that any service with an authentication system is by definition trying to keep some sort of information secure. Any application with user data or information will generally use cookies to identify users after they have authenticated. Once the application has identified and verified the user, the session token issued in the cookie is often the ONLY information that separates one authenticated user from another, or from a malicious hacker.

Once the user’s ID and authentication credentials have been entered, whether by email address and password or via a full multi-factor strong authentication system, the cookie generated must be protected if the user is to be protected.

Cookie Monsters, Nom Nom

Now is when we should switch our thinking to that of someone out to cause trouble! That cookie is all that stands between you and access to your target’s services and data.

There are a handful of ways the token can be compromised by an attacker: essentially, they can either steal the cookie after the user has authenticated, or force the user to use a token they are already aware of (session fixation). The latter involves implementation issues that are outside the scope of this article, but watch this space.

The former, stealing the user’s cookie, involves either snooping on their session to grab it ‘on the wire’, or tricking the user, their browser or the application into handing the cookie over.

By far the most common way to gain access to a session cookie is via a cross site scripting (XSS) vulnerability.

In the example below the cookie is displayed on screen; in reality it could be pushed to a remote site by an embedded script and then utilised by a malicious user, as seen in the Apache.org XSS compromise in 2010 [1]. The classic proof-of-concept is an injected script along the lines of:

          <script>alert(document.cookie);</script>


A session token obtained in this manner could be used to impersonate the user from whom it was stolen. If there are no further security measures in place the token can be used to access the active session, gaining full access to the user functionality.

Another way cookies may be exposed is by an attacker sitting in a strategic location on the network and snooping on traffic to grab the all-important tokens.  This has commonly occurred in shared network environments providing free WiFi access, like the ubiquitous coffee shops and fast food joints we often work in during lunch.  The attacker sits on those networks (or sets up their own fake equivalent), monitoring the network activity and poaching the valuable sessions.  This has played no small part in the massive migration of online services from unencrypted HTTP to SSL-wrapped HTTPS sites, to try and stem the flow of drive-by account hijacks.

Defending the Cookie Jar – Set-Cookie Directives

Issues with cookie management usually arise with the directives; the problems aren’t new, and the smart folk who devise the standards that underpin online communications have provided extra mechanisms to help us defend ourselves.  Two directives can be employed to mitigate the aforementioned cross-site scripting and snooping attacks: the ‘Secure’ directive and the ‘HttpOnly’ directive.

The former, the ‘Secure’ directive, directs the browser to send the cookie only ‘securely’.  Sadly this is a rather informal definition, but in most cases it prevents cookies being sent over unencrypted channels. The development of the application should hopefully have followed some basic security standards: ideally, access to the login process and all protected (authenticated) pages, if not the entire application, should be required to occur over HTTPS. There is no reason to send a cookie that could contain sensitive information over unencrypted HTTP, where it would be susceptible to network sniffing attacks.

          Example of a java session ID cookie with the ‘Secure’ directive:

          Set-Cookie: jsessionid=AS348AF929FK219CKA9FK3B79870H; Secure;

Similarly, the latter, the ‘HttpOnly’ directive, instructs the browser to only pass the cookie back to the server through its default mechanism: the ‘Cookie:’ HTTP header.  The key objective of this directive is to prevent access to the cookie by script executing in the browser.  The intent is that the token set by the server should still be passed back with every request, enabling the server to use it to track the session, but that no script running in the browser should be able to read or modify it.

If the directive is not present in the Set-Cookie header sent by the server, the cookie can be read by client-side JavaScript, as shown in the example above. Should an XSS vulnerability be present on the site in question, the cookie, and subsequently the session of the user, could be captured and utilised by a malicious user.

Unless the web application in use requires client-side scripts to read or edit the cookie value, there is almost no reason not to have the ‘HttpOnly’ directive present.

          Example of a java session ID cookie with the ‘HttpOnly’ directive:

          Set-Cookie: jsessionid=AS348AF929FK219CKA9FK3B79870H; HttpOnly;

These cookie directives can of course be used in unison:

          Example of a java session ID cookie with the ‘HttpOnly’ and ‘Secure’ directives:

          Set-Cookie: jsessionid=AS348AF929FK219CKA9FK3B79870H; HttpOnly; Secure;

Failing to implement these simple directives in your applications greatly increases the possibility of session based attacks.
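A simple audit of the two directives can be automated: given the raw value of a Set-Cookie header, check which protections are missing. This is a rough sketch with naive attribute parsing; the sample values are borrowed from the examples above.

```python
# Flag Set-Cookie header values that lack the Secure or HttpOnly
# directives. Attribute names after the first ';' are case-insensitive.
def missing_directives(set_cookie_value):
    attrs = {part.strip().split("=", 1)[0].lower()
             for part in set_cookie_value.split(";")[1:]}
    return {d for d in ("secure", "httponly") if d not in attrs}

weak = "jsessionid=AS348AF929FK219CKA9FK3B79870H; Path=/"
good = "jsessionid=AS348AF929FK219CKA9FK3B79870H; HttpOnly; Secure"
```

Run against every Set-Cookie header your application emits, this makes a useful regression test for session-cookie hygiene.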

Going on a Cookie Diet: Implementing Directives

For more information, refer to the following guidance on implementing these directives in some of the more common web authoring languages.

‘Secure’ Directive:

Java (Servlet 3.0 (Java EE 6)):

A standard method was introduced to manage session cookies in Java’s web.xml:

          <session-config>
              <cookie-config>
                  <secure>true</secure>
              </cookie-config>
          </session-config>

PHP:

The ‘Secure’ directive for PHP session cookies is set within the session_set_cookie_params function:

void session_set_cookie_params ( int $lifetime [, string $path [, string $domain [, bool $secure = false [, bool $httponly = false ]]]] )


.NET:

Under .NET you can set the cookie directives within the web.config file in the system.web/httpCookies element:

<httpCookies requireSSL="true" …>

‘HTTP Only’ Directive:

Java (Servlet 3.0 (Java EE 6)):

A standard method was introduced to manage session cookies in Java’s web.xml:

          <session-config>
              <cookie-config>
                  <http-only>true</http-only>
              </cookie-config>
          </session-config>

PHP:

As with the ‘Secure’ directive, ‘HttpOnly’ is set within the session_set_cookie_params function:

void session_set_cookie_params ( int $lifetime [, string $path [, string $domain [, bool $secure = false [, bool $httponly = false ]]]] )


.NET:

Under .NET you can set the cookie directives within the web.config file in the system.web/httpCookies element:

<httpCookies httpOnlyCookies="true" …>

By Chris McCall
Image by Michelle O’Connell

At the beginning of October, Adobe reported a data breach that affected around 3 million customers [1]. In the following weeks the number rose significantly, but this was just the tip of the iceberg: at the beginning of November a huge dump of the data was published online, containing an eye-watering 150 million entries.

Various organisations, individuals and companies have analysed this data and reached the same conclusions: firstly, Adobe’s choice of encryption was extremely poor, and secondly, the passwords chosen by users were shocking. See the original research by Jeremi Gosney [2] and an excellent article by Paul Ducklin [3] for more information.

While most researchers were focusing on cracking the encryption key or criticising Adobe for its poor choice of encryption, Corsaire did further analysis, focusing on extracting information to help our clients understand the implications of the leak and the lessons that can be learnt from it. We have now decided to publish this advice more widely in the infosec community.

1.1     Weak Password Choice

Looking at Jeremi Gosney’s top 100 list, it is obvious that users are choosing very weak and simple passwords. While this is no real surprise, the worrying aspect is that many of these weak passwords are associated with corporate email addresses. For example, one global security company is using the generic password ‘123456’ for the account with the email address format of company@company.com.


1.2     Password Reuse

Examination of the data reveals users are reusing the same password across multiple accounts, both corporate and personal. For example:

67436532-|–|-jbloggs@company.co.uk-|-VdcYhakfnPioxG6CatHBw==-|-your wife|–

a2342312-|–|-purchasing@company.co.uk-|-VdcYhakfnPioxG6CatHBw==-|-wife|–

72349517-|–|-jbloggs@hotmail.co.uk-|-VdcYhakfnPioxG6CatHBw==-|-wife|–
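Because the dump was encrypted rather than salted and hashed, the same password always encrypts to the same ciphertext, so reuse like the above can be found mechanically. A rough sketch, using made-up records in a simplified (email, ciphertext, hint) form rather than the raw dump format:

```python
# Group leaked records by encrypted-password value: any group holding
# more than one account indicates password reuse, and pooling the hints
# makes guessing the shared plaintext much easier.
from collections import defaultdict

def group_by_ciphertext(records):
    groups = defaultdict(lambda: {"emails": [], "hints": []})
    for email, ciphertext, hint in records:
        groups[ciphertext]["emails"].append(email)
        if hint:
            groups[ciphertext]["hints"].append(hint)
    # Keep only ciphertexts shared by more than one account.
    return {c: g for c, g in groups.items() if len(g["emails"]) > 1}

records = [
    ("jbloggs@company.co.uk", "VdcYhakfnPioxG6CatHBw==", "your wife"),
    ("purchasing@company.co.uk", "VdcYhakfnPioxG6CatHBw==", "wife"),
    ("jbloggs@hotmail.co.uk", "VdcYhakfnPioxG6CatHBw==", "wife"),
    ("lonewolf@example.com", "nSFKSJFaKERjfsxG6CatHBw==", ""),
]
reused = group_by_ciphertext(records)
```

The same grouping also drives the related-accounts and hint analysis described in the following sections.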

1.3     Related Accounts

Another piece of information that can be obtained from the reuse of passwords is the ability to link related accounts. In the example below you can track the employment history of J. Bloggs based on his password reuse.



23552666-|–|-j.bloggs@getfit.com-|-nSFKSJFaKERjfsxG6CatHBw==-|-|–


Of course, this will be difficult to achieve if the user has chosen a common password.

1.4     Password Hints

As the encryption key is still not publicly known, the encrypted passwords cannot be reversed to yield the plaintext. Unfortunately, the presence of unencrypted password hints supplied by the users allows us to make very confident guesses. A single hint is often not enough information to allow a confident guess, but 10 or more hints for the same password make the process considerably easier.

In the example below, the encrypted password is:


This is used in 25 different accounts. The associated password hints are:

  • Favourite element
  • Tl u/c
  • The usual metal
  • The rock
Seeing all these hints together allows us to make an educated guess that the password is probably ‘Thallium’.

1.5     Recommendations

The main point to take on board from this data leak (after you have reset all your Adobe account passwords) is that users are the weakest security link in your company. While it is easy to implement and enforce a password policy within the internal corporate environment (via group policies, for example), it’s a very different situation online. You have no control over any particular individual’s password strength if the online service/site does not enforce it.

This is where user education is vital. Educating your employees in password-management best practice is a must, and it should be done on a regular basis. Companies must ensure all users not only know and understand the current password policy, but also the implications of using weak passwords or reusing passwords between corporate and personal accounts, especially online. In addition to education, providing your users with the means to generate suitable passwords and store them securely will help (for example, password generators and password-safe software).

If this has made you nervous and you would like us to examine the leaked data for any accounts related to your company, just get in touch!

Samantha Fielden, Head of Client Management
Artwork by Michael Gillette


Russian Dolls

As security professionals, one thing that we see repeatedly is the conflict between securing an organisation’s data assets and the cost of doing so. It’s a legitimate business conflict too. Whilst the recession is still fresh in everyone’s memory, saving money is always going to be a hot topic.

There can be significant economies of scale by consolidating data centre equipment. Folding all the hardware platforms into a single virtual server farm should slim the rack real-estate, whilst simultaneously reducing the need for power and air-conditioning. Likewise, merging the storage into a single, high-availability Storage Area Network (SAN) can increase reliability and reduce the overall cost of ownership. Then there is all that pesky networking equipment, like routers and switches; surely it would be cheaper to replace it all with a large multi-blade chassis?

However, the problem with too much consolidation is that sooner or later someone will be tempted to merge several clumps of applications and data (security domains) that have conflicting security requirements to save a few pennies. For example, sensitive financial systems that are only accessed by authorised users, and publicly accessible, brochure websites that anyone can use.

Two concepts you will hear security people talk about constantly are a “layered approach” and “compartmentalisation”. When they talk about a layered approach, they are referring to having multiple sets of security controls; so that if one fails, an attacker does not get complete access to the data. Compartmentalisation in turn refers to deliberately separating systems into discrete chunks, so that a breach in one area should remain isolated. Consolidation often negates these controls and approaches, making them irrelevant.

One might ask, why would this be a problem?

In a heavily consolidated environment it becomes much more straightforward for an attacker to use the common platform as a route to move between systems. Something as simple as a configuration mistake on a consolidated MPLS router that routes both internal and external circuits could allow access to your sensitive data. Or an attacker could use a consolidated SAN fabric with both internal and DMZ servers attached to bypass network firewalls. VM clusters that host both Internet-accessible and internal servers are also made vulnerable by consolidation: if one server is compromised, the VM platform itself may be attacked directly, allowing all servers hosted on it to be accessed.

Don’t get me wrong; consolidation can be a good thing when it is done well. The savings are very real. However, before you go ahead and commit to a consolidation plan proposed by your integration vendor, run it past your security team to make sure that the savings won’t later be wiped out by the cost of a PR debacle.

Martin O’Neal

Photo: Lachlan Fearnley