Misfortune Cookie

What is it?
Misfortune Cookie is a critical vulnerability that gives an intruder the capability to remotely hijack home and business Internet routers. [1]

What does it do?
Once the attacker has commandeered the router, they can monitor your web browsing, access your credentials, distribute malware and even control any devices connected to the network (including tablets, security cameras and kitchen appliances such as toasters)!

What is Vulnerable?
At least 200 different models of device are currently exposing a vulnerable service on the public Internet address space; the majority of these are residential and small-business networks.

What can you do?
Check your security settings – add passwords to sensitive data, don’t save your credentials to your browser and encrypt browsing data with HTTPS connections [2].
Patch it! – There are currently no patches available, but watch out for firmware updates from your device vendor and apply as instructed.
Endpoint Protections – Whilst your router has its own layer of security defences, further protecting your network with endpoint protections is vital (firewalls, anti-virus software and up-to-date operating systems).
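The HTTPS advice above is easy to apply programmatically too. A minimal sketch of checking whether a URL actually uses HTTPS before you hand over credentials (the `is_https` helper is our own illustration, not part of any standard tool):

```python
from urllib.parse import urlparse

def is_https(url: str) -> bool:
    """Return True only if the URL explicitly uses the HTTPS scheme."""
    return urlparse(url).scheme.lower() == "https"

print(is_https("https://example.com/login"))  # True
print(is_https("http://example.com/login"))   # False: credentials would travel in the clear
```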

Key signs to look out for:
There is no reliable way to detect Misfortune Cookie directly, but look out for tell-tale signs such as changed settings on your device and trouble logging into your router's web interface.


[1] http://mis.fortunecook.ie/
[2] http://arstechnica.com/business/2011/03/https-is-more-secure-so-why-isnt-the-web-using-it/


Written by
Faye Brennan
Marketing Assistant
Corsaire

POODLE bites again

Barely two months have passed since the POODLE flaw in SSLv3 [1] was announced, and we now have a follow-up attack that affects the TLS protocol (which was initially thought to be secure by the Google security team that researched the original attack [2]).

Once again, the flaw itself works in a very similar way, exploiting implementation issues in the TLS protocol to let an attacker get at the sensitive data inside the encrypted connection. However, to actively target you, an attacker will still need some form of man-in-the-middle access to your connection (sharing the same coffee shop Wi-Fi is more than enough).

Patches are on the way from the normal raft of vendors: apply as appropriate! As always, disable support for weak cipher suites.
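On the client side, "disable support for weak cipher suites" can be sketched in a few lines of Python (the specific cipher string below is an illustrative example, not a definitive recommendation):

```python
import ssl

# Build a client-side TLS context that refuses old protocol versions.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # rejects SSLv3, TLS 1.0 and TLS 1.1 outright

# Optionally restrict the cipher suites to strong AEAD ones (example string only).
ctx.set_ciphers("ECDHE+AESGCM:ECDHE+CHACHA20")
```

Server administrators would apply the equivalent settings in their web server or load balancer configuration.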


[1] http://blog.corsaire.com/2014/10/15/the-poodles-of-war-details-of-the-sslv3-flaw-released/
[2] https://www.openssl.org/~bodo/ssl-poodle.pdf
[3] http://www.penetration-testing.com
[4] https://www.imperialviolet.org/2014/12/08/poodleagain.html


Written by
Martin O’Neal
Managing Director
Corsaire


Not all security breaches can be blamed on hackers trawling the Internet day and night for flaws; sometimes it’s the funny cat video, the Ukrainian bride waiting for you, or the mystery package delivery you missed.

In terms of basic content, today’s phishing attacks are as simple as they were five years ago. The only difference is that spammers no longer send plain-text emails; they have switched to HTML so their messages look more enticing and realistic to the victim. Let’s dive a little deeper…

The bait:
A common method of phishing is hiding the intended URL with hyperlinks, or using commonly known shortening services such as https://tinyurl.com/ or https://bitly.com/. The example below is one we recently received from “Vodafone”:

Phishing3

The email is baiting the victim into clicking a shortened URL. It is exceptionally unlikely that a company would send out shortened URLs, so just ignore these emails! However, for those who want to probe a little deeper, it is possible to check the link quickly, without subjecting yourself to compromise, by using a URL expander such as http://longurl.org or http://knowurl.com. The URL expands to a much more recognisable link, indicating that an executable file awaits the user’s download – http ://1.1.1.1/wp-content/themes/f679RqP75G.exe
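A rough heuristic for spotting such links programmatically might look like the sketch below (the shortener list and the `looks_suspicious` helper are illustrative assumptions, not an exhaustive blocklist):

```python
from urllib.parse import urlparse

# Illustrative, non-exhaustive list of popular URL-shortening hosts.
KNOWN_SHORTENERS = {"tinyurl.com", "bit.ly", "goo.gl", "t.co", "ow.ly"}

def looks_suspicious(url: str) -> bool:
    """Flag shortened URLs and links that point straight at an executable."""
    parsed = urlparse(url)
    host = (parsed.hostname or "").lower()
    return host in KNOWN_SHORTENERS or parsed.path.lower().endswith(".exe")

print(looks_suspicious("https://tinyurl.com/2861x6s9"))  # True
```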

In the case of a spoofed hyperlink, users should always hover over the link with their mouse to identify the final destination. Take for example this Google Drive phishing link:

Phishing

Hovering over the link makes it clear that the intended address is not what is displayed to the user.
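The hover trick can also be automated: compare a link’s displayed text with its real destination. A minimal sketch using Python’s standard HTML parser (the mismatch rule here is deliberately crude, and the class name is our own):

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkChecker(HTMLParser):
    """Collect (visible text, href) pairs where the displayed host differs from the real one."""
    def __init__(self):
        super().__init__()
        self._href = None
        self._text = []
        self.mismatches = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            shown = "".join(self._text).strip()
            real_host = (urlparse(self._href).hostname or "").lower()
            shown_host = (urlparse(shown).hostname or "").lower()
            # Only flag links whose visible text looks like a URL on a different host.
            if shown_host and shown_host != real_host:
                self.mismatches.append((shown, self._href))
            self._href = None

checker = LinkChecker()
checker.feed('<a href="http://1.1.1.1:8080/ae38x2aejm">https://drive.google.com/open</a>')
print(checker.mismatches)  # [('https://drive.google.com/open', 'http://1.1.1.1:8080/ae38x2aejm')]
```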

On mobile devices such as iOS or Android, simply pressing and holding down on the URL will bring up a submenu – also indicating what the real URL is:

Phishing2

This quickly identifies the link as a phishing attack since the actual intended address is http ://1.1.1.1:8080/ae38x2aejm

The attack:
Let’s take a look at the tinyURL hiding the potentially malicious executable file: http ://1.1.1.1/wp-content/themes/f679RqP75G.exe

It is always good practice to scan a file before doing anything with it. VirusTotal identified the file as a Trojan variant, with a detection rate of 22/40 – but how dangerous is it really?
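Rather than uploading a suspicious file wholesale, you can compute its SHA-256 hash locally and search for that hash on VirusTotal. A small self-contained sketch (the `sha256_of` helper is our own):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Hash a file in chunks so large binaries don't need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()
```

Paste the resulting hex digest into VirusTotal’s search box; if anyone has already submitted the same sample, you get the verdict without ever executing the file yourself.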

Upon execution, the file immediately removed itself but started a background process called ‘unyen.exe’. Looking at the strings in memory, it is obvious that calls to further external resources are made – some even to legitimate websites that appear to have been compromised!

A constant I/O stream with bursts of network activity gives a pretty good indication that the software is slowly gathering user data and occasionally sending it back, encrypted, to the C&C server. The following diagram shows a quick overview of the infiltration, exploitation and exfiltration process this specific variant has adopted:

Phishing-Diagram

Not all attacks are designed to harvest user data or disrupt day-to-day business. Sometimes they’re just for personal gain, like bragging rights, or for financial gain if an attacker can get hold of an internal document or a list of email addresses to sell on. And sometimes your machine becomes a sleeping zombie in a gargantuan botnet, waiting for instructions to join its zombie brothers in DDoS attacks.

The most common causes are when staff haven’t been trained properly, they don’t care enough, or simply become trigger-happy-link-clicking-maniacs because they’re too busy.

Security teams should make sure they give time and appropriate support to staff, and have standardised processes in place to react quickly and effectively if a phishing attack is successful or staff have concerns.

A dodgy link need not be the end of the world, whether or not it was clicked. Keep your staff trained and engaged… and stay vigilant!


Written by
Alexis Vanden Eijnde
Security Consultant
Corsaire

Wild-Wild-cert

There seems to be a constant feud between business and security decisions. Despite the best efforts of security teams, someone in the business will invariably attempt to introduce/implement/sneak in procedures or policies that directly conflict with the carefully constructed security model. That’s when a tin-foil-hatted security advisor enters the scene, and a lengthy conversation takes place about the pros and cons of the decision.

One of the more common disagreements revolves around the use of a wildcard certificate across multiple subdomains.

The business advantage:
As with most companies, cost is of huge importance. Consider company X is deploying around 100 new subdomains that all need a signed certificate:

For argument’s sake, let’s assume that a valid certificate costs an average of £100; certifying all of the intended subdomains will then cost company X around £10,000. On the flip side, a single wildcard certificate (although slightly more expensive than a standard certificate) would be enough to satisfy the technical requirement, thereby saving the business money.
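The back-of-the-envelope comparison looks like this in code (all prices are illustrative, as in the example above):

```python
SUBDOMAINS = 100
STANDARD_CERT = 100   # illustrative price in GBP for one standard certificate
WILDCARD_CERT = 400   # illustrative; wildcards typically cost a few times more

individual_total = SUBDOMAINS * STANDARD_CERT
print(f"Individual certificates: £{individual_total:,}")                  # £10,000
print(f"Single wildcard:         £{WILDCARD_CERT:,}")                     # £400
print(f"Saving:                  £{individual_total - WILDCARD_CERT:,}")  # £9,600
```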

Another advantage to using a single wildcard certificate is the management process. If the certificates ever need to be renewed or revoked, this would only need to be done once on the wildcard certificate, whereas renewing or revoking 100 individual certificates could become quite time consuming.

The security disadvantage:
The first and foremost problem is the single point of failure. If a single server or certificate is compromised, all subdomains are consequently compromised as well. As mentioned by OWASP, this also violates the principle of least privilege [1]. For the same reason, Certificate Authorities (CAs) do not offer extended validation on wildcard certificates [2].

Some older mobile devices and services, such as POP and IMAP, will have issues with wildcard certificates, which usually results in kludged or unsafe workarounds.

The potential compromise:
When a situation arises where the two departments cannot agree on the implementation, it is sometimes best to go with a hybrid approach. Company X could split its subdomains into two classes: ‘critical’ systems and less sensitive ‘non-critical’ systems (this could be done at a much more granular level, but for this example we will stick to two classifications – the more segregation that is implemented, the smaller the exposed attack surface).

Here, each classification will hold its own wildcard (i.e. *-noncritical.companyX.com – where ‘noncritical’ can be any static reference), decreasing the overall attack surface. A great example of a less sensitive system is blogging or brochureware platforms such as WordPress. This means that if a new public exploit for a WordPress plugin is released (which is very common), then all your ‘critical’ systems (such as mail servers and portals) are not affected by the compromise.
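A quick sketch of how such a split might be checked in code (the naming scheme mirrors the hypothetical companyX example above, and the `classify` helper is our own):

```python
import fnmatch

# Hypothetical classification patterns following the example naming scheme.
PATTERNS = {
    "critical": "*-critical.companyx.com",
    "non-critical": "*-noncritical.companyx.com",
}

def classify(hostname: str) -> str:
    """Map a hostname onto its wildcard classification, if any."""
    host = hostname.lower()
    for label, pattern in PATTERNS.items():
        if fnmatch.fnmatchcase(host, pattern):
            return label
    return "unclassified"

print(classify("blog-noncritical.companyX.com"))  # non-critical
print(classify("mail-critical.companyX.com"))     # critical
```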

So in conclusion, if you can’t afford (or it’s not feasible) to implement individual certificates, at least segregate the environment as much as possible to help reduce the attack surface on your critical applications – you don’t want to pull an HBGary because of a silly WordPress plugin now do you?


[1] “The principle of least privilege recommends that accounts have the least amount of privilege required to perform their business processes. This encompasses user rights, resource permissions such as CPU limits, memory, network, and file system permissions.” https://www.owasp.org/index.php/Least_privilege

[2] http://www.networksolutions.com/support/why-can-t-i-get-a-wildcard-extended-validation-ev-ssl-certificate/


Written by
Alexis Vanden Eijnde
Security Consultant
Corsaire


Begin at the beginning

It is a common occurrence for me to be asked, “What is the best way to get started as a security consultant?”.

But before I give you my answer, I feel I should point out that everything I’m about to write is obviously just my personal opinion, which you are of course entitled to take with the appropriate pinch of salt. I would expect different consultancies to look for different things. Buyer beware!

Every year I personally read hundreds of CVs and interview dozens of people that are looking to make a start in the security industry; an industry which is unusually demanding of its consultants, requiring both extreme breadth and depth of knowledge. Knowledge that is built up in layers, one upon another, each new layer intimately dependent on the previous one.

Many of the people I interview have incredibly polished and impressive CVs, complete with long lists of skills, credentials and training courses. However, once the interview starts it is common to find that there is no substance behind the polish. The skills lists are just an aspiration; no real knowledge underpins the claim.

For someone starting out, I would say the most important thing to do is to make sure you understand the basics really well, and if you don’t know it really well, leave it off your CV. There is no point learning about XSS if you don’t understand HTML. No point in learning HTML if you don’t know HTTP. No point in learning HTTP if you don’t know IP. No point in learning IP if you don’t understand basic maths and technology concepts like modulus, endian-ness, and non-decimal radix.
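Those fundamentals are cheap to verify for yourself. A few one-liners in Python covering the concepts named above:

```python
import struct

# Modulus: the remainder after integer division.
assert 17 % 5 == 2

# Non-decimal radix: the same value written in hex, octal and binary.
assert int("ff", 16) == int("377", 8) == int("11111111", 2) == 255

# Endianness: the same 32-bit integer laid out as little- vs big-endian bytes.
assert struct.pack("<I", 1) == b"\x01\x00\x00\x00"  # little-endian
assert struct.pack(">I", 1) == b"\x00\x00\x00\x01"  # big-endian
```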

Don’t attempt to run before you have mastered walking. Begin at the beginning…


Written by
Martin O’Neal
Managing Director
Corsaire

While the hype around a new type of malware targeting widely-believed-immune Apple devices hasn’t died down yet, let me explain what the malware does and how you can avoid becoming a victim of this type of attack by applying a few simple security practices.

What it is:

‘WireLurker’, they call it; the newly discovered malware can hop from infected Mac OS X systems to iPhones and iPads via USB. The first signs of this malware were seen in the Chinese third-party app store Maiyadi, which is where most infected applications reside for now.

What it does:

Although the intention of this malware is still unclear, it appears that WireLurker steals sensitive information such as your Apple ID and contact list, but not bank details… phew! (Unless of course you have yours saved as a contact!) Oh, and it also infects other apps on iOS devices and installs its own third-party apps without your permission.

What you should do:

Although Apple seems to have nipped this one in the bud by revoking the certificate used to spread these baddies in the first place, our recommendation is that the usual good security practices should be followed:

  • Only install apps from trusted sources, i.e. the Apple App Store.
  • Don’t connect your beloved devices to untrusted computers or accessories.
  • Use a decent antivirus on your OS and install the latest updates.
  • Don’t install applications that you haven’t requested or authorised.
  • Also delete that free game on your iPhone that you downloaded four months ago but haven’t opened once.

More specific to WireLurker: check your OS X computer and all the devices with which it has synced.

  • Look into the Profile section of your iOS device and ensure that no unauthorised enterprise provisioning has been created.
  • If you happen to have a jailbroken device (tut tut!), check to see if “/Library/MobileSubstrate/DynamicLibraries/sfbase.dylib” exists. If so, delete it through a terminal connection. Palo Alto Networks have released a handy script to detect this and similar files.
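The file check in the last bullet is easy to script. A minimal sketch (the path comes from the article; the helper function is our own, and Palo Alto Networks’ official detection script is the more thorough option):

```python
import os

# Path associated with WireLurker on jailbroken devices, per the article.
SUSPECT = "/Library/MobileSubstrate/DynamicLibraries/sfbase.dylib"

def wirelurker_indicator_present(path: str = SUSPECT) -> bool:
    """Return True if the known WireLurker library exists at the given path."""
    return os.path.exists(path)

if wirelurker_indicator_present():
    print("Suspicious library found - investigate before deleting.")
else:
    print("No WireLurker indicator at the known path.")
```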

Until the next malware, stay safe.


Written by
Ash Dastmalchi
Security Consultant
Corsaire


bad usb

In the build-up to the recent DerbyCon conference, there was a lot of chatter in the infosec community about the release of some interesting USB firmware research. Then “Shellshock” happened and, at least for a few days, everyone was so busy scrambling around looking for vulnerable “Shellshock” endpoints and trying to patch them up that they almost forgot about the aforementioned USB research. However, once the dust of “Shellshock” settled, the so-called ‘BadUSB’ research once again took centre stage.

Thanks to the tireless efforts of IronGeek to record as many security conference videos as possible, the BadUSB presentation was online within a couple of hours of being presented at DerbyCon. The video has, at the time of writing, amassed almost 98,000 views. Apparently the world is not yet fed up with the steady stream of vulnerabilities being released; a stream which seems to have turned into a flowing river this year, prompting Mitre to change its syntax to allow for five-digit CVEs. But before I start rambling about that, let’s switch our focus back to BadUSB.

While it has been known for a few months that some USB drives could be infected with undetectable malware, until now the research has not been released to the general public. With Adam Caudill and Brandon Wilson’s talk at DerbyCon however, this has all changed. The exploit code is freely available on GitHub and, as mentioned above, the presentation detailing the research has also gone viral (pun intended).

So what exactly could someone do with this exploit code? It allows a user to modify a USB device’s firmware to hide undetectable malicious code on the device, code which cannot be removed by simply wiping or formatting the device.

Great, so let’s just patch it, right? Unfortunately, as well as being declared undetectable, the vulnerability is also being described by many news outlets as virtually “unpatchable”, and it could take some time to mitigate or resolve fully. For the tinfoil hat aficionados, there is also the cheery news that the NSA owns a USB device used to “relay information and monitor computers”.

Realistically the issue will not be resolved by USB manufacturers any time soon, so what can organisations do to mitigate the threat in the meantime?

  • Use corporate endpoint software to lockdown USB ports and prevent devices from being mounted.
  • Log and monitor failed attempts by users to plug in USB devices.
  • Use USB devices from trusted vendors only.
  • Provide guidelines for ways in which staff can securely share files without relying on USB.
  • Keep anti-virus and anti-malware solutions up to date to mitigate the potential for threats to spread.
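On Linux endpoints, the first bullet can be approximated by blacklisting the USB mass-storage driver. This is a common sysadmin technique, shown here only as an illustrative config fragment; it blocks mass-storage devices but not other USB device classes, so test it on a non-production machine first:

```
# /etc/modprobe.d/block-usb-storage.conf
# Prevent the usb-storage kernel module from ever loading, so USB
# mass-storage devices cannot be mounted. /bin/true runs (and succeeds
# harmlessly) in place of the real module load.
install usb-storage /bin/true
```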

Written by
Jan Fry
Security Consultant
Corsaire