Heartburn….. I mean Heartbleed


At this point it almost seems like Heartbleed doesn’t even need an introduction.  This has probably been one of the best-marketed vulnerabilities since Conficker (thanks again, MS08-067!), and it’s only been out for a week!

But let’s make sure we get the understanding correct on what Heartbleed is and is not.

Heartbleed (officially noted as CVE-2014-0160 in the MITRE CVE® system) is a vulnerability in a software package called OpenSSL®, an open source implementation of the SSL/TLS standards.  So why is Heartbleed found in so many different products?

If you are developing a software solution or a system that needs to provide encryption for data, you have two options: write your own software, or use someone else’s.  Several commercial variants exist, but OpenSSL® is well supported, well documented, and free, so there is a huge “install” base of this software all over the Internet.

A completely comprehensive listing of exactly what products are affected is hard to build, since most of the organizations that utilize OpenSSL® under the hood don’t have it prominently disclosed.  The information security community is doing a pretty good job of putting together a list, but right now the best way to know for sure if you have any exposure to Heartbleed is to test.  There are plenty of methods available for testing exposure to this vulnerability.

  • Vulnerability Scanners – Most commercial vulnerability scanners have a signature/plugin available
  • Metasploit
    • An auxiliary scanner module called openssl_heartbleed has been released and is available in the main trunk (update with msfupdate and you’ll get it)

Go ahead and test.  I’ll wait right here.
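If you want a quick local sanity check alongside the scanners above, here’s a minimal Python sketch (our own helper, not an official tool) that flags the OpenSSL version string a binary is linked against.  Versions 1.0.1 through 1.0.1f are the affected ones:

```python
import re
import ssl

# OpenSSL 1.0.1 through 1.0.1f shipped the vulnerable heartbeat code;
# 1.0.1g and later, and the 1.0.0/0.9.8 branches, are not affected.
VULNERABLE = re.compile(r"OpenSSL 1\.0\.1[a-f]?(?:\s|$)")

def looks_vulnerable(version_string):
    """True if the version string falls in the 1.0.1 - 1.0.1f range."""
    return bool(VULNERABLE.search(version_string))

if __name__ == "__main__":
    # Check the OpenSSL build Python itself is linked against.
    print(ssl.OPENSSL_VERSION, "->", looks_vulnerable(ssl.OPENSSL_VERSION))
```

Keep in mind a version check only tells you what a binary claims to be; the only way to know a remote service is exposed is to actually test it.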

Getting back to what makes this particular bug so bad, the problem happens with a very specific action available in the SSL/TLS standards.  The most common example of this is with the secure HTTP (HTTPS) protocol.  When your browser requests a connection over HTTPS, there is some math that both your browser and the server have to do to set up the secure part of the HTTPS connection.  It doesn’t take a really long time, but it does take some processing power and a few milliseconds.  When the session your browser and the server have set up expires, both sides have to redo all that math, so a new feature called “heartbeat” was added that allows a session to remain open, saving your browser, and more importantly the server, from having to do the additional processing.

The heartbeat check basically consists of a specific type of request over the HTTPS connection from the browser to the server, to which the server responds with an acknowledgement that echoes back the request’s payload.  The problem is that the function building that response trusts the payload length claimed in the request rather than the size of the payload that actually arrived, so the browser side can claim a larger length and make the server return information that was never intended for the browser to see.
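To make the bug concrete, here’s a toy Python simulation of the flawed logic (our own illustration, not the actual OpenSSL code): the response is built from the length the requester claims, not the length of the payload that actually arrived.

```python
def vulnerable_heartbeat(payload, claimed_length):
    """Toy model of the buggy heartbeat handler."""
    # The echoed payload sits next to unrelated secrets in process memory.
    memory = bytearray(payload) + bytearray(b"...user=admin&pass=hunter2...")
    # The bug: the copy trusts claimed_length instead of len(payload).
    return bytes(memory[:claimed_length])

# An honest request echoes only its own payload:
vulnerable_heartbeat(b"ping", 4)    # b'ping'
# A malicious request over-reads the adjacent "memory":
vulnerable_heartbeat(b"ping", 20)   # b'ping...user=admin&pa'
```

The fix in patched OpenSSL is exactly what you’d expect: check the claimed length against what actually arrived, and drop the request if they don’t agree.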

The security researchers who discovered this vulnerability realized that the information the server was returning was actually whatever was in memory for the process on the server side.  This can include very sensitive things like 1) the private key for the certificate used to secure the service, 2) usernames and passwords for any users who may have logged in, and 3) other sensitive data that the service may be processing.

UPDATE: Follow-on testing and research has not conclusively shown that private keys can be recovered via the Heartbleed vulnerability, but until confirmed otherwise we advise everyone to still consider it possible.  It appears the biggest concern is in accessing confidential information that the service is processing, or obtaining usernames and passwords of users who may have just logged in.

To be accurate, each Heartbleed request discloses only a small amount of the memory space (up to 64 kilobytes) to the attacker, so to really steal information an attacker would need to be constantly “heartbleeding” a site.  Of course, that is distinctly possible to do, and the worst part is that nothing is logged on the server side when this attack is conducted.

This excellent copyrighted info-graphic from BAE Systems does a nice job describing the vulnerability:


What has brought the most attention to this vulnerability (besides the excellent branding) has been how pervasive it is on the Internet.

We’ve found vulnerable versions on all kinds of systems: security appliances (!), web interfaces for software (really well known software like VMware® ESX®, Splunk®, and others), and all kinds of embedded network devices (home Wi-Fi routers especially).


The fine folks at the OpenSSL® project had been alerted just a few days before this vulnerability was disclosed publicly, but non-vulnerable versions of OpenSSL® are available now.

The problem for most organizations is that many of the vulnerable systems are appliances or embedded devices that don’t let you modify these types of system files yourself, so depending on where you find exposure you’ll need to wait for the vendor to release an update.

First priority should be ensuring your Internet facing services are clear of this exposure, then work through testing in the internal network.

If you do find any Internet facing service that is vulnerable to Heartbleed, the best advice is to 1) patch the service, 2) revoke the current certificate, 3) generate a new private key, 4) replace the certificate, and 5) consider requiring users of that service to reset their passwords.
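One quick way to audit the certificate steps after the fact is to check whether a service’s certificate was issued after the public disclosure date (April 7, 2014); a notBefore date still earlier than that suggests the cert was never replaced.  Here’s a rough sketch using Python’s standard ssl module (our own helper, not an official check):

```python
import socket
import ssl
from datetime import datetime

HEARTBLEED_DISCLOSED = datetime(2014, 4, 7)

def parse_not_before(not_before):
    """Parse the notBefore string from getpeercert(), e.g. 'Apr  9 00:00:00 2014 GMT'."""
    return datetime.strptime(not_before, "%b %d %H:%M:%S %Y %Z")

def cert_reissued_since_disclosure(host, port=443):
    """Fetch a server certificate and check whether it post-dates disclosure."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    return parse_not_before(cert["notBefore"]) >= HEARTBLEED_DISCLOSED
```

This is only a heuristic, of course: a cert can be reissued without a new private key, which is exactly why step 3 above matters.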

All of this should follow your standard vulnerability management procedures for a critical bug.  Test the patch and roll it out as soon as possible!

The EI security team will continue researching and working on resolving this issue.

UPDATE 2: Another great article on this topic, this one complete with some Snort® signatures!


UPDATE 3: So yeah, there are now plenty of confirmed cases of private key disclosure via a Heartbleed attack.  Time to get new certs and change those passwords!

Survivor: Internet Edition

It’s a pretty commonly accepted notion: put something out on the Internet, and a whole bunch of people are going to take a swing at it.  Turn up a new WordPress site?  All kinds of spam in the comments and failed logins to the admin portal, and that’s even before you get any actual readers who might want to read what you have to say!

Following some basic precautions will ensure these “attacks” stay merely annoyances and not full-blown problems:

  • Install updates as soon as they are available (or as close as possible)
  • Use complex passwords – and change the built-in ones!
  • Enable two-factor authentication
  • Implement some kind of intrusion detection
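For the intrusion detection piece, you don’t need anything fancy to start.  Even a small script that tallies failed logins per source IP (a rough sketch of our own, assuming standard sshd auth.log lines) will show you who’s knocking:

```python
import re
from collections import Counter

# Matches the standard sshd "Failed password" line and captures the source IP.
FAILED = re.compile(r"Failed password for .+ from (\d{1,3}(?:\.\d{1,3}){3})")

def failed_logins_by_ip(log_lines):
    """Count failed SSH login attempts per source IP."""
    hits = Counter()
    for line in log_lines:
        m = FAILED.search(line)
        if m:
            hits[m.group(1)] += 1
    return hits
```

Feed the top offenders into a firewall drop rule or a tool like fail2ban and you’ve got the beginnings of active blocking.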

In the course of various projects we’ll spin up a bunch of Internet facing servers, and pretty much within minutes we see blind attempts to log into our systems from IPs that have no business being there.  We’ve gotten used to following the pattern of hardening systems before Internet exposure, but rather than just roll our eyes at yet another attempt to log in as root, we thought it would be an interesting study to quantify exactly how much unwanted traffic we see.

What impacts the numbers most?

  • Do net blocks assigned to virtual server hosting companies draw more attention than a server stood up on a residential connection?
  • Does the presence of a DNS name assigned to the public IP matter?
  • Would active blocking of offending IPs deter them, or will they return again later?

Without crossing the line into a honeypot experiment (which is really interesting too, but a topic for another day), we’re working on a study to try to put some data to this and see if these attacks are truly blind or if they are somewhat targeted.

Testing Plan:

We will configure 4 different servers for access to the Internet:

  • Linux server with SSH and a WordPress site – VPS Host – DNS entry with reverse lookup
  • Linux server with SSH and a WordPress site – VPS Host – no DNS entry
  • Linux Server with SSH and WordPress – Residential Internet Connection – DNS entry with reverse lookup
  • Linux Server with SSH and WordPress – Residential Internet Connection – No DNS entry
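Once the sensors are up, the analysis itself is simple.  Here’s a rough sketch of the tally we plan to run (the record format here is hypothetical, just our own choice), counting unwanted attempts per server configuration per day:

```python
from collections import defaultdict

def attempts_per_config(records):
    """records: iterable of (config_label, src_ip, day) tuples from each
    sensor.  Returns {(config_label, day): attempt_count}."""
    counts = defaultdict(int)
    for config, _src_ip, day in records:
        counts[(config, day)] += 1
    return dict(counts)
```

Comparing those counts across the four configurations should tell us whether VPS net blocks and DNS entries actually draw more fire.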

Depending on the results of this survey we may branch out and test additional configurations.

Stay safe!

Enhanced logging with the Citrix NetScaler

There’s nothing more frustrating than logs that don’t actually log useful information (OK, maybe there are a few things that are more annoying).  But by and large, when you’re having a problem with one of your load balanced apps, you want to KNOW right now!

I recently discovered the custom message action option, and it has turned out to be an incredibly useful tool.  Like everything else in NetScaler world, you bind the message action to some entity, a content switching virtual server policy for example, and off you go.

The message action enables you to pull specific information about the request or response that meets the criteria you set, and boom, it’s in your syslog traffic.  You can snag pretty much anything you want: URL, headers, POST data, whatever.  Pull it up in your favorite log analyzer and boom, instant analytics.  Here’s how you get started:

1. Set the basics

This tutorial assumes you are already set up and successfully syslogging to a remote host, but you can log this same data to the newnslog file too (though I can’t think of a more painful way to look at log data, but whatever!).

set audit syslogParams -userDefinedAuditlog YES

2. Define what your super awesome custom message will look like:

We ran into an issue where we noticed the default policy on a content switched virtual server was taking lots of hits, and we wanted to see what was getting there.  We decided we’d need to know:

  • The IP address of the request
  • The HTTP method
  • The Host (from the header)
  • The URL (excluding all that crazy query stuff… ah .NET!)

We also decided that the message should make sense to a human reading it, not just to some search time extractions in your Elasticsearch cluster…  So here’s our policy!

"Content switch policy hit for load-balanced-vserver:"+HTTP.REQ.LB_VSERVER.NAME+" ClientIP: "+CLIENT.IP.SRC+" issued a "+HTTP.REQ.METHOD+" request for "+HTTP.REQ.HEADER("Host")+HTTP.REQ.URL.PATH.HTTP_URL_SAFE

Let’s break this down:

  • Anything in quotes goes as a literal string in your syslog message (that’s how we make sure humans can read it), and you stitch everything together with the plus (+)
  • HTTP.REQ.LB_VSERVER.NAME – returns the name of the load balanced virtual server handling the traffic.  We decided after we built the policy that if we made it correctly we could use it all over the place (we have TONS of content switching virtual servers)
  • CLIENT.IP.SRC – returns the IP making the request
  • HTTP.REQ.METHOD – returns the GET or POST or whatever method
  • HTTP.REQ.HEADER(“Host”) – the DNS name the user is using to access the service
  • HTTP.REQ.URL.PATH.HTTP_URL_SAFE – the URL that was requested, properly sanitized (watch out for second order XSS!), with just the path (it throws away anything from the ?<query>= on)

We decided to use just the path and not capture the entire URL because adding the additional query characters didn’t matter (we almost never make content switching decisions based on that), it made the logs harder to read, and when we tried to analyze the results they just muddied the waters by making every request look unique.
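Since the message format is our own, parsing it downstream is easy.  Here’s a quick Python sketch (the field names are just our choices) that turns each syslog line back into structured fields for analytics:

```python
import re

# Matches the literal strings from the custom message action above.
MSG = re.compile(
    r"Content switch policy hit for load-balanced-vserver:(?P<vserver>\S+)"
    r" ClientIP: (?P<client_ip>\S+)"
    r" issued a (?P<method>\S+)"
    r" request for (?P<host_and_path>\S+)"
)

def parse_message(line):
    """Return a dict of fields, or None if the line isn't one of ours."""
    m = MSG.search(line)
    return m.groupdict() if m else None
```

This is exactly why we kept the message human-readable: the same literal strings that help a person skim the log double as field delimiters for the parser.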

A couple of things to watch out for here:

Don’t mix up request and response header stuff; it doesn’t work like that (at least not that we can find).

If you bypass the safety check, make sure you don’t expose yourself to risk later.

Hope this helps add some insight into your load balancing operation!
