
Black Hat 2009 SSL Review: More Tricks For Defeating SSL In Practice (Moxie Marlinspike)

August 05, 2009

Moxie Marlinspike has a story to tell, and it's a story about SSL implementation flaws. His talk at Black Hat 2009 (get the slides and the movie from the Black Hat Archives) continues his earlier work in this area, which goes back as far as 2002 or so. I liked this talk because it was delivered in a low-key, here-is-what-I-discovered way.

  1. You first learn about the chained certificate validation flaws from the past, which were possible basically because X.509 is complex and programmers don't understand it: implementations neglected to check the basicConstraints extension, so any valid leaf certificate could be used to sign a certificate for an arbitrary site.
  2. Moxie wrote and published his first SSL tool, sslsniff, because Microsoft claimed the chained certificate validation flaws could not be exploited. The tool automates the exploitation and makes MITM attacks really simple (as most of Moxie's SSL tools do).
  3. Next comes sslstrip, again a MITM tool, which relies on the fact that most people begin browsing over plain-text HTTP and let sites guide them to SSL. The tool strips the links to SSL from pages in transit, leaving the user communicating in plain text.
  4. The new stuff is about the same problem Dan Kaminsky pointed out in his talk (the two discovered it independently): most SSL implementations treat certificate name fields as C strings and stop reading at an embedded NUL byte. As a result, it is possible to get a CA to sign a certificate that effectively allows you to impersonate a web site you do not own. Combining this attack with a wildcard certificate, you get a certificate that can be used to impersonate any web site.
  5. Moxie further discovered that it was possible to interfere (from a MITM position) with OCSP certificate validation to prolong the life of a revoked certificate.
  6. Even worse, he discovered that it was possible to use a NUL-enriched wildcard certificate to subvert the Firefox and Thunderbird update mechanism.
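The NUL-byte flaw in #4 is easiest to see in code. A certificate's Common Name is a length-prefixed ASN.1 string, so it may legally contain an embedded NUL byte; a validator built around C-style strings stops reading at that byte and sees only the prefix. A minimal sketch (my reconstruction with hypothetical names, not code from the talk):

```python
# Sketch of the NUL-byte flaw: an ASN.1 string carries its own length,
# so an embedded NUL byte is legal. C-style code stops at the NUL.

def c_string_view(cn: str) -> str:
    """Mimic a strcmp()-style validator: everything after the first
    NUL byte is invisible to the comparison."""
    return cn.split("\x00", 1)[0]

def length_aware_view(cn: str) -> str:
    """Correct handling: use the full declared length of the string."""
    return cn

# A CN an attacker could plausibly get signed, since the CA's
# ownership check looks at the suffix, attacker.net:
cn = "www.example.com\x00.attacker.net"
host = "www.example.com"

print(c_string_view(cn) == host)      # True  -> vulnerable match
print(length_aware_view(cn) == host)  # False -> correctly rejected
```

The CA sees a request for a subdomain of attacker.net, which the attacker legitimately owns, while a vulnerable client sees a certificate for www.example.com.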

The good thing about implementation flaws is that they are reasonably quick and easy to fix. The first issue is old news, so it's no longer a problem. The new flaws (#4, #5 and #6) are either fixed (I know they were fixed in Firefox) or will be fixed soon, so you can make sure you're using a browser version that is not vulnerable. Still, there will be plenty of people on older browsers who remain vulnerable. To protect them, it falls to the CAs to make sure they don't issue any more rogue certificates.
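Incidentally, the OCSP problem in #5 is structural: the responseStatus field of an OCSP response is not covered by any signature, so a man in the middle can answer every status query with "tryLater" (status 3), and clients that soft-fail will keep trusting a revoked certificate. The entire forged response fits in five DER bytes (my reconstruction of the encoding):

```python
# OCSPResponse ::= SEQUENCE { responseStatus OCSPResponseStatus, ... }
# A "tryLater" response carries only the status value, which is unsigned.
TRY_LATER = bytes([
    0x30, 0x03,        # SEQUENCE, length 3
    0x0A, 0x01, 0x03,  # ENUMERATED, length 1, value 3 = tryLater
])
print(TRY_LATER.hex())  # 30030a0103
```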

Although sslstrip is old news, I am worried about the fact that people can't recognise when a connection (to a web site) is insecure. No, scratch that. Why should people need to recognise that? I know how to and I do, but that's only out of necessity. I am really more worried about the fact that it's necessary to think about such things at all. Web sites should just be secure, period.
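For concreteness, the core of the sslstrip downgrade can be sketched in a few lines. This is my simplified reconstruction, not Moxie's code; the real tool also remembers which URLs it downgraded so it can transparently re-encrypt those requests toward the server:

```python
import re

def strip_ssl_links(html: str) -> str:
    """Rewrite https:// URLs to http:// in a page the MITM proxy is
    relaying, so the victim's browser never upgrades to SSL. The proxy
    itself speaks SSL to the real server on the victim's behalf."""
    return re.sub(r"https://", "http://", html)

page = '<a href="https://bank.example/login">Log in</a>'
print(strip_ssl_links(page))
# -> <a href="http://bank.example/login">Log in</a>
```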

Short term, EV certificates help. That really means they help me and you; I don't believe they help normal (i.e., non-security) people. Can we do anything to help them? A solution comes to mind, and it consists of two parts:

  1. Use only SSL and make plain-text connections impossible.
  2. Refuse to connect to web sites that do not have valid certificates.
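Part 2 is something a strict client can do today. A sketch in Python (assuming its standard ssl module; error handling omitted): the context below refuses the connection outright when the certificate chain or the hostname doesn't check out, instead of falling back to a warning the user can click through.

```python
import socket
import ssl

# Verify the certificate chain AND the hostname; no exceptions.
ctx = ssl.create_default_context()
ctx.check_hostname = True
ctx.verify_mode = ssl.CERT_REQUIRED

def connect_strict(host: str, port: int = 443) -> ssl.SSLSocket:
    """Connect over SSL, raising ssl.SSLError rather than proceeding
    if the server's certificate is not valid for this host."""
    sock = socket.create_connection((host, port))
    return ctx.wrap_socket(sock, server_hostname=host)
```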