Dual_EC_DRBG Back Door?

Bruce Schneier reports that one of the pseudo-random number generators in the recently released NIST Special Publication 800-90 (.pdf) includes something that looks awfully like an intentional back door:

What Shumow and Ferguson showed is that these numbers have a relationship with a second, secret set of numbers that can act as a kind of skeleton key. If you know the secret numbers, you can predict the output of the random-number generator after collecting just 32 bytes of its output. To put that in real terms, you only need to monitor one TLS internet encryption connection in order to crack the security of that protocol. If you know the secret numbers, you can completely break any instantiation of Dual_EC_DRBG.

It's possible that this is accidental; if it is deliberate, the prime suspects are the NSA, who have been pushing to get this algorithm adopted for some time. So much for the usual outsider's paranoia about how the evil TLA might be compromising our cryptography for their own nefarious ends. That's not the scary part, though; the really scary part is the thought that perhaps that isn't what is going on:

If this story leaves you confused, join the club. I don't understand why the NSA was so insistent about including Dual_EC_DRBG in the standard. It makes no sense as a trap door: It's public, and rather obvious. It makes no sense from an engineering perspective: It's too slow for anyone to willingly use it. And it makes no sense from a backwards-compatibility perspective: Swapping one random-number generator for another is easy.

Shumow and Ferguson's presentation (.pdf) is short, and although there are some squiggly letters in it, you don't need to understand the mathematics of elliptic curves to follow the argument.

I look forward to seeing how this one plays out.

(Via Schneier on Security.)


As part of one of the more deeply nested yak shaving exercises I've been working through recently, I have added MicroIDs to various pages on this site. For example, the header for the main index page for this blog now includes the following elements:

<!-- MicroID for '/' variant of URL -->
<meta name="microid"
  content="mailto+http:sha1:b887e662ed3d811e665ef4a034e018a521a5467d" />
<!-- MicroID for '/index.html' variant of URL -->
<meta name="microid"
  content="mailto+http:sha1:ed938d07588303f4eeee45adfef090221e0c692e" />

A MicroID is a very simple way of making a verifiable statement about the ownership of a page. The specification goes into more detail, but essentially the value you see is constructed by independently hashing your e-mail address and the URL of the page in question, concatenating those results and then hashing once more.

The way you use a MicroID in practice is as supporting evidence for a claim of ownership to some third party who already knows your e-mail address. If you say "I own that page" to such a third party, they can compute the same MicroID value from your e-mail address and the page's URL and then check for a match within the page's <meta name="microid"> headers. You can see this claim-checking in action by looking at the "verified" links in my claimID profile.
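The construction and check described above can be sketched in a few lines of Python. This is an illustrative sketch, not the normative algorithm: it assumes the inner hashes are taken over the full URIs (the mailto: form of the address and the http: form of the URL) and that hex digests are concatenated before the outer hash; the e-mail address and URL below are made up.

```python
import hashlib

def microid(email_uri: str, page_uri: str) -> str:
    """Compute a mailto+http MicroID: hash each URI independently,
    concatenate the hex digests, then hash once more."""
    inner = (hashlib.sha1(email_uri.encode()).hexdigest()
             + hashlib.sha1(page_uri.encode()).hexdigest())
    return "mailto+http:sha1:" + hashlib.sha1(inner.encode()).hexdigest()

def verify(claimed: str, email_uri: str, page_uri: str) -> bool:
    """A third party who already knows the e-mail address recomputes
    the value and compares it with the one published in the page."""
    return claimed == microid(email_uri, page_uri)

mid = microid("mailto:alice@example.com", "http://example.com/")
print(mid)  # mailto+http:sha1: followed by 40 hex characters
print(verify(mid, "mailto:alice@example.com", "http://example.com/"))    # True
print(verify(mid, "mailto:mallory@example.com", "http://example.com/"))  # False
```

Note how the verifier must already know the e-mail address: the hash itself reveals nothing to a harvester who only sees the page.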

MicroID is an improvement on the perhaps more obvious approach of just embedding your e-mail address in the page because it doesn't reveal your e-mail address to things like spam address harvesters. It also improves on a simple hash of the e-mail address by including the URL in the calculation because all pages owned by the same e-mail address are thereby given different MicroIDs. This in turn means that pages can't be grouped together, even anonymously, by web spiders. Looked at from this point of view, a MicroID is a salted hash of the e-mail address.

I'm pretty sure that you could do the same job with one or even two fewer hash operations (for example, the URL is known by definition, so hashing it serves no purpose that I can see), but for static pages performance is not a concern. If I were running a large content site with dynamically generated pages, though, this aspect of MicroID might put me off a little.

Note that although a MicroID looks a little like a digital signature (of the URL) it really isn't; in particular, a MicroID can easily be repudiated because anyone knowing your e-mail address can generate MicroID values "for" you and put them on any pages they please. In other words, you can use it to help confirm ownership of something by a claimant, but not to prove ownership by someone who denies the connection.

Generating the MicroID values for blog pages in particular was made simpler for me by Phil Windley's MicroID plugin for Movable Type. I did have to tweak it a little to match the current MicroID spec, as Phil's plugin as distributed generates what is now considered a "legacy" format lacking the scheme and algorithm specifiers.

Firefox Cipher Suites

When your browser connects to a web site protected by transport layer security of some kind (usually by accessing an https:// URL) there's a negotiation between the two parties. Each party (browser, server) comes to the negotiation with a list of cipher suites that it is prepared to use, and the result is that one of these suites is chosen for the connection.

Recently I ran into a situation where Firefox 2.0 wouldn't connect to a site that Firefox 1.5 had no problems with. It's pretty hard to figure out from the documentation which cipher suites Firefox is prepared to use, so I decided to determine the answer directly by snooping on the negotiation part of the protocol.

Read on for method and results.
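Snooping the wire is the definitive method, since it shows exactly which suites a particular browser sends in its ClientHello. As a general illustration of the client side of the negotiation, though, a TLS library will usually tell you which suites it is configured to offer; the sketch below uses Python's ssl module (not Firefox's NSS, so the list will differ from any browser's) purely to show what such an offer list looks like.

```python
import ssl

# Build a client context with the library's default settings and list
# the cipher suites it would offer during the TLS handshake.
ctx = ssl.create_default_context()
for suite in ctx.get_ciphers():
    print(suite["name"], suite["protocol"])
```

Each entry corresponds to one suite the client is prepared to use; the server picks one from this list (or refuses the connection if there is no overlap, which is exactly the failure mode described above).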

Alice and Bob... and Bruce

I couldn't resist this T-shirt design from the people who bring us Everybody Loves Eric Raymond and Bruce Schneier Facts.

Obviously this is only going to be funny to (a) a very particular kind of nerd with (b) a very particular sense of humour. I suspect I'm not the only member of both sets, though.


I generated my first PGP RSA keypair way back in 1993. Some friends and I played around with PGP for e-mail for a while, but at the time few people knew about encryption and even fewer cared: the "no-one would want to read my mail" attitude meant that convincing people they should get their heads round all of this was a pretty hard sell. The fact that the software of the day was about as user-friendly as a cornered wolverine didn't help either.

The PGP software had moved forward a fair bit both technically and in terms of usability (up to "cornered rat") by 2002, when I generated my current DSS keypair. By this time, it was pretty common to see things like security advisories signed using PGP, but only the geekiest of the geeks bothered with e-mail encryption.

Here we are in 2006: I still use this technology primarily to check signatures on things like e-mailed security advisories (I use Thunderbird and Enigmail), but I've finally found a need to use my own key, and it isn't for e-mail.

Over the years, PGP (now standardised as OpenPGP) has become the main way of signing open source packages so that downloaders have a cryptographic level of assurance that the package they download was built by someone they trust. Of course, the majority of people still don't check these signatures but systems like RPM often do so on their behalf behind the scenes.

I've agreed to take on some limited package build responsibilities for such a project recently, so I've installed the latest versions of everything and updated my about page so that people can get copies of my public keys. Of course, there is no particular reason anyone should trust those keys; this is where the web of trust is supposed to come in, allowing someone to build a path to my keys through a chain of people they trust (directly or indirectly). Unfortunately, my current public key is completely unadorned by useful third-party signatures. If you think you can help change that (i.e., you already know me, already have an OpenPGP keypair and would be willing to talk about signing my public key) please let me know.

"Security Engineering" available for download

Skinflints of the world, rejoice: Ross Anderson's textbook Security Engineering is now available for free download:

My book on Security Engineering is now available online for free download here.

I have two main reasons. First, I want to reach the widest possible audience, especially among poor students. Second, I am a pragmatic libertarian on free culture and free software issues; […]

I’d been discussing this with my publishers for a while. They have been persuaded by the experience of authors like David MacKay, who found that putting his excellent book on coding theory online actually helped its sales. […]

(Via Light Blue Touchpaper.)

Bruce Schneier Facts

Everybody loves Eric Raymond is a pretty weird web comic to start with, combining as it often does obscure open-source in-jokes with the premise that Richard Stallman, Eric Raymond and Linus Torvalds all live together in a flat somewhere.

Today's episode jumps over into the even more obscure realm of crypto in-jokes, with the even weirder premise that Bruce Schneier is actually a cryptographic Chuck Norris.

Clicking through to the interactive Bruce Schneier Facts Database is well worthwhile. My favourite random fact so far is:

Bruce Schneier doesn't even trust Trent. Trent has to trust Bruce Schneier.

Obscure enough for you?

More on Hashes

Since I last wrote about the problem with hashes, there has been a fair bit of activity and some progress:

  • An internet draft is available describing the nature of the attacks on hash functions, and how different internet applications are affected.
  • According to the OpenSSL changes file, additional hash algorithms are going to be supported in version 0.9.8. There is no indication of a date for that release, though.
  • Don Eastlake's internet draft on Additional XML Security Uniform Resource Identifiers (URIs) has progressed to its final status as RFC4051.

I have updated my previous article to reflect this.

[Updated 20051030 with latest URL for the Hoffman draft.]

SHA-1 and XMLDSIG: No Plan B?

People in the know are reporting that the 160-bit Secure Hash Algorithm has been broken by a group in China. When the group's paper is published we'll all be able to judge, but the initial reports indicate that SHA-1 has about 11 bits (roughly a factor of 2000) less collision resistance than its output length would suggest. This isn't a huge surprise; there were some indications last year that this might happen eventually, although I don't think anyone expected things to move so quickly.

The break is a big deal for academic cryptographers, but it doesn't seem to represent an immediate disaster in practice. Existing digital signatures and certificates are probably safe for now, in particular, as the kind of attacks you can mount against a system using collisions mainly apply to new signatures. The revised 69-bit strength of SHA-1 is still good today against all but fairly wealthy adversaries; Bruce Schneier has some estimates of how rich.
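The arithmetic behind those figures is simple birthday-bound bookkeeping: a 160-bit hash should resist collision finding up to about 2^80 operations, the reported attack needs about 2^69, and the 11-bit gap is a factor of 2^11 = 2048, which is the "roughly 2000 times" in the reports.

```python
output_bits = 160
expected_collision_bits = output_bits // 2   # birthday bound: 2**80 operations
attack_bits = 69                             # reported cost of the new attack

factor = 2 ** (expected_collision_bits - attack_bits)
print(factor)  # 2048: roughly 2000 times easier than the design strength
```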

Obviously, there will now be a move towards beefier hash algorithms like SHA-256 or SHA-512 (PDF link). In the long run, because these come from the same family, they may turn out to give only temporary respite. More immediately, they aren't implemented by all current cryptographic libraries: for example, they have been in Java since 1.4, but the extremely popular OpenSSL package doesn't ship with support for them yet.
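Whether a given runtime already offers the larger algorithms is easy to check; as an illustration, the sketch below asks Python's hashlib (which wraps whatever OpenSSL provides) what it guarantees, and shows the digest sizes involved.

```python
import hashlib

# Algorithms the hashlib module promises on every platform.
print(sorted(hashlib.algorithms_guaranteed))

# Digest sizes in bytes: SHA-1 is 20 (160 bits); the beefier family
# members produce 32 bytes (SHA-256) and 64 bytes (SHA-512).
for name in ("sha1", "sha256", "sha512"):
    print(name, hashlib.new(name).digest_size)
```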

Further up the stack, many standards already allow for selectable algorithms and for negotiating the selection at run time. Digital signature applications are harder, because a signature is created once and verified later: there is normally no opportunity for signer and verifier to negotiate algorithms. This means, for example, that public suppliers of digital certificates probably won't be able to shift from SHA-1 very soon, as many of the systems that use their certificates (browsers, for example) are based on cryptographic libraries that don't support anything stronger than SHA-1 today.

One place where there seems to be a complete absence of a "Plan B" at present is the joint IETF/W3C standard for digital signatures in XML (XMLDSIG). The published standard from 2002 (also RFC3275) only discusses SHA-1. Moving away from this position requires several steps:

  • Implementation of additional algorithms in the basic cryptographic libraries such as OpenSSL (already true for some libraries, OpenSSL has code checked in but not released). [20050421: OpenSSL change log indicates this is scheduled for the 0.9.8 release.]
  • A specification of URIs naming additional algorithms for XMLDSIG (Don Eastlake has an Internet Draft on this dating from last year.) [20050421: this is now RFC4051.]
  • Access to those algorithms from XMLDSIG implementations, such as Apache XML Security.
  • Either:
    • Some sort of standard specifying that XMLDSIG implementations should implement additional algorithms, or
    • A similar kind of must-implement agreement even higher in the stack, in my area of interest either at the level of SAML or Shibboleth.
  • Last but not least: everyone installs new versions of all of the above.

None of this sounds like it is going to happen overnight. It is important that it all happens soon, though, as the general feeling seems to be that further progress against SHA-1 is likely; only the timing is unknown.

[Last updated 20050421: OpenSSL status, RFC4051.]
