Security

Second Life Goo and Dancing Pigs

The virtual world of Second Life has recently been suffering a series of attacks by what has been referred to as "grey goo", a term which is a direct reference to the scenario of uncontrolled exponential growth in nanotech replicators. The result of a grey goo attack is that the world fills up with junk that prevents anyone from getting anything done.

I haven't covered this before because it is well known to the point of infuriation to most people connected with Second Life. What's been more interesting recently is that people outside that community have started picking up issues like this from Second Life, particularly people more commonly associated with security in general. For example, Ed Felten wrote a couple of articles recently about the "copybot", which allows you to make a copy of anything you can see in-world without paying for it (with some limitations, which aren't relevant to this discussion). Professor Felten is perhaps best known for his work on the SDMI challenge, US v. Microsoft, and more recently the (in-)security of electronic voting machines.

Directly on point to the grey goo attacks is Eric Rescorla's Beta-testing the nanotech revolution; again, this is a bit outside what most people would think of as Eric's normal beat.

But that's my point: if you're involved, however peripherally, in security systems, you walk into something like Second Life and see a lot of problems waiting to happen; as Ed Felten puts it, these are really issues "from the It-Was-Only-a-Matter-of-Time file". New systems should be learning from the mistakes of the past, not blundering through a series of unworkable solutions until they hit on something that holds up, at least until the next bad guy comes along. Unfortunately, that doesn't seem to be how the world operates. Ed Felten has another appropriate quote for this: "Given a choice between dancing pigs and security, users will pick dancing pigs every time."

If you're interested in a bit more comment about the grey goo problem per se, I attach the comment I added to Eric Rescorla's article below.

RUSE MET LORD CURT REEL ION

I learned the difference between haphazard and random a long time ago, on a university statistics course. Since then, I've been wary of inventing passwords by just "thinking random" or using an obfuscation algorithm on something memorable ("replace Es by 3s, replace Ls by 7s", or whatever). The concern is that there is really no way to know how much entropy there is in such a token (in the information theoretic sense), and it is probably less than you might think. People tend to guess high when asked how much entropy there is in something; most are surprised to hear that English text is down around one bit per letter, depending on the context.

If you know how much information entropy there is in your password, you have a good idea of how much work it would take an attacker to guess it by brute force: N bits of entropy means up to 2^N possibilities to try (2^(N-1) on average). One way I've used for several years to get a password with known entropy is to take a fixed amount of real randomness and express it in hexadecimal. For example, I might say this to get a password with 32 bits (4 bytes) of entropy:

$ dd if=/dev/random bs=1 count=4 | od -t x1
… 0000000 14 37 a8 37

A password like 1437a837 is probably at the edge of memorability for most people, but I know that it has 32 bits worth of strength to it. So, what is one to do if there is a need for a stronger password, say one containing 64 bits of entropy? Certainly d4850aca371ce23c isn't the answer for most of us.
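As an aside, generating the raw hex doesn't need dd and od at all; a couple of lines of Python do the same thing, since os.urandom reads from the same kernel randomness pool. A minimal sketch for the 64-bit case:

```python
import os

# Draw 8 bytes (64 bits) from the kernel's randomness pool and
# print them as hexadecimal, the same trick as dd | od above.
token = os.urandom(8).hex()
print(token)  # 16 hex digits, different every run, e.g. d4850aca371ce23c
```

Of course, this just restates the problem: the result is exactly the kind of hex gibberish that is too long to memorise.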

When I was faced with a need to generate a higher entropy — but memorable — password recently, I remembered a technique used by some of the one-time password systems and described in RFC 2289. This uses a dictionary of 2048 (2^11) short English words, each representing an 11-bit fragment of a random bit string; six such words carry 66 bits, enough for the whole 64-bit string with two bits left over for a checksum. In this scheme, our unmemorable d4850aca371ce23c becomes:

RUSE MET LORD CURT REEL ION

I couldn't find any code that allowed me to go from the hexadecimal representation of a random bit string to something based on RFC 2289, so I wrote my own. You can download SixWord.java if you'd like to see what I ended up with or need something like this yourself.

The code is dominated by an array holding the RFC 2289 dictionary of 2048 short words, and another array holding the 27 test vectors given in the RFC. When run, the program checks the test vectors and then prompts for a hex string. You can use spaces in the input if you're pasting something you got out of od, for example. The result should be a six-word phrase you might have a chance of remembering. But if you put 64 bits' worth of randomness in, you know that phrase will still have the same strength as a password as the hex gibberish did.
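To give a feel for the mechanics, here is a hedged sketch of the bit-slicing in Python. It computes the six dictionary indices but omits the 2048-word table (and the test vectors) that make up the bulk of SixWord.java; the checksum rule (sum the key's 32 two-bit pairs, keep the two least significant bits of the sum) is the one I understand RFC 2289 to specify.

```python
def six_word_indices(key64: int) -> list:
    """Split a 64-bit key into six 11-bit dictionary indices, RFC 2289 style.

    A two-bit checksum is appended after the key, giving 66 bits, which
    are then read off 11 at a time, most significant bits first. Each
    index would select one of the 2048 words in the RFC 2289 dictionary.
    """
    # Two-bit checksum over the 32 two-bit pairs of the key.
    checksum = sum((key64 >> shift) & 0b11 for shift in range(0, 64, 2)) & 0b11
    bits66 = (key64 << 2) | checksum
    # Six 11-bit slices, most significant first.
    return [(bits66 >> (55 - 11 * i)) & 0x7FF for i in range(6)]

# The hex string from the post, as a 64-bit integer.
print(six_word_indices(0xD4850ACA371CE23C))
```

With the real dictionary in place, those six indices are what turn d4850aca371ce23c into a phrase like RUSE MET LORD CURT REEL ION.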

PGP/GPG Keys

I generated my first PGP RSA keypair way back in 1993. Some friends and I played around with PGP for e-mail for a while, but at the time few people knew about encryption and even fewer cared: the "no-one would want to read my mail" attitude meant that convincing people they should get their heads round all of this was a pretty hard sell. The fact that the software of the day was about as user-friendly as a cornered wolverine didn't help either.

The PGP software had moved forward a fair bit both technically and in terms of usability (up to "cornered rat") by 2002, when I generated my current DSS keypair. By this time, it was pretty common to see things like security advisories signed using PGP, but only the geekiest of the geeks bothered with e-mail encryption.

Here we are in 2006: I still use this technology primarily to check signatures on things like e-mailed security advisories (I use Thunderbird and Enigmail), but I've finally found a need to use my own key, and it isn't for e-mail.

Over the years, PGP (now standardised as OpenPGP) has become the main way of signing open source packages so that downloaders have a cryptographic level of assurance that the package they download was built by someone they trust. Of course, the majority of people still don't check these signatures, but systems like RPM often do so on their behalf behind the scenes.

I've recently agreed to take on some limited package build responsibilities for such a project, so I've installed the latest versions of everything and updated my about page so that people can get copies of my public keys. Of course, there is no particular reason anyone should trust those keys; this is where the web of trust is supposed to come in, allowing someone to build a path to my keys through a chain of people they trust (directly or indirectly). Unfortunately, my current public key is completely unadorned by useful third-party signatures. If you think you can help change that (i.e., you already know me, already have an OpenPGP keypair and would be willing to talk about signing my public key) please let me know.

Screening People with Clearances

Another short, cogent essay from Bruce Schneier, this time on why it makes sense to be Screening People with Clearances:

Why should we waste time at airport security, screening people with U.S. government security clearances? […]

Poole argued that people with government security clearances, people who are entrusted with U.S. national security secrets, are trusted enough to be allowed through airport security with only a cursory screening. […]

To someone not steeped in security, it makes perfect sense. But it's a terrible idea, and understanding why teaches us some important security lessons.

This is worth reading just to understand how a U.S. security clearance isn't quite the concrete thing you perhaps assumed it was, but I think the comments on "subjective agenda" are important too. After all, if the people who make the rules aren't bound by them, what incentive do they have to make sensible rules? I think it would be fair to guess, for example, that the average lawmaker hasn't spent a lot of time recently standing in an airport in their stockinged feet with their permitted items in a transparent bag.

(Via Schneier on Security.)


"Security Engineering" available for download

Skinflints of the world rejoice; Ross Anderson's textbook Security Engineering is now available for free download:

My book on Security Engineering is now available online for free download here.

I have two main reasons. First, I want to reach the widest possible audience, especially among poor students. Second, I am a pragmatic libertarian on free culture and free software issues; […]

I’d been discussing this with my publishers for a while. They have been persuaded by the experience of authors like David MacKay, who found that putting his excellent book on coding theory online actually helped its sales. […]

(Via Light Blue Touchpaper.)

Bruce Schneier Facts

Everybody loves Eric Raymond is a pretty weird web comic to start with, combining as it often does obscure open-source in-jokes with the premise that Richard Stallman, Eric Raymond and Linus Torvalds all live together in a flat somewhere.

Today's episode jumps over into the even more obscure realm of crypto in-jokes, with the even weirder premise that Bruce Schneier is actually a cryptographic Chuck Norris.

Clicking through to the interactive Bruce Schneier Facts Database is well worthwhile. My favourite random fact so far is:

Bruce Schneier doesn't even trust Trent. Trent has to trust Bruce Schneier.

Obscure enough for you?

Update Roulette

Not installing security updates isn't really a viable strategy these days. Even waiting a few days to see whether other people have trouble with the update is problematic when a zero-day exploit might be available.

It's a bit like playing Russian Roulette in a room full of people who feel their job is to point their guns at you until you pull the trigger.

Obviously this goes wrong once in a while. The recent Samba 3.0.23 update broke access from Windows and Mac machines on my Fedora Core 4 system, but some people with Fedora Core 5 are reporting that all logins to their systems are disabled.

After a bit of searching around and trying various things, I found that in my case I could bring my system back to life by "upgrading" to an older version of the four packages in question.

There is some indication that version 3.0.23a will be out real soon now… but that doesn't really make me feel completely happy. Nor does the realisation that my FC4 system will officially be "legacy" next week and I'll need an upgrade to at least FC5 to stay within my "properly supported" comfort zone.

This kind of thing does seem to happen more often with Fedora, and anecdotally seems to be related to their strategy of pulling in new releases rather than back-porting security fixes. Moving to a more "enterprise" style system for the places where I need stability rather than the latest features is probably the right answer for me; once RHEL 5 is out I will probably take a close look at it and the equivalent CentOS release.

[Update 20060729: the 3.0.23a release doesn't fix the problem, at least for me.]

WAYFs and Discovery

Of course, the real reason I was in Windermere was not to photograph ducks but to present some slides on the discovery problem in Shibboleth. You can download a copy of the presentation "WAYFs and Discovery" here (1.4MB PDF).

The abstract (accidentally omitted from the meeting material) was:

The standard model of Identity Provider discovery in Shibboleth deployments is that of a federation-supplied, central discovery service called a WAYF. Although an essential backstop, this approach has significant shortcomings. We present some recent work in the area of multi-federation WAYFs, and review alternative discovery technologies (both present and future) that allow deployers to improve the user experience.

My co-author Rod Widdowson can be found here.

Dick Hardt at OSCON

Speaking of identity, Dick Hardt of Sxip gave a cracking keynote at this year's Open Source Conference.

If you're at all interested in digital identity (and you're not allergic to Larry Lessig's presentation style), I highly recommend taking the fifteen minutes required to watch this. It is very light on technical details, but gets across the critical differences between "old style" digital identity and the so-called "Identity 2.0" systems that are starting to emerge. It even manages to be entertaining while it does so. And the pictures of a Vancouver "Cold Beer and Wine" store bring back memories…

Schneier at Turnrow

This last week, the security people at my wife's place of work have instituted a new policy of X-raying lunchtime sandwiches purchased outside the building. Yesterday, a security guy I've been saying "hi" to regularly for a year asked me to present a credential I've never had (and then let me talk him out of it, which didn't improve my opinion much). And of course, our politicians have gone into emergency "let's sneak some laws past quick, before people start thinking again" mode.

None of this was very surprising; by now everyone is used to the suffocating results of the knee-jerk "must be seen to do something" reaction after a major incident. Whether the security measures imposed make sense in any way is another question, and I've always put a lot of it down to woolly thinking.

A newly published interview with Bruce Schneier at Turnrow reminds me that many of these measures make more sense if you think about them as security decisions being taken by someone else, ostensibly for your benefit, but within the decision-maker's agenda rather than your own. Cutting it down to the bone, if someone is making a cost/benefit analysis on your behalf, they are likely to make sure that they will benefit while you pay the cost. If you can throw the cost (in money, convenience, or loss of civil liberties) over the wall to someone else you can justify almost anything, no matter how small the benefit.

This is an excellent interview, distilling most of the important points of Schneier's book Beyond Fear into a couple of pages. Worth reading, and worth passing around to people when they ask why something incomprehensible is being foisted on them in the name of "security".

[via Schneier on Security, of course]

