September 23rd, 2006 at 17:38 UTC by Richard Clayton
It’s common to think of random numbers as being an essential building block in security systems. Cryptographic session keys are chosen at random, then shared with the remote party. Security protocols use “nonces” for “freshness”. In addition, randomness can slow down information gathering attacks, although here they are seldom a panacea. However, as George Danezis and I recently explained in “Route Fingerprinting in Anonymous Communications” randomness can lead to uniqueness — exactly the property you don’t want in an anonymity system.
Cryptography makes extensive use of randomness. The unbreakable Vernam Cipher (XORing the message with a one-time pad of the same length) is well-known — as is the problem of securely transporting the random values in the one-time pad to the recipient. Practical systems use a cryptographically secure pseudo-random number generator, typically implemented with a block cipher — vesting the security properties in a randomly chosen key (a rather smaller item to transmit to the receiver — and usually known as a “session key”). Cryptographic systems fail when the random numbers are re-used (as happened with the Venona material) or when the generation system is weak, as was the case with an early version of Netscape. (BTW: if you want to make some random numbers for crypto then you’d hardly do better than to start with Peter Gutmann’s “Chapter Six” paper.)
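As a small illustration (a toy sketch in Python, not production crypto), the Vernam cipher really is just XOR — and the same few lines also show why pad re-use, the Venona failure, is fatal: XORing two ciphertexts made with the same pad cancels the pad and leaks the XOR of the two plaintexts.

```python
import secrets

def vernam(message: bytes, pad: bytes) -> bytes:
    """XOR each message byte with the corresponding pad byte."""
    assert len(pad) >= len(message), "pad must be at least as long as the message"
    return bytes(m ^ p for m, p in zip(message, pad))

message = b"attack at dawn"
pad = secrets.token_bytes(len(message))  # truly random, and used only once

ciphertext = vernam(message, pad)
recovered = vernam(ciphertext, pad)      # XOR is its own inverse
assert recovered == message

# Re-using the pad (the Venona failure): the pad cancels out, leaking m1 XOR m2
c1 = vernam(b"message one!!!", pad)
c2 = vernam(b"message two!!!", pad)
leak = bytes(a ^ b for a, b in zip(c1, c2))
assert leak == vernam(b"message one!!!", b"message two!!!")
```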
Security protocols regularly include nonces: for a simple example, Needham-Schroeder starts with Alice sending a message to the server of the form (Alice, Bob, Nonce) and Bob uses a second nonce to confirm that he is sharing a key with Alice. There is often a requirement here for uniqueness as well as unguessability — though it’s usual just to pick values from a very large set of possibilities and assume that clashes never occur.
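That first Needham-Schroeder message can be sketched in a couple of lines (the principal names are illustrative; the point is only how a nonce is drawn). Picking 128-bit values from a cryptographically strong source makes clashes so improbable that the “assume uniqueness” convention is safe in practice.

```python
import secrets

def make_nonce() -> int:
    # 128 bits: drawn from so large a set that clashes are assumed never to occur
    return secrets.randbits(128)

# First message of Needham-Schroeder: (Alice, Bob, Nonce)
alice_nonce = make_nonce()
msg1 = ("Alice", "Bob", alice_nonce)
```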
So far, randomness seems like a good thing.
It’s still helpful when one turns to information leakage… but not as helpful as is expected by those who have just studied cryptography and security protocols. For a close-to-home example, Steven Murdoch’s “hot or not” scheme modulates the temperature of a machine reached indirectly via an anonymous communication system, and then looks for a matching change of clock skew on a machine reached directly.
One of the suggested countermeasures in comments on this blog post is the addition of random changes to the clock skew. This will certainly slow down the attack, but it does not eliminate it, since the random changes will — over many measurements — simply average out. This is not unrelated to Timothy Newsham’s observations about adding random increments to TCP initial sequence numbers: even after some time, you still know a great deal about the range of values that the ISN is expected to have. Randomness does work for the ISN problem if there is a truly random selection from the 2**32 possible values, but random increments are not enough.
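A quick simulation makes the ISN point concrete (a toy sketch; the increment size and connection count are illustrative, not taken from any real TCP stack). Summing many random increments produces a value tightly concentrated around the mean, so the attacker’s uncertainty ends up being a tiny fraction of the full 2**32 space:

```python
import random

random.seed(1)
STEPS = 1000      # connections observed since the baseline
MAX_INC = 2**16   # each ISN step adds a random increment in [1, MAX_INC)

# After STEPS connections the ISN is a sum of STEPS uniform increments,
# tightly concentrated around STEPS * MAX_INC / 2 (central limit theorem).
samples = [sum(random.randrange(1, MAX_INC) for _ in range(STEPS))
           for _ in range(200)]

spread = max(samples) - min(samples)
# The attacker's remaining uncertainty is far less than 1% of the 2**32 space:
print(spread / 2**32)
```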
Turning now to anonymity systems and the paper that George and I presented at the recent Sixth IEEE International Conference on Peer-to-Peer Computing (P2P2006). The initial design of Tarzan (as it was once called — see the link) used other participants in the system to relay traffic so as to achieve some anonymity. Unlike Tor, which uses a small number of servers, the idea was that all of the (hopefully) millions of users would provide relaying for each other. This would prevent traffic analysis attacks at the entry/exit nodes, because there would be far too many to consider monitoring them.
In the initial design (a later version was modified to avoid our attacks), presented at the First International Workshop on Peer-to-Peer Systems (IPTPS 02), the identities of the myriad participants were recorded in a Chord ring and random selections were made to find relay nodes. To prevent an eavesdropper knowing which packets were which, several hundred random dips into the Chord ring were to be made — and the handful of nodes to be used randomly selected from these. However, what our paper shows is that unless you learn the identities of more than 10% or so of the nodes, any triple of consecutive nodes on your packet’s path through the network is more than likely to be unique to you (no-one else who is also selecting at random will know that same set of three). That means that if the “known to you” nodes are monitored then a packet that arrives from a “known to you” node and departs to a “known to you” node can be identified as “belonging to you”; an unfortunate result for an anonymity system.
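The intuition behind triple-uniqueness can be sketched with a toy Monte-Carlo simulation (not from the paper — the network size, user count and 10% fraction are purely illustrative). If each user knows a random 10% of the nodes, any other given user knows all three relays of your triple with probability roughly 0.1³ = 0.001, so in a modest-sized network the triple is more than likely yours alone:

```python
import random

random.seed(0)
N_NODES = 10_000   # hypothetical network size
N_USERS = 100      # hypothetical user population
FRACTION = 0.1     # each user learns ~10% of the nodes via random dips

nodes = range(N_NODES)
known = [set(random.sample(nodes, int(N_NODES * FRACTION)))
         for _ in range(N_USERS)]

# User 0 picks a triple of consecutive relays from the nodes it knows
triple = random.sample(sorted(known[0]), 3)

# Each other user knows any one node with probability ~0.1, hence all three
# with probability ~0.001 -- so the expected number of other users who could
# also have chosen this triple is ~0.1, and it is more than likely unique.
others = sum(1 for k in known[1:] if all(n in k for n in triple))
print(others)
```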
Existing practical anonymity systems such as Mixmaster, Mixminion and Tor are not subject to this attack because participants learn about 100% of the nodes. However, as these systems get bigger this may have to change — and George and I wrote this paper (which also has some observations about the dangers of short-cuts for route reconstruction after failures) to remind the community that random isn’t always the best way of fixing security problems!