Random isn't always useful

It’s common to think of random numbers as an essential building block in security systems. Cryptographic session keys are chosen at random, then shared with the remote party. Security protocols use “nonces” for “freshness”. In addition, randomness can slow down information-gathering attacks, although here it is seldom a panacea. However, as George Danezis and I recently explained in “Route Fingerprinting in Anonymous Communications”, randomness can lead to uniqueness — exactly the property you don’t want in an anonymity system.

Cryptography makes extensive use of randomness. The unbreakable Vernam cipher (XORing the message with a one-time pad of the same length) is well known — as is the problem of securely transporting the random values in the one-time pad to the recipient. Practical systems use a cryptographically secure pseudo-random number generator, typically implemented with a block cipher — vesting the security properties in a randomly chosen key (a rather smaller item to transmit to the receiver — and usually known as a “session key”). Cryptographic systems fail when the random numbers are re-used (as happened with the Venona material) or when the generation system is weak, as was the case with an early version of Netscape. (BTW: if you want to make some random numbers for crypto then you’d hardly do better than to start with Peter Gutmann’s “Chapter Six” paper.)
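
For illustration, here is a minimal sketch of such a generator, assuming AES in CTR mode via the pyca/cryptography package (the key and nonce handling is purely illustrative):

    # Minimal sketch: a pseudo-random generator built from a block cipher
    # (AES in CTR mode); all the unpredictability is vested in the key.
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    class BlockCipherPRNG:
        def __init__(self, key: bytes):
            # The nonce merely ensures the keystream is never reused with this key.
            cipher = Cipher(algorithms.AES(key), modes.CTR(os.urandom(16)))
            self._keystream = cipher.encryptor()

        def random_bytes(self, n: int) -> bytes:
            # Encrypting zeroes in CTR mode returns the raw keystream.
            return self._keystream.update(b"\x00" * n)

    prng = BlockCipherPRNG(os.urandom(32))   # in practice, e.g. a shared session key
    print(prng.random_bytes(16).hex())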

Security protocols regularly include nonces: for a simple example, Needham-Schroeder starts with Alice sending a message to the server of the form (Alice, Bob, Nonce) and Bob uses a second nonce to confirm that he is sharing a key with Alice. There is often a requirement here for uniqueness as well as unguessability — though it’s usual just to pick values from a very large set of possibilities and assume that clashes never occur.
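
As a back-of-the-envelope check of the “clashes never occur” assumption, the birthday bound gives the approximate collision probability for randomly chosen nonces (the counts below are purely illustrative):

    # Birthday-bound approximation: p ~ k*(k-1) / (2 * 2**bits) for k random
    # nonces drawn from a set of 2**bits values.
    def collision_probability(k: int, bits: int) -> float:
        return k * (k - 1) / (2 * 2**bits)

    # A billion 128-bit nonces: a clash is still wildly improbable.
    print(collision_probability(10**9, 128))   # ~1.5e-21

    # The same number of 64-bit nonces: a few percent chance of a clash.
    print(collision_probability(10**9, 64))    # ~0.027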

So far, randomness seems like a good thing.

It’s still helpful when one turns to information leakage… but not as helpful as is expected by those who have just studied cryptography and security protocols. For a close-to-home example, Steven Murdoch’s “hot or not” scheme changes the temperature of a machine reached indirectly via an anonymous communication system, and then detects which machine, when reached directly, shows a matching change in clock skew.

One of the suggested countermeasures in comments on this blog post is the addition of random changes to the clock skew. This will certainly slow down the attack, but it does not eliminate it, since the random changes will — over many measurements — just cancel themselves out. This is not unrelated to Timothy Newsham’s observations about adding random increments to TCP initial sequence numbers: even after some time, you still know a great deal about the range of values that the ISN is expected to have. Randomness does work for the ISN problem if there is a truly random selection from the 2**32 possible values, but random increments are not enough.
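
A quick simulation (with made-up parameters for the increment size and number of connections) illustrates the difference:

    # Random increments vs. truly random selection for TCP ISNs.
    import random
    import statistics

    def isn_after_random_increments(steps: int, max_increment: int = 2**15) -> int:
        # Start from a known ISN and add a random increment per connection.
        isn = 0
        for _ in range(steps):
            isn = (isn + random.randrange(max_increment)) % 2**32
        return isn

    # After 1000 connections the ISN clusters around steps * mean increment,
    # with a spread that only grows like sqrt(steps).
    walked = [isn_after_random_increments(1000) for _ in range(200)]
    print(statistics.mean(walked), statistics.stdev(walked))   # ~1.6e7, ~3e5

    # Truly random selection shows no such concentration.
    uniform = [random.randrange(2**32) for _ in range(200)]
    print(statistics.stdev(uniform))                           # ~1.2e9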

Turning now to anonymity systems and the paper that George and I presented at the recent Sixth IEEE International Conference on Peer-to-Peer Computing (P2P2006). The initial design of Tarzan (as it was once called — see the link) used other participants in the system to relay their traffic so as to achieve some anonymity. Unlike Tor, which uses a small number of servers, the idea was that all of the (hopefully) millions of users would provide relaying for each other. This would prevent traffic analysis attacks at the entry/exit nodes, because there would be far too many of them to consider monitoring.

In the initial design (a later version was modified to avoid our attacks), presented at the First International Workshop on Peer-to-Peer Systems (IPTPS 02), the identities of the myriad participants were recorded in a Chord ring and random selections were made to find relay nodes. To prevent an eavesdropper knowing which packets were which, several hundred random dips into the Chord ring were to be made — and the handful of nodes to be used were randomly selected from these. However, what our paper shows is that unless you learn the identity of more than 10% or so of the nodes, any triple of consecutive nodes on your packet’s path through the network is more than likely to be unique (no-one else who is also selecting at random will know that same set of three). That means that if the “known to you” nodes are monitored, then a packet that comes from a “known to you” node and departs to a “known to you” node can be identified as “belonging to you”; an unfortunate result for an anonymity system.
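
A toy calculation, much simpler than the analysis in the paper (the user count and fractions here are made up, and the break-even point depends on details the paper works out properly), gives a feel for why a triple of known nodes tends to point at one user:

    # Toy model: each of U users independently learns a random fraction f of
    # the N nodes; another user can only produce a given triple if all three
    # of its nodes are in their sample, which happens with probability ~ f**3.
    def expected_other_knowers(users: int, known_fraction: float) -> float:
        return (users - 1) * known_fraction ** 3

    # A million users each knowing 0.1% of the nodes: essentially no-one else
    # knows any particular triple, so it fingerprints its owner.
    print(expected_other_knowers(1_000_000, 0.001))   # ~0.001

    # If every user knew 10% of the nodes, each triple would be shared widely.
    print(expected_other_knowers(1_000_000, 0.1))     # ~1000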

Existing practical anonymity systems such as MixMaster, MixMinion and Tor are not subject to this attack because participants learn the identities of essentially 100% of the nodes. However, as these systems get bigger this may have to change — and George and I wrote this paper (which also has some observations about the dangers of short-cuts for route reconstruction after failures) to remind the community that random isn’t always the best way of fixing security problems!

2 thoughts on “Random isn't always useful”

  1. @Richard

    You say above, about my posting to Steven Murdoch’s “hot or not”,

    “One of the suggested countermeasures in comments on this blog post is the addition of random changes to the clock skew. This will certainly slow down the attack, but it does not eliminate it since the random changes will — over many measurements — just cancel themselves out.”

    You are actually imposing several constraints on the nature of the randomness which I most certainly did not impose in my posting to “hot or not”:

    1, When you say “over many measurements” you are imposing two assumptions on the data set used for the random selection:

    A, it is finite
    B, it is invariant

    2. Also, when you say “just cancel themselves out” you are making the assumption that you are using a process whereby what goes up goes down by the same amount. That is to say that the data set is:

    A, Selected with uniform probability
    B, that the data set has values that are equal and opposite after normalisation (i.e. they cancel out).

    These assumptions are indicative of a PRNG with fixed state size and without external influence….

    Now, what was it you said above that prefaced your comments on my post? Ah yes:

    “but not as helpful as is expected by those who have just studied cryptography and security protocols.”

    I could recommend you have a look at the first sentence of the introduction of Peter Gutmann’s “Chapter Six” paper you link to above. To save time I have repeated it here:

    “The best means of obtaining unpredictable random numbers is by measuring physical phenomena such as radioactive decay, thermal noise in semiconductors, sound samples taken in a noisy environment, and even digitised images of a lava lamp”

    Although I question the use of sound samples (for many reasons), the above are not constrained by your assumptions.

    Also, at the time Peter (probably) wrote the paper, his second sentence was also true:

    “However few computers (or users) have access to the kind of specialised hardware required for these sources, and must rely on other means of obtaining random data.”

    Since then time has moved on, and Intel amongst others are now including normalised thermal noise sources in their CPUs and peripheral chips. Are they reliable as RNGs? Well, time will tell…

    Having designed a few secure random number generators in my time, I can tell you for free that it is not easy, especially when trying to keep the price down. Oh, and trying to obscure the output using hashing or cipher functions does not make them any stronger: the entropy still stays the same (though it can help share the entropy around a bit 😉). Worse, it makes it difficult to detect when there are problems with the physical source (which happens way more often than you would think).

    But even assuming that I decided to use an insecure PRNG, such as an LFSR that makes a selection (with almost uniform probability) from an array of integers (the data set), it would still work, for a couple of reasons:

    1, I apply a bias to the data set I make my random selections from.

    This bias is designed to be opposite to that of the clock skew, therefore it minimises it to an arbitrarily small amount (how small depends on the size of the LFSR and the XTAL correction factor).

    2, I monitor the clock skew against a reference and adjust the data set appropriately.

    This effectively decouples the clock skew from the computer’s environment and fixes it to that of an external reference (which could be an atomic clock on the network).

    As it happens (see DDS / Fractional-N / Delta-Sigma frequency synthesisers), it is actually very, very easy to change the output frequency of a clock source in steps smaller than the resolution of the very best frequency standards (i.e. 1 part in 10^15 for fast pulsar measurements); it needs an accumulator of only around fifty bits.
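
    To put a number on the “around fifty bits”: a one-LSB change to the tuning word of a b-bit phase accumulator shifts the output frequency by a fraction 2^-b of the clock:

        # Fractional frequency resolution of a DDS-style phase accumulator:
        # a 1 LSB tuning-word change moves the output by f_clock / 2**bits.
        for bits in (32, 48, 50):
            print(f"{bits} bits -> fractional step of {2**-bits:.1e}")
        # 50 bits gives about 8.9e-16, comfortably below 1 part in 10**15.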

    You can also implement a goodly chunk of such a system in a very, very small software function (think a modified Mitchell-Moore generator [1] if you want to be overly complex), called by a timer interrupt on the main CPU, that drives a cheap D-to-A converter, through a low-pass filter, into a variable-capacitance diode that changes the loading capacitance on the crystal that the CPU clock uses. The size of the D-to-A in bits does not really matter: you can get arbitrary precision by oversampling a 1-bit D-to-A, it just means you have to run it very, very fast 😉 However, you can get 24-bit audio D-to-As now for around a couple of USD; they are the same as those on a reasonable-quality sound card.
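
    A sketch of the sort of routine being described (purely illustrative: the scale factor, DAC width and skew measurement are placeholders, not a real design):

        # Illustrative sketch: a Mitchell-Moore additive generator [1] supplies
        # random dither, a measured skew term supplies the opposing bias, and
        # the sum is the word loaded into a hypothetical DAC that pulls the
        # crystal via a varicap diode.
        import os

        SKEW_TO_DAC = 1000    # assumed scale factor: DAC counts per ppm of skew
        DAC_BITS = 24         # e.g. a cheap 24-bit audio D-to-A

        class MitchellMoore:
            # Additive lagged-Fibonacci generator: X[n] = (X[n-24] + X[n-55]) mod 2**32
            def __init__(self):
                self.state = [int.from_bytes(os.urandom(4), "big") for _ in range(55)]
                self.j, self.k = 23, 54               # the 24 and 55 lags

            def next(self) -> int:
                x = (self.state[self.j] + self.state[self.k]) % 2**32
                self.state[self.k] = x
                self.j = (self.j - 1) % 55
                self.k = (self.k - 1) % 55
                return x

        def timer_isr(prng: MitchellMoore, measured_skew_ppm: float) -> int:
            # One timer tick: combine zero-mean dither with a bias opposing the
            # measured skew, and return the word to write to the DAC register.
            dither = (prng.next() >> 20) - 2048           # ~12 bits, zero mean
            bias = -int(measured_skew_ppm * SKEW_TO_DAC)  # opposes the skew
            midscale = 1 << (DAC_BITS - 1)
            return max(0, min((1 << DAC_BITS) - 1, midscale + bias + dither))

        print(timer_isr(MitchellMoore(), measured_skew_ppm=3.2))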

    Provided the correction (however it is done) happens in less than one half of the “tick time” of the network clock, you will not be able to detect it (see how swallow counters work in synthesisers and invert the idea, i.e. replace the divide-by-N/N+1 counter with the error-correction factor on the XTAL).

    So, as I said, you can use spread-spectrum type techniques (which is what a Delta-Sigma synth uses) to reduce the error below the point at which you can actually measure it.

    If you want me to design you the hardware to prove it, all you need to do is pay the usual consultancy rates and sign an NDA. I can also manufacture them for you (in the Far East) at competitive rates. Alternatively, if you can come up with a convincing business case, I can introduce you to people who will manufacture motherboards with the hardware built right in (basically they would probably extend the EMC spread-spectrum circuitry already included in modern motherboards).

    Oh, send my regards to Ross Anderson (Cambridge) and Bart Preneel (Leuven); it’s been quite a while since I last spoke to them, but they probably remember me 😉

    [1] See Algorithm A in section 3.2.2 of Vol 2 of Knuth’s “The Art of Computer Programming”

  2. On a more pertinent note to your paper, there is an upside and a downside to having a mix network with millions of available mix machines, and a reason why it might be desirable to use only a subset of (trusted) machines.

    One of the assumptions made by many people with regard to mix networks is that all the machines are equal peers and that the network is essentially flat… The algorithms they use do not take a number of things into account; two that spring to mind are regionality and choke points, both of which can easily have a detrimental effect on the security of such peer systems.

    Choke points, such as the “Great Firewall of China”, effectively divide the Internet up into two (or more) regions: those on the Chinese side and those in the rest of the world. Effectively this means that a mix network could (possibly) work without problem in either part. However, it would not work through the choke point if the organisation that controlled it decided to interfere with the traffic in some way (drop packets, as in your paper).

    The resulting regionality has the above downside as well as an upside. If one makes the (not invalid) assumption that geo-political regions of differing views are unlikely to collaborate, then deliberately using mix points in two or more regions may actually increase security, providing the end points for the traffic are not in those regions.
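
    As a toy sketch of that idea (the region labels and relay structure below are entirely made up, and this glosses over everything a real path-selection design would need):

        # Toy region-aware relay selection: the chosen relays must span at
        # least two regions, and none may share a region with either endpoint.
        import random
        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Relay:
            name: str
            region: str          # coarse geo-political label, e.g. "cn", "eu", "us"

        def pick_path(relays, src_region, dst_region, hops=3, attempts=1000):
            candidates = [r for r in relays if r.region not in (src_region, dst_region)]
            for _ in range(attempts):
                path = random.sample(candidates, hops)
                if len({r.region for r in path}) >= 2:   # span at least two regions
                    return path
            raise RuntimeError("no suitable path found")

        relays = [Relay(f"r{i}", reg) for i, reg in
                  enumerate(["cn", "eu", "us", "eu", "us", "cn", "eu"])]
        print([r.name for r in pick_path(relays, src_region="us", dst_region="us")])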

    Not taking choke points and the resulting regions into account in a routing plan through a mix network will, in most cases, not improve security.

    Interestingly, most noise on the Internet about choke points appears to be from the legal and social points of view, even from technically biased writers. Amongst others, Dan Gillmor has made comments about choke points in society in a more general sense:

    http://www.landfield.com/isn/mail-archive/2002/Oct/0050.html
