Posts filed under 'Cryptology'

Nov 20, '07

In my previous post, I discussed how I analyzed the recent attack on Light Blue Touchpaper. What I did not disclose was how the attacker gained access in the first place. It turned out to incorporate a zero-day exploit, which is why I haven’t mentioned it until now.

As a first step, the attacker exploited an SQL injection vulnerability. When I noticed the intrusion, I upgraded WordPress, then restored the database and files from off-server backups. WordPress 2.3.1 had been released less than a day before my upgrade and was supposed to fix this vulnerability, so I presumed I would be safe.

I was therefore surprised when the attacker broke in again the following day (and created himself an administrator account). After further investigation, I discovered that he had logged into the “admin” account — nobody knows the password for this account, because I set it to a long random string. Neither I nor the other administrators ever used that account, so it couldn’t have been XSS or another cookie-stealing attack. How was this possible?

From examining the WordPress authentication code, I discovered that the password hashing was backwards! While the attacker couldn’t have recovered the password from the hash stored in the database, by simply hashing that database entry a second time he could generate a valid admin cookie. On Monday I posted a vulnerability disclosure (assigned CVE-2007-6013) to the BugTraq and Full-Disclosure mailing lists, describing the problem in more detail.
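A minimal sketch of the flaw in Python (simplified: the real cookie also encodes the username and other fields, but the essential mistake is that the cookie is just a second MD5 of the value already sitting in the database):

```python
import hashlib

def md5_hex(data: str) -> str:
    return hashlib.md5(data.encode()).hexdigest()

# What the database stores -- and what the SQL injection leaked:
stored_hash = md5_hex("a long random password")

def cookie_valid(cookie: str, stored_hash: str) -> bool:
    # Simplified server-side check: it re-hashes the stored hash,
    # so knowledge of the hash alone suffices to authenticate.
    return cookie == md5_hex(stored_hash)

# The attacker never learns the password, yet mints a valid cookie:
forged_cookie = md5_hex(stored_hash)
assert cookie_valid(forged_cookie, stored_hash)
```

A correct design would make the cookie derivable only from the password itself (or a server-side secret), never from the stored hash alone.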

It is disappointing to see that people are still getting this type of thing wrong. In their 1978 paper, Morris and Thompson describe the importance of one-way hashing and password salting (neither of which WordPress does properly). The issue is currently being discussed on LWN.net and the wp-hackers mailing list; hopefully some progress will be made at getting it right this time around.

Oct 2, '07

When we want to check the freshness of cryptographically secured messages, we have to use monotonic counters, timestamps or random nonces. Each of these mechanisms increases the complexity of a system in a different way. Freshness based on counters seems to be the easiest to implement in the context of ad-hoc mesh wireless networks: there is no extra challenge message (containing a fresh random number) to spend power on, nor any need for precise time synchronisation. It sounds easy, but people in the real world are … creative. We have been working with TinyOS, an operating system designed for constrained hardware. TinyOS is quite a modular platform; even mesh networking is not part of the system’s core, but just one of the modules that can easily be replaced or left out altogether.
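To make the counter approach concrete, here is a toy sketch (the frame layout and the use of HMAC-SHA256 are my own illustrative choices; TinySec itself uses a CBC-MAC over an 802.15.4-style frame): the receiver accepts a frame only if the MAC verifies and the counter is strictly greater than the last one seen from that sender.

```python
import hmac, hashlib, struct

KEY = b"shared-link-key"          # assumed pre-shared, as in TinySec

def make_frame(counter: int, payload: bytes) -> bytes:
    # The counter travels in the clear; the MAC binds it to the payload.
    body = struct.pack(">I", counter) + payload
    tag = hmac.new(KEY, body, hashlib.sha256).digest()[:4]
    return body + tag

def accept(frame: bytes, last_counter: int):
    body, tag = frame[:-4], frame[-4:]
    expected = hmac.new(KEY, body, hashlib.sha256).digest()[:4]
    if not hmac.compare_digest(tag, expected):
        return None               # forged or corrupted frame
    (counter,) = struct.unpack(">I", body[:4])
    if counter <= last_counter:
        return None               # replay: not fresh
    return counter, body[4:]      # new high-water mark, plus the payload
```

Note that this needs no challenge round-trip and no time synchronisation; the only cost is a little per-sender state at the receiver (and some care when the counter wraps).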

Fig.: Structures of TinyOS and TinySec frames on top of 802.15.4, with all the counters. TinySec increases the length of the “data” field to store the initialisation vector. (more…)

Sep 30, '07

In a few hours’ time, Part III of the Regulation of Investigatory Powers Act 2000 will come into effect. The commencement order means that as of October 1st a section 49 notice can be served which requires that encrypted data be “put into an intelligible form” (what you and I might call “decrypted”). Extended forms of such a notice may, under the provisions of s51, require you to hand over your decryption key, and/or under s54 include a “no tipping off” provision.

If you fail to comply with a notice (or breach a tipping-off requirement by telling someone about it) then you will have committed an offence, for which the maximum penalty is two years’ imprisonment or a fine, or both. It’s five years for “tipping off”, and also five years (an amendment made by s15 of the Terrorism Act 2006) if the case relates to “national security”.

By convention, laws in the UK very seldom have retrospective effect: if you do something today, Parliament is very loath to pass a law tomorrow making your actions illegal. However, the offences in Part III relate to failing to obey a s49 notice, and that notice could be served on you tomorrow (or thereafter) even though the material was encrypted by you today (or before).

Potentially, therefore, the police could start demanding the putting into an intelligible form not only of information that they seize in a raid tomorrow morning, but also of material that they seized weeks, months or years ago. In the 1995 Smith case (part of Operation Starburst), the defendant only received a suspended sentence because the bulk of the material was encrypted. In this particular example the police may be constrained, by double jeopardy or by the time that has elapsed, from serving a notice on Mr Smith; but there’s nothing in RIP itself, or the accompanying Code of Practice, to prevent them serving a s49 notice on more recently seized encrypted material if they deem it necessary and proportionate.

In fact, they might even be nipping round to Jack Straw’s house to demand a decryption key — as this stunt from 1999 makes possible (the wording of the predecessor bill then on the table being rather more inane than what RIP was eventually amended to say).

There are some defences in the statute to failing to comply with a notice — one of which is that you can claim to have forgotten the decryption key (in practice, the passphrase under which the key is stored). In such a case the prosecution (the burden of proof was amended during the passage of the Bill) must show beyond a reasonable doubt that you have not forgotten it. Since they can’t read minds, the expectation must be that they would attempt to show regular usage of the passphrase and invite the jury to conclude that the forgetting has been faked — and this might be hard to manage if a hard disk has been sitting in a police evidence store for over a decade.

However, if you’re still using such a passphrase and still have access to the disk, and if the contents are going to incriminate you, then perhaps a sledgehammer might be a suitable investment.

Me? I set up my alibi long ago :)

Jul 13, '07

Many designs for trustworthy electronic elections use cryptography to assure participants that the result is accurate. However, it is a system’s software engineering that ensures a result is declared at all. Both good software engineering and cryptography are thus necessary, but so far cryptography has drawn more attention. In fact, the software engineering aspects could be just as challenging, because election systems have a number of properties which make them almost a pathological case for robust design, implementation, testing and deployment.

Currently deployed systems are lacking in both software robustness and cryptographic assurance — as evidenced by the English electronic election fiasco, in which some results were late and other electronic counts were abandoned, due to system failures resulting from poor software engineering. However, even where a result was returned, the black-box nature of auditless electronic elections brought the accuracy of the count into doubt. In the few cases where cryptography was used, it was poorly explained and didn’t help verify the result either.

End-to-end cryptographically assured elections have generated considerable research interest, and the resulting systems, such as Punchscan and Prêt à Voter, allow voters to verify the result while maintaining their privacy (provided they understand the maths, that is — the rest of us will have to trust the cryptographers). These systems will permit an erroneous result to be detected after the election, whether caused by maliciousness or by more mundane software flaws. However, should this occur, or should no result be returned at all, the election may need to fall back on paper backups or even be re-run — a highly disruptive and expensive failure.

Good software engineering is necessary but, in the case of voting systems, may be especially difficult to achieve. In fact, such systems have more in common with the software behind rocket launches than with conventional business productivity software. We should thus expect correspondingly high costs and accept that, despite all this extra effort, the occasional catastrophe will be inevitable. The remainder of this post will discuss why I think this is the case, and how manually counted paper ballots circumvent many of these difficulties.

(more…)

Jul 11, '07

For about thirty years now, security researchers have been talking about using digital signatures in court. Thousands of academic papers have had punchlines like “the judge then raises X to the power Y, finds it’s equal to Z, and sends Bob to jail”. So far, this has been pleasant speculation.

Now the rubber starts to hit the road. Since 2006, trucks in Europe have been using digital tachographs. Tachographs record a vehicle’s speed history and help enforce restrictions on drivers’ working hours. For many years they used circular waxed-paper charts, which were accepted in court as evidence just like any other paper record. However, paper charts are now being replaced with smartcards. Each driver has a card that records 28 days of infringement history, protected by digital signatures. So we’ve now got the first widely deployed system in which digital signatures are routinely adduced in evidence. The signed records are being produced to support prosecutions for working over-long hours, for speeding, for tachograph tampering, and sundry other villainy.

So do magistrates really raise X to the power Y, find it’s equal to Z, and send Eddie off to jail? Not according to enforcement folks I’ve spoken to. Apparently judges find digital signatures too “difficult” as they’re all in hex. The police, always eager to please, have resolved the problem by applying standard procedures for “securing” digital evidence. When they raid a dodgy trucking company, they image the PC’s disk drive and take copies on DVDs that are sealed in evidence bags. One gets given to the defence and one kept for appeal. The paper logs documenting the procedure are available for Their Worships to inspect. Everyone’s happy, and truckers duly get fined.

In fact the trucking companies are very happy. I understand that 20% of British trucks now use digital tachographs, well ahead of expectations. Perhaps this is not uncorrelated with the fact that digital tachographs keep much less detailed data than could be coaxed out of the old paper charts. Just remember, you read it here first.

Dec 12, '06

The 23rd Chaos Communication Congress will be held later this month in Berlin, Germany, on 27–30 December. I will be attending to give a talk on Hot or Not: Revealing Hidden Services by their Clock Skew. Another contributor to this blog, George Danezis, will be talking on An Introduction to Traffic Analysis.

This will be my third time speaking at the CCC (I previously talked on Hidden Data in Internet Published Documents and The Convergence of Anti-Counterfeiting and Computer Security in 2004, then Covert channels in TCP/IP: attack and defence in 2005) and I’ve always had a great time, but this year looks to be the best yet. Here are a few highlights from the draft programme, although I am sure there are many great talks I have missed.

It’s looking like a great line-up, so I hope many of you can make it. See you there!

Sep 23, '06

It’s common to think of random numbers as an essential building block in security systems. Cryptographic session keys are chosen at random and then shared with the remote party. Security protocols use “nonces” for “freshness”. In addition, randomness can slow down information-gathering attacks, although here it is seldom a panacea. However, as George Danezis and I recently explained in “Route Fingerprinting in Anonymous Communications”, randomness can lead to uniqueness — exactly the property you don’t want in an anonymity system.
(more…)
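As a toy illustration of that uniqueness problem (the parameters here are mine, not from the paper): if each of 100,000 clients independently picks three nodes out of 1,000, there are about 1.66 × 10^8 possible choices, so almost every client ends up with a set that nobody else shares, and that set acts as a fingerprint an observer can track.

```python
import random
from collections import Counter
from math import comb

nodes, picks, clients = 1_000, 3, 100_000

print(comb(nodes, picks))  # ~1.66e8 possible three-node subsets

counts = Counter(
    frozenset(random.sample(range(nodes), picks)) for _ in range(clients)
)
unique = sum(1 for c in counts.values() if c == 1)
print(f"{unique} of {clients} clients chose a set nobody else chose")
```

Run it and typically well over 99.9% of the clients turn out to be unique.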

Aug 25, '06

My book on Security Engineering is now available online for free download here.

I have two main reasons. First, I want to reach the widest possible audience, especially among poor students. Second, I am a pragmatic libertarian on free culture and free software issues; I believe many publishers (especially of music and software) are too defensive of copyright. I don’t expect to lose money by making this book available for free: more people will read it, and those of you who find it useful will hopefully buy a copy. After all, a proper book is half the size and weight of 300-odd sheets of laser-printed paper in a ring binder.

I’d been discussing this with my publishers for a while. They have been persuaded by the experience of authors like David MacKay, who found that putting his excellent book on coding theory online actually helped its sales. So book publishers are now learning that freedom and profit are not really in conflict; how long will it take the music industry?

Aug 11, '06

At the recent HOPE conference, the “secure instant messaging (IM) client” ScatterChat was released in a blaze of publicity. It was designed by J. Salvatore Testa II to allow human rights and democracy activists to communicate securely while under surveillance. It uses cryptography to protect confidentiality and authenticity, integrates Tor to provide anonymity, and comes bundled with an easy-to-use interface. Sadly, not everything is as good as it sounds.

When I first started supervising undergraduates at Cambridge, Richard Clayton explained that the real purpose of the security course was to teach students not to invent the following (in increasing order of importance): protocols, hash functions, block ciphers and modes of operation. The academic literature is scattered with the bones of flawed proposals for all of these, despite their having been designed by very capable and experienced cryptographers. Instead, wherever possible, implementors should use peer-reviewed building blocks: normally there is already a solution which can do the job, and which has withstood more analysis and so is more likely to be secure.

Unfortunately, ScatterChat uses both a custom protocol and a custom mode of operation, neither of which is as secure as hoped. While looking at the developer documentation I found a few problems and reported them to the author. As always, there is the question of whether such vulnerabilities should be disclosed. It is likely that these problems would be discovered eventually, so it is better for them to be caught early, letting users take precautions, than for attackers who independently find the weaknesses to exploit them with impunity. I also hope this will serve as a cautionary tale, reminding software designers that cryptography and protocol design are fraught with difficulties, and so are better managed through open peer review.

The most serious of the three vulnerabilities was published today in an advisory (technical version), assigned CVE-2006-4021, from the ScatterChat author, but I also found two lesser ones. The three vulnerabilities are as follows (in increasing order of severity): (more…)

Jul 13, '06

At the rump session of PET 2006 I presented a simple idea on how to defend against targeted attacks on software distribution. There were some misunderstandings after my 5-minute description, so I thought it would help to put the idea down in writing; I also hope to attract more discussion and a larger audience.

Consider a security-critical open source application; here I will use the example of Tor. The source code is signed with the developer’s private key, and users have the ability to verify the signature and build the application with a trustworthy compiler. I will also assume that if a backdoor is introduced into a deployed version, someone will notice, following Linus’s law — “given enough eyeballs, all bugs are shallow”. These assumptions are debatable; for example, the threat of compiler backdoors has been known for some time, and subtle security vulnerabilities are hard to find. However, a backdoor in the Linux kernel was discovered, and the anonymous reporter of a flaw in Tor’s Diffie-Hellman implementation probably found it by examining the source, so I think my assumptions are at least partially valid.

The developer’s signature protects against an attacker mounting a man-in-the-middle attack and modifying a particular user’s download. If the developer’s key (or the developer) is compromised then a backdoor could be inserted but, from the above assumptions, if this version is widely distributed someone will discover the flaw and raise the alarm. However, there is no mechanism that protects against an attacker with access to the developer’s key singling out a user and adding a backdoor only to the version that user downloads. Even if that user is diligent, the signature will check out fine. As the backdoor is present in only one version, the “many eyeballs” do not help. To defend against this attack, a user needs to find out whether the version they downloaded is the same as the one other people receive and have the opportunity to verify.

My proposal is that the application build process should first calculate the hash of the source code, embed it in the binary, and make it remotely accessible. Tor already has a mechanism for the last step: each server publishes a directory descriptor, which could include this hash. Multiple directory servers collect these and allow them to be downloaded with a web browser. Then, when a user downloads the Tor source code, he can use the hashing utility provided by his operating system to check that the package he has matches a commonly deployed one.

If a particular version claims to have been deployed for some time, but no server displays a matching hash, then the user knows that there is a problem. The verification must be performed manually for now, but an operating system provider could produce a trusted tool to automate it (a sketch follows below). Note that server operators need perform no extra work (the build process is automated), and only users who believe they may be targeted need perform the extra verification steps.
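Here is a sketch of what such a tool might do, assuming a hypothetical “source-hash” field in the directory descriptors (today’s descriptors carry no such field) and SHA-256 as the hash:

```python
import hashlib

def sha256_file(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def widely_deployed(tarball_path: str, published_hashes: list[str],
                    threshold: int = 10) -> bool:
    # published_hashes: "source-hash" values collected from many servers'
    # descriptors, ideally fetched from several directory servers over
    # independent network paths, so a single server cannot lie convincingly.
    mine = sha256_file(tarball_path)
    return sum(h.strip() == mine for h in published_hashes) >= threshold
```

The threshold is a policy choice: it should be high enough that an attacker would have to control many servers before they could vouch for a backdoored version.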

This might seem similar to the remote-attestation feature of Trusted Computing. There, computers are fitted with special hardware, a Trusted Platform Module (TPM), which can produce a hard-to-forge proof of the software currently running. Because it is implemented in tamper-resistant hardware, even the owner of the computer cannot produce a fake statement without breaking the TPM’s defences. This feature is needed for applications including DRM, but comes with added risks.

The use of TPMs to protect anonymity networks has been suggested, but the important difference between my proposal and TPM remote attestation is that I assume most servers are honest and so will not lie about the software they are running. They have no incentive to do so unless they want to harm the anonymity of the users; and if enough servers are malicious, there is no need to modify the client users are running, as the network is broken already. So no special hardware is needed to implement my proposal, although if it is present it could be used.

I hope this makes my scheme clearer, and I am happy to receive comments and suggestions. I am particularly interested in whether there are any flaws in the design, whether the threat model is reasonable, and whether the effort in deployment and use is worth the increased resistance to attack.

