Category Archives: Cryptology

Cryptographic primitives, cryptanalysis

Protocol design is hard — Flaws in ScatterChat

At the recent HOPE conference, ScatterChat, a “secure instant messaging (IM) client”, was released in a blaze of publicity. It was designed by J. Salvatore Testa II to allow human rights and democracy activists to communicate securely while under surveillance. It uses cryptography to protect confidentiality and authenticity, integrates Tor to provide anonymity, and is bundled with an easy-to-use interface. Sadly, not everything is as good as it sounds.

When I first started supervising undergraduates at Cambridge, Richard Clayton explained that the real purpose of the security course was to teach students not to invent the following (in increasing order of importance): protocols, hash functions, block ciphers and modes of operation. The academic literature is scattered with the bones of flawed proposals for all of these, despite their being designed by very capable and experienced cryptographers. Instead, wherever possible, implementors should use peer-reviewed building blocks, as normally there is already a solution which can do the job, has withstood more analysis, and so is more likely to be secure.

Unfortunately, ScatterChat uses both a custom protocol and a custom mode of operation, neither of which is as secure as hoped. While looking at the developer documentation I found a few problems and reported them to the author. As always, there is the question of whether such vulnerabilities should be disclosed. These problems would likely have been discovered eventually, so it is better for them to be caught early, allowing users to take precautions, than for attackers who independently find the weaknesses to exploit them with impunity. I also hope this will serve as a cautionary tale, reminding software designers that cryptography and protocol design are fraught with difficulties and so are better managed through open peer review.

The most serious of the three vulnerabilities was published today in an advisory (with an accompanying technical version) from the ScatterChat author, and has been assigned CVE-2006-4021; I also found two lesser ones. The three vulnerabilities are as follows (in increasing order of severity):

Protecting software distribution with a cryptographic build process

At the rump session of PET 2006 I presented a simple idea on how to defend against targeted attacks on software distribution. There were some misunderstandings after my 5 minute description, so I thought it would help to put the idea down in writing; I also hope to attract more discussion and a larger audience.

Consider a security-critical open source application; here I will use the example of Tor. The source code is signed with the developer’s private key, and users have the ability to verify the signature and build the application with a trustworthy compiler. I will also assume that if a backdoor is introduced in a deployed version, someone will notice, following from Linus’s law — “given enough eyeballs, all bugs are shallow”. These assumptions are debatable; for example, the threat of compiler backdoors has been known for some time, and subtle security vulnerabilities are hard to find. However, an attempted backdoor in the Linux kernel was discovered, and the anonymous reporter of a flaw in Tor’s Diffie-Hellman implementation probably found it through examining the source, so I think my assumptions are at least partially valid.

The developer’s signature protects against an attacker mounting a man-in-the-middle attack and modifying a particular user’s download. If the developer’s key (or the developer) is compromised then a backdoor could be inserted, but from the above assumptions, if this version is widely distributed, someone will discover the flaw and raise the alarm. However, there is no mechanism that protects against an attacker with access to the developer’s key singling out a user and adding a backdoor to only the version they download. Even if that user is diligent, the signature will check out fine. As the backdoor is only present in one version, the “many eyeballs” do not help. To defend against this attack, a user needs to find out if the version they download is the same as what other people receive and have the opportunity to verify.

My proposal is that the application build process should first calculate the hash of the source code, embed it in the binary and make it remotely accessible. Tor already has a mechanism for the last step, because each server publishes a directory descriptor, which could include this hash. Multiple directory servers collect these and allow them to be downloaded with a web browser. Then, when a user downloads the Tor source code, he can use the operating system’s hashing utility to check that the package he has matches a commonly deployed one.
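The build step above can be sketched in a few lines; this is a minimal illustration, not Tor’s actual build system, and the function and descriptor field names are my own assumptions:

```python
import hashlib

def source_hash(tarball_path):
    # Hash the exact source archive the binary was built from.
    with open(tarball_path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def add_hash_to_descriptor(descriptor, tarball_path):
    # The server publishes this descriptor to the directory servers,
    # making the hash of its build remotely retrievable alongside
    # the other information it already advertises.
    descriptor["source-hash"] = source_hash(tarball_path)
    return descriptor
```

A user can then recompute the hash of their own download with a standard utility and compare it against the hashes the directory servers publish.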

If a particular version claims to have been deployed for some time, but no server displays a matching hash, then the user knows that there is a problem. The verification must be performed manually for now, but an operating system provider could produce a trusted tool to automate it. Note that server operators need to perform no extra work (the build process is automated) and only users who believe they may be targeted need to perform the extra verification steps.
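The check itself is then a simple set membership test; this sketch assumes the advertised hashes have already been collected from the directory servers (the collection step is not shown), and the function name is illustrative:

```python
import hashlib

def matches_deployed(tarball_path, published_hashes):
    # published_hashes: the source hashes advertised by running
    # servers, gathered from the directory servers beforehand.
    with open(tarball_path, "rb") as f:
        local = hashlib.sha256(f.read()).hexdigest()
    return local in published_hashes
```

A False result for a version that has supposedly been deployed for some time is exactly the alarm signal described above.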

This might seem similar to the remote-attestation feature of Trusted Computing. There, computers are fitted with special hardware, a Trusted Platform Module (TPM), which can produce a hard-to-forge proof of the software currently running. Because it is implemented in tamper-resistant hardware, even the owner of the computer cannot produce a fake statement without breaking the TPM’s defences. This feature is needed for applications such as DRM, but comes with added risks.

The use of TPMs to protect anonymity networks has been suggested, but the important difference between my proposal and TPM remote-attestation is that I assume most servers are honest, so they will not lie about the software they are running. They have no incentive to do so unless they want to harm the anonymity of the users, and if enough servers are malicious then there is no need to modify the client that users are running, as the network is broken already. So no special hardware is needed to implement my proposal, although if it is present, it could be used.

I hope this makes my scheme clearer and I am happy to receive comments and suggestions. I am particularly interested in whether there are any flaws in the design, whether the threat model is reasonable and if the effort in deployment and use is worth the increased resistance to attack.

Oracle attack on WordPress

This post describes the second of two vulnerabilities I found in WordPress. The first, an XSS vulnerability, was described last week. While the vulnerability discussed here is applicable in fewer cases than the previous one, it is an example of a comparatively rare class, oracle attacks, so I think it merits further exposition.

An oracle attack is one where an attacker can abuse a facility provided by a system to gain unauthorized access to protected information. The term originates in cryptology, and such attacks still crop up regularly, for example in banking security devices and protocols. The occurrence of an oracle attack in WordPress illustrates the need for a better understanding of cryptography, even by the authors of applications not conventionally considered to be cryptographic software. Also, more forgiving primitives and better robustness principles could reduce the risk of future weaknesses.

The vulnerability is a variant of the ‘cache’ shell injection bug reported by rgodm. It is caused by an unfortunate series of design choices by the WordPress team, leading to arbitrary PHP execution. The WordPress cache stores commonly accessed information from the database, such as user profile data, in files for faster retrieval. Despite being needed only by the server, these files are still accessible from the web, which is commonly considered bad practice. To prevent the content being read remotely, the data is placed in .php files, commented out with //. Thus, when executed by the web server in response to a remote query, they return an empty page.
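The commenting trick can be sketched as follows; the helper name and exact file layout are my assumptions for illustration, not WordPress’s actual code. Every cached row is prefixed with //, so executing the file as PHP produces no output:

```python
def make_cache_file(rows):
    # Each cached row becomes a PHP comment; a remote request for
    # this file therefore executes nothing and returns an empty body.
    commented = "\n".join("// " + row for row in rows)
    return "<?php\n" + commented + "\n?>"

print(make_cache_file(["display_name|Alice", "email|alice@example.com"]))
```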

However, putting user-controlled data in executable files is an inherently risky choice. If the attacker can escape from the comment then arbitrary PHP can be executed. rgodm’s shell injection bug does this by inserting a newline into the display name. Now all the attacker must do is guess the name of the .php file which stores his cached profile information, and invoke it to run the injected PHP. WordPress puts an index.php in the cache directory to suppress directory indexing, and filenames are generated as MD5(username || DB_PASSWORD) || “.php”, which creates a hard-to-guess name. The original bug report suggested brute forcing DB_PASSWORD, the MySQL authentication password, but the oracle attack described here succeeds even if a strong password is chosen.
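Both the filename scheme and the comment escape can be illustrated in a short sketch; the helper names and the example DB_PASSWORD value are assumptions for illustration only:

```python
import hashlib

DB_PASSWORD = "s3cret"  # illustrative value, not a real password

def cache_filename(username):
    # Cache files are named MD5(username || DB_PASSWORD) || ".php",
    # so guessing one requires knowing, or brute forcing, DB_PASSWORD.
    return hashlib.md5((username + DB_PASSWORD).encode()).hexdigest() + ".php"

def cache_line(display_name):
    # The cached field is written behind a "//" comment prefix...
    return "// display_name|" + display_name

# ...but a newline embedded in a user-chosen display name escapes the
# comment, leaving the second line as live PHP when the file is executed:
injected = cache_line("Alice\nsystem($_GET['cmd']);")
print(injected)
```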
