Tor on Android

Andrew Rice and I ran a ten-week internship programme for Cambridge undergraduates this summer. One of the project students, Connell Gauld, was tasked with producing a version of Tor for the Android mobile phone platform that could be used on a standard handset.

Connell did a great job and on Friday we released TorProxy, a pure Java implementation of Tor based on OnionCoffee, and Shadow, a Web browser which uses TorProxy to permit anonymous browsing from your Android phone. Both applications are available on the Android Marketplace; remember to install TorProxy if you want to use Shadow.

The source code for both applications is released under GPL v2 and is available from our SVN repository on the project home page. There are also instructions on how to use TorProxy to send and receive data via Tor from your own Android application.

Tuning in to random numbers

Tomorrow at Cryptographic Hardware and Embedded Systems 2009 I’m going to be presenting a frequency injection attack on random number generators formed from ring oscillators.

Random numbers are a vital part of cryptography — if predictable numbers are being used, an attacker may be able to read secret messages, impersonate either party, or replay transactions. In addition, many countermeasures to attacks such as Differential Power Analysis involve adding randomness to operations; without that randomness, algorithms such as RSA become susceptible.
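
As an illustration of that last point, here is a minimal sketch of RSA base blinding, one common way randomness is used to frustrate power analysis. The key below is a textbook toy and the function name is mine; this is a sketch of the general technique under those assumptions, not any particular product's implementation.

```python
# A minimal sketch of RSA base blinding, one common use of randomness as a
# side-channel countermeasure. The key is a textbook toy (never use numbers
# this small); the point is only the blind/unblind structure.
import math
import secrets

p, q = 61, 53
n = p * q            # 3233
e = 17               # public exponent
d = 2753             # private exponent, e*d ≡ 1 (mod (p-1)*(q-1))

def blinded_decrypt(c: int) -> int:
    """Exponentiate a randomly blinded value instead of the raw ciphertext."""
    while True:
        r = secrets.randbelow(n - 2) + 2      # random blinding factor in [2, n-1]
        if math.gcd(r, n) == 1:
            break
    c_blinded = (c * pow(r, e, n)) % n        # blind: c * r^e  mod n
    m_blinded = pow(c_blinded, d, n)          # (c * r^e)^d = m * r  mod n
    return (m_blinded * pow(r, -1, n)) % n    # strip the blinding factor

m = 65
c = pow(m, e, n)
assert blinded_decrypt(c) == m                # same answer, different power trace each time
```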

Creating unpredictable random numbers in a predictable computer involves measuring some kind of physical process. Examples include circuit noise, radioactive decay and timing variations. One method commonly used in low-cost circuits such as smartcards is measuring the jitter from free-running ring oscillators. The ring oscillators’ frequencies depend on environmental factors such as voltage and temperature, and by having many independent ring oscillators we can harvest the small timing differences between them (jitter).
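
To make the idea concrete, here is a toy software model of jitter harvesting: a slow, jittery oscillator samples a fast free-running one, and where the sampling edge lands within the fast cycle yields one bit. It is not a hardware-accurate model, and all the constants are made up.

```python
# Toy model (not hardware-accurate) of harvesting bits from ring-oscillator
# jitter: a slow, jittery oscillator samples a fast one, and the position of
# each sampling edge within the fast cycle gives one bit.
import random

FAST_PERIOD = 1.0        # arbitrary time units
SLOW_PERIOD = 97.3       # deliberately not an integer multiple of the fast period
JITTER_SIGMA = 0.8       # per-cycle timing noise on the slow oscillator (made up)

def harvest_bits(n_bits: int, seed: int = 1) -> str:
    noise = random.Random(seed)    # stands in for the physical noise source
    t, bits = 0.0, []
    for _ in range(n_bits):
        t += SLOW_PERIOD + noise.gauss(0, JITTER_SIGMA)   # time of the next slow edge
        phase = (t % FAST_PERIOD) / FAST_PERIOD           # where it lands in the fast cycle
        bits.append("1" if phase < 0.5 else "0")          # sample the fast oscillator's output
    return "".join(bits)

print(harvest_bits(64))
```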

But what happens if they aren’t independent? In particular, what happens if the circuit faces an attacker who can manipulate the system from the outside?

The attack turns out to be fairly straightforward. It relies on injection locking, an effect known since 1665, which occurs when two oscillators are very lightly coupled. For example, two pendulum clocks mounted on the same wall tend to synchronise the swing of their pendula through the small vibrations transmitted through the wall.

In an electronic circuit, the attacker can inject a signal to force the ring oscillators to injection-lock. The simplest way is to force a frequency onto the power supply from which the ring oscillators are powered. If there are any imbalances in the circuit, we suggest that they make the ring oscillators more susceptible to injection locking at that point. We examined the effects of power supply injection, and can envisage a similar attack by irradiation with electromagnetic fields.
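
A toy phase-oscillator simulation illustrates the effect. It is not a circuit-level model of the attack; the frequencies and coupling constants are invented, and the Adler-style equations are simply the easiest way to show two nominally independent oscillators being pulled onto an injected tone. Once they lock, their relative timing is fixed, so there is essentially no jitter left to harvest.

```python
# Toy phase-oscillator model of injection locking (an illustration, not a
# circuit-level simulation): two ring oscillators with slightly different
# free-running frequencies are weakly pulled towards a tone injected on
# their shared power supply. All frequencies and coupling values are made up.
import math

def residual_beat_cycles(coupling: float, steps: int = 500_000, dt: float = 1e-8) -> float:
    f1, f2, f_inj = 1.800e6, 1.809e6, 1.805e6    # Hz
    th1 = th2 = th_inj = 0.0
    for _ in range(steps):
        th_inj += 2 * math.pi * f_inj * dt
        # Adler-style model: each oscillator's phase is nudged towards the injected tone.
        th1 += 2 * math.pi * f1 * dt + coupling * math.sin(th_inj - th1) * dt
        th2 += 2 * math.pi * f2 * dt + coupling * math.sin(th_inj - th2) * dt
    return (th1 - th2) / (2 * math.pi)           # relative drift over the run, in cycles

print("free running  :", round(residual_beat_cycles(coupling=0.0), 2), "cycles of drift")
print("with injection:", round(residual_beat_cycles(coupling=2e5), 2), "cycles of drift")
```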

And it works surprisingly well. We tried an old version of a secure microcontroller that has been used in banking ATMs (and is still recommended for new ones). For the 32 random bits used in an ATM transaction, we managed to reduce the number of possibilities from about 4 billion (2^32) to about 225.

So if an attacker has access to your card and PIN, in a modified shop terminal for example, he can record some ATM transactions. He then needs to take a fake card to an ATM containing this microcontroller. On average he will need to record 15 transactions (the square root of 225) on the card and try 15 transactions at the ATM before he can steal the money. This number may be small enough not to set off alarms at the bank. The customer’s card and PIN were used for the transaction, but at a time when the customer was nowhere near an ATM.
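
The square-root figure is a birthday-style argument. A rough back-of-the-envelope check, assuming the ATM draws its nonces uniformly and independently from the reduced set (my reading of the figures above), looks like this:

```python
# Back-of-the-envelope check of the square-root argument, assuming the ATM's
# nonces are drawn uniformly and independently from the reduced set.
N = 225         # possible nonce values after frequency injection
recorded = 15   # transactions recorded against the real card and PIN
attempts = 15   # transactions attempted at the ATM with the fake card

expected_matches = recorded * attempts / N              # ~1.0
p_at_least_one = 1 - (1 - recorded / N) ** attempts     # ~0.64

print(f"expected matches: {expected_matches:.2f}")
print(f"P(at least one ATM nonce was recorded): {p_at_least_one:.0%}")
```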

While we looked at power supply injection, the ATM could also be attacked electromagnetically. Park a car next to the ATM and emit a 10 GHz signal, amplitude modulated by the ATM’s vulnerable frequency (1.8 MHz in our example). The 10 GHz carrier will penetrate the ventilation slots but then be filtered away, leaving 1.8 MHz in the power supply. When the car drives away there is no evidence that the random numbers were bad – and bad random numbers are very difficult to detect anyway.

We also tried the same attack on an EMV (‘Chip and PIN’) bank card. Before injection, the card failed only one of the 188 tests in the standard NIST suite for random number testing. With injection it failed 160 of the 188. While we cannot completely predict the random number generator’s output, clear patterns can be seen in it.
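
To give a flavour of what “failing” means here, below is a minimal sketch of the simplest test in the NIST SP 800-22 suite, the monobit (frequency) test, which checks whether a sequence contains roughly equal numbers of ones and zeros; a sequence fails if its p-value drops below 0.01. The example sequences are contrived.

```python
# Minimal sketch of the NIST SP 800-22 "monobit" frequency test, the simplest
# test in the suite mentioned above. A sequence fails if its p-value < 0.01.
import math

def monobit_p_value(bits: str) -> float:
    n = len(bits)
    s = sum(1 if b == "1" else -1 for b in bits)    # +1 per one, -1 per zero
    s_obs = abs(s) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2))

balanced = "01" * 5000                    # exactly half ones: passes this test
                                          # (it would, of course, fail others such as the runs test)
biased = ("111111" + "0000") * 1000       # 60% ones: fails
print(monobit_p_value(balanced) >= 0.01)  # True
print(monobit_p_value(biased) >= 0.01)    # False
```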

So, as ever, designing good random number generators turns out to be a hard problem, not least because the attacker can tamper with your system in more ways than you might expect.

You can find the paper and slides on my website.

Which? survey of online banking security

Today Which? released their survey of online banking security. The results are summarized in their press release and the full article is in the September edition of “Which? Computing”.

The article found that there was substantial variation in what authentication measures UK banks used. Some used normal password fields, some used drop-down boxes, and some used a CAP smart card reader. All of these are vulnerable to attack by a sophisticated criminal (see for example our paper on CAP), but the article argued that it is better to force attackers to work harder to break into a customer’s account. Whether this approach would actually decrease fraud is an interesting question. Intuitively it makes sense, but it might just succeed in putting the manufacturers of unsophisticated malware out of business, and the criminals actually performing the fraud would just buy a smarter kit.

However, what I found most interesting were the responses from the banks whose sites were surveyed.

Barclays (which came top due to their use of CAP) were pleased:

“We believe our customers have the best security packages of all online banks to protect them and their money.”

In contrast, Halifax (who came bottom) didn’t like the survey, saying:

“Any meaningful assessment of a bank’s fraud prevention tools needs to fully examine all systems whether they can be seen directly by customers or not and we would never release details of these systems to any third party.”

I suppose it is unsurprising that the banks which came top were happier with the results than those which came bottom, but to a certain extent I sympathize with Halifax. They are correct in saying that back-end controls (e.g. spotting suspicious transactions and reversing fraudulent ones) are very important tools for preventing fraud. I think the article is clear on this point, always saying that they are comparing “customer-facing” or “visible” security measures and including a section describing the limitations of the study.

However, I think this complaint indicates a deeper problem with consumer banking: customers have no way to tell which bank will better protect their money. About the only figure the banks offered was HSBC saying they were better than average. Fraud figures for individual banks do exist (APACS collects them), and they are shared between the banks, but they are withheld from customers and shareholders. So I don’t think it is surprising that consumer groups are comparing the only thing they can.

I can understand the reluctance to publish fraud figures — it makes customers think their money is not safe, and no bank wants to be at the bottom. However, I do think it would be in the long-term best interests of everyone if there could be meaningful comparison of banks in terms of security. Customers can compare their safety while driving and while in hospital, so why not when they bank online?

So while I admit there are problems with the Which? report, I do think it is a step in the right direction. They are joining a growing group of security professionals who are calling for better data on security breaches. Which? were also behind the survey which found that 20% of fraud victims don’t get their money back, and a campaign to get better statistics on complaints against banks. I wish them luck in their efforts.

Defending against wedge attacks in Chip & PIN

The EMV standard, which is behind Chip & PIN, is not so much a protocol as a toolkit from which protocols can be built. One component it offers is card authentication, which allows the terminal to discover whether a card is legitimate without having to go online and contact the bank which issued it. Since the deployment of Chip & PIN, cards issued in the UK have only offered SDA (static data authentication) for card authentication. Here, the card contains a digital signature over a selection of data records (e.g. card number, expiry date, etc.). This digital signature is generated by the issuing bank, and the bank’s public key is, in turn, signed by the payment scheme (e.g. Visa or MasterCard).

The transaction process for an SDA card goes roughly as follows:

  • Card authentication:      Card → Terminal: records, sig_BANK{records}
  • Cardholder verification:  Terminal → Card: PIN entered
                              Card → Terminal: PIN OK
  • Transaction authorisation: Terminal → Card: transaction, nonce
                              Card → Terminal: MAC{transaction, nonce, PIN OK}

Some things to note here:

  • The card contains a static digital signature, which anyone can read and replay
  • The response to PIN verification is not authenticated

This means that anyone who has brief access to the card can read the digital signature and create a clone which will pass card authentication. Moreover, the fraudster doesn’t need to know the customer’s PIN in order to use the clone, because it can simply return “yes” to any PIN and the terminal will be satisfied. Such clones (so-called “yes cards”) have been produced by security consultants as a demo, and have also been found in the wild.
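
The following sketch, with toy crypto and invented message formats rather than real EMV ones, shows why an offline terminal accepts such a clone: the replayed static signature verifies correctly, and the “PIN OK” reply is not authenticated.

```python
# Simplified, runnable sketch (toy crypto, nothing like real EMV message
# formats) of why an SDA clone is accepted offline.
import hashlib

N, E, D = 3233, 17, 2753     # toy issuer RSA key (illustrative sizes only)

def issuer_sign(records: bytes) -> int:
    digest = int.from_bytes(hashlib.sha256(records).digest(), "big") % N
    return pow(digest, D, N)                   # done once by the bank, stored on the card

def signature_valid(records: bytes, sig: int) -> bool:
    digest = int.from_bytes(hashlib.sha256(records).digest(), "big") % N
    return pow(sig, E, N) == digest            # all an offline terminal can check

class YesCard:
    """A clone built purely from data that can be read off a genuine card."""
    def __init__(self, records: bytes, static_signature: int):
        self.records, self.static_signature = records, static_signature

    def card_authentication(self):
        return self.records, self.static_signature    # replayed verbatim

    def verify_pin(self, pin: str) -> str:
        return "PIN OK"                                # accepts any PIN

    def transaction_mac(self, transaction: bytes, nonce: bytes) -> bytes:
        return b"\x00" * 8                             # clone has no MAC key, but...

def offline_terminal_accepts(card, pin, transaction, nonce) -> bool:
    records, sig = card.card_authentication()
    if not signature_valid(records, sig):              # clone passes: the signature is genuine
        return False
    if card.verify_pin(pin) != "PIN OK":               # clone passes: reply is unauthenticated
        return False
    card.transaction_mac(transaction, nonce)           # ...only the issuer can check this MAC,
    return True                                        # which an offline terminal cannot do

records = b"card number, expiry date and other static records"
genuine_signature = issuer_sign(records)               # readable from the genuine card
clone = YesCard(records, genuine_signature)
print(offline_terminal_accepts(clone, pin="0000", transaction=b"GBP 50", nonce=b"42"))  # True
```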

User complaints about photos in Facebook ads

I wrote about the mess caused by Facebook’s insecure application platform nearly 2 months ago. I also wrote about the long-term problems with “informed consent” for data use in social networks. In the past week, both problems came to a head as users began complaining about multiple third-party ad networks using their photos in banner ads. When I mentioned this problem in June, Facebook had just shut down the biggest ad networks for “deceptive practices,” specifically duping users into a US$20-per-month ringtone subscription. The void created by banning SocialReach and SocialHour apparently led to many new advertisers popping up in their place, with most carrying on the practice of using user photos to hawk quizzes, dating services, and the like. The ubiquitous ads annoyed enough users that Facebook was convinced to ban advertisers from using personal data. This is a welcome move, but Facebook underhandedly inserted a curious new privacy setting at “Privacy Settings->News Feed and Wall->Facebook Ads”:

Facebook does not give third party applications or ad networks the right to use your name or picture in ads. If this is allowed in the future, this setting will govern the usage of your information.

With this change, Facebook has quietly reserved the right to re-allow applications to utilise user data in ads in the future, and has opted everybody in to the feature. We’ve written about social networks minimising privacy salience, but this is plainly deceptive. It’s hard not to conclude this setting was purposefully hidden from sight, as ads shown by third-party applications have nothing to do with the News Feed or Wall. The choices of “No One” or “Only Friends” are also obfuscating, as only friends’ applications can access data from Facebook’s API to begin with; this is a simple “opt-out” checkbox dressed up to make being opted in seem more private. Meanwhile, Facebook has been showing users a patronising popup message on log-in:

Worried about privacy? Your photos are safe. There have been misleading rumors recently about the use of your photos in ads. Don’t believe them. These rumors were related to third-party applications, and not ads shown by Facebook. Get the whole story at the Facebook Blog, or check out the Help Center.

This message is misleading, if not outright dishonest, and shows an alarming dismissal of what was a widespread practice that offended many users. People weren’t concerned with whether their photos were sent to advertisers by Facebook itself or by third parties. They don’t want their photos or names used or stored by advertisers, regardless of the technical details. The platform API remains fundamentally broken and gives users no way to prevent applications from accessing their photos. Facebook would be best served by fixing this instead of dismissing users’ concern for privacy as “misleading rumors.”

How much did shutting down McColo help?

On 11 November 2008 McColo, a Californian server hosting company, was disconnected from the Internet. This took the controllers for 6 major botnets offline. It has been widely reported that email spam volumes were markedly reduced for some time thereafter. But did disconnecting McColo only get rid of “easy to block” spam?

In a paper presented this week at the Sixth Conference on Email and Antispam (CEAS) I examined email traffic data for the incoming email to a UK ISP to see what effect the disconnection had.

The Economics of Privacy in Social Networks

We often equate social networking with Facebook, MySpace, and the also-rans, but in reality there are tons of social networks out there, dozens of which have membership in the millions. Around the world it’s quite a competitive market. Sören Preibusch and I decided to study the whole ecosystem to analyse how free-market competition has shaped the privacy practices which I’ve been complaining about. We carefully examined 45 sites, collecting over 250 data points about each site’s privacy policies, privacy controls, data collection practices, and more. The results were fascinating, as we presented this week at the WEIS conference in London. Our full paper and complete dataset are now available online as well.

We collected a lot of data, and there was a little bit of something for everybody. There was encouraging news for fans of globalisation, as we found the social networking concept popular across many cultures and languages, with the most popular sites being available in over 40 languages. There was an interesting finding from a business perspective that photo sharing may be the killer application for social networks, as this feature was promoted far more often than sharing videos, blogging, or playing games. Unfortunately the news was mostly negative from a privacy standpoint. We found some predictable but still surprising problems. Too much unnecessary data is collected by most sites, with 90% requiring a full name and date of birth. Security practices are dreadful: no sites employed phishing countermeasures, and 80% of sites failed to protect password entry using TLS. Privacy policies were obfuscated and confusing, and almost half failed basic accessibility tests. Privacy controls were confusing and overwhelming, and profiles were almost universally left open by default.
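
As an illustration of the TLS point, here is a rough, standard-library-only sketch of how one might check whether a site protects password entry: both the login page and the form it submits to need to be served over https. The URL is a placeholder, and this simple parser cannot see JavaScript-driven login forms; it is a sketch of the kind of check involved, not the survey’s actual methodology.

```python
# Rough sketch of checking whether password entry is protected by TLS:
# the login page and the form's submission target must both be https.
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen

class PasswordFormFinder(HTMLParser):
    """Collect the action URLs of any forms that contain a password field."""
    def __init__(self):
        super().__init__()
        self.actions = []
        self._action = None
        self._has_password = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "form":
            self._action, self._has_password = attrs.get("action", ""), False
        elif tag == "input" and attrs.get("type") == "password":
            self._has_password = True

    def handle_endtag(self, tag):
        if tag == "form" and self._has_password:
            self.actions.append(self._action)

def password_entry_uses_tls(login_url: str) -> bool:
    page = urlopen(login_url).read().decode("utf-8", errors="replace")
    finder = PasswordFormFinder()
    finder.feed(page)
    if not finder.actions:
        return False          # no password form found; count it as a failure
    targets = [urljoin(login_url, action) for action in finder.actions]
    return urlparse(login_url).scheme == "https" and all(
        urlparse(t).scheme == "https" for t in targets
    )

# Example (placeholder URL):
# password_entry_uses_tls("https://socialnetwork.example/login")
```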

The most interesting story we found, though, was how sites consistently hid any mention of privacy until we visited the privacy policies, where they provided paid privacy seals and strong reassurances about how important privacy is. We developed a novel economic explanation for this: sites appear to craft two different messages for two different populations. Most users care about privacy but don’t think about it in day-to-day life. Sites take care to avoid mentioning privacy to them, because even mentioning privacy positively will cause them to be more cautious about sharing data. This phenomenon is known as “privacy salience”, and it makes sites tread very carefully around privacy, because users must be comfortable sharing data for the site to be fun. Instead of mentioning privacy, new users are shown a huge sample of other users posting fun pictures, which encourages them to share as well. For the privacy fundamentalists who go looking for privacy by reading the privacy policy, though, it is important to drum up privacy reassurance.

The privacy fundamentalists of the world may be positively influencing privacy on major sites through their pressure. Indeed, the bigger, older, and more popular sites we studied had better privacy practices overall. But the desire to limit privacy salience is also a major problem, because it prevents sites from providing clear information about their privacy practices. Most users therefore can’t tell what they’re getting into, resulting in the predominance of poor practices in this “privacy jungle.”

Static Consent and the Dynamic Web

Last week Facebook announced the end of regional networks for access control. The move makes sense: regional networks had no authentication so information available to them was easy to get with a fake account. Still, silently making millions of weakly-restricted profiles globally viewable raises some disturbing questions. If Terms of Service promise to only share data consistent with users’ privacy settings, but the available privacy settings change as features are added, what use are the terms as a legal contract? This is just one instance of a major problem for rapidly evolving web pages which rely on a static model of informed consent for data collection. Even “privacy fundamentalists” who are careful to read privacy policies and configure their privacy settings can’t be confident of their data’s future for three main reasons:

  • Functionality Changes: Web 2.0 sites add features constantly, usually with little warning or announcement. Users are almost always opted in, for fear that features won’t get noticed otherwise. Personal data is shared before users have any chance to opt out. Facebook has done this repeatedly, opting users in to News Feed, Beacon, Social Ads, and Public Search Listings. This has generated a few sizeable backlashes, but Facebook maintains that users must try new features in action before they can reasonably opt out.
  • Contractual Changes: Terms of Service documents can often be changed without notice, and users automatically agree to the new terms by continuing to use the service. In a study we’ll be publishing at WEIS next month evaluating 45 social networking sites, almost half don’t guarantee to announce changes to their privacy policies. Less than 10% of the sites commit to a mandatory notice period before implementing changes (typically a week or less). Realistically, at least 30 days are needed for fundamentalists to read the changes and cancel their accounts if they wish.
  • Ownership Changes: As reported in the excellent survey of web privacy practices by the KnowPrivacy project at UC Berkeley, the vast majority (over 90%) of sites explicitly reserve the right to share data with ‘affiliates’ subject only to the affiliate’s privacy policy. Affiliate is an ambiguous term, but it includes at least parent companies and their subsidiaries. If your favourite web site gets bought out by an international conglomerate, your data is transferred to the new owners, who can instantly start using it under their own privacy policy. This isn’t an edge case; it’s a major loophole: websites are bought and sold all the time, and for many startups acquisition is the business model.

For any of these reasons, the terms under which consent was given can be changed without warning. Safely disclosing personal data on the web thus requires continuously monitoring sites for new functionality, updated terms of service, or mergers, and instantaneously opting out if you are no longer comfortable. This is impossible even for privacy fundamentalists with an infinite amount of patience and legal knowledge, rendering the old paradigm of informed consent for data collection unworkable for Web 2.0.