I have put together a web page on psychology and security. There is a fascinating interplay between these two subjects, and their intersection is now emerging as a new research discipline, encompassing deception, risk perception, security usability and a number of other important topics. I hope that the new web page will be as useful in spreading the word as my security economics page has been in that field.
apComms backs ISP cleanup activity
The All Party Parliamentary Communications Group (apComms) recently published their report into an inquiry entitled “Can we keep our hands off the net?”
They looked at a number of issues, from “network neutrality” to how best to deal with child sexual abuse images. Read the report for all the details; in this post I’m just going to draw attention to one of the most interesting, and timely, recommendations:
51. We recommend that UK ISPs, through Ofcom, ISPA or another appropriate organisation, immediately start the process of agreeing a voluntary code for detection of, and effective dealing with, malware infected machines in the UK.
52. If this voluntary approach fails to yield results in a timely manner, then we further recommend that Ofcom unilaterally create such a code, and impose it upon the UK ISP industry on a statutory basis.
The problem is that although ISPs are pretty good these days at dealing with incoming badness (spam, DDoS attacks etc), they can be rather reluctant to deal with customers whose machines are infected with malware and are sending spam, DDoS attacks etc to other parts of the world.
From a “security economics” point of view this isn’t too surprising (as I and colleagues pointed out in a report to ENISA). Customers demand effective anti-spam, or they leave for another ISP. But talking to customers and holding their hand through a malware infection is expensive for the ISP, and customers may just leave if hassled, so the ISPs have limited incentives to take any action.
When markets fail to solve problems, then you regulate… and what apComms is recommending is that a self-regulatory solution be given a chance to work. We shall have to see whether the ISPs seize this chance, or if compulsion will be required.
This UK-focussed recommendation is not taking place in isolation; there’s been activity all over the world in the past few weeks — in Australia the ISPs are consulting on a Voluntary Code of Practice for Industry Self-regulation in the Area of e-Security, in the Netherlands the main ISPs have signed an “Anti-Botnet Treaty”, and in the US the main cable provider, Comcast, has announced that its “Constant Guard” programme will in future detect if its customers’ machines become members of a botnet.
ObDeclaration: I assisted apComms as a specialist adviser, but the decision on what they wished to recommend was theirs alone.
Economics of peer-to-peer systems
A new paper, Olson’s Paradox Revisited: An Empirical Analysis of File-Sharing Behaviour in P2P Communities, finds a positive correlation between the size of a BitTorrent file-sharing community and the amount of content shared, despite a reduced individual propensity to share in larger groups, and deduces from this that file-sharing communities provide a pure (non-rival) public good. Forcing users to upload results in a smaller catalogue; but private networks provide both more and better content, as do networks aimed at specialised communities.
George Danezis and I produced a theoretical model of this five years ago in The Economics of Censorship Resistance. It’s nice to see that the data, now collected, bear us out.
Tor on Android
Andrew Rice and I ran a ten-week internship programme for Cambridge undergraduates this summer. One of the project students, Connell Gauld, was tasked with producing a version of Tor for the Android mobile phone platform which could be used on a standard handset.
Connell did a great job and on Friday we released TorProxy, a pure Java implementation of Tor based on OnionCoffee, and Shadow, a Web browser which uses TorProxy to permit anonymous browsing from your Android phone. Both applications are available on the Android Marketplace; remember to install TorProxy if you want to use Shadow.
The source code for both applications is released under GPL v2 and is available from our SVN repository on the project home page. There are also instructions on how to use TorProxy to send and receive data via Tor from your own Android application.
Tuning in to random numbers
Tomorrow at Cryptographic Hardware and Embedded Systems 2009 I’m going to be presenting a frequency injection attack on random number generators formed from ring oscillators.
Random numbers are a vital part of cryptography — if predictable numbers are being used, an attacker may be able to read secret messages, impersonate either party, or replay transactions. In addition, many countermeasures to attacks such as Differential Power Analysis involve adding randomness to operations; without that randomness, algorithms such as RSA become susceptible.
Creating unpredictable random numbers in a predictable computer involves measuring some kind of physical process. Examples include circuit noise, radioactive decay and timing variations. One method commonly used in low-cost circuits such as smartcards is measuring the jitter from free-running ring oscillators. The ring oscillators’ frequencies depend on environmental factors such as voltage and temperature, and by having many independent ring oscillators we can harvest small timing differences between them (jitter).
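To make the harvesting idea concrete, here is a toy software model of such an entropy source. It is an illustration only; the periods, the jitter level and the parity-sampling scheme are assumptions of mine, not the design of any real chip. One jittery oscillator is counted between edges of a slower one, and the parity of the count becomes the raw bit:

```python
import random

def harvest_bits(n_bits, fast_period=1.0, slow_period=37.0, jitter=0.02):
    """Toy entropy source: count cycles of a fast, jittery ring oscillator between
    successive edges of a slower, jittery one, and keep the parity of the count.
    Periods are in arbitrary time units; 'jitter' is the relative standard
    deviation added to every cycle. All values are illustrative assumptions."""
    bits = []
    t_fast, t_slow, count = 0.0, 0.0, 0
    for _ in range(n_bits):
        t_slow += random.gauss(slow_period, jitter * slow_period)
        while t_fast < t_slow:
            t_fast += random.gauss(fast_period, jitter * fast_period)
            count += 1
        bits.append(count & 1)  # parity of the running cycle count is the raw bit
    return bits

if __name__ == "__main__":
    bits = harvest_bits(10000)
    print("fraction of ones:", sum(bits) / len(bits))
```

The sketch only produces useful bits while the two oscillators drift independently of each other, which is exactly the assumption the next paragraph questions.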
But what happens if they aren’t independent? In particular, what happens if the circuit is faced with an attacker who can manipulate the outside of the system?
The attack turns out to be fairly straightforward. It exploits injection locking, an effect known since 1665, which occurs when two oscillators are very lightly coupled. For example, two pendulum clocks mounted on a wall tend to synchronise the swing of their pendula through small vibrations transmitted through the wall.
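The same effect is easy to reproduce numerically. Below is a toy simulation of two weakly coupled phase oscillators (a Kuramoto-style model of my own choosing; the frequencies and coupling strengths are arbitrary illustrative values, not figures from the paper). With no coupling the oscillators keep their 2% frequency difference; with enough coupling the difference collapses to zero and they lock.

```python
import math

def average_frequency_difference(coupling, f1=1.00, f2=1.02, steps=100000, dt=0.001):
    """Integrate two phase oscillators that nudge each other by coupling * sin(phase
    difference), and return the difference between their average frequencies."""
    p1, p2 = 0.0, 0.3
    for _ in range(steps):
        pull = math.sin(p2 - p1)
        p1 += dt * (2 * math.pi * f1 + coupling * pull)
        p2 += dt * (2 * math.pi * f2 - coupling * pull)
    return (p2 - p1 - 0.3) / (2 * math.pi * steps * dt)

if __name__ == "__main__":
    for k in (0.0, 0.05, 0.2):
        print(f"coupling {k}: frequency difference {average_frequency_difference(k):+.4f}")
```

With no coupling the printed difference stays at the natural 0.02; a weak coupling pulls the frequencies part-way together; a strong enough coupling drives the difference to zero, which is injection locking.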
In an electronic circuit, the attacker can inject a signal to force the ring oscillators to injection-lock. The simplest way involves forcing a frequency onto the power supply from which the ring oscillators are powered. If there are any imbalances in the circuit, we suggest that these make the ring oscillators more susceptible to injection locking at that point. So we examined the effects of power supply injection, and can envisage a similar attack by irradiation with electromagnetic fields.
And it works surprisingly well. We tried an old version of a secure microcontroller that has been used in banking ATMs (and is still recommended for new ones). For the 32 random bits that are used in an ATM transaction, we managed to reduce the number of possibilities from 4 billion to about 225.
So if an attacker has access to your card and PIN, in a modified shop terminal for example, he can record some ATM transactions. Then he needs to take a fake card to the ATM containing this microcontroller. On average he’ll need to record 15 transactions (the square root of 225) on the card and try 15 transactions at the ATM before he can steal the money. This number may be small enough not to set off alarms at the bank. The customer’s card and PIN were used for the transaction, but at a time when he was nowhere near an ATM.
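As a rough sanity check on the square-root argument, suppose the weakened generator is equally likely to output any of the 225 remaining values (a simplification; the distribution need not be uniform in practice). The attacker wins if any of his ATM attempts matches one of the values he recorded in the shop:

```python
def hit_probability(recorded, attempts, space=225):
    """Chance that at least one ATM attempt repeats one of the recorded values,
    assuming each transaction draws uniformly from 'space' possible values."""
    return 1 - (1 - recorded / space) ** attempts

if __name__ == "__main__":
    for n in (5, 10, 15, 20):
        print(f"record {n}, try {n}: {hit_probability(n, n):.0%} chance of success")
```

Recording and trying 15 transactions each gives roughly a two-in-three chance of success under this assumption, which is why the square root of 225 is the right order of magnitude.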
While we looked at power supply injection, the ATM could also be attacked electromagnetically. Park a car next to the ATM and emit a 10 GHz signal, amplitude modulated at the ATM’s vulnerable frequency (1.8 MHz in our example). The 10 GHz will penetrate the ventilation slots but then be filtered away, leaving 1.8 MHz in the power supply. When the car drives away, there’s no evidence that the random numbers were bad – and bad random numbers are very difficult to detect anyway.
We also tried the same attack on an EMV (‘Chip and PIN’) bank card. Before injection, the card failed only one of the 188 tests in the standard NIST suite for random number testing. With injection it failed 160 of 188. While we can’t completely predict the random number generator, clear patterns can be seen in its output.
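For a flavour of what the NIST suite measures, here is its simplest component, the frequency (monobit) test from NIST SP 800-22, in a few lines of Python. It only detects gross bias towards ones or zeros; the suite’s other tests look for longer-range structure of the kind a manipulated generator may exhibit.

```python
import math
import random

def monobit_p_value(bits):
    """NIST SP 800-22 frequency (monobit) test: map bits to +/-1, sum them, and
    convert the normalised sum to a p-value; values below 0.01 count as a failure."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)
    s_obs = abs(s) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2))

if __name__ == "__main__":
    unbiased = [random.getrandbits(1) for _ in range(10000)]
    biased = [1 if random.random() < 0.55 else 0 for _ in range(10000)]
    print("unbiased p-value:", round(monobit_p_value(unbiased), 3))
    print("biased p-value:  ", monobit_p_value(biased))
```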
So, as ever, designing good random number generators turns out to be a hard problem, not least because the attacker can tamper with your system in more ways than you might expect.
Which? survey of online banking security
Today Which? released their survey of online banking security. The results are summarized in their press release and the full article is in the September edition of “Which? Computing”.
The article found that there was substantial variation in what authentication measures UK banks used. Some used normal password fields, some used drop-down boxes, and some used a CAP smart card reader. All of these are vulnerable to attack by a sophisticated criminal (see for example our paper on CAP), but the article argued that it is better to force attackers to work harder to break into a customer’s account. Whether this approach would actually decrease fraud is an interesting question. Intuitively it makes sense, but it might just succeed in putting the manufacturers of unsophisticated malware out of business, and the criminals actually performing the fraud would just buy a smarter kit.
However, what I found most interesting were the responses from the banks whose sites were surveyed.
Barclays (which came top due to their use of CAP) were pleased:
“We believe our customers have the best security packages of all online banks to protect them and their money.”
In contrast, Halifax (who came bottom) didn’t like the survey, saying:
“Any meaningful assessment of a bank’s fraud prevention tools needs to fully examine all systems whether they can be seen directly by customers or not and we would never release details of these systems to any third party.”
I suppose it is unsurprising that the banks which came top were happier with the results than those which came bottom, but to a certain extent I sympathize with Halifax. They are correct in saying that back-end controls (e.g. spotting suspicious transactions and reversing fraudulent ones) are very important tools for preventing fraud. I think the article is clear on this point, always saying that they are comparing “customer-facing” or “visible” security measures and including a section describing the limitations of the study.
However, I think this complaint indicates a deeper problem with consumer banking: customers have no way to tell which bank will better protect their money. About the only figure the banks offered was HSBC saying they were better than average. Fraud figures for individual banks do exist (APACS collects them), and they are shared between the banks, but they are withheld from customers and shareholders. So I don’t think it is surprising that consumer groups are comparing the only thing they can.
I can understand the reluctance to publish fraud figures — it makes customers think their money is not safe, and no bank wants to be at the bottom. However, I do think it would be in the long-term best interests of everyone if there could be meaningful comparison of banks in terms of security. Customers can compare their safety while driving and while in hospital, so why not when they bank online?
So while I admit there are problems with the Which? report, I do think it is a step in the right direction. They are joining a growing group of security professionals who are calling for better data on security breaches. Which? were also behind the survey which found that 20% of fraud victims don’t get their money back, and a campaign to get better statistics on complaints against banks. I wish them luck in their efforts.
Defending against wedge attacks in Chip & PIN
The EMV standard, which is behind Chip & PIN, is not so much a protocol, but a toolkit from which protocols can be built. One component it offers is card authentication, which allows the terminal to discover whether a card is legitimate, without having to go online and contact the bank which issued it. Since the deployment of Chip & PIN, cards issued in the UK only offer SDA (static data authentication) for card authentication. Here, the card contains a digital signature over a selection of data records (e.g. card number, expiry date, etc). This digital signature is generated by the issuing bank, and the bank’s public key is, in turn, signed by the payment scheme (e.g. Visa or MasterCard).
The transaction process for an SDA card goes roughly as follows:
| Card auth. | Card → Terminal: | records, sig_BANK{records} |
| Cardholder verif. | Terminal → Card: | PIN entered |
| | Card → Terminal: | PIN OK |
| Transaction auth. | Terminal → Card: | transaction, nonce |
| | Card → Terminal: | MAC{transaction, nonce, PIN OK} |
Some things to note here:
- The card contains a static digital signature, which anyone can read and replay
- The response to PIN verification is not authenticated
This means that anyone who has brief access to the card can read the digital signature and create a clone which will pass card authentication. Moreover, the fraudster doesn’t need to know the customer’s PIN in order to use the clone, because it can simply return “yes” to any PIN and the terminal will be satisfied. Such clones (so-called “yes cards”) have been produced by security consultants as a demo, and have also been found in the wild.
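To make the weakness concrete, here is a purely schematic sketch of a “yes card” (illustrative Python; the class and method names are mine, and this is not real EMV APDU handling). Everything the terminal can check offline is either replayable or unauthenticated:

```python
class YesCard:
    """Illustrative clone: replays a static signature copied from a genuine card
    and claims any PIN is correct. Schematic only, not real EMV messaging."""

    def __init__(self, copied_records, copied_signature):
        self.records = copied_records        # card number, expiry date, etc.
        self.signature = copied_signature    # bank's static signature, read from the real card

    def read_records(self):
        # Card authentication: the terminal checks this signature against the
        # bank's public key, and it verifies, because it is a genuine signature.
        return self.records, self.signature

    def verify_pin(self, pin):
        # Cardholder verification: the "PIN OK" response is not authenticated,
        # so the clone simply agrees with whatever PIN was typed.
        return True

    def generate_mac(self, transaction, nonce):
        # Transaction authorisation: the clone lacks the card's symmetric key,
        # so it returns junk, but an offline terminal has no way to check it.
        return b"\x00" * 8
```

Only the issuing bank, checking the MAC later, can tell that the transaction did not come from a genuine card.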
Nuke photos
Here are some photos of a PAL (permissive action link), a PAL reprogrammer and a PAL verifier, taken at the National Museum of Nuclear Science and History by Steven J. Greenwald (more photos here). I’ve added them to the additional material for my book.
User complaints about photos in Facebook ads
I wrote about the mess caused by Facebook’s insecure application platform nearly 2 months ago. I also wrote about the long-term problems with “informed consent” for data use in social networks. In the past week, both problems came to a head as users began complaining about multiple third-party ad networks using their photos in banner ads. When I mentioned this problem in June, Facebook had just shut down the biggest ad networks for “deceptive practices,” specifically by duping users into a US$20 per month ringtone subscription. The void created by banning SocialReach and SocialHour apparently led to many new advertisers popping up in their place, with most carrying on the practice of using user photos to hawk quizzes, dating services, and the like. The ubiquitous ads annoyed enough users that Facebook was convinced to ban advertisers from using personal data. This is a welcome move, but Facebook underhandedly inserted a curious new privacy setting at “Privacy Settings->News Feed and Wall->Facebook Ads”:
Facebook does not give third party applications or ad networks the right to use your name or picture in ads. If this is allowed in the future, this setting will govern the usage of your information.
With this change, Facebook has quietly reserved the right to re-allow applications to utilise user data in ads in the future and opted everybody in to the feature. We’ve written about social networks minimising privacy salience, but this is plainly deceptive. It’s hard not to conclude this setting was purposefully hidden from sight, as ads shown by third-party applications have nothing to do with the News Feed or Wall. The choices of “No One” or “Only Friends” are also obfuscating, as only friends’ applications can access data from Facebook’s API to begin with; this is a simple “opt-out” checkbox dressed up to make being opted in seem more private. Meanwhile, Facebook has been showing users a patronising popup message on log-in:
How much did shutting down McColo help?
On 11 November 2008 McColo, a Californian server hosting company, was disconnected from the Internet. This took the controllers for 6 major botnets offline. It has been widely reported that email spam volumes were markedly reduced for some time thereafter. But did disconnecting McColo only get rid of “easy to block” spam?
In a paper presented this week at the Sixth Conference on Email and Antispam (CEAS) I examined email traffic data for the incoming email to a UK ISP to see what effect the disconnection had.