Category Archives: Academic papers

Passwords in the wild, part III: password standards for the Web

This is the third part in a series on password implementations at real websites, based on my paper at WEIS 2010 with Joseph Bonneau.

In our analysis of 150 password deployments online, we observed a surprising diversity of implementation choices. Whilst sites can be ranked by the overall security of their password scheme, there is a vast middle group in which sites make seemingly incongruous security decisions. We also found almost no evidence of commonality in implementations: examining the details of Web forms (variable names, etc.) and the format of automated emails, we found little sign that sites re-use a common code base. This lack of consistency in technical choices suggests that standards and guidelines could improve security.

Numerous RFCs concern themselves with one-time passwords and other relatively sophisticated authentication protocols. Yet traditional password-based authentication remains the most prevalent authentication protocol on the Internet, as the International Telecommunication Union (itself a United Nations specialised agency for standardising telecommunications worldwide) observes in ITU-T Recommendation X.1151, “Guideline on secure password-based authentication protocol with key exchange.” Client PKI has not seen widespread adoption, and tokens or smart cards are prohibitively expensive or inconvenient for most websites. While passwords have many shortcomings, it is essential to deploy them as carefully and securely as possible. Formal standards and best-practice guidelines are essential to help developers.

Continue reading Passwords in the wild, part III: password standards for the Web

Passwords in the wild, part II: failures in the market

This is the second part in a series on password implementations at real websites, based on my paper at WEIS 2010 with Sören Preibusch.

As we discussed yesterday, dubious practices abound within real sites’ password implementations. Password insecurity isn’t only due to random implementation mistakes, though. When we scored sites’ password implementations on a 10-point aggregate scale, it became clear that a wide spectrum of implementation quality exists. Many web authentication giants (Amazon, eBay, Facebook, Google, LiveJournal, Microsoft, MySpace, Yahoo!) scored near the top, joined by a few unlikely standouts (IKEA, CNBC). At the opposite end were a slew of lesser-known merchants and news websites. Exploring the factors which lead to better security confirms the basic tenets of security economics: sites with more at stake tend to do better. However, doing better isn’t enough. Given users’ well-documented tendency to re-use passwords, the varying levels of security may represent a serious market failure which is undermining the security of password-based authentication.

Continue reading Passwords in the wild, part II: failures in the market

Passwords in the wild, part I: the gap between theory and implementation

Sören Preibusch and I have finalised our in-depth report on password practices in the wild, The password thicket: technical and market failures in human authentication on the web, presented in Boston last month for WEIS 2010. The motivation for our report was a lack of technical research into real password deployments. Passwords have been studied as an authentication mechanism quite intensively for the last 30 years, but we believe ours was the first large study into how Internet sites actually implement them. We studied 150 sites, including the most visited overall sites plus a random sample of mid-level sites. We signed up for free accounts with each site, and using a mixture of scripting and patience, captured all visible aspects of password deployment, from enrolment and login to reset and attacks.

Our data (which is now publicly available) gives an interesting picture of the current state of password deployment. Because the dataset is huge and the paper is quite lengthy, we’ll be discussing our findings and their implications from a series of different perspectives. Today, we’ll focus on the preventable mistakes. In the academic literature, it’s assumed that passwords will be encrypted during transmission, hashed before storage, and that attempts to guess usernames or passwords will be throttled. None of these is widely true in practice.
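To make the baseline concrete, here is a minimal sketch (not code from any site in our study, and assuming TLS handles the transmission-encryption requirement) of the other two assumptions: salted password hashing before storage, and per-account throttling of failed guesses. The class and parameter names are illustrative.

```python
import hashlib
import hmac
import os
import time

def hash_password(password, salt=None):
    """Hash a password with a random per-user salt using PBKDF2."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

class Authenticator:
    MAX_ATTEMPTS = 5        # guesses allowed before throttling kicks in
    LOCKOUT_SECONDS = 300   # illustrative lockout window

    def __init__(self):
        self._store = {}     # username -> (salt, digest); never the password
        self._failures = {}  # username -> (failure count, first failure time)

    def register(self, username, password):
        self._store[username] = hash_password(password)

    def login(self, username, password):
        count, since = self._failures.get(username, (0, 0.0))
        if count >= self.MAX_ATTEMPTS and time.time() - since < self.LOCKOUT_SECONDS:
            return False  # throttled: too many recent failures
        entry = self._store.get(username)
        if entry is not None:
            salt, stored = entry
            _, attempt = hash_password(password, salt)
            if hmac.compare_digest(stored, attempt):  # constant-time compare
                self._failures.pop(username, None)    # reset on success
                return True
        self._failures[username] = (count + 1, since or time.time())
        return False
```

The point of the sketch is how little is required: a salted hash means a database leak doesn’t directly reveal passwords, and a simple failure counter defeats unlimited online guessing, yet many of the sites we studied implemented neither.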

Continue reading Passwords in the wild, part I: the gap between theory and implementation

Who controls the off switch?

We have a new paper on the strategic vulnerability created by the plan to replace Britain’s 47 million meters with smart meters that can be turned off remotely. The energy companies are demanding this facility so that customers who don’t pay their bills can be switched to prepayment tariffs without the hassle of getting court orders against them. If the Government buys this argument – and I’m not convinced it should – then the off switch had better be closely guarded. You don’t want the nation’s enemies to be able to turn off the lights remotely, and eliminating that risk could just conceivably be a little bit more complicated than you might at first think. (This paper follows on from our earlier paper On the security economics of electricity metering at WEIS 2010.)

Database state – latest!

Today sees the publication of a report by Professor Trisha Greenhalgh into the Summary Care Record (SCR). There is a summary of the report in the BMJ, which also has two discussion pieces: one by Sir Mark Walport of the Wellcome Trust arguing that the future of medical records is digital, and one by me which agrees but argues that as the SCR is unsafe and unlawful, it should be abandoned.

Two weeks ago I reported here how the coalition government planned to retain the SCR, despite pre-election promises from both its constituent parties to do away with it. These promises followed our Database State report last year which demonstrated that many of the central systems built by the previous government contravened human-rights law. The government’s U-turn provoked considerable anger among doctors, NGOs and backbench MPs, prompting health minister Simon Burns to promise a review.

Professor Greenhalgh’s review, which was in fact completed before the election, finds that the SCR fails to do what it was supposed to. It isn’t used much; it doesn’t fit in with how doctors and nurses actually work; it doesn’t make consultations shorter but longer; and the project was extremely badly managed. In fact, her report should be read by all serious students of software engineering; like the London Ambulance Service report almost twenty years ago, this document sets out in great detail what not to do.

For now, there is some press coverage in the Telegraph, the Mail, E-health Insider and Computerworld UK.

Workshop on the economics of information security 2010

Here is a liveblog of WEIS, which is being held today and tomorrow at Harvard. It has 125 attendees: 59% academic, 15% govt/NGO, and 26% industry; the split of backgrounds is 47% CS, 35% econ/management and 18% policy/law. The paper acceptance rate was 24/72: 10 empirical papers, 8 theory and 6 on policy.

The workshop kicked off with a keynote talk from Tracey Vispoli of Chubb Insurance. In the early 2000s, the insurance industry thought cyber would be big. It isn’t yet, but it is starting to grow rapidly. There is still little actuarial data. But the industry can shape behaviour by being in the gap between risk aversion and risk tolerance. Its technical standards can make a difference (as with buildings, highways, …). So far a big factor is the insurance response to notification requirements: notification costs of $50-60 per compromised record mean that a 47m-record compromise like TJX is a loss you want to insure! So she expects a healthy supply-and-demand model for cyberinsurance in coming years. This will help to shape standards, best practices and culture.
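A back-of-the-envelope check of the TJX figure quoted above, using only the numbers from the talk:

```python
# At $50-60 per compromised record, notifying 47 million customers
# costs in the low billions of dollars -- clearly an insurable-scale loss.
records = 47_000_000
low, high = 50 * records, 60 * records
print(f"${low / 1e9:.2f}bn to ${high / 1e9:.2f}bn")  # $2.35bn to $2.82bn
```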

Questions:

– Are there enough data to model? So far no company has enough; ideally we should bring data together from industry to one central shared point. Government has a role, as with highways.

– Standards? Client prequalification is currently a fast-moving target. Insurers’ competitive advantage is understanding the intersection between standards and pricing.

– Reinsurance? Sure, where a single event could affect multiple policies.

– Tension between auditability and security in the power industry (NERC) – is there any role for insurance? Maybe, but legal penalties are in general uninsurable.

– How do we get insurers to come to WEIS? It would help if we had more specificity in our research papers, if we did not just talk about “breaches” but “breaches resulting in X” (the industry is not interested in national security, corporate espionage and other things that do not result in claims).

– Market evolution? She predicts the industry will follow its usual practice of lowballing a new market until losses mount, then cutting back coverage terms. (E.g. employment liability insurance grew rapidly over the last 20 years but became unprofitable because of class actions for discrimination etc. – so the industry cut coverage, but that was OK as it helped shape best employment practice.)

– Data sharing by industry itself? Client confidentiality stops ad-hoc sharing, but it would be good to have a properly regulated central depository.

– Who’s the Ralph Nader of this? Broad reform might come from the FTC; it’s surprising the SEC hasn’t done anything (HIPAA and GLB are too industry-specific).

– Quantifiability of best practice? Not enough data.

– How much of the business is cyber? At present it’s 5% of Chubb’s insurance business, but you can expect 8-9% in 2010-11 – rapid growth!

Future sessions will be covered in additional posts…

An old scam still works

In the very first paper I wrote on ATM fraud, Why Cryptosystems Fail, the very first example I gave of a fraud came from the case R v Moon at Hastings Crown Court in February 1992. Mr Moon was a teller at the TSB who noticed that address changes weren’t audited. He found a customer with over £10,000 in her account, changed her address to his, issued a card and PIN, and changed the address back. He looted her account and when she complained, she wasn’t believed.

It’s still happening, most recently to a customer of the Abbey. Bank insider issues extra card, steals money, customer blamed – after all, chip and PIN is infallible, isn’t it? Expecting banks to keep decent logs might be too much; and I suppose it’s way too much to expect bank fraud staff to read the research literature on their subject.

IEEE best paper award

Steven Murdoch, Saar Drimer, Mike Bond and I have just won the IEEE Security and Privacy Symposium’s Best Practical Paper award for our paper Chip and PIN is Broken. This was an unexpected pleasure, given the very strong competition this year (especially from this paper). We won this award once before, in 2008, for a paper on a similar topic.

Ross, Mike, Saar, Steven (photo by Joseph Bonneau)

Update (2010-05-28): The photo now includes the full team (original version)

Evaluating statistical attacks on personal knowledge questions

What is your mother’s maiden name? How about your pet’s name? Questions like these were a dark corner of security systems for quite some time. Most security researchers instinctively think they aren’t very secure. But they have still gained widespread deployment as a backup to password-based authentication when email-based identification isn’t available. Free webmail providers, for example, may have no other choice. Unfortunately, because most websites rely on email when passwords fail, and email providers rely on personal knowledge questions, most web authentication is no more secure than personal knowledge questions. This risk has gotten more attention recently, with high-profile compromises of Paris Hilton’s phone, Sarah Palin’s email, and Twitter’s corporate Google Documents occurring due to guessed personal knowledge questions.

There’s finally been a surge of academic research into the area in the last five years. It’s been shown, for example, that the answers to these questions are easy to look up online, often found in public records, and easy for friends and acquaintances to guess. In joint work with Mike Just and Greg Matthews from the University of Edinburgh, published this week in the proceedings of Financial Cryptography 2010, we’ve examined the more basic question of how resistant the underlying answer distributions are to statistical guessing. Put another way, if an attacker wants to do no target-specific work, but just guess common answers for a large number of accounts using population-wide statistics, how well can she do?
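The attack model above can be sketched in a few lines. This is an illustrative metric, not the paper’s exact methodology: given an empirical distribution of answers, it computes the fraction of accounts compromised by an attacker who simply tries the same few most common answers against every account.

```python
from collections import Counter

def marginal_success_rate(answers, beta):
    """Fraction of accounts compromised by guessing the `beta` most
    common answers against every account, with no per-target research."""
    counts = Counter(answers)
    top = counts.most_common(beta)
    return sum(count for _, count in top) / len(answers)
```

For a skewed distribution, even tiny guessing budgets do well. For instance, if 30% of users share the single most common pet name, one guess per account already compromises 30% of accounts; that is the kind of population-level weakness the paper measures.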

Continue reading Evaluating statistical attacks on personal knowledge questions