I’m at the 22nd Workshop on the Economics of Information Security in Geneva, and will be liveblogging the sessions in the followups to this post. Links to previous editions of WEIS, along with liveblogs and recordings, may be found here.
Recently I was contacted by a Falklands veteran who was a victim of what appears to have been a classic pre-play attack; his story is told here.
Almost ten years ago, after we wrote a paper on the pre-play attack, we were contacted by a Scottish sailor who’d bought a drink in a bar in Las Ramblas in Barcelona for €33, and found the following morning that he’d been charged €33,000 instead. The bar had submitted ten transactions an hour apart for €3,300 each, and when we got the transaction logs it turned out that these transactions had been submitted through three different banks. What’s more, although the transactions came from the same terminal ID, they had different terminal characteristics. When the sailor’s lawyer pointed this out to Lloyds Bank, they grudgingly accepted that it had been technical fraud and refunded the money.
In the years since then, I’ve used this as a teaching example both in tutorial talks and in university lectures. A payment card user has no trustworthy user interface, so the PIN entry device can present any transaction, or series of transactions, for authentication, and the customer is none the wiser. The mere fact that a customer’s card authenticated a transaction does not imply that the customer mandated that payment.
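The core problem can be sketched in a few lines of code. This is a deliberately simplified model, not the real EMV protocol: the key, the field format and the function names are all illustrative. The point it captures is that the card authenticates whatever transaction data the terminal sends it, while the customer can only see the terminal’s display.

```python
import hashlib
import hmac

# Illustrative symmetric key shared between card and issuer (not real EMV).
CARD_KEY = b"issuer-shared-secret"

def card_sign(transaction: bytes) -> bytes:
    """The card MACs the transaction data it is given. It has no display
    of its own, so it cannot check what the customer was actually shown."""
    return hmac.new(CARD_KEY, transaction, hashlib.sha256).digest()

def issuer_verify(transaction: bytes, cryptogram: bytes) -> bool:
    """The issuer checks the cryptogram against the submitted transaction."""
    return hmac.compare_digest(card_sign(transaction), cryptogram)

# The terminal displays EUR 33.00 to the customer ...
displayed = b"amount=33.00;currency=EUR"
# ... but asks the card to authorise EUR 3300.00.
submitted = b"amount=3300.00;currency=EUR"

cryptogram = card_sign(submitted)
# The issuer sees a perfectly valid cryptogram and approves the payment,
# even though the customer never saw the submitted amount.
assert issuer_verify(submitted, cryptogram)
assert displayed != submitted
```

Nothing in the cryptogram binds what the customer saw to what the card signed, which is exactly why “the card authenticated it” proves nothing about what the customer mandated.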
Payment by phone should eventually fix this, but meantime the frauds continue. They’re particularly common in nightlife establishments, both here and overseas. In the first big British case, the Spearmint Rhino in Bournemouth had special conditions attached to its licence for some time after a series of frauds; a second case affected a similar establishment in Soho; there have been others. Overseas, we’ve seen cases affecting UK cardholders in Poland and the Baltic states. The technical modus operandi can involve a tampered terminal, a man-in-the-middle device or an overlay SIM card.
By now, such attacks are very well-known and there really isn’t any excuse for banks pretending that they don’t exist. Yet, in this case, neither the first responder at Barclays nor the case handler at the Financial Ombudsman Service seemed to understand such frauds at all. Multiple transactions against one cardholder, coming via different merchant accounts, and with delay, should have raised multiple red flags. But the banks have gone back to sleep, repeating the old line that the card was used and the customer PIN was entered, so it must all be the customer’s fault. This is the line they took twenty years ago when chip and PIN was first introduced, and indeed thirty years ago when we were suffering ATM fraud at scale from mag-strip copying. The banks have learned nothing, except perhaps that they can often get away with lying about the security of their systems. And the ombudsman continues to claim that it’s independent.
This week sees the start of a course on security engineering that Sam Ainsworth and I are teaching. It’s based on the third edition of my Security Engineering book, and is a first cut at a ‘film of the book’.
Each week we will put two lectures online, and here are the first two. Lecture 1 discusses our adversaries, from nation states through cyber-crooks to personal abuse, and the vulnerability life cycle that underlies the ecosystem of attacks. Lecture 2 abstracts this empirical experience into more formal threat models and security policies.
Although our course is designed for masters students and fourth-year undergrads in Edinburgh, we’re making the lectures available to everyone. I’ll link the rest of the videos in followups here, and eventually on the book’s web page.
How far can we go with acoustic snooping on data?
Seven years ago we showed that you could use a phone camera to measure the phone’s motion while typing and use that to recover PINs. Four years ago we showed that you could use interrupt timing to recover text entered using gesture typing. Last year we showed how a gaming app can steal your banking PIN by listening to the vibration of the screen as your finger taps it. In that attack we used the on-phone microphones, as they are conveniently located next to the screen and can hear the reverberations of the screen glass.
This year we wondered whether voice assistants can hear the same taps on a nearby phone as the on-phone microphones could. We knew that voice assistants could do acoustic snooping on nearby physical keyboards, but everyone had assumed that virtual keyboards were so quiet as to be invulnerable.
Almos Zarandy, Ilia Shumailov and I discovered that attacks are indeed possible. In Hey Alexa what did I just type? we show that when sitting up to half a meter away, a voice assistant can still hear the taps you make on your phone, even in the presence of noise. Modern voice assistants have two to seven microphones, so they can do directional localisation, just as human ears do, but with greater sensitivity. We assess the risk and show that a lot more work is needed to understand the privacy implications of the always-on microphones that are increasingly infesting our work spaces and our homes.
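To give a feel for how a microphone array does directional localisation, here is a minimal delay-and-sum beamformer sketch. This is an illustration of the general principle, not the pipeline used in the paper; the array geometry, delays and signal lengths are all made up.

```python
import numpy as np

def delay_and_sum(channels, steering_delays):
    """Shift each microphone channel by its steering delay (in samples)
    and average. Sound arriving from the steered direction adds
    coherently; sound from other directions partially cancels."""
    out = np.zeros(len(channels[0]))
    for sig, d in zip(channels, steering_delays):
        out += np.roll(sig, -d)
    return out / len(channels)

# Synthetic tap: the same short click reaches three mics 2 samples apart.
rng = np.random.default_rng(0)
click = rng.standard_normal(64)
channels = []
for i in range(3):
    sig = np.zeros(256)
    sig[50 + 2 * i : 50 + 2 * i + 64] = click
    channels.append(sig)

# Steering at the true direction concentrates the tap's energy;
# steering elsewhere smears it out.
matched = np.sum(delay_and_sum(channels, [0, 2, 4]) ** 2)
mismatched = np.sum(delay_and_sum(channels, [0, 0, 0]) ** 2)
assert matched > mismatched
```

Sweeping the steering delays and picking the direction with the most energy gives a crude bearing on the tap, and more microphones sharpen the estimate.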
In 2012 we presented the first systematic study of the costs of cybercrime. We have now repeated our study, to work out what’s changed in the seven years since then.
Measuring the Changing Cost of Cybercrime will appear on Monday at WEIS. The period has seen huge changes, with the smartphone replacing the PC and laptop as the consumer terminal of choice, with Android replacing Windows as the most popular operating system, and many services moving to the cloud. Yet the overall pattern of cybercrime is much the same.
We know a lot more than we did then. Back in 2012, we guessed that cybercrime was about half of all crime, by volume and value; we now know from surveys in several countries that this is the case. Payment fraud has doubled, but fallen slightly as a proportion of payment value; the payment system has got larger, and slightly more efficient.
So what’s changed? New cybercrimes include ransomware and other offences related to cryptocurrencies; travel fraud has also grown. Business email compromise and its cousin, authorised push payment fraud, are also growth areas. We’ve also seen serious collateral damage from cyber-weapons such as the NotPetya worm. The good news is that crimes that infringe intellectual property – from patent-infringing pharmaceuticals to copyright-infringing software, music and video – are down.
Our conclusions are much the same as in 2012. Most cyber-criminals operate with impunity, and we have to fix this. We need to put a lot more effort into catching and punishing the perpetrators.
Security systems are often designed by geeks who assume that the users will also be geeks, and the same goes for the advice that users are given when things start to go wrong. For example, banks reacted to the growth of phishing in 2006 by advising their customers to parse URLs. That’s fine for geeks but most people don’t do that, and in particular most women don’t do that. So in the second edition of my Security Engineering book, I asked (in chapter 2, section 2.3.4, pp 27-28): “Is it unlawful sex discrimination for a bank to expect its customers to detect phishing attacks by parsing URLs?”
Tyler Moore and I then ran the experiment, and Tyler presented the results at the first Workshop on Security and Human Behaviour that June. We recruited 132 volunteers between the ages of 18 and 30 (77 female, 55 male) and tested them to see whether they could spot phishing websites, as well as for systemising quotient (SQ) and empathising quotient (EQ). These measures were developed by Simon Baron-Cohen in his work on Asperger’s; most men have SQ > EQ while for most women EQ > SQ. The ability to parse URLs is correlated with SQ-EQ and independently with gender. A significant minority of women did badly at URL parsing. We didn’t get round to publishing the full paper at the time, but we’ve mentioned the results in various talks and lectures.
We have now uploaded the original paper, How brain type influences online safety. Given the growing interest in gender HCI, we hope that our study might spur people to do research in the gender aspects of security as well. It certainly seems like an open goal!
I’m in the Security Protocols Workshop, whose theme this year is “security protocols for humans”. I’ll try to liveblog the talks in followups to this post.
Have you ever wondered whether one app on your phone could spy on what you’re typing into another? We have. Five years ago we showed that you could use the camera to measure the phone’s motion during typing and use that to recover PINs. Then three years ago we showed that you could use interrupt timing to recover text entered using gesture typing. So what other attacks are possible?
Our latest paper shows that one of the apps on the phone can simply record the sound from its microphones and work out from that what you’ve been typing.
Your phone’s screen can be thought of as a drum – a membrane supported at the edges. It makes slightly different sounds depending on where you tap it. Modern phones and tablets typically have two microphones, so you can also measure the time difference of arrival of the sounds. The upshot is that we can recover PIN codes and short words given a few measurements, and in some cases even long and complex words. We evaluate the new attack against previous ones and show that the accuracy is sometimes even better, especially against larger devices such as tablets.
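The time-difference-of-arrival measurement can be sketched in a few lines: cross-correlate the two microphone channels and take the lag at the correlation peak. The sample rate, microphone spacing and signals below are invented for illustration, not taken from the paper.

```python
import numpy as np

def tdoa_samples(mic_a, mic_b):
    """Estimate how many samples later the sound reached mic_b than
    mic_a, from the peak of the cross-correlation."""
    corr = np.correlate(mic_b, mic_a, mode="full")
    return int(np.argmax(corr)) - (len(mic_a) - 1)

# Synthetic tap: a short click arriving 5 samples later at the second mic.
rng = np.random.default_rng(1)
click = rng.standard_normal(64)
a = np.zeros(300); a[100:164] = click
b = np.zeros(300); b[105:169] = click

delay = tdoa_samples(a, b)

# Assumed parameters: sample rate, mic spacing and the speed of sound.
fs, spacing, c = 48_000, 0.12, 343.0
angle = np.degrees(np.arcsin(np.clip(delay * c / (fs * spacing), -1.0, 1.0)))
```

The delay pins the tap to one side of the screen or the other, and combined with the spectral differences between tap locations this narrows down which key was pressed.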
This paper is based on Ilia Shumailov’s MPhil thesis project.
I’m at Financial Crypto 2019 and will try to liveblog some of the sessions in followups to this post.
I am at the Symposium on Post-Bitcoin Cryptocurrencies in Vienna and will try to liveblog the talks in follow-ups to this post.
The introduction was by Bernhard Haslhofer of AIT, who maintains the graphsense.info toolkit and runs the Titanium project on bitcoin forensics jointly with Rainer Boehme of Innsbruck. Rainer then presented an economic analysis arguing that criminal transactions were pretty well the only logical app for bitcoin as it’s permissionless and trustless; if you have access to the courts then there are better ways of doing things. However in the post-bitcoin world of ICOs and smart contracts, it’s not just the anti-money-laundering agencies who need to understand cryptocurrency but the securities regulators and the tax collectors. Yet there is a real policy tension. Governments hype blockchains; Austria uses them to auction sovereign bonds. Yet the only way in for the citizen is through the swamp. How can the swamp be drained?