At PETS 2016 we presented a new side-channel attack in our paper Don’t Interrupt Me While I Type: Inferring Text Entered Through Gesture Typing on Android Keyboards. This was part of Laurent Simon’s thesis, and won him the runner-up award for best student paper.
We found that software on your smartphone can infer words you type in other apps by monitoring the aggregate number of context switches and the number of hardware interrupts. These are readable by permissionless apps within the virtual procfs filesystem (mounted under /proc). Three previous research groups had found that other files under procfs support side channels. But the files they used contained information about individual apps – e.g. the file /proc/uid_stat/victimapp/tcp_snd contains the number of bytes sent by “victimapp”. These files are no longer readable in the latest Android version.
We found that the “global” files – those that contain aggregate information about the system – also leak. So a curious app can monitor these global files as a user types on the phone and try to work out the words. We looked at smartphone keyboards that support “gesture typing”: a novel input mechanism democratized by SwiftKey, whereby a user drags their finger from letter to letter to enter words.
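The sampling at the heart of such an attack is trivially simple, which is part of why the channel is so hard to close. Here is a minimal Python sketch (the helper names are mine, not the paper’s code) that parses the aggregate context-switch (“ctxt”) and interrupt (“intr”) totals that /proc/stat exposes to any process:

```python
# A minimal sketch of what a permissionless app could sample:
# /proc/stat's "ctxt" line holds the aggregate context-switch count
# and "intr" the aggregate interrupt count, both world-readable.

def read_counters(stat_text):
    """Parse the ctxt and intr totals from /proc/stat contents."""
    counters = {}
    for line in stat_text.splitlines():
        fields = line.split()
        if fields and fields[0] in ("ctxt", "intr"):
            counters[fields[0]] = int(fields[1])  # first value is the total
    return counters

def sample(path="/proc/stat"):
    """Take one sample of the global counters."""
    with open(path) as f:
        return read_counters(f.read())
```

Polling this in a tight loop while the victim swipes a word yields a time series of counter deltas; the paper’s contribution is the classifier that turns such traces into candidate words.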
This work shows once again how difficult it is to prevent side channels: they come up in all sorts of interesting and unexpected ways. Fortunately, we think there is an easy fix: Google should simply disable access to all procfs files, rather than just the files that leak information about individual apps. Meanwhile, if you’re developing apps for privacy or anonymity, you should be aware that these risks exist.
I’m sitting in the Inaugural Cybercrime Conference of the Cambridge Cloud Cybercrime Centre, and will attempt to liveblog the talks in followups to this post.
We recently reported that the Commissioner of the Met, Sir Bernard Hogan-Howe, said that banks should not refund fraud victims as this would just make people careless with their passwords and antivirus. The banks’ desire to blame fraud victims if they can, to avoid refunding them, is rational enough, but for a police chief to support them was disgraceful. Thirty years ago, a chief constable might have said that rape victims had themselves to blame for wearing nice clothes; if he were to say that nowadays, he’d be sacked. Hogan-Howe’s view of bank fraud is just as uninformed, and just as offensive to victims.
Our spooky friends at Cheltenham have joined the party. The Register reports a story in the Financial Times (behind a paywall) which says GCHQ believes that “companies must do more to try and encourage their customers to improve their cyber security standards. Customers using outdated software – sometimes riddled with vulnerabilities that hackers can exploit – are a weak link in the UK’s cyber defences.” There is no mention of the banks’ own outdated technology, or of GCHQ’s role in keeping consumer software vulnerable.
The elegant scribblers at the Financial Times are under the impression that “At present, banks routinely cover the cost of fraud, regardless of blame.” So they clearly are not regular readers of Light Blue Touchpaper.
The spooks are slightly more cautious; according to the FT, GCHQ “has told the private sector it will not take responsibility for regulatory failings”. I’m sure the banks will heave a big sigh of relief that their cosy relationship with the police, the ombudsman and the FCA will not be disturbed.
We will have to change our security-economics teaching material so we don’t just talk about the case where “Alice guards a system and Bob pays the costs of failure”, but also this new case where “Alice guards a system, and bribes the government to compel Bob to pay the costs of failure.” Now we know how Hogan-Howe is paid off; the banks pay for his Dedicated Card and Payment Crime Unit. But how are they paying off GCHQ, and what else are they getting as part of the deal?
I’m at the 24th security protocols workshop in Brno (no, not Borneo, as a friend misheard it, but in the Czech Republic; a two-hour flight rather than a twenty-hour one). We ended up being bumped to an old chapel in the Mendel museum, a former monastery where the monk Gregor Mendel figured out genetics from the study of peas – for the prosaic reason that the Canadian ambassador pre-empted our meeting room. As a result we had no wifi, and I have had to liveblog from the pub where we are having lunch. The session liveblogs will be in followups to this post, in the usual style.
Commissioner Hogan-Howe of the Met said on Thursday that the banks should not refund fraud victims because it “rewards” them for being lax about internet security. This was too much to pass up, so I wrote a letter to the editor of the Times, which has just been published. As the Times is behind a paywall, here is the text.
Sir, Sir Bernard Hogan-Howe argues that banks should not refund online fraud victims as this would make people careless with their passwords and anti-virus software (p1, March 24, and letters Mar 25 & 26). This is called secondary victimisation. Thirty years ago, a chief constable might have said that rape victims had themselves to blame for wearing nice clothes; if he were to say that nowadays, he’d be sacked. Hogan-Howe’s view of bank fraud is just as uninformed, and just as offensive to victims.
About 5 percent of computers running Windows are infected with malware, and common bank fraud malware such as Zeus lets the fraudster redirect transactions. You think you’re paying a £150 electricity bill, while the malware is actually sending £9000 to Russia. The average person is helpless against this; everything seems normal, and antivirus products usually only detect it afterwards.
Much of the blame lies with the banks, who let the users of potentially infected computers make large payments instantly, rather than after a day or two, as used to be the case. They take this risk because regulators let them dump much of the cost of the resulting fraud on customers.
The elephant in the room is that the Met has been claiming for years that property crime is falling, when in fact it’s just going online like everything else. We’re now starting to get better crime figures; it’s time we got better policing, and better bank regulation too.
Ross Anderson FRS FREng
Professor of Security Engineering
University of Cambridge
I will be trying to liveblog Financial Cryptography 2016, which is the twentieth anniversary of the conference. The opening keynote was by David Chaum, who invented digital cash over thirty years ago. From then until the first FC people believed that cryptography could enable commerce and also protect privacy; since then pessimism has slowly set in, and sometimes it seems that although we’re still fighting tactical battles, we’ve lost the war. Since Snowden people have little faith in online privacy, and now we see Tim Cook in a position to decide which seventy phones to open. Is there a way to fight back against a global adversary whose policy is “full take”, and where traffic data can be taken with no legal restraint whatsoever? That is now the threat model for designers of anonymity systems. He argues that in addition to a large anonymity set, a future social media system will need a fixed set of servers in order to keep end-to-end latency within what chat users expect. As with DNS we should have servers operated by (say ten) different principals; unlike in that case we don’t want to have most of the independent parties financed by the US government. The root servers could be implemented as unattended seismic observatories, as reported by Simmons in the arms control context; such devices are fairly easy to tamper-proof.
The crypto problem is how to do multi-jurisdiction message processing that protects not just content but also metadata. Systems like Tor cost latency, while multi-party computation costs a lot of cycles. His new design, PrivaTegrity, takes low-latency crypto building blocks and then layers transaction protocols with large anonymity sets on top of them. The key component is c-Mix, whose spec is up as an eprint here. There’s a precomputation using homomorphic encryption to set up paths and keys; in real-time operations each participating phone has a shared secret with each mix server, so things can run at chat speed. A PrivaTegrity message is four c-Mix batches that use the same permutation. Message models supported include not just chat but publishing short anonymous messages, providing an untraceable return address so people can contact you anonymously, group chat, and limiting sybils by preventing more than one pseudonym being used. (There are enduring pseudonyms with valuable credentials.) It can handle large payloads using private information retrieval, and also do pseudonymous digital transactions with a latency of two seconds rather than the hour or so that bitcoin takes. The anonymous payment system has the property that the payer has proof of what he paid to whom, while the recipient has no proof of who paid him; that’s exactly what corrupt officials, money launderers and the like don’t want, but exactly what we do want from the viewpoint of consumer protection. He sees PrivaTegrity as the foundation of a “polyculture” of secure computing from multiple vendors that could be outside the control of governments once more. In questions, Adi Shamir questioned whether such an ecosystem was consistent with the reality of pervasive software vulnerabilities, regardless of the strength of the cryptography.
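To give a flavour of why the real-time phase can be fast, here is a toy Python sketch of the blinding idea. This is an illustration of the concept only, not the c-Mix specification: the modulus, key handling and permutation bookkeeping are all simplified for exposition, and the function names are made up.

```python
# Toy illustration (NOT the c-Mix spec): each sender shares one secret
# key with each mix server, so real-time mixing is just cheap modular
# arithmetic plus a precomputed permutation -- the expensive public-key
# work having been done ahead of time.

P = 2**61 - 1  # a prime modulus, standing in for the real group

def blind(message, keys):
    """Sender multiplies in one shared key per mix server."""
    x = message
    for k in keys:
        x = (x * k) % P
    return x

def unblind_and_permute(batch, keys, perm):
    """One mix server strips its per-slot keys and applies a permutation."""
    stripped = [(m * pow(k, -1, P)) % P for m, k in zip(batch, keys)]
    return [stripped[i] for i in perm]
```

Once every server has stripped its keys and the batch permutation has been applied, the plaintexts emerge in shuffled order, and no single server can link an input slot to an output slot.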
I will try to liveblog later sessions as followups to this post.
We know more and more about the financial cost of cybercrime, but there has been very little work on its emotional cost. David Modic and I decided to investigate. We wanted to empirically test whether there are emotional repercussions to becoming a victim of fraud (Yes, there are). We wanted to compare emotional and financial impact across different categories of fraud and establish a ranking list (And we did). An interesting, although not surprising, finding was that in every tested category the victim’s perception of emotional impact outweighed the reported financial loss.
A victim may think that they will still be able to recover their money, if not their pride. That really depends on what type of fraud they facilitated. If it is auction fraud, then their chances of recovery are comparatively higher than in bank fraud – we found that 26% of our sample would attempt to recover funds lost in a fraudulent auction, and approximately half of them were reimbursed (look at this presentation). There is considerable evidence that banks are not very likely to believe someone claiming to be a victim of, say, identity theft and by extension bank fraud. Thus, when someone ends up out of pocket, they will likely also go through a process of secondary victimisation, in which they are told they broke some small-print rule – having the same PIN for two of their bank cards, say, or not using the bank’s approved anti-virus software – and are thus not eligible for any refund, and it is all their own fault, really.
You can find the article here or here. (It was published in IEEE Security & Privacy.)
This paper complements and extends our earlier work on the costs of cybercrime, where we show that the broader economic costs to society of cybercrime – such as loss of confidence in online shopping and banking – also greatly exceed the amounts that cybercriminals actually manage to steal.
Yesterday the Financial Conduct Authority (the UK bank regulator) issued a report on Fair treatment for consumers who suffer unauthorised transactions. This is an issue in which we have an interest, as fraud victims regularly come to us after being turned away by their bank and by the financial ombudsman service. Yet the FCA have found that everything is hunky dory, and conclude “we do not believe that further thematic work is required at this stage”.
One of the things the FCA asked their consultants was whether there’s any evidence that claims are rejected on the sole basis that a PIN was used. The consultants didn’t want to rely on existing work but instead surveyed a nationally representative sample of 948 people and found that 16% had a transaction dispute in the last year. Of these, 37% were MOTO, 22% cancelled future-dated payments, 15% ATM cash, 13% shop transactions and 13% lump sums from a bank account. Of customers who complained, 43% were offered their money back spontaneously; a further 41% asked; in the end a total of 68% got refunds after varying periods of time. In total 7% (15 victims) had their claim declined, most because the bank said the transaction was “authorised” or followed a “contract with merchant”, and 2 for chip and PIN (one of them an ATM transaction; the other admitted sharing their PIN). 12 of these 15 considered the result unfair. These figures are entirely consistent with what we learn from the British Crime Survey and elsewhere: two million UK victims a year; while most get their money back, many don’t; and a hard core of perhaps a few tens of thousands end up feeling that their bank has screwed them.
The case studies profiled in the consultants’ paper were of glowing happy people who got their money back; the 12 sad losers were not profiled, and the consultants concluded that “Customers might be being denied refunds on the sole basis that Chip and PIN were used … we found little evidence of this” (p 49) and went on to remark helpfully that some customers admitted sharing their PINs and felt OK lying about this. The FCA happily paraphrases this as “We also did not find any evidence of firms holding customers liable for unauthorised transactions solely on the basis that the PIN was used to make the transaction” (main report, p 13, 3.25).
According to recent news reports, the former head of the FCA, Martin Wheatley, was sacked by George Osborne for being too harsh on the banks.
I came across an unusual DHL branded phish recently…
The user receives an email with the Subject of “DHL delivery to [ xxx ]June ©2015” where xxx is their valid email address. The From: address is forged as “DHLexpress<email@example.com>” (the criminal will have used this domain because delivery.net hasn’t yet adopted DMARC, whereas dhl.com has a p=reject policy which would have prevented this type of forgery altogether).
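The DMARC mechanism that blocks such forgery is straightforward: a receiving mail server fetches the TXT record at _dmarc.&lt;from-domain&gt; and applies the p= policy it finds there. Here is a small Python sketch of extracting that policy from a fetched record (dmarc_policy is an illustrative helper of my own, not a real library API):

```python
# Sketch of the receiver-side policy check. If the DMARC record at
# _dmarc.<from-domain> says p=reject, mail whose From: domain fails
# SPF/DKIM alignment is refused outright -- which is why the phisher
# forged a domain that publishes no such record.

def dmarc_policy(txt_record):
    """Return the p= policy tag from a DMARC TXT record, or None."""
    if not txt_record.lower().startswith("v=dmarc1"):
        return None
    for tag in txt_record.split(";"):
        key, _, value = tag.strip().partition("=")
        if key == "p":
            return value.strip()
    return None
```

For dhl.com the policy comes back as reject, so a forged dhl.com From: address is discarded by compliant receivers; a domain with no DMARC record gives the forger a free pass.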
The email looks like this (I’ve blacked out the valid email address):
and so, although we would all wish otherwise, it is predictable that many recipients will have opened the attachment.
BTW: if the image looks in the least bit fuzzy in your browser then click on the image to see the full-size PNG file and appreciate how realistic the email looks.
Many readers will now expect me to explain about some complex 0-day within the PDF that infects the machine with malware – because, after all, that’s the main risk from opening unexpected attachments, isn’t it?
Continue reading Phishing that looks like another risk altogether
The FBI overstated forensic hair matches in nearly all trials up until 2000. Twenty-six of their 28 examiners overstated forensic matches in ways that favoured prosecutors in more than 95 percent of the 268 trials reviewed so far; 32 defendants were sentenced to death, of whom 14 were executed or died in prison.
In the District of Columbia, the only jurisdiction where defenders and prosecutors have re-investigated all FBI hair convictions, three of seven defendants whose trials included flawed FBI testimony have been exonerated through DNA testing since 2009, and courts have cleared two more. All five served 20 to 30 years in prison for rape or murder. The FBI examiners in question also taught 500 to 1,000 state and local crime lab analysts to testify in the same ways.
Systematically flawed forensic evidence should be familiar enough to readers of this blog. In four previous posts here I’ve described problems with the curfew tags that are used to monitor the movements of parolees and terrorism suspects in the UK. We have also written extensively on the unreliability of card payment evidence, particularly in banking disputes. However, payment evidence can also be relevant to serious criminal trials, of which the most shocking cases are probably those described here and here. Hundreds, perhaps thousands, of men were arrested after being wrongly suspected of buying indecent images of children, when in fact they were victims of credit card fraud. Having been an expert witness in one of those cases, I wrote to the former DPP Keir Starmer on his appointment asking him to open a formal inquiry into the police failure to understand credit card fraud, and to review cases as appropriate. My letter was ignored.
The Washington Post article argues cogently that the USA lacks, and needs, a mechanism to deal with systematic failures of the justice system, particularly when these are related to its inability to cope with technology. The same holds here too. In addition to the hundreds of men wrongly arrested for child porn offences in Operation Ore, there have been over two hundred prosecutions for curfew tag tampering, no doubt with evidence similar to that offered in cases where we secured acquittals. There have been scandals in the past over DNA and fingerprints, as I describe in my book. How many more scandals are waiting to break? And as everything goes online, digital evidence will play an ever larger role, leading to more systematic failures in future. How should we try to forestall them?