In two weeks’ time we’re starting an open course in security economics. I’m teaching this together with Rainer Boehme, Tyler Moore, Michel van Eeten, Carlos Ganan, Sophie van der Zee and David Modic.
Over the past fifteen years, we’ve come to realise that many information security failures arise from poor incentives. If Alice guards a system while Bob pays the cost of failure, things can be expected to go wrong. Security economics is now an important research topic: you can’t design secure systems involving multiple principals if you can’t get the incentives right. And it goes way beyond computer science. Without understanding how incentives play out, you can’t expect to make decent policy on cybercrime, on consumer protection or indeed on protecting critical national infrastructure.
We first did the course last year as a paid-for course with EdX. Our agreement with them was that they’d charge for it the first time, to recoup the production costs, and thereafter it would be free.
So here it is as a free course. Spread the word!
At PETS 2016 we presented a new side-channel attack in our paper Don’t Interrupt Me While I Type: Inferring Text Entered Through Gesture Typing on Android Keyboards. This was part of Laurent Simon’s thesis, and won him the runner-up award for best student paper.
We found that software on your smartphone can infer words you type in other apps by monitoring the aggregate number of context switches and the number of hardware interrupts. These are readable by permissionless apps within the virtual procfs filesystem (mounted under /proc). Three previous research groups had found that other files under procfs support side channels. But the files they used contained information about individual apps; e.g. the file /proc/uid_stat/victimapp/tcp_snd contains the number of bytes sent by “victimapp”. These files are no longer readable in the latest Android version.
We found that the “global” files – those that contain aggregate information about the system – also leak. So a curious app can monitor these global files as a user types on the phone and try to work out the words. We looked at smartphone keyboards that support “gesture typing”: a novel input mechanism democratized by SwiftKey, whereby a user drags their finger from letter to letter to enter words.
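The actual attack trains a classifier on the timing of these counter increments; as a much simpler illustration of the first step, here is a sketch of how any unprivileged process on a Linux-based system can sample the global context-switch counter while the user types. (The file and field names are the standard Linux ones, not taken from the paper, and the sampling parameters are arbitrary.)

```python
import time

def read_ctxt_switches():
    # /proc/stat contains a line "ctxt <N>": the total number of
    # context switches since boot, readable without any permission.
    with open("/proc/stat") as f:
        for line in f:
            if line.startswith("ctxt "):
                return int(line.split()[1])
    raise RuntimeError("no ctxt line in /proc/stat")

def sample(duration_s=2.0, interval_s=0.005):
    """Poll the global counter at roughly 5 ms intervals.

    Bursts in the resulting series of deltas correlate with input
    events such as a finger sliding across a gesture keyboard.
    """
    samples = []
    end = time.monotonic() + duration_s
    while time.monotonic() < end:
        samples.append((time.monotonic(), read_ctxt_switches()))
        time.sleep(interval_s)
    # Deltas between consecutive samples form the side-channel trace.
    return [(t2, c2 - c1) for (_, c1), (t2, c2) in zip(samples, samples[1:])]
```

A real attack would feed such traces, together with hardware-interrupt counts, into a model of keyboard geometry to recover candidate words; the point here is only that the raw signal needs no permissions at all.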
This work shows once again how difficult it is to prevent side channels: they come up in all sorts of interesting and unexpected ways. Fortunately, we think there is an easy fix: Google should simply disable access to all procfs files, rather than just the files that leak information about individual apps. Meanwhile, if you’re developing apps for privacy or anonymity, you should be aware that these risks exist.
I’m sitting in the Inaugural Cybercrime Conference of the Cambridge Cloud Cybercrime Centre, and will attempt to liveblog the talks in followups to this post.
The Cambridge Cloud Cybercrime Centre is organising an inaugural one day conference on cybercrime on Thursday, 14th July 2016.
In future years we intend to focus on research that has been carried out using datasets provided by the Cybercrime Centre, but for this first year we have a stellar group of invited speakers who are at the forefront of their fields:
Adam Bossler, Associate Professor, Department of Criminal Justice and Criminology, Georgia Southern University, USA
Alice Hutchings, Post-doc Criminologist, Computer Laboratory, University of Cambridge, UK
David S. Wall, Professor of Criminology, University of Leeds, UK
Maciej Korczynski Post-Doctoral Researcher, Delft University of Technology, The Netherlands
Michael Levi, Professor of Criminology, Cardiff University, UK
Mike Hulett, Head of Operations, National Cyber Crime Unit, National Crime Agency, UK
Nicolas Christin, Assistant Research Professor of Electrical and Computer Engineering, Carnegie Mellon University, USA
Richard Clayton, Director, Cambridge Cloud Cybercrime Centre, University of Cambridge, UK
Ross Anderson, Professor of Security Engineering, Computer Laboratory, University of Cambridge, UK
Tyler Moore, Tandy Assistant Professor of Cyber Security & Information Assurance, University of Tulsa, USA
They will present various aspects of cybercrime from the point of view of criminology, security economics, cybersecurity governance and policing.
This one-day event, to be held in the Faculty of Law, University of Cambridge, will follow immediately after (and will be in the same venue as) the “Ninth International Conference on Evidence Based Policing” organised by the Institute of Criminology, which runs on the 12th and 13th July 2016.
For more details see here.
We recently reported that the Commissioner of the Met, Sir Bernard Hogan-Howe, said that banks should not refund fraud victims as this would just make people careless with their passwords and antivirus. The banks’ desire to blame fraud victims if they can, to avoid refunding them, is rational enough, but for a police chief to support them was disgraceful. Thirty years ago, a chief constable might have said that rape victims had themselves to blame for wearing nice clothes; if he were to say that nowadays, he’d be sacked. Hogan-Howe’s view of bank fraud is just as uninformed, and just as offensive to victims.
Our spooky friends at Cheltenham have joined the party. The Register reports a story in the Financial Times (behind a paywall) which says GCHQ believes that “companies must do more to try and encourage their customers to improve their cyber security standards. Customers using outdated software – sometimes riddled with vulnerabilities that hackers can exploit – are a weak link in the UK’s cyber defences.” There is no mention of the banks’ own outdated technology, or of GCHQ’s role in keeping consumer software vulnerable.
The elegant scribblers at the Financial Times are under the impression that “At present, banks routinely cover the cost of fraud, regardless of blame.” So they clearly are not regular readers of Light Blue Touchpaper.
The spooks are slightly more cautious; according to the FT, GCHQ “has told the private sector it will not take responsibility for regulatory failings”. I’m sure the banks will heave a big sigh of relief that their cosy relationship with the police, the ombudsman and the FCA will not be disturbed.
We will have to change our security-economics teaching material so we don’t just talk about the case where “Alice guards a system and Bob pays the costs of failure”, but also this new case where “Alice guards a system, and bribes the government to compel Bob to pay the costs of failure.” Now we know how Hogan-Howe is paid off; the banks pay for his Dedicated Card and Payment Crime Unit. But how are they paying off GCHQ, and what else are they getting as part of the deal?
A manuscript that Richard Clayton and I authored has recently been published as an advance access paper in the criminology journal Deviant Behavior.
This research uses criminological theories to study those who operate ‘booter services’: websites that illegally offer denial of service attacks for a fee. We interviewed those operating the sites, and found that booter services provide ‘easy money’ for the young males that run them. The operators claim they provide legitimate services for network testing, despite acknowledging that their services are used to attack other targets. Booter services are advertised through the online communities where the skills are learned and definitions favorable toward offending are shared. Some financial services proactively frustrate the provision of booter services, by closing the accounts used for receiving payments.
If you are accessing the paper from a university, you may find it here. The ‘accepted manuscript’, which is the final version of the paper before typesetting, can be accessed here.
I’ve written before about dubious “academic” journals… and today I’m going to discuss a dubious “academic” conference (which is associated with some dubious journals, but it’s the conference that’s my focus today).
Fordham University has been running the “International Conference on Cyber Security” since 2009 and ICCS 2016 (labelled “Sixth” because they skipped 2011 and 2014) will take place in New York in July. This conference has an extremely reputable program committee and is run by Fordham and the Federal Bureau of Investigation (I expect you’ve heard of them … they investigate cybercrime in the USA…).
There’s also another “International Conference on Cyber Security (ICCS 2016)” running this year as well … it will take place in Zurich in July and is run by WASET (the World Academy of Science, Engineering and Technology). The program committee for this one is somewhat less prestigious (I’m sorry to say that I have not heard of any of them). To my mind the most reputable-looking person is “Wei Yan of Trend Micro” … except he’s currently on his fourth job since he left Trend Micro in 2010, which makes me wonder how many of the people on the list know that they’re mentioned?
There are other reasons for feeling this conference might be a little dubious, not least that it is apparently the “Eighteenth ICCS”. That might lead you to believe that there have been seventeen previous ICCS events … but I did a lot of searches and failed to find any of them!
My searches did turn up the “2nd International Conference on Cyber Security (ICCS) 2016” which will take place at the Rajasthan Technical University, India — this one looks pretty respectable, with PC members from India and the USA.
So if you fancy going to a Cyber Security conference in 2016 then you are spoilt for choice, but I would not myself recommend travelling to Zurich. A key reason is that the Dorint Airport-Hotel, where ICCS 2016 is to be held, may turn out to be a little crowded … the same hotel is hosting no fewer than 160 other international conferences at exactly the same time: click here for the full list!
Alternatively, if you can’t make it this year, put a note in your diary. The “31st International Conference on Cyber Security (ICCS 2029)” is planned to take place in Zurich on July 21–22 2029 … Wei Yan is on the PC for that one too … and the submission deadline is as soon as March 31, 2029, so best to get a move on with finishing that paper!
As a final note, invited papers from ICCS 2016 (the Zurich version) are to be published in a special issue of “Advances in Cyber Security”. Now you might cynically think that this was an open access journal from WASET, but no, they have no journal with that title (and in fact neither does anyone else) … but what do you know, “Advances in Cyber Security” is a fine-looking book published in December 2012 by none other than Fordham University Press. Small world, isn’t it!
The Cambridge Cloud Cybercrime Centre (more information about our vision for this initiative is in this earlier article) has up to three Research Associate / Research Assistant positions to fill.
We are looking for enthusiastic researchers to work with the substantial amounts of cybercrime data that we will be collecting. The people we appoint will have the chance to define their own goals and objectives and pursue them independently or as part of a team. We will also expect everyone to assist with automating the processing of our incoming data feeds and adding value to them.
We are not necessarily looking for existing experience in researching cybercrime, although this would be a bonus. However, we are looking for strong programming skills — and experience with scripting languages and databases would be much preferred. Good knowledge of English and communication skills are important.
Please follow this link to the formal advertisement for the details of exactly who and what we’re looking for and how to apply — and please pay attention to our request that the covering letter you create as part of the application should explain which particular aspects of cybercrime research interest you.
Commissioner Hogan-Howe of the Met said on Thursday that the banks should not refund fraud victims because it “rewards” them for being lax about internet security. This was too much to pass up, so I wrote a letter to the editor of the Times, which has just been published. As the Times is behind a paywall, here is the text.
Sir, Sir Bernard Hogan-Howe argues that banks should not refund online fraud victims as this would make people careless with their passwords and anti-virus software (p1, March 24, and letters Mar 25 & 26). This is called secondary victimisation. Thirty years ago, a chief constable might have said that rape victims had themselves to blame for wearing nice clothes; if he were to say that nowadays, he’d be sacked. Hogan-Howe’s view of bank fraud is just as uninformed, and just as offensive to victims.
About 5 percent of computers running Windows are infected with malware, and common bank fraud malware such as Zeus lets the fraudster redirect transactions. You think you’re paying a £150 electricity bill, while the malware is actually sending £9,000 to Russia. The average person is helpless against this; everything seems normal, and antivirus products usually only detect it afterwards.
Much of the blame lies with the banks, who let the users of potentially infected computers make large payments instantly, rather than after a day or two, as used to be the case. They take this risk because regulators let them dump much of the cost of the resulting fraud on customers.
The elephant in the room is that the Met has been claiming for years that property crime is falling, when in fact it’s just going online like everything else. We’re now starting to get better crime figures; it’s time we got better policing, and better bank regulation too.
Ross Anderson FRS FREng
Professor of Security Engineering
University of Cambridge
I will be trying to liveblog Financial Cryptography 2016, which is the twentieth anniversary of the conference. The opening keynote was by David Chaum, who invented digital cash over thirty years ago. From then until the first FC people believed that cryptography could enable commerce and also protect privacy; since then pessimism has slowly set in, and sometimes it seems that although we’re still fighting tactical battles, we’ve lost the war. Since Snowden people have little faith in online privacy, and now we see Tim Cook in a position to decide which seventy phones to open. Is there a way to fight back against a global adversary whose policy is “full take”, and where traffic data can be taken with no legal restraint whatsoever? That is now the threat model for designers of anonymity systems. He argues that in addition to a large anonymity set, a future social media system will need a fixed set of servers in order to keep end-to-end latency within what chat users expect. As with DNS we should have servers operated by (say ten) different principals; unlike in that case we don’t want to have most of the independent parties financed by the US government. The root servers could be implemented as unattended seismic observatories, as reported by Simmons in the arms control context; such devices are fairly easy to tamper-proof.
The crypto problem is how to do multi-jurisdiction message processing that protects not just content but also metadata. Systems like Tor cost latency, while multi-party computation costs a lot of cycles. His new design, PrivaTegrity, takes low-latency crypto building blocks then layers on top of them transaction protocols with large anonymity sets. The key component is c-Mix, whose spec is up as an eprint here. There’s a precomputation using homomorphic encryption to set up paths and keys; in real-time operations each participating phone has a shared secret with each mix server so things can run at chat speed. A PrivaTegrity message is four c-Mix batches that use the same permutation. Message models supported include not just chat but publishing short anonymous messages, providing an untraceable return address so people can contact you anonymously, group chat, and limiting sybils by preventing more than one pseudonym being used. (There are enduring pseudonyms with valuable credentials.) It can handle large payloads using private information retrieval, and also do pseudonymous digital transactions with a latency of two seconds rather than the hour or so that bitcoin takes. The anonymous payment system has the property that the payer has proof of what he paid to whom, while the recipient has no proof of who paid him; that’s exactly what corrupt officials, money launderers and the like don’t want, but exactly what we do want from the viewpoint of consumer protection. He sees PrivaTegrity as the foundation of a “polyculture” of secure computing from multiple vendors that could be outside the control of governments once more. In questions, Adi Shamir questioned whether such an ecosystem was consistent with the reality of pervasive software vulnerabilities, regardless of the strength of the cryptography.
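To give a flavour of why shared secrets with each server allow chat-speed operation: in the real-time phase the phone can pre-combine one keystream per server and apply a single cheap masking operation, and each server then strips only its own layer. The toy sketch below (this is not c-Mix itself; the real protocol uses homomorphic precomputation and batch permutations, and the hash-based stream cipher here is purely illustrative) shows the layered-blinding idea with nothing but the standard library.

```python
import hashlib

def keystream(shared_secret: bytes, nonce: bytes, length: int) -> bytes:
    # SHA-256 in counter mode as a toy stream cipher (illustration only;
    # a real system would use a proper cipher such as AES-CTR).
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(
            shared_secret + nonce + counter.to_bytes(8, "big")
        ).digest()
        counter += 1
    return out[:length]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def client_blind(message: bytes, secrets: list, nonce: bytes) -> bytes:
    # The phone masks the message once with the XOR of all the
    # per-server keystreams: cheap enough to run at chat speed.
    blinded = message
    for s in secrets:
        blinded = xor(blinded, keystream(s, nonce, len(message)))
    return blinded

def server_unblind(blinded: bytes, secret: bytes, nonce: bytes) -> bytes:
    # Each mix server strips only its own layer, so no single server
    # can recover the plaintext by itself.
    return xor(blinded, keystream(secret, nonce, len(blinded)))
```

Note what the sketch leaves out: the batch permutation that actually provides the anonymity, and the homomorphic precomputation that sets up the per-round values; it shows only why the online cost per message can be so low.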
I will try to liveblog later sessions as followups to this post.