Four cool new jobs

We’re advertising for four people to join the security group from October.

The first three are for two software engineers to join our new cybercrime centre, to develop new ways of finding bad guys in the terabytes and (soon) petabytes of data we get on spam, phish and other bad stuff online; and a lawyer to explore and define the boundaries of how we share cybercrime data.

The fourth is in security analysis of semiconductor memory. Could you help us come up with neat new ways of hacking chips? We’ve invented quite a few of these in the past, ranging from optical fault induction to semi-invasive attacks more generally. What’s next?

Double bill: Password Hashing Competition + KeyboardPrivacy

Two interesting items from Per Thorsheim, founder of the PasswordsCon conference that we’re hosting here in Cambridge this December (you still have one month to submit papers, BTW).

First, the Password Hashing Competition “have selected Argon2 as a basis for the final PHC winner”, which will be “finalized by end of Q3 2015”. This is about selecting a new password hashing scheme to improve on the state of the art and make brute force password cracking harder. Hopefully we’ll have some good presentations about this topic at the conference.
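
For the curious, here is roughly what Argon2 looks like in use. This is a minimal sketch in Python with the argon2-cffi package (my choice of binding, not anything mandated by the PHC), relying on the library’s default cost parameters:

    # Minimal sketch: hashing and verifying a password with Argon2 via the
    # argon2-cffi package (pip install argon2-cffi). The default time, memory
    # and parallelism costs are used; tune them for your own threat model.
    from argon2 import PasswordHasher
    from argon2.exceptions import VerifyMismatchError

    ph = PasswordHasher()                                  # library defaults
    stored = ph.hash("correct horse battery staple")       # salted, self-describing encoded hash

    try:
        ph.verify(stored, "correct horse battery staple")  # returns True on a match
    except VerifyMismatchError:
        print("wrong password")

The attraction of Argon2 over older schemes is that it is deliberately memory-hard as well as slow, which makes large-scale guessing on GPUs and custom hardware much more expensive.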

Second, and unrelated: Per Thorsheim and Paul Moore have launched a privacy-protecting Chrome plugin called Keyboard Privacy to guard your anonymity against websites that look at keystroke dynamics to identify users. So, you might go through Tor, but the site recognizes you by your typing pattern and builds a typing profile that “can be used to identify you at other sites you’re using, where identifiable information is available about you”. Their plugin intercepts your keystrokes, batches them up and delivers them to the website at a constant pace, interfering with the site’s ability to build a profile that identifies you.
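
The plugin is JavaScript running in Chrome, but the underlying trick is easy to sketch; the snippet below is purely illustrative Python, not the plugin’s actual code. Keystrokes are buffered as they arrive and released at a fixed cadence, so an observer sees uniform timings rather than your personal dwell and flight times:

    # Illustrative sketch of constant-pace keystroke delivery (not the plugin's code).
    import queue
    import threading
    import time

    class ConstantPaceRelay:
        def __init__(self, deliver, interval=0.05):
            self.buffer = queue.Queue()
            self.deliver = deliver        # callback that forwards one key onwards
            self.interval = interval      # fixed gap between forwarded keystrokes
            threading.Thread(target=self._pump, daemon=True).start()

        def on_keystroke(self, key):
            self.buffer.put(key)          # capture immediately, forward later

        def _pump(self):
            while True:
                self.deliver(self.buffer.get())  # blocks until a key is queued
                time.sleep(self.interval)        # uniform pacing hides the typing rhythm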

FCA view on unauthorised transactions

Yesterday the Financial Conduct Authority (the UK bank regulator) issued a report on Fair treatment for consumers who suffer unauthorised transactions. This is an issue in which we have an interest, as fraud victims regularly come to us after being turned away by their bank and by the financial ombudsman service. Yet the FCA have found that everything is hunky dory, and conclude “we do not believe that further thematic work is required at this stage”.

One of the things the FCA asked their consultants was whether there’s any evidence that claims are rejected on the sole basis that a PIN was used. The consultants didn’t want to rely on existing work but instead surveyed a nationally representative sample of 948 people and found that 16% had a transaction dispute in the last year. These were 37% MOTO (mail order / telephone order), 22% cancelled future-dated payment, 15% ATM cash, 13% shop, 13% lump sum from bank account. Of customers who complained, 43% were offered their money back spontaneously; a further 41% asked; in the end a total of 68% got refunds after varying periods of time. In total 7% (15 victims) had their claim declined, most because the bank said the transaction was “authorised” or followed a “contract with merchant”, and 2 for chip and PIN (one of them an ATM transaction; the other admitted sharing their PIN). 12 of these 15 considered the result unfair. These figures are entirely consistent with what we learn from the British Crime Survey and elsewhere: two million UK victims a year, and while most get their money back, many don’t; and a hard core of perhaps a few tens of thousands who end up feeling that their bank has screwed them.

The case studies profiled in the consultants’ paper were of glowing happy people who got their money back; the 12 sad losers were not profiled, and the consultants concluded that “Customers might be being denied refunds on the sole basis that Chip and PIN were used … we found little evidence of this” (p 49) and went on to remark helpfully that some customers admitted sharing their PINs and felt OK lying about this. The FCA happily paraphrases this as “We also did not find any evidence of firms holding customers liable for unauthorised transactions solely on the basis that the PIN was used to make the transaction” (main report, p 13, 3.25).

According to recent news reports, the former head of the FCA, Martin Wheatley, was sacked by George Osborne for being too harsh on the banks.

Job Ads: Cloud Cybercrime Centre

The Cambridge Cloud Cybercrime Centre (more information about our vision for this brand-new initiative is in this earlier article) now has a number of Research Associate / Research Assistant positions to fill:

  • A person to take responsibility for improving the automated processing of our incoming data feeds. They will help develop new sources of data, add new value to existing data and develop new ways of understanding and measuring cybercrime: full details are here.
  • A person with a legal background to carry out research into the legal and policy aspects of cybercrime data sharing. Besides contributing to the academic literature and to the active policy debates in this area they will assist in negotiating relevant arrangements with data suppliers and users: full details are here.

and, with special thanks to ThreatSTOP, whose generosity has funded this extra position:

  • We also seek someone to work on distributed denial-of-service (DDoS) measurement. We have been gathering data on reflected UDP DDoS events for many months and we want to extend our coverage and develop a much more detailed analysis of the location of perpetrators and victims, along with real-time data feeds of relevant information to assist in reducing harm (a rough illustration of how such reflection traffic shows up in flow data follows below). Full details are here.
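
By way of illustration only (this is not our measurement pipeline, and the flow-record field names are assumptions), reflected UDP DDoS traffic has a tell-tale signature: it arrives from well-known UDP service ports such as DNS, NTP, SSDP, chargen and SNMP, all aimed at a single victim:

    # Sketch: flag likely victims of reflected UDP DDoS in a CSV of flow records.
    # Column names (proto, src_port, dst_ip, bytes) and the threshold are invented.
    import csv
    from collections import defaultdict

    REFLECTOR_PORTS = {19, 53, 123, 161, 1900}   # chargen, DNS, NTP, SNMP, SSDP

    def suspected_victims(flow_csv, byte_threshold=10_000_000):
        traffic = defaultdict(int)               # (victim IP, reflector port) -> bytes
        with open(flow_csv) as f:
            for row in csv.DictReader(f):
                if row["proto"] == "udp" and int(row["src_port"]) in REFLECTOR_PORTS:
                    traffic[(row["dst_ip"], int(row["src_port"]))] += int(row["bytes"])
        return {k: v for k, v in traffic.items() if v >= byte_threshold}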

Please follow the links to read the relevant formal advertisement for the details about exactly who and what we’re looking for and how to apply.

Crypto Wars 2.0

Today we unveil a major report on whether law enforcement and intelligence agencies should have exceptional access to cryptographic keys and to our computer and communications data generally. David Cameron has called for this, as have US law enforcement leaders such as FBI Director James Comey.

This policy repeats a mistake of the 1990s. The Clinton administration tried for years to seize control of civilian cryptography, first with the Clipper Chip, and then with various proposals for ‘key escrow’ or ‘trusted third party encryption’. Back then, a group of experts on cryptography and computer security got together to explain why this was a bad idea. We have now reconvened in response to the attempt by Cameron and Comey to resuscitate the old dead horse of the 1990s.

Our report, Keys Under Doormats: Mandating insecurity by requiring government access to all data and communications, is timed to set the stage for a Wednesday hearing of the Senate Judiciary Committee at which Mr Comey will present his proposals. The reply to Comey will come from Peter Swire, who was on the other side twenty years ago (he was a Clinton staffer) and has written a briefing on the first crypto war here. Peter was recently on President Obama’s NSA review group. He argues that the real way to fix the problems complained of is to fix the mutual legal assistance process – which is also my own view.

Our report is also highly relevant to the new ‘Snoopers’ Charter’ that Home Secretary Theresa May has promised to put before parliament this fall. Mrs May has made clear she wants access to everything.

However, this is both wrong in principle and unworkable in practice. Building back doors into all computer and communication systems is against most of the principles of security engineering, and it is also against the principles of human rights. Our right to privacy, set out in Article 8 of the European Convention on Human Rights, can only be overridden by mechanisms that meet three tests. First, they must be set out in law, with sufficient clarity for their effects to be foreseeable; second, they must be proportionate; third, they must be necessary in a democratic society. As our report makes clear, universal exceptional access will fail all these tests by a mile.

For more, see the New York Times.

Passwords 2015 call for papers

The 9th International Conference on Passwords will be held in Cambridge, UK on 7-9 December 2015.

Launched in 2010 by Per Thorsheim, PasswordsCon is a lively and entertaining conference series dedicated solely to passwords. PasswordsCon’s unique mix of refereed papers and hacker talks encourages a kind of cross-fertilization that I’m sure you’ll find both entertaining and fruitful.

Paper submissions are due by 7 September 2015. Selected papers will be included in the event proceedings, published by Springer in the Lecture Notes in Computer Science (LNCS) series.

We hope to see lots of you there!

Graeme Jenkinson, Local arrangements chair

Cambridge Cloud Cybercrime Centre

We have recently won a major grant (around £2 million over 5 years) under the EPSRC Contrails call which we will be using to set up the “Cambridge Cloud Cybercrime Centre”:

https://www.cambridgecybercrime.uk/

This will be a multi-disciplinary initiative combining expertise from the University of Cambridge’s Computer Laboratory, Institute of Criminology and Faculty of Law. We will be operational from 1 October 2015.

Our approach will be data driven. We have already negotiated access to some very substantial datasets relating to cybercrime and we aim to leverage our neutral academic status to obtain more data and build one of the largest and most diverse data sets that any organisation holds.

We will mine and correlate these datasets to extract information about criminal activity. Our analysis will enhance understanding of crime ‘in the cloud’, enable us to devise identifiers of such criminality, allow us to build systems to detect this type of crime when it occurs, and aid us in showing how it is possible to collect extremely reliable evidence of wrongdoing. When it is appropriate, we will work closely with law enforcement so that interventions can be undertaken.

Our overall objective is to create a sustainable and internationally competitive centre for academic research into cybercrime.

Importantly, we will not be keeping all this data to ourselves… a key aim of our Centre is to make data available to other academics for them to apply their own skills to address cybercrime issues.

Academics currently face considerable difficulties in researching cybercrime. It is difficult, and time consuming, to negotiate access to real data on actual abuse and then it is necessary to build and deploy data collection tools before the real work can even be started.

We intend to drive a step change in the amount of cybercrime research by making datasets available, not just of URLs but content as well, so that other academics can concentrate on their particular areas of expertise and start being productive immediately. These datasets will be both ‘historic’ and, where appropriate, ‘real-time’.

We will maintain high ethical standards in everything we do and will develop a strong legal framework for our operations. In particular we will always ensure that the data we handle is treated fully in accord with the spirit, and not just the letter, of the agreements we enter into.

We will shortly be hiring for the first few research positions … pointers to the job adverts will appear on this blog.

Phishing that looks like another risk altogether

I came across an unusual DHL branded phish recently…

The user receives an email with the Subject of “DHL delivery to [ xxx ]June ©2015” where xxx is their valid email address. The From is forged as “DHLexpress<noreply@delivery.net>” (the criminal will have used this domain since delivery.net hasn’t yet adopted DMARC whereas dhl.com has a p=reject policy which would have prevented this type of forgery altogether).
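
Incidentally, the DMARC status of a sender domain is easy to check for yourself, since the policy is published as a DNS TXT record at _dmarc.<domain>. Here is a quick sketch using the dnspython package (my choice of library; note that delivery.net’s record may well have changed since this was written):

    # Sketch: look up a domain's published DMARC policy (pip install dnspython).
    import dns.resolver

    def dmarc_policy(domain):
        try:
            answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
        except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
            return None                          # no DMARC record published
        for rdata in answers:
            txt = b"".join(rdata.strings).decode()
            if txt.lower().startswith("v=dmarc1"):
                return txt                       # e.g. "v=DMARC1; p=reject; ..."
        return None

    print(dmarc_policy("dhl.com"))               # shows a p=reject policy
    print(dmarc_policy("delivery.net"))          # no reject policy at the time of writing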

The email looks like this (I’ve blacked out the valid email address):
[Image: DHL email body]
and so, although we would all wish otherwise, it is predictable that many recipients will have opened the attachment.

BTW: if the image looks in the least bit fuzzy in your browser then click on the image to see the full-size PNG file and appreciate how realistic the email looks.

Many readers will now expect me to explain some complex 0-day within the PDF that infects the machine with malware, because after all, that’s the main risk from opening unexpected attachments, isn’t it?

But no!

Which Malware Lures Work Best?

Last week at the APWG eCrime Conference in Barcelona I presented some new results about an old Instant Messaging (IM) worm from a paper written by Tyler Moore and myself.

In late April 2010 users of the Yahoo and Microsoft IM systems started to get messages from their buddies which said, for example:
foto ☺ http://www.example.com/image.php?user@email.example.com
where the email address was theirs and the URL was for some malware.

Naturally, since the message was from their buddy, a lot of folks clicked on the link, and when the Windows warning pop-up said “you cannot see this photo until you press OK” they pressed OK. Since the Windows message was in fact a warning about executing unknown programs downloaded from the Internet, they too became infected with the malware. Hence they sent foto ☺ messages to all their buddies and the worm spread at increasing speed.

By late May 2010 I had determined how the malware was controlled (it resolved hostnames to locate IRC servers, then joined particular channels whose topic was the message to be sent to buddies) and built a Perl program to join in and monitor what was going on. I also determined that the criminals were often hosting their malware on sites with world-readable Apache weblogs, so we could get exact counts of malware downloads (that is, how many people clicked on the links).
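
The Perl monitor itself isn’t reproduced here, but the log-counting side is straightforward to sketch. The following is illustrative Python rather than the code we actually ran, with an invented log path and file filter:

    # Sketch: count distinct downloaders per binary from an Apache access log
    # in combined log format. "access.log" and the ".exe" filter are illustrative.
    import re
    from collections import defaultdict

    LOG_LINE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "GET (\S+) HTTP/[\d.]+" (\d{3})')

    downloads = defaultdict(set)                 # URL path -> set of client IPs
    with open("access.log") as log:
        for line in log:
            m = LOG_LINE.match(line)
            if not m:
                continue
            ip, path, status = m.groups()
            if status == "200" and path.endswith(".exe"):
                downloads[path].add(ip)          # count each IP once per binary

    for path, ips in sorted(downloads.items()):
        print(path, len(ips), "distinct downloaders")

Counting distinct client IPs per binary is a crude but serviceable proxy for how many people clicked, once obvious security scanners and repeat clicks by the same user are excluded, as was done for the figures reported below.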

Full details, and the story of a number of related worms that spread over the next two years, can be found in the academic paper (and are summarised in the slides I used for a very short talk in Barcelona and a longer version I presented a week earlier in Luxembourg).

The key results are:

  • Thanks to some sloppiness by the criminals we had some brief snapshots of activity from an IRC channel used when the spreading phase was complete and infected machines were being forced to download new malware — this showed that 95% of people had clicked OK to dismiss the Microsoft warning message.
  • We had sufficient download data to estimate that around 3 million users were infected by the initial worm and we have records of over 14 million distinct downloads over all of the different worms (having ignored events caused by security monitoring, multiple clicks by the same user, etc.). That is — this was a large scale event.
  • We were able to compare the number of clicks during periods where the criminals vacillated between using URL shorteners in their URLs and using hostnames that vaguely resembled brands such as Facebook, MySpace, Orkut and so on. We found that when shorteners were used this reduced the number of clicks by almost half — presumably because it made users more cautious.
  • From early 2011 the worms were mainly affecting Brazil — and the simple “foto ☺” had long been replaced by other textual lures. We found that when the criminals used lures in Portuguese (e.g. “eu acho que é você na”, which has, I was told in Barcelona, a distinctive Brazilian feel to it) they were far more successful in getting people to click than when they used ‘language independent’ lures such as “hahha foto”.

There’s nothing here which is super-surprising, but it is useful to see our preconceptions borne out not in a laboratory experiment (where it is hard to ensure that the experimental subjects are behaving quite the way that they would ‘in the wild’) but by large scale measurements from real events.