Inter-ACE national hacking competition today

Over 100 of the best students in cyber from the UK Academic Centres of Excellence in Cyber Security Research are gathered here at the University of Cambridge Computer Laboratory today for the second edition of our annual “Inter-ACE” hacking contest.

The competition is hosted on the CyberNEXS cyber-range of our sponsor Leidos, and involves earning points for hacking into each other’s machines while defending one’s own. The competition has grown substantially from last year’s: you can follow it live on Twitter (@InterACEcyber). At the time of writing, we still don’t know who is going to take home the trophy. Can you guess who will?

The event has been made possible thanks to generous support from the National Cyber Security Centre, the Cabinet Office, Leidos and NCC Group.

The University is Hiring

We’re looking for a Chief Information Security Officer. This isn’t a research post here at the lab, but across the yard in University Information Services, where they manage our networks and our administrative systems. There will be opportunities to work with security researchers like us, but the main task is protecting Cambridge from all sorts of online bad actors. If you would like to be in the thick of it, and you know what you’re doing, here’s how you can apply.

CfP: BSides London 2017

====================================================================
BSides London 2017
7th June 2017
ILEC Conference Centre, 47 Lillie Road London, SW6 1UD
https://www.securitybsides.org.uk/
====================================================================

We invite proposals for BSides London 2017, to be held on the 7th June, 2017 in London, UK.

Please note that all proposals must be submitted at: https://bit.ly/BSidesLDN2017CFP

———————————————————

Important dates

CfP opens – February 14th
CfP closes – March 27th
Voting on CfP opens – March 30th
Voting on CfP closes – April 13th
Email notification to proposers – April 14th
Deadline for speakers to confirm attendance – April 21st
BSides London schedule published – May 1st
BSides London! – June 7th, 2017

(All deadlines are 11:59pm GMT)

———————————————————

What is BSides?

Each BSides is a community-driven framework for building events for and by information security community members.  The goal is to expand the spectrum of conversation beyond the traditional confines of space and time.  It creates opportunities for individuals to both present and participate in an intimate atmosphere that encourages collaboration. It is an intense event with discussions, demos, and interaction from participants. It is where conversations about the next big thing are happening.

———————————————————

Scope

This year our focus will be on a theme that is fundamental to InfoSec: “Sharing is Caring: disclosure, leaks and knowledge transfer – it is all about sharing”. We seek original contributions that present attacks, analyses, designs, applications, protocols, systems, practical experiences, and theory. As usual the theme is not prescriptive, and proposals may include (but are not limited to) the following topics:

* Information technology
* Network security & Cryptography
* Web Application security
* Mobile security
* Usable security
* Virtualization and cloud computing
* Innovative attack / defense strategies
* Forensics / Malware
* Embedded device security / IoT
* Physical security and lockpicking
* Biometrics
* Hardware hacking
* Biohacking and modification
* Open source software
* Robotics (bonus points for bringing an actual robot)
* Massive abuse of technology
* Evolutionary computing
* Ethical and philosophical implications of hacking

———————————————————

Advice to presenters

PRESENTATIONS should describe novel technical contributions within the scope of the call. Presentations will be subject to open (non-blind) peer review by the organising committee. The allotted time for each presentation will typically be between 45 minutes and 1 hour (including Q&A), though shorter presentations are also welcome.

Remember that our participants’ backgrounds and experience are varied. There must be something for everyone, so when choosing a subject, go with something you are comfortable with, whatever the difficulty level. Your presentation should tell us a story:

– Here is a problem
– It’s an interesting problem
– It’s an unsolved problem
– Here is my idea
– My idea works (details, data)
– Here’s how my idea compares to other people’s approaches

If your talk is not selected, please keep in mind that we aim to provide a “lightning talks” track where speakers can present their topics on a first come/first served basis.

Best of luck and thanks for being part of Security BSides London! For additional information or questions regarding the process please email cfp at securitybsides.org.uk

———————————————————

Organization

As in previous years, the schedule for BSides London 2017 will be selected by public vote.

Banks biased against black fraud victims

The following is an op-ed I wrote in today’s Times. It appeared in their Thunderer column.

You’re less likely to be treated fairly by your bank if you’re elderly, poor, female or black. We’ve suspected this for years, and finally The Times has dug up the numbers to prove it.

Fraud victims who’re refused compensation often contact our security research group at Cambridge after they find we work on payment fraud. We call this stream of complaints our ‘fraud telescope’ as it gives us early warning of what the bad guys are up to. We’ve had more than 2,000 cases over 25 years.

In recent years we’ve started to realise what we weren’t seeing. The “dark matter” in the fraud universe is the missing victims: we don’t see that many middle-class white men. The victims who do come to us are disproportionately elderly, poor, female, or black. But crime surveys tell us that the middle classes and the young are more likely to be victims of fraud, so it’s hard to avoid the conclusion that banks are less generous to some of their customers.

We raised the issue of discrimination in 2011 with one of the banks and with the Commission for Racial Equality, but as no-one was keeping records, nothing could be proved, until today.

How can this discrimination happen? Well, UK rules give banks a lot of discretion to decide whether to refund a victim, and the first responders often don’t know the full story. If your HSBC card was compromised by a skimmer on a Tesco ATM, there’s no guarantee that Tesco will have told anyone (unlike in America, where the law forces Tesco to tell you). And the fraud pattern might be something entirely new. So bank staff end up making judgement calls like “Is this customer telling the truth?” and “How much is their business worth to us?” This in turn sets the stage for biases and prejudices to kick in, however subconsciously. Add management pressure to cut costs, sometimes even bonuses for cutting them, and here we are.

There are two lessons. First, banks need to train staff to be aware of unconscious bias (as universities do), and monitor their performance.

Second, the Financial Conduct Authority needs to protect all customers properly. It seems to be moving in the right direction; after the recent fraud against tens of thousands of Tesco Bank account holders, it said it expected fraud victims to be made good immediately. This has been the law in the USA since the 1980s and it must become a firm rule here too.

Government U-turn on Health Privacy

Now that everyone’s distracted with the supreme court case on Brexit, you can expect the government to sneak out something it’s ashamed of. Health secretary Jeremy Hunt has decided to ignore the wishes of over a million people who opted out of having their hospital records given to third parties such as drug companies, and the ICO has decided to pretend that the anonymisation mechanisms he says he’ll use instead are sufficient. One gently smoking gun is the fifth bullet in a new webpage here, where the Department of Health claims that when it says the data are anonymous, your wishes will be ignored. The news has been broken in an article in the Health Services Journal (it’s behind a paywall, as a splendid example of transparency) with the Wellcome Trust praising the ICO’s decision not to take action against the Department. We are assured that “the data is seen as crucial for vital research projects”. The exchange of letters with privacy campaigners that led up to this decision can be found here, here, here, here, here, here, and here.

An early portent of this u-turn was reported here in 2014, when officials reckoned that the only way they could still do administrative tasks such as calculating doctors’ bonuses was to pretend that the data are anonymous even though they know they aren’t really. Then, after the care.data scandal showed that a billion records had been sold to over a thousand purchasers, we reported here how HES data had also been sold and how the minister seemed to have misled parliament about this.

I will be talking about the ethics of all this on Thursday. Even if ministers claim that stolen medical records are OK to use, researchers must not act as if this were true; if patients end up trusting doctors as little as we trust politicians, medical research will be in serious trouble. There is a video of a previous version of this talk here.

Meanwhile, if you’re annoyed that Jeremy Hunt proposes to ignore not just your privacy rights but your express wishes, you can send him a notice under Section 10 of the Data Protection Act forbidding him from disclosing your data. The Department has complied with such notices in the past, albeit with bad grace as they have no automated way to do it. If thousands of people serve such notices, they may finally have to stand up to the drug company lobbyists and write the missing software. For more, see here.

DigiTally

Last week I gave a keynote talk at CCS about DigiTally, a project we’ve been working on to extend mobile payments to areas where the network is intermittent, congested or non-existent.

The Bill and Melinda Gates Foundation called for ways to increase the use of mobile payments, which have been transformative in many less developed countries. We did some research and found that network availability and cost were the two main problems. So how could we do phone payments where there’s no network, with a marginal cost of zero? If people had smartphones you could use some combination of NFC, bluetooth and local wifi, but most of the rural poor in Africa and Asia use simple phones without any extra communications modalities, other than those which the users themselves can provide. So how could you enable people to do phone payments by simple user actions? We were inspired by the prepayment electricity meters I helped develop some twenty years ago; meters conforming to this spec are now used in over 100 countries.

We got a small grant from the Gates Foundation to do a prototype and field trial. We designed a system, DigiTally, where Alice can pay Bob by exchanging eight-digit MACs that are generated, and verified, by the SIM cards in their phones. For rapid prototyping we used overlay SIMs (which are already being used in a different phone payment system in Africa). The cryptography is described in a paper we gave at the Security Protocols Workshop this spring.
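The real scheme is specified in the Security Protocols Workshop paper; purely as an illustration of the idea (this construction is mine, not DigiTally’s), an eight-digit payment code could be a MAC over the transaction fields, truncated to decimal digits much as HOTP does:

```python
import hmac
import hashlib

def payment_code(key: bytes, payer: str, payee: str, amount: int, counter: int) -> str:
    """Derive an 8-decimal-digit code from the transaction fields.
    Illustrative only -- not the actual DigiTally construction."""
    msg = f"{payer}|{payee}|{amount}|{counter}".encode()
    mac = hmac.new(key, msg, hashlib.sha256).digest()
    # Dynamic truncation to decimal digits, in the style of RFC 4226 (HOTP)
    offset = mac[-1] & 0x0F
    value = int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF
    return f"{value % 10**8:08d}"

# Alice's SIM computes the code; Bob's SIM, holding the same key,
# recomputes and compares. The counter prevents replay of old codes.
key = b"shared-sim-key-demo"   # placeholder key for the sketch
code = payment_code(key, "Alice", "Bob", 250, 7)
print(code)
```

Alice reads the code out to Bob (or he keys it in), and his SIM verifies it offline; no network is needed at payment time.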

Last month we took the prototype to Strathmore University in Nairobi to do a field trial involving usability studies in their bookshop, coffee shop and cafeteria. The results were very encouraging and I described them in my talk at CCS (slides). There will be a paper on this study in due course. We’re now looking for partners to do deployment at scale, whether in phone payments or in other apps that need to support value transfer in delay-tolerant networks.

There has been press coverage in the New Scientist, Engadget and Impress (original Japanese version).

Security Economics MOOC

In two weeks’ time we’re starting an open course in security economics. I’m teaching this together with Rainer Boehme, Tyler Moore, Michel van Eeten, Carlos Ganan, Sophie van der Zee and David Modic.

Over the past fifteen years, we’ve come to realise that many information security failures arise from poor incentives. If Alice guards a system while Bob pays the cost of failure, things can be expected to go wrong. Security economics is now an important research topic: you can’t design secure systems involving multiple principals if you can’t get the incentives right. And it goes way beyond computer science. Without understanding how incentives play out, you can’t expect to make decent policy on cybercrime, on consumer protection or indeed on protecting critical national infrastructure.

We first did the course last year as a paid-for course with EdX. Our agreement with them was that they’d charge for it the first time, to recoup the production costs, and thereafter it would be free.

So here it is as a free course. Spread the word!

Hacking the iPhone PIN retry counter

At our security group meeting on the 19th August, Sergei Skorobogatov demonstrated a NAND backup attack on an iPhone 5c. I typed in six wrong PINs and it locked; he removed the flash chip (which he’d desoldered and led out to a socket); he erased and restored the changed pages; he put it back in the phone; and I was able to enter a further six wrong PINs.
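The arithmetic behind the attack is simple: restoring the saved NAND image resets the retry counter, so a four-digit PIN space can be searched in batches of six guesses. Here is a back-of-the-envelope estimate (the per-step timings are my assumptions, not the paper’s measured figures):

```python
# Cost of a NAND mirroring attack on a 4-digit PIN, batched 6 guesses per restore.
# The timings below are illustrative assumptions, not measurements from the paper.

PIN_SPACE = 10_000          # all 4-digit PINs
TRIES_PER_RESTORE = 6       # wrong attempts allowed before lockout
SECS_PER_TRY = 5            # assumed: type a PIN and wait for rejection
SECS_PER_RESTORE = 90       # assumed: re-flash the saved NAND pages

restores = -(-PIN_SPACE // TRIES_PER_RESTORE)   # ceiling division
worst_case_secs = PIN_SPACE * SECS_PER_TRY + restores * SECS_PER_RESTORE

print(restores)                   # 1667 restores in the worst case
print(worst_case_secs / 3600)     # about 55.6 hours under these assumed timings
```

Whatever the exact timings, the point is that mirroring turns “ten guesses, ever” into an exhaustive search measured in days.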

Sergei has today released a paper describing the attack.

During the recent fight between the FBI and Apple, FBI Director Jim Comey said this kind of attack wouldn’t work.

USENIX Security Best Paper 2016 – The Million Key Question … Origins of RSA Public Keys

Petr Svenda et al from Masaryk University in Brno won the Best Paper Award at this year’s USENIX Security Symposium with their paper classifying public RSA keys according to their source.

I really like the simplicity of the original assumption. The starting point of the research was that different crypto/RSA libraries use slightly different elimination methods and “cut-off” thresholds to find suitable prime numbers. They thought these differences should be sufficient to identify a particular cryptographic implementation, and that all that was needed were public keys. Petr et al confirmed this assumption. The best paper award is well-deserved recognition; I’ve worked with Petr and followed his activities closely.

The authors created a method for efficiently identifying the source (software library or hardware device) of RSA public keys. It resulted in a classification of keys into more than a dozen categories. This classification can be used as a fingerprint that decreases the anonymity of users and operators of Tor and other privacy-enhancing services.
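As a toy illustration of why such fingerprints exist (this example is my own, not the paper’s method): a library that forces the top two bits of each prime produces moduli whose leading byte falls in a narrower range than one that forces only the top bit, so even the leading byte of the public modulus starts to separate the two “libraries”:

```python
import random

def is_prime(n: int) -> bool:
    """Deterministic Miller-Rabin, valid for the small n used here."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

def gen_prime(bits: int, top_two_bits: bool) -> int:
    """Two toy prime-generation strategies that leave different fingerprints."""
    while True:
        p = random.getrandbits(bits) | 1
        if top_two_bits:
            p |= 0b11 << (bits - 2)     # force the two high bits, as some libraries do
        else:
            p |= 1 << (bits - 1)        # force only the high bit
        if is_prime(p):
            return p

random.seed(1)
# Leading byte of N = p*q for 50 toy 128-bit moduli per strategy.
# With both top bits of p and q forced, N >= (0.75 * 2^64)^2, so N >> 120 >= 144;
# the single-top-bit strategy spreads the leading byte over a much wider range.
msbs = {flag: [(gen_prime(64, flag) * gen_prime(64, flag)) >> 120 for _ in range(50)]
        for flag in (True, False)}
print(min(msbs[True]), min(msbs[False]))
```

The paper’s classifier uses far richer statistics (such as the factor structure of p−1 shown in the graph below), but the principle is the same: implementation choices leak through the public key.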

Bit length of largest prime factors of p−1 – graphs extracted from “The Million Key Question – Investigating the Origins of RSA Public Keys” (follow the link for more).

All that results from an analysis of over 60 million freshly generated keys from 22 open- and closed-source libraries and 16 different smart cards. While the findings are fairly theoretical, they are demonstrated with a series of easy-to-understand graphs (see above).

I can’t see an easy way to exploit the results for immediate cyber attacks. However, we started looking into practical applications. There are interesting opportunities for enterprise compliance audits, as the classification only requires access to datasets of public keys – often created as a by-product of internal network vulnerability scanning.

An extended version of the paper is available from http://crcs.cz/rsa.