Category Archives: Security engineering

Bad security, good security, case studies, lessons learned

DigiTally

Last week I gave a keynote talk at CCS about DigiTally, a project we’ve been working on to extend mobile payments to areas where the network is intermittent, congested or non-existent.

The Bill and Melinda Gates Foundation called for ways to increase the use of mobile payments, which have been transformative in many less developed countries. We did some research and found that network availability and cost were the two main problems. So how could we do phone payments where there’s no network, with a marginal cost of zero? If people had smartphones you could use some combination of NFC, bluetooth and local wifi, but most of the rural poor in Africa and Asia use simple phones without any extra communications modalities, other than those which the users themselves can provide. So how could you enable people to do phone payments by simple user actions? We were inspired by the prepayment electricity meters I helped develop some twenty years ago; meters conforming to this spec are now used in over 100 countries.

We got a small grant from the Gates Foundation to do a prototype and field trial. We designed a system, DigiTally, where Alice can pay Bob by exchanging eight-digit MACs that are generated and verified by the SIM cards in their phones. For rapid prototyping we used overlay SIMs (which are already used in a different phone payment system in Africa). The cryptography is described in a paper we gave at the Security Protocols Workshop this spring.
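For intuition, here is a minimal Python sketch of how a short, human-readable payment code might be derived from a shared key. The key, field layout and truncation below are hypothetical stand-ins; the real DigiTally construction runs inside the SIM and is specified in the Security Protocols Workshop paper.

```python
import hmac, hashlib

def auth_code(key: bytes, payer: str, payee: str, amount: int, seq: int) -> str:
    """Toy eight-digit authentication code, truncated from HMAC-SHA-256.
    Illustrative only: not the actual DigiTally construction."""
    msg = f"{payer}|{payee}|{amount}|{seq}".encode()
    tag = hmac.new(key, msg, hashlib.sha256).digest()
    # Reduce the first four tag bytes to eight decimal digits, short enough
    # for one person to read aloud and the other to type in
    return f"{int.from_bytes(tag[:4], 'big') % 10**8:08d}"

# Alice's SIM displays the code; Bob types it in and his SIM recomputes it.
shared_key = b"\x00" * 16  # hypothetical key provisioned by the operator
code = auth_code(shared_key, "alice", "bob", 250, 1)
ok = (code == auth_code(shared_key, "alice", "bob", 250, 1))
```

Because both SIMs can compute and check the code locally, the transfer needs no network at all; the operator reconciles balances whenever a phone next gets connectivity.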

Last month we took the prototype to Strathmore University in Nairobi to do a field trial involving usability studies in their bookshop, coffee shop and cafeteria. The results were very encouraging and I described them in my talk at CCS (slides). There will be a paper on this study in due course. We’re now looking for partners to do deployment at scale, whether in phone payments or in other apps that need to support value transfer in delay-tolerant networks.

There has been press coverage in the New Scientist, Engadget and Impress (original Japanese version).

Hacking the iPhone PIN retry counter

At our security group meeting on the 19th August, Sergei Skorobogatov demonstrated a NAND backup attack on an iPhone 5c. I typed in six wrong PINs and it locked; he removed the flash chip (which he’d desoldered and led out to a socket); he erased and restored the changed pages; he put it back in the phone; and I was able to enter a further six wrong PINs.

Sergei has today released a paper describing the attack.

During the recent fight between the FBI and Apple, FBI Director Jim Comey said this kind of attack wouldn’t work.

USENIX Security Best Paper 2016 – The Million Key Question … Origins of RSA Public Keys

Petr Svenda et al from Masaryk University in Brno won the Best Paper Award at this year’s USENIX Security Symposium with their paper classifying public RSA keys according to their source.

I really like the simplicity of the original assumption. The starting point of the research was that different crypto/RSA libraries use slightly different elimination methods and “cut-off” thresholds to find suitable prime numbers. The authors reasoned that these differences should be sufficient to identify a particular cryptographic implementation from public keys alone, and they confirmed this assumption. The best paper award is well-deserved recognition; I have worked with Petr and followed his work closely.

The authors created a method for efficiently identifying the source (software library or hardware device) of RSA public keys, resulting in a classification of keys into more than a dozen categories. This classification can be used as a fingerprint that reduces the anonymity of users of Tor and other privacy-enhancing services, and of their operators.
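To make the idea concrete, here is a hypothetical sketch of this kind of classification. The features chosen below (the top byte of the modulus and a small residue) are illustrative stand-ins, not the features used in the paper, which are derived from how each library's prime-selection cut-offs shape the distribution of the modulus bits and the factors of p-1.

```python
def modulus_features(n: int) -> tuple:
    """Reduce an RSA modulus to a few coarse features.
    Hypothetical feature set, chosen only to illustrate the approach."""
    bits = n.bit_length()
    return (n >> (bits - 8), n % 3)  # high bits reflect prime-range cut-offs

def classify(n: int, profiles: dict) -> list:
    """Return candidate sources: libraries whose previously observed
    feature sets contain this key's features."""
    feats = modulus_features(n)
    return [lib for lib, seen in profiles.items() if feats in seen]

# Toy usage: profile one key of known provenance, then match it.
n1 = (0xE2B3 << 2032) | 1  # hypothetical stand-in for a 2048-bit modulus
profiles = {"libFoo": {modulus_features(n1)}}
```

In practice the profiles would be built from large corpora of keys generated by each known library or device, and an unknown key would be matched probabilistically rather than by exact set membership.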

[Figure: bit length of the largest prime factors of p-1. Graphs from The Million Key Question – Investigating The Origins of RSA Public Keys (follow the link for more).]

All this results from an analysis of over 60 million freshly generated keys from 22 open- and closed-source libraries and from 16 different smartcards. While the findings are fairly theoretical, they are demonstrated with a series of easy-to-understand graphs (see above).

I can’t see an easy way to exploit the results for immediate cyber attacks. However, we started looking into practical applications. There are interesting opportunities for enterprise compliance audits, as the classification only requires access to datasets of public keys – often created as a by-product of internal network vulnerability scanning.

An extended version of the paper is available from http://crcs.cz/rsa.

Yet another Android side channel: input stealing for fun and profit

At PETS 2016 we presented a new side-channel attack in our paper Don’t Interrupt Me While I Type: Inferring Text Entered Through Gesture Typing on Android Keyboards. This was part of Laurent Simon‘s thesis, and made him runner-up for the Best Student Paper Award.

We found that software on your smartphone can infer words you type in other apps by monitoring the aggregate number of context switches and the number of hardware interrupts. These are readable by permissionless apps within the virtual procfs filesystem (mounted under /proc). Three previous research groups had found that other files under procfs support side channels. But the files they used contained information about individual apps – e.g. the file /proc/uid_stat/victimapp/tcp_snd contains the number of bytes sent by “victimapp”. These files are no longer readable in the latest Android version.

We found that the “global” files – those that contain aggregate information about the system – also leak. So a curious app can monitor these global files as a user types on the phone and try to work out the words. We looked at smartphone keyboards that support “gesture typing”: a novel input mechanism popularized by SwiftKey, whereby a user drags their finger from letter to letter to enter words.
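As a rough illustration of the mechanism (not the authors' code), the sketch below parses the aggregate context-switch and interrupt counters from /proc/stat and samples their deltas over time; the sampling rate and trace structure are assumptions for illustration.

```python
import re, time

def global_counters(stat_text: str) -> dict:
    """Parse the aggregate context-switch ('ctxt') and interrupt ('intr')
    counters from the text of /proc/stat, one of the world-readable
    'global' files available to permissionless apps."""
    ctxt = int(re.search(r"^ctxt (\d+)", stat_text, re.M).group(1))
    intr = int(re.search(r"^intr (\d+)", stat_text, re.M).group(1))
    return {"ctxt": ctxt, "intr": intr}

def sample_deltas(n: int = 5, period_s: float = 0.01) -> list:
    """Sample counter deltas; bursts in the trace correlate with finger
    movement. Sketch only: the attack in the paper samples at high rate
    while the soft keyboard is in the foreground, then feeds the trace
    to a classifier trained on candidate words."""
    read = lambda: global_counters(open("/proc/stat").read())
    prev, deltas = read(), []
    for _ in range(n):
        time.sleep(period_s)
        cur = read()
        deltas.append({k: cur[k] - prev[k] for k in cur})
        prev = cur
    return deltas
```

The point is that no permission is needed: the counters are system-wide aggregates, yet their timing structure still reveals what the foreground user is doing.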

This work shows once again how difficult it is to prevent side channels: they come up in all sorts of interesting and unexpected ways. Fortunately, we think there is an easy fix: Google should simply disable access to all procfs files, rather than just the files that leak information about individual apps. Meanwhile, if you’re developing apps for privacy or anonymity, you should be aware that these risks exist.

Royal Society report on cybersecurity research

The Royal Society has just published a report on cybersecurity research. I was a member of the steering group that tried to keep the policy team headed in the right direction. Its recommendation that governments preserve the robustness of encryption is welcome enough in view of the new Russian law on access to crypto keys, and was nice to get given the conservative nature of the Society. But I’m afraid the glass is only half full.

I was disappointed that the final report went along with the GCHQ line that security breaches should not be reported to affected data subjects, as in the USA, but to the agencies, as mandated in the EU’s NIS directive. Its call for an independent review of the UK’s cybersecurity needs may also achieve little. I was on John Beddington’s Blackett Review five years ago, and the outcome wasn’t published; it was mostly used to justify a budget increase for GCHQ. Its call for UK government work on standards is irrelevant post-Brexit; indeed standards made in Europe will probably be better without UK interference. Most of all, I cannot accept the report’s line that the government should help direct cybersecurity research. Most scientists agree that too much money already goes into directed programmes and not enough into responsive-mode and curiosity-driven research. In the case of security research there is a further factor: the stark conflict of interest between bona fide researchers, whose aim is that some of the people should enjoy some security and privacy some of the time, and agencies engaged in programmes such as Operation Bullrun whose goal is that this should not happen. GCHQ may want a “more responsive cybersecurity agenda”; but that’s the last thing people like me want them to have.

The report has in any case been overtaken by events. First, Brexit is already doing serious harm to research funding. Second, Brexit is also doing serious harm to the IT industry; we hear daily of listings postponed, investments reconsidered and firms planning to move development teams and data overseas. Third, the Investigatory Powers Bill currently before the House of Lords highlights the fact that the surveillance debate in the West these days is more about access to data at rest, and about whether the government can order firms to hack their customers.

While all three arms of the US government have drawn back on surveillance powers following the Snowden revelations, Theresa May has taken the hardest possible line. Her Investigatory Powers Bill will give her successors as Home Secretary sweeping powers to order firms in the UK to hand over data and help GCHQ hack their customers. Brexit will shield these powers from challenge in the European Court of Justice, making it much harder for a UK company to claim “adequacy” for its data protection arrangements in respect of EU data subjects. This will make it still less attractive for an IT company to keep in the UK either data that could be seized or engineering staff who could be coerced. I am seriously concerned that, together with Brexit, this will be the double whammy that persuades overseas firms not to invest in the UK, and that even causes some UK firms to leave. In the face of this massive self-harm, the measures suggested by the report are unlikely to help much.

Inter-ACE cyberchallenge at Cambridge

The best student hackers from the UK’s 13 Academic Centres of Excellence in Cyber Security Research are coming to Cambridge for the first Inter-ACE Cyberchallenge tomorrow, Saturday 23 April 2016.

The event is organized by the University of Cambridge in partnership with Facebook. It is loosely patterned on other inter-university sport competitions, in that each university enters a team of four students and the winning team takes home a trophy that gets engraved with the name of their university and is then passed on to the next winning team the following year.
Participation in the Inter-ACE cyberchallenge is open only to Universities accredited as ACEs under the EPSRC/GCHQ scheme. 10 of the 13 ACEs have entered this inaugural edition: alphabetically, Imperial College, Queen’s University Belfast, Royal Holloway University of London, University College London, University of Birmingham, University of Cambridge (hosting), University of Kent, University of Oxford, University of Southampton, University of Surrey. The challenges are set and administered by Facebook, but five of the ten competing institutions have also sent Facebook an optional “guest challenge” for others to solve.
The players compete in a CTF involving both “Jeopardy-style” and “attack-defense-style” aspects. Game progress is visualized on a world map somewhat reminiscent of Risk, where teams attempt to conquer and re-conquer world countries by solving associated challenges.
We designed the Inter-ACE cyberchallenge riding on the success of the Cambridge2Cambridge cybersecurity challenge we ran in collaboration with MIT last March. In that event, originally planned following a January 2015 joint announcement by US President Barack Obama and UK Prime Minister David Cameron, six teams of students took part in a 24-hour Capture-The-Flag involving several rounds and spin-out individual events such as “rapid fire” (where challengers had to break into four different vulnerable binaries under time pressure) and “lock picking”, also against the clock and against each other. The challenges were expertly set and administered by ForAllSecure, a cybersecurity spin-off from Carnegie Mellon University.
With generous support from the UK consulate in Boston we were able to fly 10 Cambridge students to MIT. By design, we mixed people from both universities in each team, to promote C2C as an international cooperation and a bridge-building exercise. Thanks to the generosity of the many sponsors of the event, particularly Microsoft who funded the cash prizes, the winning team “Johnny Cached”, consisting of two MIT and two Cambridge students, walked away with 15,000 USD. Many other medals were awarded for various achievements throughout the event. Everyone came back with a sense of accomplishment and with connections with new like-minded and highly skilled friends across the pond.
In both the C2C and the Inter-ACE I strove to design the rules in a way that would encourage participation not just from the already-experienced but also from interested inexperienced students who wanted to learn more. So, in C2C I designed a scheme where (following a pre-selection to rank the candidates) each team would necessarily include both experienced players and novices; whereas in Inter-ACE, where each University clearly had the incentive of picking their best players to send to Cambridge to represent them, I asked our technical partners Facebook to provide a parallel online competition that could be entered into remotely by individual students who were not on their ACE’s team. This way nobody who wanted to play was left out.
Industry and government (ours, but probably also those of whatever other country you’re reading this blog post from) concur that we need more cybersecurity experts. They can’t hire the good ones fast enough. A recent Washington Post article lamented that “Universities aren’t doing enough to train the cyberdefenders America desperately needs”. Well, some of us are, and are taking the long term view.
As an educator, I believe the role of a university is to teach the solid foundations, the timeless principles, and especially “learning how to learn”, rather than the trick of the day; so I would not think highly of a hacking-oriented university course that primarily taught techniques destined to become obsolete in a couple of years. On the other hand, a total disconnect between theory and practice is also inappropriate. I’ve always introduced my students to lockpicking at the end of my undergraduate security course, both as a metaphor for the attack-defense interplay that is at the core of security (a person unskilled at picking locks has no hope of building a new lock that can withstand determined attacks; you can only beat the bad guys if you’re better than them) and to underline that the practical aspects of security are also relevant, and even fun. It has always been enthusiastically received, and has contributed to making more students interested in security.
I originally agreed to get involved in organizing Cambridge 2 Cambridge, with my esteemed MIT colleague Dr Howie Shrobe, precisely because I believe in the educational value of exposing our students to practical hands-on security. The C2C competition was run as a purely vocational event for our students, something they did during evenings and weekends if they were interested, and on condition it would not interfere with their coursework. However, taking on the role of co-organizing C2C allowed me, with thanks to the UK Cabinet Office, to recruit a precious full time collaborator, experienced ethical hacker Graham Rymer, who has since been developing a wealth of up-to-date training material for C2C. My long term plan, already blessed by the department, is to migrate some of this material into practical exercises for our official undergraduate curriculum, starting from next year. I think it will be extremely beneficial for students to get out of University with a greater understanding of the kind of adversaries they’re up against when they become security professionals and are tasked to defend the infrastructure of the organization that employs them.
Another side benefit of these competitions, as already remarked, is the community building, the forging of links between students. We don’t want merely to train individuals: we want to create a new generation of security professionals, a strong community of “good guys”. And if they met each other at the Inter-ACE when they were little, they’re going to have a much stronger chance of actively collaborating ten years later when they’re grown-ups and have become security consultants, CISOs or heads of homeland security back wherever they came from. Sometimes I have to fight with narrow-minded regulations that would only, say, offer scholarships in security to students who could pass security clearance. Well, playing by such rules makes the pool too small. For as long as I have been at Cambridge, the majority of the graduates and faculty in our security research group have been “foreigners” (myself included, of course). A university that only worked with students (and staff, for that matter) from its own country would be at a severe disadvantage compared to those, like Cambridge, that accept and train the best in the whole world. I believe we can only nurture and bring out the best student hackers in the UK in a stimulating environment where their peers are the best student hackers from anywhere else in the world. We need to take the long term view and understand that we cannot reach critical mass without this openness. We must show how exciting cybersecurity is to those clever students who don’t know it yet, whatever their gender, prior education, social class, background, even (heaven forbid) those scary foreigners, hoo hoo, because it’s only by building a sufficiently large ecosystem of skilled, competent and ethically trained good guys that employers will have enough good applicants “of their preferred profile” in the pool they want to fish in for recruitment purposes.
My warmest thanks to my academic colleagues leading the other ACE-CSRs who have responded so enthusiastically to this call at very short notice, and to the students who have been so keen to come to Cambridge for this Inter-ACE despite it being so close to their exam season. Let’s celebrate this diversity of backgrounds tomorrow and forge links between the best of the good guys, wherever they’re from. Going forward, let’s attract more and more brilliant young students to cybersecurity, to join us in the fight to make the digital society safe for all, within and across borders.

Security Protocols 2016

I’m at the 24th Security Protocols Workshop in Brno (no, not Borneo, as a friend misheard it, but in the Czech Republic; a two-hour flight rather than a twenty-hour one). We ended up being bumped to an old chapel in the Mendel museum, a former monastery where the monk Gregor Mendel figured out genetics from the study of peas; the prosaic reason was that the Canadian ambassador had pre-empted our meeting room. As a result we had no wifi and I have had to liveblog from the pub, where we are having lunch. The session liveblogs will be in followups to this post, in the usual style.

Financial Cryptography 2016

I will be trying to liveblog Financial Cryptography 2016, which is the twentieth anniversary of the conference. The opening keynote was by David Chaum, who invented digital cash over thirty years ago. From then until the first FC people believed that cryptography could enable commerce and also protect privacy; since then pessimism has slowly set in, and sometimes it seems that although we’re still fighting tactical battles, we’ve lost the war. Since Snowden people have little faith in online privacy, and now we see Tim Cook in a position to decide which seventy phones to open. Is there a way to fight back against a global adversary whose policy is “full take”, and where traffic data can be taken with no legal restraint whatsoever? That is now the threat model for designers of anonymity systems. He argues that in addition to a large anonymity set, a future social media system will need a fixed set of servers in order to keep end-to-end latency within what chat users expect. As with DNS we should have servers operated by (say ten) different principals; unlike in that case we don’t want to have most of the independent parties financed by the US government. The root servers could be implemented as unattended seismic observatories, as reported by Simmons in the arms control context; such devices are fairly easy to tamper-proof.

The crypto problem is how to do multi-jurisdiction message processing that protects not just content but also metadata. Systems like Tor cost latency, while multi-party computation costs a lot of cycles. His new design, PrivaTegrity, takes low-latency crypto building blocks then layers on top of them transaction protocols with large anonymity sets. The key component is c-Mix, whose spec is up as an eprint here. There’s a precomputation using homomorphic encryption to set up paths and keys; in real-time operations each participating phone has a shared secret with each mix server so things can run at chat speed. A PrivaTegrity message is four c-Mix batches that use the same permutation. Message models supported include not just chat but publishing short anonymous messages, providing an untraceable return address so people can contact you anonymously, group chat, and limiting sybils by preventing more than one pseudonym being used. (There are enduring pseudonyms with valuable credentials.) It can handle large payloads using private information retrieval, and also do pseudonymous digital transactions with a latency of two seconds rather than the hour or so that bitcoin takes. The anonymous payment system has the property that the payer has proof of what he paid to whom, while the recipient has no proof of who paid him; that’s exactly what corrupt officials, money launderers and the like don’t want, but exactly what we do want from the viewpoint of consumer protection. He sees PrivaTegrity as the foundation of a “polyculture” of secure computing from multiple vendors that could be outside the control of governments once more. In questions, Adi Shamir questioned whether such an ecosystem was consistent with the reality of pervasive software vulnerabilities, regardless of the strength of the cryptography.
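To give a feel for the structure (this is emphatically not the real c-Mix), here is a toy model in which each phone shares a secret with each mix server, blinding is plain XOR, and a single final shuffle stands in for the per-server permutations that the homomorphic precomputation enables in the real protocol.

```python
import hashlib, random

def blind(msg: bytes, secret: bytes) -> bytes:
    """XOR the message with a pad derived from a shared secret. XOR is
    its own inverse; a stand-in for c-Mix's multiplicative blinding in
    a cyclic group."""
    pad = hashlib.sha256(secret).digest()[:len(msg)]
    return bytes(a ^ b for a, b in zip(msg, pad))

def client_send(msg: bytes, per_server_secrets: list) -> bytes:
    """A phone blinds its message once per mix server: only cheap
    symmetric operations in real time, hence chat-speed latency."""
    for s in per_server_secrets:
        msg = blind(msg, s)
    return msg

def cascade(batch: list, server_slots: list, seed: int = 42) -> list:
    """Toy real-time phase: each server strips its per-sender pads, then
    the batch is shuffled so outputs cannot be linked to senders.
    Simplification: real c-Mix interleaves a secret permutation at every
    server, which is what the precomputation phase makes possible."""
    for secrets in server_slots:  # one list of per-sender secrets per server
        batch = [blind(m, s) for m, s in zip(batch, secrets)]
    order = list(range(len(batch)))
    random.Random(seed).shuffle(order)
    return [batch[i] for i in order]
```

The design point this illustrates is that all the expensive public-key work happens ahead of time, so the real-time path touches only symmetric primitives and table lookups.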

I will try to liveblog later sessions as followups to this post.

Internet of Bad Things

A lot of people are starting to ask about the security and privacy implications of the “Internet of Things”. Once there’s software in everything, what will go wrong? We’ve seen a botnet recruiting CCTV cameras, and a former Director of GCHQ recently told a parliamentary committee that it might be convenient if a suspect’s car could be infected with malware that would cause it to continually report its GPS position. (The new Investigatory Powers Bill will give the police and the spooks the power to hack any device they want.)

So here is the video of a talk I gave on The Internet of Bad Things to the Virus Bulletin conference. As the devices around us become smarter they will become less loyal, and it’s not just about malware (whether written by cops or by crooks). We can expect all sorts of novel business models, many of them exploitative, as well as some downright dishonesty: the recent Volkswagen scandal won’t be the last.

But dealing with pervasive malware in everything will demand new approaches. Our approach to the Internet of Bad Things includes our new Cambridge Cybercrime Centre, which will let us monitor bad things online at the kind of scale that will be required.