Category Archives: Security engineering

Bad security, good security, case studies, lessons learned

Inter-ACE cyberchallenge at Cambridge

The best student hackers from the UK’s 13 Academic Centres of Excellence in Cyber Security Research are coming to Cambridge for the first Inter-ACE Cyberchallenge tomorrow, Saturday 23 April 2016.

The event is organized by the University of Cambridge in partnership with Facebook. It is loosely patterned on other inter-university sport competitions, in that each university enters a team of four students and the winning team takes home a trophy that gets engraved with the name of their university and is then passed on to the next winning team the following year.
Participation in the Inter-ACE cyberchallenge is open only to universities accredited as ACEs under the EPSRC/GCHQ scheme. 10 of the 13 ACEs have entered this inaugural edition: alphabetically, Imperial College, Queen’s University Belfast, Royal Holloway University of London, University College London, University of Birmingham, University of Cambridge (hosting), University of Kent, University of Oxford, University of Southampton, University of Surrey. The challenges are set and administered by Facebook, but five of the ten competing institutions have also sent Facebook an optional “guest challenge” for the others to solve.
The players compete in a CTF involving both “Jeopardy-style” and “attack-defense-style” aspects. Game progress is visualized on a world map somewhat reminiscent of Risk, where teams attempt to conquer and re-conquer world countries by solving associated challenges.
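For readers unfamiliar with the format, the core Jeopardy-style mechanic can be sketched in a few lines of Python. This is a hypothetical illustration only (the internals of Facebook’s platform are not public): submitted flags are checked against stored hashes, and a correct flag “conquers” the associated country for the submitting team.

```python
import hashlib

# Hypothetical challenge table: country -> (SHA-256 of its flag, points value)
challenges = {
    "France": (hashlib.sha256(b"FLAG{caesar}").hexdigest(), 100),
    "Brazil": (hashlib.sha256(b"FLAG{xor}").hexdigest(), 250),
}
owner = {}    # country -> team currently holding it
scores = {}   # team -> total points

def submit_flag(team, country, flag):
    """Check a submitted flag; a correct one conquers (or re-conquers) the country."""
    digest, points = challenges[country]
    if hashlib.sha256(flag.encode()).hexdigest() == digest:
        owner[country] = team
        scores[team] = scores.get(team, 0) + points
        return True
    return False
```

Storing only flag hashes means the scoreboard server never needs the plaintext flags at submission-checking time, a common CTF design choice.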
We designed the Inter-ACE cyberchallenge riding on the success of the Cambridge2Cambridge cybersecurity challenge we ran in collaboration with MIT last March. In that event, originally planned following a January 2015 joint announcement by US President Barack Obama and UK Prime Minister David Cameron, six teams of students took part in a 24-hour Capture-The-Flag involving several rounds and spin-out individual events such as “rapid fire” (where challengers had to break into four different vulnerable binaries under time pressure) and “lock picking”, also against the clock and against each other. The challenges were expertly set and administered by ForAllSecure, a cybersecurity spin-off from Carnegie Mellon University.
With generous support from the UK consulate in Boston we were able to fly 10 Cambridge students to MIT. By design, we mixed people from both universities in each team, to promote C2C as an international cooperation and a bridge-building exercise. Thanks to the generosity of the many sponsors of the event, particularly Microsoft who funded the cash prizes, the winning team “Johnny Cached”, consisting of two MIT and two Cambridge students, walked away with 15,000 USD. Many other medals were awarded for various achievements throughout the event. Everyone came back with a sense of accomplishment and with connections to new like-minded and highly skilled friends across the pond.
In both the C2C and the Inter-ACE I strove to design the rules in a way that would encourage participation not just from the already-experienced but also from interested but inexperienced students who wanted to learn more. So, in C2C I designed a scheme where (following a pre-selection to rank the candidates) each team would necessarily include both experienced players and novices; whereas in the Inter-ACE, where each university clearly had the incentive of picking its best players to send to Cambridge to represent it, I asked our technical partners at Facebook to provide a parallel online competition that could be entered remotely by individual students who were not on their ACE’s team. This way, nobody who wanted to play was left out.
Industry and government (ours, but probably also those of whatever other country you’re reading this blog post from) concur that we need more cybersecurity experts. They can’t hire the good ones fast enough. A recent Washington Post article lamented that “Universities aren’t doing enough to train the cyberdefenders America desperately needs”. Well, some of us are, and are taking the long-term view.
As an educator, I believe the role of a university is to teach the solid foundations, the timeless principles, and especially “learning how to learn”, rather than the trick of the day; so I would not think highly of a hacking-oriented university course that primarily taught techniques destined to become obsolete in a couple of years. On the other hand, a total disconnect between theory and practice is also inappropriate. I’ve always introduced my students to lockpicking at the end of my undergraduate security course, both as a metaphor for the attack-defense interplay that is at the core of security (a person unskilled at picking locks has no hope of building a new lock that can withstand determined attacks; you can only beat the bad guys if you’re better than them) and to underline that the practical aspects of security are also relevant, and even fun. It has always been enthusiastically received, and has contributed to making more students interested in security.
I originally agreed to get involved in organizing Cambridge 2 Cambridge, with my esteemed MIT colleague Dr Howie Shrobe, precisely because I believe in the educational value of exposing our students to practical hands-on security. The C2C competition was run as a purely extracurricular event for our students, something they did during evenings and weekends if they were interested, and on condition that it would not interfere with their coursework. However, taking on the role of co-organizing C2C allowed me, with thanks to the UK Cabinet Office, to recruit a precious full-time collaborator, experienced ethical hacker Graham Rymer, who has since been developing a wealth of up-to-date training material for C2C. My long-term plan, already blessed by the department, is to migrate some of this material into practical exercises for our official undergraduate curriculum, starting next year. I think it will be extremely beneficial for students to leave university with a greater understanding of the kind of adversaries they’re up against when they become security professionals and are tasked with defending the infrastructure of the organization that employs them.
Another side benefit of these competitions, as already remarked, is the community building, the forging of links between students. We don’t want merely to train individuals: we want to create a new generation of security professionals, a strong community of “good guys”. And if they met each other at the Inter-ACE when they were little, they’re going to have a much stronger chance of actively collaborating ten years later, when they’re grown-ups and have become security consultants, CISOs or heads of homeland security back wherever they came from.

Sometimes I have to fight narrow-minded regulations that would only, say, offer scholarships in security to students who can pass security clearance. Well, playing by such rules makes the pool too small. For as long as I have been at Cambridge, the majority of the graduates and faculty in our security research group have been “foreigners” (myself included, of course). A university that only worked with students (and staff, for that matter) from its own country would be at a severe disadvantage compared with those, like Cambridge, that accept and train the best in the whole world. I believe we can only nurture and bring out the best student hackers in the UK in a stimulating environment where their peers are the best student hackers from anywhere else in the world. We need to take the long-term view and understand that we cannot reach critical mass without this openness. We must show how exciting cybersecurity is to those clever students who don’t know it yet, whatever their gender, prior education, social class, background, even (heaven forbid) those scary foreigners, hoo hoo, because it’s only by building a sufficiently large ecosystem of skilled, competent and ethically trained good guys that employers will have enough good applicants “of their preferred profile” in the pool they want to fish in for recruitment purposes.
My warmest thanks to my academic colleagues leading the other ACE-CSRs who have responded so enthusiastically to this call at very short notice, and to the students who have been so keen to come to Cambridge for this Inter-ACE despite it being so close to their exam season. Let’s celebrate this diversity of backgrounds tomorrow and forge links between the best of the good guys, wherever they’re from. Going forward, let’s attract more and more brilliant young students to cybersecurity, to join us in the fight to make the digital society safe for all, within and across borders.

Security Protocols 2016

I’m at the 24th Security Protocols Workshop in Brno (no, not Borneo, as a friend misheard it, but in the Czech Republic; a two-hour flight rather than a twenty-hour one). We ended up bumped to an old chapel in the Mendel Museum, a former monastery where the monk Gregor Mendel figured out genetics from the study of peas, for the prosaic reason that the Canadian ambassador pre-empted our meeting room. As a result we had no wifi, and I have had to liveblog from the pub where we are having lunch. The session liveblogs will be in followups to this post, in the usual style.

Financial Cryptography 2016

I will be trying to liveblog Financial Cryptography 2016, which is the twentieth anniversary of the conference. The opening keynote was by David Chaum, who invented digital cash over thirty years ago. From then until the first FC people believed that cryptography could enable commerce and also protect privacy; since then pessimism has slowly set in, and sometimes it seems that although we’re still fighting tactical battles, we’ve lost the war. Since Snowden people have little faith in online privacy, and now we see Tim Cook in a position to decide which seventy phones to open. Is there a way to fight back against a global adversary whose policy is “full take”, and where traffic data can be taken with no legal restraint whatsoever? That is now the threat model for designers of anonymity systems. He argues that in addition to a large anonymity set, a future social media system will need a fixed set of servers in order to keep end-to-end latency within what chat users expect. As with DNS we should have servers operated by (say ten) different principals; unlike in that case we don’t want to have most of the independent parties financed by the US government. The root servers could be implemented as unattended seismic observatories, as reported by Simmons in the arms control context; such devices are fairly easy to tamper-proof.

The crypto problem is how to do multi-jurisdiction message processing that protects not just content but also metadata. Systems like Tor cost latency, while multi-party computation costs a lot of cycles. His new design, PrivaTegrity, takes low-latency crypto building blocks and then layers transaction protocols with large anonymity sets on top of them. The key component is c-Mix, whose spec is up as an eprint here. There’s a precomputation using homomorphic encryption to set up paths and keys; in real-time operation each participating phone has a shared secret with each mix server, so things can run at chat speed. A PrivaTegrity message is four c-Mix batches that use the same permutation. Message models supported include not just chat but publishing short anonymous messages, providing an untraceable return address so people can contact you anonymously, group chat, and limiting sybils by preventing more than one pseudonym being used. (There are enduring pseudonyms with valuable credentials.) It can handle large payloads using private information retrieval, and can also do pseudonymous digital transactions with a latency of two seconds rather than the hour or so that bitcoin takes. The anonymous payment system has the property that the payer has proof of what he paid to whom, while the recipient has no proof of who paid him; that’s exactly what corrupt officials, money launderers and the like don’t want, but exactly what we do want from the viewpoint of consumer protection. He sees PrivaTegrity as the foundation of a “polyculture” of secure computing from multiple vendors that could be outside the control of governments once more. In questions, Adi Shamir questioned whether such an ecosystem was consistent with the reality of pervasive software vulnerabilities, regardless of the strength of the cryptography.
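The mixing idea underlying such designs can be illustrated with a toy decryption mix-net in the spirit of Chaum’s original mixes. To be clear, this is not c-Mix itself (whose point is to move the expensive public-key work into the homomorphic precomputation), and the XOR keystream below is not cryptographically sound; the sketch only conveys how layered blinding plus per-server shuffling unlinks senders from delivered messages.

```python
import hashlib
import random

def keystream(key, n):
    """Toy keystream from iterated hashing; NOT cryptographically sound."""
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

server_keys = [b"server-1", b"server-2", b"server-3"]  # hypothetical mix servers

def wrap(msg):
    """Sender adds one layer of blinding per mix server (XOR layers commute)."""
    for k in server_keys:
        msg = xor(msg, keystream(k, len(msg)))
    return msg

def run_mix(batch):
    """Each server strips its layer and shuffles, unlinking input and output order."""
    for k in server_keys:
        batch = [xor(m, keystream(k, len(m))) for m in batch]
        random.shuffle(batch)
    return batch
```

After the last server the plaintexts emerge, but no single server knows the full mapping from input slots to output slots; only a coalition of all three could trace a message.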

I will try to liveblog later sessions as followups to this post.

Internet of Bad Things

A lot of people are starting to ask about the security and privacy implications of the “Internet of Things”. Once there’s software in everything, what will go wrong? We’ve seen a botnet recruiting CCTV cameras, and a former Director of GCHQ recently told a parliamentary committee that it might be convenient if a suspect’s car could be infected with malware that would cause it to continually report its GPS position. (The new Investigatory Powers Bill will give the police and the spooks the power to hack any device they want.)

So here is the video of a talk I gave on The Internet of Bad Things to the Virus Bulletin conference. As the devices around us become smarter they will become less loyal, and it’s not just about malware (whether written by cops or by crooks). We can expect all sorts of novel business models, many of them exploitative, as well as some downright dishonesty: the recent Volkswagen scandal won’t be the last.

But dealing with pervasive malware in everything will demand new approaches. Our approach to the Internet of Bad Things includes our new Cambridge Cybercrime Centre, which will let us monitor bad things online at the kind of scale that will be required.

Efficient multivariate statistical techniques for extracting secrets from electronic devices

That’s the title of my PhD thesis, supervised by Markus Kuhn, which has become available recently as CL tech report 878:
http://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-878.html

In this thesis I provide a detailed presentation of template attacks, which are considered the most powerful kind of side-channel attack, and I present several methods for implementing and evaluating these attacks efficiently in different scenarios.

Among other things, these contributions may allow evaluation labs to perform their evaluations faster; they show that we can determine an 8-bit target value almost perfectly even when that value is manipulated by only a single LOAD instruction (possibly the best published results of this kind); and they show how to cope with differences across devices.
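The two phases of a template attack can be sketched as follows. This is a deliberately stripped-down illustration with a hypothetical leakage model (each trace sample centred on the target value itself) and a 4-bit target for brevity; the thesis deals with real traces, 8-bit targets and far more efficient implementations.

```python
import numpy as np

rng = np.random.default_rng(0)
SAMPLES = 4  # points of interest per trace

def acquire(v, n):
    """Hypothetical leakage: each sample is the value v plus Gaussian noise."""
    return v + rng.normal(0.0, 0.1, size=(n, SAMPLES))

values = range(16)

# Profiling phase: estimate a mean vector per candidate value
# and a pooled covariance matrix from the residuals.
means, residuals = {}, []
for v in values:
    traces = acquire(v, 200)
    means[v] = traces.mean(axis=0)
    residuals.append(traces - means[v])
precision = np.linalg.inv(np.cov(np.vstack(residuals).T))

def attack(trace):
    """Attack phase: pick the value whose multivariate Gaussian template
    gives the smallest Mahalanobis distance (highest log-likelihood)."""
    def distance(v):
        d = trace - means[v]
        return d @ precision @ d
    return min(values, key=distance)
```

Using a single pooled covariance matrix, rather than one per value, is one of the practical choices the thesis evaluates; with few profiling traces it is often the more robust estimator.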

Some of the datasets used in my experiments along with MATLAB scripts for reproducing my results are available here:
http://www.cl.cam.ac.uk/research/security/datasets/grizzly/


Emerging, fascinating, and disruptive views of quantum mechanics

I have just spent a long weekend at Emergent Quantum Mechanics (EmQM15). This workshop is organised every couple of years by Gerhard Groessing and is the go-to place if you’re interested in whether quantum mechanics dooms us to a universe (or multiverse) that can be causal or local but not both, or whether we might just make sense of it after all. It’s held in Austria – the home not just of the main experimentalists working to close loopholes in the Bell tests, such as Anton Zeilinger, but of many of the physicists still looking for an underlying classical model from which quantum phenomena might emerge. The relevance to the LBT audience is that the security proofs of quantum cryptography, and the prospects for quantum computing, turn on this obscure area of science.

The two themes emergent from this year’s workshop are both relevant to these questions; they are weak measurement and emergent global correlation.

Weak measurement goes back to the 1980s and the thesis of Lev Vaidman. The idea is that you can probe the trajectory of a quantum mechanical particle by making many measurements of a weakly coupled observable between preselection and postselection operations. This has profound theoretical implications, as it means that the Heisenberg uncertainty limit can be stretched in carefully chosen circumstances; Masanao Ozawa has come up with a more rigorous version of the Heisenberg bound, and in fact gave one of the keynote talks two years ago. Now all of a sudden there are dozens of papers on weak measurement, exploring all sorts of scientific puzzles. This leads naturally to the question of whether weak measurement is any good for breaking quantum cryptosystems. After some discussion with Lev I’m convinced the answer is almost certainly no; getting information about quantum states takes exponentially much work and lots of averaging, and works only in specific circumstances, so it’s easy for the designer to forestall. There is however a question around interdisciplinary proofs. Physicists have known about weak measurement since 1988 (even if few paid attention till a few years ago), yet no-one has rushed to tell the crypto community “Sorry, guys, when we said that nothing can break the Heisenberg bound, we kinda overlooked something.”

The second theme, emergent global correlation, may be of much more profound interest, to cryptographers and physicists alike.

Continue reading Emerging, fascinating, and disruptive views of quantum mechanics

CHERI: Architectural support for the scalable implementation of the principle of least privilege

[CHERI tablet photo]
FPGA-based CHERI prototype tablet — a 64-bit RISC processor that boots CheriBSD, a CHERI-enhanced version of the FreeBSD operating system.
Only slightly overdue, this post is about our recent IEEE Security and Privacy 2015 paper, CHERI: A Hybrid Capability-System Architecture for Scalable Software Compartmentalization. We’ve previously written about how our CHERI processor blends a conventional RISC ISA and processor pipeline design with a capability-system model to provide fine-grained memory protection within virtual address spaces (ISCA 2014, ASPLOS 2015). In this new paper, we explore how CHERI’s capability-system features can be used to implement fine-grained and scalable application compartmentalisation: many (many) sandboxes within a single UNIX process — a far more efficient and programmer-friendly target for secure software than current architectures.
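To give a flavour of the capability model, here is a toy software sketch of the idea: a capability bundles an address range with permissions, every access is checked against it, and new capabilities can only be derived with equal or less authority. This is purely conceptual; real CHERI enforces bounds, permissions and monotonic derivation in hardware on tagged capability registers, not in Python.

```python
from dataclasses import dataclass

memory = bytearray(1024)  # a flat toy "address space"

@dataclass(frozen=True)
class Capability:
    base: int         # lowest address this capability authorises
    length: int       # size of the authorised region
    perms: frozenset  # subset of {"load", "store"}

    def _check(self, offset, size, perm):
        if perm not in self.perms:
            raise PermissionError(f"capability lacks {perm!r} permission")
        if offset < 0 or offset + size > self.length:
            raise IndexError("capability bounds violated")

    def load(self, offset, size=1):
        self._check(offset, size, "load")
        start = self.base + offset
        return bytes(memory[start:start + size])

    def store(self, offset, data):
        self._check(offset, len(data), "store")
        start = self.base + offset
        memory[start:start + len(data)] = data

    def restrict(self, offset, length, perms):
        """Derive a narrower capability; authority can only shrink, never grow."""
        if offset < 0 or offset + length > self.length or not perms <= self.perms:
            raise ValueError("cannot amplify a capability")
        return Capability(self.base + offset, length, frozenset(perms))
```

The `restrict` operation is the compartmentalisation hook: a parent can hand a sandbox a read-only view of a sub-region, and nothing the sandbox does can widen it back.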

Continue reading CHERI: Architectural support for the scalable implementation of the principle of least privilege

Four cool new jobs

We’re advertising for four people to join the security group from October.

The first three are for two software engineers to join our new cybercrime centre, to develop new ways of finding bad guys in the terabytes and (soon) petabytes of data we get on spam, phish and other bad stuff online; and a lawyer to explore and define the boundaries of how we share cybercrime data.

The fourth is in Security analysis of semiconductor memory. Could you help us come up with neat new ways of hacking chips? We’ve invented quite a few of these in the past, ranging from optical fault induction to semi-invasive attacks more generally. What’s next?

Double bill: Password Hashing Competition + KeyboardPrivacy

Two interesting items from Per Thorsheim, founder of the PasswordsCon conference that we’re hosting here in Cambridge this December (you still have one month to submit papers, BTW).

First, the Password Hashing Competition “have selected Argon2 as a basis for the final PHC winner”, which will be “finalized by end of Q3 2015”. This is about selecting a new password hashing scheme to improve on the state of the art and make brute force password cracking harder. Hopefully we’ll have some good presentations about this topic at the conference.
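Argon2 itself is not in Python’s standard library, but the stdlib’s `hashlib.scrypt`, an earlier memory-hard design, illustrates the property the competition was judging: making each password guess cost the attacker significant memory as well as time, which cripples GPU and ASIC cracking rigs.

```python
import hashlib
import hmac
import os

# scrypt cost parameters: n=2**14, r=8 makes each guess touch ~16 MiB of RAM
N, R, P = 2**14, 8, 1

def hash_password(password, salt=None):
    """Hash a password with a random per-user salt."""
    salt = salt or os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt,
                            n=N, r=R, p=P, maxmem=2**25)
    return salt, digest

def verify_password(password, salt, digest):
    """Re-derive and compare in constant time."""
    candidate = hashlib.scrypt(password.encode(), salt=salt,
                               n=N, r=R, p=P, maxmem=2**25)
    return hmac.compare_digest(candidate, digest)
```

Argon2 adds independent knobs for memory, parallelism and iteration count, plus side-channel-resistant variants, which is broadly why it won; in production you would use a maintained Argon2 binding rather than roll your own scheme.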

Second, and unrelated: Per Thorsheim and Paul Moore have launched a privacy-protecting Chrome plugin called Keyboard Privacy to guard your anonymity against websites that look at keystroke dynamics to identify users. So, you might go through Tor, but the site recognizes you by your typing pattern and builds a typing profile that “can be used to identify you at other sites you’re using, where identifiable information is available about you”. Their plugin intercepts your keystrokes, batches them up and delivers them to the website at a constant pace, interfering with the site’s ability to build a profile that identifies you.
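The underlying idea is simple enough to sketch (in Python here, though the actual plugin is of course browser JavaScript; class and method names below are hypothetical): keystrokes arrive with the user’s natural, identifying rhythm, get queued, and are released at a uniform cadence so the site only ever observes constant inter-key intervals.

```python
from collections import deque

class ConstantPaceBuffer:
    """Queue keystrokes and release them at a fixed cadence,
    erasing the user's natural (and identifying) inter-key timing."""

    def __init__(self, interval_ms=50):
        self.interval_ms = interval_ms
        self.queue = deque()

    def key_pressed(self, key, t_ms):
        # t_ms is the user's real, irregular timestamp; we deliberately drop it
        self.queue.append(key)

    def release_schedule(self, start_ms):
        # deliver the queued keys at uniform intervals
        return [(start_ms + i * self.interval_ms, key)
                for i, key in enumerate(self.queue)]
```

The trade-off is latency: the constant pace must be slow enough to absorb typing bursts, so the text trickles onto the page slightly behind the user’s fingers.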

Crypto Wars 2.0

Today we unveil a major report on whether law enforcement and intelligence agencies should have exceptional access to cryptographic keys and to our computer and communications data generally. David Cameron has called for this, as have US law enforcement leaders such as FBI Director James Comey.

This policy repeats a mistake of the 1990s. The Clinton administration tried for years to seize control of civilian cryptography, first with the Clipper Chip, and then with various proposals for ‘key escrow’ or ‘trusted third party encryption’. Back then, a group of experts on cryptography and computer security got together to explain why this was a bad idea. We have now reconvened in response to the attempt by Cameron and Comey to resuscitate the old dead horse of the 1990s.

Our report, Keys Under Doormats: Mandating insecurity by requiring government access to all data and communications, is timed to set the stage for a Wednesday hearing of the Senate Judiciary Committee at which Mr Comey will present his proposals. The reply to Comey will come from Peter Swire, who was on the other side twenty years ago (he was a Clinton staffer) and has written a briefing on the first crypto war here. Peter was recently on President Obama’s NSA review group. He argues that the real way to fix the problems complained of is to fix the mutual legal assistance process – which is also my own view.

Our report is also highly relevant to the new ‘Snoopers’ Charter’ that Home Secretary Theresa May has promised to put before parliament this fall. Mrs May has made clear she wants access to everything.

However, this is both wrong in principle and unworkable in practice. Building back doors into all computer and communication systems is against most of the principles of security engineering, and it is also against the principles of human rights. Our right to privacy, set out in Article 8 of the European Convention on Human Rights, can only be overridden by mechanisms that meet three tests. First, they must be set out in law, with sufficient clarity for their effects to be foreseeable; second, they must be proportionate; third, they must be necessary in a democratic society. As our report makes clear, universal exceptional access will fail all these tests by a mile.

For more, see the New York Times.