A dubious cyber security conference

I’ve written before about dubious “academic” journals… and today I’m going to discuss a dubious “academic” conference (which is associated with some dubious journals, but it’s the conference that’s my focus today).

Fordham University has been running the “International Conference on Cyber Security” since 2009 and ICCS 2016 (labelled “Sixth” because they skipped 2011 and 2014) will take place in New York in July. This conference has an extremely reputable program committee and is run by Fordham and the Federal Bureau of Investigation (I expect you’ve heard of them … they investigate cybercrime in the USA…).

There’s also another “International Conference on Cyber Security (ICCS 2016)” running this year as well … it will take place in Zurich in July and is run by WASET (the World Academy of Science, Engineering and Technology). The program committee for this one is somewhat less prestigious (I’m sorry to say that I have not heard of any of them) … and to my mind the most reputable-looking person is “Wei Yan of Trend Micro” … except he’s currently on his fourth job since he left Trend Micro in 2010, so that makes me wonder how many of the people on the list know that they’re mentioned?

There are other reasons for feeling this conference might be a little dubious, not least that this is apparently the “Eighteenth ICCS”. That might lead you to believe that there have been seventeen previous ICCS events … but I did a lot of searches and failed to find any of them!

My searches did turn up the “2nd International Conference on Cyber Security (ICCS) 2016” which will take place at the Rajasthan Technical University, India — this one looks pretty respectable, with PC members from India and the USA.

So if you fancy going to a cyber security conference in 2016 then you are spoilt for choice, but I would not myself recommend travelling to Zurich. A key reason is that the Dorint Airport-Hotel, where ICCS 2016 is to be held, may turn out to be a little crowded … the same hotel is hosting no fewer than 160 other international conferences at exactly the same time: click here for the full list!

Alternatively, if you can’t make it this year, put a note in your diary. The “31st International Conference on Cyber Security (ICCS 2029)” is planned to take place in Zurich on July 21–22, 2029 … Wei Yan is on the PC for that one too … and the submission deadline is as soon as March 31, 2029, so best to get a move on with finishing that paper!

As a final note, invited papers from ICCS 2016 (the Zurich version) are to be published in a special issue of “Advances in Cyber Security”. Now you might cynically think that this was an open access journal from WASET, but no, they have no journal with that title (and in fact neither does anyone else) … but what do you know, “Advances in Cyber Security” is a fine-looking book published in December 2012 by none other than Fordham University Press. Small world, isn’t it!

And the winners are…


The Inter-ACE Cyberchallenge on Saturday was fantastic. The event saw nearly twice as many competitors as attended the C2C competition in Boston recently, engaged in solving the most artful challenges. It was great to see so many students interested in cyber security making the effort to travel from the four corners of the UK, a few from as far away as Belfast!

The competition was played out on a “Risk-style” world map, and competing teams had to fight each other for control of several countries, each protected by a fiendish puzzle. A number of universities had also submitted guest challenges, and it was great that so many teams got involved in this creative process too. To give one example: the Cambridge team had designed a challenge based around a historically accurate Enigma machine, with this challenge protecting the country of Panama. Competitors had to brute-force the settings of the Enigma machine to decode a secret message. Other challenges were based around the core CTF subject areas of web application security, binary reverse engineering and exploitation, forensics, and crypto. Some novice teams may have struggled to compete, but they would have learned a lot, and hopefully developed an appetite for more competition. There were also plenty of teams present with advanced tool sets and a solid plan, and these preparations clearly paid off in the final scores.
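For flavour, the brute-force idea can be sketched in miniature. The toy single-rotor cipher below is not a real Enigma; the rotor behaviour, offsets and crib word are all invented for illustration. The attack has the same shape, though: try every key setting and keep only those whose decryption contains an expected word (a “crib”).

```python
# Toy single-rotor cipher: each letter is shifted by (offset + position) mod 26,
# mimicking a rotor that steps once per keypress. We brute-force the unknown
# starting offset by looking for a crib word in each candidate decryption.
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def toy_rotor_encrypt(plaintext, offset):
    out = []
    for i, ch in enumerate(plaintext):
        shift = (offset + i) % 26  # rotor advances one step per letter
        out.append(ALPHABET[(ALPHABET.index(ch) + shift) % 26])
    return "".join(out)

def toy_rotor_decrypt(ciphertext, offset):
    out = []
    for i, ch in enumerate(ciphertext):
        shift = (offset + i) % 26
        out.append(ALPHABET[(ALPHABET.index(ch) - shift) % 26])
    return "".join(out)

def brute_force(ciphertext, crib):
    """Try every starting offset; keep settings whose plaintext contains the crib."""
    hits = []
    for offset in range(26):
        guess = toy_rotor_decrypt(ciphertext, offset)
        if crib in guess:
            hits.append((offset, guess))
    return hits

ct = toy_rotor_encrypt("ATTACKPANAMAATDAWN", offset=7)
candidates = brute_force(ct, crib="PANAMA")
print(candidates)  # [(7, 'ATTACKPANAMAATDAWN')] -- only the true offset survives
```

A real Enigma has a far larger keyspace (rotor choice, ring settings, plugboard), but the crib-driven search over candidate settings is the same basic technique.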


Between the 10 teams, their coaches, the organisers and the reporters, the lab was bustling with excitement and that intense feeling of hackers “in the zone” for the whole afternoon.


I have nothing but praise for our partners Facebook, who worked hard on setting the challenges and making the CTF game run smoothly, as well as feeding the participants pizza and providing prizes of hacking books and goodie bags.


The biggest thanks go to the ACE-CSRs who enthusiastically supported this initiative despite the short notice. 40 students came to Cambridge to compete in the live event in teams of four, and another 40+ competed remotely in the individual event.

 

In retrospect we should have organised a “best T-shirt” competition. I especially liked Facebook’s “Fix more, whine less” and “s/sleep/hack/g” shirts, but the one I would have voted overall winner (despite not technically being a T-shirt) was Southampton’s Shakespearian boolean logic.


It is with a mixture of pride and embarrassment that I announce the winners, as Cambridge won the gold in both the team and individual events.


Team event:

  • 1st place (Gold): University of Cambridge
    Stella Lau, Will Shackleton, Cheng Sun, Gábor Szarka
  • 2nd place (Silver): Imperial College London
    Matthieu Buffet, Jiarou Fan, Luke Granger-Brown, Antoine Vianey-Liaud
  • 3rd place (Bronze): University of Southampton
    Murray Colpman, Kier Davis, Yordan Ganchev, Mohit Gupta

 

Individual event:

  • 1st place (Gold): Dimitrije Erdeljan, University of Cambridge
  • 2nd place (Silver): Emma Espinosa, University of Oxford
  • 3rd place (Bronze): David Young, University of Southampton


I shall ignore allegations of having rigged the game except to say that yes, we did train our students rather extensively in preparation for the previously-mentioned Cambridge 2 Cambridge event with MIT. All of our winners are Cambridge undergraduates in computer science who had done well in the qualifiers for C2C. Two of them had actually been to Boston, where Gábor had been on the winning team overall and earned one gold and two silver medals, while Will (also a former UK Cyber Security Challenge winner) had earned one gold, one silver and two bronze medals. Well-deserved thanks also to my modest but irreplaceable collaborator Graham Rymer, who designed and delivered an effective and up-to-date ethical hacking course to our volunteers. The Cambridge success in this weekend’s competition gives promising insights into the effectiveness of this training, which we are gearing up to offer to all our undergraduates and potentially to other interested audiences in the future.


We are once again grateful to everyone who took part. We are also grateful to the Cabinet Office, to EPSRC and to GCHQ for support that will allow us to keep the event running. We hereby invite all the ACEs to sharpen their hacking tools for next year and come back to attempt to reconquer the trophy from us.

Inter-ACE cyberchallenge at Cambridge

The best student hackers from the UK’s 13 Academic Centres of Excellence in Cyber Security Research are coming to Cambridge for the first Inter-ACE Cyberchallenge tomorrow, Saturday 23 April 2016.

The event is organized by the University of Cambridge in partnership with Facebook. It is loosely patterned on other inter-university sport competitions, in that each university enters a team of four students and the winning team takes home a trophy that gets engraved with the name of their university and is then passed on to the next winning team the following year.
Participation in the Inter-ACE cyberchallenge is open only to Universities accredited as ACEs under the EPSRC/GCHQ scheme. 10 of the 13 ACEs have entered this inaugural edition: alphabetically, Imperial College, Queen's University Belfast, Royal Holloway University of London, University College London, University of Birmingham, University of Cambridge (hosting), University of Kent, University of Oxford, University of Southampton, University of Surrey. The challenges are set and administered by Facebook, but five of the ten competing institutions have also sent Facebook an optional “guest challenge” for others to solve.
The players compete in a CTF involving both “Jeopardy-style” and “attack-defense-style” aspects. Game progress is visualized on a world map somewhat reminiscent of Risk, where teams attempt to conquer and re-conquer world countries by solving associated challenges.
We designed the Inter-ACE cyberchallenge riding on the success of the Cambridge2Cambridge cybersecurity challenge we ran in collaboration with MIT last March. In that event, originally planned following a January 2015 joint announcement by US President Barack Obama and UK Prime Minister David Cameron, six teams of students took part in a 24-hour Capture-The-Flag involving several rounds and spin-out individual events such as “rapid fire” (where challengers had to break into four different vulnerable binaries under time pressure) and “lock picking”, also against the clock and against each other. The challenges were expertly set and administered by ForAllSecure, a cybersecurity spin-off from Carnegie Mellon University.
With generous support from the UK consulate in Boston, we were able to fly 10 Cambridge students to MIT. By design, we mixed people from both universities in each team, to promote C2C as an international cooperation and a bridge-building exercise. Thanks to the generosity of the many sponsors of the event, particularly Microsoft who funded the cash prizes, the winning team “Johnny Cached”, consisting of two MIT and two Cambridge students, walked away with 15,000 USD. Many other medals were awarded for various achievements throughout the event. Everyone came back with a sense of accomplishment and with connections with new like-minded and highly skilled friends across the pond.
In both the C2C and the Inter-ACE I strove to design the rules in a way that would encourage participation not just from the already-experienced but also from interested inexperienced students who wanted to learn more. So, in C2C I designed a scheme where (following a pre-selection to rank the candidates) each team would necessarily include both experienced players and novices; whereas in Inter-ACE, where each University clearly had the incentive of picking their best players to send to Cambridge to represent them, I asked our technical partners Facebook to provide a parallel online competition that could be entered into remotely by individual students who were not on their ACE’s team. This way nobody who wanted to play was left out.
Industry and government (ours, but probably also those of whatever other country you’re reading this blog post from) concur that we need more cybersecurity experts. They can’t hire the good ones fast enough. A recent Washington Post article lamented that “Universities aren’t doing enough to train the cyberdefenders America desperately needs”. Well, some of us are, and are taking the long term view.
As an educator, I believe the role of a university is to teach the solid foundations, the timeless principles, and especially “learning how to learn”, rather than the trick of the day; so I would not think highly of a hacking-oriented university course that primarily taught techniques destined to become obsolete in a couple of years. On the other hand, a total disconnect between theory and practice is also inappropriate. I’ve always introduced my students to lockpicking at the end of my undergraduate security course, both as a metaphor for the attack-defense interplay that is at the core of security (a person unskilled at picking locks has no hope of building a new lock that can withstand determined attacks; you can only beat the bad guys if you’re better than them) and to underline that the practical aspects of security are also relevant, and even fun. It has always been enthusiastically received, and has contributed to making more students interested in security.
I originally agreed to get involved in organizing Cambridge 2 Cambridge, with my esteemed MIT colleague Dr Howie Shrobe, precisely because I believe in the educational value of exposing our students to practical hands-on security. The C2C competition was run as a purely vocational event for our students, something they did during evenings and weekends if they were interested, and on condition it would not interfere with their coursework. However, taking on the role of co-organizing C2C allowed me, with thanks to the UK Cabinet Office, to recruit a precious full time collaborator, experienced ethical hacker Graham Rymer, who has since been developing a wealth of up-to-date training material for C2C. My long term plan, already blessed by the department, is to migrate some of this material into practical exercises for our official undergraduate curriculum, starting from next year. I think it will be extremely beneficial for students to get out of University with a greater understanding of the kind of adversaries they’re up against when they become security professionals and are tasked with defending the infrastructure of the organization that employs them.
Another side benefit of these competitions, as already remarked, is the community building, the forging of links between students. We don’t want merely to train individuals: we want to create a new generation of security professionals, a strong community of “good guys”. And if they met each other at the Inter-ACE when they were little, they’re going to have a much stronger chance of actively collaborating ten years later when they’re grown-ups and have become security consultants, CISOs or heads of homeland security back wherever they came from. Sometimes I have to fight with narrow-minded regulations that would only, say, offer scholarships in security to students who could pass security clearance. Well, playing by such rules makes the pool too small. For as long as I have been at Cambridge, the majority of the graduates and faculty in our security research group have been “foreigners” (myself included, of course). A university that only worked with students (and staff, for that matter) from its own country would be at a severe disadvantage compared to those, like Cambridge, that accept and train the best in the whole world. I believe we can only nurture and bring out the best student hackers in the UK in a stimulating environment where their peers are the best student hackers from anywhere else in the world. We need to take the long term view and understand that we cannot reach critical mass without this openness. We must show how exciting cybersecurity is to those clever students who don’t know it yet, whatever their gender, prior education, social class, background, even (heaven forbid) those scary foreigners, hoo hoo, because it’s only by building a sufficiently large ecosystem of skilled, competent and ethically trained good guys that employers will have enough good applicants “of their preferred profile” in the pool they want to fish in for recruitment purposes.
My warmest thanks to my academic colleagues leading the other ACE-CSRs who have responded so enthusiastically to this call at very short notice, and to the students who have been so keen to come to Cambridge for this Inter-ACE despite it being so close to their exam season. Let’s celebrate this diversity of backgrounds tomorrow and forge links between the best of the good guys, wherever they’re from. Going forward, let’s attract more and more brilliant young students to cybersecurity, to join us in the fight to make the digital society safe for all, within and across borders.

More Jobs in the Cloud Cybercrime Centre

The Cambridge Cloud Cybercrime Centre (more information about our vision for this initiative is in this earlier article) has up to three Research Associate / Research Assistant positions to fill.

We are looking for enthusiastic researchers to work with the substantial amounts of cybercrime data that we will be collecting. The people we appoint will have the chance to define their own goals and objectives and pursue them independently or as part of a team. We will also expect everyone to assist with automating the processing of our incoming data feeds and adding value to them.

We are not necessarily looking for existing experience in researching cybercrime, although this would be a bonus. However, we are looking for strong programming skills — and experience with scripting languages and databases would be much preferred. Good knowledge of English and communication skills are important.

Please follow this link to the formal advertisement for the details about exactly who and what we’re looking for and how to apply — and please pay attention to our request that the covering letter you create as part of the application should explain which particular aspects of cybercrime research are of interest to you.

Security Protocols 2016

I’m at the 24th Security Protocols Workshop in Brno (no, not Borneo, as a friend misheard it, but in the Czech Republic; a two-hour flight rather than a twenty-hour one). We ended up being bumped to an old chapel in the Mendel Museum, a former monastery where the monk Gregor Mendel figured out genetics from the study of peas, for the prosaic reason that the Canadian ambassador pre-empted our meeting room. As a result we had no wifi and I have had to liveblog from the pub, where we are having lunch. The session liveblogs will be in followups to this post, in the usual style.

Met police chief blaming the victims

Commissioner Hogan-Howe of the Met said on Thursday that the banks should not refund fraud victims because it “rewards” them for being lax about internet security. This was too much to pass up, so I wrote a letter to the editor of the Times, which has just been published. As the Times is behind a paywall, here is the text.

Sir, Sir Bernard Hogan-Howe argues that banks should not refund online fraud victims as this would make people careless with their passwords and anti-virus software (p1, March 24, and letters Mar 25 & 26). This is called secondary victimisation. Thirty years ago, a chief constable might have said that rape victims had themselves to blame for wearing nice clothes; if he were to say that nowadays, he’d be sacked. Hogan-Howe’s view of bank fraud is just as uninformed, and just as offensive to victims.

About 5 percent of computers running Windows are infected with malware, and common bank fraud malware such as Zeus lets the fraudster redirect transactions. You think you’re paying £150 to your electricity supplier, while the malware is actually sending £9,000 to Russia. The average person is helpless against this; everything seems normal, and antivirus products usually only detect it afterwards.

Much of the blame lies with the banks, who let the users of potentially infected computers make large payments instantly, rather than after a day or two, as used to be the case. They take this risk because regulators let them dump much of the cost of the resulting fraud on customers.

The elephant in the room is that the Met has been claiming for years that property crime is falling, when in fact it’s just going online like everything else. We’re now starting to get better crime figures; it’s time we got better policing, and better bank regulation too.

Ross Anderson FRS FREng
Professor of Security Engineering
University of Cambridge

What do we mean by scale?

Online booking is now open for the annual Lovelace lecture, which I’m giving in London on March 15th. I’ll be using the event to try to develop a cross-cutting theme that arises from my research but is of wider interest. What do we mean by scale?

Back when computing was expensive, computer science started out worrying about how computations scaled – such as which sorting algorithms ran as n², n log n or even n. Once software development became the bottleneck, we started off by trying the same approach (measuring code entropy or control graph complexity), but moved on to distinguishing what types of complexity could be dealt with by suitable tools. Mass-market computing and communications brought network effects, and scalability started to depend on context (this is where security economics came in). Now, as we move to “Big Data”, the dependency on people becomes more explicit. Few people have stopped to think of human factors in scaling terms. Do we make information about many people available to many, or to few? What about the complexity of the things that can be done with personal data? What about costs now versus in the future, and the elasticity of demand associated with such costs? Do you just count the data subjects, do you count the attackers too, or do you add the cops as well?

I’ve been quoted as saying “You can have security, functionality, scale – choose any two” or words to that effect. I’ll discuss this and try to sketch the likely boundaries, as well as future research directions. The discussion will cross over from science and engineering to economics and politics; recent proposed legislation in the UK, and court cases in the USA, would impose compliance burdens on people trying to scale systems up from one country to many.

The students we’re training to be the next generation of developers and entrepreneurs will need a broader understanding of what’s involved in scaling systems up, and in this talk I’ll try to explore what that means. Maybe I’m taking a risk with this talk, as I’m trying to assemble into a row a lot of facts that are usually found in different columns. But I do hope it will not be boring.

My Yahoo! password histograms are now available (with differential privacy!)

Five years ago, I compiled a dataset of password histograms representing roughly 70 million Yahoo! users. It was the largest password dataset ever compiled for research purposes. The data was a key component of my PhD dissertation the next year and motivated new statistical methods for which I received the 2013 NSA Cybersecurity Award.

I had always hoped to share the data publicly. It consists only of password histograms, not passwords themselves, so it seemed reasonably safe to publish. But without a formal privacy model, Yahoo! didn’t agree. Given the history of deanonymization work, caution is certainly in order. Today, thanks to new differential privacy methods described in a paper published at NDSS 2016 with colleagues Jeremiah Blocki and Anupam Datta, a sanitized version of the data is publicly available.
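For readers curious what such sanitisation can look like, here is a minimal sketch of the classic Laplace mechanism applied to a histogram of password frequency counts. This illustrates the general differential-privacy approach only; it is not the mechanism from the NDSS paper, and the counts and epsilon below are invented for the example.

```python
# Laplace mechanism sketch: add Laplace(0, 1/epsilon) noise to each histogram
# count. Each user contributes one password, so adding or removing a user
# changes one count by 1 (sensitivity 1), giving noise scale 1/epsilon.
import random

def laplace_noise(scale, rng):
    # Laplace(0, scale) sampled as the difference of two exponentials.
    return scale * (rng.expovariate(1.0) - rng.expovariate(1.0))

def sanitize_histogram(counts, epsilon, seed=0):
    """Return a noisy copy of the counts, rounded and clipped to >= 0."""
    rng = random.Random(seed)
    return [max(0, round(c + laplace_noise(1.0 / epsilon, rng)))
            for c in counts]

counts = [12000, 4300, 950, 7]                # hypothetical frequency counts
noisy = sanitize_histogram(counts, epsilon=0.5)
```

Smaller epsilon means stronger privacy but noisier counts; rare passwords (like the count of 7 above) are the ones most distorted by the noise, which is exactly the point.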


“Do you see what I see?” ask Tor users, as a large number of websites reject them but accept non-Tor users

If you use an anonymity network such as Tor on a regular basis, you are probably familiar with various annoyances in your web browsing experience, ranging from pages saying “Access denied” to having to solve CAPTCHAs before continuing. Interestingly, these hurdles disappear if the same website is accessed without Tor. The growing trend of websites extending this kind of “differential treatment” to anonymous users undermines Tor’s overall utility, and adds a new dimension to the traditional threats to Tor (attacks on user privacy, or governments blocking access to Tor). There is plenty of anecdotal evidence about Tor users experiencing difficulties in browsing the web, for example the user-reported catalog of services blocking Tor. However, we don’t have sufficient detail about the problem to answer deeper questions like: how prevalent is differential treatment of Tor on the web; are there any centralized players with Tor-unfriendly policies that have a magnified effect on the browsing experience of Tor users; can we identify patterns in where these Tor-unfriendly websites are hosted (or located), and so forth.

Today we present our paper on this topic: “Do You See What I See? Differential Treatment of Anonymous Users” at the Network and Distributed System Security Symposium (NDSS). Together with researchers from the University of Cambridge, University College London, University of California, Berkeley and International Computer Science Institute (Berkeley), we conducted comprehensive network measurements to shed light on websites that block Tor. At the network layer, we scanned the entire IPv4 address space on port 80 from Tor exit nodes. At the application layer, we fetched the homepage from the most popular 1,000 websites (according to Alexa) from all Tor exit nodes. We compared these measurements with a baseline from non-Tor control measurements, and uncovered significant evidence of Tor blocking. We estimate that at least 1.3 million IP addresses that would otherwise allow a TCP handshake on port 80 block the handshake if it originates from a Tor exit node. We also show that at least 3.67% of the most popular 1,000 websites block Tor users at the application layer.
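To make the comparison concrete, here is a hedged sketch of the kind of decision rule such measurements suggest. This is not the paper's actual pipeline; the site names, status codes and threshold are assumptions for illustration. The idea: a site counts as treating Tor differently if the control fetch succeeds while fetches from most Tor exit nodes are refused.

```python
# Classify a site as blocking Tor when the non-Tor control fetch succeeds
# but fetches from at least `threshold` of the Tor exits fail or return a
# status typical of block pages. (Illustrative rule, invented data.)
BLOCK_SIGNALS = {403, 429}  # statuses commonly seen on Tor block pages

def flags_differential_treatment(control_status, exit_statuses, threshold=0.9):
    """control_status: HTTP status from a non-Tor vantage point.
    exit_statuses: statuses seen from each Tor exit (None = connection failed)."""
    if control_status != 200 or not exit_statuses:
        return False  # site is down or unmeasured: no conclusion either way
    blocked = sum(1 for s in exit_statuses if s is None or s in BLOCK_SIGNALS)
    return blocked / len(exit_statuses) >= threshold

measurements = {  # hypothetical per-site results: (control, per-exit statuses)
    "example-blocks-tor.test": (200, [403] * 95 + [200] * 5),
    "example-open.test": (200, [200] * 100),
}
blocking = [site for site, (ctrl, exits) in measurements.items()
            if flags_differential_treatment(ctrl, exits)]
print(blocking)  # ['example-blocks-tor.test']
```

Requiring a high fraction of exits to show the block signal distinguishes site-wide Tor policies from blacklists that cover only a few abusive exit IPs.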


Financial Cryptography 2016

I will be trying to liveblog Financial Cryptography 2016, which is the twentieth anniversary of the conference. The opening keynote was by David Chaum, who invented digital cash over thirty years ago. From then until the first FC, people believed that cryptography could enable commerce and also protect privacy; since then pessimism has slowly set in, and sometimes it seems that although we’re still fighting tactical battles, we’ve lost the war. Since Snowden people have little faith in online privacy, and now we see Tim Cook in a position to decide which seventy phones to open. Is there a way to fight back against a global adversary whose policy is “full take”, and where traffic data can be taken with no legal restraint whatsoever? That is now the threat model for designers of anonymity systems. He argues that in addition to a large anonymity set, a future social media system will need a fixed set of servers in order to keep end-to-end latency within what chat users expect. As with DNS we should have servers operated by (say ten) different principals; unlike in that case we don’t want to have most of the independent parties financed by the US government. The root servers could be implemented as unattended seismic observatories, as reported by Simmons in the arms control context; such devices are fairly easy to tamper-proof.

The crypto problem is how to do multi-jurisdiction message processing that protects not just content but also metadata. Systems like Tor cost latency, while multi-party computation costs a lot of cycles. His new design, PrivaTegrity, takes low-latency crypto building blocks, then layers transaction protocols with large anonymity sets on top of them. The key component is c-Mix, whose spec is up as an eprint here. There’s a precomputation using homomorphic encryption to set up paths and keys; in real-time operations each participating phone has a shared secret with each mix server so things can run at chat speed. A PrivaTegrity message is four c-Mix batches that use the same permutation. Message models supported include not just chat but publishing short anonymous messages, providing an untraceable return address so people can contact you anonymously, group chat, and limiting sybils by preventing more than one pseudonym being used. (There are enduring pseudonyms with valuable credentials.) It can handle large payloads using private information retrieval, and also do pseudonymous digital transactions with a latency of two seconds rather than the hour or so that bitcoin takes. The anonymous payment system has the property that the payer has proof of what he paid to whom, while the recipient has no proof of who paid him; that’s exactly what corrupt officials, money launderers and the like don’t want, but exactly what we do want from the viewpoint of consumer protection. He sees PrivaTegrity as the foundation of a “polyculture” of secure computing from multiple vendors that could be outside the control of governments once more. In questions, Adi Shamir questioned whether such an ecosystem was consistent with the reality of pervasive software vulnerabilities, regardless of the strength of the cryptography.
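To illustrate the basic mix-net shape that c-Mix builds on, here is a deliberately simplified toy. It is emphatically not c-Mix: there is no precomputation, no homomorphic encryption, and each server holds a single XOR key rather than a per-phone shared secret, so it shows only the strip-a-layer-and-permute structure that makes output messages unlinkable to their senders.

```python
# Toy decryption mix-net: senders apply one XOR "layer" per server; each
# server strips its layer from every message in the batch and then shuffles
# the batch. After the last server, the messages emerge in the clear but in
# an order unlinkable to the input order. (XOR with a fixed key is NOT real
# encryption -- this only illustrates the structure, not the security.)
import random

SERVER_KEYS = [0x1F2E, 0x3D4C, 0x5B6A]  # one toy key per mix server

def sender_blind(message):
    # Apply every server's layer up front (toy stand-in for the pads a
    # phone would derive from its shared secrets in a real design).
    for k in SERVER_KEYS:
        message ^= k
    return message

def mix_server(batch, key, rng):
    # Strip this server's layer from each message, then permute the batch,
    # breaking the link between input and output positions.
    stripped = [m ^ key for m in batch]
    rng.shuffle(stripped)
    return stripped

rng = random.Random(7)
messages = [111, 222, 333]
batch = [sender_blind(m) for m in messages]
for key in SERVER_KEYS:
    batch = mix_server(batch, key, rng)

print(sorted(batch))  # [111, 222, 333] -- same messages, order shuffled
```

In a real mix-net an honest shuffle by any single server suffices to hide the correspondence; c-Mix's contribution is moving the expensive public-key work into a precomputation so the real-time phase runs at chat speed.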

I will try to liveblog later sessions as followups to this post.