Emerging, fascinating, and disruptive views of quantum mechanics

I have just spent a long weekend at Emergent Quantum Mechanics (EmQM15). This workshop is organised every couple of years by Gerhard Groessing and is the go-to place if you’re interested in whether quantum mechanics dooms us to a universe (or multiverse) that can be causal or local but not both, or whether we might just make sense of it after all. It’s held in Austria – the home not just of the main experimentalists working to close loopholes in the Bell tests, such as Anton Zeilinger, but of many of the physicists still looking for an underlying classical model from which quantum phenomena might emerge. The relevance to the LBT audience is that the security proofs of quantum cryptography, and the prospects for quantum computing, turn on this obscure area of science.

The two themes that emerged from this year’s workshop are both relevant to these questions: weak measurement and emergent global correlation.

Weak measurement goes back to the 1980s and the thesis of Lev Vaidman. The idea is that you can probe the trajectory of a quantum-mechanical particle by making many measurements of a weakly coupled observable between preselection and postselection operations. This has profound theoretical implications, as it means that the Heisenberg uncertainty limit can be stretched in carefully chosen circumstances; Masanao Ozawa has come up with a more rigorous version of the Heisenberg bound, and in fact gave one of the keynote talks two years ago. Now all of a sudden there are dozens of papers on weak measurement, exploring all sorts of scientific puzzles. This leads naturally to the question of whether weak measurement is any good for breaking quantum cryptosystems. After some discussion with Lev I’m convinced the answer is almost certainly no; getting information about quantum states this way takes an exponential amount of work and a lot of averaging, works only in specific circumstances, and so is easy for the designer to forestall. There is however a question around interdisciplinary proofs. Physicists have known about weak measurement since 1988 (even if few paid attention till a few years ago), yet no-one has rushed to tell the crypto community “Sorry, guys, when we said that nothing can break the Heisenberg bound, we kinda overlooked something.”
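For readers who want the definition underlying all this (a standard textbook statement, not anything specific to the workshop): the weak value of an observable \hat{A}, with preselected state |\psi\rangle and postselected state \langle\phi|, is

    A_w = \frac{\langle \phi | \hat{A} | \psi \rangle}{\langle \phi | \psi \rangle}

When the overlap \langle\phi|\psi\rangle is small, the weak value can lie far outside the observable’s eigenvalue range; each individual weak measurement is extremely noisy, which is part of why extracting anything useful takes so many repetitions and so much averaging.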

The second theme, emergent global correlation, may be of much more profound interest, to cryptographers and physicists alike.


87% of Android devices insecure because manufacturers fail to provide security updates

We are presenting a paper at SPSM next week that shows that, on average over the last four years, 87% of Android devices are vulnerable to attack by malicious apps. This is because manufacturers have not provided regular security updates. Some manufacturers are much better than others, however, and our study shows that devices built by LG and Motorola, as well as those shipped under the Google Nexus brand, are much better than most. Users, corporate buyers and regulators can find further details on manufacturer performance at AndroidVulnerabilities.org.

We used data collected by our Device Analyzer app, which is available from the Google Play Store. The app collects data from volunteers around the globe and we have used data from over 20,000 devices in our study. As always, we are keen to recruit more contributors! We combined Device Analyzer data with information we collected on critical vulnerabilities affecting Android. We used this to develop the FUM score which can be used to compare the security provided by different manufacturers. Each manufacturer is given a score out of 10 based on: f, the proportion of devices free from known critical vulnerabilities; u, the proportion of devices updated to the most recent version; and m, the mean number of vulnerabilities the manufacturer has not fixed on any device.
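To make the scoring concrete, here is a minimal Python sketch of how a FUM-style score out of 10 can be assembled from those three quantities. The weights and the squashing of m onto a 0–1 range are illustrative placeholders for this sketch; the paper itself defines the exact formula.

    import math

    def fum_score(f, u, m, w_f=4.0, w_u=3.0, w_m=3.0):
        """Illustrative FUM-style score out of 10.

        f -- proportion of devices free from known critical vulnerabilities (0..1)
        u -- proportion of devices updated to the most recent version (0..1)
        m -- mean number of vulnerabilities not fixed on any device (>= 0)

        The weights and the mapping of m are placeholders for this sketch;
        the SPSM paper gives the exact definition.
        """
        m_component = 2.0 / (1.0 + math.exp(m))  # 1 when m == 0, falls towards 0 as m grows
        return w_f * f + w_u * u + w_m * m_component

    # A hypothetical manufacturer: 30% of devices vulnerability-free,
    # half on the latest version, 1.2 unfixed vulnerabilities on average.
    print(round(fum_score(f=0.3, u=0.5, m=1.2), 1))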

The problem with the lack of updates to Android devices is well known and recently Google and Samsung have committed to shipping security updates every month. Our hope is that by quantifying the problem we can help people when choosing a device and that this in turn will provide an incentive for other manufacturers and operators to deliver updates.

Google has done a good job at mitigating many of the risks, and we recommend users only install apps from Google’s Play Store since it performs additional safety checks on apps. Unfortunately Google can only do so much, and recent Android security problems have shown that this is not enough to protect users. Devices require updates from manufacturers, and the majority of devices aren’t getting them.

For further information, contact Daniel Thomas and Alastair Beresford via contact@androidvulnerabilities.org

Badness in the RIPE Database

The Cambridge Cloud Cybercrime Centre formally started work this week … but rather than writing about that I thought I’d document some publicly visible artefacts of improper behaviour (much of which, my experience tells me, is very likely to do with the sending of email spam).

RIPE is one of the five Regional Internet Registries (RIRs) and they have the responsibility of making allocations of IP address space to entities in Europe and the Middle East (ARIN deals with North America, APNIC with Asia and Australasia, LACNIC with Latin America and the Caribbean and AfriNIC with Africa).

Their public “WHOIS” databases document these allocations, and there are web interfaces to access them (for RIPE use https://apps.db.ripe.net/search/query.html).

The RIPE Database also holds a number of other sets of data including a set of “routes”. Unfortunately some of those routes are prima facie evidence of people behaving badly.
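If you want to poke at the route objects yourself, they can be pulled straight from the RIPE whois service over port 43. Here is a minimal Python sketch; the prefix queried is RIPE NCC’s own 193.0.0.0/21, used purely as a harmless example rather than one of the suspicious routes.

    import socket

    def ripe_whois(query, host="whois.ripe.net", port=43):
        """Send one query to the RIPE whois server and return the raw text response."""
        with socket.create_connection((host, port), timeout=10) as s:
            s.sendall((query + "\r\n").encode("ascii"))
            chunks = []
            while True:
                data = s.recv(4096)
                if not data:
                    break
                chunks.append(data)
        return b"".join(chunks).decode("utf-8", errors="replace")

    # "-T route" asks only for route objects covering the given prefix.
    print(ripe_whois("-T route 193.0.0.0/21"))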

CHERI: Architectural support for the scalable implementation of the principle of least privilege

[CHERI tablet photo]
FPGA-based CHERI prototype tablet — a 64-bit RISC processor that boots CheriBSD, a CHERI-enhanced version of the FreeBSD operating system.
Only slightly overdue, this post is about our recent IEEE Security and Privacy 2015 paper, CHERI: A Hybrid Capability-System Architecture for Scalable Software Compartmentalization. We’ve previously written about how our CHERI processor blends a conventional RISC ISA and processor pipeline design with a capability-system model to provide fine-grained memory protection within virtual address spaces (ISCA 2014, ASPLOS 2015). In this new paper, we explore how CHERI’s capability-system features can be used to implement fine-grained and scalable application compartmentalisation: many (many) sandboxes within a single UNIX process — a far more efficient and programmer-friendly target for secure software than current architectures.
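For readers unfamiliar with the capability model, here is a toy Python analogue of the core idea: a pointer that carries its own bounds and permissions, and that can only ever be narrowed before being handed to a sandbox. This is purely an illustration of the concept; the real mechanism is enforced by the CHERI hardware and looks nothing like this code.

    class Capability:
        """Toy software analogue of a capability: a reference carrying its own
        bounds and permissions, checked on every access."""

        def __init__(self, memory, base, length, perms=("load", "store")):
            self.memory, self.base, self.length = memory, base, length
            self.perms = frozenset(perms)

        def _check(self, offset, perm):
            if perm not in self.perms or not (0 <= offset < self.length):
                raise PermissionError("capability violation")

        def load(self, offset):
            self._check(offset, "load")
            return self.memory[self.base + offset]

        def store(self, offset, value):
            self._check(offset, "store")
            self.memory[self.base + offset] = value

        def restrict(self, base, length, perms):
            """Derive a narrower capability; bounds and rights can only shrink."""
            if base < 0 or base + length > self.length or not frozenset(perms) <= self.perms:
                raise PermissionError("cannot widen a capability")
            return Capability(self.memory, self.base + base, length, perms)

    mem = bytearray(1024)                             # pretend address space
    whole = Capability(mem, 0, len(mem))
    sandbox_view = whole.restrict(128, 64, ["load"])  # read-only window for a "sandbox"
    print(sandbox_view.load(0))
    try:
        sandbox_view.store(0, 1)                      # not permitted: the view is load-only
    except PermissionError as e:
        print(e)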


Decepticon: interdisciplinary conference on deception research

I’m at Decepticon 2015 and will be liveblogging the talks in followups to this post. Up till now, research on deception has been spread around half a dozen different events, aimed at cognitive psychologists, forensic psychologists, law enforcement, cybercrime specialists and others. My colleague Sophie van der Zee decided to organise a single annual event to bring everyone together, and Decepticon is the result. With over 160 registrants for the first edition of the event (and late registrants turned away), it certainly seems to have hit a sweet spot.

Award-winning case history of the care.data health privacy scandal

Each year we divide our masters of public policy students into teams and get them to write case studies of public policy failures. The winning team this year wrote a case study of the care.data fiasco. The UK government collected personal health information on tens of millions of people who had had hospital treatment in England and then sold it off to researchers, drug companies and even marketing firms, with only a token gesture of anonymisation. In practice patients were easy to identify. The resulting scandal stalled plans to centralise GP data as well, at least for a while.

Congratulations to Lizzie Presser, Maia Hruskova, Helen Rowbottom and Jesse Kancir, who tell the story of how mismanagement, conflicts and miscommunication led to a failure of patient privacy on an industrial scale, and discuss the lessons that might be learned. Their case study has just appeared today in Technology Science, a new open-access journal for people studying conflicts that arise between technology and society. LBT readers will recall several posts reporting the problem, but it’s great to have a proper, peer-reviewed case study that we can give to future generations of students. (Incidentally, the previous year’s winning case study was on a related topic, the failure of the NHS National Programme for IT.)

Four cool new jobs

We’re advertising for four people to join the security group from October.

The first three are for two software engineers to join our new cybercrime centre, to develop new ways of finding bad guys in the terabytes and (soon) petabytes of data we get on spam, phish and other bad stuff online; and a lawyer to explore and define the boundaries of how we share cybercrime data.

The fourth is in Security analysis of semiconductor memory. Could you help us come up with neat new ways of hacking chips? We’ve invented quite a few of these in the past, ranging from optical fault induction to semi-invasive attacks more generally. What’s next?

Double bill: Password Hashing Competition + KeyboardPrivacy

Two interesting items from Per Thorsheim, founder of the PasswordsCon conference that we’re hosting here in Cambridge this December (you still have one month to submit papers, BTW).

First, the Password Hashing Competition “have selected Argon2 as a basis for the final PHC winner”, which will be “finalized by end of Q3 2015”. This is about selecting a new password hashing scheme to improve on the state of the art and make brute force password cracking harder. Hopefully we’ll have some good presentations about this topic at the conference.
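For a feel of what adopting Argon2 looks like in practice, here is a minimal Python sketch using the third-party argon2-cffi package (one of several Argon2 bindings; the package and its default parameters are that library’s choices, not anything mandated by the PHC).

    # pip install argon2-cffi
    from argon2 import PasswordHasher
    from argon2.exceptions import VerifyMismatchError

    ph = PasswordHasher()            # library defaults for time cost, memory cost, parallelism
    stored = ph.hash("correct horse battery staple")
    print(stored)                    # self-describing hash: parameters and salt are embedded

    try:
        ph.verify(stored, "wrong guess")
    except VerifyMismatchError:
        print("password rejected")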

Second, and unrelated: Per Thorsheim and Paul Moore have launched a privacy-protecting Chrome plugin called Keyboard Privacy to guard your anonymity against websites that look at keystroke dynamics to identify users. So, you might go through Tor, but the site recognizes you by your typing pattern and builds a typing profile that “can be used to identify you at other sites you’re using, where identifiable information is available about you”. Their plugin intercepts your keystrokes, batches them up and delivers them to the website at a constant pace, interfering with the site’s ability to build a profile that identifies you.
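The plugin itself is a Chrome extension written in JavaScript, but the underlying trick is easy to see in a few lines of Python: buffer keystrokes as they arrive and deliver them at a fixed cadence, so the inter-key timings that reach the site carry no biometric signal. This is a conceptual sketch of the idea, not the plugin’s actual code; send_to_page is a hypothetical delivery function.

    import queue
    import threading
    import time

    def send_to_page(key):
        """Hypothetical stand-in for delivering a keystroke to the web page."""
        print("delivering", key)

    def start_constant_pace_relay(interval=0.05):
        """Return a function for feeding in real keystrokes; they are re-delivered
        at a constant pace, hiding the user's typing rhythm."""
        buf = queue.Queue()

        def deliver():
            while True:
                key = buf.get()          # block until a keystroke has been buffered
                send_to_page(key)
                time.sleep(interval)     # fixed cadence, independent of typing rhythm

        threading.Thread(target=deliver, daemon=True).start()
        return buf.put

    feed = start_constant_pace_relay()
    for ch in "hello":
        feed(ch)
    time.sleep(0.5)                      # let the background relay drain the buffer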

FCA view on unauthorised transactions

Yesterday the Financial Conduct Authority (the UK bank regulator) issued a report on Fair treatment for consumers who suffer unauthorised transactions. This is an issue in which we have an interest, as fraud victims regularly come to us after being turned away by their bank and by the financial ombudsman service. Yet the FCA have found that everything is hunky dory, and conclude “we do not believe that further thematic work is required at this stage”.

One of the things the FCA asked their consultants was whether there is any evidence that claims are rejected on the sole basis that a PIN was used. The consultants didn’t want to rely on existing work but instead surveyed a nationally representative sample of 948 people and found that 16% had had a transaction dispute in the last year. These were 37% MOTO (mail order / telephone order), 22% cancelled future-dated payments, 15% ATM cash, 13% shop, and 13% lump sums from bank accounts. Of customers who complained, 43% were offered their money back spontaneously and a further 41% asked; in the end a total of 68% got refunds after varying periods of time. In total 7% (15 victims) had their claims declined, most because the bank said the transaction was “authorised” or cited a “contract with merchant”, and 2 for chip and PIN (one of them an ATM transaction; the other admitted sharing their PIN). 12 of these 15 considered the result unfair. These figures are entirely consistent with what we learn from the British Crime Survey and elsewhere: two million UK victims a year; while most get their money back, many don’t; and a hard core of perhaps a few tens of thousands end up feeling that their bank has screwed them.

The case studies profiled in the consultants’ paper were of glowing happy people who got their money back; the 12 sad losers were not profiled, and the consultants concluded that “Customers might be being denied refunds on the sole basis that Chip and PIN were used … we found little evidence of this” (p 49) and went on to remark helpfully that some customers admitted sharing their PINs and felt OK lying about this. The FCA happily paraphrases this as “We also did not find any evidence of firms holding customers liable for unauthorised transactions solely on the basis that the PIN was used to make the transaction” (main report, p 13, 3.25).

According to recent news reports, the former head of the FCA, Martin Wheatley, was sacked by George Osborne for being too harsh on the banks.

Job Ads: Cloud Cybercrime Centre

The Cambridge Cloud Cybercrime Centre (more information about our vision for this brand-new initiative is in this earlier article) now has a number of Research Associate / Research Assistant positions to fill:

  • A person to take responsibility for improving the automated processing of our incoming data feeds. They will help develop new sources of data, add new value to existing data and develop new ways of understanding and measuring cybercrime: full details are here.
  • A person with a legal background to carry out research into the legal and policy aspects of cybercrime data sharing. Besides contributing to the academic literature and to the active policy debates in this area they will assist in negotiating relevant arrangements with data suppliers and users: full details are here.

and, with special thanks to ThreatSTOP, whose generosity has funded this extra position:

  • We also seek someone to work on distributed denial-of-service (DDoS) measurement. We have been gathering data on reflected UDP DDoS events for many months and we want to extend our coverage and develop a much more detailed analysis of the location of perpetrators and victims along with real-time datafeeds of relevant information to assist in reducing harm. Full details are here.

Please follow the links to read the relevant formal advertisement for the details about exactly who and what we’re looking for and how to apply.