Category Archives: Web security

Calibration Fingerprint Attacks for Smartphones

When you visit a website, your web browser provides a range of information to the website, including the name and version of your browser, screen size, fonts installed, and so on. Website authors can use this information to provide an improved user experience. Unfortunately this same information can also be used to track you. In particular, this information can be used to generate a distinctive signature, or device fingerprint, to identify you.
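The idea can be sketched in a few lines. This is a minimal illustration (not any particular tracking script): collect whatever attributes the browser reports and hash a canonical serialisation of them into a single identifier.

```python
import hashlib
import json

def device_fingerprint(attributes: dict) -> str:
    """Combine browser-reported attributes into one stable identifier.

    A toy sketch: real fingerprinting scripts collect many more
    attributes (canvas rendering, audio stack, timezone, plugins, ...).
    """
    # Canonical serialisation so the same attributes always hash identically
    canonical = json.dumps(attributes, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

# Two visits from the same browser yield the same fingerprint
visit = {"user_agent": "Mozilla/5.0 ...", "screen": "1170x2532",
         "fonts": ["Helvetica", "Courier"]}
print(device_fingerprint(visit))
```

The more attributes the browser exposes, the more bits of entropy the hash captures, and the more likely the fingerprint is to be unique to one device.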

A device fingerprint allows websites to detect your return visits or track you as you browse from one website to the next across the Internet. Such techniques can be used to protect against identity theft or credit card fraud, but also allow advertisers to monitor your activities and build a user profile of the websites you visit (and therefore a view into your personal interests). Browser vendors have long worried about the potential privacy invasion from device fingerprinting and have included measures to prevent such tracking. For example, on iOS, the Mobile Safari browser uses Intelligent Tracking Prevention to restrict the use of cookies, prevent access to unique device settings, and eliminate cross-domain tracking.

We have developed a new type of fingerprinting attack, the calibration fingerprinting attack. Our attack uses data gathered from the accelerometer, gyroscope and magnetometer sensors found in smartphones to construct a globally unique fingerprint. Our attack can be launched by any website you visit or any app you use on a vulnerable device without requiring any explicit confirmation or consent from you. The attack takes less than one second to generate a fingerprint which never changes, even after a factory reset. This attack therefore provides an effective means to track you as you browse across the web and move between apps on your phone.

One-minute video providing a demo and describing how the attack works

Our approach works by carefully analysing data from sensors that both websites and apps can access without any special permission. From this data, our analysis infers the per-device factory calibration which manufacturers embed in the smartphone's firmware to compensate for systematic manufacturing errors. This calibration data can then be used as the fingerprint.
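A toy reconstruction of the underlying idea (this is not the paper's algorithm, and the parameters are hypothetical): suppose the sensor internally quantises to multiples of a nominal step, and a per-device calibration gain then rescales the output. The observed values lie on a lattice of spacing `nominal_step * gain`, so the secret gain can be recovered by finding which candidate lattice best explains the samples.

```python
import numpy as np

def estimate_gain(samples, nominal_step, gain_grid):
    """Estimate a per-device calibration gain from quantised sensor output.

    Toy sketch: each observed value should be an integer multiple of
    nominal_step * gain, so we score each candidate gain by how far the
    samples fall from its lattice and return the best fit.
    """
    vals = np.unique(np.asarray(samples))
    errors = []
    for gain in gain_grid:
        ratio = vals / (nominal_step * gain)
        errors.append(np.abs(ratio - np.round(ratio)).sum())
    return gain_grid[int(np.argmin(errors))]

# Simulate a device whose (secret) calibration gain is 1.0123
rng = np.random.default_rng(0)
true_gain = 1.0123
samples = rng.integers(-200, 200, size=500) * (2**-10) * true_gain
grid = np.linspace(0.98, 1.02, 4001)
print(estimate_gain(samples, 2**-10, grid))  # close to 1.0123
```

Because the recovered gain is a property of the factory calibration rather than of any software state, it survives reinstalls and factory resets, which is what makes the fingerprint so persistent.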

In general, it is difficult to create a unique fingerprint on iOS devices due to strict sandboxing and device homogeneity. However, we demonstrated that our approach can produce globally unique fingerprints for iOS devices from an installed app: around 67 bits of entropy for the iPhone 6S. Calibration fingerprints generated by a website are less unique (around 42 bits of entropy for the iPhone 6S), but they are orthogonal to existing fingerprinting techniques, and together these are likely to form a globally unique fingerprint for iOS devices. Apple adopted our proposed mitigations for apps in iOS 12.2 (CVE-2019-8541) and recently removed all access to motion sensors from Mobile Safari by default.

We presented this work on 21st May at the IEEE Symposium on Security and Privacy 2019. For more details, please visit the SensorID website and read our paper:

Jiexin Zhang, Alastair R. Beresford and Ian Sheret, SensorID: Sensor Calibration Fingerprinting for Smartphones, Proceedings of the 40th IEEE Symposium on Security and Privacy (S&P), 2019.

The University is Hiring

We’re looking for a Chief Information Security Officer. This isn’t a research post here at the lab, but across the yard in University Information Services, where they manage our networks and our administrative systems. There will be opportunities to work with security researchers like us, but the main task is protecting Cambridge from all sorts of online bad actors. If you would like to be in the thick of it, and you know what you’re doing, here’s how you can apply.

USENIX Security Best Paper 2016 – The Million Key Question … Origins of RSA Public Keys

Petr Svenda and colleagues from Masaryk University in Brno won the Best Paper Award at this year’s USENIX Security Symposium with their paper classifying public RSA keys according to their source.

I really like the simplicity of the original assumption. The starting point of the research was that different crypto/RSA libraries use slightly different elimination methods and “cut-off” thresholds to find suitable prime numbers. The authors conjectured that these differences should be sufficient to detect a particular cryptographic implementation from nothing more than public keys, and then confirmed it. Having worked with Petr and followed his activities closely, I think the Best Paper Award is well-deserved recognition.

The authors created a method for efficiently identifying the source (software library or hardware device) of RSA public keys, classifying keys into more than a dozen categories. This classification can be used as a fingerprint that decreases the anonymity of users of Tor and other privacy-enhancing services, as well as their operators.
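To give a flavour of the approach, here is an illustrative (and much simplified) feature extractor. The paper's classifier uses a richer feature set, but the idea is the same: each library's prime-selection strategy biases observable properties of the public modulus N = p·q.

```python
def modulus_features(n: int) -> dict:
    """Extract simple features of an RSA modulus N = p * q.

    Illustrative only: prime-selection choices in each crypto library
    bias properties such as the distribution of the top bits of N
    (which reflect the interval primes are drawn from) and residues
    of N modulo small primes.
    """
    bits = n.bit_length()
    return {
        "bit_length": bits,
        "msb_byte": n >> (bits - 8),  # top 8 bits of the modulus
        "mod_3": n % 3,               # residue mod a small prime
    }

# Tiny textbook example: n = 61 * 53 = 3233
print(modulus_features(3233))
```

Aggregated over a dataset of keys, the distribution of such features forms the fingerprint that distinguishes one library's output from another's.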

Bit length of the largest prime factors of p−1: graphs extracted from The Million Key Question – Investigating The Origins of RSA Public Keys (follow the link for more).

All this is the result of analysing over 60 million freshly generated keys from 22 open- and closed-source libraries and 16 different smart cards. While the findings are fairly theoretical, they are demonstrated with a series of easy-to-understand graphs (see above).

I can’t see an easy way to exploit the results for immediate cyber attacks. However, we started looking into practical applications. There are interesting opportunities for enterprise compliance audits, as the classification only requires access to datasets of public keys – often created as a by-product of internal network vulnerability scanning.

An extended version of the paper is available from http://crcs.cz/rsa.

Internet of Bad Things

A lot of people are starting to ask about the security and privacy implications of the “Internet of Things”. Once there’s software in everything, what will go wrong? We’ve seen a botnet recruiting CCTV cameras, and a former Director of GCHQ recently told a parliamentary committee that it might be convenient if a suspect’s car could be infected with malware that would cause it to continually report its GPS position. (The new Investigatory Powers Bill will give the police and the spooks the power to hack any device they want.)

So here is the video of a talk I gave on The Internet of Bad Things to the Virus Bulletin conference. As the devices around us become smarter they will become less loyal, and it’s not just about malware (whether written by cops or by crooks). We can expect all sorts of novel business models, many of them exploitative, as well as some downright dishonesty: the recent Volkswagen scandal won’t be the last.

But dealing with pervasive malware in everything will demand new approaches. Our approach to the Internet of Bad Things includes our new Cambridge Cybercrime Centre, which will let us monitor bad things online at the kind of scale that will be required.

Double bill: Password Hashing Competition + KeyboardPrivacy

Two interesting items from Per Thorsheim, founder of the PasswordsCon conference that we’re hosting here in Cambridge this December (you still have one month to submit papers, BTW).

First, the Password Hashing Competition “have selected Argon2 as a basis for the final PHC winner”, which will be “finalized by end of Q3 2015”. This is about selecting a new password hashing scheme to improve on the state of the art and make brute force password cracking harder. Hopefully we’ll have some good presentations about this topic at the conference.

Second, and unrelated: Per Thorsheim and Paul Moore have launched a privacy-protecting Chrome plugin called Keyboard Privacy to guard your anonymity against websites that use keystroke dynamics to identify users. So, you might go through Tor, but the site recognizes you by your typing pattern and builds a typing profile that “can be used to identify you at other sites you’re using, where identifiable information is available about you”. Their plugin intercepts your keystrokes, batches them up and delivers them to the website at a constant pace, interfering with the site’s ability to build a profile that identifies you.
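The buffering idea is simple enough to sketch. This is a hypothetical model of the technique, not the plugin's actual code: keystrokes are queued as they arrive, their real timestamps discarded, and delivery is scheduled at a fixed interval so the site sees a flat, uninformative rhythm.

```python
from collections import deque

class ConstantPaceBuffer:
    """Sketch of the plugin's idea: buffer keystrokes and release them
    at a fixed interval, so a site measuring inter-key timings sees a
    constant cadence instead of your natural typing pattern."""

    def __init__(self, interval: float = 0.05):
        self.interval = interval
        self._keys = deque()

    def key_pressed(self, key: str) -> None:
        self._keys.append(key)  # the real arrival time is discarded

    def release_schedule(self, start: float = 0.0):
        """Delivery times for the buffered keys, evenly spaced from start."""
        return [(start + i * self.interval, k)
                for i, k in enumerate(self._keys)]

buf = ConstantPaceBuffer(interval=0.1)
for key in "cat":
    buf.key_pressed(key)
print(buf.release_schedule())
```

The trade-off is latency: your text appears on the page at the buffer's pace rather than at your typing speed, which is the price of hiding the timing signal.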

Crypto Wars 2.0

Today we unveil a major report on whether law enforcement and intelligence agencies should have exceptional access to cryptographic keys and to our computer and communications data generally. David Cameron has called for this, as have US law enforcement leaders such as FBI Director James Comey.

This policy repeats a mistake of the 1990s. The Clinton administration tried for years to seize control of civilian cryptography, first with the Clipper Chip, and then with various proposals for ‘key escrow’ or ‘trusted third party encryption’. Back then, a group of experts on cryptography and computer security got together to explain why this was a bad idea. We have now reconvened in response to the attempt by Cameron and Comey to resuscitate the old dead horse of the 1990s.

Our report, Keys Under Doormats: Mandating insecurity by requiring government access to all data and communications, is timed to set the stage for a Wednesday hearing of the Senate Judiciary Committee at which Mr Comey will present his proposals. The reply to Comey will come from Peter Swire, who was on the other side twenty years ago (he was a Clinton staffer) and has written a briefing on the first crypto war here. Peter was recently on President Obama’s NSA review group. He argues that the real way to fix the problems complained of is to fix the mutual legal assistance process – which is also my own view.

Our report is also highly relevant to the new ‘Snoopers’ Charter’ that Home Secretary Theresa May has promised to put before parliament this fall. Mrs May has made clear she wants access to everything.

However, this is both wrong in principle and unworkable in practice. Building back doors into all computer and communication systems is against most of the principles of security engineering, and it is also against the principles of human rights. Our right to privacy, set out in Article 8 of the European Convention on Human Rights, can only be overridden by mechanisms that meet three tests. First, they must be set out in law, with sufficient clarity for their effects to be foreseeable; second, they must be proportionate; third, they must be necessary in a democratic society. As our report makes clear, universal exceptional access fails all three tests by a mile.

For more, see the New York Times.