Category Archives: Privacy technology

Anonymous communication, data protection

SHB 2019 – Liveblog

I’ll be trying to liveblog the twelfth workshop on security and human behaviour at Harvard. I’m doing this remotely because of US visa issues, as I did for WEIS 2019 over the last couple of days. Ben Collier is attending as my proxy and we’re trying to build on the experience of telepresence reported here and here. My summaries of the workshop sessions will appear as followups to this post.

Calibration Fingerprint Attacks for Smartphones

When you visit a website, your web browser provides a range of information to the website, including the name and version of your browser, screen size, fonts installed, and so on. Website authors can use this information to provide an improved user experience. Unfortunately this same information can also be used to track you. In particular, this information can be used to generate a distinctive signature, or device fingerprint, to identify you.
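As a rough illustration (a toy sketch, not production tracking code), combining even a handful of such attributes and hashing them yields a short identifier that is stable across visits; the attribute names and values below are hypothetical examples.

```python
# Toy fingerprint: hash a canonical serialisation of browser
# attributes so the same browser always yields the same identifier.
import hashlib

def fingerprint(attributes: dict) -> str:
    # Sort the keys so the serialisation is deterministic across visits.
    canonical = "|".join(f"{k}={v}" for k, v in sorted(attributes.items()))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

print(fingerprint({
    "user_agent": "Mozilla/5.0 (iPhone; CPU iPhone OS 12_1 like Mac OS X)",
    "screen": "375x812",
    "fonts": "Helvetica,Times,Courier",
    "timezone": "Europe/London",
}))
```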

A device fingerprint allows websites to detect your return visits or track you as you browse from one website to the next across the Internet. Such techniques can be used to protect against identity theft or credit card fraud, but also allow advertisers to monitor your activities and build a user profile of the websites you visit (and therefore a view into your personal interests). Browser vendors have long worried about the potential privacy invasion from device fingerprinting and have included measures to prevent such tracking. For example, on iOS, the Mobile Safari browser uses Intelligent Tracking Prevention to restrict the use of cookies, prevent access to unique device settings, and eliminate cross-domain tracking.

We have developed a new type of fingerprinting attack, the calibration fingerprinting attack. Our attack uses data gathered from the accelerometer, gyroscope and magnetometer sensors found in smartphones to construct a globally unique fingerprint. Our attack can be launched by any website you visit or any app you use on a vulnerable device without requiring any explicit confirmation or consent from you. The attack takes less than one second to generate a fingerprint which never changes, even after a factory reset. This attack therefore provides an effective means to track you as you browse across the web and move between apps on your phone.

One-minute video providing a demo and describing how the attack works

Our approach works by carefully analysing the data from sensors which are accessible without any special permissions on both websites and apps. Our analysis infers the per-device factory calibration data which manufacturers embed into the firmware of the smartphone to compensate for systematic manufacturing errors. This calibration data can then be used as the fingerprint.
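As a rough illustration of the idea, here is a minimal Python sketch under simplifying assumptions: raw sensor output is quantised to integer ADC steps, and calibration multiplies each reading by a per-device gain, so the calibrated readings lie on a lattice whose spacing reveals that gain. This is a simplified model, not the recovery procedure from the paper; real devices also apply offsets and cross-axis corrections.

```python
# Simplified model: calibrated readings are integer ADC steps times a
# secret per-device gain, so the lattice spacing of the outputs
# reveals the gain. All values below are made up for illustration.
import numpy as np

NOMINAL_STEP = 1.0 / 64                  # assumed ADC quantisation step
true_gain = 1.0173                       # hypothetical per-device calibration gain

rng = np.random.default_rng(0)
raw = rng.integers(-500, 500, size=200)       # quantised raw samples
samples = raw * NOMINAL_STEP * true_gain      # what the OS reports

# The smallest non-zero pairwise difference approximates one lattice
# step, i.e. gain * NOMINAL_STEP, from which we estimate the gain.
diffs = np.abs(np.subtract.outer(samples, samples))
step = diffs[diffs > 1e-9].min()
print(f"estimated gain: {step / NOMINAL_STEP:.4f}")  # ~1.0173
```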

In general, it is difficult to create a unique fingerprint on iOS devices due to strict sandboxing and device homogeneity. However, we demonstrated that our approach can produce globally unique fingerprints for iOS devices from an installed app: around 67 bits of entropy for the iPhone 6S. Calibration fingerprints generated by a website are less unique (around 42 bits of entropy for the iPhone 6S), but they are orthogonal to existing fingerprinting techniques, and together they are likely to form a globally unique fingerprint for iOS devices. (With over a billion iOS devices in use, the birthday bound means that avoiding collisions across the whole population needs roughly 60 bits, so 67 bits is comfortably enough.) Apple adopted our proposed mitigations in iOS 12.2 for apps (CVE-2019-8541), and has recently removed all access to motion sensors from Mobile Safari by default.

We presented this work on 21st May at the IEEE Symposium on Security and Privacy 2019. For more details, please visit the SensorID website and read our paper:

Jiexin Zhang, Alastair R. Beresford and Ian Sheret, SensorID: Sensor Calibration Fingerprinting for Smartphones, Proceedings of the 40th IEEE Symposium on Security and Privacy (S&P), 2019.

Security Engineering: Third Edition

I’m writing a third edition of my best-selling book Security Engineering. The chapters will be available online for review and feedback as I write them.

Today I put online a chapter on Who is the Opponent, which draws together what we learned from Snowden and others about the capabilities of state actors, together with what we’ve learned about cybercrime actors as a result of running the Cambridge Cybercrime Centre. Isn’t it odd that almost six years after Snowden, nobody’s tried to pull together what we learned into a coherent summary?

There’s also a chapter on Surveillance or Privacy which looks at policy. What’s the privacy landscape now, and what might we expect from the tussles over data retention, government backdoors and censorship more generally?

There’s also a preface to the third edition.

As the chapters come out for review, they will appear on my book page, so you can give me comments and feedback as I write them. This collaborative authorship approach is inspired by the late David MacKay. I’d suggest you bookmark my book page and come back every couple of weeks for the latest instalment!

Could a gaming app steal your bank PIN?

Have you ever wondered whether one app on your phone could spy on what you’re typing into another? We have. Five years ago we showed that you could use the camera to measure the phone’s motion during typing and use that to recover PINs. Then three years ago we showed that you could use interrupt timing to recover text entered using gesture typing. So what other attacks are possible?

Our latest paper shows that an app on your phone can simply record the sound from its microphones and work out from that what you’ve been typing.

Your phone’s screen can be thought of as a drum – a membrane supported at the edges. It makes slightly different sounds depending on where you tap it. Modern phones and tablets typically have two microphones, so you can also measure the time difference of arrival of the sounds. The upshot is that we can recover PIN codes and short words given a few measurements, and in some cases even long and complex words. We evaluate the new attack against previous ones and show that the accuracy is sometimes even better, especially against larger devices such as tablets.
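As a rough illustration of the time-difference-of-arrival part, here is a minimal Python sketch with synthetic data; the sample rate, delay and signals are made-up values, and the real attack combines such timing estimates with other acoustic features of each tap.

```python
# Estimate the time difference of arrival (TDoA) of a tap between two
# microphones by cross-correlation. All signals here are synthetic.
import numpy as np

FS = 48_000                          # assumed sample rate in Hz
true_delay = 7                       # tap reaches mic 2 seven samples later

rng = np.random.default_rng(0)
tap = rng.normal(size=256)           # stand-in for a recorded tap transient
mic1 = np.concatenate([np.zeros(100), tap, np.zeros(100)])
mic2 = np.roll(mic1, true_delay)     # same tap, shifted in time

# The lag that maximises the cross-correlation is the TDoA estimate.
corr = np.correlate(mic2, mic1, mode="full")
lag = int(corr.argmax()) - (len(mic1) - 1)
print(f"estimated delay: {lag} samples ({lag / FS * 1e6:.0f} microseconds)")
```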

This paper is based on Ilia Shumailov’s MPhil thesis project.

Privacy for Tigers

As mobile phone masts went up across the world’s jungles, savannas and mountains, so did poaching. Wildlife crime syndicates can not only coordinate better but can mine growing public data sets, often of geotagged images. Privacy matters for tigers, for snow leopards, for elephants and rhinos – and even for tortoises and sharks. Animal data protection laws, where they exist at all, are oblivious to these new threats, and no-one seems to have started to think seriously about information security.

So we have been doing some work on this, and presented some initial ideas via an invited talk at USENIX Security in August. A video of the talk is now online.

The most serious poaching threats involve insiders: game guards who go over to the dark side, corrupt officials, and (now) the compromise of data and tools assembled for scientific and conservation purposes. Aggregation of data makes things worse; I might not care too much about a single geotagged photo, but a corpus of thousands of such photos tells a poacher where to set his traps. Cool new AI tools for recognising individual animals can make his work even easier. So people developing systems to help in the conservation mission need to start paying attention to computer security. Compartmentation is necessary, but there are hundreds of conservancies and game reserves, many of which are mutually mistrustful; there is no central authority at Fort Meade to manage classifications and clearances. Data sharing is haphazard and poorly understood, and the limits of open data are only now starting to be recognised. What sort of policies do we need to support, and what sort of tools do we need to create?

This is joint work with Tanya Berger-Wolf of Wildbook, one of the wildlife data aggregation sites, which is currently redeveloping its core systems to incorporate and test the ideas we describe. We are also working to spread the word to both conservators and online service firms.

Failure to protect: kids’ data in school

If you care about children’s rights, data protection or indeed about privacy in general, then I’d suggest you read this disturbing new report on what’s happening in Britain’s schools.

In an ideal world, schools should be actively preparing pupils to be empowered citizens in a digital world that is increasingly riddled with exploitative and coercive systems. Instead, the government is forcing schools to collect data that are then sold or given to firms that exploit them, with no meaningful consent. There is not even the normal right to request subject access, so that you can check whether the information about you is right and have it corrected if it’s wrong.

Yet the government has happily given the Daily Telegraph fully-identified pupil information so that it can do research, presumably on how private schools are better than government ones, or how grammar schools are better than comprehensives. You just could not make this up.

The detective work to uncover such abuses has been done by the NGO Defenddigitalme, who followed up some work we did a decade and more ago on the National Pupil Database in our Database State report and our earlier research on children’s databases. Defenddigitalme are campaigning for subject access rights, the deletion of nationality data, and a code of practice. Do read the report and if you think it’s outrageous, write to your MP and say so. Our elected representatives make a lot of noise about protecting children; time to call them on it.