Category Archives: Operating systems

The lifetime of an Android API vulnerability

By Daniel Carter, Daniel Thomas, and Alastair Beresford

Security updates are an important mechanism for protecting users and their devices from attack, and therefore it’s important that vendors produce security updates, and that users apply them. Producing security updates is particularly difficult when more than one vendor needs to make changes in order to secure a system.

We studied one such example in previous research (open access). The specific vulnerability (CVE-2012-6636) affected Android devices and allowed JavaScript running inside a WebView of an app (e.g. an advert) to run arbitrary code inside the app itself, with all the permissions of the app. The vulnerability could be exploited remotely by an attacker who bought ads which supported JavaScript. In addition, since most ads at the time were served over HTTP, the vulnerability could also be exploited if an attacker controlled a network used by the Android device (e.g. WiFi in a coffee shop). The fix required both the Android operating system, and all apps installed on the handset, to support at least Android API Level 17. Thus, the deployment of an effective solution for users was especially challenging.
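To make the mechanism concrete, here is a minimal sketch of the vulnerable pattern (the AdActivity and AdBridge names, and the ad URL, are our own hypothetical examples). On devices below API Level 17, any object exposed with addJavascriptInterface can be reached by reflection from page JavaScript, via getClass(), all the way to java.lang.Runtime:

    import android.app.Activity;
    import android.os.Bundle;
    import android.webkit.WebView;

    // Hypothetical app showing the pre-API-17 risk: any object passed to
    // addJavascriptInterface is fully reflectable from page JavaScript.
    public class AdActivity extends Activity {
        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            WebView webView = new WebView(this);
            webView.getSettings().setJavaScriptEnabled(true);
            // The bridge's own methods look harmless, but on API < 17 a
            // malicious ad can escalate via reflection, e.g.:
            //   adBridge.getClass().forName("java.lang.Runtime")
            //           .getMethod("getRuntime").invoke(null).exec(...)
            webView.addJavascriptInterface(new AdBridge(), "adBridge");
            // Served over HTTP, so a network attacker can inject the script.
            webView.loadUrl("http://ads.example.com/banner.html");
            setContentView(webView);
        }

        static class AdBridge {
            public String getVersion() { return "1.0"; }
        }
    }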

When we published our paper in 2015, we predicted that this vulnerability would not be patched on 95% of devices in the Android ecosystem until January 2018 (plus or minus a standard deviation of 1.23 years). Since this date has now passed, we decided to check whether our prediction was correct.

To perform our analysis we used data on deployed API versions, taken from (almost) monthly snapshots of Google’s Android Distribution Dashboard, which we have been tracking. The good news is that the operating-system update requirement crossed the 95% threshold in May 2017, seven months earlier than our best estimate and within one standard deviation of our prediction. The most recent data, for May 2019, shows deployment has reached 98.2% of devices in use. Nevertheless, fixing this aspect of the vulnerability took well over four years to reach 95% of devices.

Proportion of devices safe from the JavaScript-to-Java vulnerability. For details of how this is calculated, see our previous paper.
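As a rough illustration of that calculation (a sketch with made-up share figures, not real dashboard data), each dashboard snapshot lists the share of devices running each API level, and the proportion of safe devices is simply the sum of the shares at API Level 17 and above:

    import java.util.Map;

    // Sketch: sum the device shares at API Level 17+ in one snapshot.
    // The figures below are placeholders, not real dashboard numbers.
    public class SafeShare {
        public static void main(String[] args) {
            Map<Integer, Double> snapshot = Map.of(
                10, 0.3, 15, 0.4, 16, 1.1,   // vulnerable: API < 17
                17, 0.6, 18, 0.4, 19, 6.9,
                23, 16.9, 26, 28.4, 28, 45.0);
            double safe = snapshot.entrySet().stream()
                .filter(e -> e.getKey() >= 17)  // fix needs API Level 17+
                .mapToDouble(Map.Entry::getValue)
                .sum();
            System.out.printf("Proportion safe: %.1f%%%n", safe);
        }
    }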

Google delivered a further fix in Android 4.4.3 that blocked access to the getClass method from JavaScript, considerably reducing the risk of exploitation even from apps which were not updated. A conservative estimate of the deployment of this further fix is shown on the graph, reaching 95% adoption in April 2019. On the app side, Google has also been encouraging developers to update. From 1 November 2018, updates to apps on Google Play must target API level 26 or higher, and from 1 November 2019 updates to apps must target API level 28 or higher. This requirement forces the app-side changes necessary to fix this vulnerability. Unfortunately we don’t have good data on the distribution of apps installed on handsets, but we expect that most Android devices are now secure against this vulnerability.
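The corresponding app-side fix is small (a sketch, reusing our hypothetical AdBridge from above): once an app targets API Level 17 or higher and runs on an API 17+ device, only annotated methods are visible to JavaScript, which closes the reflection route through getClass:

    import android.webkit.JavascriptInterface;

    // Replacement for the AdBridge in the earlier sketch: with
    // targetSdkVersion >= 17 on an API 17+ device, only methods annotated
    // with @JavascriptInterface are callable from page JavaScript, so
    // getClass() is unreachable.
    public class AdBridge {
        @JavascriptInterface
        public String getVersion() { return "1.0"; }
    }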

Our work is not done, however, and we are still looking into the security of mobile devices. This summer we are extending the work from our other 2015 paper, Security Metrics for the Android Ecosystem, in which we analysed the composition of Android vulnerabilities. Last time we used distributions of deployed Android versions on devices from Device Analyzer (an Android measurement app we deployed to Google Play), the device management system of a FTSE 100 company, and User-Agent string data from an ISP in Rwanda. If you might be able to share similar data with us to support our latest research, please get in touch: contact@androidvulnerabilities.org.

Could a gaming app steal your bank PIN?

Have you ever wondered whether one app on your phone could spy on what you’re typing into another? We have. Five years ago we showed that you could use the camera to measure the phone’s motion during typing and use that to recover PINs. Then three years ago we showed that you could use interrupt timing to recover text entered using gesture typing. So what other attacks are possible?

Our latest paper shows that one of the apps on the phone can simply record the sound from its microphones and work out from that what you’ve been typing.

Your phone’s screen can be thought of as a drum – a membrane supported at the edges. It makes slightly different sounds depending on where you tap it. Modern phones and tablets typically have two microphones, so you can also measure the time difference of arrival of the sounds. The upshot is that we can recover PIN codes and short words given a few measurements, and in some cases even long and complex words. We evaluate the new attack against previous ones and show that the accuracy is sometimes even better, especially against larger devices such as tablets.
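To give a flavour of the time-difference-of-arrival idea (our own simplification, not the classifier from the paper; the microphone positions and propagation speed are illustrative guesses), a tap nearer the top microphone arrives there first, and a measured difference can be compared against the expected difference for each candidate key:

    // Toy TDoA model (our simplification, not the paper's classifier).
    // Distances are in metres; SPEED is a rough placeholder for sound
    // propagation through the device body, not a measured value.
    public class TdoaSketch {
        static final double SPEED = 1500.0;               // m/s, assumed
        static final double[] MIC_TOP = {0.035, 0.145};   // assumed positions
        static final double[] MIC_BOTTOM = {0.035, 0.005};

        static double dist(double[] a, double[] b) {
            return Math.hypot(a[0] - b[0], a[1] - b[1]);
        }

        // Expected arrival-time difference (top minus bottom) for a tap at (x, y).
        static double expectedTdoa(double x, double y) {
            double[] tap = {x, y};
            return (dist(tap, MIC_TOP) - dist(tap, MIC_BOTTOM)) / SPEED;
        }

        public static void main(String[] args) {
            // Classification sketch: pick the key whose expected TDoA is
            // closest to the measured one.
            double measured = 5.0e-5; // pretend observation, in seconds
            double[][] keys = {{0.01, 0.02}, {0.03, 0.02}, {0.05, 0.02}};
            int best = 0;
            for (int i = 1; i < keys.length; i++)
                if (Math.abs(expectedTdoa(keys[i][0], keys[i][1]) - measured)
                        < Math.abs(expectedTdoa(keys[best][0], keys[best][1]) - measured))
                    best = i;
            System.out.println("Most likely key index: " + best);
        }
    }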

This paper is based on Ilia Shumailov’s MPhil thesis project.

Struck by a Thunderbolt

At the Network and Distributed Systems Security Symposium in San Diego today we’re presenting Thunderclap, which describes a set of new vulnerabilities involving the security of computer peripherals and the open-source research platform used to discover them. This is joint work with Colin Rothwell, Brett Gutstein, Allison Pearce, Peter Neumann, Simon Moore and Robert Watson.

We look at the security of input/output devices that use the Thunderbolt interface, which is available via USB-C ports in many modern laptops. Our work also covers PCI Express (PCIe) peripherals which are found in desktops and servers.

Such ports offer very privileged, low-level, direct memory access (DMA), which gives peripherals much more privilege than regular USB devices. If no defences are used on the host, an attacker has unrestricted memory access, and can completely take control of a target computer: they can steal passwords, banking logins, encryption keys, browser sessions and private files, and they can also inject malicious software that can run anywhere in the system.

We studied the defences of existing systems in the face of malicious DMA-enabled peripheral devices and found them to be very weak.

The primary defence is a component called the Input-Output Memory Management Unit (IOMMU), which, in principle, can allow devices to access only the memory needed to do their job and nothing else. However, we found existing operating systems do not use the IOMMU effectively.
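As a toy model of that principle (an illustration only; a real IOMMU is a hardware page-table walker, not a Java object), DMA from a device should reach memory only through pages the operating system has explicitly mapped for that device:

    import java.util.HashSet;
    import java.util.Set;

    // Toy model of IOMMU mediation: a device's DMA succeeds only for
    // pages the OS has mapped into that device's I/O address space.
    public class IommuModel {
        private final Set<Long> mappedPages = new HashSet<>();

        void map(long pageAddr)   { mappedPages.add(pageAddr); }
        void unmap(long pageAddr) { mappedPages.remove(pageAddr); }

        byte dmaRead(long addr, byte[] memory) {
            long page = addr & ~0xFFFL; // 4 KiB page granularity
            if (!mappedPages.contains(page))
                throw new SecurityException("DMA fault: page not mapped");
            return memory[(int) addr];
        }

        public static void main(String[] args) {
            IommuModel iommu = new IommuModel();
            byte[] memory = new byte[8192];
            iommu.map(0x1000L);              // OS maps one page for the NIC
            iommu.dmaRead(0x1042L, memory);  // allowed: inside the mapped page
            iommu.dmaRead(0x0042L, memory);  // throws: page never mapped
        }
    }

The weaknesses we found arise in exactly this mapping discipline: windows that are too wide, mappings left in place too long, or a device being allowed to claim its accesses were already translated (the ATS bypass described below).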

To begin with, most systems don’t enable the IOMMU at all. Windows 7, Windows 8, and Windows 10 Home and Pro didn’t support the IOMMU. Windows 10 Enterprise can optionally use it, but in a very limited way that leaves most of the system undefended. Linux and FreeBSD do support using the IOMMU, but this support is not enabled by default in most distributions. MacOS is the only OS we studied that uses the IOMMU out of the box.

This state of affairs is not good, and our investigations revealed significant further vulnerabilities even when the IOMMU is enabled.

We built a fake network card that is capable of interacting with the operating system in the same way as a real one, including announcing itself correctly, causing drivers to attach, and sending and receiving network packets. To do this, we extracted a software model of an Intel E1000 from the QEMU full-system emulator and ran it on an FPGA. Because this is a software model, we can easily add malicious behaviour to find and exploit vulnerabilities.

We found the attack surface available to a network card was much richer and more nuanced than was previously thought. By examining the memory it was given access to while sending and receiving packets, our device was able to read traffic from networks that it wasn’t supposed to. This included VPN plaintext and traffic from Unix domain sockets that should never leave the machine.

On MacOS and FreeBSD, our network card was able to start arbitrary programs as the system administrator, and on Linux it had access to sensitive kernel data structures. Additionally, on MacOS, devices are not protected from one another, so a network card is allowed to read the display contents and keystrokes from a USB keyboard.

Worst of all, on Linux we could completely bypass the enabled IOMMU, simply by setting a few option fields in the messages that our malicious network card sent.

Such attacks are very plausible in practice. The combination of power, video, and peripheral-device DMA over Thunderbolt 3 ports facilitates the creation of malicious charging stations or displays that function correctly but simultaneously take control of connected machines.

We’ve been collaborating with vendors on these vulnerabilities since 2016, and a number of mitigations have been shipped. We have also been helping vendors use our Thunderclap tools to explore this vulnerability space and audit their systems for problems.

Apple fixed the specific vulnerability we used to get administrator access in macOS 10.12.4, released in 2017, although the more general scope of such attacks remains relevant. More recently, new laptops that ship with Windows 10 version 1803 or later have a feature called Kernel DMA Protection for Thunderbolt 3, which at least enables the IOMMU for Thunderbolt devices (but not PCI Express ones). Since this feature requires firmware support, older laptops shipped before 1803 remain vulnerable. Recently, Intel committed patches to Linux to enable the IOMMU for Thunderbolt devices, and to disable the ATS feature that allowed our IOMMU bypass. These are part of Linux kernel 5.0, which is currently in the release process.

One major laptop vendor told us they would like to study these vulnerabilities in more detail before adding Thunderbolt to new product lines.

More generally, since this is a new space of many vulnerabilities, rather than a specific example, we believe all operating systems are vulnerable to similar attacks, and that more substantial design changes will be needed to remedy these problems. We noticed similarities between the vulnerability surface available to malicious peripherals in the face of IOMMU protections and that of the kernel system call interface, long a source of operating system vulnerabilities. The kernel system call interface has been subjected to much scrutiny, security analysis, and code hardening over the years, which must now be applied to the interface between peripherals and the IOMMU.

As well as asking vendors to improve the security of their systems, we advise users to update their systems and to be cautious about attaching unfamiliar USB-C devices to their machines – especially those in public places.

We have placed more background on our work and a list of FAQs on our website, thunderclap.io. There, we have also open sourced the Thunderclap research platform to allow other researchers to reproduce and extend our work, and to aid vendors in performing security evaluation of their products.

Thunderclap: Exploring Vulnerabilities in Operating System IOMMU Protection via DMA from Untrustworthy Peripherals, A. Theodore Markettos, Colin Rothwell, Brett F. Gutstein, Allison Pearce, Peter G. Neumann, Simon W. Moore and Robert N. M. Watson. Proceedings of the Network and Distributed Systems Security Symposium (NDSS), 24–27 February 2019, San Diego, USA.

Euro S&P

I am at the IEEE Euro Security and Privacy Conference in London.

The keynote talk was by Sunny Consolvo, who runs Google’s security and privacy UX team, and her topic was user-facing threats to privacy and security. Her first theme was browser warnings, which try to stop users doing what they want to; it’s an interruption, it’s technical, and there’s no obvious way forward other than clicking through the warning. In 2013 their SSL warning had a clickthrough rate of 68% while their more explicit and graphic malware warning had only 23% clickthrough. Mozilla’s SSL warning had a much lower 33%, with an icon of a policeman and more explicit text. After four years of experimenting with watching eyes, corporate styling / branding and extra steps – none of which worked very well – they tried a strategy of clear instruction, attractive preferred choice, and unattractive alternative. The text had less jargon, a low reading level, brevity, specifics, an illustration and colour. Her CHI15 paper shows that the new design did much better, cutting the clickthrough rate from 69% to 41%. It turns out that many factors are at play; a strong signal is site quality, but this leads many people to continue anyway to sites they have come to trust. The malware clickthrough rate is now down to 5%, and SSL to 21%. That cost five years of a huge team effort, including back-end work as well as UX. It involved huge internal fights, such as with a product manager who wanted the warning to say “this site contains malware” rather than “the site you’re trying to get to contains malware” as it was shorter. Her recent papers are here, here, and here.

A second thread of work is a longitudinal survey of public opinion on privacy, ranging from government surveillance to cyber-bullying, which has run since 2015 in sixteen countries. 84% of respondents thought limiting access to online but not public data is very or extremely important. 84% were concerned about hackers, versus 55% worried about governments and 53% about companies. 20% of Germans are very angry about government access to personal data, versus 10% of Brits. Most people believe national security justifies data access (except in South Korea), while in no country do people believe the government should have access in order to police non-violent crime. Most people everywhere support targeted monitoring, but nowhere is there majority support for bulk surveillance. In Germany 53% believed everyone should have the right to send anonymous encrypted email, while in the UK it’s 39%. Germans were pessimistic about technology, with only 4% believing it is possible to be completely anonymous online. Over 88% believe that freedom of expression is very or extremely important, and less than 1% think it unimportant; but over 70% didn’t believe that cyberbullying should be allowed. Opinions are more varied on extremist religious content, with 10.9% agreeing it should be allowed and 21% saying “it depends”.

Her third thread was intimate partner abuse, which has been experienced by 27% of women and 11% of men. There are typically three phases: a physical control phase, where the abuser has access to the survivor’s device and may install malware, or even destroy devices; an escape phase, which is high-risk as they try to find a new home, a job and so on; and a life-apart phase, when they might want to shield location, email address and phone numbers to escape harassment, and may have lifelong concerns. Risks are greater for poorer people, who may not be able to just buy a new phone. Sunny gave some case studies of extreme mate guarding and survivors’ strategies, such as using a neighbour’s phone or a computer in a library or at work. It takes seven escape attempts on average to get to life apart. After escape, a survivor may have to restrict her children’s online activities and sever mutual relationships; letting your child post anything can leak the school location and lead to the abuser turning up. She may have to change career, as it can be impossible to work as a self-employed professional if she can no longer advertise. The takeaway is that designers should focus on usability during times of high stress and high risk; they should allow users to have multiple accounts; they should design things so that someone reviewing your history cannot tell that you deleted anything; and they should push two-factor authentication, unusual-activity notifications, and incognito mode. They should also think about how a survivor can capture evidence for use in divorce and custody cases while minimising the trauma. Finally, she suggests serious research on abuse survivors in other age groups and in other countries. For more, see her paper here.

I will try to liveblog the rest of the talks in followups to this post.

Compartmentation is hard, but the Big Data playbook makes it harder still

A new study of Palantir’s systems and business methods makes sobering reading for people interested in what big data means for privacy.

Privacy scales badly. It’s OK for the twenty staff at a medical practice to have access to the records of the ten thousand patients registered there, but when you build a centralised system that lets every doctor and nurse in the country see every patient’s record, things go wrong. There are even sharper concerns in the world of intelligence, which agencies try to manage using compartmentation: really sensitive information is often put in a compartment that’s restricted to a handful of staff. But such systems are hard to build and maintain. Readers of my book chapter on the subject will recall that while US Naval Intelligence struggled to manage millions of compartments, the CIA let more of their staff see more stuff – whereupon Aldrich Ames betrayed their agents to the Russians.

After 9/11, the intelligence community moved towards the CIA model, in the hope that with fewer compartments they’d be better able to prevent future attacks. We predicted trouble, and Snowden duly came along. As for civilian agencies such as Britain’s NHS and police, no serious effort was made to protect personal privacy by compartmentation, with multiple consequences.

Palantir’s systems were developed to help the intelligence community link, fuse and visualise data from multiple sources, and are now sold to police forces too. It should surprise no-one to learn that they do not compartment information properly, whether within a single force or even between forces. The organised crime squad’s secret informants can thus become visible to traffic cops, and even to cops in other forces, with tragically predictable consequences. Fixing this is hard, as Palantir’s market advantage comes from network effects and the resulting scale. The more police forces they sign up the more data they have, and the larger they grow the more third-party databases they integrate, leaving private-sector competitors even further behind.

This much we could have predicted from first principles, but the details of how Palantir operates, and what police forces dislike about it, are worth studying.

What might be the appropriate public-policy response? Well, the best analysis of competition policy in the presence of network effects is probably Lina Khan’s, and her analysis would suggest in this case that police intelligence should be a regulated utility. We should develop those capabilities that are actually needed, and the right place for them is the Police National Database. The public sector is better placed to commit the engineering effort to do compartmentation properly, both there and in other applications where it’s needed, such as the NHS. Good engineering is expensive – but as the Los Angeles Police Department found, engaging Palantir can be more expensive still.

When safety and security become one

What happens when your car starts getting monthly upgrades like your phone and your laptop? It’s starting to happen, and the changes will be profound. We’ll be able to improve car safety as we learn from accidents, and fixing a flaw won’t mean spending billions on a recall. But if you’re writing navigation code today that will go in the 2020 Land Rover, how will you be able to ship safety and security patches in 2030? In 2040? In 2050? At present we struggle to keep software patched for three years; we have no idea how to do it for 30.

Our latest paper reports a project that Éireann Leverett, Richard Clayton and I undertook for the European Commission into what happens to safety in this brave new world. Europe is the world’s lead safety regulator for about a dozen industry sectors, of which we studied three: road transport, medical devices and the electricity industry.

Up till now, we’ve known how to make two kinds of fairly secure system. There’s the software in your phone or laptop, which is complex and exposed to online attack, so it has to be patched regularly as vulnerabilities are discovered. It’s typically abandoned after a few years, as patching too many versions of software costs too much. The other kind is the software in safety-critical machinery, which has tended to be stable, simple, thoroughly tested, and not exposed to the big bad Internet. As these two worlds collide, there will be some rather large waves.

Regulators who only thought in terms of safety will have to start thinking of security too. Safety engineers will have to learn adversarial thinking. Security engineers will have to think much more about ease of safe use. Educators will have to start teaching these subjects together. (I just expanded my introductory course on software engineering into one on software and security engineering.) And the policy debate will change too; people might vote for the FBI to have a golden master key to unlock your iPhone and read your private messages, but they might be less likely to vote them a master key to take over your car or your pacemaker.

Researchers and software developers will have to think seriously about how we can keep on patching the software in durable goods such as vehicles for thirty or forty years. It’s not acceptable to recycle cars after seven years, as greedy carmakers might hope; the embedded carbon cost of a car is about equal to its lifetime fuel burn, and reducing average mileage from 200,000 to 70,000 would treble the car industry’s CO2 emissions. So we’re going to have to learn how to make software sustainable. How do we do that?

Our paper is here; there’s a short video here and a longer video here. The full report is available from the EU here.

Yet another Android side channel: input stealing for fun and profit

At PETS 2016 we presented a new side-channel attack in our paper Don’t Interrupt Me While I Type: Inferring Text Entered Through Gesture Typing on Android Keyboards. This was part of Laurent Simon’s thesis, and won him runner-up for the best student paper award.

We found that software on your smartphone can infer words you type in other apps by monitoring the aggregate number of context switches and the number of hardware interrupts. These are readable by permissionless apps within the virtual procfs filesystem (mounted under /proc). Three previous research groups had found that other files under procfs support side channels. But the files they used contained information about individual apps – e.g. the file /proc/uid_stat/victimapp/tcp_snd contains the number of bytes sent by “victimapp”. These files are no longer readable in the latest Android version.

We found that the “global” files – those that contain aggregate information about the system – also leak. So a curious app can monitor these global files as a user types on the phone and try to work out the words. We looked at smartphone keyboards that support “gesture typing”: a novel input mechanism democratized by SwiftKey, whereby a user drags their finger from letter to letter to enter words.
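A sketch of the sampling loop follows (hedged: the “ctxt” and “intr” fields follow the documented /proc/stat format, but the sampling interval and any inference step on top of it are our own illustrative choices). A permissionless app polls the global counters and records how they jump while the user swipes:

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;

    // Sketch: poll the global context-switch ("ctxt") and interrupt
    // ("intr") counters in /proc/stat, which needed no permission to
    // read. Deltas between samples correlate with touch activity.
    public class ProcfsSampler {
        static long[] readCounters() throws IOException {
            long ctxt = -1, intr = -1;
            try (BufferedReader r = new BufferedReader(new FileReader("/proc/stat"))) {
                String line;
                while ((line = r.readLine()) != null) {
                    String[] f = line.split("\\s+");
                    if (f[0].equals("ctxt")) ctxt = Long.parseLong(f[1]);
                    if (f[0].equals("intr")) intr = Long.parseLong(f[1]); // total
                }
            }
            return new long[]{ctxt, intr};
        }

        public static void main(String[] args) throws Exception {
            long[] prev = readCounters();
            while (true) {
                Thread.sleep(5); // sampling interval: our choice
                long[] cur = readCounters();
                System.out.printf("d_ctxt=%d d_intr=%d%n",
                        cur[0] - prev[0], cur[1] - prev[1]);
                prev = cur;
            }
        }
    }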

This work shows once again how difficult it is to prevent side channels: they come up in all sorts of interesting and unexpected ways. Fortunately, we think there is an easy fix: Google should simply disable access to all procfs files, rather than just the files that leak information about individual apps. Meanwhile, if you’re developing apps for privacy or anonymity, you should be aware that these risks exist.

CHERI: Architectural support for the scalable implementation of the principle of least privilege

[CHERI tablet photo]
FPGA-based CHERI prototype tablet — a 64-bit RISC processor that boots CheriBSD, a CHERI-enhanced version of the FreeBSD operating system.
Only slightly overdue, this post is about our recent IEEE Security and Privacy 2015 paper, CHERI: A Hybrid Capability-System Architecture for Scalable Software Compartmentalization. We’ve previously written about how our CHERI processor blends a conventional RISC ISA and processor pipeline design with a capability-system model to provide fine-grained memory protection within virtual address spaces (ISCA 2014, ASPLOS 2015). In this new paper, we explore how CHERI’s capability-system features can be used to implement fine-grained and scalable application compartmentalisation: many (many) sandboxes within a single UNIX process — a far more efficient and programmer-friendly target for secure software than current architectures.


Thinking of selling your old phone? Watch out!

Today we unveil two papers describing serious and widespread vulnerabilities in Android mobile phones. The first presents a Security Analysis of Factory Resets. Now that hundreds of millions of people buy and sell smartphones secondhand and use them for everything from banking to dating, it’s important to be able to sanitize your phone. You need to clean it when you buy it, so you don’t get caught by malware; and even more when you sell it, so you don’t give away your bank credentials or other personal information. So does the factory reset function actually work? We bought a couple of dozen second-hand Android phones and tested them to find out.

The news is not at all good. We were able to retrieve the Google master cookie from the great majority of phones, which means that we could have logged on to the previous owner’s Gmail account. The reasons for failure are complex; new phones are generally better than old ones, and Google’s own-brand phones are better than the OEM offerings. However, the vendors need to do a fair bit of work, and users need to take a fair amount of care.

Attacks on a sold phone that could not be properly sanitized are one example of what we call a “user-not-present” attack. Another is when your phone is stolen. Many security software vendors offer a facility to lock or wipe your phone remotely when this happens, and it’s a standard feature with mobile antivirus products. Do these ‘solutions’ work?

You guessed it. Antivirus software that relies on a faulty factory reset can only go so far, and there’s only so much you can do with a user process. The AV vendors have struggled with a number of design tradeoffs, but the results are not that impressive. See Security Analysis of Consumer-Grade Anti-Theft Solutions Provided by Android Mobile Anti-Virus Apps for the gory details. These failings mean that staff at firms which handle lots of second-hand phones (whether lost, stolen, sold or given to charity) could launch some truly industrial-scale attacks. These papers appear today at the Mobile Security Technology workshop at IEEE Security and Privacy.

Design and Implementation of the FreeBSD Operating System, Second Edition now shipping

Kirk McKusick, George Neville-Neil, and I are pleased to announce that The Design and Implementation of the FreeBSD Operating System, Second Edition is now available from Pearson Education (Amazon link for non-US folk). Light Blue Touchpaper readers might be particularly interested in the new chapter on FreeBSD’s kernel security features including:

  • Process Credentials
  • Users and Groups
  • Privilege Model
  • Interprocess Access Control
  • Discretionary Access Control
  • Capsicum Capability Model
  • Jails
  • Mandatory Access-Control Framework
  • Security Event Auditing
  • Cryptographic Services
  • GELI Full-Disk Encryption

There is detailed coverage of the FreeBSD TCB, POSIX.1e and NFSv4 ACLs, OS sandboxing features, the Mandatory Access Control Framework used not just in FreeBSD but also in Junos, Mac OS X and iOS, the FreeBSD kernel’s Yarrow-based pseudo-random number generator, and both confidentiality and integrity cryptographic protection for filesystems, as well as the kernel’s IPsec implementation. Other new content in this edition of the book includes ZFS, paravirtualised device drivers, DTrace, NFSv4, network-stack virtualisation, and much more.

We will be using this book as one of the core texts for our new masters-level operating-system course at Cambridge, L41, in spring 2015.