Category Archives: Operating systems

Euro S&P

I am at the IEEE Euro Security and Privacy Conference in London.

The keynote talk was by Sunny Consolvo, who runs Google’s security and privacy UX team, and her topic was user-facing threats to privacy and security. Her first theme was browser warnings, which try to stop users doing what they want to; each is an interruption, it’s technical, and there’s no obvious way forward other than clicking through the warning. In 2013 their SSL warning had a clickthrough rate of 68%, while their more explicit and graphic malware warning had only 23%. Mozilla’s SSL warning had a much lower rate of 33%, with an icon of a policeman and more explicit text. After four years of experimenting with watching eyes, corporate styling / branding and extra steps – none of which worked very well – they tried a strategy of clear instruction, attractive preferred choice, and unattractive alternative. The text had less jargon, a low reading level, brevity, specifics, an illustration and colour. Her CHI15 paper shows that the new design did much better, cutting the clickthrough rate from 69% to 41%. It turns out that many factors are at play; a strong signal is site quality, but this leads many people to continue anyway to sites they have come to trust. The malware clickthrough rate is now down to 5%, and SSL to 21%. That cost five years of effort by a huge team, on back-end work as well as UX. It involved huge internal fights, such as with a product manager who wanted the warning to say “this site contains malware” rather than “the site you’re trying to get to contains malware” because it was shorter. Her recent papers are here, here, and here.

A second thread of work is a longitudinal survey of public opinion on privacy, ranging from government surveillance to cyber-bullying, which has run since 2015 in sixteen countries. 84% of respondents thought it very or extremely important to limit access to data that is online but not public. 84% were concerned about hackers, versus 55% about governments and 53% about companies. 20% of Germans are very angry about government access to personal data, versus 10% of Brits. Most people believe national security justifies data access (except in South Korea), while in no country do people believe the government should have such access to police non-violent crime. Most people everywhere support targeted monitoring, but nowhere is there majority support for bulk surveillance. In Germany 53% believed everyone should have the right to send anonymous encrypted email, while in the UK it’s 39%. Germans were pessimistic about technology, with only 4% believing it possible to be completely anonymous online. Over 88% believe that freedom of expression is very or extremely important, and less than 1% consider it unimportant; but over 70% didn’t believe that cyberbullying should be allowed. Opinions are more varied on extremist religious content, with 10.9% agreeing it should be allowed and 21% saying “it depends”.

Her third thread was intimate partner abuse, which has been experienced by 27% of women and 11% of men. There are typically three phases: a physical control phase, where the abuser has access to the survivor’s device and may install malware or even destroy devices; an escape phase, which is high-risk as the survivor tries to find a new home, a job and so on; and a life-apart phase, when they might want to shield their location, email address and phone numbers to escape harassment, and may have lifelong concerns. Risks are greater for poorer people, who may not be able to just buy a new phone. Sunny gave some case studies of extreme mate guarding and of survivors’ strategies, such as using a neighbour’s phone or a computer in a library or at work. It takes seven escape attempts on average to get to life apart. After escape, a survivor may have to restrict children’s online activities and sever mutual relationships; letting your child post anything can leak the school location and lead to the abuser turning up. She may have to change career, as it can be impossible to work as a self-employed professional if she can no longer advertise. The takeaway is that designers should focus on usability during times of high stress and high risk; they should allow users to have multiple accounts; they should design things so that someone reviewing your history cannot tell that you deleted anything; and they should push 2-factor authentication, unusual-activity notifications, and incognito mode. They should also think about how a survivor can capture evidence for use in divorce and custody cases while minimising the trauma. Finally, she suggests serious research on abuse survivors in other age groups and in other countries. For more, see her paper here.

I will try to liveblog the rest of the talks in followups to this post.

Compartmentation is hard, but the Big Data playbook makes it harder still

A new study of Palantir’s systems and business methods makes sobering reading for people interested in what big data means for privacy.

Privacy scales badly. It’s OK for the twenty staff at a medical practice to have access to the records of the ten thousand patients registered there, but when you build a centralised system that lets every doctor and nurse in the country see every patient’s record, things go wrong. There are even sharper concerns in the world of intelligence, which agencies try to manage using compartmentation: really sensitive information is often put in a compartment that’s restricted to a handful of staff. But such systems are hard to build and maintain. Readers of my book chapter on the subject will recall that while US Naval Intelligence struggled to manage millions of compartments, the CIA let more of their staff see more stuff – whereupon Aldrich Ames betrayed their agents to the Russians.

After 9/11, the intelligence community moved towards the CIA model, in the hope that with fewer compartments they’d be better able to prevent future attacks. We predicted trouble, and Snowden duly came along. As for civilian agencies such as Britain’s NHS and police, no serious effort was made to protect personal privacy by compartmentation, with multiple consequences.

Palantir’s systems were developed to help the intelligence community link, fuse and visualise data from multiple sources, and are now sold to police forces too. It should surprise no-one to learn that they do not compartment information properly, whether within a single force or even between forces. The organised crime squad’s secret informants can thus become visible to traffic cops, and even to cops in other forces, with tragically predictable consequences. Fixing this is hard, as Palantir’s market advantage comes from network effects and the resulting scale. The more police forces they sign up the more data they have, and the larger they grow the more third-party databases they integrate, leaving private-sector competitors even further behind.

This much we could have predicted from first principles, but the details of how Palantir operates, and what police forces dislike about it, are worth studying.

What might be the appropriate public-policy response? Well, the best analysis of competition policy in the presence of network effects is probably Lina Khan’s, and her analysis would suggest in this case that police intelligence should be a regulated utility. We should develop those capabilities that are actually needed, and the right place for them is the Police National Database. The public sector is better placed to commit the engineering effort to do compartmentation properly, both there and in other applications where it’s needed, such as the NHS. Good engineering is expensive – but as the Los Angeles Police Department found, engaging Palantir can be more expensive still.

When safety and security become one

What happens when your car starts getting monthly upgrades like your phone and your laptop? It’s starting to happen, and the changes will be profound. We’ll be able to improve car safety as we learn from accidents, and fixing a flaw won’t mean spending billions on a recall. But if you’re writing navigation code today that will go into the 2020 Land Rover, how will you be able to ship safety and security patches in 2030? In 2040? In 2050? At present we struggle to keep software patched for three years; we have no idea how to do it for 30.

Our latest paper reports a project that Éireann Leverett, Richard Clayton and I undertook for the European Commission into what happens to safety in this brave new world. Europe is the world’s lead safety regulator for about a dozen industry sectors, of which we studied three: road transport, medical devices and the electricity industry.

Up till now, we’ve known how to make two kinds of fairly secure system. There’s the software in your phone or laptop which is complex and exposed to online attack, so has to be patched regularly as vulnerabilities are discovered. It’s typically abandoned after a few years as patching too many versions of software costs too much. The other kind is the software in safety-critical machinery which has tended to be stable, simple and thoroughly tested, and not exposed to the big bad Internet. As these two worlds collide, there will be some rather large waves.

Regulators who only thought in terms of safety will have to start thinking of security too. Safety engineers will have to learn adversarial thinking. Security engineers will have to think much more about ease of safe use. Educators will have to start teaching these subjects together. (I just expanded my introductory course on software engineering into one on software and security engineering.) And the policy debate will change too; people might vote for the FBI to have a golden master key to unlock your iPhone and read your private messages, but they might be less likely to vote them a master key to take over your car or your pacemaker.

Researchers and software developers will have to think seriously about how we can keep on patching the software in durable goods such as vehicles for thirty or forty years. It’s not acceptable to recycle cars after seven years, as greedy carmakers might hope; the embedded carbon cost of a car is about equal to its lifetime fuel burn, and since 200,000/70,000 ≈ 3, reducing average lifetime mileage from 200,000 to 70,000 would mean building roughly three cars where one sufficed, trebling the car industry’s CO2 emissions. So we’re going to have to learn how to make software sustainable. How do we do that?

Our paper is here; there’s a short video here and a longer video here. The full report is available from the EU here.

Yet another Android side channel: input stealing for fun and profit

At PETS 2016 we presented a new side-channel attack in our paper Don’t Interrupt Me While I Type: Inferring Text Entered Through Gesture Typing on Android Keyboards. This was part of Laurent Simon‘s thesis, and won him the runner-up prize for the best student paper award.

We found that software on your smartphone can infer words you type in other apps by monitoring the aggregate number of context switches and the number of hardware interrupts. These are readable by permissionless apps within the virtual procfs filesystem (mounted under /proc). Three previous research groups had found that other files under procfs support side channels, but the files they used contained information about individual apps – e.g. the file /proc/uid_stat/victimapp/tcp_snd contains the number of bytes sent by “victimapp”. These files are no longer readable in the latest Android version.

We found that the “global” files – those that contain aggregate information about the system – also leak. So a curious app can monitor these global files as a user types on the phone and try to work out the words. We looked at smartphone keyboards that support “gesture typing”: a novel input mechanism democratized by SwiftKey, whereby a user drags their finger from letter to letter to enter words.
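
To make the channel concrete, here is a minimal sketch in C of the sampling loop, assuming the Linux /proc layout of the period. It is not the paper’s attack code, which also uses the interrupt counts in /proc/interrupts and feeds the resulting traces to a word classifier; the 200 Hz sampling rate here is an arbitrary choice.

    /*
     * Sketch: sample the system-wide context-switch counter from the
     * global file /proc/stat, readable without any permissions.
     * Bursts in the per-interval deltas correlate with user input.
     */
    #include <stdio.h>
    #include <unistd.h>

    /* Read the "ctxt" line (total context switches since boot) from /proc/stat. */
    static long long read_ctxt(void)
    {
        char line[256];
        long long ctxt = -1;
        FILE *f = fopen("/proc/stat", "r");

        if (f == NULL)
            return -1;
        while (fgets(line, sizeof(line), f) != NULL)
            if (sscanf(line, "ctxt %lld", &ctxt) == 1)
                break;
        fclose(f);
        return ctxt;
    }

    int main(void)
    {
        long long prev = read_ctxt();

        for (;;) {
            usleep(5000);                   /* sample at roughly 200 Hz */
            long long cur = read_ctxt();
            printf("%lld\n", cur - prev);   /* per-interval delta: the raw signal */
            prev = cur;
        }
    }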

This work shows once again how difficult it is to prevent side channels: they come up in all sorts of interesting and unexpected ways. Fortunately, we think there is an easy fix: Google should simply disable access to all procfs files, rather than just the files that leak information about individual apps. Meanwhile, if you’re developing apps for privacy or anonymity, you should be aware that these risks exist.

CHERI: Architectural support for the scalable implementation of the principle of least privilege

[CHERI tablet photo]
FPGA-based CHERI prototype tablet — a 64-bit RISC processor that boots CheriBSD, a CHERI-enhanced version of the FreeBSD operating system.
Only slightly overdue, this post is about our recent IEEE Security and Privacy 2015 paper, CHERI: A Hybrid Capability-System Architecture for Scalable Software Compartmentalization. We’ve previously written about how our CHERI processor blends a conventional RISC ISA and processor pipeline design with a capability-system model to provide fine-grained memory protection within virtual address spaces (ISCA 2014, ASPLOS 2015). In this new paper, we explore how CHERI’s capability-system features can be used to implement fine-grained and scalable application compartmentalisation: many (many) sandboxes within a single UNIX process – a far more efficient and programmer-friendly target for secure software than current architectures.


Thinking of selling your old phone? Watch out!

Today we unveil two papers describing serious and widespread vulnerabilities in Android mobile phones. The first presents a Security Analysis of Factory Resets. Now that hundreds of millions of people buy and sell smartphones secondhand and use them for everything from banking to dating, it’s important to be able to sanitize your phone. You need to clean it when you buy it, so you don’t get caught by malware; and even more when you sell it, so you don’t give away your bank credentials or other personal information. So does the factory reset function actually work? We bought a couple of dozen second-hand Android phones and tested them to find out.

The news is not at all good. We were able to retrieve the Google master cookie from the great majority of phones, which means that we could have logged on to the previous owner’s Gmail account. The reasons for failure are complex; new phones are generally better than old ones, and Google’s own-brand phones are better than the OEM offerings. However, the vendors need to do a fair bit of work, and users need to take a fair amount of care.

Attacks on a sold phone that could not be properly sanitized are one example of what we call a “user-not-present” attack. Another is when your phone is stolen. Many security software vendors offer a facility to lock or wipe your phone remotely when this happens, and it’s a standard feature with mobile antivirus products. Do these ‘solutions’ work?

You guessed it. Antivirus software that relies on a faulty factory reset can only go so far, and there’s only so much you can do with a user process. The AV vendors have struggled with a number of design tradeoffs, but the results are not that impressive. See Security Analysis of Consumer-Grade Anti-Theft Solutions Provided by Android Mobile Anti-Virus Apps for the gory details. These failings mean that staff at firms which handle lots of second-hand phones (whether lost, stolen, sold or given to charity) could launch some truly industrial-scale attacks. These papers appear today at the Mobile Security Technology workshop at IEEE Security and Privacy.

Design and Implementation of the FreeBSD Operating System, Second Edition now shipping

Kirk McKusick, George Neville-Neil, and I are pleased to announce that The Design and Implementation of the FreeBSD Operating System, Second Edition is now available from Pearson Education (Amazon link for non-US folk). Light Blue Touchpaper readers might be particularly interested in the new chapter on FreeBSD’s kernel security features including:

  • Process Credentials
  • Users and Groups
  • Privilege Model
  • Interprocess Access Control
  • Discretionary Access Control
  • Capsicum Capability Model
  • Jails
  • Mandatory Access-Control Framework
  • Security Event Auditing
  • Cryptographic Services
  • GELI Full-Disk Encryption

There is detailed coverage of the FreeBSD TCB, POSIX.1e and NFSv4 ACLs, OS sandboxing features, the Mandatory Access Control Framework used not just in FreeBSD but also in Junos, Mac OS X, and iOS, the FreeBSD kernel’s Yarrow-based pseudo-random number generator, both confidentiality and integrity cryptographic protection for filesystems, and the kernel’s IPsec implementation. Other new content in this edition of the book includes ZFS, paravirtualised device drivers, DTrace, NFSv4, network-stack virtualisation, and much more.
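
As a flavour of the Capsicum material covered in the chapter, here is a minimal sketch against the FreeBSD Capsicum API (the file paths are arbitrary): the process limits a descriptor to read-only rights and then enters capability mode, after which system calls that name global namespaces fail.

    /*
     * Capsicum sketch: restrict a descriptor, then sandbox the process.
     */
    #include <sys/capsicum.h>

    #include <err.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        cap_rights_t rights;
        char buf[512];
        ssize_t n;

        int fd = open("/etc/motd", O_RDONLY);
        if (fd < 0)
            err(1, "open");

        /* Restrict the descriptor to read and seek rights only. */
        cap_rights_init(&rights, CAP_READ, CAP_SEEK);
        if (cap_rights_limit(fd, &rights) < 0)
            err(1, "cap_rights_limit");

        /* Enter capability mode; global namespaces are now off-limits. */
        if (cap_enter() < 0)
            err(1, "cap_enter");

        /* Reading the held descriptor still works... */
        while ((n = read(fd, buf, sizeof(buf))) > 0)
            (void)write(STDOUT_FILENO, buf, (size_t)n);

        /* ...but opening a new path now fails with ECAPMODE. */
        if (open("/etc/passwd", O_RDONLY) < 0)
            warn("open after cap_enter fails, as intended");
        return (0);
    }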

We will be using this book as one of the core texts for our new masters-level operating-system course at Cambridge, L41, in spring 2015.

The CHERI capability model: Revisiting RISC in an age of risk (ISCA 2014)

Last week, Jonathan Woodruff presented our joint paper on the CHERI memory model, The CHERI capability model: Revisiting RISC in an age of risk, at the 2014 International Symposium on Computer Architecture (ISCA) in Minneapolis (video, slides). This is our first full paper on Capability Hardware Enhanced RISC Instructions (CHERI), collaborative work between Simon Moore’s and my team composed of members of the Security, Computer Architecture, and Systems Research Groups at the University of Cambridge Computer Laboratory, Peter G. Neumann’s group at the Computer Science Laboratory at SRI International, and Ben Laurie at Google.

CHERI is an instruction-set extension, prototyped via an FPGA-based soft processor core named BERI, that integrates a capability-system model with a conventional memory-management unit (MMU)-based pipeline. Unlike conventional OS-facing MMU-based protection, the CHERI protection and security models are aimed at compilers and applications. CHERI provides efficient, robust, compiler-driven, hardware-supported, and fine-grained memory protection and software compartmentalisation (sandboxing) within, rather than between, address spaces. We run a version of FreeBSD that has been adapted to support the hardware capability model (CheriBSD) compiled with a CHERI-aware Clang/LLVM that supports C pointer integrity, bounds checking, and capability-based protection and delegation. CheriBSD also supports a higher-level hardware-software security model permitting sandboxing of application components within an address space based on capabilities and a Call/Return mechanism supporting mutual distrust.
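
To give a feel for what compiler-driven pointer protection means for plain C, here is an illustrative sketch (not code from the paper; the buffer size and offsets are arbitrary). Compiled as pure-capability code with the CHERI-aware Clang/LLVM, the pointer below carries hardware bounds, so the out-of-bounds store traps instead of silently corrupting adjacent memory.

    #include <stdio.h>

    int main(void)
    {
        char buf[16];
        char *p = buf;  /* under CHERI, a capability: base = buf, length = 16 */

        p[10] = 'A';    /* within bounds: proceeds as usual */
        printf("in-bounds store succeeded\n");

        p[32] = 'B';    /* out of bounds: the hardware raises a capability
                           fault rather than overwriting adjacent memory */
        return 0;
    }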

The approach draws inspiration from Capsicum, our OS-facing hybrid capability-system model now shipping in FreeBSD and available as a patch for Linux courtesy of Google. We found that capability-system approaches matched extremely well with least-privilege oriented software compartmentalisation, in which programs are broken up into sandboxed components to mitigate the effects of exploited vulnerabilities. CHERI similarly merges research capability-system ideas with a conventional RISC processor design, making the security and robustness benefits of the former accessible while retaining software compatibility with the latter. In the paper we contrast our approach with a number of others, including Intel’s forthcoming Memory Protection eXtensions (MPX). We pursue a RISC-oriented design instantiated against the 64-bit MIPS ISA, but the ideas should be portable to other RISC ISAs such as ARMv8 and RISC-V.

Our hardware prototype is implemented in Bluespec System Verilog, a high-level hardware description language (HDL) that makes it easier to perform design-space exploration. To facilitate both reproducibility for this work, and also future hardware-software research, we’ve open sourced the underlying Bluespec Extensible RISC Implementation (BERI), our CHERI extensions, and a complete software stack: operating system, compiler, and so on. In fact, support for the underlying 64-bit RISC platform, which implements a version of the 64-bit MIPS ISA, was upstreamed to FreeBSD 10.0, which shipped earlier this year. Our capability-enhanced versions of FreeBSD (CheriBSD) and Clang/LLVM are distributed via GitHub.

You can learn more about CHERI, BERI, and our larger clean-slate hardware-software agenda on the CTSRD Project Website. There, you will find copies of our prior workshop papers, full Bluespec source code for the FPGA processor design, hardware build instructions for our FPGA-based tablet, downloadable CheriBSD images, software source code, and also our recent technical report, Capability Hardware Enhanced RISC Instructions: CHERI Instruction-Set Architecture, and Jon Woodruff’s PhD dissertation on CHERI.

Jonathan Woodruff, Robert N. M. Watson, David Chisnall, Simon W. Moore, Jonathan Anderson, Brooks Davis, Ben Laurie, Peter G. Neumann, Robert Norton, and Michael Roe. The CHERI capability model: Revisiting RISC in an age of risk, Proceedings of the 41st International Symposium on Computer Architecture (ISCA 2014), Minneapolis, MN, USA, June 14–16, 2014.

Research Assistants and Associates in OS, Compiler and CPU Security

We are pleased to announce a job ad for two new research assistants or post-doctoral research associates working on our CTSRD Project, whose target research areas include OS, compiler, and CPU security. This is a joint project between the University of Cambridge’s Security, NetOS, and Computer Architecture research groups, as well as the Computer Science Laboratory at SRI International.

Research Assistants and Associates in OS, Compiler and CPU Security
Fixed-term: The funds for this post are available for 18 months in the first instance.

We are seeking multiple Research Assistants and Post-Doctoral Research Associates to join the CTSRD Project, which is investigating fundamental improvements to CPU-architecture, operating-system (OS), program-analysis, and programming-language structure in support of computer security. The CTSRD Project is a collaboration between the University of Cambridge and SRI International, and part of the DARPA CRASH research programme on clean-slate computer system design for security. More information may be found at:

This position will be an integral part of an international team of researchers spanning multiple institutions in academia and industry. Successful candidates will contribute to the larger research effort by performing system-software, compiler, and hardware implementation and experimentation, developing and evaluating novel hypotheses about refinements to the vertical hardware-software stack. Possible areas of responsibility include: modifying OS kernels (e.g., FreeBSD); adapting compiler suites (e.g., Clang/LLVM); extending an open-source Bluespec-based research-processor design (CHERI); supporting an early-adopter user community for open-source hardware and software; and improving the quality and performance of hardware-software prototypes. The successful candidate must be willing to travel in the UK and abroad engaging with downstream user communities.