For a bit over a decade, SRI International and the University of Cambridge have been working to develop CHERI (Capability Hardware Enhanced RISC Instructions), a set of processor-architecture security extensions targeting vulnerability mitigation through memory safety and software compartmentalisation. In 2019, the UK’s Industrial Strategy Challenge Fund announced the £187M Digital Security by Design (DSbD) programme, which is supporting the creation of Arm’s experimental CHERI-based Morello processor, System-on-Chip (SoC), and board, shipping in early 2022, as well as dozens of industrial and academic projects to explore and develop CHERI-based software security. This week, UKRI will be launching an £8M funding call via EPSRC and Innovate UK to support UK-based academic and industrial CHERI/Morello software-ecosystem development work. They are particularly interested in supporting work in the areas of OS and developer toolchain, libraries and packages, language runtimes, frameworks and middleware, and platform services on open-source operating systems — all key areas to expand the breadth and maturity of CHERI-enabled software. There is a virtual briefing event taking place on 5 October 2021, with proposals due on 8 December 2021.
We are pleased to announce two new research and/or software-development posts contributing to the CHERI project and Arm’s forthcoming Morello prototype processor, SoC, and development board. Learn more about CHERI and Morello on our project web site.
Fixed-term: The funds for this post are available for up to 2 years, with the possibility of extension as grant funds permit.
Research Assistant: £26,715 – £30,942 or Research Associate: £32,816 – £40,322
We are seeking one or more Research Assistants (without PhD) or Research Associates (holding or shortly to obtain a PhD) with a strong background in compilers and/or operating systems to contribute to the CHERI project and our joint work with Arm on their prototype Morello board, which incorporates CHERI into a high-end superscalar ARMv8-A processor. CHERI is a highly successful collaboration between the University of Cambridge, SRI International, and Arm Research to develop new architectural security primitives. The CHERI protection model extends off-the-shelf Instruction-Set Architectures (ISAs) and processors with new capability-based security primitives supporting fine-grained C/C++-language memory protection and scalable software compartmentalization.
I’m at Financial Cryptography 2020 and will try to liveblog some of the talks in followups to this post.
The keynote was given by Allison Nixon, Chief Research Officer of Unit221B, on “Fraudsters Taught Us that Identity is Broken”.
Allison started by showing the Mitchell and Webb clip. In a world where even Jack Dorsey got his Twitter account hacked via incoming SMS, what is identity? Your thief becomes you. Abuse of old-fashioned passports was rare, as they were protected by law; now your identity is your email address (which you got by lying to an ad-driven website) and phone number (which gets taken away and given to a random person if you don’t pay your bill). If you’re lucky you might have a signing key (generated on a general-purpose computer, and hard to revoke – that’s what bitcoin theft is often about). The whole underlying system is wrong. Email domains, like phone numbers, lapse if you forget to pay your bill; fraudsters actively look for custom domains and check if yours has lapsed, while relying parties mostly don’t. Privacy regulations in most countries prevent you from looking up names from phone numbers; many people have phone numbers owned by their employers. Your email address can be frozen or removed because of spam if you’re bad or are hacked, while even felons are not deprived of their names. Evolution is not an intelligent process! People audit password length but rarely the password-reset policy: many use zero-factor auth, meaning information that’s sort-of public, like your SSN. On Twitter you reset your password then message customer support asking them to remove two-factor, and they do, so long as you can log on! This is a business necessity, as too many people lose their phone or second factor, so this customer-support backdoor will never be properly closed. Many bitcoin exchanges have no probation period, whether mandatory or customer option. SIM swap means account theft so long as a phone number enables password reset – she also calls this zero-factor authentication.
SIM swap is targeted, unlike most password-stuffing attacks, and compromises people who comply with all the security rules. Allison tried hard to protect herself against this fraud but mostly couldn’t as the phone carrier is the target. This can involve data breaches at the carrier, insider involvement and the customer service back door. Email domain abuse is similar; domain registrars are hacked or taken over. Again, the assumptions made about the underlying infrastructure are wrong. Your email can be reset by your phone number and vice versa. Your private key can be stolen via your cloud backups. Both identity vendors and verifiers rely on unvetted third parties; vendors can’t notify verifiers of a hack. The system failure is highlighted by the existence of criminal markets in identity.
There are unrealistic expectations too. As a user of a general-purpose computer, you have no way to determine whether your machine is suitable for storing private keys, and almost 100% of people are unable to comply with security advice. That tells you it’s the system that’s broken. It’s a blame game, and security advice is as much cargo cult as anything else.
What would a better identity system look like? There would be an end to ever-changing advice; you’d be notified if your information got stolen, just as you know if your physical driving license is stolen; there would be an end to unreasonable expectations of both humans and computers; the legal owner of the identity would be the person identified, and the identity would be non-transferable and irrevocable; it would not depend on the integrity of third-party systems like DNS, CAs and patch-management mechanisms; and we’ll know we’re there once the criminal marketplace vanishes.
Questions: What might we do about certificate revocation? A probation period is the next thing to do, as how people learn of a SIM swap is a flood of password-reset messages in email, and then it’s a race. I asked whether, rather than fixing the whole world, we should fix it one relying party at a time? Banks give you physical tokens after all, as they’re regulated and have to eat the losses. Allison agreed; in 2019 she talked about SIM swap to many banks but had no interest from any crypto exchange. Curiously, the lawsuits tend to target carriers rather than the exchanges. What about SS7? There are sophisticated Russian criminal gangs doing such attacks, but they require a privileged position in the network, like BGP attacks. What about single sign-on? The market is currently in flux and might eventually settle on a few vendors. What about SMS spoofing attacks? Allison hasn’t seen them in 4G marketplaces or in widespread criminal use. Caller-ID spoofing is definitely used, by bad guys who organise SWATting. Should we enforce authentication tokens? The customer-service department will be inundated with people who have lost theirs, and that will become the backdoor. Would blockchains help? No, they’re just an audit log, and the failures are upstream. The social aspect is crucial: people know how to protect the physical cash in their wallet, and a proper solution to the identity problem must work like that. It’s not an impossible task, and might involve a chip in your driver’s license. It’s mostly about getting the execution right.
One of the cybercrimes that bothers us at Cambridge is accommodation fraud. Every October about 1% of the people who come as grad students or postdocs rent an apartment that just doesn’t exist. Sites like Craigslist are full of ads that are just too good to be true. While the university does what it can to advise new hires and admissions to use our own accommodation services if they cannot check out an apartment personally, perhaps 50 new arrivals still turn up to find that they have nowhere to live, their money is gone, and the police aren’t interested. This is not a nice way to start your PhD.
Some years ago a new postdoc, Sophie van der Zee, almost fell for such a scam, and then got to know someone here who had actually become a victim. She made this into a research project, and replied to about a thousand scam ads. We analysed the persuasion techniques that the crooks used.
Here at last is our analysis: The gift of the gab: Are rental scammers skilled at the arts of persuasion? We found that most of the techniques the scammers used are straight from the standard marketing textbook (Cialdini) rather than from the lists of more exotic scam techniques compiled by fraud researchers such as Stajano and Wilson. The only significant exception was appeals to sympathy. Most of the scammers were operating out of West Africa in what appears to have been one or more boilerhouse sales operations. They work from scripts, very much like people selling insurance or home improvements.
Previous cybercrime research looked at both high-value targeted operations and scale attackers who compromise machines in bulk. This is an example of fraud lying between the “first class” and “economy class” versions of cybercrime.
Rental scams are still a problem for new staff and students. Since this work was done, things have changed somewhat, in that most of the scams are now run by an operator using slick websites who, according to the local police, appears to be based in Germany. We have repeatedly tried, and failed, to persuade the police (local and Met), the NCA and the NCSC to have his door broken down. Unfortunately the British authorities appear to lack the motivation to extradite foreigners who commit small frauds at scale. So if you want to steal a few million a year, take it from a few thousand people, a thousand pounds at a time. So long as you stay overseas there seems to be little risk of arrest.
PIs: Robert N. M. Watson (Cambridge), Simon W. Moore (Cambridge), Peter Sewell (Cambridge), and Peter G. Neumann (SRI)
Since 2010, SRI International and the University of Cambridge, supported by DARPA, have been developing CHERI: a capability-system extension to RISC Instruction-Set Architectures (ISAs) supporting fine-grained memory protection and scalable compartmentalization, while retaining incremental deployability within current C and C++ software stacks. This ten-year research project has involved hardware-software-semantic co-design: FPGA prototyping, compiler development, operating-system development, and application adaptation, as well as formal modeling and proof. We have iterated on CHERI, documenting it extensively in technical reports and research papers as we evaluated and improved microarchitectural overheads, performance, software compatibility, and security.
As we know, mainstream computer systems are still chronically insecure. One of the main reasons for this is that conventional hardware architectures and C/C++ language abstractions, dating back to the 1970s, provide only coarse-grained memory protection. Without memory safety, many coding errors turn into exploitable security vulnerabilities. In our ASPLOS 2019 paper on CheriABI (best paper award), we demonstrated that a complete UNIX userspace and application suite could be protected by strong memory safety with minimal source-code disruption and acceptable performance overheads. Scalable software compartmentalization offers mitigation for future unknown classes of vulnerabilities by enabling greater use of design patterns such as software sandboxing. Our An Introduction to CHERI technical report introduces our approach including the architecture, microarchitectural contributions, formal models, software protection model, and practical software adaptation. The CHERI ISA v7 specification is the authoritative reference to the architecture, including both the architecture-neutral protection model and its concrete mappings into the 64-bit MIPS and 32/64-bit RISC-V ISAs. Our Rigorous Engineering technical report describes our modelling and mechanised proof of key security properties.
Today, we are very excited to be able to talk about another long-running aspect of our DARPA-supported work: A collaboration since 2014 with engineers at Arm to create an experimental adaptation of CHERI to the ARMv8-A architecture. This widely used ISA is the foundation for the vast majority of mobile phones and tablets, including those running iOS and Android. The £170M UKRI program Digital Security by Design (DSbD) was announced in late September 2019 to explore potential applications of CHERI — with a £70M investment by UKRI, and a further £117M from industry including involvement by Arm, Microsoft, and Google. Today, UKRI and Arm announced that the Arm Morello board will become available from 2021: Morello is a prototype 7nm high-end multi-core superscalar ARMv8-A processor (based on Arm’s Neoverse N1), SoC, and board implementing experimental CHERI extensions. As part of this effort, the UK Engineering and Physical Sciences Research Council (EPSRC) has also announced a new £8M programme to fund UK academics to work with Morello. Arm will release their Morello adaptation of our CHERI Clang/LLVM toolchain, and we will release a full adaptation of our open-source CHERI reference software stack to Morello (including our CheriBSD operating system and application suite) as foundations for research and prototyping on Morello. Watch the DSbD workshop videos from Robert Watson (Cambridge), Richard Grisenthwaite (Arm), and Manuel Costa (Microsoft) on CHERI and Morello, which are linked below, for more information.
This is an incredible opportunity to validate the CHERI approach, with accompanying systems software and formal verification, through an industrial-scale, industrial-quality hardware design, and to broaden the research community around CHERI to explore its potential impact. You can read the announcements about Morello here:
- A blog post by Richard Grisenthwaite (Chief Architect, Arm) on DSbD and Morello
- A blog post by Cambridge’s Department of Computer Science and Technology on DSbD and Morello
- The announcement from the UK Department for Business, Energy, and Industrial Strategy (BEIS)
Recordings of several talks on CHERI and Morello are now available from the ISCF Digital Security by Design Challenge Collaborators’ Workshop (26 September 2019), including:
- Robert Watson (Cambridge)’s talk on CHERI, and on our transition collaboration with Arm (video) (slides)
- Richard Grisenthwaite (Arm)’s talk on the Morello board and CHERI transition (video) (slides)
- Manuel Costa (Microsoft)’s talk on memory safety and potential opportunities arising with CHERI and Morello (video)
In addition, we are maintaining a CHERI DSbD web page with background information on CHERI, announcements regarding Morello, links to DSbD funding calls, and information regarding software artefacts, formal models, and so on. We will continue to update that page as the programme proceeds.
This has been possible through the contributions of the many members of the CHERI research team over the last ten years, including: Hesham Almatary, Jonathan Anderson, John Baldwin, Hadrien Barrel, Thomas Bauereiss, Ruslan Bukin, David Chisnall, James Clarke, Nirav Dave, Brooks Davis, Lawrence Esswood, Nathaniel W. Filardo, Khilan Gudka, Brett Gutstein, Alexandre Joannou, Robert Kovacsics, Ben Laurie, A. Theo Markettos, J. Edward Maste, Marno van der Maas, Alfredo Mazzinghi, Alan Mujumdar, Prashanth Mundkur, Steven J. Murdoch, Edward Napierala, Kyndylan Nienhuis, Robert Norton-Wright, Philip Paeps, Lucian Paul-Trifu, Alex Richardson, Michael Roe, Colin Rothwell, Peter Rugg, Hassen Saidi, Stacey Son, Domagoj Stolfa, Andrew Turner, Munraj Vadera, Jonathan Woodruff, Hongyan Xia, and Bjoern A. Zeeb.
Approved for public release; distribution is unlimited. This work was supported by the Defense Advanced Research Projects Agency (DARPA) and the Air Force Research Laboratory (AFRL), under contract FA8750-10-C-0237 (CTSRD), with additional support from FA8750-11-C-0249 (MRC2), HR0011-18-C-0016 (ECATS), and FA8650-18-C-7809 (CIFV) as part of the DARPA CRASH, MRC, and SSITH research programs. The views, opinions, and/or findings contained in this report are those of the authors and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government. We also acknowledge the EPSRC REMS Programme Grant (EP/K008528/1), the ERC ELVER Advanced Grant (789108), the Isaac Newton Trust, the UK Higher Education Innovation Fund (HEIF), Thales E-Security, Microsoft Research Cambridge, Arm Limited, Google, Google DeepMind, HP Enterprise, and the Gates Cambridge Trust.
There has already been serious controversy about the “Henry VIII” powers in the Brexit Bill, which will enable ministers to rewrite laws at their discretion as we leave the EU. Now Theresa May’s government has sneaked a new “Framework for data processing in government” into the Lords committee stage of the new Data Protection Bill (see pages 99-101, which are pp 111–3 of the pdf). It will enable ministers to promulgate a Henry VIII privacy regulation with quite extraordinary properties.
It will cover all data held by any public body including the NHS (175(1)), be outside of the ICO’s jurisdiction (178(5)) and that of any tribunal (178(2)) including Judicial Review (175(4), 176(7)), wider human-rights law (178(2,3,4)), and international jurisdictions – although ministers are supposed to change it if they notice that it breaks any international treaty obligation (177(4)).
In fact it will be changeable on a whim by Ministers (175(4)), have no effective Parliamentary oversight (175(6)), and apply retroactively (178(3)). It will also provide an automatic statutory defence for any data processing in any Government decision taken to any tribunal/court (178(4)).
Ministers have had frequent fights in the past over personal data in the public sector, most frequently over medical records, which they have sold, again and again, to drug companies and others in defiance not just of UK law, EU law and human-rights law, but of the express wishes of patients, articulated by opting out of data “sharing”. In fact, we have to thank MedConfidential for being the first to notice the latest data grab. Their briefing gives more details and sets out the amendments we need to press for in Parliament. This is not the only awful thing about the bill by any means; its section 164 will be terrible news for journalists. This is one of those times when you need to write to your MP. Please do it now!
Over 100 of the best students in cyber from the UK Academic Centres of Excellence in Cyber Security Research are gathered here at the University of Cambridge Computer Laboratory today for the second edition of our annual “Inter-ACE” hacking contest.
The competition is hosted on the CyberNEXS cyber-range of our sponsor Leidos, and involves earning points for hacking into each other’s machines while defending one’s own. The competition has grown substantially from last year’s: you can follow it live on Twitter (@InterACEcyber). At the time of writing, we still don’t know who is going to take home the trophy. Can you guess who will?
The event has been made possible thanks to generous support from the National Cyber Security Centre, the Cabinet Office, Leidos and NCC Group.
The Inter-ACE Cyberchallenge on Saturday was fantastic. The event saw nearly twice as many competitors as attended the C2C competition in Boston recently, engaged in solving the most artful challenges. It was great to see so many students interested in cyber security making the effort to travel from the four corners of the UK, a few from as far away as Belfast!
The competition was played out on a “Risk-style” world map, and competing teams had to fight each other for control of several countries, each protected by a fiendish puzzle. A number of universities had also submitted guest challenges, and it was great that so many teams got involved in this creative process too. To give one example: the Cambridge team had designed a challenge based around a historically accurate Enigma machine, with this challenge protecting the country of Panama. Competitors had to brute-force the settings of the Enigma machine to decode a secret message. Other challenges were based around the core CTF subject areas of web-application security, binary reverse engineering and exploitation, forensics, and crypto. Some novice teams may have struggled to compete, but they will have learned a lot, and hopefully developed an appetite for more competition. There were also plenty of teams present with advanced tool sets and a solid plan, and these preparations clearly paid off in the final scores.
Between the 10 teams, their coaches, the organisers and the reporters, the lab was bustling with excitement and that intense feeling of hackers “in the zone” for the whole afternoon.
I have nothing but praise for our partners Facebook, who worked hard on setting the challenges and making the CTF game run smoothly, as well as feeding the participants with pizza and endowing the prizes with hacking books and goodie bags.
The biggest thanks go to the ACE-CSRs who enthusiastically supported this initiative despite the short notice. 40 students came to Cambridge to compete in the live event in teams of 4, and another 40+ competed remotely in the individual event.
In retrospect we should have organised a “best T-shirt” competition. I especially liked the Facebook T-shirts “Fix more, whine less” and “s/sleep/hack/g”, but the one I would have voted overall winner (despite not technically being a T-shirt) was Southampton’s Shakespearian boolean logic.
It is with a mixture of pride and embarrassment that I announce the winners, as Cambridge won the gold in both the team and individual events.
Team event:
- 1st place (Gold): University of Cambridge
Stella Lau, Will Shackleton, Cheng Sun, Gábor Szarka
- 2nd place (Silver): Imperial College London
Matthieu Buffet, Jiarou Fan, Luke Granger-Brown, Antoine Vianey-Liaud
- 3rd place (Bronze): University of Southampton
Murray Colpman, Kier Davis, Yordan Ganchev, Mohit Gupta

Individual event:
- 1st place (Gold): Dimitrije Erdeljan, University of Cambridge
- 2nd place (Silver): Emma Espinosa, University of Oxford
- 3rd place (Bronze): David Young, University of Southampton
I shall ignore allegations of having rigged the game, except to say that yes, we did train our students rather extensively in preparation for the previously mentioned Cambridge 2 Cambridge event with MIT. All of our winners are Cambridge undergraduates in computer science who had done well in the qualifiers for C2C. Two of them had actually been to Boston, where Gábor had been on the winning team overall and earned one gold and two silver medals, while Will (also a former UK Cyber Security Challenge winner) had earned one gold, one silver and two bronze medals. Well-deserved thanks also to my modest but irreplaceable collaborator Graham Rymer, who designed and delivered an effective and up-to-date ethical hacking course to our volunteers. The Cambridge success in this weekend’s competition gives promising insights into the effectiveness of this training, which we are gearing up to offer to all our undergraduates and potentially to other interested audiences in the future.
We are once again grateful to everyone who took part. We are also grateful to the Cabinet Office, to EPSRC and to GCHQ for support that will allow us to keep the event running and we hereby invite all the ACEs to sharpen their hacking tools for next year and come back to attempt to reconquer the trophy from us.
Online booking is now open for the annual Lovelace lecture, which I’m giving in London on March 15th. I’ll be using the event to try to develop a cross-cutting theme that arises from my research but is of wider interest. What do we mean by scale?
Back when computing was expensive, computer science started out worrying about how computations scaled – such as which sorting algorithms ran as n², n log n or even n. Once software development became the bottleneck, we started off by trying the same approach (measuring code entropy or control-graph complexity), but moved on to distinguishing what types of complexity could be dealt with by suitable tools. Mass-market computing and communications brought network effects, and scalability started to depend on context (this is where security economics came in). Now, as we move to “Big Data”, the dependency on people becomes more explicit. Few people have stopped to think of human factors in scaling terms. Do we make information about many people available to many, or to few? What about the complexity of the things that can be done with personal data? What about costs now versus in the future, and the elasticity of demand associated with such costs? Do you just count the data subjects, do you count the attackers too, or do you add the cops as well?
I’ve been quoted as saying “You can have security, functionality, scale – choose any two” or words to that effect. I’ll discuss this and try to sketch the likely boundaries, as well as future research directions. The discussion will cross over from science and engineering to economics and politics; recent proposed legislation in the UK, and court cases in the USA, would impose compliance burdens on people trying to scale systems up from one country to many.
The students we’re training to be the next generation of developers and entrepreneurs will need a broader understanding of what’s involved in scaling systems up, and in this talk I’ll try to explore what that means. Maybe I’m taking a risk with this talk, as I’m trying to assemble into a row a lot of facts that are usually found in different columns. But I do hope it will not be boring.