Monthly Archives: October 2019

UKRI Digital Security by Design: A £190M research programme around Arm’s Morello – an experimental ARMv8-A CPU, SoC, and board with CHERI support

PIs: Robert N. M. Watson (Cambridge), Simon W. Moore (Cambridge), Peter Sewell (Cambridge), and Peter G. Neumann (SRI)

Since 2010, SRI International and the University of Cambridge, supported by DARPA, have been developing CHERI: a capability-system extension to RISC Instruction-Set Architectures (ISAs) supporting fine-grained memory protection and scalable compartmentalization, while retaining incremental deployability within current C and C++ software stacks. This ten-year research project has involved hardware-software-semantic co-design: FPGA prototyping, compiler development, operating-system development, and application adaptation, as well as formal modeling and proof. As documented extensively in technical reports and research papers, we have iterated on CHERI, evaluating and improving microarchitectural overheads, performance, software compatibility, and security.

Mainstream computer systems remain chronically insecure. One of the main reasons is that conventional hardware architectures and C/C++ language abstractions, dating back to the 1970s, provide only coarse-grained memory protection. Without memory safety, many coding errors turn into exploitable security vulnerabilities. In our ASPLOS 2019 paper on CheriABI (best paper award), we demonstrated that a complete UNIX userspace and application suite could be protected by strong memory safety with minimal source-code disruption and acceptable performance overheads. Scalable software compartmentalization offers a mitigation against future, as-yet-unknown classes of vulnerabilities by enabling greater use of design patterns such as software sandboxing. Our technical report An Introduction to CHERI introduces our approach, including the architecture, microarchitectural contributions, formal models, software protection model, and practical software adaptation. The CHERI ISA v7 specification is the authoritative reference for the architecture, including both the architecture-neutral protection model and its concrete mappings into the 64-bit MIPS and 32/64-bit RISC-V ISAs. Our Rigorous Engineering technical report describes our modelling and mechanised proof of key security properties.

Today, we are very excited to be able to talk about another long-running aspect of our DARPA-supported work: a collaboration since 2014 with engineers at Arm to create an experimental adaptation of CHERI to the ARMv8-A architecture. This widely used ISA is the foundation for the vast majority of mobile phones and tablets, including those running iOS and Android. The £187M UKRI program Digital Security by Design (DSbD) was announced in late September 2019 to explore potential applications of CHERI — with a £70M investment by UKRI, and a further £117M from industry including involvement by Arm, Microsoft, and Google. Today, UKRI and Arm announced that the Arm Morello board will become available from 2021: Morello is a prototype 7nm high-end multi-core superscalar ARMv8-A processor (based on Arm’s Neoverse N1), SoC, and board implementing experimental CHERI extensions. As part of this effort, the UK Engineering and Physical Sciences Research Council (EPSRC) has also announced a new £8M programme to fund UK academics to work with Morello. Arm will release their Morello adaptation of our CHERI Clang/LLVM toolchain, and we will release a full adaptation of our open-source CHERI reference software stack to Morello (including our CheriBSD operating system and application suite) as foundations for research and prototyping on Morello. Watch the DSbD workshop videos from Robert Watson (Cambridge), Richard Grisenthwaite (Arm), and Manuel Costa (Microsoft) on CHERI and Morello, which are linked below, for more information.

This is an incredible opportunity to validate the CHERI approach, with accompanying systems software and formal verification, through an industrial-scale, industrial-quality hardware design, and to broaden the research community around CHERI to explore its potential impact. The announcements about Morello are linked from our CHERI DSbD web page, described below.

Recordings of several talks on CHERI and Morello are now available from the ISCF Digital Security by Design Challenge Collaborators’ Workshop (26 September 2019), including:

  • Robert Watson (Cambridge)’s talk on CHERI, and on our transition collaboration with Arm (video) (slides)
  • Richard Grisenthwaite (Arm)’s talk on the Morello board and CHERI transition (video) (slides)
  • Manuel Costa (Microsoft)’s talk on memory safety and potential opportunities arising with CHERI and Morello (video)

In addition, we are maintaining a CHERI DSbD web page with background information on CHERI, announcements regarding Morello, links to DSbD funding calls, and information regarding software artefacts, formal models, and so on. We will continue to update that page as the programme proceeds.

This has been possible through the contributions of the many members of the CHERI research team over the last ten years, including: Hesham Almatary, Jonathan Anderson, John Baldwin, Hadrien Barrel, Thomas Bauereiss, Ruslan Bukin, David Chisnall, James Clarke, Nirav Dave, Brooks Davis, Lawrence Esswood, Nathaniel W. Filardo, Khilan Gudka, Brett Gutstein, Alexandre Joannou, Robert Kovacsics, Ben Laurie, A. Theo Markettos, J. Edward Maste, Marno van der Maas, Alfredo Mazzinghi, Alan Mujumdar, Prashanth Mundkur, Steven J. Murdoch, Edward Napierala, Kyndylan Nienhuis, Robert Norton-Wright, Philip Paeps, Lucian Paul-Trifu, Alex Richardson, Michael Roe, Colin Rothwell, Peter Rugg, Hassen Saidi, Stacey Son, Domagoj Stolfa, Andrew Turner, Munraj Vadera, Jonathan Woodruff, Hongyan Xia, and Bjoern A. Zeeb.

Approved for public release; distribution is unlimited. This work was supported by the Defense Advanced Research Projects Agency (DARPA) and the Air Force Research Laboratory (AFRL), under contract FA8750-10-C-0237 (CTSRD), with additional support from FA8750-11-C-0249 (MRC2), HR0011-18-C-0016 (ECATS), and FA8650-18-C-7809 (CIFV) as part of the DARPA CRASH, MRC, and SSITH research programs. The views, opinions, and/or findings contained in this report are those of the authors and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government. We also acknowledge the EPSRC REMS Programme Grant (EP/K008528/1), the ERC ELVER Advanced Grant (789108), the Isaac Newton Trust, the UK Higher Education Innovation Fund (HEIF), Thales E-Security, Microsoft Research Cambridge, Arm Limited, Google, Google DeepMind, HP Enterprise, and the Gates Cambridge Trust.

Online suicide games: a form of digital self-harm or a myth?

By Maria Bada & Richard Clayton

October is ‘Cyber Security Month’, and you will see lots of warnings and advice about how to keep yourself safe online. Unfortunately, not every warning is entirely accurate, and particularly egregious examples are the warnings about ‘suicide games’, which are said to involve an escalating series of challenges ending in suicide.

Here at the Cambridge Cybercrime Centre, we’ve been looking into suicide games by interviewing teachers, child protection experts and NGOs; and by tracking mentions of games such as the ‘Blue Whale Challenge’ and ‘Momo’ in news stories and on UK Police and related websites.

We found that the stories about online suicide games have no discernible basis in fact and are linked to misperceptions about actual suicides. A key finding of our work is that media, social media and well-meaning (but baseless) warning releases by authorities are spreading the challenge culture and exaggerating fears.

To clarify: virally spreading challenges are real, and some are unexpectedly dangerous, such as the salt and ice challenge, the cinnamon challenge and, more recently, skin embroidery. Very sadly, of course, suicides are also real, but we are convinced that the combination — the ‘suicide game’ — has no basis in fact.

We’re not alone in our belief. Snopes investigated Blue Whale in 2017 and deemed the story ‘unproven’, while in 2019 the BBC posted a detailed history of Blue Whale showing there was no record of such a game prior to a single Russian media article of dubious accuracy. The UK Safer Internet Centre calls the claims around Momo ‘fake news’, while YouTube has found no evidence to support the claim that there are videos showing or promoting Momo on its platform.

Regardless of whether a challenge is dangerous, youngsters are especially motivated to take part, presumably out of curiosity and a desire for attention. The ‘challenge culture’ is a deeply rooted online phenomenon. Young people constantly receive media messages and new norms which inform not only their thinking, but also their values and beliefs.

Although there is no evidence that the suicide games are ‘real’, authorities around the world have reacted by releasing warnings and creating information campaigns to warn youngsters and parents about the risks. However, a key concern when discussing, or warning of, suicide games is that this drives children towards the very content of concern and raises the risk of ‘suicide contagion’, which could turn stories into a tragic self-fulfilling prophecy for a small number of vulnerable youths.

Understanding what media content really means, where it comes from and why a particular message has been constructed is crucial to interpreting media messages correctly. That understanding can only be acquired through media literacy. However, in most countries media education is still a secondary activity, undertaken by teachers or media educators without training or proper materials.

Our research recommends policy measures such as:

  • awareness and education to ensure that young people can handle risks online and offline;
  • development of national and international strategies and guidelines for suicide prevention, and for how news related to suicides is presented in media and social media;
  • development of social media and media literacy;
  • collaborative efforts by media, legal systems and education to prevent suicides;
  • guidance for quality control of warning releases by authorities.

Maria Bada presented this work on 24-26th June 2019 at the 24th Annual CyberPsychology, CyberTherapy & Social Networking Conference (CYPSY24) in Norfolk, Virginia, USA. Click here to access the abstract of this paper; the full version is currently under peer review and should be available soon.