Category Archives: Open-source security

When safety and security become one

What happens when your car starts getting monthly upgrades like your phone and your laptop? It’s starting to happen, and the changes will be profound. We’ll be able to improve car safety as we learn from accidents, and fixing a flaw won’t mean spending billions on a recall. But if you’re writing navigation code today that will go in the 2020 Landrover, how will you be able to ship safety and security patches in 2030? In 2040? In 2050? At present we struggle to keep software patched for three years; we have no idea how to do it for 30.

Our latest paper reports a project that Éireann Leverett, Richard Clayton and I undertook for the European Commission into what happens to safety in this brave new world. Europe is the world’s lead safety regulator for about a dozen industry sectors, of which we studied three: road transport, medical devices and the electricity industry.

Up till now, we’ve known how to make two kinds of fairly secure system. There’s the software in your phone or laptop which is complex and exposed to online attack, so has to be patched regularly as vulnerabilities are discovered. It’s typically abandoned after a few years as patching too many versions of software costs too much. The other kind is the software in safety-critical machinery which has tended to be stable, simple and thoroughly tested, and not exposed to the big bad Internet. As these two worlds collide, there will be some rather large waves.

Regulators who only thought in terms of safety will have to start thinking of security too. Safety engineers will have to learn adversarial thinking. Security engineers will have to think much more about ease of safe use. Educators will have to start teaching these subjects together. (I just expanded my introductory course on software engineering into one on software and security engineering.) And the policy debate will change too; people might vote for the FBI to have a golden master key to unlock your iPhone and read your private messages, but they might be less likely to vote them a master key to take over your car or your pacemaker.

Researchers and software developers will have to think seriously about how we can keep on patching the software in durable goods such as vehicles for thirty or forty years. It’s not acceptable to recycle cars after seven years, as greedy carmakers might hope; the embedded carbon cost of a car is about equal to its lifetime fuel burn, and reducing average mileage from 200,000 to 70,000 would treble the car industry’s CO2 emissions. So we’re going to have to learn how to make software sustainable. How do we do that?
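
For the sceptical reader, the arithmetic behind that claim can be sketched quickly. This is an illustrative back-of-envelope calculation, not a figure from the paper: if the embedded carbon in a car is roughly equal to its lifetime fuel burn over 200,000 miles, then scrapping cars at 70,000 miles means building nearly three times as many of them to deliver the same mileage.

```python
# Back-of-envelope sketch (illustrative assumptions, not figures from the paper):
# assume embedded (manufacturing) carbon per car roughly equals the fuel burned
# over a 200,000-mile life, and normalise that fuel burn to 1 unit.
embedded_per_car = 1.0
fuel_per_200k_miles = 1.0

cars_per_200k_miles_now = 200_000 / 200_000   # one car lasts the full 200,000 miles
cars_per_200k_miles_short = 200_000 / 70_000  # ~2.9 cars if scrapped at 70,000 miles

embedded_now = cars_per_200k_miles_now * embedded_per_car      # 1.0
embedded_short = cars_per_200k_miles_short * embedded_per_car  # ~2.9: manufacturing emissions nearly treble

total_now = embedded_now + fuel_per_200k_miles      # 2.0
total_short = embedded_short + fuel_per_200k_miles  # ~3.9: overall emissions roughly double
print(round(embedded_short / embedded_now, 2), round(total_short / total_now, 2))
```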

Our paper is here; there’s a short video here and a longer video here.

DigiTally

Last week I gave a keynote talk at CCS about DigiTally, a project we’ve been working on to extend mobile payments to areas where the network is intermittent, congested or non-existent.

The Bill and Melinda Gates Foundation called for ways to increase the use of mobile payments, which have been transformative in many less developed countries. We did some research and found that network availability and cost were the two main problems. So how could we do phone payments where there’s no network, with a marginal cost of zero? If people had smartphones you could use some combination of NFC, bluetooth and local wifi, but most of the rural poor in Africa and Asia use simple phones without any extra communications modalities, other than those which the users themselves can provide. So how could you enable people to do phone payments by simple user actions? We were inspired by the prepayment electricity meters I helped develop some twenty years ago; meters conforming to this spec are now used in over 100 countries.

We got a small grant from the Gates Foundation to do a prototype and field trial. We designed a system, DigiTally, where Alice can pay Bob by exchanging eight-digit MACs that are generated, and verified, by the SIM cards in their phones. For rapid prototyping we used overlay SIMs (which are already being used in a different phone payment system in Africa). The cryptography is described in a paper we gave at the Security Protocols Workshop this spring.
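
The actual protocol lives in the SIM and is specified in that paper; to give a flavour of the idea, here is a minimal sketch, in Python with made-up message fields and key handling, of how a short, human-readable payment code can be derived from a shared key so the payee’s device can verify it offline. It is not the DigiTally protocol itself.

```python
# Hypothetical sketch of a short payment authorisation code, NOT the actual
# DigiTally protocol (which runs inside the SIM and is specified in our
# Security Protocols Workshop paper). Names and message fields are illustrative.
import hmac
import hashlib

def payment_code(shared_key: bytes, payer_id: str, payee_id: str,
                 amount: int, sequence_no: int) -> str:
    """Derive an eight-digit code binding payer, payee, amount and sequence number."""
    msg = f"{payer_id}|{payee_id}|{amount}|{sequence_no}".encode()
    tag = hmac.new(shared_key, msg, hashlib.sha256).digest()
    # Truncate the MAC to eight decimal digits so it can be read out or typed.
    return f"{int.from_bytes(tag[:8], 'big') % 10**8:08d}"

def verify_payment_code(shared_key: bytes, payer_id: str, payee_id: str,
                        amount: int, sequence_no: int, code: str) -> bool:
    expected = payment_code(shared_key, payer_id, payee_id, amount, sequence_no)
    return hmac.compare_digest(expected, code)

# Example: Alice's device generates a code; Bob's device recomputes and checks it.
key = b"\x00" * 16   # in the real system the key material never leaves the SIM
code = payment_code(key, "alice", "bob", 250, 42)
assert verify_payment_code(key, "alice", "bob", 250, 42, code)
```

In the real system the protocol also handles replay protection and balance updates; the point here is simply that an eight-digit truncated MAC is something two simple phones can exchange by hand.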

Last month we took the prototype to Strathmore University in Nairobi to do a field trial involving usability studies in their bookshop, coffee shop and cafeteria. The results were very encouraging and I described them in my talk at CCS (slides). There will be a paper on this study in due course. We’re now looking for partners to do deployment at scale, whether in phone payments or in other apps that need to support value transfer in delay-tolerant networks.

There has been press coverage in the New Scientist, Engadget and Impress (original Japanese version).

Report on the IP Bill

This morning at 0930 the Joint Committee on the IP Bill is launching its report. As one of the witnesses who appeared before it, I got an embargoed copy yesterday.

The report is deeply disappointing; even that of the Intelligence and Security Committee (whom we tended to dismiss as government catspaws) is more vigorous. The MPs and peers on the Joint Committee have given the spooks all they wanted, while recommending tweaks and polishes here and there to some of the more obvious hooks and sharp edges.

The committee supports comms data retention, despite acknowledging that multiple courts have found this contrary to EU and human-rights law, and the fact that there are cases in the pipeline. It supports extending retention from big telcos offering a public service to private operators and even coffee shops. It supports greatly extending comms data to ICRs; although it does call for more clarity on the definition, it gives the Home Office lots of wriggle room by saying that a clear definition is hard if you want to catch all the things that bad people might do in the future. (Presumably a coffee shop served with an ICR order will have no choice but to install a government-approved black box, or just pipe everything to Cheltenham.) It welcomes the government’s decision to build and operate a request filter – essentially the comms database for which the Home Office has been trying to get parliamentary approval since the days of Jacqui Smith (and which Snowden told us they just built anyway). It comes up with the rather startling justification that this will help privacy as the police may have access to less stuff (though of course the spooks, including our 5eyes partners and others, will have more). It wants end-to-end encrypted stuff to be made available unless it’s “not practicable to do so”, which presumably means that the Home Secretary can order Apple to add her public key quietly to your keyring to get at your Facetime video chats. That has been a key goal of the FBI in Crypto War 2; a Home Office witness openly acknowledged it.

The comparison with the USA is stark. There, all three branches of government realised they’d gone too far after Snowden. President Obama set up the NSA review group, and implemented most of its recommendations by executive order; the judiciary made changes to the procedures of the FISA Court; and Congress failed to renew the data retention provisions in the Patriot Act (aided by the judiciary). Yet here in Britain the response is just to take Henry VIII powers to legalise all the illegal things that GCHQ had been up to, and hope that the European courts won’t strike the law down yet again.

People concerned for freedom and privacy will just have to hope the contrary. The net effect of the minor amendments proposed by the joint committee will be to make it even harder to get any meaningful amendments as the Bill makes its way through Parliament, and we’ll end up having to rely on the European courts to trim it back.

For more, see Scrambling for Safety, a conference we held last month in London on the bill and whose video is now online, and last week’s Cambridge symposium for a more detailed analysis.

CHERI: Architectural support for the scalable implementation of the principle of least privilege

[Photo: FPGA-based CHERI prototype tablet — a 64-bit RISC processor that boots CheriBSD, a CHERI-enhanced version of the FreeBSD operating system.]
Only slightly overdue, this post is about our recent IEEE Security and Privacy 2015 paper, CHERI: A Hybrid Capability-System Architecture for Scalable Software Compartmentalization. We’ve previously written about how our CHERI processor blends a conventional RISC ISA and processor pipeline design with a capability-system model to provide fine-grained memory protection within virtual address spaces (ISCA 2014, ASPLOS 2015). In this new paper, we explore how CHERI’s capability-system features can be used to implement fine-grained and scalable application compartmentalisation: many (many) sandboxes within a single UNIX process — a far more efficient and programmer-friendly target for secure software than current architectures.


Double bill: Password Hashing Competition + KeyboardPrivacy

Two interesting items from Per Thorsheim, founder of the PasswordsCon conference that we’re hosting here in Cambridge this December (you still have one month to submit papers, BTW).

First, the Password Hashing Competition “have selected Argon2 as a basis for the final PHC winner”, which will be “finalized by end of Q3 2015”. This is about selecting a new password hashing scheme to improve on the state of the art and make brute force password cracking harder. Hopefully we’ll have some good presentations about this topic at the conference.
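
If you want to experiment with Argon2 today, bindings already exist; below is a minimal sketch using the argon2-cffi Python package (assuming it is installed — the cost parameters shown are illustrative, not a recommendation).

```python
# Minimal password-hashing sketch using the argon2-cffi package
# (pip install argon2-cffi); cost parameters are illustrative only.
from argon2 import PasswordHasher
from argon2.exceptions import VerifyMismatchError

ph = PasswordHasher(time_cost=3, memory_cost=64 * 1024, parallelism=2)

stored = ph.hash("correct horse battery staple")   # salted, encoded hash string
print(stored)

ph.verify(stored, "correct horse battery staple")  # succeeds
try:
    ph.verify(stored, "wrong guess")               # raises VerifyMismatchError
except VerifyMismatchError:
    print("password rejected")
```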

Second, and unrelated: Per Thorsheim and Paul Moore have launched a privacy-protecting Chrome plugin called Keyboard Privacy to guard your anonymity against websites that look at keystroke dynamics to identify users. So, you might go through Tor, but the site recognizes you by your typing pattern and builds a typing profile that “can be used to identify you at other sites you’re using, where identifiable information is available about you”. Their plugin intercepts your keystrokes, batches them up and delivers them to the website at a constant pace, interfering with the site’s ability to build a profile that identifies you.
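
The plugin itself is a Chrome extension written in JavaScript; the sketch below is not its code, just a Python illustration of the underlying idea — buffer the events and release them at a fixed cadence, so inter-key timings no longer reflect the user’s rhythm.

```python
# Illustrative sketch (not the plugin's code): buffer keystrokes and release
# them at a constant cadence, so a site sees uniform inter-key timings.
import queue
import threading
import time

class ConstantPaceBuffer:
    def __init__(self, deliver, interval=0.05):
        self.q = queue.Queue()
        self.deliver = deliver      # callback that forwards a key to the page
        self.interval = interval    # fixed delay between forwarded keys
        threading.Thread(target=self._drain, daemon=True).start()

    def on_keystroke(self, key):
        self.q.put(key)             # the user's real timing is absorbed by the queue

    def _drain(self):
        while True:
            key = self.q.get()
            self.deliver(key)
            time.sleep(self.interval)   # uniform pace, independent of typing rhythm

buf = ConstantPaceBuffer(deliver=lambda k: print(k, end="", flush=True))
for ch in "typing pattern hidden":
    buf.on_keystroke(ch)
    time.sleep(0.01 * (1 + hash(ch) % 5))  # stand-in for the user's real rhythm
time.sleep(2)  # let the buffer drain before the script exits
```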

Meeting Snowden in Princeton

I’m at Princeton where Ed Snowden is due to speak by live video link in a few minutes, and have a discussion with Bart Gellman.

Yesterday he spent four hours with a group of cryptographers from industry and academia, of which I was privileged to be one. The topic was the possible and likely countermeasures, both legal and technical, against state surveillance. Ed attended as the “Snobot”, a telepresence robot that let him speak to us, listen and move round the room, from a studio in Moscow. As well as over a dozen cryptographers there was at least one lawyer and at least one journalist familiar with the leaked documents. Yesterday’s meeting was under the Chatham House rule, so I may not say who said what; any new disclosures may have been made by Snowden, or by one of the journalists, or by one of the cryptographers who has assisted journalists with the material. Although most of what was discussed has probably appeared already in one place or another, as a matter of prudence I’m publishing these notes on the blog while I’m enjoying US first-amendment rights, and will sanitise them from my laptop before coming back through UK customs.

The problem of state surveillance is a global one rather than an NSA issue, and has been growing for years, along with public awareness of it. But we learned a lot from the leaks; for example, wiretaps on the communications between data centres were something nobody thought of; and it might do no harm to think a bit more about the backhaul in CDNs. (A website that runs TLS to a CDN and then bareback to the main server is actually worse than nothing, as we lose the ability to shame them.) Of course the agencies will go for the low-hanging fruit. Second, we also got some reassurance; for example, TLS works, unless the agencies have managed to steal or coerce the private keys, or hack the end systems. (This is a complex discussion given CDNs, problems with the CA ecology and bugs like Heartbleed.) And it’s a matter of record that Ed trusted his life to Tor, because he saw from the other side that it worked.

Third, the leaks give us a clear view of an intelligence analyst’s workflow. She will mainly look in Xkeyscore, which is the Google of 5eyes comint; it’s a federated system hoovering up masses of stuff, not just from 5eyes’ own assets but from other countries where the NSA cooperates or pays for access. Data are “ingested” into a vast rolling buffer; an analyst can run a federated search, using a selector (such as an IP address) or fingerprint (something that can be matched against the traffic). There are other such systems: “Dancing oasis” is the Middle Eastern version. Some Xkeyscore assets are actually compromised third-party systems; there are multiple cases of rooted SMS servers that are queried in place and the results exfiltrated. Others involve vast infrastructure, like Tempora. If data in Xkeyscore are marked as of interest, they’re moved to Pinwale to be memorialised for 5+ years. This is one function of the MDRs (massive data repositories, now more tactfully renamed mission data repositories) like Utah. At present storage is behind ingestion. Xkeyscore buffer times just depend on volumes and what storage they managed to install, plus what they manage to filter out.

As for crypto capabilities, a lot of stuff is decrypted automatically on ingest (e.g. using a “stolen cert”, presumably a private key obtained through hacking). Otherwise the analyst sends the ciphertext to CES and they either decrypt it or say they can’t. There’s no evidence of a “wow” cryptanalysis; it was key theft, or an implant, or a predictable RNG, or supply-chain interference. Cryptanalysis has been seen of RC4, but not of elliptic curve crypto, and there’s no sign of exploits against other commonly used algorithms. Of course, the vendors of some products have been coopted, notably Skype. Homegrown crypto is routinely problematic, but properly implemented crypto keeps the agency out; gpg ciphertexts with RSA 1024 were returned as fails.

With IKE the NSA were interested in getting the original handshakes, harvesting them all systematically worldwide. These are databased and indexed. The quantum type attacks were common against non-crypto traffic; it’s easy to spam a poisoned link. However there is no evidence at all of active attacks on cryptographic protocols, or of any break-and-poison attack on crypto links. It is however possible that the hacking crew can use your cryptography to go after your end system rather than the content, if for example your crypto software has a buffer overflow.

What else might we learn from the disclosures when designing and implementing crypto? Well, read the disclosures and use your brain. Why did GCHQ bother stealing all the SIM card keys for Iceland from Gemalto, unless they have access to the local GSM radio links? Just look at the roof panels on US or UK embassies, that look like concrete but are actually transparent to RF. So when designing a protocol ask yourself whether a local listener is a serious consideration.

In addition to the Gemalto case, Belgacom is another case of hacking X to get at Y. The kind of attack here is now completely routine: you look for the HR spreadsheet in corporate email traffic, use this to identify the sysadmins, then chain your way in. Companies need to have some clue if they’re to stop attacks like this succeeding almost trivially. By routinely hacking companies of interest, the agencies are comprehensively undermining the security of critical infrastructure, and claim it’s a “nobody but us” capability. However, that’s not going to last; other countries will catch up.

Would opportunistic encryption help, such as using unauthenticated Diffie-Hellman everywhere? Quite probably; but governments might then simply compel the big service firms to make the seeds predictable. At present, key theft is probably more common than key compulsion in US operations (though other countries may be different). If the US government ever does use compelled certs, it’s more likely to be the FBI than the NSA, because of the latter’s focus on foreign targets. The FBI will occasionally buy hacked servers to run in place as honeypots, but Stuxnet and Flame used stolen certs. Bear in mind that anyone outside the USA has zero rights under US law.
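
For readers who haven’t seen it, this is what unauthenticated (opportunistic) key agreement looks like in practice — a minimal sketch using the X25519 primitives in the Python cryptography package. It resists only passive interception; and if the “seeds” (the private keys) are predictable, as a compelled vendor could arrange, the whole exchange collapses.

```python
# Sketch of unauthenticated (opportunistic) key agreement with X25519, using
# the 'cryptography' package. No certificates or signatures: this resists a
# passive eavesdropper only, and an active man-in-the-middle defeats it.
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

alice_priv = X25519PrivateKey.generate()   # if this randomness is predictable,
bob_priv = X25519PrivateKey.generate()     # the "seed" compulsion attack wins

# Each side sends its public key in the clear.
alice_shared = alice_priv.exchange(bob_priv.public_key())
bob_shared = bob_priv.exchange(alice_priv.public_key())
assert alice_shared == bob_shared

# Derive a session key from the shared secret.
session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"opportunistic-session").derive(alice_shared)
```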

Is it sensible to use medium-security systems such as Skype to hide traffic, even though they will give law enforcement access? For example, an NGO contacting people in one of the Stans might not want to incriminate them by using cryptography. The problem with this is that systems like Skype will give access not just to the FBI but to all sorts of really unsavoury police forces.

FBI operations can be opaque because of the care they take with parallel construction; the Lavabit case was maybe an example. It could have been easy to steal the key, but then how would the intercepted content have been used in court? In practice, there are tons of convictions made on the basis of cargo manifests, travel plans, calendars and other such plaintext data about which a suitable story can be told. The FBI considers it to be good practice to just grab all traffic data and memorialise it forever.

The NSA is even more cautious than the FBI, and won’t use top exploits against clueful targets unless it really matters. Intelligence services are at least aware of the risk of losing a capability, unlike vanilla law enforcement, who once they have a tool will use it against absolutely everybody.

Using network intrusion detection against bad actors is very much like the attack / defence evolution seen in the anti-virus business. A system called Tutelage uses Xkeyscore infrastructure and matches network traffic against signatures, just like AV, but it has the same weaknesses. Script kiddies are easily identifiable from their script signatures via Xkeyscore, but the real bad actors know how to change network signatures, just as modern malware uses packers to become highly polymorphic.
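
To see why, here is a toy illustration (nothing to do with Tutelage’s actual rules) of byte-signature matching: a fixed pattern catches the unmodified payload, but even a one-byte XOR “packer” slips past it.

```python
# Toy illustration of signature-based detection (in the spirit of AV or
# Tutelage-style matching, not their actual rules): a fixed byte pattern
# catches an unmodified payload but misses a trivially re-encoded one.
SIGNATURES = {b"evil_dropper_v1", b"GET /cmd.php?id="}

def matches_signature(payload: bytes) -> bool:
    return any(sig in payload for sig in SIGNATURES)

original = b"...GET /cmd.php?id=1337..."
repacked = bytes(b ^ 0x5A for b in original)   # one-byte XOR "packer"

print(matches_signature(original))   # True  -- the script kiddie gets caught
print(matches_signature(repacked))   # False -- polymorphism defeats the signature
```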

Cooperation with companies on network intrusion detection is tied up with liability games. DDoS attacks from Iran spooked US banks, which invited the government in to snoop on their networks, but above all wanted liability protection.

Usability is critical. Lots of good crypto never got widely adopted as it was too hard to use; think of PGP. On the other hand, Tails is horrifically vulnerable to traditional endpoint attacks, but you can give it as a package to journalists to use so they won’t make so many mistakes. The source has to think “How can I protect myself?”, which makes it really hard, especially for a source without a crypto and security background. You just can’t trust random journalists to be clueful about everything from scripting to airgaps. Come to think of it, a naive source shouldn’t trust their life to SecureDrop; they should use gpg before sending stuff to it, but they won’t figure out that it’s a good idea to suppress key IDs. Engineers who design stuff for whistleblowers and journalists must be really thoughtful and careful if they want to ensure their users won’t die when they screw up. The goal should be that no single error is fatal, and so long as failures aren’t compounded the users will stay alive. Bear in mind that non-roman-language countries use numeric passwords, and often just 8 digits. And being a target can really change the way you operate. For example, password managers are great, but not for someone like Ed, as they put too many eggs in one basket. If you’re a target, create a memory castle, or a token that can be destroyed on short notice. If you’re a target like Ed, you have to compartmentalise.

On the policy front, one of the eye-openers was the scale of intelligence sharing – it’s not just 5 eyes, but 15 or 35 or even 65 once you count all the countries sharing stuff with the NSA. So how does governance work? Quite simply, the NSA doesn’t care about policy. Their OGC has 100 lawyers whose job is to “enable the mission”; to figure out loopholes or new interpretations of the law that let stuff get done. How do you restrain this? Could you use courts in other countries, that have stronger human-rights law? The precedents are not encouraging. New Zealand’s GCSB was sharing intel with Bangladesh agencies while the NZ government was investigating them for human-rights abuses. Ramstein in Germany is involved in all the drone killings, as fibre is needed to keep latency down low enough for remote vehicle pilots. The problem is that the intelligence agencies figure out ways to shield the authorities from culpability, and this should not happen.

Jurisdiction is a big soft spot. When will CDNs get tapped on the shoulder by local law enforcement in dodgy countries? Can you lock stuff out of particular jurisdictions, so your stuff doesn’t end up in Egypt just for load-balancing reasons? Can the NSA force data to be rehomed in a friendly jurisdiction, e.g. by a light DoS? Then they “request” stuff from a partner rather than “collecting” it.

The spooks’ lawyers play games, saying for example that they dumped content; but if you know the IP address and file size you often have it, and an IP address is a good enough pseudonym for most intel / LE use. They deny that they outsource to do legal arbitrage (e.g. NSA spies on Brits and GCHQ returns the favour by spying on Americans). Are they telling the truth? In theory there will be an MOU between the NSA and the partner agency stipulating respect for each other’s laws, but there can be caveats, such as a classified version which says “this is not a binding legal document”. The sad fact is that law and legislators are losing the capability to hold people in the intelligence world to account, and also losing the appetite for it.

The deepest problem is that the system architecture that has evolved in recent years holds masses of information on many people with no intelligence value, but with vast potential for political abuse.

Traditional law enforcement worked on individualised suspicion; end-system compromise is better than mass search. Ed is on the record as leaving to the journalists all decisions about what targeted attacks to talk about, as many of them are against real bad people, and as a matter of principle we don’t want to stop targeted attacks.

Interference with crypto in academia and industry is longstanding. People who intern with a clearance get a “lifetime obligation” when they go through indoctrination (yes, that’s what it’s called), and this includes pre-publication review of anything relevant they write. The prepublication review board (PRB) at the CIA is notoriously unresponsive and you have to litigate to write a book. There are also specific programmes to recruit cryptographers, with a view to having friendly insiders in companies that might use or deploy crypto.

The export control mechanisms are also used as an early warning mechanism, to tip off the agency that kit X will be shipped to country Y on date Z. Then the technicians can insert an implant without anyone at the exporting company knowing a thing. This is usually much better than getting stuff Trojanned by the vendor.

Western governments are foolish to think they can develop NOBUS (no-one but us) technology and press the stop button when things go wrong, as this might not be true for ever. Stuxnet was highly targeted and carefully delivered but it ended up in Indonesia too. Developing countries talk of our first-mover advantage in carbon industrialisation, and push back when we ask them to burn less coal. They will make the same security arguments as our governments and use the same techniques, but without the same standards of care. Bear in mind, on the equities issue, that attack is way way easier than defence. So is cyber-war plausible? Politically no, but at the expert level it might eventually be so. Eventually something scary will happen, and then infrastructure companies will care more, but it’s doubtful that anyone will do a sufficiently coordinated attack on enough diverse plant through different firewalls and so on to pose a major threat to life.

How can we push back on the poisoning of the crypto/security community? We have to accept that some people are pro-NSA while others are pro-humanity. Some researchers do responsible disclosure while others devise zero-days and sell them to the NSA or Vupen. We can push back a bit by blocking papers from conferences or otherwise denying academic credit where researchers prefer cash or patriotism to responsible disclosure, but that only goes so far. People who can pay for a new kitchen with their first exploit sale can get very patriotic; NSA contractors have a higher standard of living than academics. It’s best to develop a culture where people with and without clearances agree that crypto must be open and robust. The FREAK attack was based on export crypto of the 1990s.

We must also strengthen post-national norms in academia, while in the software world we need transparency, not just in the sense of open source but of business relationships too. Open source makes it harder for security companies to sell different versions of the product to people we like and people we hate. And the NSA may have thought dual-EC was OK because they were so close to RSA; a sceptical purchaser should have observed how many government speakers help them out at the RSA conference!

Secret laws are pure poison; government lawyers claim authority and act on it, and we don’t know about it. Transparency about what governments can and can’t do is vital.

On the technical front, we can’t replace the existing infrastructure, so it won’t be possible in the short term to give people mobile phones that can’t be tracked. However it is possible to layer new communications systems on top of what already exists, as with the new generation of messaging apps that support end-to-end crypto with no key escrow. As for whether such systems take off on a large enough scale to make a difference, ultimately it will all be about incentives.

Our Christmas message for troublemakers: how to do anonymity in the real world

On the 5th of December I gave a talk at a journalists’ conference on what tradecraft means in the post-Snowden world. How can a journalist, or for that matter an MP or an academic, protect a whistleblower from being identified even when MI5 and GCHQ start trying to figure out who in Whitehall you’ve been talking to? The video of my talk is now online here. There is also a TV interview I did later, which can be found here, while the other conference talks are here.

Enjoy!

Ross

Design and Implementation of the FreeBSD Operating System, Second Edition now shipping

Kirk McKusick, George Neville-Neil, and I are pleased to announce that The Design and Implementation of the FreeBSD Operating System, Second Edition is now available from Pearson Education (Amazon link for non-US folk). Light Blue Touchpaper readers might be particularly interested in the new chapter on FreeBSD’s kernel security features including:

  • Process Credentials
  • Users and Groups
  • Privilege Model
  • Interprocess Access Control
  • Discretionary Access Control
  • Capsicum Capability Model
  • Jails
  • Mandatory Access-Control Framework
  • Security Event Auditing
  • Cryptographic Services
  • GELI Full-Disk Encryption

There is detailed coverage of the FreeBSD TCB, POSIX.1e and NFSv4 ACLs, OS sandboxing features, the Mandatory Access Control Framework used not just in FreeBSD but also in Junos/Mac OS X/iOS, the FreeBSD kernel’s Yarrow-based pseudo-random number generator, cryptographic protection of both confidentiality and integrity for filesystems, and the kernel’s IPsec implementation. Other new content in this edition of the book includes ZFS, paravirtualised device drivers, DTrace, NFSv4, network-stack virtualisation, and much more.

We will be using this book as one of the core texts for our new masters-level operating-system course at Cambridge, L41, in spring 2015.

The CHERI capability model: Revisiting RISC in an age of risk (ISCA 2014)

Last week, Jonathan Woodruff presented our joint paper on the CHERI memory model, The CHERI capability model: Revisiting RISC in an age of risk, at the 2014 International Symposium on Computer Architecture (ISCA) in Minneapolis (video, slides). This is our first full paper on Capability Hardware Enhanced RISC Instructions (CHERI), collaborative work between Simon Moore’s team and mine (composed of members of the Security, Computer Architecture, and Systems Research Groups at the University of Cambridge Computer Laboratory), Peter G. Neumann’s group at the Computer Science Laboratory at SRI International, and Ben Laurie at Google.

CHERI is an instruction-set extension, prototyped via an FPGA-based soft processor core named BERI, that integrates a capability-system model with a conventional memory-management unit (MMU)-based pipeline. Unlike conventional OS-facing MMU-based protection, the CHERI protection and security models are aimed at compilers and applications. CHERI provides efficient, robust, compiler-driven, hardware-supported, and fine-grained memory protection and software compartmentalisation (sandboxing) within, rather than between, address spaces. We run a version of FreeBSD that has been adapted to support the hardware capability model (CheriBSD), compiled with a CHERI-aware Clang/LLVM that supports C pointer integrity, bounds checking, and capability-based protection and delegation. CheriBSD also supports a higher-level hardware-software security model permitting sandboxing of application components within an address space based on capabilities and a Call/Return mechanism supporting mutual distrust.

The approach draws inspiration from Capsicum, our OS-facing hybrid capability-system model now shipping in FreeBSD and available as a patch for Linux courtesy of Google. We found that capability-system approaches matched extremely well with least-privilege oriented software compartmentalisation, in which programs are broken up into sandboxed components to mitigate the effects of exploited vulnerabilities. CHERI similarly merges research capability-system ideas with a conventional RISC processor design, making accessible the security and robustness benefits of the former while retaining software compatibility with the latter. In the paper, we contrast our approach with a number of others, including Intel’s forthcoming Memory Protection eXtensions (MPX); we pursue a RISC-oriented design instantiated against the 64-bit MIPS ISA, but the ideas should be portable to other RISC ISAs such as ARMv8 and RISC-V.

Our hardware prototype is implemented in Bluespec System Verilog, a high-level hardware description language (HDL) that makes it easier to perform design-space exploration. To facilitate both reproducibility of this work and future hardware-software research, we’ve open sourced the underlying Bluespec Extensible RISC Implementation (BERI), our CHERI extensions, and a complete software stack: operating system, compiler, and so on. In fact, support for the underlying 64-bit RISC platform, which implements a version of the 64-bit MIPS ISA, was upstreamed to FreeBSD 10.0, which shipped earlier this year. Our capability-enhanced versions of FreeBSD (CheriBSD) and Clang/LLVM are distributed via GitHub.

You can learn more about CHERI, BERI, and our larger clean-slate hardware-software agenda on the CTSRD Project Website. There, you will find copies of our prior workshop papers, full Bluespec source code for the FPGA processor design, hardware build instructions for our FPGA-based tablet, downloadable CheriBSD images, software source code, and also our recent technical report, Capability Hardware Enhanced RISC Instructions: CHERI Instruction-Set Architecture, and Jon Woodruff’s PhD dissertation on CHERI.

Jonathan Woodruff, Robert N. M. Watson, David Chisnall, Simon W. Moore, Jonathan Anderson, Brooks Davis, Ben Laurie, Peter G. Neumann, Robert Norton, and Michael Roe. The CHERI capability model: Revisiting RISC in an age of risk, Proceedings of the 41st International Symposium on Computer Architecture (ISCA 2014), Minneapolis, MN, USA, June 14–16, 2014.