Yearly Archives: 2020

Three Paper Thursday: What’s Intel SGX Good For?

Software Guard eXtensions (SGX) represents Intel’s latest foray into trusted computing. Initially intended as a means to secure cloud computation, it has since been employed for DRM and secure key storage in production systems. SGX differs from its competitors such as TrustZone in its focus on reducing the volume of trusted code in its “secure world”. These secure worlds are called enclaves in SGX parlance and are protected from untrusted code by a combination of a memory encryption engine and a set of new CPU instructions to enforce separation.

SGX has been available on almost every Intel processor since the sixth generation of Intel Core (Skylake). It has had a rocky journey since then. Upon its release, concerns were expressed about SGX being used as a medium for malware dissemination, and a proof of concept demonstrating this was soon published. Then there were concerns about the mandatory agreements required to publish enclave code in production: in order to use Intel’s attestation service, a key enabler for any remote application built on SGX, developers have to obtain a commercial license from Intel. This lock-in raised a fair bit of consternation and to date hasn’t been fully addressed by Intel. Then came the vulnerabilities. SGX was found to be vulnerable to cache timing attacks, speculative execution vulnerabilities and Load Value Injection attacks. This has further eroded the credibility of the platform.

That said, there have been a fair few projects that have utilized SGX either to harden existing solutions or to come up with hitherto unseen applications. In this post, we’ll look at three such proposals to see how they utilize SGX despite its concerning attributes. Where applicable, we will also talk about how the discovered vulnerabilities have affected their viability.

Continue reading Three Paper Thursday: What’s Intel SGX Good For?

Three Paper Thursday: Exploring the Impact of Online Crime Victimization

Just as with other types of victimization, victims of cybercrime can experience serious consequences, emotional and otherwise. A repeat victim of a cyber-attack might face serious financial or emotional hardship, and such victims are also more likely to require medical attention as a consequence of online fraud victimization. Repeat victims therefore have a unique set of support needs, including counselling and support from the criminal justice system. There are also cases, such as cyberbullying or sextortion, where victims feel too ashamed to speak to family and friends, and so probably receive no support at all; in such cases trauma can even lead to self-harm. Online victimization, in other words, can lead to physical harm.

As a member of the National Risk Assessment (NRA) Behavioural Science Expert Group in the UK, working on the social and psychological impact of cyber-attacks on members of the public, I have observed for years that the actual social and psychological impact of different types of cyber-attacks, on victims and on society as a whole, remains largely unexplored. Governments have been slow to identify and analyse online events that may negatively affect individuals. In the UK, as in other countries, cybercrime was added to the national risk assessment exercise only a few years ago, so our knowledge of the potential impact of cyber-attacks and their cascading effects is still limited and under active research.

This is often a very difficult area for lawyers and the courts to understand. Understanding victims’ needs, and the responsibilities of the police, the judiciary and other authorities in dealing with such crimes, is very important. This is why we need to explore further how and to what extent the situation and needs of victims of online crimes differ from those of victims of traditional offline crimes. By sharing experiences and openly discussing this issue, we can ingrain a cybersecurity mindset in our societies and thus prevent victimization to some degree.

In this post I would like to introduce three recent papers in this area. The first explores the social and psychological impact of cyber-attacks on individuals as well as nations, the second explores how the situation and needs of online crime victims differ from those of offline crime victims, and the third discusses the relationship between offending and victimization online.

Continue reading Three Paper Thursday: Exploring the Impact of Online Crime Victimization

Three Paper Thursday: Attacking the Bitcoin Peer-to-Peer Network

People have tried to develop many different attack vectors on cryptocurrencies, from codebase flaws, cryptographic algorithms, mining processes, consensus protocols and block propagation mechanisms to the underlying network layer. Most attacks could be patched quickly by modifying the source code, but preventing attacks that exploit the network layer remains a non-trivial problem as the network layer heavily relies on the existing Internet infrastructure, which is impractical to change. So network-layer attacks could be dangerous, powerful and hard to mitigate.
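One network-layer attack that is easy to illustrate is the eclipse attack, where an attacker fills a victim node’s address tables with attacker-controlled IPs so that all of the victim’s outgoing connections end up with the attacker. The toy Monte Carlo sketch below is my own illustration, not code from any of the papers; real Bitcoin peer selection is considerably more complicated than uniform sampling.

```python
import random

def eclipse_probability(attacker_fraction, outgoing_slots=8,
                        table_size=16384, trials=10_000):
    """Toy Monte Carlo estimate of the chance that every outgoing
    connection of a victim node goes to an attacker-controlled address,
    assuming peers are drawn uniformly at random from the address table.
    (Real Bitcoin peer selection is more complicated; this is a sketch.)"""
    attacker_addrs = int(table_size * attacker_fraction)
    eclipsed = 0
    for _ in range(trials):
        picks = random.sample(range(table_size), outgoing_slots)
        if all(p < attacker_addrs for p in picks):
            eclipsed += 1
    return eclipsed / trials

for frac in (0.5, 0.8, 0.9, 0.99):
    print(f"attacker fraction {frac:.2f}: "
          f"eclipse probability ~ {eclipse_probability(frac):.3f}")
```

Even this crude model shows why the attack is about patience rather than cleverness: once the attacker controls most of the address table, a restart of the victim node is very likely to leave it talking only to the attacker.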

In this post, I would like to introduce some recent attacks against the Bitcoin P2P network. Continue reading Three Paper Thursday: Attacking the Bitcoin Peer-to-Peer Network

Three Paper Thursday: Adversarial Machine Learning, Humans and everything in between

Recent advancements in Machine Learning (ML) have taught us two main lessons: a large proportion of things that humans do can actually be automated, and a substantial part of this automation can be done with minimal human supervision. One no longer needs to select features for models to use; in many cases people are moving away from selecting the models themselves and instead perform a Network Architecture Search. This means a non-stop search across billions of dimensions, ever improving different properties of deep neural networks (DNNs).

However, progress in automation has brought a spectre to the feast. Automated systems seem to be very vulnerable to adversarial attacks. Not only is this vulnerability hard to get rid of; worse, we often can’t even define what it means to be vulnerable in the first place.

Furthermore, finding adversarial attacks on ML systems is really easy even if you do not have any access to the models. There are only so many things that make a cat a cat, and all the different models that deal with cats will be looking at the same set of features. This has an important implication: learning how to trick one model dealing with cats often transfers over to other models. Transferability is a terrible property for security because it makes adversarial ML attacks cheap and scalable. If there is a camera in the bank running a similar ML model to the camera you can get in Costco for $5, then the cost of developing an attack is $5.
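To make transferability concrete, here is a self-contained numpy sketch, my own toy example rather than anything from the papers below: we train two independent logistic-regression “cat detectors” on disjoint halves of some synthetic data, craft an FGSM-style perturbation using only the first model, and check whether it also fools the second.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data in 50 dimensions ("cat" vs "not cat").
n, d = 2000, 50
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = (X @ w_true + 0.5 * rng.normal(size=n) > 0).astype(float)

def train_logreg(X, y, epochs=200, lr=0.1):
    """Plain gradient-descent logistic regression."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

# Two independently trained models on disjoint halves of the data.
w1 = train_logreg(X[:1000], y[:1000])
w2 = train_logreg(X[1000:], y[1000:])

def accuracy(w, X, y):
    return np.mean(((X @ w) > 0).astype(float) == y)

# FGSM-style attack computed from model 1 only: step each input in the
# sign of the loss gradient, which pushes it across model 1's boundary.
eps = 0.5
X_adv = X - eps * np.sign(np.outer(2 * y - 1, w1))

print("model 1 clean/adv accuracy:", accuracy(w1, X, y), accuracy(w1, X_adv, y))
print("model 2 clean/adv accuracy:", accuracy(w2, X, y), accuracy(w2, X_adv, y))
# Model 2's accuracy typically also collapses even though the
# perturbation never saw model 2 -- that is transferability.
```

Both models learn roughly the same decision boundary because they see the same underlying features, so a perturbation aimed at one lands squarely on the other too.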

As of now, we do not really have good answers to any of these problems. In the meantime, ML-controlled systems are entering the human realm.

In this Three Paper Thursday I want to talk about works from the field of adversarial ML that make it much more understandable.

Continue reading Three Paper Thursday: Adversarial Machine Learning, Humans and everything in between

Contact Tracing in the Real World

There have recently been several proposals for pseudonymous contact tracing, including from Apple and Google. To both cryptographers and privacy advocates, this might seem the obvious way to protect public health and privacy at the same time. Meanwhile other cryptographers have been pointing out some of the flaws.

There are also real systems being built by governments. Singapore has already deployed and open-sourced one that uses contact tracing based on bluetooth beacons. Most of the academic and tech industry proposals follow this strategy, as the “obvious” way to tell who’s been within a few metres of you and for how long. The UK’s National Health Service is working on one too, and I’m one of a group of people being consulted on the privacy and security.
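The basic idea behind these beacon schemes is simple enough to sketch. The fragment below is a heavily simplified illustration of a rotating pseudonymous identifier; it is not TraceTogether’s or Apple/Google’s actual key schedule, and the interval lengths and derivation labels are my own placeholders. A phone derives short-lived beacon IDs from a daily secret, broadcasts them, and can later release the daily secret so other phones can check for matches locally.

```python
import hmac, hashlib, os

def daily_key() -> bytes:
    """A fresh random secret per day (simplified)."""
    return os.urandom(32)

def beacon_id(day_key: bytes, interval: int) -> bytes:
    """Short-lived pseudonymous ID broadcast over Bluetooth for one
    time interval (say, 15 minutes). Derived with HMAC so IDs from
    different intervals are unlinkable without the daily key."""
    msg = b"beacon-id" + interval.to_bytes(4, "big")
    return hmac.new(day_key, msg, hashlib.sha256).digest()[:16]

# Phone A broadcasts during the day; phone B records what it hears.
key_a = daily_key()
heard_by_b = {beacon_id(key_a, i) for i in (12, 13, 14)}   # three intervals near A

# Later, A is diagnosed and releases key_a; B recomputes all possible
# IDs for that day and checks for intersections locally.
candidates = {beacon_id(key_a, i) for i in range(96)}       # 96 x 15-minute slots
print("exposure detected:", bool(heard_by_b & candidates))
```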

But contact tracing in the real world is not quite as many of the academic and industry proposals assume.

First, it isn’t anonymous. Covid-19 is a notifiable disease so a doctor who diagnoses you must inform the public health authorities, and if they have the bandwidth they call you and ask who you’ve been in contact with. They then call your contacts in turn. It’s not about consent or anonymity, so much as being persuasive and having a good bedside manner.

I’m relaxed about doing all this under emergency public-health powers, since this will make it harder for intrusive systems to persist after the pandemic than if they have some privacy theater that can be used to argue that the whizzy new medi-panopticon is legal enough to be kept running.

Second, contact tracers have access to all sorts of other data such as public transport ticketing and credit-card records. This is how a contact tracer in Singapore is able to phone you and tell you that the taxi driver who took you yesterday from Orchard Road to Raffles has reported sick, so please put on a mask right now and go straight home. This must be controlled; Taiwan lets public-health staff access such material in emergencies only.

Third, you can’t wait for diagnoses. In the UK, you only get a test if you’re a VIP or if you get admitted to hospital. Even so the results take 1–3 days to come back. While the VIPs share their status on twitter or facebook, the other diagnosed patients are often too sick to operate their phones.

Fourth, the public health authorities need geographical data for purposes other than contact tracing – such as to tell the army where to build more field hospitals, and to plan shipments of scarce personal protective equipment. There are already apps that do symptom tracking but more would be better. So the UK app will ask for the first three characters of your postcode, which is about enough to locate which hospital you’d end up in.

Fifth, although the cryptographers – and now Google and Apple – are discussing more anonymous variants of the Singapore app, that’s not the problem. Anyone who’s worked on abuse will instantly realise that a voluntary app operated by anonymous actors is wide open to trolling. The performance art people will tie a phone to a dog and let it run around the park; the Russians will use the app to run service-denial attacks and spread panic; and little Johnny will self-report symptoms to get the whole school sent home.

Sixth, there’s the human aspect. On Friday, when I was coming back from walking the dogs, I stopped to chat for ten minutes to a neighbour. She stood halfway between her gate and her front door, so we were about 3 metres apart, and the wind was blowing from the side. The risk that either of us would infect the other was negligible. If we’d been carrying bluetooth apps, we’d have been flagged as mutual contacts. It would be quite intolerable for the government to prohibit such social interactions, or to deploy technology that would punish them via false alarms. And how will things work with an orderly supermarket queue, where law-abiding people stand patiently six feet apart?

Bluetooth also goes through plasterboard. If undergraduates return to Cambridge in October, I assume there will still be small-group teaching, but with protocols for distancing, self-isolation and quarantine. A supervisor might sit in a teaching room with two or three students, all more than 2m apart and maybe wearing masks, and the window open. The bluetooth app will flag up not just the others in the room but people in the next room too.
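To see how crude the underlying proximity estimate is, here is a toy sketch using an idealised log-distance path-loss model; the numbers are illustrative only and real indoor propagation is far messier. An app that only sees received signal strength cannot tell a close contact through a wall from a distant contact in the open.

```python
import math

def rssi_at(distance_m, tx_ref_dbm=-60, path_loss_exp=2.0, wall_loss_db=0.0):
    """Idealised log-distance path-loss model: received signal strength
    at a given distance, minus any attenuation from walls.
    (Illustrative numbers only.)"""
    return tx_ref_dbm - 10 * path_loss_exp * math.log10(distance_m) - wall_loss_db

def naive_distance(rssi, tx_ref_dbm=-60, path_loss_exp=2.0):
    """What an app that only sees RSSI would infer, assuming free space."""
    return 10 ** ((tx_ref_dbm - rssi) / (10 * path_loss_exp))

# Two people 1.5 m apart through a plasterboard wall (a few dB of loss)
# can look much like two people 3-4 m apart in the open.
for d, wall in [(1.5, 7.0), (3.5, 0.0)]:
    r = rssi_at(d, wall_loss_db=wall)
    print(f"true {d} m, wall loss {wall} dB -> RSSI {r:.1f} dBm, "
          f"app thinks {naive_distance(r):.1f} m")
```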

How is this to be dealt with? I expect the app developers will have to fit a user interface saying “You’re within range of device 38a5f01e20. Within infection range (y/n)?” But what happens when people get an avalanche of false alarms? They learn to click them away. A better design might be to invite people to add a nickname and a photo so that contacts could see who they are. “You are near to Ross [photo] and have been for five minutes. Are you maintaining physical distance?”

When I discussed this with a family member, the immediate reaction was that she’d refuse to run an anonymous app that might suddenly say “someone you’ve been near in the past four days has reported symptoms, so you must now self-isolate for 14 days.” A call from a public health officer is one thing, but not knowing who it was would just creep her out. It’s important to get the reactions of real people, not just geeks and wonks! And the experience of South Korea and Taiwan suggests that transparency is the key to public acceptance.

Seventh, on the systems front, decentralised systems are all very nice in theory but are a complete pain in practice as they’re too hard to update. We’re still using Internet infrastructure from 30 years ago (BGP, DNS, SMTP…) because it’s just too hard to change. Watch Moxie Marlinspike’s talk at 36C3 if you don’t get this. Relying on cryptography tends to make things even more complex, fragile and hard to change. In the pandemic, the public health folks may have to tweak all sorts of parameters weekly or even daily. You can’t do that with apps on 169 different types of phone and with peer-to-peer communications.

Personally I feel conflicted. I recognise the overwhelming force of the public-health arguments for a centralised system, but I also have 25 years’ experience of the NHS being incompetent at developing systems and repeatedly breaking their privacy promises when they do manage to collect some data of value to somebody else. The Google Deepmind scandal was just the latest of many and by no means the worst. This is why I’m really uneasy about collecting lots of lightly-anonymised data in a system that becomes integrated into a whole-of-government response to the pandemic. We might never get rid of it.

But the real killer is likely to be the interaction between privacy and economics. If the app’s voluntary, nobody has an incentive to use it, except tinkerers and people who religiously comply with whatever the government asks. If uptake remains at 10-15%, as in Singapore, it won’t be much use and we’ll need to hire more contact tracers instead. Apps that involve compulsion, such as those for quarantine geofencing, will face a more adversarial threat model; and the same will be true in spades for any electronic immunity certificate. There the incentive to cheat will be extreme, and we might be better off with paper serology test certificates, like the yellow fever vaccination certificates you needed for the tropics, back in the good old days when you could actually go there.

All that said, I suspect the tracing apps are really just do-something-itis. Most countries now seem past the point where contact tracing is a high priority; even Singapore has had to go into lockdown. If it becomes a priority during the second wave, we will need a lot more contact tracers: last week, 999 calls in Cambridge had a 40-minute wait and it took ambulances six hours to arrive. We cannot field an app that will cause more worried well people to phone 999.

The real trade-off between surveillance and public health is this. For years, a pandemic has been at the top of Britain’s risk register, yet far less was spent preparing for one than on anti-terrorist measures, many of which were ostentatious rather than effective. Worse, the rhetoric of terror puffed up the security agencies at the expense of public health, predisposing the US and UK governments to disregard the lesson of SARS in 2003 and MERS in 2015 — unlike the governments of China, Singapore, Taiwan and South Korea, who paid at least some attention. What we need is a radical redistribution of resources from the surveillance-industrial complex to public health.

Our effort should go into expanding testing, making ventilators, retraining everyone with a clinical background from vet nurses to physiotherapists to use them, and building field hospitals. We must call out bullshit when we see it, and must not give policymakers the false hope that techno-magic might let them avoid the hard decisions. Otherwise we can serve best by keeping out of the way. The response should not be driven by cryptographers but by epidemiologists, and we should learn what we can from the countries that have managed best so far, such as South Korea and Taiwan.

Three Paper Thursday: The role of intermediaries, platforms, and infrastructures in governing crime and abuse

The platforms, providers, and infrastructures which together make up the contemporary Internet play an increasingly central role in the business of governing human societies. Although the software engineers, administrators, business professionals, and other staff working at these organisations may not have the institutional powers of state organisations such as law enforcement or the civil service, they are now in a powerful position of responsibility for the harms and illegal activities which their platforms facilitate. For this Three Paper Thursday, I’ve chosen to highlight papers which address these issues, and which explore the complex networks of different infrastructural actors and perspectives which play a role in the reporting, handling, and defining of abuse and crime online.

Continue reading Three Paper Thursday: The role of intermediaries, platforms, and infrastructures in governing crime and abuse

Three Paper Thursday: Sanitisers and Mitigators

In this reboot of the Three Paper Thursdays, back after a hiatus of almost eight years, I consider the many different ways in which programs can be sanitised to detect, or mitigated to prevent the use of, the many programmer errors that can introduce security vulnerabilities in low-level languages such as C and C++. We first look at a new binary translation technique, before covering the many compiler techniques in the literature, and finally finishing off with my own hardware analysis architecture.

Continue reading Three Paper Thursday: Sanitisers and Mitigators

FC 2020

I’m at Financial Cryptography 2020 and will try to liveblog some of the talks in followups to this post.

The keynote was given by Allison Nixon, Chief Research Officer of Unit221B, on “Fraudsters Taught Us that Identity is Broken”.

Allison started by showing the Mitchell and Webb clip. In a world where even Jack Dorsey got his twitter hacked via incoming SMS, what is identity? Your thief becomes you. Abuse of old-fashioned passports was rare as they were protected by law; now they’re your email address (which you got by lying to an ad-driven website) and phone number (which gets taken away and given to a random person if you don’t pay your bill). If you’re lucky you might have a signing key (generated on a general purpose computer, and hard to revoke – that’s what bitcoin theft is often about). The whole underlying system is wrong. Email domains, like phone numbers, lapse if you forget to pay your bill; fraudsters actively look for custom domains and check if yours has lapsed, while relying parties mostly don’t. Privacy regulations in most countries prevent you from looking up names from phone numbers; many have phone numbers owned by their employers. Your email address can be frozen or removed because of spam if you’re bad or are hacked, while even felons are not deprived of their names. Evolution is not an intelligent process! People audit password length but rarely the password reset policy: many use zero-factor auth, meaning information that’s sort-of public like your SSN. On Twitter you reset your password then message customer support asking them to remove two-factor, and they do, so long as you can log on! This is a business necessity as too many people lose their phone or second factor, so this customer-support backdoor will never be properly closed. Many bitcoin exchanges have no probation period, whether mandatory or customer option. SIM swap means account theft so long as a phone number enables password reset – she also calls this zero-factor authentication.

SIM swap is targeted, unlike most password-stuffing attacks, and compromises people who comply with all the security rules. Allison tried hard to protect herself against this fraud but mostly couldn’t as the phone carrier is the target. This can involve data breaches at the carrier, insider involvement and the customer service back door. Email domain abuse is similar; domain registrars are hacked or taken over. Again, the assumptions made about the underlying infrastructure are wrong. Your email can be reset by your phone number and vice versa. Your private key can be stolen via your cloud backups. Both identity vendors and verifiers rely on unvetted third parties; vendors can’t notify verifiers of a hack. The system failure is highlighted by the existence of criminal markets in identity.

There are unrealistic expectations too. As a user of a general-purpose computer, you have no way to determine whether your machine is suitable for storing private keys, and almost 100% of people are unable to comply with security advice. That tells you it’s the system that’s broken. It’s a blame game, and security advice is as much cargo cult as anything else.

What would a better identity system look like? There would be an end to ever-changing advice; you’d be notified if your information got stolen, just as you know if your physical driving license is stolen; there would be an end to unreasonable expectations of both humans and computers; the legal owner of the identity would be the person identified, and the identity would be non-transferable and irrevocable; it would not depend on the integrity of third-party systems like DNS and CAs and patch management mechanisms; and we’ll know we’re there once the criminal marketplace vanishes.

Questions: What might we do about certificate revocation? A probation period is the next thing to do, as how people learn of a SIM swap is a flood of password reset messages in email, and then it’s a race. I asked whether rather than fixing the whole world, we should fix it one relying party at a time? Banks give you physical tokens after all, as they’re regulated and have to eat the losses. Allison agreed; in 2019 she talked about SIM swap to many banks but had no interest from any crypto exchange. Curiously, the lawsuits tend to target carriers rather than the exchanges. What about SS7? There are sophisticated Russian criminal gangs doing such attacks, but they require a privileged position in the network, like BGP attacks. What about single signon? The market is currently in flux and might eventually settle on a few vendors. What about SMS spoofing attacks? Allison hasn’t seen them in 4g marketplaces or in widespread criminal use. Caller-ID spoofing is definitely used, by bad guys who organise SWATting. Should we enforce authentication tokens? The customer service department will be inundated with people who have lost theirs and that will become the backdoor. Would blockchains help? No, they’re just an audit log, and the failures are upstream. The social aspect is crucial: people know how to protect their physical cash in their wallet, and a proper solution to the identity problem must work like that. It’s not an impossible task, and might involve a chip in your driver’s license. It’s mostly about getting the execution right.
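Allison’s “probation period” point is concrete enough to sketch. The fragment below is a hypothetical illustration of my own, not any exchange’s actual policy: high-risk actions are held for a fixed period after the recovery phone number changes, giving the legitimate owner time to notice the flood of password-reset messages and raise the alarm.

```python
from datetime import datetime, timedelta

PROBATION = timedelta(days=7)   # illustrative length only

class Account:
    def __init__(self):
        self.phone_changed_at = None

    def change_recovery_phone(self, now: datetime):
        # Record when the recovery number last changed; in practice the
        # old number and email would also be notified so the real owner
        # has a chance to object.
        self.phone_changed_at = now

    def allow_high_risk_action(self, now: datetime) -> bool:
        """Withdrawals, 2FA removal, etc. are blocked during probation."""
        if self.phone_changed_at is None:
            return True
        return now - self.phone_changed_at >= PROBATION

acct = Account()
t0 = datetime(2020, 3, 1)
acct.change_recovery_phone(t0)
print(acct.allow_high_risk_action(t0 + timedelta(days=1)))   # False: still in probation
print(acct.allow_high_risk_action(t0 + timedelta(days=10)))  # True
```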

Identifying Unintended Harms of Cybersecurity Countermeasures

In this paper (winner of the eCrime 2019 Best Paper award), we consider the types of things that can go wrong when you intend to make things better and more secure. Consider this scenario. You are browsing the Internet and see a news headline about one of the presidential candidates. You are unsure whether the headline is true, so you navigate to a fact-checking website and type it in. Some platforms also have fact-checking bots that post periodic updates on false information. You check the headline on three fact-checking websites and the results consistently show that the story contains false information. You share the results as a comment on the news article. Within two hours, you receive hundreds of notifications with comments countering your sources with other fact-checking websites.

Such a scenario is increasingly common as we rely on the Internet and social media platforms for information and news. Although they are meant to increase security, these cybersecurity countermeasures can cause confusion and frustration among users, as they add extra actions to users’ daily online routines. As seen above, fact-checking can easily be used as a mechanism for attacks and for demonstrating in-group/out-group distinctions, which can contribute further to group polarisation and fragmentation. We identify these negative effects as unintended consequences, and define them as shifts in expected burden and/or effort onto a group.

To understand unintended harms, we begin with five scenarios of cyber aggression and deception. We identify common countermeasures for each scenario, and brainstorm potential unintended harms for each countermeasure. The unintended harms are inductively organised into seven categories: 1) displacement, 2) insecure norms, 3) additional costs, 4) misuse, 5) misclassification, 6) amplification and 7) disruption. Applying this framework to the scenario above, insecure norms, misuse and amplification are all unintended consequences of fact-checking. Fact-checking can foster a sense of complacency in which checked news is automatically seen as true. In addition, fact-checking can be used as a tool for attacking groups with different political views. Such misuse facilitates amplification, as fact-checking is used to strengthen in-group status and therefore further exacerbates group polarisation and fragmentation.

To allow practitioners and stakeholders to apply the framework systematically to existing or new cybersecurity measures, we expand the categories into a functional framework by developing prompts for each harm category. During this process, we identified an underlying need to consider vulnerable groups: practitioners and stakeholders need to take into account the impact of countermeasures on at-risk groups, as well as the possible creation of new vulnerable groups as a result of deploying a countermeasure. Vulnerable groups are user groups who may suffer while others are unaffected by, or prosper from, the countermeasure. One example is older adults, whose unfamiliarity with and less frequent use of technology mean they are often forgotten or hidden when risks and/or countermeasures within a system are assessed.
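To show how such a framework might be operationalised, here is a hypothetical sketch; the prompt wording below is mine, not the paper’s, and the sample assessments are invented. A proposed countermeasure is walked through the seven harm categories plus a vulnerable-groups check.

```python
# Hypothetical prompts for each harm category; the framework's actual
# prompts are in the paper and differ in wording.
HARM_PROMPTS = {
    "displacement":      "Could the harmful activity move elsewhere (other platforms, groups, methods)?",
    "insecure norms":    "Could users become complacent or over-reliant on the countermeasure?",
    "additional costs":  "Who bears extra effort, money or time because of the countermeasure?",
    "misuse":            "Could the countermeasure itself be weaponised against other users?",
    "misclassification": "Who might be wrongly flagged, blocked or excluded?",
    "amplification":     "Could the countermeasure intensify the conflict or polarisation it targets?",
    "disruption":        "Could legitimate activity or existing protections be disrupted?",
}

def review(countermeasure: str, notes: dict) -> None:
    """Print a review sheet for one countermeasure."""
    print(f"Countermeasure: {countermeasure}")
    for category, prompt in HARM_PROMPTS.items():
        print(f"- {category}: {prompt}")
        print(f"    assessment: {notes.get(category, 'not yet considered')}")
    print(f"- vulnerable groups: {notes.get('vulnerable groups', 'not yet considered')}")

review("fact-checking labels on news posts", {
    "insecure norms": "users may treat any 'checked' label as ground truth",
    "misuse": "labels quoted selectively to attack out-groups",
    "amplification": "checking wars deepen polarisation",
    "vulnerable groups": "older adults less familiar with how labels are produced",
})
```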

It is important to note that the framework does not propose measurements of the severity or likelihood of unintended harms occurring. Rather, its emphasis is on raising stakeholders’ and practitioners’ awareness of possible unintended consequences. We envision this framework as a common-ground tool for stakeholders, particularly for coordinating approaches in complex, multi-party services and/or technology ecosystems. We would like to extend a special thank you to Schloss Dagstuhl and the organisers of Seminar #19302 (Cybersafety Threats – from Deception to Aggression), which brought all of the authors together and laid out the core ideas in this paper. A complementary blog post by co-author Dr Simon Parkin can be found at UCL’s Bentham’s Gaze blog. The accepted manuscript for this paper is available here.

From Playing Games to Committing Crimes: A Multi-Technique Approach to Predicting Key Actors on an Online Gaming Forum

I recently travelled to Pittsburgh, USA, to present the paper “From Playing Games to Committing Crimes: A Multi-Technique Approach to Predicting Key Actors on an Online Gaming Forum” at eCrime 2019, co-authored with Ben Collier and Alice Hutchings. The accepted version of the paper can be accessed here.

The structure and content of various underground forums have been studied in the literature, from threat detection to the classification of marketplace advertisements. These platforms can provide a mechanism for knowledge sharing and a marketplace between cybercriminals and other members.

However, gaming-related activity on underground hacking forums has been largely unexplored. Meanwhile, UK law enforcement believe there is a potential link between playing online games and committing cybercrime—a possible cybercrime pathway. A small-scale study by the NCA found that users looking for gaming cheats on these types of forums can end up interacting with users involved in cybercrime, potentially leading to a first offence followed by escalating levels of offending. There has also been interest from UK law enforcement in exploring intervention activity which aims to deter gamers from becoming involved in cybercrime activity.

We begin to explore this by presenting a data processing pipeline framework, used to identify potential key actors on a gaming-specific forum, using predictive and clustering methods on an initial set of key actors. We adapt open-source tools created for use in analysis of an underground hacking forum and apply them to this forum. In addition, we add NLP features, machine learning models, and use group-based trajectory modelling.
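As a rough illustration of what one stage of such a pipeline can look like, here is a hedged scikit-learn sketch; the paper’s actual feature set, models and forum data are far richer, and the posts and labels below are invented. It turns members’ post histories into TF-IDF features and trains a classifier on a small labelled seed set of known key actors.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-in data: one concatenated post history per forum member,
# with labels for a small seed set of known key actors (1) and others (0).
posts = [
    "selling cheat loader private slots dm me",
    "anyone know a good aimbot for this game",
    "free combo list and cracking tutorial inside",
    "how do I fix my graphics driver",
    "looking for coop partners this weekend",
    "booter service cheap stress testing your own network only",
]
labels = [1, 0, 1, 0, 0, 1]

# TF-IDF text features plus a linear classifier; the real pipeline also
# uses activity features, several other models, and trajectory information.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

new_member = ["wtb private cheat slot, can pay in crypto"]
print("predicted key-actor probability:",
      model.predict_proba(new_member)[0][1])
```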

From this, we can begin to characterise key actors, both by looking at the distributions of predictions and by inspecting each of the models used. Social network analysis, built using author-replier relationships, shows that key actors and predicted key actors are well connected, and group-based trajectory modelling highlights that a much higher proportion of key actors are contained in both a high-frequency super-engager trajectory in the gaming category and a high-frequency super-engager posting trajectory in the general category.
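The social-network side can also be sketched. Below is a toy networkx example of my own, not the paper’s data: it builds an author-replier graph and ranks members by degree centrality, one simple way to see how well connected predicted key actors are.

```python
import networkx as nx

# Toy author-replier edges: (author of a post, member who replied).
replies = [
    ("alice", "bob"), ("alice", "carol"), ("alice", "dave"),
    ("bob", "alice"), ("carol", "alice"), ("eve", "dave"),
    ("dave", "alice"), ("carol", "bob"),
]

G = nx.DiGraph()
G.add_edges_from(replies)

# Degree centrality as a crude measure of how connected each member is;
# the paper's social network analysis goes well beyond this.
centrality = nx.degree_centrality(G)
for member, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{member:6s} centrality {score:.2f}")
```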

This work provides an initial look into a perceived link between playing online games and committing cybercrime by analysing an underground forum focused on cheats for games.