All posts by Ross Anderson

SHB Seminar

The SHB seminar on November 5th was kicked off by Tom Holt, who’s discovered a robust underground market in identity documents that are counterfeit or fraudulently obtained. He’s been scraping both open websites and darkweb sites for data and analysing how people go about finding, procuring and using such credentials. Most vendors were single-person operators, although many operated within affiliate programs; many transactions involved cryptocurrency; and many involved generating pdfs that people can print at home, good enough to let young people buy alcohol. Curiously, open-web products seem to cost twice as much as dark-web products.

Next was Jack Hughes, who has been studying the contract system introduced by hackforums in 2018 and made mandatory the following year. This enabled him to analyse crime forum behaviour before and during the covid-19 era. How do new users become active, and build up trust? How does it evolve? He collected 200,000 transactions and analysed them. The contract mandate stifled growth quickly, leading to a first peak; covid caused a second. The market was already centralised, and became more so with the pandemic. However contracts are getting done faster, and the main activity is currency exchange: it seems to be working as a cash-out market.

Anita Lavorgna has been studying the discourse of groups who oppose public mask mandates. Like the antivaxx movement, this can draw in fringe groups and become a public-health issue. She collected 23,654 tweets from February to June 2020. There’s a diverse range of voices from different places on the political spectrum, but with a transversal theme of freedom from government interference. Groups seek strength in numbers and seek to ally into movements, leading to the mask becoming a symbol of political identity construction. Anita found very little interaction between the different groups: only 144 messages in total.

Simon Parkin has been working on how we can push back on bad behaviours online when they are entangled with good behaviours that we wish to promote. Precision is hard, as many of the desirable behaviours are not explicitly recognised as such, and many behaviours arise from a combination of personal incentives and context. The best way forward is usability engineering – making the desired behaviours easier.

Bruce Schneier was the final initial speaker, and his topic was covid apps. The initial rush of apps that arrived from March through June had known issues around false positives and false negatives. We’ve also used all sorts of other tools, such as analysis of Google Maps data to measure lockdown compliance. The third thing is the idea of an immunity passport, saying you’ve had the disease, or a vaccine; that will have the same issues as the fake IDs that Tom talked about. Finally, there’s compliance tracking, where your phone monitors you. The usual countermeasures apply – consent, minimisation, infosec and so on – though the trade-offs might be different for a while. A further bunch of issues concern home working and the larger attack surface that many firms have as a result of unfamiliar tools, less resistance to being told to do things, and so on.

The discussion started on fake ID. Tom hasn’t yet done test purchases, and might look in future at fraudulently obtained documents, as opposed to completely counterfeit ones. Is hackforums helping drug gangs turn paper into coin? This is not clear; the activity seems to be more about cashing out cybercrime than street crime.

There followed discussion by Anita of how to analyse corpora of tweets, and the implications for policy in real life. Things are made more difficult by the fact that discussions drift off into other platforms we don’t monitor. Another topic was the interaction with fashion: where some people wear masks or not as a political statement, many more buy masks that get across a more targeted statement. Fashion is really powerful, and tends to be overlooked by people in our field. Usability research perhaps focuses too much on utilitarian economics, and is a bit of a blunt instrument.

Another example related to covid is the growing push for monitoring software on employees’ home computers. Unfortunately Uber and Lyft bought a referendum result that enables them not to treat their staff in California as employees, so the regulation of working hours at home will probably fall to the EU. Can we perhaps make some input into what that should look like? Another issue with the pandemic is the effect on information-security markets: why should people buy corporate firewalls when their staff are all over the place? And to what extent will some of these changes be permanent, if people work from home more?

Another thread of discussion was how the privacy properties of covid apps make it hard for people to make risk-management decisions. The apps appear ineffective because they were designed to do privacy rather than public health, in various subtle ways; giving people low-grade warnings which do not require any action appears to be an attempt to raise public awareness, like mask mandates, rather than an effective attempt to get exposed individuals to isolate.
Apps that check people into venues have their own issues and appear to be largely security theatre. Security theatre comes into its own where the perceived risk is much greater than the actual risk; covid is the opposite. What can be done in this case? Targeted warnings? Humour? What might happen when fatigue sets in? People will compromise compliance to make their lives bearable. That can be managed to some extent in institutions like universities, but in society it will be harder. We ended up with the suggestion that the next SHB seminar should be in February, which should be the low point; after that we can look forward to things getting better, and hopefully to a meeting in person in Cambridge on June 3-4 2021.

Our new “Freedom of Speech” policy

Our beloved Vice-Chancellor proposes a “free speech” policy under which all academics must treat other academics with “respect”. This is no doubt meant well, but the drafting is surprisingly vague and authoritarian for a university where the VC, the senior pro-VC, the HR pro-VC and the Registrary are all lawyers. The bottom line is that in future we might face disciplinary charges and even dismissal for mockery of ideas and individuals with which we disagree.

The policy was slipped out in March, when nobody was paying attention. There was a Discussion in June, at which my colleague Arif Ahmad spelled out the problems.

Vigorous debate is intrinsic to academia and it should be civil, but it is unreasonable to expect people to treat all opposing views with respect. Oxford’s policy spells this out. At the Discussion, Arif pointed out that “respect” must be changed to “tolerance” if we are to uphold the liberal culture that we have not just embraced but developed over several centuries.

At its first meeting this term, the University Council considered these arguments but decided to press ahead anyway. We are therefore calling a ballot on three amendments to the policy. If you’re a senior member of the University, we invite you to add your support by signing the flysheets. The first amendment changes “respect” to “tolerance”; the second makes it harder to force university societies to disinvite speakers whose remarks may be controversial; and the third restricts the circumstances in which the university itself can ban speakers.

Liberalism is coming under attack from authoritarians of both left and right, yet it is the foundation on which modern academic life is built and our own university has contributed more than any other to its development over the past 811 years. If academics can face discipline for using tactics such as scorn, ridicule and irony to criticise folly, how does that sit with having such alumni as John Maynard Keynes and Charles Darwin, not to mention Bertrand Russell, Douglas Adams and Salman Rushdie?

Three Paper Thursday: Broken Hearts and Empty Wallets

This is a guest post by Cassandra Cross.

Romance fraud (also known as romance scams or sweetheart swindles) affects millions of individuals globally each year. In 2019, the USA’s Internet Crime Complaint Center (IC3) received reports of over US$475 million lost to romance fraud. Similarly, Australian victims reported losing over AU$80 million, and British citizens reported over £50 million lost in 2018. Given the known under-reporting of fraud overall, and online fraud more specifically, these figures likely represent only a fraction of actual losses incurred.

Romance fraud occurs when an offender uses the guise of a legitimate relationship to gain a financial advantage from their victim. It differs from a bad relationship, in that from the outset, the offender is using lies and deception to obtain monetary rewards from their partner. Romance fraud capitalises on the fact that a potential victim is looking to establish a relationship and exhibits an express desire to connect with someone. Offenders use this to initiate a connection and start to build strong levels of trust and rapport.

As with all fraud, victims experience a wide range of impacts in the aftermath of victimisation. While many believe these to be only financial, in reality they extend to a decline in both physical and emotional wellbeing, relationship breakdown, unemployment, homelessness and, in extreme cases, suicide. In the case of romance fraud, there is the additional trauma of grieving both the loss of the relationship and the loss of any funds transferred. For many victims, the loss of the relationship can be harder to cope with than the monetary aspect, with victims experiencing a profound sense of betrayal and violation at the hands of their offender.

Sadly, there is also a large amount of victim blaming that exists with both romance fraud and fraud in general. Fraud is unique in that victims actively participate in the offence, through the transfer of money, albeit under false pretences. As a result, they are seen to be culpable for what occurs and are often blamed for their own circumstances. The stereotype of fraud victims as greedy, gullible and naïve persists, and presents as a barrier to disclosure as well as inhibiting their ability to report the incident and access any support services.

Given the magnitude of losses and impacts on romance fraud victims, there is an emerging body of scholarship that seeks to better understand the ways in which offenders are able to successfully target victims, the ways in which they are able to perpetrate their offences, and the impacts of victimisation on the individuals themselves. The following three articles each explore different aspects of romance fraud, to gain a more holistic understanding of this crime type.

Continue reading Three Paper Thursday: Broken Hearts and Empty Wallets

Security and Human Behaviour 2020

I’ll be liveblogging the workshop on security and human behaviour, which is online this year. My liveblogs will appear as followups to this post. This year my program co-chair is Alice Hutchings and we have invited a number of eminent criminologists to join us. Edited to add: here are the videos of the sessions.

How to jam neural networks

Deep neural networks (DNNs) have been a very active field of research for eight years now, and for the last five we’ve seen a steady stream of adversarial examples – inputs that will bamboozle a DNN so that it thinks a 30mph speed limit sign is a 60 instead, and even magic spectacles to make a DNN get the wearer’s gender wrong.

So far, these attacks have targeted the integrity or confidentiality of machine-learning systems. Can we do anything about availability?

Sponge Examples: Energy-Latency Attacks on Neural Networks shows how to find adversarial examples that cause a DNN to burn more energy, take more time, or both. They affect a wide range of DNN applications, from image recognition to natural language processing (NLP). Adversaries might use these examples for all sorts of mischief – from draining mobile phone batteries, through degrading the machine-vision systems on which self-driving cars rely, to jamming cognitive radar.

So far, our most spectacular results are against NLP systems. By feeding them confusing inputs, we can slow them down by a factor of over 100. There are already real-world examples of people pausing or stumbling when asked hard questions, but we now have a dependable method for generating such examples automatically and at scale. We can also neutralise the performance improvements of accelerators for computer-vision tasks, making them operate at their worst-case performance.
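To give a feel for the idea, here is a toy sketch – not the paper’s actual method, which uses genetic algorithms against real networks. It assumes a crude energy proxy: on hardware that skips zero-valued activations, denser activations mean more arithmetic, so more energy and latency. A simple hill-climbing search then looks for inputs that light up as many ReLU units as possible in a small hand-rolled network.

```python
import random

# Toy two-layer ReLU network with fixed random weights.
random.seed(0)
DIM = 32
W1 = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(DIM)]
W2 = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(DIM)]

def relu(v):
    return [x if x > 0 else 0.0 for x in v]

def matvec(W, v):
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def active_units(x):
    """Energy proxy: count of non-zero activations. Hardware that
    skips zeros does less work on sparse activations, so denser
    activations cost more energy and time."""
    h1 = relu(matvec(W1, x))
    h2 = relu(matvec(W2, h1))
    return sum(1 for a in h1 + h2 if a > 0)

def sponge_search(start, iters=300):
    """Black-box mutation search for an input that maximises the
    activation count -- a crude stand-in for the paper's
    genetic-algorithm search for sponge examples."""
    best, best_cost = start, active_units(start)
    for _ in range(iters):
        cand = [x + random.gauss(0, 0.3) for x in best]
        c = active_units(cand)
        if c > best_cost:
            best, best_cost = cand, c
    return best, best_cost

start = [random.uniform(-1, 1) for _ in range(DIM)]
baseline = active_units(start)          # cost of a benign random input
_, sponge_cost = sponge_search(start)   # cost of the sponge input
print(baseline, sponge_cost)            # sponge activates at least as many units
```

The same search principle scales up to real models, where the cost function is measured energy or wall-clock latency rather than an activation count.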

One implication is that engineers designing real-time systems that use machine learning will have to pay more attention to worst-case behaviour; another is that where custom chips accelerate neural-network computations using optimisations that widen the gap between worst-case and average-case performance, even more attention will be needed.

Is science being set up to take the blame?

Yesterday’s publication of the minutes of the government’s Scientific Advisory Group for Emergencies (SAGE) raises some interesting questions. An initial summary in yesterday’s Guardian has a timeline suggesting that it was the distinguished medics on SAGE rather than the Prime Minister who went from complacency in January and February to panic in March, and who ignored the risk to care homes until it was too late.

Is this a Machiavellian conspiracy by Dominic Cummings to blame the scientists, or is it business as usual? Having spent a dozen years on the university’s governing body and various of its subcommittees, I can absolutely get how this happened. Once a committee gets going, it can become very reluctant to change its opinion on anything. Committees can become sociopathic, worrying about their status, ducking liability, and finding reasons why problems are either somebody else’s or not practically soluble.

So I spent a couple of hours yesterday reading the minutes, and indeed we see the group worried about its power: on February 13th it wants the messaging to emphasise that official advice is both efficacious and sufficient, to “reduce the likelihood of the public adopting unnecessary or contradictory behaviours”. Turf is defended: Public Health England (PHE) ruled on February 18th that it can cope with 5 new cases a week (meaning tracing 800 contacts) and hoped this might be increased to 50; they’d already decided the previous week that it wasn’t possible to accelerate diagnostic capacity. So far, so much as one might expect.

The big question, though, is why nobody thought of protecting people in care homes. The answer seems to be that SAGE dismissed the problem early on as “too hard” or “not our problem”. On March 5th they note that social distancing for over-65s could save a lot of lives and would be most effective for those living independently: but it would be “a challenge to implement this measure in communal settings such as care homes”. They appear more concerned that “Many of the proposed measures will be easier to implement for those on higher incomes” and the focus is on getting PHE to draft guidance. (This is the meeting at which Dominic Cummings makes his first appearance, so he cannot dump all the blame on the scientists.)

Continue reading Is science being set up to take the blame?

Three Paper Thursday – GDPR anniversary edition

This is a guest contribution from Daniel Woods.

This coming Monday will mark two years since the General Data Protection Regulation (GDPR) came into effect. It prompted an initial wave of cookie banners that drowned users in assertions like “We value your privacy”. Website owners hoped that collecting user consent would ensure compliance and ward off the lofty fines.

Article 6 of the GDPR describes how organisations can establish a legal basis for processing personal data. Putting aside a selection of “necessary” reasons for doing so, data processing can only be justified by collecting the user’s consent to “the processing of his or her personal data for one or more specific purposes”. Consequently, obtaining user consent could be the difference between suffering a dizzying fine or not.

The law changed the face of the web and this post considers one aspect of the transition. Consent Management Providers (CMPs) emerged offering solutions for websites to embed. Many of these use a technical standard described in the Transparency and Consent Framework. The standard was developed by the Interactive Advertising Bureau (IAB), who proudly claim it is “the only GDPR consent solution built by the industry for the industry”.

All of the following studies either directly measure websites implementing this standard or explore the theoretical implications of standardising consent. The first paper looks at how the design of consent dialogues shape the consent signal sent by users. The second paper identifies disparities between the privacy preferences communicated via cookie banners and the consent signals stored by the website. The third paper uses coalitional game theory to explore which firms extract the value from consent coalitions in which websites share consent signals.

Continue reading Three Paper Thursday – GDPR anniversary edition

Contact Tracing in the Real World

There have recently been several proposals for pseudonymous contact tracing, including from Apple and Google. To both cryptographers and privacy advocates, this might seem the obvious way to protect public health and privacy at the same time. Meanwhile other cryptographers have been pointing out some of the flaws.

There are also real systems being built by governments. Singapore has already deployed and open-sourced one that uses contact tracing based on bluetooth beacons. Most of the academic and tech industry proposals follow this strategy, as the “obvious” way to tell who’s been within a few metres of you and for how long. The UK’s National Health Service is working on one too, and I’m one of a group of people being consulted on the privacy and security.

But contact tracing in the real world is not quite as many of the academic and industry proposals assume.

First, it isn’t anonymous. Covid-19 is a notifiable disease so a doctor who diagnoses you must inform the public health authorities, and if they have the bandwidth they call you and ask who you’ve been in contact with. They then call your contacts in turn. It’s not about consent or anonymity, so much as being persuasive and having a good bedside manner.

I’m relaxed about doing all this under emergency public-health powers, since this will make it harder for intrusive systems to persist after the pandemic than if they have some privacy theater that can be used to argue that the whizzy new medi-panopticon is legal enough to be kept running.

Second, contact tracers have access to all sorts of other data such as public transport ticketing and credit-card records. This is how a contact tracer in Singapore is able to phone you and tell you that the taxi driver who took you yesterday from Orchard Road to Raffles has reported sick, so please put on a mask right now and go straight home. This must be controlled; Taiwan lets public-health staff access such material in emergencies only.

Third, you can’t wait for diagnoses. In the UK, you only get a test if you’re a VIP or if you get admitted to hospital. Even so the results take 1–3 days to come back. While the VIPs share their status on twitter or facebook, the other diagnosed patients are often too sick to operate their phones.

Fourth, the public health authorities need geographical data for purposes other than contact tracing – such as to tell the army where to build more field hospitals, and to plan shipments of scarce personal protective equipment. There are already apps that do symptom tracking but more would be better. So the UK app will ask for the first three characters of your postcode, which is about enough to locate which hospital you’d end up in.

Fifth, although the cryptographers – and now Google and Apple – are discussing more anonymous variants of the Singapore app, that’s not the problem. Anyone who’s worked on abuse will instantly realise that a voluntary app operated by anonymous actors is wide open to trolling. The performance art people will tie a phone to a dog and let it run around the park; the Russians will use the app to run service-denial attacks and spread panic; and little Johnny will self-report symptoms to get the whole school sent home.

Sixth, there’s the human aspect. On Friday, when I was coming back from walking the dogs, I stopped to chat for ten minutes to a neighbour. She stood halfway between her gate and her front door, so we were about 3 metres apart, and the wind was blowing from the side. The risk that either of us would infect the other was negligible. If we’d been carrying bluetooth apps, we’d have been flagged as mutual contacts. It would be quite intolerable for the government to prohibit such social interactions, or to deploy technology that would punish them via false alarms. And how will things work with an orderly supermarket queue, where law-abiding people stand patiently six feet apart?

Bluetooth also goes through plasterboard. If undergraduates return to Cambridge in October, I assume there will still be small-group teaching, but with protocols for distancing, self-isolation and quarantine. A supervisor might sit in a teaching room with two or three students, all more than 2m apart and maybe wearing masks, and the window open. The bluetooth app will flag up not just the others in the room but people in the next room too.
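A toy calculation shows why signal strength is such a poor stand-in for infection risk. Under a simple log-distance path-loss model (the reference RSSI, path-loss exponent and attenuation figures below are illustrative assumptions, not measurements from any real app), a phone on the other side of a plasterboard wall can look closer than one in the pocket of the person standing next to you.

```python
import math

# Hypothetical numbers for illustration: a 1 m reference RSSI of
# -55 dBm and a free-space path-loss exponent of 2. Real phones and
# environments vary wildly, which is exactly the problem.
RSSI_1M = -55.0
PATH_LOSS_EXP = 2.0

def rssi(distance_m, extra_attenuation_db=0.0):
    """Log-distance path-loss model: received signal strength falls
    by 10*n*log10(d) relative to the 1 m reference, minus any
    obstacle attenuation (wall, body, bag)."""
    return RSSI_1M - 10 * PATH_LOSS_EXP * math.log10(distance_m) - extra_attenuation_db

# A naive contact rule: flag any signal at least as strong as the
# one expected at 2 m line-of-sight.
CONTACT_THRESHOLD = rssi(2.0)

def flagged(distance_m, extra_attenuation_db=0.0):
    return rssi(distance_m, extra_attenuation_db) >= CONTACT_THRESHOLD

# Two people at desks either side of a plasterboard wall, 1 m apart
# with ~3 dB of wall loss: flagged as a contact, despite the wall.
print(flagged(1.0, extra_attenuation_db=3.0))   # True (false positive)

# Someone 1.5 m away whose phone sits in a pocket (~10 dB body loss):
# not flagged, though they are genuinely close.
print(flagged(1.5, extra_attenuation_db=10.0))  # False (false negative)
```

Whatever threshold the designers pick, a few decibels of unmodelled attenuation moves the apparent distance more than a metre of real distance does, so both kinds of error are baked in.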

How is this to be dealt with? I expect the app developers will have to fit a user interface saying “You’re within range of device 38a5f01e20. Within infection range (y/n)?” But what happens when people get an avalanche of false alarms? They learn to click them away. A better design might be to invite people to add a nickname and a photo so that contacts could see who they are. “You are near to Ross [photo] and have been for five minutes. Are you maintaining physical distance?”

When I discussed this with a family member, the immediate reaction was that she’d refuse to run an anonymous app that might suddenly say “someone you’ve been near in the past four days has reported symptoms, so you must now self-isolate for 14 days.” A call from a public health officer is one thing, but not knowing who it was would just creep her out. It’s important to get the reactions of real people, not just geeks and wonks! And the experience of South Korea and Taiwan suggests that transparency is the key to public acceptance.

Seventh, on the systems front, decentralised systems are all very nice in theory but are a complete pain in practice as they’re too hard to update. We’re still using Internet infrastructure from 30 years ago (BGP, DNS, SMTP…) because it’s just too hard to change. Watch Moxie Marlinspike’s talk at 36C3 if you don’t get this. Relying on cryptography tends to make things even more complex, fragile and hard to change. In the pandemic, the public health folks may have to tweak all sorts of parameters weekly or even daily. You can’t do that with apps on 169 different types of phone and with peer-to-peer communications.

Personally I feel conflicted. I recognise the overwhelming force of the public-health arguments for a centralised system, but I also have 25 years’ experience of the NHS being incompetent at developing systems and repeatedly breaking their privacy promises when they do manage to collect some data of value to somebody else. The Google Deepmind scandal was just the latest of many and by no means the worst. This is why I’m really uneasy about collecting lots of lightly-anonymised data in a system that becomes integrated into a whole-of-government response to the pandemic. We might never get rid of it.

But the real killer is likely to be the interaction between privacy and economics. If the app’s voluntary, nobody has an incentive to use it, except tinkerers and people who religiously comply with whatever the government asks. If uptake remains at 10-15%, as in Singapore, it won’t be much use and we’ll need to hire more contact tracers instead. Apps that involve compulsion, such as those for quarantine geofencing, will face a more adversarial threat model; and the same will be true in spades for any electronic immunity certificate. There the incentive to cheat will be extreme, and we might be better off with paper serology test certificates, like the yellow fever vaccination certificates you needed for the tropics, back in the good old days when you could actually go there.

All that said, I suspect the tracing apps are really just do-something-itis. Most countries now seem past the point where contact tracing is a high priority; even Singapore has had to go into lockdown. If it becomes a priority during the second wave, we will need a lot more contact tracers: last week, 999 calls in Cambridge had a 40-minute wait and it took ambulances six hours to arrive. We cannot field an app that will cause more worried well people to phone 999.

The real trade-off between surveillance and public health is this. For years, a pandemic has been at the top of Britain’s risk register, yet far less was spent preparing for one than on anti-terrorist measures, many of which were ostentatious rather than effective. Worse, the rhetoric of terror puffed up the security agencies at the expense of public health, predisposing the US and UK governments to disregard the lesson of SARS in 2003 and MERS in 2015 — unlike the governments of China, Singapore, Taiwan and South Korea, who paid at least some attention. What we need is a radical redistribution of resources from the surveillance-industrial complex to public health.

Our effort should go into expanding testing, making ventilators, retraining everyone with a clinical background from vet nurses to physiotherapists to use them, and building field hospitals. We must call out bullshit when we see it, and must not give policymakers the false hope that techno-magic might let them avoid the hard decisions. Otherwise we can serve best by keeping out of the way. The response should not be driven by cryptographers but by epidemiologists, and we should learn what we can from the countries that have managed best so far, such as South Korea and Taiwan.

FC 2020

I’m at Financial Cryptography 2020 and will try to liveblog some of the talks in followups to this post.

The keynote was given by Allison Nixon, Chief Research Officer of Unit221B, on “Fraudsters Taught Us that Identity is Broken”.

Allison started by showing the Mitchell and Webb clip. In a world where even Jack Dorsey got his twitter hacked via incoming SMS, what is identity? Your thief becomes you. Abuse of old-fashioned passports was rare as they were protected by law; now your identity is an email address (which you got by lying to an ad-driven website) and a phone number (which gets taken away and given to a random person if you don’t pay your bill). If you’re lucky you might have a signing key (generated on a general-purpose computer, and hard to revoke – that’s what bitcoin theft is often about). The whole underlying system is wrong.

Email domains, like phone numbers, lapse if you forget to pay your bill; fraudsters actively look for custom domains and check if yours has lapsed, while relying parties mostly don’t. Privacy regulations in most countries prevent you from looking up names from phone numbers; many people have phone numbers owned by their employers. Your email address can be frozen or removed because of spam if you’re bad or are hacked, while even felons are not deprived of their names. Evolution is not an intelligent process!

People audit password length but rarely the password reset policy: many firms use zero-factor auth, meaning information that’s sort-of public like your SSN. In Twitter you reset your password then message customer support asking them to remove two-factor, and they do, so long as you can log on! This is a business necessity, as too many people lose their phone or second factor, so this customer-support backdoor will never be properly closed. Many bitcoin exchanges have no probation period, whether mandatory or as a customer option. SIM swap means account theft so long as a phone number enables password reset – she also calls this zero-factor authentication.

SIM swap is targeted, unlike most password-stuffing attacks, and compromises people who comply with all the security rules. Allison tried hard to protect herself against this fraud but mostly couldn’t as the phone carrier is the target. This can involve data breaches at the carrier, insider involvement and the customer service back door. Email domain abuse is similar; domain registrars are hacked or taken over. Again, the assumptions made about the underlying infrastructure are wrong. Your email can be reset by your phone number and vice versa. Your private key can be stolen via your cloud backups. Both identity vendors and verifiers rely on unvetted third parties; vendors can’t notify verifiers of a hack. The system failure is highlighted by the existence of criminal markets in identity.

There are unrealistic expectations too. As a user of a general-purpose computer, you have no way to determine whether your machine is suitable for storing private keys, and almost 100% of people are unable to comply with security advice. That tells you it’s the system that’s broken. It’s a blame game, and security advice is as much cargo cult as anything else.

What would a better identity system look like? There would be an end to ever-changing advice; you’d be notified if your information got stolen, just as you know if your physical driving license is stolen; there would be an end to unreasonable expectations of both humans and computers; the legal owner of the identity would be the person identified and would be non-transferable and irrevocable; it would not depend on the integrity of 3rd-party systems like DNS and CAs and patch management mechanisms; we’ll know we’re there once the criminal marketplace vanishes.

Questions: What might we do about certificate revocation? A probation period is the next thing to do, as the way people learn of a SIM swap is a flood of password-reset messages in email, and then it’s a race.

I asked whether, rather than fixing the whole world, we should fix it one relying party at a time. Banks give you physical tokens after all, as they’re regulated and have to eat the losses. Allison agreed; in 2019 she talked about SIM swap to many banks but had no interest from any crypto exchange. Curiously, the lawsuits tend to target carriers rather than the exchanges.

What about SS7? There are sophisticated Russian criminal gangs doing such attacks, but they require a privileged position in the network, like BGP attacks. What about single sign-on? The market is currently in flux and might eventually settle on a few vendors. What about SMS spoofing attacks? Allison hasn’t seen them in 4g marketplaces or in widespread criminal use. Caller-ID spoofing is definitely used, by bad guys who organise SWATting.

Should we enforce authentication tokens? The customer service department will be inundated with people who have lost theirs, and that will become the backdoor. Would blockchains help? No, they’re just an audit log, and the failures are upstream. The social aspect is crucial: people know how to protect the physical cash in their wallet, and a proper solution to the identity problem must work like that. It’s not an impossible task, and might involve a chip in your driver’s license. It’s mostly about getting the execution right.

Security Engineering, and Sustainability

Yesterday I got the audience at the 36th Chaos Computer Congress in Leipzig to vote on the cover art for the third edition of my textbook on Security Engineering: you can see the result here.

It was a privilege to give a talk at 36C3; as the theme was sustainability, I spoke on The Sustainability of Safety, Security and Privacy. This is a topic on which I’ve written and spoken several times in recent years, but we now have some progress to report. The EU has changed the rules to require that if you sell goods with digital components (whether embedded software, associated cloud services or smartphone apps) then these have to be maintained for as long as the customer might reasonably expect.