Category Archives: Politics

SHB Seminar

The SHB seminar on November 5th was kicked off by Tom Holt, who has discovered a robust underground market in identity documents that are counterfeit or fraudulently obtained. He has been scraping both open websites and darkweb sites for data, and analysing how people go about finding, procuring and using such credentials. Most vendors were single-person operators, although many operate within affiliate programs; many transactions involved cryptocurrency; and many involved generating PDFs that buyers can print at home and that are good enough to let young people buy alcohol. Curiously, open-web products seem to cost twice as much as dark-web products.

Next was Jack Hughes, who has been studying the contract system introduced by hackforums in 2018 and made mandatory the following year. This enabled him to analyse crime-forum behaviour before and during the covid-19 era. How do new users become active and build up trust? How does the market evolve? He collected 200,000 transactions and analysed them. The contract mandate stifled growth quickly, leading to a first peak in activity; covid caused a second. The market was already centralised, and became more so with the pandemic. However, contracts are being completed faster, and the main activity is currency exchange: it seems to be working as a cash-out market.

Anita Lavorgna has been studying the discourse of groups who oppose public mask mandates. Like the antivaxx movement, this can draw in fringe groups and become a public-health issue. She collected 23,654 tweets from February to June 2020. There is a diverse range of voices from different places on the political spectrum, but with a transversal theme of freedom from government interference. Groups seek strength in numbers and try to ally into movements, so that the mask becomes a symbol of political identity construction. Anita found very little interaction between the different groups: only 144 messages in total.

Simon Parkin has been working on how we can push back on bad behaviours online while they are linked with good behaviours that we wish to promote. Precision is hard as many of the desirable behaviours are not explicitly recognised as such, and as many behaviours arise as a combination of personal incentives and context. The best way forward is around usability engineering – making the desired behaviours easier.

Bruce Schneier was the final initial speaker, and his topic was covid apps. The initial rush of apps that arrived from March through June has known issues around false positives and false negatives. We’ve also used all sorts of other tools, such as analysis of Google Maps data to measure lockdown compliance. The third thing is the idea of an immunity passport, saying you’ve had the disease, or a vaccine; that will have the same issues as the fake IDs that Tom talked about. Finally, there’s compliance tracking, where your phone monitors you. The usual countermeasures apply: consent, minimisation, infosec, etc., though the trade-offs might be different for a while. A further bunch of issues concerns home working and the larger attack surface that many firms have as a result of unfamiliar tools, less resistance to being told to do things, and so on.

The discussion started on fake ID. Tom hasn’t yet done test purchases, and might look in future at fraudulently obtained documents, as opposed to completely counterfeit ones. Is hackforums helping drug gangs turn paper into coin? This is not clear; the activity seems to be more about cashing out cybercrime than street crime. Anita then discussed how to analyse corpora of tweets, and the implications for real-life policy; things are made more difficult by the fact that discussions drift off onto other platforms we don’t monitor. Another topic was the interaction with fashion: where some people wear masks (or not) as a political statement, many more buy masks that get across a more targeted message. Fashion is really powerful, and tends to be overlooked by people in our field; usability research perhaps focuses too much on utilitarian economics, and is a bit of a blunt instrument.

Another covid-related example is the growing push for monitoring software on employees’ home computers. Unfortunately, Uber and Lyft bought a referendum result that enables them not to treat their staff in California as employees, so the regulation of working hours at home will probably fall to the EU; can we perhaps make some input into what that should look like? Another issue with the pandemic is its effect on information-security markets: why should people buy corporate firewalls when their staff are all over the place? And to what extent will some of these changes be permanent, if people work from home more?

Another thread of discussion was how the privacy properties of covid apps make it hard for people to make risk-management decisions. The apps appear ineffective because they were designed, in various subtle ways, to do privacy rather than public health; giving people low-grade warnings that do not require any action appears to be an attempt to raise public awareness, like mask mandates, rather than an effective attempt to get exposed individuals to isolate.
Apps that check people into venues have their own issues and appear to be largely security theatre. Security theatre comes into its own where the perceived risk is much greater than the actual risk; covid is the opposite. What can be done in this case? Targeted warnings? Humour? What might happen when fatigue sets in? People will compromise compliance to make their lives bearable. That can be managed to some extent in institutions like universities, but in society it will be harder. We ended up with the suggestion that the next SHB seminar should be in February, which should be the low point; after that we can look forward to things getting better, and hopefully to a meeting in person in Cambridge on June 3-4 2021.

Our new “Freedom of Speech” policy

Our beloved Vice-Chancellor proposes a “free speech” policy under which all academics must treat other academics with “respect”. This is no doubt meant well, but the drafting is surprisingly vague and authoritarian for a university where the VC, the senior pro-VC, the HR pro-VC and the Registrary are all lawyers. The bottom line is that in future we might face disciplinary charges and even dismissal for mockery of ideas and individuals with which we disagree.

The policy was slipped out in March, when nobody was paying attention. There was a Discussion in June, at which my colleague Arif Ahmad spelled out the problems.

Vigorous debate is intrinsic to academia and it should be civil, but it is unreasonable to expect people to treat all opposing views with respect. Oxford’s policy spells this out. At the Discussion, Arif pointed out that “respect” must be changed to “tolerance” if we are to uphold the liberal culture that we have not just embraced but developed over several centuries.

At its first meeting this term, the University Council considered these arguments but decided to press ahead anyway. We are therefore calling a ballot on three amendments to the policy. If you’re a senior member of the University, we invite you to signal your support by signing the flysheets. The first amendment changes “respect” to “tolerance”; the second makes it harder to force university societies to disinvite speakers whose remarks may be controversial; and the third restricts the circumstances in which the university itself can ban speakers.

Liberalism is coming under attack from authoritarians of both left and right, yet it is the foundation on which modern academic life is built, and our own university has contributed more than any other to its development over the past 811 years. If academics can face discipline for using tactics such as scorn, ridicule and irony to criticise folly, how does that sit with having such alumni as John Maynard Keynes and Charles Darwin, not to mention Bertrand Russell, Douglas Adams and Salman Rushdie?

Of testing centres, snipe, and wild geese: COVID briefing paper #8

Does the road wind up-hill all the way?
   Yes, to the very end.
Will the day's journey take the whole long day?
   From morn to night, my friend.

Christina Rossetti, 1861: Up-Hill. 

This week’s COVID briefing paper takes a personal perspective as I recount my many adventures in complying with a call for testing from my local council.

So as to immerse the reader in the experience, this post is long. If you don’t have time for that, you can go directly to the briefing.

The council calls for everyone in my street to be tested

On Thursday 13 August my household received a hand-delivered letter from the chief executive of my local council. There had been an increase in cases in my area, and as a result, they were asking everyone on my street to get tested.

Dramatis personae:

  • ME, a knowledge worker who has structured her life so as to minimize interaction with the outside world until the number of daily cases drops a lot lower than it is now;
  • OTHER HOUSEHOLD MEMBERS, including people with health conditions, who would be shielding if shielding hadn’t ended on August 1.

Fortunately, everyone else in my household is also in a position to enjoy the mixed blessing of a lifestyle without social interaction. So, none of us reacted to the news of an outbreak amongst our neighbours with fear for our own health, considering our habits over the last six months. Rather, we were, and are, reassured that the local government was taking a lead.

My neighbour, however, was having a different experience. Like most people on our street, he does not have the same privileges I do: he works in a supermarket, he does not have a car, and his only Internet access is through his dumbphone. Days before, he had texted me at the end of his tether, because customers were not wearing masks or observing social distancing. He felt (because he is) unprotected, and said it was only a matter of time before he becomes infected. Receiving the council’s letter only reinforced his alarm.

Booking the tests

Continue reading Of testing centres, snipe, and wild geese: COVID briefing paper #8

Security and Human Behaviour 2020

I’ll be liveblogging the workshop on security and human behaviour, which is online this year. My liveblogs will appear as followups to this post. This year my program co-chair is Alice Hutchings and we have invited a number of eminent criminologists to join us. Edited to add: here are the videos of the sessions.

Is science being set up to take the blame?

Yesterday’s publication of the minutes of the government’s Scientific Advisory Group for Emergencies (SAGE) raises some interesting questions. An initial summary in yesterday’s Guardian has a timeline suggesting that it was the distinguished medics on SAGE rather than the Prime Minister who went from complacency in January and February to panic in March, and who ignored the risk to care homes until it was too late.

Is this a Machiavellian conspiracy by Dominic Cummings to blame the scientists, or is it business as usual? Having spent a dozen years on the university’s governing body and various of its subcommittees, I can absolutely get how this happened. Once a committee gets going, it can become very reluctant to change its opinion on anything. Committees can become sociopathic, worrying about their status, ducking liability, and finding reasons why problems are either somebody else’s or not practically soluble.

So I spent a couple of hours yesterday reading the minutes, and indeed we see the group worried about its power: on February 13th it wants the messaging to emphasise that official advice is both efficacious and sufficient, to “reduce the likelihood of the public adopting unnecessary or contradictory behaviours”. Turf is defended: Public Health England (PHE) ruled on February 18th that it can cope with 5 new cases a week (meaning tracing 800 contacts) and hoped this might be increased to 50; they’d already decided the previous week that it wasn’t possible to accelerate diagnostic capacity. So far, so much as one might expect.

The big question, though, is why nobody thought of protecting people in care homes. The answer seems to be that SAGE dismissed the problem early on as “too hard” or “not our problem”. On March 5th they note that social distancing for over-65s could save a lot of lives and would be most effective for those living independently: but it would be “a challenge to implement this measure in communal settings such as care homes”. They appear more concerned that “Many of the proposed measures will be easier to implement for those on higher incomes” and the focus is on getting PHE to draft guidance. (This is the meeting at which Dominic Cummings makes his first appearance, so he cannot dump all the blame on the scientists.)

Continue reading Is science being set up to take the blame?

Three Paper Thursday – GDPR anniversary edition

This is a guest contribution from Daniel Woods.

This coming Monday will mark two years since the General Data Protection Regulation (GDPR) came into effect. It prompted an initial wave of cookie banners that drowned users in assertions like “We value your privacy”. Website owners hoped that collecting user consent would ensure compliance and ward off the lofty fines.

Article 6 of the GDPR describes how organisations can establish a legal basis for processing personal data. Putting aside a selection of ‘necessary’ reasons for doing so, data processing can only be justified by collecting the user’s consent to “the processing of his or her personal data for one or more specific purposes”. Consequently, obtaining user consent could be the difference between suffering a dizzying fine or not.

The law changed the face of the web, and this post considers one aspect of the transition. Consent Management Providers (CMPs) emerged, offering solutions for websites to embed. Many of these use a technical standard described in the Transparency and Consent Framework. The standard was developed by the Interactive Advertising Bureau (IAB), who proudly claim it is “the only GDPR consent solution built by the industry for the industry”.

All of the following studies either directly measure websites implementing this standard or explore the theoretical implications of standardising consent. The first paper looks at how the design of consent dialogues shapes the consent signal sent by users. The second paper identifies disparities between the privacy preferences communicated via cookie banners and the consent signals stored by the website. The third paper uses coalitional game theory to explore which firms extract the value from consent coalitions in which websites share consent signals.

Continue reading Three Paper Thursday – GDPR anniversary edition

Three Paper Thursday: Will we ever get IoT security right?

Academia, governments and industry frequently talk about the importance of IoT security. Fundamentally, the IoT environment has similar problems to other technology platforms such as Android: a fragmented market with no clear responsibilities or incentives for vendors to provide regular updates, and consumers for whom it’s not clear how much (of a premium) they are willing to pay for (“better”) security and privacy.

Just two weeks ago, Belkin announced it would shut down one of its cloud services, effectively transforming several of its product lines of web cameras into useless bricks. Unlike other end-of-support announcements for IoT devices that (only) mean devices will never see an update again, many Belkin cameras simply refuse to work without the “cloud”. This is particularly disconcerting, as many see cloud-based IoT as one possible solution for improving device security by easing the user’s maintenance effort through remote update capabilities.

In this post, I would like to introduce three papers, each talking about different aspects of IoT security: 1) consumer purchasing behaviour, 2) vendor response, and 3) an assessment of the ever-growing literature on “best-practices” from industrial, governmental, and academic sources.

Continue reading Three Paper Thursday: Will we ever get IoT security right?

Contact Tracing in the Real World

There have recently been several proposals for pseudonymous contact tracing, including from Apple and Google. To both cryptographers and privacy advocates, this might seem the obvious way to protect public health and privacy at the same time. Meanwhile other cryptographers have been pointing out some of the flaws.

There are also real systems being built by governments. Singapore has already deployed and open-sourced one that uses contact tracing based on bluetooth beacons. Most of the academic and tech industry proposals follow this strategy, as the “obvious” way to tell who’s been within a few metres of you and for how long. The UK’s National Health Service is working on one too, and I’m one of a group of people being consulted on the privacy and security.

But contact tracing in the real world is not quite as many of the academic and industry proposals assume.

First, it isn’t anonymous. Covid-19 is a notifiable disease so a doctor who diagnoses you must inform the public health authorities, and if they have the bandwidth they call you and ask who you’ve been in contact with. They then call your contacts in turn. It’s not about consent or anonymity, so much as being persuasive and having a good bedside manner.

I’m relaxed about doing all this under emergency public-health powers, since this will make it harder for intrusive systems to persist after the pandemic than if they have some privacy theatre that can be used to argue that the whizzy new medi-panopticon is legal enough to be kept running.

Second, contact tracers have access to all sorts of other data such as public transport ticketing and credit-card records. This is how a contact tracer in Singapore is able to phone you and tell you that the taxi driver who took you yesterday from Orchard Road to Raffles has reported sick, so please put on a mask right now and go straight home. This must be controlled; Taiwan lets public-health staff access such material in emergencies only.

Third, you can’t wait for diagnoses. In the UK, you only get a test if you’re a VIP or if you get admitted to hospital. Even so the results take 1–3 days to come back. While the VIPs share their status on twitter or facebook, the other diagnosed patients are often too sick to operate their phones.

Fourth, the public health authorities need geographical data for purposes other than contact tracing – such as to tell the army where to build more field hospitals, and to plan shipments of scarce personal protective equipment. There are already apps that do symptom tracking but more would be better. So the UK app will ask for the first three characters of your postcode, which is about enough to locate which hospital you’d end up in.

Fifth, although the cryptographers – and now Google and Apple – are discussing more anonymous variants of the Singapore app, that’s not the problem. Anyone who’s worked on abuse will instantly realise that a voluntary app operated by anonymous actors is wide open to trolling. The performance art people will tie a phone to a dog and let it run around the park; the Russians will use the app to run service-denial attacks and spread panic; and little Johnny will self-report symptoms to get the whole school sent home.

Sixth, there’s the human aspect. On Friday, when I was coming back from walking the dogs, I stopped to chat for ten minutes to a neighbour. She stood halfway between her gate and her front door, so we were about 3 metres apart, and the wind was blowing from the side. The risk that either of us would infect the other was negligible. If we’d been carrying bluetooth apps, we’d have been flagged as mutual contacts. It would be quite intolerable for the government to prohibit such social interactions, or to deploy technology that would punish them via false alarms. And how will things work with an orderly supermarket queue, where law-abiding people stand patiently six feet apart?

Bluetooth also goes through plasterboard. If undergraduates return to Cambridge in October, I assume there will still be small-group teaching, but with protocols for distancing, self-isolation and quarantine. A supervisor might sit in a teaching room with two or three students, all more than 2m apart and maybe wearing masks, and the window open. The bluetooth app will flag up not just the others in the room but people in the next room too.
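The scale of the problem follows from the physics. Apps typically estimate distance from Bluetooth received signal strength (RSSI) using a log-distance path-loss model, and a plasterboard wall costs only a few dB, which such a model cannot tell apart from an extra metre of open air. Here is a minimal sketch; the reference power, path-loss exponent and wall attenuation are illustrative assumptions, not calibrated measurements:

```python
import math

def estimate_distance(rssi_dbm, tx_power_dbm=-59, path_loss_exponent=2.0):
    """Estimate distance in metres from RSSI via the log-distance
    path-loss model: RSSI = tx_power - 10 * n * log10(d), where
    tx_power is the assumed RSSI at 1 m.  Both defaults are
    illustrative, not calibrated for any real handset."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

# Two phones 2 m apart in the same room: RSSI of roughly -65 dBm.
same_room = estimate_distance(-65)

# Same 2 m separation, but with a plasterboard wall in between:
# the wall attenuates by only about 3 dB, so RSSI drops to
# roughly -68 dBm and the estimated distance barely changes.
next_room = estimate_distance(-68)

print(f"same room: {same_room:.1f} m, through wall: {next_room:.1f} m")
```

Under these assumptions the wall shifts the estimate from about 2.0 m to about 2.8 m, still well inside any plausible contact threshold, which is why the app would flag the people in the next room as contacts.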

How is this to be dealt with? I expect the app developers will have to fit a user interface saying “You’re within range of device 38a5f01e20. Within infection range (y/n)?” But what happens when people get an avalanche of false alarms? They learn to click them away. A better design might be to invite people to add a nickname and a photo so that contacts could see who they are. “You are near to Ross [photo] and have been for five minutes. Are you maintaining physical distance?”

When I discussed this with a family member, the immediate reaction was that she’d refuse to run an anonymous app that might suddenly say “someone you’ve been near in the past four days has reported symptoms, so you must now self-isolate for 14 days.” A call from a public health officer is one thing, but not knowing who it was would just creep her out. It’s important to get the reactions of real people, not just geeks and wonks! And the experience of South Korea and Taiwan suggests that transparency is the key to public acceptance.

Seventh, on the systems front, decentralised systems are all very nice in theory but are a complete pain in practice as they’re too hard to update. We’re still using Internet infrastructure from 30 years ago (BGP, DNS, SMTP…) because it’s just too hard to change. Watch Moxie Marlinspike’s talk at 36C3 if you don’t get this. Relying on cryptography tends to make things even more complex, fragile and hard to change. In the pandemic, the public health folks may have to tweak all sorts of parameters weekly or even daily. You can’t do that with apps on 169 different types of phone and with peer-to-peer communications.

Personally I feel conflicted. I recognise the overwhelming force of the public-health arguments for a centralised system, but I also have 25 years’ experience of the NHS being incompetent at developing systems and repeatedly breaking their privacy promises when they do manage to collect some data of value to somebody else. The Google Deepmind scandal was just the latest of many and by no means the worst. This is why I’m really uneasy about collecting lots of lightly-anonymised data in a system that becomes integrated into a whole-of-government response to the pandemic. We might never get rid of it.

But the real killer is likely to be the interaction between privacy and economics. If the app’s voluntary, nobody has an incentive to use it, except tinkerers and people who religiously comply with whatever the government asks. If uptake remains at 10-15%, as in Singapore, it won’t be much use and we’ll need to hire more contact tracers instead. Apps that involve compulsion, such as those for quarantine geofencing, will face a more adversarial threat model; and the same will be true in spades for any electronic immunity certificate. There the incentive to cheat will be extreme, and we might be better off with paper serology test certificates, like the yellow fever vaccination certificates you needed for the tropics, back in the good old days when you could actually go there.

All that said, I suspect the tracing apps are really just do-something-itis. Most countries now seem past the point where contact tracing is a high priority; even Singapore has had to go into lockdown. If it becomes a priority during the second wave, we will need a lot more contact tracers: last week, 999 calls in Cambridge had a 40-minute wait and it took ambulances six hours to arrive. We cannot field an app that will cause more worried well people to phone 999.

The real trade-off between surveillance and public health is this. For years, a pandemic has been at the top of Britain’s risk register, yet far less was spent preparing for one than on anti-terrorist measures, many of which were ostentatious rather than effective. Worse, the rhetoric of terror puffed up the security agencies at the expense of public health, predisposing the US and UK governments to disregard the lesson of SARS in 2003 and MERS in 2015 — unlike the governments of China, Singapore, Taiwan and South Korea, who paid at least some attention. What we need is a radical redistribution of resources from the surveillance-industrial complex to public health.

Our effort should go into expanding testing, making ventilators, retraining everyone with a clinical background from vet nurses to physiotherapists to use them, and building field hospitals. We must call out bullshit when we see it, and must not give policymakers the false hope that techno-magic might let them avoid the hard decisions. Otherwise we can serve best by keeping out of the way. The response should not be driven by cryptographers but by epidemiologists, and we should learn what we can from the countries that have managed best so far, such as South Korea and Taiwan.

Security Engineering, and Sustainability

Yesterday I got the audience at the 36th Chaos Computer Congress in Leipzig to vote on the cover art for the third edition of my textbook on Security Engineering: you can see the result here.

It was a privilege to give a talk at 36C3; as the theme was sustainability, I spoke on The Sustainability of Safety, Security and Privacy. This is a topic on which I’ve written and spoken several times in recent years, but we now have some progress to report. The EU has changed the rules to require that if you sell goods with digital components (whether embedded software, associated cloud services or smartphone apps) then these have to be maintained for as long as the customer might reasonably expect.

WEIS 2019 – Liveblog

I’ll be trying to liveblog the eighteenth workshop on the economics of information security at Harvard. I’m not in Cambridge, Massachusetts, but in Cambridge, England, because of a visa held in ‘administrative processing’ (a fate that has befallen several other cryptographers). My postdoc Ben Collier is attending as my proxy (inspired by this and this).