Category Archives: Legal issues

Security-related legislation, government initiatives, court cases

Patient confidentiality in remote consultations

During the lockdown last year, I was asked by the International Psychoanalytic Association (IPA) to help them update their guidance on remote consultations. I spoke to a range of GPs, surgeons, psychologists and psychoanalysts about what they’d learned during the first lockdown about working over the phone, or over Skype or Zoom. The IPA has now published my report, on a web page that also has their guidance to members both before and after the exercise.

Before the pandemic, remote consultation did happen, but not all therapists offered it; and confidentiality concerns tended to focus on technical security measures such as whether the call was encrypted end-to-end. After everyone was forced online in March and April 2020, clinicians learned rapidly to focus on the endpoints. Patients often have problems finding a private space to talk; there may be a family member in earshot, whether by accident, or because they’re cooped up in a tiny apartment, or because they have a controlling partner or parent. A clinician may return a patient’s call and catch them in a supermarket queue. And the clinic too can be interrupted, if the clinician is practising from home.

Technical endpoint compromise is occasionally an issue; a controlling family member could inspect a patient’s device and discover a therapeutic relationship that had not been disclosed. By far the worst endpoint compromise that happened during the study period was when the Vastaamo chain of clinics in Finland was hit by ransomware; 45,000 patients’ records were stolen, and some were put online by extortionists demanding bitcoin payments. (And now we face an even larger-scale issue in the UK as the government plans to hoover up all our GP records for sale to drug companies unless we opt out by June 25; see here for how to do that.)

Such horrors aside, the core problem is to establish a therapeutic space where both patient and clinician can interact effectively, which means being able to concentrate and also to relax. There’s more to this than just being comfortable trusting the endpoint environments, the devices, the communications medium and any record-keeping mechanism. Interaction matters too. Many clinician communities discovered independently that the plain old telephone system often works better than new-fangled stuff such as Skype and Zoom. Video calls add maybe half a second of latency for buffering, which destroys conversational turn-taking. A further advantage of the phone is that you’re not staring at someone’s face at an unnatural distance. You can walk around the room, or even walk around the park.

Since doing this work I’ve started to avoid Zoom and Teams in favour of phone calls when I can, and to use end-to-end encrypted voice calls on WhatsApp or Signal where call costs or client confidentiality make that sensible.

Robots, manners and stress

Humans and other animals have evolved to be aware of whether we’re under threat. When we’re on safe territory with family and friends we relax, but when we sense that a rival or a predator might be nearby, our fight-or-flight response kicks in. Situational awareness is vital, as it’s just too stressful to be alert all the time.

We’ve started to realise that this is likely to be just as important in many machine-learning applications. Take as an example machine vision in an advanced driver assistance system, whose goal is automatic lane keeping and automatic emergency braking. Such systems use deep neural networks, as they perform way better than the alternatives; but they can be easily fooled by adversarial examples. Should we worry? Sure, a bad person might cause a car crash by projecting a misleading image on a motorway bridge – but they could just as easily steal some traffic cones from the road works. Nobody sits up at night worrying about that. But the car industry does actually detune vision systems for fear of deception attacks!

We therefore started a thread of research aimed at helping machine-learning systems detect whether they’re under attack. Our first idea was the Taboo Trap. You raise your kids to observe social taboos – to behave well and speak properly – and yet once you send them to school they suddenly know words that would make your granny blush. The taboo violation shows they’ve been exposed to ‘adversarial inputs’, as an ML engineer would call them. So we worked out how to train a neural network to avoid certain taboo values, both of outputs (forbidden utterances) and intermediate activations (forbidden thoughts). The taboos can be changed every time you retrain the network, giving the equivalent of a cryptographic key. Thus even though adversarial samples will always exist, you can make them harder to find; an attacker can’t just find one that works against one model of car and use it against every other model. You can take a view, based on risk, of how many different keys you need.
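
To make the idea concrete, here is a minimal sketch of a Taboo-Trap-style check in Python/NumPy. It is an illustration under stated assumptions, not the authors’ implementation: the toy network has random weights, the training step that teaches the network to keep benign activations below the taboo threshold is omitted, and the key simply selects which hidden units are taboo and what threshold they must stay under.

```python
# Minimal sketch of a Taboo-Trap-style detector (illustration only, not the
# authors' code).  A secret key selects which hidden units are "taboo" and the
# activation threshold they must stay below; the network is assumed to have
# been trained so that benign inputs respect the taboo.  Only the
# inference-time check is shown here.
import numpy as np

rng = np.random.default_rng(0)

# A toy two-layer network with random weights (stands in for a trained model).
W1 = rng.normal(size=(32, 16))
W2 = rng.normal(size=(16, 10))

def forward(x):
    """Return (logits, hidden activations) for input vector x."""
    h = np.maximum(0.0, x @ W1)        # ReLU hidden layer
    return h @ W2, h

def make_taboo_key(seed, n_hidden=16, n_taboo=4, threshold=3.0):
    """Derive a secret set of taboo units and a threshold from a key."""
    key_rng = np.random.default_rng(seed)
    units = key_rng.choice(n_hidden, size=n_taboo, replace=False)
    return units, threshold

def taboo_alarm(x, key):
    """Flag the input if any taboo activation exceeds the keyed threshold."""
    units, threshold = key
    _, h = forward(x)
    return bool(np.any(h[units] > threshold))

key = make_taboo_key(seed=42)              # re-key whenever the model is retrained
benign = rng.normal(scale=0.1, size=32)    # small, in-distribution input

# An input crafted (with full knowledge of the weights) to push every hidden
# activation to 10 -- a stand-in for an adversarial probe.
suspicious, *_ = np.linalg.lstsq(W1.T, np.full(16, 10.0), rcond=None)

print("benign flagged:    ", taboo_alarm(benign, key))
print("suspicious flagged:", taboo_alarm(suspicious, key))
```

The point of the sketch is that the check is cheap, runs alongside normal inference, and can be re-keyed every time the model is retrained.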

We then showed how you can also attack the availability of neural networks using sponge examples – inputs designed to soak up as much energy, and waste as much time, as possible. An alarm can be simpler to build in this case: just monitor how long your classifier takes to run.
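
A latency alarm of this kind is easy to sketch. The following hypothetical Python example (the toy classifier, threshold percentile and slack factor are illustrative assumptions, not values from the paper) calibrates a time budget on benign queries and flags anything that runs suspiciously long:

```python
# Minimal sketch of a timing-based "sponge" alarm: calibrate a latency
# threshold on benign queries, then flag any query that takes much longer.
import time
import numpy as np

class LatencyAlarm:
    def __init__(self, classify, calibration_inputs, percentile=99.0, slack=2.0):
        self.classify = classify
        # Measure benign inference times and set the threshold with some slack.
        times = []
        for x in calibration_inputs:
            t0 = time.perf_counter()
            classify(x)
            times.append(time.perf_counter() - t0)
        self.threshold = slack * float(np.percentile(times, percentile))

    def __call__(self, x):
        t0 = time.perf_counter()
        y = self.classify(x)
        elapsed = time.perf_counter() - t0
        return y, elapsed > self.threshold   # (prediction, alarm raised?)

# Toy "classifier" whose running time depends on the input, standing in for a
# model whose latency and energy use an attacker can inflate.
def toy_classify(x):
    steps = int(1000 + 100000 * abs(float(x)))   # bigger inputs -> more work
    acc = 0.0
    for i in range(steps):
        acc += (i % 7) * 1e-9
    return acc > 0

alarm = LatencyAlarm(toy_classify, calibration_inputs=np.random.rand(200) * 0.1)
_, flagged = alarm(0.05)    # benign-sized input
print("benign flagged:", flagged)
_, flagged = alarm(50.0)    # "sponge" input that burns far more time
print("sponge flagged:", flagged)
```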

Are there broader lessons? We suspect so. As robots develop situational awareness, like humans, and react to real or potential attacks by falling back to a more cautious mode of operation, a hostile environment will cause the equivalent of stress. Sometimes this will be deliberate; one can imagine constant low-level engagement between drones at tense national borders, just as countries currently probe each other’s air defences. But much of the time it may well be a by-product of poor automation design coupled with companies hustling aggressively for consumers’ attention.

This suggests a missing factor in machine-learning research: manners. We’ve evolved manners to signal to others that our intent is not hostile, and to negotiate the many little transactions that in a hostile environment might lead to a tussle for dominance. Yet these are hard for robots. Food-delivery robots can become unpopular for obstructing and harassing other pavement users; and one of the show-stoppers for automated driving is the difficulty that self-driving cars have in crossing traffic, or otherwise negotiating precedence with other road users. And even in the military, manners have a role – from the chivalry codes of medieval knights to the more modern protocols whereby warships and warplanes warn other craft before opening fire. If we let loose swarms of killer drones with no manners, conflict will be more likely.

Our paper Situational Awareness and Machine Learning – Robots, Manners and Stress was invited as a keynote for two co-located events: IEEE CogSIMA and the NATO STO SCI-341 Research Symposium on Situation awareness of Swarms and Autonomous systems. We got so many conflicting demands from the IEEE that we gave up on making a video of the talk for them, and our paper was pulled from their proceedings. However we decided to put the paper online for the benefit of the NATO folks, who were blameless in this matter.

Infrastructure – the Good, the Bad and the Ugly

Infrastructure used to be regulated and boring; the phones just worked and water just came out of the tap. Software has changed all that, and the systems our society relies on are ever more complex and contested. We have seen Twitter silencing the US president, Amazon switching off Parler and the police closing down mobile phone networks used by crooks. The EU wants to force chat apps to include porn filters, India wants them to tell the government who messaged whom and when, and the US Department of Justice has launched antitrust cases against Google and Facebook.

Infrastructure – the Good, the Bad and the Ugly analyses the security economics of platforms and services. The existence of platforms such as the Internet and cloud services enabled startups like YouTube and Instagram to soar to huge valuations almost overnight, with only a handful of staff. But criminals also build infrastructure, from botnets to malware-as-a-service. There’s also dual-use infrastructure, from Tor to bitcoins, with entangled legitimate and criminal applications. So crime can scale too. And even “respectable” infrastructure has disruptive uses. Social media enabled both Barack Obama and Donald Trump to outflank the political establishment and win power; they have also been used to foment communal violence in Asia. How are we to make sense of all this?

I argue that this is not simply a matter for antitrust lawyers, but that computer scientists also have some insights to offer, and the interaction between technical and social factors is critical. I suggest a number of principles to guide analysis. First, what actors or technical systems have the power to exclude? Such control points tend to be at least partially social, as social structures like networks of friends and followers have more inertia. Even where control points exist, enforcement often fails because defenders are organised in the wrong institutions, or otherwise fail to have the right incentives; many defenders, from payment systems to abuse teams, focus on process rather than outcomes.

There are implications for policy. The agencies often ask for back doors into systems, but these help intelligence more than interdiction. To really push back on crime and abuse, we will need institutional reform of regulators and other defenders. We may also want to complement our current law-enforcement strategy of decapitation – taking down key pieces of criminal infrastructure such as botnets and underground markets – with pressure on maintainability. It may make a real difference if we can push up offenders’ transaction costs, as online criminal enterprises rely more on agility than on long-lived, critical, redundant platforms.

This was a Dertouzos Distinguished Lecture at MIT in March 2021.

WEIS 2020 – Liveblog

I’ll be trying to liveblog the seventeenth Workshop on the Economics of Information Security (WEIS), which is being held online today and tomorrow (December 14/15) and streamed live on the CEPS channel on YouTube. The event was introduced by the general chair, Lorenzo Pupillo of CEPS, and the program chair Nicolas Christin of CMU. My summaries of the sessions will appear as followups to this post, and videos will be linked here in a few days.

Our new “Freedom of Speech” policy

Our beloved Vice-Chancellor proposes a “free speech” policy under which all academics must treat other academics with “respect”. This is no doubt meant well, but the drafting is surprisingly vague and authoritarian for a university where the VC, the senior pro-VC, the HR pro-VC and the Registrary are all lawyers. The bottom line is that in future we might face disciplinary charges and even dismissal for mocking ideas with which we disagree, and the individuals who hold them.

The policy was slipped out in March, when nobody was paying attention. There was a Discussion in June, at which my colleague Arif Ahmad spelled out the problems.

Vigorous debate is intrinsic to academia and it should be civil, but it is unreasonable to expect people to treat all opposing views with respect. Oxford’s policy spells this out. At the Discussion, Arif pointed out that “respect” must be changed to “tolerance” if we are to uphold the liberal culture that we have not just embraced but developed over several centuries.

At its first meeting this term, the University Council considered these arguments but decided to press ahead anyway. We are therefore calling a ballot on three amendments to the policy. If you’re a senior member of the University, we invite you to add your support by signing the flysheets. The first amendment changes “respect” to “tolerance”; the second makes it harder to force university societies to disinvite speakers whose remarks may be controversial; and the third restricts the circumstances in which the university itself can ban speakers.

Liberalism is coming under attack from authoritarians of both left and right, yet it is the foundation on which modern academic life is built, and our own university has contributed more than any other to its development over the past 811 years. If academics can face discipline for using tactics such as scorn, ridicule and irony to criticise folly, how does that sit with having such alumni as John Maynard Keynes and Charles Darwin, not to mention Bertrand Russell, Douglas Adams and Salman Rushdie?

Three Paper Thursday: Broken Hearts and Empty Wallets

This is a guest post by Cassandra Cross.

Romance fraud (also known as romance scams or sweetheart swindles) affects millions of individuals globally each year. In 2019, the USA’s Internet Crime Complaint Center (IC3) recorded over US$475 million in reported losses to romance fraud. Similarly, in Australia, victims reported losing over AU$80 million, and British citizens reported over £50 million lost in 2018. Given the known under-reporting of fraud overall, and of online fraud in particular, these figures are likely to represent only a fraction of the losses actually incurred.

Romance fraud occurs when an offender uses the guise of a legitimate relationship to gain a financial advantage from their victim. It differs from a bad relationship, in that from the outset, the offender is using lies and deception to obtain monetary rewards from their partner. Romance fraud capitalises on the fact that a potential victim is looking to establish a relationship and exhibits an express desire to connect with someone. Offenders use this to initiate a connection and start to build strong levels of trust and rapport.

As with all fraud, victims experience a wide range of impacts in the aftermath of victimisation. While many believe these to be only financial, in reality they extend to a decline in both physical and emotional wellbeing, relationship breakdown, unemployment, homelessness and, in extreme cases, suicide. In the case of romance fraud, there is the additional trauma of grieving both the loss of the relationship and the loss of any funds transferred. For many victims, the loss of the relationship can be harder to cope with than the monetary loss, and they experience a deep sense of betrayal and violation at the hands of their offender.

Sadly, there is also a great deal of victim blaming around both romance fraud and fraud in general. Fraud is unusual in that victims actively participate in the offence, through the transfer of money, albeit under false pretences. As a result, they are often seen as culpable for what occurred and blamed for their own circumstances. The stereotype of fraud victims as greedy, gullible and naïve persists, and acts as a barrier to disclosure, inhibiting victims’ ability to report the incident and to access support services.

Given the magnitude of the losses and their impact on victims, there is an emerging body of scholarship that seeks to better understand how offenders successfully target victims, how they perpetrate their offences, and how victimisation affects the individuals concerned. The following three articles each explore a different aspect of romance fraud, to give a more holistic understanding of this crime type.

Continue reading Three Paper Thursday: Broken Hearts and Empty Wallets

Of testing centres, snipe, and wild geese: COVID briefing paper #8

Does the road wind up-hill all the way?
   Yes, to the very end.
Will the day's journey take the whole long day?
   From morn to night, my friend.

Christina Rossetti, 1861: Up-Hill. 

This week’s COVID briefing paper takes a personal perspective as I recount my many adventures in complying with a call for testing from my local council.

So as to immerse the reader in the experience, this post is long. If you don’t have time for that, you can go directly to the briefing.

The council calls for everyone in my street to be tested

On Thursday 13 August my household received a hand-delivered letter from the chief executive of my local council. There had been an increase in cases in my area, and as a result, they were asking everyone on my street to get tested.

Dramatis personae:

  • ME, a knowledge worker who has structured her life so as to minimize interaction with the outside world until the number of daily cases drops a lot lower than it is now;
  • OTHER HOUSEHOLD MEMBERS, including people with health conditions, who would be shielding if shielding hadn’t ended on August 1.

Fortunately, everyone else in my household is also in a position to enjoy the mixed blessing of a lifestyle without social interaction. So, none of us reacted to the news of an outbreak amongst our neighbours with fear for our own health, considering our habits over the last six months. Rather, we were, and are, reassured that the local government was taking a lead.

My neighbour, however, was having a different experience. Like most people on our street, he does not have the same privileges I do: he works in a supermarket, he does not have a car, and his only Internet access is through his dumbphone. Days before, he had texted me at the end of his tether, because customers were not wearing masks or observing social distancing. He felt (because he is) unprotected, and said it was only a matter of time before he becomes infected. Receiving the council’s letter only reinforced his alarm.

Booking the tests

Continue reading Of testing centres, snipe, and wild geese: COVID briefing paper #8

Three paper Thursday: Ethics in security research

Good security and cybercrime research often creates an impact, and we want to ensure that impact is positive. This week, in the run-up to next week’s Security and Human Behaviour workshop (SHB), I will discuss three papers on ethics in computer security research: Ethical issues in research using datasets of illicit origin (Thomas, Pastrana, Hutchings, Clayton, Beresford) from IMC 2017; Measuring eWhoring (Pastrana, Hutchings, Thomas, Tapiador) from IMC 2019; and An Ethics Framework for Research into Heterogeneous Systems (Happa, Nurse, Goldsmith, Creese, Williams).

Ethical issues in research using datasets of illicit origin (blog post) came about because in prior work we had noticed that there were ethical complexities to take care of when using data that had “fallen off the back of a lorry”, such as the backend databases of hacked booter services that we had used. We took a broad look at existing published guidance to synthesise the issues that particularly apply to using data of illicit origin and that we would expect researchers to discuss.

Continue reading Three paper Thursday: Ethics in security research

Is science being set up to take the blame?

Yesterday’s publication of the minutes of the government’s Scientific Advisory Group for Emergencies (SAGE) raises some interesting questions. An initial summary in yesterday’s Guardian has a timeline suggesting that it was the distinguished medics on SAGE rather than the Prime Minister who went from complacency in January and February to panic in March, and who ignored the risk to care homes until it was too late.

Is this a Machiavellian conspiracy by Dominic Cummings to blame the scientists, or is it business as usual? Having spent a dozen years on the university’s governing body and various of its subcommittees, I can absolutely get how this happened. Once a committee gets going, it can become very reluctant to change its opinion on anything. Committees can become sociopathic, worrying about their status, ducking liability, and finding reasons why problems are either somebody else’s or not practically soluble.

So I spent a couple of hours yesterday reading the minutes, and indeed we see the group worried about its power: on February 13th it wants the messaging to emphasise that official advice is both efficacious and sufficient, to “reduce the likelihood of the public adopting unnecessary or contradictory behaviours”. Turf is defended: Public Health England (PHE) ruled on February 18th that it could cope with 5 new cases a week (meaning tracing 800 contacts) and hoped this might be increased to 50; they’d already decided the previous week that it wasn’t possible to accelerate diagnostic capacity. So far, so much as one might expect.

The big question, though, is why nobody thought of protecting people in care homes. The answer seems to be that SAGE dismissed the problem early on as “too hard” or “not our problem”. On March 5th they note that social distancing for over-65s could save a lot of lives and would be most effective for those living independently: but it would be “a challenge to implement this measure in communal settings such as care homes”. They appear more concerned that “Many of the proposed measures will be easier to implement for those on higher incomes” and the focus is on getting PHE to draft guidance. (This is the meeting at which Dominic Cummings makes his first appearance, so he cannot dump all the blame on the scientists.)

Continue reading Is science being set up to take the blame?

Three Paper Thursday – GDPR anniversary edition

This is a guest contribution from Daniel Woods.

This coming Monday will mark two years since the General Data Protection Regulation (GDPR) came into effect. It prompted an initial wave of cookie banners that drowned users in assertions like “We value your privacy”. Website owners hoped that collecting user consent would ensure compliance and ward off the lofty fines.

Article 6 of the GDPR describes how organisations can establish a legal basis for processing personal data. Putting aside a selection of “necessary” reasons for doing so, data processing can only be justified by collecting the user’s consent to “the processing of his or her personal data for one or more specific purposes”. Consequently, obtaining user consent could be the difference between suffering a dizzying fine or not.

The law changed the face of the web and this post considers one aspect of the transition. Consent Management Providers (CMPs) emerged offering solutions for websites to embed. Many of these use a technical standard described in the Transparency and Consent Framework. The standard was developed by the Interactive Advertising Bureau (IAB) Europe, who proudly claim it is “the only GDPR consent solution built by the industry for the industry”.

All of the following studies either directly measure websites implementing this standard or explore the theoretical implications of standardising consent. The first paper looks at how the design of consent dialogues shapes the consent signal sent by users. The second paper identifies disparities between the privacy preferences communicated via cookie banners and the consent signals stored by the website. The third paper uses coalitional game theory to explore which firms extract the value from consent coalitions in which websites share consent signals.
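
As a toy illustration of what a “disparity” in the second paper’s sense looks like, here is a hypothetical Python sketch; the purpose names and the record format are invented for illustration, and are not taken from the TCF specification or from the paper itself:

```python
# Hypothetical sketch of a consent-disparity check: compare what the user
# chose in the banner with what the site's stored consent record claims.
# A real check would decode the site's actual TCF consent string instead.
BANNER_CHOICES = {            # what the user clicked in the dialogue
    "store_and_access_info": True,
    "personalised_ads": False,
    "ad_measurement": False,
}

STORED_SIGNAL = {             # what the site recorded after the click
    "store_and_access_info": True,
    "personalised_ads": True,   # recorded despite the user refusing it
    "ad_measurement": False,
}

def consent_disparities(chosen, stored):
    """Return purposes where the stored signal claims more than the user allowed."""
    return sorted(p for p, allowed in chosen.items()
                  if stored.get(p, False) and not allowed)

print("disparities:", consent_disparities(BANNER_CHOICES, STORED_SIGNAL))
# -> disparities: ['personalised_ads']
```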

Continue reading Three Paper Thursday – GDPR anniversary edition