Category Archives: Security psychology

Three Paper Thursday: Exploring the Impact of Online Crime Victimization

Just as with other types of victimization, victims of cybercrime can suffer serious consequences, emotional and otherwise. A repeat victim of a cyber-attack may face severe financial or emotional hardship, and repeat victims of online fraud are also more likely to require medical attention. Such victims therefore have a distinctive set of support needs, including counselling and help in seeking redress through the criminal justice system. There are also cases, such as cyberbullying or sextortion, where victims will not speak to family or friends: they feel too ashamed to share details with others and so probably receive no support at all. In such cases the trauma can even lead to self-harm, so online victimization can end in physical harm.

As a member of the National Risk Assessment (NRA) Behavioural Science Expert Group in the UK, working on the social and psychological impact of cyber-attacks on members of the public, I have argued for years that the actual social and psychological impact of different types of cyber-attack on victims, or on society as a whole, remains largely unexplored. Governments have been slow to identify and analyse online events that can harm individuals; in the UK, as in other countries, cybercrime was added to the national risk assessment exercise only a few years ago. Our knowledge of the potential impact of cyber-attacks and of their cascading effects is therefore still a work in progress.

This is often a very difficult area for lawyers and the courts to understand. Understanding victims’ needs, and the responsibilities of the police, the judiciary and other authorities in dealing with such crimes, is very important. This is why we need to explore further how, and to what extent, the situation and needs of victims of online crimes differ from those of victims of traditional offline crimes. By sharing experiences and discussing the issue openly, we can embed a cybersecurity mindset in our societies and so prevent some victimization.

In this post I would like to introduce three recent papers in this area. The first explores the social and psychological impact of cyber-attacks on individuals as well as nations; the second explores how the situation and needs of online crime victims differ from those of offline crime victims; and the third discusses the relationship between offending and victimization online.


Contact Tracing in the Real World

There have recently been several proposals for pseudonymous contact tracing, including from Apple and Google. To both cryptographers and privacy advocates, this might seem the obvious way to protect public health and privacy at the same time. Meanwhile other cryptographers have been pointing out some of the flaws.

There are also real systems being built by governments. Singapore has already deployed and open-sourced one that uses contact tracing based on bluetooth beacons. Most of the academic and tech industry proposals follow this strategy, as the “obvious” way to tell who’s been within a few metres of you and for how long. The UK’s National Health Service is working on one too, and I’m one of a group of people being consulted on the privacy and security.
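For readers who haven’t looked inside one of these designs, here is a minimal sketch of how such a beacon scheme typically works, written in Python purely for illustration. It is not the TraceTogether or Apple/Google protocol; the rotation interval, token length and function names are my own assumptions. Each phone broadcasts a short-lived pseudonym derived from a device secret, logs the pseudonyms it hears, and later checks its log against tokens published when someone is diagnosed.

```python
# Illustrative sketch of a rotating-pseudonym contact-tracing beacon.
# Not any deployed protocol; all parameters here are made up.
import hmac, hashlib, os, time

ROTATION_SECONDS = 15 * 60  # assumed: broadcast a fresh pseudonym every 15 minutes

class Phone:
    def __init__(self):
        self.secret = os.urandom(32)   # device secret, never broadcast
        self.heard = []                # (timestamp, token) pairs received over Bluetooth

    def _token_for_slot(self, slot):
        return hmac.new(self.secret, str(slot).encode(), hashlib.sha256).hexdigest()[:16]

    def current_token(self, now=None):
        """Pseudonym to broadcast in the current time slot."""
        return self._token_for_slot(int((now or time.time()) // ROTATION_SECONDS))

    def hear(self, token, now=None):
        """Record a pseudonym received from a nearby phone."""
        self.heard.append((now or time.time(), token))

    def tokens_if_diagnosed(self, days=14):
        """Tokens to publish so that others can check whether they were exposed."""
        now_slot = int(time.time() // ROTATION_SECONDS)
        first_slot = now_slot - (days * 86400) // ROTATION_SECONDS
        return {self._token_for_slot(s) for s in range(first_slot, now_slot + 1)}

def exposed(phone, published_tokens):
    """Did this phone hear any token later published by a diagnosed user?"""
    return any(tok in published_tokens for _, tok in phone.heard)

# Usage: alice, bob = Phone(), Phone()
# bob.hear(alice.current_token())            # they pass within Bluetooth range
# exposed(bob, alice.tokens_if_diagnosed())  # -> True once Alice's tokens are published
```

Note what this sketch does not capture: how close the two phones actually were, for how long, or through what walls. That is exactly where the trouble starts.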

But contact tracing in the real world is not quite as many of the academic and industry proposals assume.

First, it isn’t anonymous. Covid-19 is a notifiable disease so a doctor who diagnoses you must inform the public health authorities, and if they have the bandwidth they call you and ask who you’ve been in contact with. They then call your contacts in turn. It’s not about consent or anonymity, so much as being persuasive and having a good bedside manner.

I’m relaxed about doing all this under emergency public-health powers, since this will make it harder for intrusive systems to persist after the pandemic than if they have some privacy theater that can be used to argue that the whizzy new medi-panopticon is legal enough to be kept running.

Second, contact tracers have access to all sorts of other data such as public transport ticketing and credit-card records. This is how a contact tracer in Singapore is able to phone you and tell you that the taxi driver who took you yesterday from Orchard Road to Raffles has reported sick, so please put on a mask right now and go straight home. This must be controlled; Taiwan lets public-health staff access such material in emergencies only.

Third, you can’t wait for diagnoses. In the UK, you only get a test if you’re a VIP or if you get admitted to hospital. Even so the results take 1–3 days to come back. While the VIPs share their status on twitter or facebook, the other diagnosed patients are often too sick to operate their phones.

Fourth, the public health authorities need geographical data for purposes other than contact tracing – such as to tell the army where to build more field hospitals, and to plan shipments of scarce personal protective equipment. There are already apps that do symptom tracking but more would be better. So the UK app will ask for the first three characters of your postcode, which is about enough to locate which hospital you’d end up in.

Fifth, although the cryptographers – and now Google and Apple – are discussing more anonymous variants of the Singapore app, that’s not the problem. Anyone who’s worked on abuse will instantly realise that a voluntary app operated by anonymous actors is wide open to trolling. The performance art people will tie a phone to a dog and let it run around the park; the Russians will use the app to run service-denial attacks and spread panic; and little Johnny will self-report symptoms to get the whole school sent home.

Sixth, there’s the human aspect. On Friday, when I was coming back from walking the dogs, I stopped to chat for ten minutes to a neighbour. She stood halfway between her gate and her front door, so we were about 3 metres apart, and the wind was blowing from the side. The risk that either of us would infect the other was negligible. If we’d been carrying bluetooth apps, we’d have been flagged as mutual contacts. It would be quite intolerable for the government to prohibit such social interactions, or to deploy technology that would punish them via false alarms. And how will things work with an orderly supermarket queue, where law-abiding people stand patiently six feet apart?

Bluetooth also goes through plasterboard. If undergraduates return to Cambridge in October, I assume there will still be small-group teaching, but with protocols for distancing, self-isolation and quarantine. A supervisor might sit in a teaching room with two or three students, all more than 2m apart and maybe wearing masks, and the window open. The bluetooth app will flag up not just the others in the room but people in the next room too.

How is this to be dealt with? I expect the app developers will have to fit a user interface saying “You’re within range of device 38a5f01e20. Within infection range (y/n)?” But what happens when people get an avalanche of false alarms? They learn to click them away. A better design might be to invite people to add a nickname and a photo so that contacts could see who they are. “You are near to Ross [photo] and have been for five minutes. Are you maintaining physical distance?”

When I discussed this with a family member, the immediate reaction was that she’d refuse to run an anonymous app that might suddenly say “someone you’ve been near in the past four days has reported symptoms, so you must now self-isolate for 14 days.” A call from a public health officer is one thing, but not knowing who it was would just creep her out. It’s important to get the reactions of real people, not just geeks and wonks! And the experience of South Korea and Taiwan suggests that transparency is the key to public acceptance.

Seventh, on the systems front, decentralised systems are all very nice in theory but are a complete pain in practice as they’re too hard to update. We’re still using Internet infrastructure from 30 years ago (BGP, DNS, SMTP…) because it’s just too hard to change. Watch Moxie Marlinspike’s talk at 36C3 if you don’t get this. Relying on cryptography tends to make things even more complex, fragile and hard to change. In the pandemic, the public health folks may have to tweak all sorts of parameters weekly or even daily. You can’t do that with apps on 169 different types of phone and with peer-to-peer communications.

Personally I feel conflicted. I recognise the overwhelming force of the public-health arguments for a centralised system, but I also have 25 years’ experience of the NHS being incompetent at developing systems and repeatedly breaking their privacy promises when they do manage to collect some data of value to somebody else. The Google Deepmind scandal was just the latest of many and by no means the worst. This is why I’m really uneasy about collecting lots of lightly-anonymised data in a system that becomes integrated into a whole-of-government response to the pandemic. We might never get rid of it.

But the real killer is likely to be the interaction between privacy and economics. If the app’s voluntary, nobody has an incentive to use it, except tinkerers and people who religiously comply with whatever the government asks. If uptake remains at 10-15%, as in Singapore, it won’t be much use and we’ll need to hire more contact tracers instead. Apps that involve compulsion, such as those for quarantine geofencing, will face a more adversarial threat model; and the same will be true in spades for any electronic immunity certificate. There the incentive to cheat will be extreme, and we might be better off with paper serology test certificates, like the yellow fever vaccination certificates you needed for the tropics, back in the good old days when you could actually go there.

All that said, I suspect the tracing apps are really just do-something-itis. Most countries now seem past the point where contact tracing is a high priority; even Singapore has had to go into lockdown. If it becomes a priority during the second wave, we will need a lot more contact tracers: last week, 999 calls in Cambridge had a 40-minute wait and it took ambulances six hours to arrive. We cannot field an app that will cause more worried well people to phone 999.

The real trade-off between surveillance and public health is this. For years, a pandemic has been at the top of Britain’s risk register, yet far less was spent preparing for one than on anti-terrorist measures, many of which were ostentatious rather than effective. Worse, the rhetoric of terror puffed up the security agencies at the expense of public health, predisposing the US and UK governments to disregard the lesson of SARS in 2003 and MERS in 2015 — unlike the governments of China, Singapore, Taiwan and South Korea, who paid at least some attention. What we need is a radical redistribution of resources from the surveillance-industrial complex to public health.

Our effort should go into expanding testing, making ventilators, retraining everyone with a clinical background from vet nurses to physiotherapists to use them, and building field hospitals. We must call out bullshit when we see it, and must not give policymakers the false hope that techno-magic might let them avoid the hard decisions. Otherwise we can serve best by keeping out of the way. The response should not be driven by cryptographers but by epidemiologists, and we should learn what we can from the countries that have managed best so far, such as South Korea and Taiwan.

SHB 2019 – Liveblog

I’ll be trying to liveblog the twelfth workshop on security and human behaviour at Harvard. I’m doing this remotely because of US visa issues, as I did for WEIS 2019 over the last couple of days. Ben Collier is attending as my proxy and we’re trying to build on the experience of telepresence reported here and here. My summaries of the workshop sessions will appear as followups to this post.

WEIS 2019 – Liveblog

I’ll be trying to liveblog the eighteenth workshop on the economics of information security at Harvard. I’m not in Cambridge, Massachusetts, but in Cambridge, England, because of a visa held in ‘administrative processing’ (a fate that has befallen several other cryptographers). My postdoc Ben Collier is attending as my proxy (inspired by this and this).

Does security advice discriminate against women?

Security systems are often designed by geeks who assume that the users will also be geeks, and the same goes for the advice that users are given when things start to go wrong. For example, banks reacted to the growth of phishing in 2006 by advising their customers to parse URLs. That’s fine for geeks, but most people don’t do that, and in particular most women don’t. So in the second edition of my Security Engineering book, I asked (in chapter 2, section 2.3.4, pp 27-28): “Is it unlawful sex discrimination for a bank to expect its customers to detect phishing attacks by parsing URLs?”
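To make concrete what “parsing a URL” asks of a customer, here is a minimal sketch in Python; the bank domain and the sample URLs are invented for illustration, not drawn from any real bank or phishing campaign. The point is that only the registered domain just before the path matters, and attackers exploit the fact that most people never extract it.

```python
# Illustrative only: what a customer is implicitly being asked to do
# when told to "check the URL". Example domains are invented.
from urllib.parse import urlparse

BANK_DOMAIN = "examplebank.com"   # assumed legitimate domain for illustration

def looks_like_the_real_bank(url: str) -> bool:
    host = urlparse(url).hostname or ""
    # The registered domain is what counts, not whether the bank's
    # name appears somewhere else in the URL.
    return host == BANK_DOMAIN or host.endswith("." + BANK_DOMAIN)

tests = [
    "https://online.examplebank.com/login",        # genuine subdomain
    "https://examplebank.com.secure-login.net/",   # lookalike: wrong registered domain
    "https://examplebank.com@evil.example/",       # userinfo trick: real host is evil.example
    "https://examp1ebank.com/",                    # typosquat: digit 1 instead of letter l
]
for url in tests:
    print(looks_like_the_real_bank(url), url)
```

Only the first URL passes; the others put the bank’s name, or a lookalike of it, somewhere other than the registered domain, which is exactly the trick that catches people out.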

Tyler Moore and I then ran the experiment, and Tyler presented the results at the first Workshop on Security and Human Behaviour that June. We recruited 132 volunteers between the ages of 18 and 30 (77 female, 55 male) and tested them to see whether they could spot phishing websites, as well as for systematising quotient (SQ) and empathising quotient (EQ). These measures were developed by Simon Baron-Cohen in his work on Asperger’s; most men have SQ > EQ while for most women EQ > SQ. The ability to parse URLs is correlated with SQ-EQ and independently with gender. A significant minority of women did badly at URL parsing. We didn’t get round to publishing the full paper at the time, but we’ve mentioned the results in various talks and lectures.

We have now uploaded the original paper, How brain type influences online safety. Given the growing interest in gender HCI, we hope that our study might spur people to do research in the gender aspects of security as well. It certainly seems like an open goal!

How Protocols Evolve

Over the last thirty years or so, we’ve seen security protocols evolving in different ways, at different speeds, and at different levels in the stack. Today’s TLS is much more complex than the early SSL of the mid-1990s; the EMV card-payment protocols we now use at ATMs are much more complex than the ISO 8583 protocols used in the eighties when ATM networking was being developed; and there are similar stories for GSM/3G/4G, SSH and much else.

How do we make sense of all this?

Our paper Reconciling Multiple Objectives – Politics or Markets? was particularly inspired by Jan Groenewegen’s model of innovation, according to which the rate of change depends on the granularity of change. Can a new protocol be adopted by individuals, or does it need companies to adopt it en masse for internal use, or does it need to spread through a whole ecosystem, or – the hardest case of all – does it require a change in culture, norms or values?

Security engineers tend to neglect such “soft” aspects of engineering, but we probably shouldn’t. So we sketch a model of the innovation stack for security and draw a few lessons.

Perhaps the most overlooked need in security engineering, particularly in the early stages of a system’s evolution, is recourse. Just as early ATM and point-of-sale system operators often turned away fraud victims claiming “Our systems are secure so it must have been your fault”, so nowadays people who suffer abuse on social media can find that there’s nowhere to turn. A prudent engineer should anticipate disputes, and give some thought in advance to how they should be resolved.

Reconciling Multiple Objectives appeared at Security Protocols 2017. I forgot to put the accepted version online and in the repository after the proceedings were published in late 2017. Sorry about that. Fortunately the REF rule that papers must be made open access within three months doesn’t apply to conference proceedings that are a book series; it may be of value to others to know this!

BBC Horizon documentary: A Week without lying, the honesty experiment

Together with Ronald Poppe, Paul Taylor and Gordon Wright, Sophie van der Zee (previously employed at the Cambridge Computer Laboratory) took the plunge and tested their automated lie-detection methods in the real world. How well do the lie-detection methods that we develop and test under very controlled circumstances in the lab perform in the real world? What happens to you and your social environment when you constantly feel monitored and attempt to live a truthful life? And is living a truthful life actually something we should desire?