13 thoughts on “Security and Human Behaviour 2021”

  1. Luca Allodi started the 2021 workshop by talking about charismatic megacrime. The conservation community is all too aware of the public’s fascination with charismatic megafauna such as rhinos and pandas; similarly, cybercrime researchers and the press pay excessive attention to the interesting stuff while ignoring the discreet stuff that happens at scale.

    Richard Clayton was next, talking about whether warning messages dissuade hackers. A widely-cited paper in 2014 claimed that they did, and several people reproduced the experiment in subsequent years. Richard didn’t believe this, and our psychologist colleagues thought the warning wouldn’t work either. Deeper analysis suggested that the warning was simply breaking the hackers’ automation; this was confirmed by an experiment by Alex Vetterl in which the warning was replaced with fake Latin, which worked just as well.
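    The mechanics are worth spelling out. Here is a minimal sketch in Python of why any interstitial breaks a scripted attack: the attacker’s tooling pattern-matches on an expected prompt, and when a warning (or fake Latin) demanding acknowledgement appears instead, the script never sees its pattern and times out. The library (pexpect) is real; the login dialogue is invented for illustration.

      import pexpect  # real library for scripting interactive sessions

      def scripted_login(host, user, password):
          # The attacker's automation expects a fixed dialogue.
          session = pexpect.spawn(f"ssh {user}@{host}", timeout=10)
          try:
              session.expect("password:")
              session.sendline(password)
              # If an interstitial demanding acknowledgement appears here,
              # the expected shell prompt never arrives...
              session.expect(r"\$ ")
              return True
          except (pexpect.TIMEOUT, pexpect.EOF):
              # ...and the script gives up, whatever language the warning
              # is written in; a human attacker would read it and press on.
              return False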

    Peter Grabosky has been studying dark humour and euphemism as indicators of state excess. Dark humour helps disinhibit, while euphemism helps to detoxify and camouflage unpleasant acts. For example, “Puff the Magic Dragon” was a children’s song, pressed into service as the name for an airborne gunship in the Vietnam war. Similar techniques were used by all combatants in World War 2. Since then, Ministries of War have become Ministries of Defence; “Total Information Awareness” became “Terrorism Information Awareness” and then “Basketball”. The media age has made propaganda ever more sophisticated and relentless.

    Alice Hutchings has been studying the interventions run by UK police under their Prevent strategy to persuade young offenders or suspects to stop, or not to escalate to more serious offences. In one case, people who had downloaded but not deployed malware got warning letters; in another, people who had registered for a booter service but not used it got police visits. There were also workshops for other suspects. Overall, participants were positive about the interventions, and self-reported offending was down afterwards. People who considered the police activity to be more legitimate did not, however, report …

    Anita Lavorgna has been studying public resistance to Covid contact-tracing apps. She’s collected relevant Twitter traffic, and found that the conversation was driven by high-profile media actors such as the BBC and Sky rather than by professionals or experts. The themes associated with resistance included distrust of authority and the socio-cultural context, ranging from opposition to the Conservative government to suffering such as domestic violence and trauma.

    Lydia Wilson’s subject is how ISIS is rewriting its territorial loss. The group survived against the world’s militaries for five years, a nontrivial achievement, and is still highly active online despite major efforts by both governments and tech companies. She has various resources, including a 2TB archive found on Dropbox; through it, ISIS is doing what nation states do via their museums and archives. So there’s no shortage of data for scholars to get started on, but the live material migrates constantly from one channel to another. The big question is why the core appeal is so durable, in Africa and elsewhere.

    Discussion started with automated versus human defences, from defences against attacks on web forms to whether Prevent crime-prevention techniques are likely to spread internationally. Peter Grabosky has written on the harmful side-effects of crime prevention initiatives, such as drawing more people into the criminal justice system and having people treat such experiences as a badge of honour. It moved to offences by states; Peter’s starting point here was the work on neutralisation, which applies to states just as much as to individuals. States use military language around cybersecurity, as with the wars on crime, on terror (under George W Bush), on drugs (under Nixon) and (in Lyndon Johnson’s time) on poverty. Mobilising militant vocabulary is a standard way of excusing state excess. The cybersecurity industry has a different angle, with a focus on PR: if you work as a researcher there, then you’re expected to get an RSA or Black Hat paper every year or two, or move on. This feeds the “charismatic megacrime” mentality Luca alluded to. Law enforcement have limited bandwidth; they only want to know of the biggest offender in each space, not the top five. So your best strategy as a bad guy is to run the second, third, and fourth largest botnets in the world, but never seem to be the top target. Will the Prevent programme work for radicalisation? That’s where it started, and there’s no evidence it helps at all; it may do harm by signalling that “My government hates Muslims.” One-to-one counselling does work (unless you screw it up, as Britain did with the London Bridge attacker, who was assessed as very radical but still let out). However, there seems to be nothing that’s both cheap and effective. On the cultural side of things, privacy in contact tracing is complex; many users think in terms of institutional privacy, in the sense of whether they trust the system operator – so an app run by a health service or university may be more trusted than one run by a private company.

  2. Zinaida Benenson’s topic is the relationship of sysadmins with power. That some have real power is evident from the impact Ed Snowden had; but what do they think themselves? One said “I can decide who’s able to do their job”; others take the view that their responsibility is limited, as more senior people have more. They are more positive about logging than about restrictive access controls (which mean more work), and feel under-appreciated.

    Maria Brincker is a philosopher who works on existential categories of space, being and action, and talked about a passage in Sartre’s “Being and Nothingness”. James Gibson’s language of affordances helps us understand what the world is inviting us to do; this gives a relationality to our perceptions. The most interesting set of affordances are the social ones – the potential afforded by other people. Here, Sartre comes to the rescue. Sartre’s “The Look” is about how the objects I see are seen differently by you; you “disintegrate” my space but make it richer. When we look directly at each other this raises the level of drama and danger; I become a potential aspect of your agency. “Being seen constitutes me as a defenceless being” and makes me “a means to the end of which I am ignorant”. This kind of analysis translates well into the new digital spaces, where we become the objects of action by hidden agents. Surveillance also turns the actions of others into things that appear to be objects, but are not really.

    Simon Parkin has been studying how organisations prepare for cyber-risks, by running a series of workshops with company leaders. This is a complex multifactorial question, where Simon is teasing out how executives make trade-offs between security controls and efficiency, and the post-decision behaviour as these choices play out. Scenarios include ransomware and malfunctioning ML in critical infrastructure. The interesting cases are where executives take quite different decisions, and where the case complexity means that they need to prioritise risk types.

    Sergio’s research question is “Who really owns your Android?” He’s been studying firmware over-the-air (FOTA) updates. These can come from the platform, the OEM, the MNO or other players, and affect a variety of functionality; some are signed with test keys. Data on one million installation events from 20 FOTAs showed that it’s a complex environment, which results in the installation of unwanted programs and malware. This is largely out of the sight or control of the phone owner / user. It’s due for a cleanup.

    Kami Vaniea has been wondering what “privacy” means on Stack Overflow. It’s the second most talked-about security topic, with 13% of relevant comments (after 40% for access controls); however the questions are really vague, and don’t align with the privacy frameworks at all – though the answers generally do.

    Discussion started on the relevance of privacy research, which does seem to be getting through to the people who answer privacy questions online. Perhaps privacy is sufficiently scary that only senior, or better-informed, people feel confident to answer. But the lessons from Android updates are scary when you think that Android is built by some of the smartest people in the world; what are the implications for the patching of things like cars and washing machines? This all emphasises Maria’s point that we lose agency. The invisibility of the harm we suffer is necessary for smooth functioning, which means we’re thrown back on institutions for trust. But often they’re creepy, and this can dissuade people from using systems. There are also data breaches, such as the leakage of all Covid records in the Netherlands; and a lack of privacy enforcement which makes clear that governments don’t care. The asymmetries of knowledge get worse; even in medieval times the priest, who knew people’s sins and secret desires, was visible and in a single place. And as systems construct people’s worlds, they can affect people’s visibility to others, creating new possibilities for censorship. It’s hardly surprising that some communities, such as the US black community, distrust the government, even over topics such as vaccination; within living memory their trust in the medical system has been badly breached. Beyond these communities, though, much of the anti-vax messaging seems to be externally generated, and aimed at damaging American society; similar campaigns are financed from Russia to undermine confidence in vaccines in Europe. On the commercial side, the asymmetry between companies and citizens is exacerbated by the fact that machine learning, although promoted as AI (artificial intelligence), is actually IA (intelligence augmentation) – and it enhances the capability of the large companies who buy masses of advertising.

    1. I think sysadmins now are like blacksmiths historically, where “now” was 1986-1991 when I was one.

  3. Serge Egelman has been studying the privacy of contact-tracing apps. The Google-Apple framework is widely used and enables people to download the temporary keys of infected persons, to determine whether any of them was nearby for long enough to be a concern. If these keys stayed on-device, certain privacy guarantees follow; however they end up in crash dumps being uploaded to the OEMs, and OEM apps can also read the logs. Google ignored Serge’s report until he went to the media, then did an update that doesn’t really fix the problem. For example, Nokia’s drivers log all the BLE data, and stopping that would require an update from Nokia. Blackberry records GPS every minute or so, and so on. Activities also get logged. The whole ecosystem is toxic.
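    For context, here is a minimal sketch in Python of the decentralised matching the Google-Apple framework performs on-device, with simplified crypto and invented helper names: the real design derives rolling proximity identifiers from each temporary exposure key using HKDF and AES, and a bare hash stands in for that here. The privacy guarantee rests on this matching staying on the device – which is exactly what the logging Serge found undermines.

      import hashlib

      def rolling_ids(exposure_key, intervals):
          # Derive the (simplified) rolling identifiers this key would
          # broadcast over BLE in each interval; the real framework uses
          # HKDF and AES rather than a bare hash.
          return {hashlib.sha256(exposure_key + i.to_bytes(4, "little")).digest()[:16]
                  for i in intervals}

      def exposed(downloaded_keys, locally_observed_ids, intervals):
          # Keys of infected users are published by design; the phone's own
          # BLE observations are meant never to leave the device.
          return any(rolling_ids(key, intervals) & locally_observed_ids
                     for key in downloaded_keys)

      # e.g. exposed(keys_from_health_authority, ids_seen_over_ble, range(144))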

    Alissa Frik showed a video by Leysan Nurgalaieva about their joint meta-analysis of the literature on why developers don’t care about security and privacy during development.

    Tom Holt has been studying ideologically-motivated web defacements and related cyber-attacks. Attacks by jihadi groups were both rare and less likely to use standard SQL injection. They’re more skilled than the usual script kiddies, and to Tom’s surprise, tended to target .org websites rather than government resources.

    Maryam Mehrnezhad has been studying the care of intimate data in fertility technologies. Fertility has gone digital like everything else, and different communities face risks differently. Maryam has been studying data about abortion, infertility and pregnancy and looking at vulnerable populations such as migrants and refugees. GDPR protects data on health and sex life, but she could find no specific data protection regulation on fertility. She promotes differential privacy analysis as an appropriate methodology for such studies.

    Tony Vance has been working on inexpert supervision; the SEC now requires directors of US-listed companies to supervise cybersecurity, with the CISO making a direct report to the board at least once a year. That is forcing companies worldwide to get their directors involved, like it or not. Companies increasingly need a cybersecurity expert on the board, but do they have one? Tony looked at a third of the top 3000 companies at random; only a quarter have a putative expert on the board, and many of these have no obviously relevant experience on their CV. Very often the CISO has to coach the board, which gets things the wrong way round.

    Rick Wash has been working out how people spot phishing emails. Last year he studied how experts get suspicious; now it’s the non-experts under the spotlight. He recruited university staff for a study and phished them before the interview. One subject was slightly suspicious as the email didn’t sort properly, but had previously been shouted at for not completing a survey from the university, so did it anyway. Such stories illuminate a sense-making pathway: what’s the context? What am I supposed to do? How can this help me maintain relationships? and finally: Am I done? People can become suspicious at any stage, but in different ways. In the context stage, they judge relevance; then in the action assessment stage, typicality; and at the action stage, the relationship to context. Rick did not observe any subjects becoming suspicious at the closure stage. Similarly, actions that people can take to make sense of an email, such as hovering over a link to see where it goes, must be seen in this context. They are likely to forget or neglect instrumental actions, such as logging into a system. One further difference is that non-expert users are more likely to talk to others to establish whether an email is suspicious, while experts usually only report one when they’re sure.

    The discussion ranged from the many usability failures of contact-tracing apps, which are perhaps understandable given their rushed development, through the difficulties of collecting and analysing data about political groups that vandalise websites. How do you tell which vandals are “political” and which aren’t? How many are just trolls, and how many actually identify with some movement or other? (It’s just as hard to tell who’s a “cybersecurity expert” in company reports; with luck, boards may hire consultants to advise them on how to keep the CISO honest.) In the Middle East, Islamist language is very common; lots of kids are called “Jihad”, for example. Perhaps over time you can work out who’s serious by looking at what targets they go for, and how this aligns with subcultural stories. (Telling who’s hostile is also an issue with military tech; some upcoming systems triage people in contested areas by their affect and may deem some to be hostile or combatants.) Looking for repeated actions is also a common behaviour when people are assessing an email; they may click on a link once to see where it goes, and then a second time to act on it, if they decide to. They may assume they’re OK so long as they don’t enter anything, especially on an Apple device; if they’re experts they may even be right, if they’re patched up to date or using odd operating systems. But it’s a complete failure that machines can be compromised by doing something that many people need to do in order to do their job. Instead we should give them the tools to inspect incoming stuff in a safe manner.

  4. Jean Camp started off Friday’s sessions with research she did into phishing across the Five Eyes, as they’re all WEIRD, all English-speaking and all targeted by the same actors. The UK was the outlier at recognising genuine websites, with age the most important factor; the UK was also behind on technical knowledge. Risk behaviour was consistent across countries, but in New Zealand the more risk-seeking individuals were more accurate. Overall, frightened people are more vulnerable.

    Yi Ting Chua was next, talking about Covid-enabled scams. For example, people flying to the UK have to select a test provider from a list of 254 firms, some of whose websites require registration even before you can find out what services are available at what price – one phishing red flag. Other red flags from the Consumers’ Association list include spelling mistakes, clunky formatting, and multiple unrelated logos trying to signal trust. In short, there are serious issues of trust signalling and multiple opportunities for phishermen to harvest sensitive data – as such sites demand passport details as well as travel plans and credit card numbers, and under a government mandate (paper here).

    Judith Donath’s topic was “Trust or Verify?” Many of the technologies we’re creating are substitutes for trust; hitchhiking has been displaced by ridesharing based on apps, so the judgment of the rider and driver is partly replaced by that of an online service. Face-to-face trust establishment is an ancient part of being human; replacing it with overarching surveillance may be convenient for taxi drivers (whose profession was historically one of the most dangerous) but can cause deep collateral damage. The less we evaluate others, the less good we may become at it. At the same time, we need to think about what trust should ideally be in a diverse, technological society. Affective trust establishment is problematic as it often depends on similarity; as diversity increases we might better aim at using tech to bootstrap a wider range of trust ties with others.

    Ben Collier has been studying the use of nudges and other modern social marketing techniques by governments, and more particularly by law enforcement. The social media infrastructure has been studied in the context of the influence industry, marketing and propaganda; yet it’s increasingly being used to target “risky” groups with tailored messaging and situational nudges, as well as to try to change culture. This “Prevent” approach was pioneered in the UK by the National Crime Agency; it’s spilled over into a fire safety campaign by the Home Office, targeted at people who had just bought matches or candles. There’s a feedback loop where governments surveil targets, craft interventions, measure their outcomes, and repeat. We need a national conversation on this!

    Daniel Thomas has been studying the underreporting of cybercrime in a project funded by Police Scotland. We need better data to motivate resource allocation, and to catch the one time in a hundred where the villain screws up their opsec. The phone queue at Action Fraud is so long that 37% of calls are abandoned, and it takes three months for reports to get to the police. From 2019, Police Scotland was asked to pay half a million a year for their share of this mess, and decided they could do the job better themselves. Daniel will be working with this project to assess usability, quality and outcomes; he wants to hire a research student.

    Sophie followed up with work on willingness to report cybercrime victimisation. Traditional crime has been going down, leading to the belief that crime’s going online but not showing up in police reports. Sophie compared a vignette study with a victimisation study, and the two tracked each other fairly well. The victimisation data showed that 65% reported a cybercrime to the police, and 56% to others; the numbers vary between crimes, with phishing, identity fraud and consumer fraud being well reported to the police, stalking somewhat, and malware very little. The reasons not to report are also similar; top is “the police won’t do anything” and next is “I’ll deal with it myself”. She concludes that vignettes are a reasonable way to study this issue.

    The discussion started off with the recent US decision to make ransomware a real focus of law enforcement, like the War on Terror; might this undermine victim support, or is it just political theatre? (Most likely the latter.) It moved to whether cybercrime reporting should be more structured. At present, Action Fraud has an AI that bins a lot of reports on the grounds that they don’t contain enough information; perhaps interactivity could fix this. But in many cases a fraud victim has no idea what technical failure led to the money disappearing from their bank account; it’s a complex ecosystem where lots of people hold bits of the puzzle. A real issue in Britain is the gentlemen’s agreement between the banks, for many years, not to compete on security, until TSB broke ranks recently. A broader issue is testimony; most of what we know about the world comes from the testimony of others rather than from personal experience. For a while, videos were objects that conveyed truth; the existence of deepfakes changes the cost metric and may relegate video to the same status that words used to have. There is a third modality, of course: interrogation. Just as verbal testimony can be tested in cross-examination, so the video investigation techniques developed by Bellingcat and by journalists in general can correlate and substantiate video taken by witnesses. While a court fight may be the pinnacle of this, there is some element of tussle in many of our daily interactions, and many forms of deception can be analysed as signalling problems. Keeping people tethered to reduce trust may be good for business and cheap for government; there’s a lot of stuff in there, and we don’t fully understand which parts offer opportunities for privacy research.

  5. Max Abrahms’s talk title was “Thinking like think tanks”. In 2002-3, Max worked for a think tank in DC; like all the others, they favoured the Iraq war. Why are think tanks so much more militant than political science professors? He surveyed 231 international relations professors at top universities, as well as 152 fellows at the top 20 foreign-policy institutes. The difference between the two is stark; think-tankers are half a standard deviation more hawkish. His hypothesis is that elites who favour intervention self-select for positions that give them access to power.

    Yasemin Acar has been studying password managers, which security experts tell everyone to use. Yet they have all sorts of easily findable and easily fixable problems; for example, they don’t work with some popular products such as Jitsi. Some firms even block password managers in the mistaken belief that this improves security. She has done a systematic study and found other issues, from domain matching to client-side encryption. Overall there were some 39 issues that users complained about and that vendors could have fixed – she hopes that some of them now will.
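    Domain matching is a good example of why these bugs are hard to get right. Here is a sketch of the trade-off, in Python, with a toy suffix set standing in for the real public suffix list: match on the exact hostname and autofill breaks across login.example.com and www.example.com; match too loosely and credentials get offered to lookalike domains.

      from urllib.parse import urlsplit

      PUBLIC_SUFFIXES = {"com", "org", "co.uk"}  # toy stand-in for the full PSL

      def registrable_domain(url):
          # Return eTLD+1, e.g. 'accounts.google.com' -> 'google.com'.
          host = urlsplit(url).hostname or ""
          labels = host.split(".")
          for i in range(len(labels)):
              if ".".join(labels[i:]) in PUBLIC_SUFFIXES and i > 0:
                  return ".".join(labels[i - 1:])
          return host

      def should_autofill(saved_url, visited_url):
          # Fill when both URLs share a registrable domain - stricter than
          # suffix matching, looser than exact-hostname matching.
          return registrable_domain(saved_url) == registrable_domain(visited_url)

      assert should_autofill("https://www.example.com", "https://login.example.com")
      assert not should_autofill("https://example.com", "https://evil-example.com")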

    Maria Bada has explored the cybercrime ecosystem around Shodan, a search engine that people use to find vulnerable IoT devices. On the relevant crime forums, CCTV cameras were the most frequently discussed devices. The next bundle of issues to come down the pike may well be around wearables, as users often lack awareness of their privacy issues and blindly trust the vendors – or accept the risk for the convenience.

    Steven Murdoch’s topic is designing systems to support later dispute resolution. As an example, we’ve had double-entry bookkeeping since the thirteenth century; you can’t create or destroy money, just move it around, and that leaves an audit trail. A dreadful recent example was the Horizon system at the UK Post Office, where over a period of almost 20 years executives covered up bugs and falsely accused sub-postmasters and sub-postmistresses of fraud, leading to 900 wrongful prosecutions. That has recently come to light as court cases led to convictions being overturned, after a long group litigation. Perhaps £100m ended up in a suspense account. Might blockchains do things better? Twenty years’ worth of Post Office transactions should fit in a few gigabytes; see here for more.
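    The property Steven is pointing at is easy to state in code. A minimal sketch of the double-entry invariant: every transaction posts balancing entries to an append-only journal, so money is moved rather than created or destroyed, and any balance can be explained from the trail – the discipline that Horizon’s bug-ridden accounting lacked.

      from collections import defaultdict

      class Ledger:
          def __init__(self):
              self.balances = defaultdict(int)   # account -> pence
              self.journal = []                  # append-only audit trail

          def post(self, debit, credit, pence, memo):
              if pence <= 0:
                  raise ValueError("amounts must be positive")
              self.journal.append((debit, credit, pence, memo))
              self.balances[debit] -= pence
              self.balances[credit] += pence
              # The invariant that explains every balance from the trail -
              # and that a Horizon-style bug would silently break:
              assert sum(self.balances.values()) == 0

      ledger = Ledger()
      ledger.post("customer", "branch_till", 1000, "stamps sold")
      ledger.post("branch_till", "head_office", 1000, "daily remittance")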

    Katharina Pfeffer has been working with Yubikeys used for authentication, and hardware wallets used for cryptocurrency. Why might users trust them? She did market reviews of both products, followed by a user study. No combination of reasonable features and checks can give real technical assurance; many products don’t use secure elements, for example. Such protection mechanisms as exist are not visible to users, who base their trust on packaging features such as holographic stickers.

    Jeff Yan has been studying differential imaging forensics. Given a photo, who was behind the camera? It may sometimes be possible to tell from the light reflected off the photographer into the image. For example, you might be able to tell from the difference between two images that the photographer was wearing a green coat. Similarly, successive CCTV frames may carry information about an intruder who walked past out of shot.
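    A minimal sketch of the underlying idea, with NumPy and none of the alignment and calibration that real forensic work would need: subtract a reference frame from the photo, and whatever light the photographer contributed to the scene survives in the residual.

      import numpy as np

      def residual(photo, reference):
          # Signed per-pixel difference of two aligned RGB frames
          # (cast to a wider type to avoid uint8 wrap-around).
          return photo.astype(np.int16) - reference.astype(np.int16)

      def dominant_tint(diff):
          # Name the colour channel with the largest mean residual -
          # e.g. "green" if the photographer's coat tinted the scene.
          means = diff.reshape(-1, 3).mean(axis=0)
          return ("red", "green", "blue")[int(np.argmax(means))]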

    In discussion, the design trade-offs of password managers are interesting; Bruce’s doesn’t do autofill, for example, as that introduces insecurities. It might be useful to train board members to ask a few questions on matters like this, as a way of teasing out how an organisation thinks about cybersecurity. Autofill features break all sorts of other things; there have been people filling expiry dates into amount fields and paying $1123 for a trinket. So what sort of heuristics should we teach board members to pay attention to? Then there are issues with habits formed at work transferring to home and vice versa, and the difficulty of making strong authentication evident, as described by Katharina. In the Horizon case there were all sorts of further issues, such as terminals having both 0 and 00 buttons – which may have been good for rapid data entry but enabled off-by-10 errors. As for the failure of people to use trusted enclaves, the chip makers are investing masses of money in them, but nobody knows how to use them, or how users would interact with them. Finally, the physical limits of imaging are unknown, but must depend closely on the context, and on how it limits side information.

  6. Alessandro Acquisti has been working for some years on who benefits from the data economy. His “Economics of Privacy” survey paper highlighted the belief by industry that ad targeting is a win-win for everyone, by reducing search costs on both sides. Is this capable of empirical validation, or falsification? The oligopoly suggests that most of the surplus ends up in the middle. He has a series of papers supporting this. His most recent study suggests that the products targeted at users are not optimal: both display and search results favour large players, display ads are less relevant, lower-quality vendors end up in display ads, and prices can be higher.

    I was next, talking about robots, manners and stress.

    Bob Axelrod’s topic is vengeance and cyber conflict. Vengeance is a powerful force; it is pervasive in international affairs. The A-bomb was in part vengeance for Pearl Harbor. What evokes vengeance? An attack that’s unprovoked, unprecedented, sneaky, cowardly in the sense of not allowing direct retaliation or being anonymous, or done without warning. These are all, or mostly, attributes of cyber attacks. Vengeance has not yet been taken seriously in the realm of cyber conflicts, but it should be.

    Damon McCoy talked about his systematisation of hate, harassment and online abuse. A range of attackers seek to inflict emotional harm or exercise social control, rather than to extract value. The problem is getting steadily worse, and is very threatening to some groups. The information security community should step up to this problem; nudges, warnings, abuse detection and other techniques can be used to lift or mitigate the burden.

    Bruce expects that AIs will become steadily better at hacking, and not just of computer systems: they will also surely be turned loose on the world’s tax codes. We have a lot of complex stuff, and AIs will find ways through it that we won’t. It’s impossible to avoid the genie problem, or the Midas problem: everything he touched turned to gold. In real life all our problems are underspecified, and while humans know the context, AIs don’t. The Volkswagen emissions hack was a crime because the engineers knew they were doing wrong; an AI might do the same but in an opaque way that we can’t criticise. Perhaps the bad side-effects of recommendation engines are an early example of this.

    In discussion, even 9/11 was in part an act of vengeance, and many spies self-recruited because of some grievance such as being passed over for promotion. A reputation for vengeance can be really helpful for deterrence. So far, cyber attacks haven’t been designed to provoke vengeance, but they often do: Stuxnet led to the Aramco hack. In most cases of online harassment the victims might want to take vengeance but can’t. Manners evolved to enable us to live together, but also define what constitutes an insult. A problem is that different cultures have different ideas of manners; and more violent cultures (such as the US South) have more elaborate manners. Feuds between people online can flare easily, as there will be some individuals on both sides with hair-trigger aggression; maybe the platforms should deal with this, but they don’t want to. The first-mover logic may be turned round with cyber weapons, where the logic may be to stockpile rather than use. Competition between advertisers is different; it tends to overload the consumer’s attention budget. It’s a classic prisoners’ dilemma. As for targeted ads, it’s really tricky to work out whether consumer welfare is actually increased on purely economic grounds; hence Alessandro’s research. But from the firm’s viewpoint, having to serve the same ads to everyone makes them more transparent, including to competitors; and ads for exploitative products are targeted at the poor and minorities, who may lose more from targeting. As for interpersonal abuse, a sore problem is what to do about kids being blackmailed into sex work; this is being exploited by intelligence agencies to demand access to end-to-end encrypted apps, and the agencies may not have the incentive to do helpful things, such as closing off the money flows to CSA gangs overseas. It’s hard to do research on that because of strict liability, and there are other topics that are hard to fund – such as repeating and challenging previous work. Psychologists are now working on reproducibility, but other disciplines may be much less ready for this.

  7. Thanks – very interesting reads. The link to Maria Brincker’s, Disoriented and alone in the “experience machine” is broken. (Or is it just my experience of my machine?)

  8. Thank you for these very readable summaries. Not a trivial task to write them all. I too would be interested to see Maria Brincker’s article.
