Security and Human Behaviour 2009

I’m at SHB 2009, which brings security engineers together with psychologists, behavioral economists and others interested in deception, fraud, fearmongering, risk perception and how we make security systems more usable. Here is the agenda.

This workshop was first held last year, and most of us who attended reckoned it was the most exciting event we’d been to in some while. (I blogged SHB 2008 here.) In followups that will appear as comments to this post, I’ll be liveblogging SHB 2009.

11 thoughts on “Security and Human Behaviour 2009”

  1. The first session was kicked off by Frank Stajano, who suggested we put more effort into studying victim psychology. He showed two clips from “The Real Hustle”: the first of a game of monte in which all the participants except the mark are conspiring with the dealer, and the second of a fraud at a jeweller’s where the hustlers impersonated a police officer and a suspect he was arresting. These give insights into how people relate to social contexts and to authority. He has written a paper about what we can learn from hustlers – see the workshop web site.

    David Livingstone Smith was next. What is a philosopher to make of deception? It has turned out to be extraordinarily difficult to define properly. The Oxford English Dictionary’s “cause to have a false belief” is grossly inadequate: we can mislead people easily by accident. So should it be “sincerely cause to have a false belief”? The causer might just be sincerely ignorant. And this is anthropocentric anyway, as animals deceive too – the placenta deceives the mother’s body to avoid rejection, using the same biochemical mechanisms as some parasites, and the tongue orchid deceives the orchid dupe wasp (it is “wasp pornography” and causes the male wasp to mate with it). Essentially we want “deception” to be purposive without narrow human “intending”: “A deceives B when A causes some mechanism in B to fail in such a way as to cause some mechanism in A to fulfil its purpose.” How do you …

    Bruce Schneier talked about fear selling versus greed selling. Traditionally security was a fear sell, and that works when people are really scared: people buy burglar alarms when they’ve been robbed, or when their neighbours have been. Flashbulb events such as 9/11 can change behaviour on a wider scale (which is why we were oversensitive to Conficker, and why 400 airplane deaths spook us more than 42,000 automobile deaths). All sorts of attempts have been made to re-engineer security as a greed sell (“we concentrate on security so you don’t have to”), but this basically doesn’t work in isolation: it only seems to get traction when security is bundled as part of a larger package that is itself a greed sell. These lessons are all explicable in terms of the heuristics-and-biases tradition, and they have broader applicability – for example to software project failure. (This will appear in about a week as a Wired article, with a link to a paper by Sorgensen.)

    Dominic Johnson followed up by talking about paradigm shifts in security strategy. He’s been working with Rafe Sagarin and Terry Taylor (who will also be talking tomorrow) on a project to learn about security from nature. For him, the striking thing about 9/11 wasn’t the failure of intelligence but the failure of anyone to do anything about a known threat. It took the attack to shift policy; and there are many other examples, from Pearl Harbor to paradigm shifts in science. Why are so many adaptation mechanisms punctuated, rather than gradual? See his book chapter: in addition to the usual psychological and sensory biases, there are leadership, organisational and political biases, all of which converge on preserving the status quo. And public perceptions hinge on big salient events: US confidence in whether the country is winning the war on terror, for example, has three spikes – after 9/11, with the invasion of Iraq, and with the killing of Zarqawi.

    The final speaker of this session was Jeff Hancock, who studies interpersonal deception. He’s interested mostly in how technology shapes the way we lie, but he also looks at how technology might help detection (as there’s more funding for that). See, for example, his paper on how men lie about their height and women about their weight in online dating. He also had people write résumés either on the assumption they’d be handled traditionally or for posting on LinkedIn: people lied a lot less in the latter case (and about more minor things, such as their interests). People are starting to realise that the stuff they put online is a bit like a tattoo: you can get rid of it, maybe, but it’s a bother. Almost every big scandal nowadays involves email or some other electronic logging mechanism whose affordances weren’t properly understood by the liar. And as everything becomes textual (voice messages are now turned to text for 25c), everything is searchable. Jeff also looks for NLP mechanisms for detecting mendacity: the psychological distancing involved means that the first person singular is used less (this is constant across perjured testimony, online dating and Bush’s statements about Iraq). Are politicians the future? If everyone’s going to be recorded in the future …
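
    As a crude illustration of the kind of linguistic cue Jeff described (this is not his method, just a sketch of the first-person-singular signal; the cue set and example sentences are invented):

    ```python
    import re

    # A simplified cue set: real deception-detection work uses many more
    # linguistic features and a trained model, not a single counter.
    FIRST_PERSON_SINGULAR = {"i", "me", "my", "mine", "myself"}

    def first_person_rate(text: str) -> float:
        """Fraction of words that are first-person-singular pronouns."""
        words = re.findall(r"[a-z']+", text.lower())
        if not words:
            return 0.0
        return sum(w in FIRST_PERSON_SINGULAR for w in words) / len(words)

    # Psychological distancing predicts a lower rate in the detached version.
    print(first_person_rate("I went out last night and I really enjoyed my evening."))
    print(first_person_rate("There was an evening out last night; it was enjoyable."))
    ```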

    The discussion was kicked off by Peter Neumann, who raised the issues of “constructive deception” – where, for example, intelligence isn’t passed on in order to protect sources – and “security theatre”, where people try to give the impression of doing something. Adam Shostack remarked that perhaps the Crusades were a major policy change in the absence of a challenge; and Dominic agreed that perhaps the collapse of the Soviet Union was another. Andrew Adams remarked that many “adaptive” responses aren’t; they are what institutions wanted to do anyway. Luke Church suggested that one reason computer security doesn’t work is that our interaction with computers is dominated by our everyday context: while security gurus fixate on attack events, real users don’t. Bruce Schneier agreed, and suggested it was unfixable: most people are nice most of the time, and rightly so, from which it follows that social engineering will always work. Terry Taylor remarked that the link between trust, ideology and religion appears to have been supremely important in evolutionary terms – as has redundancy at various levels in systems. John Adams recounted an experience of altruism: the previous day he’d left his London Freedom Pass in the taxi from the airport, and two hours later the taxi driver brought it back to the hotel! Jeff remarked that online and offline work differently: a motivated liar may do better online. Angela Sasse noted that she’d also done online dating studies, and asked whether misleading statements there were best classified as deception, or as social control / self-projection mechanisms. Jeff agreed; people may be representing their potential, but deception is still a real issue. Finally, danah boyd argued that an interesting aspect is whether users’ perceptions of norms align with group perceptions. Jeff agreed that there are many conceptual issues between “norm” and “deception”; David added that unless we also consider self-deception we won’t get very far.

  2. The second session was kicked off by Julie Downs, a psychologist from CMU interested in how we can help people make better decisions. What’s the deep background to phishing risks? She got people to sit down and role-play what they do when managing email: how do they process scam emails and evaluate phishing sites? A deep problem is that scaring people just shifts their operating point along the ROC curve – it doesn’t make them any better at signal detection, and in time the effect wears off unless you help people get better at risk assessment. She found that contextualised understanding matters: can people parse a URL? Do they understand the scams?

    Jean Camp was second. She’s interested in people taking risk online. People only mitigate risks they think apply to them. She discussed the well-known difficulty that normal people have in using encryption software in terms of users’ lack of any narrative that could cause its operation to make sense to them. Conversely, risk communication doesn’t buy you anything unless you empower people to choose a mitigation strategy. Stuff needs to be packaged better: a single story from the threat through the protection metaphor to the action required.

    Matt Blaze talked about electronic voting machines, having participated in two state studies. In both cases they had about two months to look at several complex systems, and in both cases their analysis confirmed the worst fears of the wildest conspiracy theorists. There were technical vulnerabilities galore. But were these vulnerabilities exploited? They found that officials in Kentucky had indeed exploited one of the systems Matt had examined, but had used a vulnerability he hadn’t noticed. It was a bug in the instructions given to voters. The Kentucky set-up required an extra “confirm” button-push, and the officials misled the voters about this; after a voter had left the booth an official would go into the booth and alter the vote. This ambiguity in the machine configurability was not noticed by expert reviewers (including Matt) but was discovered and exploited at the machine’s first deployment.

    Jeffrey Friedberg talked about Microsoft’s program to make the “expert” interfaces more accessible, and about complementary mechanisms such as extended validation (EV) certificates. However, no-one has spent much time educating users about these. It would be nice if extended validation gave users a “trust-at-a-glance” reflex; however, when they trained experimental subjects to look for it, the subjects became vulnerable to the picture-in-a-picture attack. (Users don’t understand what “the chrome” is.) This has led to the new discipline of the Trust User Experience, or TUX. This, he believes, is a really important problem, and a really hard one: it’s the “last two feet” of the end-to-end trust problem. How do you design a TUX, and how do you evaluate it?

    Stuart Schechter presented some work on the questions websites ask you if you forget your password. He looked at AOL, Google, Microsoft and Yahoo, inviting pairs of acquaintances to try to answer the questions for themselves and for each other. They were paid gratuities as an incentive to recall their own answers later, and to answer their partners’ questions too. About 22% of answers were guessed by “trusted” partners, while 17% were guessed by people the participants trusted only partly or not at all (in the sense of whether they’d be prepared to share a password with them). After removing the questions that were too easy for partners to guess, or too difficult for the users themselves to remember some months later, no questions were left! So is it better to let users choose their own questions? That doesn’t work either: questions with memorable answers just tend to be insufficiently secret. In conclusion, password-recovery questions just don’t work as a technology; they would not be adopted for secondary authentication if they had only just been invented. (He will present a better method at SOUPS.)

    Tyler Moore described his work analysing cooperation between banks (or, more accurately, between the firms that banks contract to take down phishing websites) in the fight against phishing. They would do much better if they cooperated. The problem is even more serious in respect of mule-recruitment websites, which attack the whole banking industry: no individual bank can be bothered to try to take them down. (They should, as mule recruitment is a bottleneck; but although it costs them money, they don’t see their brands being directly attacked.) In effect, the fraudsters are exploiting the bankers’ psychology. This is one area in which education might possibly help.

    The discussion was initiated by Angela Sasse, who asked whether “trust” was the right word to use in circumstances such as extended validation: vendors are conditioning people to respond to certain stimuli rather than educating them. Jeff agreed: the TUX people are alert to the problems of habituation, and wary of claims that a “user-centric design” would be a silver bullet. Andrew Adams remarked
    that banks are training customers to do the wrong thing by phoning users and asking for passwords; Jeff responded that current mechanisms are unserviceable. It’s ridiculous, for example, to expect users to inspect certificates. However the economics are wrong, and a particular problem is merchants’ desire to give customers instant credit. Any kind of out-of-band authentication to bootstrap stuff would break this. Perhaps – though he’s reluctant to say this – it will take legislation. Jean took the view that she doesn’t want a relationship with her software provider or her bank, as she doesn’t want to invest in her bank, and does not want even more responsibility dumped on her. Also, it’s wrong for machines to trust every website bar a blacklist (you don’t tell your kids to accept candy from every stranger). Maybe banks should be in trusted directories? Jeff agreed that an FDIC directory as part of your trusted favorites could be a good idea, but it would mean changing the business model (CAs make money for every cert they issue). But however it’s done, the infrastructure should do a better job of connecting people with entities they trust. John Mueller dissented: the banks say that fraud is small and manageable, so why should the 99.9% of customers who’re not victims be inconvenienced? Luke Church said he’d looked at what concepts users describe correctly; do people make more mistakes with “relationship” or “encryption”? Angela remarked that companies make up their own rules of engagement and it would be easier if these were standardised. Lorrie Cranor asked Matt if evaluations should be more holistic, and Matt said they thought they were: how do you measure it? Peter Neumann agreed that various overlapping e-voting machine evaluations indicated that there were many more bugs than any of them found.

  3. The third session, on usability, began with Andrew Patrick discussing biometrics and human behaviour. Usability causes many problems for operators: a lot of people are implementing biometrics in immigration, welfare, prisons, schools and even to tackle employee absenteeism. Even Canada is not immune, having announced it will fingerprint all arriving foreigners and will try to do remote iris scanning of crowds at the 2010 Olympics. The wide use of biometric data will mean it’s not secret any more. How can we fight back? Andrew’s suggestion is that we all publish our fingerprints.

    Luke Church talked about user-centred design. This turns out to be easier if it’s concrete (don’t talk of “the user” but of “Joe”) and if it assumes direct manipulation (to make frequent operations easy). It ends in “appliancisation”, where rules are embedded. (A prize for anyone who can think of a less ugly name!) Both the usability and the security folks think this is a good idea, but it just gives users a choice among a set of opaque options. Luke argues that a better way to go is “end-user engineering”, in which we get away from dialogue boxes and get users to program. He discussed the Facebook user interface as an example of how not to do it – 86 different screens to plough through to customise stuff. Programming is easier, if you get the language right. They built a tool and were delighted when one user adapted it to an unanticipated purpose: see the NSPW paper for details. The slogan is: “Leave the last mile of the interface and policy design to the users.”

    Diana Smetters agreed that you can teach users, but claimed that you can’t teach them very much, especially when security isn’t what they’re trying to do! So take care in designing what you’re going to ask them to learn. This is a reason why sharing is all-or-nothing: it’s just too difficult to innovate by sharing a small amount of data. So engineers should not be training users to decipher cryptic URLs, but moving users to more appropriate sharing models. How do we make it safe to click on links in email? (You can’t just tell people not to.) And users are quite rational to ignore browser warnings: 50% of certs are wrong, so almost all alerts are false alarms! As for a practical proposal, she has prototyped a system of protected links that work for trusted browsers – use a separate browser for important stuff. Moral: now it’s time to throw out the baby with the bathwater! We have been told for ages that it must work with Windows: why? People are happy using netbooks …

    After that challenge to Microsoft, the company’s Rob Reeder was on next. His subject was the use of trustees to identify account holders who have forgotten their passwords; the idea is basically that k of your friends vouch that you’re you. (His talk was interrupted by problems with the slides.)

    Jon Callas discussed the “security cliff”, whereby we ask people to climb a huge cliff to get to perfection, with no place to stop en route. How do we build a ramp? For example, why doesn’t everyone use encrypted email? After last year’s SHB, he believes that the real risks are both relatively small and hard to describe. With seat belts, you can talk of Princess Diana and Jayne Mansfield; yet in the absence of seat-belt laws people will use seat belts at a rate of only 17–20%. Why the heck should such people encrypt their email? He agrees that “you can teach them, but you can’t teach them much”. We probably have to use libertarian paternalism to move them in the right direction. We might deal with phishing if we got the defaults right, but other spheres of human behaviour suggest we won’t do it otherwise.

    Rob Reeder resumed his talk and described his trial implementation. By putting suitable warnings into the recovery website, he got the percentage of trustees who fell for phishing scams down to 4%. Phone-based social engineering was also tried, with a 45% success rate. (But we don’t know of any systems that withstand targeted and competent social engineering!) It’s also somewhat time-consuming.
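
    A minimal sketch of the underlying k-of-n idea might look as follows. The names, codes and threshold are invented, and this is not Rob’s implementation, which must also cope with expiry, rate limiting and the phishing and social-engineering attacks just described:

    ```python
    import secrets

    class TrusteeRecovery:
        """Toy k-of-n account recovery: the account holder nominates n trustees,
        and recovery succeeds once valid codes from at least k distinct trustees
        have been presented."""

        def __init__(self, trustees, k=3):
            self.k = k
            # Each trustee is issued a one-time recovery code, delivered out of band.
            self.codes = {name: secrets.token_hex(4) for name in trustees}
            self.vouched = set()

        def submit(self, trustee, code):
            """Record a trustee's vouch; return True once the threshold is met."""
            if secrets.compare_digest(self.codes.get(trustee, ""), code):
                self.vouched.add(trustee)
            return len(self.vouched) >= self.k

    recovery = TrusteeRecovery(["alice", "bob", "carol", "dave"], k=3)
    # The locked-out user contacts three friends and relays the codes they were sent:
    for friend in ["alice", "bob", "carol"]:
        unlocked = recovery.submit(friend, recovery.codes[friend])
    print("account unlocked:", unlocked)
    ```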

    Lorrie Faith Cranor picked up a theme of Diana’s: warnings. Programmers often just warn users of hazards, but often (usually) the alarms are false. (“Something happened and you have to click OK to get on with doing things.”) She has therefore worked on a methodology for spotting where humans do security-critical tasks, and she also discussed the relative merits of different browsers’ warning strategies. Firefox 3, for example, requires you to execute multiple steps to override a certificate exception. She tested this and found that warnings in IE and Firefox 2 were just ignored; everyone tried to override Firefox 3, but only half managed to. Conclusion: we have to think beyond warnings; you cannot rely on users paying attention to them.

    In discussion, the first topic was: is there any password-recovery mechanism that actually works? Email is generally OK. Luke cautioned against being too fearful: the number of people who regularly get harmed by online computing is so low that such events are news. But what of firms who benefit from poor security usability, such as Facebook (as my book suggests)? Luke challenged the idea that this is a conspiracy. John Mueller asked: why not let the market decide? Jon Callas answered that PGP does sell crypto, but only to the concerned: it’s the defaults issue that people miss. Jean disputed that markets can resolve privacy: we’re only just starting to see it as a macro problem, and we don’t know how to reason about bad neighbourhoods (e.g. where organised crime gangs are in control). Andrew Adams opposed any command-and-control structures, such as mandatory whitelisting, as they would lead to closed platforms like AOL or GEnie: whitelists would be captured by the RIAA or whoever. Jon added that these mechanisms don’t work for long: he’s seen green-bar phishing sites. Dave Clark asked whether these issues were about usability or economics; Diana answered that her point was about architectural change. I suggested that Diana’s whitelist browser might be physically implemented; for example, what if your browser turned into a whitelist machine whenever you put your phone next to your computer? (Jon liked this.) Joe Bonneau returned to the social-network question and pointed to his WEIS paper: site operators just want to steer people to open profiles, as that makes the site more useful, so the nudge is to stop people even thinking about privacy. Luke replied that the use of social networks is still subject to social negotiation: maybe people will plump for open communication, but we should leave the door open for that negotiation to happen. And privacy policies are programs, with all that that means; they can be shared, open-sourced, whatever.

  4. The last session of the first day was on methodology. First in to bat was Angela Sasse, who reported a project to measure the effort that security measures impose on users. The summary: designers should write out everything that users have to do to operate the system on the left-hand side of a piece of paper, and everything they need to know to do it on the right-hand side. If you spill over on to a second page, it’s not going to work. Then add up the time it takes – at one company it was costing users three weeks a year just to log in! (The cost the company had quoted was just that of the helpdesk.) Sadistic antisocial geeks impose huge costs on real people, who lose respect for security as a result. The top-level message: figure out what your compliance budget is, and then how you’re going to spend it. Like it or not, users keep a mental counter of how much time they spend doing security stuff. You can hack the limit slightly in various ways, but not necessarily with sanctions (they’re also a cost); basically you’re best to change the design and the culture (e.g. by taking away the frequently made excuses). Future work will include measuring all this better: we are lacking quite a lot of basic information.
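
    As a back-of-the-envelope illustration of the kind of sum Angela described (the figures below are invented, chosen merely to show how a “three weeks a year” number can arise):

    ```python
    # Hypothetical figures for one employee at a large firm:
    logins_per_day    = 20    # separate systems and re-authentications
    seconds_per_login = 100   # find credentials, type password, wait, retry
    working_days      = 220

    hours_per_year = logins_per_day * seconds_per_login * working_days / 3600
    weeks_per_year = hours_per_year / 40          # 40-hour working weeks
    print(f"{hours_per_year:.0f} hours/year, about {weeks_per_year:.1f} working weeks spent logging in")
    ```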

    The next speaker, Bashar Nuseibeh, is a software engineer with a background in safety and requirements engineering. He has a project on privacy rights management in mobile applications and has been looking at Facebook: how do people use it on the move? The methodological innovation was to get people to come up with a memory phrase that was salient at the time of use, and which was then used to trigger recall in later debriefings. In a paper accepted for Ubicomp, he discusses how they explored Facebook as a “space” or “place”. The findings include all sorts of subtle implicit knowledge boundaries, where some messages could only be understood by insiders, and differing etiquette (for example, young people will talk to one young person on Facebook and another in real life simultaneously, but won’t multitask when talking in real life to an older person). Also, people were using distinctions between real and virtual space to manage their privacy. This all poses a huge challenge to an engineer trying to extract requirements! The lesson is that such environments must support user-driven evolution.

    James Pita was next. His advisor is Milind Tambe, and he reported work with Richard John in which people play security games with realistic resources. In one, the game is to patrol eight terminals with four police dog teams: how do you maximise security while minimising predictability? They use a Stackelberg game to come up with a suitable mixed strategy (a probability distribution over which terminals are guarded). They are now adding human observational and computational limits to the model.
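
    The flavour of such a mixed strategy can be sketched as follows. This is just sampling from a hand-made distribution over 4-of-8 terminal assignments: the terminal names, values and weighting heuristic are invented, and a real Stackelberg solver would optimise the probabilities against the attacker’s best response rather than weight by covered value.

    ```python
    import itertools
    import random

    TERMINALS = list("ABCDEFGH")                       # eight terminals
    VALUE = {"A": 9, "B": 7, "C": 5, "D": 5,           # hypothetical payoffs of an
             "E": 3, "F": 3, "G": 2, "H": 1}           # undefended attack on each

    # Every way of assigning four dog teams to eight terminals.
    schedules = list(itertools.combinations(TERMINALS, 4))

    # Toy mixed strategy: weight each schedule by the total value it covers, so
    # high-value terminals are guarded more often, yet every schedule retains some
    # probability, which keeps the patrol unpredictable to an observing attacker.
    weights = [sum(VALUE[t] for t in s) for s in schedules]

    todays_patrol = random.choices(schedules, weights=weights, k=1)[0]
    print("today's patrolled terminals:", todays_patrol)

    # Marginal coverage probability of each terminal under this mixed strategy:
    total = sum(weights)
    coverage = {t: round(sum(w for s, w in zip(schedules, weights) if t in s) / total, 2)
                for t in TERMINALS}
    print(coverage)
    ```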

    Markus Jakobsson asked why car insurers want to know whether we smoke. People assume it’s because of the increased risk of fire; actually it’s that people who disregard their health in one way may disregard it in others. So Markus recruited participants through Mechanical Turk, asking for one cent “Have you lost money to online fraud?” He then took 101 respondents who had said yes and 100 who had said no to SurveyMonkey, where he asked their views on physical, financial and information risks. Various correlations were found, for example between vulnerability to phishing and to auction fraud.

    Rachel Greenstadt has been looking at mixed-initiative systems, so as to enable machines and humans to collaborate better. Machines are better at computation, scale and suchlike, while humans resist systematic compromise. (No-one says “I must be dancing with Jake, as this guy knows Jake’s private key”.) The failure modes are qualitatively different, so the goal should be systems that let humans and machines complement each other. Two examples: should I log into website X? Should I publish anonymous essay Y, or will my writing style betray me? She has a paper analysing each of these problems.

    Mike Roe noted a survey (Dempsky & Aitchison) of what proportion of transsexuals obtain their drugs from black-market sources. It is an interesting public-choice question: the drugs are very cheap, but must be prescribed by expensive professionals. Survey methodology here is seriously hard (but probably fixable). A second interesting survey was of crime in online games (Castronova & Dibbell), on Second Life and MetaPlace. There’s not as much fraud as you’d expect given the vulnerabilities and the money at stake. Explanation: see Bartle’s taxonomy of online gamers – explorers, socializers, achievers and killers. The conflict is largely between griefers and socializers in Second Life, and between griefers and achievers in MetaPlace. So here is an example of an environment where crime mostly isn’t economic; in fact, a lot of it is like school bullying.

    In discussion, Dave Clark challenged this, saying that while griefers are a constant, economic fraud grows over time. Sasha Romanosky then asked Angela whether willpower is depleted or strengthened by use: is it like fat or like muscle? She suggested it’s more complex than that, with strong peer-group effects. Also, risk appetite may depend on the value perceived to be at risk, and who “owns” that value. These factors can be shifted by communication. Richard John remarked to Markus Jakobsson that he was surprised the correlations were so small: Slovic had found large variance across ethnic groups, with most groups being more risk averse than white males. He’d have expected larger method variations. Markus remarked that he’d hoped to find the victims would have different risk assessments, but he didn’t find that.

  5. The second day’s talks were kicked off with a disruptive idea: risk is good, said Terry Taylor, an ex-soldier with a background in military security policy. He prefers to talk about “living with risk” rather than the fashionable “risk management”. With Rafael Sagarin he ran a project on learning about security from evolutionary history, which led to the book “Natural Security”. Organisms did not survive and become successful by eliminating all risks; adaptation is the order of the day, and the important thing is to be successful. Thus we should see the collapse of the Soviet Union not as failure but as adaptation, driven by a small elite who were prepared to take risks: the Russian state is now more successful. Society on Easter Island, on the other hand, could not adapt, and collapsed. So: risk is good! It’s the risk takers who can adapt and move society along with them. So he’d encourage people to think about the big risks – not just climate change, but how about antibiotic resistance?

    The next heretic up was Andrew Odlyzko, whose theme was “cyberspace versus humanspace”. People cannot build secure systems; worse, we cannot live with secure systems. But that’s OK, because we don’t need secure systems! It’s actually amazing that civilisation works at all: most spreadsheets have bugs, peer-reviewed papers often have errors in their statistics, and so it goes on. Prosperity coexists with appalling innumeracy. Ten or fifteen years ago, cyberspace was thought to be a separate world, immune to human weaknesses: a place where technologists could build things right. If the world were that abstract and perfect, then someone who could factor your modulus could become you and wipe you out. Yet that doesn’t happen: human space compensates for the defects of cyberspace. Fax signatures are easy to forge, but they work because they’re embedded in the paper world. And cyberspace doesn’t matter all that much: it’s just one of humanity’s many tools. So what’s the way forward? Andrew urged us to learn from the spammers and phishermen: build messy, not clean; build zillions of links; and keep lots of permanent records.

    The next toppler of graven images was danah boyd, who talked about her fieldwork among American teens (see her thesis for details). What’s a “lie”? Well, most social media have lots of people from Afghanistan and Zimbabwe (the first and last countries on the list), many people who’re 61 or 71 (16 or 17 backwards in systems that demand you’re 18 or over). They’re not deceiving their friends, just the system: they don’t put the wrong birth date, just the wrong year (they don’t want inappropriate happy birthdays). Their parents have encouraged them to lie to be safe since they first went online at age 7 or 8. Case study 2: almost every teen has shared at least one password (often with parents). Teens also often share with best friends or partners as a symbol of trust. (You have to remember to change passwords when breaking up.) In effect, password sharing has become a core social protocol for an entire generation. So forget what people “should” do and study what they do do. As another example, teens put tons of information up in order to hide the important stuff. Privacy itself is changing, from bounded space to flows in networks. It’s complex and messy. Look at what it actually is, and how people who live it make sense of it.

    Mark Levine then described how social psychology traditionally saw groups as a bad thing: mob violence, mass hysteria and peer-group pressure. Wrong, he says: this is only a partial story, and groups can often have a positive effect. He has collected a corpus of incidents of violence and potential violence recorded by CCTV operators (without names or soundtracks). Classic theory says that increasing group size leads to anonymity and bystander effects, and thus to worse behaviour; his observations show no increase in anti-social acts and a significant increase in pro-social de-escalation instead. Larger groups make bystanders more likely to try to bring things under control! Takeaway message: groups are part of the solution, not just part of the problem.

    Jeff MacKie-Mason argued that security is about incentives – how much do humans want to attack or defend? – and that we have to design with this in mind. For example, people used to bomb search engines to get to the top of the listing; now they just pay. So his disruptive idea is that rather than talking about the economics and psychology of security, we should talk about design. From a designer’s viewpoint, there are several useful tools in economics and psychology: examples include signalling theory, which is about making the costs to good guys and bad guys as different as practical, and social-comparison theory (tournaments, benchmarks, dissing).

    Last up was Joe Bonneau who described how the move to social media has changed a lot of the assumptions behind privacy. He made three points. First, there’s a culture gap in that privacy folks say that social networks are pointless, childish and broken, while social-network folks consider privacy to be difficult, boring and outdated. Second, like it or not, the figures show that social networks are here to stay, and Facebook in particular is reimplementing the internet – with its own version of all the common services. Third, all the old scams are being reinvented – some work better, others worse. Facebook is a noisy, fast environment that people navigate with their brains turned off; but the social graph can also be used in countermeasures.

    In discussion, John Mueller remarked that nature reacts to sustained events while humans react to anomalous ones. Terry replied that initial responses can be anomalous and not necessarily adaptive: post-9/11 the USA became more vulnerable to (more frequent and deadly) natural disasters. David Livingstone Smith challenged the analogy between adaptation and learning; Andrew Odlyzko ventured that the difference between security and adaptation was intelligent agency. It was asked how teens on social networks are different: danah answered that for teenagers, financial assets aren’t at stake; in fact they lose passwords more often than they give them away. The main thing that drives them is fear of parents: they lock stuff, improvise security protocols, put up lots of inaccurate information, and jump between systems often. She was surprised to find that their behaviour depends more on socioeconomic status than on age: middle-class kids trust their peers more not to backstab them. And the systems have all sorts of bugs: for example, they are incapable of managing ephemerality. You meet a kid at summer camp, and he’s in your circle of friends forever. This is just broken.

  6. The session on terror was kicked off by Bill Burns. The DHS thinks that a large terror attack could kill 100–1,000 people, about the same as a magnitude 6.5 earthquake, but the heightened risk perception would last for months – as would the number of fearful people. The economic half-life seems to be about 45 days. The same dynamic was quite visible among investors after the collapse of Lehman Brothers. Interestingly, business leaders are less confident than ordinary people, and it will take some time for their confidence to recover to pre-crunch levels.

    Chris Cocking was next. He studies collective mass behaviour in emergencies from a social-psychology perspective. The classic view, from the 19th-century concept of contagious “mob panic” through Milgram and Zimbardo, was pessimistic; Chris, like Mark in the previous session, believes this is only part of the story. A competing model is Tony Moss’s “social attachment” model, under which people associate more strongly with close group members under stress; Benjamin Cornwall has data supporting this from behaviour in fires. But this isn’t enough: in many emergencies there are no kin present. Disasters also create a sense of “we-ness” (Turner, self-categorisation). Photos of the 9/11 evacuation show cooperative, rather than panicky, behaviour. There are various ways in which resilience can be reinforced. Indeed, crowds are the “fourth emergency service”.

    Richard John has been working with CREATE, a DHS program dealing with “asymmetric threats” (to minimise the use of the T-word), with particular emphasis on the social amplification of risk. Responses to disasters are sometimes muted (San Francisco in 1906, when there was no FEMA to hand out federal dollars), but there have been events that caused limited direct losses and much larger long-term costs: the Santa Barbara oil spill of 1969, the Three Mile Island accident which killed nuclear power, and 9/11. They have done studies of likely reactions to various attack modes, from MANPADS to biological weapons, and to repeated attacks. It turns out that people aren’t that scared of terrorism any more. The next study will be of what happens if there are terror attacks in virtual worlds.

    Mark Stewart is a civil engineer interested in whether protective measures applied to roads, bridges and the like are cost-effective. He’d like to see terror hazards assessed as calmly as the hazards from natural disasters; the terror industry, however, trades in worst-case scenarios. (Guys, why not get your science-fiction writers to ask what al-Qaida could do if they got their hands on a time machine?) As a worked example, he discussed whether it’s worth spending $1bn a year on the Federal Air Marshal Service. Given that passengers will fight back, probably not. He recommends the civil-engineering approach of coming up not just with an opinion but with numbers: people who object then have to propose different numbers.

    John Adams started off with a plug for John Mueller’s book “Overblown”, and went on to discuss the risk thermostat that we use to regulate our behaviour: when propensity and perception get out of kilter, we act to restore the balance. The perception loop is now heavily biased by social institutions driven by other concerns. He showed a kids’ playground with swings chained up – not for risk reduction but liability reduction. The kids play in the street so the council won’t get sued. He showed a number of slides of bureaucratic paranoia. Thankfully it’s starting to backfire: people are getting more afraid of the government than the terrorists. Risk tolerance is of course highly context-sensitive; people are more sensitive to imposed risks.

    Last up was Dan Gardner, a journalist who’s written a book on risk perception. His expertise is the news media. They love vivid, dramatic causes of death, but won’t report diabetes deaths (the USA has more of these per annum than terrorist deaths ever). There is a very strong bad-news bias; “something bad might happen” is actually news! Comparisons that would give perspective are usually not provided, and journalists are very gullible: they pick up popular notions and repeat them. These biases are commonly blamed on ideology and money. But reporters do what they do simply because they’re human: newsrooms are not organised well enough to sustain a sensationalist conspiracy. It’s just basic human wiring – journalists go for this stuff for the same reasons audiences do. We are narrative machines; stories thrive on conflict and drama. Editorial decisions are intuitive, and we have no intuitive wiring for numbers.
    Everyone has heard the story of “The Miracle on the Hudson”; no-one has heard that in 2007 and 2008, for the first time, no-one died in a crash of a scheduled flight in the USA. That’s not a story.

    In the discussion, David Livingstone Smith raised attitudes to infectious disease; Richard agreed that people greatly overestimate infection risks, perhaps because outcomes were much worse until recently. Bruce argued that the DHS folks aren’t actually overreacting: they just care about a different risk (being fired). Mark agreed: air marshals could be counterproductive, as they’re not on 90% of flights and the belief in their existence might make passengers less likely to fight back. John Adams remarked that the most successful branches of medicine in terms of avoiding deaths are obstetrics and gynaecology – yet they are the most sued! Jean asked whether labelling attacks on nurses and doctors as terrorism, rather than as the work of criminal nuts, would change the response. Bill agreed it would: calling terrorism a “war” after 9/11 had measurable effects in the USA. Chris said it would increase anxiety levels but doubted it would translate into behaviour. Bill agreed; interviews with 9/11 survivors reported orderly behaviour during the evacuation. Angela argued for the need for drills to channel behaviour; Terry for the virtues of randomisation in security protocols (of the personal and physical kind, not just electronic) in order to create uncertainty for the bad guys. Richard agreed: as far as we know, terrorists are risk-averse – although they will take extreme personal risks, they want a high probability of mission success. This is quite rational: failed missions will damage them with their support communities.

  7. The session on privacy was started by Alessandro Acquisti, who has been working on the behavioral economics of privacy since 2004. Lots of effects have been identified – the illusion of control, overconfidence, the endowment effect, and the fact that salience of confidentiality inhibits disclosure. The latest results concern the herding effect and the frog effect. In the first of these, participants align themselves with what they believe other people have answered; there are subtle effects – if people see lots of others admitting to cheating on their wives, they are more likely to admit to cheating on their taxes. This was measured using cumulative admission rates, which were correlated. The frog hypothesis was that people would admit to sensitive behaviour more often if the questions were asked in order of increasing sensitivity; this was strongly rejected.

    Adam Joinson is an experimental social psychologist interested in social networks, and in expressive privacy – privacy used to form or manage relationships. A big risk for social media is that they become like a phone book – much thumbed, little value. He therefore looked at Facebook across different European countries and found significant differences. In the UK, people use photos; in Italy, groups; in Greece, applications. (People trust Facebook less in Greece.) People who trust Facebook less use it less, and try to manage their own privacy. He also analysed tweets and was able to determine with 91% accuracy whether they came from Twitter or Secret Tweet: intimate tweets have more personal pronouns, fewer articles, fewer question marks etc. He concludes that value creation is tied up with expressive privacy.
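
    The sort of shallow linguistic features Adam described can be illustrated with a hypothetical extractor like the one below. This is not his classifier – the 91% figure came from a model trained over many such features – and the pronoun list and example tweets are invented:

    ```python
    import re

    PERSONAL_PRONOUNS = {"i", "me", "my", "mine", "we", "us", "our", "you", "your"}
    ARTICLES = {"a", "an", "the"}

    def tweet_features(tweet: str) -> dict:
        """Shallow stylometric features of the kind that help separate ordinary
        tweets from intimate 'secret' tweets. Illustrative only."""
        words = re.findall(r"[a-z']+", tweet.lower())
        n = max(len(words), 1)
        return {
            "pronoun_rate": sum(w in PERSONAL_PRONOUNS for w in words) / n,
            "article_rate": sum(w in ARTICLES for w in words) / n,
            "question_marks": tweet.count("?"),
            "word_count": len(words),
        }

    print(tweet_features("I can't stop thinking about you and what we had"))
    print(tweet_features("Is the workshop schedule on the website yet?"))
    ```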

    Peter Neumann argued for a holistic view of privacy. About four million individuals have access to medical records in the USA: not just your doctor but insurers, pharma companies, transcription vendors, financial institutions, you name it. Another issue is voting machines, where privacy is in tension with survivability, auditability and many other required properties. A third example of the scale of the problem is the Chinese decision to require everybody to run the “Green Dam Youth Escort” software, a state-approved censorship application. Most people just don’t recognise the scale of the privacy problem: look at his website for the thousand-odd cases of privacy violation he’s collected via comp.risks over the years. Nothing comes close to tackling the whole problem.

    Eric Johnson says he’s working on a problem many of us think is solved – the access problem. How do you set privileges for people in large organisations? We techies think role-based access control (RBAC) does that, but CIOs tell him it’s failing. Roles are hard to discover, and they’re dynamic. RBAC may work in the military, but in a modern firm an employee may have several different bosses for different tasks, and ends up with too much access or too little. He proposes economic mechanisms for temporary escalation of privilege.

    Christine Jolls is a law professor at Yale with an economics background, who does behavioral law work and is interested in privacy. We value privacy, but we don’t want to be entirely alone: the most robust predictor of happiness is how much interaction we’ve had with others. The hardest problem is the large number of low-value inquiries – the click-through problem. A more tractable one is where we make big decisions, such as agreeing to drug testing or email monitoring. Courts don’t like advance consent (e.g. consent given when hired) but accept consent given at the moment; this makes sense in terms of behavioral economics – optimism bias, and the focus on the present. Such judicial interventions are helpful, though the politics are complex.

    Andrew Adams has been studying self-revelation by students in the UK, Canada and Japan with a focus on social networking systems. They agreed that their reason to go online was to improve interaction with people they knew already. The major change in the UK is that students now maintain links with old friends at home, whom they typically used to drop. Many wished there was more guidance about what to reveal and what to keep private (especially when about to go for jobs). They had learned a tit-for-tat approach to privacy: they should only put up the sort of things about their friends that their friends would reveal about themselves. In Japan, the systems are mostly local (mostly Mixi). The difference is “victim responsibility” – if too much is revealed about you, that’s not the system’s fault, but your fault for having bad friends.

    In discussion, David Livingstone Smith asked whether privacy is control of access to oneself, control of information about oneself, or control of information one feels one should be able to control. Panel members agreed it was a complex debate with wide disagreements. Caspar asked Christine about coercion, which is less of an issue in the USA than in Europe, and asked Eric where escalation works: it depends on incentives (in investment banks, people are keen on doing stuff to make profits). Adam asked how close we might get to a perfect access-control system; Eric said we might allow a certain amount of slack. Joe asked whether networks get more private over time; Alessandro said that in 2005 less than 1% of Facebook profiles were hidden, and now it’s 20% and up – though maybe that’s because the population has changed (and the early adopters have grown older). Alma asked whether, if an untrue story is told about me on a planet far away, it matters. Christine replied that leaking secrets and telling lies are both wrongs; it’s not clear when, or whether, combining them makes something new. However, it would be a good thing if people could protect their own privacy (and others’) without having to lie. Adam remarked that one could spam Google with falsehoods in order to conceal truth; so A might tell lies about B in order to protect B’s privacy.

  8. The final session, entitled “How do we fix the world?”, was kicked off by David Mandel of Defence R&D Canada. First: we should beware of world-fixers! (Recall Stalin’s view that death solves all problems.) Second: we should specify the respects in which the world is broken – what is, and what should be, depend on perspective, and different nation states have different perspectives. Third: be aware that there are multiple agents, and good leaders must anticipate other states’ responses. Strategic intelligence is more an art of narrative construction than a science of prediction, and policy is mostly reaction to pressures. A useful tool may be analysis of variance, which can indicate the motivation to change from the status quo. He presented a graph of forecasts made by Canadian intelligence analysts about the Middle East.

    I was next. I suggested a longer-term research question: what forces might shape the eventual privacy equilibrium? Advancing technology simultaneously increases the incentive for firms (and governments) to discriminate, and their ability to do so. In the end we might see an equilibrium brought about by opposing economic forces (as with patents and copyrights) or a political resolution, for which a model might be railway regulation in the 1870s. In that case, economics alone won’t hack it; psychology will be part of the mix. For example, we recently published a report, “Database State”, on large privacy-intrusive UK public-sector systems; we found that 11 of 46 were illegal. However, the public and MPs have only got upset about a few of them; the principles discussed at this workshop explain why they are more annoyed by ContactPoint (with its child-safety context) than by the more intrusive PDS (whose context is health). What sort of research might we do to understand the likely future shape of the privacy boundary, and what sort of interventions might help nudge things along?

    Alma Whitten followed. She’s been Google’s chief privacy architect for the past year and a half, and she grapples with definitions of privacy, especially from a European perspective. Her philosophy is that personal data should be collected only with the subject’s consent, for a specific and clearly communicated purpose, and that people should have the right to access and correct it. This leads to many big questions, such as how you support access to personal data that isn’t tied to an authenticated account. How can we give an honest and nuanced answer to “show me all information about me”? Information about individuals is much messier than, say, gas bills. For example, Ads Preference Manager will show you the data held against your cookie. Verbiage only gets you so far: you need to let people engage with the system. And then there’s the problem of how you stop abuse, from DoS to screen-scraping: it’s hard to give enough data to honest users without giving too much to attackers. Then there are the “known hard” problems, such as how you give users fine-grained control without overwhelming them, how you link sequential actions by a user without leaking their identity, and how you avoid the inadvertent recording of data that can compromise individuals (such as credit card numbers). Finally, how do you sell the public on the value of anonymised aggregate data, like Flu Trends? It probably means building stuff for people to play with.

    John Mueller echoed David on “How do we fix the world?”, and his answer is “Give up!” Terrorism isn’t a threat; only al-Qaida wants to attack the USA, and it consists of a few hundred people who’re marginalised in the Muslim world. There are some spectacularly incompetent and stupid wannabes; a few people might get killed, but it’s not an existential threat. He likes the vivid and meaningless Washington phrase “self-licking ice-cream cone” to describe the terror industry, which will just keep on wasting our money; it’s comparable to the American communist party during the later phases of the Cold War. See his book “Overblown”. The responses to that range from “what if they get nukes?” to “what should our policy be?” For the first, see his next book “Atomic Obsession”, coming out in October; for the second, there are infinitely many targets (including this room) with near-zero probability of being attacked and near-random targeting. Also, most targets can be rebuilt cheaply (even the Pentagon), and many can’t be protected at all (such as the London Underground).

    The last speaker of the workshop was Adam Shostack. The big problem, for Microsoft at least, is prioritising security work, and the roadblock is shame. People are too ashamed to admit what went wrong, so we don’t have decent statistics on attacks and are working in the dark: we don’t have data on the cost, or the frequency, of incidents. There are plenty of excuses – your customers will flee, your stock price will collapse – but these are really ways of avoiding saying “I’m ashamed that I administer dozens of systems and almost all of them have been broken into.” We need something like the National Transportation Safety Board, or perhaps a Truth and Reconciliation Commission. If we are going to fix the world, we need to know what’s wrong, and for that we need to tackle the shame.

    In discussion, Joe asked whether computer security is any worse. Adam replied that security-breach disclosure laws give us data on compromises of personal information; we can try to extrapolate to the rest. Terry said that although terrorism does fairly trivial damage with no strategic effect per se, our overreaction to it does have strategic impact; John agreed: “The enemy is us.” It was asked how users can deal with the different security metrics used, for example, by Google and Microsoft when it comes to de-identification of search logs; anonymity is in fact a wider and deeper problem. Luke asked whether we’ll ever get an equilibrium on privacy: I gave Andrew Odlyzko’s example of railway regulation in the 1870s. John Adams recalled giving a talk to a science-fiction convention on the social consequences of hypermobility, which he treated from the viewpoint of Putnam’s “Bowling Alone”: it erodes social capital. He asked whether any science-fiction story described a democracy together with the abolition of distance; no-one could name one – all such societies were tyrannical hierarchies. Adam Shostack retorted that Putnam just didn’t get technology – you can use it to build social capital – while Dan Gardner observed that democracy has grown with technology. Adam rejoined that much sci-fi doesn’t talk of government at all; interesting stories tend to need broken stuff to cause interesting conflict. Dan Gardner switched the topic back to prediction and asked David how confident the intelligence analysts are. David replied that good analysts are underconfident; this is not the only group of experts showing good calibration – meteorologists and bridge players do too. The next topic was bugs. Luke asked about the relevance of the “bug tales” used in Microsoft to educate people on the weirdnesses of software; Adam agreed the infosec industry has much folklore, but it’s hard to judge which of the tales are helpful. At present it’s much harder to work on operations than on software development, because of cross-organisation issues. Andrew Adams suggested no-fault disclosure policies in healthcare as another data point. It was asked how search engines can tell what data is anonymous and what isn’t. Alma replied that anonymity has to be defined with respect to a threat model. I added that when university colleagues and I were under threat from animal-rights people, I wanted, for a few months, for my home address not to be easily findable on Google. That’s unexpected: how can you plan for all sorts of requirements like that? You probably can’t, so we have to be able to deal with new problems as they arise.

  9. Tiny quibble, but:

    “Adam Shostack remarked that perhaps the Crusades were a major policy change in the absence of a challenge; ”

    This is not remotely correct. While there is significant debate about why the First Crusade began in 1095 rather than, say, 718, I think nearly all historians would agree that the Crusades were a major policy change in the face of a very direct challenge, namely invasion. Specifically, the First Crusade was a response to the collapsing Christian defence in Anatolia (now Turkey), where it looked as if not only would another Christian realm be lost, but Europe would end up under attack from three directions at once (Spain, Italy and the Balkans).

    It is particularly significant to note that while the Crusades proper began with Christian knights going to the aid of the Byzantine Greeks in Anatolia, for many years previously they had begun arriving in Spain to aid the Christian “Reconquista” there, while just 8 years previously the Venetians with the blessings of the Pope had begun raiding Muslim towns in North Africa that served as supply bases for Muslim forces in Italy, and 4 years previously Norman knights had finally driven the Arabs out of Italy.

  10. I’d like to register a plea to stop using “whitelisting” and “blacklisting”. I object to the stereotype that black is bad and white is good. I propose “safelisting”; any better ideas?
