Security psychology

I’m currently at the first Workshop on Security and Human Behaviour, at MIT, which brings together security engineers, psychologists and others interested in topics ranging from deception through usability to fearmongering. Here’s the agenda and here are the workshop papers.

The first session, on deception, was fascinating. It emphasised the huge range of problems, from detecting deception in interpersonal contexts such as interrogation, through the effects of context and misdirection, to how we might provide better trust signals to computer users.

Over the past seven years, security economics has gone from nothing to a thriving research field with over 100 active researchers. Over the next seven I believe that security psychology should do at least as well. I hope I’ll find enough odd minutes to live blog this first workshop as it happens!

[Edited to add:] See comments for live blog posts on the sessions; Bruce Schneier is also blogging this event.

19 thoughts on “Security psychology”

  1. The first session brought home what a huge subject deception is. The first speaker, Paul Ekman, pretty well established the study of how people deceive each other in face-to-face contexts, and how we detect deception. We observe a social hot spot – a discrepancy or an implausible statement, say – and this prompts us to look for an emotional leakage, via something like a microfacial expression.

    However, the error rate can be high: the hot spot can come from a context switch, and the emotion being leaked could be (say) anger at being interrogated rather than fear of being caught in a lie. Othello’s error was to read Desdemona’s fear correctly but to misunderstand its cause.

    And what about our social attitudes to deception? Here’s a really great line of Paul’s: “would you really vote for a President you didn’t think could lie to you?”

    Jean Camp continued by describing how much harder deception detection becomes online. In meatspace we can easily tell context, and we can detect the reactions of others – an empty car park is way scarier than a busy town centre. So she’s working on a system that will tell users whether a web site to which they’ve navigated has been visited recently by many others.
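
    To make the architecture concrete, here is a minimal Python sketch of how such a popularity-based trust signal might look; the threshold, the visit-count lookup and the names are illustrative assumptions of mine, not a description of Jean’s actual system.

        # Hypothetical sketch: flag a site if few other users have visited it recently.
        RECENT_VISIT_THRESHOLD = 1000    # assumed cut-off for 'visited by many others'

        def fetch_recent_visit_count(url):
            # Stand-in for an aggregated popularity service; a real deployment
            # would query such a service in a privacy-preserving way.
            return 0

        def trust_signal(url):
            visits = fetch_recent_visit_count(url)
            if visits >= RECENT_VISIT_THRESHOLD:
                return url + ": visited recently by many others"
            return url + ": few recent visitors - take extra care"

        print(trust_signal("http://example.com"))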

    Mike Roe talked about the security of online games. One thing they teach us is the impracticality of most of the threat models used by the crypto and computer security communities. Geeks tend to assume that ‘if you build it, they will come’, yet the attack scenarios about which we obsess typically lack all psychological credibility. We should get away from favouring attacks that we can model mathematically.

    The final speaker of the session, James Randi, is a former stage magician who now runs an institute investigating the paranormal, things that go bump in the night, and charlatans generally. Among the points he made was that many stage magicians don’t really understand how their tricks work. The mechanics are of course known, but the psychology is less clear. He recounted that he’d been doing a newspaper tearing trick for several years before he realised he was using it to size up the audience and select the assistants he’d use in future tricks. The psychological stuff is very much seat-of-the-pants for many … so don’t assume that something is deception when it might just be ignorance.

    In particular – and Paul Ekman agreed with this in the panel discussion – the best way to tell a lie is to deceive yourself into believing it. This should get a lot more research. Charlatans in particular – such as spirit mediums who pretend to talk to the dead – operate by inducing self-deception. (And now that I write up this post I realise that 419s and similar scams fall into much the same category.)

  2. The second of the morning sessions brought home the diversity of approaches to online crime.

    Matt Blaze started off with a comparison of online and offline crime. Despite comments in the previous session, the computer security business is the one branch of computer science that’s utterly unsuccessful all the time! The real world is amazing: we don’t get killed 100% of the time; we usually get to keep our money; and we know who we are talking to. The one thing we maybe do better is that we understand Kerckhoffs’ law – which locksmiths also understood in the 19th century but have now forgotten. In any case, we can learn much from real-world protocols.

    Ron Clarke, the father of situational crime prevention, talked about the need to be specific and detailed when making crime prevention decisions. Gullibility does facilitate crime but it’s usually only a smallish part of the mix. Look at the modus operandi step by step; the better you understand it the more possible intervention points you will find.

    Eric Johnson talked about risk communication from a business-school viewpoint. It’s all about the vividness of people’s mental models: most Americans think they are more likely to be killed by a shark than by falling airplane parts, while the latter type of accident kills 30 times as many. But people remember ‘Jaws’. He reported research on making juries more likely to convict by presenting evidence with more vivid detail: a prosecutor should not say ‘the drunk knocked a bowl to the floor’ but ‘he knocked a bowl of guacamole to the floor, spattering guacamole over the white shag carpet’. Risk communicators should learn from the marketing folks, and in some firms – like Toyota – they have.

    Charles Perrow talked about software failures and ways in which they can cascade in complex systems. Modularity matters: governments should be mandating the use of open-source software, instead of going down the current route of ‘windows for warships’. This goes to the economics of crime via the costs of patching, which are serious and growing. By comparison the costs of credit card fraud are pretty low, and are declining as a percentage of the (faster growing) volumes of online commerce.

    Finally, Alma Whitten told us about Google. She went to work there five years ago, having previously written ‘Why Johnny can’t encrypt’, the paper that kicked off security usability research. So what did Google need an HCI guru for?

    Well, in their engineering culture with a tradition of making information available, of ‘forgiveness rather than permission’, making internal controls viable at all (let alone scalable) was non-trivial. When you have built the world’s biggest supercomputer out of commodity parts, you need to build the OS too. So what does process security look like? What does user security look like? What’s the flow of responsibility, and what monitoring is useful? What do we communicate?

    The solution is to make processes visible and comprehensible, and to get people to access data in appropriate ways. Make it normal that anyone who touches sensitive data does so ‘publicly’, and that it’s natural for people to use other people’s scripts. The doctrine is that you have ‘no privacy at all when you’re touching private data’.

    Many firms have a laziness that leads them to substitute surveillance tools for functional management. That’s just wrong.

    As far as external attacks are concerned, prevention scales very much better than prosecution. Law enforcement doesn’t work very well: the bad guys are in different jurisdictions, and chasing them costs a lot of money. It’s much cheaper to change your system so as to render the bad guys harmless.

  3. The third session, just after lunch, was geek-heavy.

    Jon Callas talked about the history of PGP (subject of Alma’s famous paper, passim) and how they bought it out with a vision of developing ‘zero-click security’. The motivation came after they’d shown a one-click version to a marketing suit, who wasn’t so much interested in the cryptogobbledegook as in ‘Now you have to get rid of that one last click.’ It turns out to be an extremely long hard slog, and not just because of usability: it’s made harder by specific problems such as the difficulty of testing security.

    Luke Church talked about context. His case study was the security of a drug delivery system to which doctors and nurses authenticate themselves using a swipe card. The common failure is that a doctor tells a nurse to charge a dose, the patient is harmed, and there’s a lawsuit. A usability study of how the nurse uses the system does nothing – it’s about the context in which she uses it.

    This brings us to the methodological problems of ethnographic studies, which are slow and not oriented towards helping engineers. User programmability can be a partial solution, but is tough in the security space: end-users make enough mistakes in their spreadsheets. In fact, this is a field in which we really need some ideas!

    Markus Jakobsson described his new system for password recovery, in which people are asked about their preferences rather than for remembered items. In applications where you need to authenticate someone only every few years, preferences are much more suitable. There is, though, an adversary issue: preferences work fine against faceless online bad guys but less well against insiders. An ex-boyfriend may be the worst of all…
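
    As a rough illustration of the general idea (the items, the match threshold and the scoring below are my own assumptions, not Markus’s actual scheme): at enrolment the user marks things they like and dislike, and at recovery they classify the same items again.

        # Illustrative sketch of preference-based account recovery, not the real system.
        ENROLLED = {                 # recorded at enrolment: item -> liked?
            "country music": False,
            "gardening": True,
            "horror films": False,
            "sushi": True,
            "camping": True,
        }
        MATCHES_NEEDED = 4           # assumed threshold

        def allow_recovery(answers):
            """Grant recovery if enough answers match the enrolled preferences."""
            matches = sum(1 for item, liked in answers.items()
                          if ENROLLED.get(item) == liked)
            return matches >= MATCHES_NEEDED

        # Someone who remembers their own tastes clears the threshold easily.
        print(allow_recovery({"country music": False, "gardening": True,
                              "horror films": False, "sushi": True, "camping": True}))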

    Bashar Nuseibeh talked about requirements engineering, which is properly about scoping solutions. It should be more widely used by security engineers; systems often fail because the security engineers considered too narrow a range of threats. RE is also one existing discipline that incorporates psychology and anthropology as tools. The requirements are where the world and the machine meet. Broadening the business from safety to security means coping with ill intent. If a pigeon flies into an aircraft engine, that’s safety; if a bad man throws a pigeon into the engine, that’s security. What exactly is the difference?

    Angela Sasse argued that standard security mechanisms won’t be made to work, so we should aim for better integration with more variety. For example, standard password mechanisms are fine for frequently-performed tasks, while mechanisms like Markus’ are better for infrequent authentication where robust memorability is the goal. Designers should also be aware that there is a limit to compliance – use it wisely. Most fundamentally, many engineers try to turn people into sheep; it’s “human factors” rather than humane engineering. The DHS thinks it has succeeded when it can get fingerprints from people in wheelchairs, but this reflects a wrong approach by the higher-level decision makers: a complex problem is being tackled with inappropriate technology assumptions. In addition, technology is used to distance the two parties, and it also distances the controller from the adversary.

    There followed a discussion about the distinction between safety and security, which has to do with conflict in requirements and, more generally, intentionality; and don’t forget the modern (and medieval) trend to disbelieve in accidents, demanding intentionality in everything. Intentionality is tied up with incentives and probabilities: most engineering disasters nowadays involve long chains of improbable events, which set up conspiracy theorists. There was also discussion of circumstances where you can design sensible defaults versus those where you might have to compel people to do stuff.

  4. The final session of the day was on methodology. Bill Burns kicked off by arguing that risk perception is really an emergent phenomenon, which is best studied in the aftermath of an incident. His work models diffusion of fear, including its amplification by factors such as media coverage and predatoriness (whether sharks or terrorists). Large differences can be made by small changes in wording. Fear goes up quickly but declines slowly; the half-life is something like 45 days. (Their measure of fear is anxiety plus protective action.) TV coverage ramps down much more quickly, while risk perception actually comes down slightly more slowly. Yet there are some irrational reactions: it’s very hard to get people in LA to prepare for earthquakes despite a perceived evens (50:50) lifetime risk of a Richter 7 event.
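
     As a back-of-the-envelope illustration of what a 45-day half-life implies, here is the decay arithmetic in a short Python sketch; the starting level of 100 and the time points are arbitrary, not Bill’s data.

        # Exponential decay of measured fear, using the cited 45-day half-life.
        HALF_LIFE_DAYS = 45.0

        def fear_level(initial, days):
            """Fear remaining after 'days', assuming simple exponential decay."""
            return initial * 0.5 ** (days / HALF_LIFE_DAYS)

        for t in (0, 45, 90, 135):
            print(t, fear_level(100.0, t))   # 100.0, 50.0, 25.0, 12.5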

    Ralph Chatham complained that people R&D is woefully underfunded compared with tech. But it matters: in 1965–8 the air combat exchange ratio was just over 2:1, until the US Navy set up Top Gun and improved it to 12:1. You can get an order of magnitude increase for small forces in 2–3 weeks, and maybe a factor of 5 for large forces. It does cost money: 250 Iraqi-American play actors are not cheap. The ‘DARWars’ response was PC-based tactical language training software – like a Pac-Man game, but you have to talk, not shoot, your way out. It has now been used by 20,000 troops. Anyway, the problem is that hardware is easier to count than proficiency.

    Lorrie Cranor argued that there are three incompatible responses to unmotivated or ignorant users. The first is to make security invisible (so it just works), the second is to make it visible so people understand it, and the third is to train people what to do. See Michael Wogalter’s book ‘Handbook of Warnings’, which collects railroad signs, drug labels etc and studies which of them work. You can look at how the human behaves in the communications loop: whether a communication is immediate or a corporate policy read three years ago, and how it interacts with their experience, education and beliefs.

    Mark Frank talked about the serious problems in translating lab studies to the real world of terrorism and law enforcement. Much of social science makes the assumption that all lies are equal, but they’re not; we detect lies based on mental effort or emotional stress. The vast majority of lab studies have no real penalty attached, thanks to IRB restrictions; in the real world, you can get shot. Also, in labs people are randomly assigned, while in the real world they ‘own’ their behaviour. Lab subjects are typically undergraduates, while in the real world they can be quite different (except in 9/11, where many were in fact college students). And if you think the interrogator is the Antichrist, you will feel less bad about fibbing to him. Again, lab questions tend to be closed, scripted and limited, while real interrogations are open-ended and long enough to lead to fatigue. Anyway, they did some studies and found that combined deception detection measures can get better than 85% accuracy over 132 subjects. By contrast, fMRI doesn’t work very well.

    Stuart Schechter found that 92% of users of SiteKey would enter their password anyway if told ‘we are upgrading our award winning SiteKey service. Please contact us if your SiteKey does not reappear within 24 hours.’ He criticised defense in depth, the current sacred cow of computer security. The belief that ‘another layer is always good’ can easily lead to diffusion of responsibility. There’s also the risk thermostat effect. So – can we protect people without making them take more risks? Can we warn people of danger without desensitizing them? And how can we get them to deal with low-probability but high-impact risks?

    The discussion touched on the risk thermostat and local versus global anxieties. On the one hand, people who wear crash helmets may drive faster, but don’t take up smoking; on the other, most people can only cope with so much anxiety. In addition, it may take a certain level of peer feedback before people actually panic. Intentionality, and the moral dimension, can matter: medieval societies got more worked up about witchcraft than about the plague. Also, people who’re angry have lower perceived risk than people who’re fearful.

    On the other side of the coin, people on a suicide mission are not in a normal state of mind, so what can our lab tests say about them? As far as we can tell they’re in a soldier’s state of mind, focussed on the mission. There may be similarities with some violent criminals (e.g. people about to murder their spouses) but they are not the same. Data on terror are sparse; perhaps the Israelis have the best available stuff. There have been a handful of terrorists caught at airports, but more felons, smugglers etc. There are also something like 80 people in US prisons who’ve killed federal officials or attempted to.

  5. The first session this morning was on ‘Foundations’: it turned out to be rather eclectic.

    Dave Clark is working with the NSF on future Internet design. One goal is that the Internet of 15 years hence should be ‘fundamentally more secure’, but what does that mean? Security at this scale isn’t about perfecting mechanisms but about a tussle between conflicting interests. The actual components will be commercial off-the-shelf, and the applications will be part of the mix. We need people to act socially in the absence of constraints, and the mantra of the future is “communication without trust”. Trust may be mediated by some mix of protocols and TTPs; what must the network do to support this? We are also good at forwarding packets, but we don’t have a generalised theory of availability. Where it gets hard is that points of indirection are points of control and power; solving a problem by adding a layer of indirection leads to a tussle over who controls it. The vocabulary of power is more familiar to social scientists than engineers – so they should get involved.

    David Livingstone Smith is a philosopher interested in human nature who believes that the interesting and difficult problems require an understanding of our evolutionary history. ‘Danger and safety’ are more evocative terms than ‘security and safety’, and deception is a normal feature of nature because it works: organisms that can manipulate others’ perceptions have an edge. Humans play very elaborate social games around deception; we’re easily suckered by language, as we like to listen to the lyrics and watch the dance. You could say that language has thrown a monkey wrench into the works. Nonetheless most deception is nonverbal: clothing, cosmetics, bearing… We are not only deceivers but self-deceivers, and even self-deception has utility: “When a person cannot deceive himself, the odds are against his being able to deceive other people” (Mark Twain, Autobiography). We can also manipulate our own behaviour, most strikingly in war. See his book “Most Dangerous Animals”: by excluding the enemy from the moral community we enable ourselves to fight more fiercely.

    Tyler Moore described a survey to determine whether brain type influences online safety. The psychologist Simon Baron-Cohen has developed a theory of brain types: systemizers, who are better at understanding systems, and empathizers, who are better at understanding the perspectives of others. These personality types are correlated with men and women respectively. A survey showed that people who are strong systemizers are better at detecting phishing. Why? We hypothesize that empathizers rely on holistic assessment of whether a website (or environment) is safe, while systemizers are more disposed to look at a single clue – which is all you typically get to detect a phishing site.

    Carey Morewedge is interested in the ways in which economic actors are systematically irrational. If you ask people “how happy are you?” and then “how many dates did you have last week?”, the results suggest that dating accounts for 1% of the variance in happiness, while if you ask the questions in the reverse order the number shoots up to 44%. Defaults also matter: countries where you have to opt out of organ donation all have donation rates over 85%, while in countries where you opt in the rate is always under 30%. In recent research on recall bias, he’s found that if you ask people to recall the best (or worst) baseball game they’ve seen, they will then anticipate that they will enjoy the next game that much. Read ‘Mapping Bounded Rationality’, D Kahneman, American Psychologist, 2003.

    George Loewenstein presented ‘The Best of Strangers’. This paper challenges the idea that there are stable and coherent privacy preferences. There are papers going back to the 1960s trying to measure privacy concerns but finding low consistency and predictive validity. Behavioural researchers have shown that the weighting of choice attributes depends on context. The survey asked subjects a series of questions of increasing intrusiveness, varying contextual factors. For example, a ‘How BAD R U?’ site with a laughing devil face was 1.7 times as likely to elicit incriminating admissions as a ‘Carnegie Mellon Survey of Student Behaviors’ page – despite the fact that a university survey might be expected to preserve privacy better than a light-hearted site. In the second survey, some subjects were asked to rate unethical behaviours that they had engaged in, and others to rate behaviours they had not engaged in – a covert way of getting them to admit participation. In the third study they assured subjects of the privacy of their responses; strong assurance decreased disclosure by 50% compared with weak assurance. Overall, the move online is simultaneously increasing privacy risks while muting the cues that might make people take care; so we need more regulation.

    The discussion started on generational issues: young people, having grown up with the net, seem less privacy sensitive. Also, the concept of ‘friend’ is defined technically on social networking sites, which may affect the word’s usage among kids; and there is a growing desire for celebrity. So it’s all rather complex: experiments just abstract a specific slice of reality. More fundamentally, is it sensible to use the word ‘privacy’ for online as well as offline behaviour? The latter is about getting shielded space where you’re safe from losing face and can enjoy intimacy; the former tries to subvert privacy by sending out a message ‘I’m available’. (Monkeys value private space where they can pick their noses and so on; other monkeys loved watching this behaviour on video more than anything else!) For a cultural view on young people and the Internet, see ; there’s also work by Nisbett – including fMRI – relevant to systemizers versus empathizers. Oh, and cultural and personal differences can be on a scale of global versus local – might this be more useful than systemizer versus empathizer? (For example, Japanese are more global thinkers and Westerners more local thinkers!) Finally, biochemistry matters: blood oxytocin levels are correlated with trust behaviour.

  6. The sixth session was about terror.

    Bruce Schneier recalled prospect theory: most people are risk-averse about gains and risk-loving when it comes to losses. This is understandable enough for a species at the edge of survival. Now, sales appeal to either fear or greed: salesmen know that greed is an easier sell, and prospect theory explains why security – as an insurance sale – is hard. One industry response is to turn the fear sale into a greed sale (‘we take care of your security so you can take care of your business’) or to push the fear button really really hard (FUD). Hence the overuse of terrorism as a means of marketing public policy. He also referred to the work of Max Abrahms on correspondent inference theory.

    Frank Furedi remarked that in Anglo-American culture, fear has become increasingly dissociated from its object. The authorities have little incentive to tackle the existential sources of anxiety, so security becomes reduced to impression management. For example, keeping crime scenes sterilised for weeks with people in white suits is better propaganda than getting London back to normal in two days. The policy documents that talk about safety end up increasing anxiety; Bush talked after 9/11 of ‘America the vulnerable’, which would have been unthinkable to Churchill! The whole world becomes one big target. We ought to replace the vulnerability-led response with a resilience-led one. Learn to live with ‘terror’; don’t acquiesce in it. A related issue is the increasing anxiety about non-police-checked adults following the wave of pedoparanoia in England. All this crap wears down the resilience of a community. A further (related) set of issues is the extent to which people can give security meaning – it must not be just a set of procedures set out on the back of a fantasy document.

    Richard John uses perceptual modelling to understand terrorist risk. He invited us to put behind us the ‘folkways of threat assessment’, in which people try to mitigate the worst possible attack and assume a zero-sum game. Instead we should try to take the opponent’s perspective – which many are unwilling to do, as they confuse it with empathy. We are egocentric if we assume the opponent’s values are exactly the opposite of our own – this drives us to ruminate on our darkest fears. Actually, OBL has a very stable value structure that’s very public and within which he’s always acted: he is much more predictable than any candidate for president! Richard has used this framework to run simulations with different experts who try to put themselves into the opponent’s mindset and find the best attack plans. However, the experts come up with very different predictions.

    John Mueller is an expert in defence policy whose latest book, “Overblown”, is about how the terrorism industry overstates national security threats. In fact, the threats perceived by the US policy establishment since WW2 have been massively exaggerated. Given a 9/11 every few years, your lifetime terror death risk is 1 in 80,000, versus 1 in 80 for a road traffic accident. It probably won’t be anything like that bad; despite agency bluster about terrorists in America, 2 billion people have visited the USA since 9/11 and there have been no incidents. In one country after another, OBL’s tactics have been counterproductive: Jordanians, Pakistanis, Moroccans, Iraqis and others got disillusioned after Al-Qaida bombed them. The real cost is the fear: the USA has had another 400 road deaths per year since 9/11, so you could say that the TSA has killed more people than the terrorists. The incentives facing politicians, bureaucrats and journalists are clear enough. If it’s illegal to cry ‘fire’ in a theatre, why isn’t it illegal for officials to hype fear? It’s also unclear that there’s any economic rationality to hardening targets: there are so many of them, the risk per target is so low, and if you harden only some of them an attacker can simply switch to a less protected target.

    Paul Shambroom, the photographer, was inspired by Bruce’s phrase ‘security theatre’. He put on a slide show of photos capturing images of fear, travelling to DHS training facilities. For example, ‘Terrortown’ is a ghost town in NM used for training SWAT teams. Paul sees his function as a canary in the coal mine of democracy. It’s about the currency of fear. By comparison, the task in the Cold War was to terrorise our enemies without terrorising our citizens too much. (Before 9/11 he photographed the US nuclear weapons arsenal.) Despite the fact that taking a photo from a public road or public land is permitted everywhere in the USA, he’s had assorted hassles with rentacops and even regular police.

    In the discussion, it was remarked that risk perception and body count aren’t well correlated; the mechanism (and intentionality) are very salient. We’re afraid of invisible stuff, and don’t trust the government. The fear is bottom-up (although the politicians stoke it); downplaying the fear can also be counterproductive. See for example Paul Slovic on Chernobyl: he was one of a number of western experts taken there to reassure the population, who got angry at him. There’s a big debate about why politicians don’t just declare victory; apart from fear of being shown wrong, and the economic and institutional incentives, there could be cognitive dissonance: what did we do to win? How did we win? Perhaps we need an armistice, as with the IRA after the UK beat them. But then the security-industrial complex is just so big and profitable…

  7. The seventh session was on privacy, and it was kicked off by Alessandro Acquisti, whose topic is the economics of privacy. The ‘normal’ economics of privacy arose in the Chicago school in the 1980s and was sceptical; more formal microeconomic modelling started with the Internet. However, this doesn’t model the world well because of the privacy paradox: stated and revealed privacy preferences differ widely. We have to look for the explanation in bounded rationality: hyperbolic discounting, herding, optimism bias, coherent arbitrariness, framing effects and so on. For example, in work with Leslie John and George Loewenstein, he showed that people value information differently depending on whether they are protecting it or revealing it. The modus operandi was to offer shoppers a $10 anonymous card or a $12 identified card, and then to offer them the chance to switch later. 52% will stay in the anonymous condition, but only 9% who start off identified are prepared to switch later. In other work, he showed that 60% of people in large Facebook groups will make their profiles visible, while over 90% of small group members will. Also, profile visibility has declined steadily over time. Finally, why do some people actively seek bad reputations? For some, notoriety is better than no fame…

    Andrew Adams wrote “Pandora’s Box: Social and Professional Issues of the Information Age”. His talk was on societal aspects of security: for the rulers the goal is power, although the rhetoric may be security. After the Bulger murder, CCTV was sold as ‘crime prevention’, for example. The police admit privately that the goal is investigation rather than prevention, but that doesn’t play politically. Similarly, the Sharon Beshenivsky case exposed the fact that ANPR was for tracking vehicles, not for maximising traffic flow. Against this background of political manipulation, how can you sell privacy?

    Peter Neumann argued that privacy is in the eye of the beholder, and that most people don’t realise they have it till they lose it. Security and privacy are so hard that “Pandora’s cat is out of the barn and the genie won’t go back in the closet.” Although the DoD talks about ‘strength in depth’, there are some systems – such as electronic voting – that are really weakness in depth. Voter registration, voting, counting, certifying, recounting, dispute resolution etc are all broken. The California review (and the later ones) showed security so broken that privacy is almost irrelevant! Curiously, the Saltzer-Schroeder principles (1975), which might have prevented or mitigated this mess, include ‘psychological acceptability’.

    Andrew Odlyzko will be spending the next six months writing a book comparing the Internet bubble with the British railway mania of the 1840s, which was the greatest technology bubble in history. It’s the only time private entrepreneurs, building public infrastructure, outspent the military. By 1850 they had spent $6 tr and lost $2 tr (in today’s terms). The sociological and psychological effects are interesting for security. Fear was a big deal: railway accidents were new and got much coverage. The railway interest tried to explain that more people were killed by horses in London alone than by trains in the whole country – just as we argue that more people die on America’s roads every month than died in 9/11 – but the public didn’t care to listen. Another lesson is price discrimination, which was first understood by Jules Dupuit and colleagues who invented microeconomics (their work was forgotten and had to be reinvented). Nowadays, provided you get rid of privacy, you can do price discrimination much better using technology. You can find out customers’ willingness to pay, and prevent resale. There are real benefits to society in doing this price discrimination; the tension comes from human dislike of it.

    Frank Stajano’s talk was “from understanding technology to understanding people.” Recently he’s been trying to learn from experts in other domains – bioinformaticians like Pietro Lio, lawyers like Douwe Korff, criminologists such as Martin Gill, and Paul Wilson of “The Real Hustle”. This all teaches the need for a much better understanding of real attackers. For example, a shoplifter has many ways of defeating security tags. Many scams exploit greed, compliance, fear, … When security and usability fight, security loses. One current project is a “secure and usable PDA”. What on earth should that mean? His slogan is respect for the user. (But you need to understand attackers, victims and other experts too.)

    The discussion questioned whether it’s a breach of privacy for a young lady to show her body to strangers on Facebook; often people are less worried about showing stuff to strangers than to people they know. Rosenhan had a paper on fake mental patients – experimenters who faked schizophrenia, and found that nurses treated them as non-people (they would adjust underwear in patients’ sight). A relevant bias in all this is our tendency to value story over data. (The plural of anecdote is not evidence – but for the public it is.)

  8. The final session was rather grandly entitled “How do we fix the world?” Nick Humphrey kicked off with a tale of how he’d observed vervet monkeys in Botswana giving “leopard alarm” calls at the sight of an open book on a camp table with a picture of a leopard on it. We can’t help making sense of the stuff we come across through the lenses of our evolutionary and cultural history – as another example, when the Mongols discovered coal in Western Mongolia and found they could burn it, they decided it was Genghiz Khan’s horse’s shit. As Einstein said, “Everything has changed except our way of thinking.”

    With the Internet, we are not ready enough to change our ways of behaviour because we underestimate the threat, while with terrorism it’s the other way around: we overestimate the threat and are too ready to kowtow to the police. The underlying problem is that the Internet is too homely; it’s sold to us as ‘personal computing’ and so on. All the triggers are pressed to make us accept the PC as a part of the normal family environment, so we’re as loth to take precautions as we are to beware of being bitten by the family dog. Terrorism is presented in exactly the opposite way: alien creatures who are members of an out-group, in an alien environment, who want to take our lives – everything makes us batten down the doors.

    The practical task is to change our illusions – about the Internet and computers, and about the safety of public spaces. For starters we should make the Internet look like something that’s about to bite us. Rather than having a padlock there when it’s safe, have a shark when it isn’t! Many aspects of our health are regulated by mechanisms that make us vomit or give us fevers or otherwise make us uncomfortable, because otherwise we’d make errors about risks. Nature didn’t design us to be happy but to be fit. In the case of terrorism, and more generally the perceived safety of public space, the world is no longer full of sabre-tooth tigers, so we should reduce the levels at which we set off alarms. What combination of technical and cultural fixes could do that? The goal should be to get to the stage where security isn’t at the front of our minds – in both directions.

    Discussion touched on the time period needed to adapt. Britain and Israel have a much more phlegmatic response to terrorism, because of experience; yet the paranoia and extremism in government are just as bad as in the USA. As for the Internet, people are adapting: young people have better bullshit detectors. The relationship with computers also matters: many people treat them as companions rather than apparatus. Computer designers from Steve Jobs onwards have succeeded too well… Cars are similar – they feel like home, not like the dangerous missiles they are. It’s not just money – people in this room lost a lot of money over the last two days on the stock market, yet we are much less fazed than we would be if we’d been burgled for a fraction of the value. We just don’t have appropriate reactions for money yet, as it’s too new. As for terrorism, a good start would be to get the men with guns out of sight in airports – everything there sets us up to feel it’s a war zone. The reason terrorists attack airports is that people defend them: they have become an iconic place where the theatre of terror is played out.

    Richard Zeckhauser is an economist from the Kennedy School, and an empirical researcher. He goes to lots of seminars and hears stories of information insecurity, but where’s the evidence of people getting hurt? Are you guys trying to grab some of the terrorism dollars? The terrorism guys have much better scare stories, such as someone blowing up an LNG tanker coming into Boston harbour.

    Rational behaviour is to worry about the cost of accidents and the cost of avoiding them. When an accident happens, the costs are salient, but when it doesn’t, the costs of avoidance are not. What side of the curve are we on – are we spending too much or too little? Where’s the evidence? How can we shift the curve, and what are the areas where we should do more – or less? What about the non-financial costs, such as anxiety? These are highly nonlinear – making fraud 100 times as hard might actually make me worry about it more by making it salient.

    Also, consider paltering (see his paper on the website). This is a term for misleading behaviour short of outright lying. The world is full of it – all the stuff I get from my bank on security seems to be in this category! How do we deal with that? This leads us to consider the possibility of security-theatre externality – if you come up with a new seal of approval for your bank, that makes mine look less secure. Lack of caution can have a positive externality – the more people go to Central Park, the safer it is.

    On terror, there’s one thing the USA did right and the UK did very wrong. In Britain there are many radical imams who cause trouble; in the USA, Muslims are happy – they earn more than average, and perhaps as a result most of our domestic terrorists are crazies.

    In the discussion: you can figure out people’s willingness to pay for security (and lack of pollution etc) by looking at house prices; and Internet security will get worse over time as more systems come to depend on connectivity. The loss of national power distribution could be pretty dire!

    There then followed a discussion, which I led, on ways forward. People enjoyed the event and want us to organise another next year. We talked about possible themes, and about how to spread the word to colleagues and to people in other disciplines…

  9. Ross

    Congratulations on a terrific event. I enjoyed it immensely, and look forward to phase two. It was very stimulating, on a variety of levels. Thanks also for live-blogging the event.

    I have a couple of corrections to your summary of my presentation. First, I said that we pay too much attention to the lyrics, and not enough to the music and the dance. Second, the title of my most recent book is ‘The Most Dangerous Animal: Human Nature and the Origins of War.’

    David

  10. Terrific event, and really useful blog, thanks Ross.

    In your summary of my talk you ask about the difference between safety and security in my example of a pigeon flying into an aircraft engine (a safety issue) versus being thrown intentionally into the engine (a security issue). From an engine design point of view, there may be little difference – the engine may need to be made more resilient to cope with such events. However, by understanding the human behavior that might lead a person to throw a pigeon at an aircraft intentionally, we may end up with additional security solutions (such as strengthening the airport perimeter fence) to prevent the intentional attack. I think exploring this subtle distinction between intention and accident allows us to better analyse our problem space, and in turn our solution space…

  11. Thanks. I tried posting a comment here earlier to that effect but it doesn’t seem to have appeared.

    What I wanted to add, Ross, is that I’ve known you for over 10 years and have always admired your writing talents but nonetheless I was most impressed by your ability to write meaningful summaries of the talks in real time and post them before the start of the next session. Truly amazing notetaking skills!

    I tremendously enjoyed the workshop and the chances to meet and interact with interesting people.
