13 thoughts on “Security and Human Behaviour 2010”

  1. The workshop was kicked off by Jeff Hancock. The recordability and searchability of material online is the real game-changer; things that you say online are a “digital tattoo” – you can maybe get rid of it, but that’s hard. Lies are different: lying on a car insurance claim is different from lying in an online game, and different again from a politician lying about whether a country should be attacked. In the old days we considered just “deceptive language” versus “truthful language” and looked for cues; for example, the use of the first person singular is more common in truthful language. However we now realise it’s more complex. There are words that serve the lie itself, and the truthful words that surround and support the lie. So now look at “the truths within a lie” by getting students to mark their utterances afterwards, one sentence at a time.

    Frank Stajano was next, following up on his work of last year on why we fall for scams. There we extracted principles from “The Real Hustle” TV show recordings. Now: how on earth can they be applied to real systems? He showed “The Good Samaritan Scam” from the show, which demonstrates how kindness can interact with reciprocation, commitment and consistency to set up a scam: first get the mark to do you a small favour, then once they’re helping apply further social pressure to get another, creating a social interaction that leads to the sting. He compared his paper’s scam classification with Stephen Lea’s and with Cialdini’s work on the principles underlying marketing.

    Peter Robinson’s interest is the computation of emotion in man and machines. Since Duchenne and Darwin, scientists have understood the importance of facial expression in human conversation. But computers are autistic; they can’t read users’ faces. Peter wants to fix this. He showed a video of facial expression analysis and discussed the strengths and weaknesses of techniques to analyse expression in both face and voice. Both generalise fairly well from training data on one person to other people; and voice expressions go across to other languages. Bodily gestures are more difficult. His group is also generating expressions; they’ve done experiments with outline cartoons, properly rendered cartoons and a robot head with several dozen motors to replicate facial muscles.

    Pam Briggs wanted to start off with a film, “Biometric Daemon”; this didn’t work, but it’s available on YouTube. The demon is like a tamagotchi. You imprint it with your fingerprint, voiceprint, and more fluid biometrics such as gait; it also learns your favourite places. It shares your identity and becomes anxious when moved away from your usual routine; to reassure it, you pet it in various ways. If someone steals it, it will quickly pine and die for lack of petting. All the signals that you are who you are (keys, passwords, etc) are brought in from the world and become close to you. The idea was inspired by Philip Pullman’s “His Dark Materials” trilogy. The next step should be to program the demon to warn you of fraud, for example by demonstrating anxiety when you’re about to use credentials at a dodgy website. Now if everyone had a demon, how easy would it be to lie?

    Mark Frank studies deception from the point of view of nonverbal behaviour. Recently he and Paul Ekman have been thinking about security in airports. Can we detect people who’re up to no good? Can we do it in a natural security setting? And what things don’t we know? Experiments show that the average person doesn’t do much better than guessing when trying to detect deception; a minority of people are good (the “wizards” in law enforcement). Law enforcement environments, though, engage the emotional system in ways that college experiments don’t. So they did a meta-analysis of the literature. It turned out that cops are no better than undergrads at judging low-stake lies (54%) but better at high-stake lies (65%). SPOT-trained officers are the only people who can pick up low-stake lies in the lab with probability better than chance. Wait for the SPOT validation study. What don’t we know? Well, how can you motivate lab subjects? How do you assess, for example, the value of secondary screening? How do you classify positives and negatives – e.g. if your counterterrorism people only catch a drug smuggler, is that success or failure? And what’s the role of deterrence? In short, we need better metrics.

    Last up was Martin Taylor, a magician and hypnotist. The modern theory of hypnotism has to do with social compliance. The three planks are suggestion (if I say “saliva” the spit builds up in your mouth), peer pressure (herding is a survival instinct bred in by evolution) and obedience (it’s hard to disobey the big man at the front). By building up gently you can get people hallucinating an elephant in the room by the end. He will give some demonstrations later in the breaks.

    In questions: Frank Furedi asked where truth begins and ends; a lie can become who you really are over time. Jeff Hancock agreed; self-deception is a big deal, not just with children but for example with athletes who visualise themselves winning; and famous scam artists often believe their story (at least at one level). Bill Burns asked about the Samaritan scam – can’t we perhaps explain that using Bayesian updating? Frank Stajano agreed: some of the scam principles might be explained in terms of expectations management. Nick Humphrey remarked that other cultures have had “demons” – the superego inculcated by strict parents, and of course religion (see Bering’s work on religion and belief). Pam remarked that this has faded a bit from social psychology; you may classify exchanged information in terms of who knows what about whom. Sandra Petronio remarked that she’s done research on the boundary between privacy and deception: adolescents in particular withhold information from parents that’s important to protecting the development of their identity. Such behaviour may be considered deceptive by parents but adolescents don’t see it that way! Mark Frank replied that in some circumstances the lie of omission isn’t considered to be a lie – but there are always grey areas. Luke Church argued that we’re less good at building usable computers than humans are at learning to operate them; so if I lie to a computer dating service to get the right outcome, is it even a lie? Jeff said that in the dating systems he’d studied, men always lied to be taller and women to be skinnier. Mark agreed that people are forever trying to find the easy ways around systems. Bruce Schneier remarked that parents tell their children to lie about their age; in effect people train their kids to lie to computers. Alexandros Papadopoulos asked whether the channel from me to my demon would be as well protected as the channels from my brain to my fingers. Pam said the idea was that the channel would be sufficiently multimodal to prevent easy middleperson attacks – but nothing would stop a mugger marching you to a cashpoint. Perhaps then you’d be nervous! (But a bad demon reaction to nerves could cause problems at airports, as Peter remarked.) Rachel Greenstadt has done research on communication effectiveness: posts using the first person singular are more likely to be highly rated. Is this perhaps because of users’ assumptions about trust? Jeff agreed – he’d seen similar credibility effects in online rating sites. A lot of the cues we find in language are in the small functional words that we routinely ignore. Mark agreed: and there are facial muscles we find hard to control. Finally, Jean asked about authority and deception. Don’t I have the right to lie to my doctor or dentist? Do we want cops to think they’re infallible, or otherwise feed the growing petty authoritarianism? On 9/11 what worked was autonomous action. Mark took the view that that’s often fine; Pam remarked that lying is often positive, as when you lie to protect people’s feelings. High-trust systems that don’t ask for information they don’t need are best. Scott Atran noted that there’s a big difference between Americans and Israelis at airports: Americans are formulaic while Israelis are interactive. Empathy often matters too; it’s essential in hostage negotiation. An NYPD interrogator could break anyone if he could either pray with them or laugh with them. Mark agreed that empathy is essential: good cops are good conversationalists. Jack Bauer isn’t where it’s at; torture is the tool of the lazy.

  2. Petter Johansson started the second session by showing a video on choice blindness. He and Lars Hall research the representations behind people’s decisions: by getting volunteers to choose one face and swapping their choice for another, they find that 80% of people don’t notice the swap. They then happily make up justifications for the wrong choices that were imposed on them by trickery. Curiously, it doesn’t work so well for computer experiments as it does for hands-on magic; people seem to be more alert with computers! Choice blindness applies across a range of contexts (not just faces, but smells and even questionnaires about moral sentiments) and they use it to explore self-knowledge. People don’t know how little they know about themselves: if you can trick or pressure someone into endorsing a statement, then you can get them to adhere to it and defend it. But perhaps it has some uses – maybe (in work with Harald Merckelbach) as a potential test for malingering?

    Michelle Baddeley is a behavioural economist, recently doing neuroeconomic research with Wolfram Schultz. Her paper (linked online from the schedule) sums up some ideas relevant to security. Even market models based on orthodox economics have problems in explaining security – for example the public-goods aspect. Michelle is interested in models that relax some of these assumptions. Herb Simon’s ideas of bounded rationality can be useful, particularly substantive versus procedural rationality. Asymmetric information can be dealt with in the former context while the latter encompasses the Kahneman-Tversky literature on heuristics and biases. Present bias is particularly important; see the literature on procrastination, time-inconsistency and preference reversal. People just don’t get round to doing security. Social norms and social learning also matter, and can be modelled using Bayesian ideas. On the policy side – how can you do strategy-proof design when the costs to perpetrators are so small? Maybe we have to make security fashionable; people do what their neighbours do.
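    Present bias of this sort is usually written down as quasi-hyperbolic (“beta-delta”) discounting; the sketch below is the generic textbook model rather than anything from Michelle’s paper, and the patching costs and losses in it are invented purely to show how preference reversal produces security procrastination.

    ```python
    # Generic beta-delta (quasi-hyperbolic) discounting sketch of present bias.
    # Not Michelle Baddeley's model; all numbers are illustrative.
    beta, delta = 0.6, 0.99   # present bias factor and per-day discount factor

    def value(payoff, days_away):
        """Discounted value today of a payoff arriving `days_away` days from now."""
        return payoff if days_away == 0 else beta * (delta ** days_away) * payoff

    # Suppose applying a security patch costs 10 units of effort and averts an
    # expected loss of 15 units a month from now.
    cost, benefit, horizon = 10, 15, 30
    print("patch today:", value(benefit, horizon) - value(cost, 0))                  # negative: put it off
    print("plan to patch tomorrow:", value(benefit, horizon + 1) - value(cost, 1))   # positive: happy to plan it
    # Tomorrow the same calculation repeats, so the patch never gets applied.
    ```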

    Terry Taylor spoke on behalf of the fifteen people in the ‘Natural Security’ project (with Scott Atran) – see Nature of 15th May for the gory details. Organisms can evolve or adapt; in the timescale since 9/11 it’s of course been the latter, and we’ve created an ecology of fear. What can we learn from nature? Everything is variable; risk can’t be eliminated; nature isn’t perfect and doesn’t plan; natural solutions are nested and recursive; they evolve many times or appear everywhere. We need to look at ecosystems and immune systems for models; he recommends Rischard’s book “High Noon” on how network ideas might reinvigorate international governance. Emergent ideas include whether actors increase or decrease uncertainty.

    Rick Wash looks at motivation – why people make the choices they do. How does my mom make security decisions on her computer? People think of “viruses” and “hackers”. Viruses are seen in various ways, from buggy software to mischievous software. In the former mental model, you have to download the virus intentionally and run it; the latter are written by teenagers to cause annoyance and are caught by visiting shady sites or opening shady emails. Different ideas about ontogeny lead to quite different ideas of prevention (care in downloading versus avoiding bad websites or buying antivirus software, for example). Similarly hackers can be graffiti artists, burglars, criminal contractors or people who target big fish; again different actions follow. We must stop asking whether these models are correct, and instead use them as a basis for policy and design.

    Wolfram Schultz is a neuroscientist who’s been working on the reward system in the brain for several years. Human imaging experiments look at blood flow in different areas of the brain; and they look at positive as well as negative risks. Risks can be different things; often variance, sometimes skewness of a distribution. In many subjects, they find the S-shaped probability distortion familiar from prospect theory; but in some people it’s inverted. You can see this happening (and distinguish these types) looking at the lateral prefrontal cortex: it’s there in the brain. Variance in expected reward can be seen in the frontal cortex. Risk aversion depends on stimulus size: £2 versus a 50% chance of £4 might be 50-50, but if I offered you a choice of £2m or a 50% chance of £4m then almost everyone would choose the £2m! This is the work of the lateral orbitofrontal cortex, and it’s tied to a warning signal. In effect, you don’t choose the 50% chance of £4m because you’re afraid you’d not sleep at night! Strong risk seekers have a strong signal there; gamblers have a stronger signal in the medial orbitofrontal cortex. This may have a genetic basis. What’s more it’s not just avoiding risk, but valuing risky outcomes less.
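    The stake-dependence in the £2/£2m example is what a concave utility function gives you in standard expected-utility terms: the two lotteries have equal expected value, but the certain option pulls ahead once the amounts are large relative to your wealth. A minimal sketch, using one common functional form that is purely illustrative and not taken from Wolfram’s work:

    ```python
    # Concave utility makes the sure thing beat an equal-expected-value gamble
    # once stakes are large. The utility function and "comfort scale" W are
    # illustrative choices, not taken from Schultz's research.
    import math

    W = 10_000.0  # scale (in pounds) above which outcomes start to really matter

    def u(x):
        """A concave utility: roughly linear well below W, strongly risk-averse above."""
        return 1 - math.exp(-x / W)

    for sure_amount in (2, 2_000_000):
        sure = u(sure_amount)
        gamble = 0.5 * u(2 * sure_amount) + 0.5 * u(0)
        print(f"£{sure_amount:,}: u(sure)={sure:.6f}  E[u(gamble)]={gamble:.6f}")
    # At £2 the two values are essentially indistinguishable (take either);
    # at £2m the sure option dominates, matching the intuition above.
    ```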

    Mark Levine quoted Frans de Waal – we know a lot about how humans and animals are aggressive, but much less about how aggression ends up in violence, or is brought under control. Only 15-30% of soldiers shoot to kill! Why? Most violence happens in family groups where it’s maladaptive, so we have “natural conflict resolution” mechanisms. Also, most violence involves an audience. What can social psychologists say about this? The stories so far have been about negative impact; but there are positive effects too. Students of violence are “a world of virgins studying sex” (Dave Grossman). Mark has collected a lot of data from CCTV footage of town centres. People actually spend most of their time trying to do the right thing, and bring tension under control. Bigger group sizes lead to more prosocial activity – more conciliation than aggression. They have noticed that cumulative aggression matters: it’s not the first finger-jab that predicts a thrown punch, but the third (although the probability is less if three different parties do the three different aggressive acts). In conclusion, intra-group regulation matters; it’s about the security of the group.

    Jon Callas chaired the discussion. Dylan Evans wondered whether our class perspective makes a difference to our view of street brawls. Caspar asked whether it was sensible to see risk as positive as well as negative; Wolfram replied that we often have to redefine folk assumptions. Infrequent events tend to get forgotten either way, so we underestimate probabilities. John Adams asked about the frontal cortex of the Department of Homeland Security: why are their values so different from the probabilities? Is it a lack of a reward mechanism? Wolfram remarked that you could explain it by the fact that journalists report only infrequent events. Bruce asked whether he had any data on which kinds of people use which risk metaphors? Rick replied that that was for future work; he had some ideas but needed to do an actual survey. For example, “hacker as mischievous teenager” was a strong media meme of 20 years ago, so is likely to affect older subjects. Chris Cocking remarked that far more people were killed cycling after the 7/7 bombings than in the bombings. There are questions about how people habituate to risk after large trauma (it can be very fast). Ragnar Lofstedt asked Petter whether his results had been replicated outside Sweden (is it the Swedish psyche of a “clenched fist in your pocket”?). Petter has done replication work in Japan, and others in Germany and Indonesia. Frank asked whether choice-blindness will generalise outside the lab environment, where the subjects are under pressure? Petter replied that many experiments have been done outside the lab (e.g. taste of jam and smell of tea in a supermarket). Rachel asked about inverse authority: what if someone switches stuff on their teacher or doctor? Petter would love to do that. Stephen Lea returned to risk perception: has anyone studied media effects? There’s already a literature on the media’s effects on inflation expectations. Ragnar remarked on the massive media amplification of risk stories on the birth-control pill 15 years ago, which lifted abortion levels (they’re still not back to where they were). Pam asked whether systems in societies with strong central media could ever usefully be described as decentralised! Terry Taylor agreed: decentralisation is needed but it’s hard (what about the Director of Central Intelligence?) Bill Burns referred to his paper with Roger Kasperson on the media amplification of risk – they first wrote about this 20 years ago, and have recently been tracking the public response to the financial crisis with Paul Slovic for 15 months. Also, see Cass Sunstein on “probability neglect”.

  3. The afternoon session started with Stephen Lea discussing psychological factors in internet scam compliance (he has a paper of this name in the works). He’s an economic psychologist: humans don’t use elaborate optimising procedures but a variety of short cuts. Scammers seek to intrude on imperfect decision-making processes and get them to produce bad outcomes. Are particular people / demographics vulnerable to different scam heuristics, or is it situational? He got 240 returns out of 286; relatively homogeneous (Exeter students); and asked them about 13 kinds of scam. 40% said they didn’t find any kind plausible but 74% reported complying in at least one way with at least one (giving info, losing money). The distribution isn’t random; people are either vulnerable or not, and tend to fall for many scams or none. They are either compliers or non-compliers; compliers might be 30-60%. The indicators are lack of self control and susceptibility to social influence.

    Jean Camp started off with Paul Slovic’s dimensions of risk and investigated their effectiveness online. The most powerful were immediacy of effect, then the voluntariness of risk, and then knowledge about it. The explanation may be that “nobody died” on the Internet, unlike car driving or private flying. Also, the internet at least appears to be controllable. Prodding further, people cluster online risks based on familiarity, not on any computer-science framework. Previously she’d worked on making risk visible; the next frontier is making videos of real-world analogies of phishing (a bogus FBI man comes to your home and asks for your date of birth, social security number and bank account details).

    Stuart Schechter presented work done by Jenn Tam of CMU with him as an intern at MSR. The new OAuth standard requires apps to declare what they could potentially do; Android is similar; Microsoft’s HealthVault gives a huge amount of information. What’s the right way to do this? How should we present information to get informed consent? In fact, what would be the success metric? They did a study to figure out how much people could absorb from such a capability list. They were hoping that one design would perform better than others, but were disappointed. However icon- or image-based treatments worked better and were better liked than text. Also, people prefer information grouped by action rather than by resource. We hope that if users like the presentation, they will pay more attention. Perception matters!

    Chris Hoofnagle is a lawyer interested in identity theft, which he sees as an externality. The USA has dealt with this using law-enforcement incentives: it can even be a terrorist offence. Yet only one perp in 70 goes to jail. We’ve also tried education – but if you don’t share your social security number you won’t get utilities or anything else. There are two clashing views in the academic legal literature – Lynn LoPucki argued that identity theft happens because we live in a crowd – a masked ball – so privacy is the problem! Dan Solove, OTOH, sees it as a failure of architecture. Chris himself sees the problem as one of credit rating and granting. He got a bunch of victims to make FACTA requests to get application data from the credit granters. The results? Lots of marginal credit granters, lots of apps with obvious errors, and credit reports within which the fraud is obvious (perfect record except for a line of 20 completely unpaid obligations). This holds out the prospect of policy interventions. BTW there are many reasons for grantors to cut corners on fraud screening: automated granting maybe costs $25 while getting a human involved could be three times that. And marginal grantors can make very high returns. Finally, we have to move away from a negligence framework (courts can say there’s no duty of care, and no harm – your time is considered to be free) to strict liability.

    Tyler Moore shifted the theme slightly from fraud to cyber-war. Early computer security research assumed a Dolev-Yao adversary, which was OK for crypto but no good for the Internet; when that came along, we moved to economic models that could cope with strategic adversaries. The next shift may come from growing state interest in cyber-stuff: what happens when single organisations (like the NSA) can use IT as a weapon, while also having a duty to protect? All of a sudden there are changes in assumptions. For example, people might stockpile vulnerabilities rather than disclosing them. He presented a “vulnerability stockpile game” between two states. Only once you consider the cost of stockpiling can you get a player (usually the weaker one) to consider defence rather than stockpiling. What are the implications, for example, for the Russian tolerance of cybercrime? But there are nonlinearities: if Russia downplays its sophistication and unilaterally commits to cyber non-aggression, the result could be US stockpiling way beyond the social optimum. We need new ways of thinking about all this!
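    Tyler didn’t spell his model out in detail, but the flavour of a stockpile-versus-defend game can be sketched as a toy two-player payoff matrix; every number below is invented for illustration and is not from his paper.

    ```python
    # Toy "stockpile vs defend" game in the spirit of the talk above.
    # All payoffs are invented for illustration only.
    import itertools

    ATTACK_GAIN    = 10  # value of being able to strike the other side
    BREACH_LOSS    = 12  # loss if you are undefended when the other side stockpiles
    STOCKPILE_COST = 6   # cost of finding and hoarding vulnerabilities
    DEFEND_COST    = 4   # cost of disclosing/patching/defending instead

    def payoff(mine, theirs):
        """Payoff to 'me' for a strategy pair drawn from {'stockpile', 'defend'}."""
        p = ATTACK_GAIN - STOCKPILE_COST if mine == 'stockpile' else -DEFEND_COST
        if theirs == 'stockpile' and mine != 'defend':   # undefended and attacked
            p -= BREACH_LOSS
        return p

    for a, b in itertools.product(['stockpile', 'defend'], repeat=2):
        print(f"A {a:9} / B {b:9} -> A: {payoff(a, b):3}, B: {payoff(b, a):3}")
    # With these numbers mutual stockpiling is worse for both sides than mutual
    # defence, yet stockpiling is the best reply to an opponent who defends --
    # and only the stockpiling cost makes defence attractive at all.
    ```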

    In discussion, chaired by Ben Laurie, Mike Roe mentioned multiple discovery of vulnerabilities; Tyler said his model includes that. Nick Humphrey asked whether poor or less intelligent people are scammed at a greater rate? Yes, said Stephen: see his OFT study and another by AARP. People who’re socially isolated, older, or of lower educational standing are more vulnerable. But be careful: the biggest problem is complacency, and a little knowledge of an area can make you vulnerable. John Adams asked why so few identity scammers get convicted; Chris didn’t know but thought banks concentrated on filtering while Tyler remarked that banks cared more about phishing as it hits their trademark. Stephen noted that the authorities care very much less about cross-border crime. Richard Clayton remarked that the scammers get the money mules to post scam mail too, but that scams only continue if people actually fall for them. Angela Sasse asked to what extent scams target automatic behaviour. Stephen responded that he got better response to scam letters than to “please take part in research” letters during his research. Dylan Evans remarked that it was normally hard to get computers to pass the Turing test – except in deception settings! I asked whether identity theft wasn’t just libel, in that credit reporting agencies knowingly or negligently reported defamatory information about victims (Paul Syverson, WEIS 2004)? Chris agreed: in the recent Wolf v MBNA case, Wolf won at the district court level and MBNA didn’t appeal. In fact consumer reporting agencies in the USA are under a “maximum possible accuracy” duty; computer scientists should push this upwards. Cormac Herley agreed that scoping out the duties of credit agencies and banks might be a good way to hold their feet to the fire. Chris agreed: techies must talk about what is and isn’t reasonable. Nicko van Someren asked whether there are any damages cases resulting from poor credit reports leading to (for example) poor interest rates on mortgages? Chris wasn’t aware of any but could look for a case. Bill Burns said that the big takeaway message for him from this session was the asymmetry of the exposure to risk between banks and their customers. Stephen remarked that he’s also done a lot of research on debt, and is unsurprised: the banks are not altruistic and resist with formidable advocacy any changes that would reduce their profits. Rachel finally asked about the scope of cyber-war: attacks on civilian targets are not the same as attacks on military targets. Jean remarked that Congress doesn’t know how to separate the two, and also that it’s not clear how much salt to put on reports such as those on Buckshot Yankee. But it’s dangerous.

  4. Scott Atran is an anthropologist who studies extremes of human behaviour, to understand problems such as the space of all possible religions. Recently he has been studying terrorism and religious extremism; he talks to Hamas, the Israelis, the Syrian government and others. He works with Robert Axelrod on intractable conflicts; he gets mujahideen (and political leaders) to play games like psychology students. The most intractable conflicts involve sacred values, which work differently: they define who you are. People won’t give up their children; the Taliban won’t give up their arms any more than the NRA. Scott’s methodology is to ask leaders what sort of trade-offs might be acceptable; he then tests them with population samples and reports back. E.g. leaders say they won’t accept a two-state solution with no right of return for refugees, even when shown data that only 8% want to go back; offers of money make things worse as the right to return is a sacred value. However a symbolic trade-off can be positive – such as an Israeli apology for 1948, which greatly cuts opposition to a deal. Recently he has tested attitudes in 20-30 provinces of Iran towards relinquishing a nuclear capability. Finally, what gets people involved in violence? 100m in the Middle East support 9/11 but only a few hundred have got involved since then. The big determinant was who your friends are! It’s not a clash of civilisations but a crash of civilisations; traditional values are crumbling, and the jihadist ideal is attractive to some young people in search of an identity.

    Dylan Evans next asked whether people can get any good at estimating probabilities. He has set up an online test, http://www.projectionpoint.com, to measure risk intelligence (or RQ for risk quotient) – how aware people are of what they know and don’t know. Some professionals are good (such as US weather forecasters) – this depends on getting the right feedback. Medics are incredibly badly calibrated. He identifies four basic error patterns and can model them in terms of the subject’s “need for closure” or certainty; others need to avoid closure, which gives an S-shaped graph. RQ is significantly higher in people who don’t believe in the paranormal; there are sex differences, such as that education improves RQ in men but not in women; and the technique can now be used by companies (e.g. bankers could ask dealers the likelihood that trades will lead to profits).
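    Risk intelligence in this sense is essentially a calibration question: of the statements you gave 70% confidence to, were about 70% actually true? The sketch below is the generic calibration-table idea with invented data, not Evans’ actual RQ scoring formula.

    ```python
    # Generic calibration sketch: compare stated confidence with actual accuracy.
    # The data are invented and this is not Evans' RQ scoring formula.
    from collections import defaultdict

    # (stated probability that a statement is true, whether it turned out true)
    answers = [(0.9, True), (0.9, True), (0.9, False), (0.7, True),
               (0.7, False), (0.5, True), (0.5, False), (0.1, False)]

    buckets = defaultdict(list)
    for confidence, correct in answers:
        buckets[confidence].append(correct)

    for confidence in sorted(buckets):
        hits = buckets[confidence]
        print(f"said {confidence:.0%}: true {sum(hits)/len(hits):.0%} of the time "
              f"({len(hits)} answers)")
    # A well-calibrated subject's stated and actual columns track each other;
    # systematic gaps at the extremes are the over/underconfidence patterns
    # described above.
    ```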

    Ragnar Lofstedt is a professor of geography interested in risk management. Scandals such as mad cow disease and Vioxx have undermined public trust in regulators; they want it back. One approach is to make the process more transparent, but not all of us want to participate in policymaking. Participation of a couple of percent is high even for controversial issues such as a waste incinerator (and most of them are pensioners, housewives, students etc). Onora O’Neill argues that transparency empowers hostile stakeholders; one pharma company noted that 95% of the traffic on its website was competitors and trial lawyers. The FDA requires that negative signal findings on drugs be published; his research showed that the public welcomed early information but doctors didn’t (worried patients are a bother, and may stop taking drugs). So transparency initiatives need evaluation too!

    Bill Burns’s powerpoint evoked a security alert from the lab’s PC. That dealt with, he mentioned that he’d just submitted a paper with Paul Slovic and Ellen Peters modelling the effect of a zero-casualty dirty bomb attack on LA. They concluded that the cost could be $10bn: mostly from the disruption caused by evacuating a 6 by 6 block downtown area, amplified by the effects of perceived risk and emotional response. They estimated effects on people’s willingness to travel from the Christmas Day attack, the Haiti earthquake and the Times Square incident. They also found a lot of cynicism about TSA: people generally ascribe security-theatre motives to the TSA even in respect of protection measures that may be useful (such as securing cockpit doors). Despite this cynicism, perhaps a quarter of people considered delaying travel after the Times Square incident! Anyway, other effects are very stark in the risk perception graph – such as Obama’s inauguration. Teasing apart all these effects is hard!

    Chris Cocking is a social psychologist studying collective responses from the viewpoint of social identity and self-categorisation, and takes a more positive view of human nature. The negative view goes back to Gustave Le Bon, who observed crowds during the Paris Commune of 1870-1: crowds “descend a few rungs on the ladder of civilisation”, and he was concerned with panic and saw emergency planning as a public order issue, not a safety one. However if you treat crowds badly they can behave that way! Mostly people are guided by social norms, and fire casualties are often because people take their time. In fact social identity usually strengthens in the face of hazard; selfish behaviour is generally an individual problem rather than a group one. Recently he’s studied the effects of the riot police: they don’t actually disperse crowds with baton charges, as people dodge and reform. See YouTube videos of recent London demos – riot police charges simply made the crowd more cohesive and disciplined. Finally, he was recently at the Glastonbury festival; at events like this the crowd can be a valuable resource when infrastructure breaks down. The next research horizons: the effects of joy and fear.

    The last speaker of the day was Frank Furedi, a sociologist whose work for the last decade has focussed on fear. His next book is about how attitudes to fear have evolved in historical times: Athens was very robust at the time of the Persian wars, while modern America pees its pants with its precautionary culture. An interesting aspect is how a culture “performs” fear; modern precautionary cultures focus on worst-case scenarios. One blogger fantasises a bad contingency, and others then have to respond with worse. Risk thus becomes a fantasy and acquires a cultural dynamic of its own. Frank has been collecting the rituals of risk across countries. A curious example is the BP contingency plan for the Gulf which talked about protecting walruses: companies are required to indulge in self-deception to argue that they care about the environment. Only a small part of such documents deals with any technical risk management: the focus on reputational risk at the expense of managing actual risk is telling. Another thing that surprised him was that most parents agreed to CRB checking of other parents who see their kids regularly: this is the parents’ way of expressing piety. But this PR-oriented approach carries real costs, as trust breaks down and people just stick to the rules. For example, BP twice ignored warnings about the rig because they came from the wrong staff. Blame the culture that causes people to perform risk, not manage it.

    In discussion, chaired by Bruce Schneier, Stuart Schechter asked whether we have fine-grained data on the effect of terrorism events on airline ticket sales; Bill Burns reported a 30% hit tailing off to 6-7% after 2-3 years. The London bombings caused a measurable drop in tube ridership but principally at the weekends. Richard Clayton commented that there are different sorts of crowds, on the street for different reasons. Chris agreed but said the more he looked the more commonality he found between political crowds and emergency crowds. Nick Humphrey asked if there was any way to apply Scott’s ideas to Chris’s problem – could the police make signal concessions early to calm crowds? Scott replied that the firemen at the scene on 9/11 recognised crowd self-organisation and worked with it, breaking rational predictions of maximum flow. Tyler asked Ragnar how one should practically deal with transparency? Ragnar replied that we need more forms of managed transparency, which can be properly evaluated; he doesn’t believe transparency is wrong, he’s just raising a warning flag. Bruce remarked that danah boyd has a paper on her website about the dangers of transparency without context. Stuart questioned the benefit of cockpit door locking; if the cockpit door couldn’t be locked then the passengers could always try to retake the plane! Bill Burns said one should not underestimate the public’s attempt to estimate risk. Alessandro asked how we should analyse the Heysel stadium disaster. Chris pointed to “Football in its place” by David Canter; that, plus Hillsborough, were essentially down to stadium design (plus a wrong decision to have big matches at unsuitable venues). Nicko asked how often the reaction to scary press stories is actually worse than the reported thing: for example the beef scare led to a rise in salmonella as people barbecued chicken instead. Ragnar mentioned the contraceptive pill scare in the 1990s. Frank remarked that presentations for pregnancy-related illnesses went up; curiously the pill panic happened differently across Europe, hitting protestant precautionary countries hardest. Mark: during the war people panicked the first time something happened, but generally ignored it afterwards (the media can play a role here). I asked what effect controlled media have. He answered that while in the West the people drive the leaders, in the Arab world the leaders drive the people. That makes a big difference.

  5. Luke Church started Tuesday’s proceedings with a talk on “The role of the mother in security”. “Your mother wouldn’t understand it” is a standard complaint of usability folks. Yet some mothers are more competent than you think: some break DRM to remix boy-band videos, or to break copy protection on knitting patterns. Security design discourse is a problem: people take one case (as Cormac says, the worst case) and study it to death in the hope it’s representative. This crypto/systems security approach is lethal for usability; reduction to a metric is only a little better. Privacy is not a number any more than you can judge a graphic design by the amount of red! Graphic designers get a craft training with multiple perspectives; privacy training should be similar. Privacy designs, like graphic designs, should be evaluated by critical appraisal.

    Rob Reeder is busy trying to teach Microsoft engineers to design better security warnings. At present they’re generally ineffective: they’re shown at the wrong time, they interrupt users, they make no sense, and users are habituated to click through. The latest version of Office is putting warnings in a gold bar, and IE v7 uses plain language for certificate warnings – with actionable steps (green to close the website, or red to continue but with a warning not to enter personal information). But there are still policy questions – as essentially no user is ever protected by a cert – why not just go to the site anyway (thus rendering the cert infrastructure useless)? In general, when should we warn? And when do warnings help, if at all?

    Angela Sasse first got involved almost fifteen years ago in the costs of password reset. Back then users were dismissed as “stupid” while in fact most humans are not able to remember sixteen strong passwords that they change once a month. This lesson is still not being heeded – which poisons relations between users and the security department. See Inglesant and Sasse, “True Cost of Unusable Passwords” 2010: despite single sign-on, the number of passwords is just as high; so passwords are reused, shared and stored in email folders. So the way in now is to break into the user’s email archive. And many single sign-on systems are dire. Designers must accept that human behaviour is goal-driven. In fact the main advantage of chip and pin was that you could send the kids shopping, now that you didn’t have to actually sign the card voucher!

    Cormac Herley’s theme was “Where do security policies come from?” Why does a weather bulletin service (weather.gov) require 12 char alpha/numeric/special char passwords, changed every 180 days? (That’s 2^70 – 50 bits more than Amazon.com.) Cormac studied the policies of 75 websites (biggest FIs, unis, high-traffic sites etc). Big sites that are big targets had 20-26 bit passwords versus banks 20-30 and universities 40-50. Neither value, nor how phished a site is, has any useful correlation with password strength. However there is a strong correlation with ads: sites with ads have about 20 bit password security while sites without ads have about 40 bits. The strongest correlation is whether the user has a choice of provider at time of account creation. The conclusion is that it’s competition which makes sites usable. People like Amazon and Fidelity have to make it work, while for weather.gov users are just a cost – so the logic is to overshoot on security and dump the costs on the user.
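    The bit-strength figures Cormac quotes are just the base-2 log of the number of passwords a policy allows, i.e. length × log2(alphabet size); the alphabet sizes below are approximations chosen only to reproduce the rough numbers above.

    ```python
    # Password "bit strength" as used above: length * log2(alphabet size).
    # Alphabet sizes are approximations for illustration.
    import math

    def policy_bits(length, alphabet_size):
        """Bits of strength a policy forces, assuming passwords chosen at random."""
        return length * math.log2(alphabet_size)

    print(round(policy_bits(12, 62)))  # a 12-character alphanumeric policy: ~71 bits, around the 2^70 quoted above
    print(round(policy_bits(6, 10)))   # a 6-digit minimum: ~20 bits, the low end of the scale above
    ```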

    Joe Bonneau asked which is “harder” to guess: a randomly chosen 4-digit PIN, or the surname of a random user? The latter is 16.2 bits and the former 13.3 bits but that’s not the whole truth. The expected number of guesses for 50% success is different: G(surname)=8,000 while G(PIN)=5,000 but for 10% chance of success G(surname)=82 while G(PIN)=1,000. He proposes a new measure where you give up after reaching a certain probability of success; graph the work as a function of the probability. Recently 32 million passwords got leaked when the gaming website RockYou got hacked, which helps us graph all this empirically. The stats for 4-digit PINs are particularly interesting as this is the first big dataset. Doubled two-digit numbers are popular, then years, then dates; if you want to guess just 10% or so of PINs, then PINs have only about 5 bits of entropy.
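    The metric Joe describes – how many guesses, in best-first order, you need to reach a given success probability – can be computed directly from an empirical dataset; here is a minimal sketch with a made-up toy distribution, not his exact formulation or the RockYou data.

    ```python
    # Guesses needed (best-first) to succeed with probability alpha, computed
    # from an empirical sample. Toy data below; not Joe's exact formulation.
    from collections import Counter

    def guesses_to_reach(samples, alpha):
        """Number of guesses, most common first, to cover a fraction alpha of users."""
        counts = Counter(samples).most_common()
        total = sum(c for _, c in counts)
        covered = 0
        for guesses, (_, c) in enumerate(counts, start=1):
            covered += c
            if covered / total >= alpha:
                return guesses
        return len(counts)

    # A skewed toy distribution: two very popular PINs plus a long tail.
    pins = ["1234"] * 50 + ["0000"] * 30 + [f"{i:04d}" for i in range(1, 921)]
    print(guesses_to_reach(pins, 0.10), guesses_to_reach(pins, 0.50))
    # Skew makes the 10% level cheap while the 50% level stays expensive --
    # a difference that a single "bits of entropy" number hides.
    ```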

    The discussion was chaired by Dave Clark. Bill Burns remarked that a lot of work had been done on getting people to heed warnings on hurricanes and earthquakes. Angela remarked that if a website is selling Tiffany earrings at 75% off, then a browser warning mustn’t say “certificate error” but “if you shop here you won’t get any Tiffany earrings and your credit card will get ripped off too”. Bill agreed; warnings of natural disasters also work best with the right context. Rob Reeder remarked that there are other stakeholders than the user. Jon Callas pointed out that both Palin’s email and Obama’s twitter account were broken into because the security questions could be googled. Jean argued that governments were perhaps more rational: if voters got ripped off they called their congressmen, leading to hassle and hearings for the agency, while if you lose money in a bank transaction they don’t care. In effect it’s the invisible hand giving you one finger! Luke suggested that we should also look at sites using passwords for social reasons, for example to suggest that their websites were important. Nicko argued that making websites brute-force-proof was easy enough and we should all just do it. Cormac countered that the unix environment of 20 years ago was not today’s, and making passwords usable was good for business. Stuart was concerned about the service-denial implications of password lockout or backoff systems, which made such systems highly problematic for 100m-user systems such as hotmail. Joe remarked that realistic attacks nowadays were not 10m guesses on one user, but ten guesses on each of a million users. Nick Humphrey admitted to shopping at risky websites for bargains and asked whether for many this was actively attractive – as unsafe sex apparently is for some? Rob countered that sometimes the risk isn’t to the user but to the whole ecosystem – for example if you end up in a botnet. Should the users not face the social costs of their actions? Angela suggested we should rather help them visualise the risks. Stephen Lea asked why UK banks managed to roll out the CAP readers, which are infuriating. Angela remarked that they basically beat their customers into submission and were prepared to lose a few “difficult” customers. Rachel remarked that if she’d had a password for RockYou it would have been a throwaway password – a dictionary word she used on 50 other sites. Cormac remarked that we should also consider the costs borne by people who chose strong passwords but got compromised along with the site. Alma said that Google has data for what people do after getting a warning with search results – and surprisingly many of them continue! Richard argued that targeted attacks, such as on Palin’s email, were totally different from attacks aimed at acquiring random resources; and another issue was whether people could tell their auditors to get lost. Universities and weather.gov might find this hard, while at firms like Amazon usability is important enough to the business that the managers can be motivated to face down the auditors. Joe said that so long as people won’t engineer to prevent brute force, bit strength will be a useful measure. Luke commented that we have been having these discussions for a long long time now; do we need a confessional moment? Nicko remarked that this session was group therapy at least! Cormac concluded: PT Barnum said that if you want a crowd, start a fight; in the security community, if you want a fight, start talking about passwords!

  6. Rachel Greenstadt is interested in how online communities self-organise to regulate themselves. Her paper “Learning to extract quality discourse in online communities” is about how to study the arising peer-review systems and whether they can be re-engineered using AI techniques to make better use of the available eyeballs, to assess the crowd’s mood. They get 76% accuracy at rating comments as good or otherwise and 82% in picking out the best comments (from Slashdot, two days in February 2009). This is not much worse than the performance of a single human. The next step is to look at jezebel.com, and work with the digital ethnographer Jennifer Rode.
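    The pipeline behind numbers like these is a supervised text classifier trained on moderated comments; the sketch below shows the generic shape of such a classifier with a toy dataset, not the paper’s actual feature set or training data.

    ```python
    # Generic comment-quality classifier sketch: TF-IDF text features plus
    # logistic regression. The comments and labels are invented; the paper's
    # real features and training data differ.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    comments = [
        "Insightful analysis, with sources to back it up.",
        "FIRST POST!!!",
        "The article overlooks the economic incentives at play here.",
        "lol this is dumb",
    ]
    labels = [1, 0, 1, 0]  # 1 = quality discourse, 0 = noise (hypothetical)

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    model.fit(comments, labels)
    print(model.predict(["Here is a thoughtful counterargument, with evidence."]))
    ```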

    Mike Roe has been studying user-generated content in online computer games, and not just moves and chat: he’s particularly interested in images (though some games also let people upload code). There are technical issues (sandboxing etc) but also social/psychological: specifically, x-rated images, harassment by “griefers”, abuse of tools (e.g. creation of cartoon images of children having sex), and incongruous images that break the suspension of disbelief. Second Life deals with this by putting adult content in the continent of Zindra which combines mandatory access control with age verification. This raises many questions – for example, 16-17 year olds can’t see images of stuff they’re allowed to do in real life, and adults can’t do certain things if their avatar looks under-age. And people use false names, including those of famous people. In this context, what does authentication mean?

    Andrew Odlyzko’s topic was “Bubbles and gullibility”. How do you swindle people out of thousands of millions of dollars, with their enthusiastic cooperation? Well, just look at the recent credit bubble. In recent bubbles, policy makers were the most gullible. In fact, tolerance and indeed encouragement of gullibility have grown since the mid-19th century. We need to develop an objective measure of gullibility to tell us when things are getting dangerous! Society seems to have evolved to take advantage of herd behaviour and drive it in useful directions; it sometimes helps but often goes astray. How does the Internet change things? As Jeff said, there’s much more information available, and it persists for ever. Yet over 150 years bread has become less important and circuses more so. Hot air is much easier to generate; beautiful illusions (and their creators) become more important; gullibility is closely related to trust, the glue that enables collective action (we might even see Bernie Madoff as a signal of trust). Examples of gullibility of the smartest people: the peaceful Maya, the Coase lighthouse myth, the Bernanke and Greenspan “see no bubble, hear no bubble, speak no bubble” philosophy. The railway mania of the 1840s was driven by a strong collective hallucination, and its opponents had a different one that made their arguments counterproductive. A presentation by Michael Dell from 2000 is a similar example of nonsense on stilts, which contributed to the $100bn loss on optical fibre investment: he, and government, claimed 10x annual growth; system suppliers said 4x; undersea cable operators said 2x. Perhaps when gripped by mania, people lose the ability to do compound interest, then even simple arithmetic. Investors think they can get 10% in normal times and 20% in manias. Maybe we can base a mania detector on this sort of phenomenon.
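    The compound-interest point is easy to make concrete: growth claims of 10x, 4x and 2x a year cannot describe the same network for long, because compounding drives them orders of magnitude apart within a few years. A quick illustration (the five-year horizon is arbitrary):

    ```python
    # Compound growth diverges fast: the 10x/4x/2x annual claims quoted above
    # end up orders of magnitude apart after only a few years.
    for label, rate in [("10x/year", 10), ("4x/year", 4), ("2x/year", 2)]:
        print(f"{label}: x{rate ** 5:,} after 5 years")
    # 10x/year compounds to x100,000 in five years, versus x1,024 at 4x/year
    # and x32 at 2x/year.
    ```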

    Bashar Nuseibeh and his students instrumented a number of Facebook phones to tag user experiences for later discussion. They learned that mobile privacy is highly contextual. He’s most recently been exploring the use of video to elicit user stories. He shows subjects both utopian and dystopian videos of a future technology, a technique he calls “ContraVision”. One test scenario was magic spectacles that assess the calories in food you look at, plus a bodychip to monitor health and a link to your doctor. The test groups acted in often unexpected ways but, overall, having multiple tracks of evaluation seems to be an improvement as it elicits a wider range of reactions. He had a CHI paper on this a couple of months ago, and there’s a paper at SOUPS 2010.

    The final speaker of the session was Jeff Yan, who’s interested in user compliance. In an experiment ten years ago on passwords, about 10% of each group simply ignored the instructions – regardless of what the policy was. How can we motivate the user? Perhaps we should study the lessons of “compliance professionals” such as salesmen and even scammers. There’s the “Yale approach” to analysing persuasive communications in terms of who says what to whom; cognitive dissonance theory; and Cialdini’s six “weapons of influence”. What’s needed now is ethical use of persuasion in security, and he targets seven areas: UI design, interaction design, security policy, security awareness, security culture in organisations, novel systems, and standards/accreditation. Of these culture may be the most important.

    The discussion was chaired by Brian LaMacchia. Bill Burns asked how a metric of gullibility would distinguish perceived competence from perceived trustworthiness. Andrew agreed this was a good point; in the bubbles he’s studied there are trust reinforcement effects over time. Bruce said that maybe it’s just easier to notice and monetise gullibility nowadays – the gullible minority might not change. Andrew replied that the whole telecomms sector fell for the Internet bubble, but for different reasons – some out of credulity while others were driven by suppliers. During the dotcom bubble, less bullish pundits were sidelined in favour of people like Henry Blodget. Nick Humphrey told of Randy Nesse’s article “Is Wall Street on Prozac?” which recounted that about half the dealers were on Prozac and felt invulnerable. Chris Hoofnagle used to run the denialism blog; bubbles can be political too, as enthusiasm for an administration waxes and wanes. Andrew remarked that lots of smart people with no personal interest invested with Madoff, such as museum trustees; had they simply done due diligence then like many professional investors they would have pulled back. In this case Madoff persisted despite a large body of knowledge. So it’s not just ideology. Angela remarked that we often ask friends for a good lawyer or a good dentist, despite our friends’ inability to judge such professional skills. So is our behaviour online really all that different? Andrew said it would be great if the experimentalists here would jump in and illuminate the question. Bashar remarked that many people believe that if something’s on the market, it’s safe. We’re not gullible for doing that. Andrew agreed: gullibility may be necessary for progress as society gets more complex. In fact, during the UK railway boom, something like $1tr in today’s terms was borrowed and spent despite Britain being in recession at the time – and a good case can be made that it was socially positive despite losses by some investors. Contractors made money and society got valuable infrastructure. As a result policymakers became more tolerant of the hucksters. Brian changed the subject by asking what the appropriate level of persuasion is in computing. Bashar mentioned “ambient influence” work on buildings which communicates group behaviour to occupants. I mentioned the lack of incentive for systems to make decisions explicit; persuasion is often nudging and defaults. Cormac agreed that often active persuasion too is deployed against the user’s interest. Bill asked how websites would segment different types of users; Jeff remarked that with experts you’d have to use the central route rather than the peripheral one. Bashar remarked that he sees real differences between kids and parents. Sandra agreed that Cialdini’s work is important, but in the medical field the emphasis is shifting from compliance to assurance – a partnership between the patients and the clinicians educating them. Isn’t there a need for more interaction? Rachel answered that in security there’s an adversary, which makes the world different. Angela argued that the key thing is not to disrupt what people are doing, but creating a partnership would win the battle. Finally Dylan noted that there are ethical issues with persuasion: when people realise they’re being persuaded via a peripheral or emotional route, they resist. Psychoanalysis displaced hypnotism in part because people preferred “rational” means.

  7. The privacy session started with Alessandro Acquisti talking about two unpublished studies in the behavioural economics of privacy – on “Discounting the Past” and “The Illusion of Control”. The first of these goes back to Brickman et al (1978) on the fact that people recall bad events better and assign more valence to them. It set out to clarify this with a hypothesis of differential discounting: that bad information is discounted less heavily. In “the wallet experiment” subjects read of a person finding, and reporting or not reporting, $10,000. They asked the subjects whether they approved of the person; a charitable act twelve months ago increased approbation while an act five years ago had no impact. On the other hand an uncharitable act, whether a year ago or five years ago, had a significant negative effect. So bad is stronger than good; but it’s also discounted differently. Bad information about people lasts longer. In “The Illusion of Control” they investigate the idea that privacy is control; so is more control equal to more privacy? Not always! People given more control will disclose more sensitive information and thus end up less private. They want to figure out whether this is due to saliency or overconfidence.

    Sandra Petronio has worked on privacy for thirty years and is interested in how people manage private information. It’s about managing information in a social world: there are some people you want to share with and others you want to block. Robustness means dealing with turbulence and breakdown. Some data are owned, some are just controlled; subjects have privacy boundaries; when we share information we create co-owners and contracts. Boundaries are complex and fluid, with cells that shift and change, which people use to regulate information flow, whether in families or in companies. There is a “rules process” whereby people tell each other what’s confidential. There are gender issues: women believe everyone shares secrets when forbidden, while men believe nobody does. Privacy turbulence happens when expectations are not met. Again, it’s complex; problems arise with people who get lots of information they don’t want, including nurses, bartenders and pregnant women. The arising privacy dilemmas include not just confidants but also accidents and snooping. They are not soluble, but at best manageable.

    Lukasz Jedrzejczyk then presented the results of a study he did last year entitled “I know what you did last summer”. He collected information from a location-based social network, tried to re-identify it to assess data leakage, and then investigated what users thought. Of 1942 users, 242 could be reidentified. Patterns of individual mobility were clear enough, and quite often he could correlate the pattern with other data sources – particularly tweets. He talked to users, who were astonished at the amount that could be learned; the usual flow was that usability problems, which made data leakage obscure, led to privacy problems, which in turn could lead to security problems.

    Andrew Patrick has been working for six months now for Canada’s federal privacy commissioner. The key question for him is “What’s the reasonable expectation of privacy?” They have developed a four-point test of reasonableness and had it upheld by the Canadian courts. (1) Is it demonstrably necessary for a specific need? (2) Is it likely to be effective? (3) Is it proportional to the benefit gained? (4) Is there a less privacy-invasive alternative? They’ve applied it to fingerprints for law-school admission tests (failed); naked scanners; video surveillance in public transit; the data to go on chip passports. Now challenges arise from outsourcing and cloud computing, to name only two. Consent (informed, reasonable, …) keeps them busy as do defaults and complaint mechanisms.

    Bruce Schneier’s interests include public attitudes to cyber-war. People like Mike McConnell (ex-NSA, now BAH) talk of a cyber-war having been declared on the USA by China; he recently debated McConnell with Jonathan Zittrain and – to his surprise – lost the debate (on audience polling). On the Internet it’s hard to know what an attack is, and when we do understand it, it can be graffiti (Estonia), espionage (China) and other crimes of which most people have direct experience. War is being used as a metaphor (disease is another, and crime yet another). It’s about time we paid more attention to these folk models. In the old days, folk models came from elders, uncles and so on; now stuff is media-filtered and we can even have a battle of models (e.g. the battle of models of terrorism during the last US presidential election). Here the story matters more than the facts. The warfare model induces helplessness, as the military should be left to deal with it; the criminal model is different as the police should largely deal with it, but be limited, and there’s stuff you can do too. And companies also deliberately choose models that serve their interests: see the early history of electricity. So Bruce would like to see more work on models: where do they come from, and how we can get better ones. As for further reading, see the Henrich et al paper on WEIRD subjects (http://journals.cambridge.org/images/fileUpload/documents/Henrich-BBS-D-09-00017R2_preprint.pdf).

    The discussion was chaired by Nicko van Someren. Terry Taylor remarked that the war metaphor raised false expectations on the prospect of eliminating infectious disease, and alienated people who needed to be engaged (see “Ending the war metaphor”). Bill remarked, on Alessandro’s discounting work, that he saw poll negativity towards Wall Street decaying even in the absence of any good news. Alessandro answered that his work did not investigate memory effects. Dylan asked to what extent confidentiality obligations require consent; Sandra explained some of the issues that arise in legal theory, and remarked that people generally ignore them. Alessandro suggested that this would be a fascinating area for research on social norms surrounding privacy; for example an anonymous confidential letter imposes a duty with no reciprocity. Rick remarked that none of his study subjects used a war model of computer security; if they did, it would be the government’s problem. Luke Church remarked that there’s a vast research literature on models, metaphors and politics. David Clark has a student looking at privacy policies, which can also be analysed as warring models – with a view to an all-or-nothing opt-out where the user has to pick a positive service-and-disclosure story against a negative opt-out story, with nothing else. Chris wondered whether scoring (as in credit) could be used to validate discrimination on positive/negative data. Rachel asked Andrew how Canada estimates the costs of privacy; he replied that it was sometimes personal and sometimes social depending on the context. Bruce asked if Canadian laws had any beneficial effect outside; Andrew cited their ongoing dialogue with Facebook. Richard Clayton referred to his work for the House of Lords, who asked witnesses whether NATO could help; everyone said yes (except me and Lord West, the experts on the Internet and defence respectively). Bruce remarked that when “war” battles “privacy”, war wins. Tyler and Bruce noted that models are real power politics: the war-versus-crime struggle is being settled in favour of the Department of Defense. Jeff Hancock wondered about the extent of third-person effects: “war” and “privacy” affect other people, not me. And he finds Facebook privacy hard to understand, despite having a PhD. Sandra told of a doctor who got alarmed that a patient could friend him; this breaks a contextual boundary (teachers having sex, Bruce noted, gives another example). In such cases we have to update rules. Nick Humphrey noted that a lot of confidentiality actually has to do with protecting others against being caught out in a lie; a classic way of getting away with a lie is to ask the person you told it to not to check up. Even if this doesn’t prevent discovery, it may impede accusation. Stephen said that life in a small town, such as where he lives, is complex: if your doctor is someone you line manage, privacy taboos are needed. And this was how humans lived until at least the emergence of big cities with the industrial revolution. Seeing it in historical context, what’s strange? Actually it’s the idea of anonymity-based privacy. Sandra remarked that in a small town, even the football coach is a celebrity. How can you protect your sense of personhood? Alma added that in the old days you could move a few hundred miles to get a fresh start. That option doesn’t really exist any more.

  8. Nick Humphrey started off the last session, “How to fix the world”, by remarking that he was on a similar session with Alma and me in 2008, so we must have failed! Anyway, the evolutionary view is his speciality, and we have an evolved system for dealing with sickness and injury. We have to understand our situation (illness, social support etc), forecast likely outcomes (comorbidity etc) and decide how to deploy our immune defences (which are expensive – and we don’t want to use them all up unless necessary). We seem to have Bayesian mechanisms to decide whether to go all out or mount a holding operation. Our health management system can be taken in by scams; this is exactly how placebos work! They deceive us into believing we’re getting better than we are, or that a loving god is looking after us, or whatever. Yet the outcomes are often positive. Why? Quite simply, our system arose in the environment of evolutionary adaptedness, which was nothing like as nice as nowadays. He’s now involved in redesigning hospitals to give patients false information about the beneficence of the environment, thus helping them to get better. There are conflicting messages – hospitals also signal fear and death, and we respond instinctively to signs of disease in other people by putting up our guard. What are the wider lessons? In setting the overall balance, we are also subject to scams. Internet security may make us less cautious than we should be – the online world has many signs of being familiar and safe. Your “personal computer” is designed to be part of the family, yet it betrays your personal information all the time. And you should see the number of people my kid kills every day on the computer! Your bank machine is a crime scene and a brothel and … the messages are getting very mixed up. Nothing in human history has prepared us for this, and it should not be surprising that we can’t deal with it.
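
    As a purely illustrative aside, the trade-off Nick describes (go all out versus mount a holding operation) can be put as a tiny expected-cost calculation. The sketch below is not from his talk; every probability and cost in it is invented, and the function name is mine.

```python
# Toy expected-cost sketch of the decision Nick describes: mount a full (costly)
# immune response, or a cheaper holding operation? All numbers are invented.

def expected_costs(p_serious, cost_full, cost_holding, cost_untreated_serious):
    """Return (expected cost of full response, expected cost of holding operation)."""
    full = cost_full                                            # always pay the metabolic price
    holding = cost_holding + p_serious * cost_untreated_serious # gamble on it being minor
    return full, holding

# If cues (season, social support, visible injury) suggest the infection is
# probably minor, the holding operation wins; if probably serious, go all out.
for p in (0.05, 0.5):
    full, holding = expected_costs(p, cost_full=10, cost_holding=2,
                                   cost_untreated_serious=30)
    choice = "full response" if full < holding else "holding operation"
    print(f"P(serious)={p}: full={full}, holding={holding:.1f} -> {choice}")
```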

    John Adams was next up. Last year he’d noted that the big question is “Quis custodiet ipsos custodes?”, and that people are in two minds about whether government is incompetent or the Stasi. He recalled a “Cyberinsecurity” seminar at Pentagon City last month, where most speakers thought cyber threats were massive. We ought to distinguish three types of risk: directly experienced risks, like climbing a tree, which we manage instinctively; scientifically perceived risks, such as cholera, which we measure with instruments and manage with technology; and virtual risks, which include most of the problems discussed over the last two days – we can’t feel them and the scientists don’t agree. We have a propensity to take risks and use a “risk thermostat” to manage them, which leads to moral hazard: changed risk perceptions shift behaviour. Institutionalisation leads to “bottom loop bias” – a focus on the negative rather than the positive in the risk equation, as when the health and safety people try, mistakenly, to reduce accidents to zero. (You also find top loop bias, e.g. in investment banking, where traders considered only the rewards and socialised the risks.) That suggests we need perceptual filters. However, we have to deal with different personality types. Egalitarians tend to be bottom-loopers (everyone from the ban-the-bombers to the suicide bombers). Individualists tend to be top-loopers. Fatalists duck, but keep on buying lottery tickets. Hierarchists tend to get jobs as risk managers and regulators. The effect of the Internet? The “virtual” element of risk is hugely expanded, as we try to impose control on zillions of reflexive relations between strangers.
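
    To make the “risk thermostat” and the moral hazard it creates concrete, here is a minimal feedback-loop sketch: behaviour adjusts until perceived risk matches the individual’s risk propensity, so a safety measure that lowers perceived risk gets partly spent as riskier behaviour. This is my own toy illustration, not part of John’s model as presented, and all the numbers are invented.

```python
# Toy "risk thermostat": behaviour rises or falls until perceived risk matches
# the person's risk propensity. All parameters are invented for illustration.

def settle(propensity, safety_factor, behaviour=1.0, gain=0.5, steps=50):
    """Iterate the thermostat and return the settled behaviour level."""
    for _ in range(steps):
        perceived_risk = behaviour * safety_factor      # safer environment -> lower perceived risk
        behaviour += gain * (propensity - perceived_risk)  # thermostat feedback
    return behaviour

baseline = settle(propensity=1.0, safety_factor=1.0)
with_safety_measure = settle(propensity=1.0, safety_factor=0.8)  # e.g. seatbelts fitted
print(baseline, with_safety_measure)  # behaviour rises to offset the lower perceived risk
```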

    Alma Whitten described herself as a top-loop egalitarian and is interested in explaining data use. She admits a bias toward the WEIRD world, as defined by Bruce; we’re grappling with the risks and benefits of new technology and have reasonable models for some of the effects. What’s missing is a model for statistical machine learning from crowdsourced data. You can say “surveillance” or “customisation” to your mom, but neither really captures what’s happening. It’s very reminiscent of the problem of explaining key management. What sort of technical literacy should we aim for? She always has Angela at the back of her mind saying “You’re trying to make everyone into an engineer: stop it!” How can we explain tech trade-offs to our moms? Part of the problem is realising that you don’t have a good metaphor; then you can push past it and build a model that clicks with people. She presented some slides Google developed to explain the uses of log data. The trick is to teach from concrete examples, which can act as the “hello, world” example.
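
    For flavour, here is one possible “hello, world” example of the kind Alma has in mind: a toy sketch of learning a spelling suggestion from aggregated query logs, without looking at any individual user’s history. The data and the suggest() helper are invented for illustration and are not how Google’s systems actually work.

```python
# Minimal sketch: learn a "did you mean" suggestion from aggregated query pairs.
# Each pair is (what was typed, what the user re-typed shortly afterwards).
# In a real system these counts would come from millions of anonymised sessions;
# here they are made up.

from collections import Counter

correction_pairs = [
    ("recieve", "receive"), ("recieve", "receive"), ("recieve", "recipe"),
    ("seperate", "separate"), ("seperate", "separate"), ("seperate", "separate"),
]

def suggest(query, pairs, min_support=2):
    """Return the most common follow-up spelling for a query, if well supported."""
    followups = Counter(corrected for typed, corrected in pairs if typed == query)
    if not followups:
        return None
    best, count = followups.most_common(1)[0]
    return best if count >= min_support else None

print(suggest("recieve", correction_pairs))   # -> "receive"
print(suggest("seperate", correction_pairs))  # -> "separate"
```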

    I talked about the paper “It’s the anthropology, stupid”.

    Martin Sadler chaired the discussion. Rachel mentioned the Kismet work of Cynthia Breazeal at MIT’s Media Lab as being relevant to emotional interaction. Stuart asked Alma about visualisation of global search patterns: would users take a different perspective if they could see “what Big Brother sees”? She said it’s difficult to get past the presumption of personalisation: people think it’s about them, not about aggregates. Voting is a better metaphor. Joe asked Alma what process there should be for producing privacy stories; Alma said it should be open and contestable. Peter Robinson argued that we approach banking or eating via a sense of place rather than ritual, so we might monitor posture or location. I agreed it might be useful to have a system that would let you bank online only from the study. Bill asked John whether we should not segment our risk messages for the four personality types; John agreed: the different types have different sacred metaphors and all have a different relationship with the custodes. Bill followed up: yes, people in places like Wyoming are terrified of terrorism while people in actual targets are less so; different groups have different media, which makes things harder to fix. Nick suggested that you could settle which persona you were using by mirroring an appropriate picture of yourself on the screen. Frank agreed that changing a hat should change the screen background. I agreed that this could be a cool experiment; Alma liked the hat, or a desktop swap; Cormac said that tests of desktop swaps had not gone well; Luke said that the MLS folks had a problem with it after a while. Laurel Riek works on affective robotics; she said that it’s easy to induce emotional states, until the novelty wears off. Dave Clark argued that multiple desks might work if the experience were immersive, and recounted how a colleague kept a separate desk for writing: after a while he was conditioned to write whenever he sat at that desk. Stephen Lea said that while physicality might help us move from a brothel to a bank, our thoughts have always roamed, and computers make us freer by removing situational controls on the move from thought to action. Tyler remarked that a phone in the US does automatic context switching based on location. Nick remarked on a recent paper in which women given “fake” designer sunglasses behaved less honestly. Finally, Martin asked what the current UK cybersecurity review should do. John said that “blind justice” was a recent phenomenon; in a small society everyone knew everyone else’s history. Now we’ve lost the social capital of knowing people, and we’re trying to invent substitutes. Nick said you get better justice if people don’t know priors; we saw this morning how mud sticks. Perhaps we should point people to the cartoon with Dave, the contrary user. Alma said we should keep on asking “Are we talking about people dying, or people not being able to read their email?” Also, a lot can be done by just fending stuff off: putting bad clicks in the bad click bin, and doing no harm. The mitigations that don’t involve catching people, but rather make the Internet stronger, are best.

  9. Here finally are the audio recordings of the workshop. There are sound files (mostly about 40Mb) for sessions 1, 2, 3, 4, 5, 6 and 7, and in two parts for the final session. (Sorry, we had a technical glitch and lost a few minutes of that.)

  10. Nick Humphrey has suggested some further reading for people following this topic: a paper by Bill von Hippel and Bob Trivers on “The evolution and psychology of self-deception”. A preprint is available here.

  11. I work for ENISA, particularly on the economics of security. I would very much like to access the audio files of the SHB 2010 conference, but I can’t.

    For example:
    http://www.cl.cam.ac.uk/~rja14/musicfiles/wmas/WS_30235.WMA

    I get the following error message:

    “Forbidden
    You don’t have permission to access /~rja14/musicfiles/wmas/WS_30235.WMA on this server.”
    Apache/2.2.3 (CentOS) Server at http://www.cl.cam.ac.uk Port 80

    Can you please assist me in getting access to these files?

    Thank you
    Aristidis Psarras
