14 thoughts on “Security and Human Behaviour 2012”

  1. Security and Human Behaviour 2012 was kicked off by Jeff Hancock, who studies “butler lies” such as “my boss has just called me into an important meeting”; these make up about 2% of all the text messages his group has studied. Online communications leave records, though, so we can study them; and they have huge effects, such as via TripAdvisor. The truth bias makes it hard for us to detect deception; text analysis can help, as truthful reviews tend to have more tempered opinions, more spatial details, and more nouns and adjectives. Ask yourself: can you tell whether this person was in that hotel room or not? In general, with deception, one can look for psychological distancing (fewer “I”s and “me”s), cognitive complexity (discourse structure) and anxiety/guilt (emotion leakage), but the relative importance of these varies hugely with the deception context. New work is trying to constrain deception in games where people can cheat, using primes such as watching eyes and Bibles.
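
    As a toy illustration of the kind of linguistic cues involved – my own sketch, not Hancock’s actual classifier, and the word lists are invented for the example – one can count first-person pronouns and spatial vocabulary in a review:

```python
import re

# Invented word lists, for illustration only.
FIRST_PERSON = {"i", "me", "my", "mine", "myself"}
SPATIAL = {"bathroom", "lobby", "floor", "window", "door", "balcony", "corner"}

def cue_counts(text: str) -> dict:
    words = re.findall(r"[a-z']+", text.lower())
    n = max(len(words), 1)
    return {
        # Psychological distancing: deceptive writers use fewer first-person pronouns.
        "first_person_rate": sum(w in FIRST_PERSON for w in words) / n,
        # Truthful hotel reviews tend to contain more spatial detail.
        "spatial_rate": sum(w in SPATIAL for w in words) / n,
        "word_count": n,
    }

print(cue_counts("I loved my room; the window looked out over the lobby."))
```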

    Tyler Moore studies the incentives that various principals have to combat cybercrime. These often work well enough, but the bad guys are adapting; see the description of search redirection attacks in his Usenix 2011 paper. These persist for weeks or months, because none of the exploited principals have an incentive to do anything about them.

    Robert Trivers has recently published a book on self-deception, and has started a blog at http://www.roberttrivers.com. If you talk about just deception or just self-deception, you miss half the story. Self-deception is about keeping the truth out of the conscious mind, and evolved in the service of deceit. 94% of professors say they’re in the top half, and in the inner eye we really believe we’re better-looking (see the morphing experiments). Classic psychology sees self-deception as defensive, but it’s actually offensive. Does happiness improve the immune system (the classic view), or does a good immune system make people happy? There are negative immune effects of hiding things, as with gay men in the closet: see his book chapter on the immunology of self-deception. Most deception experiments are quite unreal because there are no consequences (there are some good ones though: see [1]). Self-deception can operate at the level of the individual, the group, or an entire nation’s false historical narrative. Finally, 80% of air accidents happen when the pilot is flying – and the risk is even higher the first time the copilot flies with a given pilot.

    [1] “Darwin the Detective: Observable facial muscle contractions reveal emotional high-stakes lies”, L ten Brinke, S Porter and A Baker, Evolution and Human Behavior (2012)

    Joe Bonneau described his thesis work on the statistics of human choice of passwords and PINs; almost ten percent of users choose a birthday as their PIN. There’s now a reported case of a burglar using victims’ ID documents to guess their card PINs. The best most people could do would be to follow the xkcd advice (four unrelated random words), but there are some serious pitfalls – particularly for Chinese and other users of non-ASCII languages, who often end up choosing numeric passwords.
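
    A back-of-the-envelope sketch of why this matters – my own arithmetic, not Joe’s methodology – showing how far a few birthday-derived guesses outperform random guessing of a 4-digit PIN:

```python
from datetime import date

# Illustrative birthday-derived 4-digit PIN guesses for a known date of birth.
def birthday_guesses(dob: date) -> list[str]:
    return [
        dob.strftime("%d%m"),  # day then month, e.g. "2507"
        dob.strftime("%m%d"),  # month then day, e.g. "0725"
        dob.strftime("%Y"),    # birth year, e.g. "1983"
    ]

# If a fraction p of users pick a birthday-derived PIN, three informed guesses
# succeed with probability roughly p; three random guesses succeed with 3/10000.
p = 0.10  # "almost ten percent choose a birthday"
print(birthday_guesses(date(1983, 7, 25)))   # ['2507', '0725', '1983']
print(f"informed: ~{p:.2f} vs random: {3 / 10_000:.4f}")
```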

    The last speaker of the first session was Stuart Schechter, who explored the effects of password rules such as requiring both alphabetic and numeric characters plus a special character: they drive people to choose the likes of p@$$word1. How do you do better than just saying “be unique”? Maybe something like Dasher, but which highlights the less predictable letters as you type? And what about the “don’t think of an elephant” problem? And memorability – if you prompt people during password choice, they won’t recall the choices later.

    In general discussion, the problem of deceptive attacks against high-value targets is quite different, and perhaps password guidance effort might be better directed towards other mechanisms like 2-factor. And how do people change behaviour as they age? How can you go about beating Jeff’s “review sceptic” algorithm? Does self-deception interact with microexpressions? – indeed it does as the upper facial muscles are under less voluntary control. (In method acting, you cue a pleasant memory to give a full-face smile.) Con artists are often good at self-inflation. And what about measures of gullibility: can you use self-deception metrics to identify the gullible? In fact, a guy who wrote a book on gullibility lost a lot of money on Madoff. In the USSR everybody lied all the time, and in some professions too … and perhaps there’s a strategic use of botox. In fact there is not much refereed published material on partial facial expressions. Vrij has shown that policemen often do better if they ask “who’s thinking harder?” versus “who’s lying?” And could we use predator eyes versus puppy eyes to work out whether this is about empathy or threat? And you can manipulate cognitive load in various ways to get people to reveal more. See also Robert’s blog post on fraud in science.

  2. The second session was on fraud. Sandi Clark started the batting, telling us that “scams are the buffer overflows of human security”. She discussed a number of frauds on casinos. Some are offences, such as insider cash frauds, while some are not – such as card counting. See “The Man Who Broke Atlantic City” for how a card counter bargained his way into a winning position. The initial system design doesn’t stop the frauds; it’s about interaction and adaptation within an ecosystem. Security engineers should study John Boyd’s OODA loop: observe, orient, decide, act. If a pilot gets inside his opponent’s OODA loop, he wins; if you only react, you lose. It’s the unexpected interactions, and the violations of assumptions, that get you.

    Richard Clayton talked about instant messaging worms in Brazil. There are some interesting differences between English and Portuguese operations, for example. He concluded that criminals don’t listen to SHB talks, and that although some short come-on messages used to appear language-independent, it does actually help to customise bait in local languages.

    Cormac Herley found that he could transfer money from an American Express account without entering a password; the bank doesn’t do any proper identification or authorisation. He remarked that paying a credit card bill from a bank account is essentially a reversible transaction. The key is transaction repudiability and reversibility. Almost everything you can do with a password can be repudiated and (in the US banking system) reversed. Non-reversible transactions such as ATM withdrawals and Western Union transfers are treated differently. Key idea: many internet scams exploit ordinary users’ failure to understand this distinction, and use mules to turn reversible transactions into non-reversible ones. So teach people this key difference, instead of boiling the ocean by trying to teach them all of security engineering.

    Markus Jakobsson finds password strength checkers annoying; combinations of heuristic rules give perverse outcomes. What goes on in a user’s mind as she chooses a password? His team reviewed passwords used on a number of sites and analysed them by components (such as words, numbers and special characters). Justin Bieber fan site passwords were technically “stronger” as they included a number of content-related words, but were weak in context for that very website.
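
    To see how composition rules backfire, here is a deliberately naive checker of the sort both speakers criticise – an invented example, not any real product’s code. It awards points per character class and for length, so a formulaic password beats a high-entropy passphrase:

```python
import re

def naive_strength(pw: str) -> int:
    """Score 0-5 by awarding a point per satisfied composition rule."""
    rules = [
        len(pw) >= 8,                            # minimum length
        bool(re.search(r"[a-z]", pw)),           # lowercase letter
        bool(re.search(r"[A-Z]", pw)),           # uppercase letter
        bool(re.search(r"\d", pw)),              # digit
        bool(re.search(r"[^0-9A-Za-z]", pw)),    # special character
    ]
    return sum(rules)

print(naive_strength("p@$$word1"))                  # 4 -- near-top score, yet a trivial mutation of "password"
print(naive_strength("correcthorsebatterystaple"))  # 2 -- rated "weak", yet far harder to guess
```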

    Eric Johnson has been thinking about usability and healthcare data breaches. US healthcare is about $2.5tr, and fraud could be in the hundreds of billions, as guesses range from 3–10% of the total. Models include upcoding, billing for services never rendered, and medical identity theft, often used in frauds around drugs or equipment, or simply to get treatment for the uninsured or illegal immigrants (with credentials traded). The problems: the “pay-then-chase” approach of insurers; the ease of joining Medicare/Medicaid; and the fact that credentials are trivial to get.

    Grainne Kirwan is a forensic psychologist studying cybercrime, and in particular the victims and how we can help them. Nervous people, with high levels of trait anxiety, can be exploited in various ways. Mendelsohn’s theory of the victim spectrum juxtaposes victim facilitation (leaving keys around) with victim precipitation (insulting someone who punches you), and determines the amount of sympathy victims get. “Secondary victimisation” is when people get a smaller refund because they’ve been judged to have facilitated the crime. How can you educate people to be less fraud-prone? Social cues matter; people do what their friends do.

    David Modic works on risk and Internet scams. He ran an “eBay” auction experiment to see whether people are more vulnerable when ego-depleted (it’s already known that decision-making is influenced by fatigue) and whether materialism is also a factor. He set up keen prices but a variety of red flags, asking subjects for ratings and whether they’d buy. It turned out that ego-depletion had no effect on falling for a scam; neither did any of the materialism subfactors. The same thing happened with spear-phishing: ego depletion and action-orientation had no significant correlation, but there was a moderate correlation between ticket purchase and action centrality. The best practical advice is that scammers should get their spelling right and have high feedback scores.

    In discussion: risk assessment is a dual-process thing, where the affective domain gains ascendancy with depletion, so cognitively harder tasks might show different results; tax-return fraud looks more and more like medical fraud, as it’s industrialisable – having dumb money on one side of a transaction is key to much bad stuff at scale. We have no data on the circumstances in which fraudsters try to be nice. We also don’t have good models of adaptation in malware, or in scams generally. Finally, do we have figures on how much is saved by forcing millions of people to choose fancy passwords, and how much it costs in support terms? Cormac’s answer: little, and lots.

  3. The usability session started with Pam Briggs talking about her new “family and friends” approach to privacy and security: few pay attention to inadvertent disclosure in social and family networks, where dynamics are fast-moving. Location-based services have been used to track a troubled teen; reciprocal tracking helped build reassurance and was seen as “cool”. Social network services that allow public location tracking in a user’s timeline raise all sorts of other issues, though, which her group is exploring with focus groups.

    Jaeyeon Jung analyses personal data exposure through apps, and builds tools to help users manage it. Permission architectures are problematic, though: if a weather app demands your precise location or refuses to run at all, users face a take-it-or-leave-it choice. She wrote TaintDroid, a tool that uses dynamic taint analysis to trace personal information flows through an Android app; it can tell you things like “your phone number was sent to such-and-such an IP address”. She then recruited users for a field study and confronted them with the facts: they had expected much less leakage than actually happened. They were surprised by location tracking by AccuWeather, and periodic contact collection by Twitter. Among 20 participants, 6 were unsurprised; 13 were concerned. The tools are available at her website.
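
    The core idea, in toy form – TaintDroid itself instruments Android’s Dalvik VM rather than working like this sketch: sensitive sources attach labels to values, the labels propagate through computation, and network sinks check for them.

```python
# Toy illustration of dynamic taint tracking (not TaintDroid's implementation).
class Tainted:
    def __init__(self, value, labels: frozenset):
        self.value, self.labels = value, labels

    def __add__(self, other):
        # Taint propagates through computation on the value.
        if isinstance(other, Tainted):
            return Tainted(self.value + other.value, self.labels | other.labels)
        return Tainted(self.value + other, self.labels)

def source_location():
    # A "source" returns data carrying a taint label.
    return Tainted("51.50,-0.12", frozenset({"LOCATION"}))

def sink_network_send(data, dest_ip: str):
    # A "sink" flags transmission of tainted data.
    if isinstance(data, Tainted) and data.labels:
        print(f"ALERT: {sorted(data.labels)} data sent to {dest_ip}")

msg = source_location() + "&units=metric"  # the label survives string handling
sink_network_send(msg, "203.0.113.7")      # ALERT: ['LOCATION'] data sent to 203.0.113.7
```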

    Rob Reeder is Microsoft’s usable security guru, and talked about warning dialogues. How can a firm like RSA get spear-phished via an employee opening an infected file containing a Flash exploit? Was the engineer stupid, or the warning dialogue wrong? “You should only open attachments from a trustworthy source” – what is a source? What’s “trustworthy”? Is it OK to save the file instead of opening it? Security warnings must now be NEAT: necessary, explained, actionable and tested. Explanation quality is analysed by SPRUCE: source, process, risk, unique user knowledge, choice and evidence.

    Christof Paar talked about learning and hacking. His team have broken many high-profile targets over the years, including KeeLoq, NXP DESFire and satellite phones, using a variety of mathematical, computer-science and engineering tricks. Why does it take so long to develop such hacks, and, for that matter, for real-world abuses like phishing and botnets to evolve? Many systems are not hacked simply because no-one makes the effort. So we need to know more about attacker learning. Sometimes the attacker will always make the effort (as in pay-TV); often you can follow maxims like “don’t educate the attacker”. We should look at case studies, e.g. of systems never broken; we should look at obscurity, the Antichrist of cryptographers; and we should think about whether there are generic ways of making learning hard.

    Frank Stajano talked about his quest to replace passwords, which led to a framework for evaluating new schemes for remote user authentication that he developed with Joe Bonneau, Cormac Herley and Paul van Oorschot. They try to help designers of new authentication schemes avoid the mistakes of history, and to score competing alternatives against a set of requirements in the general areas of usability, deployability and security (“UDS”): they score 35 existing schemes. Most are better than passwords on security; some are better on usability and some worse; but every other scheme is worse on deployability. In conclusion, the funeral of passwords is still some ways off.

    Jeff Yan is interested in whether psychological profiling predicts cheating in MMORPGs. Various technical things are done to mitigate cheating; can we use psychology too? Richard Bartle, the co-creator of MUD1 in 1978, hypothesised that players are achievers, explorers, socialisers or killers (these can be mapped on the axes acting/interacting and player/world; e.g. achievers do acting+world). Jeff found that achievers are significantly more likely to cheat.

    In discussion: perhaps the hawk-dove game model applies to in-game cheating; if too many achievers join in, the losers will be disappointed, so a game firm should control the cost of cheating to maximise revenue. But there’s a huge variety of possible cheating. On warnings, the majority lead to no bad outcome when ignored; should we not try to cut the false alarm rate? And could feedback on apps be put in app stores? As for hacker learning, does this say the hacker market is inefficient? (There were many diverse views on vulnerability markets.) In practice, we won’t get security usability right until there’s a security usability person on each product team, or at least until we get the two talking to each other; and risk communication is something else again.

  4. Dylan Evans started the methodology session by talking about the epistemological confusion that surrounds probabilistic evidence; the subjective nature of fingerprint evidence is a good example of the all-or-nothing fallacy, in that examiners are often asked for black-or-white answers. Christophe Champod argues that fingerprint evidence should be presented as likelihoods: the probability of observing the match is x% if the defendant left the mark and y% if somebody else did. Can we apply the same probabilistic methodology to lie detection, so that interrogators give a probability of truthfulness? Current training techniques can actually make police officers worse, while perversely making them more confident!
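
    In the standard forensic formulation (my gloss, not necessarily the speaker’s notation), the examiner reports a likelihood ratio rather than a verdict, and the court combines it with the prior odds:

```latex
\mathrm{LR} \;=\; \frac{\Pr(E \mid H_p)}{\Pr(E \mid H_d)} \;=\; \frac{x}{y},
\qquad
\frac{\Pr(H_p \mid E)}{\Pr(H_d \mid E)}
\;=\; \mathrm{LR} \times \frac{\Pr(H_p)}{\Pr(H_d)}
```

    Here E is the observed correspondence between mark and print, H_p the hypothesis that the defendant left the mark, and H_d the hypothesis that somebody else did; the all-or-nothing fallacy is collapsing this ratio into “match” or “no match”.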

    Angela Sasse talked about her “Can’t comply, won’t comply” work with Adam Beautement. Mostly, firms just repeat security policies at employees over and over again, but ignorance is maybe 10% of the problem. They surveyed 1256 employees at an energy company and interviewed 118 of them about why they didn’t comply. Every employee reported non-compliance: for time and performance reasons, because of clashes with safety or environmental goals, because a policy didn’t seem sensible, or because it was simply impossible. People tended to suggest “more secure” workarounds rather than praising compliance as a virtue, so perhaps the way forward is anonymous reporting of non-compliance, turned into an action list.

    Vaibhav Garg takes a macro-level perspective on cybercrime. Previous researchers wondered why some countries have more smuggling, found that it increased welfare in some jurisdictions, and showed that decreasing the cost of entry into legal businesses can be effective. Vaibhav has been trying to apply these ideas, looking at factors such as the cost of starting a business, broadband access and English language proficiency. For example, he found that public spending on education is associated with less malware.

    Virgil Gligor’s talk was on street-level trust semantics. If Bob relies on inputs from Alice, this asymmetry can be fixed if he can either recover at no cost from her bad input, or deter her from providing it. If the protocol is strategy-proof then she has no incentive to cheat; and you can go from pure strategy-proof games to games with deterrence, so that the net present value to her of honest behaviour is higher than that of cheating. But Bob is uncertain about Alice’s discount rate, so it might be better for Alice to put up collateral with a trusted third party. In what circumstances is social capital enough? Tie strength (social distance) is easier to measure than actual friendship, though.
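
    One standard way to formalise that repeated-game intuition – a sketch in my notation, not necessarily Gligor’s: if honest play yields Alice v_h per round, a one-off cheat yields v_c, and she discounts future rounds by a factor δ, then honesty wins when

```latex
\mathrm{NPV}(\text{honest}) \;=\; \sum_{t=0}^{\infty} \delta^{t} v_h
\;=\; \frac{v_h}{1-\delta} \;\ge\; v_c
\quad\Longleftrightarrow\quad
\delta \;\ge\; 1 - \frac{v_h}{v_c}.
```

    Since Bob can’t observe δ, he can’t check this condition – which is exactly why posting collateral with a third party can substitute for it.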

    Andrew Odlyzko is interested not so much in how people are fools, but the extent to which gullibility is a feature rather than a bug. He has much more evidence than a year ago about the British railway mania, the only time in 1000 years when people put more (private) money into building infrastructure than into the military. The result was ruin for some investors but prosperity for everyone else. The snake-oil salesmen drove the boom; Disraeli started out by writing promotional material for bubble companies. In an earlier age, Columbus was another such; Ferdinand and Isabella beat the Moors faster than they thought, had spare cash, and Isabella said he had “honest eyes”. It can make sense to trust in the irrationality of driven people to push the limits from time to time.

    Rick Wash has been investigating how people tell stories about security, as this is how people learn and share information with each other. Pretty much all the 300 stories they collected were about incidents, not about security software or theory, and most (55%) came from family and friends. 51% were autobiographical, 72% contain a lesson, and 95% of tellers believe their story was honest truth. The stories convey complexity: the Internet’s a dangerous place, beware shady webpages, beware shady emails, keep personal information private. Responses included extreme behaviours such as stopping using Facebook, or going from no AV to 3 AV products.

    The discussion ranged over a number of topics. It’s generally better to be an optimist; optimal foraging theory shows that insects which are pessimistic end up with worse outcomes. Stories are more credible as precautionary tales if they happen to people like me, not to experts. Ambiguity is mostly dealt with badly, but in some domains people get good at it, such as bridge players (who get prompt feedback based on ground truth), while in groups you can converge on some attractor. Probability neglect has been looked at by Slovic et al; people overestimate low probabilities and underestimate high ones, leading to overreactions to low-probability, high-consequence events. There are also factors of discounting anticipated future joy, when betting on the lottery or investing in startups (also discussed as “long-shot bias”). The security mechanisms most often circumvented are access controls, and often they can’t realistically be made usable except via a process of interactive improvement – which is just what you need to get people engaged. People are also concerned, though, that if they disclosed their coping mechanisms (such as their password generation algorithms) then these would be forbidden. Research just doesn’t transfer into practice at all well in the security usability world: stuff that’s claimed to be “best practice” never is; it’s just common practice that won’t get you sued, rather than anything evaluated. Another view is that in nine cases out of ten of usability failure, the engineering was wrong. There’s a D4 dopamine receptor gene related to risk taking, with two common alleles whose frequencies vary widely around the world and correlate with migration (and with other exploratory behaviour such as infidelity and bisexuality). And risk-taking has group and cultural aspects too; Spain and Portugal had exploratory cultures in the 15th century.

  5. The second day was kicked off by David Livingstone Smith on collective self-deception, and specifically the puzzle of ideology. Jack Kennedy described the great enemy of truth as not the lie, but the myth. Marx and Engels initially took the view that ideological beliefs are deformed, inverted images of social processes, as in a camera obscura. Marx later switched to a conspiracy model of ideology: that ideas are promulgated intentionally by an elite to suit its own interests (Eagleton wrote about this in 2007). But there is a third possibility: it might be like a biological purpose, which is non-intentional. Ruth Millikan has a theory of proper functions that encompasses this; a thing’s function is what it does and its proper function is what it’s supposed to do (what it evolved for). Thus we have a teleofunctional analysis of ideology: it can be purposive without being anyone’s purpose. David has written a paper analysing white supremacism in these terms.

    Rachel Greenstadt described Anonymouth, which is about helping people anonymise documents against stylometric analysis. The blog “A Gay Girl in Damascus” was written by a 40-year-old American male who developed a distinctive writing style over five years of writing it; but almost no-one can maintain a deceptive persona like that (Rachel did the experiment). Can you create a tool that would enable normal people to maintain a consistent persona? Running documents back and forth through a translator doesn’t work. Anonymouth lets you extract features from documents written in a style you want to imitate, and gives you adaptive feedback about how to change your writing style. Some aspects are easier to train than others; people find bigrams hard, so you have to get at them indirectly via suggested word choices. The code is open source, with an alpha release at https://psal.cs.drexel.edu.
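
    For flavour, here is what low-level stylometric features look like – an illustrative selection of my own, not Anonymouth’s actual feature set (that’s in the PSAL release):

```python
import re
from collections import Counter

# A small, invented set of function words for illustration.
FUNCTION_WORDS = {"the", "of", "and", "to", "a", "in", "that", "it", "is", "was"}

def style_features(text: str) -> dict:
    words = re.findall(r"[a-z']+", text.lower())
    chars = re.sub(r"\s+", " ", text.lower())
    bigrams = Counter(chars[i:i + 2] for i in range(len(chars) - 1))
    n = max(len(words), 1)
    return {
        "avg_word_length": sum(map(len, words)) / n,
        "function_word_rate": sum(w in FUNCTION_WORDS for w in words) / n,
        # Character bigrams are hard for people to control consciously,
        # hence Anonymouth's indirect feedback via suggested word choices.
        "top_bigrams": bigrams.most_common(5),
    }

print(style_features("The quick brown fox jumps over the lazy dog."))
```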

    Luke Church’s theme was “tracking” for societal benefit, and specifically for improving usability. For example, he’s been observing Steam, an app store for games. And detailed usage data can be used to drive programmer productivity: it’s about analysing every character and click, to make software development tools better.

    Bruce Schneier has been thinking about airports and profiling. Failures in deception detection are particularly bad in public policy; arguments about false positives, attackers gaming the system and so on don’t have traction against “folk” arguments like “we know who the bad guys are”. Security engineers aren’t as effective as doctors at selling politicians on making policy decisions in line with our professional knowledge. Corporate and government interests are too good at seeking rents.

    Matt Blaze came last, with the latest episode in the story of the security usability failures of the APCO P25 two-way radios used by police, the FBI and others. His team set up receivers in four cities to record all the cleartext appearing on the usually-encrypted channels in the law-enforcement and national-security bands; they found about half an hour per city per day. They suggested changes to keying practices (e.g. changing keys frequently makes failures more likely). Cleartext went down, for a month or so, then rose higher than before (when they picked up cleartexts like “we’re supposed to be paying extra attention to encryption”). The act of paying attention to such complex problems can have random effects. Institutional memory can also play a role: the radios retired 25 years ago had poorer performance when encrypted, and people still believe this of the current ones, despite the typical field agent’s career being only 20 years.

    Discussion topics included: that users may often be better educated than sysadmins; whether medical data (for example) should be provided to academic researchers but not to drug companies; that ideology can be generated with intentionality but without conspiracy in a market for ideas, where individual moral entrepreneurs contribute fragments of ideas that may or may not resonate with a growing community; that improvements in programmer performance variability from data-driven tools help the weaker programmers more; that street-level epistemology of risk is pretty good, in that we can judge neighbourhood crime fairly well, but folk beliefs go wrong with tech and low-probability stuff, and these gaps get exploited; and that self-reinforcing ideology leads into frenzies like employers asking for Facebook passwords, and states passing laws forbidding them.

  6. Bill Burns was the first speaker in the session on terror. He modelled a dirty bomb attack on LA. Closing a six-block district could have direct costs of maybe $35m a day; but a three-week shutdown would attract greater indirect costs through behavioural effects, and these would persist – such that a year or two later the indirect costs could be more than ten times the direct ones. He’s been testing risk messages to see which give more confidence, in everything from willingness to fly to confidence in the government.

    Chris Hoofnagle is terrified by the privacy implications of mobile payments. They can bring huge benefits, ranging from challenging the 2–3.5% fees of the big card brands to opening up payments to the poor. However, both the merchant and the payment provider can get a much more complete view of the sale. This has all sorts of legal consequences, such as enabling merchants to call you even if you’re on the do-not-call list. He did a survey of consumer attitudes which suggests this could lead to an enormous mismatch in expectations. Perhaps what’s needed is a law like California’s Song-Beverly Act, which prevents merchants asking for your address (even your zip code) when you pay by credit card.

    Richard John’s subject was “Games terrorists play”. Previously he’s been modelling rational terrorists using Stackelberg games, where the defender is the leader and the attacker can observe the policy via surveillance (as with airports). Following the ARMOR model shown at previous SHB events, they have a new model, COBRA, based on prospect theory, and another, QRE, based on quantal response, to deal with random strategies.
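
    For orientation, the logit form of quantal response that underlies such models – a textbook sketch, not the specifics of COBRA or the deployed QRE model: instead of best-responding, the attacker picks action a with probability

```latex
P(a) \;=\; \frac{\exp\!\bigl(\lambda\, u(a)\bigr)}{\sum_{a'} \exp\!\bigl(\lambda\, u(a')\bigr)},
```

    where u(a) is the attacker’s expected payoff and λ tunes rationality: λ = 0 gives uniform random play, while λ → ∞ recovers the perfectly rational best response of the plain Stackelberg model.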

    Steven LeBlanc reminded us that warfare has existed throughout human history, and killed enough people (15–25% of males, maybe 5% of females) that it has selected for behaviours that help people survive it. Surviving forager societies were very violent, investing in alliances, with killings often following treachery; as soon as we settled to farm, we invested in defences and weapons, with the result that violence went down (if you pay taxes you are safer). See Wrangham’s “Demonic Males” and Pinker’s “The Blank Slate”. Male and female brains differ partly because of warfare: forager men fight over women and revenge; boys roughhouse and form gangs. Girls often got stolen, so they needed to understand body language to survive in strange groups; they are also better at forming dyads. The effects of war are perhaps the second most significant factor in male-female differences after differential reproduction, but are almost completely ignored by research communities like this one.

    Mark Levine is interested in social-psychology mechanisms for preventing violence. A northern city like Manchester might have 100,000 young people out drinking on a Friday night but only 40 policemen. He showed a video clip to 26 men and 26 women, half primed for personal identity and half for gender identity, and tracked their gaze. Men look at men and women at women; men look more at the perpetrator, women less. The gender prime makes women look more widely; the personal prime makes them look more at the couple who are arguing. Thinking of yourself as a group member makes you scan more widely, which may facilitate successful intervention.

    John Mueller’s book “Terror, Security and Money” shows it’s not worth spending money even to protect a big office building against terrorism until the risk is three orders of magnitude higher than now. He discussed the 50 Muslim terror cases since 2001, half of them set up by informants; though not nice people, most of the convicted terrorists were so inept that it’s doubtful they could have done much had they not met the FBI. He also doubts the value of the term “radicalisation”, given the numbers of people hostile to US foreign policy. Yet despite the fact that there’s been no real terror since 9/11, public attitudes are much the same: 30% are still worried about air travel, and 70% fear a major loss of life in the near future. There was a dip in 2008–9, but that may have been more the distraction of the financial crisis than Obama’s election. Then again, fear of communism stayed at similar levels from the 1950s right through to the mid-1970s.

    In discussion: the direct costs of 9/11 were perhaps $40–50bn, while the total costs of lost tourism etc were maybe $200bn. The big issue is why the “terrorism delusion” persists; people seem to want to live with the fantasies. What’s more, similar delusions persist about healthcare and crime; we are safer now and have better healthcare now, so is terrorism that different? One factor is that it’s hard to get closure; another is to do with world views (red states worry about terror despite the blue states being the targets). Forager warfare has been stopped by governments, but homicide levels remain high (though in-group homicide isn’t widely reported from the early 19th-century contacts). Evolution has given men a high from winning, and from watching our guys win – hence professional sports – while female rewards are more subtle. Riots are also complex; most UK ones start when the police attack the crowd, giving a sense of common threat.

  7. In the penultimate session, Andrew Adams went first (as the AV needed a reboot and he volunteered to speak without PowerPoint). His point was that most systems assume a single person is logging on to them; there is a lack of affordance for shared and multiple identities. In Japan, where he now works, you can’t get a joint bank account, for example. This has spurred him to look for other affordance failures, and there’s a draft paper at http://opendepot.org/1096/. He has attempted to classify joint identities (several, shared, subordinate, nominee). The issues range from parent plus child (a dynamic relationship, becoming looser as the child becomes more independent), to elderly parents (which may just be tech support), through various nominee functions, to probate.

    Alessandro Acquisti has been investigating bias by setting up social network pages for job seekers representing them as tidy/scruffy or straight/gay, and finding quite significant discrimination in the numbers of interview offers. This has led him to think about the evolutionary roots of privacy, and to study the historical development of the concept and symbols of privacy. How is privacy salience manipulated by visibility to strangers versus acquaintances, by anonymous versus identified watchers, and by whether there’s a possible physical threat or the watcher is distant? His hypothesis: we respond more to physical cues, which are suppressed in cyberspace. Objective self-awareness, territoriality and other issues have already been studied; in future experiments he’ll see if he can manipulate disclosure and security behaviour with local physical stimuli (visual, olfactory and auditory).

    Adam Joinson is interested in whether Facebook makes life less fun, or less real, or less authentic. Foucault wrote in 1977 about how surveillance could be internalised, leading to our becoming our own jailers. So is Generation Y being changed by Facebook? Interviews reveal apprehension about photos of drunkenness, hangovers etc. being spread beyond the participants, and people self-regulating by keeping beer and the like out of sight. Years ago he predicted social media would make people more other-focussed; he’s found that private self-awareness is decreased, and public self-awareness increased, by Facebook use. He then posed as a market researcher for a trip organiser offering trips to a theme park or a strip club, and showed photos either plain or as they’d look on Facebook. In the latter case subjects wanted £26 spending money rather than £12. He concludes that using Facebook shifts our focus from ourselves to others’ view of us, leading to anxiety because of the discrepancy between different audiences. He calls this the PORN model: precautionary offline regulation to norms.

    Sandra Petronio works on communication privacy management. She got angry with the disclosure literature in 1979 because it ignored context. People believe they own their private information, but often only understand the rules they implicitly rely on once those rules are violated. There is in fact a dialectic between ownership, control rules and turbulence, and the rules we choose are hugely influenced by context. Physicians can be upset when patients friend them online, as this crosses a privacy boundary. Her favourite study right now is one she did of blog scrubbing: do people tidy up their online writings afterwards? It turned out there were three groups: the high-risk bloggers didn’t care; the cautious ones self-censor in advance; while people in the middle not only rewrite blogs afterwards but apologise face-to-face for infelicities.

    Ashkan Soltani talked of the economics of surveillance, inspired by the “Geolocational Privacy and Surveillance Act” (HR 2168) hearings into the different types of surveillance (foot, drones, helicopter …) and the differing control regimes. The big difference seems to be cost. What’s the cost per hour of calculating a suspect’s location? Foot / car pursuit is maybe $100 per hour (covert $500-$1000/h); helicopter $700/h; drones are much less, ditto GPS. ACLU reckoned GPS tracking costs $4/h. A more complete model would include fixed costs, training and scalability. Does the fact that we can get tracking as an outsourced service change the world?

    Last up was Paul Syverson, talking about the motivation to provide onion routing service. The Navy likes onion routing to protect its road warriors, open source intelligence and other things. One real problem is getting people to contribute to the network by running servers rather than just free-riding. There are various techie things you can do such as throttling bandwidth hogs and prioritising bursty traffic. However if you start paying people then the motivation becomes extrinsic rather than intrinsic; at present a lot of the server operators are idealistic, and the Israeli daycare example suggests that monetising the exchange could undermine things. What’s the best way forward?

    Discussion started with Tor, where governments might explicitly include exit nodes in net neutrality. The cost of wiretapping is higher in the USA, as you need a person to listen in. The difference between old surveillance and new is not just about scaling but about coverage: who can give law enforcement the slip – just geeks? Facebook may make people other-focussed as they spend most of their time on others’ profiles rather than their own. It also complicates the employment context by adding an extra layer of data, from sexuality to a status update about having gone to the mosque – information that’s not in CVs and that employers cannot ask about in interviews. Facebook is also anomalous in that when you die everything goes to your heirs except your Facebook page, and there’s the bigger issue of dementia: we need things like lasting powers of attorney; it’s not enough just to have an envelope with the master passwords. More generally, it’s difficult to negotiate privacy formally around serious matters such as divorce, but we do give cues that help us start managing information, which as a result becomes part of our private information.

  8. I talked about the importance of deception deterrence as a frame for understanding many issues of cybercrime and policy. Every culture believes it can detect deception by observing physiological signals like gaze aversion, yet experiments show this is hard. What’s going on? One hypothesis is deception deterrence: by interacting with someone you bring them into your in-group, so that later misbehaviour is betrayal rather than mere risk, leading to a vindictive response. At the lower end, some deception is socially authorised, such as bluffing at poker; we might investigate what sorts of manipulation have effect here. There are also questions about the different responses to being watched by people versus by software, which raise issues about the nature of privacy. Finally, while terrorists try to be as annoying as possible, other crooks try to be inconspicuous – which may blunt our ability to push back on crime using psychological mechanisms, and hence increase crime. (slides: http://www.cl.cam.ac.uk/~rja14/Presentations/shb-2012.ppt)

    Dave Clark asked how we focus our attention on security problems. We’re making it simpler to lie by constraining the channel, but embedding everything in a rich environment of fact checking. We’re also moving to a world in which collaborative crime is easy and common; large numbers of specialists collaborate in a value chain of criminal production. The idea we can stop fraud by separation of duty is just dead. The collective production of trust is how the real world works, and the real task is to build software that lets it happen. It’s not man v tech or tech v tech but person v person. Who’s going to automate faster, the criminal or us? Can we persuade people to share data to help us make them more secure? The space where this will happen is the app, where every app is different and every app designer is untutored. So: collective construction of security, help the app designers, and see where we can get tech advantage.

    Peter Robinson works on affective computing, and can infer emotions from facial expressions with moderate success. He looks, for example, for car drivers becoming cognitively overloaded. This was hard to induce in a car simulator, so they now get people to fly helicopters round the building by remote control to generate enough stress. How can you measure cognitive load? The answer is early saccades: the latency of the ocular response to a flash follows a reciprobit distribution, due to two effects from two different neural pathways (the cortex and a fast path). If the cortex is occupied, the fast path isn’t inhibited. So: might early saccades signal deception?
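
    The underlying model, as I understand it, is Carpenter’s LATER model – a sketch, not necessarily the speaker’s exact formulation: reciprocal latency is roughly Gaussian,

```latex
\frac{1}{T} \;\sim\; \mathcal{N}(\mu, \sigma^2),
```

    so plotting the cumulative probability of a response by time T on a probit scale against 1/T gives a straight line (the “reciprobit” plot), and the second, faster pathway shows up as a distinct population of early saccades off that line.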

    Peter Swire has experience of the crypto wars, and wonders whether our mistakes of the 1990s might be repeated in places like India and China. How might a policeman in a country like Peru respond to the loss of access to wiretaps? The FBI has described World of Warcraft as a global terrorist communications network. Swire has written an essay, “Going Dark”, which argues that cops now gain much more than they lose – such as location and contact data. Would they prefer the 1990 capability set or the 2012 one? He’s also working on the consequences of US v Jones, which required warrants for tracking vehicles, with a competition at usvjones.com. A further project is on de-identification: in what circumstances can one combine partially effective de-identification with other compensating controls?

    Rahul Telang is an economist interested in competition and security. Under what conditions can markets improve security outcomes? Hospitals might provide a reasonable setting with IT security obligations and an existing literature on competitiveness (with towns ranging from pure monopoly to several players). The unsettling finding is this: in more competitive markets you get more data breaches.

    The last speaker was Alma Whitten, who’s always ended up in the “futures” session at this workshop, and realistically that means 5-10 years. Technologists’ imagination tends to be seeded by artists, particularly science fiction writers and moviemakers. She showed a clip of a future VR surveillance / modelling tool that showed plans of all buildings in a neighbourhood (so you could ask “show me all concrete walls”) and asked where the boundaries might be, who maintains them, and who pays for them? What if access is to be for all, not just eccentric techno-genius multi-billionaires?

    Discussion recalled that 1990s predictions were of smart homes and interactive video, not of the smartphones we now have; whether the recession is relevant; whether advertising is the only way of paying for stuff; whether hawk-dove, predator-prey or parasite-host models are useful in modelling crime when many strategies favour the attacker first; how we might get defenders reacting as quickly as criminals; when we might see attacks that exploit one firm’s credentials to attack another; whether some software or tools might need ownership restrictions and others subsidy, like rural broadband; and the various discourses around lock-in and the regulation of software, such as why we worry about lock-in with Microsoft but not with World of Warcraft.

  9. It’s a little sad that a security conference gets so little attention – it’s been going down ever since the first one.

    On the other hand, it’s also a very strange initiative – set up a security conference without any security professionals… where is law enforcement? Where are the CPPs and PSPs, private security experts, physical security experts, or even behavioral analysts (as opposed to the behavioral economists…)? I’m very sure that any number of LE agencies and private security companies would love to lend a hand and talk about real world security administration, strategies and planning.

    At the end of the day, it looks like the people behind this “SHB” thing invite the people they’d like to hear (this being an “invitational” event) and skip lightly past the people who actually know security beyond the computer screen, statistics and paper-pushing.
    The good thing may be, judging by the dwindling attention, commenting, references etc., that the general security-minded population is also catching on to that.

  10. If consumers knew that they could claim consumer protection today, wouldn’t they demand it?

  11. I don’t think so, unless someone else recorded them. I’m afraid I couldn’t find my voice recorder as I was packing for the trip; we’d been decorating and stuff was piled chest deep in the music room. Sorry about that.

  12. Also some people do not want the talks recorded. Perhaps the open, free-wheeling nature of the incredibly interdisciplinary dialogue is part of the unique value. It is an excellent, fascinating event, marred only by evopsyche.

    Because obviously, Steven LeBlanc’s theory only applies if males have male-only parents and females have female-only parents. It is canonically untestable. It is, ironically given the opening talk, just ideology pretending to be (bad) science.
