Security and Human Behaviour 2013

I’m liveblogging the Workshop on Security and Human Behaviour which is being held at USC in Los Angeles. The participants’ papers are here; for background, see the liveblogs for SHB 2008-12 which are linked here and here. Blog posts summarising the talks at the workshop sessions will appear as followups below. (Added: there is another liveblog by Vaibhav Garg.)

12 thoughts on “Security and Human Behaviour 2013”

  1. Richard Clayton kicked off the workshop, talking about the phishing problem faced by a typical large web services firm. This has decentralised in the last few years; it’s no longer done by large Russian gangs but by a cottage industry of people in West Africa. With Tyler Moore he’s analysing the efficiency of phishing. A sample of 1,000 victims revealed 50 subject lines that work; 30% of the victims fell for “IMPORTANT MESSAGE!!!”, where 15,000 messages yielded 373 victims – about one victim per 40 messages. “Webco account update” did even better with 1 in 25, and the overall hit rate was 1 in 73. He concludes that limiting people to 100 messages a day won’t stop the phishermen: at these rates you’d still get a couple of victims daily – one as seedcorn for tomorrow and one to send a lost-in-London scam. So we are now getting real-world data on what causes people to click.
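
    A quick back-of-envelope check on those rates (the figures are the ones reported above; nothing else is assumed) shows why a daily cap wouldn’t bite:

    ```python
    # Expected victims per 100 phishing messages, at the hit rates quoted above.
    rates = {"IMPORTANT MESSAGE!!!": 373 / 15000,   # ~1 in 40
             "Webco account update": 1 / 25,
             "overall": 1 / 73}
    for subject, rate in rates.items():
        print(f"{subject:22s}: {100 * rate:.1f} victims per 100 messages")
    ```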

    Mark Frank is looking at humans in the security process with an emphasis on terrorists and airports. Humans are dynamic; terrorists read TSA websites, such as for machine specs, so the TSA lies and claims the scanners can find stuff they can’t yet. The goal of screening is to pick out people for more serious screening. The research literature suggests screening is not very effective, but lab studies mostly neglect emotion. Looking at videos of real offenders being interrogated, we can get detection to about 75% rather than just over 50%.

    Angela Sasse is studying the ability of call-centre operatives to detect fraud. One of the most powerful tools is letting the operative see the caller’s history; giving each customer a single point of contact helps fraud detection as well as customer care. The problem is that operatives get rated on customer satisfaction, so you have to train them to say no nicely. Talking to successful operatives, the strongest fraud cue is the amount of hesitation when a direct question is asked, then recognition of fraud patterns. Firms want to replace humans with software, and tend to ignore the long, troubled history of automated voice-analysis deception detection. The insurance industry uses it widely for screening, but one of these systems was trialled by the UK government on welfare claims and found to be ineffective. An academic paper on this was attacked by the vendor with a libel lawsuit.

    Don Fallis is a philosopher interested in epistemology. How can we trust what we read on wikipedia, and what techniques help us tell error from ignorance from deception? Animals deceive without a theory of mind, but there is a reinforcing mechanism to promote the dissemination of falsehoods. Also, is it the function of some information to mislead? Do you want to reinforce an existing false belief, or just keep people in the dark? A number of philosophers (including Floridi, Fetzer, and Skyrms) are chewing over questions like these.

    Cormac Herley discussed the economics of scams. Why should people enter into transactions where the expected return is below zero? Richard Thaler defined a “money pump” as someone whose decisions let others take money from them with high probability and little risk. Information is the key. There are many scams done by large actors against customers; big firms use machine learning not just to detect frauds against them but to rip off their customers. The bifurcation is between those big actors and the long tail – where the victims have almost no data and have not previously seen the scam that hits them.

    Tyler Moore’s subject was how consumers react to cybercrime, analysing data from a 2012 EU survey of 26K citizens in 27 member states. People who fall victim participate less in online banking and commerce, but the effect was curiously smaller than that of general concern about cybercrime. People may find the experience less awful than their worst fears, as they often get reimbursed. He concludes that awareness campaigns should focus on positive steps people can take to improve security rather than on “scaring people straight”.

    The first question was whether crime pays. It does, but it’s hard work; lots of people try it who are no good at it. It’s also dynamic, as service firms adapt and users learn: the cottage-industry guys try one scam after another, and many phishermen used to do rental scams. Pump-and-dump basically doesn’t work any more. Spam still makes maybe $50m, with most of that divided between 3–4 big gangs. The cashout mechanisms remain remarkably durable though. At a deeper level, is deception always functional? There are all sorts of cases where we intentionally deceive people even though it isn’t in our best interest. And there are other goals than deterring deception: we can aim to demotivate and disconcert the bad guys too. You have to think about anger, contempt and disgust, all of which can lead people to fail to empathise with opponents in ways that enable different kinds of wrongdoing. How do you measure deception in the lab? Mark Frank has been using real versus fake pain: one subject has an ice-cold compress and really hurts, while another has lukewarm water and fakes it; the other subjects try to tell which is which. Holding eye contact works as a truth signal for 6–7 year olds but fails for 12-year-olds, as by then they know the recipe. How do you train deception detectors? Angela uses James Reason’s framework from the safety world to build and train competence. The ideal is a mix of experienced people and supporting tech.

  2. Andrew Adams has been thinking of the problems of sharing under social pressure where the affordances of social networks are too crude. Estonia lets citizens access a lot of sensitive data (including medical records) online using their ID card. If the norm is sharing medical data with your partner, what do you do if something embarrassing comes up? Might duress codes work, or secret areas? How do you deal with abusive relationships, and what services do you provide for undercover policemen? In conclusion: covertness matters, and we have to start thinking about doing it properly.

    Heather Rosoff has been working on what influences cybersecurity decisions. Prospect theory says people avoid risk under positive frames and seek it under negative frames, so near-misses can make people complacent. Her first study posed four dilemmas (virus risk from a music file, a USB stick or a Facebook app, plus online book-purchase fraud) in a 2×2 design crossing gain-loss framing with recall of a near-miss outcome; N=266 psychology students, mostly young and female. Respondents were more responsive to the near-miss under the gain framing. In a second study, 242 mTurkers faced three dilemmas (virus from a music file, a game plugin or a media player) with manipulated recall of a friend’s near-miss. People who recalled a false alarm were more likely to endorse the risky option, and those who recalled a near-miss went more for the safe one. In conclusion, framing really works to influence cybersecurity decisions.
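
    For reference, the value function from prospect theory that underlies this framing claim is concave for gains and convex (and steeper) for losses. A minimal sketch, using Tversky and Kahneman’s 1992 parameter estimates rather than anything from Rosoff’s study:

    ```python
    # Kahneman-Tversky value function: risk-averse over gains, risk-seeking
    # and loss-averse over losses (alpha, beta, lam are the 1992 estimates).
    def pt_value(x, alpha=0.88, beta=0.88, lam=2.25):
        return x ** alpha if x >= 0 else -lam * ((-x) ** beta)

    for outcome in (100, 50, -50, -100):
        print(f"v({outcome:+d}) = {pt_value(outcome):+.1f}")   # losses loom larger
    ```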

    Steven LeBlanc is an archaeologist who works on the history of conflict in a framework of evolutionary psychology. The only skill that chimps teach adolescents explicitly is killing. We’ve fought for millions of years, with 15-25% of males dying from homicide, and we must have evolutionary adaptations for it. Tech progress has made it less of a problem now that we have specialists to do it, but the genetic component is still with us, and conditions the differential reproductive behaviour of men and women. Men fight about revenge as well as about women. See Joyce Benenson’s new book “Warriors and Worriers”, due out real soon.

    Jerry Kang is a lawyer but works with experimental social scientists on the implicit biases in our brains. Social categories cause biases to fire, like it or not; we lack direct introspective access to all this. The implicit association test lets us tease out the reaction-time deltas though, and measure the junk in our heads: if we’re told to shoot people with guns, we shoot black people faster. These implicit biases are the cognitive equivalent of computer viruses but are much harder to deal with! Some people will argue that many biases are just Bayesian rationality. Yet the harm is externalised on to third parties even more than with computer viruses, and there is great stigma associated with them. The good news is that we can clean them up.

    Jodok Troy is a political scientist dealing with the human aspects of international relations. He’s interested in the magic formulae and stories societies use to keep populations peaceful; for example, the stone Sisyphus had to push up the mountain was called “peace”. Violence is primal, original and endemic. It’s not just about sex and revenge but also about respect and standing; the need to be recognised. Mimetic theory describes this, and questions the illusion of the rational economic individual. Taboos are one of the foundations of the social order. Violence within groups is much greater. Don’t think that the perfectibility of man implies the perfectibility of the social order. If, as Bruce says, cybersecurity is moving back towards feudalism, these issues may matter.

    Milind Tambe is a computer scientist at USC who applies computational game theory to security. Optimal defensive strategies are often randomised, and his group has used Stackelberg games as a framework to advise the Coast Guard, the TSA, and even the randomised police checkpoints set up in Mumbai after the attacks there (5-15 checkpoints on a road network of 20,000 roads and 9,000 nodes). A recent project was software to decide when and where to check tickets on the Los Angeles Metro; it was difficult to incorporate real-world requirements, from bathroom breaks to the time needed to process arrested persons. You end up having to solve large games dynamically and incrementally, without enumerating the whole game in memory. Evaluation is hard! At least they avoid predictable patterns, and the graphs make more sense.
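
    To give a flavour of the approach, here is a toy sketch of a Stackelberg security game (made-up target values and a zero-sum simplification of my own, not the deployed software): the defender commits to randomised coverage, the attacker observes the mixed strategy and hits the most attractive target, and the defender’s optimal coverage drops out of a small linear program.

    ```python
    # Zero-sum Stackelberg security game sketch (illustrative numbers only):
    # minimise the attacker's best-response payoff z subject to
    #   values[t] * (1 - c_t) <= z  for every target t,
    #   sum(c) <= m,  0 <= c_t <= 1.
    import numpy as np
    from scipy.optimize import linprog

    values = np.array([10.0, 8.0, 5.0, 2.0])  # attacker payoff if target is unguarded
    m = 1.5                                   # total coverage probability available
    n = len(values)

    # Decision variables: c_1..c_n, then z. Objective: minimise z.
    obj = np.concatenate([np.zeros(n), [1.0]])
    A_ub = np.hstack([-np.diag(values), -np.ones((n, 1))])  # -v_t*c_t - z <= -v_t
    A_ub = np.vstack([A_ub, np.concatenate([np.ones(n), [0.0]])[None, :]])  # sum(c) <= m
    b_ub = np.concatenate([-values, [m]])
    res = linprog(obj, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, 1)] * n + [(None, None)])
    print("coverage:", res.x[:n].round(3), " attacker payoff:", round(res.x[n], 3))
    ```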

    Questions:
    Some men are more prone to violence. What does this mean? There’s the classic gene-environment interaction, such as the MAOA “warrior gene” which makes you more likely to go to jail, but only if you have violent experiences as a teenager. We just barely understand this, and it may raise toxic political issues if Polynesians are more likely to have it than Europeans. Another foundational question is whether raiding behaviour is down to genes or social structure; it was predicted, and found, in spider monkeys because of their social organisation (but not in bonobos, who are close to humans and chimps). What could motivate people to change their implicit biases – shame, such as the “racist” tag? What goes deeper? Do we rely on mindfulness, or do we need to bring in technological tools such as surveillance? And how would social norms develop around decoy records – would we have false data, with the risk of polyinstantiation, or just gaps, which risk stigmatising the subject by flagging the existence of missing data?

  3. The lunchtime talk was from Paul Slovic on psychological responses to terrorism. Phil Zimbardo described terrorism as about making ordinary people feel vulnerable, confused and frightened. Psychologists have studied risk for 50 years, and can contribute by helping us understand judgement under uncertainty, the social amplification of risk, and what we can do to calm fears. Hazard effects drive perceptions, which have impacts; most reaction is fast experiential thinking. Unfamiliar hazards, involuntary exposure, no control, malevolent human agency, dread risk and violation of moral norms all ratchet up the impact. Also serious events have signal value (perceived new information about future events, such as Three Mile Island, Chernobyl and 9/11). Danny Kahneman’s book on thinking fast and slow has many useful insights. A 10% chance of going wrong isn’t the same as 1 out of 10 if you’re thinking fast rather than slow; a psychiatrist will release a patient with a 10% chance of being violent but not with a 1 in 10 risk (your friend might be that 1). Paul’s own work on the affect heuristic explains how we have primary feelings about particular concepts, which drive other forms of judgement. A strong affect will overcome probability; people will pay almost as much to escape a 1% chance of a painful shock as to escape a 99% chance. This is why toxicologists may have a linear dose-response function for a carcinogen while the public will have an almost flat one; any carcinogen is to be avoided. Cass Sunstein calls this probability neglect, and it causes us to neglect values like human rights. Some preliminary work on risk messaging suggests that engaging the public in a dialogue about risk controls and realities might help mitigate overreaction. You can inoculate people against challenges to their attitudes.

    The third session started with Maria Armoudian talking on her book “Kill the Messenger – the media’s role in the fate of the world”. Media framing made the difference between Rwanda and Burundi, and had a big effect in Northern Ireland. In a politics frame, there can be give and take; a blame frame removes this, and a hate frame drives people to hurt or destroy. What’s more, every single genocide she’s studied had a distinctive frame in which “if we don’t wipe them out, they’ll wipe us out” figures prominently, plus a “noble” cause like “cleansing the homeland”. The pattern persists across the Armenian and Jewish holocausts, but was absent in Burundi, where a pogrom was avoided.

    Laura Brandimarte talked on “The discloser’s iron hand – how disclosure makes us harsher.” Will we become more lenient to people who have skeletons online once we all have them? The similarity/attraction literature suggests yes, while the cognitive dissonance literature says no. Her first study observed public voluntary disclosures by 400 mTurk volunteers; four weeks later the subjects were asked about a target’s similar disclosure (a drunken photo), which had been made either voluntarily or by a third party. Targets who had disclosed voluntarily were less likely to be hired. Cognitive dissonance seems to be the explanation; those who later removed their own drunken photos were the vindictive ones. A second study looked at causality: people in the high-disclosure condition were less likely to hire a discloser.

    Mark Latonero works on detecting sex trafficking of minors over digital networks. This means commercial sex under force, fraud or coercion, or involving a person under 18. The Internet lets traffickers reach more victims over greater distances. Typical platforms are cityvibe, backpage and craigslist (which was forced to shut down its adult section). Hypothesis: the language signalling a minor is encoded. They’ve been tracking phone numbers, scraping language, and looking for patterns.

    David Modic set out to get basic data on whether people pay more attention to different types of browser warning. About 6% of people turn off browser warnings, and as many again would if they knew how. The likelihood of clicking through depends on many things including age, tech proficiency and advice from friends, but not particularly on the wording of the warning. What people want most is certainty about the risk, and it’s also clear that they are risk-averse.

    Molly Sauter is interested in how the use of anonymity in online protest has changed over time. In the mid-to-late 1990s, the Zapatista-aligned Electronic Disturbance Theater used floodnet to DDoS targets, and then the Electrohippies used it in the “Battle for Seattle” against the WTO. They modelled DDoS on sit-ins and were keen to be identified; they limited their tools to follow the “one activist, one voice” theory, and saw this both as free speech and as a hedge against accusations of terrorism or criminality. Later, Anonymous refused to engage with governments on government terms. Their message board 4chan doesn’t support persistent identity, and people who outed themselves were ostracised. Anonymous urged street protesters against Scientology to wear masks. Maybe one factor was that online protests were treated not as protest but as crime.

    Richard John has been thinking about how to wand football crowds to deter terrorism; the problem is the crush of people just before the game, when wanding everyone would not work. Would people prefer randomised good security to universal cursory search? They manipulated coverage: thorough searches of half the people, a quarter, or 10%, versus universal cursory checks that would allow a 10%, 25% or 50% chance of getting through with contraband. People were split 50-50 regardless of the coverage level. The people who favoured cursory checks preferred fairness; the others, convenience.

    In questions: would people react more harshly to disclosures if they’d suffered from disclosures of their own? Probably, but what Laura had asked about was whether they regretted disclosure. Did online protesters care about externalities such as congestion elsewhere in the network? Yes, but the protests were tiny by modern standards and focused on attracting press attention. Why were Rwanda and Burundi different? The former spurred the international community to act and set up a cross-ethnic radio program. Is perception of luck different enough in different cultures to give a different response to randomised enforcement? And could we push it far enough to deal with the industrial-scale petty crime we find online? Or perhaps people just don’t trust randomness, because of confirmation bias … the TSA will say screening is random, but the IRS used to say that too.

  4. Zinaida Benenson has been comparing security usability and culture between Android and the iPhone. At the tech level, Google uses static install-time warnings while Apple takes a run-time approach; at the more important cultural level, Google talks about openness and tells people they’re in control, while Apple tells them they’re safe because apps are reviewed. It’s not easy to find out what Google means when it says that permissions are important; but she couldn’t find anyone at all in the iPhone world who understood that premium-rate SMSes could be used to steal money. Overall, Google seems to appeal more to techie men and Apple more to women.

    Bill Burns has been studying the public response to the Boston attacks. The number of Bostonians who rated risk as high or very high fell from 37.8% to 16.8%, and fear from 16.3% to 4.6%, between 16 and 30 April. People’s confidence in their ability to adjust doubled, their worry halved and other indicators similarly showed a near return to normal over a fortnight. Event risk signals include terrorism, dread bio/rad risks, negligence, children, events that happen near us, and stories with no closure. In the Boston case we at least had closure, as the bad guys have been arrested.

    Chris Cocking takes a social-psychology view of security; he has a social-identity model of collective resilience (SIMCA) and criticises Le Bon’s panic model, in which one person who behaves idiotically will infect the others in a crowd. There is not much evidence for the panic model, yet it’s used to justify a police/military response to CBRN incidents rather than a health response, and to justify keeping populations in the dark. Initial reports of mass looting, gang rapes and murders after Katrina were wildly exaggerated: the crime rate dropped in the aftermath, and the militarised response cost lives (the police line wouldn’t let a big black crowd walk into a white area, and people died as a result). Hillsborough was also a disaster caused by the police, who irrationally feared a pitch invasion. It’s much more likely for an emergency to create a common identity, cooperative behaviour and resilience; leaders emerge spontaneously. So: how can we encourage and facilitate the “zero-responders”? In Israel, for example, paramedics turn up with first-aid kits and hand them out to uninjured civilians.

    Charlie Ransford works in Chicago to reduce gun violence, which he wants to treat as a disease. It shows many of the statistical characteristics of an epidemic, from clustering to spreading. The mechanism isn’t a pathogen but social learning, and maladaptive behaviours get locked in by social norms. Combatant nations suffer postwar increases in homicide; abused children abuse their own children in turn; and different types of violence are cross-infective. As in other diseases there are all sorts of mediating factors, and there are carriers who spread violence norms without themselves acting violently. We know how to stop epidemics: interrupt transmission, change the highest-risk behaviours (such as by getting gang members on a risk-reduction plan), and change community norms. Chicago has seen a 41-73% drop in shootings and killings, and the movement is starting to spread.

    Harvey Molotch is a sociologist who’s written a book “Against security”. Fare dodging in the New York metro led to tokens and low turnstiles being replaced with Metrocards and high ones. Yet these “cheese slicers” can hold three undergraduates and the emergency exits still allow people in. As with all re-engineering efforts there are losers, such as obese people and the disabled. Staff get round “health and safety”: they use insulating sticks to prod sleepers without breaking the rules by touching them (and brake handles as weapons). Cards can be double-used by folding them, so this was made a felony. In short, translating threat into command and control sucks. And if all New Yorkers obeyed the maxim “If you see something say something” the Metro wouldn’t work at all. Harvey’s mission is to get people to default to decency, thereby increasing resilience. In the subway the emphasis should be on ventilation, signage, announcement, sound quality … but these are ignored as they’re not “security”.

    Jonathan Ito is a computer scientist who tries to create accurate computational models of the human decision process. The challenge is that a model created in one context often doesn’t work in another because of framing. Appraisal theory looks at how emotions arise in a situation, from assessments of its pleasantness, goal congruence, controllability and so on. He’s been integrating these into a utility model and testing it on 2000 mTurkers. As the perceived pleasantness and goal-congruence increased, people became more risk-averse; conversely, controllability made people risk-seeking. He’s now interested in cultural differences in decision making; collectivist cultures can be more risk-seeking.

    Questions: Rebecca Solnit’s book “A Paradise Built in Hell” describes how crowds cope with emergencies but elite panic results in the police or army being sent in. Training can help crowd behaviour; the World Trade Center was evacuated efficiently as there were repeated drills after the earlier attack in 1993. (Inter-ethnic conflict can still override, as in the partition of India, when group survival is seen as more important than group solidarity.) Panic is not well understood: police chiefs see 9/11 as panic when it wasn’t, and the videos show this clearly. If you treat violence as an epidemic, do you mean that as metaphor or mechanism? Charlie sees it more as metaphor, but it still responds to disease-control mechanisms, so it has some characteristics of mechanism.

  5. Tuesday’s sessions were kicked off by Jean Camp, discussing security warnings as risk communication. If computer scientists designed smoking warnings, they would read like medical texts and almost no-one would read them! She argues that warnings should be translucent: they should not be opaque warnings that no-one can circumvent, but neither should they be fully transparent warnings that require full understanding.

    Paul Ekblom is a former UK government crime scientist who now works in academia on designing out crime. Doing this properly isn’t trivial; it requires conceptual precision, but it’s not uncommon to find four different meanings of “vulnerability” in one document. The hard part is articulation and management of practitioners’ tacit knowledge. The focus should be the situation and the offender at the time of the crime.

    Karen Levy has been studying safety, usability and enforcement in the US trucking industry. US truckers have to fill out a form reporting their work hours, but drivers don’t take these “swindle sheets” seriously: no trucker would pull over 15 minutes from home. The US government is mandating electronic onboard recorders (EOBRs), which many drivers see as a “digital big brother”. There is great variety in the interfaces, and inspectors are reluctant to really learn them; trucks with “electronic logs” stickers tend to be waved through inspection stations. The technology’s usefulness to drivers thus depends on its unusability to inspectors.

    Patrick Olivier of Newcastle studies interaction design. This involves a lot more than just workflow! We have to think about many complex additional factors from people’s emotional response to products through personal histories to the technology’s social embedding. There’s a lot of talk of scientific method, especially among computer scientists, but experience also matters. As an example, he’s been looking at the reluctance of elderly people to use online banking; this is not just the tech but down to the different ideas of banking acquired by people who first started using banks in the 1940s or 50s.

    Peter Robinson does affective computing. What are the emotions? Darwin crowdsourced assessments of emotions in photographs; Jim Russell did principal-components analysis of mental states in the 1980s; and Baron-Cohen found 24 groups of emotion words via lexicographic analysis in 2003. It’s not straightforward, and people take about 2 seconds to identify complex mental states. Emotional states can only really be used as hints: your nervousness when calling your insurance company might signal a fraudulent claim, but just as easily your being upset at having been robbed. People seem to want to believe in technical solutions such as EEG or fMRI lie-detectors, but physiological signals work better when aggregated over large groups than for individuals.

    Dustin Ho works for Facebook on blocking spam, stopping phishing, and account recovery. One problem is crooks using account recovery to steal people’s Zynga poker chips. Security questions don’t internationalise well: they used “what’s your pet’s name?” until they found that people in Indonesia don’t name their pets, and the usual answer was “cat”. So they tried recovery via friends – but you don’t want your ex and two of her friends vandalising your account after a breakup, so now they cluster your friends so they can call on one in each cluster. They dealt with fake friends by trimming small clusters and requiring recovery friends to be of some standing. Recently they have been experimenting with pre-assigning “recovery friends”, which seems to work well.
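
    A toy sketch of the clustering idea (the graph, tenure field and thresholds are illustrative assumptions, not Facebook’s real pipeline): partition the friend graph into communities, trim tiny clusters, and pick one established friend per cluster as a recovery contact.

    ```python
    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    G = nx.Graph()
    G.add_edges_from([("amy", "ben"), ("ben", "cat"), ("amy", "cat"),  # work friends
                      ("dan", "eve"), ("eve", "fay"), ("dan", "fay"),  # family
                      ("zed", "amy")])                                 # possible fake
    tenure_days = {"amy": 900, "ben": 700, "cat": 60, "dan": 1200,
                   "eve": 400, "fay": 800, "zed": 3}

    MIN_CLUSTER, MIN_TENURE = 3, 180
    recovery_friends = []
    for cluster in greedy_modularity_communities(G):
        if len(cluster) < MIN_CLUSTER:      # trim small clusters (fake-friend defence)
            continue
        eligible = [f for f in cluster if tenure_days[f] >= MIN_TENURE]
        if eligible:                        # one long-standing contact per community
            recovery_friends.append(max(eligible, key=tenure_days.get))
    print(recovery_friends)
    ```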

    Questions started on the nature of security usability: is it just common sense (as Dustin presented) or does it require theory (Mike Just and David Aspinall, the Cranor/Garfinkel book)? And to what extent does security require barriers? On Facebook, friend lists can’t be protected as they leak all over the place. Signalling systems might get traction without regulation; rather than tachographs some markets might go for insurance-driven surveillance. On emotion analysis, work has been held up for years by the belief that there must be a simple answer: “only a psychologist would think there might be a two-state answer” from voice or face analysis, and we will probably end up with more nuanced analysis. Facial analysis is also culture-dependent to some extent but there is also a degree of universality. There is also an issue of whether lexicographic analysis is in fact a folk taxonomy rather than a scientific one; but our vocabulary may simply reflect what we can distinguish.

  6. Shannon French is a philosopher who’s taught military ethics and law at Annapolis. Military law (just war, proportionality, … ) tends to be written from the prince’s viewpoint rather than the soldier’s. In theory you never shoot an unarmed prisoner of war; the practice is harder if an enemy sniper surrenders just after killing your buddy. Considerations such as maintaining popular support for war and avoiding damaging the prospects for peace are too distant to matter then. Yet all round the world, warrior codes set out strict rules, and the point of enforcement is the warrior’s identity. This identity is what enables you to kill efficiently in the first place; it lets you say “I am a warrior, not a murderer.” That line matters more than any judgement by an outside force.

    David Kahn, the historian of cryptography, noted that military intelligence agencies are typically established after a defeat (Russia after the Crimean War, France after 1870, Germany after WW1, the DHS after 9/11) or when a state fears aggression (Poland before WW2). Yet although intelligence helps, ultimately it’s strength that matters.

    John Mueller is working on a paper on the costs and benefits of the FBI’s counterterrorism work post-9/11. Does the risk reduction justify the extra cost, which is about $2.5bn a year? (Their counterterrorism budget went up from $500m to $3bn.) Break-even analysis suggests that, to justify the spend, successful attacks would have to run at the rate of a Boston bombing ($500m) every month or two, a London attack ($5bn) every year or two, or a 9/11 ($100bn) every thirty years. The 53 convicted terrorists in the USA so far have almost all been neither competent nor determined; most spout off but don’t do anything, and are arrested through informants. The idea of a Boston every month or two is not plausible. A better way to deal with these guys would be what the Secret Service does with people who talk of harming the president: pay them a meaningful visit to make clear they’re on the radar, and visit again whenever the President is due in town.
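
    The break-even arithmetic can be checked directly; using the full $3bn counterterrorism budget reproduces the frequencies quoted above:

    ```python
    # How rarely can each attack type be prevented for a $3bn/year budget to break even?
    budget = 3e9  # dollars per year
    for name, cost in {"Boston-scale ($500m)": 5e8,
                       "London-scale ($5bn)": 5e9,
                       "9/11-scale ($100bn)": 1e11}.items():
        print(f"{name}: one prevented every {cost / budget:.1f} years")
    ```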

    Robin Dillon-Merrill studies near-misses in many contexts from air traffic control to security. A near miss may make you feel resilient rather than vulnerable; but this can shift risk attitudes in harmful ways (as when banks got away with lending money to people who couldn’t really pay it back in the run-up to 2008). Vigilance generally drops when there are no failures; reported shuttle faults shot up after each of the space shuttle disasters. To counterbalance, we might focus on worst-case scenarios, expect failures at times of high stress, and reward people who own up.

    David Livingstone Smith has a book “The politics of species” coming out soon. We have strong inhibitions against harming members of our communities, so we have clever cognitive mechanisms to dehumanise other groups we want to attack. These are the ability to carve up the world into “natural kinds”; psychological essentialism, whereby we see natural kinds as following from causal essences; and the tendency to classify natural kinds along a chain of value (God at the top, dirt at the bottom, everything else in between), which creates the possibility of “subhuman” and so lets us remove enemies from the moral community. And the folk category “human” is indexical: it names the natural kind of the speaker, however conceived.

    Rick Wash studies how non-experts think about how they protect themselves. The way folk models map into actions is not always clear! In the theory of planned behaviour, thinking leads to intention, and intention in turn to behaviour. But the real issues are self-efficacy, the fact that security tasks are secondary, and hyperbolic discounting. He’s been studying how people do software updates; everyone hates interruptions, leading people to put updates off even when they intend to do them. One iTunes update was so annoying that it stopped many people installing any updates, even security ones (and many people don’t believe that security updates really are such). Overall Rick challenges whether it’s been a good idea to minimise the “human in the loop”, as it makes decisions harder to implement.
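
    The hyperbolic-discounting point is easy to see with numbers (mine, not Rick’s): the interruption cost is paid now in full, while the security benefit is discounted because it lies at some uncertain future date.

    ```python
    # Present value under hyperbolic discounting: V = A / (1 + k*D).
    def hyperbolic(value, delay_days, k=0.1):
        return value / (1 + k * delay_days)

    interruption_cost = 5    # felt immediately, undiscounted
    security_benefit = 40    # realised only when an attack would have hit
    for delay in (7, 30, 180):
        pv = hyperbolic(security_benefit, delay)
        print(f"benefit {delay:>3}d away: worth {pv:5.1f} now -> "
              f"{'update' if pv > interruption_cost else 'postpone'}")
    ```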

    In questions: there are huge gaps between average-case, worst-case and “subhuman” thinking, and the value of a statistical life (VSL) doesn’t cover the gap. The family of a dead soldier in Iraq gets $600,000; the government rates civilian deaths at $6.5m; yet government still spends money on asbestos removal where the cost per life saved is $125m. Perhaps in some ideal future world, officials who waste money to protect their jobs and thereby put lives at risk would be seen as immoral and ultimately corrupt. Shame can work: DHHS has a “wall of shame” for institutions that release more than 500 medical records. But people avoid thinking about death; in the Challenger disaster nobody said “If we get this wrong, astronauts die tomorrow”. People are even less likely to think of foreign deaths, such as Iraqi casualties. Americans were prepared to go to war in Korea to stop communism but not to save South Korea; and the Red Cross will save thousands of lives but pull out when some of their staff are killed, which also says that their own lives are worth more. There are complex issues here around the boundaries of the moral community. There are many anomalies: why do we think cancer is worse than car crashes, when the latter kill the young?

  7. Alessandro Acquisti started the afternoon session on privacy (late, because of a false fire alarm). Privacy is not just about information but about invasion of our personal space. Interruptions have unexpected effects, even if we expect them. 158 subjects doing reading-comprehension tasks were split into three groups: a control, a group interrupted by instant messages, and a group told to expect interruption. The interrupted group performed worse but improved; the expecting group performed worse but improved faster, perhaps because the expectation caused them to work harder. In a second study, 112 participants were split into two groups, one of which put their mobile phones away while the others left them visible on the desk. They were told to count words but actually scored on accuracy in answering questions afterwards. There was no significant difference in the primary word-counting task, but the subjects with phones on the desk were significantly worse at the secondary comprehension task. Thus: anticipating interruptions costs attention, which would otherwise be available for secondary tasks.

    Chris Palow of Facebook works on authentication. They try to make it easy to pass a suspicious-login challenge, as in the beginning they had two false positives for every true positive. (They now have one for every two.) For example, you can pass the test by logging in from a machine you’ve used before (cookie, IP, some geolocation). The mechanisms have shifted attacks from phishing to malware, and to compromises by people you know. Account access from unfamiliar locations is verified at the next access.

    Ashkan Soltani studies the behavioural mechanisms that companies use to subvert or circumvent consumer privacy choices. Just as Mac users are shown pricier hotels, firms like Dataium track individuals across car-shopping websites and tip off the salesman if you seem keen, so he won’t offer you as good a deal; they used CSS history sniffing and obfuscated their tracking. Ashkan found that Staples priced staplers based on your distance from a competitor’s store, using IP geolocation; Home Depot and Rosetta Stone were doing the same. Hotel discounts depend on whether you use an expensive mobile phone, while people who follow links from price-comparison websites get good deals.

    Esther Dyson is interested in genes, identity and privacy. She’s published her genome online and is sceptical that genomes reveal much more yet than identity. They say who you are, but that’s only relevant in terms of what you did (which is conventional small-data). What’s more interesting is what you might do, which is more a big-data matter. It gets complex when you act publicly in some sense, with domestic and health issues. One good idea was the KPN camera system where workplace CCTV was split between management, unions and a third party so no-one could recover it on their own. Anonymity encourages bad behaviour; what about privacy? Does it foster hypocrisy?

    Chris Soghoian notes that in Washington it’s the government and contractors who propose tech while NGOs oppose it. At the ACLU he is trying to spot low-hanging fruit, such as by complaining to the FTC that the mobile carriers deceive consumers by not shipping Android updates. He’s also concerned with the cellphone interception equipment that firms like Rohde & Schwarz and Harris used to sell to western governments, but which is now sold at low cost by third-world competitors. Open-source tech now lets you build an IMSI-catcher with GNU Radio and a BTS for $1000; at Black Hat there will be a demo of a subverted femtocell which costs tens of dollars. The age of mass surveillance may soon be here. Technology controls on surveillance are failing; we need a public debate on better network protection.

    One question is why no-one’s selling a VPN that gets consumers a better deal! Perhaps it’s because firms have a much stronger incentive to innovate in surveillance than the subgroups of users who want to evade them. At least when a bazaar merchant in Azerbaijan sizes you up and quotes a price, it’s a human interaction; online this isn’t obvious. (It’s almost as if everyone in Azerbaijan had been following you around and filing reports to the merchant.) Will Do Not Track help? DNT is largely a failure, as the FTC can’t force anybody to respect it, and even if Congress could pass laws (which it can’t right now) many fewer Congressmen are interested in privacy than in cybersecurity. Increasingly, we see issues of bad uses of data for which good uses also exist (an example being DNA), so we may see moves towards regulating data use. Yet bundled uses are growing, such as reCAPTCHAs (for that matter, Facebook could use its face-authentication process to get users to tag faces in photos). Fixing phone privacy would mean not just migrating everyone from 2G to 3G and 4G but also flashing all the phones to prevent rollback attacks by IMSI-catchers; it may be simpler to give up on the carriers and get everyone to use services like Skype. There are IMSI-catcher-catchers, but Wassenaar has scheduled them for export control. Alessandro will be following up his initial work on interruption with process studies.

  8. I opened the last session by announcing that we have a project on the deterrence of deception, and are looking to hire a postdoc from October. Colleagues at Newcastle, UCL and Portsmouth may also have places. We’re also working with CMU on other related projects: with Alessandro on vindictiveness towards cybercriminals, and one interest is the psychological boundary between protest and terrorism. This is very relevant to policy: governments scaremonger about cybercrime but then spend the money on cybersecurity. What we most want for our research is better ways of measuring propensity to deceive in mTurk experiments. People use deception for all sorts of purposes, nice and nasty, from good manners through fraud to genocidal propaganda. So we probably need a dozen or so different mechanisms for measuring readiness to cheat in different ways.

    Dave Clark led an audience-participation session: a majority favoured signed email, banning java, promoting two-factor authentication, promoting mutual authentication, an international cyber-police force, and vendor liability for software defects. If we’re going to have a voice, shouldn’t we have a coherent voice? Do we know what it would take to get policy makers’ attention rather than just get papers published?

    John Scott-Railton studies the mechanisms used in the Arab spring to challenge governments. There is a big problem, in that tech has man-in-the-middled our social interactions to the extent that our society now works on closed platforms like Facebook and Twitter. In self-organising rebel groups, roles are flexible and people build large-scale networks based on cores of trust. Anyone can pitch in, both in the country and the diaspora; Libyans used walkie-talkie word codes and Skype for battlefield comms, and Google maps to generate targeting files for NATO air power. Gaddafi’s forces turned off the Internet from April to July 2011. They, and other governments, bought huge quantities of Western massive-intercept kit; they also organised the Libyan / Syrian Electronic Army to put out masses of propaganda and hack stuff. The Libyans backdoored a PC in an opposition HQ, exfiltrating operational data and a picture of an opposition leader having cybersex. Groups have almost no operational-security awareness and are tremendously vulnerable to the full range of spear-phishing and other attacks. Yet opposition groups, like everybody else, have moved their operations online, and are pushed to use tools not designed to withstand state-based opponents. This should be seen as a warning for other groups who challenge or annoy governments. The Arab governments are using standard crime tools to repress opponents, nothing sophisticated: the Arab spring shows that this is a fundamentally unanswered problem.

    Terry Taylor moved from the military to diplomacy to public health, and has a current project with Dom Johnson on natural security. The 100 million species around today are an open-access library on 3 billion years of conflict, which we ignore at our peril. Darwinian approaches to a dangerous world can be seen all over, and the coevolution of attack and defence is played out again in human conflicts, an example being IED evolution in Afghanistan. Nature puts up with solutions that work rather than looking for optimal ones; it learns from success rather than failure.

    Bruce Schneier remarked how odd it was that computers were a mass-market product for which users were expected to provide security using aftermarket products. If a car vendor told you to buy brakes from down the road, it would not seem right. This model is now breaking as platforms become more closed and there’s less for third parties to do, and as data move to the cloud (also not under our control). Drivers such as performance and lock-in are leading to a new model of security where someone else takes care of it. He can’t access the memory map of his iPhone or even delete cookies. His mother is now much more secure with her iPad, Gmail and Flickr; he’s spending much less time fixing her computer. But we have to trust the vendors, and are running out of other options. We’re headed for a feudal model: we pledge allegiance to a lord who protects us, and the more loyal we are the better it all works. Yet we know how the feudal model goes bad. For large organisations it’s all less clear. How will this balance play out? Who will win? The nimble small actors will innovate faster, but the large corporations can influence governments and do things at scale.

    Questions included whether we should encourage people to use Linux more; it can provide a space to innovate. But how can SMEs protect themselves when the top security engineers end up with the big service firms? And where are user rights in all of this? Feudal systems collapsed as the lords stifled innovation and extracted rents, but our industry has many monopolies arising from two-sided markets. How long will continual technological change continue to refresh the industry? What obligations must be made legal obligations? And how can the law be kept up to date? Technological neutrality helps! This applies more widely than IT; the biological weapons convention is more effective than the chemical one as it isn’t detailed. Where will it go long-term? If a government as incompetent as Syria’s can defeat an insurgency using digital tools, we might be better to bet on governments rather than on insurgents. Must we create a class of digital knights to protect the serfs, as argued by Deborah Estrin and Jerry Kang – people who earn OK money but also feel good about themselves? James Scott’s “Weapons of the Weak” poses an alternative model: every so often the peasants have an opportunity as the ground shifts. Yet in the digital world, there’s a surprisingly small number of people working on measures to counter repression. There is much scope here for research students and engineers to do fascinating and vital work.

  9. “Might duress codes work, or secret areas?”

    Some observations on Ross’ question.

    There is a very compelling account of the complete failure of duress codes over an extended period of time in MRD Foot’s 1984 book “SOE: The Special Operations Executive 1940-1946”. This was one of the reasons for the total failure of SOE operations in the Netherlands. It also occurred in France.

    SOE messages had two duress indicators: a bluff check, and a true check. The presence or absence of both indicators was included in the message metadata when it was circulated in London, e.g. “BLUFF CHECK ABSENT, TRUE CHECK ABSENT.”

    Positive duress indicators (“I am caught”) were added to messages by captured agents in the occupied Netherlands. They were ignored. Negative indicators (when the sender fails to include the “I’m OK” signal) were ignored. Ad-hoc duress indicators added to messages – the triplets CAU and GHT – were ignored.
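
    To make the scheme concrete, here is a toy reconstruction of how the checks were supposed to work (my illustration from Foot’s description, not SOE’s actual procedure):

    ```python
    # SOE-style security checks: an agent transmits a bluff check (surrenderable
    # under interrogation) and a true check (kept secret). A message carrying the
    # bluff check but missing the true check should be read as "sent under duress".
    def classify(message_checks, agent):
        bluff = agent["bluff"] in message_checks
        true = agent["true"] in message_checks
        if bluff and true:
            return "OK"
        if bluff and not true:
            return "DURESS: true check absent"    # agent probably captured
        return "DURESS: both checks absent"       # the signal London kept ignoring

    agent = {"bluff": "35th letter is P", "true": "misspell every 7th word"}
    print(classify({"35th letter is P"}, agent))                             # duress
    print(classify({"35th letter is P", "misspell every 7th word"}, agent))  # OK
    ```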

    The failures were at all levels: field agents, communications networks and protocols, and the senior staff officers.

    SOE was a professional intelligence organisation, with trained agents, operating in wartime, against a common enemy, with lives at stake. That is about as highly motivated as it gets. I suggest this casts some doubt on the utility or usability of duress codes by ordinary individuals.

    On the other hand, making duress codes usable by ordinary folks while remaining secure may be a promising research topic.

  10. There is more on SOE’s (considerable) woes in this area in Leo Marks’ “Between Silk and Cyanide”. That said, SOE wanted to believe their agents were safe and ignored indications otherwise. In Syria… false positives would not necessarily be so damaging so there might be greater belief that compromise had occurred.

  11. Richard,

    I don’t know if John Scott Railton was able to cover this, but in previous talks of his, he’s discussed how webcams turning on by themselves didn’t lead to concern. The people being watched were more worried about being shot than compromised.

    Perhaps the core issue is that once you know about a compromise, you may have to act, and so ignorance appears to be the easier path.
