9 thoughts on “Security and Human Behaviour 2016”

  1. The ninth workshop on security and human behaviour started with a panel on deception and deceit, opened by Howard Bowman, who studies human attention. There’s an interesting interface between pre-conscious search and our attention, which he explores using rapid serial visual presentation. By showing a few dozen faces quickly he can test which faces are recognised; the subconscious makes these salient, which can be measured easily and reliably using an electroencephalogram to track the P3 brain wave.

    Next was Mike Tschantz, whose goal is to defeat Internet censorship by deceiving the censor. Do pluggable transports for Tor do a good job? He studied 55 approaches to circumvention, 34 attacks on them, 31 measurement studies and 25 deployed tools. He found that academic papers focus on steganography while practical tools do polymorphism; academics worry about attacks during the use phase, but censors don’t do that, focusing instead on the setup phase. Another critical factor is robustness to noise; real censors err on the side of letting traffic through, and fail open in the face of traffic loss, while academic approaches overblock. There is little agreement in the community about the criteria a circumvention tool should satisfy. But to define success in circumvention, we need to understand censors better.

    Judith Donath studies online communications. Engineers think of communications as a mere conduit of an idea from one person’s head to another; in reality it’s often more about signalling, with an equilibrium between honesty and deception. Signalling games are getting more complex online; for example, most of the men on Ashley Madison were real, but most of the women were virtual – software agents. Such changes are Judith’s research topic.

    Silvia Saccardo studies self-deception in unethical behaviour. She’s done experiments on whether people engage in motivated self-deception; subjects are incentivised to write dishonest reviews either before or after seeing the products they are going to judge; in the latter case they can always lie, but in the former they can indulge in self-deception. Indeed they do; a quarter of subjects in the control group recommended a lottery with a lower expected return and lower variance, while almost half did in the treatment group, who presumably convinced themselves that the lower variance was more important than the lower expectation (returns were $2 or $4 versus $1 or $7 – so, assuming equal odds, expected values of $3 versus $4).

    Elena Svetleva has a deception game in which student A is told “option 1 gets you $5+x and the other student $5” and is then allowed to tell student B either “option 1 gets you $x more” or “option 2 gets you $x more”. Deception rates of 30% for x=$1 soar to 50% for x=$10; but where the liar gets $1 and the other student loses $10, lying drops because of empathy. Highly empathic people were less likely to lie, particularly if they were sensitive to pictures of sadness. She measured various interactions between trait and situation.

    Sophie van der Zee interviewed 18 people in the Netherlands who work in roles that could benefit from deception detection, from law enforcement through the judiciary to insurance. She found that academic research on deception doesn’t percolate through to people who might actually use it; and that lie detection may not be what practitioners actually want, as in many circumstances you can’t accuse people of lying if you’re only 75% sure – you might ruin an investigation. It would be great if lie detection worked, but in the real world investigators would rather know that someone’s telling the truth so they can be eliminated. Also, police officers on the streets want a single integrated methodology, not a grab-bag of disconnected individual tools.

    In discussion, Lara Ballard suggested studying US consular officers as a population that has to deal with deception without training; another would be airport screeners. Neither group is trained and neither sees their job as a long-term career. John Mueller asked how reputation is changing; it stopped businessmen cheating their customers in small towns. Judith agreed: a big question is how reputation should work now that it has to be consciously engineered. Jonathan Zittrain remarked that the implicit association tests used to show people they’re racist are an interesting example of lie detection, as the objective is to help the subject self-improve; it escapes many of the methodological and ethical objections, but not all of them. Also, what happens to the Fifth Amendment when deception tests are subconscious? The Fifth doesn’t protect you if you sweat when the police ask you whether you did it. Judith remarked that universal implicit association screening would be repressive. Shannon French remarked that empathy can cause people to act unethically; Elena agreed, and noted we all need to be mindful about the research we do. Joe Nye remarked that from the Federal government’s point of view, it wasn’t the polygraph test itself that mattered as much as what employees disclosed in anticipation, when they were asked whether they’d ever smoked pot and so on. So the answer to the question “how good a lie detector do we need?” is “one that the subjects believe in.” This is about metadeception, not deception. Sophie agreed; polygraphs are known to be about deception deterrence as well as deception detection. There are many things you can do to make people more honest, from reminders to religion; hopefully we can do this in more honest ways than lying to people. Howard agreed there is a strong placebo effect, but that should not deter us from seeking proper drugs too.

  2. The second panel was started by Helen Nissenbaum discussing the philosophical underpinnings of privacy, as contextual integrity: is what we do a more truthful account of who we are than what we say? She replicated the Pew studies on information sensitivity using mechanical Turk, then used factorial vignette methods to explore the contexts in which people were happy for their information to be used. Her findings undermine the standard legal theories such as the third-party doctrine and “reasonable expectation”.

    Rick Wash spends his time trying to learn how real people understand security. People get their know-how from stories, in the media and from friends, and try to turn these into operational practice. In a recent experiment he instrumented 150 student laptops for six weeks and tracked all the passwords they used. People tend to reuse their stronger passwords, as those are the ones they have to enter frequently – such as when an employer forces a re-entry after twenty minutes’ inactivity – and are therefore the passwords that come to mind first.

    Jen King examines privacy decisions as a means of social exchange. It’s well known that people’s stated preferences are often inconsistent and can be manipulated, but there’s a lack of research into the social factors that confound simple models. She’s looking into whether information exchanges for service are seen as “free”, as part of a paid transaction, as contributing to a broader pool, or as subject to some kind of claimed assurance. Social exchange theory rests on four core assumptions: that people enter relationships to maximise gain or minimise loss, that these relationships create some dependence, that they are sustained by recurrent transactions, and that valued outcomes are subject to satiation. Of course, the individual and the company can have different views of the exchange, because of the power imbalance. She’s exploring genetic testing apps like 23andme, and pregnancy apps. For example, do visible links to privacy policies encourage or deter disclosure?

    David Murakami Wood is a human geographer, interested in scale and space. He argues that the era of globalism is ending in favour of a planetary era, with planetary surveillance as disclosed by Snowden being a natural scale for such activities. This isn’t perfect; city management systems often give an illusion of control rather than the reality, especially in developing countries. Surveillance isn’t “panoptic” but “oligoptic” – there are a number of intense surveillance systems through which people move. A long-term question is whether a planetary civilisation needs planetary surveillance, or whether it can operate without surveillance at all.

    Giorgio Ganis has been experimenting with the concealed information test. He incentivises people to lie about their date of birth and puts them in an fMRI scanner; he also tests them with evoked response potentials (ERPs) in EEG. Such lies leave different traces in the brain from lies with semantic content. He can get good results with cooperative subjects, but subjects who use countermeasures (such as confounding the signal with finger movements) can significantly degrade the signal. ERPs tend to work better because of the much faster response.

    Catherine Tucker has been investigating whether the Snowden revelations affected people’s search behaviour. DHS leaked a list of monitored search terms; she crowdsourced another one, and also got a panel to rate the top search terms for sensitivity. She then used Google Trends to compare search volumes before and after the PRISM revelations across eleven countries. She found a 5% decrease in searches for things like “anthrax” and “pipe bombs” in the USA, and a similar drop in more generally sensitive searches such as “anorexia” outside it.
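
    A back-of-the-envelope version of this kind of before-and-after comparison can be sketched with pandas; the file names, column layout and cut-off date below are illustrative assumptions, not Catherine’s actual data or pipeline:

    ```python
    import pandas as pd

    # Hypothetical inputs: weekly Google Trends indices per term and country,
    # plus crowdsourced sensitivity ratings per term.
    trends = pd.read_csv("trends.csv", parse_dates=["week"])   # term, country, week, volume
    ratings = pd.read_csv("ratings.csv")                       # term, sensitive (True/False)

    df = trends.merge(ratings, on="term")
    df["post"] = df["week"] >= "2013-06-06"    # first PRISM stories in the press

    # Difference-in-differences: the change in volume for sensitive terms after
    # the revelations, relative to the change for non-sensitive terms.
    means = df.groupby(["sensitive", "post"])["volume"].mean().unstack("post")
    did = (means.loc[True, True] - means.loc[True, False]) \
        - (means.loc[False, True] - means.loc[False, False])
    print(f"diff-in-diff estimate: {did:.2f} index points")
    ```

    A real analysis would of course also have to deal with seasonality, per-country trends and the choice of control terms.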

    In questions, Catherine said the chilling effect didn’t tail off, but then the Snowden files kept on being mentioned in the press. Robin Dillon-Merrill asked Rick Wash how he got students to hand over their password data; he simply paid them $10 a week for it. Laura Brandimarte noted that so much of the information economy is hidden and intermediated, which limits the validity of social exchange theory. John Mueller noted that from his data, the Snowden bump in privacy preferences lasted about two years, until ISIS came along. Lara Ballard noted a correlation between surveillance and population density, going back to the eighteenth century and still evident in different attitudes between US urbanites and country people out west. David agreed; surveillance systems exist to corral and control people, with the gates of ancient cities being at least as much about keeping the citizens in as keeping the barbarians out. It was driven by the paranoia of kingship. Mike Tschantz asked what anyone can do if they don’t want Google to read their email but all their friends use Gmail; no-one has an answer to the third-party privacy problem. Shannon French wanted to know whether the PRISM revelations helped drive people to use messaging systems such as Telegram; Cathy said she could explain about 0.5% of the missing searches, as Tor use went up sharply, albeit from a low base. Andrew Adams reported that people in some countries (not Japan) said they had changed what they did online. Serge Egelman asked about the difference between sharing data with a firm’s servers versus giving it to people; Helen replied that for most people, having email read by machines is OK, as they don’t realise machines are actors. Jen agreed; she thinks many people think they’re safe as one in a million, and wants to explore this more. David concurred; in his study of body scanners, people were more relaxed about intrusive but non-contact scanners than about the touching involved in a physical search.

  3. Bill Arkin arrived in West Berlin as a young intelligence officer in 1974 and was told “Go read the files”. He’s been doing that ever since, for decades now as a journalist. He revealed the location of the world’s nuclear weapons in the 1980s and started counting casualties after the Iraq war. Having been on both sides of the fence, he is uneasy about the effectiveness of intelligence. Two years ago, the feds tried to do facial recognition on everyone at the Super Bowl and got over 90%; in Pakistan in 2010 the US manufactured 17,000 sandals with a transparent tracking wafer in the heel; half were passive and had to pass sensors, while the other half were active and could be tracked by satellite. Two percent crossed the border to Afghanistan, becoming targets of interest; these were boiled down to three high-value targets, who were killed. That’s how people with too much money and time on their hands behave.

    John Chuang has been using consumer-grade EEG headsets costing $99 for authentication, inviting users to choose a secret passthought; this was reported at SHB2013. He has now been experimenting with ear EEG, where the signal is picked up by earbuds rather than by conventional EEG electrodes on the forehead. The accuracy is reduced from 99% to 80%. He also has been working on controlling a mouse by thought gestures; the accuracy is 90% for head-mounted and 80% for ear-mounted sensors. Next is the recording of large datasets of dozens of students’ synchronised brainwaves, and industrial applications for social and care contexts.

    Next was Sunny Consolvo from Google, who has been looking into how people share devices and accounts. She’s found six sharing patterns, highly influenced by trust and convenience. These were (1) borrowing, mostly about devices but occasionally about accounts; (2) mutual use, where computers are shared regularly, perhaps with separate profiles, and accounts are shared such as for access to content – no cases of mutual use of phones were observed, though; (3) setup, where people set up phones, accounts and so on for less technically competent users, or when devices were handed down to children; (4) helping, as when answering calls, or assisting someone indisposed, usually within a relationship of complete trust; (5) accidental sharing, where for example a mum and her daughter shared a Chromebook and the mum peeked at her daughter’s stuff when she forgot to log out; and (6) broadcast, when people wanted to share stuff. Most people did not realise how much they shared until they participated in the diary study.

    Ewout Meijer studies deception using skin conductance and other techniques to test for guilty knowledge. He found he could extract a terrorist plan in a drill in 35% of groups, got no answer in 45% and false positives in 20%.

    Tyler Moore has been looking at how firms manage cybersecurity investment by talking to CISOs. 81% of security managers say their top management is supportive, and 88% that their budgets have grown; 46% believe their organisation’s spending is about right, while 64% think their peers are spending too little; only 7% say their peers are spending enough, and 20% say their peers spend money on the wrong things. How is the money spent? The two most popular answers were “industry best practice” and “frameworks”. Almost no CISOs were using any kind of quantitative method; instead they’re turning to frameworks that are process-based, such as Cobit and the SANS top 20. Possible research questions include the extent to which the availability heuristic is leading boards to pay attention, but only to certain threats.

    Jonathan Zittrain is interested in algorithmic accountability, from Facebook’s ability to tell that two people are in a relationship before they announce it, to their ability to engineer an election by prompting one side’s supporters. They’d be in the soup if they were caught, but they have been near the soup a number of times. One internal meeting had the question “What responsibility does FB have to prevent President Trump?” That was repudiated once it leaked, but the age of innocence is behind us. Back in 2005 Google apologised when the hate site “jew watch news” appeared in search results for “jew”; but since then search has morphed from tool to friend. Facebook’s M and Apple’s Siri are the same. This leads Jonathan to the idea of “information fiduciaries”, whereby the big firms would have to put user welfare first, like doctors or lawyers. Should Google tell you to vaccinate your child? Already in Europe they suppress hate speech and promote counter-narratives. To whom does Uber owe a fiduciary duty – the driver or the passenger? And should data scientists join divines, medics, lawyers and surveyors as a learned profession?

    In questions, Butler Lampson asked whether we ordered newspapers to tamp down unrest, back when they were more powerful; Jonathan noted that there was a “clear and present danger” test. Serge Egelman called for the abolition of the term “data scientist” because of the implication that other scientists are faith-based. Could a truth-in-advertising law constrain Facebook? Jonathan suggested we’d just get a boilerplate warning: “Your feed may be curated.” Bill Arkin noted that 2016 has been the mainstream media’s best year; should they thank Donald Trump with more favourable coverage? And is the press coverage getting information security spending up to the right level, or do we need more? We are not good enough at quantifying losses and probabilities yet to answer that. Firms worry about data breaches because there’s an obligation to disclose; they may be underspending in areas where things can be kept quiet. The ethics of data science are interesting but, anecdotally, not a popular class among data science students. Joe Nye argued that on the ethical front we needed a truth-in-algorithms principle; but what might be feasible in practice? Should Google and Facebook have to put footnotes whenever their thumb is on the scale? Jonathan replied that we still have a concept of organic search being a tool that should return results independent of who’s asking; he’d favour a rule that they disclose whenever there’s a thumb on the scale other than for their convenience – which the companies might understand to be in their interest, as in future the thumb on the scale might often not be theirs. So might they agree to do it, or do we need regulation?

  4. The last session of Tuesday was started by Bonnie Anderson from the neurosecurity lab at BYU. Mostly we tune out security warnings because we’re busy; if warnings could avoid dual-task interference they’d be more effective and less annoying. An overload in the medial temporal lobe (MTL) is responsible, and she’s been working with the Chrome security team to investigate when is a bad time to interrupt a user (so: interrupt not during a video but after it, not while someone is typing, but while they’re switching between domains of different types …). Now she has an eye tracker that can be used with fMRI and is testing polymorphic warnings, jiggling warnings and much more. This demonstrated that polymorphic warnings are more resistant to habituation, and that eye tracking studies give much the same results as fMRI.

    Robin Dillon-Merrill has been investigating how students reacted to material about a possible cyber-attack taking out power supplies. Their willingness to act was not related to the existence of a problem so much as a clear solution to it, plus a feeling that “now is the time to act”. This last appears to be the missing piece in getting people involved in disaster preparedness, and she proposes wider testing with different strategies and non-student subject populations. We also need a better scale for “now is the time to act”.

    Serge Egelman reckons that most security failure comes from incorrect assumptions about human behaviour. When half the people don’t use passwords or PINs to protect their smartphones, why bother working on fancy additional hardware or odd biometrics? He’s been investigating why people didn’t lock their devices; this mostly came down to conscious decisions: some didn’t want to lock their phone in case they were in an accident and the ER wanted to call their partner, or in case someone found their phone and wanted to send it back to them. (Both Android and Apple let you put contact data on the lock screen.) All mentioned inconvenience, and 38% a lack of motivation. He did a month-long field study on 130 volunteers, and found that pattern unlock took as long as a PIN because of the higher error rate.

    Anupam Datta found that users who visited substance-abuse websites were targeted by rehab ads; and there are other cases of medical data being abused for marketing. He’s interested in contextual integrity, like Helen Nissenbaum, and algorithmic accountability, like Jonathan Zittrain. He suggests characterising the objectionable behaviour as explicit versus implicit use; how can you explain decisions? He proposes mechanisms drawn from statistics and economics for measuring causal influence, in a paper on algorithmic transparency via quantitative input influence.
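
    The core intervention idea is easy to sketch: measure how often a model’s decision changes when one input is resampled from its marginal distribution, breaking its correlation with the other inputs. This is only a minimal illustration of the unary case, not the full QII machinery in the paper (which also covers set interventions and Shapley-style aggregation); the model and dataset here are placeholders:

    ```python
    import numpy as np

    def marginal_influence(model, X, feature, n_samples=1000, seed=0):
        """Estimate how often the model's decision flips when one feature is
        replaced by values drawn from its marginal distribution."""
        rng = np.random.default_rng(seed)
        rows = rng.integers(len(X), size=n_samples)
        X_orig = X[rows]
        X_intervened = X_orig.copy()
        # Intervene: resample the chosen feature independently of the rest.
        X_intervened[:, feature] = X[rng.integers(len(X), size=n_samples), feature]
        return np.mean(model.predict(X_orig) != model.predict(X_intervened))

    # Rank features by their causal influence on the decisions, e.g.:
    # influences = {f: marginal_influence(clf, X, f) for f in range(X.shape[1])}
    ```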

    Angela Sasse has been studying usability for twenty years and is now convinced that most security non-compliance happens because people simply cannot comply with the mechanisms on offer. Shouting louder at people isn’t a solution. She’s now been using the Johari window to understand risk communication, and found that the well-known frictions and failures, such as the tension between security and your main job, and slow technology, move things from the “open” quadrant to the “hidden” one. Big policy documents that contradict each other make people angry. There needs to be more work on “security hygiene” – making sure people can comply with what you demand – so as to avoid cognitive dissonance, which causes people to talk down or ignore the risks in order to reduce it.

    Tony Vance studies habituation in communication. It’s not the same as fatigue, as polymorphic stimuli still work, but rather a means of saving effort. However people habituate to whole classes of UI designs; and notifications are becoming pervasive and desensitising in general. It’s not just habituation you have to forestall, but generalisation too. It’s good that the Chrome team worry about their warning design, but not sufficient; their good design can be undermined by others’ poor design (or downright mimicry). Tony has been using eye tracking and fMRI to explore all this.

    Questions ranged from the role of the aesthetic in warnings, to the fact that people rationally ignore warnings with a false-positive ratio of 15,000 to 1, to different responses to warnings on subjects’ own laptops versus lab machines. Even so, Angela did an experiment in 2011 where people used their own laptops, and only people new to computing paid attention; only alarms that are rare and reliable will be heeded. One factor is that the underlying technology is stretched well beyond what it was designed for; another is the vested interests deriving income from broken mechanisms like certificates. Fundamentally it’s bad underlying engineering, said Butler Lampson, that people have to worry about clicking on links in email; what we must do is take out the features that make browsers dangerous. In any case, clicking on links isn’t as dangerous now as it was ten years ago. On algorithmic accountability, how will this work in an adversarial environment? Anupam said his current measurement techniques would not detect all causes, leaving scope for conflict over the others, but more complete measurement techniques might be developed.

  5. Thanks for liveblogging, Ross!

    There’s an interesting tension between Bonnie Anderson’s timing of warnings and Robin Dillon-Merrill’s “now is the time to act.” There are many things for which now is the time to act (updating my LinkedIn and MySpace passwords, applying the latest security patch and waiting for the computer to reboot), not to mention getting a bit more exercise and cleaning out the yard. To extend Angela Sasse’s point a little, it’s not just asking for things that people can comply with, but effectively using their time budget for compliance. (Which, of course, was her observation.)

  6. Fareed ben-Youssef started the second day’s sessions by discussing the cinematographic portrayal of cowboys, superheroes and terrorists. What does it mean when Obama says that the Middle East is Gotham City and ISIL is trying to burn it down? Clearly, he suggests he’s Batman – and superhero films increasingly feature 9/11 imagery, with devils coming from the sky rather than from under the earth. Vigilantes such as Batman, Birdman and Superman all suggest the protection from the sky of drone warfare, and help normalise extralegal state action. Enemies are depersonalised; only Americans are given a face, while those who harm them are other. We need a sophisticated critical approach to understand such metaphors and their effect on culture.

    Shannon French is interested in warrior cultures throughout history, and believes their key relationship is with death. Emotions such as bloodlust, vengeance and fear battle with cognitive control; in order to face and deliver death on a regular basis, warriors have to believe there are fates worse than death. She discussed the Helmand 2011 case where Sergeant Blackman shot a Taliban prisoner, saying: “Shuffle off this mortal coil, you cunt. It’s nothing you wouldn’t do to us.” Yet warrior codes are not about tit-for-tat but identity; what we are is independent of who they are. If you tell the troops to take the gloves off, you deprive them of the one thing that protects them against feeling like a murderer when they return. What’s more, the bond that troops have with each other is one of love and trust, based on shared identity as well as confidence in each other’s competence.

    Alex Imas is a behavioural economist interested in time preferences, such as the effects of cooling-off periods on consumer choice. He has been looking into how exposure to trauma affects decision making. Previous studies of survivors of tsunamis and civil wars suggested they place a premium on certainty. He studied people who have witnessed violence in Bukavu in the Democratic Republic of the Congo, working with a grocery store that, unlike most local dwellings, had a refrigerator, so people visited it daily. Some people got a coupon they could redeem for a bag of flour today, two bags tomorrow, or up to five bags in five days’ time; others got coupons that paid the same except a day later. A survey showed that a third of the 258 participants had been exposed to violence in the last year. In the immediate treatment, a quarter of subjects took the flour now; in the one-day delay case it was 8%. Among those exposed to violence, 35% took the flour now, against 18% of those who had not been; in the delayed treatment there was no difference. So the delay made a fundamental difference to trauma victims’ behaviour. The model most consistent with this appears to be Herb Simon’s model of heuristic behaviour.

    Steven LeBlanc is an archaeologist studying warfare in the deep past; see his book. He can find no societies that were peaceful for extended periods of time; warfare is a universal. In fact chimp warfare is a reasonable model for early human behaviour; even chimps that grew up without adult teaching go on raids, so there are learned and genetic components. Forager warfare is the same. Andaman Islanders and Eskimos both wear body armour; farming brought fortification and specialisation between combat and logistics; states brought a decline in violence, as Pinker has described. A good source on the evolution of violence in males is Wrangham; there is no other explanation for males’ greater upper body mass. Our genetic heritage is worth taking into account when designing systems.

    David Livingstone Smith’s talk was on Pastry, terror and magic. Why are we suckers for political rhetoric? Plato’s Gorgias remarks that most people would take food advice from a pastry chef rather than a doctor; we’re vulnerable to pandering, as we’re vain. Freud’s take was that people feel helpless and seek salvation. Roger Money-Kyrle listened to Hitler and Goebbels, and wrote on the psychology of propaganda; on magical solutions to the terrors of helplessness. The modus operandi was to induce helplessness, then paranoia about enemies, then the megalomaniacal solution “I will save you from harm; we will be great again.” Donald Trump’s speeches follow the same pattern.

    Molly Sauter is working on a theory of general political disruption. Disruptive politics tend to be tied to radical ideological nostalgia and are in tension with both liberal politics in general and innovation in particular. In online media in particular, the lack of strong rights protections and the breakdown of geospatial controls let disruption be more powerful; it can drag up stories and events from the past more easily; it’s no longer “misty-eyed myths among the old and weak-minded” but a creative engagement with the past. Disruption in Silicon Valley is slightly different; it is more Schumpeterian creative destruction because of the fetishisation of progress and innovation and the view of society as atomised individuals plus monetisable resources. What does this mean for the political potential of technology? It may act as a relief valve, bleeding off the pressure that could otherwise build political momentum. So do we opt out? We have run out of new worlds to retreat to.

    In discussion, David Murakami Wood noted that seasteading and Mars colonies are also a form of radical nostalgia, but for the science fiction world of the 1950s and 60s. Fareed agreed: nostalgia is, in Greek, the pain from a forgotten wound; these scars came into existence during WW2. Nostalgia can also be protective: when he took Jerusalem, Saladin was tempted to sack the city in vengeance for the first crusade, but decided not to, and gave his reasons in detail. Stories like this can show the warrior he’s not the first to be tempted. Was there selective migration in the Congo experiment? No evidence of that. And how long did people wait? Those who waited almost all waited the full five days. John Mueller noted that while people get nostalgic about Vietnam, no-one cares about Korea, or Gulf War 1; and the US civil war monuments date from the 1890s and later. Why do people pick up on some things, like cutting holes in their jeans? More to the point, how does popular culture affect the military? Shannon said that the TV show “24” had a terrible effect on recruits to Annapolis and West Point, as they wanted to be Jack Bauer and torture people. They had to get vets and older sergeants in to holler at them: “If you think you can play games with your troops’ souls you don’t belong in the army.” Karen Levy mentioned Betsy Armstrong’s work on women who eat their placentas because “animals used to do it”; this nostalgia is entirely imaginary, like much tech criticism, and writing that valorises work as “craft”. In short we use nostalgia as a crutch, in all sorts of ways; if the world was nice in the past, it’s easier for us to be nice. It is a myth so deep that most people cannot believe it, as Steven remarked.

  7. Andrew Adams has been investigating attitudes to Snowden in a number of countries in Europe and Asia, asking questions like “Would you do what Snowden did if you were an American?” and “Would you do what Snowden did, in your own country?” Everyone believes in the right to privacy, with Spanish and Mexican respondents particularly strong; but most people don’t think they understand the right to privacy well. Knowledge of Snowden varied, being high in China (which publicised the revelations as a means of normalising their own state surveillance). Overall the results are consistent with the Pew surveys in the USA; people reckon Snowden served the public good, with young people being particularly strong supporters (Japan is an outlier; no more than 10% would emulate Snowden, saying they’re risk averse).

    Adam Harvey is an artist whose works include a burqa that blocks infrared surveillance from a military drone, and other variants on Islamic dress that provide separation between man and drone instead of between man and God. If the engineering and economics now favour total surveillance, what is the future for protecting facial non-verbal communication (or even preventing facial recognition)? He has created facial ornaments, hair styles and make-up inspired by WW1 dazzle camouflage. Face detection algorithms do worse on his styles than on pictures of soldiers in camouflage. He’s criticised for being anti-American, for doing things contrary to national security (but then, Theodore Roosevelt considered camouflage to be unmanly; by the end of WW2 it was considered intelligent).

    Butler Lampson argues that your personal data should carry a tag that links to your privacy policy and that stays with copied data and results computed from it. It is vital that data which get re-identified have tags added to them; we will have growing problems with data from the physical world. Tag respect should be a condition for data handlers in regulated industries. This doesn’t protect against coercion, or unregulated industries. He envisages no more than 5-9 types of data so that a policy can fit on one screen. There will be a need to certify applications that do aggregation and remove tags; ways of dealing with joint rights; and ways of dealing with different personas. Butler believes that some system like this is inevitable.
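
    One way to sketch the propagation rule being proposed (my illustration under stated assumptions, not Butler’s design – the field names and categories are invented, and a real system would need certified services to aggregate data and strip tags):

    ```python
    from dataclasses import dataclass, field

    @dataclass(frozen=True)
    class Tag:
        owner: str        # whose data this is
        policy_url: str   # link to that person's privacy policy
        category: str     # one of a handful of coarse types, e.g. "health" or "location"

    @dataclass
    class TaggedValue:
        value: object
        tags: frozenset = field(default_factory=frozenset)

    def derive(fn, *inputs: TaggedValue) -> TaggedValue:
        """Compute a result from tagged inputs; the result inherits the union
        of the input tags, so the policy links survive copying and computation."""
        result = fn(*(i.value for i in inputs))
        tags = frozenset().union(*(i.tags for i in inputs))
        return TaggedValue(result, tags)
    ```

    So a result computed from two people’s data carries both owners’ tags, and whoever ends up holding it can still look up the policies it is subject to.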

    Karen Levy’s talk was on Boss Buttons and UI Tattletales. Basketball websites have “boss buttons” to press when your boss comes by; what’s the general case of protecting privacy from physically copresent adversaries? She’s investigating the extent to which the personalisation of predictive text entry on mobile devices is a tattletale: if your partner picks up your phone, or if you share a tablet, they see your favourite words. What can be done? An app called “Aspire News” is a domestic violence help app masquerading as a news reader; the UoW’s Pivot, aimed at sex trafficking victims, is a slip of paper distributed in sanitary pads at truck stops – it carries helpline numbers, looks like a fortune-cookie message, and is water-soluble so it can be flushed. Such apps need serious attention to UI and the context of use.

    John Scott-Railton reports from the trenches. He’s been investigating Iranian government phishing attacks on the Google 2fa used by Western activists. A phisherman called Jillian York of the EFF 34 times, starting at 5 in the morning, trying to persuade her to open a Google doc; the idea is that the MITM causes Google to send a 2fa code to you, which you then enter into the bad website. The UAE ran a similar attack against Rori Donaghy, using a malicious shortener that also attacks Tor and reads off your AV; the attack was a PowerShell script. It turned out that a whole lot of the people this person had contacted on Twitter got arrested. This is the standard way for state-sponsored attacks against civil society organisations, with mobile devices the main target and a lot of attackers masquerading as journalists. All NGO folks should watch the Usenix talk by Rob Joyce from the NSA’s Tailored Access Operations unit.

    Alessandro Acquisti has written a survey of the behavioural economics of privacy; the earliest work dates back only to 2004. Chicago economists tend to the view that targeted advertising is a win-win for both consumers and advertisers, and that privacy regulation might create market failures where none exist; yet data sharing creates losers as well as winners, and there are not only monopoly concerns but issues around the distribution of the surplus generated from personal data. He is writing a series of survey papers on privacy.

    In discussion, Butler said that the privacy defence must often be plausible deniability, a field we really don’t understand. The ecosystem needs to be better; it’s good that we have private browsing modes, but even when people remember to use them, their predictive text keyboard is still remembering the URLs they type, and there will continue to be convenience and other pressures to just log in everywhere with Facebook. What does behavioural welfare look like online? Helen Nissenbaum argued for tagging systems to incorporate social policy; for example, can you or I tag a photo I take of you? Butler agreed completely; his focus is merely a common technical underpinning that would support a variety of different rules. Alessandro noted that privacy economics has been too successful, as you commonly can’t win a case unless you can prove economic damages; and although there isn’t yet evidence of the savage price discrimination predicted in the 1990s by writers like Odlyzko, the effort has gone into targeted advertising, which leads to poor choices. Karen observed that privacy for vulnerable groups is hard, but there are some usable hacks, such as abused women registering their last address but one; John added that it’s notable that most people aren’t born high-risk but suddenly become so, so some privacy protection must become the default. Finally, Alessandro said that privacy is culturally specific but also culturally universal; all cultures have privacy behaviours, but each finds its own.

  8. Max Abrahms kicked off the terrorism session by arguing that most of what we think we know about ISIS is totally wrong. We’re told that ISIS is unusually successful because it’s unusually mean, and spreads by social media, which we should therefore censor. Yet the USA has the most ISIS Twitter users, and Tunisia very few; Tunisia provides way more jihadis. In fact ISIS made its big territorial gains two years ago, before its social media campaign began, and it has now taunted or offended so many people that it’s short of fighters, and there’s no shortage of Shia, Kurdish and other fighters to oppose it. The media has obsessed about the recruitment effects of barbaric branding while ignoring the attritional effects. Groups with a more moderate branding are getting more Gulf money and volunteers, and there’s a long history of groups that attack civilians losing out to groups that do targeted attacks on the military. Quite simply, ISIS has a stupid leader with terrible brand management. In general, attacks on civilians are associated with weak group leadership.

    Lara Ballard has written a history of how US surveillance has been inspired, and driven, by practices in Europe. The USA had surveillance systems for the Mexican-American War and the Civil War, but none at other times. Only with WW1, when America engaged with Europe, did an apparatus of surveillance become permanently established, along the lines that had developed over the previous century in Europe. A foreign intelligence agency had to wait for the establishment of the CIA in 1947. The history was written in a personal capacity and is available directly from her. The takeaway message is that a surveillance history must tackle not just the wars you like to talk about, but the wars you don’t; labour disputes; colonialism; restrictions on free speech; and the effects on minorities. It’s all about who’s watching whom and how, and with whose permission; such activities just don’t happen without some underlying conflict or tussle.

    Richard Danzig is interested in the effect of technology change on intelligence consumers. Nicholas Negroponte noted that photography was invented by photographers but TV was not invented by actors; since the 1980s, the intelligence world has changed from being photographers to being TV actors. The private sector overtook the federal sector in R&D in 1964; Asian R&D has now overtaken American. Now that the security establishment follows the technology, the strategy follows too (this started in WW1 with the railroads). Second, everything’s becoming path-dependent; in the old days offence dominated defence, while now the move is to quieter, more manipulative exploits. (The intelligence competition between the USA and the USSR was more direct than the military competition, which helped to pave the way for the acceptance of cyber-attacks.) Third, despite the tech change, intelligence bureaucracies are bad at coping, as they’re sclerotic and resist lateral entry.

    Richard John’s talk was on deterrence and coordination in a three-player cybersecurity game. Attackers, defenders and users are not always well coordinated; can the attacker exploit poor coordination between defender and user? He set up a hacking game, built with the oTree toolkit, using bots for defenders and users as well as volunteers crowdsourced via TurkPrime. Correlated behaviour led to better deterrence.

    John Mueller’s book Chasing Ghosts showed that US security expenditures since 9/11 would have had to have saved 170,000 lives to have been cost-effective. Bergen’s United States of Jihad examines the plots since 9/11 to try to assess the actual lives saved; almost all plot leaders were buffoons or fantasy terrorists who would have got nowhere without provocation or even assistance from government informants. Their numbers remain small, their determination limp, and their competence poor. Marc Sageman, who has seen the classified plots, confirms the pattern is the same there too. Yet the number of Americans thinking another big terrorist attack is likely is still 5%; the proportion very or somewhat worried is steady at 40%. In addition to the flashbulb effect of 9/11, its linkage to the large hostile Islamic “conspiracy” seems to feed the beast. In fact, 52% think domestic terrorists are a threat to the existence of the USA.

    The last speaker in the terrorism session was Vera Mironova, talking about Syria. Given the skyrocketing rate of defections from ISIS, should we fear Al-Nusra? In the old days it was hard work to set up a rebellion; now market entry is much easier, thanks to social media, leading to fierce competition between many small groups. Most civil wars since 1989 have had more than one rebel group. She surveyed 500 current and former fighters and others in Idlib. People who stayed did so to support fighters or protect family; less militant fighters were mostly motivated by revenge, militants by revenge plus Islam. She asked militants about the motivations of others in their group, and the answers were the same as from moderates. A big reason for switching groups was that if something happened to them, the group would help their family; many had switched five or six times. Islam was the second least popular reason for a switch; looking after fighters came top, followed by corruption, bad bosses, missing pay and inadequate training. People quit the war for the opposite reasons they joined, starting with losing hope in victory. What would induce leavers to return is strong Western support (plus a good working environment). It is now harder to get into Al-Nusra than Harvard; you need three references and get followed for two days.

    Discussion started on whether the jihadists should get consultants in. Vera said that in former Yugoslavia, Croatian brigades were run by former foreign legionnaires; new laws against military assistance prevent that kind of professional input and largely damage the more moderate groups. John remarked that the British film “Four Lions” was a remarkably accurate portrayal of lone-wolf terrorism, despite trying to be comedy; but counterintelligence is often comedy too. John Scott-Railton warned against taking ISIS too lightly, as they hold territory and have killed a lot of people. Max countered that ISIS is losing territory steadily, while moderate groups are making progress. John noted that ISIS succeeded initially because of the stupendous incompetence of the Iraqi army that the US army created using his tax money. Lara remarked that we might have forgotten the lessons of the counterinsurgency war in the Philippines. Max noted that when the USA invaded Iraq in 2003, people said they were doing just what the terrorists wanted; they said the same when Spain responded to the Madrid bombing in 2004 by withdrawing from Iraq. Whatever politicians do, people will perceive terrorist success. Joe Nye argued that success for terrorists is about fear and controlling the agenda; by this yardstick terrorists have been enormously successful, even if their actions are often more theatre than effective military or state-building action. He then asked how the need for control, or the illusion of control, would be affected by the spread of AI. Richard agreed that getting the intel community to be more agile and absorptive would be a continuing challenge, and that AI is just part of a continuing technology trend. Richard asked John whether he’d actually do TSA differently; John said he’d cut back air marshals, behavioural detection and other waste. The most effective anti-terrorist measure of all is not to overreact.

  9. I started the last session. I’d planned to speak on the emotional costs of cybercrime and secondary victimisation, but the previous day’s talks decided me to talk on the Nuffield Bioethics Council report instead.

    Bob Axelrod’s theme was making agents accountable. Humans are wired to see agency everywhere; if you stub your toe on a rock, you get angry at it even though you know that’s stupid. People already see software as having agency; how can you hold agents responsible? You can train a dictation program, and in time AI may make this better. You can also switch to a different dictation program. Daniel Dennett noted that agency evolves; so can software. The policy implications are (1) assign responsibility to the agent that can avoid harm at least cost, (2) hold agents responsible for promises, (3) require transparency, (4) make reputation motivating (e.g., valid and prompt), and (5) encourage chains of responsibility and learning algorithms. We can even do anticipatory agency, as when the Israelis warned Hamas they’d be held accountable for all rocket attacks from Gaza, as they were in a position to prevent them.

    Yochai Benkler discussed various approaches to accountability failures; there is nothing particularly unique about national security, as organisations fail in many ways, studied by economists, sociologists and others. We expect that large bureaucracies will fail spectacularly often rather than being dependably competent; unless the agencies have a magic exemption, they will be as bad, and in fact have amplifiers of the usual classes of errors, including secrecy, segmentation, and rare and opaque outcomes that are often too overdetermined for responsibility to be assigned. The Maginot line is the norm rather than the exception. If national security is that important, we should not give it a free pass but make it much more transparent. If we believe that security is the role of the state (the standard answer to Snowden), we must still acknowledge that the state is more than one actor, and its subsystems depend on being part of an open system on the outside. Innovations such as the independent advocate on the FISA court actually matter; we need institutional designs that deal with the problem. However, systems such as Bullrun and Muscular completely escaped control.

    Jack Goldsmith made the point that the 9/11 attackers looked like knuckleheads, and the government won’t be willing to take the risk that they’re not. Things are not as bad as just after 9/11 but there is still massive risk aversion. There is no cooler cat around than Barack Obama, and he’s been widely criticised for not having a visceral enough reaction. Yet Obama read the riot act to his cabinet after the pants bomber, asking what would have happened if the bomb had gone off: “If it happens again, you’re all fired.” That’s why the senior FBI guys want more powers; for them it’s only punishment if they get it wrong, and they want everyone to be on notice about what’s at stake to cover their ass. The effect of Snowden was to change the NSA mindset that there were no benefits at all from openness; they could not have got out of this without Snowden as they didn’t know how to speak in public. Curiously, successive whistleblowers have made the NSA stronger rather than weaker. The one thing they lost was the metadata program, which the NSA was thinking of dumping in 2004 when Jack was in government; now they have more metadata than before as it’s held by the carriers, and it’s much cheaper, and almost as available. So we owe Snowden thanks for strengthening the NSA.

    Joe Nye has been thinking about cyber-deterrence. Why hasn’t there been the electronic Pearl Harbour of which we’ve been warned so often? Why is there so much hype following a few incidents (Stuxnet, Shamoon, Ukraine…)? Deterrence can involve traditional denial or retaliation, but also punishment and cross-domain effects. America changed Chinese behaviour on espionage by threatening to put senior Chinese officials in jail. There’s also entanglement; if the Chinese took down the US grid, it would take down their economy too. Then there are norms, especially if you develop them to the point they become taboos and there are real costs to breaching them. Here, the G20 states are trying to make certain civilian targets off-limits. Many of these mechanisms are attribution agnostic. Our thinking on these issues is about where our thinking on nuclear was in about 1962.

    Bruce Schneier has been thinking about the Internet of Things, in terms of sensors, processing and actuators. In effect we’re building a world-size robot without stopping to think about what we’re doing. Talk about too big to fail! We’re not good at security, and standardisation is making attacks scale in ways they haven’t before. As attackers get more powerful, we will have fewer of them; but they will be truly alarming, not just disturbing, and this will lead to louder calls for the government to do something. There will be conflict between engineering styles too: do we try to get it right first time, or throw it out there and fix it fast? Government is bad at technology and works in silos; while the FAA can talk planes and the FDA medical devices, no-one can talk computers in general. The same algorithm will have different rules in different jurisdictions. Government will come in heavy-handed, and we’d better start thinking about what they can do that’s beneficial. Do we need a new agency to think in a holistic manner? We are regularly surprised by emergent properties online, and in physical systems they will be more scary. We are good at panicking, and the panic is a greater threat to freedom than the threats themselves. Policy and technology people had better work intimately together.

    Discussion started with whether lowest-cost liability would put Google on the hook for everything. Google might be more able to block hate speech than ISPs, but this would be unlikely to block the underlying harm; the cost might be low in dollars but high in civil liberties. Butler asked whether European countries were not entangled in 1914; Joe said the price of war was indeed catastrophically high but they fought anyway (three empires failed, fifty years of chaos); perhaps given a crystal ball they wouldn’t have. As for siloed government, the FCC is looking at privacy in ISPs while the FTC is looking at networked cars; different agencies regulate pizzas with and without meat. I noted that the EU does have DG Infosoc but doesn’t feel that different from DC. David Murakami Wood asked whether, if we have a global robot, we can have a global government. Lara Ballard agreed that getting accountability in the secret world was one of the most vexing problems in institutional design; but without oversight the executive will start using the machinery against its opponents. The legislature must be able to ensure that the executive doesn’t use the intelligence apparatus on the legislature. There are also strong internal dynamics, such as the difficulty of firing permanent officials. Yochai replied that systems which don’t rely ultimately on openness undermine their own effectiveness; and the failure modes don’t all happen within the system you’re treating, as social norms may change, or the technology. On the privacy front, what happens once everything in your life is an active agent, trying to sell you stuff – what protection will we have against the government using that? What protections will we need? Bruce mentioned JZ’s Faraday mode: the right to have any device work without network connectivity. I mentioned the growing split between the USA and Europe since the Google Spain case: how many regulators will there be – the US legal system and the EU bureaucracy? Shannon French reminded us that ethics was about living a good life; the conversation should also be about unlocking the good in humans and making lives better. How can we do everything from helping the weak to allowing justifiable rebellions? Rather than putting the genie back in the bottle, how can we make it work for us?
