8 thoughts on “Security and Human Behavior 2018”

  1. George Loewenstein started the workshop describing work with Erin Carbone on information sharing. Why do 730 people write a toaster review on Amazon, particularly the people who gave middling reviews rather than praise or blame? Classical economic models don’t explain curiosity, where people seek information for its own sake, or information avoidance. In previous work he found that people didn’t clam up about sharing information until you rang alarm bells, so sharing avoidance is weak by default. To understand privacy, we probably need to understand its flip side, the desire to share. Why do people want to share stuff? 83% of people report an incident of “dying to tell” something; 87% of what we share about ourselves is positive while 57% of what we share about others is negative. Many feel they overshare. Negative experiences shared were adverse comments from friends and bad customer experiences, and were about emotional regulation, while positive sharing was about self-enhancement or rehearsal.

    The second speaker was Lorrie Cranor, talking on testing real people with simulated attacks. User studies in security and privacy are different because of the adversarial element; can using a simulated adversary undermine validity? Sure; so can we design legal and ethical studies to deal with this? Options include observing real-world behaviour (which takes time; she has a “security behavior observatory” which pays people to use spyware); observing hypothetical security tasks with simulated risk in deceptive studies where subjects are debriefed afterwards (though making security salient when it usually isn’t can also undermine validity); observing non-security tasks and popping up warnings; and running economic incentive studies where people have a genuine incentive to protect their game assets.

    Next was Laura Brandimarte, describing work with David Sidi on what she calls the analogue keyhole problem: how do you prevent shoulder surfing on password entry in the presence of pervasive IoT devices that watch and hear everything? Modern vision APIs make this worse all the time, as you can filter for interesting stuff like keyboard entry; crypto isn’t very relevant for protocol design, and coordinating users’ and devices’ permissions is tiresome. A way forward may exist when the adversary isn’t the device owner but a third party who may attack the device owner’s servers, and where user transparency is the key. Their focus is on warning users that they are being observed, using Localino to tell them when they’re in sight of a camera; they’ve been studying the extent to which this influences occlusion events, such as a user shielding their phone when entering data.

    Alisa Frik is studying healthcare tech for older adults, from alert buttons and fall detectors through voice-activated lights to care robots. She’s interested in patients’ beliefs, starting with qualitative research based on interviews to find out what they currently think and what they do in terms of mitigation. She will then do surveys and field studies leading to participatory design. She had anticipated that the interesting aspects would arise from technical complexity, such as context-aware devices and multiple-user systems. But so far (after 35 interviews) the main emergent point is that seniors try to conceal income. There are multiple reasons: they tend to be poor and some benefits are means-tested; they’re worried about family taking their money, or about fraud and scams; or they’re afraid of robbery.

    Ashkan Soltani was last, describing work on a ballot initiative, the California Consumer Privacy Act. Ballot initiatives can be effective in areas of high regulatory capture such as tech (an example being data-breach notification), where there’s a conflict with the legislature’s interests, or with third-rail issues such as gay marriage and marijuana legalisation. Most, however, are funded by rich people, as collecting hundreds of thousands of signatures is expensive. Ashkan’s role was to rewrite the proposal to make it work. It’s modelled partly on GDPR and partly on Do Not Track; you can get a company to give you all the data it sells or shares, and request that it stop. The company must provide this opt-out and cannot discriminate on service. You can also get damages for breaches without proving actual harm. Behavioural advertising is allowed so long as the information is only used for targeting, not sold or shared. All this had to be done in three weeks!

    Questions explored whether privacy research is cross-cultural – Petronio explains relationship building as progressive disclosure, so there should be some universals if she’s right, and the same may hold with theories of privacy as more general disclosure management; there may also be a learned component. The underlying motives therefore bear study: notoriety is one way of getting impact, and such things are likely much more solid than “sharing”. Information avoidance and motivated beliefs are also likely fundamental. People defend their views, and sometimes ignorance can be rational as well as just psychologically motivated. Social influence has an effect too, as does feedback or its absence, particularly if the motive was getting an audience. Hydraulic metaphors often signal such information motives: curiosity as a vacuum cleaner, or bursting with the need to share. Asymmetry is also common: people want to show their holiday snaps to people who dread this. People can become addicted to sharing, and social media are exacerbating this. Ignorance can be bliss: when Lorrie projected people’s search terms on a wall, people were uneasy and blamed the projector; when it was turned off they were happy even though the search terms were still going over wifi. We gravitate towards proximate cause rather than ultimate cause. And how are communities set up? Do we feel loyalty to our fellow Amazon reviewers, like we feel loyalty to fellow members of a babysitting collective? How can we measure societal norms when trying to design stuff? It’s hard given habituation and drop-in-a-bucket effects. Could we do experiments comparing the behaviour of celebrities versus paranoid people who think they’re of interest versus narcissists versus random people? Much of the variation across cultures is probably not so much about a general propensity to share as about what things people will share: for example, people in China talk about their income while in the USA that would be shocking. This is generational too. What do we share less, apart from phone numbers? Might we share negative stuff less now that social media provide so much rapid feedback? Also, people are becoming less willing to do surveys and polls nowadays; some phone surveys get 99% of respondents just hanging up.

  2. Nicolas Christin has been doing a cross-cultural study of security behaviour across seven countries. The existing security behavior intentions scale (SeBIS) of 16 items measures things such as how often people update their devices. Translating the scale is hard enough as it has to be validated; it turned out to not work at all in Japanese, as questions with double negatives (“don’t you think that…”) just don’t go over. The whole scale had to be redone so that it worked in Chinese, Korean, Japanese, Arabic and Russian as well as English and French. Anyway, he found that users from some cultures are likely to be less secure, that other factors such as self-confidence matter too, and that security tools should be customised.

    Serge Egelman has come to believe that security messaging has to be customised to the individual recipients, but this has been hard to do in practice. The one subfield with extensive work is password nudges, so Serge has been getting mTurkers to role-play changing passwords for accounts. He treats the subjects in various ways, including asking for a stronger password and prompting with the “correct horse battery staple” meme. He then used psychometric measures such as need for cognition and consideration of future consequences, collected via mTurk IDs, and correlated them with the effects of the various nudges. All the nudges work, in some sense, so many existing studies may be criticised for the absence of a placebo nudge.

    Tony Vance worked as a consultant before becoming an academic, and was startled at the difference in security culture between firms. It was not surprising that Home Depot had a huge breach: they resisted security advice, saying “we sell hammers”, and some staff were telling friends “If you want to shop at Home Depot, use cash.” He’s now planning to quantify the effects. He refused to be drawn on a definition of culture, but some parts of it are clear, including top management commitment. For example, it’s come out that Marissa Mayer failed to pass on security information at Yahoo to her CISO as she didn’t want him asking for more budget. He’s collaborating with Risk Recon, which has a set of benchmarks inspired by the idea that while you may not be able to check a restaurant’s kitchen without being a nuisance, you can certainly check the bathrooms. He’s interested in how research might best be conducted with this dataset.

    Vaibhav Garg has been thinking about security standards and how they can be made sustainable. Small vendors in particular pay a disproportionate cost if they have to recertify against multiple standards over time. He’s been contemplating private and public solutions, and it turns out to be hard to assess what makes a good standard; he’s been using Elinor Ostrom’s five-dimensional framework for the sustainability of forests, fisheries and the like. This consists of exclusion (of people who don’t abide by community norms); a low rate of change (rapid turnover of population makes norm building harder); frequent communication (which makes norm building easier); monitoring (so that unobservable cheating is hard) and enforcement (so that cheaters can be held to account). This has helped him identify gaps in proposed IoT standards.

    Roger Dingledine works on Tor, which he describes as privacy to his parents, anonymity to geeks, reachability to censorship victims, and traffic-analysis-resistant communications to governments. You can’t have a safety system that’s only for cancer survivors, as the fact that they’re using it would be a giveaway. The communications around this complex ecosystem are nontrivial; knowledgeable people need reassurance about transparency more than anything else. Vulnerable people are much more at risk though, and Tor is basically about power imbalances. His adversary isn’t just the government of Syria, but Blue Coat in Sunnyvale, which sells Damascus its censorship systems. Finally, the dark web basically doesn’t exist, at least the way the press represent it; the biggest website accessed through Tor is actually Facebook, and half of Tor’s funding comes from the US government.

    The discussion started on sustainability. In Tor’s case it’s about building an ecosystem and maintaining funding streams to subsidise relevant research; in industry a critical problem is that chief information security officers typically last only a couple of years or so, so institutional memory is lost, especially now that CEOs are keen to fire the engineers who maintain legacy systems in the hope of saving money. In the Tor world, there was evolution in countries with unpleasant governments: people who used VPNs or other weak tools ended up in jail or dead, and only the Tor users are left. However corporate America isn’t exposed to the same evolutionary pressure. Another factor is whether the people at the front end have an incentive to understand and use the tools properly; activists in bad countries do have this incentive. One way forward is to invest in training staff to care about security, but this is usually done badly: most people dread having to take badly designed security awareness courses. Jean remarked that computer security is actually caregiving, but it might be best not to advertise this as caregivers are not valued. However Tor has worked, and has ended up being the community of people who do it right; perhaps that’s why there’s only one Tor. Network effects matter, and are a bigger deal here as it’s easier to hide in a larger crowd; they help on the assurance front too, as there are 20 professors out there attacking Tor and maybe 100 grad students. Cross-cultural effects include complacency in countries that are physically safe and politically stable; this may be rational, as in the USA you don’t have to worry about being killed for what you say. There is a reverse network effect.

  3. David Smith started off by describing a lynching in Paris, Texas, in 1893, where Henry Smith, a black man, was tortured to death in front of a crowd of 10,000 people in a widely advertised event that got wide press coverage, which described him as a beast or a monster. A century later, in 1995, DiIulio came out with similar language describing “superpredators”, and the same dehumanising language is being used by President Trump; see for example a press release about gang members that uses the word “animals” ten times. What’s going on, with the “animal” characterisation drifting into “monster”? The monster is out of the natural order, adding metaphysical threat, and has been seen in Nazi propaganda showing Jews as human-rat chimeras. David suggests that our inability to think of other humans as “nothing but animals” may be responsible for some of the worst excesses of dehumanisation.

    Lydia Wilson works on what to do with returning ISIS fighters. There are programs at the level of the individual (deradicalisation), the community (re-engagement) and the nation (risk assessment). Many of the resulting risk reports, which are now produced in large quantities, really talk about how the young men were radicalised to go there in the first place. Much of the detail is classified, but everyone’s recycling everyone else’s interviews endlessly, manufacturing “facts” that are surprisingly durable. The authors are often commercial firms given defined budgets and definitions of what sort of people to interview; the perspectives of social workers, religious leaders and other knowledgeable informants are usually ignored. There is no critical analysis of the propaganda, or indeed of the reports. We understand the drivers of radicalisation fairly well, both in general psychological terms and in terms of the clusters of local factors around lack of respect, opportunity and so on, and the ISIS appeal of community, kinship, caring and brotherhood. However we can’t make any useful predictions from such theories, as over 99% of young men in such situations don’t succumb. Insiders take the view that we must see terrorist attacks as being like road traffic accidents: we can cut the incidence in various small ways but have to accept them as a fact of modern life. The anti-radicalisation activities in many countries actually raise the risk, but the evidence needed to establish this is made unavailable by the classification system. Access to the subjects is restricted to insiders. So as well as fake news, we have to beware of fake information.

    Tamar Mitts sees counterterrorism as coming in repressive and non-repressive flavours, with governments focusing on the former and the CVE (countering violent extremism) community on the latter. She’s working on community engagement initiatives and has a number of criticisms: they unfairly focus on Muslim communities and may be generating a sense of alienation, but we don’t know. So she has put together a database of over 47,000 Twitter users with geolocation to study whether DHS interventions had any effect, using natural language analysis to measure radicalisation. She finds a very small decrease in pro-ISIS activity in the treated areas while nothing observable happens in the control areas, but does not know whether this represents a change in sentiment – or just young Muslims being more careful about what they say.

    Ben Friedman is interested in preventing government overreaction to threats. The marketplace of ideas in which John Stuart Mill believed is a failed market, as there are many obstacles to rational government, including cultures of risk perception that are in turn conditioned by iron triangles of competing interests. Rational argument often fails; for example, Dwight Eisenhower tried to calm public panic about Soviet ICBMs in the 1950s (knowing from secret briefings that the USA was ahead and that the Soviets had deployed no ICBMs yet) but got criticised by Kennedy and LBJ, so had to increase defence spending more than he wanted (Kennedy increased it again). Some other leaders, from Obama to Bloomberg, got a better result from staying calm. A second approach is to blind people with science, a third is a placebo such as security theatre, and a fourth is developing self-restraining institutions.

    Michael Kenney has been studying al-Muhajiroun, activists who push the boundaries of free speech in the UK. He combines social network analysis with ethnography, describing this as ethnographic network analysis. He spent a long time hanging out with al-Muhajiroun from 2010–15 and has 148 interviews with 48 activists, hangers-on and drop-outs. His work confirms the analysis Lydia just gave and the work of sociologists like Doug McAdam: youngsters have identity crises, or education issues, or involvement with crime or “immoral” lifestyles, but generally tend to be seeking a purpose in life. Michael sees radicalisation as often proceeding person to person; youngsters absorb ideas from people they trust, whether personal friends or charismatic leaders. The network topology was scale-free around Anjem Choudary until the state cracked down after 2009, and changed to clusters of small halaqah study groups with redundant bridge nodes. As for exit, some burn out, some grow up and out, and others become disillusioned with Choudary. Some disengage with or without deradicalising while others escalate. CVE is not a switch; deradicalisation isn’t either-or. Like radicalisation it’s a process that unfolds over time, and it is a youth phenomenon. Thick engagement is needed to understand it; some of his informants only came out with their real story at the third, fourth or fifth interview.

    Marc Sageman has been studying political violence for 45 years. He was on the terrorist side for three years (helping the Afghan mujahideen against the Soviets) and has been on the intel or law-enforcement side for the rest of the time. He believes you must deal with the people, not proxies such as their parents. You must also acknowledge the role of the state in creating terrorist violence; terrorism, like war, is a reaction to what the other guy does. The most important factor is not to escalate the conflict between the state and a community, including imagined communities.

    The discussion started off with what models might also cover the radicalisation of the Founding Fathers and of white supremacists; in the latter case there is certainly a feeling of being oppressed by state violence. If men feel their opportunities are threatened by feminism they may act on that even if feminism is about something else altogether; the feeling is of being barred from something that’s your right. At the other end of the scale, some muhajiroun had a justifiable fear of prosecution and may have emigrated rationally rather than making hijra. Other highly excluded groups range from sex offenders on release to demobilised soldiers, and their reintegration varies; there is little unclassified data on outcomes, particularly of the “reaffiliation” of terrorists. Perhaps there is just little measurement at all. We only know the failures from the number of reoffenders; for example, the Saudi programme was disastrous. But there are many former insurgents and former soldiers in the world, so the potential dataset is large. Among the fifteen muhajiroun who deradicalised, only one participated in a CVE programme, and it’s not clear what effect it had. What may actually happen is that we have many social identities, and after the war the warrior one is just no longer relevant, so it is not sustained. But the idea of deradicalisation is an old one; in earlier societies, men who had killed as soldiers went through elaborate purification rituals, including a year’s penance in medieval times. Racism also colours views of what violence is legitimate or not in some societies.

  4. Coty Gonzalez is trying to understand why we are so much more vulnerable to deception when online, and to use this knowledge in devising defences that are also based on deception – or at least take account of the attacker’s cognitive biases and other weaknesses. To do that, we need to know the attacker better, and she also has projects on creative persuasion and adversarial machine learning. She’s been getting people to classify target versus adversarial images to try to figure out how machines could fool humans in the same way; it turns out that general attacks using the Fast Gradient Sign Method (FGSM) can deceive humans fairly well.
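
    To make the FGSM idea concrete, here is a minimal sketch in PyTorch of how such an adversarial image is generated: each pixel is nudged slightly in the direction that increases the classifier’s loss. This illustrates the standard technique, not the code used in Coty’s experiments; the model, epsilon and tensor shapes are my assumptions.

    ```python
    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, image, label, epsilon=0.03):
        """Return an adversarial copy of `image` (illustrative sketch only)."""
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)  # loss w.r.t. the true label
        loss.backward()                              # gradient of the loss w.r.t. the pixels
        # Step every pixel by epsilon in the direction that increases the loss
        adversarial = image + epsilon * image.grad.sign()
        return adversarial.clamp(0, 1).detach()      # keep pixel values in a valid range
    ```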

    Rachel has been beating up on ML models used to predict mental health from social media data, for example in apps touted for suicide prevention. Can such techniques be used for exploitative purposes by advertisers? Johns Hopkins has collected a Twitter dataset from people with self-reported diagnoses of PTSD (n=246) or depression (n=327); simple models give precision in the 70–80% range, and Rachel got the same with a more careful analysis. She concludes that such methods are good enough to raise privacy concerns, as such models can spot mental illness often enough even when the subjects are not talking about it.
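
    For a sense of what “simple models” means here, the sketch below trains a bag-of-words classifier and reports its precision. The data and labels are placeholders, not the Johns Hopkins dataset, and the feature and model choices are assumptions on my part.

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import precision_score
    from sklearn.model_selection import train_test_split

    def precision_of_simple_model(timelines, labels):
        """timelines: one string per user; labels: 1 = self-reported diagnosis, 0 = control."""
        X = TfidfVectorizer(stop_words="english").fit_transform(timelines)
        X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)
        clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        # Precision: of the users the model flags as likely depressed, how many actually are
        return precision_score(y_te, clf.predict(X_te))
    ```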

    Rick Wash has been studying the workflow whereby security professionals detect attacks and deal with them. People try to figure out the story of an incoming email until they have the “wait a second…” moment when there’s a goal shift: the goal is now to determine whether the email was genuine, and all the training kicks in. Eventually the professional decides what to do. From Rick’s research, the interesting phase is the first one. What discrepancies do experts notice between the story building in their head and the email in front of them? The answer is that it’s highly contextual. A bank email might have content instead of just being a notification, and different people may need different numbers of discrepancies before the goal shift occurs – but the number is usually more than one. The clear distinguishers on which we train people only come into play after that. But what we don’t train people on is how to get suspicious in the first place.

    Julie Downs does a lot of work helping adolescents make better risk decisions around issues like nutrition, sex, and social media. How do girls in particular make trust decisions around things they see on social media? They like to keep up streaks on Snapchat, and they see it as a duty to “like” everything their friends post on Facebook; so what do they trust? Stuff from friends and celebrities, and stuff they see again and again, but not links that appear to be paid. Suspicion comes from wild claims and unexpected, contradictory or unbelievable content; fact checking means Google or Snapchat. The idea of a credible source isn’t really there, and they don’t follow news. In short, adolescents are overconfident in their ability to vet information and rely on many channels that are open to manipulation.

    Yi-Shan Lee has been studying whether people are able to make rational privacy trade-offs online, in the sense of making consistent choices. She set up an experiment where people could choose to keep either their body fat or their IQ secret, for example by letting subjects block one person from seeing their body-fat score at the cost of letting two see their IQ. She finds that 63% make consistent privacy choices in a two-item experiment. So how do they trade privacy for money? There’s more noise, which differs between privacy types; less rational people share more and have more noise. The types measured in the experiment are consistent with reported real-world behaviour.

    Discussion ranged from the actual harms that might follow from remote diagnosis of depression to the quality of the ground truth available in such analyses. There are methodological issues around advising kids too, but there’s a lot of low-hanging fruit before you get into the grey area. Millennials are better at identifying fake news than either boomers or teens; but this needs further study to determine whether it’s an experience thing or conditioned by the environment in which they came of age. Another open question is what might be done with all the metadata the big service firms have, particularly location history; there’s little known about this. Another is the extent to which there’s a spectrum from low-level phishing of users to the high-end stuff practised by nation states against sysadmins and others with access. The mass-market stuff is easier to block while technology doesn’t help with the carefully crafted one-offs; that’s why suspicion training is important for high-value targets. Is there a middle of any interest? Another is when suspicion training might be economically worthwhile, whether for sysadmins or for 14-year-old girls. How can we tell whether we’re being too paranoid, or not paranoid enough? There are costs associated with the suspicion shift; false alarms are expensive, as it takes a lot longer to dispel suspicion than to raise it. Rebuttal is hard: when people absorb information by reading it to understand it, or working it out, it may be learned more deeply than a later rebuttal which they accept at the time but then forget. Might there conceivably be tools that help people become less trusting? One view is that security awareness training can really annoy staff when the same objectives can sometimes be achieved by technical means such as colour-coding external emails. Another is that spam filters are now so good that most people don’t see enough spam to be immunised against it. Another is that the people who complain about training are those who got phished. Another is that, empirically, security training is a waste of time. This debate ran out of time.

  5. Jeff Yan started Friday’s sessions with a talk on how Chinese scams differ from those in the rest of the world. A typical scammer pretends to be a bank employee, phones a victim, tells him his account has been taken over and used for money laundering, and suggests he call the police on such-and-such a number. The victim is then shaken down for a large bribe. Jeff analyses such scams using a framework of manipulation and social fabric; the existing frameworks (e.g. Cialdini, Stajano and Wilson) need to be extended to include intimidation of various kinds. His analysis suggests that a robust defence requires changes to the social fabric.

    Elissa Redmiles has been thinking about whether and how discrimination law could be applied to security, where behaviour has lots of anchoring effects and people optimise in different directions. Good design must take account of people with different risk profiles as well as different socioeconomic status and education levels. She has developed ten principles for an ethical approach to development and has been testing them through codesign of virtual reality apps.

    Rutger Leukfeldt is a criminologist studying human factors in cybercrime. He did a survey of 10,416 victims and found no significant risk factors; everybody’s at risk. He has also studied 40 large-scale police operations against cybercrime networks, doing social network analysis to see whether the members met up offline or online. The networks that met offline tend to be good at organising cash-outs, which is essential, while the online ones tend to be techies.

    Zinaida Benenson has been wondering whether it’s time to stop using antivirus; this is the favoured precaution of naive users, while experts rank it only eighth, way behind regular updating. She has run a study of the usability of threat detection. Out of 40 users, only 30 glanced at an AV message, but only 20 registered it; 25 didn’t notice that a file had disappeared and only three could explain what happened (a clash between the AV and Windows). People need time to understand rare occurrences; messages should be front and centre rather than in a corner of the screen, and rather than pressing the user to make choices, the product should take the safe default but make it reversible. People still love their AV, especially if they paid for it.

    Andrew Adams has been looking, along with Fareed ben-Youssef, at superhero movies as an exemplar of how people think of power and security in society. He’s interested in how such texts portray heroes, vigilantes and the law, the use of torture, and the subjugation of power to authority. Movies like Batman v Superman and Captain America: Civil War illustrate such tensions in a “good guy versus good guy” setting, although it’s admitted in the first that the heroes are vigilantes and thus lawbreakers. Andrew argues that studying stories and myths gives real insights but has been neglected in our community.

    Discussion started around the 30-second movies that some banks show their customers to explain scams; these are probably better than bare warnings, but is there any evidence that they work? Another open problem for banks is how to identify high-value targets among users, as this might not be a function of the user’s bank balance but of whether they just became a celebrity. One useful feature in Japan is a limit on online payments; increasing it involves a day’s delay. An open problem in criminology is the extent to which the rise in cybercrime and the simultaneous fall in physical crime are linked by criminals going online; this is probably fairly limited but not zero, as physical criminal networks do a lot of the cashout. It is a mistake to think of cybercrime as online only: some cybercrimes involve the threat of physical harm, and sometimes the threat isn’t empty. Even bitcoin-based drug dealing involves exchanging bitcoin for cash. On the AV side, it’s important for a security tool to explain what it’s doing so as to educate users rather than just advertise itself: “we saved you by doing X” is better than “we saved you from Y”, and attention has to be paid to habituation: rare and serious warnings should look different from the flood of routine popups. Other security software, like spam filters, tends to operate silently; do we need to rethink all this?

  6. Frank Stajano has been thinking about the distribution of security cost and effectiveness. At one extreme, ordering a Dyson part from Amazon caused his card to be blocked, costing him half an hour; they would not say why it was suspicious. At the other, upmarket human security (such as Oxbridge college porters) can be simultaneously effective and courteous. He’s also interested in the distribution of skills: when recruiting youngsters for hacking competitions he doesn’t take the top 100 from the qualifying competition, but makes the top 25 captains of teams, each of which gets three others chosen at random. As it’s an ethical hacking competition, it was cancelled this year following widespread cheating in the qualifier.

    Karen Levy works on vulnerability, and a recent project is on intimate partner violence, which affects about one in four women and one in six men directly. This gets little attention from the security community, perhaps because it’s low-tech and perhaps because the victims are mostly women and children. She’s been interviewing people in family justice centres (shelters) and finds that many of our usual security assumptions don’t hold. Abusers may control victims’ social media use and other online accounts; in general they are co-present with deep contextual knowledge, so authentication is almost irrelevant; spyware is often used; threats are low-tech, usually social, sexual and physical as well as digital; and disconnecting can make things worse as it leads to escalation of other abuse. She contacted nine spyware vendors found in the Play Store by searching “track my cheating boyfriend” and asked them whether, if she tracked her husband, he’d know; only one said “That’s not what this software is for” while eight said “go to town”.

    Yasemin Acar studies usability for developers. She wants experts to stop thinking “why are developers so lazy?” and instead make decent tools for them. She’s been studying what distinguishes crypto experts from normal developers; the former have a mindset that pays a lot of attention to process and details, while the latter are happy to copy and paste from Stack Overflow. In fact she found insecure Stack Overflow code in 250,000 Android apps, while a study of libraries found that documentation made a real difference. In current work, she’s found that developers choose security libraries by taking the first search result from Google, and assuming that open-source software has had its bugs fixed (though they don’t themselves participate in reporting or fixing bugs). She has developed a development-environment plugin to do basic “spell checking” for security.
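
    The sort of thing such a “spell checker” would flag is easy to illustrate. The snippet below (in Python rather than the Java of her Android studies, and not taken from her dataset) shows a typical copy-pasted encryption routine next to a safer equivalent using authenticated encryption.

    ```python
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def encrypt_copy_pasted(key: bytes, plaintext: bytes) -> bytes:
        # Typical copy-paste pattern: ECB mode, zero padding, no integrity protection.
        padded = plaintext + b"\x00" * (-len(plaintext) % 16)
        enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
        return enc.update(padded) + enc.finalize()

    def encrypt_better(key: bytes, plaintext: bytes) -> bytes:
        # Authenticated encryption (AES-GCM) with a fresh random nonce.
        nonce = os.urandom(12)
        return nonce + AESGCM(key).encrypt(nonce, plaintext, None)
    ```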

    Christof Paar has been working on cognitive obfuscation with the psychologist Nikol Rummel. Proprietary systems can take a year or two to reverse engineer; a student of Christof’s took two years to reverse an electronic door lock before they could hack it. So can the cost of reverse engineering be increased in a systematic way? They are aware of only one paper that analyses the difficulty of reverse engineering systematically. They are observing both expert hackers and students doing a reverse engineering course, then designing hard tasks which they test on their subjects to disentangle the effects of intelligence and experience. They are keen to make contact with experienced reverse engineers who might help.

    Discussion started around how chipmakers acquired their core reverse-engineering expertise in a real fight, namely the pay-TV hacking wars. There is also an established discipline of “program understanding”, applied to the maintenance of legacy code, that might provide useful insights or even metrics, as might the study of puzzles. The interaction with the CAD community and toolchain may also be significant. A second theme was the design of hacking competitions that are fair but don’t exclude women or trans people; one suggestion was that teams should have “at least two genders”, and a more radical one “no more than two people of any gender”. Another theme was cultural boundaries interacting with security: the acceptable level of family tracking varies across cultures, so the threshold for “intimate partner abuse” may be hard to define precisely. Surveillance can also be social, with groups swapping observations of each other’s nannies’ behaviour with the kids. It’s less clear that bystanders will intervene to help abuse victims, e.g. by flagging content where abusers demean victims; the dynamic is complex, as the abuser is often trying to masquerade as a victim. A clear code of conduct would help, setting strong leadership on norms and communicating that abuse must be taken seriously.

  7. Sophie van der Zee has been looking at how being watched affects behaviour; being in a better-lit room can make you more honest. So can looking at yourself: mirrors in stores cut shoplifting. She’s been playing with the HAIS-Q scale to see if it works in the real world as well as in the lab. The experiment had 175 entrants and a final N=133. They answered a HAIS-Q questionnaire in the lab and got a phishing email two weeks later at work. Almost half (46.6%) clicked, and this was at a security company! There was no correlation between the score and the behaviour. This adds to the literature on the discrepancy between attitudes and knowledge on the one hand, and actual behaviour on the other, known in the privacy context as the privacy paradox: there are 5000 papers on that but only 5 on the “security paradox”. She proposes that we brand and study the “cyber security paradox”.

    Richard Clayton has been looking at blog spam. URLs in comment body text are too easy to spot and filter; what the spammers really want is to get the commenter’s URL to stay up. In October 2013 their spam template was leaked, revealing the social engineering techniques; so which of them work? Flattery, a request for contact, technical information and a promise to revisit the site are the main ones, but there’s no statistically significant difference between them.

    Henry Willis wants to know whether one terrorist attack makes another more likely, as folk arguments around this drive a lot of personal and political behaviour. So, is terrorism a random process, or do clusters exist? And do shadow attacks follow a big (>3 dead) attack? Terrorist events clustered up till 2002, with a large-attack effect up till 1993, but neither effect has been seen since then. One possible explanation is that attacks are now very rare; groups in the old days were bigger and could support campaigns.

    Marie Vasek has been looking at cryptocurrency Ponzi schemes. Even back in 2012 there were plenty of postmodern Ponzi schemes, but they used Liberty Reserve; Marie has redone the work and finds they’re using bitcoin nowadays. Some pretend to be real investment schemes but offer a 5% daily yield, and advertise on Ponzi forums and also on bitcointalk.org, which has a lot of traffic. She’s been looking at the blockchain and found that 9 scams earned $6.5m. In fact the Tulsa bitcoin meetup was started by some people trying to start a Ponzi to make their house payments. She’s been looking at scam lifetimes by blockchain analysis.
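
    A quick back-of-the-envelope check (my arithmetic, not Marie’s) shows why a 5% daily yield is a giveaway: compounded over a year it would multiply the stake by tens of millions.

    ```python
    # 5% per day, compounded daily for a year
    print(1.05 ** 365)   # about 5.4e7, i.e. roughly a 54-million-fold return
    ```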

    Eliot Lear is concerned that the world won’t have enough network administrators to onboard all the expected IoT devices, even just in the corporate world. How can we cope? A trusted introduction mechanism would be a good start, and fully automated onboarding should be the goal. Security will mean micro-segmentation, so that light bulbs can only talk to light bulbs while thermostats can talk to HVACs. We need to work out the certificate semantics. Further problems include the tension with services such as Tor, and whether devices’ communication patterns are clear enough for the network to whitelist them.
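
    A minimal sketch of the micro-segmentation idea follows: the network derives a whitelist of permitted flows from device types and drops everything else. The device types and rules here are hypothetical, not taken from Eliot’s work.

    ```python
    # Hypothetical per-type whitelist: which device types each type may talk to
    ALLOWED = {
        "lightbulb": {"lightbulb", "lighting-controller"},
        "thermostat": {"hvac"},
    }

    def permit(src_type: str, dst_type: str) -> bool:
        """Allow a flow only if the destination type is whitelisted for the source type."""
        return dst_type in ALLOWED.get(src_type, set())

    assert permit("thermostat", "hvac")
    assert not permit("lightbulb", "hvac")   # light bulbs can't talk to the HVAC
    ```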

    Discussion started on whether blogs are obsolete, and whether old blogs might work as a surreptitious communication channel; it moved to the AI techniques that might be used to detect blog spam or write better spam. As for the cybersecurity paradox, there is a large psychological literature on the gap between attitudes and behaviour in general which might help; and you can change the response rate to phishing easily with small changes, which is one of the reasons you don’t deal with phishing by training people. One basic predictive measure is loss aversion; this can be measured with an EEG, and people with low loss aversion indulge in much riskier behaviour. If we can get better security results by making people feel they’re being watched, is that a good thing? Or should we describe such behaviour as “more socially conforming” rather than “good” or “honest”, so as to leave room to consider chilling effects? On the IoT front, vendors will cock up, so we need defence in depth on the network; but who is the adversary? Is it the device manufacturer? Or do we just want to stop the light bulbs spamming people? One way or another, we need scaling mechanisms to make it work – just as automated exchanges allowed telephones to scale. On the terror data front, the vast majority of clusters in the dataset were the Basques and the IRA; to see clusters in the data now you need to include places like Iraq and Syria where there are real campaigns. Police also try campaigns, in the sense that the FBI traditionally likes to bust half a dozen similar gangs all at once; however a recent campaign against booters suggests that a drip, drip, drip of arrests can be very effective and cheaper. As booter operators feel it’s getting a bit hot, many give up and disappear.

  8. Bruce Schneier started the final session talking about cyber-physical systems. Much of what we’ve discussed relates to an environment that evolved over 20 years and that favoured insecurity; governments wanted us to be secure, but not from them. This is about to change. The Internet of Things will contain sensors and actuators; rather than confidentiality, we’ll be worried about integrity and availability. Patching is how we get security, and this will fray or fail with things that can kill you. We don’t know how to do disclosure for cars or aircraft, and things like the CCTV cameras in the Mirai botnet can’t be patched at all. Worse, we have no idea how to maintain software for 30 or 40 years. We also have no idea how to authenticate things at scale; Bluetooth works because there’s a man or woman in the middle making it work, and that doesn’t scale. As governments regulate things that kill people, we can expect serious regulation sooner or later. Finally, Bruce has a book coming out in September with his first-ever clickbait title: “Click Here to Kill Everybody”.

    Jean Camp asked why academics don’t oppose bitcoin, which is a bubble and an environmental disaster. She then made further comments which she asked us not to blog.

    Richard John was next, standing in for John Mueller, who had to leave early. Richard’s been studying the extent to which you can do risk assessments of individuals; courts routinely use them in bail and parole decisions. Statistical tools totally outperform clinical judgments, but their diagnosticity is still low; his earlier work showed that a common tool was only safely usable for a small subset of offenders. Rulemakers mistake similarity for probability: people who look like terrorists are assumed to be terrorists. All such judgments are bedevilled by low base rates and extreme asymmetry in the costs of false positives and false negatives, yet the perceived risk is vastly greater than the actual risk. Plugging this into signal detection theory, the optimal beta is the ratio of the utilities of a false positive and a false negative, times p(non-attacker)/p(attacker); if each ratio is a million they cancel, so we might be missing one terrorist in twenty but at the cost of locking up one innocent person in twenty. Spotting terrorists is hard.
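
    In standard signal-detection notation (my rendering, not Richard’s slides), the optimal decision threshold referred to here is:

    ```latex
    \[
      \beta^{*} \;=\; \frac{C_{\mathrm{FP}}}{C_{\mathrm{FN}}}
                 \times \frac{p(\text{non-attacker})}{p(\text{attacker})}
    \]
    % With a cost ratio of a million and a base rate of one attacker per million,
    % the two factors cancel and the optimal threshold is beta* = 1.
    ```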

    Bob Axelrod has been thinking about disrespect. This caused the Estonian cyber-attack, when Russians were incensed at the lack of respect for a statue of a Soviet soldier; insult caused the Chinese government to turn off their hotline to the USA twice in times of tension; it also contributes to insider threats and terrorist recruitment. In fact, respect is very important in many security contexts, but we tend not to think about it.

    I spoke on bitcoin regulation, summarising a paper that will appear at WEIS this summer and which should be on this blog in a few days.

    Discussion started on the base-rate effects in cancer testing, about which Gigerenzer has written at length: the personal and social costs of misdiagnosis, overtreatment and defensive medicine are nontrivial. Low base rates cause problems everywhere. The safety effects of attacks on, and mishaps with, things like medical equipment are not generally about confidentiality but integrity. Similarly, changing a database to show that the jet fuel on an aircraft carrier was adulterated will stop the carrier operating. A real attack on a hospital in Lansing involved a virus encrypting its billing databases. However, many of the heuristics and biases that hobble our decisions apply to the bad guys too, so we may develop defences that couple machines and people; adaptive environments need people, as AI/ML really only works for static or at least passive environments. Max Abrahms writes about terrorists being very like gangs – more about identity than about religion or a political cause. Respect is complex, coming up differently in different cultures, and can couple with entitlement and grievance in ways that create societal tension. It is often amplified, whether by Al-Qaida or by Breitbart. The propaganda and other techniques used by extremists of all kinds continue to evolve; what is the real relationship with violence? And while the current US President disrespects almost everyone, much of his appeal is to people who feel disrespected by people like us. Radicalisation has complex social aspects. However there are personal predictive factors; men who abuse women are more likely to commit other violent crimes. We should get better at using uncomfortable data.
