SHB 2019 – Liveblog

I’ll be trying to liveblog the twelfth workshop on security and human behaviour at Harvard. I’m doing this remotely because of US visa issues, as I did for WEIS 2019 over the last couple of days. Ben Collier is attending as my proxy and we’re trying to build on the experience of telepresence reported here and here. My summaries of the workshop sessions will appear as followups to this post.

  1. Alessandro Acquisti was the first speaker, discussing his work on the economics of privacy, leading to his recent work on the economics of behavioral advertising (presented in more detail yesterday at WEIS). The high-level issue is who’s getting the surplus, and what we’re learning is that it’s going to the data-economy intermediaries – especially Google and Facebook. Publishers and merchants both compete, so that’s exactly the outcome you’d expect from economics 101. He’s also looking at the impact of GDPR on the quantity and quality of online content, and the effect of ad blockers (also a WEIS paper). This is all facilitated by his security behavior observatory at CMU, which has been going since 2014 and has instrumentation to measure all sorts of browsing, searching and purchasing behavior.

    Next up was David Sidi who’s been studying wariness weariness. We get so bombarded with alerts that we retreat into saying “It’s fiiiine!” A piecemeal approach to privacy may be mistaken. He’s built a platform called DUMP that sits on top of GNOME OS; its focus is mostly on email fingerprinting.

    Julie Cohen works on legal institutions and how they evolved as our political economy did. What will the effects of the informational economy be? The driver of legal evolution during industrial capitalism was the realisation that workers needed laws to protect them, as well as the requirements of capitalism itself. So, in addition to new laws around information production, we see protective laws such as those around privacy – and we’ll need to mobilise legal institutions (from local to transnational) to achieve further structural results. The trick is to deal with harms and wrongs without recourse to magical thinking. Simply proclaiming a fundamental right, whether in Brussels or California, isn’t enough. You also need changes in regulatory processes – see Beniger’s “The Control Revolution” (1986). The moves from bureaucracies to financialisation and privatisation led to output-based regulation; will the move to a more networked society further erode accountability? Will rights be eroded as violations are less the direct action of states, and as discourses of obligation are replaced by discourses of aspiration? Privacy is a noble aspiration, but individualised approaches won’t deliver it. You need information governance at scale which imposes accountability and disrupts seamless operations.

    Alisa Frik has been thinking about smart home surveillance of domestic staff. This is complex because of the blending of home and workplace contexts, the power differentials involving not just money but the often unprotected immigration status of staff, and issues of targets vs bystanders. Surveillance is rapidly becoming pervasive via toys, alarms and voice interfaces. Alisa is researching attitudes on all sides about the privacy of both children and staff; how expectations are communicated; and how power dynamics play out. Different parties have differing perceptions of what’s creepy. She wants to meet others with similar research interests.

    Harvey Molotch is a sociologist interested in how privacy is constructed differently in different cultures. Often it’s better understood the other way round, as revelation: what it’s proper to reveal to whom, from arms and ankles in the Middle East to feelings and relationships. Norbert Elias’s theory of the civilizing process has a lot to say about this – about how you blow your nose, go to the toilet and so on – which gives insights into how privacy evolved. The things that don’t go across cultures easily are often of this type, and we can sometimes map them by studying jokes: small children chip away at the edges of taboo by talking about poo. Another way of chipping away at taboo is the shared participation in taboo revelations such as those of WikiLeaks – something that people tend to enjoy. A catharsis of release from guarding each other’s secrets can build social solidarity.

    Discussion started with distraction; security tasks often distract people from their main goal, and similarly adverts are often a distraction; Alessandro maintains that economic models are the right way to study this. The language of betrayal is also appropriate for many breaches; if someone uses my information for ads I may be annoyed, but if my medical records are sold I am entitled to feel betrayed. Julie thinks we should not just focus on purchase studies and betrayal, as the range of harms that law tackles is much greater, from mass shootings to market rigging. We do need quantitative work though, and proper economic work. Alessandro replied that work on privacy and work on competition are complements rather than substitutes. Maybe we’ll discover that most ads don’t do any work at all, and many of the others are selling stuff people don’t need, so the advertising industry could get hit. Up till now much of the effect of advertising wasn’t really measurable; it was about accumulating image or a drive to consume. The methodological aspects of privacy are interesting: people talk and think of it in a transactional way, and the industry supports this rhetoric, yet the externalities and social factors mean you can’t really regulate it that way. Also, the ad support of the Internet is a giant game of three-card monte; people don’t want to pay for services so they pay via a tax on other consumption. You can measure the industrial economy fairly well now via national accounts and stock markets, but we don’t have a handle on the information economy as a lot of it’s opaque. Another way of looking at privacy is as friction; it’s a shame we didn’t get micropayments to work in the 1990s and ended up using ads instead. And the harms rapidly go beyond the simply economic. For example, to use Facebook you need to use the name on your passport, which is hugely problematic for lots of people; and you have to supply five years of social media history to get a US visa. The diversity of the language used by the panel is also interesting. Security is illegal things we make money from stopping, while privacy is where the abuse is legal and can be monetised; so you can turn security into privacy by changing the rules. Don’t even ask about the horror show of DNS over HTTPS; Mozilla will shortly be sending this to Cloudflare. In fact, in the security community we systematically engage in betrayal with big data.

  2. Alan Friedman’s experience of business and government teaches him that competitive policymaking works best. The components are high-level forces and individual human beings; pathways to innovation and impact include codes of conduct, to which the FTC can hold firms once an industry has agreed one. Scaling is the key. The fight between hackers and software companies was fun so long as there were only a handful of each, but now every company is a software company, so we need to do things differently. There is also a growing realisation that consumer empowerment isn’t going to fix everything. It can do some work, for example by getting firms to declare how long they will update software. But there really isn’t any transparency in software yet. If you buy a big diesel engine you get drawings and specifications of the mechanical parts, but nothing about the software. Previous attempts to open things up were killed by the software industry.

    Vaibhav Garg notes that both security economics and behavioral economics have taught us about the lack of incentives for firms to invest. Yet incentives can come from unexpected places such as ISPs cutting costs by joint security service provision; large firms finding it’s cheaper to build secure stuff once they achieve a certain scale or have significant brand value or IP to protect; consumer pressures, depending on switching costs; and the history of a product’s evolution. All these matter when going to the board and arguing for more security budget.

    Tony Vance notes that there have been only two rulings by the SEC on cybersecurity; last year 407(b) of regulation S-K forced boards to disclose material risks to all shareholders as well as describing how the board is overseeing the management of the risk. So what are boards actually saying, and how’s it different from before? Facebook has incidents every month or two, and has added a provision that it might exercise direct oversight at its discretion; Equifax is falling over itself; Marriott appointed a director with relevant expertise. Cybersecurity has also become a factor in compensation, and boards pay attention to incidents at peer firms. It will be interesting to see what sort of effects this attention has.

    Ian Harris researches social engineering, from phishing to phone scams. Awareness training doesn’t really work. Ian has done work on using semantic analysis to spot scams, for example when they solicit private information or when the verb and object of a sentence are on a blacklist. Improving such techniques requires a lot of real social-engineering examples on which to train classifiers, so he’s interested in any working examples he can get hold of. He’s been phishing UCI students pretending to be the registrar’s office, IT or the parking office, trying to get SSNs under various pretexts, such as asking for “confirmation” after first asking for innocuous information. The success rate was just over a quarter.
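
    To make the verb-object idea concrete, here is a minimal sketch (my illustration, not Ian’s system), assuming spaCy and its small English model; the blacklisted pairs and the example message are invented:

```python
# Minimal sketch of verb-object blacklist checking for social-engineering text.
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

# Illustrative (verb, object) lemma pairs that tend to signal solicitation of
# private information; a real system would derive these from labelled scams.
BLACKLIST = {
    ("confirm", "ssn"),
    ("send", "password"),
    ("verify", "account"),
    ("provide", "number"),
}

def flag_sentences(text):
    """Return sentences whose (verb lemma, direct-object lemma) pair is blacklisted."""
    hits = []
    for sent in nlp(text).sents:
        for tok in sent:
            if tok.dep_ == "dobj" and tok.head.pos_ == "VERB":
                pair = (tok.head.lemma_.lower(), tok.lemma_.lower())
                if pair in BLACKLIST:
                    hits.append((sent.text.strip(), pair))
    return hits

if __name__ == "__main__":
    msg = ("This is the registrar's office. Please confirm your SSN "
           "so we can release your parking permit.")
    for sentence, pair in flag_sentences(msg):
        print(f"Suspicious {pair}: {sentence}")
```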

    Questions started with how banks have trained us to hand over personal information, and continued with other phishing experiments – even offering pizza seems to work fine. Many businesses may conclude that fixing hard security problems is more expensive than just compensating customers in the event of a breach; but there are institutional and agency effects which will cause decisions to be taken differently at different places in a company’s structure. Bear in mind that security’s hard to measure, so establishing a return on security investment is also hard. Firms often waste millions on crappy technology products rather than training people properly, which is too hard organisationally (or alternatively the buyers are engineers who think in terms of solutionism). Filtering out skilled scammers can only go so far, given that so much money is stolen by really everyday scams. And surely a software bill of materials only goes so far, since it’s not usually which crypto library was used that matters but how badly it was implemented? Well, the point of a bill of materials is that it helps the lawyers work out afterwards whose fault it was. A broader problem is the shortage of experts; most people have to rely on rules of thumb when trying to assess cybersecurity. Another issue for screening and detection is that phishing is becoming non-interactive; most spear phish are simple malware, and many voice scams are about leaving voicemails to trick people into calling premium-rate numbers. Tricks include making calls appear local by partial phone number matching. Victim interviews can be a useful way of keeping up with the state of the art, but ideally you should cross-correlate with police reports and also collect crime scripts from criminals if possible. Sometimes all the victim knows is what the bank told them. As for consciousness raising, in Japan there are ads on the buses warning the elderly of the current tricks. The Netherlands, with its prolific wiretaps, is a good place to collect crime scripts, but you can’t share them with US residents! There is also a huge mismatch of skills: often the real cybersecurity need is in training, risk or policy, but companies advertise for computer scientists; and things are being made worse by “AI” systems screening candidates. One professor noted that LinkedIn is so great that she’s regularly offered childcare jobs.

  3. According to Max Abrahms, rules for rebels start where rules for radicals end. How do rules evolve in civil conflicts? And what do they mean for the organisation of such groups? The tensions a rebel group has to manage are between centralisation for control and decentralisation for survivability. The Palestinians suffered from their inability to limit attacks on civilian targets during the Intifada. Militant groups also tend to become more extreme after a leader is taken out, and less savvy about denying credit for operations that might harm the group’s standing with its supporters. Rebel group organisation can be seen as another instance of the principal-agent problem. There are several implications for counter-insurgency strategies, both in the field and politically.

    Shannon French is a military ethicist, and is now working on AI in military decision making. Many countries are now jumping into an AI arms race, and Shannon wishes they’d pay attention to the lessons of history. First, early adoption does not guarantee superiority. Second, tech superiority does not guarantee victory. Third, tech introduces new points of failure. Fourth, letting tech mature lets you figure out the strategies and iron out the bugs. There are arguments that AIs might be more ethical as they can be programmed, and decide quickly; Shannon thinks this is ill-advised, as most combat scenarios involve urban warfare, and we must beware automation bias causing bad decisions to creep in unnoticed. This has a long history, including the incident where the USS Vincennes shot down the Iranian Airbus by mistake, killing 290. It’s a significant issue in aviation, where flight crew trust autopilots and other electronics too much. (Paper checklists are safer.) Also, what does dissent look like in an automated military? How will service personnel resist the steady erosion of their professional autonomy? For example, AI systems to discriminate combatants from others are not considered to be lethal per se and so fall outside the ethical boundary of lethal systems. Do we believe that? We need to do loud thinking – thinking out loud in the public sphere – about this stuff. Even if systems are giving recommendations rather than orders, they will not be seen that way. Are they partners or tools? Where do they sit in the chain of command? What’s the choice architecture: do they offer one option or several? What we actually need are systems that help bring out the better angels of our nature.

    David Livingstone Smith is a philosopher working on dehumanisation, ideology and media. He takes a functional view of ideology as a system of beliefs and related practices that support the oppression of one group by another; ideologies have typically evolved rather than being the outcome of a conspiracy, and their proper function is established by replication. Historically, the media play a role in this and in making ideology efficacious. His historical example is the centuries of Christian anti-semitism from St John Chrysostom through the Third Lateran Council, which ruled that Jews should be slaves to Christians, leading to mass killings; then the Jews were blamed for the plague, causing persecutions that drove the Jews east. The media developed from carvings of the “Judensau” from 1210 (Cologne cathedral) through Luther’s viciously anti-semitic writings (1543), which also describe the Sau (whose ear is the Talmud in a carving on Luther’s church). The image persisted through the 17th and 18th centuries as printing improved; by the 19th century we see Jews depicted as pigs rather than merely eating pig excrement. They are creatures of a different type. In the 20th century the Freikorps (WW1 veteran militias in the 1920s) also adopted the imagery, transmitting it to the Nazis. And what about Pink Floyd’s inflatable pig, with a star of David projected on it?

    Tage Rai wonders what motivates someone to hurt or kill another person. Why, for example, when a man in Saudi Arabia is paralysed in a car crash, can a court and even a doctor calmly contemplate severing the responsible driver’s spine? The real story isn’t about moral failure, but about moral force. Often people kill or maim out of a moral sense. Violence can be increased by dehumanisation in some circumstances, namely where the deaths are instrumental, mere collateral damage; but to execute people you have to care. Self-control helps goal pursuit, so it only cuts violence when you know you shouldn’t do that; it doesn’t help when you think you must. Curiously, payment reduces moralistic punishment. So a huge question is this: how much violence is motivated by moral sentiments, one way or another? Max thinks most, but has no data.

    Lydia Wilson has spent eight years working on terrorism, interviewing radicals in prison. She’s come to the conclusion that there’s nothing special about ideological violence. Terror attacks do inspire fear, and the entry paths are understood: alienation, a sense of injustice whether personal or group, and a sense of a lack of a future; aggravating factors include job loss or relationship breakdown. When these hit at the point of identity formation, when young people are searching for their place in the world, the radicalisers can find fertile ground. Their ideology can explain the subject’s predicament and provide a scapegoat (immigrants, feminists, Zionists…). Many non-violent groups offer exactly the same: churches, cults, gangs, and even the local pub. The same drivers are seen in criminality, substance abuse, domestic violence and many other social ills. So why single out terrorism? Well, it’s newsworthy and the politicians feel they need to be seen to be doing something. Policy is always rushed by the perception of high risk and urgent need. The framing then defines the solutions people come up with. She thinks most of this doesn’t work, and often causes backlash; in one African country, the counter-narrative to Isis backfired as western-trained “moderate” imams embraced radical Islam (the government doubled the budget rather than admit they’d been fostering radicals). Structurally it’s wrong too: if you tell people that their religion makes them prone to violence, you marginalise and stigmatise them. But it provides measurable outputs, boxes to tick, as a displacement from the messier real work. We need to reframe the problem, but there’s no obvious way to do that given political incentives.

    Questions started on how much people really trust algorithms; are the military uniquely gullible? Deference to authority is baked into military culture, so yes, they are open to authority that seems superhuman; automation bias may be a generic vulnerability of hierarchy. By comparison, there are studies of medics’ propensity to overrule checklists because of their social structure. The intelligence community seems to have its natural suspicion muted by machinery. And when soldiers come back from war, they stop killing – so there may be lessons in the cultural stuff around how violence stops. In the case of the Jews, there was a progression from their being different to being subhuman; there’s also the impact of the bizarre, as when you “click here to see weird stuff”. For automation bias, an issue may be the perceived reason for automation: is it a test? Was it mandated by experts? Automation embodies a lot more than we might think of the social structure that caused it to be built in the first place. There are also differences between strategy and tactics when it comes to radical action; and at a higher level, extreme political ideology has many more supporters than are prepared to back violent action either strategically or tactically. For example, will taking down neo-Nazi material from YouTube make a difference? Proposals for censorship of social media need to be much more subtle than at present if they’re to be effective. It’s known that stratcomms are extremely expensive if done well, and so usually they’re somewhere between a cheap hack and a fudge. A lot of our stratcomms against Isis don’t work, as Muslims who don’t support them find them demeaning or offensive; there’s no real research on this. People don’t do things because they’re told to, they do things because of what they feel. There’s a direct analogy, in fact, with anti-phishing training, which we know doesn’t work but which firms do anyway to tick the boxes. DoJ has spent millions trying to get people not to join gangs, and millions more have gone on getting kids not to use drugs, and there’s study after study showing they didn’t work. Is anti-terrorist messaging any different? The one counterexample was Glasgow, which made drugs a public health issue, but we know of no way of making politicians pay attention. The small successes we’ve had in deradicalisation have been one-on-one, just like classical rehabilitation or probation support, and to get this any traction we probably have to stop calling it deradicalisation.

  4. Bonnie Anderson started the afternoon session talking about notifications. Her previous work reported at SHB was on fMRI and security warnings; now we’re swamped by spammy stuff designed so that we can’t differentiate it from important stuff, and consent to things we shouldn’t. She’s been repurposing her warning fatigue / habituation test and finds from an mTurk study that the latter predominates. She then did fMRI, which was tricky given the physical/electromagnetic constraints, and found habituation: the more we see it, the less activation. She also checked for fatigue, to confirm it wasn’t that. In theory you could cut this by changing the look and feel of the notifications, but that would be a bad idea from the usability / annoyance point of view.

    Judith Donath has been studying the cost of honesty. Facebook and other platforms seem to have been cutting the cost of signalling things like loyalty but this can have effects on the reliability of the signal. People think of communication as a conduit but it’s more than that; society isn’t just about transferring propositions from A to B. Since the 1970s we’ve been studying signals in the animal kingdom, many of which appear irrational (like the stag’s antlers or peacock’s tail) until you understand that they signal fitness. Online identity has involved arcane knowledge since at least the 1990s, ranging from specialist jargon to PGP keys. In the same way, groups can form around non-truths, such as the size of the crowd at a presidential inauguration; religions have used costly displays of alternative reality since God was a boy.

    David Levari studies prevalence-induced concept change in human judgment. He started with a list of things that TSA agents are supposed to look out for as signals of terrorism. What might you do, as an agent, if you don’t see any suspicious people today? If agents assume that there’s sampling without replacement and that they’re seeing the whole experiment, they might conclude that the risk is increasing. Do people think like that? Yes; he did experiments to find whether decision thresholds changed, measuring how concepts of colour, threat or unethical behaviour expanded as instances became rarer.
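
    A toy simulation of the effect (my own illustration, not the authors’ code): a judge who flags the top 20% of recently seen stimuli as threatening will, once genuine threats become rare, start flagging stimuli that were previously judged harmless.

```python
# Toy model of prevalence-induced concept change: an adaptive judge whose
# criterion is "the top 20% of what I've seen lately" expands the concept of
# "threatening" when genuinely threatening stimuli become rare.
import random
from collections import deque

random.seed(1)

def stimulus(p_threat):
    """Genuine threats score high (0.7-1.0); neutral stimuli score low (0.0-0.6)."""
    if random.random() < p_threat:
        return random.uniform(0.7, 1.0)
    return random.uniform(0.0, 0.6)

def run_phase(p_threat, history, n=2000):
    flagged_mild, mild_seen = 0, 0
    for _ in range(n):
        s = stimulus(p_threat)
        history.append(s)
        threshold = sorted(history)[int(0.8 * len(history))]  # 80th percentile of recent history
        if s <= 0.6:                     # a stimulus that is objectively mild
            mild_seen += 1
            if s > threshold:            # ...but gets flagged as a threat anyway
                flagged_mild += 1
    return flagged_mild / mild_seen

history = deque(maxlen=200)              # the judge only "remembers" recent stimuli
common = run_phase(p_threat=0.40, history=history)   # threats are common
rare = run_phase(p_threat=0.05, history=history)     # threats become rare
print(f"Mild stimuli flagged while threats are common: {common:.1%}")
print(f"Mild stimuli flagged once threats become rare: {rare:.1%}")
```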

    John Lyle is an engineering manager at Facebook, working on detecting fake accounts. What’s the definition? Most are driven by scripts; some set out to misrepresent, or do harm; but what about an account a child sets up for their dog? The practical problem is how to prioritise enforcement, and that means looking for the intent to do harm above all else. In the first 3 months of this year, they took down 2.2bn fake accounts, and still had over 2bn real ones; and this doesn’t include blocked attempts to create accounts. Both false negatives (missed fake accounts) and false positives (genuine accounts killed off) exist; so they isolate blocked accounts from the rest of Facebook and use reCAPTCHAs, plus other tests such as uploading an image that isn’t a stock photo. No single test works for everyone, so it takes constant iteration of the workflow; and A/B testing is hard in a battle against adaptable adversaries. The product also gets tweaked to generate more signal.
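
    A toy illustration of the trade-off John described (invented scores, nothing to do with Facebook’s actual pipeline): lowering the enforcement threshold catches more fakes but starts hitting genuine accounts, which is why borderline accounts get extra challenges rather than outright removal.

```python
# Invented "fakeness" scores to illustrate the enforcement trade-off: moving the
# threshold down removes more fake accounts (fewer false negatives) but starts
# removing genuine ones (more false positives). A toy model, not Facebook's system.
import random

random.seed(7)
genuine = [random.gauss(0.25, 0.12) for _ in range(100_000)]
fake = [random.gauss(0.75, 0.12) for _ in range(5_000)]

def rates(threshold):
    false_pos = sum(s >= threshold for s in genuine) / len(genuine)  # genuine accounts hit
    false_neg = sum(s < threshold for s in fake) / len(fake)         # fake accounts missed
    return false_pos, false_neg

for t in (0.40, 0.50, 0.60, 0.70):
    fp, fn = rates(t)
    print(f"threshold {t:.2f}: {fp:6.2%} of genuine accounts hit, {fn:6.2%} of fakes missed")
```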

    Arun Vishwanath has been researching human cyber vulnerability and was an academic until recently. There are a lot of claims made about “the people problem” but not much in the way of knowledge or even data. Arun has tried in various contexts to test various proposed methodologies, ranging from didactic training (doesn’t affect phishing success) to phishing your own staff (may work against the trainer’s test but not against independent testing, when you count people who click more than once). His suspicion, cognition, automaticity model (SCAM) looks for commonalities at the user level: suspicion is the key, and is a function of habits.

    The discussion started on misaligned incentives; organisations make things more complex while the security guy wants to get clickthrough down, and there’s the fact already noted that organisations prefer to buy stuff rather than train people. Then there’s the question of when signalling behaviour becomes prominent; is it when identity becomes measurable? Again, it’s often misaligned incentives, between the signaller and the recipient. And what are the real incentives for Facebook; is it just to take down the dumb bots, in which case the surviving fake accounts might be the convincing ones and do more damage? The company just does what it can, and although in theory it comes down to the harms of the false positive and false negative, in practice there are a lot of identical bots. Again, bank fraud teams find that moving a logo slightly can have a huge effect on abuse; it’s down to breaking the other side’s automation (real people are hard to program). In social media, there’s free riding; people trust the implied signal from apparent friends. In more traditional systems, there are continual fights over whether systems that partially automate human processes failed because of human weaknesses or because of structural problems. And some of the things we hope unmotivated people would find are actually rather rare, including phishing emails; we’d get swamped with false positives if people tried. Signals are increasingly contaminated by conflicts of interest, even with things as apparently simple as reCAPTCHAs. We might also start looking for more individual tests: are there things that specific people can do, while others can’t? The things we expect right now are just weird, such as asking people to stick a foot through the steering wheel to make a turn signal. As for harm, it’s strongly contextual, which adds to the difficulty of using anything related to harm as a metric; and we’re often not measuring what we think we are.

  5. Yasemin Acar kicked off the first session on Thursday with a talk on how many security failures are due to developers being unable to use the available tools properly. Often the toolsmith’s failure is blatant, such as providing such poor documentation that coders turn to Stack Exchange for advice. A move to developer-centered security is overdue; we have to build tools for the programmers who’re going to use them rather than for security PhDs. She’s been studying how people choose security libraries; it turns out that while they state preferences for the right things (maturity, functionality, maintenance, source code), their revealed preference is for anything that will do the job. Once they’ve learned a tool, they will probably not change it even if they hear of bugs or other issues. We need usability testing for security libraries, and perhaps a labeling system to help with choice.
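
    As a small illustration of the usability gap (my example, not from Yasemin’s study), compare a low-level and a high-level API in Python’s cryptography package; the low-level route “does the job” while silently inviting an insecure mode choice:

```python
# Illustration of why library usability matters (not from the study itself):
# the low-level API happily gives you AES in ECB mode, which encrypts but leaks
# plaintext structure, while the high-level recipe makes the safe choices
# (random IV, authenticated encryption) the default.
import os
from cryptography.fernet import Fernet
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

# Low-level route: "anything that will do the job".
key = os.urandom(16)
enc = Cipher(algorithms.AES(key), modes.ECB(), backend=default_backend()).encryptor()
leaky = enc.update(b"ATTACK AT DAWN!!" * 2) + enc.finalize()
assert leaky[:16] == leaky[16:32]   # identical plaintext blocks are visible in the ciphertext

# High-level recipe: one obvious, safe way to do it.
f = Fernet(Fernet.generate_key())
token = f.encrypt(b"ATTACK AT DAWN!!")
assert f.decrypt(token) == b"ATTACK AT DAWN!!"
```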

    Andrew Adams has been thinking about AI and autonomy. Can data protection law help? Sometime in the last 25 years, it became cheaper to just keep everything; there was no longer an economic incentive for minimisation. Now AI systems can do large-scale customisation in a world of media illiteracy and naive social trust. This also creates mechanisms for the social spread of fake news. Andrew reckons we don’t need new regulation, but just to apply the laws we already have; just because a platform like Facebook has access to user data, does not mean that they have the right to use it for purposes incompatible with those for which it was provided.

    Zinaida Benenson has been working on security update labels; she did a survey with over 700 users to determine their attitudes to claims about the lifetime of future security updates for cameras and home weather stations, selected as examples of sensitive and non-sensitive devices. The work will appear at IEEE S&P 2020.

    Rutger Leukfeldt has been studying the security awareness of a population-based sample of 2,426 Dutch citizens who have volunteered for online activity monitoring and agreed to fill out surveys. He’s collecting data about password strength for starters. He proposes to look at mood, time pressure and other decision factors. A second thread of research is pathways into cybercrime; the combinations of social ties and meeting places seem similar in structure to those in conventional crime, although the two communities are largely separate. Finally, they run a restorative justice program for first-time cyber-offenders.

    Alan Mislove has been studying discrimination in ad networks, and in particular when it comes to deciding which ad to show. Even if they are bid for identically, ads for bodybuilding versus cosmetics are stereotyped by gender. How is the classification happening? It seems to be down to the images used. He tried fooling the system using transparent images that humans can’t see, and discovered that an automatic image-classification system is determining gender. A county-level analysis of North Carolina by race revealed racially skewed delivery, depending on whether the photos in music ads were of a country-and-western singer or a hip-hop group. This raises issues of whether Facebook is a publisher rather than a mere conduit.

    Jeff Yan has been working on Chinese scam villages. A number of common scams are run by specific villages in China, where scammers share scripts and other techniques with local colleagues who are neighbours and in many cases relatives. The patriarchal clan system was the basis of Chinese society for centuries; it provides an alternative system of property rights that can easily be adapted to cybercrime. Given the nature of a criminal enterprise, conventional intellectual property rights are not available, so other means are needed to scale up a successful enterprise. The enforcement response should be to treat such activities as organised crime; it isn’t (yet).

    Discussion started with interventions being tested in Britain by the NCA, who “knock and talk” – visit young cybercrime suspects, explain that they’re breaking the law, and warn them of consequences if they persist. Unfortunately they won’t do a randomised controlled trial, but they will allow researchers to interview the suspects. The Chinese villages are reminiscent of the US fake store frauds of the 1920s and 1930s, described in The Big Con; these were also group efforts. In the Netherlands, social opportunity is a big deal in organised crime; all sorts of bad guys recruit their relatives. Why don’t the victims wise up? Well, one Chinese scam has been running for ten years and the police only started paying attention four years ago; response is just slow. It’s also not always about economics. Read Gang Leader for a Day, which discusses how most criminals don’t make enough money to support themselves and need day jobs too. There’s a lot of phishing from India now, as it used to have a lot of call centre jobs that are now moving to the Philippines, so the people, the skills and the hardware are there for phone phishing. Diego Gambetta’s “Codes of the Underworld” describes the value of cohesive community in organised crime in Italy and elsewhere, while Richard Sosis looks at cohesion in religious groups; trust is established via rituals, some of which emphasise the cost of defecting. As for ad transparency and discrimination, there’s a start with Facebook’s facility whereby you can right-click to discover why you saw a particular ad. When it comes to gender discrimination, are there other explanations? Lots of women use cosmetics but few men bodybuild. One woman attendee said she never uses cosmetics, yet gets make-up ads all the time, and spent a lot of time looking for a car to buy without later getting any car ads. It’s possible to manipulate the ads you get by killing cookies and clicking with care, but let’s call advertising discrimination what it is: sexism and racism. There are also gender issues with usability for programmers, as they’re overwhelmingly young men.

  6. Maria Brincker’s interests range from neuroscience to philosophy of mind. She reminds us that websites try to change human behaviour, so we can’t see privacy as just a passive thing. When we are presented with a skewed view of the world, that matters. It’s obvious enough if Amazon moves their checkout button, but if our feed is filtered that’s less obvious. We don’t have an alarm system or an immune system for that. We still feel that we are autonomous and empowered, but we’re not seeing the better options. When manipulating the newsfeed becomes the business model, a lot of the agency passes to the curators.

    Jean Camp is concerned about the use of WhatsApp to promote lynchings and other mob violence in India and elsewhere. It shows the limitations of our current understanding of privacy (and gender); we now think of it as data that can be sold, as a risk-benefit tradeoff, with limits founded on an old right to get your mail without the government messing with it. We need a broader concept, taking in Helen Nissenbaum’s work on contextual integrity and the fact that data are a toxic asset with disposal costs. To get out of the WEIRD trap, she sampled 820 Arab and Indian nationals. She’s finding that different populations have different risk preferences.

    Tyler Moore has found that the sharing of research datasets is in a sorry state. The incentives aren’t really there, as sharing can be expensive and is seen as ceding competitive advantage. The datasets are in effect a public good, and thus are underprovided. He examined 965 cybercrime papers from 2012-16; the data were made public only 18% of the time, but at least those papers that did so got more citations. For more, see the liveblog of Tyler’s talk at WEIS two days ago.

    Stuart Schechter reminded us that elementary school kids can understand passwords: the more you share a secret the less secret it is, and if people can guess it, it won’t be secret any more. Two-factor may be “more secure” but it’s harder to understand; a well-known security researcher had to get a new Square account after replacing his phone. And how hard is it to replace a YubiKey with another, provoking the user to call support and enable the new one? Who thinks of the marginal costs of syncing new devices? And who sets up the recovery properly? Essentially nobody. He’s invented a mechanical device called a DiceKey to manage keys, which can be scanned by an app on your phone.

    Rick Wash has been working on phishing emails. What do people actually notice, in order to spot them? Different people notice different things, and the same person can notice different things at different times; as it’s about giving meaning to the email, your mental state is critical. There are actually two different types of expertise: “face-value” expertise which enables you to analyse the content of the email, and spot discrepancies with what you expect, such as a wrong .sig; and “phishing” expertise in what bad emails look like, so you have an alternative non-zero Bayesian prior, such as being sensitive to an action link at the bottom of the email.
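
    A back-of-the-envelope version of the “non-zero prior” point (my numbers, not Rick’s data): a suspicious cue only moves your belief if you entertained the phishing hypothesis to begin with.

```python
# Bayes with a suspicious cue (say, an unexpected action link at the bottom of
# the email). Illustrative numbers only, not from Rick's study.
def posterior_phish(prior, likelihood_ratio):
    """P(phish | cue), given prior P(phish) and P(cue | phish) / P(cue | legit)."""
    if prior == 0:
        return 0.0                      # no amount of evidence moves a zero prior
    odds = (prior / (1 - prior)) * likelihood_ratio
    return odds / (1 + odds)

CUE_LR = 20.0   # suppose the cue is 20x more likely in phish than in legitimate mail

for prior in (0.0, 0.001, 0.02, 0.10):
    print(f"prior {prior:5.3f} -> posterior {posterior_phish(prior, CUE_LR):.3f}")
```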

    Ben Collier is a sociologist and criminologist who’s been studying Tor communities, interviewing 26 activists, developers, relay operators and others. You might expect shared values in the community, but this isn’t the case: there’s a lot of diversity in how people understand the organisation’s goals, and its links to politics and power. He’s used the social world analysis techniques of Susan Leigh Star, which look at discourse. Engineers see Tor as a structure, activists see it as a struggle, while “infrastructuralists” (any ideas for a better word?) see it as “privacy as a service”. In more detail, engineers think political problems can be solved by doing engineering, by decentralising stuff; activists see it as an explicitly political act, linked to movements such as anti-racism; the third group (among which there are many relay operators) see a politically agnostic service that enables others to pursue their political agendas. Given this diversity, the key individuals are vital in holding things together by acting as translators between the agendas. Social world analysis looks for boundary objects, and the key one here is “privacy”, about which there is a productive ambiguity that creates the space for translation and bridging. To sum up, security at scale requires infrastructure; you don’t need consensus, but you do need careful negotiation of values.

    Discussion started on the usability of two-factor authentication and moved to the dynamics of recognising a phish; lots of things can trigger an alert. Security experts tend not to understand ordinary people; according to one survey, one man changed his password from his wife’s name to his daughter’s. Our surveys are often biased to other experts. There are also issues of who controls the environment. It would be nice to see what other people were seeing; to be able to ask Alexa or some other assistant what she’s showing to other groups, but you can’t get that. As for Tor, it’s actually rather subtle; it provides anonymity, not privacy, and many people don’t get that. People who use Tor to log in to their own mail server may not gain much. But what it can do is mitigate some of the control points that allow power to be exercised over communications. Privacy can mean a lot of different things to different people.

  7. The seventh session started with Richard Clayton talking about booters; for ten dollars you can get a month’s worth of DDoS attacks against targets of your choice, from your high school to Ecuador. About 50 websites offer the service on an average day. Damon McCoy took a lot of them down by closing their PayPal accounts; last year the FBI seized 15 domains and arrested three people, one of whom died of an overdose. Richard has lots of data on the reflected amplified UDP attacks they ran, and has been matching this to takedowns. Attacks tend to be local, and there are other things to learn, such as that the NCA’s ads saying that booters were illegal did suppress crime growth for a while.
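
    For readers unfamiliar with the mechanics, the arithmetic below uses commonly cited rough bandwidth-amplification factors (illustrative figures, not Richard’s measurements) to show why booters favour reflected UDP: a small spoofed request elicits a much larger response aimed at the victim.

```python
# Rough, commonly cited bandwidth amplification factors for UDP reflection
# (illustrative only, not Richard's data): the attacker spoofs the victim's
# address, so each megabit of requests becomes many megabits at the victim.
AMPLIFICATION = {
    "DNS (open resolver)": 28,
    "SSDP": 31,
    "CharGen": 359,
    "NTP (monlist)": 557,
}

attacker_uplink_mbps = 10    # a single modest connection driving spoofed requests
for protocol, factor in sorted(AMPLIFICATION.items(), key=lambda kv: kv[1]):
    print(f"{protocol:20s} ~{factor:3d}x -> roughly {attacker_uplink_mbps * factor:,} Mbit/s at the victim")
```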

    Richard John has been studying communication in the wake of a disaster or terrorist attack. People nowadays rely on social media. What does that imply? There are lots of fake posts after events. How do people tell genuine tweets from misleading ones? Do they assess base rates, and if so how? Subjects were given $5 and then fined 25c for each misclassification, with half the tweets true and half false. Scepticism was positively correlated with performance; self-identified conservatives did worse than moderates or liberals; age was positively correlated; but confidence, gender, numeracy and education were not related. In a further experiment he manipulated the base rate between 20%, 50% and 80%, with a $10 reward. The ROCs are much the same for the 50% groups, but the extreme base rate groups had odd results, perhaps because the tweets’ truthiness didn’t correspond with an extreme base rate. He wonders if people could learn to do better.
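
    To see why extreme base rates are hard, here is a toy signal-detection sketch (my assumptions about score distributions, not Richard’s data): with a flat fine per misclassification, the optimal criterion shifts with the base rate, so a subject who keeps a 50/50 criterion does noticeably worse in the 20% and 80% conditions.

```python
# Toy signal-detection model: each tweet yields a credibility score drawn from
# Normal(1, 1) if true and Normal(0, 1) if false (my assumption). With a flat
# 25c fine per misclassification, the Bayes-optimal rule is to call a tweet
# true when P(true | score) > 0.5, i.e. when the likelihood ratio exceeds
# (1 - p) / p, which here means score > 0.5 + ln((1 - p) / p).
import math
from statistics import NormalDist

TRUE, FALSE = NormalDist(1, 1), NormalDist(0, 1)

def error_rate(p_true, threshold):
    miss = TRUE.cdf(threshold)              # true tweet called false
    false_alarm = 1 - FALSE.cdf(threshold)  # false tweet called true
    return p_true * miss + (1 - p_true) * false_alarm

for p in (0.2, 0.5, 0.8):
    optimal = 0.5 + math.log((1 - p) / p)
    naive = 0.5                              # criterion tuned for a 50/50 world
    print(f"base rate {p:.0%}: optimal threshold {optimal:+.2f}, "
          f"error {error_rate(p, optimal):.1%} vs {error_rate(p, naive):.1%} at the 50/50 criterion")
```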

    Patrick Gage Kelley works for Google on security, privacy and abuse, and has been conducting a survey of privacy attitudes every December since 2015 in 16 countries. It covers government data use, surveillance and hackers; for example, in the USA 85% were concerned about government data use, with 37% extremely concerned (a figure that has been stable for five years); twice as many are extremely concerned about hackers. Differences between countries exist but are quite small; the proportion extremely angry about government data use varies from 10% in France to 20% in Germany. Most people support government access to data to protect them from violent crime. Support for a right to send anonymous email runs at about 50%, with most Germans supporting it and most Brits not.

    Christof Paar has been studying cognitive obfuscation in hacking. Most of the cost of attack design is reverse engineering, and most real systems don’t get hacked as it’s too expensive. So can we push the costs up? Here, obfuscation may be useful, whether of hardware or software. He did two studies with 38 students, each doing four exercises trying to reverse a crypto implementation to find an AES key using the HAL reverse engineering framework. Graduate students did better than undergrads, and students with a higher working memory score did better too. He hopes to use this methodology, and the best students already identified, to find out what kinds of obfuscation are cognitively more difficult.

    Jonathan Zittrain’s topic was Machine learning and intellectual debt. Technical debt encompasses maintenance backlog and spaghetti code; is there an intellectual equivalent? Well, we discover truths about the world that we can’t explain, such as drugs that work but whose mechanism is unknown. Might AI be like asbestos, something that gets everywhere and which then costs a lot to get rid of? AI can fail weirdly for reasons we can’t fathom, or because of spurious correlations or even adversarial activity. Will this lead to a cargo-cult-like world of obscure causes being elevated into fetishes? Machine learning gives us a great stream of tasty fish without any fishing line. What happens to the university of the future if professors are just custodians of the machine in each department?

    Questions started on asbestos, the blockchain of its era, and moved to gravity, which we don’t understand, to mathematics, whose effectiveness we really don’t understand. We’re making the mistake of using machine learning for causal explanations, and if we want those we need to use other tools. There are real risks with sending an AI ahead of troops to distinguish friend from foe, and with the huge bubble of ML stuff coming through academia. We know we shouldn’t rely on this stuff, but that’s not how it’s marketed. We already see spammers gaming our ML systems by voting that their spams are not spam, so we have to hire humans to supervise them to stop that. Without this sort of maturity there will be bad results sooner or later. How do you assess half-baked theory against half-baked number crunching? Well, at least the former is falsifiable. Are there practical ways of setting the environment so that false rumours don’t proliferate after a crisis? You can look out for pranksters. How do we analyse intellectual debt? An important factor may be who owes it, and to whom, rather than trying to quantify it before we understand it better. How can we test AI? Often there will be predictions that we can test against what happens, but this is tricky; AIs may work often enough to take in people who would not be taken in by homeopathy.

  8. I started the eighth session talking about what we learned from our “Changing costs of cybercrime” paper at WEIS, and what I’ve learned from upgrading my book to a third edition: chapter 3 on psychology and usability may be of particular interest to the security and human behaviour community. The draft chapters are online for comment at https://www.cl.cam.ac.uk/~rja14/book.html.

    Jon Callas was next. He has long experience of security engineering, which taught him that the first thing to build when developing a secure phone is the update mechanism. The faster you can update stuff, the less your mistakes matter. One of the big problems facing us is figuring out how to patch everything, now that everything is becoming software. We also need privacy laws; he’s one of the few Americans who thinks that GDPR is a good thing!

    John Mueller works on threat perception, on all topics from cyberwar to election tampering. The former spans from sabotage to espionage; he’s not impressed with Stuxnet as it’s not at all clear that the Iranians were even slowed down. Espionage heaps more and more stuff on the pile, and it’s not clear this makes a huge difference. John asks why the Manning disclosures were classified, when we sort-of knew it all anyway. Isis’s efforts to coach would-be terrorists in the USA didn’t work as one of their recruits was an FBI agent, so the other got arrested. Their “How to make a bomb” stuff was at best misdirecting. He’s also sceptical about cybercrime. As for fake news, elections have always been full of bullshit. The real question is whether security measures decrease the risk enough to justify their cost. In short, much of the scepticism that sensible people justly apply to claims about terrorism can rightly be applied to cyber too.

    Molly Sauter did her PhD on venture capital. In 1970 it was not a serious thing and was mocked in daily newspapers; by 1984 Arthur Rock was being praised in business magazines for doubling his money. The background was the space race and the Vietnam War pouring money into California, followed by the 1974 recession and the ERISA law stopping pension funds investing in limited partnerships. This was seen as a disaster: Bill Casey (later CIA Director) led a taskforce to roll back ERISA, create capital gains tax shelters for small business and venture capital, and allow risky investments provided portfolios were hedged. By 1976 this had built scaling into the DNA of all the firms engaged in this business. They don’t care about security; they care about scaling.

    Bruce Schneier has been thinking about democracies as information systems; attacks on democracy, like the Russian disinformation campaigns, can be seen as attacks on information systems. How can Russian disinformation act as a stabiliser there, and yet destabilise here? This is because the split of shared versus contested political knowledge is different in democracies as against dictatorships. In a democracy, people agree on the rules but not on the ruler; in an autocracy everyone knows who’s in charge, but knowledge of political actors is suppressed. This difference leads to the difference in vulnerability. It was assumed 20 years ago that the Internet would destabilise dictators as knowledge would become common; but it turns out that democracies are vulnerable to campaigns that undermine common knowledge and make everyone more distrustful. We’re seeing attacks on everything from news to the US census. The dictator’s dilemma was to get the facts needed to run the country without having them so widely shared as to enable a revolt; there is a dual democrat’s dilemma of maintaining agreement on rules and facts. This model explains Russian strategy reasonably well, and leads to a taxonomy of defences: build audiences, strengthen consensus, build digital literacy, debunk lies, pass an Honest Ads Act, make the truth louder, make useful idiots less useful, and pursue persistent engagement such as indicting people.

    Jim Waldo is a recovering philosopher, now a computer scientist, who believes we have approached privacy wrong. Almost all the data of interest are relational; they’re about Alice doing X with Bob. So Alice can’t control data when Bob has it too. A privacy regime should be based on use, and on provenance rather than on limiting collection. We also need to adapt defences to the environment; if there are few bugs we should try to fix them all, but if there are many then we should speed up the release cycle, as Jon Callas suggested. And we need to cause the US Congress to acquire some clue, rather than talking to each other about what might go wrong with AI in fifteen years.
