11 thoughts on “Workshop on the Economics of Information Security 2014”

  1. Markus Riek was the first speaker of WEIS 2014. He’s been investigating how perceived cybercrime risk affects online service adoption. A WEIS 2012 paper on the costs of cybercrime suggested that the main cost was service avoidance; Markus started with Venkatesh and Davis’s technology acceptance model, as augmented by Featherman and Pavlou for perceived risk; added the criminological literature on the effects of prior victimisation and media reports on perceived risk; and looked at how user confidence moderates this model. He used the 2012 Eurobarometer cybersecurity report and found that perceived cybercrime risk was indeed significantly correlated with avoidance intention for all three studied services, namely online shopping, online banking and social networks. The correlation was much stronger for the first two, where the risk is transaction risk rather than privacy risk. It turns out that cybercrime experience explains all the perceived cybercrime risk leading to avoidance of online shopping; other factors seem to be in play with online banking and social networking. Finally, inexperienced users have less cybercrime experience but higher risk perception for these two activities.

    The second speaker was Jim Graves, speaking on attitudes towards cybercrime (declaration: I’m a coauthor). Online activists appear to be treated much more harshly by prosecutors than equivalent street protesters, while online fraudsters are often treated more leniently than the traditional variety. Over two thousand mTurkers participated in six different between-subjects experiments to assess the seriousness of cybercrimes by type of data (directory vs medical info), scope (10 to 1,000,000 records), motivation (student, activist or profiteer), financial consequences, co-responsibility, and whether the victim was a firm, a nonprofit or a government agency. Respondents rated breaches involving more records, or more sensitive records, as more serious; profiteers as much worse than students or activists; and costlier cases as more serious; they also saw firms as less liable if their servers were patched (but the data as more sensitive in that case). In summary, participants recommend harsher sentences when cybercrimes involve more or more sensitive data, have costlier consequences, or are motivated by profit. This is much more rational than the reported behaviour of some prosecutors and judges.
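
    The analysis here is essentially a regression of recommended punishment on the manipulated factors. A minimal sketch of that kind of model (the data file and column names are hypothetical, not the paper’s):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical per-respondent data from between-subjects vignette experiments.
responses = pd.read_csv("cybercrime_vignettes.csv")

# Recommended sentence as a function of the manipulated factors described above.
fit = smf.ols(
    "sentence_months ~ C(data_type) + np.log10(records) + C(motivation)"
    " + cost_usd + server_patched + C(victim_type)",
    data=responses,
).fit()
print(fit.params)
```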

  2. The second session started with my paper on the economics of surveillance which is blogged here.

    This was followed by Paul Laskowski speaking on Government Surveillance and Incentives to Abuse Power. In his model there are two players, a government and an opposition; the variables are the surveillance level, the abuse of surveillance power, the popularity of government and opposition, and the probability of power changing. They assume that a little bit of abuse benefits the government, but a lot benefits the opposition; surveillance makes abuse more effective. There’s a unique level of abuse A* that minimises the probability of change – and this level increases with the level of surveillance. Less intuitive results from this model include that when the government is less popular than the opposition, more surveillance always decreases welfare; but when the government is more popular it can go either way. One way or another, though, a government that wants to stay in power will always want to increase the level of surveillance, even beyond the welfare-maximising point. Improved models might include domestic versus foreign surveillance, and multiple opponents (e.g. a democratic opposition but armed insurgents too).
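
    A toy numerical illustration of that core claim (my own made-up functional forms, not the paper’s model): give abuse a concave popularity payoff whose effectiveness scales with surveillance, and the probability-minimising abuse level A* rises as surveillance increases.

```python
import numpy as np

def prob_of_change(abuse, surveillance):
    """Toy model: a little abuse (made more effective by surveillance) boosts
    the government's popularity; a lot of abuse boosts the opposition's.
    The probability of power changing falls with the popularity advantage."""
    popularity_gain = surveillance * abuse - abuse**2    # concave in abuse
    return 1.0 / (1.0 + np.exp(popularity_gain))         # logistic mapping

abuse_grid = np.linspace(0, 5, 501)
for s in (1.0, 2.0, 3.0, 4.0):
    p = prob_of_change(abuse_grid, s)
    a_star = abuse_grid[np.argmin(p)]
    print(f"surveillance = {s:.1f}  ->  change-minimising abuse A* = {a_star:.2f}")
# A* grows with the surveillance level, matching the talk's headline result.
```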

    The workshop continued directly into a panel on cognitive security, moderated by David Reitter. The panellists were Coty Gonzalez and Christian Lebiere of CMU and Joachim Meyer of Tel Aviv University. People have to tackle many different tasks, from risk-taking through the integration of large amounts of information, against a background of heuristics, biases and other influences on risk perception.

    Coty Gonzalez does cognitive modelling at CMU’s Dynamic Decision Making Laboratory. Cyber-conflict seems harder than the kinetic variety because of its complexity, the large amounts of data, the rarity of real attacks, and their speed. Hertwig’s work on the description-experience gap suggests we can overcome much of the bias described by prospect theory through training, as people making decisions from experience are more rational about risk. She developed this into instance-based learning theory, which formalises the steps in the training process and represents the process computationally so we can make predictions about behaviour. She’s applying this to intrusion detection problems.
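
    Instance-based learning theory stores past instances (situation, outcome) and values a new option by blending stored outcomes, weighted by their similarity (activation) to the current situation. A minimal sketch of that blending step, with invented numbers; real IBL/ACT-R implementations are much richer:

```python
import math

# Each stored instance: (situation features, observed payoff of acting on it).
memory = [
    ((0.9, 0.1), 10.0),   # e.g. "alert fired, low traffic" -> payoff of investigating
    ((0.2, 0.8), -2.0),
    ((0.8, 0.3),  7.0),
]

def similarity(a, b):
    """Simple negative-squared-distance similarity between feature vectors."""
    return math.exp(-sum((x - y) ** 2 for x, y in zip(a, b)))

def blended_value(situation, memory, temperature=1.0):
    """Blend stored outcomes, weighting each instance by its similarity to the
    current situation (a crude stand-in for ACT-R activation)."""
    weights = [math.exp(similarity(situation, s) / temperature) for s, _ in memory]
    total = sum(weights)
    return sum(w * outcome for w, (_, outcome) in zip(weights, memory)) / total

print(blended_value((0.85, 0.2), memory))   # value estimate for a new situation
```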

    Joachim Meyer argued that people often behave differently in the face of risk at different timescales, with behaviour related to a triad of factors, namely users’ exposure to risk, their responses to indicators, and system security settings. To explore this he created a Tetris game where malware attacks can eat your points, and you can protect them by saving them, whether routinely or in response to alerts. He found, for example, that expertise affected exposure to risks but not the choice of security settings, while damage severity affected the settings but not the exposure. Players were measured on various risk aversion scales (DOSPERT, AER, QJE and BART); exposure to risk was predicted only by the BART score, while response to alerts was predicted only by the DOSPERT gamble score. He concludes that user risk taking results from a combination of different but interrelated behaviours, based on different types of cognitive process.

    Christian Lebiere’s topic was sensemaking. He has a project, ICArUS, which sets out to model cognitive processes in intelligence analysis. People forage for information and organise it into frames of a few data items; stuff gets organised into layers; instance-based learning complements this but is essentially associative. He’s trying to construct a computational theory of how cognitive biases arise. His “COIN AHA” model is designed to account for performance over five types of mission; he’s trying to adapt it to malware detection. His vision is that processing based on cognitive models sits between data mining and human analysis and complements them.

    David Reitter is interested in investment timing decisions: do we replace a rotting bridge pillar / patch a system / replace a phone now or later? Where the decision is not how to act but when to act, how can it deviate from the rational? Rivest’s FlipIt game gives a platform; how do real people play it? He looked at 400 players and found that modality matters: visual or temporal presentation made a difference, with visual easier; and individual differences were correlated with risk propensity and need-for-cognition (NFC). Risk seekers kept benefiting from experience for much longer than risk avoiders, as did people with high NFC. Also, high-NFC people did substantially better if their risk propensity was average.
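
    For readers unfamiliar with FlipIt: each side pays a cost to “flip” control of a resource at moments of its choosing, without seeing who currently holds it, and earns the time it spends in control. A rough simulation of the defender’s side of this timing decision (all parameters invented):

```python
import random

def simulate_flipit(defender_period, attacker_rate, horizon=10_000.0,
                    flip_cost=1.0, seed=0):
    """Defender flips every `defender_period`; attacker flips at Poisson rate
    `attacker_rate`. Returns the defender's payoff rate: fraction of time in
    control minus the cost of its flips, per unit time."""
    random.seed(seed)
    t, owner, defender_time, flips = 0.0, "D", 0.0, 0
    next_def = defender_period
    next_att = random.expovariate(attacker_rate)
    while t < horizon:
        nxt = min(next_def, next_att, horizon)
        if owner == "D":
            defender_time += nxt - t
        t = nxt
        if t == next_def:
            owner, flips = "D", flips + 1
            next_def += defender_period
        elif t == next_att:
            owner = "A"
            next_att = t + random.expovariate(attacker_rate)
    return (defender_time - flip_cost * flips) / horizon

for period in (0.5, 1.0, 2.0, 4.0):
    print(period, round(simulate_flipit(period, attacker_rate=0.5), 3))
```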

  3. The lunchtime talk was by Rick Sullivan on “Ensuring payment security in the United States”. The Fed’s five-year strategy from fall 2013 includes not just protecting its own networks but enhancing end-to-end security and promoting industry best practice. Data breaches are considered to have become a serious problem in the last couple of years, particularly around payments. Rick led a payment security landscape study. This showed the importance of incentives, and in particular how network rules allocate fraud losses, the mix between competition and collaboration, legal/regulatory uncertainty, supervision/enforcement, and the governance and membership structure of networks. Mobile payment systems are likely to be more risky because of the larger number of participants. Banks would like real-time information on frauds and threats. The Fed’s considering five strategies: (1) establishing a payment security advisory council; (2) a mobile payments security framework; (3) accelerating the development of payment security standards; (4) fraud data collection and reporting; and (5) payment system security research. Perhaps we can get the bankers to cooperate more if they understand this is not just pain, but can bring benefit too. Rick is looking for feedback from the research community.

  4. The first speaker after lunch was Jim Graves, who studied whether banks should reissue payment cards after a breach. Card reissue costs about $5 a pop at scale; the decision depends on many factors such as the efficacy of fraud monitoring and the number of undetected breaches. About half to two thirds of card frauds reported in victimisation surveys were from unknown causes, and no data could be found on whether fraud monitoring works. Jim studied breach data and did a weighted average based on breach type; the number of cards compromised could be between 2 million and 40 million a year. The cost of not reissuing a card could thus be between 41c and $109. He suspects that the mean cost of not reissuing is low (under $5), so that in most breaches it’s cheaper to do nothing; however there’s probably a long tail of really expensive compromises. The big problem is that we just don’t have data. Discussion touched on the facts that the big breaches (TJX, Target, Heartland, …) skew everything, and that brand costs are also significant for issuers.
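
    The underlying comparison is simple expected-value arithmetic: reissue at about $5 per card, or accept the expected fraud loss on a compromised card that is left alone. A hedged sketch using illustrative figures within the ranges quoted:

```python
# Illustrative figures only, within the ranges quoted in the talk.
reissue_cost = 5.00                                     # dollars per card, at scale
expected_loss_if_not_reissued = [0.41, 5.00, 109.00]    # low / break-even / high

for loss in expected_loss_if_not_reissued:
    decision = "reissue" if loss > reissue_cost else "do nothing"
    print(f"expected fraud loss ${loss:6.2f} per compromised card -> {decision}")
```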

    Timothy Peacock went next with Automation and Disruption in Stolen Payment Card Markets. How do carders estimate the value of a batch of stolen card numbers? One way is refining – using a botnet to make large numbers of small transactions to see which cards work. His idea is to limit this by stopping automated payment card testing. Refining merchants and cashout merchants are different; the former face at most a small reversal fee, but cause much harm downstream. Should they be incentivised by regulation, liability or cross-subsidy? Liability is hard because of information issues; we’re likely to see a fierce lobbying fight over liability with the introduction of chip and PIN; and the impact would be limited to large-volume websites. Discussion raised the possibility of sampling by the bad guys, and whether 3D Secure could provide a single control point.

    Allison Miller was last with Defending Debit. She’s interested in how payment industry participants decide how much security is enough. For credit card issuers, the business model is based on interest and the surrounding fees (2.1% average in 2013 for Visa premium), while for debit cards it’s the interchange fees (0.69% for PIN-based debit). The Durbin amendment capped interchange fees for banks with assets over $10bn. After a breach, issuers can use restrictive authorisation strategies (from simple caps to incorporating compromise in scoring algorithms) rather than just reissuing; more proactive options include technology from CVV and 3DS to EMV. Durbin means that instead of having to do dozens of good transactions to make back a bad one, an issuer now needs over 1,000; the fix is to be sensitive to high-dollar transactions. The 40m cards compromised in the Target breach led to $170m in response costs, with 17.2m cards reissued so far; one market signal was a recommitment to the 2015 EMV liability shift, but only from credit card issuers. Debit card issuers have preferred to impose simple transaction caps; they are unenthusiastic about EMV post-Durbin. Overall, despite some large breaches, system-wide fraud is near its lows. Banks have been using transactional fraud liability shifts rather than PCI-DSS. In conclusion, Durbin did have an effect, but not the expected one; and liability shifts may affect fraud, but maybe not technology deployment.
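
    The “dozens versus over a thousand” point is a breakeven calculation: how many good transactions must an issuer clear to earn back one fraud loss out of per-transaction revenue? The figures below are placeholders chosen only to show the mechanism: when a cap cuts per-transaction revenue, the breakeven count rises in proportion.

```python
def breakeven_good_txns(fraud_loss, revenue_per_good_txn):
    """Good transactions needed to recoup one fraud loss from interchange revenue."""
    return fraud_loss / revenue_per_good_txn

fraud_loss = 200.0                                    # hypothetical loss on one bad transaction
for label, revenue in [("pre-cap (ad valorem fee)", 2.00),   # hypothetical per-txn margins
                       ("post-cap (flat fee)     ", 0.15)]:
    print(label, round(breakeven_good_txns(fraud_loss, revenue)))
```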

  5. Neha Chachra has been studying the impact of spam domain blacklisting. She obtained leaked datasets from SpamIt and GlavMed, two online pharmacy affiliate programs; the former recruited via email spam and the latter via search spam. 20% of Hotmail sales and 40% of Yahoo sales came from messages in junk folders, suggesting that customers were actively looking for unlicensed pharmaceuticals in their junk mail. She looked at how spam revenue was affected by domain blacklisting on URIBL. Most domains appeared on the list within 48 hours, and made $21 in revenue before that happened; 88% of the SpamIt domains were eventually blacklisted, but the other 12% earned 62% of the total revenue, or $1,900 per domain. Domains continue to monetise on a long tail after blacklisting (on average $147 per domain, though this declines from about 2 hours after blacklisting). If blacklisting were enforced in the infrastructure, say in the browser, then the $147 would go and the spammer would be left with $21. Blacklists are constructed using honeypots and user marking; competent spammers only email real humans (96% of blacklisted domains but only 0.5% of the others appeared on honeypot feeds) or else redirect responses through intermediate domains (WordPress, Facebook etc), inexpensive .ru domains, or compromised sites. In conclusion, blacklisting isn’t fast enough to overwhelm the cost of replacing domains, the penalty is too low, and competent spammers evade it by targeting real humans and using cut-outs. In discussion it was noted that email providers mostly use filtering to give users a better experience rather than to protect them, and that works fine off URLs; emails marked as ‘phish’ are not delivered at all.
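
    The revenue arithmetic can be laid out directly from the figures in the talk: moving blacklist enforcement into the infrastructure (e.g. the browser) would remove the post-listing long tail and leave a blacklisted domain with only its pre-listing revenue.

```python
pre_blacklist_revenue  = 21.0    # per domain, earned before it appears on URIBL
post_blacklist_revenue = 147.0   # long-tail revenue monetised after listing
blacklisted_share      = 0.88    # fraction of SpamIt domains eventually listed

today      = pre_blacklist_revenue + post_blacklist_revenue
in_browser = pre_blacklist_revenue   # infrastructure enforcement removes the tail

print(f"revenue per blacklisted domain today:         ${today:.0f}")
print(f"if the blacklist were enforced client-side:   ${in_browser:.0f}")
print(f"share of domains blacklisting touches at all:  {blacklisted_share:.0%}")
```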

    Richard Clayton collected a lot of data which show that when criminals register domain names for online criminality, they don’t provide their names and addresses. Some hide behind privacy or proxy services, but there’s no point in banning these, as those villains would then just use the techniques the others use. Richard collected large numbers of criminal domains, fetched whois data and determined whether the registrant was using a privacy or proxy service; if not, he looked for a phone number, and called a random sample of them. Typically, criminal domains had been registered in the names of people who denied knowledge of them or with invalid phone numbers, or were compromised legitimate domains. Curious other facts include that almost as many banks use privacy and proxy services as child sex abuse image sites do; and even more legal adult sites do.

    Samaneh Tajalizadehkhoob asks Why Them? Extracting Intelligence about Target Selection from Zeus Financial Malware. Online banking fraud in Europe costs about €800m according to the ECB; why are some banks targeted while others are not? An industry partner collected 11,000 Zeus config files from 2009-13. In this period, about 2,000 botnets attacked about the same number of targets in 94 countries, three quarters of them in finance. Eighty-odd banks were attacked continuously, and hundreds more sustained transient attacks, whose durations had a power-law distribution. Although the Zeus inject code was highly reused and its source code became widely available, its criminal market did not expand as theory and experts predicted; the number of attacked domains remained below a certain ceiling, which suggests bottlenecks elsewhere (such as in the recruitment of money mules). In discussion it was suggested that the Zeus market size is limited by the support function.

  6. Kim Kaivanto kicked off the second day’s sessions with a talk on The Effect of Decentralized Behavioral Decision Making on System-Level Risk. His idea is to meld signal detection theory with prospect theory; the former gives the classic ROC analysis, which assumes risk neutrality and against which behavioural variants are benchmarked. He extends it with prospect theory using a “neo-additive” (piecewise linear but with jumps) value function. This gives steeper tangents and more conservative decision making. The upshot is that psychological factors in deception magnify the effect of being “behavioural” (in the sense of operating under cumulative prospect theory). The model is validated using both comparative statics and simulation results.
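
    A sketch of the behavioural ingredient: a neo-additive transform of probabilities, linear in the interior but with jumps at the endpoints, which overweights rare events (such as an actual attack) and so pushes the chosen operating point on the ROC curve away from the risk-neutral one. Parameter values here are illustrative, not the paper’s.

```python
def neo_additive_weight(p, a=0.7, b=0.15):
    """w(0)=0, w(1)=1, and w(p) = b + a*p for 0 < p < 1 (with a > 0, b >= 0,
    a + b <= 1): rare events are overweighted, near-certain ones underweighted."""
    if p <= 0.0:
        return 0.0
    if p >= 1.0:
        return 1.0
    return b + a * p

# Compare objective vs subjectively weighted probabilities of attack.
for p_attack in (0.01, 0.05, 0.2, 0.5, 0.9):
    print(f"p = {p_attack:4.2f} -> w(p) = {neo_additive_weight(p_attack):.3f}")
```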

    Christos Ioannidis followed on Resilience in Information Stewardship. In an environment with interacting agents, stewards can emerge naturally who help maintain a shared asset. He models the contribution of stewards in helping a system recover from a shock such as an attack: asset owners choose asset mixes and security investment, while attackers observe the defence posture and decide whom and when to attack; an emergent steward sets a Stackelberg policy framework which, with perfect information, reduces attacks. He then weakens this by assuming the steward has limited powers of persuasion/investment, and then by reducing the available information; in this case the steward can actually degrade the free-market outcome. So institution design matters.

    Neil Gandal’s topic was Competition in the Crypto-Currency Market. There are network effects in both crypto-currencies and markets, the former being one-sided and the latter two-sided; but winner-take-all effects are mitigated by the risk of a winner disappearing, as Mt Gox did. Neil analysed exchange rate data from BTC-e, Cryptsy, Bitstamp and Bitfinex for Bitcoin and six other cryptocurrencies, and divided the data into two periods, before and after the takedown of Silk Road. The bitcoin price was fairly stable in the first period but hugely volatile in the second. In the first period bitcoin strengthened against its competitors, as if winner-take-all effects were starting to kick in, while this reversed later – though the other coins get stronger with respect to bitcoin when bitcoin goes up against the dollar, and vice versa, signalling perhaps that when confidence grows in crypto-currencies in general, the newer coins gain more than bitcoin does. Prices are indeed correlated with interest shown in bitcoin and litecoin as measured by Google Trends.
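
    The kind of check described, correlating an altcoin’s bitcoin exchange rate with bitcoin’s own dollar rate, separately before and after the Silk Road takedown, might look like this sketch (the data file and column names are hypothetical):

```python
import pandas as pd

# Hypothetical daily closing prices with columns: date, btc_usd, ltc_btc.
prices = pd.read_csv("crypto_daily.csv", parse_dates=["date"])
prices["btc_ret"] = prices["btc_usd"].pct_change()      # bitcoin/dollar returns
prices["ltc_btc_ret"] = prices["ltc_btc"].pct_change()  # litecoin/bitcoin returns

cutoff = pd.Timestamp("2013-10-02")   # Silk Road takedown
for label, period in [("before", prices["date"] < cutoff),
                      ("after",  prices["date"] >= cutoff)]:
    corr = prices.loc[period, ["btc_ret", "ltc_btc_ret"]].corr().iloc[0, 1]
    print(label, round(corr, 3))
```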

    Viet Pham has been working on Incentive Engineering for Outsourced Computation in the Face of Collusion. When computational tasks are outsourced, lazy workers can return guessed results; but auditing is expensive and cryptographic verification can be more so. What’s optimal? He assumes that enforceable fines are bounded. In the simple case, principals choose an auditing rate, a reward and a punishment; the standard principal-agent analysis gives a non-convex optimisation, where the optimal fine is always the maximum one and often you can’t find a solution that satisfies both incentive compatibility and participation. With redundancy, the task is given to two agents who’re punished if they answer differently; there [honest, honest] is a Nash equilibrium and feasible contracts are always possible. If agents can collude a bit, they can be honest selectively, but redundancy is still useful, particularly if you have a bounty scheme to induce players to betray colluders (though take care that agents don’t conspire for one of them to collect the bounty and share it later); if collusion is unlimited, then optimal contracts don’t use redundancy at all. Much of the behaviour is driven by the size of the maximum enforceable fine.
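
    The single-agent contract in the simple case can be sanity-checked numerically: fix an auditing rate, a reward and a fine no larger than the enforceable maximum, then verify incentive compatibility (working honestly beats guessing) and participation (working honestly beats opting out). A toy sketch with invented parameters:

```python
def honest_utility(reward, cost_of_work):
    return reward - cost_of_work

def lazy_utility(reward, fine, audit_rate, p_guess_right):
    """Guess instead of computing: the worker is fined only if the task
    is audited and the guess turns out to be wrong."""
    p_caught = audit_rate * (1.0 - p_guess_right)
    return (1.0 - p_caught) * reward - p_caught * fine

max_fine = 50.0                                   # bounded enforceable fine
reward, fine, audit_rate = 10.0, max_fine, 0.25   # contract terms (invented)
cost_of_work, p_guess_right = 6.0, 0.5

honest = honest_utility(reward, cost_of_work)
lazy = lazy_utility(reward, fine, audit_rate, p_guess_right)
print("incentive compatible:", honest >= lazy)    # honest work is preferred
print("participation:       ", honest >= 0.0)     # worker is willing to sign up
```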

  7. The panel discussion on genome privacy was kicked off by Aleksandra Slavkovic, whose research topic is statistical disclosure control. Little is formally known yet about the potential disclosure risks of genetic data or how we might protect against them, yet NIH policies demand active involvement of their funded genome researchers in both data sharing and confidentiality protection. Many examples over the past two decades have taught us about the risks of re-identifying personal data as more data become available, and a relevant case may be dbGaP, which contains data from NIH-funded research. It’s “de-identified” according to HIPAA (which can mean removing 18 basic identifiers, or in some cases an expert determination in respect of redaction). In 2008, Homer et al showed it was possible to re-identify even aggregated data, whereupon NIH removed a lot of data from its website. There have been some advances since; differentially private approaches work only for small samples.

    Carl Gunter was next; his focus was primary care. Will genomic data be centralised, or tethered at the provider and shared – and if so, will the sharing be doctor-to-doctor, doctor-to-insurer, or involve the patient? US law is moving towards a personal health record which enables the patient to get a copy. Genomic data will have a big impact here because of its uniqueness, its implications for kinship, and its mystique. Direct-to-consumer access speaks to interests in genealogy, and enables paternity testing; and it’s available outside the medical profession, which is crufty but cheap, and will have a disruptive impact on clinical practice. Can the DTC stuff and the clinical stuff cohabit in a single information-sharing system? It would be helpful if we didn’t have to retest again and again and again; might there eventually be an app market for viewing your data? Apps could do genealogy, enable genomic social networking, or provide personalised subscriptions to research literature.

    Third was Vitaly Shmatikov, who criticised much current “genomic privacy” work for looking at anonymising metadata without touching the genome data itself, or for doing maths of no relevance to practitioners. Why is privacy protection so hard for genetic data? There are three main reasons. Genetic data become more valuable over time, not less; we don’t really know yet which parts are sensitive, though we learn more over time; and they are shared with relatives. This makes it a much tougher privacy problem than any other, and he has no idea how to solve it; he sees only obstacles. Progress might perhaps come from better biological understanding of genetic data (so don’t expect any progress from this panel here).

    I went fourth with a brief talk on the policy problems we’re hitting in the UK with HES, CPRD and the 100,000 genomes project.

    Discussion started on what controls might work. If genomes were kept in the hospitals that sequenced them there would not be an issue, but there are huge pressures to share for research, administration and marketing. What can push back? At least in the USA the Homer paper caused an established sharing program to be cut back; however the disclosure risks are just not clear. It’s not just the genomic data that matter; we could simulate meiosis and generate in silico a virtual population that might be our ten-times-removed descendants. Despite its being representative, it’s unlikely researchers would find it much use, as they wouldn’t have real medical records for these virtual persons. And there are big holes: DNA we leave behind (“left DNA”) is legal to collect and use in the USA, which in a dystopian future might be a concern (Europe’s privacy laws are different). As for public attitudes, there’s been a huge shift in the direct-to-consumer field, which is now mainstream; another big use will be advertising for personalised medicine, which in practice will probably mean pushing existing drugs at us for diseases for which we don’t yet have symptoms. Are there more severe harms? The NIH databases were taken down because of the risk that a crime perpetrator might be identified in the data, leading to the whole dataset being subpoenaed. Can we establish rules that limit use for law enforcement? Probably not for investigation, though we can perturb data in ways that make it much less useful in evidence; consider the census rule in some countries of swapping one record in ten to prevent use in prosecution. (An easier use case might be adoption.) As for really severe harm, if population-scale DNA databases became public they would reveal misattribution of paternity on a huge scale, leading to disinheritances, divorces, and worse. Statistical disclosure control is no help against such threats, or against issues around genes such as BRCA or Huntington’s; it may work for the census, but that’s because we understand those data and their uses rather well by now. We just don’t know what genes will be stigmatising in ten years’ time.

  8. Parinaz Naghizadeh studies Voluntary Participation in Cyber-Insurance Markets. The previous cyber-insurance literature contrasted competitive markets, where policies are optimised from individual users’ viewpoints, which decreases the incentive to invest, with monopolistic suppliers who can steer socially optimal investments. Parinaz studies the latter case, where the insurer is a nonprofit and gets an investment profile and a price profile proposed by each member; some members are less efficient at investing in security and get higher premiums. This is socially optimal but raises the question of participation: free riders can enjoy positive spillovers, and there are cases where there’s no mechanism to fix this. She proves that profit neutrality, social optimality and voluntary participation cannot all be achieved together. Possible resolutions are capital injection (e.g. catastrophe insurance from the government); an epsilon-optimal solution; partial coverage; or bundling insurance with other incentives such as approved government supplier status.

    Jeremy Clark spoke next On Decentralizing Prediction Markets and Order Books. Prediction markets usually have lower error than polling; but firms like Intrade only make markets in popular contests such as elections. How can you set up a market for (say) the Turing prize? He decided to use bitcoin as a platform and create an alternative “altcoin” currency called XFT to do this (though you could also use coloured coins on the main bitcoin blockchain). He’s designed a transaction syntax including opening and closing markets, trading shares, and winner declaration (via a machine-readable feed, a miners’ vote, a users’ vote or a trusted human arbiter). The order book runs much the same way as in existing markets; miners get to keep any spread as their fee. In discussion, it was remarked that the probability of a misdeclaration would be discounted from share prices.

    The last speaker of the session was Bongkot Jenjarrussakul, speaking on Japanese loyalty programs. There are over 200 of these spread across nine industries, and she’s been analysing their liquidity, security efforts and actual security levels. She used sectoral figures for security investment and cybercrime and found no correlation with the liquidity of that sector’s loyalty programs. As a proxy for security effort she used whether a physical card was required for registration. She found that both higher liquidity and greater security effort reduced the impact of security incidents, with high significance (p<0.001).

  9. Alessandro Acquisti started the last session, talking on Framing and the Malleability of Privacy Choices – part of his ongoing fight against notice and consent as the sole pillar of privacy in the USA. Industry argues that increased transparency and control empower individuals without getting in the way of business; Alessandro argues that these are necessary but not sufficient for privacy. To add to his previous work on herding, context and other overt mechanisms, he now studies subtle mechanisms. Why, for example, did Facebook change its “privacy settings” to just “settings”? It turns out that the labelling, composition and presentation of the choice set affect decision frames. In the first experiment, choices were presented as privacy settings or survey settings, and marked as of high or low importance; in each case the first option led to more protective choices. In the second, the importance manipulation was replaced by a panel of settings of homogeneous or mixed relevance; again, homogeneous settings led to more protective behaviour (and the effect was more striking among users who spent less time thinking). In the third, the manipulation was between prohibit and allow; again, more people showed protective behaviour when they did so by means of a prohibition. No compensatory disclosure was observed, leading to questions about the control paradox, and a Frost quote that “The strongest and most effective force in guaranteeing the long-term maintenance of power is … consent in all the forms in which the dominated acquiesce in their own domination.”

    Juhee Kwon’s topic was meaningful healthcare security, in the context of the rapidly increasing adoption of electronic healthcare records in the USA (up from 10% to 50% since 2008). One reason for this is government subsidy; since 2011, physicians and hospitals get nontrivial payments if they attest that they are making meaningful use of EHRs (such as $15,000 for a physician in private practice). Causation is unclear: perhaps attestation improves healthcare security outcomes; perhaps hospitals with fewer breaches or more resources are more likely to attest; or perhaps the driver is an unobservable such as a strong security culture. So she used a probit model, propensity scores and a difference-in-differences model to analyse the HIMSS database for 4,672 hospitals, of which 1,472 attested to meaningful use, and which reported 606 data breaches in total between 2008 and 2014. It turns out that prior breaches by outsiders can expedite attestation; among attesting hospitals, the number of breaches due to mishandled data increased in the first period (before attestation was introduced) and decreased in the second; attesting hospitals had more breaches overall; but attestation had no effect on breaches by malicious insiders. She concludes that incentivising hospitals to increase EHR use should have been accompanied by robust and standardised security.
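
    A skeleton of the difference-in-differences step described (the data file and variable names are hypothetical; the actual study also used a probit model and propensity-score matching):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical hospital-year panel: breach counts, an attester indicator, and a
# post indicator for years after meaningful-use attestation began (2011).
df = pd.read_csv("himss_panel.csv")

# Difference-in-differences: the attester:post interaction estimates the change
# in breaches among attesting hospitals, relative to non-attesters, after 2011.
model = smf.ols("breaches ~ attester * post", data=df).fit()
print(model.summary().tables[1])
```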

    The last talk of WEIS 2014 was given by Laura Brandimarte on Privacy Trade-Offs of Geo-Location. In the 2020 census, the US government might want to use our location information; how will people react once they realise their location can be easily tracked? Will they withhold personal information, or is the US government trusted enough? She did four between-subjects randomised experiments, manipulating geolocation awareness, the requesting institution and privacy salience. When she manipulated geolocation awareness, or simply requested subjects’ location, they were less likely to disclose sensitive information. When the requesting institution was varied between researchers, the census bureau and the government, subjects were most prepared to disclose to researchers, then the census bureau, then the government last. Third, she made privacy salient by getting subjects to solve an anagram for “Snowden” or “Clinton”: this was less significant than the institution effect. Finally she geolocated everybody and manipulated institution and privacy salience; this largely repeated the previous results, but with a small effect of salience priming where the government was the requesting institution. In no experiment were subjects less likely to answer the actual census questions – though mTurk rules prevented her asking for any of the personally identifiable information that the census actually demands.

  10. This is great and very informative for some cybercrime work we’re doing here. Thank you! I am perusing your co-authored piece on attitudes toward cybercrime with great interest.

    I tried to look at Riek’s paper and it’s not loading..! 🙁 Any chance it’ll be available soon?

    Thanks again, Ladan (at Durham)
