Workshop on the Economics of Information Security 2011

  1. The first keynote speaker was Chris Greer from the Office of Science and Technology Policy at the White House. The May 2009 Cyberspace Policy Review made cybersecurity an Obama administration priority, linked to its innovation strategy. The White House recently released an international strategy for cyberspace, which includes protecting free speech, and it sent a legislative proposal to Congress that includes national data-breach reporting and cybercrime penalties. Other initiatives include cybersecurity education, trustworthy identities and research, where SCORE coordinates research funding across the classified and open communities; one of the four research priorities under NITRD is cyber economic incentives.

    The cyber economics topics are the economics of legislation and policy – immunity, liability, safe harbor, material disclosure and so on; market factors such as valuation, technology risk, standards and innovation, criminal markets, risk decision-making and intellectual arbitrage; and cyber-insurance challenges such as quantitative risk assessment, moral hazard, risk pooling, and catastrophic or interdependent risks. The cyber security team at the White House has been successful in moving funds to economic and behavioural research in recent BAAs and elsewhere.

    In response to questions, Chris said that the targets were now people more than machines, which pointed to the growing importance of social and behavioural security; acknowledged the importance of international efforts (for example, the BAA has an MoU with the UK); said he expected no major budget increases for enforcement; argued for a worldwide norm of respect for intellectual property as a counter to economic espionage; defended NSTIC against criticism that a uniform ID regime would deepen perverse incentives online, saying that limiting the number of identity interactions would cut costs; acknowledged limitations to quantitative decision-making on trust; and admitted that while the current legislative proposal didn’t contain a private right of action, that might come up in later interactions with Congress.

  2. The first regular session speaker was Sam Ransbotham, who studied the impact of immediate disclosure on exploitation. The key finding is that immediate disclosure of a vulnerability increases the risk that it will be exploited in at least one attack, and the number of targets attacked, but, curiously, is associated with a small drop in total attack volume. His tentative explanation is that defenders react more quickly and compensate. The data came from a monitoring company selling signature-based intrusion detection to corporate clients: over 400 million alerts from 960 clients in 2006–7 were matched to the National Vulnerability Database, and 1,201 specific vulnerabilities, observed 1,152,406 times, were studied for this paper. Overall there wasn’t much difference between immediate and delayed disclosure.

    Cormac Herley was next with “Where did all the attacks go?” Most users are insecure but still avoid harm, so we could do with a more accurate threat model than Alice and Bob. His proposal is that a small set of attackers Charles_j sends broadcast attacks at a large set of users Alice_i whenever benefit > cost. In this model the defence is in effect sum-of-efforts across the targeted users, cascaded with best-shot at the service provider. Even if profitable targets exist, the average success rate might be too low, making the whole attack unprofitable; attackers also collide too often as they compete for the same vulnerable users. So avoiding harm can be much cheaper than being secure.
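
    To make the economics concrete, here is a back-of-envelope profitability check in the spirit of this model (the numbers are mine and purely illustrative, not from the talk):

    ```python
    # Toy profitability check for a broadcast attacker, loosely in the spirit of
    # Herley's model. All numbers are illustrative assumptions, not paper figures.

    N = 10_000_000             # users reached by the broadcast attack
    p_vulnerable = 0.01        # fraction of users who could in principle be victimised
    p_success = 0.02           # chance a vulnerable user actually yields any gain
    gain_per_victim = 50       # average value extracted per successful victim ($)
    cost_of_campaign = 40_000  # attacker's cost of mounting the campaign ($)
    n_rival_attackers = 5      # competitors chasing the same vulnerable population

    # Expected revenue, crudely splitting the vulnerable pool among rivals
    expected_revenue = N * p_vulnerable * p_success * gain_per_victim / n_rival_attackers
    profit = expected_revenue - cost_of_campaign

    print(f"expected revenue: ${expected_revenue:,.0f}, profit: ${profit:,.0f}")
    # Even though some individual victims are worth far more than the campaign
    # costs, the average return across all targets (and across competing
    # attackers) can easily be negative, so the broadcast attack is not launched.
    ```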

    In the third talk – “Sex, lies and cybercrime surveys” – Dinei Florencio pointed out that men claim to have between 2.5 and 9 times as many partners as women, but if you consider only the subset claiming 20 or fewer partners, the difference disappears; it’s an artefact of exaggerated and often rounded claims by a minority of men. He discussed the possible effects of such errors and argued that they probably apply to cybercrime surveys too, where the ratio of mean to median is often between 4 and 10.
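
    A small simulation (mine, for illustration only) of how a handful of exaggerated answers can wreck a survey’s mean while leaving its median untouched:

    ```python
    import random
    import statistics

    random.seed(0)

    # Hypothetical survey: 1,000 respondents report their annual cybercrime losses.
    # Most lose nothing or very little; a handful report huge, possibly exaggerated
    # and rounded, figures. All values are invented for illustration.
    losses = [0] * 900 + [random.randint(10, 500) for _ in range(95)]
    losses += [50_000, 100_000, 250_000, 500_000, 1_000_000]  # five extreme claims

    print(f"mean loss:             ${statistics.mean(losses):,.0f}")
    print(f"median loss:           ${statistics.median(losses):,.0f}")

    # Drop just the five largest responses and the mean collapses
    trimmed = sorted(losses)[:-5]
    print(f"mean without top five: ${statistics.mean(trimmed):,.0f}")
    # When the mean is many times the median, as in most cybercrime surveys, the
    # headline loss estimate is dominated by a few unverifiable responses.
    ```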

    Last up in this session was Brett Stone-Gross on “The Underground Economy of Fake Antivirus Software”. He got data from 21 servers, including tools, affiliate networks and financial figures. The market is driven by affiliates who are paid (via WebMoney) to infect computers. The first server they looked at made $11m in three months, with only 1,544 chargebacks from 189,342 sales out of 8,403,008 installs – a 2.4% conversion rate. The second made 2.1% and the third 2.2%, netting $117m over two years. Refund rates were between 3% and 9%. Payment processors included ChronoPay, a big Russian firm with links to criminal organisations and possibly the Russian government; Brett showed emails between operators and processors demonstrating clear collusion. The processor was taking an 18% fee for a “high-risk merchant account” and was giving the operator helpful advice and tech support, but it set a chargeback threshold over which the merchant would be terminated. The processors might be the weakest link; Brett argued that Visa and MasterCard should stop doing business with ChronoPay.
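
    Some back-of-envelope arithmetic (my own division) on the figures quoted above for the first server; treat the derived rates as approximate:

    ```python
    # Figures quoted for the first fake-AV server
    installs    = 8_403_008
    sales       = 189_342
    chargebacks = 1_544
    revenue     = 11_000_000      # dollars, over roughly three months

    conversion_rate  = sales / installs      # installs that turned into a sale
    chargeback_rate  = chargebacks / sales   # sales later disputed
    revenue_per_sale = revenue / sales       # rough average ticket price

    print(f"conversion:  {conversion_rate:.2%}")    # ~2.3% (the talk quoted 2.4%)
    print(f"chargebacks: {chargeback_rate:.2%}")    # well under 1%
    print(f"per sale:    ${revenue_per_sale:.0f}")  # a plausible fake-AV licence price
    # The striking point is how low the dispute rate is: most victims either don't
    # notice or don't bother to fight the charge, which is why the chargeback
    # threshold set by the payment processor matters so much.
    ```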

  3. The afternoon session started with Nevena Vratonjic telling us “The Inconvenient Truth about Web Certificates”. She surveyed the top million websites (according to Alexa) and found 34.7% implementing HTTPS, but fewer than 1% force its use, and only 16% use valid certificates. The causes of failure are, in order, domain mismatch (45%), self-signed certificates and expired certificates. Also, 61% of the “good” certificates are only domain-validated, so it’s not clear what they’re worth. Only 5.7% of websites do it right. The incentives are wrong: sites want cheap certificates; CAs want to sell as many certificates as possible while ducking liability; users are trained to ignore warnings; and browsers have little incentive to restrict access to websites. It’s a lemons market. But there are metrics: some CAs (such as GoDaddy) have a higher success rate than others (such as Equifax). Should lawmakers make CAs liable, or browser vendors limit the CAs they support? (The data are available here.) In questions, Richard Clayton pointed out that the owner of example.com may only have intended its certificate to be used for verysecure.example.com; Nevena replied that if they still serve the certificate on example.com they’re doing it wrong. Stuart Schechter remarked that domain validation can be better, as it’s harder to intercept mail to microsoft.com than it is to forge a fax.
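
    For anyone who wants to reproduce the basic classification, here is a minimal sketch (mine, not the authors’ crawler) that checks one site’s certificate using Python’s standard library; the exceptions it catches map roughly onto the failure classes above (domain mismatch, self-signed or untrusted, expired):

    ```python
    import socket
    import ssl

    def check_https(host: str, port: int = 443, timeout: float = 5.0) -> str:
        """Attempt a verified TLS handshake and report why it fails, if it does."""
        ctx = ssl.create_default_context()  # verifies the chain, expiry and hostname
        try:
            with socket.create_connection((host, port), timeout=timeout) as sock:
                with ctx.wrap_socket(sock, server_hostname=host) as tls:
                    cert = tls.getpeercert()
                    return f"OK: issued to {cert.get('subject')}"
        except ssl.SSLCertVerificationError as e:
            # Hostname mismatch, self-signed / unknown CA and expired certificates
            # all end up here; e.verify_message says which.
            return f"certificate problem: {e.verify_message}"
        except OSError as e:
            return f"no usable HTTPS: {e}"

    if __name__ == "__main__":
        for site in ["example.com", "self-signed.badssl.com", "expired.badssl.com"]:
            print(site, "->", check_https(site))
    ```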

    Catherine Tucker then presented the first study of advertising on Facebook. Ads on social networks can provoke negative reactions, leading to low click-through rates; this led social-network operators to introduce privacy controls. So: would increasing privacy salience alarm users or reassure them? She studied the May 2010 policy change on Facebook using an advertising field test run by a nonprofit, and found that the click-through rate for personalised advertising more than doubled after the change (generic ads were unaffected). The policy change affected only consumer perceptions, not reality, as Facebook considers even personalised advertising to be “anonymous” and thus outside its privacy framework. She concludes that giving users more control can make ads more effective. She admitted in questioning that the change could be due to the publicity surrounding the policy change rather than the actual mechanism changes; but one way or another, convincing users that it cared about privacy was good for Facebook.

    The third speaker was Tyler Moore, talking about “Economic Tussles in Federated Identity Management”. Identity platforms would be two-sided markets, leading to four tussles. First, who gets to collect and control transaction data? OpenID benefits identity providers and users but not service providers, while Facebook gives service providers much more. Looking at the top 300 Alexa sites, he found that identity providers which share the social graph are accepted at significantly more sites. Second, who sets the authentication rules? Time-to-market matters more than robustness, and established payment networks may tolerate more fraud. Third, what happens when things go wrong? Different error types, sizes and clarity lead to different strategies. The final tussle is who gains from interoperability: it helps users and identity providers but not service providers. In questions, coauthor Susan Landau remarked that NSTIC was trying to do many different things and should be thought of as multiple solutions rather than one. On the Internet, tussle three was perhaps the most important.

    Last speaker of the session was Laura Brandimarte, reporting research showing that “Negative Information Looms Longer than Positive Information”, a topic on which there is already general research; her work extends this, showing that bad and good impacts are discounted in different ways. They used allocation games, with side information on the other players, to test inferred versus direct judgement; whether the object was a person or a company; timing; and valence. Good information doesn’t matter unless it’s recent; defamatory information persists a lot longer. It’s a matter of discount rate, not of initial impact.
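
    One simple way to write down the finding (my notation, not the authors’ model) is to give the two valences the same functional form but different discount rates:

    ```latex
    % Perceived impact of information of initial strength $I_0$, judged $t$ periods
    % after the event, with a valence-specific discount rate $\delta$:
    \[
      I(t) = I_0\, e^{-\delta t}, \qquad \delta_{\mathrm{negative}} < \delta_{\mathrm{positive}}
    \]
    % Both kinds of information start with comparable impact ($I_0$); positive
    % information decays quickly, while negative information lingers.
    ```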

  4. Richard Clayton opened the third session by summarising a report on the resilience of the Internet interconnect (full disclosure: I’m a coauthor). The report is an insider’s account of how the Internet actually works and in particular how the technical and business interconnection arrangements between ISPs contribute to, and could in some circumstances undermine, the Internet’s resilience. The Border Gateway Protocol (BGP) generally works well but is vulnerable to attacks and deals with reachability rather than capacity. ISPs have few incentives to secure it technically, or to provide enough surplus capacity to ensure reliability; the transit market is rapidly consolidating, leading to the possibility of significant market power. The report came up with eleven recommendations which have now become European policy, and should be compulsory reading for anyone doing serious work in the field. In questions he remarked that negotiations between the big transit providers are more like poker than business.

    Next, Steven Hofmeyr presented “Modeling Internet-Scale Policies for Cleaning up Malware”. This uses an agent-based model called ASIM to model the Internet at the Autonomous System level; it captures the scale economies from routing traffic in a multi-round game. Adding “wickedness” to the model (the proportion of compromised computers per agent) helps show that interventions are more effective if done by the largest agents, and that ingress filtering is more effective than egress filtering. They can also explore the idea that some agents are too big to blacklist, and simulate evolving networks to see the long-term effects of interventions: over time, the good traffic lost as a result of filtering declines.
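
    A much-simplified toy version (mine, not ASIM itself) of the kind of question such a model answers – how much malicious traffic disappears when the few largest agents filter, versus the same number of randomly chosen agents:

    ```python
    import random

    random.seed(1)

    # Toy network: each agent (think: an AS) has a size (traffic share) and a
    # "wickedness" level, the fraction of its traffic that is malicious.
    # Entirely illustrative; ASIM also models routing, economics and growth.
    N_AGENTS = 200
    agents = [{"size": random.paretovariate(1.2),    # heavy-tailed sizes, like ASes
               "wick": random.uniform(0.0, 0.1)}
              for _ in range(N_AGENTS)]

    def bad_traffic(filtering_agents, efficiency=0.9):
        """Total malicious traffic when a given set of agents filters its traffic."""
        total = 0.0
        for i, a in enumerate(agents):
            bad = a["size"] * a["wick"]
            if i in filtering_agents:
                bad *= 1 - efficiency
            total += bad
        return total

    baseline = bad_traffic(set())
    by_size = sorted(range(N_AGENTS), key=lambda i: agents[i]["size"], reverse=True)
    top10 = bad_traffic(set(by_size[:10]))
    rand10 = bad_traffic(set(random.sample(range(N_AGENTS), 10)))

    print(f"baseline bad traffic:        {baseline:.1f}")
    print(f"10 largest agents filtering: {top10:.1f}")
    print(f"10 random agents filtering:  {rand10:.1f}")
    # With heavy-tailed agent sizes, intervening at the few largest agents removes
    # far more malicious traffic than the same effort spread over random agents,
    # which is the qualitative effect reported in the paper.
    ```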

  5. The second day’s keynote was on “Neuroeconomics & Experimental Economics: What They Can Tell us about Human Risk and Decision Making” and the speaker was Kevin McCabe, one of the founders of neuroeconomics (http://www.kevinmccabe.net/kevin/, http://www.neuroeconomics.net ).

    Experimental economics is quite new, going back to the 1952 Santa Monica seminar; seminal papers include Vernon Smith’s 1962 “An Experimental Study of Competitive Market Behavior”, which reported experiments modelling markets as rules for sending and receiving messages plus production rules; he found them surprisingly robust. His 1982 “Microeconomic Systems as an Experimental Science” established the methodology for future work; by 1988 things had become computerised and he studied market bubbles, trying all sorts of rule tweaks to suppress them. Eventually he got the Nobel prize in 2002.

    A second thread was started by Kahneman and Tversky. Expected utility theory predicts how rational actors should value expected future gains as a function of risk, yet people don’t behave this way; their prospect theory explains observed risk preferences better. The methodology for measuring risk preferences is now fairly robust, and experiments turn up many interesting twists: for example, people are more risk-averse when deciding for themselves than when deciding for others, and judges seem to be risk-averse (there’s a theory that people select jobs based on personality type). Curiously, individual risk preferences depend on auction rules: people show a higher appetite for risk in first-price auctions than in second-price auctions. There are now automated tools, such as z-Tree, which supports experimental economics, and Willow, for programming experiments in Python (http://econwillow.sourceforge.net).

    Since 2000 the new thing has been fMRI, which measures localised oxygen demand in the brain. For example, the dopamine system is a reward-oriented learning system; it does temporal-difference learning as rewards change. Kevin and colleagues looked at how subjects reacted to stock-market data depending on how heavily they were invested; there’s a fictive learning signal (or regret signal) of how much you could have made if you’d acted earlier with the information you have now. It turns out that fictive error predicts changes in investment behaviour; what’s more, under fMRI you can decompose the effect into fictive learning and temporal-difference learning. Curiously, the signal doesn’t affect heavy smokers if they’ve just smoked. As for the future, we’re starting to see behavioural genetics experiments (though there are significant complexity issues), and once a genome costs $100 there will be huge floods of data.
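
    A compact sketch (illustrative only, not the authors’ task) of the two signals mentioned here – the standard temporal-difference error and the fictive, “what I could have made” error – in a toy invest-or-hold task:

    ```python
    import random

    random.seed(42)

    alpha = 0.1    # learning rate
    value = 0.0    # learned estimate of the market's per-round return (%)
    invest = 0.5   # fraction of wealth currently placed in the market

    for t in range(10):
        market_return = random.gauss(1.0, 5.0)   # this round's return, in percent

        obtained = invest * market_return        # what the subject actually made
        best = max(market_return, 0.0)           # fully in if up, fully out if down

        td_error = market_return - value         # classic temporal-difference error
        fictive_error = best - obtained          # regret: payoff left on the table

        value += alpha * td_error                # standard TD update of the estimate

        # Behavioural finding: the fictive error predicts the *change* in investment.
        # Here we nudge the allocation toward the ex-post best choice, in proportion
        # to how much was missed.
        best_fraction = 1.0 if market_return > 0 else 0.0
        invest += alpha * fictive_error * (best_fraction - invest)
        invest = min(1.0, max(0.0, invest))

        print(f"t={t}: return={market_return:+5.1f}%  td={td_error:+5.1f}  "
              f"fictive={fictive_error:4.1f}  invest={invest:.2f}")
    ```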

    Online worlds are another frontier, bridging lab and field, and allowing the study of context, language, the emergence of norms and rules, the formation of social networks and organisations, large scale, long time horizons and persistence. In Second Life Kevin’s team uses TerraEcon to run fascinating experiments quickly, and expects wider virtual-world collaboration among experimental economists. OpenSim is an open-source alternative: it has fewer users, is harder to use and has poor support, but is cheaper and far more controllable (you can hack the back end, not just the browser). For example, they ran an experiment on managing the commons following Elinor Ostrom’s work on the circumstances in which cooperative management of common-pool resources is sustainable. They set up “hurricane island”, where people can volunteer for weather defence; they earn no money while doing so but protect the housing stock. Dealing with the free-rider problem can involve repeated games and/or language; the environment gets people to self-organise roughly according to Ostrom’s model (people make a plan, get buy-in, organise a monitoring scheme and use social reward). Failures generally come from a bad plan, whereupon people go back to the subgame-perfect equilibrium and become angry losers. They hope that language analysis may give an early indicator of failing cooperation.

    Points arising in questions included: online field experiments provide more data, faster, than fMRI experiments, which are slow; and institutional work tends to be more robust than behavioural work, because people tend to work through intermediate goals that are often not stable preferences, making behaviour hard to predict. Adversarial versus competitive behaviour is difficult – the terrorist’s game is to get opponents to devote large resources to second-order things that the price system says are inefficient. On trust, people have more than one system for trusting each other – you can trust someone as a person, or because they are in an understood situation, and it’s the situational type of trust that institutions can hope to manage. Personal trust leads to powerful in-groups which can change institutions, extract rents and control things. The dynamics between personal and situational trust are interesting and depend on people’s experience with the institution, and on their perception of their own risk management (particularly their worst exposure). Institutions rarely last more than a century, because it’s hard to stay in the sweet spot where you manage to scale up trust from personal to situational.

  6. Julian Williams started the second session with a talk on “Fixed Costs, Investment Rigidities, and Risk Aversion in Information Security: A Utility-theoretic Approach”. He studies security investment as a decision-under-uncertainty problem following Gordon, Loeb, Arora and others but taking into account multidimensional threats and the kink points in utility functions from prospect theory. This gives a dynamic optimisation problem which they solve to get optimal investment: depending on the parameters a firm might never patch, patch at once, or patch on a planned cycle.
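
    To make the “kink” concrete, here is a minimal one-shot sketch (my own stylised version, not the paper’s dynamic model) using a conventional prospect-theory value function and invented patching numbers:

    ```python
    # Prospect-theory value function (Kahneman/Tversky form): gains valued as x^a,
    # losses as -lam * (-x)^b, with lam > 1 producing the kink (loss aversion) at
    # zero. The parameters are conventional estimates; the patching numbers below
    # are invented purely for illustration.
    ALPHA, BETA, LAM = 0.88, 0.88, 2.25

    def value(x: float) -> float:
        """Prospect-theory value of an outcome x relative to the reference point."""
        return x ** ALPHA if x >= 0 else -LAM * ((-x) ** BETA)

    patch_cost  = 50_000    # cost of an immediate, out-of-cycle patch
    breach_loss = 600_000   # loss if the unpatched vulnerability is exploited
    p_exploit   = 0.10      # chance of exploitation before the next planned cycle

    v_patch_now = value(-patch_cost)
    v_wait      = p_exploit * value(-breach_loss) + (1 - p_exploit) * value(0.0)

    print(f"value of patching now: {v_patch_now:,.0f}")
    print(f"value of waiting:      {v_wait:,.0f}")
    print("decision:", "patch now" if v_patch_now > v_wait else "wait for the cycle")
    # Because losses are weighted much more heavily than gains, fairly small changes
    # in the exploitation probability or the fixed patching cost can flip the
    # decision, which is how a full dynamic model can produce "never patch",
    # "patch at once" and "planned cycle" regimes.
    ```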

    Terrence August was next, on “Who Should be Responsible for Software Security?” Some people want software vendors held liable for vulnerabilities; others argue that this would raise prices, stifle innovation and create market entry barriers. The paper models consumer preferences and vendor liability, and argues that things work quite differently depending on the risk of zero-day attacks and on market power: if liability leads to price increases this can cut sales, leading to lower welfare, while patching can substitute for vendor investment. Where the risk of zero-day attacks is low, security standards give the best outcome (highest welfare and greatest investment); where it’s high and the vendor community is concentrated, patch liability may give the best outcome.

    Juhee Kwon and Eric Johnson then talked about their work on “An Organizational Learning Perspective on Proactive vs. Reactive Investment in Information Security”. The existing literature on how well firms learn from investments in quality shows that firms which recalled a product voluntarily learned more than firms that were forced to. Juhee and Eric studied the 281 healthcare firms reporting data breaches out of the 2,386 health organisations in the HIMSS database, and found the same effect: proactive investments have more effect, and their effect is reduced if they’re forced by regulation. Also, external breaches affect investment while internal ones don’t. They suggest that organisations should be required to spend a certain proportion of their IT budget on security rather than being told what to spend it on.

    Last speaker of the morning was Dallas Wood, on “Assessing Home Internet Users’ Demand for Security: Will They Pay ISPs?” Various writers have remarked that ISPs are well placed to spot infected subscriber machines and quarantine those users, or at least nudge them into cleaning up. What would the costs be, and how would users react? Discrete-choice experiments were used to assess preferences over six attributes: additional fees, time spent on compliance and restrictions on access, versus reduced risk of identity theft, of system slowdown or crash, and of harm to others. Home users want “lower taxes and more services” – low fees, no compliance and no risk. In explicit trade-offs, they will pay about $4 per month to avoid crashes or disconnections, and over $6 to avoid identity theft; but they’ll only spend $2.94 to avoid harm to others, and a mere 73c to avoid spending an hour complying with security requirements.
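
    For reference, the dollar figures come from the standard way willingness to pay is read off a discrete-choice (conditional logit) model – a textbook result rather than anything specific to this paper:

    ```latex
    % Utility of alternative $j$ for respondent $i$, with a fee and other attributes:
    %   U_{ij} = \beta_p\,\mathrm{fee}_j + \sum_k \beta_k x_{jk} + \varepsilon_{ij}
    % The willingness to pay for attribute $k$ is the fee change that exactly
    % offsets a unit change in $x_k$, i.e. the coefficient ratio
    \[
      \mathrm{WTP}_k \;=\; -\,\frac{\beta_k}{\beta_p}.
    \]
    ```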

  7. Wednesday afternoon started with a panel discussion on federal R&D initiatives in cyber economics. Carl Landwehr and David Croson from NSF were joined by Steven Borbash of the NSA and Jean Camp. Steven said that the military has a short-term focus: it’s good at figuring out what’s needed in the next six months, but not at explaining why certain failures have been pervasive for decades. The DoD is now trying to get at the science underlying security engineering; part of the answer is probably economics, and part probably psychology. David is in the NSF’s Directorate for Social, Behavioral and Economic Sciences (SBE), which is thinking about a cybersecurity program but meanwhile welcomes applications to existing programs (including economics; sociology; innovation policy; and decision, risk and management science – see http://1.usa.gov/NSFSBE ) with a focus on advancing the underlying science. He remarked that while WEIS uses a lot of game theory, mechanism design and so on, it’s not always cutting-edge; he encourages collaborations between security folks and social scientists so that applications can drive the development of new theory. At the October Internet2 meeting in Raleigh NC he’ll be talking on SBE funding sources for cybersecurity research. Carl Landwehr, who runs the trustworthy computing program (program 7795), remarked that the goals of the NITRD framework are to increase attacker cost; create tailored security environments; incentivise security deployment and socially responsible behavior; and deter cyber crime. They also run a monthly seminar in DC: http://www.nsf.gov/cise/cns/watch/ .

    The first refereed talk of Wednesday afternoon was Matthew Hashim on “Information Targeting and Coordination: An Experimental Study”. Risk communication on teen drinking can be counterproductive by sending the message that “everybody does it”; do messages about music copyright infringement have the same effect? They ran experiments on whether information feedback affected free riding, and found that random feedback (as used now) had no effect, while targeted treatments resulted in significantly higher user contributions.

    Next, Chul Ho Lee discussed “Security Standardisation in the Presence of Unverifiable Control”. Reported security breaches started to climb after 2004 and to fall off after 2008; might this be due to the release of PCI DSS 1.2, adopted in October 2008, which relaxed standards? Attacks often target unregulated assets (data in transit in the Heartland case, a wireless network at TJX; both firms were PCI compliant). He models the effect of liability reduction on security: players will reduce investment on the unregulated side, and depending on the size of the liability reduction and the strictness of the standards, welfare can actually be reduced.

    Finally, Simon Shiu reported “An experiment in security decision making”. In a trust economics project focussed on the effect of the vulnerability lifecycle on Britain’s defence, they looked at the decision-making processes of twelve infosec professionals, split into two groups, only one of which had the problem framed in economic terms. The problem was securing client infrastructure, and the experts were given four options (patching, lockdown, host-based intrusion prevention, do nothing). The treatment group were affected – they made much the same decisions but gave better analysis – yet denied it, claiming they understood such trade-offs anyway.

  8. Daegon Cho kicked off the final session of WEIS with a paper on “Real Name Verification Law on the Internet: a Poison or Cure for Privacy?” Anonymity may help free speech but also promote crime and abuse. After the dog poop girl case sensitised the public to online abuse, the South Korean government tried to clean up the Internet with a real-name verification law. Since 2007 the 35 websites with over 300,000 users have restricted posting to users who register their ID numbers; people can still use pseudonyms but their real names are known. The author crawled over 2 million postings, measuring participation, swear words and antinormative expressions before and after implementation. Participation in political discussion is not associated with the law change; but swear word use and antinormative sentiments decreased slightly, especially among heavy users. The new Facebook commenting system gives a chance to repeat the experiment.

    Next, Idris Adjerid talked on “Health Disclosure Laws and Health Information Exchanges”. HIEs facilitate health data exchange between hospitals and other health service providers, and it’s not clear whether laws restricting health information exchange on privacy grounds help or harm their growth. He studied HIE data against state privacy laws: 7 states are pro-HIE but require consent, while 11 are pro-HIE and don’t; three each are pro-HIE and pro-privacy, while 26 have no relevant laws. A panel analysis showed that the regime most likely to promote HIEs was the first, namely a pro-HIE law with a consent requirement. The effect is even stronger in states with pre-existing privacy laws, and it’s robust to other health IT adoption measures, breadth of coverage and even other healthcare characteristics. In fact the only HIE-promoting initiatives that worked were those that included privacy requirements. The main limitation is the size of the dataset, with only about 300 points.

    The last talk of WEIS 2011 was by Soren Preibusch on “Privacy differentiation on the web”. This paper tests the hypotheses that web sites differentiate on privacy (H1); that sites offering free services differentiate less (H2); that more privacy-friendly sites charge higher prices (H3); and that monopolists are less privacy-friendly (H4). They looked at 130 web sites in three service categories and two goods categories, plus ten monopolies. H1, H2 and H4 are supported but H3 isn’t; in fact it’s the other way round, with the more privacy-friendly sites having cheaper prices. So there is no trade-off between getting a good deal and keeping data private; privacy does not come at a premium, except when you have no choice.
