17 thoughts on “WEIS 2009 – liveblog”

  1. The opening keynote was by Hal Varian on “Computer Mediated Transactions”. What’s the significance of the Internet revolution, and where’s it going?

    As for the mechanics, history records several waves of combinatorial innovation where new components became available and people set off to explore all the ways in which they could be used together. In the 18th century, it was interchangeable mechanical parts such as gears and levers, developed for the armaments industry. Around 1900 it was the gasoline engine, which got put together with carts, bicycles, and kites to give cars, motorbikes and planes. In the 1960s it was the IC, and now it’s the Internet.

    As for where it’s going, it’s about better contracts that facilitate trade. This also has a long history. Around 3000 BC shippers started using bullae as bills of lading – clay models of goods, sent along with the cargo sealed in a clay envelope. With this technology, people who could not read or count could do audit by just comparing amphorae with models of amphorae. In 1883, the “Incorruptible Cashier” patent of James Ritty and John Birch introduced the cash register with a bell and a paper tape, facilitating the employment of non-family-members in stores. Satellite tracking on trucks hugely cut fuel theft. More recently, the video rental business was revolutionised in the 80s with revenue sharing; instead of buying videos, the store got them from the studio free and shared the rental income. This meant many more videos on offer and more revenue for both, but it was only possible once there was a computer mediating each transaction.

    Google’s innovation is tied up with the fact that publishers want to sell impressions while marketers want to buy sales, or clicks as a proxy for sales. Now value per impression = value per click times click-through rate, so if you can measure the latter you’re in business. Where an ad has history, you can measure that; otherwise there’s a bunch of complex AI and ML. But what gives Google an advantage is that this “assembly line for marketing” lets the sales process (like Ford’s car process) be broken down into components that can be optimised, and the presence of data lets Googlers experiment constantly – tweaking search algorithms, auctions and so on. It’s the controlled experiments that really identify causality.
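
    To make the arithmetic concrete, here is a toy calculation of the identity above; the numbers are invented for illustration and are not Google’s.

    ```python
    # value per impression = value per click x click-through rate (illustrative numbers only)
    value_per_click = 1.50        # what a click is worth to the advertiser
    click_through_rate = 0.02     # clicks per impression, measurable where the ad has history
    value_per_impression = value_per_click * click_through_rate
    print(value_per_impression)   # 0.03 per impression, i.e. a $30 CPM
    ```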

    For example, in English “diamonds” is a more valuable search term than “diamond” – the former may be a guy about to propose, the latter a kid writing a chemistry essay. And the time at which people buy varies widely across countries; in Spain, sales go down at 3pm for the Siesta, while Britain has a strong weather dependency. A hot day like today is bad for business, whereas clouds have a silver lining in Mountain View!

    The question is data vs HiPPOs – do you rely on data, or on Highly Paid Persons’ Opinions?

    There are tensions with consent, privacy and trust but it’s important to realise that these mainly arise with secondary uses of the data.

    As for future opportunities, just as Ford moved the machines around so that they optimised the workflow, so cloud computing has the potential to do the same for information; having stuff globally accessible facilitates outsourcing. As an example, McKinsey consultants have swish powerpoint – because they send it out to Bangalore overnight to be groomed and prettified. Also, we’re seeing very many more “micromultinationals” – firms with maybe 6-10 employees scattered round the world.

    In discussion, Angela Sasse asked about the Googler who resigned because graphic design was so machine-driven: an extra pixel line width was determined by experiment. Hal replied that innovation was, according to Edison, 1% inspiration and 99% perspiration. The inspiration doesn’t go away, but the perspiration is mechanised: it becomes “cyber-sweat”.

    The first regular speaker was Alessandro Acquisti who asked for his talk not to be blogged as a journal version of his paper will appear in PNAS in a few weeks. I’ll skip the details of his talk, which showed how US social security numbers are to some extent predictable, largely because of well-meaning but poorly-thought-through measures introduced to deter other kinds of “identity theft”, notably the automated issuing of SSNs at birth, which makes them sequentially predictable by state from date of birth, and the publication since 1980 of the death master file, which yields enough triples of (date of birth, state, SSN) for an attacker to interpolate.
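
    For intuition only, here is a much-simplified sketch of the interpolation idea; it is not Acquisti’s actual method, which models the SSN’s area/group/serial structure properly, and the anchor records below are invented.

    ```python
    # Given two Death Master File anchors (birth date, SSN serial) from the same state,
    # estimate the serial issued to someone born between them by linear interpolation.
    # Invented data, purely to illustrate why sequential issuing leaks information.
    from datetime import date

    anchors = [(date(1985, 1, 10), 431220), (date(1985, 6, 20), 438910)]

    def estimate_serial(birth, lo=anchors[0], hi=anchors[1]):
        frac = (birth - lo[0]).days / (hi[0] - lo[0]).days
        return round(lo[1] + frac * (hi[1] - lo[1]))

    print(estimate_serial(date(1985, 3, 15)))   # a narrow candidate range to search
    ```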

    The second regular talk was from William Roberds of the Fed, who also asked for no blogging (an idea I’m not at all sure is consistent with the idea of a scientific meeting). His topic was the question of whether “identity theft” is a market failure. Popular opinion, elected officials and the legal literature say it is: see Swire 2003 on the payment industry’s failure to do privacy. The industry says it isn’t: fraud losses are low relative to payment volumes and most fraud is low-tech anyway, like lost or stolen wallets. Roberds’ take is from the economics of payments, which deals with intertemporal displacement of consumption and production, and the limited enforcement of promises of future actions. His core argument is that collecting more data will deter the unskilled thief but attract the skilled thief. Payment systems are modeled as clubs that try to optimise member utility overall. Inefficiency can arise when there are externalities (positive and negative) between clubs. With enough data overlap you get inefficiency: overaccumulation of personal information, low data security, too little unskilled theft, too much skilled theft. Possible remedies? (a) increased civil remedies may be inefficient; (b) data security standards may work well but only if very high; (c) constraining the amount of data collected may work almost as well. In conclusion, the paper tries to define “efficient confidentiality” and provides a framework that may be applied to many other questions.

  2. Second session.

    Tyler Moore asked whether many of the security breaches we see might in fact reflect rational behaviour by the victims rather than market failure. In many applications it may be sensible to wait for attacks and then respond. One insight is that many systems exhibit weakest-link security but defenders may not know which links are weak. For example, EMV was supposed to cut card fraud in the UK but just shifted it overseas; fraud kept on rising. He presents a model in which the attacker does know the attack costs and thus the weakest link location, while the defender may not. This enables one to see the effects of more or less defender uncertainty: with enough uncertainty the defender may not defend anything at all (at least in the first round in a dynamic game, or at all in a single-shot static game). It’s better to husband resources. The model can be extended to deal with sunk costs and other extra features. The novel feature of this model is that it presents a case of security underinvestment that’s independent of the actions of others; no externalities are required.
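
    A toy expected-loss calculation (my own illustration with invented numbers, not the paper’s model) shows how uncertainty can make “defend nothing” rational:

    ```python
    # The attacker knows which of k links is weakest; the defender only knows it is one of k
    # equally likely candidates and can afford to harden a single link.
    def expected_loss(k, loss=100.0, defence_cost=10.0, defend=True):
        if not defend:
            return loss                                   # the attacker always finds the weak link
        return defence_cost + (1 - 1.0 / k) * loss        # defence only helps if the guess was right

    for k in (1, 2, 5, 20):
        print(k, "candidate links: defend =", expected_loss(k),
              " do nothing =", expected_loss(k, defend=False))
    # With k = 20 the defended expected loss (105) exceeds the undefended loss (100), so with
    # enough uncertainty it is rational to husband resources and respond after the attack instead.
    ```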

    Nicolas Christin talked on the price of uncertainty in security games. He observed that self-protection is a public good while self-insurance is a private good. From this he builds a general utility model, where everyone’s protection levels are coupled. As an example of his model’s effectiveness, he shows how it can be used to model mix-networks: it’s enough for some nodes to remain uncompromised. It can also model a mixed economy which has one or more expert players in addition to naive ones. Finally, it can answer the question: what does uncertainty cost an expert player? It’s like “the price of anarchy” but the more players you have, the less uncertainty matters; and regardless of the number of players, experts are not hugely handicapped by it. However naive strategies can be awful. More in related papers from his web page.

    Cormac Herley tackled cybercrime markets, which appear to have flourished in the last five years and enable bad people to trade. How much do we actually know about them? One puzzling factor is that reported prices are fractions of a cent on the dollar. Nobody sells gold for the price of silver, so what’s going on? And how can a market flourish in which cheating is prevalent? All the IRC channels satisfy the classic conditions of a lemons market even better than the used car market. In effect there is a tax on anonymous channel transactions. He hypothesized that banks detect 90% of attempted fraud, that rippers account for 90% of IRC channel traffic, and that sellers want a 5X profit – this would explain why a CCN worth a nominal $2000 sells for $4. The natural outcome is a two-tier market in which gangs extract the lion’s share of the value while newbies meet rippers at the bottom. But effort doesn’t imply dollars and there could be two orders of magnitude between reality and the hype from security firms. He argued that this hype makes the problem worse – it may create much of the supply of newbies to keep the rippers fed. This second tier may not respond as well to economic and law-enforcement interventions as tier 1. Ironies: (a) white hats are recruiting their own opponents and creating externalities in the process, which may exceed direct losses; (b) more realistic estimates of the criminal economy would benefit almost everyone (except the rippers and the security firms). In questions, Richard asked whether whole-life details are a separate market, as they fetch $50-60 versus a buck for a CCN. Cormac said he didn’t know the answer.
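
    The back-of-envelope arithmetic behind that $4 figure, using the hypothesised rates quoted in the talk:

    ```python
    # Why a credit card number with a nominal $2000 balance might fetch only a few dollars.
    nominal_value = 2000.0
    p_fraud_succeeds = 0.10      # banks detect ~90% of attempted fraud
    p_seller_honest = 0.10       # ~90% of sellers are rippers who never deliver
    profit_multiple = 5.0        # buyers demand roughly 5x what they pay

    expected_value_to_buyer = nominal_value * p_fraud_succeeds * p_seller_honest   # $20
    asking_price = expected_value_to_buyer / profit_multiple
    print(asking_price)          # 4.0 -- the ~$4 street price
    ```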

    The second keynote talk was from Bruce Schneier on security and psychology. Security is a feeling and a reality; they can be very different. It’s often worth splitting them apart. There’s no language with different words for feeling and reality – which itself is instructive. People have a natural intuition about security tradeoffs and make them all the time: do we leave our laptops at the table? Do we eat at this restaurant? We respond to the feeling of security rather than the reality, as most of the time this works – read evolutionary psychology! Our mental heuristics and cognitive shortcuts made evolutionary sense.

    We are satisficers; we are storytellers; we’re optimised for living in small family groups in East Africa in 100,000 BC, but not for London 2009. Technology obscures stuff, as do the media. We judge probability by the ease of bringing examples to mind; this made sense against lion attacks but the mechanism is screwed up by the media reporting rare events such as 9/11. Prospect theory also gives a useful model of risk aversion. Bear in mind though that our brains adapt easily to what “normal” is, or is commonly perceived to be. We fear intentionality; we exaggerate involuntary risks (for example, people who move to an earthquake area think the risk is less than people who were born there). We’re not good at large numbers (can’t visualise 1000 mangoes) or long timescales (we think of the next hunt, not global warming).

    As reality and perception get out of kilter, we see security theatre, which tackles the feeling but not the reality. What about actions that make me more secure but don’t make me feel more secure? It’s the sort of thing I expect the CIA to do – to protect me without telling me. But we don’t even have a word for it!

    What makes people notice that their feeling of security doesn’t match the reality? You can excuse security theatre when it’s obvious it’s not working. Ultimately we need data, but consumers don’t get enough to assess a terrorism prevention system any more than a meteor prevention system or a unicorn prevention system. What can cause people not to notice? Fears, folk beliefs, and an inadequate model of reality.

    Human decision-making processes are complex. When we see a snake, the amygdala initiates the fight-or-flight reflex before we even realise it’s a snake. Our conscious brain kicks in afterwards. Bruce discussed an African tracker he met on safari who had a complex set of rules extending his intuition on lions, elephants etc; as a New Yorker he has a different set of rules, about which neighbourhoods to avoid. Yet many people’s models come from the media – we have no personal experience of terrorism or child kidnapping, so we get them from journalists. We are educated about CCTV by its vendors. Cancer, SARS, bird flu, swine flu … and as for global warming, it just doesn’t feel like it makes sense. It doesn’t match our intuition at all. But people can be educated. People used to be scared to push doorbells, as they contain electricity which is dangerous; no-one feels that nowadays.

    Again, security depends on your perspective: different stakeholders make different tradeoffs, and often security decisions are made for non-security reasons. So you have stakeholders trying to manipulate the model and influence what people think of terrorism or firewalls or whatever. An interesting example is the risk of smoking, which we can now watch through the decades. Initially the tobacco industry sold the health benefits of smoking; when experts came along to say otherwise, the industry worked to undermine and marginalise them. Eventually they lost and the new model replaced the old. Seat belts are another example. ID cards are an example of conflicting models.

    Cognitive dissonance helps entrench models. But new ones can supplant old ones, and there are various mechanisms. Flashbulb moments, such as 9/11, or a personal experience of crime or disease. The growing trend to rely on others – scientists and other experts – sometimes even religious leaders. And eventually models feed back into feelings. But change happens slowly. Smoking and seat belts took decades; global warming and the torture debate will too.

    The specific problem facing our community is that technology moves fast: it can change reality faster than our cultural models can catch up.

    Finally, don’t be too hard on security theatre: it can help. A good example is the Tylenol poisoning scare, which was countered using the security theatre of tamper-proof packaging. Without it the non-prescription drug industry might have taken a huge hit. Another is the practice of putting RFID tags on infants “to prevent kidnapping”. That basically never happens, and the real benefit is in making new mums more relaxed about nurses taking their babies away for tests. There, security theatre was used to fix a non-security problem.

    In questions, Rainer Boehme asked whether it was cheating to trick people into driving more safely, for example by putting white lines closer together near accident black spots. Bruce said it was fair game and recommended a book called “Traffic” on traffic psychology – and Dan Gardner’s book “Risk”.

  3. First on after lunch was Qiu-Hong Wang, talking on cross-country interdependence and enforcement. She discussed the progressive implementation of the cybercrime treaty and asked whether there was a displacement effect whereby signatory states push crime elsewhere. She collected data on 62 countries over 2003-7 and investigated interdependence. Joining the convention is associated with a significant drop (16-25%) in cybercrime originating in that country; it also reduces the correlation in its rate of cybercrime with other countries.

    There followed a panel on “A Broader View of Cyber Security Economics”. Shari Pfleeger from Rand had proposed this panel with Lance Hoffman from GWU; she kicked off by suggesting we broaden the scope of our models. The next IEEE S&P will have an article sorting security models into five classes, most of which rest on unrealistic assumptions.

    Ann Cavoukian, the information and privacy commissioner for Ontario, said that we should present privacy as a negative externality that can be turned into a positive one. (She has a book on this, “Privacy by Design”, a copy of which was given to delegates.) The guiding principle is Coase: a burden should be placed where the cost is least, and this means business (which faces some costs implementing privacy) rather than consumers (who face impossible costs). This provides a foundation from which regulators can chivvy firms to build security in from the beginning. Lance Hoffman, from GWU, argued that computer scientists should listen more to advocacy and policy groups such as EFF and should learn to talk to policymakers. Shari Pfleeger would like us to work with law enforcement to get decent definitions of things like cybercrime. David Good, a psychologist from Cambridge, remarked that getting interdisciplinary dialogues going is harder than it looks and poorly understood; he warned against expecting to get from psychology a model that will just solve stuff. The most useful psychologists may be foxes rather than hedgehogs: people who understand persuasion, greed, parochialism and many other immemorial aspects of the human psyche. Finally, Alessandro pointed out that Nash’s seminal paper in game theory was less than five pages: simple models can be really useful even if some of their assumptions don’t coincide with the real world. Similarly, security economics models have gone from the simple to the complex over the last eight years.

    During the panel discussion Ann emphasised the necessity of turning privacy from a roadblock to an enabler if firms are to take it seriously. Her example was a Canadian driving license with a default-off RFID that may simultaneously satisfy the DHS and protect the privacy of Canadian citizens. Lance asked how we’ll know when the security economics project has succeeded; Shari remarked on how a Dartmouth study showed that some CSOs reported to the firm’s CTO while others reported to the CFO: the latter need to talk money. David cynically remarked that if the security problem were solved, the industry would collapse; like healthcare or law enforcement, there is a nontrivial optimisation problem for the provider. Perhaps the best one can do is to keep cyber crime to a minimum. Ann disagreed. When democracies unravel, the first right to go is privacy, and without it you can’t have liberty. The Germans were quite right to embed informational self-determination into their constitution.

    Floor discussion was next. Soeren Preibusch argued that businesses could get customers to shop more by offering privacy. Andrea Brolioso from the European Commission called for much greater realism about how real policymakers work – they are far from rational – and asked for input into the European research agenda. What should the EC be paying universities to study? And what data should we try to collect? Hernando, also from the Commission, said that in August the Commission will launch a consultation lasting two months leading to an initiative in 2010. Eliot Lear from Cisco also mentioned that Cisco has an open call for research proposals in security policy: see http://www.cisco.com/research (actually http://www.cisco.com/web/about/ac50/ac207/crc_new/university/RFP/rfp09057.html).

    Benedict Addis asked what the panel thought of banks who monitored the asking price of their stolen credit card data on the underground and were satisfied so long as it cost more than their competitors’. Shari remarked that good data can make problems visible. Richard Clayton objected that perhaps a secure bank’s card numbers would be worth almost nothing, as the bad guys can’t extract value! How do you know what to optimise, and at what point should a policymaker leap in? Shailendra Fuloria remarked that although India was a democracy, he’d never heard anything about privacy. Ann replied that in the last three years the Indian outsourcing industry has suddenly had severe privacy requirements imposed on it. This is leading to draft legislation there. Russell Thomas asked whether the next step was a productive linking of theoretical and applied research in a managed program. David suggested that doing research on real problems along with a real psychologist, anthropologist or sociologist could be the best way forward; a good social scientist will know lots of stuff and see lots of ways to apply it once the research starts travelling a specific path. Tyler Moore remarked that we discuss cyber-crime rather than cyber-war simply as a function of the data that we have and don’t have.

  4. Alessandro Acquisti started the last session of the day with a brief roundup of the last few years’ results in the behavioral economics of privacy: illusion of control, over-confidence, ambiguity-seeking, an endowment effect, a salience effect and a large gap between willingness to accept and willingness to pay. What’s left? The comparative nature of human judgment, that’s what! We compare our situation with where we were in the past, and where other people are now. In the first study, self-revelation by others was found to significantly increase what subjects would disclose – even when the admissions were highly stigmatising (sexual fantasies with a minor, fantasies about violent non-consensual sex) and the trigger admissions were of other behaviours. Curiously, individuals who did not provide email addresses were relatively resistant to manipulation. The second study was about whether news of privacy intrusions sensitizes us or desensitizes us. They manipulated the order of questions: tame to intrusive, or intrusive down to tame; and whether they asked for identifying information before or after the study. The tested hypotheses were “frog” – that people would disclose if gradually “warmed up” – and “coherent arbitrariness” – that people’s attitude to the survey would be anchored by the initial questions, so they would disclose more in the decreasing case. In fact the subjects in the increasing group disclosed less; the decreasing and control groups were all but indistinguishable. Thus the frog hypothesis was strongly rejected. Another way of putting it is that people’s privacy concerns depend on priming and framing – in this case, on relative judgments.

    Next up was Ramon Compano from the European Commission, with the lovely title – for EC folks – of “The Policy Maker’s Anguish”. His objective was to find out young people’s attitudes on electronic identity. How would people in the UK, France, Germany and Spain plan to use services that require eID? The methodology was a UK pilot followed by a 100,000-user spam run, which drew 5265 complete questionnaires and 6000+ partly completed. They found that internet confidence was generally low: 70-80% were very concerned about overall risks (fraud, dossiering), 60-70% concerned about reputation damage. They didn’t use external tools – “privacy enhancing technologies” – but rolled their own hacks, such as by tinkering with the personal data they offered. The privacy paradox was visible: despite saying they were worried about privacy, kids disclosed data – saying that routine data was not really private. There’s also a control paradox: youngsters want full control of personal data, but without the hassle. They agree that users should get more control but don’t believe it would work. Also there’s the responsibility paradox: young people say it’s their responsibility to protect data, but don’t think most youngsters will accept that responsibility. There are also national differences: Spaniards use social networks a lot while French kids blog and German kids are more tech savvy. Levels of trust in government also vary. So what is the “value” of electronic identity? He can see no single option to meet all the requirements.

    The third talk was presented jointly by Joe Bonneau and Soeren Preibusch. The common story on social networks is that it’s a monopoly so they’re not trying; but looking at the global picture it doesn’t look like that. For example, Britain has Facebook, Bebo and Myspace in the top 15 web sites. Joe joined 45 big social networks (43 have over 1m users, 2 have gone out of business since the survey), and evaluated their privacy policies and practices. It’s a young, dynamic market with its business model still up in the air. Some started from blogging, some from photos, but evolution converged. Most are based in the Bay Area even if their customers are elsewhere. Technical security is horrible, but the interesting story is privacy. Bigger, older sites got a better privacy score. More functional sites have better privacy and niche sites have worse (privacy is functionality, too). However, sites don’t generally position themselves as privacy protecting and keep it low-key. It’s a privacy communication game: the idea is to act good but don’t make it known. If you make privacy salient, it will scare people off: so hide privacy concerns from mainstream users, convince users who bother looking it up, and avoid criticism. In fact, it’s a good idea to scare away privacy fundamentalists who’d just emit negative externalities and put off your other users. In conclusion: better data give better models; and a period of vigorous competition might be a time to act. In questions, Eliot Lear asked how? Soeren said: make privacy more salient. Rainer asked whether the data collection will be repeated to observe the dynamics. Joe answered probably in a year. They had already noticed that sites which did privacy better grew faster. What about data retention, especially after account closure? Sites were generally vague; some said they’d try to delete data after account closure but warned it might not be possible.

    The final talk of the day was from Ajit Appari on HIPAA compliance. HIPAA is the US law on medical privacy, and it’s newsworthy: 17 hospital workers tried to access President Clinton’s record when he had heart surgery. From 2003-2009 there were 44,236 complaints of which 8851 have been referred for enforcement. 457 privacy violations have been sent to the DoJ and 309 security violations to the CMS. There’s a lack of rigorous and well-grounded empirical study of enforcement and compliance, and the authors’ objective was to start tackling this. Their framework is the neo-institutional theory of Powell and DiMaggio: organisational behaviour is driven by coercion (law) and various cultural factors internal and external. Their primary data source was the 2003 HIMSS survey of 1564 US hospitals with at least 100 beds. 64% claimed privacy compliance but only 19% security compliance (for which they had till 2005). Quite a few independent variables have significant effect: size, comprehensiveness of state privacy law and the existence of compliance officers help, while the use of external consultants is negatively correlated with compliance. Both academic and for-profit hospitals are more compliant than average. On security compliance, external consultancy is not significant, while academic hospitals are less compliant. Finally, HHS will report on compliance annually to President Obama from next year, so we may get lots more data of this type.

  5. Thursday’s proceedings were started by Martin Sadler, who’s responsible for security at HP. His topic was how big companies exploit security research. Many subjects “computer science plus X” have fizzled out; the signs are that security economics might not.

    Big companies are under attack, with both the capability and the range of motivations of the attackers increasing. There have even been cyber sit-ins! A lot of people want reassurance, which we aren’t good at giving them. The global IT security market is growing at 15-20% p.a. – $17bn in 2006 to $38bn in 2011, but it’s incredibly fragmented with a lot of snake oil. Services are growing fastest. So why aren’t we all rich?

    Corporations don’t care much about R&D any more; they’re run by accountants now. The obvious exploitation paths are founding and selling a startup, and consultancy. There’s also a question about whether tech transfer goes from university -> corporate lab -> product, or the “open innovation model” where the problems come from customers, are sent to unis, partners and corporate labs in parallel, and from there to product. Service businesses are harder though!

    Service firms nowadays are about taking an “art” and understanding it well enough to turn it into engineering. You distill the key principles, make it repeatable, package it, train people, and then cut costs and deskill as hard as you can. This is happening to infosec as well: there are maybe 100,000 people who position themselves as security experts. They are “the industry side of script kiddies”: they go down checklists. But that’s how the bulk of the work gets done. The challenge is the sheer volume of people out there. We can’t afford to treat them just as competitors, spouting nonsense; they are too numerous. We have to win them over. Security economics is a potential silver bullet for this consultant community, but we have to provide them with tool supported ways of working.

    Consider the risk lifecycle. The requirements engineering side, from compliance through needs assessment to policy, is the domain of consultants with degrees in history. The other side, from technology deployment to infrastructure management, is the domain of system houses like HP and IBM, plus customer sysadmins. The two sides don’t talk to each other; they barely have a common language, let alone the tools to extract pearls of wisdom and get them to the consultants. To link up analytics to assurance, security economics should tell corporations what tests to run on their data to check whether the models are right.

    We should be very worried about government directions. They have put huge effort first into PKI and then into biometrics, neither of which really went anywhere, and left to their own devices governments will focus next on offensive capabilities. We should rather establish the science and engineering of security decision making. And if we’re going to reach the 100,000, the time to push is now.

    In questions, Soeren asked whether PKI and biometrics really were dead; Martin maintained that they had just not been as transformational as hoped. Angela explained that decision makers like to simplify decisions, and repeatedly fall for the latest “solution”. Shari cautioned that IT people are forever lowballing projects, and have a dreadful reputation with CEOs; even this community should start being more careful about total lifecycle costs. There was some discussion of whether our message should be simplified – do we need an equivalent of MCSE? Martin agreed: we have to figure out how to package the simpler bits of our science. Russell said that from the Big 4 viewpoint, Martin’s analysis was spot on: but would we need to organize the innovation networks formally, or just let them grow? Martin said that the EU, UK research councils and NSF were ready to help. Tyler objected that the academic long view was valuable, and needed. Richard said the real value of our work was at the strategic level: we’re not so relevant to helping Joe Bloggs improve his consulting business as helping make the EU more secure by changing incentives. Eliot said we need both: it would be great to introduce security economics into corporate accreditation programs but work on (for example) the cybercrime convention would be over their heads. Finally, John Levine said that the way to affect the 100,000 guys with checklists is to give them better checklists – working through industry associations or whatever.

    The first refereed paper of the day was presented by Rainer Boehme. He wants a longer-term way of assessing the value and cost of personal data disclosure. If you try to (say) identify returning customers by personal attributes, there is uncertainty at both the micro level (a customer might change her preferences, religion, name, whatever) and the macro level (other customers might adopt the same attribute). His idea is to apply option pricing theory to value “privacy options”: the option mechanics are used to deal with probability distributions, but of the number of bits disclosed rather than the number of dollars. Personal data disclosure is like writing a call option, where the premium is the incentive given to the subject and the strike price is the cost of data retrieval. Then the value will depend on how the population develops. This can be done by stochastic modeling techniques such as the binomial option pricing model – or by many other tools, developed over several decades for finance; the key insight is that these can be recycled into privacy analysis. In questions, Caspar picked up on whether there could be a put option – would this not include a mechanism to enforce deletion, such as trusted computing? Another interesting aspect would be the right of the subject to be informed. Rainer agreed and suggested that transparency could be modeled as an inverse option.
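
    To show what “recycling” option machinery might look like, here is a minimal Cox-Ross-Rubinstein binomial valuation sketch; the underlying is read as a measure of identifying information (bits) rather than a share price, and all parameter values are invented rather than taken from the paper.

    ```python
    import math

    def binomial_call_value(S0, K, r, sigma, T, steps):
        """Standard CRR binomial tree for a European call; here S is read as 'bits disclosed'."""
        dt = T / steps
        u = math.exp(sigma * math.sqrt(dt))       # up factor per step
        d = 1 / u                                 # down factor
        p = (math.exp(r * dt) - d) / (u - d)      # risk-neutral probability of an up move
        disc = math.exp(-r * dt)
        # terminal payoffs max(S - K, 0) on the recombining lattice
        values = [max(S0 * u**j * d**(steps - j) - K, 0.0) for j in range(steps + 1)]
        for _ in range(steps):                    # roll back through the tree
            values = [disc * (p * values[j + 1] + (1 - p) * values[j])
                      for j in range(len(values) - 1)]
        return values[0]

    # Illustrative only: an attribute worth ~8 bits today, retrieval cost equivalent to 10 bits.
    print(binomial_call_value(S0=8.0, K=10.0, r=0.02, sigma=0.25, T=1.0, steps=100))
    ```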

    The final speaker of the first session was Ken-ichi Tatsumi, who refines the Gordon-Loeb approach, using real option theory as an aid to deciding on the scale and timing of security investments. A naive manager would evaluate such a project by discounted cash flow; however this ignores the reality that a project can be abandoned at any time, which amounts to a put option on the budgeted but unspent cash. Herath and Herath had previously built on Gordon-Loeb but their model has no closed-form solution. The innovation is to model the threat as Brownian motion: dT = mu T dt + sigma T dw where mu is drift, sigma is volatility and dw is the increment of the Wiener process. This leads to an analytic solution. The higher the drift of the Brownian motion, the larger and sooner the optimal investment will be. Possible future extensions include attacker behaviour and investment equilibrium.
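
    A quick simulation of the geometric Brownian motion threat process in the model (parameters invented; this only illustrates the process, not the paper’s closed-form investment rule):

    ```python
    import math, random

    def simulate_threat(T0=1.0, mu=0.08, sigma=0.3, horizon=5.0, steps=500):
        """One sample path of dT = mu*T*dt + sigma*T*dw, using the exact GBM step."""
        dt = horizon / steps
        T, path = T0, [T0]
        for _ in range(steps):
            dw = random.gauss(0.0, math.sqrt(dt))                     # Wiener increment
            T *= math.exp((mu - 0.5 * sigma ** 2) * dt + sigma * dw)
            path.append(T)
        return path

    # The mean terminal threat level should be close to T0 * exp(mu * horizon):
    runs = [simulate_threat()[-1] for _ in range(2000)]
    print(sum(runs) / len(runs), math.exp(0.08 * 5.0))
    ```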

  6. The second session started with Shailendra Fuloria presenting work that he and I did together on security economics and critical national infrastructure. Critical national infrastructure encompasses many things but electricity is the key in modern societies, having been attacked by the USA in both Gulf wars as well as the bombardment of Serbia. Online attacks have happened too. Might someone attack electricity generation or distribution over the Internet? The security economics are different from PCs or mobile phones: although there’s (physical) lock-in and complex supply chains, there is at least one powerful customer exposed to real risk – namely the asset owner. Why can’t security just be left to them? One reason may be the externalities of correlated failure. A single attack on an oil refinery might be left to the oil company, but simultaneous attacks on half a dozen refineries might lead to rationing and large social costs, which may give governments a reason to intervene. Curiously, there is a large natural experiment underway in that the USA is regulating electricity security (via FERC and NERC) while the UK is more free-market. The US regulatory regime has already led to perverse outcomes, such as power companies trying to make their assets no longer “critical” so as to escape regulation. These compliance games carry wider dependability costs: for example, plant operators remove black-start capability to escape regulation. Further security-economics problems the industry is about to face include patch management (hard in systems requiring 99.999% uptime) and evaluation. And the typical plant operator earns under $35,000 a year. How do you design information security that Homer Simpson can use safely?

    In questions, Soeren doubted the relevance of national approaches. Shailendra replied that we have reasonable data on the USA and the UK but it’s hard sometimes to get people from elsewhere to talk to researchers. An audience member, involved in SCADA for water systems, agreed that security usability is critical, but that Homer isn’t the only user. Shailendra agreed; top management doesn’t understand security either. Angela remarked that building systems where people can recover from bad things is also important. Another speaker queried the balance of fixed and marginal costs. Shailendra agreed that the economics of control systems and control systems security may be different. One of the EU guys called attention to a Directive in 2008 requiring member states to identify systems whose failure could affect two or more states, so it’s not purely a national responsibility.

    Richard Clayton was on next to explain how Internet routing works from the technical point of view, and then why it doesn’t work at all from the economic point of view. A critical problem is multi-homing, where firms buy services from two ISPs for resilience. The firm’s IP address space ends up in the global routing table, and thus a local resilience decision has a global cost consequence. Dividing the $23bn cost of the IP infrastructure by the 300,000 routes, we find that each route costs about $77K. However the price paid for a route is much lower: at worst Eur 1300 pa if you join RIPE, and much less if your ISP does it for you. So this is a tragedy of the commons. IPv6 will use SHIM6, whose RFCs were published last week. However you only get a benefit from SHIM6 if the other end does too, and only multi-homed sites have an incentive to; so there’s no first-mover advantage to anybody and why should anyone ever move? A pure economic solution might be hard (it’s hard to get the $77,000 from each firm to the router buyers) and so a policy change may be needed. He ended by recommending that in the routing area, RFC writers should have to put in a section on “economic considerations” as well as the mandatory section on “security considerations”. In questions, Eliot asked whether this might apply to the consumer space; Richard agreed that he’d really like to have his own house multi-homed. However most homes don’t have long-lived critical flows. Perhaps firms could in the long term get away from such flows by getting their application writers to provide better recovery from network outages.

    Zach Zhou then tackled the impact of security ratings on vendor competition. This is motivated by the move to software-as-a-service. He asked whether risk rating increases social welfare, whether it helps the best vendors, and whether it helps the most discerning customers. He models this as a two-period game in which vendors can hide their rating for the first period. In this model, if the difference between low-type and high-type vendors is very large, then rating can hurt them. However risk rating does always increase social welfare. In questions, Soeren and Tyler suggested empirical follow-up, as real ratings exist and Ben Edelman got an interesting initial result in 2006. Zach agreed.

    The final speaker of the morning was Christian Probst, whose topic was the risk of risk analysis. The specific issue is the insider threat, and the initial question is why organisations choose to be so vulnerable. There are various types of insider threat, from errors through low-grade breach of trust to the high-profile betrayals by senior people that end up in the papers. Preventing this is about understanding the time evolution of trust and risk; over time, the effective risk posed by a staff member becomes much greater than the trust explicitly vested in him. But it’s made harder by the very complex and potentially conflicting goals of organisations (profit, compliance, risk) and individuals. Major, complex insider scams are fairly rare, but unlike some other rare events their risk is underestimated rather than overestimated. One factor is that firms don’t know exactly what controls to impose, or what they might cost. In questions, there was some discussion about whether it was better or worse to promote employee trust. I argued that organisations with strong loyalty cultures can see insiders go badly wrong when they are passed over for promotion or otherwise rejected. Coauthor Jeffrey Hunker agreed that such cases were probably insoluble, and that there is a large literature on insider threats – but very poorly integrated.

  7. The final keynote was given by Robert Coles of the Bank of America. We were again asked not to blog. I dislike this in an academic meeting; I took my usual note and I’ll email it to him to see if he wants any deletions before I post it here later.

    The first regular talk was given by Galina Schwartz, who asked why we don’t have much of a market in cyber-insurance, and whether it would help if we had one. In her model, the probability of attack is a combination of a private good (user security) and a public good (network security) which is average user security; otherwise it follows the asymmetric-information treatment of Rothschild and Stiglitz (1976) to explain the missing market. In the absence of easy measurement of user security, equilibrium won’t exist unless users’ exposure is most of their wealth. Also, where an equilibrium exists with positive coverage, security will be worse. In the case that we can enforce user security levels at zero cost, there will often be positive equilibria, but security will still be worse at equilibrium unless the quantum of damage is very small. In effect, insurers free-ride on other insurers; insurance is a tool for risk redistribution, not risk reduction. If you still want the market to take off, reduce the information asymmetries, with security ratings and disclosure laws.

    The final speaker of this session was Frank Innerhofer-Oberperfler, who has been working with Swiss Re on cyberinsurance premium calculation. He agrees with the previous speaker that asymmetric information is a big deal, but it’s not always feasible to do an onsite assessment. So underwriters use standardised questionnaires, and are interested in what indicators they should ask for. Frank talked to 36 experts from Germany, Austria and Switzerland. They got 198 indicators which actuaries cut down to 94. These were then rated by 29 of the experts. These are now being used by Swiss Re’s actuaries. See the paper for the actual indicators.

  8. The first speaker at the last regular session was Marc Lelarge, talking on the economics of malware. As an example, the Storm Worm started infecting computers in January 2007 with emails about “230 dead as storm batters Europe” and grew rapidly to have over a million bots. He produced two models: an economic model of whether people invest in security, and a random-graph epidemic model in which the probability of infection differs for machines whose owners did, or did not, invest. It turns out there’s a “fulfilled expectations equilibrium” (Katz and Shapiro 85) and it’s possible to get a closed-form solution for the externalities. It’s prudent to distinguish between private and public externalities. It also turns out that agents under-invest in security in all cases; the price of anarchy is always positive. (He also computed network externalities for the Erdos-Renyi graph case; see his paper at Sigmetrics 08.) Anyway, the nature of the equilibrium depends on how strong the protection is. If it’s strong, there’s one equilibrium and a free-rider problem; if it’s weak there can be two and a coordination problem.
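
    For flavour, here is a crude simulation in the spirit of the model: an Erdos-Renyi graph in which protected nodes are infected across an edge with lower probability than unprotected ones. All numbers are invented, and the paper’s branching-process analysis and the economic game on top of it are not reproduced.

    ```python
    import random

    def infected_fraction(n=2000, avg_degree=4, invest_frac=0.5,
                          p_weak=0.5, p_strong=0.1, seeds=5):
        p_edge = avg_degree / n                      # Erdos-Renyi edge probability
        protected = [random.random() < invest_frac for _ in range(n)]
        neighbours = [[] for _ in range(n)]
        for i in range(n):                           # build the random graph
            for j in range(i + 1, n):
                if random.random() < p_edge:
                    neighbours[i].append(j)
                    neighbours[j].append(i)
        infected = set(random.sample(range(n), seeds))
        frontier = list(infected)
        while frontier:                              # simple SI spread over the graph
            nxt = []
            for i in frontier:
                for j in neighbours[i]:
                    if j not in infected:
                        p = p_strong if protected[j] else p_weak
                        if random.random() < p:
                            infected.add(j)
                            nxt.append(j)
            frontier = nxt
        return len(infected) / n

    # Compare the epidemic size when half the population invests versus nobody:
    print(infected_fraction(invest_frac=0.5), infected_fraction(invest_frac=0.0))
    ```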

    The second talk, by Vicente Segura, was about evaluating the incentives behind DDoS attacks, specifically on the femtocells being deployed by Telefonica. (Their proposed architecture included gateways which they were concerned might be DDoS’ed.) They tried to estimate possible revenues from extortion, and to negotiate bandwidth prices with Russian botmasters. With the numbers they got, so long as fewer than 13% of users paid up, extortion would not be a viable business. The data collection process taught them that cybercriminals are highly specialised and well organised. In questions, Soeren questioned whether a botnet might be rented for less to an associate of its herder. Richard complained that the model is nothing like complex enough, by comparison with the history of Russian DDoS attacks on the UK gambling industry – where the victims stopped the extortion by agreeing not to pay up any more.

    The last regular speaker of the workshop was Stefan Frei, on the dynamics of insecurity. He collected lifecycle data on 27,000 vulnerabilities. This provided statistics of discovery, exploitation and patching. Only 15% of exploits are zero-day; the likelihood of an exploit goes up sharply from 15% to 78% at disclosure day, and 94% after 30 days. As for patch dynamics, 43% of vulns had a patch available at disclosure day, which measures responsible disclosure; within 30 days 72% had a patch available. The “insecurity gap” arises because exploits are systematically available faster than patches. It seems that the hackers have the better coders! Thus, he says, there is a need for independent information provision, and a business opportunity for AV/IDS to provide non-vendor defences. As for vuln distribution: in 1998, the top 100 vendors accounted for 98% of the vulns and now it’s only 40%! MS is consistently the top vuln vendor, and Apple the second. Linux is currently 10th. Whatever platform you use, you can’t hide! However there are huge inter-vendor differences in responsible disclosure. As for the white market (TippingPoint etc), it clearly has its place in the ecosystem: Sophos gets 57% of its vulns there and CA 39%. In short, it is now quite possible to relate lifecycle data to the processes in the security system. In questions: how does patch uptake affect all this – surely it’s worse in practice? Stefan agreed: he has also been using Google data to track browser patching. What’s the big picture? The main problem is botnets; they’re built from user machines; but the patching cycle is optimised for corporates; yet the home users don’t enjoy the perimeter protection that corporates do. We could get a better equilibrium from more frequent patching. The general model appears to be sustainable though.
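
    The sort of lifecycle statistic in the talk can be computed from per-vulnerability event dates along these lines; the three records below are made up for illustration.

    ```python
    from datetime import date

    # (disclosure date, exploit available, patch available) -- None if never observed
    vulns = [
        (date(2008, 3, 1),  date(2008, 3, 1),  date(2008, 3, 20)),
        (date(2008, 5, 10), date(2008, 5, 25), date(2008, 5, 10)),
        (date(2008, 7, 2),  None,              date(2008, 8, 15)),
    ]

    def available_by(records, idx, days_after):
        """Fraction of vulns whose event (idx 1 = exploit, 2 = patch) occurred within days_after of disclosure."""
        hits = sum(1 for r in records
                   if r[idx] is not None and (r[idx] - r[0]).days <= days_after)
        return hits / len(records)

    for d in (0, 30):   # the gap between the two curves is the "insecurity gap"
        print(d, "days:", "exploit", available_by(vulns, 1, d), "patch", available_by(vulns, 2, d))
    ```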

  9. The last event was the rump session, at which anyone could stand up and talk for five minutes.

    Frank Stajano talked of work on scams with Paul Wilson, of The Real Hustle (our dinner speaker yesterday): we should learn from scammers. He showed a video clip of “the black money blag”: the line is that banknotes being withdrawn from circulation are cancelled by being painted black before being sent to be burned. The conman produces a magic solution to remove the ink, demonstrates that it “works” using sleight of hand, and sells the crowd black banknotes plus solution. Such scams lead to principles: distraction, social compliance, herd, dishonesty, deception, need and greed, and time. They have a paper on this: see Frank’s home page.

    Next up was Richard Henson of the University of Worcester, researching what infosec can do for SMEs, and looking at stats of ISO 27001 certifications. Most go to big companies in the UK; SMEs do it only if customers tell them to. Only 28% were even aware of PCI DSS. Does this matter?

    Third speaker was Joe Bonneau on “Making Privacy Viral”. Lots of people have published guides on how to protect your privacy on facebook; the most popular one has been downloaded over a million times. Joe can’t figure out how it works without experimenting; users clearly want to know. Now it’s economic and there’s a book: “The Holy Grail of Facebook Privacy” (by the author of the million-download pdf). Privacy is about control, which often means delegating to others. Maybe what we need is a means for people to adopt the privacy settings already set by others: “I want Joe’s facebook settings”. A paper on this will appear at SOUPS.

    Kanta Matsuura spoke next on “The Broader View, and Interactions” inspired by yesterday’s panel on interdisciplinary working. He’d like more inter-sector working: three quarters of this WEIS’s papers are from university people only. How do we encourage industry participants? He mentioned the IFIP conference on trust management in Japan next year, http://www.ifip-tm2010.org. General security events like Oakland and SCIS exhibit more industry-academia collaboration.

    Nicolas Courtois: “Economics of Keeping the Spec Secret”. Businesses often see secrecy as simply raising the barrier for competitors and keeping out hackers. How can we value this? Take for example Mifare Classic, whose cipher was secret (contrary to Kerckhoffs’ principle) and recently broken. The optimal financial strategy for a security company may be to sell snake oil, and go bankrupt when their products are broken. Should there be some regulatory intervention, such as a requirement for bonding or evaluation?

    Pern Hui Chia is working on collaborative software scrutiny for mobile platforms with Nokia. Android does not require testing; iPhone uses centralised scrutiny; Symbian and J2ME require independent testing. What’s the future for third-party apps? They did a study and found that only a quarter or so of users paid attention to signatures; and the Gmail client isn’t signed while the FlexiSpy spyware is. He concludes that signing is a signalling game, and wonders whether social rating might not be better. Risk communication might then get better. As part of this, he prototyped a UI, in which users had to kill a Pac-Man monster to install an app, so as to prevent automated click-through; 80% of users liked the idea.

    Eric Johnson is working on rating vendors. At present, a security vendor like Iron Mountain is being rated by hundreds of customers, and a customer like Goldman spends tons of money rating vendors. (BITS has got some traction but is self-assessment.) So Moody’s and Goldman tried a rating agency as a joint venture. Is the rating market viable, and do ratings affect vendor competition? Moody’s charge vendors $23,000 for initial assessment and $10,000 pa for monitoring; customers pay $3,000 pa plus $500-750 per report. (In the bond market, only the issuers pay.) Moody’s is stepping back and Goldman’s interested in continuing with other partners. (John Moody started in 1909 with analysis of railroad investments, got a bit of traction in the 1930s, and took off in 1975 when they were blessed by the SEC as part of a regulated oligopoly).

    Jean Camp announced SPIMACS for Nov 13 – http://www.infosecon.net will have the link soon. She is also working on incentive-based access control. The idea is that there are two types of insiders – malicious and inadvertent; the proposed fix is to give each employee a risk budget and to price risky actions. Consistent rewards, even if small, can bring large changes in behaviour. Prices can even take the form of time friction. The goal is to align personal and organisational risk budgets, and also identify staff who are particularly risk-seeking or risk-averse.
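
    A minimal sketch of what pricing risky actions against a per-employee risk budget might look like (the action prices and the budget are invented, not from the talk):

    ```python
    ACTION_PRICES = {"email_attachment_outside_firm": 5, "usb_copy": 20, "install_unsigned_app": 40}

    class RiskBudget:
        def __init__(self, budget=100):          # periodic allowance of "risk points"
            self.remaining = budget

        def request(self, action):
            cost = ACTION_PRICES[action]
            if cost > self.remaining:
                return False                     # over budget: deny or escalate for approval
            self.remaining -= cost               # debit; spending patterns flag risk-seeking staff
            return True

    alice = RiskBudget()
    print(alice.request("usb_copy"), alice.remaining)   # True 80
    ```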

    The final rump session talk was by Ronald from Oxford, working on usable security. Protocols such as Bluetooth require users to tell, compare or agree key fingerprints, the dependability of which can be hugely context-dependent. Rather than formal proofs we need to pay attention to users’ intentions and incentives. Where are the trade-offs? This could be a useful thing to study.

  10. Thanks, Ross, for sharing your careful and insightful notes. Lots of interesting ideas at WEIS.

  11. I’m very interested in Pern Hui Chua’s findings on collaborative software scrutiny; is there a paper available? I couldn’t find it on the WEIS site.

  12. That was a rump session talk, so you’ll have to track down the speaker and ask him. Sorry, I should have made clear in that post that it was a rump session; anyone could stand up and say anything for five minutes. I’ll edit the post to make that clear

  13. Hi Craig, very glad to hear that you are interested. We look at certification issues on mobile platforms, and study the behavioral, social and economic aspects of software installation. It is a joint work with my friends in NRC (Helsinki) and they have noted your interest. I have also put up some basic info on my PhD page. Look forward to working with you.

    Note that my last name is Chia. 🙂

  14. Here’s the final talk report.

    The final keynote was by Robert Coles, listed as “Strategy and Budgeting in Practice – Art or Science?” He was CISO of Merrill Lynch and is now at the Bank of America. His actual title was “Information Security – Art or Science?” (A few sensitive points have been cut at his request.)

    His topic was whether current budgeting processes are very relevant to us. Banks are very much driven by Basel II. There’s a whole raft of security-relevant standards such as ISO27001 and COBIT, but they are not particularly consistent with one another.

    Case study: before he joined Oct 2006, Merrill Lynch had experienced some high-profile data losses. So he sat down and tried to do a systematic analysis of how staff could get bits out, and loss vectors from accident through external attack to departing employees. Quantifying these threats was hard: what’s the value of a golden client list, for example? Anyway the bank agreed to stump up (mid-budget-cycle) for software to control who could copy what files to USBs where.

    Then an HR employee took some data – name, salary, SSN etc on 33,000 staff – and put it on an external hard drive to save money. It got lost during an office move, and this was front page on the Wall Street Journal (http://www.cnbc.com/id/20162588/). Suddenly he was dealing with the original people’s bosses’ bosses’ bosses.

    So he got consultants to look at three other FIs to get the comparative picture, using a threat and control matrix. They started with threats (starting with assets would have taken too long) and also looked at control maturity (ad hoc, up through formal, to enterprise-wide, and finally to continuous and validated with a control loop). He came up with a prioritised roadmap. That he could have done ad hoc, but as he was new to the organisation it was helpful that he’d spent money on trusted consultants. He ended up having to budget 40 projects in a short period of time.

    He concludes that running security in a big organisation is art, not science. There’s no real data with which to make real assessments of probability; it’s almost impossible to catalogue assets in large complex organisations and understand the impact of losing them; the prioritisation process is pretty arbitrary; you end up doing the stuff with a high profile rather than what’s necessary. See D’Arcy and Hovav’s article in the Handbook of Research on Information Security and Assurance on the lack of research in security management. See Anderson, Herriot and Hodgkinson, J. Occupational and Organisational Psychology 2001 on the widely different demands on practitioners. Need more opportunities for academics to work with practitioners. Pragmatic science – with both rigor and relevance – is the goal.

    In questions, he doubted that we’d get decent loss history for interesting incidents from a single organisation; it would take data collection from many organisations to do that. Security decision making is difficult and important; the kind of information collation and dissemination that Bruce does is important. However real organisations often find it hard to deal with complex policy issues and just adopt a baselining approach – put together a policy once, drawing on various international and other standards, and stick with it. Closed-door information exchanges? They participate in CPNI; Steven Bonner could talk more. Are internal controls driven by the big four accountants? Not sure – some junior auditors started demanding stuff after SOX but were pushed back after the industry said those controls wouldn’t work. And as for pragmatic research, the building blocks – technology, economics, psychology – are probably there; the challenge is to put them together into stuff that practitioners can use.

  15. Excellent coverage. I felt that I was present during the deliberations of the seminar. God bless you for your service.
    Regards,
    R. Ramamurthy
    Chairman, Cyber Society of India
    56A, Dr. Ranga Road, Chennai 600018, India.
