Workshop on the economics of information security 2010

Here is a liveblog of WEIS, which is being held today and tomorrow at Harvard. It has 125 attendees: 59% academic, 15% government/NGO and 26% industry, with a split of backgrounds of 47% CS, 35% econ/management and 18% policy/law. The paper acceptance rate was 24/72: 10 empirical papers, 8 theory and 6 on policy.

The workshop kicked off with a keynote talk from Tracey Vispoli of Chubb Insurance. In the early 2000s, the insurance industry thought cyber would be big. It isn’t yet, but it is starting to grow rapidly. There is still little actuarial data. But the industry can shape behaviour by sitting in the gap between risk aversion and risk tolerance. Its technical standards can make a difference (as with buildings, highways, …). So far a big factor is the insurance response to notification requirements: notification costs of $50–60 per compromised record mean that a 47m-record compromise like TJX is a loss you want to insure! So she expects a healthy supply-and-demand model for cyberinsurance in the coming years. This will help to shape standards, best practices and culture.

Questions: are there enough data to model? So far no company has enough; ideally we should bring data together from across the industry to one central shared point. Government has a role, as with highways. Standards? Client prequalification is currently a fast-moving target; insurers’ competitive advantage is understanding the intersection between standards and pricing. Reinsurance? Sure, where a single event could affect multiple policies. Tension between auditability and security in the power industry (NERC) – is there any role for insurance? Maybe, but legal penalties are in general uninsurable. How do we get insurers to come to WEIS? It would help if we had more specificity in our research papers – if we did not just talk about “breaches” but “breaches resulting in X” (the industry is not interested in national security, corporate espionage and other things that do not result in claims). Market evolution? She predicts the industry will follow its usual practice of lowballing a new market until losses mount, then cutting back coverage terms. (E.g. employment liability insurance grew rapidly over the last 20 years but became unprofitable because of class actions for discrimination etc., so the industry cut coverage – but that was OK, as it helped shape best employment practice.) Data sharing by the industry itself? Client confidentiality stops ad-hoc sharing, but it would be good to have a properly regulated central depository. Who’s the Ralph Nader of this? Broad reform might come from the FTC; it’s surprising the SEC hasn’t done anything (HIPAA and GLB are too industry-specific). Quantifiability of best practice? Not enough data. How much of the business is cyber? At present it’s 5% of Chubb’s insurance business, but you can expect 8–9% in 2010–11 – rapid growth!

Future sessions will be covered in additional posts…

8 thoughts on “Workshop on the economics of information security 2010”

  1. The second morning session had four talks. The first was “Data Breaches and Identity Theft: When is Mandatory Disclosure Optimal?” by Sasha Romanosky (work with Richard Sharp and Alessandro Acquisti). What is the net change in social costs following breach-disclosure laws? The authors base their work on the standard economic analysis of tort, minimising the cost of care plus the cost of breach. Rational consumers will take more care and be better off; firms suffer costs; the interaction is solved as a sequential game, evaluated at the firm’s optimal level of care. The social cost converges on the firm’s optimal cost as the level of care increases. The disclosure tax seems larger than the consumer harm, though firms can control it; but disclosure seems more appropriate than mandated standards.

    “Encryption and Data Loss” by Catherine Tucker (with Amalia R Miller) found the surprising and important result that hospitals which adopt encryption end up having more breaches, not fewer! The data breach notification laws that give encryption exemptions don’t explain it all, either. Various things go wrong – e.g. the Troy Beaumont hospital in Denver had records on a laptop with the crypto key taped to it. Open Security Foundation volunteers collected the data on breaches; the authors also used 2005–8 hospital data (the HIMSS Analytics Database records whether hospitals “use encryption”). The most common cause of data loss was loss of equipment; you also get more breaches if you have poor pay relative to local wages, or if you have an “enterprise master patient index” (which makes it easier for your staff to find patient records, but easier for the bad guys too). It seems that encryption software makes staff careless, and the damage outweighs the actual protection the software gives.

    “Market Impact on IT Security Spending” by Lisa Yeo (work with Bora Kolfal and Raymond Patterson). Do security breaches affect a firm’s competitors in a positive or negative way? Firms can be substitutes or complements in loss, and in the latter case should spend more to avoid breaches.

    “Outsourcing Information Security: Contracting Issues and Security Implications” by Asunur Cezar (with Huseyin Cavusoglu and Srinivasan Raghunathan). Outsourcing contracts for managed security service provision often have a penalty clause, but neither party can observe the outcome perfectly. This paper provides a model for penalty-based contracts under which the optimal penalty exceeds the loss from a typical breach.

  2. The afternoon sessions started with an analysis of the adult website business: “Is the Internet for Porn? An Insight Into the Online Adult Industry” by Gilbert Wondracek (with Thorsten Holz, Christian Platzer, Engin Kirda and Christopher Kruegel) investigates the links between adult pay sites, link collections, traffic brokers, search engines and redirector services. To begin with they had students look at sites; then they crawled 269,000 URLs from 35,000 domains, which they checked for malware (over 3%, or about five times as much as expected) and other standard exploits. 87% of these URLs were classified, of which 8.1% were pay sites and 91.9% free. Traffic brokers also play tricks on each other and on search engines. To investigate the business the authors set up two porn websites and joined eight affiliate programs. They joined 3 traffic brokers, where $160 bought 49,000 visitors, of whom more than 20,000 were susceptible to at least one known exploit. By comparison, pay-per-install sites charge $130 per 1000 US installs. The industry seems to suffer from a lack of technical sophistication; and although not all porn sites are crooked, many are, and there’s a whole ecosystem of shady services.

    The second paper was on another provocative topic. “Guns, Privacy, and Crime” by Alessandro Acquisti (with Catherine Tucker) studies the effect of the publication of a list of gun permit holders in Memphis, Tennessee. This was publicised in February 2009 by a shooting incident in a shopping mall, and gives an insight into the debate on the tradeoff between privacy and security. Would knowing which neighbourhoods in Memphis had the most guns deter crime, or attract burglars trying to steal guns? The significant effect was a drop in burglaries. There was no evidence of more guns being stolen, or of geographical displacement; and the effect lasted about 15 weeks.

    The third talk, “Misplaced Confidences: Privacy and the Control Paradox” by Laura Brandimarte (with Alessandro Acquisti and George Loewenstein) went back toward sex. The control paradox is that we’re willing to reveal sensitive information ourselves, but get concerned when the same information is revealed by others. The authors investigated the hypotheses that lower perceived control triggers lower willingness to reveal, and that higher perceived control triggers higher willingness. They did three survey-based experiments which confirmed these hypotheses, with the effect being particularly strong for privacy-intrusive questions, and for publication rather than sharing, access or other use. A possible confounding factor is ignorance.

    The final talk of the session was “A Welfare Analysis of Secondary Use of Personal Data” by Nicola Jentzsch, which took us from sex to mathematics. Nicola models the welfare effects of secondary data uses. Some countries like Poland prohibit cross-industry data sharing; others demand consent, whether positive (Germany) or negative (the Czech Republic); yet others allow it (Austria, Spain, the UK). Her model involves a monopolist in each of two industries, which can trade data in some countries in order to price discriminate. Data protection rules do affect pricing strategies and the distribution of rents, but the net welfare effects depend on parameter values.

  3. The final session on Monday was kicked off by Joe Bonneau giving a paper on “The password thicket: technical and market failures in human authentication on the web” (work with Soeren Preibusch). The authors looked at 150 sites’ password mechanisms; only 33 gave any advice at all, with no consensus among them; TLS deployment was also sparse and inconsistent (some sites used it on some screens and not others); and standards for password length, password recovery, user-probing prevention and guessing prevention were sporadic (over 80% did nothing here). In short, most sites reinvent the wheel, and most of them do it badly. Popular, growing, competent sites tend to be more secure, as do payment sites, while content sites do worst. Password overcollection is a tragedy of the commons; insecurity is a negative externality. Attackers get passwords from weak sites and try them elsewhere: Twitter recently forced a million users to reset their passwords after such a password-reuse attack. Password authentication may be breaking down.

    “Please Continue to Hold: An empirical study on user tolerance of security delays” by Serge Egelman (with David Molnar, Nicolas Christin, Alessandro Acquisti, Cormac Herley and Shriram Krishnamurthi). Behavioural economics tells us people put up with delays better if the delays are explained (Langer, Blank and Chanowitz 1978), so the authors tested this on Mechanical Turk. Would people be less likely to cheat given a “security” explanation? Actually more so, but not significantly. Highlighting the delay didn’t help either. There were no differences in accuracy or task time.

    The final paper of the day was “Inglourious Installers: Security in the Application Marketplace” by Jonathan Anderson (with Joseph Bonneau and Frank Stajano). The authors studied twelve application markets and found a correlation between transparency and confinement. Browsers are low on both; desktop operating systems are low-to-medium on both as programs run with ambient authority; mobile systems like Symbian, iPhone and Android have good transparency and confinement. The exception is Facebook (and Open Social) with externally hosted apps, rich permissions and crazily open defaults. The clustering suggests it’s about markets not technology! As well as the usual externalities and asymmetric information, app markets have a further factor. The cost of evaluating code depends on who does it; it’s cheaper for the vendor. Given this, the key tool may be branding; the precedent is Microsoft, Symbian and recently iPhone, where Apple exposes its own brand in return for market power. The Android market by contrast is a Wild West.

  4. The second day’s proceedings were kicked off by Richard Clayton asking “Might Governments Clean-up Malware?” Malware on your PC is typically detected by others, who observe its effects and report them to your ISP. When they tell you, you can go to Geek Squad or a computer shop; 8% of people just buy a new machine, according to a 2006 survey. Some ISPs ignore reports, though – talking to the customer costs 6–8 months’ profits. Richard proposes that ISPs be compelled to report to the government, which would notify the customer and offer an official clean-up scheme at a subsidised price. At what price should a contractor tender to provide the scheme? Assuming a $70 clean-up cost and a $30 fixed price to the customer, the tender should be $40. However, the contractor might make $21 from selling half the punters AV, $5 from selling 5% of them a new machine, and $4 worth of customer relationship … with a more careful model, the tender might be about $11 rather than $40. If 0.5% of the population use the service each month, the scheme costs 66 cents per machine per year. That’s a low price for a public health policy (fluoridation of water costs 92c per person per annum). The discussion brought out different attitudes to public health and state intervention.
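Richard’s tender arithmetic can be sanity-checked with a back-of-the-envelope sketch. The dollar figures are the ones quoted in the talk; the simple subtraction below is my own check, not the paper’s actual model (which lands at ~$11 rather than exactly $10).

```python
# Back-of-the-envelope check of the clean-up tender arithmetic above.
cleanup_cost = 70.0   # contractor's cost to clean one machine
fixed_price = 30.0    # subsidised price charged to the customer
naive_tender = cleanup_cost - fixed_price  # subsidy needed per clean-up

# Side revenues per clean-up the contractor can expect (figures from the talk):
av_sales = 21.0       # from selling AV to about half the customers
machine_sales = 5.0   # from selling ~5% of customers a new machine
relationship = 4.0    # value of the ongoing customer relationship
careful_tender = naive_tender - av_sales - machine_sales - relationship

print(naive_tender)    # 40.0
print(careful_tender)  # 10.0 -- the talk's "more careful model" lands near $11

# Cost to the public purse per machine per year, if 0.5% of machines
# use the service each month at a tender of about $11:
annual_cost = 0.005 * 12 * 11
print(round(annual_cost, 2))  # 0.66, i.e. 66 cents per machine per year
```

The last line reproduces the 66-cents figure that makes the comparison with water fluoridation (92c per person per annum).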

    The second talk was on “The Role of Internet Service Providers in Botnet Mitigation: An Empirical Analysis Based on Spam Data” by Michel van Eeten (with Johannes Bauer, Hadi Asghari, Shirin Tabatabaie and Dave Rand). The authors analysed 63 billion spams from 138 million unique sources, which were mapped to ISPs. This time series confirmed that ISPs are the critical control point: the top 200 ISPs controlled 60% of spam sources, and the top 10 controlled 30%. There are substantial differences in ISP effectiveness at botnet mitigation – two orders of magnitude. They explored possible explanatory factors (do piracy rates matter, or ISP size?) and found that cable providers have less infection, and a country’s adherence to the London Action Plan also helps. Education, regulation and automation may explain some ISP differences as well.

    “Security Games in Online Advertising: Can Ads Help Secure the Web?” by Nevena Vratonjic (with Jean-Pierre Hubaux, Maxim Raya and David Parkes) studies what happens when people meddle with advertising, such as when ISPs screen subscriber traffic, share data with ad companies, or even insert ads into web pages on the fly. The authors build a game-theoretic model of ISP strategic behaviour: ISPs can relay traffic, leak profile data to advertisers, or cheat by stealing ads; advertisers can buy profiles, risk cheating, or implement encryption. This gives a multi-stage game between a website and all ISPs. They used real data on web traffic to estimate incentives to divert ad traffic; an ISP could make money out of the top 1000 websites, but the more aggressive ISPs get, the more websites would be secured. With other parameters, ISPs won’t even try to divert revenue from the most popular sites, as they’d be rapidly secured (and secured websites can’t cooperate on targeting).

    The final talk of the session was from Pandu Rangan on “Towards a Cooperative Defense Model Against Network Security Attacks” (work with Harikrishna Narasimhan and Venkatanathan Varadarajan). They build on the work of Varian and Grossklags on weakest-link, sum-of-efforts and best-shot security games. In best-shot, we can get optimal protection iff the best player contributes; in weakest-link, we cannot as long as someone slacks off. But what happens in the middle, when we have some players who’ll cooperate? A model is presented using partition function games. It works if there’s a non-empty core – a partition of players none of whom has an incentive to deviate. Technical conditions are given for this in both optimistic and pessimistic cases. It’s harder when the players are optimistic or the network is large.

  5. The current session is a panel: Policy for Payment System Security. First up is Richard J. Sullivan, from the Federal Reserve Bank of Kansas City. He looks first at what has changed with payment fraud. Now “identity theft” solutions are a standard product, offered even by one car repair shop. Criminals have been fairly effective at compromising the existing payment approval system (relying on zip codes, mother’s maiden name, etc.). This has just led banks to widen the types of information they use.

    For payment card fraud, the PCI standards have been introduced (led by the major payment schemes — MasterCard, Visa, Amex), applying to non-bank institutions, requiring that they protect card details. Also, card issuers are producing contactless cards which have transaction-specific cryptograms, to protect against replay attacks. Outside of the PCI process, tokenization and end-to-end encryption (techniques to reduce which systems have access to card details) are being considered.

    There is very little data on fraud losses in the US, but Richard’s estimate of losses as a proportion of payment value is higher in the US than in other countries (UK, France, Australia, Netherlands). Despite this, other countries moved towards Chip & PIN when their losses were much lower. There are reasons for this, including that online payments, which Chip & PIN does not protect, are well developed in the US.

    Richard argues that the card payment industry needs an institutional structure that fosters wide participation in developing security standards. The existing structures are either open (ANSI) or closed (PCI), but co-operation is hampered by self-interest. Also, the industry in the US needs more data on fraud losses.

    Next up was Mark MacCarthy, from Georgetown University. He argues that the main problem with payment system security is that card payment details are static, and steps to protect the confidentiality of this data (the account number and CVV) can only go so far.

    Current liability rules protect the cardholder, but the entity that is breached is not always the one who pays the losses. This is an example of misaligned incentives, resulting in data being insufficiently protected. PCI is a way to internalize these risks, requiring that institutions who process card details do not store the CVV. Large merchants are generally compliant, but smaller ones are often not.

    Also, public policy has now become a player in these discussions. Costs of breaches can be recovered through civil regulation, and some aspects of PCI have now been codified in law. Nevertheless there is dissatisfaction from merchants, particularly over the cost of PCI compliance. However PCI still deals with protecting static information.

    Chip & PIN is a way to avoid having to deal with static information, with the goal to make this information useless to criminals. In the UK Chip & PIN reduced card fraud at point of sale, but online fraud increased. Currently there is no timescale in the US for deploying Chip & PIN, but perhaps the Federal Reserve is the right institution to push it forward.

    The final speaker is Ross Anderson, who started by comparing the merchant bankers who facilitated the shipping trade in the 19th century with the card payment schemes that facilitate online trading between mutually untrusting parties. However, different countries have made different decisions as to who pays the cost of fraud, which offers us data.

    The position in the UK is that if Chip & PIN is used, the customer is liable; otherwise the merchant pays. The result is that while counterfeit fraud initially went down, criminals adapted and it eventually went up.

    The technology didn’t work as well as one might have hoped. The tamper resistance of terminals turned out to be seriously flawed. Also the Cambridge team have discovered a way to use a stolen card without knowing the correct PIN. Ross showed some footage from BBC Newsnight demonstrating that this vulnerability works in practice.

    Fixing the flaw is difficult because the information needed to detect the fraud doesn’t all exist in one place. Also, the specification has become too large and complex, with nobody in control. This points towards more regulation being desirable, but financial regulators have been captured by the institutions they are supposed to regulate.

    There are many opportunities for further research and work. For example, should interchange fees be capped? How should non-banks become involved (PayPal, Facebook)? Should standards be developed in the open? Which other routes are open for governance?

    Tyler Moore then leads the discussion, asking how chip card payments could be deployed in the US. Should EMV be scrapped, or is it salvageable? Richard thinks that EMV probably cannot be scrapped; it was not developed using best practices for standards, but it has become embedded, and incremental steps are more likely to succeed. Opening up governance of the development process would likely help.

    Mark suggests that the cost of dropping two legacy systems at the same time (EMV and magnetic stripe) would be too high, but institutional improvements in governance would help. He feels more strongly that pushing harder on PCI standards implementation is a mistake; instead, more fundamental changes are needed.

    A question from the audience asks whether the Chip & PIN vulnerability applies to ATMs. Ross responds that this vulnerability doesn’t, but there might be other flaws which do. Also, there is more of a connection between card payments and online fraud due to CAP, where your ATM card is used to authenticate online banking and perhaps also online purchases.

    Richard gives an example of a pachinko payment card in Japan which had an undetectable security flaw. This was eventually noticed due to the increasing liability in the system, and the operators had to shut down the entire system and absorb the losses.

    Ross suggests that while the known flaws in EMV can be fixed, this doesn’t help deal with the other ones lurking. A question asked whether an open standards process (along the lines of the AES competition) would be better.

    Richard comments that moving to PIN-based transactions would be good, but there is resistance from the banks because they make more money off interchange fees for signature-based transactions.

    Dan Geer asks to what extent can the merchant help? There are systems which allow a merchant to detect whether a customer machine is compromised, or even take over the machine to push out malware. Ross points out that South Korea has a similar system, but it comes with a high cost because of the requirement to use Microsoft Internet Explorer.

    Richard says that online retailers are now doing fraud detection on transactions, because their merchant bank account doesn’t offer them payment guarantees.

    Ross suggests that NFC might be the way forward, where your mobile phone becomes your payment card.

  6. The afternoon’s sessions were kicked off with “Self Hosting vs. Cloud Hosting: Accounting for the security impact of hosting in the cloud” by David Molnar (with Stuart Schechter). The move to cloud computing gives new attacks: an adversary can also rent units at the same centre and try to snarf your secrets. There can be jurisdictional issues, and as it’s hard to audit the infrastructure, the provider has an incentive to hide breaches and underprovision. Cloud economics favour some apps but not others: the paper discusses a number of cases. In questions, it’s said that big firms should self-host and medium ones use the cloud: should small firms use the cloud too? Quite possibly as cloud providers have a lot of skills; but that’s really for future work.

    The next talk was on “Modeling Cyber-Insurance: Towards A Unifying Framework” by Rainer Boehme (work with Galina Schwartz). It looks at the existing literature and provides a framework for integrating the results: so far we’ve had 12 talks on cyber-insurance at WEIS since 2002 (only 2003 didn’t have a paper). Cyber risk is different because of the very factors that made IT succeed: distribution, interconnection, universality and reuse. Add complexity and we have the picture: interdependence, risk propagation and imperfect information. However, so far the literature doesn’t consider truly strategic attackers, seeing attacks as a force of nature. Models are evaluated on market breadth (when will a cyber-insurance market exist?), network security (would adoption lead to fewer attacks?) and social welfare (would we all be better off?). The paper goes on to suggest ways in which insurance models might be made more realistic.

    Cormac Herley was next on “The Plight of the Targeted Attacker in a World of Scale”. Where are the missing attacks? We have 2 billion Internet users, most of whom care diddley-squat about security, and lots of sophisticated attacks – yet life goes on. Why? Well, the scaleable attacker Carl’s attack costs are sublinear, as his attacks can be automated; the targeted attacker Klara’s aren’t (e.g. spear phishing). Carl attacks everyone, as often as possible; Klara must be selective. Carl’s profit is driven to zero by price competition, and ditto the value of the assets he successfully attacks. Klara needs high-value targets: best is an observable long-tailed distribution of value. Most users have less than average value: only 1.8% of people have above-average wealth, and with fame only 2% are above average. So 98% of people are useless to Klara and she has no incentive to attack them. Alice is protected by the worthlessness of the average Facebook account! Wealth may remain unobservable (though this is being eroded by social networks); gullibility is not normally observable, but a scaleable attack such as 419 can make it so. In any case, the moral is that your optimal investment depends on whether anyone’s targeting you.
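The intuition that almost everyone sits below the mean of a long-tailed distribution is easy to check. The sketch below assumes a lognormal value distribution; the spread parameter sigma is an illustrative choice of mine, not a figure from the talk.

```python
from math import erf, sqrt

def frac_above_mean_lognormal(sigma: float) -> float:
    """Fraction of a lognormal population lying above the population mean.

    For X ~ LogNormal(mu, sigma), E[X] = exp(mu + sigma**2 / 2), so
    P(X > E[X]) = P(Z > sigma/2) = 1 - Phi(sigma/2), independent of mu,
    where Phi is the standard normal CDF.
    """
    phi = 0.5 * (1 + erf((sigma / 2) / sqrt(2)))
    return 1 - phi

# The heavier the tail, the fewer people exceed the average:
for sigma in (0.5, 1.0, 2.0, 4.0):
    print(sigma, round(frac_above_mean_lognormal(sigma), 3))
# At sigma = 4, only about 2.3% of the population is above the mean --
# the same ballpark as the 1.8%-above-average-wealth figure quoted above.
```

So a Klara who only attacks above-average targets ignores roughly 98% of the population, which is Herley’s point.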

    The final talk of the session was “On the Security Economics of Electricity Metering” by Shailendra Fuloria (work with me). Europe has mandated the replacement of gas and electricity meters with smart meters by 2022 – a 12-year, £300bn project. This has significant problems including data volumes (for both transmission and storage), privacy (with Dutch courts having ruled against smart meters), dubious effectiveness (as the behavioural aspects have been ignored in pilots so far) and conflicts of interest – between energy companies, governments and customers. There are five recommendations: that meter data should belong to the customer and be shared only by consent or when needed (for billing or technical operations); that we should have an open interoperability framework rather than a large central database; audits by a third party – the distribution network operator; active demand management should be done by energy companies not the government, with interruptible tariffs supported by means other than remote switch-off; and there should be an independent authority (which can’t be Ofgem if Ofgem continues to drive the smart grid project).

  7. The first talk in the final session was Sam Ransbotham on “An Empirical Analysis of Exploitation Attempts based on Vulnerabilities in Open Source Software” aka ‘the boy who accidentally kicked the hornet’s nest’. Are open-source vulnerabilities more likely to be exploited? Can we apply innovation-diffusion analysis to exploits? He’s not interested in the hornet’s nest around Linus’ law, but what happens after release. His answer is that source code does seem to make a bit of a difference. He looked at 883 vulns in 13101 software products in CERT/NVD, and found that open source does have a significant positive effect on the probability and volume of exploitation; exploits of open source also spread farther and faster. In questions it was admitted that the pre-release benefits of open source might well outweigh these later effects, which might be due to patching discipline or even the timely generation of IDS signatures.

    The second talk was on “The Mathematics of Obscurity: On the Trustworthiness of Open Source” by Hermann Härtig (with Claude-Joachim Hamann and Michael Roitzsch). If the defenders have to find all bugs before the attackers find one, complex systems need many more defenders. How many? They model it as a race: the defenders need to find any given error earlier than the attackers do (not all errors). There’s a closed-form solution which lets us play with the skill levels required of attackers and defenders, and how these might change from open to closed source. It turns out that on some realistic assumptions, the maximum winning probability the defenders can achieve regardless of budget is only about 80% (and for 60% you’d need 60,000 defenders). The attackers always have a window, as intuition suggested from the start. There was a question about whether their urn model should have used sampling without replacement; this would have been harder to calculate.
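The paper’s closed-form model is more subtle, but the basic bug-finding race can be sketched as a Monte Carlo urn model: each round, every defender and every attacker samples one of N bugs with replacement, and the attackers win the moment one of them draws a bug the defenders haven’t already found. All parameters below are illustrative assumptions of mine, not the paper’s.

```python
import random

def defenders_win_prob(n_bugs, n_defenders, n_attackers, trials=2000, seed=1):
    """Estimate P(defenders win the race) by simulation.

    Each round, every player samples one bug uniformly with replacement.
    Attackers win as soon as one of them draws a bug the defenders have
    not yet found; defenders win by finding all bugs first.
    """
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        found = set()
        while True:
            # Defenders sample first within the round (an arbitrary tie-break).
            for _ in range(n_defenders):
                found.add(rng.randrange(n_bugs))
            if len(found) == n_bugs:
                wins += 1   # all bugs found before any attacker success
                break
            if any(rng.randrange(n_bugs) not in found for _ in range(n_attackers)):
                break       # an attacker drew a bug the defenders missed
    return wins / trials

# More defenders help, but early in the race the attackers always have a window:
for d in (10, 100, 1000):
    print(d, defenders_win_prob(n_bugs=100, n_defenders=d, n_attackers=1))
```

In this toy version, throwing enough defenders at the problem eventually wins; the paper’s more careful timing model is what caps the defenders’ winning probability at around 80% regardless of budget.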

    The final talk of the workshop’s main sessions was “Structured Systems Economics for Security Management” by Adam Beautement (with David Pym). They want a framework that sits between policy and system modelling and that’s like HP’s security analytics. The idea is to separate declarative and operational concepts; have a hierarchy of roles characterised by dependencies, priorities and preferences; with security objects, components and actors instantiating the framework model and capable of being exported into a systems modelling language such as Gnosis.

  8. The rump session was announced as 15 5-minute papers.

    Brent Rowe, in “Measuring and shifting demand for ISP-based cybersecurity solutions”, asks how much home users might be prepared to pay for security. He is also concerned about how fear-based messaging might affect ISP marketing. A preliminary survey shows that 50% of people were prepared to pay $5 a month, and willingness was highly correlated with their stated trust in ISPs. 45% were prepared to pay an average of $2 a month to ensure they didn’t harm others, e.g. by joining a botnet. They will now do a full-scale survey (1600 people, ten ISPs). For more,

    Doron Becker, in “A Company’s Security Posture as an Element of Goodwill?” talked of accounting standard FASB 142 (accounting for goodwill).

    Mark Felegyhazi (with Rainer): “Information security investment with penetration testing”. If the defender faces uncertainty about vulnerability, she can adapt Rainer and Tyler’s iterated-weakest-link idea from WEIS 2009 and boost it with penetration testing. They solved the game and claim that the return on security investment is raised. The paper’s been submitted to (22-23 Nov 2010, Berlin). In questions, he didn’t know of any other work on the economics of penetration testing.

    Steven Murdoch talked on “Chip and PIN; 5 years on”, focussing on how the EMV flaw actually came to pass. Latest crime survey shows that 44% of victims didn’t get their money back. Security economics also drove the deployment of EMV: the rules were changed so the party causing fallback to the older system paid for fraud. So banks rushed out chip cards, dumping liability, so merchants rushed out terminals. To fix the bug you have to compare the CVR (which being controlled by the banks is usually right) and the CVMR (which being controlled by the merchant is often wrong). So the security economics that was used to deploy EMV undermined its security: if you use liability to deploy a system you may get only one go, as you lose the leverage to fix it later. The moral: “Be careful what you wish for!”

    Tyler Moore: “Policy recommendations for Cybersecurity”. What realistic regulatory options are there for infosec? You can try ex ante safety regulation or ex post liability; information disclosure (breach disclosure, like the toxic release inventory); or intermediary liability (used where malfeasors are beyond reach but a third party is in a good position to prevent harm – precedents being the Communications Decency Act, DMCA and UIGEA). He proposed obliging ISPs to act on malware reports, with reports on, and software companies and ISPs contributing to cleanup costs based on incidence.

    Kanta Matsuura: “Product Validation Systems and the Economics of Information Security”. The Japan Cryptographic Module Validation Program started in 2007. They’ve published their experience, albeit only in Japanese.

    Jonathan Anderson talked of a bike courier in Winnipeg who got a $50 reward after returning $20,000 left by an armoured car company on top of an ATM. Public outcry caused them to raise the reward to $500. So what is the optimal reward? He suggested a formula based on value, replacement effort, reward effort, the return probability, the liquidity of the object, risk, reward and the cost of return. An experiment in Edinburgh showed that you were more likely to get a wallet back if it contained a picture of an infant.

    Haruo Takasaki talked about “Study of user acceptance regarding secondary use of personal information for online services”. He works for KDDI, a telco with 17,000 staff, and did a government-funded study on social mechanisms to promote the use of personal data with privacy protection. Trust positively affects, and privacy concerns negatively affect, the uptake of services. Consent seemed to have no effect.

    Russell Thomas is now at George Mason, asking for participation in “operation red queen” – the modelling of security as an evolutionary arms race. He’s setting up a wiki at

    Joe Bonneau: “Passwords and Intimacy” (work with Soeren Preibusch). Why did only 10 websites out of 150 provide or accept OpenID? Bloggers have written about how the economics of OpenID may be wrong, but it could be deeper. Passwords form an intimate bond – partners often share passwords (Singh 2007 on Australian couples, Boyd 2009 on teens). Does the act of registering a password increase the amount of private information people are prepared to share? Does it make them more willing to buy? See

    Debin Liu: “Bring Incentives to Access Control” (with Jean Camp and XiaoFeng Wang). This proposes RIB, risk- and incentive-based access control, in which users get a budget of access points, as a means of controlling exceptional accesses in the window between the permitted and the forbidden. Points can be modulated by users’ risk-mitigating behaviour, designed as a contract game to give optimal effort, or by risk consequences. They evaluated it with three rounds of user studies.

    Steve Borbash from the NSA talked about evaluation. This is hard! Why doesn’t the government take a product, put it online, and offer a reward for breaking it? A product “secure at the $100,000 level” would be a concrete metric! Discussion mentioned Stuart Schechter’s thesis, vulnerability markets and the Cambridge paper on PEDs.

    Roger Dingledine discussed Tor metrics. It’s hard to collect stats while protecting anonymity. How do you publish 1600 IP addresses for Tor bridges and prevent China from learning all of them? The Chinese are currently winning that arms race, and we need new ideas. The stats show other interesting events, such as when Iran pressed the “no SSL” button in September 2009. In questions, Whit Diffie suggested Chip Delany’s story “Time Considered as a Helix of Semi-Precious Stones”.
