Workshop on the Economics of Information Security 2012

I’m liveblogging WEIS 2012, as I did in 2011, 2010 and 2009. The event is being held today and tomorrow at the Academy of Sciences in Berlin. We were welcomed by Nicolas Zimmer, Berlin’s permanent secretary for economics and research, who mentioned the “explosive cocktail” of Street View, and of using social media for credit ratings, in the context of very different national privacy cultures; the Swedes put tax returns online and Britain has CCTV everywhere, while neither is on the agenda in Germany. Yet Germany, like other countries, wants the benefits of public data – and its army has set up a cyber-warfare unit. In short, cyber security is giving rise to multiple policy conflicts, and security economics research might help policymakers navigate them.

The refereed paper sessions will be blogged in comments below this post.

11 thoughts on “Workshop on the Economics of Information Security 2012”

  1. Alessandro Acquisti kicked off the privacy economics session with a paper on empirical analysis of data breach litigation. Since 2005 there have been 2,725 breaches reported, or about one a day, and individuals are suing firms for harms caused by breaches. Previous legal papers focussed on specific cases, so Alessandro and colleagues collected 230 federal data breach lawsuits from 2005–10. They studied who gets sued and which lawsuits settle. Most suits are private class actions, though some are brought by the FTC and other agencies; defendants are banks or other large firms; and outcomes are mostly dismissal or settlement. Both lawsuits and breaches rose until about 2008 then went down; the proportion of breaches that lead to a lawsuit is falling slowly (now about 4%). Lawsuits are more likely to be started over financial data than personal or medical data, while remedies sought are likely to involve credit monitoring costs. Lawsuits that settle are correlated with medical data rather than financial, and with attack rather than accidental loss. Mean winnings were $1.2m for the attorneys and $2.5k for each plaintiff (of whom there may be thousands per case; attorneys usually get less than half of the total). Firms should pay attention to data handling and provide free credit monitoring; most will have liability insurance; financial and medical firms should take particular care.

    Next, Rahul Telang asked whether patient data are protected better in competitive markets. Competition is very significant in US healthcare, with the DoJ taking an interest when a hospital merger might create market power in a city. Hospitals also invest a lot in IT, and there are many data breaches. Hospitals have clean boundaries though, and some are monopolies, so they provide a nice experimental setting. Will competition help? Unclear – where resources are fixed, you might expect competing providers to spend more on the more visible quality attributes (architecture) and less on the less visible ones (security). In fact Mukamel (2002) had found that competition can increase mortality by shifting resources from clinical to hotel functions; tangible quality signals such as a medical school or a residency program improve outcomes. In this work, Rahul used the Herfindahl-Hirschman index (HHI) to define competition and looked at 202 data breaches. He found that a 100-unit increase in HHI (i.e. less competition) is associated with a 5% decline in breaches; in other words, competition makes security outcomes worse.
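    For concreteness, here’s a minimal sketch of how the HHI is computed; the hospital market shares are invented for illustration and are not from the paper.

```python
# Herfindahl-Hirschman index: the sum of squared market shares (in percent).
# Near 0 means perfect competition; 10,000 means a pure monopoly.
def hhi(shares_percent):
    return sum(s ** 2 for s in shares_percent)

# Two hypothetical four-hospital cities, shares summing to 100%:
print(hhi([30, 30, 20, 20]))  # 2600: fairly competitive
print(hhi([70, 20, 5, 5]))    # 5350: highly concentrated
# Rahul's finding: each 100-unit increase in HHI (less competition)
# is associated with about 5% fewer reported breaches.
```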

    Sören Preibusch’s topic was the privacy economics of voluntary over-disclosure in web forms. Typing data costs effort, so why do people disclose more than they need to? Is it down to the autocomplete features in browsers? They designed a 1500-participant mturk experiment where users only had to complete questions 5 and 6 to get paid but could complete 12 if they wanted (with some of the others being sensitive) and the payment was either 25c or 50c; some got a bonus for disclosing two more. Voluntary over-disclosure was about 70% (date of birth 57%, favourite colour 87%); a minority gave over-detailed responses (e.g. when asked when they last spent $100, answering not just “four days ago” but also “buying groceries”). However, mandating the disclosure of weather and favourite colour caused a significant drop in disclosure. Reciprocal personality had no effect but “being a good person” did: it was correlated with overcompletion. Reported motivations: for the money, looked easy, joy, help research, interesting, shape public opinion, habit: “like it more when the form is filled in completely”. Takeaway: voluntary over-disclosure is highly prevalent.

  2. The second session was a panel chaired by Nicola Jentzsch on “When will Privacy Policy Learn from Privacy Economics?”

    Alessandro Acquisti (CMU) doubted that ten years of social media could have changed our need for a private sphere and a social one. But it’s an open question where we’ll be in ten years’ time. Jonathan Cave (of RAND) agreed; a big problem is that we have no good set of brakes, and we get disempowered when the world pushes us too fast to think. Sarah Spiekermann (Vienna) remarked that 99% of our personal information isn’t really private as we tell it to people all the time, so it’s bogus to accuse people of not being privacy-sensitive when they communicate it online. The backlash we see is that people don’t like some of their information being commercialised, though they may sometimes use “privacy” as a term in this tussle. Alexander Dix (Berlin data protection commissioner) agreed with all this and remarked on Mark Zuckerberg’s expressed contempt for his users’ stupidity; at least these more visible tussles have helped to focus the discussion of privacy on the Internet.

    Bertin Martens (economist advising the European Commission) also agreed; privacy isn’t dead but we mustn’t exaggerate its importance, or underestimate its difficulty. There are three schools of thought: regulation, nudges and property rights, and they can work in the lab or in theory, but privacy in the wild is rather different. It would be nice to have data on (for example) how many people changed their privacy settings after Google changed its policy – but we don’t. In the absence of real-world data it’s hard to give policymakers robust advice. Alexander Dix mentioned the interaction with antitrust law, and the dissenting opinion of Pamela Jones Harbour at the FTC about the Google takeover of Doubleclick; the economics of privacy suggests that antitrust law should be more closely linked to privacy on both sides of the Atlantic. Jonathan Cave agreed that competing firms should protect consumers.

    Another gap is between thinking of privacy as a human right and as a property right. Alexander suggested that both might hold, with a baseline covered by rights law and less important matters handled as property. Alessandro retorted that we now have fifteen years’ data showing that contractual mechanisms such as privacy policies just don’t work: few people read them, fewer understand them, and in general the notice-and-consent approach is broken. Jonathan doesn’t agree that antitrust law is enough, because of indirect monetisation and unconscionable markets. Sarah asked why enforcement could not be technically enabled – firms that bought rights to data could show that they were observing the terms of their contracts.

    Audience questions followed: Stefan Neuhaus noted that certifications suffer from perverse incentives; Alan Friedman asked about do-not-track and about multi-stakeholder processes; and someone questioned the relevance of seals in monopolistic markets. Sarah said we are now standardising privacy impact assessments, and this may in the long run lead to privacy certification standards; Alessandro recalled Ben Edelman’s WEIS 2006 paper on certification failure. Jean-Pierre Hubaux argued that a seal might help consumers, and wondered whether technical enforcement mechanisms in companies might become visible. Jonathan noted that certifications can set floors or ceilings, and it’s not always clear which outcome you’ll get. As for do-not-track, Alexander agreed that there was confusion in Europe after the Commission backed off from its original intention to extend it to marketing as a whole; it’s better than nothing but doesn’t do what the European data protection community wants (opt-in). Jonathan remarked that social conventions can easily mask signalling of attributes like privacy, and the risk-dominant one will tend to prevail as it can be more robust. Bertin remarked on the fallibility of the agencies that certified banks in Ireland, Spain and elsewhere. Eric Davis asked what personal information is; Jonathan answered that it’s an evolving and intersubjective thing, and has been changing – Google dredged up things he said as a young man active in politics that he desperately wanted publicised at the time but which he now finds embarrassing. Alessandro denied that there was a useful static definition, as it’s too context-dependent. Sarah remarked that we can’t ignore non-private personal data, as that’s where most of the economics happens.

  3. The second refereed paper session was started by Juhee Kwon, who looks at healthcare systems and is interested in whether resources and capabilities are complements or substitutes, whether they affect performance and compliance, and how compliance and performance are related. Both breaches and noncompliance are risks, and more elastic to reputation than price. She found strong correlations between functional capabilities (prevention and audit) and compliance, but not with security: the better the audit, the worse the security! Similarly, when she looked at cultural aspects she found that collaboration was correlated positively with compliance but not with security. She concluded that compliance was largely driven by internal factors, but security depends on external factors too.

    Stefan Neuhaus was next, talking on software security economics and criticising common assumptions, such as that the return on security investment is a convex increasing differentiable function, as having little relation to the real world. He looked at 292 Mozilla vulnerabilities, 66 in Apache httpd and 21 in Apache Tomcat, using a one-year moving average; with each project the fix rate peaked after a few years; then the first fell off substantially, the second fell with a sawtooth, and the third didn’t fall. In the case of the first one, new code commits more or less ceased at the drop-off, when they switched from Subversion to Mercurial. A reservoir model is better: you have a reservoir of bugs, and different processes that add and remove them. In that case the relevant quantities are heavy-tailed, of the form cx^{1-a}L(x), where a and c are constants and L is a slowly-varying function. If the number of vulnerabilities fixed is about proportional to the number present, then checkins per day should also be heavy-tailed, and they do indeed turn out to be. If on the other hand it were an arms race (the red queen model) then the distribution of intervals between successive days with fixes would obey a power law, and we don’t see that. Conclusion: the reservoir model looks better than the standard one, but we don’t fully understand all this yet.
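    Here is a toy simulation of the reservoir idea, with invented parameters (my sketch of the argument, not the paper’s code): bugs flow in while code is being committed and are fixed at a rate proportional to the number present, so the fix rate decays once commits stop.

```python
import random

def simulate(days=3000, commits_until=1500, mean_inflow=0.5, fix_rate=0.01):
    random.seed(1)
    reservoir, daily_fixes = 0.0, []
    for day in range(days):
        if day < commits_until:
            reservoir += random.expovariate(1 / mean_inflow)  # new bugs arrive with commits
        fixed = fix_rate * reservoir                          # draining proportional to stock
        reservoir -= fixed
        daily_fixes.append(fixed)
    return daily_fixes

fixes = simulate()
print(f"peak fix rate {max(fixes):.3f}/day, final {fixes[-1]:.6f}/day")
# The fix rate climbs while commits add bugs, then decays exponentially
# after commits cease, much like the Mozilla drop-off described above.
```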

    Finally, Bongkot Jenjarrussakul talked on the sectoral and regional interdependency of Japanese firms. The Great East Japan Earthquake taught that supply-chain dependencies matter for resilience, and led them to ask whether there might be an information security equivalent leading to the loss of a whole column in an input-output table. She discussed the applicable methodology for examining inter-sectoral dependencies. She used data on industrial productivity, the IT industry and input-output tables. Sectoral interdependency is often contextual, in that it depends on the number of other critical sectors with which its links are analysed. She also concluded that the scale of a region affects interdependency.

  4. In the final Monday session, Cormac Herley asked why Nigerian spammers claim to be from Nigeria. Suppose we have a victim density d in a population of N, giving a total opportunity of dGN where G is the gain per victim. If the attacker’s true and false positive rates are t and f, and each attack costs C, the expected return is tdGN – f(1-d)CN, maximised at the operating point where the ROC curve has slope C(1-d)/dG. The number of victims found falls much faster than density; with 2,000 victims in a population of 200 million, only 2 victims can be profitably exploited. The other factors in play beside d are profitability G/C and the ROC curve (but it’s hard to improve the ROC if victims are so rare you can’t train – the catch-22 of low-density search). This may explain Nigerian scams: repelling false positives is more important than finding true ones, and the initial email costs almost nothing. So the function of the word “Nigeria” is to find people who’ve not been sensitised to the scam.
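    A back-of-envelope sketch of that optimisation: the toy ROC curve t = √f and all the numbers below are my assumptions, purely to show how the profitable operating point collapses as density falls.

```python
def profit(d, f, G=1000.0, C=10.0):
    """Attacker profit per head: d*t*G from true positives at rate t,
    minus (1-d)*f*C wasted on false positives (toy ROC: t = sqrt(f))."""
    t = f ** 0.5
    return d * t * G - (1 - d) * f * C

fs = [10 ** (-k / 10) for k in range(1, 81)]   # log grid: f from ~0.8 down to 1e-8
for d in (1e-2, 1e-5, 1e-8):                   # victim density falling
    best_f = max(fs, key=lambda f: profit(d, f))
    print(f"d={d:g}: best f={best_f:.2e}, profit/head={profit(d, best_f):.2e}")
# As d falls the attacker must push f toward 0, and eventually no operating
# point is profitable at all: repelling false positives dominates.
```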

    Vaibhav Garg is interested in why individuals in some countries appear more likely to participate voluntarily in ecrime, why PCs in some countries are more likely to be recruited into botnets, and how legitimate and criminal businesses affect each other. He’d previously produced a “smuggling” model and noted that as well as increasing the crooks’ costs, you can also cut legitimate firms’ costs. As for labour markets, 44% of the jobs on freelancer.com are dodgy (according to Stefan Savage) so he compared it with mturk. He studied possible explanatory variables for their relative takeup, from GDP to urbanisation to broadband. High-quality broadband predicts more mturk use and less freelancer, as does English language competence. A strong legal framework reduces the number of jobs accepted on freelancer, but not the number of jobs offered. He concludes that we have to start thinking of the macroeconomic aspects of cybercrime, and the potential policy implications.

    In the last talk of the day, Dominic Breuker asked whether we can afford integrity by proof-of-work. The bitcoin system doesn’t have a central authority but instead participants check cryptographic computations in a distributed manner. Dominic’s interested in the global cost and environmental impact. He analysed the current processing costs of global transactions versus what they might cost using bitcoin, and the largest plausible attackers such as a 30m machine criminal botnet or even an 85m machine protestor botnet. He concludes that you might be able to reduce the banks’ current energy costs but not by much.
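    For readers unfamiliar with proof-of-work, here is a minimal hash-based sketch of the kind of computation whose aggregate cost is at issue. Bitcoin’s real scheme double-SHA-256es block headers; this simplified stand-in just shows why the work, and hence the energy, scales with difficulty.

```python
import hashlib

def mine(data: bytes, difficulty_bits: int = 20):
    """Find a nonce so SHA-256(data || nonce) has `difficulty_bits` leading
    zero bits; expected work is 2**difficulty_bits hash evaluations."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce, digest.hex()
        nonce += 1

nonce, h = mine(b"batch of transactions")  # ~a million hashes on average
print(nonce, h)
```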

  5. The second day’s sessions started with Tyler Moore talking on Measuring the Cost of Cybercrime (declaration: I’m a coauthor). The UK Cabinet Office estimated £27bn annual cybercrime losses in a report by Detica that was published last year and widely criticised (it would be 2% of GDP), but the figure is still propagating as a meme. The authors were asked by the UK MoD to come up with better numbers. The paper distinguishes between direct and indirect costs, and also between primary cybercrime and shared criminal infrastructure. A further split is between ‘genuine’ cybercrime like fake AV and stranded-traveler scams; traditional crime that’s now done online, such as tax filing fraud; and ‘transitional’ offences such as card fraud, which existed before but whose modus operandi has changed radically. As examples, Tyler discussed the figures for payment fraud in detail, and what the Eurostat statistics and industry surveys might teach about the indirect costs of loss of consumer confidence in online shopping. It’s surprising that so much more is spent on antivirus than on law enforcement; user clean-up costs are also very high. To sum up: traditional frauds cost each citizen a few hundred bucks a year, transitional frauds a few tens, and pure cybercrimes a few tens more. Indirect costs are less than direct costs for traditional frauds, about equal for transitional crimes, and much greater for the pure cybercrimes. We should put more effort into catching cybercriminals and locking them up.

    Second was Thomas Nowey with A Closer Look at Information Security Costs. The heavily theoretical models proposed by economists are hard for businesses to use; in the real world it’s tough enough to tot up the costs of purchase, training and operation of, say, a firewall and balance these against benefits such as VPN management and traffic shaping, not all of which are ‘security’; some are compliance, and others are about user satisfaction. There is interesting related work though, on the return on security investment, on the cost of quality, on cybercrime costs and on infosec spending surveys. Methodologically, you can try a balance-sheet approach to infosec costs, or a lifecycle approach, or a process approach, or a security-management approach – which he favours.

    The third speaker was Yuliy Baryshnikov, whose topic was the Gordon-Loeb model and the 1/e mystery. The Gordon-Loeb model assumes you invest z to reduce an expected loss L (the value at risk) to LS(z), so you want to minimise LS(z)+z. Gordon and Loeb claim z* never exceeds L/e for typical convex twice-differentiable functions; Yuliy exhibited a counterexample: an S(z) which is a slightly smoothed version of 1-z, giving z* arbitrarily close to 1. He presented a more rigorous approach: if S(z) is log-convex then the 1/e result follows from the Lyapunov convexity theorem.
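    A quick numeric sketch of the optimisation, using one of Gordon and Loeb’s standard breach-probability functions with assumed parameters:

```python
import math

L, v, a, b = 100.0, 0.5, 1.0, 1.5      # value at risk, vulnerability, shape (assumed)

def total_cost(z):
    S = v / (a * z + 1) ** b           # a Gordon-Loeb class-I breach function
    return L * S + z                   # residual expected loss plus investment

z_star = min((i / 1000 for i in range(200001)), key=total_cost)  # grid-search z in [0, 200]
print(f"z* = {z_star:.3f}, bound v*L/e = {v * L / math.e:.3f}")
# For this class the optimum respects the 1/e bound; Yuliy's counterexample
# uses an S(z) close to 1-z, outside this class, where z* can approach the loss.
```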

  6. Toshihiko Takemura’s paper is on Who Sometimes Violates the Rule of the Organizations? An Empirical Study on Information Security Behaviors and Awareness; his focus is on improving compliance within organisations. A survey showed compliance with many infosec policies was low; he asked respondents whether their motivation was cash, pleasure, moral behaviour, peer acceptance or self-realisation, and the last two scored highest. Myopic behaviour is common, and even staff who experience incidents violate the rules, especially where they are personally comfortable with the risks; but they do it less if they value their peers’ opinion of them more.

    Lukas Demetz’s topic is To invest or not to invest; he has a tool to help tackle the complexity of policy and security configuration management. He compares a number of security investment models, and it turns out most of them don’t deal with running costs, network effects or non-financial effects; some don’t even consider attacks. The most comprehensive are those of Tallau and Bodin. He may recommend his tool to auditors.

    Tim Kelley was third, with a talk on Online Promiscuity: he’s interested in new epidemiological models of malware. The standard SIR/SIS models don’t capture recurring infections; there are new behaviour-disease models by Perra and others capturing heterogeneous risk behaviour in populations, with some individuals switching according to perceived risk. Small groups of risk-takers pose a threat to the risk-averse population. A critical factor is the cost of maintaining risk-averse behaviour, which in a normal influenza epidemic might need to last only a couple of weeks. In the real world, things are more complex, as infections nowadays tend not to spread directly between client machines, but via drive-by downloads and other more complex processes.
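    A toy two-group SIS-style sketch of that idea (all rates and group sizes invented): a small risk-taking minority keeps infection endemic, and spills it over into the risk-averse majority.

```python
# Fractions of the whole population: 10% risk-takers, 90% risk-averse.
def step(Ir, Is, beta=3.0, gamma=0.2, mix=0.2, nr=0.1, ns=0.9, dt=0.1):
    force = beta * (Ir + mix * Is)               # infection pressure
    dIr = force * (nr - Ir) - gamma * Ir         # risk-takers: full exposure
    dIs = mix * force * (ns - Is) - gamma * Is   # risk-averse: reduced exposure
    return Ir + dt * dIr, Is + dt * dIs

Ir, Is = 0.001, 0.0                              # seed infection among risk-takers
for _ in range(10000):
    Ir, Is = step(Ir, Is)
print(f"endemic levels: risky {Ir:.3f}, risk-averse {Is:.3f}")
# With mix=0 the risk-averse stay clean; with even modest mixing the risky
# minority sustains infection in the rest of the population.
```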

    Julian Williams had a different take in Contagion in Cybersecurity Attacks. He has an agent-based model of attacker response to defences, and of the mutual excitation of vectors of attacks. It’s based on the Aït-Sahalia model of mutual excitation, where a matrix links variables in a stochastic process and can cope with jumps in variables’ values, on the assumption that their arrival can be modelled as a Poisson process. They fitted this to multi-channel DShield attack data and found that while long-run correlations are low, the short-term correlations are high; attacks are bunched.
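    To make the excitation idea concrete, here is a sketch of a single self-exciting (Hawkes) point process, simulated with Ogata’s thinning method; the paper couples several such processes with jumps, and my parameters are invented.

```python
import math, random

def simulate_hawkes(mu=0.5, alpha=0.8, beta=1.2, horizon=100.0, seed=7):
    """Each event raises the intensity by alpha, decaying at rate beta;
    stable since alpha/beta < 1. Returns event times on [0, horizon]."""
    random.seed(seed)
    events, t = [], 0.0
    while True:
        lam_bar = mu + sum(alpha * math.exp(-beta * (t - s)) for s in events)
        t += random.expovariate(lam_bar)        # candidate next event time
        if t >= horizon:
            return events
        lam_t = mu + sum(alpha * math.exp(-beta * (t - s)) for s in events)
        if random.random() <= lam_t / lam_bar:  # thin: accept with prob lam_t/lam_bar
            events.append(t)

ev = simulate_hawkes()
gaps = [b - a for a, b in zip(ev, ev[1:])]
print(len(ev), f"median gap {sorted(gaps)[len(gaps)//2]:.2f}, max {max(gaps):.2f}")
# Many short gaps with a few long ones: attacks come bunched, as observed.
```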

  7. The last refereed paper session started with Min Chen talking about The Effect of Fraud Investigation Cost on Pay-Per-Click Advertising. Click fraud can be competitive, where firms click on rivals’ ads to drain their budgets, or inflationary, where people click on ads on their own pages to make money. Click Forensics estimated fraud at 14.5% in Q3 2009, rising to 22.3% in Q3 2010. Service providers and advertisers can each choose good or bad fraud detection technology, unobservably to the other, leading to a double moral hazard problem. Could a third-party auditor help? A model suggests that it depends on the extra cost to the service provider of high-quality detection – if it’s low then both parties might have an incentive to pay for audit and then to improve their technology.

    Next, Nevena Vratonjic tackled Ad-blocking Games. Online advertising is worth over $30bn but it’s sufficiently annoying that users increasingly use ad blockers; how can we model this? In 2010, Ars Technica stopped blockers accessing their website, with mixed results (from whitelisting to outrage); other sites offer subscriptions for ad-free pages; others tie functionality to download (such as ads in Youtube) and yet others make ads hard to detect. Nevena considers sequential games between the website and the user; the equilibria generally depend on how much the users value the content, and to a lesser extent the cost of blocking detection and the extent to which users hate ads. This framework can help publishers maximise revenue, but it means profiling users more closely, which may be bad for privacy.
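    Here’s a toy backward-induction sketch of such a sequential game; the payoff numbers are invented, purely to illustrate how the equilibrium turns on how much the user values the content.

```python
user_value = 5.0     # user's value for the content (the pivotal parameter)
ad_annoyance = 2.0   # user's disutility from seeing ads
block_cost = 0.5     # user's cost of running an ad blocker
ad_revenue = 1.0     # site's revenue per ad-viewing user

def user_best_response(site_strategy):
    if site_strategy == "deny_blockers":   # blocked users get no content
        options = {"view_ads": user_value - ad_annoyance, "leave": 0.0}
    else:                                  # "serve_all": blocking is tolerated
        options = {"view_ads": user_value - ad_annoyance,
                   "block": user_value - block_cost}
    return max(options, key=options.get)

def site_payoff(strategy):
    return ad_revenue if user_best_response(strategy) == "view_ads" else 0.0

best = max(["serve_all", "deny_blockers"], key=site_payoff)
print(best, "->", user_best_response(best))
# With these numbers the user blocks when allowed, so the site's equilibrium
# is to deny blockers; drop user_value below ad_annoyance and users leave instead.
```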

    Serge Egelman’s talk was Choice Architecture and Smartphone Privacy: There’s a Price for That. 73% of smartphone malware uses SMS while only 3% of genuine apps do; but it’s a bother to deal with install-time permissions and fewer than 20% do so. Serge surveyed 483 US residents to see if they’d pay more for a newsreader that didn’t ask for location or record audio: 212 cost-sensitive subjects chose the cheapest and 120 chose the most private. In a second experiment, he ran an auction to see how much 353 Android users would need to give an app access to GPS, contacts or photos; the only significant effect was on contacts.

    In the last refereed talk, Miguel Malheiros asked Would You Sell Your Mother’s Data? They hypothesised that the amount of data people will disclose should depend on the reason for the request as well as the sensitivity. They asked 285 people to rate their comfort with disclosing, while applying for a loan, 53 items that might be useful for credit rating. People were least comfortable with mobile phone contact data and online friends’ profiles. They also asked 48 young people whether they’d be comfortable with a “super credit card” application process asking for similar data, with and without explanations; they found that disclosure was inversely correlated with sensitivity but that explanations made no difference. Finally, when subjects were segmented by Westin’s criteria, privacy fundamentalists were six times less likely to submit the form. They conclude that TV licence and council tax payment history might be usable for applicants with ‘thin’ credit histories, but indices of social capital may not be.

  8. After a debate between me and Bruce Schneier on whether we spend too little on cybersecurity or too much (I argued we spend too little on policing, he argued that much expenditure is based on scaremongering) we came to the rump session.

    Alessandro Acquisti reported an experiment which cannot be blogged until the fieldwork is completed.

    Iulia Ion of Zuerich is investigating irrational users. She asked people whether they would keep sensitive data on a PC or Google’s servers; 80% said on their PC so they can physically guard it. Also, people hated a tiresome pairing method between phone and PC – except for payment when they thought it the best way to connect a phone to a payment terminal. Decisions are also based on whether the subject wants to appear security-aware to whoever they’re interacting with.

    Timothy Kelley of Indiana is investigating whether experts and novices see security indicators differently (following Whalen & Inkpen 2005), and reported struggling to use “Expert Eyes” eye tracking with laptops.

    Zinaida Benenson of Erlangen has been exploring smartphone users’ mental models of smartphone security; smartphone users are much more interested in protection than regular phone users, but feel less safe using their phones. They tend to think they’re secure so long as they don’t use the Internet, and believe they can tell good apps (and links) from bad.

    Jean-Pierre Hubaux of Lausanne believes some people still care about location privacy. He has an improved location privacy model based on the error probability of allocating traces to users which enables different location privacy mechanisms to be compared.
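    A sketch of the error-probability idea, entirely illustrative: an adversary tries to match an obfuscated location sample back to a user, and privacy is measured by how often the best guess is wrong.

```python
import random

def adversary_error_rate(n_users=20, noise=1.0, trials=500, seed=3):
    random.seed(seed)
    errors = 0
    for _ in range(trials):
        homes = [random.uniform(0, 100) for _ in range(n_users)]  # 1-D "home locations"
        target = random.randrange(n_users)
        observed = homes[target] + random.gauss(0, noise)  # obfuscated trace sample
        guess = min(range(n_users), key=lambda u: abs(homes[u] - observed))
        errors += (guess != target)
    return errors / trials

for noise in (0.5, 5.0, 20.0):
    print(f"noise {noise}: adversary error rate {adversary_error_rate(noise=noise):.2f}")
# More obfuscation -> higher assignment error -> more location privacy,
# giving a common scale on which different mechanisms can be compared.
```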

    Luca Allodi from Trento works on incorporating black-market valuations into security models, for example of the number of vulnerabilities in a system. Writing exploits is hard and gets outsourced; he’s building a database of exploits, and now has about 100 unique ones.

    Pern Hui Chia is interested in game-theoretic modelling of attacks on interdependent systems; see his paper Colonel Blotto in the Phishing War.

    Jens Grossklags advertised GameSec 2012, to be held in Budapest in November.

    Sarah Spiekermann argued that people want to be immortal, to make history, to leave traces – not to delete everything about them. She did an experiment in which subjects were told Facebook was about to close, but they could bid to preserve their data. The willingness to pay was near zero (previously people expected somewhere between $10 and $500). It increased if the alternative was not deletion but sale. Her hypothesis is that this is about asset consciousness. She also mentioned the WEF’s report on personal data as a new asset class.

    Kanta Matsuura suggested the community work on developing common research datasets, and perhaps a competition based on WEIS. The SIG-CSEC of IPSJ has such datasets, and a malware workshop called MWS.

    Tyler Moore announced the APWG’s eCrime 12 (October 23–24, Puerto Rico) and Financial Crypto 2013 (April 1–4, Okinawa). He then discussed how most enforcement is done by private actors, who face multiple constraints (from not being able to compel cooperation to having day jobs), and wondered how best to motivate them. He did an experiment on badwarebusters.org’s notices, dividing them into control, minimal and detailed. The detailed notices were significantly more effective; in fact minimal notices were not significantly different from the control of no notice at all.

    Chris Kanich has been doing work with Stefan Savage on fake pharma, which has now got to the point of mass spectrometry of acquired samples: every pill he’s bought contained the right active ingredient in the right amount. It’s also turned out that some herbal sex pills sold from the USA contain real Viagra. More in their Usenix Security paper (“PharmaLeaks”), to be presented on August 8th, which describes three years’ worth of fake-pharma order data they obtained.

    Jeremy Epstein from the NSF described its Secure and Trustworthy Cyberspace (SATC) solicitation which will come out this fall and which covers security economics.

    Allan Friedman talked on “Outrunning the cyber panda bear”. Is economic espionage a big deal? It’s framed as a massive harm in Washington. Could we tell if we’re in a long-term intelligence war, and how?

  9. Hello. My creds: MBA in Economics and Finance, and several decades of work in the practical aspects of same vs. academic.

    @ Breaches: Stricter liability laws are needed for those who request or demand your sensitive private data, including punitive damages where the negligence is gross. Individuals need to be able to sue in Small Claims Court, where they do not need to hire an attorney. Such courts will often handle cases of up to $5k, $10k, or $15k, depending on jurisdiction. The summary notes that each harmed plaintiff received an average of $2,500. So they might as well file in Small Claims. Take out the lawyer’s cut, ask for $10k (or whatever the maximum is), and note the deterrent effect of thousands of individual suits in thousands of counties and states. A strong deterrent to the data-storer, on legal costs alone.

    If one claims that this would deter firms from reporting breaches, the answer is that such a firm has no more right to withhold the fact that your cc # was stolen than a bank has to withhold from you that the bank was robbed and the contents of your safety deposit box were stolen. Hiding this should be a *criminal* offense, as well as a tort (civil offense) for which one can sue. N.B.: Not saying that suffering the breach should be criminal (unless the negligence meets the legal standard of criminal negligence), only that hiding the breach should be.

    @ Overdisclosure: People are lonely and want others to listen to them. See the famous “Hawthorne Effect” — merely believing that someone cared about their preferences caused workers to increase output, independently of the workplace variables being studied. Rather poor discussion here: en.wikipedia dot org/wiki/Hawthorne_effect, but there are better ones.

    @ Privacy: “Sarah Spiekermann (Vienna) remarked that 99% of our personal information isn’t really private as we tell it to people all the time, so it’s bogus to accuse people of not being privacy-sensitive when they communicate it online.”

    What a non-sequitur! If I tell my best friend a secret, s/he *may* tell some others, but how is that similar to posting it where people halfway around the globe can see it, and commercial interests can exploit it? Even better example: My close friends know my birth date. Posting it online is one more valuable datum for an identity thief. Education + Gov laws, though commercial interests corrupt Gov. So, user education — which probably won’t work very well. If someone requires your birth date, then reveals it, sue.

    @ Software security economics: For commercial, again use strict liability, though user errors must be taken into account. If the flaw can be exploited against an average user using standard best practices (don’t click on spam links, etc.), then the vendor is liable. For OSS, this may render some as not viable, but then, more-secure models would enter the marketplace, either by the same vendor or by competitors. Comparing Google Chrome or Firefox+NoScript to IE, lawsuits should drive IE out of existence in no time. Or MS would put more effort into securing it.

    @ Why individuals in some countries appear more likely to participate voluntarily in ecrime:
    Because some countries don’t seem to care very much, or look the other way because of the incremental wealth it adds to the economies, or because the Gov itself is corrupt, or politically, would like to see its adversaries weakened and/or subverted.

    @ Ad-blockers: “Nevena considers sequential games between the website and the user; the equilibria generally depend on how much the users value the content, and to a lesser extent the cost of blocking detection and the extent to which users hate ads. This framework can help publishers maximise revenue, but it means profiling users more closely, which may be bad for privacy.”

    Or the advertisers could make the ads less hate-inspiring. They have brought this on themselves with dancing frogs, cellulite-ridden buttocks, etc.

    @ “Would You Sell Your Mother’s Data?” — not just the reason or sensitivity, but the giver’s belief in how private the data so given would remain, and how protected. Also, I’ve been in the mortgage business, and the US FNMA Form 1003, standard mortgage loan application, does not ask about online friends or other irrelevancies. It may well be illegal for FNMA-connected or Federally-chartered lenders to ask such things. Everything on that form is *very* relevant to the lender’s decision.

    @ “Iulia Ion of Zuerich is investigating irrational users. She asked people whether they would keep sensitive data on a PC or Google’s servers; 80% said on their PC so they can physically guard it.”

    How is that irrational? If it’s on my PC, at least I have *some* hope of guarding it. If it’s on the servers of the most notoriously privacy-invading private company on the planet, how do I have any hope, once I let it leave my physical possession? The cloud adds almost limitless incremental risks to those already existing on a computer under my physical control.

    @ “Jean-Pierre Hubaux of Lausanne believes some people still care about location privacy.”
    I do, so under the “Black Swan” reasoning, he is correct.

    @ “Sarah Spiekermann argued that people want to be immortal, to make history, to leave traces – not to delete everything about them. She did an experiment in which subjects were told Facebook was about to close, but they could bid to preserve their data.”

    Why can’t I just download it and preserve it myself, perhaps burning to CD or DVD for posterity, rather than to pay? Or keep it that way in the first place, adding to rewritable media such as flash drives as needed?

    @ “Chris Kanich has been doing work with Stefan Savage on fake pharma, which has now got to the point of mass spectrometry of acquired samples: every pill he’s bought contained the right active ingredient in the right amount.”

    That’s a much better record than “ethical” drug-makers in the US, where the FDA permits a generic substitute to have anywhere from 80% to 125% of the amount of active ingredient of the original, approved, patented drug. Maybe I should buy my generics from these “fake” pharmas.

  10. Good to see you again, tommy. I remember enjoying talking with you on another blog about security and economics. Quite a big comment you dropped here. I figure I should address some of the points.

    re breaches. Many of us figure liability is the best way to get change going. Ranum is an exception (see Ranum vs Schneier on software liability). He has a decent point, but I can’t see a good alternative. Leading back to using courts. I like your suggestion of Small Claims Court. I think of it as death by a thousand cuts for companies making a breach easier. I think there’s value in SCC for ACH fraud & banking industry issues, too.

    re sarah spiekermann & privacy. Definitely a bogus argument. From childhood on, our culture teaches us to differentiate between information we share and information we keep to ourselves. There are also contextual rules about how information is shared: an SSN goes to employers, but not to retailers, for instance. Schneier also noted long ago that acquiring information can give you power over someone, like a cop being able to run your information. She’s essentially equated limited & limitless sharing of information.

    re ” For commercial, again use strict liability, though user errors must be taken into account. If the flaw can be exploited against an average user using standard best practices (don’t click on spam links, etc.), then the vendor is liable.”

    A side effect of this is essentially Ranum’s argument referenced above. He thinks only the lawyers will win out b/c firms will be spending more efforts on legal offense and defense than making good products. We see this with patent lawsuits right now. It’s easier to take out the competition in court than in the market. Liability laws would have to be carefully designed to prevent this outcome. A nice side effect of a liability heavy marketplace is that companies might be more likely to write from scratch than keep legacy forever. Getting rid of all that legacy baggage in OS & Internet foundations might be worth the legal costs. (How else would it happen? 😉

    re “Or the advertisers could make the ads less hate-inspiring. They have brought this on themselves with dancing frogs, cellulite-ridden buttocks, etc. ”

    Seems partly true. However, many people just want free stuff. It’s a benefit of the Internet. A library without making a trip, movies without buying a DVD, shows without long commercial periods, etc. People have an economic incentive to get the most out of what they do for themselves. Even with nearly perfect ads, many people would block them or skip them. (I’m a notorious ad fighter myself.)

    re “Everything on that form is *very* relevant to the lender’s decision. ”

    Most important thing you said, imho. Many companies are collecting information “just in case it is useful later.” Plenty of risk accumulates. Our situation might be better if they are forced to switch most things to opt-in & only collect what’s truly needed for business.

    re @ “Jean-Pierre Hubaux of Lausanne believes some people still care about location privacy.”

    I figure both the “don’t trust the government” and “old fashioned” crowds prefer location privacy. That’s a large number of people. I also recall a significant amount of protest to mandating trackable cell phones. (Although they’re inherently trackable lol)

    re “irrational users”

    Tommy, I think the thing that made them irrational was the “physically” guard it part. Many people haven’t wrapped their minds around the differences between the physical world and the information world. If it’s on their machine & nobody can take their PC, then their information seems “safe.” In reality, their data is usually stolen remotely via software.

    That said, I do like how you described the trusting the cloud situation: “The cloud adds almost limitless incremental risks to those already existing on a computer under my physical control.”

    The people Iulia investigated might feel that way for irrational reasons, yet your statement shows why rational people are justified in feeling the same way: mainly, the rational people with the competence to guard their data.

    re pharma. “where the FDA permits a generic substitute to have anywhere from 80% to 125% of the amount of active ingredient of the original, approved, patented drug.”

    Didn’t know that, although I did see that many vary. I think this is another example of govt corruption making a situation ok in one context & criminal in another. Another example of that would be cocaine being illegal & frowned upon medically, but many ADD/ADHD “treatments” having almost identical effects.
