WEIS 2019 – Liveblog

I’ll be trying to liveblog the eighteenth workshop on the economics of information security at Harvard. I’m not in Cambridge, Massachusetts, but in Cambridge, England, because of a visa held in ‘administrative processing’ (a fate that has befallen several other cryptographers). My postdoc Ben Collier is attending as my proxy (inspired by this and this).

  1. Peter Swire is a law professor from Georgia Tech who started writing about the Internet in 1993 and started teaching law and cybersecurity in 2004. He’s done public-service work including the NSA Review Group in 2013 and is currently working on cross-border issues. He wrote a Pedagogic Cybersecurity Framework to get through to people with policy and management responsibility. People in our field tend to think about layers 1-7 of the stack, but that’s not enough. You have to imagine extra layers: layer 8 for corporate management, layer 9 for government and layer 10 for international. What literature do we have for considering these layers?

    Layer 8 has a lower layer of training and hygiene, and higher layers of dealing with other organisations and standards. Relevant literatures include Jensen and Meckling on the theory of the firm, the Chicago school, and Oliver Williamson’s “Markets and Hierarchies”; we should be getting our students to read this. As for the government layer, we’re into the law schools and public-policy schools. How do we do public-private partnerships? If you’re writing a law for hospitals, you look at the market failures and ask how they might be fixed; or you look at the public-choice literature and ask what sort of laws might actually get passed. Then there are the constitutional lawyers on limiting the government, and the criminologists on how people get into crime.

    Layer 10, the international layer, is one we might not think about much as the US does so much on its own; but countries don’t act alone. The US and China have a deal, for example; if that’s done well we have better cybersecurity, and if it isn’t we have war. And nations set limits on each other. The relevant literature here is international relations; the realist view of how we maximise our advantage has consequences for cybersecurity. Relationships with other actors are a matter of diplomacy, so you have to learn to talk to diplomats and think about how to communicate the tragedy of the commons. What’s the role of the UN? That’s not traditionally considered part of computer science, but there are people there writing books about cybersecurity. And they’re not here in this room. How do we go about teaching cybersecurity in a business school, as a number of us here do? When is it better to let markets fix cybersecurity, and when do we need rules?

    As for research, Peter hopes that the framework shows the importance of what we do at WEIS. Which risks are you worried about? Which literatures do you need to read, and what sort of research is likely to pay off? Ask yourself what sort of story within one of these literatures might interest people, and which specialist conferences you might go to to tell it. We know there’s an explosion of complexity; how do we deal with it? Attacks happen at layers 8 through 10 as well, and are fundamentally similar to the attacks we already know at layers 1 through 7, but we need different conceptual tools to deal with them. He hopes that this framework may be helpful in thinking about what we need.

    In questions, I asked whether there’s a “layer 11” of cultural or religious analysis; Peter answered that such approaches may be useful but his focus is on the institutional aspects.

  2. The first refereed talk was by Matthias Weber on A Research Agenda for Cyber Risk and Cyber Insurance. Why is cyber risk management still such a vague and hand-wavy topic despite fifteen years of research? Matthias surveyed a number of disciplines from computing through law, economics and accounting to behavioral science, politics and management, finding that each understands cyber risk differently and believes itself to be the leading discipline. Dealing with cyber risk needs a multidisciplinary approach, and the paper suggests some possible research projects. Another possible nexus of research is whether cybersecurity is a public good; a third is how we can increase the insurability of cyber risk, where a global event database could be really helpful.

    Next was Tyler Moore, on Valuing Cybersecurity Research Datasets. Many of the datasets shared for operational purposes aren’t much use for research, and making them suitable for sharing is time-consuming and costly. Research datasets, on the other hand, are often given away free. In addition to the direct costs and legal risks of data sharing, there’s asymmetry, uncertainty and mismatch of value. Incentives for sharing can include direct payment, shared costs, or glory. There are collaborative programs including Wombat, CCC and Impact. Impact has been supported by DHS since 2006; Tyler studied 200+ requests made up to September 2018 and emailed the requesters, of whom 114 replied. Over 60% would not have collected the data themselves, and the median avoided cost was about $300,000. However, there was no obvious correlation between the cost of collecting a dataset and its popularity. Newer datasets are more popular, as are datasets that are restricted rather than completely public.

    Richard Clayton was next with Measuring the Changing Cost of Cybercrime (of which I’m an author). We had a survey of cybercrime at WEIS 2012 and decided to update it seven years later to see what had changed. Surprisingly, we got much the same result despite the dramatic changes in the online world since then, from Windows to Android, and to the cloud. There are some changes though. First, we have better data, including victim surveys, so we know that about half of all crime is online (which we could only surmise in 2012). Second, business email compromise and its variants such as CEO fraud and authorised push payment fraud are up. Third, ransomware and cryptocrime are now big thanks to bitcoin. Fourth, there was real damage from WannaCry and NotPetya (though not as much as reported at the time). Phone fraud is down (calls are cheaper) and so’s IP infringement (Viagra’s out of patent, while software and music are sold more by subscription). Oh, and fake antivirus has been replaced by tech support fraud. The takeaway is that cybercrime isn’t so much to do with the technology as with the underlying lack of enforcement, and that hasn’t changed.

  3. Daniel Woods has been working on The County Fair Loss Distribution: Drawing Inferences from Insurance Prices. He’s extracted data from regulatory filings of 26 US insurance companies and used this to reverse-engineer loss distributions; he estimated the parameters of six different distributions over some 2,000 cyber-liability prices offered by one insurer, then extended this to the other 25 firms. He was inspired by Francis Galton’s observation that when farmers try to guess the weight of an ox at a county fair, their guesses may be all over the place but the average is pretty close. He found that the gamma distribution best explains the observed prices. In questions, it was noted that the randomness of county fair attendees may help debias their average guess, while insurers and indeed the insurance industry may assume specific models; and that the difference between the mean and the median indicates that the data are a mix from small and large companies, so one might use impaired rather than implicit probabilities.
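
    As a rough illustration of the fitting step (my sketch, not Daniel’s code, and the losses below are simulated rather than reverse-engineered from real filings), one can fit several candidate distributions by maximum likelihood and compare them by AIC:

```python
# Toy illustration: fit candidate loss distributions and compare them by AIC.
# The sample is simulated; the paper reverse-engineers losses from prices.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
losses = rng.gamma(shape=2.0, scale=50_000, size=2000)    # stand-in sample

candidates = {"gamma": stats.gamma, "lognorm": stats.lognorm,
              "weibull": stats.weibull_min, "pareto": stats.pareto}
for name, dist in candidates.items():
    params = dist.fit(losses, floc=0)                     # maximum likelihood
    aic = 2 * len(params) - 2 * dist.logpdf(losses, *params).sum()
    print(f"{name:8s} AIC = {aic:,.0f}")                  # lower is better
```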

    Dennis Malliouris has been working on The Stock Market Impact of Information Security Investments. Previous studies had shown mixed results as to whether information security certification raised or lowered a company’s stock price. Dennis found that completion of the UK’s Cyber Essentials scheme by 145 firms in 2014-18 was associated with a significant positive stock-market reaction, while ISO/IEC 27001 certification by 76 firms in 2011-18 was associated with significant negative returns.

    Jonathan Merlevede was the morning’s last speaker, on Exponential Discounting in Security Games of Timing. Games such as FlipIt have assumed constant returns; Jonathan has been investigating what happens with exponential discounting of gains and costs over time, both with constant-rate and stochastically-timed play. For both attacker and defender, increasing impatience increases the effective cost of moving; as the defender starts off in control, the attacker must play at reasonably high rates to get control at the start of the game, when the contested resource is most valuable. Rapid defensive moves may cause the attacker to drop out. The periodic strategies that were effective without discounting are less obviously so. There are various other subtleties that such models can explore.
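
    To make the setting concrete, here is a toy simulation under simplifying assumptions of my own (not Jonathan’s model): a FlipIt-style game with a periodic defender, a Poisson attacker, and exponential discounting of both gains and move costs at rate rho.

```python
# Toy FlipIt-style simulation with exponential discounting (illustrative only).
import numpy as np

def discounted_payoffs(period=1.0, atk_rate=0.8, rho=0.1, move_cost=0.2,
                       horizon=500.0, dt=0.01, seed=1):
    rng = np.random.default_rng(seed)
    t = np.arange(0.0, horizon, dt)
    disc = np.exp(-rho * t)                       # exponential discount factor

    # Defender flips at fixed intervals; attacker flips at Poisson times.
    d_moves = np.arange(period, horizon, period)
    a_moves = np.cumsum(rng.exponential(1 / atk_rate, int(3 * atk_rate * horizon)))
    a_moves = a_moves[a_moves < horizon]

    # Owner at each instant is whoever moved last; the defender starts in control.
    events = sorted([(s, 0) for s in d_moves] + [(s, 1) for s in a_moves])
    owner, cur, i = np.zeros(len(t), dtype=int), 0, 0
    for k, now in enumerate(t):
        while i < len(events) and events[i][0] <= now:
            cur = events[i][1]
            i += 1
        owner[k] = cur

    d_util = (disc * (owner == 0)).sum() * dt - move_cost * np.exp(-rho * d_moves).sum()
    a_util = (disc * (owner == 1)).sum() * dt - move_cost * np.exp(-rho * a_moves).sum()
    return d_util, a_util

print(discounted_payoffs())   # raising rho makes every move effectively costlier
```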

  4. Duy Dao started the afternoon with the Economics of Ransomware Attacks. He models ransomware as a game in which software vendors set quality and price and customers respond; the equilibrium can be disturbed in odd ways if customers can pay to mitigate losses. For example, instead of cutting prices as security risk increases, the vendor might even raise them. Ransoms can lead to more unpatched machines, and higher ransoms to higher prices for software.

    Daniel Arce was next on Cybersecurity and Platform Competition, arguing that platform competition and cybersecurity shape each other. Daniel previously built models that describe duopoly with equilibrium between security level and market share. Here he shows that the security levels can be endogenous, generated by one of the players competing on security, defined as the probability that an attack is unsuccessful. Part of the model is social engineering, and another part is malware targeting based on market share. He assumes that security has the same ratio of fixed costs to marginal costs as the rest of the services provided, and that each platform can set its security level in such a way as to prevent switching. The implication is that competitors to Windows or Android must compete on security.

    The third speaker was Sasha Romanosky, whose subject was Improving Vulnerability Remediation Through Better Exploit Prediction. Might we prioritise bug fixing by predicting which bugs are most likely to be exploited? Sasha treated automatic severity and exploitability rating as a supervised learning problem where we want to understand the model and the results, and used gradient-boosted trees. His training and evaluation data included Mitre’s CVE list and published exploit code from Exploit DB and Metasploit, as well as exploit data from SANS and elsewhere. One question noted that in previous work, the ability to recognise an attack and recover from it appeared to be more important than a raw exploit probability; another that what mattered was the deployment of patches, not just their publication; another asked whether this would work in a world where people actively scan for vulns and a find is exploited quickly and massively; and another noted that coverage seemed rather poor even for good models, with anything near 80% seeming to require fixing most of the bugs.
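
    Purely as an illustration of the setup (the input file and feature names below are hypothetical, not Sasha’s), the supervised-learning step with gradient-boosted trees might look like this, with recall playing the role of the “coverage” raised in questions:

```python
# Hypothetical sketch of exploit prediction with gradient-boosted trees.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split

cves = pd.read_csv("cve_features.csv")                    # hypothetical input
features = ["cvss_base", "n_references", "has_public_poc", "days_since_publish"]
X, y = cves[features], cves["exploited_in_wild"]          # hypothetical label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("precision:", precision_score(y_te, pred))
print("recall (coverage of exploited vulns):", recall_score(y_te, pred))
```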

  5. Kai-Lung Hui started the last session with a talk on Bug Bounty Programs, Security Investment and Law Enforcement: A Security Game Perspective. Bug bounties are now sexy – “the new boy band” – but what are the trade-offs? In his model, a firm should always launch a bug bounty program unless there’s a strategic hacker who costs too much to bribe into participation, in which case, according to the model, it would take the hit of an exploit instead. Firms will still retain in-house teams to limit the bounty bill. Several questions challenged the realism of the model’s assumptions; Kai-Lung responded that the model could be elaborated.

    Next up was Pietro Saggese talking about Identifying the Arbitrageurs on Mt. Gox. He studied arbitrage between the failed exchange Mt Gox and three others, Bitstamp, Bitfinex and BTC-e, finding over 2,000 potential arbitrageurs with accounts at all three whose trades might be explained as exploiting differences in published prices. He also studied patterns of zero-fee trades. Many users with low IDs seem to have executed coordinated buy orders from August to November 2012, and coordinated sell orders just before the $1000 price peak in April 2013. He used these patterns to separate arbitrageurs from others. Distinguishing them is still surprisingly hard, and it’s not obvious whether there were a few big arbitrageurs or a lot of little ones.

    Leting Zhang then presented Does Sharing Make My Data More Insecure? An Empirical Study on Health Information Exchange and Data Breaches. She analysed data from 2010-14 on breaches at hospitals that did or did not join health information exchanges. Did the mandated security measures mitigate the larger attack surface? Yes, especially for hospitals with a capable IT function – and there’s a spillover of reduced breaches to other local hospitals even if they don’t participate directly, especially for big systems, in regions with a high participation rate and in cities with competition between providers.

    The last speaker of Monday was Mehmet Bahadir Kirdan with Hey Google, What Exactly Do Your Security Patches Tell Us? A Large-Scale Empirical Study on Android Patched Vulnerabilities. He studied 2,470 Android patches from 2015-19 and looked at the many vulnerabilities left exposed by Android’s end-of-life policy; the policy is also uneven, as 4.4.4 appeared 8 months after 4.4 but reached end-of-life 22 months later – rewarding users who had updated. Vulnerabilities coming in from Linux and Qualcomm are patched late. The average maximum vulnerability lifetime (from code release to patch) is 1,350 days. We need better coordination between Google and other stakeholders, and improved techniques to prevent the introduction of common vulnerability types. In questions it was pointed out that OEMs are another big weak link in the chain; Mehmet has no data on that, nor on the date a vuln was discovered, as opposed to patched (he’s been working from the source code). A comparison with Linux would be nice, but hard, as its issues are not as easy to track.

  6. Nikhil Malik started Tuesday’s session with a talk on Why bitcoin will fail to scale. Bitcoin does 3 transactions per second while Visa does 2,000; will the gap ever close? If bitcoin added capacity in a way that reduced fees, this would cut payments to miners, who might then fail to support the system or even undermine it, for example by strategic partial filling of blocks (for which he has found some evidence). Also, smaller colluding groups can launch double-spending attacks, and added capacity can be artificially reversed by a colluding group so as to protect its fee income. In short, disrupting the auction mechanism may undermine security. In questions, Nikhil remarked that much of the hike in mining fees since April is probably due to strategic partial filling, though increased demand also played a role; that if you block the big miners to try to make collusion harder, this will just incentivise them to run attacks; and that there have already been double-spend attacks on other cryptocurrencies such as Bitcoin Gold. There have been none on bitcoin so far, but one would cost only about a quarter of a million dollars, so let’s not be complacent.

    The next speaker was Arghya Mukherjee, on The Economics of Cryptocurrency Pump and Dump Schemes. He joined all relevant Telegram and Discord fraud groups and identified nearly 5,000 pump-and-dump schemes from January to July 2018 – a much larger sample than previous studies. Scammers agree on a cryptocurrency to buy and start driving up the price, often to an agreed target; as both price and volume increase, this attracts suckers. Cryptocurrencies are ideal vehicles as they are illiquid, with occasional liquidity and price spikes. Pumps worked better on obscure coins, hiking prices by about 20% for coins ranked over 500 but only about 4% for top-75 coins, and also worked better against coins traded on few exchanges. There are also “promotions” that amount to pump-and-dump trading by insiders. Although such schemes are illegal, against coins just as against stocks, there have yet to be any prosecutions. In questions, Arghya noted that scams take place repeatedly against the same coin; that he’s interested in traders, not miners; and that he ignores small groups as he doesn’t believe they move the market, but just copy the bigger groups. He doesn’t know who motivates the scams and gets the most benefit; it could even be the founders of the coins.
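
    The schemes themselves were identified from the Telegram and Discord announcements; purely for illustration (and not the paper’s method), a toy heuristic for flagging candidate pump events from market data might look for joint price and volume spikes:

```python
# Toy heuristic, not the paper's method: flag minutes where price and volume
# both jump far above their recent rolling medians.
import pandas as pd

def flag_candidate_pumps(ohlcv: pd.DataFrame, window: int = 60,
                         price_jump: float = 1.05, vol_jump: float = 5.0):
    """ohlcv: minute-level DataFrame with 'close' and 'volume' columns."""
    med_price = ohlcv["close"].rolling(window).median()
    med_vol = ohlcv["volume"].rolling(window).median()
    spikes = (ohlcv["close"] > price_jump * med_price) & \
             (ohlcv["volume"] > vol_jump * med_vol)
    return ohlcv[spikes]
```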

    Matteo Romiti has done A Deep Dive into Bitcoin Mining Pools to provide an empirical analysis of mining shares. He’s interested in how mining centralisation evolved and how pools reward their members, and has combined several sources of miner and pool attribution data. He’s found that individual large miners operate across the three mining pools that together exceeded the 50% threshold in the first half of 2018, and that in each pool fewer than 20 miners receive most of the payouts. He’s making both code and data available for others to build on. In questions, he noted that miners don’t join all the pools as there’s typically an entry fee, and it’s rational to join a larger pool to get steady revenue, even though it would be better for bitcoin if hash power were distributed across lots of pools; so maybe it’s the true believers who do cross-pool mining. Matteo actually did some mining to get ground truth, but with substandard hardware, and in a year earned only 10 euros.
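
    Once payouts have been attributed to miners, the concentration measurement is straightforward; a minimal sketch with a hypothetical data layout (not Matteo’s released code):

```python
# Minimal sketch: share of a pool's payouts received by its top-20 miners.
# The file and its columns are hypothetical.
import pandas as pd

payouts = pd.read_csv("pool_payouts.csv")                 # columns: miner, btc
by_miner = payouts.groupby("miner")["btc"].sum().sort_values(ascending=False)
print(f"top-20 share of payouts: {by_miner.head(20).sum() / by_miner.sum():.0%}")
```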

    Panagiotis Chatzigiannis was the last of the early morning session, with a paper on Diversification Across Mining Pools: Optimal Mining Strategies under PoW. He has a computational tool that helps miners diversify across pools in order to get a regular income stream, which is measured by the Sharpe ratio (average excess reward over standard deviation). He presents results for active and passive miners participating in various combinations of pools. It turns out to be slightly better to reallocate hash power fairly frequently, say about once a week, between pools. In questions, he said he didn’t do any mining.
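
    A minimal sketch of the objective, with simulated reward series rather than real pool data: the Sharpe ratio of a miner’s income for a given split of hash power across pools.

```python
# Sharpe ratio (mean excess reward over its standard deviation) of a miner's
# income for a given allocation of hash power. Reward series are simulated.
import numpy as np

rng = np.random.default_rng(0)
# Daily reward per unit of hash power in three hypothetical pools.
pool_rewards = rng.normal(loc=[1.00, 0.98, 1.02], scale=[0.05, 0.20, 0.35],
                          size=(365, 3))

def sharpe(weights, rewards, risk_free=0.0):
    income = rewards @ np.asarray(weights)        # daily income for this split
    excess = income - risk_free
    return excess.mean() / excess.std()

print("all in pool 0:", sharpe([1, 0, 0], pool_rewards))
print("even split:   ", sharpe([1/3, 1/3, 1/3], pool_rewards))
```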

  7. Rob Pinsker has been working on The Economic Cost of Cybersecurity Breaches and in particular the spillover to related firms – firms in the same sector as a firm that gets publicly hacked. He analysed a sample of 353 breaches and found that related firms suffer significant negative equity returns (roughly 0.5%) as well as higher audit costs, though they are not punished as severely as the breached firms. The effect depends on breach size and competition. In questions: will we see this effect in other types of breach? Internal controls matter for insider breaches but aren’t apparent in the data used, and there are fewer insider breaches reported. There may also be negative spillover as peer firms invest in the same countermeasures; these can also be signals to the market, which wants to know that the firms in a sector are doing due diligence. Rob hasn’t studied this.

    Arrah-Marie Jo has been studying Software Vulnerability Disclosure and Security Investment. The best metric for security investment may be the effort made to find vulnerabilities rather than time-to-patch. Arrah-Marie studied browsers, desktop OS and mobile OS from 2009-18 and found that a highly public vulnerability disclosure spurs on this effort; third parties’ discovery rates in particular are positively affected. Perhaps this is a “Bayesian updating” of the probability of finding new flaws. In questions: was there any correlation with a bug bounty program? Yes, insofar as such programs reward individuals. Were any such programs set up in response to a big vulnerability? That’s worth investigating. What about other platforms? It should be worth looking at things like WordPress.

    Raviv Murciano-Goroff’s topic is Do Data Breach Disclosure Laws Increase Firms’ Investment in Securing Their Digital Infrastructure? He’s gone back to the start of data-breach laws in California in 2002, following a breach that affected California state employees, including legislators; that was followed by other states from 2005, and by now all 50 states have such laws. He’s collected header data from the Internet Archive which show when 213,810 US firms updated their web server software between 2000 and 2018. His data confirm that tech age predicts data breaches: older software is associated with a significant increase in breach probability. An analysis of the data from 2002-05 shows that data-breach notification law makes software about 2% newer. High-traffic websites are 25% more responsive, while the biggest impact was on technology laggards. The dataset he’s collected might help answer many other questions too. In questions: he did no further work on the relationship between security and server hygiene, such as types of servers and TLS use; that’s for the future.
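
    The kind of regression involved might look like this; the file and variable names are hypothetical and this is not Raviv’s specification:

```python
# Hypothetical sketch: regress breach incidence on web-server software age.
import pandas as pd
import statsmodels.formula.api as smf

firms = pd.read_csv("firm_years.csv")   # hypothetical: breached, software_age,
                                        # log_traffic, year
model = smf.logit("breached ~ software_age + log_traffic + C(year)",
                  data=firms).fit()
print(model.summary())
```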

  8. Eric Zeng has been working on Fixing HTTPS Misconfigurations at Scale by sending notifications to website owners, continuing Google’s previous work on notifying people of malware. In addition to real certificate issues, bad TLS configurations can lead to both false positives and actual attacks. He tested messaging for less urgent issues (TLS versions 1.0 and 1.1, certs about to expire, and ciphersuites that don’t use authenticated encryption) and also for urgent notifications about distrusted Symantec certificates. In the former case the effect was low, about 10%. He tested English versus translated messages, user impact versus technical impact, and notification from Google versus UC Berkeley, in a randomised controlled trial with 1,000 websites in each bucket. There weren’t any really significant effects of treatment; they were swamped by effects of context, with certs about to expire getting much more attention. Perhaps website owners are happy to keep old configurations for other reasons. Notifications work better from Google than from academia; deadlines had a substantially higher effect; and translations had no effect, but might be done out of respect; messages should identify the issue clearly. Their latest work is on using the browser UI to deprecate old features.

    Raveesh Mayya’s paper was on Delaying Informed Consent: An Empirical Investigation of Mobile Apps’ Upgrade Decisions. Since Android 6.0, the separation between installation and run-time consent has let users deny access permissions sought by apps. There was a three-year window for apps to upgrade their SDK use, which gave them new features but meant that users could opt out. So which apps delayed upgrading, and what did they gain or lose? He crawled 2 million apps in 2016-18 and installed the most popular 13,600 (though app adoption was seen as exogenous). Sneaky apps, such as those getting non-essential access to the microphone, delayed. Apps that did upgrade reduced their demands for non-essential information by 6%. He also found that consumers make meaningful privacy choices when these are less cognitively costly.

    Alisa Frik has been studying The Impact of Ad-Blockers on Consumer Behavior. More than 600m users employ ad-blocking technology of one kind or another; as well as cutting ad annoyance, ad blockers make web pages load faster, cut distraction and mitigate discrimination. But they also cut access to coupons, offers and so on. So what’s the net effect on consumer behavior? She observed how 212 volunteers did real shopping with and without ad blockers; in the former condition, they saw only organic search links. She found that ad blocking did not damage welfare by adversely affecting prices, search time, or subsequent satisfaction, though subjects who had previously used ad blockers at home tended to buy cheaper goods. Searching for specific branded products was faster than looking for generics. The experiment involved contextual ads for the control group but not behavioral ones. Experienced ad-blocker users also inspect more search results before buying, and are less satisfied with both their browsing experience and product choices. She is now planning a field experiment. In questions: perhaps you’re just observing power users? Possibly.

    Alessandro Acquisti gave the last refereed paper, on Online Tracking and Publishers’ Revenues: An Empirical Analysis. Behaviorally-targeted advertising may now be about three-quarters of the total, and may help advertisers, but does it bring any benefit to the publishers who add all the trackers to their websites and thereby compromise their readers’ privacy? From the viewpoint of theory, ad targeting may make advertisers more willing to pay, increasing publisher revenue; it may also diminish competition, leading to a decrease. Which effect predominates? Alessandro got a dataset of 2 million transactions of 5,000 advertisers at 60 websites, from which he could estimate revenues both where behavioral ads were allowed and where only contextual ads were. The answer is that publishers get only about 4% more. There are caveats about this being from the ads of only one media company, and not taking account of other factors such as device fingerprinting; but it does somewhat undermine the claim that restrictions on third-party cookies would be the end of life as we know it. In questions, it was noted that websites would be designed differently if cookies weren’t available, and that Firefox was making ad blocking a default just this week. Alessandro noted that publishers he’s spoken to were surprised by these results.

  9. Rump session

    Sasha Romanosky has been looking at the kinds and magnitudes of harms from breaches and trying to quantify the various parts of the kill chain.

    Frank Nagle of Harvard and the Linux Foundation acknowledges that open-source software is eating the world; how do we measure it? He hopes to produce a public database by the end of the year.

    Mohammed Mahdi Khalili of U Mich is working on interdependent security games, including one for resource pooling for local public goods.

    Ying Lei Toh from FRB Kansas City has been looking at consumer reaction to the Equifax breach. The proportion of people looking at their credit report increased from 0.5% to 3%. Ideas please to yinglei.toh@kc.frb.org.

    Andrew Odlyzko pitched his paper Cybersecurity is not very important.

    Zinaida Benenson has done a survey of user attitudes to security update labels, like energy labels for light bulbs. Users considered them more important than price, and not just for “sensitive” items like a camera; the same held for a home weather station. This will appear in IEEE S&P 2020.

    Shuan Wang described cybersecurity research in Singapore.

    Farhang Rouhi has a predator-prey model of cryptocurrencies.

    Richard introduced the Cambridge Cybercrime Centre, which has tons of data to share and a job opening for an engineer, as well as a conference on July 11th.

    Joost Houwen has been studying what controls amount to a testable minimum standard of due care from an insurance point of view.

    Sander Zeijlermaker has been thinking about strategic initiatives; can models such as the red queen hypothesis and the iterated weakest link be applied to managing suppliers?

    Rainer Boehme announced the Honeynet workshop in Innsbruck, which will run from July 1-3.

    Sasha Romanosky rounded off by asking how we can find the next generation of star malware researchers. Should we test the whole population at 17, like the Israelis, or test for psychological traits, or what?
