WEIS 2021 – Liveblog

I’ll be trying to liveblog the twentieth Workshop on the Economics of Information Security (WEIS), which is being held online today and tomorrow (June 28/29). The event was introduced by the co-chairs Dan Arce and Tyler Moore. 38 papers were submitted, and 15 accepted. My summaries of the sessions of accepted papers will appear as followups to this post; there will also be a panel session on the 29th, followed by a rump session for late-breaking results. (Added later: videos of the sessions are linked from the start of the followups that describe them.)

  1. (Video) Marie Vasek chaired the first paper session, and the first speaker was Scott Lee Chua. He’s been Measuring the Deterioration of Trust on the Dark Web. The direct impact of law-enforcement interventions seems hard to measure; are there indirect effects? Scott proposes vendors’ “return on reputation” (RoR) as a proxy for whether police operations have been effective in undermining trust. Positive RoR values have been measured on eBay and Taobao; will darknet buyers pay a still larger premium to trusted vendors to cut their risks, driving RoR higher? He studied the AlphaBay takedown in July 2017, after which buyers and vendors migrated to Hansa. Three weeks later people learned that Hansa was already controlled by the Dutch police, which sowed distrust. Scott reviewed data from the Dream Market, which persisted. It turned out that RoR did not change with the AlphaBay takedown, but increased significantly after the Hansa takedown; 5-star vendors started doing over 30% better than their 4-star competitors. So damaging buyer trust increases barriers to vendor entry, leading to fewer and more conspicuous vendors.
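
    To make the RoR idea concrete, here is a minimal, self-contained sketch of the kind of difference-in-differences regression one might run on listing data. The synthetic data, column names and specification are my own illustration, not Scott’s actual method.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # Synthetic listings with a reputation premium that grows after the takedown.
        rng = np.random.default_rng(0)
        n = 2000
        listings = pd.DataFrame({
            "five_star": rng.integers(0, 2, n),        # 1 = top-rated vendor
            "post_takedown": rng.integers(0, 2, n),    # 1 = listed after the takedown
            "vendor_id": rng.integers(0, 200, n),
        })
        listings["log_price"] = (3.0
                                 + 0.10 * listings["five_star"]
                                 + 0.20 * listings["five_star"] * listings["post_takedown"]
                                 + rng.normal(0, 0.3, n))

        # Difference-in-differences: the interaction term estimates how much the
        # premium commanded by top-rated vendors changed after trust was damaged.
        model = smf.ols("log_price ~ five_star * post_takedown", data=listings).fit(
            cov_type="cluster", cov_kwds={"groups": listings["vendor_id"]})
        print(model.params["five_star:post_takedown"])

    In a log-price specification of this illustrative kind, an interaction coefficient of about 0.26 would correspond to the ‘over 30% better’ premium mentioned above.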

    Ugur Akyazi has been Measuring Cybercrime as a Service. This has been growing as a business, is promoted through underground forums, and is a priority for law enforcement. How do CaaS vendors find customers? Underground markets are better for product-type sales, while custom websites lack reputation mechanisms; people have therefore shifted to forums, especially since the takedowns of AlphaBay, Hansa and others. Services range from renting infrastructure through hacking on demand to cash-out. He developed classifiers for service type, supply, demand and cost. It turned out that, on Hackforums at least, CaaS peaked in 2012. However the ratios of the service offerings have remained stable over time.

    Richard Clayton’s subject is that Cybercrime is Entrepreneurship (I’m an author). Up till now, cybercrime has been analysed technically, empirically, through security economics and through criminology. Yet something is still lacking, and we present a framework for seeing crime gangs as tech startups. There’s an enabler that makes a crime possible and a barrier to entry that has to be overcome; there may then be pathways to scale, or bottlenecks that prevent scaling; in the absence of defenders, the crime may grow until it eventually saturates. Richard described 419 scams as an example; there are ten more examples in the paper. We hope that the framework may help in assessing new cybercrime types, to get some idea of whether we ought to be worried. The biggest difference between crime startups and regular ones is that the crooks don’t have access to VC; they’re running a tech startup with the financial structure of an ice-cream shop. In questions, Susan Landau pointed out that location also matters; being in a non-extraditable country is a big advantage.

  2. (Video) The second session was chaired by Erin Kenneally, and the first speaker, Daniel Woods, described How Cyber Insurance Shapes Incident Response. Ten years ago, NIST 800-61 set out how organisations should respond to incidents; experience has shown that SMEs can’t work with such heavyweight planning. The reality nowadays is that the person who notices the attack calls a hotline, and a responder takes over; cyber-insurance has helped to drive this. However the victim firm and the insurer may not agree on the selection of forensic investigators, lawyers, PR folks and so on: a classic principal-agent problem, spiced by high transaction costs and a short timescale. Daniel found that insurers often restricted clients to a pre-approved panel of support firms, and had their hotlines staffed by lawyers. Daniel studied the panels of 14 insurers to analyse how they act as gatekeepers; some legal, forensics and PR firms charge at a discount rate, or even offer a fixed-price service. A handful of law firms dominate; they sell attorney-client privilege, and have invested seriously in relationships with insurers. A large number of forensic firms get work, with new competitors breaking away from incumbent firms. This drives down prices and means that SMEs have a response capacity where they previously didn’t.

    Kiran Sridhar was next, working on Cybersecurity Information Sharing in the context of CERT/CC and supply-chain threats. He got access to 434K emails discussing vulnerabilities, going back to 1993, and extracted statistics on trends, vulnerability characteristics, and priorities. CERT/CC has become more aggressive at prioritising vulns; multi-vendor vulns get more attention; and resolving vulns takes longer if they’re deeper in the supply chain. Kiran has come to believe that cooperative vulnerability coordination is possible, but it will take more work. The issues are around the ways software is deployed and the need for automation: we need ways of alerting repositories, and of shipping patches based on a software bill of materials. We also need better ways of prioritising vulns, as this simply takes up so much bandwidth. There will ultimately be hundreds of supply-chain vulns affecting every company, so getting this right will be increasingly important.

    Anna Cartwright is trying to measure The value of data, in the sense of willingness-to-pay (WTP) and willingness-to-accept (WTA) for access to files. Her first study, which asked 800 people from the UK about the value of the data on their principal digital device, showed a tenfold difference between WTP and WTA in the case of malicious deletion. A second study covered 400 people in employment, with WTA framed as the deletion being voluntary rather than adversarial; the discrepancy was similar to the first study. WTA seems less reliable, but WTP may underestimate the value of the data.

  3. (Video) Monday’s third and final session was chaired by Mingyan Liu. The first speaker was Shakthidhar Gopavaram, exploring Willingness-To-Pay vs. Willingness-To-Accept in an IoT Marketplace. The discrepancy between the two may be explained by the endowment effect and status-quo bias; the latter can be decomposed into loss aversion and omission bias (the desire to avoid later regret). He recruited 40 people interested in owning a smart plug and studied whether they would pay more for cameras or fitness trackers. He found that the detailed design of the marketplace has a significant impact on purchase decisions around privacy, so privacy markets might be significantly enabled by a few key players such as Amazon.

    Elsa Rodriguez was next, Quantifying the Role of IoT Manufacturers in Device Infections. She’s been using the ‘Mirai telescope’ of the IP addresses of infected devices from July to September 2020, observing almost 32k infected IoT devices. 42% of infections are due to just nine manufacturers, all based in Taiwan and China, of which the worst are Avtech and HikVision. This pattern holds across the 20 countries with the most infected devices. As for the vendors, slightly over half (53%) have some updates available to download. Elsa concludes that we might get results if we focus efforts on these firms; the evidence is now sufficient to justify government intervention.

    Monday’s last speaker was Amutheezan Sivagnanam, who has been studying The Benefits of Bug Bounty Programs. He’s analysed the Chromium program to see the probability that vulnerabilities are rediscovered, and whether external bug hunters report the same kinds of vulnerabilities as are found by internal testers and used in exploits. Over 21k reports have been made public since 2008; by comparing the Chromium issue tracker, CVEs and git, a lot can be learned. It turns out that there are significant differences between internal and external bug hunters, while rediscovery is non-negligible but trickier to pin down, as most bugs are patched rather quickly. Exploiters go for the subset of critical bugs with high severity in stable release channels that affect the rendering engine and where the code is in C++. This raises the question of whether the Chromium team should incentivise external bug hunters to go after those sorts of bugs.

  4. (Video) Adrian Ford led off with Tuesday’s first talk, on The Impact of Data Breach Announcements on Company Value in European Markets, responding to the rather US-centric literature on this subject. He analysed data on 44 breaches affecting European quoted companies from January 2017 to December 2019. The only sector that showed significance was ‘consumer defensive’, where the cumulative abnormal return (CAR) actually increased. There was a weak negative correlation between the number of records breached and share price; the introduction of GDPR in 2018 had a weak but insignificant positive effect. Overall, there was no clear effect of breach announcements, so it would be hard to justify security investment on the basis of breach announcements alone.
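
    For readers unfamiliar with the event-study method behind these figures, here is a minimal sketch of how a cumulative abnormal return is typically computed under a simple market model; the function name, estimation window and event window are illustrative assumptions of mine, not Adrian’s actual code.

        import numpy as np
        import pandas as pd

        def car(stock_ret: pd.Series, market_ret: pd.Series, event_date,
                est_days: int = 120, window=(-1, 5)) -> float:
            """Cumulative abnormal return over the event window, with the market
            model (alpha, beta) estimated on the est_days trading days beforehand."""
            i = stock_ret.index.get_loc(event_date)
            est = slice(i - est_days + window[0], i + window[0])
            beta, alpha = np.polyfit(market_ret.iloc[est], stock_ret.iloc[est], 1)
            ev = slice(i + window[0], i + window[1] + 1)
            abnormal = stock_ret.iloc[ev] - (alpha + beta * market_ret.iloc[ev])
            return float(abnormal.sum())

    Averaging such CARs across a sample of affected firms gives the cumulative average abnormal return (CAAR), the quantity the next talk refers to.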

    Nicola Searle and Andrew Vivian’s talk was titled Surprisingly Small: this refers to the effect of trade-secret breaches on firm performance. There’s a lot of rhetoric about trade secrets being important, like patents, so you might expect announcements of secret compromise to have a negative effect on share prices. Nicola found only two papers on this, so she searched US court records (as they’re electronically available) for 1996-2020 under sections 1831/2 of the US criminal code (the Economic Espionage Act), finding 214 cases and 103 alleged victim companies. It turns out that all the CAARs are rather small (less than 0.5% in absolute magnitude), regardless of whether they are raw, market-adjusted, four-factor or five-factor, and regardless of the event window and risk adjustment. None of the effects reach significance, except possibly in high-value severe cases involving outsiders against R&D-intensive firms. It was also notable that there were so few alleged victims over 25 years.

    The third speaker of the session was Eduardo Schnadower, who has been investigating Behavioral advertising and consumer welfare. He had participants visit random websites to search for products, and then compare products, presented in randomised order, drawn from organic search results, ads targeted at them, and ads targeted at other participants. With results from 181 participants, purchase intentions were low overall, but significantly lower for the random ads. Measurement is hard because of the dominance of large websites, and because so many ads are annoying that consumers block them; this is an environment that makes it harder for SMEs.

  5. (Video) Alexandre de Corniere started the last refereed paper session with A Model of Information Security and Competition. As security markets fail because of externalities between vendors, asymmetric information, and various kinds of market power among vendors, Alexandre has been modeling how competition is affected by vendors’ business models, and customer savvy. Starting with a Hotelling model of duopoly, with two firms with different security attributes and different numbers of customers, the more customers the more the bad guys will target that product; only savvy customers will realise this. Price is a decreasing function of the vendor’s security, and also of its rival’s, so competition increases with security, leading to underinvestment in equilibrium. The model enables a regulator to decide what fines to set for market abuse depending on the proportion of sophisticated users, and the proportion of firm revenues that come from advertising versus license fees.

    Tongxin Yin was next with A Game-Theoretic Analysis of Ransomware. Is deterrence, backup, or insurance the best strategy? This can be modelled as a four-stage game: the attacker decides whether to attack despite any deterrence factors; the defender then decides whether to pay or to recover; the recovery either works or doesn’t, and if it fails the defender either pays or loses the data. She has worked out the equilibrium strategies, and in her model backup and deterrence are complementary. Having done this, she introduces an insurer as another player; this makes things more complex, and details can be found in the paper, but insurance may have a moral-hazard effect on backup, which is otherwise a credible threat. This work is intended to provide a basis for new types of ransomware insurance.
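
    A minimal back-of-envelope sketch of that sequential structure, with illustrative payoffs and parameter names of my own rather than Tongxin’s actual model, shows how cheap, reliable backups make recovery a credible threat and can deter the attack altogether:

        def defender_best_response(ransom, data_value, recovery_cost, p_recover):
            """Expected cost of paying immediately versus trying to recover first
            (paying, or losing the data, only if recovery fails)."""
            pay_now = ransom
            try_recover = recovery_cost + (1 - p_recover) * min(ransom, data_value)
            return min(pay_now, try_recover)

        def attacker_attacks(ransom, data_value, recovery_cost, p_recover, attack_cost):
            """Backward induction: the attacker attacks only if expected revenue,
            given the defender's best response, exceeds the cost of attacking."""
            pay_now = ransom
            try_recover = recovery_cost + (1 - p_recover) * min(ransom, data_value)
            if pay_now <= try_recover:
                revenue = ransom                    # defender simply pays
            elif ransom <= data_value:
                revenue = (1 - p_recover) * ransom  # paid only if recovery fails
            else:
                revenue = 0.0                       # defender would rather lose the data
            return revenue > attack_cost

        # With good backups (90% recovery at low cost), recovering beats paying,
        # and the attack itself is deterred.
        print(defender_best_response(ransom=50, data_value=200, recovery_cost=10, p_recover=0.9))
        print(attacker_attacks(ransom=50, data_value=200, recovery_cost=10,
                               p_recover=0.9, attack_cost=20))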

    Spencer Oriot gave the last refereed paper, on Omnichannel Cybersecurity. People may opt for low-security tools at home but use higher-quality tools at work, despite poor usability. Inattention may be rational in a low-threat environment, but there’s an externality in that work educates people. So the sweet spot may be to deliver work-grade security without the extra hassle, and the opportunity may be for a firm to provide cybersecurity as a utility. How might this be built as a platform? We need somehow to provide good protection to less capable people, and a platform that mediates between users and vendors might be able to measure and improve this. We know the shortcomings of standards bodies and auditors; a third party that adds quality had better be involved in the transactions, or at least function as a near-real-time information-sharing agency.

  6. (Video) In the panel discussion I recalled how WEIS had grown from basic applied microeconomics in 2002, rapidly taking in financial economics from 2003 and then getting behavioural economics on board, thanks to Alessandro. As for the present, we’ve done more and more work with criminology over the last five years. As for the future, the SolarWinds failure taught us that we’d better pay attention to the ownership structure of our critical suppliers. Maybe a new frontier is political economy: things fail because of how people behave in organisations. Yet another is safety, as security economics extends to dependability economics.

    Alessandro Acquisti recalled the early days of the behavioural economics of privacy, and how both security economics and privacy economics have now gone mainstream. In a sense we’re victims of our own success, and there are many other opportunities for scholars to send good papers to first-class venues and journals.

    Rainer Boehme wondered what the world might be like without WEIS; at the very least many of us wouldn’t have got to know each other, and would perhaps have taken other tracks. Rainer first got involved at WEIS 2005, and realised we’d been successful in 2011 when he started hearing our arguments coming back from government officials. He is relieved that our workshop hasn’t been drowned in papers on cryptocurrency.

    Jean Camp recalled that some people questioned whether a subject such as the economics of security should even exist. We have immense methodological diversity, yet we connect security with privacy in ways that fit together well with reality. Jean wishes we could get better demographic diversity though.

    Marty Loeb showed some photos of people at WEIS 2003.

    Kanta Matsuura reminded us of some statistics he collected in 2009 on the coauthorship of conference papers. WEIS got significantly fewer inter-sector papers, such as academia + industry or academia + government: back then the figure was a bit over 5%, while recently it’s just over 10%. However the number of authors per paper has been stable since 2005 at about three, and the number of pages at about 25.

    Andrew Odlyzko agreed that WEIS was really important in bringing many people together, but it’s not been so visible, as many disciplines are now involved with information security for their own reasons. Our field is somewhat messy, with no clear breakthroughs of the kind seen in Newton’s laws, so although it’s been a major contribution and intellectual endeavour, it’s harder to describe it as a revolution.

    In discussion, it was noted that one of the difficulties is that success is not observable, other than through the absence of attacks. Security budgets are often justified by scaremongering, in both the private and public sectors; we still haven’t studied the agency and political-science aspects as has been done in other fields. However WEIS has made a great start: it legitimised the introduction of social science to security, and saw to it that both the economics and the systems were sound. Now that economic arguments are abused, for example to argue that there’s no privacy problem in the absence of measurable direct harm to identifiable individuals within the jurisdiction of some particular court, it’s important to have the social-science tools to measure the “chilling effect”.

  7. In the rump session, Adam Tagert of the NSA explained that the agency funds research in this area and will soon open a call for proposals in economics at the strategic level.

    Richard Clayton mentioned the Cambridge Cybercrime Centre, which has now licensed data to 58 teams, with four more in the pipeline. The collection includes crime forum posts, malware, spam and much else.

    Spencer Oriot has launched a startup to implement the work described in his paper and is hiring.

    I announced that the Cambridge Cybercrime Centre is hiring.

    Max Hills has been working on cookie consent. At last year’s WEIS, Daniel Woods and Rainer Boehme investigated the commodification of consent: some companies trade consents to manipulate their stats. Max has found that many consents are provided by third parties, one of which processes more than 25 billion consents a year. On average, these take just over 3.2 seconds, so this one company wastes some 2,500 years of people’s time annually.
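
    For scale, the arithmetic behind that figure, using the numbers above, is

    $$25 \times 10^{9}\ \text{consents/year} \times 3.2\ \text{s} = 8 \times 10^{10}\ \text{s} \approx 2{,}500\ \text{years per year}.$$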
