9 thoughts on “WEIS 2018 – Liveblog”

  1. Loukas Balafoutas started the meeting with an invited talk on economics research in Innsbruck. What do taxi drivers and IT experts have in common? Both face similar issues of asymmetric information: we risk being defrauded when we buy repair services, and can ourselves rip off our customers. Loukas has studied taxi driver fraud in Athens by having test passengers who signal different levels of information, and measuring whether the driver took a longer route or overcharged by manipulating the tariff. He had local, non-local native, and English-speaking passengers. Both foreigners and non-local natives were driven significantly further, and the difference between them was not significant; and only foreigners were significantly ripped off by manipulation of the tariff (which is nationwide). In other words, those drivers who cheat do so systematically. A similar experiment was done in the Austrian market for computer repair services. Here Loukas’ colleagues were exploring second-degree moral hazard. Mentioning the existence of insurance pushed the repair cost of a RAM module in a PC up from €70 to €128, with most of the extra charge in the labour cost: the repair shop charges 1.02 hours on average rather than 0.55 hours.

    The first regular paper was by Thomas Maillart, on designing organisations for cybersecurity resilience. How can we measure cybersecurity performance? The proportion of vulnerable hosts in an organisation is a spiky time series, and the best data for measuring it usually comes from the ticketing system, which encapsulates technical difficulty, individual human capacity and group coordination capacity. He categorised 60,767 incidents over 6.3 years in a large organisation, which changed over time from largely browser malware and USB attacks to more web server compromise. He measures risk as the frequency and damage of shocks, and resilience as the ability to recover and learn from them; the firm can now deal with four or five times the volume of shocks that it coped with six years ago. The key was reducing extreme events, as these take the most effort and energy to deal with. One technology that helps is full-disk encryption, as it mitigates stolen-laptop incidents. Another lesson is the need for reserves to cope with large incidents, which means drills for staff who can become responders if need be; another is the need to think about risk across the member firms of a conglomerate or along a supply chain. We’re missing theories of portfolio risk management and of learning at scale.
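
    In rough outline, the two quantities he contrasts can be pulled out of a ticket log like this (a toy sketch, not the paper’s code; the file and column names are invented):

    ```python
    # Risk as shock frequency, resilience as speed of recovery, from a
    # hypothetical ticket export with "opened"/"closed" timestamps.
    import pandas as pd

    tickets = pd.read_csv("incident_tickets.csv", parse_dates=["opened", "closed"])

    # Risk proxy: how many incidents arrive per week.
    weekly_volume = tickets.set_index("opened").resample("W").size()

    # Resilience proxy: how long incidents take to resolve.
    tickets["resolution_days"] = (tickets["closed"] - tickets["opened"]).dt.days

    # Extreme events -- the ones that absorb most effort: top 1% by resolution time.
    threshold = tickets["resolution_days"].quantile(0.99)
    extreme = tickets[tickets["resolution_days"] >= threshold]

    print(weekly_volume.describe())
    print(extreme[["opened", "resolution_days"]].head())
    ```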

    Fabio Massacci was next, on the security maintenance effort of open source components. He started off wondering whether free and open source software (FOSS) is more secure, and whether the laws for vulnerability discovery are very different in practice. Large firms’ products include both FOSS and proprietary code, and reported fixes may be a decent proxy for effort. However FOSS components are rarely ‘chosen’, their upgrades are an ‘ungrateful cost’, and their development is not under the company’s control. People often keep using ten-year-old components if they don’t have to upgrade. So how do you analyse all this? The obvious way in is to study which factors affect maintenance effort: the popularity of a FOSS component? Its code complexity? Its security lifecycle? Distributed versus centralised bugfixing? He found that coding standards cut effort; bigger projects need more maintenance; vulnerability lists have a positive effect, perhaps as they signal a security-conscious project; and external security inputs, such as a bug bounty program, help. The absolute numbers of programmers, and of lines of code, don’t seem to matter. However, is the effect of publicity real, or a monitoring epiphenomenon? Does over-diagnosis lead to excess care?

    Janos Szurdi has been studying domain registration policy strategies and the fight against online crime. Crooks register malicious domains for a wide variety of purposes, with some wanting large numbers of domains while others want domains with specific lexical features (e.g. typosquatting). Previously he’d developed classifiers to spot typosquatting, and now he’s been doing a systematic high-level study of registration policies. This is topical because of the conflict at ICANN between Whois 2 and the GDPR. What effects would various policies have on malicious registrations, genuine ones and ICANN’s income? Three policies do well: first, an anti-squatting policy of looking for lexically distinctive features, removing known squatting domains and asking registrants for more information; second, incentivising registries and registrars to push up the bad registries’ costs of doing business; and third, an anti-bulk-registration policy, where genuine bulk registrants would have to provide more information, or else face increasing prices per domain owned. It might take a base price of $100 to deter typosquatters, but anything above $10 might deter benign registrants. Hence the case for the price to escalate with the number of domains owned, which would require registrants to identify themselves as one does for bank accounts. In questions, I asked whether robust identity checks might not do much of the work, while Richard Clayton noted the difficulty of spotting typosquatting in real life and the many people with genuine requirements for multiple domains.
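
    The escalating-price idea can be illustrated with a toy schedule (the base price and step are made up for the example, not figures from the paper):

    ```python
    # The marginal price of the next domain rises with the number already owned,
    # so benign registrants with a handful of domains pay little while bulk
    # typosquatters face rapidly growing costs.
    def marginal_price(n_owned, base=10.0, step=5.0):
        """Price of the next domain for someone who already owns n_owned."""
        return base + step * n_owned

    def total_cost(n_domains):
        return sum(marginal_price(i) for i in range(n_domains))

    print(total_cost(3))    # a benign registrant with three domains
    print(total_cost(500))  # a bulk registrant: the schedule bites hard
    ```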

  2. Kristián Kozák has been measuring the underground trade in code signing certificates, which are starting to be used by malware authors. To begin with, the bad guys would compromise genuine signing keys; now there’s a trade in certs, signature-as-a-service, and signature as part of install-as-a-service. He’s been looking at the supply side, surveying vendor offerings, and also the demand side, by collecting signed malware. He identified four vendors on multiple forums, one of whom opened an e-shop during the observation period (August 2017); this vendor got going in March 2017 and offered no other services. The selling point was “bypass Windows SmartScreen”, and they advertised fresh Comodo certificates for $350. Thawte certificates were available for $600 and Comodo EV certs for $3,000. He also found 14,221 correctly signed malware samples, and about half of the relevant certificates were abused within 40 days of issue; this is different from previously, when the relevant certificates seemed old and compromised. Also, malware authors seem to be confident about their source of supply, as they happily burn certificates. He estimated that the vendor’s revenue could be as much as $10,000 per month; 145 certificates were observed in the wild over 104 days. He has no idea how the vendors obtain the certificates, but suggests that publisher names be standardised so that if a publisher turns out to be a front for a reseller, all its certificates can be revoked at once. His data are here, and a further paper on code signing cert revocation will appear at Usenix Security.

    Masarah Paquet-Clouston has been looking at ransomware payments in the Bitcoin ecosystem. She started with 7,118 bitcoin addresses related to 35 ransomware families and built on the clustering techniques of Meiklejohn and others to get family-specific graphs. She learned it was better to analyse outgoing transactions only, to find the collector addresses; these might be large clusters such as gambling sites (47 of them), exchanges like BTC-e and Kraken that are used to launder the funds directly, or mixers like bitcoinfog; these are the end points of the tracing. The largest mean payments, of over $7,700, were from SamSam, which targeted businesses and charged an unlock fee per machine. The biggest earner, though, was Locky, which made $7.8m while SamSam made just short of $600k. The top three families made 86% of the revenue. Overall, she concludes that from 2013 to mid-2017 ransomware made a bit over $12m, with a few kingpins and a lot of wannabes; and notes that estimates of the overall cost to industry are $350m or more.
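
    The tracing step can be pictured as a walk over outgoing payments until a known collector cluster is hit; here is a toy sketch with an invented graph (the real clusters come from Meiklejohn-style heuristics, not hard-coded labels):

    ```python
    from collections import deque

    outgoing = {                      # address -> addresses it pays out to (invented)
        "ransom_addr_1": ["hop_1"],
        "hop_1": ["btc-e_cluster"],
        "ransom_addr_2": ["hop_2"],
        "hop_2": ["bitcoinfog_cluster"],
    }
    collectors = {"btc-e_cluster", "kraken_cluster", "bitcoinfog_cluster"}

    def trace_to_collector(start):
        """Breadth-first walk along outgoing transactions until a collector is reached."""
        seen, queue = {start}, deque([start])
        while queue:
            addr = queue.popleft()
            if addr in collectors:
                return addr
            for nxt in outgoing.get(addr, []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return None

    print(trace_to_collector("ransom_addr_1"))   # -> "btc-e_cluster"
    ```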

    Edward Cartwright has been developing game-theoretic models of ransomware. He suspects that ransomware is the first cyber-enabled crime that offers viable long-term profit, but the criminals’ economic strategy is naive and could be sharpened considerably. There’s an existing economic literature on kidnapping, going back to Selten and to Lapan and Sandler in the 1980s; although this is mostly about terrorist hostage-taking, can it teach us anything? In Selten’s model the kidnapper demands D, the family offers C, with probability a(1-C/D) the victim is killed, and the kidnapper is caught with probability q. Then there’s a unique subgame-perfect equilibrium: if the victim’s willingness to pay W < qX(1+a)/a, the criminal doesn’t attack; and the victim is released if the family offers at least aW/(1+a)(1-q). The value of a is crucial: the criminal must have a tough reputation or nobody will pay, and also a good reputation for giving victims back if paid. Lapan and Sandler extended this to a deterrence game: you spend a certain amount on backup and the criminal decides whether or not to attack as a result. However the deterrence must be high to work, and if not there’s a spillover: the attack is costly even if it’s defended and there are no other victims. This may justify cybersecurity subsidies or law-enforcement action.
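
    For reference, the two conditions as reported can be typeset as follows (my reading of the slide: X appears to be the criminal’s loss if caught, and I read the release threshold as aW divided by (1+a)(1-q)):

    ```latex
    % Selten-style kidnapping model, conditions as reported in the talk.
    % D: demand, C: counter-offer, a: toughness parameter, q: probability of
    % capture, W: victim's willingness to pay, X: criminal's loss if caught.
    \[
      \text{no attack if } W < \frac{qX(1+a)}{a}, \qquad
      \text{victim released if } C \ \ge\ \frac{aW}{(1+a)(1-q)}.
    \]
    ```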

  3. JT Hamrick started the afternoon session with a talk on the rise and fall of cryptocurrencies. About 45% of exchanges have failed so far, and some take their coins with them; but does the same happen with coins in general? He analysed a dataset from coinmarketcap.com covering February 2013 to February 2018, and harvested 13,000 announcements from bitcointalk.org – from which it emerged that 85% of announced coins fail before being traded publicly. The market seems to be getting more mature over time, but 9 out of 10 coins lose 40-50% in the month after a peak, and abandonment (where 99% of the peak value is lost) is common. Smaller coins are abandoned easily, and also resurrected easily (that is, they recover to at least 10% of the previous peak). The big bubbles for new coins were in 2014 and in late 2017. The main correlation table shows that as the market rises, lots of new coins are created while old ones get abandoned. Traders ride the price wave, and price increases lead to wholesale coin resurrection too. Finally, many coins are no longer tied to the success of bitcoin. In questions, he ventured the opinion that much of the activity in the market seems to be driven by outright scams, but there’s still a fair amount of ‘fear of missing out’ or FOMO. It’s not clear what’s contagious FOMO and what’s pump-and-dump.
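
    The abandonment and resurrection definitions are easy to operationalise on a price series; a minimal sketch with toy numbers, not coinmarketcap data:

    ```python
    def classify(prices):
        """Label a coin's daily price series as active, abandoned or resurrected."""
        peak = max(prices)
        peak_idx = prices.index(peak)
        after = prices[peak_idx:]
        if min(after) > 0.01 * peak:                  # never lost 99% of its peak value
            return "active"
        trough_idx = after.index(min(after))
        if max(after[trough_idx:]) >= 0.10 * peak:    # later recovered to >=10% of peak
            return "resurrected"
        return "abandoned"

    print(classify([1, 5, 100, 20, 0.5, 0.2]))   # abandoned
    print(classify([1, 5, 100, 20, 0.5, 15]))    # resurrected
    ```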

    Christian Rueckert was next on virtual currencies and fundamental rights. How might the Charter of Fundamental Rights of the EU, and the European Convention on Human Rights, constrain bitcoin regulation? Conceivable regulatory approaches range from a blanket ban, through controlling exchanges with fiat money, to transaction blacklisting. Central user databases have been proposed in the EU’s 5th anti-money-laundering directive. Potential control points include miners, exchanges, mixing services, wallet providers, banks, merchants and ordinary users. The relevant case law does not encourage the bulk collection of personal information on citizens without warrant or suspicion. The right to property in article 17 CFR may be engaged; there is a lot of complexity around the definition of property, but in his opinion virtual currencies count as property for these purposes. Other rights, including the freedom to pursue a trade or occupation and freedom of expression, may also be engaged.

    I was third, giving a talk on Bitcoin Redux, which I already blogged here. In it I argued that much of the discussion about bitcoin relates to the system as it is in theory and as it was four years ago, rather than as it is today. The current ecosystem might be very much improved by enforcing existing laws. My talk slides are here.

  4. The last session on Monday was kicked off by Alisa Frik, who’s interested in increasing cyber-security compliance by reducing present bias. Users often postpone software updates, just as they postpone going to the gym; might similar strategies work? She’s been studying the strategies used by various software vendors, and did a user study of 300 participants on their willingness to enable automatic updates, with a control group, a reminder condition and a commitment condition (agreeing to update in a week’s time), the last of these giving the best overall outcome. A second study of 734 mTurkers also covered two-factor authentication and automatic backups. The main reason for not updating now was inconvenient timing; trust, necessity and annoyance were the main roadblocks with 2FA. Mac and Windows users also behave differently, with the former considering the malware risk to be low. So a one-message-fits-all approach to security education doesn’t seem optimal; reminders and commitments only work when present bias is the underlying problem.

    Marco Cremonini was next, with a study of the relative contributions of professional expertise and formal security education to the accuracy of security assessments. The test was to assess 30 vulnerabilities, chosen at random from the CVE list, within 90 minutes, using the CVE data and short CVSS summaries. The idea was to have a good proxy for security problem-solving ability drawing on a variety of concepts and skills, including the ability to deal with uncertainty. There were 35 CS students, 19 security students and 19 professionals; none had previous experience of CVSS. He found that security students and professionals performed more or less the same, with the professionals better on user interaction. The CS students with neither security education nor experience did clearly worse. This work enables us to quantify the benefit of education or experience as a 20% improvement in diagnostic accuracy, plus higher confidence; and the mix of skills explains most of the variance, so should be investigated further. This is consistent with research into performance in software engineering.

    Rahul Telang was Monday’s last speaker, presenting a study of whether online piracy makes computers less secure. The CMU Security Behavior Observatory collected panel data from 300 users by installing sensors on their machines, with consent and IRB approval. This enabled him to see a lot of detail about user behaviour, including downloads, browsing, and malware (as measured by VirusTotal). The only significant factor in malware infection was the total time spent on infringing sites: doubling the time on such sites translates into about a 20% increase in the likelihood of malware infection.
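
    The headline number comes from a regression of this general shape; the sketch below uses synthetic data and invented variable names, not the SBO panel, and is only meant to show why a log-scale regressor makes “doubling” the natural unit:

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    hours = rng.lognormal(mean=1.0, sigma=1.0, size=1000)   # time on infringing sites
    log2_hours = np.log2(hours)
    p_true = 1 / (1 + np.exp(-(-2.0 + 0.25 * log2_hours)))  # synthetic infection risk
    infected = rng.binomial(1, p_true)

    X = sm.add_constant(log2_hours)
    fit = sm.Logit(infected, X).fit(disp=False)
    # The slope is the change in log-odds per doubling of time; for small baseline
    # risks that is roughly a proportional change in infection probability.
    print(fit.params)
    ```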

  5. Tuesday’s first speaker was Claudia Biancotti from the Bank of Italy, presenting The price of cyber (in)security: evidence from the Italian private sector. A regular Bank of Italy survey of firms now includes cybersecurity questions, about frequency and cost of attacks, defensive measures, security expenditure and insurance uptake. The answers suggest that Italian non-financial companies spend very small amounts on it, with the median spend being €4,350 a year and the average for large firms being under €50,000. This may be inflated by state subsidies for staff training. She has identified four clusters of firms: capable, well defended ones that don’t get attacked, typically IT firms; firms with assets that get breached, and to which the Bank of Italy thinks it should pay attention; hapless victims of mass phishing campaigns, of no real importance; and firms nobody cares about. Only IT firms have significant insurance (by which she means over 10% have a standalone policy and over 20% have cyber cover as part of a larger policy). The aggregate data can be downloaded and there’s a remote access portal to run queries on the microdata.

    Mingyan Liu continued with a paper on Embracing and controlling risk dependency in cyber insurance policy underwriting. She discussed how business insurance premiums and deductibles are computed based on firm turnover and modulating factors such as for industry risk (an agricultural firm might pay 85% of the base premium and a tech firm 120%). The “first party modifier factor” is about firm risk, such as your security policy on laptops, websites and disaster recovery; while the “third party modifier factor” is for correlated risk, which might include what ISP you use, your SLAs with service providers and patching policy. She argues that third-party modifiers should be more about real third-party spillover risks and less about due diligence. She proposes offering customers discounts on condition that the money is applied to improving security. She then presented a principal-agent model which suggests an insurer should target all of a service provider’s customers and offer them a discount so long as the service provider defends them properly; this maximises social welfare compared with an arrangement where some of the loss can be recovered from the service provider’s policy. There are some subtleties around loss functions, so there are numerical examples to test the model against some real incident data. In short, insurers should embrace and manage dependencies rather than running away from them.
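
    The premium arithmetic she described amounts to multiplying a base premium by the modifier factors; a toy version (the base and the first/third-party modifiers are invented, the industry factors are the ones mentioned):

    ```python
    def premium(base, industry, first_party, third_party):
        """Base premium scaled by industry, first-party and third-party modifiers."""
        return base * industry * first_party * third_party

    base = 10_000.0  # set by a turnover band (illustrative)
    print(premium(base, industry=0.85, first_party=1.00, third_party=0.95))  # agricultural firm
    print(premium(base, industry=1.20, first_party=1.10, third_party=1.05))  # tech firm, weaker controls
    ```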

    Daniel Woods was the third speaker, discussing Monte Carlo methods to investigate how aggregated cyber insurance claims data impacts security investments. In theory, insurers might act as private-sector regulators; in practice, they don’t always know what works, but feel they will understand the market better once they have more claims data. But how can you use claims data as a quasi-experiment? The adaptive security model was proposed by Böhme and Moore at WEIS 2009, and may be the best model of the cyber-insurance market in its current state. Daniel extends their iterated weakest-link model to multiple defenders. Each defender’s best strategy is to deviate from the initial defensive configuration only when an attack is observed. A further extension is to multiple policyholders with whom an insurer can share data, and a Monte Carlo simulation is run to predict outcomes of active, passive and diverse strategies; the last of these is where the insurer has a control group of passive customers in a quasi-experiment while the others are managed actively, and it offers the highest return on security investment. Insurers sharing claims information may pay less in claims, but the actions (and benefits) of the insured parties may vary because of risk aversion and other parameters. Daniel is still slightly sceptical about the value of claims data as a quasi-experiment, as the available information isn’t sufficiently detailed, and the underlying platforms and cyber-crime behaviour aren’t stable enough. Adaptive security is, he argues, a more robust approach.
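
    A stripped-down Monte Carlo in the spirit of the reactive (iterated weakest-link) strategy looks something like this; the parameters and loss model are illustrative, not the paper’s:

    ```python
    import random

    def simulate_reactive(rounds=50, n_controls=10, loss_per_attack=1.0, seed=1):
        """A defender who only adds a control after observing an attack through it."""
        random.seed(seed)
        protected, losses = set(), 0.0
        for _ in range(rounds):
            gaps = [c for c in range(n_controls) if c not in protected]
            if not gaps:
                break                      # all links defended; attacks stop succeeding
            target = random.choice(gaps)   # attacker exploits an unprotected link
            losses += loss_per_attack      # loss from the successful attack
            protected.add(target)          # reactive fix after the event
        return losses

    print(simulate_reactive())
    ```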

  6. Alain Mermoud has been working on Incentives for human agents to share security information. Theory tells us that security information sharing can reduce information asymmetry and produce positive externalities, but it’s known that ISACs don’t work optimally. This paper reports a questionnaire answered by 262 firms (a response rate of 63% of the 424 asked); the forum was MELANI, a Swiss ISAC. His research hypotheses were (1) that sharing would be reciprocated; (2) that valuable information would be obtained; (3) that there would be no institutional barriers; (4) that there would be a positive effect on reputation; and (5) that sharers would trust each other. Hypothesis 4 was not supported; (1) and (3) were supported strongly, while (2) and (5) were supported to some extent. In questions, Richard Clayton noted that sharing data is harder in the presence of lawyers, policemen or spooks; and the bigger the group, the less sharing there is. So why not examine size as a factor?

    Shu He was next, on Information disclosure and security policy design. She has run a randomised controlled trial, helping some customers to evaluate potential security risks and monitoring their outgoing attack traffic. There were 1,262 organisations involved, from six different countries or territories in Asia; half were in the control group and half in the treatment group. The treatment consisted of three emails over six months giving treated firms feedback about how well they were doing. She found a statistically significant improvement in spam but none in phishing; and those organisations that actually opened the emails had double the improvement. Those that went on to visit the website had a larger reduction still.

    Bernold Nieuwesteeg analysed the EU data breach notification obligation. He was struck by the fact that law and economics, the subject that studies the effectiveness of laws, has almost completely ignored security economics. The GDPR data-breach notification obligation sits alongside those in the e-privacy directive, the eIDAS regulation, directive 2016/680 and the NIS Directive. The private costs and benefits are such that disclosure will be suboptimal in the absence of such laws, but will the law work? For example, will firms keep quiet if the cost of disclosure might bankrupt them? Are the notification thresholds sensible? A low threshold and repeat notification might be better than a high-threshold one-shot approach, but the law is currently unclear. It would be helpful to have carrots as well as sticks; one possibility would be for privacy regulators to offer a data breach first-aid kit.

    The morning’s last speaker was Eduardo Mustri, who has been researching sponsored search advertisement and consumer prices. Previous research on sponsored versus organic search results didn’t look at the effect on prices, so Eduardo ran 2,444 searches on 72 product models on Google, finding 738 vendors on 550 websites, mined the data and compared the prices. Organic results are less likely to have the cheapest price, but the difference is not significant. The model used is slightly complicated as there can be multiple clicks with different expected savings. A simulation suggests that some consumers miss out on bargains by not going through the organic results, and consumers often don’t search for specific products but for products of a particular type. Eduardo is going to collect a lot more data (on quality as well as price) to explore this further, and to explore behavioural and price-discrimination aspects.

  7. Malvika Rao started the last session, talking about a trading market to incentivize secure software. How can we optimise bug fixing in the decentralised peer-production world of free and open-source software? Malvika’s idea is to introduce price signals; she has built a prototype trading market, Bugmark, based on smart contracts, supporting a futures market in the status of specific software components. She hopes this will enable people to just pay to get bugs fixed. In questions it was pointed out that developers could introduce bugs and bet on their being found; and that the introduction of financial incentives can undermine a volunteer ethos.

    Next, Jukka Ruohonen presented a bug bounty perspective on the disclosure of web vulnerabilities. He has studied many bug bounty platforms. Hackers get both monetary and non-monetary rewards; vendors get both rewards and liabilities; but it’s unclear whether there’s a net societal benefit. There are strong network effects: the more vendors use a platform, the more hackers will join. Jukka studied Open Bug Bounty in particular, as an open platform which has disclosed over 150,000 vulnerabilities over more than two years, and has had 703 contributors whose contributions exhibit a power-law distribution, with the most productive having contributed 20,000 bugs. 85% of the vulnerabilities were reported to vendors or website operators on the same day, though many operators may not even be reachable (hence the report to OBB). Patch times average six months, and have fallen from 600 days since the platform launched, presumably as more people pay attention to it. Even simple XSS bugs, found automatically, still take months to patch. Finally, it’s not clear that monetary rewards make that much difference.
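
    The power-law claim about contributors can be eyeballed with a quick log-log fit; the counts below are synthetic stand-ins for the 703 contributors’ totals, not the Open Bug Bounty data:

    ```python
    import numpy as np

    counts = np.sort(np.random.default_rng(2).zipf(a=2.0, size=703))[::-1]
    ranks = np.arange(1, len(counts) + 1)
    slope, intercept = np.polyfit(np.log(ranks), np.log(counts), 1)
    print(f"log-log slope ~ {slope:.2f}")   # a roughly straight log-log line => heavy tail
    ```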

    The last regular paper was by Quanyan Zhu, on the analysis of leaky deception. He’s particularly interested in defence involving active deception, from obfuscation to honeypots. The associated signaling games incorporate dynamic, deceptive and information-asymmetric features; there may also be leaky signals giving some information about defender type. They may have a separating Bayesian Nash equilibrium (in which the type can be guessed) or a pooling equilibrium (which leaks no information); in between there can be a partially separating equilibrium. Detectors can be aggressive or conservative, and this leads to interestingly different regimes. For example, senders will tell the truth more often to aggressive detectors and hope they make false positive errors, but there are various effects of detector quality. And it is not always the case that the sender wants the lowest quality detector; in some regimes she wants to leak information, particularly when the game isn’t zero-sum. In questions, Quanyan suggested that a future research area might be where both sides can leak information.
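
    The “leaky signal” idea boils down to a Bayesian update on the defender’s type; here is a toy example with invented numbers, contrasting an aggressive detector with a conservative one:

    ```python
    def posterior_honeypot(prior, tpr, fpr, alarm):
        """P(honeypot | detector output) for a detector that fires on honeypots
        with probability tpr and on real systems with probability fpr."""
        if alarm:
            num, den = prior * tpr, prior * tpr + (1 - prior) * fpr
        else:
            num, den = prior * (1 - tpr), prior * (1 - tpr) + (1 - prior) * (1 - fpr)
        return num / den

    # Aggressive detector: fires often, including false positives, so an alarm
    # reveals relatively little about the defender's type.
    print(posterior_honeypot(prior=0.3, tpr=0.9, fpr=0.4, alarm=True))
    # Conservative detector: fires rarely, so an alarm is far more informative.
    print(posterior_honeypot(prior=0.3, tpr=0.6, fpr=0.1, alarm=True))
    ```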

  8. The rump session started with David Pym talking about the Journal of Cybersecurity which will have a special issue associated with this conference; authors of conference papers are invited to submit their papers by September 30.

    Susan Landau was next, talking about the second crypto wars, which essentially started after the San Bernardino case. She’s written a book on it, “Listening In”, about the history of the dispute and the issues it raises, and has contributed to a National Academies study as well as an analysis of Ray Ozzie’s escrow proposal.

    Claudia Biancotti will be working on the competitive aspects of big data and deep learning and is interested in collaborators on economic models as well as on the privacy and security aspects. Can we model data as labour rather than capital? Can the problem be framed as one of labour monopsony?

    Klaus Miller has been working on the economic costs of cookie lifetime restrictions. Google volunteered 2 years in 2007 but there are no legally binding restrictions. Some prior studies value a cookie at $2-15 but we know nothing of how this evolves. He’s found an average cookie lifetime of 600 days and a value of about 10 Euros across electronic publishers. A one-year lifetime would cost about a billion a year.

    Maarten van Wieren told us that WEIS was valuable to industry as a place to meet academia.

    Michel van Eeten wants to recruit seven PhD researchers on four-year salaried appointments, working across both the social-science and technical aspects of security.

    Dimitri Percia David has been working on the performance perception of security information sharing. How can we get people to adopt sharing as a long-term behaviour? It takes a positive performance perception; usefulness, reciprocity and regular interaction matter. This is supported with a study of 262 ISAC participants.

    Svetlana Abramova has been working on coin mixing in taint chains. In her paper, she has a game-theoretic model of different tainting strategies under perfect or imperfect information.

    Kate Labunets is doing cyber-insurance research at Delft on Dutch SMEs, exploring protection motivation theory. A helpful insurance broker has introduced her to a number of firms whom she’s interviewing. This is part of the Cybeco project.

    Dominique Machuletz has been studying how many people cover their webcam. A large number of people in the audience admitted to doing this. She’s analysing what sort of people do this, from the viewpoint of the theory of planned behaviour. Users with covers are more likely to feel watched or uncomfortable without a cover; users without covers don’t want to be perceived as paranoid; users don’t believe the webcam indicator light is effective; and 6% claim to have been victims of spying.

    Alexander Proell and Philipp Mirtl of A1 Telekom Austria have a team of 150 people working on IoT services for different industries in Eastern Europe under the brand A1 Digital, and are hiring.

    Fabio Massacci has been thinking of the security economics of the planned move to digital air traffic control. This will pack three times as many planes into the sky, but who will pay for security? The answer is a flat tax of 7 Euros per customer. As Innsbruck is the 145th airport in Europe by volume, it is likely to lose money, but be mandated to reach the same standards as Heathrow or Frankfurt.

    Richard Clayton announced that the Cambridge Cybercrime Centre has masses of cybercrime data to share, will have a one-day conference on July 12th, and is also hiring three people – of whom at least one will be a computer scientist and at least one a social scientist.

    Ganbayar Uuganbayar has been studying whether the price of cyber-insurance could be conditioned on the customer’s time-to-compromise.

    Kanta Matsuura announced a scaling bitcoin workshop; the submission deadline is the end of June.

    Sander Zeijlemaker has built a model of organisational defences against DDoS which enabled firms to model defence capability and costs.

    The last speaker was Ben Kellermann, who works as a pen tester and offers half-day courses to universities. He also wrote dudle, an open-source alternative to Doodle that he developed for his PhD; he’s looking for applications for it.

    Finally, Rainer Böhme thanked the team, and announced that WEIS 2019 will be held at Harvard, with Sam Ransbotham as the program chair and Bruce Schneier as the general chair.

  9. Hi Prof Anderson

    Thank you very much for these summaries. A comment from Klaus Miller on your statement “He’s found an average cookie lifetime of 600 days and a value of about 10 Euros across electronic publishers”:

    KM https://twitter.com/klausmiller/status/1009097451402334209
    “Small correction: the cookie example referred to one cookie. Overall we find an average cookie lifetime value (so far) of 1.48 and an average cookie lifetime of 215 day”

    Thanks for the time to write this all up, saves so many people so much time!
    Daniel
