7 thoughts on “WEIS 2022 – Liveblog”

  1. The first speaker as WEIS returned to an in-person meeting was Brenden Kuerbis, whose topic was Exploring the role of data enclosure in the digital political economy. Many services are making user data less available, not just for privacy but for platform competition and monetization. He’s studied DNS over HTTPS (DoH), and the move by Apple and Google to lock down mobile advertising identifiers. In DoH, query resolution is unbundled from internet access and presented as a privacy advance. In Google Public DNS, about 6% of the traffic is DoH, but it has had little impact, with Mozilla continuing to lose browser share and most resolution still happening within users’ ISPs. In the case of mobile advertising identifiers, only about 20% of Apple users opted into sharing their identifiers, and spending on Apple Search Ads has grown strongly since the policy change in mid-2020. Facebook appears to be the loser, while Google’s ad revenues have continued to climb gently.

    The second speaker was Rajiv Garg, talking about the Impact of App Privacy Label Disclosure on Demand: An Empirical Analysis. Do consumers actually care about privacy notices? With over 2m observations of 37,088 iOS apps over 208 days, before and after privacy label disclosure was mandated by Apple, he found that privacy labels do indeed have a negative impact on demand. This was typically a drop of about half a percent in downloads, although it varies by category, being significant for many free apps but less so for paid ones. There was no effect for social media apps, but the effect was present for other free apps across the gaming, productivity, travel and utility categories. It was present regardless of whether the app disclosed that it would track the user or link data to them – or even if it claimed that the data collected would be anonymous and not identify the user at all.

    The third speaker, in a reshuffle of the announced program, was Ying Lei Toh, talking about Prior Fraud Exposure and Precautionary Credit Market Behavior. She’s studied the 2017 Equifax data breach and whether its victims were likely to freeze their credit or close credit card accounts. Are prior ID theft victims more likely to take precautionary action, what about prior fraud victims, and how do the two effects interact? She studied a 5% sample of US consumer credit bureau data (11.8m accounts) and compared it with the Equifax data. The proportion of customers ordering credit freezes rose from 0.7% to 2.8%. Ying Lei used fraud alert flags to identify ID theft victims, and identified fraud victims from the 2015 Anthem data breach, which was geographically concentrated. The Equifax breach had a very significant effect only on freezes. The effect on card closures was less significant; prior breach victims close fewer cards while prior ID theft victims close more.

  2. Amy Sheneman has been analysing the evolving views of managers, investors and the SEC on cybersecurity risk. Her paper, Contagion or competitive effects? Lenders’ response to peer firm cyberattacks, looks at debt markets, where (unlike equity markets) substantial private information gets exchanged and risk assessments focus on downsides rather than upsides. So do peer cyber-attacks signal increasing risk to an industry (a contagion effect), or are they idiosyncratic and thus a signal of the breached firm’s incompetence (a competitive effect)? She used breach data from the Privacy Rights Clearinghouse from 2005, with the breached firms themselves removed; the primary result was a lack of contagion. This was measured from loan spreads, but similar results arise from other debt variables such as loan size. In heavily regulated industries like telecoms and healthcare, where she expected a competitive effect, she found a small contagion effect instead.

    Tyler Moore and Noa Barnir gave a talk on Empirically Evaluating the Effect of Cybersecurity Precautions on Incidents in Israeli Enterprises. Their dataset is a survey of 2020 firms by the Israeli National Cyber Directorate in 2020-1, asking about 20 possible precautions, firm characteristics, and incidents. There’s an endogeneity problem in that firms with more precautions are more likely to detect incidents, and a timing problem with precautions adopted after attacks. Nonetheless their headline result is that four basic precautions (a strong password policy, patching, backups and AV) cut the annual probability of experiencing an incident from 69% to 21%. Higher-value firms (those that are large, or international) get attacked more, as do firms with more technical exposure (those that use cloud services, or that use information about the behaviour of visitors to their websites).

    Oleh Stupak wrapped up the morning with No research, no risk? An empirical study of determinants of information leakage attacks. Should we believe the claims that cybercrime costs $1-2 trillion per year, with a substantial part attributable to leakage of trade secrets and the like? If so, research-intensive industries should be hit worse; so do they get attacked more? Oleh used three 2010 Eurostat databases on business structure, technology and the digital economy, together with data on research and technician staff numbers, other Eurostat data on the rate of information leakage, and Verizon data breach reports from 2010-20. He finds that research-oriented manufacturing industries are more likely to fall victim to targeted information leakage attacks.

  3. The afternoon started with a panel on cyber-insurance, chaired by Tyler Moore. The first speaker was Scott Stransky, Head of the Cyber Risk Analytics Center at Marsh McLennan, which has been going for a year now. Marsh is a broker that helps customers buy cover; large risks may need to go to multiple insurers. As a result they have answers to 150 questions from each customer plus extensive data on claims, which also go through them. He finds three types of bias in databases like the Privacy Rights Clearinghouse: recency bias (it takes about a year for incident data to get there), US bias (America has more breaches, and more regulation) and large-firm bias (publicly traded companies can’t sweep stuff under the rug). Marsh also licenses data from outside-in scanners such as Bitsight and SecurityScorecard; Bitsight has 23 subscores of which 16 are predictive in the right direction and five are perverse (such as DKIM records).

    The second speaker was Erin Kenneally, Global Director of Cyber Insurance at SentinelOne. The ransomware epidemic has led some insurers to leave the market, and premium increases are throttling sales volumes. Firms are fed up with filling out long questionnaires, and there are questions anyway about compliance with written information security policies. External data from firms like Bitsight is hard to reconcile with actual controls. What we need now is better convergence between cybersecurity firms and cyber-insurance. We need the equivalent of smoke alarms and sprinklers – devices that actually reduce risk. What are they? MFA for one. We also need the equivalent of CCTV – things that provide telemetry so that incidents can be understood.

    The third speaker was Daniel Woods of Edinburgh University. He’s been studying the claims process; many firms get a lawyer to manage investigations so that any dangerous revelations are covered by attorney-client privilege. A lot of valuable information about offenders, their tools and their tactics is therefore never shared. His second question is whether insurance companies are the best stakeholders to fix the world, as their processes aren’t all that responsive.

    Discussion ranged over many topics, including the reasons why people report incidents; shame is a factor, as a cyber incident is more shameful than hurricane damage. Convergence is hard because the culture of security companies is quite different from that of insurers, as are the incentives. The principal-agent problem at the heart of insurance makes it intrinsically different from the security industry. Brokers like Marsh may be in a better position to reach out to threat intelligence firms and other stakeholders. Much of the data the industry has collected isn’t that useful as it’s in the form of PDFs; better NLP software may help. One question is whether insurance makes things worse, if ransomware gangs target insured firms; but perhaps they just target large firms, which are more likely to be insured. The industry is still not at a stage where it can take a nuanced view of the details of controls; security is too much a market for lemons. Insurers can run simulations of the risks from hurricanes and other natural catastrophes, but actuarial modeling of cyber catastrophes is in its infancy.

  4. Tuesday’s last session was started by Jonathan Goohs, who’s been Reducing Attack Surface by Learning Adversarial Bag of Tricks. In 2021, 28,000 CVEs were disclosed and thousands of patches appeared; how do you prioritise? An ideal scoring system would combine effort, danger, exploitability, the number of affected systems, the ease of identifying the vuln and its age. Assuming we can assign numbers, hazard minimisation is just a weighted knapsack problem. An alternative approach is oblivious patch planning. To compare the two, hundreds of thousands of simulations were run. The knapsack approach reduces hazard more quickly than it reduces the raw bug count, but if we’re not that sure of the detailed hazard scores, oblivious planning is probably a good baseline. In dynamic games, we can draw inspiration from the FlipIt game.
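
    By way of illustration (this is my own sketch with invented hazard scores and effort costs, not the authors’ scoring system), the knapsack framing looks roughly like this: each candidate patch removes some hazard and costs some effort, and we pick the subset that removes the most hazard within an effort budget.

```python
# Illustrative only: patch prioritisation as a 0/1 knapsack. Each patch removes
# some "hazard" (a made-up composite of danger, exploitability, exposure etc.)
# and costs some integer "effort"; pick the subset maximising hazard removed
# within an effort budget.

def prioritise_patches(patches, effort_budget):
    """patches: list of (name, hazard_removed, effort_cost) with integer costs."""
    dp = [(0.0, [])] * (effort_budget + 1)   # dp[e] = (best hazard, chosen patches)
    for name, hazard, cost in patches:
        for e in range(effort_budget, cost - 1, -1):
            candidate = dp[e - cost][0] + hazard
            if candidate > dp[e][0]:
                dp[e] = (candidate, dp[e - cost][1] + [name])
    return dp[effort_budget]

# Invented scores and costs for four hypothetical vulnerabilities.
patches = [("CVE-A", 9.1, 3), ("CVE-B", 5.0, 1), ("CVE-C", 7.5, 4), ("CVE-D", 2.2, 1)]
print(prioritise_patches(patches, effort_budget=5))   # picks CVE-A, CVE-B and CVE-D
```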

    Next was Sébastien Gillard, on Efficient Collective Action for Tackling Time-Critical Cybersecurity Threats. Modern cyber security depends on collective action via CERTs, ISACs, bug bounty programs and open-source intelligence. How can its efficiency be measured? He studied Luxembourg CERT data from 2018-22 on a malware information sharing platform to which 1,908 organisations contributed 36,639 events. The time taken to deal with events decreased slowly until mid-2019, then rapidly once “galaxies” were introduced to deal with clusters of events. This provides evidence that collective action and knowledge integration can improve defence performance.

    Tuesday’s last speaker was Sara Banna, who spoke on Treating the Symptoms or the Cause? Talent Acquisition in Response to Data Breaches. Do firms respond to breaches substantively, such as by hiring cybersecurity talent, or symbolically, by hiring lawyers and PR people to deal with the symptoms? The answer turns out to be “both”, but this holds only for hacking and card-related breaches rather than the loss or theft of devices. High-profile firms are more likely to hire substantive talent, however. Her data was job postings from 2010-20 scraped by Burning Glass Technologies.

  5. Wednesday’s sessions started with Julien Piet telling us about Extracting Godl [sic] from the Salt Mines: Ethereum Miners Extracting Value. Decentralised exchanges typically run on Ethereum, whose miners order transactions to maximise their profits. They can include private transactions, creating information asymmetry. Their maximum extractable value (MEV) is what they can extract beyond the gas fee, and may include arbitrage or front-running. Piet has devised an MEV detection algorithm based on the fact that MEV trades create cycles in the token transfer graph. He found that 90% of MEV transactions are done using private transactions, while under 3% of other DeFi trades are private, and estimated annual profits at $200m, based on a 10-day sample from February 2022. The profits were 63% from front-running, 21% from back-running and 16% from arbitrage. Two-thirds of the profits went directly to miners, who are now getting 9% of their income from MEV, sometimes via trading and sometimes by charging elevated gas fees to trading bots. The top ten miners make over half the profits. What’s more, we see blocks that are valuable enough to motivate a large miner to fork the chain several times a week. If we were redesigning Ethereum we might encrypt transactions until committed and randomise transaction order.
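
    To make the cycle idea concrete, here is a toy sketch (invented addresses and amounts, not Piet’s actual pipeline): within a single transaction, the token transfers form a directed graph, and a trade that starts and ends at the same address shows up as a cycle through that address.

```python
# Illustrative only: spotting candidate MEV trades as cycles in a transaction's
# token transfer graph. Addresses and amounts below are invented.
import networkx as nx

transfers = [  # (from, to, token, amount) observed within one transaction
    ("0xBOT", "0xPOOL1", "WETH", 10.0),
    ("0xPOOL1", "0xBOT", "DAI", 18000.0),
    ("0xBOT", "0xPOOL2", "DAI", 18000.0),
    ("0xPOOL2", "0xBOT", "WETH", 10.4),
]

g = nx.DiGraph()
for src, dst, token, amount in transfers:
    g.add_edge(src, dst)

# Any directed cycle through the trading address is a candidate MEV trade.
print([c for c in nx.simple_cycles(g) if "0xBOT" in c])

# Crude profit check: did the trader end up with more of the token it started with?
weth_out = sum(a for s, d, t, a in transfers if s == "0xBOT" and t == "WETH")
weth_in = sum(a for s, d, t, a in transfers if d == "0xBOT" and t == "WETH")
print("WETH profit before gas:", weth_in - weth_out)   # about 0.4 in this toy example
```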

    Andrew Morin was next, talking on Breaking the Stablecoin Buck: Measuring the Impact of Security Breach and Liquidation Shocks. The collapses of stablecoins like IRON (last year) and three others (in recent weeks) remind us that algorithmic stablecoins are linked; even Tether had $1bn of assets in other stablecoins. Andrew has been studying the effects of breaches at exchanges backing stablecoins. Stablecoins function like cash but without the regulation; redemption can be arbitrarily delayed. Although they are the lifeblood of the cryptocurrency ecosystem, they are wide open to runs. Andrew found 37 significant breach events last year, and 50 liquidation shocks in which short positions were wiped out at scale. His measurements lead him to conclude that Tether, the de facto reserve currency, is indeed vulnerable to both breaches and liquidity shocks.

    The third speaker was George Bissias and his topic was Pricing Security in Proof-of-Work Systems. How is security allocated among multiple proof-of-work blockchains? He graphed when bitcoin was more profitable than bitcoin cash or vice versa; the lead swaps back and forth with advantage rarely exceeding 2%; this is intimately tied to their exchange rate. In short, miners track equilibrium fairly closely; George sees this as arbitrage, driven in part by the difficulty adjustment algorithms. There are also secondary markets for hash rates. Price is often predictive of future hash rate; if the converse were ever true, it’s been exploited away.

  6. Daniel Woods kicked off the last refereed paper session with Characterising 0-Day Exploit Brokers, a study of the Zerodium and Crowdfense exploit markets. These work differently from bug bounty markets, as there are continuing payments for exploits – including possible maintenance to defeat patching. Zerodium prices have been visible since 2016 and Crowdfense prices since 2019; full exploit chains for operating systems with persistence now attract seven-figure sums, and the focus is mobile. Less capable exploits such as remote code execution and privilege escalation are in six figures. As for exploits of apps, messengers cost in the high six figures while browsers and email are in the low six figures. Can exploit prices measure the cost of compromise? There’s some evidence for this, as exploits that don’t need user interaction, or that are persistent, cost more. However, sometimes a temporary bounty is posted for a specific product, so there’s a demand side too.

    The second speaker was Henry Skeoch with Modelling Ransomware Attacks using POMDPs. A partially observable Markov decision process or POMDP models states, actions, rewards, observations, and conditional probabilities of transitions and observations. Henry argues that a POMDP can improve on the standard game-theoretic analysis of ransomware in that it can also model the spread of malware across a target network from less important machines to more important ones, for which a more serious ransom might be paid. Henry has done a number of simulations to explore how different policies might be modelled.
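
    For concreteness, here is a tiny made-up two-state example of the POMDP ingredients (not Henry’s model): the defender can’t observe directly whether a machine is compromised, and updates a belief over states from noisy alerts.

```python
# Illustrative only: a tiny two-state POMDP in the ransomware spirit. The defender
# cannot see whether a machine is compromised and relies on noisy alerts.
states = ["clean", "compromised"]
actions = ["wait", "isolate"]
observations = ["no_alert", "alert"]

# T[a][s][s']: probability of moving from state s to s' under action a
T = {
    "wait":    {"clean": {"clean": 0.9, "compromised": 0.1},
                "compromised": {"clean": 0.0, "compromised": 1.0}},
    "isolate": {"clean": {"clean": 1.0, "compromised": 0.0},
                "compromised": {"clean": 0.8, "compromised": 0.2}},
}
# O[a][s'][o]: probability of observing o after action a lands in state s'
O = {a: {"clean": {"no_alert": 0.95, "alert": 0.05},
         "compromised": {"no_alert": 0.3, "alert": 0.7}} for a in actions}
# R[a][s]: immediate reward (isolating is costly; staying compromised is worse)
R = {"wait": {"clean": 0.0, "compromised": -10.0},
     "isolate": {"clean": -2.0, "compromised": -3.0}}

def belief_update(belief, action, obs):
    """Bayes update of P(state) after taking `action` and observing `obs`."""
    new = {s2: O[action][s2][obs] * sum(T[action][s][s2] * belief[s] for s in states)
           for s2 in states}
    total = sum(new.values())
    return {s: p / total for s, p in new.items()}

belief = {"clean": 0.95, "compromised": 0.05}
print(belief_update(belief, "wait", "alert"))   # belief shifts towards "compromised"
```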

    The last regular speaker of WEIS was Richard Clayton, talking on A “sophisticated attack”? Innovation, technical sophistication, and creativity in the cybercrime ecosystem. All attacks are described as sophisticated, and most aren’t. (There are exceptions, like Stuxnet and SolarWinds.) Why is attack sophistication talked up? Richard worked through a number of cases. One Lincolnshire schoolkid who did routine fake voucher scams was nonetheless described by the court as sophisticated. The same applied to the TalkTalk exploit in 2015, which involved an SQL injection attack that was older than the kids who used it. Attackers want to boast; corporate victims want to deny they were negligent; their IT departments want more budget; they may want to engage law enforcement; law enforcement wants to explain its failures or boost its successes; the security industry wants to sell stuff; individuals want to give Black Hat talks; and there’s a lot of ignorance. The bad effects of this include effects on sentencing: the proportion of frauds described as “sophisticated” rose fourfold from 2005-13, and meanwhile there’s no point in trying to divert young offenders into pentesting if they don’t have the necessary skills. The security ecosystem has many more oddities, and communities that don’t interact with each other much. The creativity involved in finding new technical attacks doesn’t have much to do with the creativity of inventing new crime scripts. Richard challenges the idea in the criminology literature of a “ladder” up which criminals climb; a better analogy is a set of rock pools, where the best that a typical operator can do is become a bigger crab in their pool. One exception was the entrepreneur who moved from running a high-yield investment program to building a cryptocurrency exchange; but any business model that can easily be replicated by unskilled people should not be considered sophisticated.

  7. Carol Aplin kicked off the rump session. She has been doing a correlation study for Marsh McLennan between the self-assessment cyber controls questionnaires that clients fill out and the claims they later file. Firms with revenue over $1bn were more likely to be targeted; she found no useful signals among the small companies. Among big ones, the largest signal was AV software, which made claims two-thirds less likely, followed by smaller signals from firewall policies and regular deletion of old accounts.

    Chitra Marti gave two short talks, the first on how hacker competition may affect attack impact. Will a hacker exploit a vuln, sell it to another hacker, or sell it to the firm? There are some interesting secondary effects around keeping it private; vulns become rival goods rather than public goods. The second was on platform liability with unattributed harm; should we hold users liable for having weak passwords on a platform, or punish the platform for not ensuring better passwords? Ideas to chitra.marti@nyu.edu.

    Geoffrey Simpson has been measuring the prevalence of visual impersonation techniques used to create lookalike domain names for business email compromise. He’s been studying coding tricks (like swapping 1 for l) and domains that mimic subdomains, such as hostedapp-williams.com in place of hostedapp.williams.com.
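
    A toy sketch of the two tricks (with invented example domains, not drawn from his dataset):

```python
# Illustrative only: the two lookalike-domain tricks mentioned above. The
# domains are invented examples, not taken from the study's dataset.
SWAPS = {"l": "1", "1": "l", "o": "0", "0": "o", "i": "l"}

def char_swap_variants(domain):
    """Single-character swap lookalikes, e.g. paypal.com -> paypa1.com."""
    name, dot, rest = domain.partition(".")
    variants = set()
    for i, ch in enumerate(name):
        if ch in SWAPS:
            variants.add(name[:i] + SWAPS[ch] + name[i + 1:] + dot + rest)
    return variants

def flatten_subdomain(fqdn):
    """Turn a real subdomain into a lookalike registrable domain,
    e.g. hostedapp.williams.com -> hostedapp-williams.com."""
    labels = fqdn.split(".")
    if len(labels) < 3:
        return None
    return "-".join(labels[:-1]) + "." + labels[-1]

print(char_swap_variants("paypal.com"))             # {'paypa1.com'}
print(flatten_subdomain("hostedapp.williams.com"))  # hostedapp-williams.com
```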

    Nan Clements has been studying whether hospitals are more likely to be attacked after they have merged, yielding bigger targets with more complex and less manageable IT systems.

    Svetlana Abramova has been studying whether central bank digital currencies are likely to thrive, running a survey for the Austrian national bank in July 2021. She split respondents into three groups: cryptocurrency owners, the tech-savvy and the cash-affine. About half of the sample were interested, and half of those thought they might get some advantage. Offline functionality is going to be more important than P2P payments. The report can be had from svetlana.abramova@uibk.ac.at. She then gave a second short talk about a study of the Ledger data breach; Ledger is a firm selling a crypto wallet, and the breach affected 270k users in 2020. Victims suffered some harassment and scams, including invitations to use dodgy wallets. She got IRB approval to spam 32k of them, getting 105 usable responses – but also a lot of negative pushback.

    Shuonan Niu has been studying the relationship between organisations’ externally observable security control behaviour and reported ransomware incidents. The biggest correlation is with open RDP (3389) ports from the Rapid7 database, matched to organisations using the Maxmind IP database.
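
    The matching step is essentially a join of scan records against IP-range ownership data; a hypothetical sketch with invented IP blocks and organisation names:

```python
# Illustrative only: joining open-RDP scan observations to organisations via
# IP-range ownership. The IP blocks, org names and scan rows are invented.
import ipaddress
from collections import Counter

org_blocks = [(ipaddress.ip_network("198.51.100.0/24"), "ExampleCorp"),
              (ipaddress.ip_network("203.0.113.0/24"), "SampleHealth")]
open_rdp_ips = ["198.51.100.17", "198.51.100.201", "203.0.113.5"]  # port 3389 open

exposure = Counter()
for ip in map(ipaddress.ip_address, open_rdp_ips):
    for net, org in org_blocks:
        if ip in net:
            exposure[org] += 1

print(exposure)   # Counter({'ExampleCorp': 2, 'SampleHealth': 1})
```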

    Zijun Ding gave two short talks. The first was on online targeted advertising, which is now more than half of the total but is resented and blocked, both by users and by Apple. He’s run experiments to explore how blocking affects user behaviour. Users don’t appear to benefit as much as they expected. The second short talk was on the welfare implications of global anti-tracking efforts. How could we model the effects of a large-scale ban? Competition between ad intermediaries is after all a prisoners’ dilemma, and a global ban might save everyone money.

    I gave a quick talk on which bugs get patched. We discovered similar attacks on NLP systems (the Bad Characters attack) and source code (the Trojan Source attack); the latter got patched eventually by most of the affected organisations, and the former only by one.

    Tyler gave a brief talk on behalf of Oliver Wyman, which has been working on a cyber risk literacy and education index.

    Oleh Stupak’s subject was an economic approach to modeling endogenous network formation. How can we construct an efficient network in the presence of an intelligent attacker? He has a game-theoretic framework for building secure and efficient networks, where at each step the attacker tries to compromise one node while the defender protects k nodes. In some parameter ranges you end up with a “maxi-core network” with two protected core nodes each of which has a star relationship to each of the other nodes.
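
    A toy one-round version of such a game (my own illustration, not Oleh’s framework) might protect the k highest-degree nodes, let the attacker remove the most damaging unprotected node, and score the surviving connectivity:

```python
# Illustrative only: one round of a protect-k-nodes vs remove-one-node game on a
# small graph, scored by the size of the largest surviving connected component.
import networkx as nx

def play_round(g, k):
    # Naive defender heuristic: protect the k highest-degree nodes.
    protected = set(sorted(g.nodes, key=g.degree, reverse=True)[:k])
    # The attacker removes whichever unprotected node hurts connectivity most.
    def damage(v):
        h = g.copy()
        h.remove_node(v)
        return g.number_of_nodes() - max(len(c) for c in nx.connected_components(h))
    targets = [v for v in g.nodes if v not in protected]
    victim = max(targets, key=damage) if targets else None
    survivors = g.copy()
    if victim is not None:
        survivors.remove_node(victim)
    largest = max(len(c) for c in nx.connected_components(survivors))
    return protected, victim, largest

# A two-hub network: nodes 0 and 1 are each linked to every other node.
g = nx.Graph([(0, 1)] + [(0, v) for v in range(2, 8)] + [(1, v) for v in range(2, 8)])
print(play_round(g, k=2))   # protecting both hubs leaves a single attack nearly harmless
```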

    Richard Clayton talked about the crime and abuse datasets available for researchers to license from the Cambridge Cybercrime Centre.

    The last speaker was Thomas Maillart, who’s funded by the Swiss defence department to study open source software ecosystems. His starting point is the Rawsec inventory of open-source cybersecurity tools. A bipartite graph of authors and projects is quite densely populated, suggesting that there is some real community and interchange; can this be used to predict the long-term sustainability of individual projects?
