I’m at Financial Cryptography 2020 and will try to liveblog some of the talks in followups to this post.
The keynote was given by Allison Nixon, Chief Research Officer of Unit221B, on “Fraudsters Taught Us that Identity is Broken”.
Allison started by showing the Mitchell and Webb clip. In a world where even Jack Dorsey got his Twitter account hacked via incoming SMS, what is identity? Your thief becomes you. Abuse of old-fashioned passports was rare, as they were protected by law; now your identity is an email address (which you got by lying to an ad-driven website) and a phone number (which gets taken away and given to a random person if you don’t pay your bill). If you’re lucky you might have a signing key – generated on a general-purpose computer, and hard to revoke, which is what bitcoin theft is often about. The whole underlying system is wrong. Email domains, like phone numbers, lapse if you forget to pay your bill; fraudsters actively look for custom domains and check whether yours has lapsed, while relying parties mostly don’t. Privacy regulations in most countries prevent you from looking up names from phone numbers; many people have phone numbers owned by their employers. Your email address can be frozen or removed because of spam if you’re bad or are hacked, while even felons are not deprived of their names. Evolution is not an intelligent process!

People audit password length but rarely the password reset policy: many firms use zero-factor authentication, meaning information that’s sort-of public, like your SSN. On Twitter you can reset your password, then message customer support asking them to remove two-factor authentication, and they will – so long as you can log on! This is a business necessity, as too many people lose their phone or second factor, so this customer-support backdoor will never be properly closed. Many bitcoin exchanges have no probation period after such changes, whether mandatory or as a customer option. SIM swap means account theft so long as a phone number enables password reset – she also calls this zero-factor authentication.
SIM swap is targeted, unlike most credential-stuffing attacks, and it compromises people who comply with all the security rules. Allison tried hard to protect herself against this fraud but mostly couldn’t, as the phone carrier is the target. An attack can involve data breaches at the carrier, insider involvement and the customer-service backdoor. Email domain abuse is similar: domain registrars are hacked or taken over. Again, the assumptions made about the underlying infrastructure are wrong. Your email can be reset via your phone number and vice versa. Your private key can be stolen via your cloud backups. Both identity vendors and verifiers rely on unvetted third parties; vendors can’t notify verifiers of a hack. The system failure is highlighted by the existence of criminal markets in identity.
There are unrealistic expectations too. As a user of a general-purpose computer, you have no way to determine whether your machine is suitable for storing private keys, and almost 100% of people are unable to comply with security advice. That tells you it’s the system that’s broken. It’s a blame game, and security advice is as much cargo cult as anything else.
What would a better identity system look like? There would be an end to ever-changing advice; you’d be notified if your information got stolen, just as you know if your physical driving license is stolen; there would be an end to unreasonable expectations of both humans and computers; the legal owner of an identity would be the person identified, and it would be non-transferable and irrevocable; it would not depend on the integrity of third-party systems like DNS, CAs and patch-management mechanisms; and we’ll know we’re there once the criminal marketplace vanishes.
Questions: What might we do about certificate revocation? A probation period is the next thing to do, as the way people learn of a SIM swap is a flood of password-reset messages in email, and then it’s a race. I asked whether, rather than fixing the whole world, we should fix it one relying party at a time – banks give you physical tokens after all, as they’re regulated and have to eat the losses. Allison agreed; in 2019 she talked about SIM swap to many banks but had no interest from any crypto exchange. Curiously, the lawsuits tend to target carriers rather than the exchanges. What about SS7? There are sophisticated Russian criminal gangs doing such attacks, but they require a privileged position in the network, like BGP attacks. What about single sign-on? The market is currently in flux and might eventually settle on a few vendors. What about SMS spoofing attacks? Allison hasn’t seen them in 4g marketplaces or in widespread criminal use. Caller-ID spoofing is definitely used, by the bad guys who organise SWATting. Should we enforce authentication tokens? The customer service department will be inundated with people who have lost theirs, and that will become the backdoor. Would blockchains help? No, they’re just an audit log, and the failures are upstream. The social aspect is crucial: people know how to protect the physical cash in their wallet, and a proper solution to the identity problem must work like that. It’s not an impossible task, and might involve a chip in your driver’s license. It’s mostly about getting the execution right.
The first refereed paper session was started by Federico Franzoni, who has been working on a better way of embedding botnet command and control in the blockchain. People have tried to use the main blockchain as a censorship-resistant broadcast medium for nefarious purposes (see ZombieCoin), but the bandwidth is limited and it costs money. Federico’s idea is to use the testnet rather than the mainnet. There, op_return is not limited to 80 bytes per transaction, and outputs can fall below the dust limit, normally 546 satoshis. Coins are free in small quantities from faucets, and you can mine a few tBTC a day anyway, which lets you send a few hundred MB a day. Shared accounts mean that the channel can be bidirectional. This raises the question of whether nonstandard transactions are actually needed on testnet.
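The mechanics are simple enough to sketch. Here is a hypothetical illustration (not Federico’s code) of packing a message larger than 80 bytes into op_return output scripts, which testnet will relay but mainnet standardness rules would reject:

```python
# Illustrative sketch: chunk a C&C message into OP_RETURN output scripts.
# Mainnet standardness caps the data carrier at 80 bytes per transaction;
# testnet relays larger payloads, so many small outputs suffice.
OP_RETURN = 0x6a

def op_return_script(payload: bytes) -> bytes:
    assert len(payload) <= 75          # keep to a single-byte pushdata
    return bytes([OP_RETURN, len(payload)]) + payload

def embed(message: bytes, chunk: int = 75) -> list:
    """Split a message into one OP_RETURN script per output."""
    return [op_return_script(message[i:i + chunk])
            for i in range(0, len(message), chunk)]

scripts = embed(b"x" * 200)            # a 200-byte message needs 3 outputs
```

The dust-limit point is orthogonal: since op_return outputs are provably unspendable, the value attached to each can be zero, so the channel costs only fees, and testnet coins are free from faucets.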
Qin Wang has been doing an analysis of NEO, one of the longest-established blockchains, which is widely used in China. It uses a delegated Byzantine fault tolerance (dBFT) mechanism and claims to be secure against f < n/3 adversaries. It was known that moving from Barbara Liskov and Miguel Castro’s three-phase PBFT (as used in Hyperledger) to the two-phase dBFT could lead to insecurity. Qin’s analysis of NEO’s source code shows that there are indeed attacks on safety, making conflicting decisions possible. The protocol itself is pretty Byzantine, with a committee, a leader election, a pre-prepare and a prepare step, followed by view-change and reply. The CAP theorem says we can’t have all of consistency, availability and partition tolerance. Qin suggested a patch, using quorums of 2f+1 replicas, which has now been applied.
Sasha Golovnev was one of a couple of dozen people who could not travel to the conference because of the air transport disruption in Asia, and gave a video talk. His subject was Breaking the encryption scheme of the Moscow internet voting system. A new system for electronic voting in three wards of the city of Moscow in 2019 had a public testing period, in which Sasha and Pierrick Gaudry broke it twice. There was no spec, but the source code was put online a day before the first public test. It turned out that it used ElGamal encryption with keys under 256 bits; the encryption was done three times with different keys, but the designers were unaware that triple encryption doesn’t strengthen ElGamal the way it does DES! Their first attack was simple key recovery: they tested the available NFS implementations and found that CADO-NFS was orders of magnitude faster than Sage or Magma, once they fixed some bugs, and could break the election system on a laptop in ten minutes. The election authorities changed to 1024-bit ElGamal, whereupon a second attack was found: the encryption was not semantically secure, leading to a one-bit leak via a subgroup attack – quite enough to distinguish between the two candidates in an election. The developers denied that this attack worked but silently changed the code anyway. There was also an Ethereum blockchain for vote tallying, which vanished after the election result was declared, and the link between the decryption and the blockchain was broken when the keysize was increased. Quite apart from the shambolic development lifecycle and the lack of documentation beyond the source code, no attention was paid to coercion resistance. Sasha noted that the Netherlands banned electronic voting in 2008, Germany in 2009, and Norway in 2013.
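To see why textbook ElGamal leaks a bit: if the plaintext isn’t forced into the quadratic-residue subgroup, its Legendre symbol survives encryption and can be read off the ciphertext. A toy demonstration with my own small parameters and candidate encodings (not the actual Moscow code):

```python
# Toy demo of the one-bit leak in textbook ElGamal over Z_p*: the
# Legendre symbol of the ciphertext reveals that of the plaintext,
# enough to distinguish two candidates whose encodings differ.
import random

p = 1000003        # prime; p % 8 == 3, so 2 is a quadratic non-residue
g = 2              # base chosen as a non-residue mod p

def leg(a):
    """Legendre symbol of a mod p: 1 for residues, -1 for non-residues."""
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

x = random.randrange(2, p - 1)         # private key
h = pow(g, x, p)                       # public key

def encrypt(m):
    k = random.randrange(2, p - 1)
    return pow(g, k, p), (m * pow(h, k, p)) % p

def leak_plaintext_symbol(c1, c2):
    # leg(c1) gives the parity of k, leg(h) the parity of x, so the
    # attacker computes leg(h^k) from public data and divides it out.
    leg_hk = 1 if (leg(c1) == 1 or leg(h) == 1) else -1
    return leg(c2) * leg_hk

assert leak_plaintext_symbol(*encrypt(4)) == 1    # 4 is a residue
assert leak_plaintext_symbol(*encrypt(2)) == -1   # 2 is not
```

If the two candidates’ encodings fall on opposite sides of this distinction, every “encrypted” ballot is readable, with no key recovery needed – which is what made the second break so damning.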
The final speaker in the Monday morning session was Nils Wisiol, on how XOR Arbiter PUFs have Systematic Response Bias. Arbiter PUFs have the intrinsic feature that their responses lie on a hyperplane, which is why people building circuits with them XOR several together. Since 2002 there have been half a dozen designs attempting to combine arbiter PUFs in novel ways, all of which got broken. In real silicon, all PUFs have a systematic response bias – so how does this work through? Quite simply, if the number of arbiter chains is even, the bias will tend to come through somehow, and no design so far has dealt with that properly. Future designs must bear bias in mind, as well as the host of machine-learning attacks.
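The even/odd effect drops straight out of the piling-up lemma; a little sketch of the phenomenon (my illustration, not Nils’s model):

```python
# P(XOR of k independent bits = 1) when each bit is 1 with probability p
# (piling-up lemma).
def xor_bias(p: float, k: int) -> float:
    return (1 - (1 - 2 * p) ** k) / 2

# With an even number of chains, the XOR output is skewed towards 0
# whatever the sign of the per-chain bias -- a systematic, predictable
# skew an attacker can exploit:
assert xor_bias(0.55, 4) < 0.5
assert xor_bias(0.45, 4) < 0.5
# With an odd number, the direction simply follows the input bias:
assert xor_bias(0.55, 3) > 0.5
assert xor_bias(0.45, 3) < 0.5
```

The even case is the dangerous one: the designer cannot cancel the skew by trimming individual chains, because (−2ε)^k is positive for even k regardless of the sign of ε.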
The first speaker after lunch was Kevin Negy, re-examining selfish mining. Some people dislike the existence of selfish mining as it violates the “folk theorem” that bitcoin is incentive-compatible; they argue that selfish mining needs to persist for it to be profitable. Kevin argues that intermittent selfish mining is possible and profitable, once you account for difficulty adjustment on the main chain: a 49% selfish miner slows the public chain, driving the difficulty down, after which it switches back to honest mining and takes profit. So one needs to pay attention to how, and how quickly, blockchains react to variations in hash power, and to do the game theory about whether new miners should be selfish or not.
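The retargeting arithmetic behind this is simple; a toy sketch of a Bitcoin-style difficulty adjustment (ignoring the 4x clamp and other details of the real rule):

```python
# Bitcoin-style retarget: scale difficulty by the ratio of expected to
# actual epoch duration.
def retarget(difficulty: float, expected: float, actual: float) -> float:
    return difficulty * expected / actual

# While a 49% miner withholds its blocks, only 51% of the hash power
# extends the public chain, so the epoch takes 1/0.51 times as long and
# difficulty falls to 51% of its previous value:
d = retarget(100.0, 1.0, 1 / 0.51)
assert abs(d - 51.0) < 1e-6
```

After the adjustment the attacker mines honestly at nearly half the difficulty, which is where the intermittent strategy’s profit comes from.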
Francisco Marmolejo Cossío was next, on Fairness and Efficiency in DAG-based Cryptocurrencies. He has a mathematical model of throughput versus fairness and efficiency for ledgers that use not a chain but a directed acyclic graph (DAG), where there is a separate DAG of transactions, and where miners may have local information that’s private from other miners. Miners may also have to pay attention to how well they’re connected to the P2P network.
Bernhard Haslhofer’s subject is Stake Shift. Current proof-of-stake systems run with slightly stale measurements of stake and assume that’s OK, so there’s an interesting empirical question of how much stake shifts in practice over periods of 1–14 days. Algorand uses a one-day lag and Ouroboros seven days, for example; as randomness comes from the blockchain, you need some delay so that attackers can’t capture it. As there’s not enough data on PoS ledgers, Bernhard collected stake-shift data from PoW chains. He concludes that the stake lag should be as short as reasonably possible; that the shift spikes get smaller over time; and that the big stake shifts are associated with large stakeholders such as exchanges. Given the rate of exchange hacks, this might be a concern.
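A natural way to quantify stake shift over a window is the statistical distance between the stake distributions at its two ends; a sketch of that idea (my formulation, which may differ in detail from the paper’s definition):

```python
# Stake shift as total-variation distance between two stake
# distributions, each a mapping address -> fraction of total stake.
def stake_shift(old: dict, new: dict) -> float:
    keys = set(old) | set(new)
    return sum(abs(old.get(k, 0.0) - new.get(k, 0.0)) for k in keys) / 2

# A large holder (say, an exchange) moving 30% of all stake to a fresh
# address shows up directly in the metric:
before = {"exchange": 0.5, "alice": 0.3, "bob": 0.2}
after = {"exchange": 0.2, "alice": 0.3, "bob": 0.2, "carol": 0.3}
assert abs(stake_shift(before, after) - 0.3) < 1e-9
```

An adversary’s effective advantage is bounded by the stake it held at the stale measurement point plus whatever has shifted since, which is why shorter lags are safer.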
The final talk of the session was given by video by Mingchao Fisher Yu, on data availability attacks. Scalable blockchain systems support light nodes which hold only block headers. These may be taken from blocks that are deeply embedded in the main chain, or light nodes might rely on a full node to alert them to inconsistencies. A data availability attack involves tweaking a transaction or two in a block while withholding some of its data, to make such alerts harder to raise. However, as honest nodes broadcast what they have, the truth will eventually be known to all honest nodes. Mingchao has a proposal for coded Merkle trees based on erasure coding.
Shengjiao Cao has been working on decentralised privacy-preserving payment netting. She’s been working with central banks in Canada and Singapore on whether a permissioned blockchain could improve interbank payment systems. These used to use centralised overnight netting; as they move to real-time gross settlement, there’s a risk of liquidity gridlock when large payments pass through a series of banks. Existing systems rely on central bank systems to deal with this, and the question is how much the central bank has to be trusted. If the payments are on a ledger of which a central bank has an overall view, it can find a payment path to resolve gridlock without breaking any liquidity, priority or fairness constraints. She’s done experiments based on hyperledger: a smart contract collects proposals from participants and iterates a search of the nettable set.
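The gridlock-resolution idea can be sketched as a greedy search for a feasible nettable set (a toy illustration of the concept, not her smart-contract protocol):

```python
# Greedy gridlock resolution: start from all queued payments and drop
# payments from any overdrawn bank until what remains can settle
# simultaneously without anyone's balance going negative.
def nettable_set(balances: dict, payments: list) -> list:
    active = list(payments)                    # (sender, receiver, amount)
    while True:
        net = dict(balances)
        for s, r, a in active:
            net[s] = net.get(s, 0) - a
            net[r] = net.get(r, 0) + a
        overdrawn = {b for b, v in net.items() if v < 0}
        if not overdrawn:
            return active
        worst = max((p for p in active if p[0] in overdrawn),
                    key=lambda p: p[2])
        active.remove(worst)                   # drop it and try again

# Classic gridlock: no bank can pay 10 from a balance of 5, but the
# whole cycle nets to zero and can settle at once.
cycle = [("A", "B", 10), ("B", "C", 10), ("C", "A", 10)]
assert nettable_set({"A": 5, "B": 5, "C": 5}, cycle) == cycle
```

The point of putting this on a ledger the central bank can see is that finding such a set needs a global view of the queues, which individual banks in a gross-settlement system lack.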
Amani Moin presented a systematisation-of-knowledge paper on stablecoins. Such coins are basically IOUs for dollars or other currencies. They have a market cap of $4.6bn, most from tether, and may be important in view of Facebook’s proposal for Libra.
Lewis Gudgeon started Tuesday’s session with a systematisation of knowledge paper on layer 2, which aims to improve the throughput of blockchain payment systems from the 7 transactions per second of bitcoin to the 50,000 of Visa. Layer 2 protocols allow transactions via a medium outside, but tethered to, the layer 1 blockchain. Alice and Bob open a channel by locking coins on a layer 1 blockchain, and transact until they decide to close it; there’s also a dispute resolution mechanism involving layer 1. Payment channel networks can be built from conditional transfers where Bob sends Alice h(R), and Alice says “if you show me R by time t I’ll give you a coin.” They need routing algorithms which may be local or global. In practice, networks like Lightning appear to organise with hubs, which evolve into commit chains. Here a non-custodial operator is trusted for availability but not for funds; the channel is always open, like a banking network. Security involves protecting balances (honest users must not lose money even if people collude); second-order issues include the need to be online, so you have the risks of hot wallets, and network failure could mean mass exit. Current constructions are inefficient as they tie up collateral; we also need to understand exactly how much such networks cost.
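The conditional transfer Lewis described is just a hash lock with a deadline; a minimal sketch:

```python
import hashlib
import time

def h(x: bytes) -> str:
    return hashlib.sha256(x).hexdigest()

# Bob picks a secret R and gives Alice its hash; Alice's payment can be
# claimed only by revealing R before the deadline.
R = b"bob-secret-preimage"
lock, deadline = h(R), time.time() + 3600

def claim(preimage: bytes, now: float) -> bool:
    """Alice's condition: a timely, correct preimage releases the coin."""
    return now <= deadline and h(preimage) == lock

assert claim(R, time.time())
assert not claim(b"wrong-guess", time.time())
```

Chaining such conditions across several channels with the same hash is what lets a payment route through intermediaries atomically: either R is revealed all along the path, or every hop times out.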
Justin Cappos was next, talking by video on MicroCash, which implements the probabilistic payments suggested by David Wheeler, reimagined as a distributed lottery protocol on bitcoin. It is optimised for concurrent micropayments with an exact win rate.
Cristina Pérez-Solà followed with LockDown: Balance Availability Attack against Lightning Network Channels. The time lock in the Lightning network can be leveraged for denial of service: attackers use target nodes as intermediate nodes in payment chains to exhaust their capacity, so that they can’t earn fees or see transactions. They can even loop payment paths, such as M -> B1 -> A -> B2 -> M; none of the implementations checks for this.
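The attack economics can be modelled in a few lines (a toy model, not the authors’ code): time-locked in-flight payments tie up channel capacity until they settle or expire, so routing enough of them through a victim leaves it unable to forward anything.

```python
# Toy model of the LockDown attack: each in-flight HTLC locks part of a
# channel's capacity until its timelock expires.
class Channel:
    def __init__(self, capacity: int):
        self.capacity, self.locked = capacity, 0

    def lock(self, amount: int) -> bool:
        if amount > self.capacity - self.locked:
            return False               # no liquidity left to route this
        self.locked += amount
        return True

victim = Channel(capacity=100)
for _ in range(10):                    # attacker loops ten 10-coin payments
    assert victim.lock(10)
assert not victim.lock(1)              # victim can now route nothing
```

Because the looped payments start and end at the attacker, they cost little beyond fees, while the victim earns nothing until the locks time out.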
Lioba Heimbach has been working on the game theory of payment channels, modelling them as network creation games. She assumes, to simplify things, that players have unlimited capital and that channels are created unilaterally; the complexity in the cost function comes from node centrality. Where closeness centrality is valued, the socially optimal graph is the complete graph, while the path graph dominates if betweenness is valued instead; for payments the star graph may be best. Stability is about the distance between the Nash equilibrium and the social optimum.
On Wednesday afternoon we had a panel session on crypto engineering for the real world. I started off, talking about the security economics of cryptocurrencies. Economic incentives are absolutely central to the operation of blockchains, yet there are much broader questions than just the game theory of selfish mining. All crypto products that have scaled have ended up entangled with liability, and blockchain won’t be an exception. Since 2013, Mt Gox has led the world to custodial exchanges, which was like the move from gold merchants to banks in the 18th century: rather than having the merchant store your gold coins, you simply have a call on the bank’s store of gold – along with many others. This has catalysed fraud; the work we reported in Bitcoin Redux discovered that most victims of bitcoin theft never actually owned a bitcoin! Instead they were ripped off by exchanges which pretended to give them bitcoin. Rather than worrying about the tiny volumes of Lightning transactions, the off-chain transactions we should worry about are the ledger transactions whereby Coinbase transfers value from one of its customers to another. These should fall under the e-money regulations. We also need to apply the payment services regulations, to compel exchanges to offer their customers decent two-factor authentication. We researchers need to expand our scope from the basic game theory of the transactions to the surrounding environment: to the interactions with customers, regulators and others.
Jean Camp’s talk was entitled “Peeling the lemons market”. People rely on mobile wallets, and they’re a lemons market. Wallets integrate various mobile transactions; permissions are opaque, and there’s no personalised risk information. App developers are busy people who want to get the product finished. It takes effort to make an app usable: it has to be easy to understand and aligned with users’ mental models. We are terrible at communicating with users – and the only other industry that calls its customers “users” is the drug / tobacco industry. The icon for Spectre looks like Casper bringing the marshmallows! To change market behaviour you need to regulate, and to reframe things so that security becomes a gain. The app store could display a certain number of locks to rate payment products.
Peter Landrock talked on business models in technology. A technology business typically has three phases: an initial phase with enthusiastic investors; a second where you hit reality and have to start finding customers, as well as dealing with regulators; and a third where the service providers take control – as dealing with individual customers is expensive. You absolutely need scalable ways of dealing with unsophisticated customers. The maths may be interesting, but there’s much more to it! For digital signatures, for example, it’s WYSIWYS – what you see is what you sign. You need to design products that the service providers can use if you want to get them standardised.
Allison Nixon has seen many patterns of abuse that might be fixed if the incentives were different; a failure to align them causes a huge amount of waste. She sees cryptocurrency victims getting no help from exchanges and going to social media instead; Allison ended up as unpaid customer support. It would be great to get statistics of fraud by institution so prospective customers could understand the fraud rates, and perhaps the app store could rate their apps. In many cases, such as SIM swap attacks on cryptocurrency wallets, things have continued unabated for years because there was nothing the users could do, and the phone companies weren’t held to account. Nobody got warnings, and the system essentially betrayed them: they were told they were safe, but they were not. Consider the recent phone-data selling scandal: the regulator did nothing for years until a third party, a newspaper, kicked up a stink.
Alex van Someren used to make HSMs 25 years ago, and they’re still a thing – audience members may be using them as an anchor for a chain of trust. All sorts of abstruse hardware techniques are used to create the dependable platform, and it usually breaks elsewhere. There will always be a weakest link, whether it’s the specification holes in PKCS11 or elsewhere. As an HSM vendor, he noted this caused implementers to pay less attention to the security of the rest of the system. So it’s best to suspend your belief in the blockchain or other mechanism; your platform isn’t concrete but quicksand. You need to do threat modelling at every layer of the stack of the whole system. With banking, the law has been built around how banks operate, yet the cryptocurrency people who take a libertarian view are then surprised when people don’t trust them. Sooner or later you have to rely on legislation.
Discussion started with the fact that often you only get redress when you sue; the difference between risk-based and harm-based regulation; cyberinsurance, and its ability to influence security (not much); the different styles of regulation at industry, national and supranational level; and the tension between safety and privacy.
Thursday’s first talk was a video recording by Nicolas Christin, presenting work by his student James Arps, who spent a year studying OpenBazaar, an online drugs market. Just as the centralised music-sharing site Napster was followed by peer-to-peer systems like Gnutella, the idea here is to provide a decentralised successor to Silk Road that the FBI could never seize. As DarkMarket, they raised $4m. The research question was whether it fulfils its promise. In 2012, Silk Road was grossing $15m a year; AlphaBay later managed $200m a year. OpenBazaar is based on IPFS and Kademlia and supports Tor; to search it you use Obiwan, which by default lists only legal items – to find marijuana, a little more effort is demanded of the user. James’ crawler recursively finds all the peers by asking them for their neighbours, and asks for their offerings. Between June 2018 and September 2019 he found 24,379 items for sale. There were 5,230 users and 1,254 sellers, of which 167 were active. Most users came in via Tor but only 40% of active sellers did. Active sellers using Tor are much longer-lived; most regular users were there for one day or less, and there were about 80 users active at any time on average. Most advertised items were legal, but most sales were of illegal drugs. The volumes are tiny compared with centralised markets: some $86,000 over more than a year compared with hundreds of millions for the centralised markets. A significant number of sellers offer to take Zcash, but users’ opsec was flaky: over 17% of Tor users leaked a clear IP address at some point because of persistent user IDs. In questions, Nicolas said that the lack of demand was probably due to the difficulty of finding stuff; they had to censor illegal search results in order to get access to venture capital.
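The crawler is essentially a transitive closure over neighbour queries; schematically (with a lookup table standing in for the actual network RPC):

```python
# Recursively discover all reachable peers by repeatedly asking known
# peers for their neighbour lists; `neighbours_of` stands in for the
# network query a real crawler would issue.
def crawl(seeds, neighbours_of):
    seen, frontier = set(), list(seeds)
    while frontier:
        peer = frontier.pop()
        if peer in seen:
            continue
        seen.add(peer)
        frontier.extend(neighbours_of(peer))
    return seen

# Toy topology: "d" knows "a" but nobody points back at "d", so a crawl
# seeded at "a" never finds it.
topo = {"a": ["b"], "b": ["c", "a"], "c": [], "d": ["a"]}
assert crawl(["a"], lambda p: topo.get(p, [])) == {"a", "b", "c"}
```

Running such a crawl repeatedly over fifteen months is what yields the user and seller lifetimes reported above.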
In any case, centralised markets were working rather well until mid-2019 so there was no reason for people to go elsewhere; and the centralised markets had the real advantage of their escrow systems to mitigate the risk of ripoffs.
The second talk was by Tong Cao, who has been exploring the Monero peer-to-peer network. Attackers can see which bitcoin nodes are richest, and the network topology is also in plain sight, opening the possibility of attacks on routing. Monero is a much smaller system; its selling point is greater anonymity, using stealth addresses and ring signatures. In its peer-to-peer network, each node maintains a white list of nodes it interacts with, plus a grey list of nodes that are on neighbours’ white lists. Tong built a neighbour-finding tool that uses timestamp similarity as a side channel to tell which IP address is being promoted from the grey list to the white list on each node. He found that 13% of nodes maintained 83% of connections; half of them were in the USA.
Artemij Voskobojnikov has been studying the Perception and Management of Risk Among North American Cryptocurrency (Non)Users. Security usability research to date has focussed on bitcoin; what about the other 5000-odd “cryptoassets”? He identified three use cases: investment, transactions, and others such as betting on basketball games. The main types of loss were errors such as deleting a private key, and exchange shutdowns. Fears included fraudulent ICOs, how you transfer assets to your heirs when you die, and armed robbery. Usability issues included the need to wait for days to get registered on an exchange, the difficulty of downloading and using a wallet, and the fact that some users believed they had access to their private key on exchanges such as Coinbase. Risk management techniques included multiple wallets. Reasons for non-use included lack of knowledge, lack of regulation, the lemons market in both assets and exchanges, and the fact that crypto assets are seen as abnormal. He suggests that the industry should optimise the onboarding process, provide wallets aimed at users with different levels of sophistication and probably modelled after regular online banking, and provide sandpits for new users to play and acquire confidence without risking real money.
Friedhelm Victor started the last session of FC2020 with a talk on Address clustering heuristics for Ethereum. The heuristics used to cluster addresses in bitcoin don’t work in Ethereum, so new techniques are needed. One of his ideas is to exploit exchange deposit addresses: these sweep incoming ether to central wallets and so are visible. Shapeshift and Binance are among the largest clusters found this way; Coinbase uses a different system. Another is airdrop multi-participation: when tokens are given away for some purpose, users try to collect them with multiple addresses. Friedhelm illustrated the heuristic working on the Bionic airdrop. Such heuristics are all about usage patterns rather than intrinsic structure, but still cluster 17.9% of addresses. Another area for future research is ICOs, DeFi and smart contracts in general.
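The deposit-address heuristic can be sketched in a few lines (hypothetical addresses and a toy transfer list; a real pipeline starts from a labelled set of exchange hot wallets):

```python
from collections import defaultdict

# Hypothetical labelled exchange hot wallet.
HOT_WALLETS = {"0xExchangeHot"}

def deposit_clusters(transfers):
    """transfers: (sender, receiver) pairs. Any address that forwards
    funds into a known hot wallet is treated as a per-customer deposit
    address, and everyone who paid into it is clustered together."""
    deposits = {s for s, r in transfers if r in HOT_WALLETS}
    clusters = defaultdict(set)
    for s, r in transfers:
        if r in deposits:
            clusters[r].add(s)
    return dict(clusters)

txs = [("alice1", "dep1"), ("alice2", "dep1"), ("dep1", "0xExchangeHot"),
       ("bob1", "dep2"), ("dep2", "0xExchangeHot")]
assert deposit_clusters(txs)["dep1"] == {"alice1", "alice2"}
```

The heuristic works because exchanges give each customer a dedicated deposit address and sweep it to a hot wallet, so two addresses funding the same deposit address very likely belong to one customer.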
Josselin Feist has been looking for Actual Flaws in Important Smart Contracts. He surveyed 23 professional audits that found 246 bugs, and found that the bugs were distributed much the same as in regular software; a large proportion can be found automatically, but not all, and there’s no obvious relation between high-quality unit tests and observed reliability. Josselin believes that unit tests merely confirm developers’ expectations, while the bugs come up in edge cases that they didn’t think about. Seventeen of the audits are now public. He cross-checked with two other companies and found similar results from 18 and 19 of their audit reports respectively.
Ningyu He gave a video talk on Characterizing Code Clones in the Ethereum Smart Contract Ecosystem. Ningyu found that a lot of scam smart contracts were clones of code from the game Fomo3D, and so tried to analyse their prevalence at the scale of the ecosystem. He uses a fuzzy hash algorithm as a similarity metric: he splits code at opcodes such as jumps, hashes the resulting pieces of code to characters, and then concatenates them into a signature. This seems to work well enough for smart contracts because of their simplicity. He clustered results by airdrop exploits, ICOs and ERC-20 tokens. He worked with PeckShield’s Ethereum vulnerability scanner, which finds that about a third of contracts are vulnerable, while over 58% of cloned contracts are. More generally, over 46,000 vulnerable contract pairs were found, and Ningyu believes that copy-and-paste behaviour by developers is a significant cause of vulnerability (as with regular software). He also developed different techniques for matching DApps.
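The fingerprinting step can be sketched roughly like this (my own opcode choices and hashing details, not Ningyu’s exact algorithm):

```python
import difflib
import hashlib

JUMP_OPS = {0x56, 0x57}            # EVM JUMP and JUMPI opcodes

def signature(bytecode: bytes) -> str:
    """Split bytecode at jump opcodes and map each piece to one letter,
    giving a short string whose edit similarity tracks code similarity."""
    pieces, cur = [], bytearray()
    for op in bytecode:
        cur.append(op)
        if op in JUMP_OPS:
            pieces.append(bytes(cur))
            cur = bytearray()
    if cur:
        pieces.append(bytes(cur))
    return "".join(chr(ord("a") + hashlib.sha256(p).digest()[0] % 26)
                   for p in pieces)

def similarity(a: bytes, b: bytes) -> float:
    return difflib.SequenceMatcher(None, signature(a), signature(b)).ratio()

code = bytes([0x60, 0x01, 0x56, 0x60, 0x02, 0x57, 0x00])
assert similarity(code, code) == 1.0
```

Because a clone typically keeps most basic blocks intact, its signature differs from the original’s in only a few characters, so a high edit-similarity threshold catches it cheaply at ecosystem scale.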
The last speaker of FC2020 was Søren Debois, talking on Smart Contracts for Government Processes. There’s been a lot of pressure on Danish local governments to get into blockchains, so Søren and his students studied whether smart contracts might help with their real needs. They looked at a system for paying parents who have to take time off work to care for chronically ill children, which has complex legal rules and mandated steps that clerks often miss, leading to appeals. One might put hashes of the case state on the Ethereum blockchain, with the confidential medical records elsewhere, so that both parents and the appeals board can track them. One might hope for better outcomes if automating the execution of decisions eliminated bureaucratic foot-dragging. But what about insiders, hackers and mistakes? Local governments tend to get hacked a lot and end up paying ransoms. And who updates the contract when the law changes, or a bug is discovered? The real deal-breaker, though, was the fear of what happens if a local government loses its access token. In questions, other issues came up: rigidity, in the sense that people have to bend the rules to get stuff done, and local governments’ fear of losing control of the process.
Peter Gutmann’s keynote talk at the AsiaUSEC workshop on the Friday was entitled “Availability and Security: Choose any One”. Availability requirements aren’t always well understood by equipment designers; an example comes from electricity generators, where some data-centre operators remove the safety cutouts, as it’s rational to risk running a generator to destruction rather than allow the much more valuable data-centre operations to fail early. This is just one example of over-reacting to potential faults. The interaction with security is even more complex, as security is in some sense fault intolerance. There is almost nothing in any spec about the trade-off between security and availability. Consider fault mitigation: how do you “continue with degraded functionality” if one of your bounds checks has failed? One technique in safety systems is rejuvenation, or restarting the system; this can deal with slow memory leaks. But what if a system has extreme MTBF requirements, perhaps to run for decades? You can’t flip a 240MW generator in and out just to clear some state in a signalling protocol. There are PLCs in SCADA systems that predate SSL, and the standard one-year certificate expiry built into CA business models has the wrong time constant by more than an order of magnitude. Can you have a button that says “ignore expired certs, just run anyway” – the idea of “break-glass security”? And what about systems for which availability dominates security? You might not be bothered about a PLC sending spam so long as the control system keeps running. Another case is where false alarms are unacceptable, such as in nuclear weapons control, where the device is armed by a signal that is extremely unlikely to be generated at random – in the event, for example, of a plane crash followed by a live wire moving back and forth over the signal wire.
In fact, much of our standard model of computer security and access control comes from 1960s mainframes and is obsolete in a world of embedded and mobile devices.