Financial Cryptography 2016

I will be trying to liveblog Financial Cryptography 2016, which is the twentieth anniversary of the conference. The opening keynote was by David Chaum, who invented digital cash over thirty years ago. From then until the first FC people believed that cryptography could enable commerce and also protect privacy; since then pessimism has slowly set in, and sometimes it seems that although we’re still fighting tactical battles, we’ve lost the war. Since Snowden people have little faith in online privacy, and now we see Tim Cook in a position to decide which seventy phones to open. Is there a way to fight back against a global adversary whose policy is “full take”, and where traffic data can be taken with no legal restraint whatsoever? That is now the threat model for designers of anonymity systems. He argues that in addition to a large anonymity set, a future social media system will need a fixed set of servers in order to keep end-to-end latency within what chat users expect. As with DNS we should have servers operated by (say ten) different principals; unlike in that case we don’t want to have most of the independent parties financed by the US government. The root servers could be implemented as unattended seismic observatories, as reported by Simmons in the arms control context; such devices are fairly easy to tamper-proof.

The crypto problem is how to do multi-jurisdiction message processing that protects not just content but also metadata. Systems like Tor cost latency, while multi-party computation costs a lot of cycles. His new design, PrivaTegrity, takes low-latency crypto building blocks and then layers transaction protocols with large anonymity sets on top of them. The key component is c-Mix, whose spec is up as an eprint here. There’s a precomputation using homomorphic encryption to set up paths and keys; in real-time operation each participating phone has a shared secret with each mix server, so things can run at chat speed. A PrivaTegrity message is four c-Mix batches that use the same permutation. Message models supported include not just chat but publishing short anonymous messages, providing an untraceable return address so people can contact you anonymously, group chat, and limiting sybils by preventing more than one pseudonym being used. (There are enduring pseudonyms with valuable credentials.) It can handle large payloads using private information retrieval, and also do pseudonymous digital transactions with a latency of two seconds rather than the hour or so that bitcoin takes. The anonymous payment system has the property that the payer has proof of what he paid to whom, while the recipient has no proof of who paid him; that’s exactly what corrupt officials, money launderers and the like don’t want, but exactly what we do want from the viewpoint of consumer protection. He sees PrivaTegrity as the foundation of a “polyculture” of secure computing from multiple vendors that could be outside the control of governments once more. In questions, Adi Shamir questioned whether such an ecosystem was consistent with the reality of pervasive software vulnerabilities, regardless of the strength of the cryptography.
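To make the real-time path concrete, here is a toy sketch of mixing with precomputed shared secrets; it is my illustration of the idea rather than the c-Mix protocol itself, whose homomorphic precomputation also hides the slot bookkeeping that this sketch does in the clear.

```python
# Toy cMix-style mix (illustration only, not the real protocol).  Each client
# shares a blinding factor with every server, so real-time work is just cheap
# modular multiplications plus each server's fixed secret permutation.
import math, random

P = 2**127 - 1                            # toy prime modulus; real groups are larger

def inv(x):                               # modular inverse via Fermat's little theorem
    return pow(x, P - 2, P)

n_clients, n_servers = 4, 3
shared = [[random.randrange(2, P) for _ in range(n_servers)]
          for _ in range(n_clients)]      # shared[i][j]: client i <-> server j
perms = [random.sample(range(n_clients), n_clients) for _ in range(n_servers)]

messages = [101, 202, 303, 404]
batch = [messages[i] * math.prod(shared[i]) % P for i in range(n_clients)]

slot_client = list(range(n_clients))      # slot -> client map (kept secret by
for j in range(n_servers):                # the precomputation in real c-Mix)
    # server j strips its own blinding factor from each slot, then permutes
    batch = [batch[s] * inv(shared[slot_client[s]][j]) % P
             for s in range(n_clients)]
    batch = [batch[perms[j][s]] for s in range(n_clients)]
    slot_client = [slot_client[perms[j][s]] for s in range(n_clients)]

assert sorted(batch) == sorted(messages)  # messages emerge, order shuffled
```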

I will try to liveblog later sessions as followups to this post.

16 thoughts on “Financial Cryptography 2016”

  1. The first refereed paper was by Youngsam Park on Understanding Craigslist Rental Scams. He once answered a Craigslist rental ad and was asked to send a deposit to Nigeria, so realised it was a scam. He did a quantitative survey of rental scams and looked at their monetization methods. He built a rental ad crawler which fed a scam identifier and a simple email conversation engine based on keywords. Crawling 2m rental ads on Craigslist found 30k scam ads in three categories: clones of genuine ads (9%), of which two-thirds are from Nigeria according to geolocation; credit reference scams (two-thirds), where the “landlord” signs up the would-be tenant for credit checking in order to get a phone-verified Craigslist account, which can be sold on for $5; and fake pre-foreclosure rental ads (about a quarter), which do a bait-and-switch to sign up the victim for a non-free realtor service. The victim can’t get a refund unless he gets denial letters from three properties on the list; the amounts are $50–200. Only 46.8% of scam ads were flagged, and the median time to flag an ad was 13 hours.

    Ian Molloy spoke next on Graph Analytics for Real-time Scoring of Cross-channel Transactional Fraud. He was looking for analytics that would look only at “X is paying Y” rather than the channel specifics, in an assignment for ABN Amro. The available data were payer, payee and time period for June–August 2012, and which transactions were reported as fraudulent, covering hundreds of millions of accounts worldwide and billions of transactions. His hypothesis was that transaction networks define a “community” of normal transactions, so he constructed 1, 2, 4 and 10-week graphs. Simple shortest-path metrics worked well regardless of the time period; strongly connected components less so; while pagerank of the payee was fairly powerful, and even better when weighted with transaction amount. Egonets occasionally highlighted interesting anomalies but they tended to be wealthy customers rather than crooks. When asked for an analytic that would respond within 100ms, they built a classifier using random forests, using precomputed pagerank and optimising shortest path using minimum spanning trees, key points and breadth-first search. They trained it on month two and tested it on month three, getting an AUC of 0.93.
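    As a rough illustration of the kind of feature this describes, here is a sketch of amount-weighted pagerank on a toy transaction graph feeding a random forest; the field names and data are mine, not ABN Amro’s pipeline.

    ```python
    # Sketch: amount-weighted PageRank of the payee as a fraud feature (toy data).
    import networkx as nx
    from sklearn.ensemble import RandomForestClassifier

    txns = [("A", "B", 120.0), ("B", "C", 80.0), ("A", "C", 5000.0),
            ("D", "C", 4900.0), ("C", "E", 60.0)]      # (payer, payee, amount)

    G = nx.DiGraph()
    for payer, payee, amount in txns:
        G.add_edge(payer, payee, weight=amount)

    pr = nx.pagerank(G, weight="weight")               # payee importance by amount

    def features(payer, payee, amount):
        try:                                           # hops in the prior graph
            hops = nx.shortest_path_length(G, payer, payee)
        except nx.NetworkXNoPath:
            hops = 99                                  # unreachable -> big constant
        return [pr[payee], amount, hops]

    X = [features(*t) for t in txns]
    y = [0, 0, 1, 1, 0]                                # toy fraud labels
    clf = RandomForestClassifier(n_estimators=100).fit(X, y)
    ```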

    Earlence Fernandes’ talk was on Android UI Deception Revisited: Attacks and Defenses. Evil Android apps lurk until a target activity is detected in the foreground, then launch an overlay to hijack it; this is hard to deal with as the UI doesn’t show provenance and is increasingly complex. Bianchi and others had proposed an overlay mutex defence, “What the App is That?”, at Oakland last year; this turns out to have timing and other side-channel vulnerabilities in the app verifier service it uses, which Earlence exploited to display overlay windows without getting caught. He ran a demo of his code stealing Facebook credentials. He wrote and tested a better overlay mutex (which enforces temporal integrity), and tested whether it could catch a MITM attack where Alice, on opening Skype, is given Facebook Messenger instead, which relays the call via a Skype client on the attacker’s phone. This specific attack could be blocked, but there are limits to what a background non-system app can do by way of defence in Android; such mechanisms should really be built into the platform.

    The morning’s last speaker was Jassim Aljuraidan, speaking on Introducing Reputation Systems to the Economics of Outsourcing Computations to Rational Workers. He’s interested in how to stop cheating in outsourced computations; for example, if matching faces to surveillance videos, why not just return “no”, as that’s the right answer almost all the time? He models this as a game where the outsourcer allocates N jobs to k workers in each round and checks a subset t_i of results from worker i, with probability p of spotting cheating when you test a job; each job has a cost and a markup m. He develops models initially for two workers, where it’s clear that if the cheating detection probability p is low, it’s hard to do much; you’re better off investing in raising it than in paying workers more. You have to have some fat in the system, so that penalties bite, and you’d better distribute work evenly among workers, and maybe you’d better stop workers organising.
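    A back-of-envelope sketch, under my own simplified assumptions rather than the paper’s model, shows why a low detection probability p dominates: with few tested jobs and a modest fine, cheating beats honest work, and raising p flips the incentive faster than raising the wage does.

    ```python
    # Simplified cheating game (my assumptions, not the paper's exact model):
    # a worker gets r per job, does n jobs at cost c each, has t jobs tested,
    # and pays a fine F if any tested job exposes cheating.
    def honest_payoff(n, r, c):
        return n * (r - c)

    def cheat_payoff(n, r, t, p, F):
        p_caught = 1 - (1 - p) ** t    # chance at least one test catches it
        return (1 - p_caught) * n * r - p_caught * F  # cheats skip the cost c

    n, r, c, t, F = 100, 1.0, 0.6, 10, 20.0
    for p in (0.05, 0.2, 0.5):
        print(p, honest_payoff(n, r, c), round(cheat_payoff(n, r, t, p, F), 1))
    # p=0.05: cheating pays (~51.8 vs 40); p=0.2: it no longer does (~-7.2)
    ```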

  2. The first speaker after lunch was Ian Miers, speaking on Accountable Privacy for Decentralized Anonymous Payments. He’s the inventor of zerocash, which does not comply with anti-money-laundering rules, and wonders whether these rules could be recast to be more privacy friendly. He uses zkSNARKs to build a decentralised anonymous payment system that enforces specific transaction policies while protecting user privacy. Zerocash enforced only one policy, namely that the monetary inputs to a transaction must not be less than its outputs; you can enforce other things too. You can keep state on the blockchain by creating a coin that’s a counter, and have real names by getting a trusted third party to issue signed counters. He discussed the mechanics of enforcing jurisdiction; a £10k limit for suspicious activity reports (the authority signs the transaction to say they’re aware of it); and ways to allow the authorities to trace specific coins (which is perhaps too powerful as it traces all descendant transactions forever) or users. Tracing can be audited after the fact. The use of zkSNARKs means that all policies are visible; covert policies might need something like trusted hardware.

    Amira Barki was next, talking on Private eCash in Practice. In her system a merchant doubles as the bank; it’s aimed at tolling and ticketing, with offline operation, unlinkability, unframeability and revocability, with payments within 300ms. It uses Chase, Meiklejohn and Zaverucha’s algebraic MAC construction to create a new partially blind signature scheme and thus a token where the signed balance available can be updated by the toll gate. An implementation took 205 or 315 ms depending on whether the phone was already switched on.

    Abdelrahaman Aly has been working on Practically Efficient Secure Single-Commodity Multi-Market Auctions with a view to interconnecting electricity markets in Europe, where the regulator wants energy auctions to establish an hourly market clearing price within ten minutes; how do you link up auctions between (say) Belgium, the Netherlands and Luxembourg? The producers would prefer not to disclose volume and price. Multiparty computation was used for beet auctions with Danisco in this paper; Abdelrahaman has adapted the ideas with a push-relabel algorithm for dealing with the network flows, to get a greedy algorithm that’s linear in the number of bids but polynomial in the number of markets. He did a prototype using NTL and GMP and tested it by distributing the 2000 energy bids made one day in Belgium across 2–4 markets. Two markets took just over ten minutes to process while four markets took twice as long; so a practical implementation looks feasible.
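    For intuition, here is what the regulator’s target quantity looks like computed in the clear for a single market; the point of the MPC protocol is to evaluate this (plus the cross-market flows) over secret-shared bids. The bid figures are invented.

    ```python
    # Uniform clearing price in the clear (toy single-market version; the MPC
    # protocol computes the multi-market equivalent on secret-shared bids).
    def clearing_price(supply_bids, demand_bids):
        """Bids are (price, volume): offers to sell / to buy."""
        prices = sorted({p for p, _ in supply_bids + demand_bids})
        best = None
        for p in prices:
            sell = sum(v for price, v in supply_bids if price <= p)
            buy = sum(v for price, v in demand_bids if price >= p)
            if best is None or min(sell, buy) > best[1]:
                best = (p, min(sell, buy))
        return best                        # (price, volume) maximising volume

    supply = [(20, 100), (30, 150), (45, 200)]
    demand = [(50, 120), (35, 180), (25, 90)]
    print(clearing_price(supply, demand))  # -> (30, 250)
    ```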

    Sandra Guasch’s topic is How to Challenge and Cast Your e-Vote. Ben Adida’s Helios system is widely used in academia as a research platform and even for IACR elections; to this she adds the twist that a voter can cheat by showing simulated proofs to a coercer, while the voting machine cannot cheat the voter in the same way. This is done by a trapdoor key which enables the voter and only the voter to cheat – and also to approve the audited transaction afterwards.

  3. Tuesday’s session was kicked off by Patrick Carter, talking about CuriousDroid. Now that Android devices account for almost half of all device sales and over 80% of mobile sales, they are attracting serious efforts to write banking malware in particular. Dynamic analysis of suspect code is hard as apps are event/GUI based, so driving the GUI is essential for testing; but Google’s MonkeyRunner needs source code. CuriousDroid provides UI stimulation for malware sandboxes, and (given root privilege) can run on devices as well as emulators. It decomposes the UI by catching window focus change, infers input types using heuristics and any available text hints, and generates appropriate inputs using the nextFocus property to get things in the right order, and writing inputs directly to the input driver. He tested 38,572 apps from the Andrubis database which scores apps for being benign or malicious. CuriousDroid took 8827 apps where Andrubis couldn’t make up its mind, and reclassified almost all of them as benign; in addition, apps that used dangerous things like SMS, dynamic code, native code and networking were often found to exercise this code in ways that Monkey didn’t trigger. Andrubis merely uses the Monkey to exercise each activity in turn but doesn’t catch behaviours that depend on doing activities in the right order. This is where CuriousDroid tests apps more thoroughly, using fuzzing and backtracking to try to find paths into lurking code. In questions, Adi Shamir suggested that malware writers could use the vibration sensor to check whether the screen was really touched; Patrick replied that if malware starts doing that, he’ll hack the vibration sensor driver too.

    Alberto Coletta was next with DroydSeuss, which specifically analyses Android banking Trojans. They use social engineering, often with QR codes, to trick users into installing malware that pretends to be bank software, and use web injectors to manipulate banking data entry forms. Alberto has a mobile app that monitors the command and control channels, which can be via URLs, IP addresses, phone numbers or even Google cloud services. These are extracted from malware samples by visual inspection of the code or by using Tracedroid to run it; he has developed heuristics to rank them. He analysed 7946 samples of malware from five common families in VirusTotal; less than one percent of important malicious endpoints found by the tool turned out to be false positives, by checking against the Alexa top million; over 90% appeared to be true positives (the great majority of them from China). Possible countermeasures against his approach include evading dynamic analysis (e.g. by environmental monitoring) and using many random names for both packages and control channels. In questions, it was pointed out that websites in the Alexa top million, but not in the top 100,000, are often hacked.

    The last speaker of the mobile malware session was Ahmad-Reza Sadeghi, talking about DroidAuditor. How do you catch attacks like Soundcomber, which involves collusion between one app with the “record audio” permission and another with Internet permission? Detecting application-level attacks is hard, as discussed in the two previous talks! Ahmad starts from an analysis of the patterns of interprocess communication, and draws the big picture of application lifecycle, sensors, file system and so on. In previous work, he put hooks into all of these, and DroidAuditor uses this “Android Security Module” framework to monitor app behaviour as a behaviour graph. This catches app interaction and can in principle detect app collusion attacks, confused deputy attacks, and application-layer privilege escalation (though to detect escalation to root, the framework would have to run in TrustZone – a problem for future work).

    Tristan Caulfield followed with a one-paper session on social interaction; his topic is Discrete Choice, Social Interaction, and Policy in Encryption Technology Adoption. Designing security mechanisms is all very well, but most are never adopted. What works? He presents a model where agents choose between technologies based on profitability, a social component exists in the form of network effects, and a tax or subsidy is used to discourage or encourage adoption. The model supports multiple attributes (cost, functionality, usability) to deal with products such as email encryption, WhatsApp against SMS, and encrypted file systems versus the default. The first two have a high social component, and in his model we get adoption of WhatsApp but not of encrypted email (unless it’s somehow subsidised). The model is tested by looking at the spike in adoption of encrypted email after the Snowden revelations (seen as a policy intervention that imposed a tax on plaintext email). It predicts, correctly, that adoption would return to almost the previous low level. In questions I noted that people adopted ssh to get X session teleportation, and got the crypto free; similarly people adopted WhatsApp to stop paying for texts, and got the crypto free. Tristan agreed; the original version of his paper discussed this but it was accepted only as a short paper.
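    The flavour of the model can be captured in a few lines; this toy logit simulation with an intrinsic utility, a network-effect term and a policy knob is my illustration of the model class, not Tristan’s calibrated version.

    ```python
    # Toy discrete-choice adoption dynamics (illustrative, not the paper's model):
    # agents pick technology B over A by a logit rule on intrinsic utility,
    # a social/network-effect term, and a tax or subsidy.
    import math, random

    def simulate(u_a, u_b, social, subsidy_b, n=1000, rounds=50, beta=2.0):
        share_b = 0.1                          # initial adoption of B
        for _ in range(rounds):
            v_a = u_a + social * (1 - share_b)
            v_b = u_b + social * share_b + subsidy_b
            p_b = 1 / (1 + math.exp(-beta * (v_b - v_a)))
            share_b = sum(random.random() < p_b for _ in range(n)) / n
        return share_b

    # WhatsApp vs SMS: intrinsic gain + strong network effect -> adoption.
    print(simulate(u_a=0.0, u_b=0.5, social=1.0, subsidy_b=0.0))   # -> ~0.95
    # Encrypted vs plain email: no intrinsic gain -> stays low (~0.12).
    print(simulate(u_a=0.0, u_b=-0.2, social=1.0, subsidy_b=0.0))
    ```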

  4. The first talk in the session on cryptanalysis was by Abdalnaser Algwil on Failures of Security APIs: A New Case. As neither the speaker nor his coauthor Jeff Yan could get a Barbados visa in time, the talk was a combination of video and Skype. They presented an API attack on the CCaptcha system, which serves Chinese-language Captchas to any web server that installs a PHP toolkit. This kit however uses APIs that enable multiple attacks. The Captcha works by presenting ten images: a Chinese character, and a comparison box with its three radicals mixed up with six other radicals. The user has to select the right three. However the images are served individually and have static numbering. The first attack is to download all 60k of them and build a dictionary; this can be done by looking at what radicals come with what character in a puzzle corpus. They built a dictionary in two weeks and solved 1833 out of 2000 test Captchas. A second attack was to abuse the verification API to use the server itself to do a brute-force attack on an instance of the problem; this could even be used to brute-force a dictionary. They propose an improved architecture with composite images instead of fragments, and unique puzzle IDs. Lessons to be learned include the need to articulate your security policy and trust assumptions in cross-platform web apps, and to think about the relationship between APIs and architecture in the light of this.

    The second cryptanalysis talk was from Berry Schoenmakers on Explicit Optimal Binary Pebbling for One-Way Hash Chain Reversal. Hash chains were invented by Lamport and have been proposed in applications from micropayments through the authentication of digital streams to a recent proposal for authenticating DME in air navigation. In this app there’s one hash for each second in the day, over 86,000, so computing the chain from the seed each time is too expensive, and one stores intermediate values, or pebbles; binary pebbles have the distances between them proportional to powers of two. The present paper has an explicit construction for optimal binary pebbling; it’s easiest to understand from the visualisation in the paper. It achieves the optimal result, of storing k hashes and doing k/2 hashes in each round for a hash chain of 2^k elements. He also presents a result for Jakobsson’s speed-2 pebbles. Future work might include cascades and bootstrapping.
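    The trade-off is easy to see in naive form: store pebbles at power-of-two distances from the end and rebuild each output from the nearest one. This sketch only sets up the problem; Schoenmakers’ contribution is the explicit schedule that moves the pebbles so no round ever does more than k/2 hashes.

    ```python
    # Naive hash-chain reversal with static power-of-two pebbles (illustrates
    # the storage/recompute trade-off, not Schoenmakers' optimal schedule).
    import hashlib

    def H(x: bytes) -> bytes:
        return hashlib.sha256(x).digest()

    k = 16
    N = 2 ** k                        # v[0] = seed, v[i] = H(v[i-1])
    seed = b"seed"

    keep = {N - 2 ** j for j in range(k + 1)}       # pebble positions
    pebbles, v = {0: seed}, seed
    for i in range(1, N):             # one forward pass at setup time
        v = H(v)
        if i in keep:
            pebbles[i] = v

    def reveal(i):
        """Return v[i]; called with i = N-1, N-2, ... through the day."""
        j = max(p for p in pebbles if p <= i)       # nearest pebble below i
        x = pebbles[j]
        for _ in range(i - j):
            x = H(x)
        return x

    assert reveal(N - 1) == v         # last value needs no recomputation
    ```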

    The third talk was from Luke Valenta on Factoring as a Service. He’s reduced the factorisation of a 512-bit RSA modulus to four hours on an Amazon EC2 cluster using c4.8xlarge instances with 32 virtual cores and 60GB RAM, which cost $1.78 an hour for on-demand instances and less for spot purchase at times of low demand. Polynomial selection takes 120 CPU hours and sieving 2800 hours, both highly parallelisable; linear algebra takes 250 CPU hours on one core. He used the CADO-NFS code and optimised it for EC2 using Slurm; it’s not easy to parallelise the task too much without bidding up the price of spot instances. Speeding up linear algebra was even harder, but sieving extra relations helps a lot. You can speed things up by throwing money at the problem, but there are diminishing returns; $75 for 4 hours seemed the inflexion point (larger clusters lead to more failures). Curiously, the factorisation of numbers greater than 512 bits is export controlled, and the FREAK paper last year claimed that RSA-512 was solvable in “at most weeks” (which motivated this work). The number of servers supporting RSA-512 fell from 8.9M to 2.9M following FREAK; there are about 10,000 hosts using RSA-512 for DNSSEC (and a million using RSA-1024), as well as about 1% of DKIM keys. (There was even one 128-bit RSA key, which they factored on a laptop in under a second.) The whole experiment took $12,000, which was covered by a grant from Amazon.

    In the fourth talk, Sietse Ringers explained how the self-blindable U-Prove scheme from FC’14 is forgeable. U-Prove is an attribute-based credential scheme with selective disclosure; the prover can decide which attributes to disclose, and disclosures are supposed to be unlinkable. The scheme in question uses pairing crypto with linear combinations of base points, and gives too much freedom; there is an attack involving two or more colluding users, where you end up being able to recombine the signatures on base points. The ingredients were fixed base points across all credentials, and invertibility of exponents.

    The final paper, Abhishek Anand’s A Sound for a Sound: Mitigating Acoustic Side Channel Attacks on Password Keystrokes with Active Sounds, was also given by video and Skype for visa reasons. Various people had shown that the keys on a keyboard often make a sufficiently distinctive sound to be recognised individually, so that an attacker who gets a microphone close to a victim (or compromises an app on their machine with access to the microphone) can steal a password. Abhishek used a state-of-the-art acoustic analyser (which used a time-frequency distance measure on the spectra of key press and release events, and identified letters from the press and release events most frequently occurring together) and found he could recognise a six-character password two-thirds of the time after training the recogniser with 20 presses of each key. He then explored sound masking; adding white noise from 1–8kHz reduced recognition to a third, and the replay of previously recorded false keystrokes cut it to zero. He did a usability study and found that the false keystrokes were more distracting, leading to a higher password entry error rate, but one that was still usable.
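    The simpler of the two maskers is easy to reproduce; here is a numpy/scipy sketch of white noise band-limited to roughly the band where key-click energy lives, with parameters of my own choosing rather than the paper’s exact generator.

    ```python
    # Band-limited masking noise, roughly 1-8 kHz (parameters are mine).
    import numpy as np
    from scipy.signal import butter, lfilter
    from scipy.io import wavfile

    fs = 44100                                  # sample rate, Hz
    noise = np.random.randn(fs * 5)             # 5 s of white noise
    b, a = butter(4, [1000, 8000], btype="band", fs=fs)
    band = lfilter(b, a, noise)
    pcm = (band / np.abs(band).max() * 0.3 * 32767).astype(np.int16)
    wavfile.write("mask.wav", fs, pcm)          # play during password entry
    ```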

  5. The rump session was started by our sponsor Jeremy Stephen from Bitt in Barbados, asking whether central banks should hold bitcoin. The Barbadian central bank keeps parity at two Barbadian dollars to the US dollar, but that has hedging risks. Diversity can help the central bank of a small economy that’s a net importer and vulnerable to currency shocks.

    The second item was an anniversary privacy panel of Matt Smith, David Chaum, Jean Camp, Ahmad-Reza Sadeghi and Avi Rubin. The opening question was Apple vs the FBI. The failure of the trust infrastructure is now clear enough to all, and people are increasingly worried. Regulators try to help but don’t understand the implications, as in HIPAA. Some people believed that companies would not be able to resist law-enforcement pressure; others that suitably distributed systems could withstand coercion.

    Garrick Hileman argued that China should embrace bitcoin and get its central bank to promote it as a reserve asset and hold it to dilute the power of the Fed; lots of bitcoin mining happens in China, and the total value of bitcoin is less than that of gold.

    Shin’ichiro Matsuo announced The LEDGER Journal – the first academic journal on cryptocurrencies. The first issue will be out in May. Accepted papers will be signed by the authors and timestamped in the blockchain. He also announced BSafe.network, a blockchain research network.

    Joseph Carrigan talked about educating the user about their role in security.

    Joe Bonneau talked of Tyler’s FC13 paper on postmodern Ponzi schemes; four years later people still play them, hoping to be faster than everyone else. But how can you be sure a Ponzi scheme really is a Ponzi scheme? You can solve this with Ethereum, and Joe exhibited some examples he found, such as “Pyramid doubler”. We need an updated paper on post-postmodern Ponzi schemes!

    Tyler Moore replied with a postscript, that Arthur Budovsky, the founder of Liberty Reserve, pleaded guilty to fraud on high-yield investment programs in January. His firm had been based in Costa Rica; federal prosecutors tied $1.4bn of transactions to the Ponzi schemes compared with Tyler’s estimate of $6m a month.

    Tyler also talked about the Journal of Cybersecurity which has just started publishing and provides a venue for interdisciplinary cybersecurity research.

    The first powerpoint karaoke was by Alex Halderman, who was nominated by the audience to talk on “Efficient electronic cash”, a slide deck from FC 1997 in Anguilla.

    Next up was Yvo Desmedt talking on how to avoid keeping on coming back to a finite number of islands; he suggests that by 2096, FC will be held in camps rather than hotels.

    Jason Cronk talked on the problems of putting servers between apps on phones; NFC is inconvenient, and most phones go through NATs and firewalls which make direct IP communication hard. He suggests Tor could be a fix; each phone could be an onion service. Alternatives such as Ricochet, RCS and Google push all have drawbacks.

    JT Hamrick put in for the longest paper name award with “The Impact of DDoS and Other Security Shocks on Bitcoin Currency Exchanges: Evidence from Mt. Gox”. He used data from bitcoinity.com for an event study of several dozen DDoS attacks on bitcoin exchanges from 2011–13. Attacks led to a drop in kurtosis and skewness.

    Ivan Damgård announced a postdoc job on crypto, and the third workshop on theory and practice of MPC in Aarhus from May 30 to June 3 this year.

    The second powerpoint karaoke performance was Ian Goldberg giving David Kravitz’ 1997 Anguilla paper.

    Moritz Neikes reran the PrivaTegrity demo which didn’t work yesterday. A message in an Android emulator went through as a c-Mix batch and decrypted as “Hello democracy infrastructure!”

    Ryan Castellucci talked on dumb things people use as Bitcoin private keys; one of the top ten addresses at DefCon used “1” as the private key. He tried the first 2^38 keys and found a largest historical balance of 4.7BTC. Scraping data from the blockchain and trying it as private keys found a total historical balance of 440BTC, the largest over 60. Oh, and Brainwallet closed because their random button just called Math.random().

    I described how when we move to delay-tolerant networks, as when you try to make phone payments work despite flaky and intermittent connectivity, the supposed bug in Needham-Schroeder is actually a feature.

    Shaanan Cohney was with the Factoring-as-a-service team. Why did they do the project? Matt Blaze owned up in a twitter exchange to a 512-bit RSA key from 1992, whereupon Steve Bellovin said that having a 512-bit key in the same building as Nadia Heninger was asking for trouble. Matt replied that she had a key to his office, but he was sure she’d rather factor his key anyway. Shaanan displayed the factors.

    Mitchell Arnett and Kevin Delmolino talked about etherscrape.com; their code analyses and matches ethereum contracts to map the ecosystem using code analysis, clustering and other techniques.

    Foteini Baldimtsi spoke on indistinguishable proofs of work or knowledge, where the verifier can’t tell whether the prover knows a secret or did some work. She suggests them for reducing spam, and for hybrid cryptocurrencies where regulators are trusted but everyone else has to work. Finally she announced a Summer School on Blockchain Technologies from May 29 to June 2nd.

    The last human speaker was Sean Andrews introducing The O-Bill: A New Digital Financial Instrument. Obillex does factoring, releasing cash from accounts payable, particularly where small firms supply public-sector customers, making connectors between incompatible trade finance systems. Their O-bill is operational now with Birmingham City Council.

    The session ended with Crypto Wars, a Japanese video from 1997 portraying the first FC as an attempt by researchers and business to escape the chaos of US crypto laws.

  6. Adi Shamir started his anniversary keynote on “Financial Cryptography: Past, Present, and Future” with a history of the Financial Cryptography conference. Its origins lie in a mailing list run by Robert Hettinga, who started the Digital Commerce Society of Boston with Ray Hirschfeld, Vince Cate and others; at one point this traffic was 10% of Adi’s inbox. The two big topics were electronic money (with a special emphasis on micropayments), and legal opinions on topics like digital signatures and money laundering. Many papers were shockingly optimistic by today’s standards, such as “How to make personalised web browsing simple, secure and anonymous”. There was lots of work on attacks, and comments on real-world payment mechanisms, and legal opinions from professors such as Peter Swire on the uses and limits of financial crypto. It was a brave new world.

    Ron Rivest had a paper on Perspectives on Financial Cryptography pointing out that historically, most payment schemes haven’t worked well; other predictions he made included that everyone with a PC would be able to mint his own currency, that cyberbucks wouldn’t replace real bucks, that privacy is already lost (and must be regained), that user profiling is not so bad, that governments will not allow payer or payee anonymity for large payments (which has come true for cash), that there would be no anonymity for small payments as it costs too much CPU and hassle, so it’s easier to regulate (but regulation and privacy law are broken), that anonymity will be bought and sold, that there would be no multi-app smartcards (but our smartphones are the multi-app platform devices instead), that smartcards would provide anonymity (phones don’t), that smartcards would be more expensive to break into (side-channel attacks trashed that generation of smartcards), no large-value digital coins (true, except for bitcoin), no transferable coins (they exist), micropayments will thrive (wrong, the Internet works on ads, although they’re micropayments between companies), general PKIs not necessary for payments (correct, as special systems are used rather than the browser shambles), money and voting are close (unclear; can’t use bitcoin for elections, or helios for payments), you can get anything you want (no, most problems are social-political).

    On FBI vs Apple, he predicts that Apple is bound to lose in the end. The FBI was clever to choose the best possible test case; they were just waiting for the opportunity. Apple botched the job of making sure they couldn’t help the authorities even if they wanted to; they’d have been better off if the FBI could break into the phone on its own. A very similar case happened in Israel in 2015 when a corrupt lawyer’s phone was seized; the police broke into it within a few months with the help of Israeli startups.

    By the second conference in 1998 the focus had shifted to topics that have now lost favour (such as certificate revocation, watermarking, and SET). Fast forward to FC09 and we find the economics of information security, anonymity and privacy, authentication, private computation, fraud detection, auctions and a special panel on password schemes. These are all modern topics but new payment schemes are completely missing; people had got bored after so many failures. But this was when bitcoin was about to emerge. Satoshi’s white paper came out in 2008 and mining started in 2009, yet the first mention at FC was in the last paper in 2012. By 2013 the opening session in Okinawa was devoted to bitcoin, and the full workshop started in 2014. We were followers rather than leaders, and took four years to even notice.

    Will bitcoin succeed? The bitcoin community behaves like rebels but wants to be the emperor, and to be mainstream you have to behave differently. There’s no adult supervision and no effective governance, with many competing proposals. By comparison the credit card industry makes fraud victims good, and this peace of mind is missing for bitcoin. As for the blockchain technology, will it be adopted? No mining, just distributed consensus, provable timeline and unforgeability. Maybe.

    Possible research challenges include preventing misuse of hacked bitcoin wallets (perhaps with smart contracts that limit the possible payees), managing incentives if all the banks put all transactions through a chain with no proof of work and no monetary rewards, managing the timeline if we move from a single chain to a DAG, state actors who remotely compromise millions of machines (making geographical server diversity insufficient unless you have implementation diversity too), and how you can do auctions where the threat model isn’t strategic behaviour but hacking (so trust crypto, not servers – translate Vickrey auction bids into probabilities to stop a cheater who bids $1,000,001 to the opponent’s $1,000,000). In general, how do you run the world when there’s a fixed probability of hard cheating, where one of the actors might have hacked a competitor?

    Adi ended up with fifteen predictions for 15 years, three each on cybersecurity, crypto, quantum, privacy and payments.

    1. Cybersecurity is terrible, and will get worse.

    2. The Internet of Things will be a security disaster.

    3. Cyber warfare will be the norm rather than the exception in conflicts.

    4. RC4 and SHA-1 will be phased out, while AES and SHA-2/3 will remain secure (he expects a SHA-1 collision within the year).

    5. Improved factoring and DL algorithms will be found, requiring key sizes beyond 2048 bits (he feels it will not be a fully polynomial algorithm; 4096 should be OK).

    6. Elliptic curves will fall out of favour (there’s a very strange current situation with the NSA moving away from it with no explanation).

    7. Research will still pour into quantum crypto and quantum computing, as the physics community is geared up to accept large amounts of government money.

    8. But there will be no full size quantum computers capable of factoring RSA keys.

    9. No-one will use quantum crypto.

    10. Governments will not tolerate anonymity.

    11. Most people will not demand or expect real privacy; that war is already lost.

    12. Tools to fight cybercrime and attacks will further diminish privacy.

    13. Bitcoin will fade away but leave a legacy.

    14. Blockchain will be hyped, but succeed only in limited circumstances.

    15. An endless stream of new payment mechanisms will be presented at future Financial Crypto conferences.

    In questions, Moti Yung noted that even in 2009 people were interested in real payment systems such as PayPal and experimental ideas such as hashcash; David Chaum suggested that the privacy paradox is explained to some extent by Maslow’s hierarchy of needs, and people only start worrying about informational privacy once their basic information needs are met by devices such as smartphones, so things would get better. Adi suggested the organisers invite both David and him for FC 2031, to see who’s right.

  7. Wednesday’s first regular paper was by Michael Herrmann on Leaky Birds: Exploiting Mobile Application Traffic for Surveillance. He surveyed 1260 apps to see to what extent they facilitate global surveillance via bad analytics. The apps were sampled from the Play Store at different popularity levels: the ten most popular by category, then ten each from the 25th and 50th popularity percentiles, from 42 categories. What could GCHQ achieve using their BADASS program, which uses unencrypted traffic for mass surveillance and was disclosed by Snowden? Michael uses graph analysis to link many of a user’s app sessions together across apps, and then improves the links using passive fingerprinting based on Android’s TCP timestamp behaviour: the timestamp counter increments at a steady 100Hz. He found that the more popular apps leak more identifiers such as Android ID, Google Ad ID, IMEI and MAC, with the greediest users all being ad networks. In summary, naive clustering will let the GCHQ analyst link about 25% of your Android app traffic, but with fingerprints he can link 58% of it. What’s more, it doesn’t make a huge difference if you actually use the app or just let it start. A feasible countermeasure could be an HTTPS Everywhere browser extension, although TCP timestamps are visible outside https, so you’d have to reinitialise the counter from time to time.
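    The fingerprinting trick fits in a few lines: a counter ticking at 100Hz from boot means that timestamp minus 100 times wall-clock time is roughly constant per device, so sessions clustering at the same offset probably share a phone. The toy data and jitter threshold below are mine.

    ```python
    # Linking sessions by TCP-timestamp boot offset (toy data, my threshold).
    from collections import defaultdict

    # per session: (wall-clock seconds, observed TCP timestamp value)
    sessions = {"s1": (1000.0, 540_000), "s2": (1400.0, 580_040),
                "s3": (1000.0, 123_456), "s4": (2000.0, 640_020)}

    def boot_offset(t, ts, hz=100):
        return ts - hz * t            # ~constant per device (-hz * boot time)

    clusters = defaultdict(list)
    for sid, (t, ts) in sessions.items():
        clusters[round(boot_offset(t, ts) / 100)].append(sid)  # ~1 s jitter

    print([g for g in clusters.values() if len(g) > 1])  # -> s1, s2 and s4 link
    ```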

    The second speaker in the session on surveillance and anonymity was Moritz Neikes, on Footprint Scheduling for Dining-Cryptographer Networks. In dining-cryptographer networks, collisions waste bandwidth, and Chaum’s original paper envisages users reserving slots; but then collisions can happen in the reservation process instead. More sophisticated protocols have since been proposed by Pfitzmann and others. Moritz proposes footprint scheduling, where each reservation slot request is encoded in multiple bits for efficient collision detection; with appropriately clever coding, and users committing to PRNG seeds, you can check whether people behaved themselves. The mechanism is more efficient than either Chaum’s or Pfitzmann’s.
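    For readers new to DC-nets, here is the substrate the scheduling sits on, in minimal form (my sketch): pairwise shared pads cancel in the XOR unless someone transmits, and two simultaneous transmissions garble the slot, which is exactly the collision that reservations are meant to prevent.

    ```python
    # One DC-net round: pairwise PRG pads cancel, so only the XOR of the
    # transmitted messages survives; two senders at once garble the slot.
    import hashlib, itertools

    parties = ["alice", "bob", "carol"]
    seeds = {frozenset(pair): ("-".join(sorted(pair))).encode()
             for pair in itertools.combinations(parties, 2)}

    def pad(pair, rnd):
        h = hashlib.sha256(seeds[frozenset(pair)] + bytes([rnd])).digest()
        return int.from_bytes(h[:4], "big")

    def dc_round(rnd, messages):        # messages: party -> int (0 = silent)
        total = 0
        for p in parties:
            share = messages.get(p, 0)
            for q in parties:
                if q != p:
                    share ^= pad((p, q), rnd)
            total ^= share              # only the XOR of shares is broadcast
        return total

    print(hex(dc_round(1, {"bob": 0xCAFE})))             # 0xcafe, sender hidden
    print(hex(dc_round(2, {"alice": 0x1, "bob": 0x2})))  # collision -> 0x3
    ```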

  8. The anniversary panel was on “The Promises and Pitfalls of Distributed Consensus Systems: From Contract Signing to Cryptocurrencies”. Adi Shamir doubted that existing Byzantine agreement protocols can scale to consensus among millions of users. David Chaum was a grad student at Berkeley when Leslie Lamport was doing the early Byzantine work, and was impressed by it, but early prototypes failed. His own thesis was about how to admit new nodes by a public random process; see his website. I pointed out that what a blockchain gives that a timestamping service doesn’t is that it publishes data that can’t be censored, which in the long run will be as unacceptable to governments as anonymity; they will insist on the ability to censor some types of information. Peter Ryan noted that the blockchain isn’t a decision mechanism per se but just a common data structure. Adi suggested we consider Swift (which banks would love to replace), trade documents such as letters of credit (which would involve companies, customs and other stakeholders too), and government databases such as the land registry. Florian Kerschbaum noted that almost no-one mentions AribaPay, but it’s bigger than bitcoin and even PayPal, being the biggest trade invoicing system in the world.

    The “security level” of consensus systems is another issue. In classical cryptography you can force the opponent to do exponentially more work, while in systems like bitcoin the other side can just mine harder – and there are issues of whether the mining pools have compromised the underlying democratic ideal, leaving the blockchain open to political manipulation. It is not a good answer to say that anyone with a spare billion dollars could redeem the blockchain from the Chinese miners.

    Can a blockchain be used for voting? Peter Ryan argued that the interesting issues here are where the technology meets real people, with very rich attack models including vote buyers, sellers and coercers, where blockchains don’t buy much. Many voting systems assume a public bulletin board, where blockchains might help; but the real work is elsewhere. Adi noted that people don’t pay to vote, or to reach consensus for that matter, so this may be an example of distributed algorithms without direct incentives. There are much more serious issues around voting, from gerrymandering to the influence of big money. David Chaum argued in favour of introducing random-sample elections rather than trying to fix the existing legacy; you could do a global election for under a million bucks.

    On the currency front, the German Bundesbank just posted a profit of 3.2 billion euros; why should it give up the power to mint euros? Bitcoin does seem to have interacted with the banking system, for example when Cyprus went bust and people moved their money into bitcoin; but Greeks used euros, and in Zimbabwe they used dollars. People will use any safe alternative, and what bitcoin has to offer is some anonymity plus the ability to handle large transactions. Adi argued that while proof-of-work looked like a wonderful idea initially, it now looks like a terrible idea; big companies have gone to China where electricity is cheap because of coal. It’s dirty, and centralised; why don’t the greens protest against bitcoin?

    What about Stellar, Ripple or Tendermint as replacements? Regardless of our pessimism here, people are actually building and selling private blockchains and similar Byzantine fault-tolerant platforms to corporates. David Chaum remarked that distributed money systems have been done before, by Mondex; as it relied on tamper-resistance, you could still get your shopping if the Internet fell over. He called on us all to be optimistic and try to push systems that can be influential. Adi replied that the panel had clearly failed, as its goal was to reach a consensus! Florian said that to make stuff happen you have to pay attention to the non-technical interests of the stakeholders.

  9. The afternoon session was on web security and data privacy. First off were Victor van der Veen and Radhesh Krishnan Konoth, explaining How Anywhere Computing Just Killed Your Phone-Based Two-Factor Authentication. Now that people bank on their phones rather than their laptops, we’re seeing attacks that use a malicious app to forward one-time passwords. Zeus in the Mobile, Spitmo and Citmo rely on social-engineering the user to sideload such code. However, now that vendors allow people to install apps from the browser and synchronise browser data, it’s about to become an awful lot easier. They define a “2FA synchronisation vulnerability” as any usability feature that blurs the boundaries between factors, making attacks easier. The first example is a compromised browser that remotely installs SMS forwarding malware. On responsibly disclosing this in Feb 2015, Google told them it was too hard to worry about; however it’s easy enough to bypass Bouncer by obfuscation and dynamic class loading, and they made this point by publishing an SMS backup app that let a webpage execute arbitrary code. (Their account was suspended.) Other usable security mechanisms Google relies on are circumventable, particularly in older versions of Android, and Google’s assumption that SMS-based 2FA is safe is wrong, given that they use it themselves. They played a video of using a malicious Chrome plug-in to break 2FA by logging in to Google’s own 2FA, and another showing an attack in iOS; it uses the new Continuity feature of iOS 8.0+, which synchronises SMS messages between your iPhone and your Mac. They recommend that Continuity not be used, and that Android require on-phone confirmation for remote app install as well as disabling app activation through URIs. A Google engineer replied during questions that they considered such attacks outside their threat model, as people who compromise the Android browser do other, stealthier things. Adi remarked that the attacks on Android and iOS were quite different, but neither was surprising when both factors are done on a single platform that gets infected. The point of 2FA is to remove single points of failure, and you can’t expect much from it beyond that. Moti had the final word: an infected browser is enough, as it can let you authenticate and then do a man-in-the-browser attack.

    The second talk was from Google’s Juan Lang, talking on Security Keys: Practical Cryptographic Second Factors for the Modern Web. SMS 2FA is not ideal for all sorts of reasons to do with coverage, latency, usability and the fact that people can still be phished. A better approach is to have a properly designed token, for which the FIDO alliance is developing standards. Their U2F protocol is standard public-key crypto; the user device mints a keypair, sends the public key to the server, and uses the private key to sign stuff. Enhancements include site-specific keys, no unique ID per device, attestation of who made the device, and a test of user presence (which you trigger by touching the device, thus making phishing harder); basically it’s a smartcard redesigned for the modern web. He went through the protocol details, which include a signature counter that, like the ATC in an EMV card, lets the relying party detect physical cloning. It does assume that the platform has no malware, as the key has no UI, and that the browser enforces the same-origin policy correctly, so there are no cross-site vulnerabilities on the web page. In questions I asked whether they’d have a mode that logged things that were signed; Juan said this is just for 2FA, not transaction logging. It’s been supported by Chrome since v41 and will be supported in Firefox soon; internal trials showed Google users took 10s rather than 20s with the old OTP devices, while support costs have fallen steeply as users were migrated. V2 of the protocol will have device-centric authentication.
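    The core of the flow is small enough to sketch; the following simplification (mine; it omits attestation, key handles and the FIDO wire format) shows a device minting a per-origin P-256 key, signing origin, challenge and counter, and a server checking the signature and counter monotonicity, using the Python `cryptography` library.

    ```python
    # Simplified U2F-style flow (no attestation or key handles; structure mine).
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    class Device:
        def __init__(self):
            self.keys, self.counter = {}, 0

        def register(self, origin):              # mint a per-origin keypair
            self.keys[origin] = ec.generate_private_key(ec.SECP256R1())
            return self.keys[origin].public_key()

        def authenticate(self, origin, challenge):   # user touch assumed
            self.counter += 1
            data = origin.encode() + challenge + self.counter.to_bytes(4, "big")
            return self.counter, self.keys[origin].sign(
                data, ec.ECDSA(hashes.SHA256()))

    def server_verify(pub, origin, challenge, counter, sig, last_counter):
        assert counter > last_counter            # non-monotonic => cloned key
        data = origin.encode() + challenge + counter.to_bytes(4, "big")
        pub.verify(sig, data, ec.ECDSA(hashes.SHA256()))  # raises if forged
        return counter

    dev = Device()
    pub = dev.register("https://example.com")
    ctr, sig = dev.authenticate("https://example.com", b"server-nonce")
    server_verify(pub, "https://example.com", b"server-nonce", ctr, sig, 0)
    ```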

    The third talk was from Bill Robertson, entitled Include Me Out: In-Browser Detection of Malicious Third-Party Content Inclusions. You can try to defend against malicious third-party content using the same-origin policy (SOP), iframe-based isolation, language-based isolation and content security policies (CSPs). A CSP lets you refine the SOP to make it more inclusive or more restrictive, and is supported in recent browsers; for example, a web app developer could allow images to come from several places but be strict about scripts. But writing security policy is hard, so his research question is whether it can be done in an automated way. The goal is to detect malicious content before it can attack the browser, and the method is to create inclusion trees which record the provenance of remotely-loaded resources (including where on the page they appeared) as well as DNS features (giving site popularity and roles such as CDNs) and content characteristics (including characters in new languages), and to train a classifier on them. The training was done by crawling the Alexa top 200k, and 20 popular shopping sites, with 292 ad-injecting extensions, using VirusTotal as the ground truth. From June 1 to July 14 they found 177 new malicious hosts that later appeared in VirusTotal, plus 89 likely redirectors. It turned out that VirusTotal took 8 days to find half of them and 17 to get the lot. Then ten students looked at 50 random websites from the Alexa top 500 and found 31 malicious inclusions. Their prototype system is called “Excision”.
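    For readers who haven’t met CSPs, the example policy mentioned would look something like the following; the domains are hypothetical, and the header would be sent with each HTTP response.

    ```python
    # A CSP of the kind described: images may come from the site and two CDNs,
    # scripts only from the site itself (domains are hypothetical).
    CSP_HEADER = ("Content-Security-Policy: "
                  "default-src 'self'; "
                  "img-src 'self' https://images.cdn-one.example "
                  "https://static.cdn-two.example; "
                  "script-src 'self'")
    print(CSP_HEADER)
    ```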

  10. The first talk on Thursday was by Okke Schrijvers, analysing the Incentive Compatibility of Bitcoin Mining Pool Reward Functions. He started with his conclusion: that game theory is a really useful lens for studying bitcoin. Traditional protocol analysis looks at whether an adversary can trick you, and distributed computing at how many compliant nodes are needed; but we also need to think about what keeps nodes compliant. Satoshi Nakamoto argued that a majority of honest miners would be enough; at FC two years ago Eyal and Sirer considered game-theoretic ideas such as strategy and utilities, and showed that the selfish mining threshold depended on network speed. Others have considered DoS between pools, pool hopping, and block withholding games between pools. Now Okke considers a single pool with n miners of mining power m_i, finding a partial solution at time t which is a full solution with probability 1/D. The pool operator sees all partial solutions, and the miners decide whether to report full solutions at t or at t+1. He wants incentive compatibility (miners always report at once); budget balance (the operator takes on no risk); proportional payments (miner i gets m_i in expectation); and to optimise reward guarantees for miners. Solo mining doesn’t give steady payments; pay-per-share doesn’t give budget balance; proportional payments aren’t always incentive compatible, as a rational miner will delay early in the round. He proposes a new incentive-compatible payment scheme: if k > D, pay out proportionally, else pay 1/D per share and the remainder to the reporter of the full solution (see the sketch below). He also provides an analysis of variance based on the 99th-percentile time to earn X bitcoin. In conclusion, mining pools are hard to design properly; miners can game a pool even without outside options. You have to do the math.
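    Here is the proposed rule as I understood it, normalising the block reward to 1; the bonus for reporting the full solution immediately is what removes the incentive to delay.

    ```python
    # The proposed incentive-compatible pool payout, as described in the talk
    # (my transcription): k shares collected this round, difficulty D.
    def payout(shares, D, reporter):
        """shares: miner -> share count; a block reward of 1 is distributed."""
        k = sum(shares.values())
        if k > D:
            return {m: c / k for m, c in shares.items()}   # plain proportional
        pay = {m: c / D for m, c in shares.items()}        # 1/D per share...
        pay[reporter] += 1 - k / D                         # ...bonus to reporter
        return pay

    print(payout({"m1": 30, "m2": 10}, D=100, reporter="m2"))
    # -> {'m1': 0.3, 'm2': 0.7}: reporting at once maximises the bonus claim
    ```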

    Next, Jason Teutsch spoke on When Cryptocurrencies Mine Their Own Business. He too is interested in whether Nakamoto consensus is secure if miners are rational rather than honest, and shows that a minority (38.2%) miner can indeed execute a 51% attack if the rest of the miners are merely rational. The idea is that he offers another minority of the majority miners a different puzzle to work on, then mines on his private fork of the blockchain. He can benefit without double-spending by making side payments to the distraction chain. The break-even point is 38.2%, the root of p^2 - 3p + 1 = 0: the attack pays once the private block reward of 1 exceeds the expected honest-mining reward p plus the puzzle rewards of (1 - 2p)/p, and the advantage grows rapidly where the miner has more than that. As for implementation, Ethereum lets you exchange money for computation, and you can make the puzzles the cryptocurrency’s own mining problem. It follows that any miner with sufficient initial capital and nontrivial computational power can double-spend on Ethereum, by offering a reward and then forking it back. Of course, the miners may be once bitten twice shy, so it may be better tactics to stick to the 38.2% attack. Finally, although this attack is like selfish mining, it differs in that it gives a per-block advantage rather than a relative one. In questions, Adi Shamir argued that the attack assumes that the coin value remains constant; in practice such an attack would destroy confidence and the value of the coin would fall to zero.
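    The break-even figure is easy to check numerically from the quantities as stated:

    ```python
    # Break-even for the distraction attack as stated in the talk: private
    # block reward 1 vs expected honest reward p plus puzzle payments (1-2p)/p.
    import math

    p_star = (3 - math.sqrt(5)) / 2           # root of p^2 - 3p + 1 = 0
    print(p_star)                             # 0.38196... i.e. 38.2%

    def attack_margin(p):                     # > 0 means the attack profits
        return 1 - (p + (1 - 2 * p) / p)

    for p in (0.35, p_star, 0.45):
        print(round(p, 3), round(attack_margin(p), 4))
    # 0.35 -> -0.2071, 0.382 -> 0.0, 0.45 -> +0.3278
    ```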

    The last speaker on bitcoin was Aviv Zohar, on Optimal Selfish Mining Strategies in Bitcoin. He’s also been exploring what happens if all miners are selfish and we scale up to higher transaction rates or larger blocks, whether there are novel selfish attacks, and how fixes for such attacks should be evaluated. He has an algorithm that computes optimal attacks in the model of Eyal and Sirer; he has shown that some defences, such as Eyal and Sirer’s proposed randomisation, aren’t as effective as thought; and that where the network is slow, miners of any size can gain from delays. Where the network is fast, he has a lower bound of 23.2% (which is tight, but lower than Eyal and Sirer’s conjecture of 25%). These results come from a model where miners maximise their fraction of blocks on the chain. This is nonlinear, but reduces to a series of undiscounted average-reward Markov decision processes and can be solved using off-the-shelf software; the advantage is zero if and only if the penalty per honest block is a computable parameter ρ. Heilman’s “Freshness Preferred” algorithm and bitcoin-NG don’t work well. In questions, someone asked about ways of punishing surprise chains; analysing reward functions that do that is future work.

    The session on crypto protocols followed immediately, with Lucjan Hanzlik talking on Blind Signatures from Knowledge Assumptions. He has a scheme that is round-optimal, setup-free and practical; what’s new is that instead of using random oracles he used knowledge assumptions. It’s based on combining the Okamoto-Uchiyama cryptosystem with Boneh-Boyen homomorphic signature. Verification uses bilinear groups.

    The last paper of the session was Joseph Carrigan on KBID: Kerberos Bracelet Identification. His application is healthcare, where complex passwords and other authentication mechanisms always come second to getting the job done, and people work hard to circumvent security, such as getting a nurse to wiggle the mouse every so often. It was inspired by Brace, which compares an accelerometer in a bracelet with mouse movements. He adds strong authentication by a process whereby the user touches the bracelet, logs on, and touches again; it uses skin conductance to communicate token data. He’s working on storing hashes of Kerberos tickets rather than full tickets to save memory, and on a move to capacitive coupling. One has to think of the “doorknob attack”, where a bad man builds a doorknob-in-the-middle, and about RF emissions from human bodies. In questions, people mentioned wearable authentication devices that use RF; Joe is trying not to use RF. Someone asked “how long will it be before employees figure it out, and put a conductor from the device to their PC?” Joe responded that he hoped to make it usable enough that people wouldn’t try to game it.

  11. Katharina Krombholz started the last session of FC with a talk on The Other Side of the Coin: User Experiences with Bitcoin Security and Privacy. She did a user study of 990 online responses and ten qualitative interviews, for which she invited some respondents to a shady bar in Vienna where you can pay with bitcoin. To persuade bitcoin users to participate, she paid in bitcoin; she bought €1500 of bitcoin on a Friday, and after the Greek referendum that weekend she had €2200 to spend on volunteers. To filter out automated submissions she used Captchas, metadata, referrers and open text questions. The participants were 85% male, aged 15–72 with a median of 28, and half reported an IT background. The biggest country was the USA, then Tor, then the UK. 11% were miners, 50% reported using bitcoin at least once a week, and their total reported holdings were 8000BTC, or $2.5m. Web wallets are popular; Coinbase is top with 314 (32%) users, then Bitcoin Core and Xapo. Only 35% of Coinbase users have backups, but they only have 238BTC between them. The fattest wallet was Armory with 3818BTC (over $1m); less than 5% of participants use that. Users with over $100 don’t back up significantly more often, though. People’s top worry was fluctuation in the market price, with wallet security second and more technical threats way down. 32.3% think bitcoin is fully anonymous, and 25% use it over Tor despite Alex Biryukov’s attack. Participants reported losing a total of 660BTC, with 22.5% reporting losing bitcoins at least once. 43% said it was their fault, 26% hardware failure, 24% software failure, and 18% security breaches such as Mt Gox. The typical lesson learned was to back stuff up properly in future; she highlighted the ability of MyCelium to create paper backups (and users of this wallet back up the most, as it’s easy). It may be surprising how many people use hosted web wallets given that bitcoin is supposed to be decentralised. In questions: the proportion of lost bitcoins that ended up in others’ hands might be about half, given what the “my own fault” category included.

    The second talk was by Patrick McCorry about Refund Attacks on Bitcoin’s Payment Protocol. He’s studied protocols used to help merchants; the leading one is BIP 70, which is used by 100,000 merchants and run by Coinbase or Bitpay, who shield customers from having to handle bitcoin addresses. The merchant signs a message to the customer; the customer sends a payment message and her refund transactions to the merchant and the acquirer. So Patrick looked for the refund protocol, and couldn’t find it. It turned out that refund addresses are not authenticated. In a “Silk Road attack”, a malicious customer pays a drug dealer via an honest merchant by giving the honest merchant the drug dealer’s bitcoin address as his refund address. In a second attack, a rogue trader offers cheap goods and tells people he takes payments via a reputable trader like Amazon. The rogue trader does a MITM on the transaction from Amazon to the customer and puts his own refund credentials in place of Amazon’s. He tried out these attack methods against CEX, Dell, Bitroad and other merchants. He suggests that acquirers stop accepting refund addresses over email, and proposes an improved protocol. The acquirers acknowledged that both attacks are a problem.

    The third speaker was Ingolf Becker on Are Payment Card Contracts Unfair? (declaration: I’m a coauthor). We studied the gap between what banks say customers should do to protect PINs, and what customers actually do. In Europe at least this turns on whether the customer’s behaviour amounted to “gross negligence”, and there have been court cases with odd findings. A representative bank, HSBC, requires customers not to write down or share their PINs, or use them other than for HSBC bank accounts. We surveyed PIN use; the typical customer’s primary PIN is used several times a week, but others are used much more rarely. About a third wrote PINs down, typically in a diary or phone; PIN reuse was common, with over 11% of people using their main bank PIN to unlock their phone. A significant minority shared PINs with relatives, a feature that old people found particularly useful. In conclusion, the banks’ T&Cs fail a simple test of reasonableness, and we have urged the European Banking Authority to include usability testing in their future requirements for authentication methods. Finally, authentication methods must not discriminate against the elderly, the less able and other vulnerable groups. In questions, he noted that people who use their bank PIN as an unlock PIN for their mobile phone are probably contravening the Payment Services Directive. (blog)

    The last speaker of the conference was Marie Vasek, speaking on The Bitcoin Brain Drain: A Short Paper on the Use and Abuse of Bitcoin Brain Wallets. A brain wallet generates a private key from a user-selected password, and Marie got interested in exactly how bad this is after someone reported having money stolen from a brainwallet protected by the xkcd passphrase “correct horse battery staple”. She assembled a corpus of 300 billion candidate passwords and trawled the blockchain, using brainflayer to run the tests and find which passwords had protected the most money. There were 884 distinct brainwallets with 845 different passwords, holding a total of 1806 BTC, which was about $100,000 at the time. Notable passwords included “one two three four five six seven” and “” (the empty string was the most popular). About half of brainwallets have a dime or less in them; the largest had almost $400,000, with a power-law distribution in between. As for attacks, she observed 1895 drains on the 884 wallets, with 69% drained exactly once; she identified 14 drainers, the top four of which netted $35,000 between them. Now half of all brainwallets are drained within minutes. Drainers have got better with time: the median time to drain has fallen from nine hours to minutes over the past five years. In questions: there was a spike in draining during the bitcoin stress test in July last year, and dips when new attackers came online; drains were verified by examining their characteristics manually.
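
    The mechanics are simple enough to sketch: in the classic design the private key is just the SHA-256 hash of the passphrase, which is what makes offline dictionary attacks so cheap (a minimal sketch; real tools such as brainflayer also handle address derivation and key formats, omitted here).

      import hashlib

      def brainwallet_privkey(passphrase: str) -> bytes:
          # Classic brain wallets: privkey = SHA-256(passphrase), nothing more.
          return hashlib.sha256(passphrase.encode()).digest()

      # The attacker's side is an offline dictionary attack:
      corpus = ["", "password", "correct horse battery staple"]
      candidates = {brainwallet_privkey(p): p for p in corpus}
      # ...then derive the address for each key and look it up in the
      # blockchain's set of funded addresses.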

  12. The bitcoin workshop started with a keynote talk from Nathaniel Popper, a journalist from the New York Times who wrote a book, “Digital Gold”, on bitcoin. His interest is more in the people who use the system than in the developers: you can get the specs right, but if people don’t use it, it won’t matter. Nathaniel visited Hal Finney before he died, and saw that Hal had been corresponding with Satoshi back in January 2009, so he doubts the two were the same person. Bitcoin wasn’t working then, and Hal was helping make it work, by providing a second node and helping debug the code. Satoshi’s initial emails were derided and rejected; Hal was there to stand up for him in the first two or three months. Mike Hearn stepped in for a while when Hal bowed out, and bitcoin eventually took off with help from some anarchist lists. Ross Ulbricht came on the scene in late 2010, when Satoshi and Marty and Gavin were wondering when someone would use it. In the summer of 2010 various things took off, including Mt Gox.

    When Silk Road went online, the price of bitcoin doubled from 50c to $1, and then over a few months surged to $32. Ross started out with eight garbage bags of psychedelic mushrooms he’d grown near his cabin, and others started selling stuff before he ran out. Suddenly you could send money to Amsterdam to buy heroin or MDMA; the great thing about bitcoin was that even a heroin addict could figure out how to use it. Transactions went through reliably; it worked, whatever the other complaints. Some people see Silk Road as bitcoin’s original sin, but all press is good press, and many of the developers heard of bitcoin because of it. A whole new community got drawn in, such as Roger Ver and Eric Vorhies, who started looking for other applications and evangelising bitcoin as the future of money in libertarian circles. When apps like Satoshi Dice came along, core developers considered them spam, but they kept going and established the idea of microtransactions. Other entrepreneurs, like Charlie Shrem and the Winkelvoss twins who backed him, didn’t acquit themselves so well; Charlie got arrested for money laundering. But when the early entrepreneurs went to Silicon Valley conferences and persuaded some rich guy to buy $75,000 worth of bitcoin, that itself would cause a price spike, and this started the press cycle, which got more people buying. This continued until Mt Gox had a problem and the price collapsed.

    The next stage was the role the government played. New York’s financial regulator Ben Lawsky and Fincen’s Jennifer Shasky Calvery held the first meetings on bitcoin in 2013, and were persuaded that bitcoin had enough interesting legal uses, so that the goal should be to clean it up; essentially Lawsky became a bitcoin advocate, and responded to the Mt Gox failure by saying people shouldn’t give up. This affected the Senate hearings later in the year. The lesson Nathaniel draws is that a new financial technology should establish itself in a niche market and grow out, but has to maintain the confidence of its users.

    Questions covered whether we should think of bitcoin by analogy with other financial instruments; Nathaniel said it was perhaps more like the Internet, with bitcoin being like the Arpanet: the harbinger of something that works at greater scale, perhaps with a universal language for ledgers to talk to each other. What of ethereum? Nathaniel has met lots of passionate users but is not a technologist; it needs its own Roger Ver to evangelise it. He doesn’t ask what a technology is but who is using it. What about sovereign risk, with mining pools dominated by China? That’s a real worry and touches the block size debate; it’s not clear the incentives are set up right for a truly distributed system. As for who Satoshi is, bitcoin wasn’t a bolt from the blue, but came together from things that had been done before, such as the cypherpunks movement and the financial crisis. He thinks Satoshi might have been someone like Nick Szabo. Could the US have banned bitcoin? Probably; European regulators were waiting to see which way the USA would jump, and if the USA had said no then bitcoin might have been marginalised to places like Argentina and China; without dollar clearance everything would have been much harder and slower. Finally, Adi asked if regulators would tolerate an anonymous system; Nathaniel said the debate on terrorist financing was more in Europe than the USA, whose regulators say they’re not interested in information on every bitcoin address, merely those that are being filled by transfers from US financial institutions. They will of course use blockchain analytics to try to trace illegal transactions.

  13. I then gave Khaled Baqer’s talk on the 2015 DoS attack on bitcoin, Stressing Out: Bitcoin “Stress Testing”. Khaled had been unable to get a visa for Barbados in time, so I ran quickly through his slides and then let a question-and-answer session take up the rest of the time. The paper collects a lot of statistics about the attack and develops the methodology needed to identify the spam transactions it generated. In discussion, bitcoin developers said they now have mechanisms to prioritise transactions by fee in times of mempool stress, but that many miners are focussed on hash payouts rather than fees.
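
    The paper’s actual classifiers are more careful, but a hedged sketch of the simplest heuristic in this spirit (the dust threshold and the all-outputs test are my own illustrative choices, not the paper’s) might look like this:

      DUST_SATOSHIS = 1000   # assumed threshold, not the paper's figure

      def looks_like_spam(output_values: list[int]) -> bool:
          # Flag transactions whose outputs are all dust-sized, the
          # typical shape of stress-test traffic.
          return bool(output_values) and all(v <= DUST_SATOSHIS for v in output_values)

      print(looks_like_spam([100, 100, 100]))   # True: stress-test shape
      print(looks_like_spam([5_000_000]))       # False: ordinary payment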

    Second up was Joe Bonneau on Why buy when you can rent? Bribery attacks on Bitcoin-style consensus. Given bitcoin’s vulnerabilities, why aren’t there more attacks? Mining hardware is illiquid, with high entry costs and low salvage value. However, if there were some efficient way to rent lots of bitcoin mining hardware, lots of attacks would start to make sense. The closest thing we have now is cloud mining; this lets hardware owners offload their risk while saving new miners the trouble of setting up their hardware. However, the cloud mining exchanges won’t let you mine in arbitrary ways. You could pay people directly, or set up a mining pool that you run at a loss (some pools charge a zero fee, so you could charge a negative one), or pay in-band bribes, or use Teutsch’s puzzle contract. So, again, why are such attacks not happening? Maybe miners just focus on capital cost and electricity, and don’t think about the protocol; maybe if you offered a discount mining pool they wouldn’t notice. Is this sustainable? Perhaps an attack will get to the threshold, and we’ll see a tragedy of the commons. This is early work and early days, but we basically need game-theoretic models that allow collusion and side-payments between miners.
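
    A back-of-envelope calculation shows why a negative-fee pool is cheap relative to the value at stake (the figures below are my own illustrative assumptions, not Joe’s):

      BLOCK_REWARD_BTC = 25.0    # the reward at the time of FC 2016
      BLOCKS_PER_DAY = 144
      SUBSIDY = 0.02             # assumed: pay miners 2% over their fair share

      def daily_bribe_cost(hashrate_fraction: float) -> float:
          # The attacker only pays the subsidy on top of the rewards the
          # miners would have earned anyway.
          return BLOCK_REWARD_BTC * BLOCKS_PER_DAY * hashrate_fraction * SUBSIDY

      print(daily_bribe_cost(0.51))   # about 37 BTC/day to attract 51%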

    Next was Ethan Heilman on Blindly Signed Contracts: Anonymous On-Blockchain and Off-Blockchain Bitcoin Transactions. New cryptocurrencies such as zerocash, zcash and zerocoin support mixing natively; Ethan presents a scheme that can run on bitcoin, using four transactions in three blocks, and that also helps against bitcoin theft. The idea is that an intermediary blindly issues vouchers based on blindly signed transaction contracts. There are two or more transactions in block i and two or more in block i+2. The anonymity set is quite transparent, and can be built up using ephemeral addresses. He also has a proposal for micropayment channel networks, where the micropayments are off-blockchain and are also based on intermediaries and fair-exchange mechanisms. Both schemes resist DoS and sybil attacks; the first resists a malicious intermediary while the second resists an honest-but-curious one.
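
    Blind signatures are the key primitive here; a toy RSA version shows the shape (toy parameters far too small for real use, and the paper’s contract scheme involves more than this):

      import hashlib, math, secrets

      p, q = 10007, 10009                # toy primes; real keys are far larger
      n, e = p * q, 65537
      d = pow(e, -1, (p - 1) * (q - 1))  # signer's private exponent

      def H(msg: bytes) -> int:
          return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

      m = H(b"voucher: redeem 1 BTC to ephemeral-addr")

      # The client blinds the voucher hash with a random factor r...
      r = secrets.randbelow(n - 2) + 2
      while math.gcd(r, n) != 1:
          r = secrets.randbelow(n - 2) + 2
      blinded = (m * pow(r, e, n)) % n

      # ...the intermediary signs without learning which voucher it is...
      s_blind = pow(blinded, d, n)

      # ...and the client unblinds to get an ordinary signature on m.
      s = (s_blind * pow(r, -1, n)) % n
      assert pow(s, e, n) == m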

    Mathieu Turuani was next on the Automated Verification of Electrum Wallet. This wallet splits trust between a client and a server; normally it uses a key in each, but if the server is compromised or down, the client can get another key from a vault to extract her bitcoins. Similarly, if you lose your laptop, you can use the vault key along with the server key. The paper gives the details of how he used CL-ATSE to verify the secrecy properties, by checking for terms that an intruder cannot produce. He found and corrected a confirmation replay attack, and got safety results for the cases where one (but not both) of the client and server is compromised.

    The morning’s last speaker was Aggelos Kiayias on Proofs of Proofs of Work with Sublinear Complexity. Some nodes do full verification, while others do simplified payment verification that looks only at the last k blocks. Aggelos wonders whether an even more lightweight verifier is possible, with complexity sublinear in the length of the blockchain. How do you prove to such a verifier that a certain block is k blocks deep in the main chain? There have been various proposals for a richer back-link structure; he proposes one with recursively embedded chains, which allows provers to find relatively compact subchains to send to lightweight verifiers. The difficult thing is to ensure that the lightweight chain has been mined enough; his security proof against a malicious prover involves a simulation argument showing that an attack would yield another attack that would convince even a full verifier.
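
    One way to picture such a back-link structure (this toy encoding is mine, not the paper’s exact construction): call a block whose hash has mu extra leading zero bits a level-mu block, and have every block keep a pointer to the most recent block of each level, so a prover can hop backwards in exponentially long strides.

      import hashlib

      def level(h: int) -> int:
          # Extra leading zero bits of a 256-bit hash value.
          return 256 - h.bit_length()

      class Block:
          def __init__(self, payload: bytes, prev=None):
              prev_bytes = prev.hash.to_bytes(32, "big") if prev else b"\x00" * 32
              self.hash = int.from_bytes(
                  hashlib.sha256(prev_bytes + payload).digest(), "big")
              # Inherit the parent's back-links, then point every level
              # the parent attains back at the parent itself.
              self.interlink = dict(prev.interlink) if prev else {}
              if prev is not None:
                  for mu in range(level(prev.hash) + 1):
                      self.interlink[mu] = prev

      tip = Block(b"genesis")
      for i in range(1000):
          tip = Block(str(i).encode(), tip)
      # tip.interlink[mu] jumps straight to the latest level-mu block, so
      # a compact proof need only ship a few headers per level.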

  14. Kevin Demolino and Mitchell Arnett talked on Step by Step Towards Creating a Safe Smart Contract: Lessons and Insights from a Cryptocurrency Lab. Blockchain apps are hard to write. For example, etherpot was a lottery run on ethereum that had interesting bugs; one was that the blockhash instruction cannot look back more than 255 blocks, which broke draws that were settled late. How can you build safe smart contracts? There are many issues, from the difficulty a typical developer has in thinking about incentives, through the need for really defensive programming, to a lack of documentation, crude debugging tools and unhelpful compiler errors. They have produced some open-source materials on Serpent syntax and related issues for a smart contract programming class. However, smart contracts aren’t easy.
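
    A hedged Python model of the etherpot pitfall (the 256-block window is real EVM behaviour; the lottery logic is my own minimal stand-in):

      # BLOCKHASH exposes only the 256 most recent ancestors and returns
      # zero otherwise, so a lottery that settles lazily draws from zero.
      TOY_HASHES = {h: (h * 2654435761) % 2**256 for h in range(1000)}

      def evm_blockhash(current: int, n: int) -> int:
          if 1 <= current - n <= 256:
              return TOY_HASHES[n]
          return 0                     # too old: zero, not an error

      def pick_winner(current: int, draw_block: int, n_tickets: int) -> int:
          seed = evm_blockhash(current, draw_block)
          return seed % n_tickets      # seed == 0 always picks ticket 0

      print(pick_winner(300, 290, 10)) # fresh draw: pseudo-random ticket
      print(pick_winner(600, 290, 10)) # stale draw: always ticket 0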

    Joe Bonneau was next on EthIKS: Using Ethereum to audit a CONIKS key transparency log. He wrote a WEIS paper last year on why namecoin didn’t work: decentralised namespaces get squatted. So he favours centralised key servers: you get scale, you can deliver address privacy, you can cut name pollution and you can provide decent account recovery. But how do you stop the server being owned by a government? The key thing is to ensure consistent behaviour; with the right design, you only need to warn the user if their key is wrong. Churn is high; about 1% of the Signal userbase changes key every day (whereupon all their contacts get warned). He developed CONIKS, a consistent system that can be publicly audited and thus sends many fewer messages. It looks a lot like a private blockchain, signed by the keyserver, that commits to the entire directory. The remaining problem is to ensure that someone checks the tree for well-formedness, and the idea of this paper is that you can get the ethereum network to do it. You can also add new features, such as allowing power users to control their own name, at the cost of being responsible for their recovery keys. So far the CONIKS and Ethereum data structures have evolved separately, but they could converge to increase efficiency.
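
    The published commitment is essentially a Merkle root over the whole directory; a minimal sketch of the commit-then-audit shape (the real CONIKS structure is a Merkle prefix tree with privacy features that this flat version omits):

      import hashlib

      def h(b: bytes) -> bytes:
          return hashlib.sha256(b).digest()

      def merkle_root(leaves: list[bytes]) -> bytes:
          nodes = [h(l) for l in leaves] or [h(b"")]
          while len(nodes) > 1:
              if len(nodes) % 2:
                  nodes.append(nodes[-1])   # duplicate the odd node out
              nodes = [h(nodes[i] + nodes[i + 1]) for i in range(0, len(nodes), 2)]
          return nodes[0]

      directory = [b"alice:pk1", b"bob:pk2"]
      commitment = merkle_root(directory)  # signed and published each epoch;
      # an auditor (here, an Ethereum contract) checks that successive
      # commitments are consistent, so clients don't each have to.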

    The keynote was from Gustav Simonsson of Ethereum, reporting on the security audit of their protocol and their client implementation. They had Cigital and others working on evaluation from April 2014 until the public launch over a year later. They also had a public bounty program; Nick van der Bijl found a buffer overflow in the Go client that let you create ether out of thin air. There was a lot of detail on other bugs found and the client changes that followed. Not all the bugs were in code; one was in the formal spec, which failed to handle the overflow of -2^255 (whose negation in 256-bit two’s complement is itself). They are still sufficiently paranoid to have crossed out the word “safe” from their motto “A safe decentralised software platform”, and bugs are still coming in via the bounty program. The main lesson is “don’t make your own blockchain”; the second is “if you must, develop several different clients, so you can find bugs by studying how their behaviour differs”.
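
    To see why -2^255 is the awkward case, here is a two-line check (standard two’s-complement wrapping, nothing Ethereum-specific):

      INT_MIN = -(2**255)

      def neg256(x: int) -> int:
          # Negate, then wrap into the signed 256-bit range.
          r = (-x) % 2**256
          return r - 2**256 if r >= 2**255 else r

      assert neg256(INT_MIN) == INT_MIN   # -(-2^255) overflows back to itself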

  15. Andrew Miller presented a position paper, On Scaling Decentralized Blockchains. He’s been doing measurement studies; bitcoin’s current limit is 27 transactions per second, corresponding to a block size of 4MB. On the P2P network there are about 5,400 nodes; smaller blocks propagate faster, with the 50% effective throughput being 124 blocks/sec but the 90% being only 13 blocks/sec. Miners are routing around this using the (centralised) bitcoin relay network, which is no longer claimed to be maintained. There could be 10–100 times as much capacity if the P2P protocol were modified. So the block size debate may get more complex still; we may get to “side-plane” systems that use the blockchain to anchor other mechanisms that are faster or cheaper or have other features (this is his own preferred way of scaling up the blockchain). His takeaway message is that people involved in these debates should look at the data.

    Malte Moser then presented Bitcoin Covenants. Bitcoin’s scripting language is limited, though it now includes time; he proposes to extend it further so that a script can restrict the outputs of the transaction that spends it. Such a restriction he calls a covenant; it can be used, for example, to force an output to be spent only by sending one bitcoin to a specified public key. Covenants can be thought of as coloured coins and could be recursive. They can be used to make “vaults”: even if someone steals your bitcoin private key, you can recover your coins using a second key; there’s a blog post on how to implement secure bitcoin vaults. Covenants can also be used to overlay bitcoin-NG on top of bitcoin to improve bandwidth and latency.
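
    The vault pattern reduces to a small state machine, which a covenant would enforce on-chain (a toy Python model; the delay constant and method names are mine):

      from dataclasses import dataclass

      DELAY_BLOCKS = 100   # assumed clawback window

      @dataclass
      class Vault:
          balance: float
          staged: float = 0.0
          unlock_height: int = 0

          def start_withdraw(self, amount: float, height: int) -> None:
              # Normal spends only move coins to a staging area.
              assert amount <= self.balance
              self.balance -= amount
              self.staged += amount
              self.unlock_height = height + DELAY_BLOCKS

          def finish_withdraw(self, height: int) -> float:
              assert height >= self.unlock_height, "covenant: delay not elapsed"
              out, self.staged = self.staged, 0.0
              return out

          def clawback(self) -> None:
              # In the real script this branch needs the recovery key.
              self.balance += self.staged
              self.staged = 0.0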

    The last talk was by Iddo Bentov on Cryptocurrencies without Proof of Work. Proof of work has led to bitcoin centralisation, but abandoning it requires us to ensure that the money supply is issued fairly and that the overall protocol remains robust. He discussed possible alternative constructions, based on “chains of activity”, that work so long as most players are honest.
