Financial Cryptography 2015

I will be trying to liveblog Financial Cryptography 2015.

The opening keynote was by Gavin Andresen, chief scientist of the Bitcoin Foundation, and his title was “What Satoshi didn’t know.” The main unknown six years ago when bitcoin launched was whether it would bootstrap; Satoshi thought it might be used as a spam filter or a practical hashcash. In reality the first real-world purchase was someone buying a couple of pizzas for 10,000 bitcoins. Another unknown when Gavin got involved in 2010 was whether it was legal; if you’d asked the SEC then, they might have classified it as a Ponzi scheme, but now their alerts are about bitcoin being used in Ponzi schemes. The third thing was how annoying people can be on the Internet; people will abuse your system for fun if it’s popular. An example was penny flooding, where you send coins back and forth between your sybils all day long. Gavin invented “proof of stake”; in its early form it meant prioritising payers who turn over coins less frequently. The idea was that scarcity plus utility equals value; in addition to the bitcoins themselves, another scarce resource emerges: the old, unspent transaction outputs (UTXOs). Perhaps these could be used for further DoS attack prevention or as a pseudonymous identity anchor.

It’s not even clear that Satoshi is or was a cryptographer; he used only ECC / ECDSA, hashes and SSL (naively), he didn’t bother compressing public keys, and comments suggest he wasn’t up on the latest crypto research. In addition, the rules for letting transactions into the chain are simple; there’s no subtlety about transaction meaning, which is mixed up with validation and transaction fees; a programming-languages guru would have done things differently. Bitcoin now allows hashes of redemption scripts, so that the script doesn’t have to be disclosed upfront. Another recent innovation is using invertible Bloom lookup tables (IBLTs) to transmit expected differences rather than transmitting all transactions over the network twice. Also, since 2009 we have FHE, NIZKPs and SNARKs from the crypto research folks; the things on which we still need more research include pseudonymous identity, practical privacy, mining scalability, probabilistic transaction checking, and whether we can use streaming algorithms. In questions, Gavin remarked that regulators rather like the idea that there was a public record of all transactions; they might be more negative if it were completely anonymous. In the future, only recent transactions will be universally available; if you want the old stuff you’ll have to store it. Upgrading is hard though; Gavin’s big task this year is to increase the block size. Getting everyone in the world to update their software at once is not trivial. People say: “Why do you have to fix the software? Isn’t bitcoin done?”
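
The IBLT idea can be made concrete with a toy sketch. This is not the actual proposal in detail (the real scheme works on transaction IDs and sizes the table for the expected difference); it just shows how two nodes whose transaction sets differ in only a few elements can recover that difference from a small, fixed-size structure:

```python
import hashlib

K, M = 3, 64  # number of hash functions and table size (toy parameters)

def h(x, salt):
    return int(hashlib.sha256(f"{salt}:{x}".encode()).hexdigest(), 16)

def cells(key):
    return [h(key, i) % M for i in range(K)]

def new_table():
    return [[0, 0, 0] for _ in range(M)]   # each cell: [count, keySum, checkSum]

def insert(t, key):
    for c in cells(key):
        t[c][0] += 1
        t[c][1] ^= key
        t[c][2] ^= h(key, "chk")

def subtract(a, b):
    return [[a[i][0] - b[i][0], a[i][1] ^ b[i][1], a[i][2] ^ b[i][2]] for i in range(M)]

def decode(diff):
    only_a, only_b = set(), set()
    progress = True
    while progress:
        progress = False
        for i in range(M):
            count, keysum, chk = diff[i]
            if count in (1, -1) and h(keysum, "chk") == chk:   # a "pure" cell
                (only_a if count == 1 else only_b).add(keysum)
                for j in cells(keysum):                         # peel the key out
                    diff[j][0] -= count
                    diff[j][1] ^= keysum
                    diff[j][2] ^= h(keysum, "chk")
                progress = True
    return only_a, only_b

# two nodes with mostly-overlapping sets (small integers stand in for txids)
a, b = new_table(), new_table()
for tx in [101, 102, 103, 104]: insert(a, tx)
for tx in [101, 102, 103, 105]: insert(b, tx)
print(decode(subtract(a, b)))   # ({104}, {105})
```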

I’ll try to blog the refereed talks in comments to this post.

11 thoughts on “Financial Cryptography 2015”

  1. The first regular talk was by Anand Kashyap on Are you at risk? There have recently been many targeted attacks, from Shamoon, Luckycat and Stuxnet, to the GCHQ attack on Belgacom. Is there a pattern? Anand and colleagues at Symantec Research have been treating targeted attacks like a public health issue, using a case-control study to compute odds ratios for potential risk factors. They analysed firms by sector and size, and individuals by role, level, location and number of LinkedIn connections. The cases were spearphishing emails from Symantec’s email gateway from 2011 to 2013, while the controls came from their spam database. The national security sector was the most attacked, with an odds ratio over 22; then railroads with 11; banks are a fair bit down the list, and electricity utilities even further down; while some industries like construction and legal services are unlikely to be targeted. Big organisations are attacked more, and the job roles most targeted were support staff followed by managers. As for location, people in Europe, Australia and China were more likely to be targeted and people in the USA or Brazil less so. He suggests that a weighted average of risk factors might be used to build risk profiles for cyber-insurance schemes.
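
    For readers who want the arithmetic: a case-control odds ratio is just the odds of exposure among cases divided by the odds of exposure among controls. Here is a minimal sketch with made-up counts (not Symantec’s data) for a single risk factor:

```python
def odds_ratio(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls):
    """Odds of exposure among cases divided by odds of exposure among controls."""
    return (exposed_cases / unexposed_cases) / (exposed_controls / unexposed_controls)

# hypothetical counts: recipients in the national-security sector among
# spearphished (case) and ordinary-spam (control) email recipients
print(odds_ratio(exposed_cases=180, unexposed_cases=820,
                 exposed_controls=9, unexposed_controls=991))   # ~24
```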

    Rebekah Overdorf was next on Computer-Supported Cooperative Crime. Do cyber-criminals have a gang-like structure or a mob-like structure, and can we disrupt them by taking out leaders or experts? She analysed leaked message logs from the BlackHatWorld, L33tcrew and Carders forums. First, she built a communications graph and used the Louvain method for community detection and LDA for topic modelling. BlackHatWorld, which discusses SEO, had 18 communities with four or more members, the biggest being 212 strong (for video upload and CAPTCHA solving), then 202 (on bots). Carders has a more mob-like structure; it has fewer but larger communities, with the biggest having 800 members (discussing drugs and Apple products as well as carding) and a number being about the Dunbar number in size. Carders was hacked and shut down; its members went to L33tcrew, which is the biggest of the three and shows multi-tier communities, most with fewer than 150 members. Then she looked at degree, betweenness, closeness and eigenvector centrality, which align roughly with trust, information, low transaction costs and pagerank. All are positively correlated, which indicates the presence of powerful individuals, with the exception that in L33tcrew the closeness centrality was negatively correlated; this suggests an “association of mobsters”. Finally she looked at self-policing by looking at banned users; the change in clustering coefficient and average path length were significant on Carders but not strong, suggesting that self-policing was not that effective. She concludes that traffic analysis is a perfectly viable way of determining the structure and nature of cybercrime gangs and mobs.
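
    The graph measures she used are all standard; here is a hedged sketch of the sort of analysis, run with networkx on a stand-in graph rather than the leaked forum logs:

```python
import networkx as nx
from itertools import combinations
from scipy.stats import pearsonr

# toy communications graph standing in for a reconstructed forum message graph
G = nx.karate_club_graph()

# community detection (the talk used the Louvain method)
communities = nx.community.louvain_communities(G, seed=1)
print("community sizes:", sorted(len(c) for c in communities))

# the four centrality measures discussed, and their pairwise correlations
centrality = {
    "degree":      nx.degree_centrality(G),
    "betweenness": nx.betweenness_centrality(G),
    "closeness":   nx.closeness_centrality(G),
    "eigenvector": nx.eigenvector_centrality(G),
}
nodes = list(G)
for a, b in combinations(centrality, 2):
    r, _ = pearsonr([centrality[a][n] for n in nodes],
                    [centrality[b][n] for n in nodes])
    print(f"{a} vs {b}: r = {r:.2f}")
```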

    The third speaker of the session was Marie Vasek explaining why There’s No Free Lunch, Even Using Bitcoin. She’s been looking at scams that were fraudulent by design and that have bitcoin addresses, so that money in and out can be measured using the blockchain. Some are simple, such as easycoin, which supplies a wallet into which you can put your bitcoin; the scammers then take it if it’s over a threshold. (The scammer was taking about $20,000 a week up till the end of 2014.) Ponzi schemes include high-yield investment programs, some of which have moved into bitcoin (lasting on average 125 days and taking $6.4m in total, of which $3m were taken by the scammer) and others that were “born bitcoin” (lasting on average 37 days, taking $840k of which the scammer took 5%). The second-biggest group was mining scams, which sell mining products that are never delivered, taking $2.9m (AMC and IceDrill each took over $1m). None of these scams are really new; they’re just old ones adapted for bitcoin. In total, $11m were taken from 13,000 victims in 42 scams.

  2. Brad Moore started the second session with a talk on Multi-Class Traffic Morphing for Encrypted VoIP Communication. Wright and others have shown that VoIP encryption leaks, as the variable-bit-rate packet lengths disclose speakers’ identities, languages and even some key phrases. Naive methods won’t morph French to look like either English or Spanish, as it has many more medium-length packets and fewer small or large ones. Brad finds the most bandwidth-efficient superdistribution to which all communications of interest can be morphed. He tested it with audio data from Librivox, encoded using the Silk codec that’s used in Skype. His muffler adds about 50% to the audio corpus size, with the exact number depending on the number of classes of speakers used. The advantages of the algorithm are that it’s lightweight, fast, and maintains much of the bandwidth savings possible from VBR.
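
    A rough way to see the superdistribution idea: padding can only make packets bigger, so any target distribution that every class can be morphed into must be stochastically larger than each class’s packet-length distribution, and the cheapest such target has a CDF equal to the pointwise minimum of the class CDFs. The sketch below uses made-up distributions and illustrates only the principle, not the paper’s exact optimisation:

```python
import numpy as np

# hypothetical per-class packet-length distributions (probability per size bin)
sizes = np.array([40, 80, 120, 160, 200])          # bytes, toy bins
classes = {
    "english": np.array([0.30, 0.25, 0.20, 0.15, 0.10]),
    "french":  np.array([0.10, 0.20, 0.40, 0.20, 0.10]),
    "spanish": np.array([0.25, 0.30, 0.20, 0.15, 0.10]),
}

# padding only moves probability mass towards larger sizes, so the cheapest
# common target has CDF equal to the pointwise minimum of the class CDFs
cdfs = {k: np.cumsum(p) for k, p in classes.items()}
super_cdf = np.minimum.reduce(list(cdfs.values()))
super_pmf = np.diff(np.concatenate(([0.0], super_cdf)))

mean_after = (super_pmf * sizes).sum()
for k, p in classes.items():
    print(f"{k}: {(p * sizes).sum():.0f} -> {mean_after:.0f} bytes/packet")
```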

    Janaka Alawatugoda’s paper was on Protecting encrypted cookies from compression side-channel attacks. Kelsey showed that if user-supplied input is concatenated with a secret and compressed before encryption, then there may be an adaptive chosen-plaintext attack to leak the secret. Given that HTTP uses compression, this can be used to attack secrets in the HTTP body such as anti-CSRF tokens. Previously proposed mitigations such as disabling compression and randomising secrets all have disadvantages. How can we analyse them? Janaka first devises three security models: cookie recovery, random cookie indistinguishability and chosen cookie indistinguishability. He then proposes separating secrets from user inputs, so that the non-secret part is compressed first, then concatenated with the secret and encrypted; this works because secrets are small, and it gives a chosen-cookie-indistinguishable scheme. He also analysed a fixed-dictionary compression technique.
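
    The mitigation is easy to sketch. In the toy code below, encrypt is a stand-in stream cipher (a real system would use an authenticated cipher such as AES-GCM); the point is that in the naive version the ciphertext length can depend on how well attacker-controlled input matches the secret, while in the separated version it does not:

```python
import os
import zlib
from hashlib import sha256

def encrypt(key, plaintext):
    # stand-in stream cipher (hash-counter keystream), purely for illustration
    out = bytearray()
    for i in range(0, len(plaintext), 32):
        block = sha256(key + i.to_bytes(8, "big")).digest()
        out += bytes(a ^ b for a, b in zip(plaintext[i:i + 32], block))
    return bytes(out)

def naive_response(key, user_input, secret_cookie):
    # vulnerable: the secret is compressed together with attacker-controlled
    # input, so the ciphertext length leaks how much of the secret it matches
    return encrypt(key, zlib.compress(user_input + secret_cookie))

def separated_response(key, user_input, secret_cookie):
    # the mitigation discussed: compress only the non-secret part, then append
    # the (short) secret and encrypt the whole thing
    return encrypt(key, zlib.compress(user_input) + secret_cookie)

key = os.urandom(32)
secret = b"csrf=9f8e7d6c"
for guess in (b"csrf=9f8e", b"csrf=0000"):
    page = b"<html>... reflected: " + guess + b" ...</html>"
    # the first length typically varies with how well the guess matches the
    # secret; the second does not depend on the secret at all
    print(len(naive_response(key, page, secret)),
          len(separated_response(key, page, secret)))
```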

    David Fifield has been working on Fingerprinting web users through font metrics. Advertisers use a wide range of subtle tricks to track you, and Tor Browser does its best to block them. David’s new idea is font metric fingerprinting, a refinement of the existing font probing trick; rather than measuring what fonts are present, it measures glyph dimensions. Basically it draws unicode characters and measures their bounding boxes. Unicode is big and complex; so are fonts; and fonts vary across systems. In a test with 1016 mTurk users, font metrics give 7.6 bits (two bits more than the user-agent string); they found that the 43 characters with the most entropy (starting with the Rupee sign) are as good as testing all of unicode, at least on this mTurk user set. Further variation comes from hinting, aliasing, and character combining/justification issues; see here for the gory detail. Right now, this is the best identification attack on Tor Browser; that’s being worked on.
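
    The measurement itself is done with JavaScript in the browser, but the bookkeeping is simple: hash the measured bounding boxes into a fingerprint and see how much identifying information the fingerprints carry across a population. A minimal sketch with invented measurements:

```python
import hashlib
import math
from collections import Counter

def fingerprint(glyph_boxes):
    """Hash the measured (width, height) of each probe character into one value."""
    blob = ",".join(f"{c}:{w}x{h}" for c, (w, h) in sorted(glyph_boxes.items()))
    return hashlib.sha256(blob.encode()).hexdigest()[:16]

def surprisal_bits(fingerprints):
    """Average identifying information carried by a fingerprint, in bits."""
    counts = Counter(fingerprints)
    n = len(fingerprints)
    return sum(c * -math.log2(c / n) for c in counts.values()) / n

# hypothetical measurements from two users' browsers (the rupee sign was
# mentioned in the talk as the single most distinguishing character)
user_a = {"₹": (11, 17), "ﬀ": (14, 16), "∅": (10, 15)}
user_b = {"₹": (12, 18), "ﬀ": (14, 16), "∅": (10, 15)}
print(surprisal_bits([fingerprint(user_a), fingerprint(user_b)]))  # 1.0 bit
```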

  3. Olga Ohrimenko started Monday’s last session with a talk on Sorting and Searching Behind the Curtain. She wants to compute rankings on encrypted data without the overhead of FHE or the constraints of order-preserving encryption. The key is to split a private sort operation between an untrusted cloud CPU and a trusted cryptoprocessor. It uses Batcher’s sort (based on pairwise comparison and exchange). She can also rank documents by the frequency of occurrence of encrypted terms.
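
    Batcher’s sort is attractive here because it is a data-independent comparison network: the sequence of compare-and-exchange steps depends only on the input length, so the untrusted host can schedule the network while only the trusted cryptoprocessor ever touches plaintext values. A sketch of the idea (with the “cryptoprocessor” reduced to a plain comparison so it runs as-is):

```python
def batcher_pairs(n):
    """Compare-and-exchange schedule of Batcher's odd-even mergesort (n a power of two)."""
    pairs = []
    p = 1
    while p < n:
        k = p
        while k >= 1:
            j = k % p
            while j + k < n:
                for i in range(min(k, n - j - k)):
                    if (i + j) // (p * 2) == (i + j + k) // (p * 2):
                        pairs.append((i + j, i + j + k))
                j += 2 * k
            k //= 2
        p *= 2
    return pairs

def oblivious_sort(buffer, trusted_compare_swap):
    # the untrusted host only learns which positions are touched, never the values
    for i, j in batcher_pairs(len(buffer)):
        trusted_compare_swap(buffer, i, j)

# stand-in for the trusted cryptoprocessor: decrypt, compare, re-encrypt;
# here "encryption" is the identity so the sketch runs on its own
def compare_swap(buf, i, j):
    if buf[i] > buf[j]:
        buf[i], buf[j] = buf[j], buf[i]

data = [5, 3, 8, 1, 7, 2, 6, 4]
oblivious_sort(data, compare_swap)
print(data)   # [1, 2, 3, 4, 5, 6, 7, 8]
```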

    Tarik Moataz was next with Resizable Tree-Based Oblivious RAM. His aim is to make ORAM cheaper, and not limited to a fixed number of blocks. To this end he came up with naive, lazy and dynamic scaling techniques, and methods for oblivious merging. For the details, see the paper.

    Monday’s last speaker was Wouter Lueks, on Sublinear Scaling for Multi-Client Private Information Retrieval. He set out to improve on Goldberg’s 2007 private information retrieval scheme to use it in certificate transparency. This is implemented as an extension to Percy++. It’s reasonably fast, by PIR standards, and doesn’t need cooperation between clients. For the code see here.

  4. Tuesday’s proceedings started with a talk on Relay Cost Bounding for Contactless EMV Payments by Tom Chothia. Contactless EMV cards are everywhere now; the “tap-and-pay” mechanism for small payments is easy, but is it secure? These cards use ISO 14443 at the low level and EMV variants with various crypto tweaks at higher layers. Tom set out to implement relay attacks, having tried them first on passports and Oyster cards. A Proxmark III let them eavesdrop on the comms between card and reader; their relay implementation used mobile phones. Tom did a live demo; after several attempts he made a payment from the stage in San Juan to a terminal in Birmingham. Such relay attacks had been done before by Emms and others; the point is how to defend against realistic attacks (using phones rather than lab equipment) and with minimal changes to hardware (changing only payloads). The Visa and Mastercard protocols differ in details but involve the card sending a MAC and/or a digital signature on transaction details. The specs say a transaction should finish in 500ms while relaying takes 1s, but we can do faster relays with careful pipelining; the core is sending the unpredictable number and amount to the card, and getting the crypto results back. Organic transactions took from 330 to 637ms while the fastest relayed transaction was 485ms, so there is no time limit that will stop all attacks. The crypto takes 100-360ms while the relaying takes about 200ms. His proposal, “PaySafe”, times the nonce exchange instead. The response time is 28 to 36ms, and relaying these messages takes much longer. Rearranging the protocol to do this would appear to prevent low-cost relay attacks. He offers a formal model to argue for the soundness of his design; it verified that the attacker can’t observe the time-bounded step, then send a message to the card, and then reply to the reader in time, which is all fairly simple.
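
    The defence is conceptually tiny: move the crypto out of the timed phase and put a hard bound on the round-trip time of the nonce exchange alone. A hypothetical reader-side check might look like the sketch below (the card interface and the 40ms bound are invented for illustration, loosely based on the 28-36ms genuine response times reported):

```python
import time

RTT_BOUND_MS = 40   # hypothetical bound, chosen above genuine response times

def timed_nonce_exchange(card, reader_nonce):
    """Exchange nonces with the card and reject the transaction if the
    round trip is slow enough to suggest a relay."""
    start = time.perf_counter()
    card_nonce = card.exchange_nonce(reader_nonce)   # hypothetical card interface
    elapsed_ms = (time.perf_counter() - start) * 1000
    if elapsed_ms > RTT_BOUND_MS:
        raise ValueError(f"nonce exchange took {elapsed_ms:.1f} ms: possible relay")
    # the MAC/signature over transaction details, including both nonces,
    # is only checked afterwards, outside the timed phase
    return card_nonce
```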

    Serge Vaudenay followed in the same theme with Private and Secure Public-Key Distance Bounding: Application to NFC Payment. He discusses distance-bounding protocols and various attacks on them, such as where a remote prover has a partially-trusted local shill; none of the proposals so far deals with all of them. He proposes a new one-time protocol in which the prover and verifier set up a mutually authenticated shared secret, with which they then run an asymmetric distance-bounding protocol; he has a security proof for this.

    The third speaker was Gunes Acar and his title was Purchase details leaked to PayPal. Could payment providers accumulate information about our purchase habits and abuse it? Their data contain our real identifiers and sensitive shopping data. They looked at 881 merchants who target US customers and take US$ payments, to see what personal transaction details are shared with PayPal. They ran a complete random transaction with a fresh VM right up to the final payment click, where they stopped. They found for example that PayPal calls Adobe’s Omniture on its checkout data, and passes payment metadata, browser characteristics, and the referrer data (which may leak what you bought). PayPal’s privacy policy says it may provide information to third parties “in connection with” the services they provide to PayPal. In fact 52% of web shops shared product names, item numbers and descriptions with PayPal. He says that since the study, PayPal has stopped leaking this stuff to Adobe.

    Sander Siim then told us How the National Tax Office Evaluated a Tax Fraud Detection System Based on Secure Multi-party Computation. In Estonia VAT was 31% of revenue, but fraud amounted to 3% of the government’s budget; so in 2013 parliament mandated an invoice data annex to the VAT return whereby all firms must report all transactions with other firms exceeding 1000 Euros, with a view to checking that the buying and selling companies were declaring the same amount. Following lobbying by businesses about the accounting overhead and business secrets, the president vetoed the bill. Sander and colleagues approached the tax and customs board with a proposal to use secure multiparty computation, and built a research prototype using Sharemind. The idea is to break VAT declarations into three shares, on servers run by the tax board, the business association, and another party, and run distributed computations to get risk scores for companies. His tests show that it’s feasible; the Estonian economy’s monthly tax returns could be processed in ten days. The remaining problem is that the tax board currently keeps its algorithms proprietary, and would prefer not to reveal them in case they get gamed; at present we don’t know how to run queries whose logic is kept secret in practical amounts of time. Revised legislation has now been accepted and secure MPC is on the tax board’s roadmap for the next few years. In conclusion, MPC can solve some real problems, although business processes may have to be changed. For more see here.
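
    Sharemind rests on secret sharing across three servers; as a rough illustration (not Sharemind’s actual protocol), additive sharing lets the three parties compare buyers’ and sellers’ declarations without any single server seeing the figures:

```python
import secrets

P = 2**61 - 1   # toy prime modulus; Sharemind's actual arithmetic differs

def share(x):
    """Split a declared amount into three additive shares."""
    a, b = secrets.randbelow(P), secrets.randbelow(P)
    return a, b, (x - a - b) % P

def reveal(shares):
    return sum(shares) % P

# each company's declared amount is split across servers run by the tax board,
# the business association and a third party; linear checks work share-wise
buyer_decl = share(120_000)
seller_decl = share(118_000)
diff_shares = [(b - s) % P for b, s in zip(buyer_decl, seller_decl)]
print(reveal(diff_shares))   # 2000 -> the two declarations don't match
```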

  5. Sebastian Uellenbeck led off Tuesday’s second session with a talk on Tactile One Time Pad: Leakage-Resilient Authentication for Smartphones. In his scheme, your phone vibrates a number of times; you add this mod 10 to the next digit of your PIN and enter the result. The idea is to make authentication robust against shoulder surfing, and the big question is whether it’s usable in practice. He got people to play a game to see how quickly they could pick up the knack; half got to the point where they could authenticate in a bit over 200ms per pulse, giving about 20 seconds for the whole process. They tested audibility with various phones and at various distances: at half a meter, long vibrations can be detected in an office environment but short vibrations can’t. (They can be detected in an anechoic chamber.) In questions, it was asked whether there might be user side channels, such as grimaces as they count; the speaker said that subjects may do this if you don’t tell them not to.
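
    The protocol itself fits in a few lines; this sketch simply restates the scheme as described in the talk, with the vibration counts playing the role of the one-time pad:

```python
import random

def expected_response(pin_digit, vibrations):
    # the user enters (pin digit + number of vibrations) mod 10
    return (pin_digit + vibrations) % 10

def authenticate(pin, entered_digits, vibration_counts):
    return all(expected_response(d, v) == e
               for d, v, e in zip(pin, vibration_counts, entered_digits))

pin = [3, 1, 4, 1]
vibes = [random.randint(0, 9) for _ in pin]            # the one-time pad
user_entry = [(d + v) % 10 for d, v in zip(pin, vibes)]  # what an honest user types
print(authenticate(pin, user_entry, vibes))              # True
```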

    Asadullah Al Galib’s subject was User Authentication Using Human Cognitive Abilities. People have different cognitive skills and abilities, so Asadullah has devised an authentication game that involves matching a shape to a grid of 25 samples. This measures a number of mental processes. Visual search is affected by both bottom-up and top-down influences like knowledge; visual working memory can be measured with a visual pattern test; automatic processing can be triggered by an appropriate prime, which in his game is coloured lines guiding the movement of the gold coins that players get on a successful match. They evaluated it with 23 users in controlled conditions and 129 in an uncontrolled (mTurk) environment. They used five games to train each user in the registration phase, and it took 2.5 minutes to do an authentication; the equal error rates were probably about 5%. An impersonation attack was tried where attackers tried to mimic three users with lower capabilities; they failed to authenticate. In questions, it was asked whether ego depletion would undermine the system; the speaker had not looked at that.

    Tuesday’s last speaker was Stephan Heuser, speaking on Smart and Secure Cross-Device Apps for the Internet of Advanced Things. He’s interested in how Android apps might share resources across devices, such as where you do a video conference on your phone but use a nearby TV screen for a better display. You might want to keep your credentials in your phone, and you might not want to have to fiddle about installing a new app on every device in sight. What sort of framework might make things work better across the “Internet of Things”? He has a proof of concept implementation called Xapp (pronounced cross-app) where you have a “host” app on the TV and a “manager” app on your phone; each executes modules within sandboxes which can together make up a distributed app. There is a crypto protocol for secure assembly and control of resources at runtime; a manager can take ownership of a host by touching it, then issue access tokens to establish secure channels, upload modules, and assemble them into an app. It is built on top of Remote-OSGi, a service-based component framework, which provides much of the mechanics but which had not thought enough about security to provide transport security, endpoint authentication or access control.

  6. The Financial Crypto rump session started with a toast to the late Hal Finney, proposed by fellow cypherpunk Ian Goldberg, one of the many who were inspired by Hal to learn about crypto. The rump session was dedicated to him.

    The first speaker was Hani Dawoud whose company Skyhigh Networks is developing tools for symmetric searchable encryption; he’s hiring.

    Sander Siim announced a free public SDK for MPC.

    Andrew Miller has a startup called Microdesic doing auditable private offline e-cash, which is unconditionally anonymous unless you double-spend. He’s willing to discuss his protocol design with potential reviewers.

    Joe Bonneau announced BTC-Tech, an online course on bitcoin and cryptocurrency technologies from Princeton on piazza.com.

    Marie Vasek announced the new open-access Journal of Cybersecurity, and two postdoctoral fellowships open at SMU.

    Rainer Boehme advertised the New Security Paradigms Workshop which is in the Netherlands in September.

    Ittay Eyal talked on the miner’s dilemma, otherwise known as prisoners in swimsuits. Miners form pools to get stable income, but can free-ride by withholding blocks; from miners attacking pools we move to pools attacking each other, looking for the optimal infiltration rate and searching for an equilibrium. This turns out to be a classic prisoners’ dilemma, with a dominant strategy of attacking other pools.

    Andrew Miller came up again to talk about classes and student projects he’s running in cryptocurrencies, and to ask for ideas for extending this. He’s also working on the social graph of bitcoin transactions. Finally, he asks how best to handle the ethics of launching an altcoin experiment: should the experimenter offer to buy back all the coins at a fixed price when the experiment ends?

    Yvo Desmedt announced tenure-track positions in cybersecurity at UT Dallas.

    David Fifield talked about how the evolution of obfuscation proxies for Tor tells the story of the censorship arms race, with obfs4 adding a per-bridge private key to prevent bridge probing.

    Jason Cronk is organising an Atlanta bitcoin consumer fair in April.

    Jean Camp and Sven Dietrich speculated about future NSA value added services from backup through faster browsing (through smart prefetching) to personalised caller ID, automatic phone call transcripts and instantaneous location of mislaid mobile phones. Then come social network link analysis, forgotten contact details and finding old classmates; then identity theft notification, a year-end summary of all your expenses, leading to automatic tax filing and per diem calculation. The final tranche runs from instant infidelity notification and ex avoidance to automatic restraining-order generation.

  7. The crypto primitives session started with a talk by Thomas Gross on Signatures and Efficient Proofs on Committed Graphs and NP-Statements. He’s interested in what you can prove about graphs that you want to keep confidential; claimed potential applications include social graphs, relationships between police case files, and the topology of interactions between financial institutions. An issuer can certify a graph, and a prover can convince verifiers of predicates on graph properties in zero knowledge. It’s based on the Camenisch-Lysyanskaya signature scheme.

    Brent Waters was next with Efficient Statically-Secure Large-Universe Multi-Authority Attribute-Based Encryption. The idea of attribute-based encryption is that rather than encrypting a message to an individual or a group, you encrypt to everyone in the system who is a doctor and also a Pfizer researcher (or some other combination of attributes). The problem is to build systems that resist collusion attacks by people who have the required attributes between them but not individually. His scheme is based on bilinear pairings and has the advantage over previous schemes that it can cope with multiple mutually-mistrustful authorities. There are some tricky details around ensuring collusion resistance without coordination.

    Rachid El Bansarkhani’s topic was Augmented Learning with Errors: The Untapped Potential of the Error Term. He’s interested in improving the performance of lattice-based cryptosystems. Current systems waste many error bits for a small number of message bits. Starting from a scheme of Micciancio and Peikert from 2012, he proposes a more efficient version that can be based on any trapdoor function, achieves CCA1-secure encryption, and leads to CCA2 schemes for authenticated encryption.

  8. Can Gemicioglu has developed BabelCrypt: The Universal Encryption Layer for Mobile Messaging Applications. Now that email has been largely replaced by cloud-based messaging systems, how can we get privacy? The threat model is not just service providers (whether corrupt or compelled) but compromised accounts and malicious third-party code. There are standalone secure messaging apps but most people won’t change their networks. Babelcrypt encrypts text for any underlying messaging application by providing a new keyboard to encrypt text and a screen overlay to decrypt it. The novelty isn’t the crypto, but the system engineering: he uses Android’s facility for pluggable keyboards, and also scans the display for signs of ciphertext; text is encrypted a line at a time (images and videos are not yet handled). The crypto can be straightforward or OTR. He’s thought hard about usability.

    Michael Brenner was next on Ash’s Evil Twin; how do you detect an evil twin access point? The SSID is completely configurable, and with an unencrypted network you can’t really check anything. Michael’s contribution is the first real-world study of user wifi behaviour in regard to evil twin hotspots. Three quarters of users had up to 22 open wifi networks configured on their devices; on session establishment, over 10 networks were visible on average; looking at T-mobile sessions, they found that about three quarters were set up automatically. This led them to build METDS, a stopgap countermeasure that uses analytics to look at the surrounding network environment. Not only is it a lot more work for an attacker to fake surrounding network SSIDs, but METDS will filter on GPS location as well (this is done last to minimise energy use). The hard cases are when you connect to Starbucks at a location you’ve not visited before; in that case a user interaction is required.
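
    The countermeasure is essentially an environment fingerprint. As a hedged sketch (the names, thresholds and check order are invented, not the paper’s), the device only auto-joins a remembered open network if the surrounding SSIDs and rough location resemble the place where the network was originally used:

```python
import math

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def safe_to_autoconnect(profile, visible_ssids, gps,
                        min_overlap=0.5, max_distance_km=0.5):
    """Only auto-join a remembered open network if the surrounding SSIDs and
    location look like where it was recorded before; otherwise ask the user."""
    overlap = len(profile["ssids"] & visible_ssids) / max(len(profile["ssids"]), 1)
    return overlap >= min_overlap and haversine_km(profile["gps"], gps) <= max_distance_km

profile = {"ssids": {"eduroam", "CoffeeCo", "Guest1234"}, "gps": (52.205, 0.119)}
print(safe_to_autoconnect(profile, {"eduroam", "CoffeeCo", "EvilTwin"}, (52.205, 0.119)))  # True
print(safe_to_autoconnect(profile, {"Starbucks-FREE"}, (18.46, -66.11)))                   # False
```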

    The morning’s final talk was by Alexandra Dmitrienko on Market-driven Code Provisioning to Mobile Secure Hardware. How can you ship apps to SIM cards, secure elements (SEs), NFC chips and trusted execution environments (TEEs)? At present, third-party app developers almost never use such assets; vendors lock them down not just for “security” but for lack of a business model. She proposes a code distribution scheme with developer-centric access controls, based on existing javacard mechanisms, so that an app can request the installation of a javacard applet on which it depends. Business-level changes are likely to require much more rapid development (and approval) of applets, which means that direct vendor control won’t work any more. She proposes an applet manager to do on-demand applet installation in the SE. She did a test implementation on jCardSim which is available here. Questions touched on whether an app developer would have to bundle versions that work with and without the applet, and whether the user would have to pay extra money to install the applet; how would the incentives align? It would depend on the specific industry and application.

  9. Mathias Humbert started the Wednesday afternoon session with a talk On Non-cooperative Genomic Privacy. Now that genotyping costs under $100, there are thousands of places where it’s shared, not just the big sites like 23andme (with 800k users). Genomic data is non-revocable, anonymisation is ineffective and relatives’ risks are interdependent. Is it worth protecting your genome if many of your family members have already revealed theirs? Mathias studies the spectrum of selfish to altruistic behaviour of N agents with interdependent risks, with a persistent exogenous adversary. A two-player disclosure game always has a pure Nash equilibrium defined by the players’ best responses; but if the discrepancy between outcomes is high enough, players will follow opposite strategies. For the altruistic case, he defines a familial Nash equilibrium for which the same holds, although the equilibria are shifted up and to the right by players internalising some of the externalities. He evaluated the social welfare effects not just in theory but with respect to a specific Utah family with three generations (he can deal with an N-player disclosure game by extending the Bayesian network framework). He concludes that misaligned incentives lead to inefficient equilibria, and altruism does not always lead to better outcomes. Finally, he notes that a workshop on genomic privacy will be co-located with Oakland this year.
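
    The game-theoretic machinery is standard; for a concrete feel, here is a toy two-player disclosure game with invented payoffs (not the paper’s model or numbers) and a brute-force search for pure-strategy Nash equilibria:

```python
from itertools import product

def pure_nash(payoffs, strategies=("disclose", "withhold")):
    """Enumerate pure-strategy Nash equilibria of a two-player game given a
    dict mapping (strategy1, strategy2) -> (payoff1, payoff2)."""
    equilibria = []
    for s1, s2 in product(strategies, repeat=2):
        u1, u2 = payoffs[(s1, s2)]
        best1 = all(u1 >= payoffs[(t, s2)][0] for t in strategies)
        best2 = all(u2 >= payoffs[(s1, t)][1] for t in strategies)
        if best1 and best2:
            equilibria.append((s1, s2))
    return equilibria

# hypothetical payoffs for two relatives trading off the benefit of sharing
# their genome against the correlated privacy loss
payoffs = {
    ("disclose", "disclose"): (2, 2),
    ("disclose", "withhold"): (3, -1),
    ("withhold", "disclose"): (-1, 3),
    ("withhold", "withhold"): (0, 0),
}
print(pure_nash(payoffs))   # [('disclose', 'disclose')]
```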

    Jens Grossklags spoke next on Incentives to Share Private Information for Population Estimates. Jens also uses game theory to explore individual and population-level incentives for, and attitudes to, information sharing. One problem is where users give consent to the use of data with different levels of precision. He analyses the game in which individuals select these levels; its Nash equilibrium is largely a function of the number of players, and precision will go to zero as they increase. Yet surprisingly, having many individuals giving bad data is better than a few giving good data. In practice there may be a minimum precision level below which individuals must opt out completely; it turns out to be possible to define a minimum precision level at which all individuals are willing to contribute and the results are still good. This gives a different approach to the analysis of threshold public good games.

    Paolo Palmieri was next with Paying the Guard: an Entry-Guard-based Payment System for Tor. Tor has recently started pushing users to stick with a single entry guard for months at a time. Might it make sense for users to subscribe to a guard, as they do for VPNs? He concludes that proper incentives are needed, and proposes a modification to the entry guard selection process, for it to act as the point of sale. Guards then pay downstream relays with signed IOUs that are settled later. His hope is that incentivising relay operators will cause them to come out in the open and legitimise the business.

    Ivan Pustogarov was the day’s last speaker with Proof-of-Work as Anonymous Micropayment: Rewarding a Tor Relay. Tor is not a peer-to-peer system, and many clients can’t contribute as they are behind NAT; paying with money can be a psychological obstacle, and cryptocurrencies are often not anonymous. His proposal is that Tor clients will be enrolled in an altcoin mining pool and be paid in priority tickets. Questioners asked whether Tor clients would have to download and run dodgy binaries; the speaker suggested putting mining software in the Tor browser.

  10. Thursday started with Yongjun Zhao talking about Privacy Preserving Collaborative Filtering from Asymmetric Randomized Encoding. There’s over a decade of research into how to do collaborative filtering in ways that don’t leak individual preferences, in a variety of server-based and decentralised ways, whether with crypto machinery or various trusted parties. Yongjun’s proposal is a scheme that maps a user preference (such as “I like game of thrones”) plus a random key to a longer string with the properties that it’s hard to work back from an encoding to the original preference, but it’s easy to check, under certain conditions, whether two encodings are from the same preference. A possible application is secure friend detection. In questions it was pointed out that the encoding need not be asymmetric.

    Jakob Juhnke was next on Anonymous and Publicly Linkable Reputation Systems. He’s tackling basically the same problem but his approach is to give users keys and have a public reputation board; it’s basically like an electronic election system where people have anonymity unless they cast multiple votes for the same product. It’s based on group signatures, with public linkability so that anyone can decide whether two signatures were created by the same user; and no set of colluding users can frame an honest one.

    Sebastian Biedermann introduced a new attack with Hard Drive Side-Channel Attacks using Smartphone Magnetic Field Sensors. Magnetic fields have already been used in side-channel attacks on smartcards. Now every smartphone has a magnetic field sensor, designed to measure the earth’s magnetic field of 50-75 microTesla. Can these be used in attacks, such as perhaps to extract information from a target machine by observing the movement of its disk drive read head? They can indeed. A smartphone can measure a disk write as a deviation of perhaps 2-3 microTesla at 7cm from a laptop. Fingerprints were classified by Pearson correlation. It’s fairly easy to detect application startup (error rate 3%) and to tell which operating system is in use. They also found they could tell which VM was active under Xen in a server on which a smartphone was laid; it could find out which server was hosting a target web page (error rate 15%) and which files had already been downloaded. In response to questions, he has not yet tried creating a covert channel to get information from a classified laptop to a nearby smartphone.
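
    The classification step is as simple as it sounds: correlate a fresh magnetometer trace against stored fingerprints and pick the best match. A sketch with synthetic data standing in for real magnetometer readings:

```python
import numpy as np

def classify(trace, fingerprints):
    """Match a magnetometer trace against stored activity fingerprints by
    Pearson correlation and return the best-matching label."""
    scores = {name: np.corrcoef(trace, ref)[0, 1] for name, ref in fingerprints.items()}
    return max(scores, key=scores.get), scores

rng = np.random.default_rng(0)
fingerprints = {
    "browser_start": rng.normal(0, 1, 200),   # stand-ins for recorded traces
    "file_copy":     rng.normal(0, 1, 200),
}
observed = fingerprints["file_copy"] + rng.normal(0, 0.3, 200)   # noisy re-measurement
label, scores = classify(observed, fingerprints)
print(label)   # file_copy
```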

    Gus Gutoski’s subject was Hierarchical deterministic Bitcoin wallets that tolerate key leakage. In managing bitcoin wallet keys, it is convenient to have a master public key from which child public keys can be derived without compromising any corresponding secret keys; an application of the old idea of forward-secure public key crypto to facilitate applications such as corporate financial delegation and trustless audit. Bitcoin has a standard BIP32 for such a hierarchical deterministic wallet; its vulnerability is that the master public key plus any child private key gives the master private key. Gus’s innovation is to have N master keys instead of one, and arrange things so that an attacker needs N+1 child private keys to break the system. This enables the financial delegation and trustless audit use cases to be run simultaneously.
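
    The weakness is easy to see in a simplified model of non-hardened derivation. Real BIP32 works over secp256k1 with HMAC-SHA512 and chain codes; the toy discrete-log group below keeps only the additive structure that causes the problem:

```python
import hashlib

# toy discrete-log group standing in for secp256k1
p = 2_147_483_647          # prime modulus
g = 7                      # generator
n = p - 1                  # modulus used for exponents

def H(pub, index):
    return int(hashlib.sha256(f"{pub}:{index}".encode()).hexdigest(), 16) % n

def derive_child_priv(master_priv, master_pub, i):
    return (master_priv + H(master_pub, i)) % n

def derive_child_pub(master_pub, i):       # works without any private key
    return (master_pub * pow(g, H(master_pub, i), p)) % p

master_priv = 123_456_789
master_pub = pow(g, master_priv, p)
child_priv = derive_child_priv(master_priv, master_pub, 5)
assert pow(g, child_priv, p) == derive_child_pub(master_pub, 5)

# the weakness: the master public key (e.g. held by an auditor) plus any one
# leaked child private key recovers the master private key
recovered = (child_priv - H(master_pub, 5)) % n
print(recovered == master_priv)   # True
```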

  11. Yonatan Sompolinsky started the last session, discussing Secure High-Rate Transaction Processing in Bitcoin. Could bitcoin ever hit the 10,000 transactions per second handled by Visa? The block size is now limited to 1MB; each block needs a hash and we’re currently getting one per 10 minutes. Suppose we increased the block size to 1GB? One problem is that Bitcoin security deteriorates at high throughput, as larger blocks and network delays will mean more forks and also make selfish mining easier. Yonatan points out an error in Satoshi’s analysis (his assumption that block creation is Poisson breaks down for a big network) and suggests a correction. He proposes a “GHOST” (greedy heaviest observed subtree) protocol in which at each fork you pick the child with the largest subtree; although discarded blocks’ content is ignored, their existence is not; the candidate that gave rise to the largest number of discovered hashes (successful or not) takes precedence. This recovers the 50% security threshold of the protocol, and the main chain is expected to converge in finite time. This is backed up with simulation results.
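
    The fork-choice rule itself is simple; here is a minimal sketch on a toy block tree, where GHOST prefers the side of the tree with more total blocks beneath it even though the competing chain is longer:

```python
def subtree_size(block, children):
    """Number of blocks in the subtree rooted at block (including itself)."""
    return 1 + sum(subtree_size(c, children) for c in children.get(block, []))

def ghost_tip(genesis, children):
    """GHOST fork choice: at each fork, descend into the heaviest subtree."""
    block = genesis
    while children.get(block):
        block = max(children[block], key=lambda c: subtree_size(c, children))
    return block

# toy block tree: the fork under B means B's side has more total blocks,
# even though the chain through C is longer
children = {
    "genesis": ["A"],
    "A": ["B", "C"],
    "B": ["B1", "B2"],        # competing children: discarded blocks still add weight
    "B1": ["B3"],
    "B2": ["B4"],
    "C": ["C1"],
    "C1": ["C2"],
    "C2": ["C3"],
}
print(ghost_tip("genesis", children))   # B3, although the longest chain ends at C3
```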

    Yoad Lewenberg was next on Inclusive Block Chain Protocols. Building on the previous paper, Yoad suggests that blocks should include several links to predecessors including known childless blocks, providing the miner’s worldview. He discusses the best algorithms to parse such data and whether miners of offchain blocks should get any reward (his solution is that the longer the abandoned chain the less they should get). These changes should bring the advantage of improved throughput; and paying miners who produce off-chain blocks some reward is fairer to poorly connected miners.

    The last talk of FC 2015 was by Mark Simkin on VeriStream – A Framework for Verifiable Data Streaming. At CCS 2012, Schroeder and Schroeder proposed authenticating streams using Chameleon authentication trees (which use trapdoor hash functions). These have limitations, including finite stream length, which the current paper sets out to fix. He did an implementation and evaluation. Questioners asked what the advantage might be over simple MACs; the answer is that public-key mechanisms allow updates.
