8 thoughts on “Security Protocols 2017”

  1. Joan Feigenbaum started the workshop by discussing the multiple objectives of lawful surveillance protocols. At SPW 2014, she lamented that the Snowden revelations had exposed a catastrophic failure of institutions, including the crypto and security research community, in that we had failed to oppose mass surveillance sufficiently effectively or forcefully. After her talk, Jeremie Koenig had wondered how to move from the feudal Internet to a renaissance version. Since then she and others have worked on privacy-preserving and accountable surveillance with limited scope and built-in oversight. Combining crypto protocols with black-letter law is her way forward. She was motivated by the High Country Bandits case, where the FBI intersected the cellphone metadata of 150,000 citizens to find one bank robber. Can we devise protocols to protect a search for an unknown individual whose metadata have a combination of properties, under a “John Doe warrant”, as it’s called in the USA? She has proposed using privacy-preserving set intersection. Discussion followed on whether this was a realistic model, given that countries such as Britain allow traffic data search algorithms to have joins as well as intersections, given the other engineering and political constraints on lawful access, and given the demand for more creative methods for mining the data.
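
    A toy illustration of the kind of intersection query at issue, in Python; the tower names and identifiers below are invented, and a real privacy-preserving set intersection protocol would of course blind the identifiers rather than compare them in the clear:

    # Toy illustration (not a PSI protocol): intersect the subscriber sets seen
    # near each robbery to narrow many thousands of records down to a handful.
    tower_sightings = {
        "robbery_1": {"alice", "bob", "carol", "dave"},
        "robbery_2": {"bob", "carol", "erin"},
        "robbery_3": {"carol", "frank"},
    }
    suspects = set.intersection(*tower_sightings.values())
    print(suspects)  # {'carol'} -- the only phone seen near all three robberies
    # A privacy-preserving version would have each carrier blind its identifiers
    # (e.g. by hashing under a jointly held key) so that the investigator learns
    # only the intersection, never the full sets.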

    Simon Foley was next, talking about getting security objectives wrong. He studied an industrial control system; initially the objective seemed to be that any SCADA communications over the Internet should run over a VPN and be encrypted. He used Shodan to search for Siemens kit running the S7 protocol on port 102 and found a basket of vulnerabilities; they tended to be things like service denial, for which the VPN policy was irrelevant; what mattered was whether you blocked port 102 on your firewall (and if you did, remote management couldn’t work). Discussion started on how you can measure security if the definition isn’t stable. When dealing with multiple objectives, maybe we should go for a partial order; instead of secure composition, aim for secure replacements, so that a component that isn’t resisting the threat du jour too well can be replaced by a better one. Trying to define security too closely leads to long feature lists and heavy compliance budgets; comparison is simpler. In an ideal world you might be lucky enough to get a lattice of policies. But feature interactions still happen, and real administration is about dealing with them.
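
    A minimal sketch of the partial-order idea in Python, assuming (hypothetically) that each configuration can be characterised by the set of threats it resists; one configuration is then a secure replacement for another if it resists at least the same threats:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Config:
        name: str
        resists: frozenset  # threats this configuration withstands

    def is_secure_replacement(new: Config, old: Config) -> bool:
        """new may replace old if it resists every threat old resisted."""
        return old.resists <= new.resists

    vpn_only   = Config("VPN, port 102 open", frozenset({"eavesdropping"}))
    fw_only    = Config("port 102 blocked", frozenset({"remote DoS"}))
    vpn_and_fw = Config("VPN + port 102 blocked",
                        frozenset({"eavesdropping", "remote DoS"}))

    print(is_secure_replacement(vpn_and_fw, vpn_only))  # True
    print(is_secure_replacement(fw_only, vpn_only))     # False: incomparable,
    # which is exactly why a partial order rather than a total ranking fits.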

  2. After the tea break, Frank Stajano’s topic was the safety of asymmetric social protocols. By this he means protocols whereby we’re offered bargains, coupons, apps and other marketing messages; the sender is better off after such protocols but the receiver may not be, especially if the sender is not honest. How can offers be screened? The receiver’s dilemma is how to assess not just the value of the offer but the trustworthiness of both the offer’s maker and the offer’s sender. These are not the same; frauds are often passed on by clueless friends, associates and affiliates. Akerlof and Shiller’s Phishing for Phools has bad news on the first count: scammers are guaranteed to exist at equilibrium, while Konnikova’s The Confidence Game teaches that even the most astute receivers cannot assess sender trustworthiness. Insurance markets can’t help much, so what can we do? Escrow and compensation frameworks have been proposed but are hard to implement; eBay and Amazon have built reputation systems as a practical mitigation. Frank’s proposal is a marketplace for social protocol insurers who can compete with each other to advise consumers on the quality and sources of offers. Discussion was on how such a market might emerge; we have the Consumers’ Association, plus a new startup in the form of Agari. We also have insurance provided by credit card companies, and malware screening by app stores. How could quality insurance work as a market, without collusion between sender and insurer and without adverse selection, moral hazard and having too many third parties reading your emails? Half of the people in the room are using Gmail anyway …

    Paul Wernick is interested in simulating perceptions of security, and has been trying to model them qualitatively using system dynamics. The security perceptions of developers, users and intruders are different, and each can make sense in context. The user doesn’t know much; they’re told of breaches in the press, and nagged to buy upgrades; eventually they may become victims. The intruder knows what worked and didn’t in the past, and may have a zero-day up their sleeve. The developer knows of historical attacks and known weaknesses, as well as how much money they make from the system. They might bundle fixes with free upgrades, force upgrades (as Windows 10 does) or charge for older versions; what are the optimal business models? He showed a system dynamics model of the vulnerability lifecycle for discussion. We touched on topics from the general methodological issues with system dynamics to how security fixes will be funded for car software over twenty or thirty years. Perhaps there will be a premium for vintage equipment that cannot be hacked?
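
    A toy stock-and-flow sketch of the sort of vulnerability-lifecycle model he showed, in Python, with rates invented purely for illustration:

    # Stocks: unknown, known-but-unpatched and patched vulnerabilities.
    DISCOVERY_RATE  = 0.05   # fraction of unknown vulns found per month
    PATCH_RATE      = 0.30   # fraction of known vulns fixed per month
    INTRO_PER_MONTH = 2.0    # new vulns added by ongoing development

    unknown, known, patched = 100.0, 0.0, 0.0
    for month in range(1, 25):
        discovered = DISCOVERY_RATE * unknown
        fixed      = PATCH_RATE * known
        unknown   += INTRO_PER_MONTH - discovered
        known     += discovered - fixed
        patched   += fixed
        if month % 6 == 0:
            print(f"month {month:2d}: unknown={unknown:6.1f} "
                  f"known={known:5.1f} patched={patched:6.1f}")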

  3. Tuesday’s sessions were kicked off by Partha Das Choudhury talking about self-attestation of things. His work was inspired by popular fraud countermeasures; people in India who sent in a copy of their ID to apply for a phone would find that someone at the phone shop had used it to apply for a loan, so they would endorse the photo “for Vodafone SIM application”. He proposes that a Thing should have a Last Will and Testament which the purchaser sets, via her phone and the cloud, on first use; it is then enforced by a “Caveat instance” which handles policy setup and change. Day-to-day enforcement is via the gateway service which acts as the user’s proxy. Discussion turned on the value and the mechanisms for tagging data sent from a Thing to the cloud, trust in subsequent uses, and the scalability of control. One possible deployment strategy is preventing identity theft; another might be supporting mandatory access control in a regulated environment. It amounts to a DRM system, whose security problems and effects on innovation we already know; so perhaps the deployment route is via an app store operator.
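
    A very rough sketch of the gateway-as-proxy idea in Python; the names, policy format and “will” contents here are hypothetical, not the paper’s actual design:

    # A gateway enforcing a Thing's owner-set "will": a purpose limitation
    # on where its data may go. All identifiers below are invented.
    THING_WILL = {
        "thermostat-42": {"allowed": {"heating-service.example"},
                          "purpose": "heating control only"},
    }

    def gateway_forward(device_id: str, recipient: str, payload: bytes) -> bool:
        """Forward data only if the device's declared will permits it."""
        will = THING_WILL.get(device_id)
        if will is None or recipient not in will["allowed"]:
            print(f"blocked: {device_id} -> {recipient}")
            return False
        print(f"forwarded: {device_id} -> {recipient} ({will['purpose']})")
        return True

    gateway_forward("thermostat-42", "heating-service.example", b"21.5C")
    gateway_forward("thermostat-42", "ad-broker.example", b"21.5C")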

    Mark Ryan also had a real-world protocol for inspiration. As a teenager, he reassured his parents by leaving a sealed envelope with his plans for the evening, which he could retrieve unopened and destroy if he came back on time. He has been thinking of protocols that allow employers to decrypt employees’ email, while letting the employees see how much of their email has been read. Such protocols might also be used for verifiable oversight of government surveillance. He seeks a minimal piece of hardware, and relies on an append-only Merkle-tree log, as used in certificate transparency. The cop enters his decryption request into the log and sends the hashes as proof to the decryption device. Attacks based on forking the hash tree are foiled by having the device sign a beacon periodically; financial data might be used as a source of public randomness. Variants allow users to see how many ciphertexts were decrypted, or which ones. This is a simple example of how tamper-evident devices can extend the scope of crypto protocols; there remains a question of whether the log is needed at all in a world of SGX, as the device could interact with it directly.
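
    A minimal sketch of the append-only log such a scheme leans on, modelled here as a simple hash chain (certificate transparency uses a Merkle tree, which additionally gives compact inclusion and consistency proofs):

    import hashlib

    log_entries = []
    log_head = b"\x00" * 32  # running hash over everything appended so far

    def append_request(request: bytes) -> bytes:
        global log_head
        log_entries.append(request)
        log_head = hashlib.sha256(log_head + request).digest()
        return log_head  # the value the decryption device checks and signs

    append_request(b"warrant 17: decrypt mailbox of employee A")
    head = append_request(b"warrant 18: decrypt mailbox of employee B")
    print(head.hex(), len(log_entries), "requests logged")
    # The tamper-evident device decrypts only when shown a head that extends
    # the last one it signed; periodically signing a public beacon value is
    # what stops the log operator showing different users different forks.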

  4. Milan Broz is working on full disk encryption. The requirement that sectors encrypt to sectors means that a simple implementation can protect confidentiality but not integrity. New technologies include DIF/DIX extensions to sector size and dm-verity for Android (a Merkle tree with a root hash signed in firmware), but cheap hardware avoids such costs. TPMs allow integrity checking at higher levels but again are not universal. Milan has been developing software to do the work instead, storing the authentication tag in per-sector metadata. He is not entirely happy with any of the well-known modes of operation. His challenge is this: identify or devise an optimal mode of operation for performance on Linux implementations that call the kernel crypto API, and understand the trade-offs. One objection was that rewriting SSDs with salted encryption might wear them out more quickly, so the interaction with garbage collection may bear close study.
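
    A minimal sketch of keeping an authentication tag per sector, using AES-GCM from the Python cryptography package purely as an illustration; dm-crypt/dm-integrity make their own choices of mode, nonce handling and tag placement:

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    SECTOR_SIZE = 4096
    aead = AESGCM(AESGCM.generate_key(bit_length=256))

    def encrypt_sector(sector_no: int, plaintext: bytes):
        """The 12-byte nonce and 16-byte tag must live in per-sector
        metadata -- exactly the space classic sector-to-sector FDE lacks."""
        nonce = os.urandom(12)
        ct = aead.encrypt(nonce, plaintext, sector_no.to_bytes(8, "little"))
        return nonce, ct[:-16], ct[-16:]   # nonce, ciphertext, auth tag

    def decrypt_sector(sector_no: int, nonce, ciphertext, tag) -> bytes:
        # Raises InvalidTag if the sector was modified or moved elsewhere.
        return aead.decrypt(nonce, ciphertext + tag,
                            sector_no.to_bytes(8, "little"))

    n, ct, tag = encrypt_sector(7, b"\x00" * SECTOR_SIZE)
    assert decrypt_sector(7, n, ct, tag) == b"\x00" * SECTOR_SIZE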

    The last speaker before lunch, Thanh Bui, started from the observation that man-in-the-middle attacks end up with the two attacked parties having different views of the key in use. So why not use a distributed ledger to record hashes of all keys? This might be a totally distributed ledger like a Bitcoin-style blockchain, or an untrusted third party with independent auditors, as in certificate transparency. He builds up a family of protocols of increasing complexity. Discussion ranged over the latency and throughput of existing blockchains and the difference from existing key confirmation or device pairing protocols; the main problem is that this approach can’t deal with a one-sided impersonation attack.
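
    A toy model of the detection step, assuming each endpoint publishes a hash of the key it derived to some append-only ledger (modelled here as a plain dictionary; the hard parts, such as who may publish what and with what latency, are omitted):

    import hashlib

    ledger = {}  # (session id, party) -> hash of the key that party derived

    def publish(session_id: str, party: str, key: bytes) -> None:
        ledger[(session_id, party)] = hashlib.sha256(key).hexdigest()

    def mitm_suspected(session_id: str) -> bool:
        return ledger[(session_id, "alice")] != ledger[(session_id, "bob")]

    publish("s1", "alice", b"shared-key-123")        # honest run
    publish("s1", "bob",   b"shared-key-123")
    publish("s2", "alice", b"key-with-mallory-A")    # MITM run
    publish("s2", "bob",   b"key-with-mallory-B")
    print(mitm_suspected("s1"), mitm_suspected("s2"))  # False True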

  5. In my talk after lunch, I suggested an “innovation stack” for discussing protocol change, inspired by John Groenewegen’s multilayer analysis of power industry innovation. We posit four layers: culture at the top, which changes over centuries; then ecosystems such as Windows, which can last decades; then individual firms which take years to decades to build; and at the lowest level individuals, whose actions may be mediated by markets, manners or habits. Entrepreneurs try to build up firms, then to build their firms into dominant positions in ecosystems, and perhaps even aim for culture change and world domination. This framework enables systematic discussion of observed protocol evolution over the last 25 years. There are some cases of ecosystem evolution, such as ATM networks and EMV; some tussles for control, such as SET v TLS and the emergence of mobile phone payments; and bugfixes which defend the ecosystem but add cruft and decrease adaptability. So how can a company be built into an ecosystem? The key is to provide a platform for innovation; to facilitate innovation by others. I suggested that one way forward, given the theme of the workshop, is to look for applications where we can support innovation in dispute resolution, perhaps by participants themselves. These are less likely to be found in standard topics such as key escrow or cloud-versus-privacy tensions, which appear to be technically interesting but have little real play; they are more likely to be found in applications where incomplete contracts, disputes over quality of goods and tussles over evolving large contracts lead to many of the real-world disputes. We might perhaps look at the rating systems of eBay and Amazon, or even the Silk Road escrow system, as pioneers.

    The second speaker was Nam Ngo, talking about the security economics of vulnerabilities in decentralised organisations. If such an organisation were to be run entirely by software, for example on Ethereum or some similar platform, then the code is the company. The harsh lesson is the hack of TheDAO, which cost the Ethereum ecosystem $50m; a simple time-of-check-to-time-of-use vulnerability found instant monetisation, unlike traditional hacks, which needed multiple steps and delays to turn a vulnerability into money. The introduction of option and futures markets brings the possibility of rapid monetisation in any case, and anonymity failure can also lead to real losses. Discussion revolved around how one might conceivably automate means of dispute resolution following exploitation of a vulnerability; there are precedents in multiplayer games and even in the government response to loopholes being discovered in tax codes.
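
    A toy rendering of the check-then-act flaw behind that class of bug, in Python rather than Solidity; the TheDAO exploit took the form of re-entrancy in an EVM contract, and this sketch only shows the ordering problem, not the EVM mechanics:

    balances = {"attacker": 10}
    vault_funds = 100
    reentry_depth = 0

    def withdraw(account: str, amount: int, send):
        global vault_funds
        if balances[account] >= amount:      # check
            send(account, amount)            # external call: attacker code runs here
            balances[account] -= amount      # act -- too late
            vault_funds -= amount

    def malicious_send(account, amount):
        global reentry_depth
        if reentry_depth < 5:                # re-enter before the balance update
            reentry_depth += 1
            withdraw(account, amount, malicious_send)

    withdraw("attacker", 10, malicious_send)
    print(balances["attacker"], vault_funds)  # -50 40: far more drained than deposited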

  6. After coffee, Hugo Jonker woke us up by discussing attacks on the academic publication process. Real-life attacks have included faking datasets, manipulating peer reviews via citation rings and citation stacking, and even wholly fake papers. (Indeed, audience members joked, is string theory a wholly fake discipline?) From a security viewpoint, when everyone else is gaming the h-index, honest effort has to increase quadratically to keep up; Hugo is also interested in venue metrics like impact factor. Hugo distinguishes hacking (which exploits implementation detail) from gaming (which exploits policy). Just as you have an attack surface, you have a gaming surface of attacks on a metric like the h-index or the i10-index. He discussed possible intrusion detection strategies; anomaly detection can involve detecting outliers and comparing them with peers. He wrote some scripts and found extremely similar papers, and one case of a 2016 paper cited 50 times in 2015 (which brought laughter, as a previous SPW had proceedings two years late). He’s also looking at peer analysis, rapid increases in publications or h-index, and other indications of cheating.
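
    For concreteness, a minimal sketch of the metric being gamed: the h-index is the largest h such that the author has h papers with at least h citations each, so a few well-placed extra citations can move it cheaply.

    def h_index(citations: list[int]) -> int:
        """Largest h such that h papers have at least h citations each."""
        cited = sorted(citations, reverse=True)
        h = 0
        while h < len(cited) and cited[h] >= h + 1:
            h += 1
        return h

    print(h_index([50, 10, 8, 5, 4, 3, 0]))  # 4
    print(h_index([50, 10, 8, 5, 5, 3, 0]))  # 5: one extra citation on one
    # borderline paper bumps the index -- the sort of outlier pattern that
    # anomaly detection against peers can pick up.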

    Tuesday’s last speaker was Jonathan Weekes, who’s working on software defined networks and investigating whether you can control your neighbour’s bandwidth. Low-cost switches might be able to cope with only 8k rules in their flow table, and cache misses cause rule churn. OpenFlow switches are FIFO by default, removing the oldest rule to make way for a new one. Yet this is very vulnerable to DoS, and Jonathan discussed different types. For example, in a “spray attack” you send packets to 8k destinations, flushing the flow table of your local switch and slowing down your neighbour’s network. He evaluated this on a Pica8 3297 switch with two legitimate and two hostile rack servers; a gentle flushing attack increased the ping time from 0.6ms to over 5ms, and a full-bore attack to over 100ms. Yet there are no logging systems for rules with which such attacks could be detected. Are there better cache algorithms, and good ways of monitoring? Discussion centred on the difficulty of detecting and understanding DoS attacks generally.
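
    A minimal simulation of why FIFO rule eviction makes the “spray attack” work, using a toy flow table far smaller than 8k entries:

    from collections import OrderedDict

    TABLE_SIZE = 8           # real switches: on the order of 8k rules
    flow_table = OrderedDict()

    def handle_packet(flow_id: str) -> str:
        if flow_id in flow_table:
            return "fast path"                 # rule already installed
        if len(flow_table) >= TABLE_SIZE:
            flow_table.popitem(last=False)     # FIFO: evict the oldest rule
        flow_table[flow_id] = "forward"        # controller round trip
        return "slow path (controller consulted)"

    handle_packet("victim->server")            # the neighbour's flow
    for i in range(TABLE_SIZE):                # attacker sprays many destinations
        handle_packet(f"attacker->dst{i}")
    # The victim's rule has been flushed, so their next packet is slow again.
    print(handle_packet("victim->server"))     # slow path (controller consulted)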

  7. Wednesday’s first speaker, Markus Voelp, is working on a new security model for SGX. The motivating problem is how to survive future progress in cryptanalysis, illustrated by the SHA-1 break. There was discussion of further motivating examples for encryption. Proactive secret sharing has been around for over 20 years, but merely refreshing keys will not be enough, especially if ciphers, protocols or hardware are compromised later. His idea is to model leaks from SGX, used to protect highly sensitive information in a protected corporate environment, but where new critical apps run alongside legacy and potentially vulnerable ones. His model is that we can protect ciphertexts; some of them may be exfiltrated, but the bandwidth for this is limited. His proposal is a ‘permanently reencrypting enclave’ which encrypts data three times using different algorithms; data are refreshed as needed by decryption and re-encryption. In discussion, I suggested that a robust framework for using multiple ciphers could have other uses, such as resolving disputes over whether to use America’s AES or China’s SM4 by using both, and by providing extra assurance against power and timing attacks; however one would have to develop and test appropriate modes of operation. Tuomas Aura pointed out that such a scheme would need to deal with chosen plaintext and chosen ciphertext attacks to protect databases.
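
    A minimal sketch of cascading two different AEAD algorithms, as an illustration of the multiple-cipher idea; the enclave design, key management and re-encryption scheduling are of course the hard part, and the proposal uses three layers rather than the two shown here:

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM, ChaCha20Poly1305

    aes_key, chacha_key = AESGCM.generate_key(256), ChaCha20Poly1305.generate_key()

    def wrap(plaintext: bytes):
        """Encrypt under two independent algorithms, so the data stays
        protected even if one of them is broken later."""
        n1, n2 = os.urandom(12), os.urandom(12)
        inner = AESGCM(aes_key).encrypt(n1, plaintext, None)
        return n1, n2, ChaCha20Poly1305(chacha_key).encrypt(n2, inner, None)

    def unwrap(n1: bytes, n2: bytes, outer: bytes) -> bytes:
        inner = ChaCha20Poly1305(chacha_key).decrypt(n2, outer, None)
        return AESGCM(aes_key).decrypt(n1, inner, None)

    n1, n2, ct = wrap(b"board minutes")
    assert unwrap(n1, n2, ct) == b"board minutes"
    # Re-encryption would periodically unwrap and re-wrap under fresh keys
    # inside the enclave, so that slowly exfiltrated ciphertexts go stale.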

    Marios Choudary is interested in whether we can get security from disjoint paths, in a world where no CA is immune from compromise, but where we have multiple public channels such as wifi and GSM. If the adversary can do perfect and coordinated MITM attacks on all channels, then the problem’s the same as a single channel with an active attacker; but in real life the channels will be attacked in different ways, with time lags between attack behaviours. Path diversity can be achieved in various ways, such as with VPNs and cloud service providers. His proposal is that both Alice and Bob run proxies at two distant locations, say AE and BE in Europe and AU and BU in the USA; then keys are set up pairwise using authenticated encryption between AE and BE, and between AU and BU; by comparing keys, tags and ciphertexts across the different channels they can give Charlie a really hard time. In discussion it was pointed out that an active attacker could introduce delay; but then at least the attacker has to emerge from the shadows, and often situational awareness is the big soft spot. And when the attacker’s agents can’t communicate quickly enough, you can frustrate them.
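
    A toy version of the cross-channel check, assuming each side simply compares the key fingerprint it obtained over two nominally independent paths:

    import hashlib

    def fingerprint(key: bytes) -> str:
        return hashlib.sha256(key).hexdigest()[:16]

    def accept_key(via_europe: bytes, via_usa: bytes) -> bool:
        # Accept only if both paths agree; an attacker now has to subvert
        # both channels consistently, and within the same time window.
        return fingerprint(via_europe) == fingerprint(via_usa)

    honest_key = b"bob-public-key"
    print(accept_key(honest_key, honest_key))             # True
    print(accept_key(honest_key, b"mallory-public-key"))  # False: one path attacked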

  8. Dylan Clarke kicked off the last session by explaining why end-to-end security is not enough. He’s been comparing the Estonian and Norwegian voting systems, Snapchat and Skype, which teach that one needs to consider endpoint protection, logging and economic incentives as essential complements. He’s particularly interested in the verifiability of end-to-end security. How can users act on information that they receive? How can this be applied to elections? Discussion explored the fact that it’s not just about receipt-freeness, but also about properties such as accountability, recoverability and resilience against fraudulent claims of fraud. Comprehensibility also matters, and that’s a lot easier with wooden ballot boxes than with e-voting.

    The last speaker at this year’s protocols workshop was Peter Ryan, whose topic was whether he could adapt human-interactive security protocol (HISP) techniques to make PAKEs auditable. HISPs use an unspoofable, out-of-band channel, and have to deal with ‘under-the-radar’ attacks where the adversary tries to be the first to learn whether the codes agree; he aborts if they don’t, leaving the victim to assume a network failure. Bill Roscoe had the idea of using “time-locked” crypto or puzzles as a delay wrapper to deny the first-mover any timing advantage. Password-authenticated key exchange (PAKE) protocols fortify a key exchange with a shared password that must be protected against offline guessing. Can the idea go across? While HISPs are stateless, PAKEs are stateful and a simple delay won’t work; instead Peter proposes using a stochastic fairness protocol that involves blinding and shuffling; basically a novel application of the exponentiation mixes used in some voting schemes. That’s known to be secure against passive attackers, and Peter has adapted it to block active attacks too. He is currently thinking about other applications of his new construct.
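
    A minimal sketch of a time-lock delay in the Rivest-Shamir-Wagner style, with toy parameters; this is only the generic repeated-squaring construction, not the specific delay wrapper or stochastic fairness protocol discussed in the talk:

    # The setter, who knows p and q, computes 2^(2^t) mod n cheaply; anyone
    # else must perform t sequential squarings, a delay that cannot be
    # parallelised away.
    p, q = 104729, 1299709           # small known primes; real use needs a big RSA modulus
    n, t = p * q, 100_000            # t sets the enforced delay
    secret = 424242                  # value to be released only after the delay

    phi = (p - 1) * (q - 1)
    mask = pow(2, pow(2, t, phi), n) # setter's shortcut via phi(n)
    puzzle = (secret + mask) % n     # published as (n, t, puzzle)

    x = 2                            # solver: no shortcut, square t times
    for _ in range(t):
        x = (x * x) % n
    assert (puzzle - x) % n == secret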
