Security Protocols 2015

I’m at the 23rd Security Protocols Workshop, whose theme this year is information security in fiction and in fact. Engineering is often inspired by fiction, and vice versa; what might we learn from this?

I will try to liveblog the talks in followups to this post.


  1. Simon Foley and Olgierd Pieczul started the workshop with a joint talk on “The dark side of the code”. They study how programmer error can introduce surprises into protocols. They follow Dijkstra’s view that security is the absence of surprise, even under attack. The novel surprises bubble up from the parts of the system programmers don’t have to understand; the literary analogy is the novel “Flatland”, where two-dimensional beings are surprised by things that arrive from the third dimension. Most programmers live in flatland in the sense that they think only of the APIs immediately visible to them. As an example, they considered a bookmarking application. This used to be complex but can now be four lines of code in a web application framework. This code hides the fact that it’s executing SQL statements (so there might be an injection) and connecting to a user-supplied location (so there might be bad stuff there). Correlating application behaviour with threats is a human task, but frameworks hide it; the programmer would never think of a TOCTTOU or DNS rebinding attack. A prudent developer would at least put in a note saying “Application may connect to arbitrary addresses. Administrator should provide adequate security controls to limit application network access”, but most organisations would ignore that. In short, the security gap is getting larger as tool chains get longer and more complex; conventional approaches like formal methods don’t help, as they also work at the high level of abstraction and ignore the low level where the trouble is. Their suggestion is automated anomaly detection: mine logs to learn expected application behaviour in the absence of attack, plug these norms into the JVM security manager to capture all permission checks, and verify at runtime that actual behaviour matches the norms. In discussion, it was pointed out that programmers would learn to incorporate test cases such as redirects to stop their code failing when used in surprising ways; in general, there would be issues around when the intrusion detection was trainable and when it wasn’t.
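
Here’s a minimal sketch of the anomaly-detection idea, in Python rather than the JVM security manager the authors actually hook: learn the set of permission checks seen during attack-free runs, then flag anything outside that baseline at runtime. The event names and helper functions are my own illustration, not theirs.

```python
# Minimal sketch of behaviour-norm mining and runtime checking.
# The real system hooks the JVM security manager; here permission
# checks are just modelled as (operation, target) pairs.

def learn_baseline(training_logs):
    """Collect every permission check observed in attack-free runs."""
    baseline = set()
    for run in training_logs:
        for op, target in run:
            baseline.add((op, target))
    return baseline

def monitor(event, baseline):
    """Raise an alarm when behaviour deviates from the learned norm."""
    if event not in baseline:
        raise RuntimeError(f"anomalous permission check: {event}")

# Example: the bookmarking app normally connects only to its own DB host.
training_logs = [
    [("connect", "db.example.org"), ("read", "/var/app/templates")],
    [("connect", "db.example.org"), ("read", "/var/app/templates")],
]
norms = learn_baseline(training_logs)

monitor(("connect", "db.example.org"), norms)      # expected behaviour, passes
try:
    monitor(("connect", "169.254.169.254"), norms) # surprising destination
except RuntimeError as alarm:
    print(alarm)
```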

    Keith Irwin is interested in “Redesigning security protocols to compel security checks”. Many protocols assume that checks are done on names, signatures, timestamps and so on, but programmers often skip them; for years, Safari failed to check domain names properly on delegated SSL/TLS certificates, and lots of free Android apps skip at least part of certificate verification. Code without checks is always simpler! Keith wants to redesign protocols so they break in an obvious way if the checks are skipped, or at least ensure that code with checks is simpler and easier. The basic principle is to use a function G to compute an important key, using the output of the important checks. So to check a message m isn’t zero, use C/m as an input to G, and so on. There was discussion about how many possible checks should be forced. His original title for the talk was “lazy programmer protection”.
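
Keith’s construction can be sketched roughly as follows (my own illustration, not his actual protocol): the session key is derived through a function whose inputs are the outputs of the mandatory checks, so a lazy programmer who skips a check ends up with a garbage key and the protocol fails visibly. The constant C, the 30-second window and the field names are arbitrary.

```python
import hashlib
import time

def derive_key(shared_secret, check_outputs):
    """Key derivation G that folds in the outputs of the mandatory checks.
    Skip or fake a check and the derived key simply won't match the peer's."""
    h = hashlib.sha256(shared_secret)
    for value in check_outputs:
        h.update(repr(value).encode())
    return h.digest()

def freshness_window(now=None, window=30):
    """Each side feeds its own clock's window into the KDF, so a stale or
    replayed message derives a mismatched key (boundary cases ignored here)."""
    now = time.time() if now is None else now
    return int(now // window)

def check_nonzero(m):
    """C/m blows up when m == 0, so the zero check cannot be elided."""
    C = 2**61 - 1                  # arbitrary public constant
    return C // m

shared = b"negotiated-secret"      # from an earlier key agreement (illustrative)
m = 42                             # a protocol field that must be non-zero
key = derive_key(shared, [freshness_window(), check_nonzero(m)])
```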

    1. After lunch, Mohammad Torabi Dashti noted that a malicious postal service can easily do unidirectional (inbound) active attacks on postal chess, but they eventually get detected as they lead to loss of sync and illegal moves. He describes these as “derailing attacks” when used to corrupt state and deny service. Are there real examples? Indeed so: an RFID key update protocol used for music festival passes provides one. Can the idea be generalised? Mohammad has an abstract model of derailing attacks in which the principals have safe, transitional and unsafe states. Possible transition to an unsafe state is a reachability problem, and there is similarly a nested reachability problem of whether an attacker can drive the protocol into a state from which no safe state can be reached.
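
The nested reachability question is concrete enough to code up; here is a toy model (states and transitions invented by me) that asks whether the attacker can reach a state from which no safe state is reachable.

```python
from collections import deque

def reachable(start, edges):
    """Breadth-first set of states reachable from `start`."""
    seen, queue = {start}, deque([start])
    while queue:
        s = queue.popleft()
        for t in edges.get(s, []):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return seen

# Toy protocol model: safe, transitional and derailed states, with an
# attacker-controlled transition added to the honest ones.
edges = {
    "safe":         ["transitional"],
    "transitional": ["safe", "derailed"],   # attacker can push to 'derailed'
    "derailed":     ["derailed"],           # no way back to a safe state
}
safe_states = {"safe"}

derailing_possible = any(
    not (reachable(s, edges) & safe_states)
    for s in reachable("safe", edges)
)
print("derailing attack exists:", derailing_possible)   # True in this toy model
```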

      Virgil Gligor was next, on establishing software-only roots of trust for commodity systems. He assumes that systems have pervasive malware, but that an authentic trustworthy OS image can be obtained. Is it possible to establish that an authentic image is loaded in a malware-free state, or else detect the existence of malware? In short, malware must either come out or go away, without either secrets (as with a TPM or software obfuscation) or remote verification (as in SPW 2013). Previous proposals lacked an atomic link to authentic image booting (leading to a possible TOCTTOU attack between attestation and boot), did not scale to realistic system sizes, and lacked concrete assurance. In his SWORT protocol, the verifier is trusted while the prover is not. The verifier sends a random challenge to the prover, which computes a checksum over memory by a recursive random walk, and the result is carefully timed. The checksum exploits the prover’s memory hierarchy to ensure that bytes which should be in SRAM, or in various levels of cache, must actually be there, rather than being swapped out or otherwise redirected. This establishes an untampered execution environment within which a further hash of the boot image can be computed and checked; if it checks out, an updated authentic image can be passed over and the process repeated. One issue is a proxy attack, where the prover offloads the computation to a faster machine; another is devices with multiple heterogeneous processors. The watchdog-timer reset attack enables TOCTTOU attacks on some similar systems, as does a future-posted DMA transfer; and careful work is needed to detect malicious code in the I-cache, by invalidating and synchronising it. Issues raised in discussion ranged from processor speed scaling through embedded devices such as memory wear levellers to peripherals with which devices are synchronised.
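
The heart of such software-only attestation is a timed challenge-response. The sketch below is my own drastic simplification; SWORT’s real checksum design, its use of the memory hierarchy and its timing analysis are far more involved, and the 50 ms bound here is purely illustrative.

```python
import hashlib
import os
import time

def checksum_walk(memory, nonce, steps=10000):
    """Pseudo-random walk over memory, keyed by the verifier's nonce.
    Redirecting or emulating the accesses slows the computation down."""
    h = hashlib.sha256(nonce)
    pos = int.from_bytes(h.digest()[:4], "big") % len(memory)
    for _ in range(steps):
        h.update(memory[pos:pos + 1])
        pos = int.from_bytes(h.digest()[:4], "big") % len(memory)
    return h.digest()

def verify(expected_image, prover_respond, time_bound=0.05):
    """Verifier: fresh random challenge, recompute the expected checksum
    over the authentic image, and insist on both the value and the time."""
    nonce = os.urandom(16)
    expected = checksum_walk(expected_image, nonce)
    start = time.perf_counter()
    answer = prover_respond(nonce)
    elapsed = time.perf_counter() - start
    return answer == expected and elapsed <= time_bound

# An honest prover computes over its real memory within the time bound.
memory = bytearray(os.urandom(1 << 16))
print(verify(memory, lambda nonce: checksum_walk(memory, nonce)))
```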

      1. Virgil’s talk was (at least to me) fascinating. I was familiar with the basics of software attestation, but not with the subtle ways in which it can be circumvented.

  2. Micah Sherr and Tavish Vaidya work on location privacy controls for consumer drones. Amazon’s selling 10,000 drones a month, and there are ever more incidents where consumer drones enter prohibited airspace or overfly places such as crime scenes; while some are malicious, most are due to ignorance. For example, flying drones in Washington DC is illegal, but you can find YouTube videos of random consumers doing it. Is it possible to prevent law-abiding but ignorant operators of consumer drones getting into trouble, while protecting the privacy of people on the ground? At present there’s work on a hardcoded list of controlled and prohibited airspace such as airports and government facilities, but this won’t scale for crime scenes or private houses; the day after a drone crashed on the White House lawn there was a mandatory firmware upgrade, which again doesn’t scale. Simply uploading your address to a central server is poor privacy, as is broadcasting all drone locations and intentions. Their solution is a voluntary protocol for privacy protection, and this turns out to be harder than it looks; it must limit not just locations but capabilities (camera, IR, microphone …) and provide some bidirectional privacy. They have a policy language for stakeholders to specify rules; a rough illustration follows below. They found that 802.11 (which most drones support) works OK; so parliament can transmit a “no overfly” signal while in session, and you can ask drones not to overfly your house. Assuming that vendors will obey the law, what should the legal protocol do? They assume a PKI that will issue no-cost certs to people who want a no-fly zone over their house. Unsolved problems include conflict resolution, e.g. if apartment owners in the same block have conflicting wishes; how to prevent overflights of sites whose existence is secret; whether you could build a “drone trap” using don’t-overfly-me beacons; and whether countries like Germany with strong privacy laws might simply order drone makers to observe the existing householder opt-outs that are blurred on Google Maps.
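
For instance, a no-fly beacon broadcast over 802.11 might carry a payload like the one below, which a compliant drone evaluates before entering the area. The field names and format are my invention, not the authors’ policy language.

```python
import json
import math

# Hypothetical beacon payload a householder's access point might broadcast.
beacon = json.loads("""{
    "policy": "no-overfly",
    "centre": {"lat": 51.5010, "lon": -0.1246},
    "radius_m": 150,
    "restricted_sensors": ["camera", "microphone", "infrared"],
    "active": true
}""")

def distance_m(lat1, lon1, lat2, lon2):
    """Approximate ground distance in metres (equirectangular, fine at this scale)."""
    dx = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    dy = math.radians(lat2 - lat1)
    return 6_371_000 * math.hypot(dx, dy)

def may_enter(drone_lat, drone_lon, active_sensors, beacon):
    """A compliant drone honours the beacon: stay out, or disable listed sensors."""
    if not beacon["active"]:
        return True
    inside = distance_m(drone_lat, drone_lon,
                        beacon["centre"]["lat"], beacon["centre"]["lon"]) < beacon["radius_m"]
    return not inside or not (set(active_sensors) & set(beacon["restricted_sensors"]))

print(may_enter(51.5011, -0.1247, ["camera"], beacon))   # inside with camera on -> False
```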

    Sjouke Mauw flew even higher with a humorous talk on location-private interstellar communication. Are we happy with the aliens we have, or do we want to reach out to more, at the risk they don’t come in peace? Perhaps we should not communicate unless we can assure location privacy; but what can we plausibly say about the adversary model? He argues that we should rationally only worry about slightly stronger adversaries, rather than about quasi-divine beings that can control time and space. In theory we could create a network of private communication probes that self-replicate and communicate back by further self-replication; but this would be so slow that some kind of random relay network would be preferable, and could be built using roughly the same mechanisms as ad-hoc wireless sensor networks. Any new artificial signals recognised by any relay would be flooded to N other relays. You can even talk about honeyplanets! (Is the earth one, created by one alien civilisation as bait for another?) At present, all this is rather literary, as we can’t yet send probes far enough. Hey, we can’t even read 15-year-old filesystems, let alone the answers to messages sent 20,000 years ago.

  3. Dan Thomas opened the second day with a talk on the lifetime of Android API vulnerabilities. He studied one serious vulnerability, in the bridge between Java and JavaScript, which exposes all the public methods of a Java object called from JavaScript. This allows malicious JavaScript to execute arbitrary code; so a user who runs Angry Birds in a cafe with compromised wifi can be infected instantly and then infect others. From API level 17 (in 2012) recompiled apps can be safe, but apps targeting older APIs were still vulnerable on newer platforms until Android 4.4.3, which fixed the bug properly by blocking calls to getClass. Over 80% of phones are still vulnerable. They scanned 102,174 apps and found that 59% of the vulnerable ones had not upgraded their target API version. On Device Analyzer (a sample of 23,000 mobile phones whose owners volunteer to run a monitoring app), an outdated device runs a vulnerable app 1.38 times a day, while an up-to-date device only exposes itself 0.6 times a day. In conclusion, two-sided fixes are hard for API vulnerabilities, even with one coordinating party (Google) with a strong influence on both sides. Fixing even a severe vulnerability takes 5.2 ± 1.2 years.
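
The two-sided nature of the fix can be captured in a few lines; this is my own simplified model of the conditions described in the talk (both the app’s target API level and the device must be at 17 or above, unless the platform blocks getClass as 4.4.3 does), not code from the study.

```python
def js_bridge_vulnerable(target_api, device_api, platform_blocks_getclass):
    """Returns True if the Java/JavaScript bridge is exploitable.
    Simplified model: the API-17 fix only bites when *both* the app targets
    API >= 17 and the device runs API >= 17; the 4.4.3 getClass block
    protects even old apps."""
    if platform_blocks_getclass:          # platform-side fix (Android 4.4.3+)
        return False
    if target_api >= 17 and device_api >= 17:
        return False                      # app-side fix, honoured by the device
    return True

# An un-upgraded app on a newer (pre-4.4.3) device is still exposed.
print(js_bridge_vulnerable(target_api=16, device_api=19, platform_blocks_getclass=False))  # True
print(js_bridge_vulnerable(target_api=17, device_api=19, platform_blocks_getclass=False))  # False
```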

    Radim Ostadal is interested in the challenges of fiction in virtual environments, and in playing with genetic algorithms in networks as a means of creating and mitigating service denial attacks. He was looking for distributed applications of genetic fuzzers. He distributed a population of candidate programs to network hosts, and ran variants of an HTTP GET flooding attack. This turned out to be hard, as results for real and virtual networks were very different. In discussion, it was remarked that simulations of virtual networks are critically dependent on the hypervisor.

  4. Mark Ryan and Jiangshan Yu are interested in the fact and fiction of device attacker models. Crypto researchers assume a secure private key, but this is fiction in the real world. The fact is malware, and the hope is that infected machines can be restored to a safe state, whether regularly by update or in response to detected attacks. What can we say about periodically compromised systems? Their idea is key usage detection: you should learn when your keys are used by others. Alice creates an ephemeral keypair, signs it with her long-term private key, and sends this cert to a log maintainer. The log has all her ephemeral certs stored in chronological order in a Merkle tree, which makes it fairly straightforward to detect whether an extra key was generated, either for you or (with a bit more work) for your counterparty. The log maintainer can be malicious, so long as he wishes to appear honest. If there’s a single log maintainer, you can prove that no third party used your key without your knowledge, by arranging a lexicographic tree of users, each of whom has a chronological tree of keys; but then no one has any privacy, and the log becomes highly critical. In discussion, it was pointed out that users would be incentivised to maintain their own logs.
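
The detection step rests on an append-only Merkle tree over the chronologically ordered ephemeral certs; here is a toy version of such a log and of how an extra, unauthorised cert changes the root (hash layout and helpers are mine, for illustration only).

```python
import hashlib

def h(*parts):
    return hashlib.sha256(b"|".join(parts)).digest()

def merkle_root(leaves):
    """Root of a Merkle tree over the chronologically ordered leaves."""
    level = [h(leaf) for leaf in leaves] or [h(b"")]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])          # duplicate the last node on odd levels
        level = [h(level[i], level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Alice's log of ephemeral certs, in the order the log maintainer received them.
log = [b"ephemeral-cert-1", b"ephemeral-cert-2", b"ephemeral-cert-3"]
root_seen_by_alice = merkle_root(log)

# If someone else had a cert issued under Alice's long-term key, the root the
# log maintainer publishes can no longer match what Alice expects.
tampered = log + [b"attacker-ephemeral-cert"]
print(merkle_root(tampered) == root_seen_by_alice)   # False: the extra key use is detectable
```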

    Sandy Clark’s talk was on shifting the costs of web tracking. Software engineering can teach us how to build payment networks or fly-by-wire aircraft, but cannot cope with the growing complexity of interconnected things, because of the number of humans in the system and the fact that they’re in competition or even in conflict. We need to use game theory instead. Web user tracking is now well into this region; the number of marketing firms and the competition between them are growing. So is the identifiability of users; fonts gave 15 bits of identity eight years ago but 22 now. On mobile, Android is getting worse while iPhone is still pretty good. Cookies are unblockable because of syncing, evercookies, and so on. If we can’t block them, can we fool them instead? She has proof-of-concept code for deceiving canvas fingerprinting. To minimise linkability, she favours “composite privacy”, where you appear to be an outlier every time, rather than herd privacy, where everyone looks the same. This will slowly poison the marketers’ datasets and make it cost a lot more to fix up the tracking. There’s a question here about whether you’re trying to give privacy to everybody (which would break the Internet’s business model) or to the 1% of people who care (in which case simpler strategies might work). In any case, privacy games should be seen in economic terms; can we do the least possible amount of deception to cause the opponents (whether advertisers or governments) the most inconvenience? Can we use evolutionary game theory? Can we stay ahead by changing the game more quickly? Above all, privacy is not a game that can be won, or one that will end; instead, play not to lose.
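
The “composite privacy” idea (look like a different outlier on every visit rather than blending into a herd) can be sketched as follows; the attributes and values are made up by me and bear no relation to her proof-of-concept code.

```python
import random

# Hypothetical fingerprint surface a tracker might read.
FONTS = ["Arial", "Wingdings", "Comic Sans", "Futura", "Zapfino", "Courier"]
TIMEZONES = ["UTC-11", "UTC-3", "UTC+2", "UTC+5:45", "UTC+9", "UTC+13"]
CANVAS_NOISE_BITS = 16

def composite_fingerprint(rng=random):
    """Return a fresh, deliberately unusual fingerprint for each session,
    so successive visits by the same user don't link to each other."""
    return {
        "fonts": sorted(rng.sample(FONTS, k=rng.randint(2, 5))),
        "timezone": rng.choice(TIMEZONES),
        "canvas_hash": rng.getrandbits(CANVAS_NOISE_BITS),
        "screen": (rng.choice([1366, 1440, 1920, 2560]), rng.choice([768, 900, 1080])),
    }

print(composite_fingerprint())
print(composite_fingerprint())   # a different outlier each time
```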

  5. The first talk after lunch was by Frank Stajano, on “Pico without public keys”. In 2013, Adobe lost 153m passwords and all their customers were forced to reset. Such forced resets arising from cross-user and cross-domain vulnerabilities can be prevented using public-key authentication protocols, such as those used by Pico, which has a different keypair for each verifier. But can you get the same result with shared-key mechanisms? This may be desirable anyway to save energy. Well, there is already a strong random password for every site, remembered by the token; it’s used to bootstrap the Pico system and also in Pico Lens, which sends passwords to non-Pico-aware websites. Frank has developed a password-manager-friendly spec, available at http://pmfriendly.org/, which enables websites to easily support not just Pico Lens but a variety of other software agents that manage passwords. This is designed to help deal with all the fiddly details of managing passwords across multiple websites and multiple machines. The small detail to be fixed is establishing a secure channel to a website; do we replace TLS with Kerberos? The old discussion around the costs of revocation continues.
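
In the shared-key variant, the token essentially holds an independent strong random secret per verifier; a minimal sketch of that idea follows (my own toy model, not Pico’s actual storage or bootstrap protocol).

```python
import secrets

class TokenStore:
    """Toy model of a token that keeps one strong random secret per verifier,
    so a breach at one site never forces resets anywhere else."""
    def __init__(self):
        self._secrets = {}

    def secret_for(self, verifier):
        if verifier not in self._secrets:
            self._secrets[verifier] = secrets.token_urlsafe(32)  # ~256 bits of randomness
        return self._secrets[verifier]

token = TokenStore()
pw_a = token.secret_for("https://siteA.example")
pw_b = token.secret_for("https://siteB.example")
assert pw_a != pw_b                                          # sites learn nothing about each other
assert pw_a == token.secret_for("https://siteA.example")     # stable per verifier
```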

    The next talk was by me and Khaled Baqer, entitled “Do you believe in Tinker Bell? The social externalities of trust.” At present, the security economics of certification authorities is dreadful, as it’s two-sided; we don’t choose the CAs we trust, but the ones that merchants selected (often because they were thought too big to fail). Is there a better way? In JM Barrie’s play, the fairy Tinker Bell almost dies because kids don’t believe in fairies any more, but is saved when the audience believes in her. Similarly, the strength of the Greek gods waxed or waned with the number of people who sacrificed to them. Can we redesign certification to be incentive-compatible, in the sense that the power of a CA comes from the number of users (rather than merchants) who trust it? As a proof of concept, we discuss a mechanism for motivating Tor users to cooperate in clubs to provide more relays and more bridges. Each club has a secretary who becomes a bridge authority, and when members reach out to victims of censorship, a token is generated to reward the members who provided the introduction and the bridge. They can bank the token to amass reputation, or spend it to get improved quality of service. The Sybil problem (of secret policemen pretending to be victims) is solved first by having multiple competing clubs with their own outreach mechanisms, and second by monitoring whether the service is actually defeating censorship. This mechanism builds on previous Tor incentive proposals such as BRAIDS and TEARS but relies primarily on social and economic mechanisms rather than on fancy cryptography.

  6. Luca Vigano’s theme was “Security is beautiful”. Security is often experienced as a tedious burden, while the biggest threat is often from the users themselves. The response to over-prescriptive technology leading to bad user experiences? The cartoon solution: change all your passwords to “incorrect”, so that whenever you get it wrong, the system reminds you that “Your password is incorrect”! In fact, “security is mortals’ chiefest enemy”, according to Macbeth Act 3, Scene 5 (though Shakespeare meant it in the sense of ‘overconfidence’). Having to type two or three passwords is usable, but not beautiful. Luca calls for security to be an intrinsic part of system design, integrated so as to make a system beautiful. Rather than preventing insecure interactions with technology by forcing specific paths on users, it should engage them – like Peppa Pig’s secret club that all the other kids wanted to join.

    Wednesday’s last speaker was Joan Feigenbaum, on the use of security and privacy technology as a plot device. From stories such as NCIS episode S10E21, which portrays hacking into Mossad databases, and How to Get Away with Murder S01E04, which showed a break-in at a phone company, movie viewers might think that an intelligent geeky guy can get in anywhere. At the other extreme of unreasonableness, it seems extraordinarily difficult for policemen or spies in a movie to trace a phone: the actors always seem to have to “keep him talking” (e.g. Law and Order S07E03). A more realistic story is the “pacemaker attack” in Homeland S02E10, where the attacker’s accomplice got into the victim’s office so as to interfere with the controller of his medical device. What about interfering with the victim’s car, as in the CarShark work? It might not appeal to the run-of-the-mill murderer, but what about a CIA black op? Or would you use a cyber attack to discredit him as a bad driver, or lock him in the car and turn up the heat so he arrives at a big speech covered in sweat? Plot suggestions are welcome…

  7. The last day’s talks were kicked off by Taha Ali, discussing bitcoin and the perils of an unregulated global P2P currency. Bitcoin’s mission is to stop central banks debasing the currency; to stop banks lending money in waves of credit bubbles; and to stop the system making us needlessly vulnerable to fraud. These create significant transaction costs that make micropayments hard. Dealing with them means removing reliance on central banks, using pseudonymous credentials and having an unregulated P2P network. These choices drive not only bitcoin’s technical design, but also its vulnerabilities, such as dark markets, hacking attacks and money laundering. Since the Silk Road was taken down, a dozen or more other dark markets have sprung up with double the number of listings. Many are centralised, but not all; OpenBazaar distributes the whole market on a torrent, while Grams lets you search multiple darknet markets. This is not all bad; it derisks crime, and according to Aldridge and Decary-Hetu may have saved over 1000 lives. But there are new types of villainy; bitcoin wallet stealers are now 20% of all malware, and the CryptoLocker ransomware got 250k victims, harvesting $100m in 30 days. There are interesting issues around money laundering. And finally, there are potential side-effects from the blockchain providing a communications channel that cannot be censored. For example, the traditional way of taking down botnets is to go after the command and control network; yet ZombieCoin provides a way of embedding C&C traffic in the blockchain. There was discussion of service denial attacks on a Silk-Road-style escrow service. Could the FBI run a service denial attack by pretending to be a large number of first-time buyers, and falsely claim that the drugs were never received, to block payment to the sellers? Well, a solution is for the escrow agent to ignore correlated complaints.
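
The uncensorable-channel point is easy to see: anyone can embed a small payload in a transaction’s OP_RETURN output, which every node then relays and stores. Below is a minimal sketch of building such an output script; the command string is hypothetical, and ZombieCoin’s actual encoding and key management are not shown.

```python
# Build a raw Bitcoin OP_RETURN output script carrying a small payload.
OP_RETURN = 0x6a

def op_return_script(payload: bytes) -> bytes:
    # A single length byte is a direct push for payloads up to 75 bytes;
    # anything bigger would need OP_PUSHDATA1, omitted here.
    assert len(payload) <= 75, "payload too large for a direct push"
    return bytes([OP_RETURN, len(payload)]) + payload

command = b"BOTNET:update-config:v7"      # hypothetical C&C instruction
print(op_return_script(command).hex())
# Once the carrying transaction is mined, the instruction sits in the
# blockchain and cannot be selectively censored or taken down.
```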

    Paul Wernick asked, “Will technology make information security impossible?” His starting point is two science fiction stories. The first is John Brunner’s “The Productions of Time” (1967), which hypothesised a way of recording people’s experiences and emotions directly from their brains and playing them back for paying “listeners” to experience vicariously. The second is Asimov’s “The Dead Past” (1956), which hypothesises a chronoscope that can look into the past, providing global pervasive surveillance. Each of these paints a dystopian world in which privacy is gone. Guys, here we are … the chronoscope is being built, as we all carry devices with cameras and microphones, all of which are backed up in Oregon. Will it become impossible for any human to keep any secret? What does that do to cryptography? In discussion, it was argued that this is a double-edged sword; with pervasive surveillance, crypto keys may become less important, as the bad guys (or policemen with warrants) just get access to plaintext, while the integrity of information can be verified by looking directly at its history. The compromise of the password protecting a signature key is less important if a video of the contract negotiation that led to the signature is available. Technical countermeasures against pervasive surveillance bear some consideration, from flooding datastreams with random padding to denying humans access to secrets. But what technical countermeasures might help protect privacy? And what are the likely social effects of the death of privacy? There was a recent piece in the Guardian on this, with a link to a video.

  8. Arash Atashpendar has been thinking about quantum key exchange, where one might wish to estimate the information leaked by the public discussion of errors in the candidate keys. Typical protocols start off with a shared conventional secret key that enables Alice and Bob to select k bits of the quantum keystream to compare. The locations of these bits are unknown, but the bits themselves are leaked in order on the public channel. He analyses the associated uncertainty and has some simulation results for the cases where it’s hard to calculate the Rényi entropy and the min-entropy that you need for privacy amplification or randomness extraction. It would be nice to get clean formulae to bound the asymptotic leakage.
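
For a given posterior distribution over the remaining key candidates, the quantities involved are standard; here is a short sketch of computing Shannon, collision (Rényi order 2) and min-entropy for a toy distribution (the probabilities are illustrative only).

```python
import math

def shannon_entropy(p):
    return -sum(x * math.log2(x) for x in p if x > 0)

def renyi2_entropy(p):
    """Rényi entropy of order 2 (collision entropy)."""
    return -math.log2(sum(x * x for x in p))

def min_entropy(p):
    """Min-entropy, which governs how much privacy amplification can extract."""
    return -math.log2(max(p))

# Toy posterior over the remaining key candidates after the public discussion.
posterior = [0.4, 0.3, 0.2, 0.1]
print(f"H1   = {shannon_entropy(posterior):.3f} bits")
print(f"H2   = {renyi2_entropy(posterior):.3f} bits")
print(f"Hmin = {min_entropy(posterior):.3f} bits")
```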

    The last speaker at the workshop was Changyu Dong, whose topic is efficient data-intensive secure computation. Is this fictional or real? Secure computation was established as a topic by Andrew Yao 33 years ago, and we now know that any function can be evaluated with a secure computation protocol with polynomial overhead; but it’s 10,000 times slower than normal computation. Can we make it real? Might conventional optimisation approaches, such as data structures and parallelism, help? For example, computing the intersection of two private sets each containing a million 32-bit integers currently takes 6 hours (Huang, NDSS 2012). Using Bloom filters, all you seem to need is a secure AND of the two filters; but this leaks slightly. The fix is a garbled Bloom filter built using secret sharing; for details see his CCS 2013 paper. He can now make this run dozens to hundreds of times faster than a naive implementation; that’s the benefit of an appropriate data structure. Parallelism can also be added efficiently since Craig Gentry came up with fully homomorphic encryption. Changyu uses Craig’s BGV scheme, with each polynomial representing a set; their product is the intersection. In discussion, we touched yet again on what problems can and cannot be solved using cryptography; in the case of secure computation, no cryptographic algorithm can block active attacks on the statistical disclosure control aspects. There are further interactions with incentive-compatible multiparty computation.
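
The Bloom-filter intuition is easy to show in the clear. The sketch below computes a candidate private-set intersection by ANDing two plain Bloom filters; this is an insecure, plaintext illustration of the data structure only (and it has false positives), whereas the garbled, secret-shared construction in the CCS 2013 paper is what provides the security.

```python
import hashlib

M, K = 4096, 5   # filter size in bits and number of hash functions (toy parameters)

def positions(item, m=M, k=K):
    """k hash positions for an item, derived from SHA-256 with different salts."""
    return [int.from_bytes(hashlib.sha256(f"{i}|{item}".encode()).digest()[:4], "big") % m
            for i in range(k)]

def bloom(items):
    bits = [0] * M
    for it in items:
        for p in positions(it):
            bits[p] = 1
    return bits

def filter_and(a, b):
    """Bitwise AND of two Bloom filters: a filter for (roughly) the intersection."""
    return [x & y for x, y in zip(a, b)]

alice = {101, 202, 303, 404}
bob   = {303, 404, 505}
joint = filter_and(bloom(alice), bloom(bob))

# Alice tests her own elements against the ANDed filter.
candidates = {x for x in alice if all(joint[p] for p in positions(x))}
print(candidates)   # {303, 404}, up to a small false-positive probability
```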
