Euro S&P

I am at the IEEE Euro Security and Privacy Conference in London.

The keynote talk was by Sunny Consolvo, who runs Google’s security and privacy UX team, and her topic was user-facing threats to privacy and security. Her first theme was browser warnings, which try to stop users doing what they want to: a warning is an interruption, it’s technical, and there’s no obvious way forward other than clicking through it. In 2013 their SSL warning had a clickthrough rate of 68% while their more explicit and graphic malware warning had only 23% clickthrough. Mozilla’s SSL warning had a much lower 33%, with an icon of a policeman and more explicit text. After four years of experimenting with watching eyes, corporate styling / branding and extra steps – none of which worked very well – they tried a strategy of clear instruction, an attractive preferred choice, and an unattractive alternative. The text had less jargon, a low reading level, brevity, specifics, an illustration and colour. Her CHI15 paper shows that the new design did much better, cutting the clickthrough rate from 69% to 41%. It turns out that many factors are at play; a strong signal is site quality, but this leads many people to continue anyway to sites they have come to trust. The malware clickthrough rate is now down to 5%, and SSL to 21%. That took five years of effort by a huge team, on the back end as well as the UX. It involved huge internal fights, such as with a product manager who wanted the warning to say “this site contains malware” rather than “the site you’re trying to get to contains malware” because it was shorter. Her recent papers are here, here, and here.

A second thread of work is a longitudinal survey of public opinion on privacy, ranging from government surveillance to cyber-bullying. This has run since 2015 in sixteen countries. 84% of respondents thought limiting access to online-but-not-public data is very or extremely important. 84% were concerned about hackers, versus 55% worried about governments and 53% about companies. 20% of Germans are very angry about government access to personal data versus 10% of Brits. Most people believe national security justifies data access (except in South Korea), while in no country do people believe the government should have such access to police non-violent crime. Most people everywhere support targeted monitoring, but nowhere is there majority support for bulk surveillance. In Germany 53% believed everyone should have the right to send anonymous encrypted email, while in the UK it’s 39%. Germans were pessimistic about technology, with only 4% believing it is possible to be completely anonymous online. Over 88% believe that freedom of expression is very or extremely important, and less than 1% unimportant; but over 70% didn’t believe that cyberbullying should be allowed. Opinions are more varied on extremist religious content, with 10.9% agreeing it should be allowed and 21% saying “it depends”.

Her third thread was intimate partner abuse, which has been experienced by 27% of women and 11% of men. There are typically three phases: a physical control phase where the abuser has access to the survivor’s device and may install malware, or even destroy devices; an escape phase, which is high-risk as the survivor tries to find a new home, a job and so on; and a life-apart phase when they might want to shield location, email address and phone numbers to escape harassment, and may have lifelong concerns. Risks are greater for poorer people, who may not be able to just buy a new phone. Sunny gave some case studies of extreme mate guarding and of survivors’ strategies, such as using a neighbour’s phone or a computer in a library or at work. It takes seven escape attempts on average to get to life apart. After escape, a survivor may have to restrict children’s online activities and sever mutual relationships; letting your child post anything can leak the school location and lead to the abuser turning up. She may have to change career, as it can be impossible to work as a self-employed professional if she can no longer advertise. The takeaway is that designers should focus on usability during times of high stress and high risk; they should allow users to have multiple accounts; they should design things so that someone reviewing your history cannot tell you deleted anything; and they should push 2-factor authentication, unusual-activity notifications, and incognito mode. They should also think about how a survivor can capture evidence for use in divorce and custody cases while minimising the trauma. Finally, she suggests serious research on other abuse survivors, of different age groups and in different countries. For more see her paper here.

I will try to liveblog the rest of the talks in followups to this post.

14 thoughts on “Euro S&P”

  1. The first regular talk was by Laurent Simon, on how compiler optimisations break security guarantees that engineers try to build into their code. (Disclosure: I’m a coauthor.) Back in 1989, the C standard allowed compilers to ignore code that produces no output and has “no needed side effects”. The C standard also doesn’t care about the memory layout of your program. As a result, the C standard is not appropriate where we want to control the side-effects of code, such as in cryptography, or when we’re hardening code using bit scattering or trying to defend against rowhammer. Cryptographers therefore use all sorts of tricks to outwit the compiler; Laurent discussed a constant-time choose. He gave some examples of how this fails in surprising ways; the takeaway is that such tricks get found out in newer compilers, which as a result are less dependable than old ones. It is time to end this and make the compiler writer an ally rather than an enemy; support for features that let programmers express intent is now considered a good thing. Laurent therefore added support for memory zeroisation and constant-time choice to the clang/llvm framework. There’s now a constant-time choose function, which could replace perhaps 37 constant-time choose hacks in OpenSSL. There is very little overhead for elliptic curve operations, and the constant-time choose is even slightly faster for a Montgomery ladder. A ton of work remains to be done, everywhere from the hardware to the rest of the software toolchain, and there is the potential for real impact. In questions he noted that over 90% of the research papers on SGX were about timing attacks.
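
    To illustrate the kind of trick at issue, here is a minimal sketch of a branchless constant-time choose – my own illustration in Python with explicit 32-bit masking, not code from the paper; real implementations are written in C, where the idiom actually matters:

    ```python
    # Toy "constant-time choose": returns a if cond == 1, else b, with no
    # data-dependent branch. Python ints are unbounded, so we mask to 32
    # bits; in C a compiler may still "optimise" the idiom away, which is
    # exactly the problem the paper addresses.
    MASK32 = 0xFFFFFFFF

    def ct_select(cond: int, a: int, b: int) -> int:
        # -cond is 0x00000000 when cond == 0 and 0xFFFFFFFF when cond == 1
        mask = (-cond) & MASK32
        return (b ^ ((a ^ b) & mask)) & MASK32

    assert ct_select(1, 0xAAAA, 0xBBBB) == 0xAAAA
    assert ct_select(0, 0xAAAA, 0xBBBB) == 0xBBBB
    ```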

    The second speaker was Toby Murray, talking about the Covern project, which is trying to extend techniques for verifying information-flow control properties from single-threaded to concurrent systems, and to cope with dynamic policies. His target device is the cross-domain desktop compositor, an MLS network appliance that uses an FPGA to composite video from sources at different levels for Australian military use. It requires the input mode and the display mode to be the same, so that the user can’t enter Secret text into an Unclassified window by mistake. Verifying this property is nontrivial because of shared-memory concurrency.

    Carlos Cotrini was the morning’s last speaker; he has been working on mining succinct and predictive attribute-based access control rules from logs. The state of the art might be apriori-c, which looks for rules that affect at least a threshold number of employees; this can easily approve rules which might be Trojans. He introduces a new metric of “reliability” that depends on refinements: every rule that subsumes it must have high confidence. This can spot over-permissive rules; he measures precision with respect to the test log. He has an algorithm, Rhapsody, which mines more precise and shorter rules. In questions it was pointed out that the F1 scores for precision were all under 0.2.

  2. The afternoon session was started by Anupama Aggarwal, who’s been investigating browser extensions. These things are easy to develop and deploy, while having privileged access to browsing behaviour; Chrome has 512 million installs and Firefox over 100m. Unfortunately some are malicious; they may steal your browsing history or even your credentials. How can we detect spying extensions at scale? She ran 42k extensions in a controlled environment and looked for telltale traffic and monetisation signals, finding 218 spying extensions with 1.5m installs. Only 12 had suspicious-activity reports; many have useful functionality, with over half having a rating of 4/5. Anyway, she used the 218 bad and 10,000+ good extensions to train a neural network classifier; she found that the best signal was the API calls. She then found that an LSTM classifier worked particularly well on the API sequence, and using that she found 90 extensions marked suspicious, of which 65 were confirmed, with over 1 million installs. This gives a robust countermeasure that can be run entirely on the client side.

    Next was Pascal Berrang, who’s been studying genetic privacy. Previous research mostly focused on DNA and worked out various inference attacks, though there has been some work on microRNA. Pascal wanted a robust methodology for working across genetic data types, and found a dataset of 75 individuals (including 21 mother/child pairs) with data over 8 years, including on DNA methylation. He worked out the practicalities of building a Bayesian network for the correlations between methylation sites for individuals who might or might not be related; this involved simplifying things by looking only at 32,000 independent pairs of positions, which can be learned and evaluated separately. He then separated the kinship and time components. He found he can build models from data plus external knowledge, and that quite a lot of information leaks; for example, there’s a set of methylation sites strongly associated with Crohn’s disease, and the correlation between the methylation of mother and child supports matching that’s 95% accurate given 500 positions.

    Patrick Speicher has been investigating how we can prioritise security investments, taking as an example how a German email service provider could get the best bang for the buck when trying to stop bulk surveillance by the Five Eyes. Should they use DNSSEC, DANE, TLS, or even relocate infrastructure? Building a realistic model of this, rather than a toy one, involved gathering nontrivial data; he builds a property graph and propagation rules which take account of cross-dependencies between protocols. A particular issue is stupid certificate validation: for example, if you send email to gmail.com and the attacker returns gmx.evilsite.com, some mail servers only check that the certificate is good for evilsite.com. The result is a Pareto frontier of mitigations available for a given budget. He ran the analysis for a number of possible attacker and defender countries, checking their main mail services’ location and routing, and making assumptions about how stealthy the attackers would be. For example, if Russia spent nothing on mitigation, America could read 21% of its email, while America can get 81% of Brazil’s traffic. Defence gets more expensive with more attacking countries, and total defence is really expensive; both Russia and Brazil would spend nine figures to keep the NSA out, between protocol, deployment and political measures. There is a website to play with here.

  3. Mauro Tempesta started the last session with a talk on synthesis of firewall policies. His new language IFCL is designed to cope with complex firewalls that have multiple nodes, each with an associated ruleset. He models firewalls as logic formulas by removing control-flow actions and then unfolding rules into predicates over pairs of packets. You can then use tools like SMT solvers to analyse and compare firewall policies. He has implemented this in a tool called FWS. For example, their department has 3 subnets, 530 iptables rules and 5 user-defined rulesets; analysis takes seconds. With 22 subnets and 4841 iptables rules, analysis usually takes under 3 minutes.
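
    To make the SMT idea concrete, here is a hedged toy illustration (mine, not the FWS tool) using Z3’s Python bindings: two tiny policies are encoded as predicates over packet fields, and the solver is asked for a packet on which they disagree:

    ```python
    # Toy firewall comparison via SMT (illustration only, not FWS itself).
    from z3 import And, BitVec, Or, Solver, Xor, sat

    src = BitVec('src', 32)      # source IP as a 32-bit vector
    dport = BitVec('dport', 16)  # destination port

    lan = And(src >= 0x0A000000, src <= 0x0AFFFFFF)  # 10.0.0.0/8

    # Policy A allows LAN hosts to reach ports 80 and 443; policy B forgets 443.
    policy_a = And(lan, Or(dport == 80, dport == 443))
    policy_b = And(lan, dport == 80)

    s = Solver()
    s.add(Xor(policy_a, policy_b))  # a packet the two policies disagree on
    if s.check() == sat:
        print('policies differ, witness:', s.model())
    else:
        print('policies are equivalent')
    ```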

    Martin Strohmeier has been studying privacy in aviation, and specifically what can be learned from tracking data. Aircraft can be tracked by anyone; you can use popular websites like flightaware.com (which are filtered), ADS-B (which isn’t), or the OpenSky Network (research); you can also use gnuradio to collect air-band traffic up to 500km away, including SSR and ADS-B (and with many receivers you get much more). The implications are different for government and corporate actors. For example, Google execs were caught making multiple trips to Pacific islands. Martin collected 2m transponder addresses and flight data from the OpenSky Network, which has thousands of software radios. He trained some classifiers on known summits with state aircraft and then looked. The countries visiting the UK the most were the GCC states: the UAE, Qatar, Bahrain and Saudi Arabia; there are also regular meetings between the USA, Germany, the Netherlands and so on. In 2016, two journalists in Geneva set up a “Dictator alert” which tweets automatically whenever a dictator’s plane arrives in Geneva. This led to the seizure of 11 luxury cars from the dictator of Equatorial Guinea. Tracking 75 corporate jets flying a median of 91 flights each helped him discern 7 M&A activities by five different firms, including Johnson and Johnson’s acquisition of Actelion. As for Trump, his plane was put on a blocklist when he became a candidate, but you can see his track on noncompliant databases. Owning the plane through a shell company is weak protection; charter jets could be better. There’s more at https://www.avsec-oxford.com/blog.

    Teemu Rytilahti has been looking at attacks involving NTP, based on several months of probing network traffic in 249 countries, which enabled him to build an NTP graph. He also reviewed reported incidents. The data he’s collected to map out the infrastructure are available online, and he also has anonymised samples of client traffic. There are all sorts of obscure protocol attacks and denial-of-service tricks that can follow from causing devices to get the time wrong.

    Tuesday’s last speaker was Daniele Asoni, presenting TARANET, which provides anonymity at the network layer by hiding communication metadata. The idea is that all the ASes on the path collaborate to provide privacy. Previous proposals like Dovetail, PHI and HORNET offered low privacy at over 75% of plaintext performance; TARANET offers high privacy, at the cost of performance dropping to 25%. It uses fixed packet lengths and layered encryption to try to defeat traffic-confirmation attacks. Packet counting and inter-packet timing are the classic passive attacks, so TARANET pads traffic to a constant bit rate in each “flowlet”; among the active attacks, packet dropping is maybe the hardest to block, as honest nodes remove jitter but can’t create replacement packets that will pass the last AS, which may be malicious and in cahoots with the first AS on the path. The fix is to include packets that can be split to repair losses. He built a prototype and has some performance numbers: 3Gbit/sec on a single core. The main drawback is the bandwidth overhead caused by the chaff traffic.

  4. Wednesday’s first talk was by Will Robertson, presenting work largely done by his coauthor Kaan Onarlioglu of Akamai, who could not be here for visa reasons. Their topic is forensic security, by which they mean a strong guarantee that deleted data is irrecoverable – whether by a sibling or by a government. Full-disk encryption can be foiled by modern solid-state drives for various reasons. For example, read/write blocks are typically 4K while erase blocks may be 256K, and wear-levelling algorithms get in the way of overwriting, as do overprovisioning and garbage collection. Prior work has shown that existing secure-erase and explicit overwrite commands don’t work well. Their project, Eraser, assumes a strong adversarial model, and a TPM or secure key vault, but assumes nothing about physical block mapping. The problem is scaling key management, and the core of their design is a key hierarchy: a file key tree with leaf nodes corresponding to individual files, and deletion carefully optimised. They have a stackable block driver implementation for Linux, with performance comparable to dm-crypt.

    Micah Morton was inspired by recent attacks on browser sandboxing to wonder whether performance optimisations might undermine web server sandboxing too. Attacks do exist, and were more severe than expected. The asynchronous web server nginx uses an event-driven architecture rather than forking a new process for each connection, enabling it to outperform Apache for large websites. However, given (say) a ROP attack to corrupt application memory, it turns out that there’s a lot of shared security-critical data. He built a memory-access instrumentation tool to map this. An example of this kind of failure is the “Cloudbleed” leak in Cloudflare last year. Nginx’s configuration data pointer table is not too hard to find in memory and has pointers to many variables of interest. Such vulnerabilities are starting to appear in Apache too, as seen in the Equifax breach last year. In short, stealthy data-only attacks are a growing threat.

    Amit Vasudevan has been wondering how to provide practical security for commodity systems on cheap hardware. His solution is a micro-hypervisor for the Raspberry Pi. So how do you slice the Pi? The Pi3’s Cortex chip has four cores and four exception levels, two GPU cores in the VC4 chip which masters the bus, and an undocumented MMU, interrupt controller and NVRAM; it has neither an IOMMU nor secure memory. He assumes we want to protect a single OS against an attacker with no physical access. His design, UberPi, provides secure boot, app liveness and app DMA protection; it can also reserve specific system peripherals (such as the timer) for use by the microhypervisor core or specific apps, in which case accesses can be trapped and inspected. Doing this without fancy hardware is the key innovation. It is 5544 lines of code, took 6 months to write, and the overhead is 3.5-6%. He claims it’s the first practical hypervisor for the Pi3.

    Christian Vaas has been working on vehicle location: in addition to already proposed vehicle-to-vehicle protocols, there’s a new proposal for a protocol using cellular radio, to support cooperative cruise control and collision avoidance. Attackers might try to cause collisions or do surveillance. Christian has been working on authenticating vehicles as they enter and leave a formation using the vehicle’s trajectory, and doing this co-presence verification privately. The immediate output is a means of screening VANET messages. He’s experimented with both GPS and gyros to acquire turn information. The signals used are turns left and right, and periods of time spent going straight. He did simulations for false accept rates and experiments for false rejects. It takes 500-800 metres to accept a candidate vehicle as a follower, and the same to deauthenticate it when it breaks away. False accept and reject rates can be around 7-10%. See the paper for more: https://www.semanticscholar.org/paper/Get-in-Line%3A-Ongoing-Co-Presence-Verification-of-a-Vaas-Juuti/10b8483a5ad95f5b6ed585b45f0d743122576b01

  5. Mario Werner has been working on sponge-based control-flow protection. How can we authenticate a control-flow graph efficiently? Mario uses cryptographic sponges, a design by Bertoni et al. from 2007 of which SHA3 is an example. He discussed various modes of operation and came up with a mode tuned for the threat model faced by IoT devices. CFI can be added to a processor within 30K gates.
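
    As a toy analogue of the idea (my sketch, using a hash call in place of a real sponge permutation, and nothing like the paper’s hardware design): a running state absorbs a label at every control-flow edge, and at a checkpoint the state must equal a value precomputed from the expected path, so any diverted path is detected:

    ```python
    # Toy analogue of sponge-based control-flow integrity (illustration only).
    import hashlib

    def absorb(state: bytes, edge_label: bytes) -> bytes:
        # stand-in for one call to a sponge permutation such as Keccak-f
        return hashlib.sha3_256(state + edge_label).digest()

    def run_path(edges):
        state = b'\x00' * 32
        for e in edges:
            state = absorb(state, e)
        return state

    expected = run_path([b'main->f', b'f->g', b'g->ret'])
    observed = run_path([b'main->f', b'f->EVIL', b'g->ret'])
    assert observed != expected  # the tampered edge changes the state
    ```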

    Enes Göktaş has been working on a new code-reuse attack, PIROP, that works without information about memory layout. Many techniques are used to shield code from attacks using ROP gadgets, from re-randomisation and multi-variant execution to destructive code reads, which prevent the reuse of discovered target addresses. His trick is to start off from stack snapshots and use relative addressing, massaging the least significant bytes of code addresses, to get the program to build the ROP payload itself. He created proof-of-concept attacks, one on Firefox and two on Asterisk. In questions he clarified that the randomised part of the addresses you massage is typically the most significant bits.

    Jon Stephens has been working on a new obfuscation technique that enables you to hide control flows in a program using covert channels. Instead of setting a=key, b=a and then doing decrypt(media,b), you do leak(a) and then b=retrieve(), using a timing channel. This breaks standard reverse-engineering techniques, including emulation (unless the attacker emulates the hardware too, so as to get at “invisible” state). In effect the analysis tool under-taints, like other tricks that hide information in control flow. He can hide information in the hardware using a cache timing channel, or in the libraries using JIT compilation – thus forcing the analyst to understand or emulate everything. Covert channels can also be composed, and various factors affect the stealth of the process. Here is the project website.
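
    Here is a deliberately crude sketch of the concept (mine; the real work uses cache and JIT timing channels, not sleep): the value of a never flows to b through any variable, so a data-flow analysis under-taints:

    ```python
    # Toy covert timing channel: leak() encodes a small value in elapsed
    # time, and retrieve() recovers it by measurement -- no explicit data
    # flow from a to b for a taint tracker to follow.
    import time

    SLOT = 0.02  # seconds per unit; real attacks use cache or JIT timing

    def leak(a: int) -> None:
        # encode a in elapsed time rather than in any variable
        time.sleep(SLOT * a)

    def retrieve(elapsed: float) -> int:
        return round(elapsed / SLOT)

    t0 = time.perf_counter()
    leak(3)
    b = retrieve(time.perf_counter() - t0)
    assert b == 3  # value recovered with no data flow from a to b
    ```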

  6. Wei Bai has been studying the usability of searchable encryption. The key question is how users search email, both locally and on cloud-based mail servers. There are many issues around features such as multi-word search and partial-word search. He recruited 160 participants who each tackled 16 choice questions. Price was the most important feature, followed by privacy, then local storage; sophisticated search features came later, with participants willing to pay only a few tens of cents for them. He concludes that there may be a niche demand for fancy encrypted-search features, but it’s likely to be more attractive for vendors to focus on privacy and convenience (in the form of local storage).

    Sebastian Ramacher has been thinking about speeding up cryptocurrency payments, but is there a way to penalise off-blockchain double spending? Existing e-cash constructions can be used to leak a secret key if a digital coin is double-spent. He has come up with new definitions and a generic mechanism for making signature schemes resistant to double spending using Shamir secret sharing. You use the context to select the sharing polynomial and then the message is the x coordinate, so if a signature is applied to two separate messages then the key is leaked.
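
    A minimal sketch of the underlying idea, as I read it (my toy, not the paper’s scheme): the secret key is the constant term of a degree-1 polynomial over a prime field, each signed message releases one share, and two shares from a double-spend let anyone interpolate the key:

    ```python
    # Double-spend deterrent via 2-of-n secret sharing (illustration only):
    # f(x) = key + r*x mod P; signing message m releases the share (m, f(m)).
    P = 2**127 - 1  # a Mersenne prime, so arithmetic mod P is a field

    def share(key: int, r: int, m: int):
        return m, (key + r * m) % P

    key, r = 123456789, 987654321        # r derived from the coin's context
    x1, y1 = share(key, r, m=1111)       # first spend
    x2, y2 = share(key, r, m=2222)       # double spend!

    # Lagrange interpolation at x = 0 recovers the constant term: the key
    inv = pow(x2 - x1, -1, P)
    recovered = ((y1 * x2 - y2 * x1) * inv) % P
    assert recovered == key
    ```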

    Charles Wright wants a halfway house between surveillance and privacy. Could we have exceptional access that really is exceptional, rather than the rule? He wants something that stops mass surveillance without increasing complexity or creating fat targets. His idea is that each message key should have a recovery key that can be brute-forced, with details largely stolen from the bitcoin mining function, plus from Adrian et al‘s attack on Diffie-Hellman. Law enforcement is assumed to have the capability to break Diffie-Hellman with a given prime, to keep the script kiddies out, and then a brute force is needed for each session key. The AntMiner does 10GH/J and the cheapest electricity in the USA is 5c/kWh, so an 80-bit keysearch costs $1bn while 60 bits cost $1K. Using such mechanisms the designer can set both the fixed cost and the marginal cost of decryption. In questions I noted that even if the UK government were happy with a limit of 1000 decryptions a year, a choice between burning a million dollars’ worth of carbon per key and using a hardware security module that allowed only a thousand decryptions would be ethically and economically clear.
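
    As a hedged sketch of how a brute-forceable recovery key might look (my toy construction, not Wright’s): escrow the message key under a puzzle with k secret bits, so that recovery always costs on the order of 2^k hash trials:

    ```python
    # Toy escrow with a tunable brute-force cost (illustration only). The
    # talk's figures concerned 60-80 bit work factors; k = 20 runs in seconds.
    import hashlib, os, secrets

    K_BITS = 20

    def escrow(message_key: bytes):
        puzzle = secrets.randbits(K_BITS).to_bytes(8, 'big')
        pad = hashlib.sha256(b'escrow' + puzzle).digest()[:len(message_key)]
        blob = bytes(a ^ b for a, b in zip(message_key, pad))
        check = hashlib.sha256(message_key).digest()[:8]
        return blob, check

    def recover(blob, check):
        for guess in range(2 ** K_BITS):  # the mandatory work
            g = guess.to_bytes(8, 'big')
            pad = hashlib.sha256(b'escrow' + g).digest()[:len(blob)]
            key = bytes(a ^ b for a, b in zip(blob, pad))
            if hashlib.sha256(key).digest()[:8] == check:
                return key
        return None

    mk = os.urandom(16)
    assert recover(*escrow(mk)) == mk
    ```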

  7. The last session of the day started with Lucas Bang talking about side-channel attacks where the attacker can reduce the search space by adaptive input choice. How do we model this in the presence of noise? Lucas models the attacker’s belief as an evolving probability distribution. This Bayesian framework is used together with symbolic execution to generate possible program behaviours. He ends up with the attacker’s information gain being bounded by the Shannon entropy of the path-constraint probabilities, weighted by the attacker’s existing knowledge or belief about the secret. There is a library called barvinok that does the model counting efficiently, and this yields an attack. As an example, he analysed a database where a single record was secret but could be found by a timing inference attack.
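
    A minimal sketch of the Bayesian attacker model (my illustration, not Lucas’s tooling): keep a distribution over candidate secrets, update it from a noisy timing observation, and measure the information gain as the drop in entropy:

    ```python
    # Bayesian belief update from one noisy side-channel observation.
    import math

    candidates = [0, 1, 2, 3]
    belief = {s: 0.25 for s in candidates}  # uniform prior

    def likelihood(obs_time: float, secret: int) -> float:
        # assume running time ~ secret plus Gaussian noise (sigma = 0.5)
        return math.exp(-((obs_time - secret) ** 2) / (2 * 0.5 ** 2))

    def update(belief, obs_time):
        post = {s: p * likelihood(obs_time, s) for s, p in belief.items()}
        z = sum(post.values())
        return {s: p / z for s, p in post.items()}

    def entropy(belief):
        return -sum(p * math.log2(p) for p in belief.values() if p > 0)

    h0 = entropy(belief)
    belief = update(belief, obs_time=2.1)  # one noisy timing measurement
    print(f"information gained: {h0 - entropy(belief):.2f} bits")
    ```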

    Takuya Watanabe has found that blocking other users on social media can leak privacy. An attacker controls a number of signalling accounts, some of which block the victim; when the victim visits a target website, it uses JavaScript to send her to some signalling accounts and then measures the delay – which will be greater if she’s blocked by that account. Experiments show that signalling a bit takes 3 hits on Facebook, 10 on Twitter and 20 on Tumblr. Ultimately, user identification succeeded in almost all cases, and Reed-Solomon codes can be used for an efficient implementation. Twitter responded by adopting same-site cookies and a referrer-based defence, paid a bug bounty of $2,500, and persuaded Mozilla to adopt same-site cookies too; eBay claimed to have deployed defences last night. Overall, at least 12 services are, or have been, vulnerable.

    Damian Poddebniak started working on using rowhammer to break EdDSA, and has now generalised this to fault attacks on deterministic signatures in general. Fault attacks started with heat, light and voltage glitches; then rowhammer removed the need for physical access. He signs the same message again and again, induces faults, and gets the private key out using algebra. You can also hammer the public key in some cases. Damian did a number of experiments with minisign. A good mitigation is XEdDSA, which is more resistant, and public keys should always be protected with a checksum; perhaps messages should be too. In questions, it was asked why not just check the signature, as with the defences used for twenty years against glitch-based attacks? The answer is that a rowhammer fault that flips a bit in the message won’t be detected that way, and also that rowhammer attacks tend to be repeatable, in that they flip the same bits on the same silicon.
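
    To see why the algebra works: in a Schnorr-style scheme such as EdDSA, a signature is S = r + h·s mod ℓ, where the challenge h = H(R, A, m) and the nonce r is derived deterministically from the message. If a fault corrupts the hash computation on a second signing of the same message, the signer emits S' = r + h'·s with the same r, and anyone can compute s = (S - S')/(h - h') mod ℓ. (This is the standard fault-attack algebra, stated here for intuition; the paper generalises it across deterministic schemes.)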

  8. John Schanck started Thursday’s talks with CRYSTALS, a suite of public-key cryptographic algorithms for a post-quantum world. 12 of the submissions to the NIST process use the “noisy dot product” approach similar to, or inspired by, Lindner-Peikert. His toolkit is based on his submission Kyber, and has short keys by post-quantum standards; the public key for Kyber768 is about 1KB. He discussed the various design choices, trade-offs and optimisations.
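
    For intuition, here is a toy “noisy dot product” in the Lindner-Peikert style (my illustration, nowhere near Kyber’s actual structure or parameters): the pair (a, b) with b = ⟨a, s⟩ + e mod q hides the small secret s behind the small error e:

    ```python
    # Toy LWE-style sample (illustration only; Kyber works over module
    # lattices with structured polynomials, not plain vectors like this).
    import random

    q, n = 3329, 8                                    # Kyber uses q = 3329
    s = [random.randrange(3) - 1 for _ in range(n)]   # small secret in {-1,0,1}
    a = [random.randrange(q) for _ in range(n)]       # public random vector
    e = random.randrange(5) - 2                       # small noise
    b = (sum(ai * si for ai, si in zip(a, s)) + e) % q

    print(a, b)  # the sample (a, b) reveals essentially nothing about s
    ```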

    Benjamin Harsha has been working on just-in-time hashing. Hash functions used for password protection use a variety of tricks to slow down attackers, and this is getting ever more important now that equipment like the Antminer can try SHA hashes four orders of magnitude faster than a normal CPU. Benjamin’s idea is to use just-in-time computation, hashing each successive input character of the password with the salt as it’s entered. He suggests using it with Argon2.
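
    A hedged sketch of my reading of the idea (not Benjamin’s exact construction, which uses a memory-hard function like Argon2): run a slow hash per keystroke, chaining from the salt, so the expensive computation hides inside the user’s typing time:

    ```python
    # Just-in-time password hashing sketch: one slow hash per character.
    import hashlib

    def jit_hash(salt: bytes, password: str,
                 rounds_per_char: int = 50_000) -> bytes:
        state = salt
        for ch in password:  # in practice, run as each key is pressed
            state = hashlib.pbkdf2_hmac('sha256', ch.encode(), state,
                                        rounds_per_char)
        return state

    h = jit_hash(b'per-user-salt', 'hunter2')
    print(h.hex())
    ```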

    Luke Valenta has been studying how elliptic curve cryptography is used in the real world. He’s been trying to evaluate the feasibility of a curve-swap attack on ECDH, which involves downgrading to a curve where discrete logs can be computed. He didn’t find anyone vulnerable despite extensive scanning (28.8M https, 7.9M ssh, 215k IKEv1, 101k IKEv2). He did however find that 8.5M https hosts completely ignored the list of curves he offered. He checked key reuse, finding that 5.5M (22%) of https hosts reused at least one session key and 640k (2.6%) used the same session keys as another host. Of 4M client hellos, 682K (16%) supported secp160r1, a curve with only 80 bits of security. He also checked invalid-point attacks, where by sending an illegal input you get some key bits out (this can be a risk with curve-blind implementations, though it’s against all standards except JWE). He found that out of six libraries that support ECDH for JWE, only the Apache code actually checks public-key validity. Some of the https and other implementations will accept bad keys but won’t reuse their own session keys, so there isn’t an obvious attack. He recommends that all crypto developers use the Wycheproof test suite to make it less likely that their code will be vulnerable to known attacks.
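
    The invalid-point problem comes down to a check that libraries skip; here is a minimal validation sketch on a toy curve (the parameters are made up for illustration, and real code would also check the subgroup order and use a standardised curve):

    ```python
    # Public-key validation sketch: is the supplied point on the curve
    # y^2 = x^3 + a*x + b mod p? Skipping this enables invalid-point attacks.
    p, a, b = 97, 2, 3  # toy parameters, NOT a real curve

    def on_curve(x: int, y: int) -> bool:
        if not (0 <= x < p and 0 <= y < p):
            return False
        return (y * y - (x ** 3 + a * x + b)) % p == 0

    print(on_curve(0, 10))  # True: 10^2 = 100 = 3 mod 97 = b
    print(on_curve(5, 5))   # False: an attacker-supplied off-curve point
    ```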

  9. The systematisation-of-knowledge session consisted of a single paper, by Nicolas Papernot, on security and privacy in machine learning. Machine learning isn’t magic and wasn’t designed with adversaries in mind; the huge error space creates a huge attack surface. An example was Microsoft’s Tay chatbot, which was poisoned by adversaries feeding it racist comments. Another example was a video of Mickey Mouse being run over which made it past YouTube’s filters for children’s content. A widespread problem is that classifiers don’t generalise well from training data, so they may pick up on images that no human would classify in that set, and there may be small perturbations of the input that cause startling misclassifications. There has been work in this direction on spam filters for over a decade; recently we’ve found that deep neural networks are just as vulnerable. Many classifiers also overfit, particularly to outliers, which just get memorised; this can violate privacy, as the exact training image is classified so much better than similar ones that you can infer training-set data. ML techniques are being used in new applications, such as autonomous vehicles, which expose interesting attack surfaces. There are now many attacks and defences. Interesting open problems include learning models that provide integrity at inference. Is there a good definition of robustness? Can we enforce domain constraints and semantics? Is there any way to move to domains without human oracles? Can we mask gradients in learning? Can we use differential privacy as a regulariser? Can we identify unfair correlations? Can we avoid having mechanisms for accountability and transparency enable attacks on training and inference? The economist Charles Goodhart said in 1975 that “When a measure becomes a target, it ceases to be a good measure”; machine learning people should take note.

  10. The second keynote was by Lujo Bauer, and his title was “From password policies to adversarial machine learning, it’s all about the user.” Many people see “usable security” as a side field rather than the core of our field; Lujo disagrees. Even in core areas, human behaviour is often at the heart of the definition of the serious problems. His first example is file-system semantics, which confuse everyone from regular users to sysadmins. He did a usability study of the Windows access-control interfaces, and this led him to wonder whether the problem was the interface or the semantics. He tested a grid with Windows semantics against one with specificity semantics for conflict resolution, which turned out to be much better. This taught him that it’s not just the UI.

    His second example was passwords. Complaining about them is pointless, as they’ll be here for a long time; so what’s the best way to guard against offline attacks? Methodology is hard, as small user experiments, analysis of leaked passwords and so on all have drawbacks. He’s done some work on password strength meters, trying to figure out which guessing algorithms are best to use and which correspond to what attackers do in practice. The performance of tools like hashcat depends critically on how you configure them; hanging out in the right places will teach you how to get much better performance. As a result he believes he now has robust metrics. He then developed a neural network to measure password strength, which turns out to perform better than many of the password-guessing tools; with some effort, he managed to make it small enough to fit in a web page (by which he means under 1MB). One takeaway from this is that understanding humans extends to understanding attackers too.

    His third big example was adversarial machine learning, where he has been looking for realistic attack scenarios. When can an attacker affect input enough to fool a classifier while remaining inconspicuous? Changing a picture of a cat into a dog isn’t deception if it affects humans and classifiers equally. His discussion example was eyeglasses that persuade a classifier to misidentify a person. This sort of work teaches that adversarial machine learning attacks only really make sense in context.

    His final example was the design of programming languages and APIs. He laments the lack of research into what tools enable people to write code without vulnerabilities; this has started to be tackled in the last couple of years by his students and Matthew Smith’s. Instead of starting at programming-language theory and working up through the languages, the libraries, the development tools and the folklore to the people, we should start with the user and work downwards.

  11. Paul Rösler has been studying the security of group chat in instant messaging. The goals include not just confidentiality and authentication, but traceable delivery and closedness (only group members can invite a new one). The threat model explicitly includes the server, which can break traceable delivery in WhatsApp and Signal (so messages can be dropped undetectably), and can also break closedness in WhatsApp (whose server can add itself to any group). Advanced goals concern an attacker getting access to members’ old messages and their keys: forward secrecy, and post-compromise security (which Signal reaches using ratcheting, although it doesn’t do closedness properly and is vulnerable to the compromise of any group member; the ratcheting needs to be done for the group rather than pairwise). Ordering is hard, as both Signal and WhatsApp deliver messages on receipt, and the WhatsApp server can re-order the last 2000 messages. Re-ordering messages can give rise to real-world attacks, but notifying users of out-of-order receipt is very hard from the user-engineering viewpoint, and is an open question. This work was initially presented at Real World Crypto; since then they’ve found an exploit on WhatsApp’s group invite links. In short, it’s harder than it looks to secure dynamic groups in asynchronous networks. Both the theory and the practice need some work; the theory doesn’t all quite line up. Paul proposes better definitions in the paper, as well as practical improvements, but does not have a provably-secure design to link them up yet.
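
    For intuition about the ratcheting mentioned above, here is a minimal symmetric hash-ratchet sketch (an illustration of the general idea only, not Signal’s double ratchet or the paper’s group constructions):

    ```python
    # Minimal symmetric ratchet: each message key is derived from the chain
    # state, which is then overwritten -- compromising the current state
    # does not reveal earlier message keys (forward secrecy).
    import hashlib

    def ratchet(state: bytes):
        message_key = hashlib.sha256(state + b'msg').digest()
        next_state = hashlib.sha256(state + b'chain').digest()
        return message_key, next_state

    state = b'shared-root-key-from-key-exchange'
    for i in range(3):
        mk, state = ratchet(state)
        print(i, mk.hex()[:16])
    ```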

    Next, Mathieu Turuani presented a formal analysis of the Neuchâtel e-voting protocol, which is actually used in Switzerland. In such systems you have to trust either the client (e.g. Helios) or the server (e.g. Scytl, the basis of this system). The voters are mailed credentials with a four-digit code for each candidate and two six-digit codes to confirm and finalise the vote. Your client software constructs a zero-knowledge proof of your votes and sends it to the ballot server; you get back the four-digit codes to check, and you then confirm if they’re right. (If not, you vote by hand or re-run with new credentials.) Mathieu’s task was to formally verify a number of properties, such as the impossibility of recording two different ballots for the same voter. It involved extending the ProVerif tool to do stateful analysis. This was also the first use of ProVerif on an industrial e-voting protocol, and it may now be used for proving other protocols used in Swiss cantons.

    Marjan Skrobot has been working on game-based password authenticated key exchange. PAKEs have been around for 25 years but only started being used recently because of patent issues. Protocols like J-PAKE are mostly analysed using games, but have unknown composition properties. He has been exploring models in which the composition of a front-end PAKE and a back-end symmetric-key protocol might be explored. There’s an existing result by Brzuska et al where a public session-matching algorithm exists; Marjan extends this to the case of a RoR forward-secure PAKE and a SYM-secure SKP.

  12. Ziyun Zhu is mining threat-intelligence reports to figure out the semantics of malware campaigns. These reports are extremely diverse and usually in natural language, so he’s interested in extracting concrete indicators and semantic roles automatically. The tool he’s built is called ChainSmith, and he uses a shortened, four-phase version of the kill-chain model. He worked out methods of parsing, and built a training set of over 14k messages to train a classifier, which he tested against a database of millions. He found 26 campaigns starting from social engineering (e.g. about your tax return or social media account). He found that the missing codec is the best bait, and fake antivirus the worst.

    Ke Xu is using deep learning to spot Android malware. The choke point in malware analysis used to be manual feature extraction; her innovation, DeepRefiner, does feature extraction automatically from the bytecode and the XML. Hidden layers do automatic feature engineering, and a softmax layer figures out which feature combinations are benign or malicious. Skip-gram modelling produces tables of suspicious code constructs, and a long short-term memory (LSTM) classifier spots them. The system takes two seconds on a typical Android platform to vet a typical app.

    Erwin Quiring has been exploring links between steganography and adversarial machine learning. The image-forensics community has many techniques to spot altered images of the kind used in AML attacks. Can one say more? Well, the black-box attack threat is rather similar in both cases; the attacker is trying to cross a decision boundary with small changes that are imperceptible to human observers. He has three case studies. In the first, he used a substitute learning model to detect and remove image watermarks: a neural network figures out where a Broken Arrows watermark might reasonably be or not be, and removes it without using any detailed knowledge of how the scheme works. In the second, he uses a 1.5-class model to harden watermarking schemes. In the third, he uses closeness-to-the-boundary defences from watermarking against model-stealing attacks. In questions I asked whether it might be useful to repurpose StirMark from a watermarking test suite into a test suite for the machine-learning community; Erwin thought this might be worth trying.

    Finally, the best paper awards of EuroS&P were given to Mario Werner and Amit Vasudevan.

  13. Thanks for writing this up. Fun way to spend my first morning of summer break. Having something like this sure beats navigating a conference website (or proceedings) and skimming abstracts.
