Category Archives: Academic papers

Technology assisted deception detection (HICSS symposium)

The annual symposium “Credibility Assessment and Information Quality in Government and Business” was held this year on 5 and 6 January as part of the Hawaii International Conference on System Sciences (HICSS). The symposium on technology-assisted deception detection was organised by Matthew Jensen, Thomas Meservy, Judee Burgoon and Jay Nunamaker. During this symposium we presented our paper “To freeze or not to freeze”, which was posted on this blog last week, together with a second paper on “mining bodily cues to deception” by Dr. Ronald Poppe. The talks were of very high quality, and researchers described a wide variety of techniques and methods to detect deceit, including mouse clicks to detect online fraud, language use on social media and in fraudulent academic papers, and the very impressive avatar that can screen passengers going through airport border control. I have summarized the presentations for you; enjoy!

 Monday 05-01-2015, 09.00-09.05

Introduction Symposium by Judee Burgoon

This symposium is organized annually as part of the HICSS conference and functions as a platform for presenting research on the use of technology to detect deceit. Burgoon started off by describing the different types of research conducted within the Center for the Management of Information (CMI), which she directs, and within the National Center for Border Security and Immigration. Within these centers, members aim to detect deception on a multi-modal scale using different types of technology and sensors. Their deception research includes physiological measures such as respiration and heart rate, kinetics (i.e., bodily movement), eye movements such as pupil dilation, saccades, fixations, gaze and blinking, and research on timing, which is of particular interest for online deception. Burgoon’s team is currently working on the development of an Avatar (DHS-sponsored): a system with different types of sensors that work together for screening purposes (e.g., border control; see the abstracts below for more information). The Avatar is currently being tested at Reagan Airport. Its sensors include a force platform, a Kinect, HD and thermal cameras, oculometric cameras for eye tracking, and a microphone for Natural Language Processing (NLP) purposes. Burgoon also works with the European border management organization Frontex.

To freeze or not to freeze

We think we may have discovered a better polygraph.

Telling truth from lies is an ancient problem; some psychologists believe that it helped drive the evolution of intelligence, as hominids who were better at cheating, or detecting cheating by others, left more offspring. Yet despite thousands of years of practice, most people are pretty bad at lie detection, and can tell lies from truth only about 55% of the time – not much better than random.

Since the 1920s, law enforcement and intelligence agencies have used the polygraph, which measures the physiological stresses that result from anxiety. This is slightly better, but not much; a skilled examiner may be able to tell truth from lies 60% of the time. However, it is easy for an examiner who has a preconceived view of the suspect’s innocence or guilt to use a polygraph as a prop to help find supporting “evidence” by intimidating them. Other technologies, from EEG to fMRI, have been tried, and the best that can be said is that it’s a complicated subject. The last resort of the desperate or incompetent is torture, where the interviewee will tell the interviewer whatever he wants to hear in order to stop the pain. The recent Feinstein committee inquiry into the use of torture by the CIA found that it was not just a stain on America’s values but also ineffective.

Sophie van der Zee decided to see if datamining people’s body movements might help. She put 90 pairs of volunteers in motion capture suits and got them to interview each other; half the interviewees were told to lie. Her first analysis of the data was to see whether you could detect deception from mimicry (you can, but it’s not much better than the conventional polygraph) and to debug the technology.

After she joined us in Cambridge we had another look at the data, and tried analysing it using a number of techniques, some suggested by Ronald Poppe. We found that total body motion was a reliable indicator of guilt, and works about 75% of the time. Put simply, guilty people fidget more; and this turns out to be fairly independent of cultural background, cognitive load and anxiety – the factors that confound most other deception detection technologies. We believe we can improve that to over 80% by analysing individual limb data, and also using effective questioning techniques (as our method detects truth slightly more dependably than lies).
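To make the measure concrete: total body motion can be computed by summing how far each joint moves between consecutive motion-capture frames. Here is a minimal Python sketch of that idea; the array layout and the decision threshold are illustrative assumptions, not the actual features or classifier from our paper.

```python
import numpy as np

def total_body_motion(frames: np.ndarray) -> float:
    """Sum of frame-to-frame joint displacements.

    frames: array of shape (n_frames, n_joints, 3) holding the x, y, z
    positions recorded by the motion-capture suit (illustrative layout).
    """
    deltas = np.diff(frames, axis=0)            # per-joint displacement between frames
    distances = np.linalg.norm(deltas, axis=2)  # Euclidean distance moved by each joint
    return float(distances.sum())               # total movement over the interview

def classify(frames: np.ndarray, threshold: float) -> str:
    """Toy decision rule: more overall fidgeting -> flag as deceptive.

    The threshold would have to be calibrated on labelled interviews;
    it is entirely hypothetical here.
    """
    return "deceptive" if total_body_motion(frames) > threshold else "truthful"
```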

Our paper is appearing at HICSS, the traditional venue for deception-detection technology. Our task for 2015 will be to redevelop this for low-cost commodity hardware and test it in a variety of environments. Of course, a guilty man can always just freeze, but that will rather give the game away; we suspect it might be quite hard to fidget deliberately at exactly the same level as you do when you’re not feeling guilty. (See also press coverage.)

Systemization of Pluggable Transports for Censorship Resistance

An increasing number of countries implement Internet censorship at different levels and for a variety of reasons. Consequently, there is an ongoing arms race where censorship resistance schemes (CRS) seek to enable unfettered user access to Internet resources while censors come up with new ways to restrict access. In particular, the link between the censored client and the entry point to the CRS has been a censorship flash point, and consequently the focus of circumvention tools. To foster interoperability and speed up development, Tor introduced Pluggable Transports — a framework for flexibly implementing schemes that transform traffic flows between the Tor client and the bridge such that a censor fails to block them. Dozens of tools and proposals for pluggable transports have emerged over the last few years, each addressing specific censorship scenarios. As a result, the area has become so complex that it is hard to discern the big picture.

Our recent report takes away some of this complexity by presenting a model of censor capabilities and an evaluation stack that offers a layered approach to evaluating pluggable transports. We survey 34 existing pluggable transports and highlight how poorly they lend themselves to sharing features for broader defense coverage. This evaluation has led us to a new design for pluggable transports – the Tweakable Transport: a tool for efficiently building and evaluating a wide range of Pluggable Transports, so as to increase the difficulty and cost of reliably censoring the communication channel.
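To give a flavour of what a pluggable transport does, here is a toy Python sketch: a pair of wrap/unwrap transforms applied to the byte stream between client and bridge. Real transports such as obfs4 use proper cryptography, padding and timing obfuscation; the XOR scrambling below is purely illustrative and provides no security.

```python
import os

class ToyTransport:
    """Toy transform on the client-bridge byte stream (illustration only)."""

    def __init__(self, psk: bytes):
        self.psk = psk  # pre-shared key agreed out of band

    def _xor(self, data: bytes) -> bytes:
        # Symmetric scramble: applying it twice restores the original bytes.
        return bytes(b ^ self.psk[i % len(self.psk)] for i, b in enumerate(data))

    def wrap(self, plaintext: bytes) -> bytes:      # applied before sending
        return self._xor(plaintext)

    def unwrap(self, obfuscated: bytes) -> bytes:   # applied on receipt
        return self._xor(obfuscated)

transport = ToyTransport(os.urandom(32))
assert transport.unwrap(transport.wrap(b"GET / HTTP/1.1")) == b"GET / HTTP/1.1"
```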


Our Christmas message for troublemakers: how to do anonymity in the real world

On the 5th of December I gave a talk at a journalists’ conference on what tradecraft means in the post-Snowden world. How can a journalist, or for that matter an MP or an academic, protect a whistleblower from being identified even when MI5 and GCHQ start trying to figure out who in Whitehall you’ve been talking to? The video of my talk is now online here. There is also a TV interview I did later, which can be found here, while the other conference talks are here.

Enjoy!

Ross

Why password managers (sometimes) fail

We are asked to remember far too many passwords. This problem is most acute on the web. And thus, unsurprisingly, it is on the web that technical solutions have had most success in replacing users’ ad hoc coping strategies. One of the longest established and most widely adopted technical solutions is a password manager: software that remembers passwords and submits them on the user’s behalf. But this isn’t as straightforward as it sounds. In our recent work on bootstrapping adoption of the Pico system [1], we’ve come to appreciate just how hard life is for developers and maintainers of password managers.

In a paper we are about to present at the Passwords 2014 conference in Trondheim, we introduce our proposal for Password Manager Friendly (PMF) semantics [2]. PMF semantics are designed to give developers and maintainers of password managers a bit of a break and, more importantly, to improve the user experience.


WEIS 2015 call for papers

The 2015 Workshop on the Economics of Information Security will be held in Delft, the Netherlands, on 22-23 June 2015. Paper submissions are due by 27 February 2015. Selected papers will be invited for publication in a special issue of the Journal of Cybersecurity, a new interdisciplinary open-access journal published by Oxford University Press.

We hope to see lots of you in Delft!

Pico part II: What’s wrong with QR code password replacement schemes, and how to fix them!

Users don’t want to authenticate; they want to do useful or enjoyable things like sending emails, ordering groceries or playing games. To alleviate the burden of having to type passwords, Pico and several other schemes, such as SQRL and tiQR, let the user simply scan a QR code; a cryptographic protocol then authenticates the user behind the scenes and initiates a session. But users, unless they are on the move, may prefer to run their email or web browsing sessions on their full-size computer rather than on their smartphone, whose user interface is relatively limited. They therefore don’t want an authenticated session between their smartphone and the website but between their computer and the website, even if it’s the smartphone that scans the QR code.

In the original 2011 Pico paper (footnote 37), the website kept track of which “page impression” from a web browser was related to which Pico authentication by including a nonce in each login page QR code and having the Pico sign and return it as part of the authentication. Since then, within the Pico team, there has been much discussion of the so-called Page Impression Nonce or PIN, infamous both for the attacks it enables and its unfortunate, overloaded acronym. While other schemes may have called it something different, or not called it anything at all, it was always present in one form or another because they all used it to solve this same problem of linking browser sessions to authentications.

For example, in the SQRL system each QR code contains a URL, part of which is a random nonce (the PIN in this system). The SQRL app must sign and return this URL, thus associating the nonce with the app’s per-verifier public key. The web browser then starts its session by making another request which includes the URL (and thus the PIN) and gets back a session cookie.
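Schematically, this nonce-based linking works something like the following Python sketch; the names, URLs and message formats are ours for illustration and are not SQRL’s actual wire protocol.

```python
import secrets

pending = {}   # nonce -> authenticated public key (None until the app signs)
sessions = {}  # session cookie -> public key

def issue_login_qr() -> str:
    """Embed a fresh nonce (the PIN) in the login page's QR code URL."""
    nonce = secrets.token_urlsafe(16)
    pending[nonce] = None
    return f"https://example.org/login?nonce={nonce}"

def app_authenticates(nonce: str, user_pubkey: str) -> None:
    """The phone app signs the QR URL; the server links nonce -> user's key.
    (Signature verification is elided; assume it succeeds.)"""
    pending[nonce] = user_pubkey

def browser_redeems(nonce: str):
    """The browser presents the same nonce and receives a session cookie.
    Whoever presents the nonce gets the session -- the weakness discussed below."""
    pubkey = pending.get(nonce)
    if pubkey is None:
        return None
    cookie = secrets.token_urlsafe(32)
    sessions[cookie] = pubkey
    return cookie
```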

So what’s the problem?

The problem with this kind of mechanism is that anyone else who learns the PIN can also make that second request, thus logging themselves in as the user who scanned the QR code. For example, a bad guy can obtain a QR code and its PIN from the login page of bank.com and display it somewhere, like the login page of randomgameforum.com, for a victim to scan. Now, assuming the victim had an account at bank.com, the attacker obtains a bank.com session that the victim unsuspectingly initiated with their smartphone.

Part of the problem is that QR codes are not human-readable. Some have suggested that a simple confirmation step (“Do you really want to login to bank.com?”) might prevent such attacks, but we decided this wasn’t really good enough from a security or a usability perspective. We don’t want users to have to read the confirmation dialog and press the OK button every time they authenticate, and realistically they won’t, especially if they never normally do anything other than press OK.

Moreover, the confirmation step doesn’t help at all when the relaying of the QR code is combined with traditional phishing techniques. Consider receiving this email:

From: security@bank.com
To: victim@example.com
Subject: Urgent: Account security threat
---
Dear Customer

<compelling phishing mumbo jumbo>

To keep your account secure, please scan this QR code:

<login QR code with PIN known by the sender>

Kind regards,

Account security department

and if you oblige:

Do you really want to login to bank.com?

Now the poor user thinks “Well yes, I do, that’s exactly what the account security team asked me to do” and even worse: “I’m definitely not being phished, I remember what those security people kept telling me about checking the address of the website before logging in”.

How to fix it

The solution we came up with is called session delegation. Instead of having a nonce in each QR code, which anyone can later trade in for an authenticated session, we have the website return a session delegation token to the Pico (not the web browser) as part of the authentication protocol. The Pico may then delegate the session to the browser on the bigger computer by sending it this token over a secure channel. (For further details see section 4.1 of our “lousy phish” paper.) The price to pay for this strategy is that it requires a channel from the Pico to the browser, which is much harder to provide than the one in the opposite direction (the visual “QR code” channel).
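In outline, the delegation flow looks something like this Python sketch; the function names and token handling are ours for illustration, and the real protocol is specified in the lousy phish paper.

```python
import secrets

delegations = {}  # delegation token -> account (held by the website)

def authenticate_pico(account: str) -> str:
    """At the end of the Pico-website authentication protocol, the website
    hands a fresh delegation token to the Pico, not to the browser."""
    token = secrets.token_urlsafe(32)
    delegations[token] = account
    return token

def pico_delegates(token: str, send_to_browser) -> None:
    """The Pico forwards the token to the user's own browser over a channel
    it trusts (Bluetooth, the rendezvous point, or manual transcription)."""
    send_to_browser(token)

def browser_redeems(token: str):
    """Only a browser that received the token from the Pico can start the
    session; a relayed QR code on its own is no longer enough."""
    account = delegations.pop(token, None)
    return None if account is None else f"session-cookie-for-{account}"
```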

We made a prototype which used Bluetooth for the delegation channel but, because Bluetooth was sometimes difficult to set up and not universally available, we even thought about using an audio cable plugged into the microphone jack of the computer. However, we were still worried about the availability and usability of these hardware-based solutions. We did a lot of research into NAT and firewall traversal techniques (such as STUN and TURN) to see if we could use peer-to-peer IP connectivity, but this is not possible in all cases without a separate signalling channel. In our latest prototype we’re using a “rendezvous point”, which is a very simple relay server we’ve designed, running in the public Internet. The rendezvous point is the most universal and usable solution, but does come with some privacy concerns, namely that the untrusted rendezvous server gets to see the Pico/computer IP address pairs which are communicating. So we still allow privacy-conscious users to adopt less convenient alternatives if they’re willing to pay the price of setting up Bluetooth, connecting cables or changing their firewall/NAT settings, but we don’t impose that cost on everyone.
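For the curious, the rendezvous idea can be illustrated with a toy relay: both endpoints connect to a public server, name the channel they want to join, and the server blindly copies bytes between the two matching connections. This Python sketch is our own illustration rather than the actual Pico rendezvous protocol, and it omits the end-to-end encryption between Pico and computer that keeps the relay untrusted.

```python
import asyncio

waiting = {}  # channel id -> (reader, writer) of the first peer to arrive

async def pump(src, dst):
    """Copy bytes one way until the source closes."""
    while data := await src.read(4096):
        dst.write(data)
        await dst.drain()
    dst.close()

async def handle(reader, writer):
    # The first line each peer sends names the channel it wants to join.
    channel = (await reader.readline()).strip()
    if channel not in waiting:
        waiting[channel] = (reader, writer)  # park until the other peer arrives
        return
    peer_reader, peer_writer = waiting.pop(channel)
    # Relay blindly in both directions; with end-to-end encryption the relay
    # learns only which IP address pairs are communicating.
    await asyncio.gather(pump(reader, peer_writer), pump(peer_reader, writer))

async def main():
    server = await asyncio.start_server(handle, "0.0.0.0", 8888)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```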

The drawback of this approach is that the user’s computer requires some Pico software to receive the delegation tokens, via the rendezvous point or whatever other channel. Having to install this software hurts the “deployability” of the system as a whole and could render it completely useless in situations where installing new software is not possible. But another innovation, making the delegation token take the form of a URL, means there is always a last-resort fallback channel: manual transcription. If a Pico user can’t install the software on, or doesn’t want to trust, a particular computer, they can always still retype the token URL. There are other security concerns related to having URLs that will log your browser into someone else’s account, but you’ll have to read the lousy phish paper for a more detailed discussion of this topic.

There is clearly much interest in finding a replacement for passwords and several schemes (such as US 8261089 B2, Snap2Pass, tiQR, US 20130219479 A1, QRAuth, SQRL) propose using QR codes. But upon close inspection, all of the above use a page impression nonce, making them vulnerable to session hijacking attacks. We rejected the idea that this could be solved simply by getting the user to carry out more checks and instead we propose an architectural fix which provides a more secure basis for the design of Pico.

For more information about Pico, have a look at our website, sign up to our mailing list and stay tuned for more Pico-related posts on Light Blue Touchpaper in the near future.

Pico part I: Russian hackers stole a billion passwords? True or not, with Pico you wouldn’t worry about it.

In last week’s news (August 2014) we heard that Russian hackers stole 1.2 billion passwords. Even though such claims sound somewhat exaggerated, and do not appear to be matched by a proportional amount of fraudulent access to user accounts, password compromise is always a pain for the web sites involved—more so when it causes direct reputation damage by having the company name plastered on the front page of the Financial Times, as happened to eBay on 22 May 2014 after they lost the passwords of over 100 million users to cybercriminals. Shortly before that, in April 2014, it was the Heartbleed bug that forced password resets on allegedly 66% of all websites. And last year, in November 2013, it was Adobe that lost the passwords of 150 million users. Keep going back and you’ll find many more incidents. With alarming frequency we hear of some major security exploit that compromises an enormous number of passwords and embarrasses web sites into asking their users to pick a new password.

Note the irony: despite the complaints from some arrogant security experts that users are too lazy or too dumb to pick strong passwords, when such attacks take place, all users must change their passwords, not just those with a weak one. Even the diligent users who went to the trouble of following complicated instructions and memorizing “avKpt9cpGwdp”, not to mention typing it every day, are punished, for a sin they didn’t commit (the insecurity of the web site) just as much as the allegedly lazy ones who picked “p@ssw0rd” or “1234”. This is fundamentally unfair.

My team has been working on Pico, an ambitious project to replace passwords with a fairer system that does not require remembering secrets. The primary goal of Pico is to be easier to use than remembering a bunch of PINs and passwords; but, incidentally, it’s also meant to be much more secure. On that note, because Pico uses public key cryptography, if a Pico-based web site is compromised then its users do not need to change their login credentials. The attackers can only steal the users’ public keys, not their private keys, and are therefore unable to impersonate them, either at that site or anywhere else (quite apart from the fact that, to protect your privacy, your Pico uses a different key pair for every one of your accounts). This alone, even aside from any usability improvements, should be a good enough reason for web sites to convert to Pico.
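To see why a breach leaks nothing usable, consider this sketch of challenge-response login with one key pair per account; the library (Python’s cryptography package) and message layout are our choices for illustration, not Pico’s actual protocol. The point is that the verifier’s database holds only public keys, which cannot be used to impersonate anyone.

```python
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey)
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

# Token side: an independent key pair per account, so accounts at
# different sites cannot be linked to each other.
account_keys = {"alice@bank.example": Ed25519PrivateKey.generate()}

# Server side: the credential database holds public keys only --
# this is all an attacker gets if the server is compromised.
server_db = {
    name: key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
    for name, key in account_keys.items()
}

def login(account: str) -> bool:
    challenge = os.urandom(32)                         # fresh per attempt
    signature = account_keys[account].sign(challenge)  # computed on the token
    verifier = Ed25519PublicKey.from_public_bytes(server_db[account])
    try:
        verifier.verify(signature, challenge)          # server checks the proof
        return True
    except InvalidSignature:
        return False

print(login("alice@bank.example"))  # True
```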

We didn’t blog it then, but a few months ago we produced a short introductory video of our vision for Pico. On the Pico web site, besides that video and others, there are also frequently asked questions and, for those wanting to probe more deeply, a growing collection of technical papers.


This is the first part in a series on the Pico project: my research associates will follow it up with further developments. Pico was recently featured in The Observer and on Sophos’s Naked Security blog, and was scheduled to feature on BBC Radio 4’s PM programme on Tuesday 19 August at 17:00 (eventually broadcast on Thursday 21 August 2014, with a slight cut; currently on iPlayer, starting at 46:28; the full version was broadcast on the BBC World Service and is downloadable, for a while, from the BBC Global News Podcast, starting at 21:37).

Update: the Pico web site now has a page with press coverage.

Largest ever civil government IT disaster

Last year I taught a systems course to students on the university’s Master of Public Policy (MPP) course (this is like an MBA, but for civil servants). For their project work, I divided them into teams of three or four and got them to write a case history of a public-sector IT project that went wrong.

The class prize was won by Oliver Campion-Awwad, Alexander Hayton, Leila Smith and Mark Vuaran for The National Programme for IT in the NHS – A Case History. It’s now online, not just to acknowledge their excellent work and to inspire future MPP students, but also as a resource for people interested in what goes wrong with large public-sector IT projects, and how to do better in future.

Regular readers of this blog will recall a series of posts on this topic and related ones; yet despite the huge losses the government doesn’t seem to have learned much at all.

There is more information on our MPP course here, while my teaching materials are available here. With luck, the next generation of civil servants won’t be quite as clueless.

First Global Deception Conference

Global Deception Conference, Oxford, 17–19 July 2014

Conference introduction

This deception conference, part of Hostility and Violence, was organized by Inter-Disciplinary.Net, which runs about 75 conferences a year and was set up by Rob Fisher in 1999 to facilitate international dialogue between disciplines. Conferences are organized on a range of topics, such as gaming, empathy, cyber cultures, violence, and communication and conflict. It is not just the range of conference topics that is interdisciplinary; each conference is interdisciplinary within itself as well. During our deception conference we approached deception from very different angles: from optical illusions in art and architecture, via literary hoaxes, fiction and spy novels, to the role of the media in creating false beliefs in society, ending with a more experimental approach to detecting deception. Even a magic trick was part of the (informal) program, and somehow I ended up being the magician’s assistant. You can find my notes and abstracts below.

Finally, if you also have an interest in more experimental deception research with high practical applicability, then we have good news. Aldert Vrij, Ross Anderson and I are hosting a deception conference to bring together deception researchers and law enforcement people from all over the world. This event will take place at Cambridge University on August 22-24, 2015.

Session 1 – Hoaxes

John Laurence Busch: Deceit without, deceit within: The British Government’s behavior in the secret race to claim steam-powered superiority at sea. Lord Liverpool became Prime Minister in 1812 and wanted to catch up with the Americans regarding steam-powered boats. The problem, however, was that the Royal Navy did not know how to build such vessels, so in 1820 it joined forces with the British Post, which wanted steam-powered boats to deliver post to Ireland more quickly. The Post was glad the Navy wanted to collaborate, but the Navy was being deceptive: it kept quiet, to the Post, the public and other countries alike, about the fact that it did not know how to build those vessels and was hoping to learn how to build a steam boat from the Post. This succeeded, and, importantly, the Navy managed to mask from the French and the Americans that it was working on steam vessels to catch up with the US. So the Navy was hiding something questionable (military activity) behind something innocent (the post): a deceptive public face.

Catelijne Coopmans & Brian Rappert: Revealing deception and its discontents: Scrutinizing belief and skepticism about the moon landing. The moon landing in the 1960s is a possible case of deception in which the stakes were high and the symbolic value enormous. A 2001 Fox documentary, “Conspiracy Theory: Did We Land on the Moon or Not?”, based its suspicions mainly on photographic and visual evidence, such as shadows where they shouldn’t be, a “C” shape on a stone, a flag moving in a breeze, and pictures with exactly the same background but different foregrounds. In response, several people have explained these inconsistencies (e.g., the “C” was a hair). The present authors focus more on the paradoxes that surround and perhaps even fuel these conspiracy theories, such as disclosure versus non-disclosure, and secrecy that breeds suspicion, like the US government’s secrecy around Area 51. Can you trust and at the same time not trust the visual proof of the moon landing presented by NASA? Although the quality of the pictures was really bad, the framing was really well done. An attempt was made to debunk the conspiracy theory by showing a picture of the flag still standing on the moon. But then, that could be photoshopped…

Discussion: How can you trust a visual image, especially one used to prove something, when we live in a world where technology makes it possible to fake anything to a high standard?