Category Archives: Academic papers

Security Protocols 2015

I’m at the 23rd Security Protocols Workshop, whose theme this year is information security in fiction and in fact. Engineering is often inspired by fiction, and vice versa; what might we learn from this?

I will try to liveblog the talks in followups to this post.

Talk in Oxford at 5pm today on the ethics and economics of privacy in a world of Big Data

Today at 5pm I’ll be giving the Bellwether Lecture at the Oxford Internet Institute. My topic is Big Conflicts: the ethics and economics of privacy in a world of Big Data.

I’ll be discussing a recent Nuffield Bioethics Council report of which I was one of the authors. In it, we asked what medical ethics should look like in a world of ‘Big Data’ and pervasive genomics. It will take the law some time to catch up with what’s going on, so how should researchers behave meanwhile so that the people whose data we use don’t get annoyed or surprised, and so that we can defend our actions if challenged? We came up with four principles, which I’ll discuss. I’ll also talk about how they might apply more generally, for example to my own field of security research.

There exists a classical model of the photon after all

Many people assume that quantum mechanics cannot emerge from classical phenomena, because no-one has so far been able to think of a classical model of light that is consistent with Maxwell’s equations and reproduces the Bell test results quantitatively.

Today Robert Brady and I unveil just such a model. It turns out that the solution was almost in plain sight, in James Clerk Maxwell’s 1861 paper On Physical Lines of Force, in which he derived Maxwell’s equations on the assumption that magnetic lines of force were vortices in a fluid. Updating this with modern knowledge of quantised magnetic flux, we show that if you model a flux tube as a phase vortex in an inviscid compressible fluid, then wavepackets sent down this vortex obey Maxwell’s equations to first order; that they can have linear or circular polarisation; and that the correlation measured between the polarisation of two cogenerated wavepackets is exactly the same as is predicted by quantum mechanics and measured in the Bell tests.
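For reference, the claim is that the wavepacket solutions satisfy the free-space Maxwell equations to first order:

```latex
\nabla \cdot \mathbf{E} = 0, \qquad
\nabla \cdot \mathbf{B} = 0, \qquad
\nabla \times \mathbf{E} = -\,\frac{\partial \mathbf{B}}{\partial t}, \qquad
\nabla \times \mathbf{B} = \mu_0 \varepsilon_0 \,\frac{\partial \mathbf{E}}{\partial t}.
```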

This follows work last year in which we explained Yves Couder’s beautiful bouncing-droplet experiments. There, a completely classical system is able to exhibit quantum-mechanical behaviour as the wavefunction ψ appears as a modulation on the driving oscillation, which provides coherence across the system. Similarly, in the phase vortex model, the magnetic field provides the long-range order and the photon is a modulation of it.

We presented this work yesterday at the 2015 Symposium of the Trinity Mathematical Society. Our talk slides are here and there is an audio recording here.

If our sums add up, the consequences could be profound. First, it will explain why quantum computers don’t work, and blow away the security ‘proofs’ for entanglement-based quantum cryptosystems (we already wrote about that here and here). Second, if the fundamental particles are just quasiparticles in a superfluid quantum vacuum, there is real hope that we can eventually work out where all the mysterious constants in the Standard Model come from. And third, there is no longer any reason to believe in multiple universes, or effects that propagate faster than light or backward in time – indeed the whole ‘spooky action at a distance’ to which Einstein took such exception. He believed that action in physics was local and causal, as most people do; our paper shows that the main empirical argument against classical models of reality is unsound.

Media coverage of our “to freeze or not to freeze” paper

On the 5th of January this year we presented a paper on the automatic detection of deception based on full-body movements at HICSS (Hawaii), which we blogged about here at LBT. We measured the movements of truth tellers and liars using full-body motion capture suits and found that liars move more than truth tellers; when combined with interviewing techniques designed to increase the cognitive load of liars, but not of truth tellers, liars moved almost twice as much as truth tellers. These results indicate that absolute movement, when measured automatically, may be a reliable cue to deceit. We are now aiming to find out whether this increase in body movements when lying is stable across situations and people. At the same time, we are developing two lines of technology that will make this method more usable in practice. First, we are building software to analyse behaviour in real time, so that we can analyse behaviour while it is happening (i.e., during the interview) instead of afterwards. Second, we are investigating remote ways to analyse behaviour, so interviewees will not have to wear a body suit when being interviewed. We will keep you updated on new developments.

In the meantime, we received quite a lot of national and international media attention. Here is some TV and radio coverage of our work by Dailymotion, Fox (US), BBC World radio, Zoomin TV (NL), WNL Vandaag de Dag (NL, part 2, starts at 5:20), RTL Boulevard (NL), Radio 2 (NL), BNR (NL) and Radio 538 (NL). Our work was also covered by newspapers, websites and blogs, including the Guardian, the Register, the Telegraph, the Telegraph (incl. polygraph), the Daily Mail, Mail Online, Cambridge News, King’s College Cambridge, Lancaster University, Security Lancaster, Bruce Schneier’s blog, International Business Times, RT, PC World, PC Advisor, Engadget, News Nation, Techie News, ABP Live, TweakTown, Computer World, MyScience, King World News, La Celosia (Spanish), de Morgen (BE), NRC (NL), Algemeen Dagblad (NL), de Volkskrant (NL), KIJK (NL), and RTV Utrecht (NL).

A dubious article for a dubious journal

This morning I received a request to review a manuscript for the “Journal of Internet and Information Systems”. That’s standard for academics — you regularly get requests to do some work for the community for free!

However this was a little out of the ordinary in that the title of the manuscript was “THE ASSESSING CYBER CRIME AND IT IMPACT ON INFORMATION TECHNOLOGY IN NIGERIA” which is not, I feel, particularly grammatical English. I’d expect an editor to have done something about that before I was sent the manuscript…

I stared hard at the email headers (after all, I’d just been sent some .docx files out of the blue) and it seems that the Journals Review Department of academicjournals.org uses Microsoft’s platform for their email (so no smoking gun from a spear-phishing point of view). So I took some appropriate precautions and opened the manuscript file.

It was dreadful … and read like it had been copied from somewhere else and patched together — indeed one page appeared twice! However, closer examination suggested it had been scanned rather than copy-typed.

For example:

The primary maturation of malicious agents attacking information system has changed over time from pride and prestige to financial again.

Some searching will show you that this comes from page 22 of Policing Cyber Crime, written by Petter Gottschalk in 2010 — a book I haven’t read, so I’ve no idea how good it is. Clearly “maturation” should be “motivation”, “system” should be “systems”, and “again” should be “gain”.

Much of the rest of the material (I didn’t spend a long time on it) was from the same source. Since the book is widely available for download in PDF format (though I do wonder how many versions were authorised), it’s pretty odd to have scanned it.

I then looked harder at the Journal itself — which is one of a group of 107 open-access journals. According to this report, they at one time misleadingly indicated an association with Elsevier, although they didn’t do that in the email they sent me.

The journals appear on “Beall’s list”: a compendium of questionable scholarly open-access publishers and journals. That is, publishing your article in one of these venues is likely to make your CV look worse rather than better.

In traditional academic publishing the author gets their paper published for free and libraries pay (quite substantial amounts) to receive the journal, which the library users can then read for free, but the article may not be available to non-library users. The business model of “open-access” is that the author pays for having their paper published, and then it is freely available to everyone. There is now much pressure to ensure that academic work is widely available and so open-access is very much in vogue.

There are lots of entirely legitimate open-access journals with exceedingly high standards — but also some very dubious journals which are perceived as accepting almost anything, just collecting the money to keep the publisher in the style to which they have become accustomed (as an indication of the money involved, the fee charged by the Journal of Internet and Information Systems is $550).

I sent back an email to the Journal saying “Even a journal with your reputation should not accept this item”.

What does puzzle me is why anyone would submit a plagiarised article to an open-access journal with a poor reputation. Paying money to get your ripped-off material published in a dubious journal doesn’t seem to be good tactics for anyone. Perhaps it’s just that the journal wants to list me (enrolling my reputation) as one of their reviewers? Or perhaps I was spear-phished after all? Time will tell!

Can we have medical privacy, cloud computing and genomics all at the same time?

Today sees the publication of a report I helped to write for the Nuffield Bioethics Council on what happens to medical ethics in a world of cloud-based medical records and pervasive genomics.

As the information we gave to our doctors in private to help them treat us is now collected and treated as an industrial raw material, there has been scandal after scandal. From failures of anonymisation through unethical sales to the care.data catastrophe, things just seem to get worse. Where is it all going, and what must a medical data user do to behave ethically?

We put forward four principles. First, respect persons; do not treat their confidential data as if it were coal or bauxite. Second, respect established human-rights and data-protection law, rather than trying to find ways round it. Third, consult people who’ll be affected or who have morally relevant interests. And fourth, tell them what you’ve done – including errors and security breaches.

Our report, “The collection, linking and use of data in biomedical research and health care: ethical issues”, took over a year to write. Our working group came from the medical profession, academics, insurers and drug companies. We had lots of arguments. But it taught us a lot, and we hope it will lead to a more informed debate on some very important issues. And since medicine is the canary in the mine, we hope that the privacy lessons can be of value elsewhere – from consumer data to law enforcement and human rights.

Financial Cryptography 2015

I will be trying to liveblog Financial Cryptography 2015.

The opening keynote was by Gavin Andresen, chief scientist of the Bitcoin Foundation, and his title was “What Satoshi didn’t know.” The main unknown six years ago when bitcoin launched was whether it would bootstrap; Satoshi thought it might be used as a spam filter or a practical hashcash. In reality it was someone buying a couple of pizzas for 10,000 bitcoins. Another unknown when Gavin got involved in 2010 was whether it was legal; if you’d asked the SEC then they might have classified it as a Ponzi scheme, but now their alerts are about bitcoin being used in Ponzi schemes. The third thing was how annoying people can be on the Internet; people will abuse your system for fun if it’s popular. An example was penny flooding, where you send coins back and forth between your sybils all day long. Gavin invented “proof of stake”; in its early form it meant prioritising payers who turn over coins less frequently. The idea was that scarcity plus utility equals value; in addition to the bitcoins themselves, another scarce resource emerges: the old, unspent transaction outputs (UTXOs). Perhaps these could be used for further DoS attack prevention or as a pseudonymous identity anchor.
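The coin-age idea behind this early form of “proof of stake” can be illustrated with a toy calculation. The sketch below is in the spirit of Bitcoin’s old transaction-priority heuristic (value times age of the spent outputs, divided by transaction size); the names and units are illustrative, not Gavin’s exact rule:

```python
def priority(inputs, tx_size_bytes):
    """Coin-age priority: value-weighted age of the outputs being spent.

    `inputs` is a list of (value, age_in_blocks) pairs for the UTXOs a
    transaction spends; names and units here are illustrative.
    """
    coin_age = sum(value * age for value, age in inputs)
    return coin_age / tx_size_bytes

# A payer who lets coins sit outranks a penny-flooder who recycles the
# same coins every block, so flooding traffic can be deprioritised.
patient = priority([(10_000_000, 1_000)], 250)   # coins held ~1000 blocks
flooder = priority([(10_000_000, 1)], 250)       # coins recycled each block
assert patient > flooder
```

The point is that coin age is scarce: a sybil bouncing the same coins back and forth resets its age every block, so its transactions sink to the bottom of the queue.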

It’s not even clear that Satoshi is or was a cryptographer; he used only ECC/ECDSA, hashes and SSL (naively), he didn’t bother compressing public keys, and comments suggest he wasn’t up on the latest crypto research. In addition, the rules for letting transactions into the chain are simple; there’s no subtlety about transaction meaning, which is mixed up with validation and transaction fees; a programming-languages guru would have done things differently. Bitcoin now allows hashes of redemption scripts, so that the script doesn’t have to be disclosed upfront. Another recent innovation is using invertible Bloom lookup tables (IBLTs) to transmit expected differences rather than transmitting all transactions over the network twice. Also, since 2009 we have FHE, NIZKPs and SNARKs from the crypto research folks; the things on which we still need more research include pseudonymous identity, practical privacy, mining scalability, probabilistic transaction checking, and whether we can use streaming algorithms. In questions, Gavin remarked that regulators rather like the idea that there is a public record of all transactions; they might be more negative if it were completely anonymous. In the future, only recent transactions will be universally available; if you want the old stuff you’ll have to store it. Upgrading is hard though; Gavin’s big task this year is to increase the block size. Getting everyone in the world to update their software at once is not trivial. People say: “Why do you have to fix the software? Isn’t bitcoin done?”
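To see why IBLTs help here, consider that two peers’ transaction pools differ only slightly, so it suffices to communicate the symmetric difference. The toy implementation below (illustrative only, not the actual block-relay encoding) shows the core trick: each cell keeps a count, a key sum and a checksum; insertions and deletions cancel, and the few remaining keys can be “peeled” out of a fixed-size table:

```python
import hashlib

def _h(key, salt):
    """Deterministic 64-bit hash of an integer key (illustrative)."""
    digest = hashlib.sha256(f"{salt}:{key}".encode()).digest()
    return int.from_bytes(digest[:8], "big")

class IBLT:
    """Toy invertible Bloom lookup table over integer keys."""

    def __init__(self, m=64, k=3):
        self.m, self.k = m, k
        self.cells = [[0, 0, 0] for _ in range(m)]  # [count, key_sum, hash_sum]

    def _indices(self, key):
        # Distinct cells for this key (a set, so a key never hits a cell twice).
        return {_h(key, i) % self.m for i in range(self.k)}

    def _update(self, key, sign):
        for i in self._indices(key):
            cell = self.cells[i]
            cell[0] += sign
            cell[1] += sign * key
            cell[2] += sign * _h(key, "chk")

    def insert(self, key):
        self._update(key, +1)

    def delete(self, key):
        self._update(key, -1)

    def list_entries(self):
        """Peel 'pure' cells; returns (net-inserted, net-deleted) key sets."""
        added, removed = set(), set()
        progress = True
        while progress:
            progress = False
            for cell in self.cells:
                if cell[0] in (1, -1):
                    key = cell[0] * cell[1]
                    if cell[2] == cell[0] * _h(key, "chk"):
                        (added if cell[0] == 1 else removed).add(key)
                        self._update(key, -cell[0])
                        progress = True
        return added, removed

# One node inserts its transaction ids, then subtracts the peer's; only
# the small symmetric difference remains and can be decoded.
table = IBLT()
for txid in (101, 102, 103, 104):   # our transactions
    table.insert(txid)
for txid in (102, 103, 104, 105):   # the peer's transactions
    table.delete(txid)
assert table.list_entries() == ({101}, {105})
```

The table size depends only on the size of the difference, not on the number of transactions, which is why it beats sending every transaction over the network twice.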

I’ll try to blog the refereed talks in comments to this post.

Technology assisted deception detection (HICSS symposium)

The annual symposium “Credibility Assessment and Information Quality in Government and Business” was this year held on the 5th and 6th of January as part of the “Hawaii International Conference on System Sciences” (HICSS). The symposium on technology-assisted deception detection was organised by Matthew Jensen, Thomas Meservy, Judee Burgoon and Jay Nunamaker. During this symposium we presented our paper “to freeze or not to freeze”, which was posted on this blog last week, together with a second paper on “mining bodily cues to deception” by Dr. Ronald Poppe. The talks were of very high quality, and researchers described a wide variety of techniques and methods to detect deceit, including mouse clicks to detect online fraud, language use on social media and in fraudulent academic papers, and the very impressive avatar that can screen passengers going through airport border control. I have summarized the presentations for you; enjoy!

Monday 05-01-2015, 09.00-09.05

Introduction Symposium by Judee Burgoon

This symposium is organized annually during the HICSS conference and functions as a platform for presenting research on the use of technology to detect deceit. Burgoon started off by describing the different types of research conducted within the Center for the Management of Information (CMI) that she directs, and within the National Center for Border Security and Immigration. Within these centers, members aim to detect deception on a multi-modal scale using different types of technology and sensors. Their deception research includes physiological measures such as respiration and heart rate, kinetics (i.e., bodily movement), eye movements such as pupil dilation, saccades, fixation, gaze and blinking, and research on timing, which is of particular interest for online deception. Burgoon’s team is currently working on the development of an Avatar (DHS-sponsored): a system with different types of sensors that work together for screening purposes (e.g., border control; see abstracts below for more information). The Avatar is currently being tested at Reagan Airport. Sensors include a force platform, Kinect, HD and thermal cameras, oculometric cameras for eye tracking, and a microphone for Natural Language Processing (NLP) purposes. Burgoon works together with the European border management organization Frontex.

To freeze or not to freeze

We think we may have discovered a better polygraph.

Telling truth from lies is an ancient problem; some psychologists believe that it helped drive the evolution of intelligence, as hominids who were better at cheating, or detecting cheating by others, left more offspring. Yet despite thousands of years of practice, most people are pretty bad at lie detection, and can tell lies from truth only about 55% of the time – not much better than random.

Since the 1920s, law enforcement and intelligence agencies have used the polygraph, which measures the physiological stresses that result from anxiety. This is slightly better, but not much; a skilled examiner may be able to tell truth from lies 60% of the time. However, it is easy for an examiner with a preconceived view of the suspect’s innocence or guilt to use a polygraph as a prop, intimidating the suspect into providing supporting “evidence”. Other technologies, from EEG to fMRI, have been tried, and the best that can be said is that it’s a complicated subject. The last resort of the desperate or incompetent is torture, where the interviewee will tell the interviewer whatever he wants to hear in order to stop the pain. The recent Feinstein committee inquiry into the use of torture by the CIA found that it was not just a stain on America’s values but also ineffective.

Sophie van der Zee decided to see if datamining people’s body movements might help. She put 90 pairs of volunteers in motion capture suits and got them to interview each other; half the interviewees were told to lie. Her first analysis of the data was to see whether you could detect deception from mimicry (you can, but it’s not much better than the conventional polygraph) and to debug the technology.
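Mimicry between interviewer and interviewee can be quantified by correlating their movement time series. The sketch below uses a plain Pearson correlation over per-frame movement; this is an illustrative measure, not necessarily the one used in the study:

```python
import math

def pearson(a, b):
    """Pearson correlation between two equal-length movement series."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    sd_a = math.sqrt(sum((x - mean_a) ** 2 for x in a))
    sd_b = math.sqrt(sum((y - mean_b) ** 2 for y in b))
    return cov / (sd_a * sd_b)

# Per-frame movement of the interviewer and a closely mirroring interviewee:
interviewer = [0.2, 0.5, 0.1, 0.9, 0.4]
interviewee = [0.25, 0.55, 0.15, 0.95, 0.45]   # same profile, slight offset
assert pearson(interviewer, interviewee) > 0.99
```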

After she joined us in Cambridge we had another look at the data, and tried analysing it using a number of techniques, some suggested by Ronald Poppe. We found that total body motion was a reliable indicator of guilt, and works about 75% of the time. Put simply, guilty people fidget more; and this turns out to be fairly independent of cultural background, cognitive load and anxiety – the factors that confound most other deception detection technologies. We believe we can improve that to over 80% by analysing individual limb data, and also using effective questioning techniques (as our method detects truth slightly more dependably than lies).
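The total-body-motion cue lends itself to a very simple classifier: sum the distance every joint moves over the interview and compare against a calibrated threshold. The sketch below is a toy illustration; the feature definition and threshold are ours, not the paper’s:

```python
import math

def total_body_motion(frames):
    """Total distance moved by all joints across an interview.

    `frames` is a list of frames, each a list of (x, y, z) joint positions
    from a motion-capture suit.
    """
    total = 0.0
    for prev, cur in zip(frames, frames[1:]):
        for p, q in zip(prev, cur):
            total += math.dist(p, q)
    return total

def classify(frames, threshold):
    """Guess 'liar' when total movement exceeds a calibrated threshold."""
    return "liar" if total_body_motion(frames) > threshold else "truth-teller"

# Toy check: 17 joints jittering ~1 cm per frame versus barely moving.
still   = [[(0.0, 0.0, 0.0)] * 17 for _ in range(100)]
fidgety = [[(0.01 * (t % 2), 0.0, 0.0)] * 17 for t in range(100)]
assert classify(still, threshold=1.0) == "truth-teller"
assert classify(fidgety, threshold=1.0) == "liar"
```

In practice the threshold would be calibrated per setting, and (as the paragraph above notes) per-limb features push accuracy higher than whole-body totals.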

Our paper is appearing at HICSS, the traditional venue for deception-detection technology. Our task for 2015 will be to redevelop this for low-cost commodity hardware and test it in a variety of environments. Of course, a guilty man can always just freeze, but that will rather give the game away; we suspect it might be quite hard to fidget deliberately at exactly the same level as you do when you’re not feeling guilty. (See also press coverage.)

Systemization of Pluggable Transports for Censorship Resistance

An increasing number of countries implement Internet censorship at different levels and for a variety of reasons. Consequently, there is an ongoing arms race in which censorship resistance schemes (CRS) seek to enable unfettered user access to Internet resources while censors come up with new ways to restrict access. In particular, the link between the censored client and the entry point to the CRS has been a censorship flash point, and consequently the focus of circumvention tools. To foster interoperability and speed up development, Tor introduced Pluggable Transports — a framework for flexibly implementing schemes that transform traffic flows between the Tor client and the bridge so that a censor fails to block them. Dozens of tools and proposals for pluggable transports have emerged over the last few years, each addressing specific censorship scenarios. As a result, the area has become too complex to discern a big picture.

Our recent report takes away some of this complexity by presenting a model of censor capabilities, together with an evaluation stack that provides a layered approach to evaluating pluggable transports. We survey 34 existing pluggable transports and highlight how inflexible they are: their features cannot readily be shared to provide broader defence coverage. This evaluation has led to a new design for Pluggable Transports – the Tweakable Transport: a tool for efficiently building and evaluating a wide range of Pluggable Transports, so as to increase the difficulty and cost of reliably censoring the communication channel.
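The layered, composable approach can be sketched as a stack of reversible traffic transforms, one per evaluation layer. The interface and the two toy layers below are purely illustrative (they are not Tor’s pluggable-transport API or the Tweakable Transport design, and the XOR layer gives obfuscation only, no real confidentiality):

```python
class Transform:
    """One pluggable layer: decode(encode(b)) must equal b."""
    def encode(self, data: bytes) -> bytes:
        raise NotImplementedError
    def decode(self, data: bytes) -> bytes:
        raise NotImplementedError

class XorObfuscate(Transform):
    """Mask plaintext byte patterns with a repeating keystream."""
    def __init__(self, key: bytes):
        self.key = key
    def encode(self, data):
        return bytes(b ^ self.key[i % len(self.key)] for i, b in enumerate(data))
    decode = encode   # XOR is its own inverse

class PadToBlock(Transform):
    """Flatten the packet-length side channel by padding to fixed blocks."""
    def __init__(self, block=64):
        self.block = block
    def encode(self, data):
        pad = self.block - (len(data) + 2) % self.block
        return len(data).to_bytes(2, "big") + data + b"\x00" * pad
    def decode(self, data):
        n = int.from_bytes(data[:2], "big")
        return data[2:2 + n]

class Pipeline(Transform):
    """Stack layers so their features compose across the evaluation stack."""
    def __init__(self, layers):
        self.layers = layers
    def encode(self, data):
        for layer in self.layers:
            data = layer.encode(data)
        return data
    def decode(self, data):
        for layer in reversed(self.layers):
            data = layer.decode(data)
        return data

transport = Pipeline([PadToBlock(64), XorObfuscate(b"bridge-secret")])
wire = transport.encode(b"GET /index.html HTTP/1.1")
assert len(wire) % 64 == 0                       # uniform frame sizes
assert transport.decode(wire) == b"GET /index.html HTTP/1.1"
```

Making each defence a separate, reversible layer is what lets features be shared between transports instead of being baked into monolithic tools.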
