Three NGOs have lodged a formal complaint with the Information Commissioner about PA Consulting's upload of over a decade of UK hospital records to a US-based cloud service. This appears to involve serious breaches of the UK Data Protection Act 1998 and of multiple NHS regulations on the security of personal health information. It has already caused a row in Parliament, and the Department of Health seems to be trying to wriggle off the hook by pretending that the data were pseudonymised. Other EU countries have banned such uploads. Regular LBT readers will know that the Department of Health has got itself into a complete mess over medical record privacy.
Posts filed under 'Security psychology'
I will be trying to liveblog Financial Cryptography 2014. I just gave a keynote talk entitled “EMV – Why Payment Systems Fail”, summarising our last decade’s research on what goes wrong with Chip and PIN. There will be a paper on this out in a few months; meanwhile here are the slides and here’s our page of papers on bank security.
The sessions of refereed papers will be blogged in comments to this post.
The next three weeks will see a leaflet drop on over 20 million households. NHS England plans to start uploading your GP records in March or April to a central system, from which they will be sold to a wide range of medical and other research organisations. European data-protection and human-rights laws demand that we be able to opt out of such things, so the Information Commissioner has told the NHS to inform you of your right to opt out.
Needless to say, their official leaflet is designed to cause as few people to opt out as possible. It should really have been drafted like this. (There’s a copy of the official leaflet at the MedConfidential.org website.) But even if it had been, the process still won’t meet the consent requirements of human-rights law as it won’t be sent to every patient. One of your housemates could throw it away as junk before you see it, and if you’ve opted out of junk mail you won’t get a leaflet at all.
Yet if you don’t opt out in the next few weeks your data will be uploaded to central systems and you will not be able to get it deleted, ever. If you don’t opt your kids out in the next few weeks, the same will happen to their data, and they will not be able to get it deleted even if they decide they prefer privacy once they come of age. If you opted out of the Summary Care Record in 2009, that doesn’t count; despite a ministerial assurance to the contrary, you now need to opt out all over again. For further information see the website of GP Neil Bhatia (who drafted our more truthful leaflet) and previous LBT posts on medical privacy.
David Modic and I have just published a paper on The psychology of malware warnings. We’re constantly bombarded with warnings designed to cover someone else’s back, but what sort of text should we put in a warning if we actually want the user to pay attention to it?
To our surprise, social cues didn’t seem to work. What works best is to make the warning concrete; people ignore general warnings such as that a web page “might harm your computer” but do pay attention to a specific one such as that the page would “try to infect your computer with malware designed to steal your bank account and credit card details in order to defraud you”. There is also some effect from appeals to authority: people who trust their browser vendor will avoid a page “reported and confirmed by our security team to contain malware”.
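The finding above — concrete warnings outperform vague ones — can be illustrated with a small sketch. This is not code from the paper; it is a hypothetical helper showing how a browser might assemble a concrete warning from known threat details, falling back to a vague one only when it knows nothing specific:

```python
def build_warning(payload=None, goal=None):
    """Compose a malware warning, preferring concrete detail.

    payload: what the page would install (e.g. "malware"), or None
    goal: what the attacker aims to achieve, or None
    These parameter names are illustrative, not from the study.
    """
    if payload and goal:
        # Concrete warning: names the mechanism and the consequence,
        # the style the study found users actually heed.
        return (f"This page would try to infect your computer with {payload} "
                f"designed to {goal}.")
    # Vague fallback: the style the study found users mostly ignore.
    return "Warning: this page might harm your computer."


concrete = build_warning(
    payload="malware",
    goal="steal your bank account and credit card details in order to defraud you",
)
vague = build_warning()
```

A deployment following the study’s results would treat the vague fallback as a last resort, since only the concrete form measurably changed behaviour.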
We also analysed who turned off browser warnings, or would have if they’d known how: they were people who ignored warnings anyway, typically men who distrusted authority and either couldn’t understand the warnings or were IT experts.
We had a crypto festival in London in November at which a number of cryptographers and crypto policy folks got together with over 1000 mostly young attendees to talk about what might be done in response to the Snowden revelations.
Here is a video of the session in which I spoke. The first speaker was Annie Machon (at 02.35) talking of her experience of life on the run from MI5, and on what we might do to protect journalists’ sources in the future. I’m at 23.55 talking about what’s changed for governments, corporates, researchers and others. Nick Pickles of Big Brother Watch follows at 45.45 talking on what can be done in terms of practical politics; it turned out that only two of us in the auditorium had met our MPs over the Comms Data Bill. The final speaker, Smari McCarthy, comes on at 56.45, calling for lots more encryption. The audience discussion starts at 1:12:00.
Britain has just been hit by a storm; two people have been killed by falling trees, and one swept out to sea. The rail network is in chaos and over 100,000 homes have lost electric power. What can security engineering teach us about such events?
Risk communication could be very much better. The storm had been forecast for several days but the instructions and advice from authority have almost all been framed in vague and general terms. Our research on browser warnings shows that people mostly ignore vague warnings (“Warning – visiting this web site may harm your computer!”) but pay much more attention to concrete ones (such as “The site you are about to visit has been confirmed to contain software that poses a significant risk to you, with no tangible benefit. It would try to infect your computer with malware designed to steal your bank account and credit card details in order to defraud you”). In fact, making warnings more concrete is the only thing that works here – nudge favourites such as appealing to social norms, or authority, or even putting a cartoon face on the page to activate social cognition, don’t seem to have a significant effect in this context.
So how should the Met Office and the emergency services deal with the next storm?
We have a vacancy for a postdoc to work on the economics of cybercrime for two years from January. It might suit someone with a PhD in economics or criminology and an interest in online crime; or a PhD in computer science with an interest in security and economics.
Security economics has grown rapidly in the last decade; security in global systems is usually an equilibrium that emerges from the selfish actions of many independent actors, and security failures often follow from perverse incentives. To understand better what works and what doesn’t, we need both theoretical models and empirical data. We have access to various large-scale sources of data relating to cybercrime – email spam, malware samples, DNS traffic, phishing URL feeds – and some or all of this data could be used in this research. We’re very open-minded about what work might be done on this project; possible topics include victim analysis, malware analysis, spam data mining, data visualisation, measuring attacks, how security scales (or fails to), and how cybercrime data could be shared better.
I’m at SOUPS 2013, including the Workshop on Risk Perception in IT Security and Privacy, and will be liveblogging them in followups to this post.
We have a vacancy for a postdoc to work on the psychology of cybercrime and deception for two years from October. It might suit someone with a PhD in psychology or behavioural economics with a specialisation in deception, fraud or online crime; or a PhD in computer science with a strong interest in psychology, usability and security.