I’m at the seventeenth workshop on the economics of information security, hosted by the University of Innsbruck. I’ll be liveblogging the sessions in followups to this post.
We’re delighted to announce that the new security lectureship we advertised has been offered to Alice Hutchings, and she’s accepted. We had 52 applicants, of whom we shortlisted three for interview.
Alice works in the Cambridge Cybercrime Centre and her background is in criminology. Her publications are here. Her appointment will build on our strengths in research on cybercrime, and will complement and extend our multidisciplinary work in the economics and psychology of security.
I’m at the world’s first conference on ethics in mathematics and will be speaking in half an hour. Here are my slides. I will be describing the course I teach to second-year computer scientists on Economics, Law and Ethics. Courses on ethics are mandatory for computer scientists while economics is mandatory for engineers; my innovation has been to combine them. My experience is that teaching them together adds real value. We can explain coherently why society needs rules via discussions of game theory, and then of network effects, asymmetric information and other market failures typical of the IT industry; we can then discuss the limitations of law and regulation; and this sets the stage for both principled and practical discussions of ethics.
This is the title of a paper that appeared today in PLOS One. It describes a tool we initially developed to assess the gullibility of cybercrime victims, and which we now present as a general-purpose psychometric measure of individual susceptibility to persuasion. An early version was described three years ago here and here. Since then we have developed it significantly and used it in experiments on cybercrime victims, Facebook users and IT security officers.
We investigated the effects on persuasion of a subject’s need for cognition, need for consistency, sensation seeking, self-control, consideration of future consequences, need for uniqueness, risk preferences and social influence. The strongest factor was consideration of future consequences, or “premeditation” for short.
The full STP-II test has 54 items spanning 10 subscales; a shorter version, STP-II-B, has 30 items that measure the first-order factors but omit the second-order constructs for brevity. The scale is here with the B items marked, and here is a live instance of the survey for you to play with. Once you complete it, there’s an on-the-fly interpretation at the end. You don’t have to give your name and we don’t record your IP address.
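For readers who want to experiment with the scale programmatically, here is a minimal sketch of how responses to a Likert-style instrument such as STP-II might be scored into subscale means. The item-to-subscale mapping, the reverse-keyed items and the 7-point response format below are invented placeholders, not the published scale; consult the paper for the real item assignments.

```python
from statistics import mean

LIKERT_MAX = 7  # assumed 7-point response scale (placeholder)

# Toy mapping, invented for illustration: subscale -> (item_id, reverse_keyed).
SUBSCALES = {
    "premeditation": [(1, False), (12, True), (23, False)],
    "sensation_seeking": [(4, False), (15, False), (31, True)],
}

def score(responses):
    """Return the mean item score per subscale, reversing keyed items."""
    out = {}
    for name, items in SUBSCALES.items():
        vals = []
        for item_id, reverse in items:
            r = responses[item_id]
            vals.append(LIKERT_MAX + 1 - r if reverse else r)
        out[name] = mean(vals)
    return out

print(score({1: 6, 12: 2, 23: 5, 4: 3, 15: 4, 31: 7}))
```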
We invite everyone to use our STP-II scale – not just in security contexts, but also in consumer and marketing psychology and anywhere else it might be helpful. Do let us know what you find!
The Economist features face recognition on its front page, reporting that deep neural networks can now tell from your face whether you’re straight or gay, and do so better than humans can. The research they cite is a preprint, available here.
Its authors Kosinski and Wang downloaded thousands of photos from a dating site, ran them through a standard feature-extraction program, then classified gay vs straight using a standard statistical classifier, which they found could tell the men seeking men from the men seeking women. My students pretty well instantly called this out as selection bias; if gay men consider boyish faces to be cuter, then they will upload their most boyish photo. The paper authors suggest their finding may support a theory that sexuality is influenced by fetal testosterone levels, but when you don’t control for such biases your results may say more about social norms than about phenotypes.
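For concreteness, here is a minimal sketch of the kind of two-stage pipeline the paper describes: off-the-shelf feature extraction followed by a standard statistical classifier. It assumes the face embeddings have already been computed by some feature-extraction network; the shapes, labels and data below are synthetic placeholders, not the authors’ code or data, and note that a strong score from such a pipeline would still inherit any selection bias in the photos themselves.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Entirely synthetic stand-ins for precomputed face embeddings and
# self-reported labels scraped from profiles.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 128))   # placeholder 128-d feature vector per photo
y = rng.integers(0, 2, size=1000)  # placeholder binary labels

clf = LogisticRegression(max_iter=1000)
# Cross-validated AUC. A high number here measures only how separable
# the uploaded photos are -- it says nothing about how the photos were
# selected, which is the criticism above.
print(cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())
```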
Quite apart from the scientific value of the research, which is perhaps best assessed by specialists, I’m concerned with the ethics and privacy aspects. I am surprised that the paper doesn’t report having been through ethical review; the authors consider that photos on a dating website are public information and appear to assume that privacy issues simply do not arise.
Yet UK courts decided, in Campbell v Mirror, that privacy could be violated even by photos taken on the public street, and European courts have come to similar conclusions in I v Finland and elsewhere. For example, a Catholic woman is entitled to object to the use of her medical record in research on abortifacients and contraceptives even if the proposed use is fully anonymised and presents no privacy risk whatsoever. The dating site users would be similarly entitled to object to their photos being used in research to which they might have an ethical objection, even if they could not be identified from their photos. There are surely going to be people who object to research in any nature vs nurture debate, especially on a charged topic such as sexuality. And the whole point of the Economist’s coverage is that face-recognition technology is now good enough to work at population scale.
What do LBT readers think?
Last September we spent some time in Nairobi figuring out whether we could make offline phone payments usable. Phone payments have greatly improved the lives of millions of poor people in countries like Kenya and Bangladesh, who previously didn’t have bank accounts at all but who can now send and receive money using their phones. That’s great for the 80% who have mobile phone coverage, but what about the others?
Last year I described how we designed and built a prototype system to support offline payments, with the help of a grant from the Bill and Melinda Gates Foundation, and took it to Africa to test it. Offline payments require both the sender and the receiver to enter some extra digits to ensure that the payer and the payee agree on who’s paying whom how much. We worked as hard as we could to minimise the number of digits and to integrate them into the familiar transaction flow. Would this be good enough?
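To make the idea concrete, here is an illustrative sketch of one way such confirmation digits could be derived: as a few decimal digits of a MAC computed over the transaction details. This is an assumption-laden toy rather than the DigiTally protocol itself; the actual code derivation, key provisioning and message formats are specified in our paper.

```python
import hashlib
import hmac

def confirmation_digits(key, payer, payee, amount, nonce, n_digits=4):
    """Short code both parties can compute and compare by hand."""
    msg = f"{payer}|{payee}|{amount}|{nonce}".encode()
    mac = hmac.new(key, msg, hashlib.sha256).digest()
    # Reduce the MAC to a few decimal digits a human can type.
    code = int.from_bytes(mac[:8], "big") % (10 ** n_digits)
    return f"{code:0{n_digits}d}"

key = b"placeholder-key-shared-by-both-phones"  # invented for illustration
print(confirmation_digits(key, "alice", "bob", 500, nonce=7))
# If payer, payee or amount disagree between the two phones, the
# digits won't match and the transfer should be refused.
```

The design tension is exactly the one described above: the fewer digits the users must enter, the more usable the flow, but the easier it becomes for mismatched or manipulated transactions to collide on the same code.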
Our paper setting out the results was accepted to the Symposium on Usable Privacy and Security (SOUPS), the leading security usability event. This has now started and the paper’s online; the lead author, Khaled Baqer, will be presenting it tomorrow. As we noted last year, the DigiTally pilot was a success. For the data and the detailed analysis, please see our paper:
DigiTally: Piloting Offline Payments for Phones, Khaled Baqer, Ross Anderson, Jeunese Adrienne Payne, Lorna Mutegi, Joseph Sevilla, 13th Symposium on Usable Privacy & Security (SOUPS 2017), pp 131–143
The National Audit Office has found as follows:
“For too long, as a low value but high volume crime, online fraud has been overlooked by government, law enforcement and industry. It is now the most commonly experienced crime in England and Wales and demands an urgent response. While the Department is not solely responsible for reducing and preventing online fraud, it is the only body that can oversee the system and lead change. The launch of the Joint Fraud Taskforce in February 2016 was a positive step, but there is still much work to be done. At this stage it is hard to judge that the response to online fraud is proportionate, efficient or effective.”
Our regular readers will recall that over ten years ago the government got the banks to agree with the police that fraud would be reported to the bank first. This ensured that the police and the government could boast of falling fraud figures, while the banks could direct such fraud investigations as did happen. This was roundly criticised by the Science and Technology Committee (here and here) but the government held firm. Over the succeeding decade, dissident criminologists started pointing out that fraud was not falling, just going online like everything else, and the online stuff was being ignored. Successive governments just didn’t want to know; for most of the period in question the Home Secretary was one Theresa May, who so impressed her party by “cutting crime” even though she’d cut 20,000 police jobs that she got a promotion.
But chickens come home to roost eventually, and over the last two years the Office for National Statistics has been moving to more honest crime figures. The NAO report bears close study by anyone interested in cybercrime, in crime generally, and in how politicians game the crime figures. It makes clear that the Home Office doesn’t know what’s going on (or doesn’t really want to) and hopes that other people (such as banks and the IT industry) will solve the problem.
Government has made one or two token gestures such as setting up Action Fraud, and the NAO piously hopes that the latest such gesture (the Joint Fraud Taskforce) could be beefed up to do some good.
I’m afraid that the NAO’s recommendations are less impressive. Let me give an example. The main online fraud bothering Cambridge University relates to bogus accommodation; about fifty times a year, a new employee or research student turns up to find that the apartment they rented doesn’t exist. This is an organised scam, run by crooks in Germany, that affects students elsewhere in the UK (mostly in London) and is netting £5-10m a year. The cybercrime guy in the Cambridgeshire Constabulary can’t do anything about this as only the National Crime Agency in London is allowed to talk to the German police; but he can’t talk to the NCA directly. He has to go through the Regional Organised Crime Unit in Bedford, who don’t care. The NCA would rather do sexier stuff; they seem to have planned to take over the Serious Fraud Office, as that was in the Conservative manifesto for this year’s election.
Every time we look at why some scam persists, it’s down to the institutional economics – to the way that government and the police forces have arranged their targets, their responsibilities and their reporting lines so as to make problems into somebody else’s problems. The same applies in the private sector; if you complain about fraud on your bank account the bank may simply reply that as their systems are secure, it’s your fault. If they record it at all, it may be as a fraud you attempted to commit against them. And it’s remarkable how high a proportion of people prosecuted under the Computer Misuse Act appear to have annoyed authority, for example by hacking police websites. Why do we civilians not get protected with this level of enthusiasm?
Many people have lobbied for change; LBT readers will recall numerous articles over the last ten years. Which? made a supercomplaint to the Payment Services Regulator, and got the usual bland non-reassurance. Other members of the old establishment were less courteous; the Commissioner of the Met said that fraud was the victims’ fault and GCHQ agreed. Such attitudes hit the poor and minorities the hardest.
The NAO is just as reluctant to engage. At p34 it says of the Home Office “The Department … has to influence partners to take responsibility in the absence of more formal legal or contractual levers.” But we already have the Payment Services Regulations; the FCA explained in response to the Tesco Bank hack that the banks it regulates should make good their customers’ fraud losses. And it has always been the common-law position that in the absence of gross negligence a banker could not debit his customer’s account without the customer’s mandate. What’s lacking is enforcement. Nobody, from the Home Office through the FCA to the NAO, seems to want to face down the banks. Rather than insisting that they obey the law, the Home Office will spend another £500,000 on a publicity campaign, no doubt to tell us that it’s all our fault really.