Monthly Archives: October 2013

How to deal with emergencies better

Britain has just been hit by a storm; two people have been killed by falling trees, and one has been swept out to sea. The rail network is in chaos, and over 100,000 homes have lost electric power. What can security engineering teach us about such events?

Risk communication could be very much better. The storm had been forecast for several days but the instructions and advice from authority have almost all been framed in vague and general terms. Our research on browser warnings shows that people mostly ignore vague warnings (“Warning – visiting this web site may harm your computer!”) but pay much more attention to concrete ones (such as “The site you are about to visit has been confirmed to contain software that poses a significant risk to you, with no tangible benefit. It would try to infect your computer with malware designed to steal your bank account and credit card details in order to defraud you”). In fact, making warnings more concrete is the only thing that works here – nudge favourites such as appealing to social norms, or authority, or even putting a cartoon face on the page to activate social cognition, don’t seem to have a significant effect in this context.

So how should the Met Office and the emergency services deal with the next storm?


We’re hiring again

We have a vacancy for a postdoc to work on the economics of cybercrime for two years from January. It might suit someone with a PhD in economics or criminology and an interest in online crime, or someone with a PhD in computer science and an interest in security and economics.

Security economics has grown rapidly in the last decade; security in global systems is usually an equilibrium that emerges from the selfish actions of many independent actors, and security failures often follow from perverse incentives. To understand better what works and what doesn’t, we need both theoretical models and empirical data. We have access to various large-scale sources of data relating to cybercrime – email spam, malware samples, DNS traffic, phishing URL feeds – and some or all of this data could be used in this research. We’re very open-minded about what work might be done on this project; possible topics include victim analysis, malware analysis, spam data mining, data visualisation, measuring attacks, how security scales (or fails to), and how cybercrime data could be shared better.

This is an international project involving colleagues at CMU, SMU and the NCFTA.