Monthly Archives: February 2011

Why the Cabinet Office's £27bn cyber crime cost estimate is meaningless

Today the UK Cabinet Office released a report written by Detica. The report concluded that the annual cost of cyber crime in the UK is £27bn. That is far less than the $1 trillion figure that AT&T’s Ed Amoroso cited in testimony before the US Congress in 2009, but it is still a very large number, approximately 2% of UK GDP. If the total is accurate, then cyber crime is a very serious problem of utmost national importance.

Unfortunately, much of the total cost is based on questionable calculations that are impossible for outsiders to verify. 60% of the total cost is ascribed to intellectual property (IP) theft (i.e., theft of business secrets, not copied music and films) and espionage. The report does describe a methodology for how it arrived at these figures, but several key details are missing. To calculate the IP and espionage losses, the authors first computed measures of each sector’s value to the economy, and then qualitatively assessed how lucrative and how feasible such attacks would be in each sector.

This is where the trouble arises. Based on these assessments, the authors assigned a sector-specific probability of theft, one each for the best, worst and average cases. Unfortunately, these probabilities are not specified in the report, and no detailed rationale is given for how they were assigned. Are they based on surveys of firms that have fallen victim to these particular types of crime? Or are they numbers simply pulled from the air based on the authors’ hunches? It is impossible to tell from the report.
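
To see the structure of the calculation, here is a minimal sketch of the report's apparent method. The sector value and the three probabilities below are entirely invented for illustration, since the report does not disclose the numbers it actually used:

```python
# Purely illustrative reconstruction of the report's apparent method.
# The sector value and probabilities are made up; the report does not
# disclose the actual figures it used.
sector_value = 50e9  # e.g. a hypothetical sector worth £50bn to the economy
p_theft = {"best": 0.005, "average": 0.01, "worst": 0.02}  # assumed probabilities

for case, p in p_theft.items():
    print(f"{case:>7} case: £{sector_value * p / 1e9:.2f}bn lost")
```

The point is that the entire estimate is the product of a measurable quantity (sector value) and an unmeasured one (probability of theft), so the undisclosed probabilities drive the headline number.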

Measuring password re-use empirically

In the aftermath of Anonymous’ revenge hacking of HBGary over the weekend, some enterprising hackers used one of the stolen credentials and some social engineering to gain root access at rootkit.com, which has consequently been down for the past few days. There isn’t much that is novel about the hack, but the dump of rootkit.com’s SQL databases provides another password dataset for research, albeit one an order of magnitude smaller than the Gawker dataset, with just 81,000 hashed passwords.

More interestingly, because the two hacks occurred in such close succession, we can compare the passwords associated with email addresses registered at both Gawker and rootkit.com. This gives an interesting data point on the widely known problem of password re-use, and the new data suggests a significantly higher re-use rate than the few previously published estimates.
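
As a rough illustration of how such a measurement can be made (this is not the actual analysis pipeline; the variable names are hypothetical, and it assumes both sets of hashes have already been cracked to plaintext), one can join the two leaks on email address and count matching passwords:

```python
# A minimal sketch, assuming both password lists have already been cracked.
# `gawker` and `rootkit` map lower-cased email addresses to recovered
# passwords; the names and example data below are hypothetical.

def reuse_rate(gawker: dict, rootkit: dict) -> float:
    """Fraction of accounts registered at both sites that used the
    identical password at each.  A real analysis must also account for
    Gawker's DES-based crypt() truncating passwords to 8 characters."""
    shared = set(gawker) & set(rootkit)   # emails present in both dumps
    if not shared:
        return 0.0
    reused = sum(1 for e in shared if gawker[e] == rootkit[e])
    return reused / len(shared)

# Example with made-up data:
print(reuse_rate({"a@x.com": "hunter2", "b@x.com": "abc123"},
                 {"a@x.com": "hunter2", "c@x.com": "qwerty"}))  # prints 1.0
```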

JPEG canaries: exposing on-the-fly recompression

Many photo-sharing websites decompress and recompress uploaded images to enforce particular compression parameters; this recompression degrades image quality. Some web proxies also recompress images and videos in transit, to give the impression of a faster connection.

In Towards copy-evident JPEG images (with Markus Kuhn, in Lecture Notes in Informatics), we present an algorithm for imperceptibly marking JPEG images so that recompressed copies show a clearly visible warning message. (Full-page demonstration.)

[Figure: the original marked image, and the same image after recompression showing a visible warning message]

(If you can’t see the message in the recompressed image, make sure your browser is rendering the images without scaling or filtering.)

Richard Clayton originally suggested the idea of trying to create an image which would show a warning when viewed via a recompressing proxy server. Here is a real-world demonstration using the Google WAP proxy.

Our marking technique is inspired by physical security printing, used to produce documents such as banknotes, tickets, academic transcripts and cheques. Photocopied versions will display a warning (e.g. ‘VOID’) or contain obvious distortions, as duplication turns imperceptible high-frequency patterns into more noticeable low-frequency signals.

Our algorithm works by adding a high-frequency pattern to the image with an amplitude carefully selected to cause maximum quantization error on recompression at a chosen target JPEG quality factor. The amplitude is modulated with a covert warning message, so that foreground message blocks experience maximum quantization error in the opposite direction to background message blocks. While the message is invisible in the marked original image, it becomes visible due to clipping in a recompressed copy.
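
As a rough sketch of the core idea (not our actual implementation; the function names, the use of the (7,7) coefficient alone, and the IJG-style quality scaling are illustrative assumptions), the amplitude for each 8×8 block can be placed just above or just below a quantizer decision boundary at the target quality:

```python
import numpy as np

# Standard JPEG luminance quantization table (Annex K of the JPEG spec).
BASE_QTABLE = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99]])

def quant_step(quality, u=7, v=7):
    """Quantization step for DCT coefficient (u, v) at an IJG-style
    quality factor in 1..100."""
    s = 5000 // quality if quality < 50 else 200 - 2 * quality
    table = np.clip((BASE_QTABLE * s + 50) // 100, 1, 255)
    return int(table[u, v])

def marking_pattern(message_mask, target_quality, eps=0.1):
    """Build a high-frequency marking pattern, one 8x8 block per entry of
    the boolean `message_mask`.  Each block's (7,7) DCT coefficient is set
    just above (foreground) or just below (background) the quantizer
    decision boundary q/2, so recompression at `target_quality` rounds
    the two block types in opposite directions."""
    q = quant_step(target_quality)
    i = np.arange(8)
    b7 = np.sqrt(2 / 8) * np.cos((2 * i + 1) * 7 * np.pi / 16)  # unit-norm 1-D basis
    block = np.outer(b7, b7)          # (7,7) DCT basis image, unit norm
    rows, cols = message_mask.shape
    pattern = np.zeros((8 * rows, 8 * cols))
    for r in range(rows):
        for c in range(cols):
            amp = ((0.5 + eps) if message_mask[r, c] else (0.5 - eps)) * q
            pattern[8*r:8*r+8, 8*c:8*c+8] = amp * block
    return pattern  # to be added to the luma channel before the final encode
```

On recompression at the target quality, a foreground coefficient of (0.5 + eps)·q rounds up to q while a background coefficient of (0.5 − eps)·q rounds down to zero, so the two block types decode with very different pattern amplitudes even though they differed only imperceptibly (by 2·eps·q) in the marked original.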

The challenge remains to extend our approach to mark video data, where rate control and adaptive quantization make the copied document’s properties less predictable. The result would be a digital video that would be severely degraded by recompression to a lower quality, making the algorithm useful for digital content protection.