Monthly Archives: August 2007

The interns of Privila

Long-time readers will recall that I was spammed with an invitation to swap links with the European Human Rights Centre, a plagiarised site that exists to make money out of job listings and Google ads. Well, some more email spam has drawn my attention to something rather different:

From: “Elanor Radaker” <links@bustem.com>
Subject: Wanna Swap Links
Date: Thu, 19 Apr 2007 01:42:37 -0500

Hi,

I’ve been working extremely hard on my friend’s website bustem.com and if you like what we’ve done, a link from <elided> would be greatly appreciated. If you are interested in a link exchange please …

<snip>

Thank you we greatly appreciate the help! If you have any questions please let me know!

Respectfully,

Elanor Radaker

This site, bustem.com, is not quite what the email claims. However, it is not plagiarised. Far from it: the content has been written to order for Privila Inc by members of a small army of unpaid interns… and when one starts looking, there are literally hundreds of similar sites.

Econometrics of wickedness

Last Thursday I gave a tech talk at Google; you can now watch it online. It’s about work a number of us have done on searching for covert communities, with a focus on reputation thieves, phishermen, fake banks and other dodgy businesses.

While in California I also gave a talk on Information Security Economics, first as a keynote talk at Crypto and later as a seminar at Berkeley (the slides are here).

Phishing website removal — comparing banks

Following on from our comparison of phishing website removal times for different freehosting webspace providers, Tyler Moore and I have now crunched the numbers so as to be able to compare take-down times by different banks.

The comparison graph is below (click on it to get a more readable version). The sites compared are phishing websites that were first reported in an 8-week period from mid-February to mid-April 2007 (you can’t so easily compare relatively recent periods because of the “horizon effect”, which makes sites that appear later in the period count less). To qualify for inclusion, a bank must have had at least 5 different websites observed during the period. It’s also important to note that we didn’t count sites that were removed too quickly for us to inspect them, and (this matters considerably) we ignored “rock-phish” websites, which attack multiple banks in parallel.
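
For concreteness, here is a minimal sketch (emphatically not our actual analysis code) of how per-bank average lifetimes might be computed under the rules above; the bank names and lifetimes in it are invented purely for illustration:

```c
/* Illustrative sketch only (not the analysis code behind the graph):
 * compute a mean take-down time per bank, applying the "at least 5
 * observed sites" inclusion rule described above.  The bank names and
 * lifetimes below are invented. */
#include <stdio.h>
#include <string.h>

struct observation {
    const char *bank;     /* targeted institution */
    double lifetime_hrs;  /* hours from first report to removal */
};

int main(void) {
    struct observation obs[] = {
        {"BankA", 10.0}, {"BankA", 30.0}, {"BankA", 25.0},
        {"BankA", 12.0}, {"BankA", 40.0},
        {"BankB",  5.0}, {"BankB",  8.0},   /* fewer than 5 sites */
    };
    size_t n = sizeof obs / sizeof obs[0];

    const char *banks[64];
    size_t nbanks = 0;

    /* collect the distinct bank names */
    for (size_t i = 0; i < n; i++) {
        size_t j;
        for (j = 0; j < nbanks && strcmp(banks[j], obs[i].bank) != 0; j++)
            ;
        if (j == nbanks)
            banks[nbanks++] = obs[i].bank;
    }

    /* report a mean only for banks meeting the inclusion threshold */
    for (size_t b = 0; b < nbanks; b++) {
        double sum = 0.0;
        int count = 0;
        for (size_t i = 0; i < n; i++) {
            if (strcmp(obs[i].bank, banks[b]) == 0) {
                sum += obs[i].lifetime_hrs;
                count++;
            }
        }
        if (count >= 5)
            printf("%s: mean lifetime %.1f hours over %d sites\n",
                   banks[b], sum / count, count);
        else
            printf("%s: excluded (only %d sites observed)\n",
                   banks[b], count);
    }
    return 0;
}
```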

Phishing website take-down times (5 or more sites, Feb-Apr 2007)

Although the graph clearly tells us something about relative performance, it is important not to immediately ascribe this to relative competence or incompetence. For example, Bank of America and CitiBank sites stay up rather longer than most. But they have been attacked for years, so maybe their attackers have learnt where to place their sites so as to be harder to remove? This might also apply to eBay — although around a third of their sites are on freehosting, and those come down rather quicker than average, so many of their other sites stay up even longer than the graph seems to show.

A lot of the banks outsource take-down to specialist companies (usually more general “brand protection” companies who have developed a side-line in phishing website removal). Industry insiders tell me that many of the banks at the right hand side of the graph, with lower take-down times, are in this category… certainly some of the specialists are looking forward to this graph appearing in public, so that they can use it to promote their services 🙂

However, once all the caveats (especially about not counting almost instantaneous removals) have been taken on board, this particular graph cannot be said to demonstrate conclusively that any particular bank or firm is better than another.

Latest on security economics

Tyler and I have a paper appearing tomorrow as a keynote talk at Crypto: Information Security Economics – and Beyond. This is a much extended version of our survey that appeared in Science in October 2006 and then at Softint in January 2007.

The new paper adds recent research in security economics and sets out a number of ideas about security psychology, into which the field is steadily expanding as economics and psychology become more intertwined. For example, many existing security mechanisms were designed by geeks for geeks; but if women find them harder to use, and as a result are more exposed to fraud, then could system vendors or operators be sued for unlawful sex discrimination?

There is also the small matter of the extent to which human intelligence evolved because people who were good at deceit, and at detecting deception in others, were likely to have more surviving offspring. Security and psychology might be more closely entwined than anyone ever thought.

Phishing and the gaining of "clue"

Tyler Moore and I are in the final throes of creating a heavily revised version of our WEIS paper on phishing site take-down for the APWG eCrime Researchers Summit in early October in Pittsburgh.

One of the new results is an analysis of take-down times for phishing sites hosted at alice.it, a provider of free webspace. Anyone who signs up (some Italian required) gets a 150MB web presence for free, and some of the phishing attackers are using the site to host fraudulent websites (mainly targeting eBay, in various languages, but with a smattering of PayPal and Posteitaliane). When we generate a scatter plot of the take-down times we see the following effect:

Take-down times for phishing sites hosted at alice.it

Poor advice from SiteAdvisor

As an offshoot of our work on phishing, we’ve been getting more interested generally in reputation systems. One of these systems is McAfee’s SiteAdvisor, a free download of a browser add-on which will apparently “keep you safe from adware, spam and online scams”. Every time you search for or visit a website, McAfee gets told what you’re doing (why worry? they have a privacy policy!), and gives you their opinion of the site. As they put it “Safety ratings from McAfee SiteAdvisor are based on automated safety tests of Web sites (including of our own site) and are enhanced by feedback from our volunteer reviewers and insights from our own analysts”.

Doubtless, it works really well in many cases… but my experience is that you can’t necessarily rely on it 🙁

In particular, I visited http://www.hotshopgood.com (view this image if the site has been removed!). The prices are quite striking — significantly less than you might expect to pay elsewhere. For example, the Canon EOS-1DS Mark II is available for $1880.00, which frankly is a bargain: the best price I can find elsewhere today is a whopping $5447.63.

So why is the camera so cheap? The clue is on the payments page — they don’t take credit cards, only Western Union transfers. Now Western Union are pretty clear about this: “Never send money to a stranger using a money transfer service” and “Beware of deals or opportunities that seem too good to be true”. So the low prices aren’t about cutting out the credit card companies; the point is that Western Union transfers cannot be reversed when the goods fail to turn up.

Here’s someone who fell for this scam, paying $270 for a TomTom Go 910 SatNav. The current going prices — 5 months later — for a non-refurbished unit start at $330, assuming you ignore the sellers who only seem to have email addresses at web portals… so the device was cheap, but not outrageously so like the camera.

I know about that particular experience because someone has kindly entered the URL of the consumer forum into McAfee’s database as a “bad shopping experience”. Nevertheless, SiteAdvisor displays “green” for the website in the status bar, and if I choose to visit the detailed page the main message (with a large tickmark on a green background) is that “We tested this site and didn’t find any significant problems”; I need to scroll down to locate the (not especially eye-catching) user-supplied warning.

This is somewhat disappointing — not just because of the nature of the site and the nature of the user complaint, but because since 15th March 2007 www.hotshopgood.com has been listed as wicked by “Artists Against 419”, a community list of bad websites, and it is on the current list of fraudulent websites at fraudwatchers.org. In other words, there is something of a consensus that this isn’t a legitimate site, yet McAfee have failed to tap into the community’s opinion.

Now of course reputation is a complex thing, and there are many millions of websites out there, so McAfee have set themselves a difficult task. I’ve no doubt they manage to justifiably flag many sites as wicked, but when they’re not really sure, and users are telling them that there’s an issue, they ought to be showing at least an amber traffic light, rather than the current green.
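
To make the point concrete, here is a minimal sketch — emphatically not McAfee’s actual algorithm — of how automated test results, community blacklists and user reports might be combined into a traffic-light rating that errs towards amber when the signals disagree:

```c
/* Purely illustrative rating logic (not McAfee's): an automated "green"
 * verdict is downgraded when independent signals disagree with it. */
#include <stdio.h>

enum rating { GREEN, AMBER, RED };

static const char *name(enum rating r) {
    return r == GREEN ? "green" : r == AMBER ? "amber" : "red";
}

/* Combine the automated verdict with community blacklist hits and
 * user-supplied warnings. */
static enum rating combine(enum rating automated, int blacklist_hits,
                           int user_warnings) {
    if (automated == RED || blacklist_hits > 0)
        return RED;            /* strong evidence of wickedness */
    if (user_warnings > 0)
        return AMBER;          /* unresolved doubt: warn, don't endorse */
    return automated;
}

int main(void) {
    /* A hotshopgood.com-style case: automated tests find nothing, one
       user reports a bad shopping experience, and the site appears on
       two community blacklists. */
    printf("user report only:        %s\n", name(combine(GREEN, 0, 1)));
    printf("user report + blacklist: %s\n", name(combine(GREEN, 2, 1)));
    return 0;
}
```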

BTW: you may wish to note that SiteAdvisor currently considers www.lightbluetouchpaper.org to be deserving of a green tick. One of the reasons for this is that it mainly links to other sites that get green ticks. So presumably when they finally fix the reputation of hotshopgood.com, that will slightly reduce this site’s standing. A small price to pay! (though hopefully not a price that is too good to be true!)

House of Lords Inquiry: Personal Internet Security

For the last year I’ve been involved with the House of Lords Science and Technology Committee’s Inquiry into “Personal Internet Security”. My role has been that of “Specialist Adviser”, which means that I have been briefing the committee about the issues, suggesting experts who they might wish to question, and assisting with the questions and their understanding of the answers they received. The Committee’s report is published today (Friday 10th August) and can be found on the Parliamentary website here.

For readers who are unfamiliar with the UK system — the House of Lords is the second chamber of the UK Parliament and is currently composed mainly of “the great and the good”, although 92 hereditary peers still remain, including the Earl of Erroll, who was one of the more computer-literate people on the committee.

The Select Committee reports are the result of in-depth study of particular topics by people who have reached the top of their professions (and who are therefore quick learners, even if they start by knowing little of the topic), and their careful reasoning and endorsement of convincing expert views carry considerable weight. The Government is obliged to respond formally, and there will, at some point, be a few hours of debate on the report in the House of Lords.

My appointment letter made it clear that I wasn’t required to publicly support the conclusions that their lordships came to, but I am generally happy to do so. There are quite a lot of conclusions and recommendations, but I believe that three areas particularly stand out.

The first area where the committee has assessed the evidence, not as experts, but as intelligent outsiders, is the question of where the responsibility for Personal Internet Security lies. Almost every witness was asked about this, but very few gave an especially wide-ranging answer. A lot of people, notably the ISPs and the Government, dumped a lot of the responsibility onto individuals, which neatly avoided them having to shoulder very much themselves. But individuals are just not well-informed enough to understand the security implications of their actions, and although it’s desirable that they aren’t encouraged to do dumb things, most of the time they’re not in a position to know whether an action is dumb or not. The committee have a series of recommendations to address this — there should be BSI kite marks to allow consumers to select services that are likely to be secure, ISPs should lose mere conduit exemptions if they don’t act to deal with compromised end-user machines, and the banks should be statutorily obliged to bear losses from phishing. None of these measures will fix things directly, but they will change the incentives, and that has to be the way forward.

Secondly, the committee are recommending that the UK bring in a data breach notification law, along the general lines of the law in California and 34 other US states. This would require companies that leaked personal data (because of a hacked website, or a stolen laptop, or just by failing to secure it) to notify the people concerned that this had happened. At first that might sound rather weak (they just have to tell people), but in practice the US experience shows that it makes a difference. Companies don’t like the publicity, and of course the people involved are able to take precautions against identity theft (and tell all their friends quite how trustworthy the company is…). It’s a simple, low-key law, but it produces all the right incentives for taking security seriously, and for deploying systems such as whole-disk encryption that mean that losing a laptop stops being synonymous with losing data.

The third area, and this is where the committee has been most far-sighted (and therefore, in the short term, this may well be their most controversial recommendation), is that they wish to see a software liability regime, viz. that software companies should become responsible for their security failures. The benefits of such a regime were cogently argued by Bruce Schneier, who appeared before the committee in February, and I recommend reading his evidence to understand why he swayed the committee. Unlike the data breach notification law, the committee’s recommendation isn’t to get a statute onto the books sooner rather than later. There are all sorts of competition issues and international ramifications, and in practice it may be a decade or two before there’s sufficient case law for vendors to know quite where they stand if they ship a product with a buffer overflow, or a race condition, or just a default password. Almost everyone who gave evidence, apart from Bruce Schneier, argued against such a law, but their lordships have seen through the special pleading and the self-interest and looked for a way to make the Internet a safer place. Though I can foresee a lot of complications and a rocky road towards liability, looking to the long term, I think their lordships have got this one right.

Chip-and-PIN relay attack paper wins "Best Student Paper" at USENIX Security 2007

In May 2007, Saar Drimer and Steven Murdoch posted about “Distance bounding against smartcard relay attacks”. Today their paper won the “Best Student Paper” award at USENIX Security 2007 and their slides are now online. You can read more about this work on the Security Group’s banking security web page.

Steven and Saar at USENIX Security 2007

USENIX WOOT07, Exploiting Concurrency Vulnerabilities in System Call Wrappers, and the Evil Genius

I’ve spent the day at the First USENIX Workshop on Offensive Technologies (WOOT07) — an interesting new workshop on attack strategies and technologies. The workshop highlights the tension between the “white” and “black” hats in security research — you can’t design systems to avoid security problems if you don’t understand what they are. USENIX’s take on such a forum is less far down the ethically questionable spectrum than some other venues, but it certainly presented, in concrete detail, both new exploits for new vulnerabilities and techniques for evading current protections.

I presented, “Exploiting Concurrency Vulnerabilities in System Call Wrappers,” a paper on the topic of compromising system call interposition-based protection systems, such as COTS virus scanners, OpenBSD and NetBSD’s Systrace, the TIS Generic Software Wrappers Toolkit (GSWTK), and CerbNG. The key insight here is that the historic assumption of “atomicity” of system calls is fallacious, and that on both uniprocessor and multiprocessor systems it is trivial to construct a race between system call wrappers and malicious user processes to bypass protections. I demonstrated sample exploit code against the Sysjail policy on Systrace, and IDwrappers on GSWTK, but the paper includes a more extensive discussion, including vulnerabilities in sudo’s Systrace monitor mode. You can read the paper and see the presentation slides here. All affected vendors received at least six months’, and in some cases many years’, advance notice regarding these vulnerabilities.
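
To give a flavour of the attack pattern, here is a heavily simplified sketch (not the exploit code from the paper): one thread repeatedly rewrites a pathname argument while the main thread issues open(2) on it. Run under a vulnerable wrapper, the wrapper may check the buffer while it holds the benign value and the kernel may then copy in the sensitive one; on an unwrapped system the program is harmless. The pathnames are, of course, placeholders.

```c
/* Simplified illustration of the wrapper/kernel argument race (not the
 * paper's exploit code).  The pathnames are placeholders; the data race
 * on `path` is deliberate, since the race is the whole point. */
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>

static char path[64] = "/tmp/benign";
static volatile int running = 1;

/* Flip the shared pathname between a value a wrapper would permit and
 * one it would deny (both strings are the same length). */
static void *flipper(void *arg) {
    (void)arg;
    while (running) {
        strcpy(path, "/etc/target");
        strcpy(path, "/tmp/benign");
    }
    return NULL;
}

int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, flipper, NULL);

    for (int i = 0; i < 1000000; i++) {
        /* A vulnerable wrapper copies and checks `path` here; the kernel
         * then re-reads it, possibly after the flipper has changed it. */
        int fd = open(path, O_RDONLY);
        if (fd >= 0)
            close(fd);
    }

    running = 0;
    pthread_join(t, NULL);
    return 0;
}
```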

The moral, for those unwilling to read the paper, is that system call wrappers are a bad idea unless, of course, you’re willing to rewrite the OS to be message-passing. Systems like the TrustedBSD MAC Framework on FreeBSD and Mac OS X Leopard, Linux Security Modules (LSM), Apple’s (and now also NetBSD’s) kauth(9), and other tightly integrated kernel security frameworks offer specific solutions to these concurrency problems. There’s plenty more to be done in that area.

Concurrency issues have been discussed before in computer security, especially relating to races between applications when accessing /tmp, unexpected signal interruption of socket operations, and distributed systems races, but this paper starts to explore the far more sordid area of OS kernel concurrency and security. Given that even notebook computers are multiprocessor these days, emphasizing correct synchronization and reasoning about high concurrency is critical to thinking about security correctly. As someone with strong interests in both OS parallelism and security, I find the parallels (no pun intended) obvious: in both cases, the details really matter, and you need to think about a proverbial Cartesian Evil Genius. Anyone who’s done serious work with concurrent systems knows that they behave as though actively malicious, which makes them a good match for the infamous malicious attacker of security research!
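
For readers who haven’t met the classic /tmp race mentioned above, here is a deliberately vulnerable sketch of the access(2)/open(2) time-of-check-to-time-of-use pattern; the filename is made up, and the window between the two calls is where an attacker swaps in a symlink:

```c
/* Deliberately vulnerable sketch of the classic /tmp TOCTTOU race;
 * do not copy this pattern into real code. */
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>

int main(void) {
    const char *name = "/tmp/report.txt";   /* placeholder filename */

    /* Check: in a setuid program, access() tests the *real* user's
     * permission on whatever /tmp/report.txt currently is ... */
    if (access(name, W_OK) == 0) {
        /* ... Use: but before open() runs with the *effective* (e.g.
         * root) privileges, an attacker can replace the file with a
         * symlink to something the real user could never write to. */
        int fd = open(name, O_WRONLY | O_CREAT, 0600);
        if (fd >= 0) {
            if (write(fd, "data\n", 5) < 0)
                perror("write");
            close(fd);
        }
    }
    return 0;
}
```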

Some of the other presentations included talks about Google’s software fuzzing tool Flayer (based on Valgrind), attacks on deployed SIP systems including AT&T’s product, Bluetooth sniffing with BlueSniff, and quantitative analyses of OS fingerprinting techniques. USENIX members will presumably be able to read the full set of papers online immediately; for others, check back in a year or visit the speakers’ personal web sites after you look at the WOOT07 Programme.

Electoral Commission releases e-voting and e-counting reports

Today, the Electoral Commission released their evaluation reports on the May 2007 e-voting and e-counting pilots held in England. Each of the pilot areas has a report from the Electoral Commission and the e-counting trials are additionally covered by technical reports from Ovum, the Electoral Commission’s consultants. Each of the changes piloted receives its own summary report: electronic counting, electronic voting, advanced voting and signing in polling stations. Finally, there are a set of key findings, both from the Electoral Commission and from Ovum.

Richard Clayton and I acted as election observers for the Bedford e-counting trial, on behalf of the Open Rights Group, and our discussion of the resulting report can be found in an earlier post. I also gave a talk on a few of the key points.

The Commission’s criticism of e-counting and e-voting was scathing; concerning the latter, they said that the “security risk involved was significant and unacceptable.” They recommend against further trials until the problems identified are resolved. Quality assurance and planning were found to be inadequate, predominantly stemming from insufficient timescales. In the case of the six e-counting trials, three were abandoned and two were delayed, leaving only one that could be classed as a success. Poor transparency and value for money are also cited as problems. More worryingly, the Commission identify a failure to learn from the lessons of previous pilot programmes.

The reports covering the Bedford trials largely match my personal experience of the count and add some details which were not available to the election observers (in particular, explaining that the reason for some of the system shutdowns was to permit re-configuration of the OCR algorithms, and that due to delays at the printing contractor, no testing with actual ballot papers was performed). One difference is that the Ovum report was more generous than the Commission report regarding the candidate perceptions, saying “Apart from the issue of time, none of the stakeholders questioned the integrity of the system or the results achieved.” This discrepancy could be because the Ovum and Commission representatives left before the midnight call for a recount, by candidates who had lost confidence in the integrity of the results.

There is much more detail to the reports than I have been able to summarise here, so if you are interested in electronic elections, I suggest you read them yourselves.

The Open Rights Group has in general welcomed the Electoral Commission’s report, but feels that the inherent problems resulting from the use of computers in elections have not been fully addressed. The results of the report have also been covered by the media, such as the BBC: “Halt e-voting, says election body” and The Guardian: “Electronic voting not safe, warns election watchdog”.