Category Archives: Security economics

Social-science angles of security

Relay attacks on card payment: vulnerabilities and defences

At this year’s Chaos Communication Congress (24C3), I presented some work I’ve been doing with Saar Drimer: implementing a smart card relay attack and demonstrating that it can be prevented by distance bounding protocols. My talk (abstract) was filmed and the video can be found below. For more information, we produced a webpage and the details can be found in our paper.

[ slides (PDF 9.6M) | video (BitTorrent — MPEG4, 106M) ]
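For readers who want the intuition without watching the talk, here is a toy, software-only sketch of the round-trip timing check at the heart of distance bounding. It is not our implementation (the real thing needs hardware-level timing), and the callbacks (`send_challenge_bit`, `receive_response_bit`, `expected_response`) and the numbers are hypothetical placeholders.

```python
# A toy illustration of the round-trip timing idea behind distance bounding.
# NOT the implementation from the paper; the callbacks and numbers below are
# hypothetical and only sketch the verifier's logic.
import random
import time

SPEED_OF_LIGHT = 3e8          # metres per second
MAX_DISTANCE = 0.5            # accept cards within half a metre of the reader
PROCESSING_ALLOWANCE = 1e-6   # assumed fixed time the card may take to reply
RTT_BOUND = 2 * MAX_DISTANCE / SPEED_OF_LIGHT + PROCESSING_ALLOWANCE

def timed_round(send_challenge_bit, receive_response_bit):
    """One timed round: send a random challenge bit and time the reply."""
    challenge = random.getrandbits(1)
    start = time.perf_counter()
    send_challenge_bit(challenge)
    response = receive_response_bit()
    return challenge, response, time.perf_counter() - start

def accept_card(rounds, expected_response):
    """Accept only if every reply is correct AND arrives within the bound.
    A relayed card can answer correctly, but the extra hop to the genuine
    card should push the round-trip time over RTT_BOUND."""
    return all(response == expected_response(challenge) and rtt <= RTT_BOUND
               for challenge, response, rtt in rounds)
```

In a real protocol the expected responses are derived from a shared secret beforehand, so the timed rounds only exchange single bits; the paper has the details.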

Update 2008-01-15:
Liam Tung from ZDNet Australia has written an article on my talk: Bank card attack: Only Martians are safe.

Other highlights from the conference…

How effective is the wisdom of crowds as a security mechanism?

Over the past year, Richard Clayton and I have been tracking phishing websites. For this work, we are indebted to PhishTank, a website where dedicated volunteers submit URLs from suspected phishing websites and vote on whether the submissions are valid. The idea behind PhishTank is to bring together the expertise and enthusiasm of people across the Internet to fight phishing attacks. The more people who participate, the larger the crowd, and the more robust it should be against errors and perhaps even against manipulation by attackers.

Not so fast. We studied the submission and voting records of PhishTank’s users, and our results are published in a paper appearing at Financial Crypto next month. It turns out that participation is very skewed. While PhishTank has several thousand registered users, a small core of around 25 moderators perform the bulk of the work, casting 74% of the votes we observed. Both the distributions of votes and submissions follow a power law.

This leaves PhishTank more vulnerable to manipulation than would be the case if every member of the crowd participated to the same extent. Why? If a few of the most active users stopped voting, a backlog of unverified phishing sites might collect. It also means an attacker could join the system and vote maliciously on a massive scale. Since 97% of submissions to PhishTank are verified as phishing URLs, an attacker could easily build up a good reputation simply by voting 'phish' on many submissions without ever examining them, and then sprinkling in malicious votes, for example votes protecting the attacker's own phishing sites. Since over half of the phishing sites in PhishTank are duplicate rock-phish domains, a savvy attacker could build reputation by voting for these sites without otherwise contributing to PhishTank.
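To see how cheaply such a reputation could be earned, here is a minimal simulation sketch (not from the paper): the 97% base rate is the figure quoted above, while the number of votes is a made-up parameter for illustration.

```python
# Minimal sketch: how accurate a voter looks who never examines the sites.
# P_PHISH comes from the 97% figure quoted above; N_VOTES is hypothetical.
import random

P_PHISH = 0.97    # fraction of submissions that really are phish
N_VOTES = 10_000  # hypothetical number of votes the attacker casts

submissions = [random.random() < P_PHISH for _ in range(N_VOTES)]

# Strategy 1: always vote "phish" without ever looking at the site.
always_phish_correct = sum(submissions)

# Strategy 2: vote by coin flip.
coin_flip_correct = sum((random.random() < 0.5) == is_phish
                        for is_phish in submissions)

print(f"always-'phish' accuracy: {always_phish_correct / N_VOTES:.1%}")  # ~97%
print(f"coin-flip accuracy:      {coin_flip_correct / N_VOTES:.1%}")     # ~50%
```

A voter following the first strategy looks about as reliable as a diligent one, which is exactly why high vote counts alone are a poor proxy for trustworthiness.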

So crowd-sourcing your security decisions can leave you exposed to manipulation. But how does PhishTank compare to the feeds maintained by specialist website take-down companies hired by the banks? Well, we compared PhishTank’s feed to a feed from one such company, and found the company’s feed to be slightly more complete and significantly faster in confirming phishing websites. This is because companies can afford to pay employees to verify their submissions.

We also found that users who vote less often are more likely to vote incorrectly, and that users who commit many errors tend to have voted on the same URLs.

Despite these problems, we are not arguing against leveraging user participation in the design of security mechanisms, nor do we believe that PhishTank should throw in the towel. Some improvements can be made by automating the obvious categorization, so that PhishTank’s users are left with only the hard decisions. In any case, we urge caution before turning a security decision over to a crowd.

Infosecurity Magazine has written a news article describing this work.

Fatal wine waiters

I’ve written before about “made for AdSense” (MFA) websites — those parts of the web that are created solely to host lots of (mainly Google) ads, and thereby make their creators loads of money.

Well, this one “hallwebhosting.com” is a little different. I first came across it a few months back when it was clearly still under development, but it seems to have settled down now — so that it’s worth looking at exactly what they’re doing.

The problem that such sites have is that they need to create lots of content really quickly, get indexed by Google so that people can find them, and then wait for the clicks (and the money) to roll in. The people behind hallwebhosting have had a cute idea for this — they take existing content from other sites and do word substitutions on sentences to produce what they clearly intend to be identical in meaning (so the site will figure in web search results), but different enough that the indexing spider won’t treat it as identical text.

So, for example, this section from Wikipedia’s page on Windows Server 2003:

Released on April 24, 2003, Windows Server 2003 (which carries the version number 5.2) is the follow-up to Windows 2000 Server, incorporating compatibility and other features from Windows XP. Unlike Windows 2000 Server, Windows Server 2003’s default installation has none of the server components enabled, to reduce the attack surface of new machines. Windows Server 2003 includes compatibility modes to allow older applications to run with greater stability.

becomes:

Released on April 24, 2003, Windows Server 2003 (which carries the form quantity 5.2) is the follow-up to Windows 2000 Server, incorporating compatibility and other skin from Windows XP. Unlike Windows 2000 Server, Windows Server 2003’s evasion installation has none of the attendant workings enabled, to cut the molest outward of new machines. Windows Server 2003 includes compatibility modes to allow big applications to gush with larger stability.

I first noticed this site because they rendered a Wikipedia article about my NTP DDoS work, entitled “NTP server misuse and abuse”, into “NTP wine waiter knock about and abuse” … the contents of which almost make sense:

“In October 2002, one of the first known hand baggage of phase wine waiter knock about resulted in troubles for a mess wine waiter at Trinity College, Dublin”

for doubtless a fine old university has wine waiters to spare, and a mess for them to work in.

Opinions around here differ as to whether this is machine translation (as in all those old stories about “Out of sight, out of mind” being translated to Russian and then back as “Invisible idiot”) or imaginative use of a thesaurus where “wine waiter” is a hyponym of “server”.
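Whatever the mechanism, very little machinery is needed. Here is a minimal sketch of naive synonym substitution of the kind that could produce this sort of text; the substitution table is invented for illustration (drawn from the examples above) and is obviously not what hallwebhosting actually runs.

```python
# A minimal sketch of naive, context-free synonym substitution, of the kind
# that might turn "server" into "wine waiter". The table below is invented
# for illustration using the examples quoted above.
import re

SYNONYMS = {
    "server":  "wine waiter",
    "version": "form",
    "number":  "quantity",
    "default": "evasion",
    "attack":  "molest",
}

def rewrite(text: str) -> str:
    """Replace each known word with its 'synonym', ignoring context entirely."""
    def substitute(match):
        word = match.group(0)
        return SYNONYMS.get(word.lower(), word)
    return re.sub(r"[A-Za-z]+", substitute, text)

print(rewrite("Windows Server 2003 carries the version number 5.2"))
# -> "Windows wine waiter 2003 carries the form quantity 5.2"
```

Because each word is swapped in isolation, the output keeps the sentence shape (and so still matches searches for the surrounding phrases) while diverging just enough to look like fresh text to an indexing spider.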

So far as I can see, this is all potentially lawful — Wikipedia is licensed under the GNU Free Documentation License, so if there were an acknowledgement of the original article’s authors then all would be fine. But there isn’t — so in fact, all is not fine!

However, even if this oversight (if oversight it is) were corrected, some articles are clearly copyright infringements.

For example, this article from shellaccounts.biz entitled Professional Web Site Hosting Checklist appears to be entirely covered by copyright, yet it has been rendered into this amusement:

In harmony to create sure you get what you’ve been looking for from a qualified confusion put hosting server, here are a few stuff you should take into tally before deciding on a confusion hosting provider.

where you’ll see that “site” has become “put”, “web” has become “confusion” (!) and later on “requirements” becomes “food”, which leads to further hilarity.

However, beyond the laughter, this is pretty clearly yet another ham-fisted attempt to clutter up the web with dross in the hope of making money. This time it’s not Google AdWords but banner ads and other franchised links; it’s still essentially “MFA”. These types of site will continue until advertisers get more savvy about the websites they don’t wish to be associated with — at which point the flow of money will cease and the sites will disappear.

To finish by being lighthearted again, the funniest page (so far) is the reworking of the Wikipedia article on “Terminal Servers” … since servers once again become “wine waiters”, but “terminal”, naturally enough, becomes “fatal”. The image is clear.

Hackers get busted

There is an article on BBC News about how yet another hacker running a botnet got busted. When I read the sentence “…he is said to be very bright and very skilled …”, I started thinking. How did they find him? He clearly must have made some serious mistakes, but what sort of mistakes? How can isolation influence someone’s behaviour, and how important are external opinions for objectivity?

When we write a paper, we very much appreciate it when someone is willing to read it and give back some feedback. It allows us to identify loopholes in our thinking, flaws in descriptions, and so forth. The feedback does not necessarily imply large changes to the text, but it very often clarifies it and makes it much more readable.

Hackers do use various tools – either publicly available, or written by the hackers themselves. There may be errors in the tools, but they will probably be fixed very quickly, especially if the tools are popular. Hackers often allow others to use their tools – whether for testing or for fame. But hacking for profit is quite a creative job, and plenty of it involves actions that cannot be automated.

So what is the danger of these manual tasks? Do hackers write down descriptions of all their procedures, with checklists, and stick to them, or do they work intuitively and become careless after a few months or years? Clearly, the first option is how intelligence agencies would deal with the problem, because they know that the human is the weakest link. But what about hackers? “…very bright and very skilled…”, but isolated from the rest of the world?

So I keep thinking, is it worth trying to reconstruct “operational procedures” for running a botnet, analyse them, identify the mistakes most likely to happen, and use such knowledge against the “cyber-crime groups”?

Government ignores Personal Internet Security

At the end of last week the Government published their response to the House of Lords Science and Technology Committee Report on Personal Internet Security. The original report was published in mid-August and I blogged about it (and my role in assisting the Committee) at that time.

The Government has turned down pretty much every recommendation. The most positive verbs used were “consider” or “working towards setting up”. That’s more than a little surprising, because the report made a great deal of sense, and their lordships aren’t fools. So is the Government ignorant, stupid, or in the thrall of some special interest group?

On balance I think it starts from ignorance.

Some of the most compelling evidence that the Committee heard was at private meetings in the USA from companies such as Microsoft, Cisco, Verisign, and in particular from Team Cymru, who monitor the “underground economy”. I don’t think that the Whitehall mandarins have heard these briefings, or have bothered to read the handful of published articles such as this one in ;login, or this more recent analysis that will appear at CCS next week. If the Government was up-to-speed on what researchers are documenting, they wouldn’t be arguing that there is more crime solely because there are more users — and they could not possibly say that they “refute the suggestion […] that lawlessness is rife”.

However, we cannot rule out stupidity.

Some of the Select Committee recommendations were intended to address the lack of authoritative data — and these were rejected as well. The Government doesn’t think it’s urgently necessary to capture more information about the prevalence of eCrime; they don’t think that having the banks collate crime reports gets all the incentives wrong; and they “do not accept that the incidence of loss of personal data by companies is on an upward path” (despite there being no figures in the UK to support or refute that notion, and considerable evidence of regular data loss in the United States).

The bottom line is that the Select Committee did some “out-of-the-box thinking” and came up with a number of proposals for measurement, for incentive alignment, and for bolstering law enforcement’s response to eCrime. The Government have settled for complacency, quibbling about the wording of the recommendations, and picking out a handful of the more minor recommendations to “note”, to “consider” and to “keep under review”.

A whole series of missed opportunities.

Phishing take-down paper wins 'Best Paper Award' at APWG eCrime Researcher's Summit

Richard Clayton and I have been tracking phishing sites for some time. Back in May, we reported on how quickly phishing websites are removed. Subsequently, we have also compared the performance of banks in removing websites and found evidence that ISPs and registrars are initially slow to remove malicious websites.

We have published our updated results at eCrime 2007, sponsored by the Anti-Phishing Working Group. The paper, ‘Examining the Impact of Website Take-down on Phishing’ (slides here), was selected for the ‘Best Paper Award’.

A high-level abridged description of this work also appeared in the September issue of Infosecurity Magazine.

Web content labelling

As we all know, the web contains a certain amount of content that some people don’t want to look at, and/or do not wish their children to look at. Removing the material is seldom an option (it may well be entirely lawfully hosted, and indeed many other people may be perfectly happy for it to be there). Since centralised blocking of such material just isn’t going to happen, the best way forward is the installation of blocking software on the end-user’s machine. This software will have blacklists and whitelists provided from a central server, and it will provide some useful reassurance to parents that their youngest children have some protection. Older children can of course just turn the systems off, as has recently been widely reported for the Australian NetAlert system.

A related idea is that websites should rate themselves according to widely agreed criteria, and this would allow visitors to know what to expect on the site. Such ratings would of course be freely available, unlike the blocking software which tends to cost money (to pay for the people making the whitelists and blacklists).

I’ve never been a fan of these self-rating systems, whose criteria always seem to be based on a white, middle-class, presbyterian view of wickedness, and — at least initially — were hurriedly patched together from videogame rating schemes. More than a decade ago I lampooned the then widely hyped RSACi system by creating a site that scored “4 4 4 4”, the highest (most unacceptable) score in every category: http://www.happyday.demon.co.uk/awful.htm. Just recently, I was reminded of this in the context of an interview for an EU review of self-regulation.

Continue reading Web content labelling

Mapping the Privila network

Last week, Richard Clayton described his investigation of the Privila internship programme. Unlike link farms, Privila doesn’t link to its own websites. Instead, they apparently depend solely on the links made to each site before they took over the domain name, and on new ones solicited through spamming. This means that normal mapping techniques, which just follow links, will not uncover Privila sites. This might be one reason they took this approach, or perhaps it was just to avoid being penalized by search engines.

The mapping approach which I implemented, as suggested by Richard, was to exploit the fact that Privila authors typically write for several websites. So, starting with one seed site, you can find more by searching for the names of its authors. I used the Yahoo search API to automate this process, since the Google API has been discontinued. From the new set of websites discovered, the list of authors is extracted, allowing yet more sites to be found. These steps are repeated until no new sites are discovered (effectively a breadth-first search).
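In outline, the procedure looks something like the sketch below. The two helpers are placeholders: `authors_of_site` stands in for scraping a site’s by-lines, and `sites_with_author` stands in for querying a web-search API (the Yahoo one, in my case) for an author’s name; neither is implemented here.

```python
# A sketch of the breadth-first mapping described above. The two helper
# callables are hypothetical placeholders for scraping by-lines and for
# querying a web-search API; only the search strategy is shown.
from collections import deque

def map_network(seed_site, authors_of_site, sites_with_author):
    """Breadth-first search alternating between sites and their authors."""
    known_sites, known_authors = {seed_site}, set()
    queue = deque([seed_site])
    while queue:
        site = queue.popleft()
        for author in authors_of_site(site):
            if author in known_authors:
                continue
            known_authors.add(author)
            for candidate in sites_with_author(author):
                if candidate not in known_sites:
                    known_sites.add(candidate)
                    queue.append(candidate)
    return known_sites, known_authors
```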

The end result was that starting from bustem.com, I found 294 further sites, with a total of 3,441 articles written by 124 authors (these numbers are lower than the ones in the previous post since duplicates have now been properly removed). There might be even more undiscovered sites, with a disjoint set of authors, but the current network is impressive in itself.

I have implemented an interactive Java applet visualization (using the Prefuse toolkit) so you can explore the network yourself. Both the source code, and the data used to construct the graph can also be downloaded.

Screenshot of PrivilaView applet

The interns of Privila

Long-time readers will recall that I was spammed with an invitation to swap links with the European Human Rights Centre, a plagiarised site that exists to make money out of job listings and Google ads. Well, some more email spam has drawn my attention to something rather different:

From: “Elanor Radaker” <links@bustem.com>
Subject: Wanna Swap Links
Date: Thu, 19 Apr 2007 01:42:37 -0500

Hi,

I’ve been working extremely hard on my friend’s website bustem.com and if you like what we’ve done, a link from <elided> would be greatly appreciated. If you are interested in a link exchange please …

<snip>

Thank you we greatly appreciate the help! If you have any questions please let me know!

Respectfully,

Elanor Radaker

This site, bustem.com, is not quite as the email claims. However, it is not plagiarised. Far from it: the content has been written to order for Privila Inc by members of a small army of unpaid interns… and when one starts looking, there are literally hundreds of similar sites.

Continue reading The interns of Privila

Econometrics of wickedness

Last Thursday I gave a tech talk at Google; you can now watch it online. It’s about work a number of us have done on searching for covert communities, with a focus on reputation thieves, phishermen, fake banks and other dodgy businesses.

While in California I also gave a talk on Information Security Economics, first as a keynote talk at Crypto and later as a seminar at Berkeley (the slides are here).