Non-cooperation in the fight against phishing

October 16th, 2008 at 13:32 UTC by Richard Clayton

Tyler Moore and I are presenting another one of our academic phishing papers today at the Anti-Phishing Working Group’s Third eCrime Researchers Summit here in Atlanta, Georgia. The paper “The consequence of non-cooperation in the fight against phishing” (pre-proceedings version here) goes some way to explaining anomalies we found in our previous analysis of phishing website lifetimes. The “take-down” companies reckon to get phishing websites removed within a few hours, whereas our measurements show that the average lifetimes are a few days.

These “take-down” companies are generally specialist offshoots of more general “brand protection” companies, and are hired by banks to handle removal of fake phishing websites.

When we examined our data more carefully we found that we were receiving “feeds” of phishing website URLs from several different sources — and the “take-down” companies that were passing the data to us were not passing the data to each other.

So it often occurs that take-down company A knows about a phishing website targeting a particular bank, but take-down company B is ignorant of its existence. If it is company B that has the contract for removing sites for that bank then, since they don’t know the website exists, they take no action and the site stays up.

Since we were receiving data feeds from both company A and company B, we knew the site existed and we measured its lifetime — which is much extended. In fact, it’s somewhat of a mystery why it is removed at all! Our best guess is that reports made directly to ISPs trigger removal.

The paper contains all the details, and gives all the figures to show that website lifetimes are extended by about 5 days when the take-down company is completely unaware of the site. On other occasions the company learns about the site some time after it is first detected by someone else, and this extends lifetimes by an average of 2 days.

Since extended lifetimes equate to more unsuspecting visitors handing over their credentials and having their bank accounts cleaned out, these delays can also be expressed in monetary terms. Using the rough and ready model we developed last year, we estimate that an extra $326 million per annum is currently being put at risk by the lack of data sharing. This figure is from our analysis of just two companies’ feeds, and there are several more such companies in this business.
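The structure of a lifetime-to-dollars estimate like this can be sketched in a few lines. To be clear, the per-site parameters below are hypothetical placeholders for illustration only, not the figures from our paper; the point is just that extra money at risk scales with extra site-days multiplied by a victim rate and an average loss.

```python
# Illustrative sketch only: converts extended phishing-site lifetimes into
# money put at risk. The parameter values are hypothetical placeholders,
# NOT the calibrated figures from the paper.

VICTIMS_PER_SITE_DAY = 8.5   # hypothetical: victims per site per day live
LOSS_PER_VICTIM = 572.0      # hypothetical: average loss per victim (USD)

def extra_risk(n_sites: int, extra_days: float) -> float:
    """Extra money put at risk when n_sites each stay up extra_days longer."""
    return n_sites * extra_days * VICTIMS_PER_SITE_DAY * LOSS_PER_VICTIM

# e.g. 2 sites each staying up 5 days longer than necessary:
print(extra_risk(2, 5.0))
```

Under this kind of model the estimate is linear in each factor, which is why even modest per-site delays, multiplied across thousands of sites per year, produce a headline figure in the hundreds of millions.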

Not surprisingly, our paper suggests that the take-down companies should be sharing their data, so that when they learn about websites attacking banks they don’t have contracts with, they pass the details on to another company who can start to get the site removed.

We analyse the incentives to make this change (and the incentives the companies have not to do so) and contrast the current arrangements with the anti-virus/malware industry — where sample suspect code has been shared since the early 1990s.

In particular, we note that it is the banks who would benefit most from data sharing — and since they are paying the bills, we think that they may well be in a position to force through changes in policy. To best protect the public, we must hope that this happens soon.

Entry filed under: Academic papers, Banking security, Security economics

5 comments

  • 1. Martijn Grooten  |  October 16th, 2008 at 16:38 UTC

Good work! And let’s hope the take-down companies will follow your advice. I’m not sure where the anti-virus industry would have been without strong cooperation, but perhaps the virus threat would have been so immense that people would have gone back to writing important documents in longhand.

  • 2. Chris  |  October 17th, 2008 at 17:54 UTC

    Maybe it’s me, but when I read the paper, I immediately thought of it as describing an Assurance Game. Is there a reason you didn’t more formally articulate the choices facing the take-down firms game-theoretically? It seems as though you — in stark contrast to others doing related work — have actual data to use in constructing payoff matrices.

    I don’t mean this to sound like a “Why didn’t you write the paper I would have written?” comment. My very sincere apologies if it does.

  • 3. Tyler Moore  |  October 20th, 2008 at 15:14 UTC

    @Chris: Thanks for your comment. The purpose of this paper was to empirically measure the effect of not sharing phishing URLs. Modeling the trade-offs game-theoretically, as you suggest, is a natural thing to do. We chose to leave this as future work out of space considerations.

  • 4. Eric Olson  |  October 22nd, 2008 at 21:05 UTC

    I completely agree that time is the critical matter in taking down phishing sites.

    Unfortunately, I respectfully have to differ with your suggested method for improving those times. I believe their prescription is exactly the wrong one. Rather than improve protection for banks and consumers, this proposal would in fact have the opposite effect. Speed in detection and takedown both take technology, staff, and expertise, in other words, investment and lots of it. Mandating that the strongest players undermine their own return on that investment by giving away the data (derived at huge expense) to their feebler competitors will only incent the competent players to exit the market.

    Those with the best technology and people will simply devote their staff, budget and expertise to other products where they are not being told to give away the value they have worked so hard to create.

    The banks that rely on these providers will thus be left with only the least efficient, least competent vendors to choose from, and the performance and protection offered will suffer, not improve. For a more complete explanation of this differing opinion, including a discussion of why the A/V industry is not in fact a proper analog for this suggestion, please see

    A Contrary Perspective – Forced Data Sharing Will Decrease Performance and Reduce Protection

    Respectfully,
    Eric Olson – Vice President, Cyveillance, Inc.

  • 5. Clive Robinson  |  October 25th, 2008 at 07:25 UTC

    @ Eric Olson,

    “I believe their prescription is exactly the wrong one. Rather than improve protection for banks and consumers, this proposal would in fact have the opposite effect.”

    As a first order effect I think you are probably correct on this.

    Also I very much agree with you on the current likely outcome,

    “Those with the best technology and people will simply devote their staff, budget and expertise to other products where they are not being told to give away the value they have worked so hard to create.”

    However I would look on this as an issue that should be dealt with differently.

    First and foremost, the banks should not be allowed to externalise the risk of the services they provide onto their service users (customers and merchants).

    Secondly, the user technology really must be sorted out, both the browser and the service provision on the banks’ servers (Google’s Chrome may be a step in this direction).

    Thirdly, there are already examples from other financial services of “pooling of resources”. There are various organisations, like credit reference checking services and medical and other insurance claims databases, that are effectively industry funded as commercial propositions.

    It would be better for both the banks and their customers if there were effectively one or two organisations actually doing the take-down process. The economies of scale and the efficiency of deduplication of effort would provide sufficient margin to reduce costs to the banks and be profitable for independent organisations.
