Richard Clayton and I recently presented evidence of the adverse impact of take-down companies not sharing phishing feeds. Many phishing websites are missed by the take-down company which has the contract for removal; unsurprisingly, these websites are not removed very fast. Consequently, more consumers’ identities are stolen.
In the paper, we propose a simple solution: take-down companies should share their raw, unverified feeds of phishing URLs with their competitors. Each company can examine the raw feed, pick out the websites impersonating their clients, and focus on removing these sites.
Since we presented our findings to the Anti-Phishing Working Group eCrime Researchers Summit, we have received considerable feedback from take-down companies. Take-down companies attending the APWG meeting understood that sharing would help speed up response times, but expressed reservations at sharing their feeds unless they were duly compensated. Eric Olsen of Cyveillance (another company offering take-down services) has written a comprehensive rebuttal of our recommendations. He argues that competition between take-down companies drives investment in efforts to detect more websites. Mandated sharing of phishing URL feeds, in his view, would undermine these detection efforts and cause take-down companies such as Cyveillance to exit the business.
I do have some sympathy for the objections raised by the take-down companies. As we state in the paper, free-riding (where one company relies on another to invest in detection so they don’t have to) is a concern for any sharing regime. Academic research studying other areas of information security (e.g., here and here), however, has shown that free-riding is unlikely to be so rampant as to drive all the best take-down companies out of offering service, as Mr. Olsen suggests.
While we can quibble over the extent of the threat from free-riding, it should not detract from the conclusions we draw over the need for greater sharing. In our view, it would be unwise and irresponsible to accept the status quo of keeping phishing URL feeds completely private. After all, competition without sharing has approximately doubled the lifetimes of phishing websites! The solution, then, is to devise a sharing mechanism that gives take-down companies the incentive to keep detecting more phishing URLs.
Here is our stab at devising a suitable sharing mechanism. We propose the creation of a members-only sharing club with compensation for net contributors paid for by net receivers. Take-down companies submit real-time copies of their entire feeds to a trusted third party (for the sake of argument, let’s assume that the APWG takes on this role). The APWG collates the individual feeds, marks the source of each submission (i.e., which take-down company) along with a timestamp. The APWG makes the amalgamated feed available immediately to all members. The members pick out phishing URLs impersonating their own clients, while ignoring the rest. Crucially, the expensive task of verifying phishing URLs and initiating take-down continues to be performed by the take-down company.
Periodically, the combined feed is audited to determine the reciprocity of contributions. Take-down companies provide a list of their clients to the auditor. The auditor then computes the number of phishing websites impersonating each take-down company’s clients that are missed by that company but identified by others. The auditor also tallies the time difference for phishing websites that are identified by others first.
For example, suppose bank A1 has hired take-down company A to remove phishing sites on its behalf, and bank B1 has hired take-down company B. Suppose 500 phishing sites impersonate A1, and that A identifies 400 while B identifies an additional 100 sites missed by A. Likewise, suppose another 500 phishing sites impersonate bank B1, and that B identifies 300 while A identifies an additional 200 sites missed by B. B has received a net of 100 useful phishing sites more from A than B has given to A. Consequently, B should pay A a previously-agreed ‘finder’s fee’ for identifying these extra 100 websites.
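The settlement arithmetic above is simple enough to sketch in code. This is only an illustration of the proposal, not a real implementation; the company names, the data structure, and the fee amount are all assumptions for the example.

```python
# Sketch of the auditor's finder's-fee settlement described above.
# extra_finds[(finder, protector)] = number of phishing sites impersonating
# `protector`'s clients that `finder` reported but `protector` missed.

def finders_fee_settlement(extra_finds, fee_per_site):
    """Compute each company's net payout (positive = net contributor,
    owed money; negative = net receiver, owes money)."""
    net = {}
    for (finder, protector), count in extra_finds.items():
        net[finder] = net.get(finder, 0) + count
        net[protector] = net.get(protector, 0) - count
    return {company: balance * fee_per_site for company, balance in net.items()}

# Worked example from the post: B finds 100 extra sites impersonating A's
# client; A finds 200 extra sites impersonating B's client. With an
# (assumed) fee of 10 per site, B owes A for the net 100 extra sites.
payouts = finders_fee_settlement({("B", "A"): 100, ("A", "B"): 200},
                                 fee_per_site=10)
# payouts: A is owed 1000, B owes 1000.
```

The timestamped, source-marked feed held by the trusted third party is what makes this audit possible: "who reported which URL first" is a fact the auditor can verify rather than take on trust.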
The ‘finder’s fee’ provides additional incentive for take-down companies to invest in better phishing website detection. Designed properly, such a sharing club can overcome the potential for free-riding that companies such as Cyveillance fret about, while increasing sharing to shorten phishing website lifetimes.
Some subtleties must be mentioned, however. If the finder’s fee is big enough, some companies may be tempted to cheat to minimize their payout. For instance, underperforming take-down companies could claim to have independently discovered missing data from their feed shortly after collecting it from the shared feed. This can be mitigated by adding a credible threat of detection: inserting a few fake phishing URLs that appear only in the shared feed. If a company claims to have ‘independently’ rediscovered these URLs, it will be caught cheating. Another issue is that the auditing system does incur some overhead, which could be avoided if sharing were made unconditional.
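The honeytoken check is essentially a set intersection: any overlap between the planted URLs and a company’s claimed ‘independent’ discoveries is evidence of copying, since the planted URLs exist nowhere but the shared feed. A minimal sketch, with made-up URLs:

```python
# Audit sketch: catch a company that copies from the shared feed and
# relabels entries as independent discoveries. URLs are illustrative.

def detect_copying(planted_urls, independent_claims):
    """Return the planted URLs the company claimed to have found on its
    own. These appear only in the shared feed, so any hit is evidence
    the 'independent' discovery was copied."""
    return planted_urls & independent_claims

planted = {"http://example.invalid/fake-phish-1",
           "http://example.invalid/fake-phish-2"}
claims = {"http://example.invalid/fake-phish-1",
          "http://real-phish.example/login"}
caught = detect_copying(planted, claims)  # one planted URL was claimed
```

In practice the planted URLs would need to look plausible and be rotated, since a cheater who can distinguish honeytokens from real phish defeats the check.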
To sum up, we recognize that many take-down companies will be reluctant to share. However, we feel that sharing is too important to the goal of tackling phishing to brush aside because of a few inevitable complications. For the good of protecting consumers, the anti-phishing industry should learn to co-operate!
10 thoughts on “How can we co-operate to tackle phishing?”
Financial news feeds are expensive if you want them real-time, but you can get the same data free if it’s time-delayed.
Would this model also work for phishing information feeds? It would allow some of the benefits of sharing, while protecting the investment of take-down companies.
@Pete: Timeliness is really essential for phishing URL feeds. Only sharing feeds between take-down companies after a time delay undermines the value of sharing to all sides. The reason take-down companies should share with each other is that each can learn about sites from each other’s feeds.
Our proposal would have take-down companies who gain more from sharing compensate those take-down companies who gain less.
Do ordinary users have effective ways to report phishing sites? I get a regular stream of phishing emails, and would happily report them if I knew where to send the URLs.
You can submit phishing reports to a number of places. The best place to send them is to the Anti-Phishing Working Group, who create a feed which is given to the take-down companies and banks. Here are instructions on how to submit to the APWG:
You can also submit to PhishTank, which is a volunteer group that creates a public feed:
Unfortunately, their feed is processed slowly, since PhishTank relies on volunteers to vote on each submission’s veracity.
Most banks will also accept phishing notifications when their own brand is targeted. You can usually find information from each bank’s home page.
I used to submit to phishtank.
When I processed wrongly-addressed mail (for accounts that don’t exist; using the luser_relay of postfix) I was getting 16,000 phishes in 12 hours – before I added a rate limiter.
One problem was people would vote as NOTAPHISH something that (if you read the HTML) clearly was a phish. It seems one genuine link in the mail is enough to get some people to approve of it.
Another twist on the cheating angle… if a take-down company gets paid a fee for every phishing site it finds, would that not be an incentive to create phishing sites?
Say take-down company A in your example creates (through suitable cutouts or middlemen) a series of phishing sites for bank B1 based on clones of real phishing sites. They can then ‘detect’ these ‘fake-phishing’ sites and sell the list to take-down company B at a profit. If they’re particularly disreputable they can clean up from the phished account details, but otherwise adding a slow trickle of fake-phishing sites to their feed may increase their profitability at the expense of the competition.
This then turns into an arms race based on who can create the most fake-phishing sites. With real phishing sites or bank customers being merely collateral damage.
@barbedtrebble: We have studied phishing URL submission and voting patterns in PhishTank in other research. See:
We didn’t observe any intentionally malicious voting at the time, but we did notice that PhishTank appears especially vulnerable to manipulation.
@Theo: Good point, disreputable take-down companies could also try to cheat by including fake phishing websites. Presumably this could be detected by a clever auditor, but even so, any sharing club can only work if there is a basic level of trust between take-down companies. The take-down companies I have encountered seem open to sharing so long as they are compensated if they contribute more than others.
I’m not certain your model addresses a fundamental problem.
There are five main classes of player in the game:
A, The Attackers,
B, The Banks,
C, The Customers,
D, The Takedown Co’s,
E, The legislators.
The primary relationship you are actually addressing is between the banks and their customers, by altering the relationship between competing entities (D_D, B_B).
There are, however, a number of other relationships: bank to bank (B_B), for instance.
Some of these relationships are very new (B_D) and have not really been tested. Others (A_B, A_C, A_D) are in a state of flux, others are reasonably expected to change (E_B, E_A, E_D), and others are effectively unknown (A_A, A_D) but assumed.
Your model only loosely addresses these additional relationships, and really only from the perspective of possibly increasing security on the bank to customer (B_C) relationship.
As noted by Theo Markettos, the takedown to takedown (D_D) relationship can quite easily be abused for competitive advantage in your model.
But importantly so can the other relationships.
In fact it could be quite beneficial for banks to deliberately abuse the bank to takedown company relationship (B_D) of their competitors, and likewise for takedown companies to abuse the B_D relationship of their competitors.
With a little thought you can realise that this would be a very effective place to gain either commercial advantage, or regulatory relief.
And it is not without past precedent. Further, you do not appear to have considered the nature of quite important relationships (B_B, A_A, A_D, etc).
You also need to realise that the bank to takedown company relationship is mostly to the benefit of the banks in terms of externalising not just risk but liability.
Also, that is likely to be a transitory relationship: if the legislators raise the bar in various ways for the banks (E_B) or takedown companies (E_D), then even with quite small changes the banks will in-house the activity fairly rapidly to reduce the risk/liability to themselves. The takedown companies are probably aware of this, so may well have only very short-term interests.
All that said, I am not saying that what you are doing is the wrong thing, just that it is the first hesitant step on the journey.
And like all journeys, it is best to have not just a clear view of the objective but alternative routes etc. in mind, should the original journey need to be changed due to changes in the environment.
I don’t think cheating would be a problem. Takedown companies wouldn’t be able to claim they spotted a phish first when they received it from the joint feed, because the joint feed processor knows who submitted what originally. Also, the bad guys create plenty of phishing sites; no company needs to do that, nor would anyone jeopardize their business by doing so.
There’s another wrinkle to this I haven’t seen mentioned. Many of the takedown companies are contractually forbidden to share the URLs with third parties. This is because the takedown companies often get spam data from ISPs who don’t want that info being shared with others for various reasons. The model would likely have to be restricted to phish sites discovered directly by the takedown company lest you have to persuade others of the same benefits (and convince folks to revisit contracts).
Lastly, I think the delayed sharing approach is viable. It works in the antivirus world. While not ideal from a detection-time perspective, it’s better than not detecting at all.
Another metric that might be worth considering is the quality of the data contributed by each collaborator. As you describe it, there is no disincentive to contribute URLs which do not and have never contained phishing sites; it would therefore be possible for a contributor to boost its score by contributing a large number of untested URLs (for example, every URL seen in an email, perhaps) in the hope that some turn out to be phishing sites!
This should be relatively simple to counteract, however, if the percentage of false positives in the feed from each contributor is taken into account. Some level of false positives is expected, of course, but abnormally high rates should be penalised.
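This penalty could be folded into the audit in several ways; one simple option is to credit only verified submissions, and to zero out a contributor whose false-positive rate crosses an agreed threshold. The threshold value and the function shape below are illustrative assumptions, not part of the original proposal:

```python
# Sketch: discount a contributor's tally by its verification quality,
# so flooding the shared feed with untested URLs earns nothing.

def weighted_contribution(submitted, confirmed, max_fp_rate=0.5):
    """Credit only confirmed phishing sites, and give zero credit if the
    false-positive rate exceeds the agreed threshold (assumed 50% here)."""
    if submitted == 0:
        return 0
    fp_rate = (submitted - confirmed) / submitted
    return confirmed if fp_rate <= max_fp_rate else 0

careful = weighted_contribution(100, 90)    # 90% confirmed: full credit of 90
spammer = weighted_contribution(1000, 50)   # 95% false positives: no credit
```

A smoother alternative would scale credit continuously with the confirmation rate rather than using a hard cutoff, which avoids cliff effects for contributors hovering near the threshold.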