apComms backs ISP cleanup activity

The All Party Parliamentary Communications Group (apComms) recently published their report into an inquiry entitled “Can we keep our hands off the net?”

They looked at a number of issues, from “network neutrality” to how best to deal with child sexual abuse images. Read the report for all the details; in this post I’m just going to draw attention to one of the most interesting, and timely, recommendations:

51. We recommend that UK ISPs, through Ofcom, ISPA or another appropriate organisation, immediately start the process of agreeing a voluntary code for detection of, and effective dealing with, malware infected machines in the UK.
52. If this voluntary approach fails to yield results in a timely manner, then we further recommend that Ofcom unilaterally create such a code, and impose it upon the UK ISP industry on a statutory basis.

The problem is that although ISPs are pretty good these days at dealing with incoming badness (spam, DDoS attacks etc) they can be rather reluctant to deal with customers who are malware infected, and sending spam, DDoS attacks etc to other parts of the world.

From a “security economics” point of view this isn’t too surprising (as I and colleagues pointed out in a report to ENISA). Customers demand effective anti-spam, or they leave for another ISP. But talking to customers and holding their hand through a malware infection is expensive for the ISP, and customers may just leave if hassled, so the ISPs have limited incentives to take any action.

When markets fail to solve problems, then you regulate… and what apComms is recommending is that a self-regulatory solution be given a chance to work. We shall have to see whether the ISPs seize this chance, or if compulsion will be required.

This UK-focussed recommendation is not taking place in isolation; there’s been activity all over the world in the past few weeks. In Australia the ISPs are consulting on a Voluntary Code of Practice for Industry Self-regulation in the Area of e-Security, in the Netherlands the main ISPs have signed an “Anti-Botnet Treaty”, and in the US the main cable provider, Comcast, has announced that its “Constant Guard” programme will in future detect whether its customers’ machines have become members of a botnet.

ObDeclaration: I assisted apComms as a specialist adviser, but the decision on what they wished to recommend was theirs alone.

9 thoughts on “apComms backs ISP cleanup activity”

  1. It’s good to see that something is being done about this, but I wonder if this is moving in the right direction? I suspect that voluntary efforts will fail (for the reasons outlined in the post), and recommendation 52 will have to be implemented. This will mean a centrally-developed, one-size-fits-all implementation mandated for all ISPs, regardless of how effective it actually is.
    Would it not be better to create economic incentives for ISPs (e.g. through fines) to come up with effective measures, rather than dictate a specific solution?

  2. I will put my neck out here and make a prediction…

    Both methods will fail to deal with the issue in the long term (although in the short term they will have an effect).

    My reasoning is based on the problem of:

    “code for detection of, and effective dealing with, malware infected machines in the UK.”

    Basically, how do you know that any PC is infected with malware without actively “hacking it” or committing one of a number of criminal offences?

    The answer is going to be by “watching its output behaviour” (not inspecting what it actually sends, which would be illegal and problematic).

    So the sending of spam, DDoS traffic, etc., as botnets currently do, will act as the indicator.

    How long before the botnet operators change their tactics to cheat the “mask”?
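
    To make that “mask” concrete, here is a purely illustrative sketch (the flow-record format, port and threshold are invented for the example, not anything an ISP actually runs) of flagging customers by outbound e-mail volume:

        # Illustrative only: flag customer IPs whose outbound SMTP flow
        # count in an hour exceeds a threshold. The record format and
        # threshold are assumptions made up for this sketch.
        from collections import Counter

        SMTP_PORT = 25
        THRESHOLD = 500  # outbound SMTP flows per hour; arbitrary example

        def suspect_hosts(flows):
            """flows: iterable of (src_ip, dst_port) pairs from one hour."""
            counts = Counter(src for src, dport in flows if dport == SMTP_PORT)
            return {ip for ip, n in counts.items() if n > THRESHOLD}

    A bot that rate-limits itself just below whatever threshold is chosen slips straight through, which is exactly the cheating I have in mind.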

    The solution to the problem is an economic one, of which there are two options.

    The first is “you pay for your output” across the board.

    The question is who you pay it to, and what they do with the money.

    The downside is of course that popular sites like the BBC will be hit hard by such a policy.

    As will AV companies and software producers sending out endless high-volume updates.

    However the likes of the “Premium News” providers are starting to put up paywalls anyway…

    The real problem is human nature (specifically greed): if something is “effectively free” it will be abused by a small percentage of the population, who will use it disproportionately to others for their own gain.

    Once they have an established business model they will try every trick they can to avoid any “fair use” measures. It is, after all, one of the main reasons we have botnets.

    Which brings you around to the second option: penalties for inappropriate use, or “fines”.

    The first problem with this is establishing the rules such that the impact on the community is minimised. However, vested interests will fight tooth and nail to defend their current business model…

    Then of course comes the issue of enforcement, and on whom: the “unfortunate victim” whose machine has been infected, or those responsible directly (malware writers) or indirectly (shoddy software suppliers). I would bet it would be the victim…

    Oh, and there is a significant danger in 52: it will be used by the Treasury as “offset income”. Thus Ofcom or whoever will be given statutory rights of inspection and penalties, and it will be used as a way to reduce the money currently paid by the Treasury…

  3. @Clive
    So the sending of spam, DDoS traffic, etc., as botnets currently do, will act as the indicator. How long before the botnet operators change their tactics to cheat the “mask”?

    Well if the botnet doesn’t send spam or perform DDoS attacks, then the damage it does is really rather limited — and we’ve won anyway!

  4. @ Richard,

    “Well if the botnet doesn’t send spam or perform DDoS attacks,…”

    Spam and DDoS are not the only things they can do with it, hence my “etc”. It is this “what else” that actually worries me, not spam and DDoS (which, whilst annoying, can to a certain extent be controlled). I have a few ideas on what might happen, but I don’t want to advertise them for obvious reasons.

    Personally I think spam and DDoS happen because the cost is virtually zero to the botnet operator and, importantly, to the owner/operator of the “owned” machine as well.

    Those points aside, as the ISPs are reluctant to take action currently (and to be honest, who can blame them?) the botnet operators currently have little or no reason to be covert about the individual bots’ output of spam.

    Thus it is “uncharacteristic traffic” from the bot, and relatively easy to spot currently.

    If ISPs get the legal clarity etc. they feel they need to proceed, then initially the bots will get shut out.

    However the bot operators have an investment in their network and will not wish to lose it. Therefore they are likely to rapidly change the network’s output to a less obvious form, to get inside the mask.

    The ISPs in turn will put in place a more restrictive mask, which will cause the botnet operators to change again.

    We enter a new countermeasure / counter-countermeasure (CM/CCM) war. We have seen this type of war before with receive-side (RX) spam filters.

    Past history tells us that it will be protracted, and that the cost of filtering is grossly asymmetric in favour of the botnet operator. Worse, the cost tends to go up exponentially versus long-term effectiveness.

    Additionally, at some point the ISPs’ customers are going to complain that they are being affected by the filter masks.

    Simplistically, rather than get into a protracted war, take an approach that changes the cost equation.

    Such as changing that “distribution cost” from zero to some realistic figure; then the “victims” who have had their machines “owned” will have an incentive to do something about it, as they are bearing the cost.

    However, as a suggestion for the direction of a potential solution, it is just a hypothesis.

    A practical solution would, just from the technical side, be fraught with difficulties.

    And from the political side, probably impossible without some kind of statutory obligation. (Mind you, work out some way the Treasury would benefit from it and the legislation would be in place by, oh, Monday next week 😉)

    Jokes aside, the abuse of Internet resources is not just due to spam etc.

    Think about just how much bandwidth “patch Tuesday” takes up, or the latest “AV update”, or the level of traffic from various media and entertainment networks.

    Comparatively, they are “freeloading” just as much as the botnet operators.

    And this “freeloading” is stopping innovation, as there is no cost incentive to change the way things are currently done.

    I think you would find that many, many software suppliers would change their development model quite rapidly if they had to actually pay a realistic “distribution cost” for shipping patches etc.

    And for all I say above, I’m very much against changing the current method of paying for the Internet, simply because setting up the accounting and payment systems would become prohibitively expensive.

    And this is my second major concern: “is the cost of solving the problem going to be worth it?”

    Annoying as spam and DDoS are, the current method of dealing with them has been a free-market success in many ways, in that it has opened a new competitive marketplace that is actually less expensive than the alternatives are likely to be. Further, as it is responding to a need rather than a legislative requirement, it is agile in its response to rapidly changing conditions.

    In reality spam and DDoS can be likened to “stock loss” in the retail industry. It is an unavoidable cost of doing business that has a tipping point where the cost of elimination is greater than the loss. Therefore you accept a trade-off of a certain amount of undesirable activity for a moderate cost of keeping it at a given level.

  5. I haven’t read the rest of the report, but the highlighted recommendations are just wrong. They might make work for security consultants, but they won’t reduce malware infection.

    An endpoint that is vulnerable because its operator is too lazy to patch is no different than the zombies that are mistakenly sending the traffic in the first place, because their operators are too lazy to patch. Is one group better-connected politically than the other? If upgrading security is such an onerous task for the vulnerable endpoint that it requires the assistance of regulators, how is this task less onerous for the pwned endpoints? Will the vulnerable endpoint notice the problem hasn’t gone away even after its ISP has raised rates to pay off the compliance mafia? Why doesn’t the vulnerable endpoint already pay its ISP extra to filter obnoxious traffic, and leave the plain-vanilla connection for those robust hosts that can handle it? Since we’re talking about UK regulators here, all the vulnerable endpoints that we care about are in the UK. Can we say that all pwned endpoints of interest will be similarly located? How is that going to work?

    On the other hand, an upstream that doesn’t want to handle obnoxious traffic may either drop it on the inbound or incent its downstreams and peers not to send it. Either of these actions it can perform unilaterally, without regulatory assistance.

    From your response to the report, I gather that the perceived problem is ISPs relaying bad traffic from their customers up to higher-tier operators, for example BT. Why doesn’t BT just renegotiate its terms of service with such ISPs, to charge them more for obnoxious traffic? I conclude that it doesn’t care enough to do so, it finds filtering in-house more effective, or it sees commercial opportunities (after all, it has a consultancy!) in this sort of regulatory mandate. I don’t have much sympathy for this last motivation.

    If we were good enough at characterizing obnoxious traffic to issue fines in a reliable fashion, we’d be good enough at recognizing it to just drop it on the floor. Adding layers of bureaucracy to this problem will not solve it. Any sort of fine regime will be capricious and unreliable. I’m afraid, too, that such a program is the camel’s nose of anti-neutrality. (The same could be said of my suggestion to renegotiate terms of service, but regulation seems worse on this count than negotiation.)

    Hey Clive, I like these arguments of yours on this issue much better than what I’ve seen from you on Schneier’s site! Keep fighting the good fight man! b^)

  6. @Jess

    Why doesn’t the vulnerable endpoint already pay its ISP extra to filter obnoxious traffic

    Modern malware doesn’t impact the host, so there’s no incentive on the host to deal with it. However, the host does pollute the rest of the Internet, so we need to find ways to ensure that the host is cleaned up. In practice that will involve the ISPs helping, but this costs them money and so they are generally less than enthusiastic.

    From your response to the report, I gather that the perceived problem is ISPs relaying bad traffic from their customers up to higher-tier operators, for example BT.

    Few ISPs are downstream of BT, so that’s not the issue. If you mean that the transit networks should be dealing with the traffic — it’s impractical to filter traffic in the centre of the network. We need to deal with it at the edges.

    If we were good enough at characterizing obnoxious traffic to issue fines in a reliable fashion, we’d be good enough at recognizing it to just drop it on the floor.

    Doesn’t work like that. Spam is often detected and reported by individuals who find it in their inbox. It wasn’t filtered by that ISP and won’t be filtered by others. Equally, botnet members are often found by examining command and control interchanges, not by looking at the bad traffic they’re sending out.

  7. Richard, thanks for responding. Your responses are the traditional ones, but from my perspective they don’t really hang together. On the one hand, you speak of pollution of the internet. On the other, you point out that malware has little effect on the pwned host, the ISP, or the tier ones. Which leaves what, the vulnerable target host? Then the vulnerable host should apply some patches. If the operator doesn’t know how, she should hire someone who does. If no patches exist, then how on earth can the pwned source host be fixed either?

    I get your point that filtering is most practical at the edge; that was why I suggested that if the vulnerable endpoint really cares about this “problem”, it would pay some service provider (maybe its ISP?) to operate a firewall on its behalf.

    I take your word for it that intercepting control messages is the state of the art in botnet discovery. Isn’t this something that any network operator could do, far more easily than a consumer? After the operator determines that one of its IP addresses is behaving badly, it just drops everything from that IP, and when the customer calls in to complain she is told to fix her box. This is then a TOS issue, and there is still no need for regulation.

    If the network operator has insufficient incentive to implement such a scheme, you keep following the packets until you reach someone who does have enough incentive. I contend that this trail will only stop when you reach the vulnerable target host, or its immediate provider. Since that’s where the problem exists, that’s where it should be fixed. In a global context, that’s the only place it can be fixed.

    I don’t think the spam analogy works well for your argument. Spam was a problem as long as we worried about whitelists, blacklists, open forwarders, etc. As soon as we started using email providers with decent spam filtering (gmail, in my case), the problem disappeared. I can’t remember the last time I clicked “report spam”, although I know that long ago 90% of the mail I got was spam. Sure there is still some gigantic volume of spam out there somewhere, and some minuscule population of morons whose responses to it make it worthwhile, but the majority of us just don’t have to worry about it. Most ISPs make you sign up to run your own mailserver before they’ll forward its traffic, so if they keep an eye on those who have signed up they don’t have to worry about capacity problems from spam. If a higher-level carrier doesn’t like the email volumes it gets from an ISP, it can start raising fees until the problem is fixed.

    If the problem isn’t serious enough for any interested party to fix it without outside incentives, how serious is it? I defer to the judgment of the network operator that actually sees the traffic.

  8. @Jess

    I get your point that filtering is most practical at the edge; that was why I suggested that if the vulnerable endpoint really cares about this “problem”, it would pay some service provider (maybe its ISP?) to operate a firewall on its behalf.

    Operating a firewall on “vulnerable hosts” does not remove the damage done by botnets. They’re doing gigabit DDoS attacks, sending billions of spams, hosting criminal websites that are hard to take down… for none of these issues is a firewall relevant.

    I take your word for it that intercepting control messages is the state of the art in botnet discovery. Isn’t this something that any network operator could do, far more easily than a consumer?

    ISPs can do this detection (and indeed Comcast now does so by looking at DNS traffic), but what’s the incentive for an ISP to bother when the attacks generally go outward, and all you do is create a big customer support headache? ISPs, as I noted, are beginning to review this approach, but only in a handful of countries so far.
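
    As a minimal sketch of what such DNS-based detection amounts to (the domain list and log format here are invented for illustration; this is not Comcast’s actual system):

        # Illustrative only: match customers' DNS queries against a feed
        # of known command-and-control domains. The blocklist contents
        # and the log format are assumptions made up for this sketch.
        CNC_DOMAINS = {"cnc.example.com", "update.example.net"}

        def infected_customers(dns_log):
            """dns_log: iterable of (customer_ip, queried_name) pairs."""
            return {ip for ip, name in dns_log
                    if name.lower().rstrip(".") in CNC_DOMAINS}

    The matching is trivial; the expense is in maintaining a trustworthy domain feed and in the customer contact that follows each detection.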

    I don’t think the spam analogy works well for your argument.

    Then you have no idea of the costs that the spam load now imposes on mail system providers, or that the vast majority of spam comes from botnets. By all means be happy with gmail, but that’s not what most businesses, or even most consumers, use.

  9. @ Richard,

    I don’t know if you have seen this,

    http://searchsecurity.techtarget.com/tip/0,289483,sid14_gci1372715,00.html

    But it makes a similar point to the one I made above about “the etc” (so I guess the cat’s whiskers are visible at the top of the bag now 😉)

    So, a bit more detail on why my take is that this botnet stealth problem is going to creep up on us and hit us very hard.

    First off, they are different for a couple of reasons:

    1. Our attention is diverted to the big botnet DDoS attacks against services and spam pushing.

    2. Micro-botnets are used to target information, not services.

    The DDoS attacks can be likened to the equivalent of the old site defacement. The gain for the botnet operator is publicity more than monetary. No stealth is intended; the botnet machines are easily identified.

    As for spam, likewise it’s the same old game, but the distribution is now distributed. Like the DDoS, no stealth is intended by the botnet operator.

    The point is, these large botnets advertise their location by the sheer volume of traffic they send out. In the long run this will be self-defeating for the botnet operators.

    To an extent I agree with Jess: an end user (who gets owned) on the outer edge of a network does not directly see the cost of the botnet they are part of. Therefore their knowledge of the problem is low, and they don’t see the spam etc.

    Micro-botnets, on the other hand, are used to get at information that has value in its own right, and are thus used for cracking by relay etc.

    That is, they are used for enumerating a target, looking for vulnerabilities, or sending highly targeted emails etc.

    The traffic they send is low and stealthy, and slips easily beneath the noise floor, and thus the radar, of traditional network intrusion detection.

    For instance, if you are a cracker with a new zero day you will want to find out if the network you are about to use it on is actually a honeypot, thus keeping the zero day’s shelf life long.

    You therefore send carefully timed network packets, and from the returned timestamps etc. you can see if the network is “virtual”, not real, which gives a high probability that you don’t want to run your zero day against it.
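
    A crude sketch of that sort of timing probe (the host, port, probe count and the use of connection-time jitter are all assumptions for illustration; real fingerprinting of virtual machines is far subtler):

        # Illustrative only: measure jitter in TCP connection setup times;
        # unusually high variance, compared with a baseline for known real
        # hosts, is treated as a hint that the target may be virtualised.
        import socket
        import statistics
        import time

        def probe_jitter(host, port=80, probes=20):
            rtts = []
            for _ in range(probes):
                start = time.perf_counter()
                with socket.create_connection((host, port), timeout=2):
                    pass  # connection established; we only want the timing
                rtts.append(time.perf_counter() - start)
                time.sleep(0.25)  # pace the probes so they stay unremarkable
            return statistics.stdev(rtts)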

    Likewise, controlling the botnet can be done in a more subtle, almost invisible and effectively unstoppable manner, through the likes of Google or other search engines.

    That is, an end-user machine making an occasional search on Google etc. is expected traffic, and thus not cause for comment by either end. Thus an occasional “google” by the bot program is not going to cause any alarms to trigger.

    Therefore the botnet operator needs to get their botnet commands into the search engine cache, so they can be found by the bots.

    The easiest way to do this is to post a message to well-known, high-volume blogs that the search engines hit every couple of hours.

    You put low-bandwidth information into the post via stego, such as the use of commas and semicolons, or other methods such as spaces before and after full stops etc.
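
    As a toy sketch of such a channel (entirely illustrative): encode one bit per sentence as a single or double space after the full stop.

        # Illustrative only: hide one bit per sentence in the spacing after
        # each full stop. Two spaces encode a 1, one space encodes a 0.
        # The receiver is assumed to know the message length; the final
        # sentence acts as a terminator and carries no bit.
        def embed(sentences, bits):
            assert len(bits) < len(sentences)
            out = []
            for k, s in enumerate(sentences):
                sep = ".  " if k < len(bits) and bits[k] else ". "
                out.append(s.strip().rstrip(".") + sep)
            return "".join(out).rstrip()

        def extract(text):
            bits = []
            pos = text.find(".")
            while pos != -1 and pos + 2 < len(text):
                bits.append(1 if text[pos + 1 : pos + 3] == "  " else 0)
                pos = text.find(".", pos + 1)
            return bits

    A few dozen sentences carry a few dozen bits, which is ample for a short command or a pointer to the next rendezvous, and the post still reads as ordinary text.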

    This allows a genuine-looking post that has no obvious links or other such giveaways to the blog owner, so they are not likely to take the post down until well after all the search engines have found it.

    The blog owner does not see any difference in their traffic, so does not get alerted that way, as the search engine services the individual bots, not the blog site.

    The botnet operator can use as many different blogs as they like provided they are hit by the search engine robots.

    The botnet operator can also put an extra layer in: some sites use a Google add-in to search their site etc., so they can act as a proxy for the user.

    Some of these websites really don’t stop open searches being made through them (much like the old open sendmail relays).

    The question then is just how these micro-botnets are going to get found, and how you stop them.

    Effectively they will act as anonymous “force multipliers” for the more savvy and focused crackers, with an extremely low probability of detection.
