A backwards way of dealing with image spam

November 20th, 2006 at 08:01 UTC by Richard Clayton

There is a great deal more email spam in your inboxes this Autumn (as noted, for example, here, here and here!). That’s partly because a very great deal more spam is being generated — perhaps twice as much as just a few months ago.

A lot of this junk is “image spam”, where the advertisement is contained within an embedded picture (almost invariably a GIF file). The filtering systems that almost everyone now uses are having significant problems in dealing with these images and so a higher percentage of the spam that arrives at the filters is getting through to your inbox.

So higher volumes and weaker filtering are combining to cause a significant problem for us all :(

But I have an interesting suggestion for filtering the images: it might be a lot simpler to go about it backwards :)

So read on!

At a large UK ISP with which I am familiar, incoming email volumes were 6 million/day this time last year, 12 million/day in the summer, 16 million/day in October, and 26+ million/day on several days last week. viz: the amount of spam is way up!

At the same time, a lot of the spam relates to “pump and dump” schemes, where you are encouraged to buy obscure stocks, thereby raising their price, so that the spam senders who bought at a low price make money. Since there’s no need for a website URL in the spam (just a ticker symbol), as there would be if they were drumming up buyers for fake pills or mortgage leads, this removes a constant string that the email filtering systems can grab hold of, and it exposes the weaknesses of many of the text scanning algorithms (usually “Naive Bayes” schemes), which can be misled by the presence of many “good” words accompanying the “bad” ones.

However, the key change in spam in recent months has been the increasing incidence of image spam. A typical spam email now consists of a page of random text (often snarfed from news websites) which persuades the filtering systems that this is email that you’ll want to read — followed by a GIF which actually contains the spammer’s message (buy pills, purchase this stock, personalised Xmas cards, etc).

Image spam is a significant problem for spam filtering systems. Having parsed the text and concluded that it looks legitimate (or at least, not unusual) they would, until recently, ignore the GIF and pass the spam through to your inbox. Hence the combination of a lot more spam being sent and the rise of image spam has led to a significant problem in your inbox.

The spam filtering companies are starting to fight back. The early attempts created a cryptographic hash of the images, so that when they were sent again they would be recognised. The spammers then arranged that every image was different by adding little dots of colour, or by other techniques (some quite exotic) that would ensure that a computer thought that the image was fresh and new and had to be allowed through the filtering — but that a human eye would recognise as the same old advert for slimming pills, erection pills, or this week’s ticker symbol.
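To see why those early hash-based schemes were so easy to defeat: a cryptographic hash identifies an exact byte sequence, so flipping even a single byte of the image yields a completely different digest. A toy sketch in Python (the byte strings here are stand-ins for real GIF data, not actual spam images):

```python
import hashlib

def image_hash(data: bytes) -> str:
    """Return a hex digest that identifies this exact byte sequence."""
    return hashlib.sha256(data).hexdigest()

# A stand-in for the raw bytes of a spam GIF.
original = b"GIF89a" + bytes(100)

# The spammer flips one byte -- e.g. a single stray dot of colour.
morphed = bytearray(original)
morphed[50] ^= 0x01

image_hash(original) == image_hash(bytes(morphed))  # False: a new "image"
```

One changed pixel and the filter sees a fresh, never-before-hashed image, which is exactly the property the spammers exploit.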

The filtering companies have counterattacked this fuzziness (for example, IronPort claim to be doing well). They are now using character recognition software (or even wavelets) to try and deduce the message hidden within the images — hoping thereby to feed the text of those messages to their Naive Bayesian systems and hence once again be able to distinguish between ham and spam. But this is extremely processor intensive and, since there’s lots of spam to process, it is becoming a major resource issue for ISPs and others…

… but I have a much simpler solution, which seems so far to have been overlooked. Why don’t we just block every email with an image in it?

Don’t be silly, I hear the cries, there’s lots of legitimate email with images in it and we don’t want to block that!

I agree. But let’s examine the nature of that legitimate email. One major class of image-including email comes from users of Outlook (and other Microsoft email products), who can arrange for email to turn up accompanied by some “wallpaper” (so that the background is, for example, a pleasing shade of blue). But there’s only a few dozen wallpaper images — so why not create a cryptographic hash for each of these and then give wallpaper a free pass?

There’s then companies who, for corporate image reasons, send out a copy of their company logo with every email (you may think that’s clueless, but their marketing department begs to differ!) However, once again, there’s only a relatively small number of these logos AND THEY DON’T MORPH INTO NEW SHAPES ON EVERY EMAIL, so it is possible to envisage building a database of their cryptographic hash values and letting them through.

There’s then a lot of other oddments, filler spaces, fancy bars across the page, buttons, smileys and so on. But there’s really not very many of these, and they don’t morph, and so they can be added to the whitelist.

The key point is that cryptographic hashes are USELESS at recognising the spammer’s images because they are intentionally morphed to ensure that they will not be recognised by such a simple test. However, the legitimate images (wallpaper, logos etc) do NOT keep on changing, but remain constant. So once they have all been recorded into a trustworthy database they can be given a free pardon…

So what I suggest for the filtering companies is to build a database of “good” images (a few days scanning should pick out the candidate GIFs — anything you see twice the same might be a useful initial selector!). They can then provide somewhere for the marketing department to proactively upload their logos, and (once a human’s checked that no-one is cheating) every other GIF can be, by default, blocked.

The open community already has the systems it needs for this. Long ago when every advertising run used identical content for the emails, people used systems like Razor or DCC to discard incoming email that others had already identified to be spam. That doesn’t work terribly well anymore (because the spam sending engines morph the text sufficiently to fool the systems) but the infrastructure would be ideal for what I’m proposing!

Of course it’s not quite that simple, since there is a final class of image that regularly turns up in email — the JPEG of a new grandchild, the embarrassing shot of the last drunken Friday night, or even an impressive sunset from someone else’s recent holiday. So doubtless the spammers will regroup, replace all their malware on compromised machines, and start shipping JPEG images rather than GIFs. However, I suggest that that might be less of a problem to deal with. Firstly, corporates may be happy blocking JPEGs outright — they’re not especially common in official company email. Secondly, the filtering problem should be a little simpler — a character recognition program is unlikely to find any character shapes in a sunset, so the majority of images will require only superficial processing.

There’s doubtless many details to work out to create a viable scheme — but I suggest that seizing upon the property of “good” images (that they don’t change) looks like a better bet than attempting to pick information out of “bad” images, which will rapidly evolve protections against image processing techniques — perhaps ending up looking like visual CAPTCHAs (many weak, but possibly strong) and consuming a great deal of computing power to deal with!

Entry filed under: Security engineering

19 comments

  • 1. Joseph Bruno  |  November 20th, 2006 at 10:48 UTC

    Your new grandchild doesn’t have sharp edges. Text does. So edge-detect the JPEG and you’ll be able to classify it.

    But – hold on – isn’t the DCT itself an edge-detector? Won’t the power spectrum as revealed by the DCT coefficients be enough for classification, without even needing to decode the JPEG? Can you, by inspecting coefficients only, distinguish between text and grandchildren?

  • 2. Nick Towner  |  November 20th, 2006 at 11:47 UTC

    My impression is that people started putting their E-mail address in images to protect it from automatic trawling by spammers and that the spammers responded with the current wave of image spam.

    This is the underlying problem with technical solutions to spam: each technique can be turned around by the other side, and the arms race never ends.

  • 3. Piotr Zielinski  |  November 20th, 2006 at 12:46 UTC

    Another form of images some of us regularly receive are daily comics, Dilbert for example. They do have sharp edges and text. White-listing should take care of these, and corporates will be more than happy to block them I guess …

  • 4. darkcurrent  |  November 20th, 2006 at 13:44 UTC

    A more efficient (though clumsy) way is to actually put the controls in the hands of the user. Train the users and give them the tools to whitelist the email addresses that they receive email from.

    This method will work particularly well in a corporate environment where email received is usually from a known address.

    One has to recognize that no matter what technology / process you use (SPF, text recognition, hell even OCR), the fact remains that the spammer’s ability to adapt, morph and react wins by a wide margin.

    Just my 2c!

  • 5. gustavo  |  November 21st, 2006 at 12:41 UTC

    So far I have a good deal of protection using bayesian-like filters, both dspam and crm114. What I fail to see is the paid content filters or antispam solutions catching them.

    Every time, I see OCR and other options taken into account, dragging in lots of CPU time. Can you all remember when a mail server was just I/O bound?

  • 6. igb  |  November 21st, 2006 at 13:45 UTC

    I looked at blocking .gif this morning. I can do it for myself, but doing it sitewide fails: it’s used for shipping plans and diagrams for civils work. Damn.

  • 7. .$author.  |  November 21st, 2006 at 20:06 UTC

    Ineffective Spam & Spam Filtering…

    The Register had an interesting article about the lack of return for pump’n'dump merchants of all the spam they send out. Essentially it looks like a rare example of the tragedy of the commons having a beneficial effect. Since spamming zillions with …

  • 8. Jason  |  December 11th, 2006 at 16:05 UTC

    Great, but why not make it automatic?

    Apply traditional greylisting rules.

    Then run content analysis and finger-print any images and catalogue them, then temp reject the message if the image hasn’t previously been seen.

    Obviously this will require some analysis of spams to check if this will work out.

    Use the randomisation against them (again!)

  • 9. Faz  |  December 20th, 2006 at 09:08 UTC

    The other problems are also the hotmails and yahoos where they have signatures or adverts at the bottom of each email with emoticon images. These days even a link is considered spam under Fortinet rules!

  • 10. Richard Clayton  |  December 20th, 2006 at 11:13 UTC

    I don’t think the hotmails and yahoos (very Gulliver’s Travels!) will be creating different smileys every time — viz they are exactly the sort of identical images that will be straightforward to whitelist. I would be astounded if there were more than 100 or so images involved. Clearly, if the whitelist started growing towards the millions then the scheme would be infeasible — my contention is that it will be much shorter than that…

    … I do agree that if everyone scans their signature and appends it to their email then we’re looking at very long lists. From my experience, I don’t think that sort of behaviour is very common at the moment, and a default approach of “block all images” (which looks pretty attractive this month) is going to provide a certain level of disincentive to having that change.

  • 11. Rob  |  December 20th, 2006 at 18:57 UTC

    A bit OT, but I’d go so far as to say that appending scanned signatures is very uncommon, for good reason.

    I forget the statute (if any) underpinning this, but it seems to be pretty much taken as read these days that, for most purposes, typing one’s name at the end of an email counts as a signature.

    Incidentally the USPTO started recognising a thing called an S-signature a few years ago (for online applications). It consists of typing one’s name, bracketed by forward slashes, hitting enter, then typing it again. For example:

    /Fred Bloggs/
    Fred Bloggs

  • 12. Chris Lawrence  |  January 3rd, 2007 at 03:36 UTC

    How long will it be before spammers start doing this? I’ve already had this using text, where the text was a tiny font used as ‘pixels’ to spell out the real message, while the filters just picked up the text and passed it.

    Chris

  • 13. Tom Fuegi  |  January 5th, 2007 at 13:37 UTC

    Richard, Tom Fuegi here. It’s a good idea, but like most of the good spam-fighting ideas it just pushes the spammers “one more step down the path” while putting obstacles in the way of ordinary email.

    In this case the spammer’s answer to only allowing whitelisted GIFs is to use the whitelisted GIFs to spell out words, in a similar way to the technique Chris Lawrence describes in comment number 12. Spam fighting can then come back again, using the probability of such arrangements of GIFs and text occurring in non-spam messages, but we’d have saddled ourselves with a huge job of maintaining a list of every reasonable image or decoration and would have obtained only the usual brief respite.

    Also, many people are already fairly satisfied with the efficiency of existing tools for dealing with image spam. Very few unsolicited mails (from new correspondents) will legitimately contain an image attachment at all, and proper use of spamassassin means that my usual spammers never score less than six points with an image spam. Personally I would not find a hash whitelist useful; indeed I don’t even bother with the costly image-analysis any more. I can make a very strong presumption that an unsolicited small image is spam, and if the image is a GIF the presumption is always correct.

  • 14. Ray  |  January 5th, 2007 at 17:19 UTC

    What about people who use GIFs of their business card as their email signature?

  • 15. Richard Clayton  |  January 5th, 2007 at 17:37 UTC

    Ray said

    What about people who use GIFs of their business card as their email signature?

    I said in the original article…

    There’s then companies who, for corporate image reasons, send out a copy of their company logo with every email (you may think that’s clueless, but their marketing department begs to differ!) However, once again, there’s only a relatively small number of these logos AND THEY DON’T MORPH INTO NEW SHAPES ON EVERY EMAIL, so it is possible to envisage building a database of their cryptographic hash values and letting them through.

    … and that remains my view. It’s a database populating problem, and you either believe that’s tractable or you don’t.

    Chris & Tom’s comments (“but you can build a spam email out of multiple images”) have some merit — and I had not considered this idea — but you do need a lot of these images and that’s going to be a pretty clear distinguisher for emails you don’t want.

  • 16. Justin Yackoski  |  January 8th, 2007 at 17:55 UTC

    I agree with Tom, it’s just an arms race…

    If you block things with multiple images, or even block images completely, then they’ll start using stuff like http://www.omgili.com/captcha.php (but more advanced/efficient) along with some creative js/css to write hard to decipher things without images at all.

  • 17. Pangolin  |  January 8th, 2007 at 21:38 UTC

    I am presently filtering all mail with a .gif attachment, and have a whitelist of valid senders who may send a gif. This seems to work well and eliminates the stock spam.

  • 18. Mario  |  February 14th, 2007 at 21:45 UTC

    One of the things that the original article, and most of the above comments forget, is that if you start preventing images from appearing in e-mails, you are slowly crippling the way the Internet was supposed to work. HTML e-mail *is* a formal RFC. Some of us work in corporate environments where most content is from known sources and where most senders are also known. In my non-corporate time, I also deal with some non-profits, sports clubs and other bodies where messages come from various ISPs, using various e-mail software, and using different styles of composition (some rich text, some plain ascii, etc). Plus, many pictures, of the “grandchild” type, but also of screen shots (sharp edges), and random other subjects. You just cannot apply an anti-GIF policy to that environment. This is why I think this solution has no future. It is too drastic and applies the wrong solution to the problem. It is like banning backpacks in London just in case they may contain explosives.

  • 19. Rob Jefferis  |  May 23rd, 2007 at 17:07 UTC

    Mario i would agree with your comment of

    “It is like banning backpacks in London just in case they may contain explosives.”

    if 75% or more of the backpacks did carry explosives because that is probably the spam to legit ratio of pics coming in.

    I think you would agree that if 75% of the people wandering around London with backpacks on had explosives in them, you would be pretty keen on a ban then.
