Resumption of the crypto wars?

The Telegraph and Guardian reported yesterday that the government plans to install deep packet inspection kit at ISPs, a move considered and then apparently rejected by the previous government (our Database State report last year found their Interception Modernisation Programme to be almost certainly illegal). An article in the New York Times on comparable FBI/NSA proposals makes you wonder whether policy is being coordinated between Britain and America.

In each case, the police and spooks argue that they used to have easy access to traffic data — records of who called whom and when — so now that people communicate using Facebook, Gmail and Second Life rather than by phone, they should be allowed to harvest data about who wrote on your wall, what emails appeared in your Gmail inbox, and who stood next to you in Second Life. This data will be collected on everybody and will be available to investigators who want to map suspects’ social networks. A lot of people opposed this, including the Lib Dems, who promised to “end the storage of internet and email records without good reason” and wrote this into the Coalition Agreement. The Coalition seems set to reinterpret this now that the media are distracted by the spending review.

We were round this track before with the debate over key escrow in the 1990s. Back then, colleagues and I wrote of the risks and costs of insisting that communications services be wiretap-ready. One lesson from the period was that the agencies clung to their old business model rather than embracing all the new opportunities; they tried to remain Bletchley Park in the age of Google. Yet GCHQ people I’ve heard recently are still stuck in the pre-computer age, having learned nothing and forgotten nothing. As for the police, they can’t really cope with the forensics for the PCs, phones and other devices that fall into their hands anyway. This doesn’t bode well, either for civil liberties or for national security.

11 thoughts on “Resumption of the crypto wars?”

  1. Apparently a mathematician at GCHQ effectively invented public key encryption back in 1986 by discovering long number sequences, but there was no practical application for it. Around the same time, two Americans were studying the same thing and came up with something similar, but inferior to the GCHQ findings. The American study then went on to become the RSA data security algorithm in 1989, along with the now industry-standard ZIP (LZW) compression. I can’t confirm whether this story is true, but if it is, it just goes to show that as a country we don’t tend to capitalise on what we do come up with. Then again, GCHQ isn’t in the data security business…

  2. I think it’s a little unfair to suggest agencies are still Bletchley Park; the agility and ingenuity shown by BP far outstrips anything you’ll see from the current security authorities! 🙂

  3. It is quite apparent that the motivation for these measures, as was the case with most of the motivation to restrict good cryptology, is capturing more tax revenue. I can’t get excited about monitoring of the various social networks, and anyone dumb enough to use e-mail for anything sensitive probably deserves what he gets. That said, you can’t put the genie back in the bottle. Today, anyone who has a serious need (meaning that he is willing to spend some money) for secure communications can get what he needs, and this is not likely to change. Remember that, even at the level of major governments, most penetrations of communications security have been made either by physical penetration or because of misuse of the security mechanism.

  4. @John,

    GCHQ aren’t particularly silent anymore about their PKI trio. For example, IEEE award spin. Of course, it was 1973 not 1986 (the RSA paper was 1977) and there were non-public uses made of the discovery.

  5. @ Richard I. Polis,

    “Today, anyone who has a serious need (meaning that he is willing to spend some money) for secure communications can get what he needs, and this is not likely to change.”

    Simple answer: NO, they cannot.

    Secure communications has a number of factors that need to be considered.

    The first and most obvious is securing the contents of the communication. Yes, with the appropriate resources you can do this securely.

    Second, and perhaps more important for state-level -v- individual-level security, is protecting the fact that communications have taken place at all. This is generally a very hard problem on public networks without the use of a third party that has to be trusted by the users.

    However the service cannot be trusted by users, simply because a user in reality has no control over the service provider, whilst the government of the country the service is in has considerable control over the service, simply because it has, at a minimum, the use of the law to get the service to do its bidding.

    Unfortunately a government does not have this control over a service that is not based in its jurisdiction, and it is this issue that this proto-legislation is all about. In the past the issue did not really arise, due to the limited number of services, and could generally be solved by the state with simple traffic analysis of the individual users.

    Traffic analysis was initially thought up and developed at Bletchley Park, and it is a very powerful tool. In its simple form it generally has three dimensions not related to communications content:

    1. Communications “path”.
    2. Communications “times”.
    3. Communications “length”.

    You can use one or more of these dimensions to show that two or more individuals have communicated even if they have secured the content via the likes of “perfect secrecy” encryption.
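    As a rough illustration of that point, here is a minimal sketch, with invented timestamps and byte counts (no real protocol or dataset is assumed), of how an observer might link two parties from time and length metadata alone, without ever reading the content:

```python
# Hypothetical illustration: linking two parties from metadata alone.
# All timestamps and lengths are invented; content is never examined.
alice_sends = [(100.0, 512), (250.0, 1024), (400.0, 300)]   # (time, bytes) leaving Alice
bob_receives = [(100.4, 512), (250.3, 1024), (400.5, 300),  # events matching Alice's
                (310.0, 2048)]                               # unrelated traffic

def correlated(sends, receives, max_delay=1.0):
    """Fraction of send events matched by a receive of the same length shortly after."""
    hits = 0
    for t, n in sends:
        if any(abs(rt - t) <= max_delay and rn == n for rt, rn in receives):
            hits += 1
    return hits / len(sends)

print(correlated(alice_sends, bob_receives))  # 1.0 -> strong evidence of a link
```

    Even with perfectly encrypted payloads, a match rate near 1.0 across many events is strong evidence of a communicating pair.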

    With public networks, where data is not broadcast to all but routed between individuals, it is usually simplest to look for the source and destination “addresses” and see under which users they fall.

    The likes of major third-party services have put a big hole in traffic analysis of the “path”, simply because they are a hidden routing mechanism with an effectively unbounded number of users. Thus a user’s destination address is the service, not another user, and there might be thousands of users connecting to the service at any one time.

    Thus simple traffic analysis of the path fails, in a similar way to what TOR tries to achieve (ie decoupling the communicating users). The advantage of a service like FaceBook is that using it is not suspicious (almost the opposite, in fact). However, using a service like TOR is suspicious in many people’s view.

    Even though TOR is quite effective at obscuring the path, it is not so good on the time or length dimensions.

    The killer for TOR is the use of an interactive service (say a web service) by a user. If an observer can see the connection points for the user and for a service the observer thinks the user might be using, it is fairly easy to cross-correlate the timing to give a high probability of which user’s TOR “in path” connects to which TOR “out path” to a web service, and the same for the reverse path through TOR. This failing is not due to a flaw in TOR but to the need to reduce latency for an interactive service, so it applies to most networks like TOR.
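    A hedged sketch of that timing attack, with all timings invented: a constant network latency shifts every packet time by roughly the same amount, so the observer compares the *pattern* of gaps between packets entering the network with the gaps in each candidate exit flow.

```python
# Sketch of timing correlation against a low-latency anonymity network.
# Timings are invented for illustration; a fixed latency shifts all times
# equally, so inter-arrival gaps survive and can be matched.
def intervals(times):
    """Gaps between consecutive packet times."""
    return [b - a for a, b in zip(times, times[1:])]

def similarity(entry, exit_, tol=0.05):
    """Fraction of inter-arrival gaps that agree within tol seconds."""
    gi, ge = intervals(entry), intervals(exit_)
    matches = sum(1 for a, b in zip(gi, ge) if abs(a - b) <= tol)
    return matches / max(len(gi), 1)

client_entry = [0.0, 0.8, 2.1, 2.9, 5.0]   # packets entering the network
suspect_exit = [0.3, 1.1, 2.4, 3.2, 5.3]   # same pattern, shifted by latency
other_exit   = [0.3, 1.9, 2.2, 4.0, 4.4]   # an unrelated flow

print(similarity(client_entry, suspect_exit))  # 1.0 -> probable match
print(similarity(client_entry, other_exit))    # 0.0
```

    Interactive traffic makes this worse for the user: each click produces a distinctive burst pattern that survives encryption and re-routing.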

    However the likes of FaceBook have a big advantage over the likes of TOR. Importantly, the “service” the user is connecting to is effectively invisible inside their private network, so all an external observer gets to see is the user’s in-path and the user end of the return path; there is nothing time-wise to cross-correlate.

    More importantly, services like FaceBook act as a storage medium, which enables individuals to hold a conversation covertly by using a large random time difference in their individual communications with the service, making “time-based connection correlation” difficult at best.

    This leaves the “length” dimension, which is starting to be used to try to identify “illegal downloads” on P2P networks. Often a communications system does not allow for easy message padding and splitting, and the length or magnitude of the communication remains unobscured.

    Thus even though the communication might be decrypted and re-encrypted under different keys along the path, time-shifted and sent along multiple paths, monitoring the suspected users’ end points for message length can allow cross-correlation between the two users.

    However there is a problem with this on many services using HTTP or similar protocols: they can silently swallow parts of the communication, such as control characters, fake HTTP tags etc, prior to storing the message. Thus, provided the two communicating parties use encryption, they can pad their outbound message to any length they like and the service will crop it back to some arbitrary and unknown length, which the other user pulls off the service at some future time, thereby removing the message-length correlation for the outside observer.
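    As a rough illustration of this padding trick (the filler byte and the stripping behaviour are assumptions, standing in for whatever a real service silently discards):

```python
import random

# Sketch of defeating length correlation: the sender appends filler that the
# service silently strips (much as some HTTP front ends strip control
# characters), so the length seen on the wire never matches the length stored.
FILLER = "\x00"  # stands in for whatever the hypothetical service discards

def pad(message, max_extra=500):
    """Append a random amount of filler before sending."""
    return message + FILLER * random.randint(1, max_extra)

def service_store(submitted):
    """The service crops the filler before storing the message."""
    return submitted.rstrip(FILLER)

msg = "already-encrypted payload"
wire = pad(msg)
stored = service_store(wire)

print(len(wire) != len(stored))  # True: observed length no longer matches
print(stored == msg)             # True: the recipient still gets the message
```

    The observer at each end sees a different, randomly inflated length, so cross-correlating lengths between the two suspected users no longer works.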

    So even the innocent use of a service like FaceBook over HTTPS (yes, it can be forced with some effort) can quite easily stop the various dimensions of simple traffic-flow analysis. And as privacy concerns over the likes of FaceBook increase, the use of HTTPS will become the norm rather than the exception, and thus be effectively rendered “unsuspicious” or effectively “innocent”.

    Some considerable time ago the UK Government thought they had cracked the monitoring problem with RIPA, which allows the UK to demand the encryption keys from a user whose message has passed through any communications system connected to the UK. This was backed up with the threat of imprisonment for non-compliance.

    All well and good, but it does not work if a service is outside UK jurisdiction, or if one or both users don’t roll over when threatened (and let’s face it, which is preferable: a few years in jail for non-compliance, or many years to life for some criminal acts?).

    The service provider being out of UK jurisdiction is a real problem for the UK Government: they have been stymied in some investigations already and have had to make requests to the foreign government (in whose jurisdiction the service falls), which in turn compels the service to disclose through its own legislation. This obviously only works if the foreign government decides to help, which may well not be the case in future.

    Also, almost before the ink had dried on the minister’s signature on RIPA, people had come up with technical solutions to the key-disclosure issue. Thus RIPA can, with care, be negated in a number of ways, which makes it impotent when it is most likely to be needed by the UK government.

    So where is a government to go…?

    Well, China has shown one direction with the so-called “Great Firewall of China” and the (supposedly) mandatory use of Chinese-government-issued firewall software on users’ PCs (only there are a number of holes in the system). Australia had an idea, again for mandated software, but this time at the Internet Service Provider, blocking “blacklisted” sites. Various other governments are looking at something similar, except with “whitelists” instead.

    In the UK neither blacklists nor whitelists will sit well with various advocacy groups; there will be a lot of noise, and some MPs and Lords will listen. However there is the old “if you have nothing to hide” argument, along with the almost irresistible (to politicos) “if you knew what we know but can’t tell you” argument, which gives a potential crack.

    Put overly simply: if you can get at the full unencrypted contents of any data a user communicates, you don’t need to use traffic analysis or compel either user or service to disclose a message’s contents.

    And due to the way SSL/TLS works (or more correctly doesn’t), it is possible to strip away ordinary “innocent” encryption quickly and easily, thus obviating the user and service issues and also allowing attention to be focused on those using “suspicious” encryption.

    What the powers that be don’t seem to have twigged is that, due to privacy concerns, there are going to be “innocent” services set up that obviate deep packet inspection.

  6. My previous comment was based on rather scant information and something I didn’t really follow up. I don’t claim to have any authority on computing as a whole. I’m just concerned that we don’t seem to capitalise much on scientific research. I don’t possess any degrees or serious academic qualifications regarding computing (as folk can tell) however, I do feel I can contribute at least as a concerned user of technology. I’ve already had personal data compromised resulting in fraudulent activity and financial loss.

    The Internet is a text-based medium, and as text has to be shunted around to make it work, Ross et al are providing us with a forum for discussion. It is becoming more and more apparent to me that the public/wild/commercial Internet doesn’t work. The problem is the data transit itself. Shunting data around (like me posting this comment, for example) is a fundamental part of the Internet and how it works as a whole. When I hit the Submit button I have no idea how the text is moved around, but I do know it is via HTTP; then it’s probably parsed through a Perl script which takes all the + symbols out as if it were sent via plain text.

    I could digress even further and try to work out how it is moved, but as my IP address and email address are traceable, I don’t want to end up being challenged legally for such actions. I really don’t believe that PINs/Internet passwords are suitable in today’s climate of viruses and identity theft. There is yet another facebook story

    Also there is the fact that governments around the world actively encourage everyone to get online. It’s taken me more than 10 years to get this far and there is still a lot I don’t really understand. I missed the days of DOS and got to grips with Windows 95, then 98 and XP. The user skill/knowledge level is gradually diluted to no knowledge at all, which I added to.

    I used to show senior citizens how to use computers (mainly the Internet) and showed them how to search. The course I taught was based around the BBC Webwise Internet course (now BBC First Click), but this was later scaled down. It assumed that the computer was running Windows (NT at the time), and I took responsibility for the laptop computer. Buying a computer was also suggested to the “clients”. Problem was, that was as far as it was supposed to go, but those that did have computers required technical support which was way beyond the scope of the programme. I raised this with the other members of the steering group, and was advised to tell them that it wasn’t the service provided. However, where I could help out and sort out relatively minor problems I did so. That was also beyond the scope of what was intended. On one particular occasion an elderly gentleman rolled up with a £1500 laptop (back in the days when they were expensive!), which worried me as he could’ve been mugged for it; so this adds not just crypto but personal security issues too.

    The same thing applies to wireless hotspots too. Now that laptops and netbooks are even cheaper, the media still piles on the pressure (as does government) for people to get online. I’ve always been interested in computers and computing since a friend acquired a ZX81 back in 1979. I later got an original 1983 Acorn Electron which served me well until 1998. The scope of computing has changed fundamentally over the years, and I think that a lot of developments simply aren’t required, such as putting the operating system on hard disc instead of on ROM chips. This alone would help negate viruses, and a more hardcore approach would be required to damage the critical stuff that makes the computer work. Useful as the Internet is (and a big up to Berners-Lee for not trying to profit from his work), the commercial hijacking of it has led to all the problems we have now.

    I used to love trying to get magazine listings working, often resulting in failure, but now computing has evolved beyond recognition in a lot of respects, especially from the >_ prompt: stereo sound, DVD, video etc, all stuff I could only dream about on my old Electron. Ironically it still works, whereas the PC I type this on is my second, and both cost over 5 times as much as my parents paid for the Electron. I was just mastering 6502 machine code as Windows 95 came out. My apologies for rambling and digressing.

  7. > “One lesson from the period was that the agencies clung to their old business model rather than embracing all the new opportunities…”

    Any general suggestions about the direction NSA/GCHQ SHOULD take? Or citations to bibliographical references, for such suggestions?

  8. The proposal is no doubt a good idea. But imagine the workload of monitoring millions of users’ log files: the bad guys want to keep you busy with traffic analysis while they do something else. There is another problem with this approach: you can monitor network traffic within the UK, but how will you monitor international traffic? A cyber attack may be launched from a remote place (somewhere in a foreign country), and not every foreign country has these facilities for monitoring network traffic.

    There are so many other ways to abuse it.

  9. An intelligent algorithm will be required that alerts security staff when a person or group is doing something anti-social. If the plan goes through then I am sure it will create many job opportunities. Good education, good students and good academic staff are prerequisites for such a project. Furthermore, the line “Not knowing the law is no excuse” should be replaced with “Tell everyone about the law regarding cyber crime, the penalties and charges etc”. I want to help stop script kiddies and warn off the bad guys. Only professionals will probably mount a cyber attack anyway.

    All the best
