House of Lords Inquiry: Personal Internet Security

August 10th, 2007 at 00:01 UTC by Richard Clayton

For the last year I’ve been involved with the House of Lords Science and Technology Committee’s Inquiry into “Personal Internet Security”. My role has been that of “Specialist Adviser”, which means that I have been briefing the committee about the issues, suggesting experts who they might wish to question, and assisting with the questions and their understanding of the answers they received. The Committee’s report is published today (Friday 10th August) and can be found on the Parliamentary website here.

For readers who are unfamiliar with the UK system — the House of Lords is the second chamber of the UK Parliament and is currently composed mainly of “the great and the good” although 92 hereditary peers still remain, including the Earl of Erroll who was one of the more computer-literate people on the committee.

The Select Committee reports are the result of in-depth study of particular topics, by people who reached the top of their professions (who are therefore quick learners, even if they start by knowing little of the topic), and their careful reasoning and endorsement of convincing expert views carry considerable weight. The Government is obliged to formally respond, and there will, at some point, be a few hours of debate on the report in the House of Lords.

My appointment letter made it clear that I wasn’t required to publicly support the conclusions that their lordships came to, but I am generally happy to do so. There are quite a lot of these conclusions and recommendations, but I believe that three areas particularly stand out.

The first area where the committee has assessed the evidence, not as experts, but as intelligent outsiders, is where the responsibility for Personal Internet Security lies. Almost every witness was asked about this, but very few gave an especially wide-ranging answer. A lot of people, notably the ISPs and the Government, dumped a lot of the responsibility onto individuals, which neatly avoided them having to shoulder very much themselves. But individuals are just not well-informed enough to understand the security implications of their actions, and although it’s desirable that they aren’t encouraged to do dumb things, most of the time they’re not in a position to know if an action is dumb or not. The committee have a series of recommendations to address this — there should be BSI kite marks to allow consumers to select services that are likely to be secure, ISPs should lose mere conduit exemptions if they don’t act to deal with compromised end-user machines, and the banks should be statutorily obliged to bear losses from phishing. None of these measures will fix things directly, but they will change the incentives, and that has to be the way forward.

Secondly, the committee are recommending that the UK bring in a data breach notification law, along the general lines of the California law and those of 34 other US states. This would require companies that leaked personal data (because of a hacked website, or a stolen laptop, or just by failing to secure it) to notify the people concerned that this had happened. At first that might sound rather weak — they just have to tell people; but in practice the US experience shows that it makes a difference. Companies don’t like the publicity, and of course the people involved are able to take precautions against identity theft (and tell all their friends quite how trustworthy the company is…). It’s a simple, low-key law, but it produces all the right incentives for taking security seriously, and for deploying systems such as whole-disk encryption that mean that losing a laptop stops being synonymous with losing data.

The third area, and this is where the committee has been most far-sighted, and therefore in the short term this may well be their most controversial recommendation, is that they wish to see a software liability regime, viz: that software companies should become responsible for their security failures. The benefits of such a regime were cogently argued by Bruce Schneier, who appeared before the committee in February, and I recommend reading his evidence to understand why he swayed the committee. Unlike the data breach notification law, the committee recommendation isn’t to get a statute onto the books sooner rather than later. There are all sorts of competition issues and international ramifications — and in practice it may be a decade or two before there’s sufficient case law for vendors to know quite where they stand if they ship a product with a buffer overflow, or a race condition, or just a default password. Almost everyone who gave evidence, apart from Bruce Schneier, argued against such a law, but their lordships have seen through the special pleading and the self-interest and looked to find a way to make the Internet a safer place. Though I can foresee a lot of complications and a rocky road towards liability, looking to the long term, I think their lordships have got this one right.

Entry filed under: News coverage, Politics, Security economics

24 comments

  • 1. Martin Kochanski  |  August 10th, 2007 at 08:30 UTC

    As a software vendor, I have always argued for software liability. Adopting it will force the industry to grow up.

    The most frustrating aspect of software support is that the bugs that users encounter are mostly not ours. We spend a great deal of effort in identifying where a bug lies and in working round it when we can: but all too often a workaround isn’t possible.

    To take a recent example: a particular make of scanner comes with a driver that allegedly allows “one-button” scanning, as the TWAIN standard mandates. The “one-button scan” function is indeed offered, and the appropriate button appears on the screen. Pressing that button results in nothing being scanned; all subsequent attempts at scanning cause the entire application (whether it’s Cardbox or another program) to crash in a way that cannot be intercepted or protected against.

    And there it ends. The driver was written in a hurry the night before the product was launched, minimally adapting an already bug-ridden sample program. The manufacturer has no resources to invest in providing technical support for a piece of software that was written once, some time ago, by someone unidentifiable who has probably left the company by now. By now all technical effort is being concentrated on a new model: in any case, modifying an already shipped driver generates no revenue and risks adding bugs.

    The example I have given is not security-related, but it is an example of how it is essential for mechanisms to be in place to apportion blame for bugs, anomalies and vulnerabilities. Since software liability cannot work unless we know who is liable, adopting it will force such mechanisms to come into existence.

    It is frequently complained nowadays that supporting software has ceased to be a branch of engineering and is becoming a branch of biology instead. With a dozen or so software components involved in almost any action you perform on a computer (such as typing this comment), this is inevitable unless proper engineering practices are followed. If software liability provides each manufacturer with an economic incentive to prove that he was not liable for a particular problem, this will be a step in the right direction.

  • 2. Ian Gorrie  |  August 10th, 2007 at 15:24 UTC

    Finally. I think it’s fantastic that these conclusions set direction for making software companies responsible for their work.

    One should not have to be a computer security expert to use the internet. One should not have to be a fraud expert to use a bank or a brokerage.

    Having a real money cost incentive for the creation of quality products will very naturally drive the market to improve its handling of security in product development and maintenance. It may even affect the research and disclosure marketplace in a healthy way as well.

  • 3. slightly hopeful cynic  |  August 11th, 2007 at 08:26 UTC

    I haven’t read all the lengthy stuff … but won’t this just result in companies shipping e.g. web browsers marked “not to be used for internet banking”?

  • 4. Richard Kelsall  |  August 11th, 2007 at 11:50 UTC

    A data breach notification law would be excellent. I am very much in favour.

    But I don’t understand the bit about “software companies should become responsible for their security failures”. I read Bruce’s testimony and his legal knowledge does not seem to extend beyond the USA. As someone who writes and sells software I would be very surprised if my programs are not subject to the English common law of negligence, established when a certain hapless snail drowned in ginger beer. Are you saying the law of negligence does not apply to software?

  • 5. Richard Clayton  |  August 11th, 2007 at 13:27 UTC

    To what extent (NB: IANAL) negligence applies will depend upon the contract you form with your customers. In the UK you cannot of course exclude liability for death or personal injury (Unfair Contract Terms Act 1977), but it is common for software to be supplied “as is” without any guarantees of performance or specifications that will be met.

    It’s also common for such terms to be contained in “click-through” licenses — although many more cautious companies use sealed packages… But that’s straying from the point, which is that if your software contains a security vulnerability, a buffer overflow say, which — in 2007 — any half-way competent toolset (or code inspection) ought to identify, then the users of the software have no practical recourse against the vendor — either for negligence or indeed for whether the software was “fit for purpose”.

    This is very convenient indeed for the software industry, but less so for everyone else.

  • 6. Richard Kelsall  |  August 11th, 2007 at 16:23 UTC

    Ah, thank you, we certainly shouldn’t be allowed to exclude ourselves from negligence. That should be changed. Maybe we do need a statutory software liability regime to rectify this. It should be general purpose though, not specific to security problems.

    Horrible things click-through licenses. I once thought it would be fun to insert some outrageous term in the middle of a long complex license ‘I promise to wear a large green hat every Friday for the next ten years’, kind of thing. I wonder what I could include that would be legally enforceable …

  • 7. Richard G Brown  |  August 12th, 2007 at 13:03 UTC

    Can you point me at a link where the consequences of implementing recommendation three were discussed please? Actions are not without consequences and I would like some reassurance that the multiple bad things that would flow from implementing such a rule have been properly considered.

  • 8. Richard Clayton  |  August 12th, 2007 at 13:16 UTC

    The report is linked from the main article. Recommendation 3 is merely my counting; you’ll find what the Select Committee actually recommended at paragraph 4.41, and the discussion at paragraphs 4.25 through 4.37.

    As to “multiple bad things”, that will of course depend on the detail, and it may take a decade or so to drill down into that. However, to take two examples, the automobile and pharmaceutical industries survive perfectly well with complex liability regimes — though of course neither industry is organised in the same way as it was in, say, the 1920s…

  • 9. Richard G Brown  |  August 12th, 2007 at 13:21 UTC

    “However, to take two examples, the automobile and pharmaceutical industries survive perfectly well with complex liability regimes”

    Well, not quite.

    The ultra-expensive, risk-averse approach to pharmaceutical regulation almost certainly leads to more deaths than it saves: you also have to consider the patients that die before a life-saving drug is approved and those drugs that have life-saving value but whose merits become marginal after the costs of approval. If software liability ended up like pharmaceutical regulation, we should consider that we have failed.

  • 10. Richard G Brown  |  August 12th, 2007 at 13:23 UTC

    Sorry – should have thanked you for the pointer.

  • 11. Igor Drokov  |  August 12th, 2007 at 16:01 UTC

    One big difference between traditional industries and software is that the value (and respectively liability) chain is much more convoluted in the “virtual world”.

    A car maker might have thousands of suppliers, but they “own” the end-product. As far as consumers are concerned, if the brakes don’t work they can sue the car company even if the actual fault is caused by some third-party software controlling them. This is also one of the reasons why car makers used to void their warranty if the customer used after-market parts etc.

    In the computer industry this approach doesn’t fit. Consumers buy the product — a computer with a diverse set of hardware, software and connectivity options — but no single company “owns” the whole set. If anything goes wrong, whose job would it be to find out where the buck stops? Was it a bug in the browser or the operating system or the hardware or a combination of all three?

    With current hardware capabilities virtualisation is a reality, and so is the complexity of tracking down the root cause of problems. It is unfeasible to expect a software vendor to test for all imaginable platforms a customer might use, and who would be able to evaluate consumer claims against those of vendors? A friend of mine recently received an abusive email claiming that visiting his web site “destroyed” the user’s computer. The web site is absolutely fine, but if the proposed legislation were in place, who would be tasked with evaluating a flood of similar complaints from confused users?

    In my opinion, a strict software liability regulation will most likely:

    - cause software vendors to shift liability back to consumers by claiming they use an unsupported hardware/software combination.

    - create more “packaged” limited-functionality computer platforms: this is already happening in the form of cable TV boxes that can record and play videos, gaming consoles that can browse the web, etc.

    - finally see the arrival of personal network computers that have no functionality apart from displaying content and connecting to the network (as computer terminals used to be)

    In any case, it would be interesting to see how this develops; provided it doesn’t kill innovation and competition, any disruption is good for the industry and consumers.

  • 12. Richard Clayton  |  August 12th, 2007 at 16:28 UTC

    Do have a look at Bruce Schneier’s evidence as I suggested, where some of this was discussed.

    Yes of course establishing liability will sometimes be complicated — but courts deal with complicated things every day. Yes of course some companies will create low functionality devices, but this may not improve their market share. Yes of course some companies will try and dump the risk back onto consumers — but the legislature seldom tolerates consumers being mistreated for all that long, so this would only be short term.

    Clearly the world will not be the same when a liability regime is established. However, I think it’s likely to be a better place for almost everyone.

  • 13. Igor Drokov  |  August 12th, 2007 at 20:59 UTC

    Reading the report and evidence transcripts is actually fascinating… It is a unique collection of data and views from people across the industry. Btw is there a link to download pdfs of evidence accounts like your link to Bruce’s?

  • 14. Richard Clayton  |  August 12th, 2007 at 22:50 UTC

    The link in the main article leads to a page which links to both the report itself and the rather substantial evidence volume. This contains the corrected versions of the sessions, some of which are also linked to separately, rather than the complete set of uncorrected drafts linked from http://www.parliament.uk/parliamentary_committees/lords_s_t_select/Evidence1.cfm
    (which may not stay online forever anyway).

  • 15. Andy Steingruebl  |  August 13th, 2007 at 03:13 UTC

    Mr. Brown,

    I remain unconvinced that a regulatory system closer to the one that governs pharmaceuticals wouldn’t at least be progress over the current regime. We can certainly point to problems with the current pharmaceutical regulatory regime, and lots of people do. At the same time, it at least gives us some transparency into methodology and results.

    I’ve written a few small pieces that I think might be beneficial in this area:

    http://securityretentive.blogspot.com/2007/08/what-is-safe-enough.html
    http://securityretentive.blogspot.com/2007/05/analyzing-software-failures.html

    While I agree with your points about the net utility effects of said regulatory regime, I’m not exactly enamored with the existing regime either, and I wonder if you have any thoughts as to a better model.

    Thank you

  • 16. Richard G Brown  |  August 13th, 2007 at 06:28 UTC

    Andy,

    Thanks for the links to your postings; I’ll take a look.

  • 17. Roger Gammans  |  August 13th, 2007 at 12:36 UTC

    As a sysadmin and developer, in general I think software liability would be a good thing — though I fear the devil will be truly in the details.

    For instance, where will the open source community fit in? I’d hate to see open source distribution effectively outlawed because many developers and projects couldn’t afford the required insurance.

    Limiting the liability to the product’s purchase price probably won’t work either, as the price for a single licence is so low it hardly compensates for the losses of a security incident.

  • 18. Eric Norman  |  August 13th, 2007 at 21:34 UTC

    The impact of liability for security defects on the open source community is the same thing I wonder about. I suppose one could argue that since the user can inspect the source code, then open source software is the only software that is entitled to be distributed on an “as is” basis. But such an argument seems overly harsh and neither very complete nor compelling to me.

    I suppose the main thing that needs to happen is to make sure that the open source question is part of the debate.

    It will be interesting to see how this plays out.

  • 19. Igor Drokov  |  August 15th, 2007 at 10:29 UTC

    Reading the report and statements, I could not fail to notice that their Lordships adopted 5 out of 6 recommendations by ordinary users from a small computer club. Given the unprecedented access to a variety of experts the Committee had I found this fact fascinating.

    My post with quotes from the report about it: http://blog.cronto.com/index.php?title=do_you_listen_to_your_users

  • 20. Clive Robinson  |  August 15th, 2007 at 11:49 UTC

    @Igor Drokov

    In your three points above about what you thought might happen, you neglected to mention TPM, DRM and Secure Licensing.

    In a way you missed an essential point about physical-v-virtual products. In the case of a defective car or part, the manufacturer can apply a series of tests to reveal whether the claimed defective part is really theirs (and they have profited by the sale) or counterfeit (for which they have no liability). Unfortunately, for virtual/data-bit-only products like software this is not possible in the same way; it requires something in addition to the actual product.

    Ill-thought-out legislation (i.e. too customer-orientated from the manufacturer’s perspective) might well leave the manufacturer wide open to supporting counterfeit product. It might be a bit-for-bit copy, but importantly the manufacturer has not derived any benefit from the use of it. This would make legal liability a real minefield that could take the courts many, many years to set case law for, or worse, create pressure for new draconian legislation (see the history of the Fritz Chip etc.).

    No industry is going to willingly go down that kind of legal liability route unless there is no alternative open to it. Especially as it would encourage the likes of “ambulance chaser” and “patent troll” lawyers, who would dearly love a new “class action” playground to flex their muscles in and earn the “spare change” to buy the latest in luxury jets and yachts…

    After a little thought you will realise that the manufacturers, if they have to accept legislation for liability, would like another route open to them, and will fight tooth and nail for their version and rules.

    Of the few workable systems out there, they are likely to look at:

    1, Trusted Platform (TPM)
    2, Digital Rights Management (DRM)
    3, Secure Licensing

    The preferred choice of major software suppliers would be TPM, with the required additional cost (hardware) being enforced on all “appliance” manufacturers. Handled correctly (from the major software houses’ perspective), this would give the ultimate lock-in for them and their chosen hardware associates, with minimal cost to them, whilst allowing all sorts of “new innovative” licensing models (which you and I would be forced into accepting, as there would be no other choice once they effectively own your platform).

    DRM has for many reasons had a fairly bad press, both for the apparently draconian / questionable / illegal attitudes and actions of some rights holders, and for the fact that it appears (on the surface) to be fairly easily bypassed and probably always will be. Its demise has also been predicted because some suppliers of digital media are taking the view that DRM is way too much trouble for too little gain (cost-v-profit).

    As for Secure Licensing, it is somewhat similar to DRM but unique to each individual sale, not to a product or range (so there is no master key or equivalent to be found for an easily distributable “class break”). Unfortunately for the software manufacturer, Secure Licensing has many (if not most) of the bad points of DRM plus a significant added expense: effectively the manufacturer no longer produces millions of identical copies of the product and pushes them into the distribution chain, but must supply a million securely variant copies directly to end users to maintain effective (control) security.

    So as the pressure for “Liability Legislation” increases, I think you will see the major software companies pushing for TPM, either directly or through another guise (National Security / Anti-terrorism / whatever else the idiot legislators will swallow).

    If the legislators are stupid enough (why do you think the manufacturers refer to it as “educating” the electorate / representatives?) to go down that route (and believe me they will, unless counter-pressure is applied), the effects on all “Digital Creativity” will be catastrophic. It will get to the point where you would not be able to take a photo or a sound / movie clip of your child to send to their adoring grandparents without having to pay a fee back to the major software houses.

    Likewise semi-professionals would not be able to create software or music or other digital art without having to pay a fee to the TPM system holder(s) to “license” the key for the TPM so that they can make it available to others.

    Also, small professional organisations will find themselves in a similar position of having to pay to get access to the marketplace.

    As for the big boys, they will deal amongst themselves in the same way they currently do with cross-patent agreements etc., and will effectively maintain a cartel…

    Oh, and don’t think they have not considered the “Marketing Data” aspect of TPM, where your every move gets reported back to the TPM organisation to be sold on for a profit, with a complete lack of privacy and personal security for you.

    With legal liability, the law of “unintended consequences” always applies, and it might well hurt us end users considerably more than the current no-liability, unregulated “wild west” marketplace.

    The best way forward is probably a (semi-)open marketplace where paid-for products have to be shown to meet common criteria, where the criteria are set by an independent foundation, testing tools are freely available, and those with mandated liability (banks etc.) require the user to use products that meet the criteria.

    Slightly less desirable would be something like Underwriters Laboratories (UL), which enabled the insurance industry to offer discounts to end users who used products that met their requirements.

    I am not against “Lemon Laws”; they have their place when it comes to matters of “safety”, but as numerous court cases have shown, they can be used inappropriately. I suspect the original drafters of the legislation that allowed “Class Action” are somewhat saddened by the ways it has been (ab)used.

  • 21. giafly  |  August 17th, 2007 at 18:54 UTC

    Software liability is a problem for FOSS because it’s free: there’s no money to pay for insurance etc. This could be fatal for it, so Bruce Schneier suggests that FOSS be exempt.

    Faced with this, my company would convert all its software to FOSS and charge for other things – CPU time, support, consultancy, training etc. (We already get a lot of income from such things).

    Thus end-users would not benefit.

  • 22. William Poel  |  September 15th, 2007 at 17:14 UTC

    This looks (predictably) like another convoluted self-perpetuating beanfeast for lawyers – coincidentally the profession that supplies more politicians than any other.

    Do you seriously believe this Richard?

    “…people who reached the top of their professions (who are therefore quick learners, even if they start by knowing little of the topic)…”

    Are you quite certain this has absolutely nothing to do with brown nosing and having time to waste playing the system?

    Far from being the problem, the answer lies in the notion “that individuals are just not well-informed enough to understand the security implications of their actions”

    Security starts with the users. There is no point in any of these proposed measures as long as users do not understand the basics of identity management starting with their own online identities. Minimal extra effort with TLDs would provide an “all SSL” domain hierarchy as the root of traceable trust.

  • 23. Richard Clayton  |  September 15th, 2007 at 17:32 UTC

    a) none of the Select Committee were lawyers by trade
    b) yes: having talked with them, I do believe that they are quick learners
    c) security may start with the users but it doesn’t end there. Users are NOT, in the main, quick learners, nor are they specially motivated (or qualified) to understand the complexities of modern systems and assess the risks and deploy appropriate countermeasures.
    d) I’m not entirely sure why SSL (whether tied to TLDs or not) provides significant trust or indeed traceability. The certificate issuers do a minimum of due diligence and guarantee nothing…

    At the time of writing, for example:

    https://tino-paypal.viscomp.biz

    is a PayPal phishing website with a valid cert issued by Equifax, but this is NOT a PayPal site, despite the look of it…

    So except for the lack of a green bar in the very newest browsers, this appears to be the genuine article and apparently a respected certificate owner says so. Expecting users to understand what is wrong here is to ask far far too much.

  • 24. William Poel  |  September 16th, 2007 at 00:08 UTC

    I didn’t say that the current SSL scheme was satisfactory. Most of the bits of the internet infested by US corporate and political interests leave a lot to be desired, starting with the “management” of TLDs by IANA/ICANN and Internic.

    Your “RentaGong” gang appear to be doing what most of the ignorant do-gooders have wanted to do from the moment the Internet escaped academia – namely fiddle mightily with what they perceive to be the symptoms, and avoid the challenges of grappling with the root causes.

    What you and they seem to be driving at is a walled-garden solution, probably run by a large organisation that will pay fat stipends to political advisers and consultants. Maybe Bill Gates was too hasty when he turned the Microsoft Network from a proprietary scheme into an open IP environment.

    Awaken Prestel and BT Gold! Your time has at last come…
