Last week the House of Commons Culture, Media and Sport Select Committee published a report of their inquiry into “Harmful content on the Internet and in video games”. They make a number of recommendations, including: a self-regulatory body to set rules for Internet companies to force them to protect users; that sites should provide a “watershed” so that grown-up material cannot be viewed before 9pm; that YouTube should screen material for forbidden content; that “suicide websites” should be blocked; that ISPs should be forced to block child sexual abuse image websites whatever the cost; and that blocking of bad content was generally desirable.
You will discern a certain amount of enthusiasm for blocking, and for a “something must be done” approach. However, in coming to their conclusions, they do not, in my view, seem to have listened very closely to the evidence, or sought out expertise elsewhere in the world…
Google/YouTube told them that 10 hours of video were posted every minute, and that the amount was increasing. In the oral evidence session an MP helpfully suggested: “That video content is tagged. You do not need to look at every single minute of video content. Surely you could have people who would look at the video content which is tagged with labels which suggest it could be inappropriate.” Of course “happy_slapping.wmv” or “fluffy_bunnies.avi” must always contain exactly what it says on the tin (not!) but unaccountably Google said it was a “fair suggestion”, so perhaps my cynicism is misplaced.
However, back to blocking.
I submitted some evidence of my own, which the committee summarised, reasonably accurately:
Dr Richard Clayton, a researcher in the Security Group of the Computer Laboratory at Cambridge University and author of several academic papers on methods for blocking access to Internet content, pointed out that there was no single blocking method which was both inexpensive and discerning enough to block access to only one part of a large website (such as FaceBook). In his view, the fatal flaw of all network-level blocking schemes was the ease with which they could be overcome, either by encrypting content or by the use of proxy services hosted outside the UK.
The committee’s conclusion, having read this was:
At a time of rapid technological change, it is difficult to judge whether blocking access to Internet content at network level by Internet service providers is likely to become ineffective in the near future. However, this is not a reason for not doing so while it is still effective for the overwhelming majority of users.
which I suppose logically means that the committee thinks that blocking should now be discarded as a policy option — but somehow I think that isn’t their intended meaning.
The Committee should perhaps have a look at this Australian report, which found that ISP-level content filtering (and in Australia the politicians want to use ISP-level filtering to provide a child-friendly Internet) did work, up to a point, at Tier 3 (the smallest) ISPs. The “up to a point” is that, unlike in previous tests, the systems didn’t completely wreck the browsing experience by slowing it down. However, the systems blocked only 85–98% of illegal material, and similar percentages of material suitable for adults but not for younger children. Interestingly, different products did better on different categories.
Getting that many sites wrong is really quite significant, so it’s difficult to see this as a ringing endorsement for blocking the web. Additionally, the Australian report found that the blocking was useless on “non-web” protocols (such as peer-to-peer) and their report specifically didn’t consider cost, or ease of circumvention — so it’s not just UK politicians not wanting to consider evidence on that topic!
Finally, I should note that the Culture, Media and Sport Committee has also ignored some rather more recent academic work. The MPs have put into their report that they were horrified to discover that child sexual abuse images took 24 hours to remove in the UK. What (should they ever learn of it) will they make of the recent discovery by Tyler Moore and myself, showing that if the website is hosted abroad then a month is more to be expected?
9 thoughts on “Listening to the evidence”
When you say they ignored more recent work, do you mean they had it and disregarded it, or are you suggesting they were unaware of it? The latter is rather more pardonable than the former, in my estimation.
…whatever the cost. Wow, all dictators love that blank check.
People can judge and censor their own exposure, and some exposure can/should be regulated, but cost is always an issue. Justice is a scale, and life sure is as well.
Gun rights, free speech, and checks and balances: disrupt those and tyranny always ensues, which is often far worse than some little offensive images or words.
Strict scrutiny of ISPs is crazy. The internet is becoming cable TV. GRR.
… Surely you could have people who would look at the video content which is tagged with labels which suggest it could be inappropriate.” Of course “happy_slapping.wmv” or “fluffy_bunnies.avi” must always contain exactly what it says on the tin (not!) but unaccountably Google said it was a “fair suggestion” …
Of course Google would think it is fair. They are told at a technical level what to do, and if it fails by admitting more content to the user, they don’t have to care: they did what was asked for. They would only have to care if it were to block the majority of their content.
…However, the systems blocked only 85-98% of illegal material and similar percentages of material suitable for adults but not for younger children. Interestingly some products were better at different categories.
Getting that many sites wrong is really quite significant, so it’s difficult to see this as a ringing endorsement for blocking the web…
Actually, I believe it is an endorsement. For the moment, let’s assume we like the policy of making the web child friendly for everyone. If you plan to make the web child friendly, blocking 98% of unfriendly content would increase your child’s mean time to accidentally stumble onto unfriendly websites by up to a factor of 50. From once per week to once per year. (Some unfounded statistical assumptions are being made here)
However, you are right that this says nothing about the efficiency of these products against those determined to beat them. They would make sure that their content is in the 2% that is not blocked.
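The arithmetic behind the factor-of-50 claim is easy to sketch. A minimal back-of-envelope illustration, under the same unfounded assumptions the comment admits to: accidental encounters are independent events, and a filter that blocks a fraction p of unfriendly pages simply scales the encounter rate by (1 − p):

```python
# Sketch of the "factor of 50" claim, assuming (as the comment does)
# that accidental encounters are independent and a filter blocking a
# fraction p of unfriendly pages scales the encounter rate by (1 - p).

def slowdown_factor(block_rate: float) -> float:
    """Factor by which mean time between accidental encounters grows."""
    return 1.0 / (1.0 - block_rate)

print(slowdown_factor(0.98))  # ~50x: roughly once a week -> once a year
print(slowdown_factor(0.85))  # ~6.7x at the low end of the 85-98% range
```

At the bottom of the report’s 85–98% range the improvement is far more modest, which is why the headline percentage matters so much.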
@Alf However, you are right that this says nothing about the efficiency of these products against those determined to beat them. They would make sure that their content is in the 2% that is not blocked.
More significantly, these tests always assume that the purveyors of the blocked content take no steps to evade the blocking. A glance at anyone’s in-box will show that this is an invalid assumption when considering the efficacy of spam filters.
From the report: “We believe that high profile facilities with simple, preferably one-click mechanisms for reporting directly to law enforcement and support organisations are an essential feature of a safe networking site. We would expect providers of all Internet services based upon user participation to move towards these standards without delay.”
LOL. They don’t know much about the sort of people who use social network sites, do they? The ones with ids based on chan memes? When I recently wanted to report a crime to the local police, all online and email ways of doing this had been disabled, presumably because of pranksters. Have the committee checked what the police think about their idea for one-click reporting?
Check out what the muppets here in Oz are doing too!
I’m on plus net and I opted for their safe surfing trial which uses Aladdin. It gave me confidence my kids were less likely to accidentally stumble upon something I’d prefer they didn’t see.
Of course when they’re older they’d figure out how to get round it – but you’ll never stop someone who is deliberately trying to find that sort of material.
Isn’t there value in helping people avoid accidentally tripping over offensive material? The government can’t do much, but it can at least do that? Maybe I misunderstood, but the tone of the post suggested you didn’t think there was any point blocking traffic at the network level…?
I note that the Aladdin trial has recently finished — so I wonder if you are now lacking in confidence?
More seriously, yes there is value in preventing people tripping over things, but these blocking systems are significantly more effective when deployed on end-user machines rather than at the ISP. For example, an ISP filter can never inspect the content of “https” pages, whereas software on your machine can deal with this transparently.
Add to that the difficulty of personalisation (and setting different levels for different age children all on the Internet at the same time, but through the same ISP connection) and it doesn’t look like good engineering, or good accountancy either.
I’m surprised you advocate end-user software as good engineering. Surely that requires solid end-user system administration and security practices. It won’t take much more than a virus to blow that wide open. Hmmm.
I’d much prefer someone at my ISP to do this for me.
And https only matters if you’re doing DPI. Websites can be monitored so that only those with ‘sufficient’ content management practices are allowed onto the ‘safe’ web.
It’s not perfect. The unsafe web will never be more than a few clicks away, but at least this keeps it separate, and requires a deliberate decision to go there. I’m not convinced the government should force ISPs to do this – in my view it should be “opt-in” for a lot of the reasons you’ve outlined – but you’ve got to generate the demand somehow and get this market kickstarted.
So perhaps the government did listen to the evidence after all… 😉