Three Paper Thursday: The role of intermediaries, platforms, and infrastructures in governing crime and abuse

The platforms, providers, and infrastructures which together make up the contemporary Internet play an increasingly central role in the business of governing human societies. Although the software engineers, administrators, business professionals, and other staff working at these organisations may not have the institutional powers of state organisations such as law enforcement or the civil service, they are now in a powerful position of responsibility for the harms and illegal activities which their platforms facilitate. For this Three Paper Thursday, I’ve chosen to highlight papers which address these issues, and which explore the complex networks of different infrastructural actors and perspectives which play a role in the reporting, handling, and defining of abuse and crime online.

While much abuse regulation is pragmatic and administrative, based around safeguarding technical aspects of network health, these issues become more contentious where contested value-judgements need to be made. In terms of the wider control of content and the use of online services, questions about which behaviours should be sanctioned, and how this should be organised, often result in protracted battles between providers, users, and governments. There are also understandable problems of scale and detection, which raise the question of the extent to which abuse is actually solvable at all. Many platforms and providers have made the case, often with legal backing, that they are conduits, not publishers, asserting ‘technological neutrality’ and arguing that they should not be arbiters of moral regulation. In recent years, however, this argument has become less and less effective, with platforms increasingly being called to account both for the harms which occur on their services and for their own power to shape the world.

How abuse and crime are regulated, and the role of infrastructure providers in this, are fundamentally issues of privacy, power, and control. While there are increasing calls for platforms to take responsibility for regulating abuse (such as far-right content and organisation, fake news, and misogynistic bullying), there are also, in the post-Snowden era, increasingly countervailing (though not contradictory) calls to restrict co-operation with law enforcement and to safeguard user privacy. In an era where accurate public health messaging is increasingly vital and the spread of disinformation potentially carries yet more drastic consequences, we may well see these dynamics change, with platforms showing far more willingness to (at least attempt to) exert robust control over how their services are used.

Abuse reporting and the fight against cybercrime (2017) – Mohammad Hanif Jhaveri, Orcun Cetin, Carlos Gañán, Tyler Moore, & Michel van Eeten

The paper sets out a comprehensive survey of the abuse reporting ecosystem at the intermediary level, focusing on the relationships and flows between a range of different kinds of actor. In particular, it highlights the importance of voluntary action and collaboration for effective abuse handling. ISPs receive thousands of abuse reports daily, covering security issues, intellectual property issues, and content issues such as the circulation of child sexual abuse images. The extent to which providers and others are able to act against abuse is constrained by a variety of factors, which the authors describe in detail, alongside a framework describing the current abuse handling infrastructure. They differentiate between three distinct pathways which this handling can take: direct remediation, in which notifiers (i.e. the people or automated systems who identify the compromised or offending resource) communicate directly with the owner of the resource; intermediary remediation, in which notifiers report the abuse or compromise to a third party (such as an ISP) who coordinates clean-up; and third-party protection, in which the notifier sends information (generally regarding an abusive service whose owner is unlikely to respond to abuse complaints) to a security vendor who can then protect potentially vulnerable third parties. This third category includes classic examples such as spam lists and blacklists.

While a great many abuse notifications are generated by automated systems, there remains a crucial role for manual abuse reporting. Engagement on the part of abuse notifiers is often motivated by classic open-source values of community participation and a sense of moral duty in the case of individuals and volunteer groups, and by brand reputation and intelligence sharing in the case of corporations. For intermediaries, impacts on network operation, public reputation, and legal liability are key motivators, while for third parties the incentives are weaker. Crucially, the authors identify a key role for government in filling the gaps left by market provision, either as an active participant or as an incentiviser.

Raging Against the Machine: Network Gatekeeping and Collective Action on Social Media Platforms (2017) – Sarah Myers West 

In contrast to this complex, voluntarily-emerging ecosystem of actors across a wide range of different organisations, the author of the second article outlines a struggle over the regulation of abusive content on social media platforms. This too plays out as a complex system of interacting values and incentives, but with a far less mutual power dynamic: that between users and the provider. The paper details a user campaign to change Facebook’s policy on nudity, noting that although Facebook has substantial power to make executive decisions about content policy, it too relies on substantial manual labour by users identifying and reporting abusive content. Through a series of protests and other forms of collective action, users of Facebook aimed to shift the platform’s restrictive and gendered policies around nudity, under which women’s nudity was removed but equivalent men’s nudity was not. The author details a range of examples of collective action, with a change to the policy eventually being made with little fanfare after several years of online protest. She argues that who is able to navigate and influence the content moderation system largely reflects existing social capital and the ability to harness the attention economy. The groups who benefited from the eventual change (breastfeeding mothers and breast cancer survivors) were generally those who managed to attain high visibility through celebrity endorsements (rather than making use of connectivity, as might be expected). Casting platforms as ‘network gatekeepers’, the author argues that those with access to gatekeepers, political power, information production ability, and alternative choices are better able to influence these policies.

The power to structure: exploring social worlds of privacy, technology, and power in the Tor Project (2020) – Ben Collier

In contrast to the previous two papers (the first of which looks at cooperation between a range of different actors and organisations, and the second of which looks at relationships between users and providers), the final paper looks at the internal differences in values and perspectives within a single organisation. For this, I use an example of an Internet infrastructure which takes a very ‘pure’ approach to these issues, swinging as far as possible towards user privacy and dramatically restricting its own ability to control its users’ behaviour: the Tor anonymity network. This paper reports on the results of my PhD research, conducted over four years, which involved interviews with 26 members of the Tor community as well as substantial archival research through the Tor Project’s extensive mailing lists. When I began, I expected Tor to be fairly homogeneous, with a well-defined set of core values; in fact, I found a community with three distinct and contrasting ways of making sense of privacy and of the role Tor should play in the world. The first of these was the ‘engineer’ perspective, which stems from the design and development work of Tor and understands privacy as a structure, with Tor acting to reshape the ‘pinch points’ of online power by changing the way the Internet works. The second was the ‘activist’ perspective, associated with the PR, policy, and outreach work of Tor, which understands privacy as a struggle, with Tor as part of a social movement for privacy, connected intrinsically to other struggles for social justice. The final perspective was the ‘infrastructuralist’ perspective, associated with the administrative and maintenance work involved in running Tor relays, which understands privacy as a service, with Tor taking the role of a neutral service provider which simply exists to facilitate the actions of its users.

Each of these perspectives is associated with a different kind of work within Tor, and these are also distinct sites at which abuse is encountered or reasoned about: abuse is conceived differently in design and development work than it is as encountered by administrators, or as talked about by PR officers and policy professionals. While the activist perspective is happy to condemn particular uses of Tor (by the far right, for example), the infrastructuralist perspective is adamant that taking any moral line on user conduct is a risky strategy. I argue that Tor’s success is partly down to its ability to square these different ways of understanding it, using shared mental models of Tor’s users to allow the three perspectives to work together despite their conflicting understandings of the political salience of Tor. I also argue that changes in the broader culture of information security (and a crisis within Tor and its aftermath) have led to a reorientation of these perspectives.


Across all three papers, the importance of different kinds of work is clear – in particular, the often-overlooked role of manual administrative work and the perspectives and incentives of the people who do it. While design work clearly has a role to play in abuse handling, the administrative work of manually creating, processing, and actioning abuse reports remains crucial. In each of these cases, handling abuse relies on complex networks of different types of work and incentives that need to be reconciled. Far from being simplistically top-down, or reflective of ‘technological neutrality’, the infrastructure provider in each of these cases in practice sits at the centre of a tangled network of tense internal and external conflicts between these different actors, their working practices, and their values.
