All posts by Yi Ting Chua

Three paper Thursday: Online Extremism and Radicalisation

With the recent United States presidential election, I have chosen to focus this Three Paper Thursday on extremism and radicalisation. The topic has received increasing media attention in the United States over the past six years, through both a general rise in the public prominence of far-right, racist rhetoric in political culture (often attributed to the Trump presidency) and a series of high-profile violent events associated with far-right extremism. These events range from the riots in Charlottesville, Virginia, which turned violent when rally attendees clashed with counter-protesters and a vehicle drove into a crowd marching through downtown, killing one protester (Heim, Silverman, Shapiro, & Brown, 2017), to the recent arrest of individuals plotting to kidnap the Governor of Michigan. This far-right violence brought to light the continued existence of right-wing extremism in the United States, which has historical roots in well-known organisations such as the Ku Klux Klan (KKK), a secretive, racist, terrorist organisation founded in 1865 during Reconstruction as part of a backlash against the acquisition of civil rights by African-American people in the South (Bowman-Grieve, 2009; Martin, 2006).

In contemporary online societies, the landscape and dynamics of right-wing extremist communities have changed. These communities have learned how to exploit the capacities of online social networks for recruitment, information sharing, and community building. The sophistication and reach of online platforms have evolved rapidly, from the bulletin board system (BBS) to online forums and now social media platforms, which incorporate powerful technologies for marketing, targeting, and disseminating information. However, the use of these platforms for right-wing radicalisation (the process through which an individual develops and/or accepts extreme ideologies and beliefs) remains under-examined in academic scholarship. This Three Paper Thursday pulls together some key current literature on radicalisation in online contexts.

Maura Conway, Determining the role of the internet in violent extremism and terrorism: Six suggestions for progressing research. Studies in Conflict & Terrorism, 40(1), 77-98. https://www.tandfonline.com/doi/full/10.1080/1057610X.2016.1157408.

The first paper comments on future directions for research on the role of the Internet in violent extremism and terrorism. After guiding readers through an overview of current research, the author argues that there is a lack of both descriptive and explanatory work on the topic, as the field remains divided. Some view the Internet as a mere speech platform and argue that participation in online radicalised communities is often the most extreme behaviour in which most individuals engage. Others acknowledge the affordances of the Internet but are uncertain whether it replaces or merely strengthens other radicalisation processes. The author concludes that two major research questions remain open: whether radicalisation can occur in a purely online context, and if so, whether it contributes to violence; if it does, the mechanisms merit further exploration. The author makes six suggestions for future researchers: a) widening current research to include movements beyond jihadism, b) conducting comparative research (e.g., between platforms and/or organisations), c) studying individual users in extremist communities and groups, d) using large-scale datasets, e) adopting an interdisciplinary approach, and f) examining the role of gender.

Yi Ting Chua, Understanding radicalization process in online far-right extremist forums using social influence model. PhD thesis, Michigan State University, 2019. Available from https://d.lib.msu.edu/etd/48077.

My doctoral dissertation examines the impact of participation in online far-right extremist groups on radicalisation. In this research, I applied social network analysis and integrated theories from criminology (social learning theory) and political science (the idea of the echo chamber) to understand the process of attitudinal change within social networks. The study draws on a longitudinal database of threads saved from eight online far-right extremist forums. Using the social influence model, a regression model with a network effects term, I was able to include the number of interactions and the attitudinal beliefs of user pairs when examining attitudinal changes over time. This model allows us to determine if, and how, active interactions result in the expression of more radical ideological beliefs. Findings suggested that online radicalisation occurred to varying degrees in six of the seven forums, with a generally lowered level of expressed extremism towards the end of the observed time period. The study found strong support for the proposition that active interaction with forum members and connectedness are predictors of radicalisation, while suggesting that other mechanisms, such as self-radicalisation and users’ prior beliefs, were also important. This research highlighted the need for theory integration, detailed measures of online peer association, and cross-platform comparisons (e.g., Telegram and Gab) to address the complex phenomenon of online radicalisation.
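The core idea of a social influence model of this kind can be illustrated with a minimal sketch: a user’s expressed attitude at the next time step is regressed on their own lagged attitude plus an interaction-weighted average of their peers’ attitudes. The data below are simulated stand-ins, not the dissertation’s data, and the simple least-squares fit is only a toy version of the model described above.

```python
import numpy as np

# Hypothetical data: y_prev holds each user's expressed-extremism score at
# time t, and W is a row-normalised matrix of pairwise interaction counts.
rng = np.random.default_rng(0)
n = 50
W = rng.poisson(2, size=(n, n)).astype(float)
np.fill_diagonal(W, 0)                         # no self-interaction
W = W / W.sum(axis=1, keepdims=True)           # row-normalise interactions

y_prev = rng.normal(size=n)                    # scores at time t
peer = W @ y_prev                              # interaction-weighted peer scores
# Simulated "ground truth": persistence 0.6, peer influence 0.3, small noise.
y_next = 0.6 * y_prev + 0.3 * peer + rng.normal(scale=0.1, size=n)

# Regress y_{t+1} on an intercept, own lagged score, and peer exposure.
X = np.column_stack([np.ones(n), y_prev, peer])
beta, *_ = np.linalg.lstsq(X, y_next, rcond=None)
print(beta)  # a clearly positive beta[2] would indicate peer influence
```

A positive, significant coefficient on the peer term is what "active interactions predict radicalisation" corresponds to in this toy setup; the actual analysis additionally models user pairs and change across multiple observed time periods.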

Magdalena Wojcieszak, ‘Don’t talk to me’: effects of ideologically homogeneous online groups and politically dissimilar offline ties on extremism. New Media & Society, 12(4) (2010) pp 637-655. https://journals.sagepub.com/doi/abs/10.1177/1461444809342775.

In this article, the author addresses two questions: 1) does participation in ideologically homogeneous online groups increase extreme beliefs, and 2) how do offline strong and weak ties with dissimilar beliefs affect extreme beliefs? The author uses online survey data and posts from neo-Nazi online forums. The outcome is measured by responses to 10 ideology-specific statements. Other variables in the analysis include the level of participation in online groups, perceived dissimilarity of offline ties, news media exposure, and demographics. Findings from a multivariate regression model indicate that participation in online groups was a strong predictor of support for racial violence after controlling for demographic factors and news media exposure. Forum members’ attitudes are subject to normative influence via punitive or rewarding replies. For individuals with politically dissimilar offline ties, the author finds a weakened participation effect.

Together, these papers highlight the complexity of assessing the role played by the Internet in the radicalisation process. The first paper encourages researchers, via six suggested approaches, to tackle the question of whether online violent radicalisation occurs. The other two papers find support for online radicalisation while calling attention to the effect of other variables, such as the influence of offline relationships and users’ baseline beliefs prior to online participation. All of these papers cross academic disciplines, highlighting the importance of an interdisciplinary perspective.

References

Bowman-Grieve, L. (2009). Exploring “Stormfront”: A virtual community of the radical right. Studies in Conflict & Terrorism, 32(11), 989-1007.

Heim, J., Silverman, E., Shapiro, T. R., Brown, E. (2017, August 13). One dead as car strikes crowds amid protests of white nationalist gathering in Charlottesville; two police die in helicopter crash. The Washington Post. Retrieved from https://www.washingtonpost.com/local/fights-in-advance-of-saturday-protest-in-charlottesville/2017/08/12/155fb636-7f13-11e7-83c7-5bd5460f0d7e_story.html?utm_term=.33b6686c7838.

Martin, G. (2006). Understanding Terrorism: Challenges, Perspectives, and Issues. Thousand Oaks, California: Sage Publications.

Identifying Unintended Harms of Cybersecurity Countermeasures

In this paper (winner of the eCrime 2019 Best Paper award), we consider the types of things that can go wrong when you intend to make things better and more secure. Consider this scenario. You are browsing the Internet and see a news headline about one of the presidential candidates. You are unsure if the headline is true, so you navigate to a fact-checking website and type in the headline of interest. Some platforms also have fact-checking bots that post periodic updates on false information. You do some research through three fact-checking websites, and the results consistently show that the news contains false information. You share the results as a comment on the news article. Within two hours, you receive hundreds of notifications with comments countering your sources with other fact-checking websites.

Such scenarios are increasingly common as we rely on the Internet and social media platforms for information and news. Although they are meant to increase security, cybersecurity countermeasures can cause confusion and frustration among users, because they add extra actions to users’ daily online routines. As the scenario shows, fact-checking can easily become a mechanism for attacks and a demonstration of in-group/out-group distinction, which can further contribute to group polarisation and fragmentation. We identify these negative effects as unintended consequences and define them as shifts in the expected burden and/or effort placed on a group.

To understand unintended harms, we begin with five scenarios of cyber aggression and deception. We identify common countermeasures for each scenario and brainstorm the potential unintended harms of each countermeasure. The unintended harms are inductively organised into seven categories: 1) displacement, 2) insecure norms, 3) additional costs, 4) misuse, 5) misclassification, 6) amplification, and 7) disruption. Applying this framework to the scenario above, insecure norms, misuse, and amplification are all unintended consequences of fact-checking. Fact-checking can foster a sense of complacency in which checked news is automatically seen as true. In addition, fact-checking can be used as a tool for attacking groups with different political views. Such misuse facilitates amplification, as fact-checking is used to strengthen in-group status and thereby further exacerbates group polarisation and fragmentation.

To allow practitioners and stakeholders to apply the framework systematically to existing or new cybersecurity measures, we expand the categories into a functional framework by developing prompts for each harm category. During this process, we identified the underlying need to consider vulnerable groups. In other words, practitioners and stakeholders need to take into consideration the impact of countermeasures on at-risk groups, as well as the possible creation of new vulnerable groups as a result of deploying a countermeasure. Vulnerable groups are user groups who may suffer from a countermeasure while others are unaffected or even prosper. One example is older adults, whose unfamiliarity with, and less frequent use of, technology means they are often overlooked when risks and/or countermeasures are assessed within a system.
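One way to picture how such a prompt-driven review might be operationalised is as a simple checklist walked through per countermeasure. The sketch below is illustrative only: the prompt wordings are hypothetical stand-ins, not the paper’s actual prompts, and a real review would record discussion and affected groups rather than booleans.

```python
# Illustrative checklist sketch of the seven harm categories. The prompt
# texts here are hypothetical paraphrases, not the framework's own wording.
HARM_PROMPTS = {
    "displacement": "Could the harmful activity move to another platform or group?",
    "insecure norms": "Could the countermeasure encourage risky complacency?",
    "additional costs": "Does it add burden or effort for any user group?",
    "misuse": "Could the countermeasure itself be weaponised?",
    "misclassification": "Could legitimate users or content be wrongly flagged?",
    "amplification": "Could it intensify the behaviour it targets?",
    "disruption": "Could it break legitimate services or routines?",
}

def review_countermeasure(answers):
    """Return the harm categories flagged for a countermeasure.

    `answers` maps category -> bool (True = stakeholders judge the harm
    plausible), mirroring a structured walk through the prompts."""
    return [cat for cat in HARM_PROMPTS if answers.get(cat, False)]

# Example: the fact-checking scenario discussed in the text.
flags = review_countermeasure({
    "insecure norms": True, "misuse": True, "amplification": True,
})
print(flags)  # ['insecure norms', 'misuse', 'amplification']
```

The value of the prompts is less in the data structure than in forcing each category, plus the question of vulnerable groups, to be considered explicitly for every countermeasure under review.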

It is important to note that the framework does not propose measurements for the severity or likelihood of unintended harm occurring. Rather, the emphasis of the framework is on raising stakeholders’ and practitioners’ awareness of possible unintended consequences. We envision this framework as a common-ground tool for stakeholders, particularly for coordinating approaches in complex, multi-party services and/or technology ecosystems. We would like to extend a special thank you to Schloss Dagstuhl and the organisers of Seminar #19302 (Cybersafety Threats – from Deception to Aggression), which brought all of the authors together and laid out the core ideas in this paper. A complementary blog post by co-author Dr Simon Parkin can be found at UCL’s Bentham’s Gaze blog. The accepted manuscript for this paper is available here.

APWG eCrime 2019

Last week the APWG Symposium on Electronic Crime Research was held at Carnegie Mellon University in Pittsburgh. The Cambridge Cybercrime Centre was very well-represented at the symposium. Of the 12 accepted research papers, five were authored or co-authored by scholars from the Centre. The papers addressed a wide range of cybercrime issues, from honeypots to gaming as a pathway into cybercrime. One of the papers with a Cambridge author, “Identifying Unintended Harms of Cybersecurity Countermeasures”, received the Best Paper award. The Honorable Mention award went to “Mapping the Underground: Supervised Discovery of Cybercrime Supply Chains”, a collaboration between NYU, ICSI and the Centre.

In this post, we provide a brief description of each paper. The final versions aren’t yet available; we will blog about them in more detail as they appear.

Best Paper

Identifying Unintended Harms of Cybersecurity Countermeasures

Yi Ting Chua, Simon Parkin, Matthew Edwards, Daniela Oliveira, Stefan Schiffner, Gareth Tyson, and Alice Hutchings

In this paper, the authors consider that well-intentioned cybersecurity risk management activities can create not only unintended consequences, but also unintended harms to user behaviours, system users, or the infrastructure itself. By reviewing countermeasures and associated unintended harms for five cyber deception and aggression scenarios (including tech-abuse, disinformation campaigns, and dating fraud), the authors identified categories of unintended harms. These categories were further developed into a framework of questions to prompt risk managers to consider harms in a structured manner, and introduced the discussion of vulnerable groups across all harms. The authors envision that this framework can act as a common ground and a tool for bringing stakeholders together towards a coordinated approach to cybersecurity risk management in a complex, multi-party service and/or technology ecosystem.

Honorable Mention

Mapping the Underground: Supervised Discovery of Cybercrime Supply Chains

Rasika Bhalerao, Maxwell Aliapoulios, Ilia Shumailov, Sadia Afroz, and Damon McCoy

Cybercrime forums enable modern criminal entrepreneurs to collaborate with other criminals in increasingly efficient and sophisticated criminal endeavors. Understanding the connections between different products and services is currently very expensive and requires a lot of time-consuming manual effort. In this paper, we propose a language-agnostic method to automatically extract supply chains from cybercrime forum posts and replies. Our analysis of the generated supply chains highlights unique differences in the lifecycle of products and services on offer in Russian and English cybercrime forums.

Honware: A Virtual Honeypot Framework for Capturing CPE and IoT Zero Days

Alexander Vetterl and Richard Clayton

We presented honware, a new honeypot framework which can rapidly emulate a wide range of CPE and IoT devices without any access to the manufacturers’ hardware.

The framework processes a standard firmware image and will help to detect real attacks and associated vulnerabilities that might otherwise be exploited for considerable periods of time without anyone noticing.

From Playing Games to Committing Crimes: A Multi-Technique Approach to Predicting Key Actors on an Online Gaming Forum

Jack Hughes, Ben Collier, and Alice Hutchings

This paper proposes a systematic framework for analysing forum datasets, which contain minimal structure and are non-trivial to analyse at scale. The paper takes a multi-technique approach drawing on a combination of features relating to content and metadata, to predict potential key actors. From these predictions and trained models, the paper begins to look at characteristics of the group of potential key actors, which may benefit more from targeted intervention activities.

Fighting the “Blackheart Airports”: Internal Policing in the Chinese Censorship Circumvention Ecosystem

Yi Ting Chua and Ben Collier

In this paper, we provide an overview of the self-policing mechanisms present in the ecosystem of services used in China to circumvent online censorship. We conducted an in-depth netnographic study of four Telegram channels which were used to co-ordinate various kinds of attacks on groups and individuals offering fake or scam services. More specifically, these actors used cybercrime tools such as denial-of-service attacks and doxxing to punish scammers. The motivations behind this self-policing appear to be genuinely altruistic, with individuals largely concerned with maintaining a stable ecosystem of services to allow Chinese citizens to bypass the Great Firewall. Although this is an emerging phenomenon, it appears to be developing into an important and novel kind of trust mechanism within this market.

Usability of Cybercrime Datasets

By Ildiko Pete and Yi Ting Chua

The availability of publicly accessible datasets plays an essential role in the advancement of cybercrime and cybersecurity research. There has been increasing effort to understand how datasets are created, classified, shared, and used by scholars. However, very few studies address the usability of datasets.

As part of an ongoing project to improve the accessibility of cybersecurity and cybercrime datasets, we conducted a case study that examined and assessed the datasets offered by the Cambridge Cybercrime Centre (CCC). We examined two stages of the data-sharing process: dataset sharing and dataset usage. Dataset sharing involves three steps: (1) informing potential users of available datasets, (2) providing instructions on the application process, and (3) granting access to users. Dataset usage refers to the process of querying, manipulating, and extracting data from the dataset. We were interested in assessing users’ experiences with the data-sharing process and discovering challenges and difficulties in using any of the offered datasets.

To this end, we reached out to 65 individuals who had applied for access to the CCC’s datasets and were potentially actively using them. The survey questionnaire was administered via Qualtrics. We received sixteen responses, nine of which were fully completed. We transcribed the responses to open-ended questions and then performed thematic analysis.

We discovered two main themes: users’ level of technological competence and users’ experiences. The findings revealed generally positive user experiences with the CCC’s data-sharing process, and users reported no major obstacles in the dataset-sharing stage. Most participants had accessed and used the CrimeBB dataset, which contains more than 48 million posts. Users also said they would be likely to recommend the dataset to other researchers. In the dataset-usage phase, users reported some technical difficulties. Interestingly, these technical issues were specific, such as version conflicts. This highlights that users with a higher level of technical skill also experience technical difficulties, though of a different nature from generic technical challenges. Nonetheless, the survey showed the CCC’s success in sharing its datasets with the sub-set of cybercrime and cybersecurity researchers approached in this study.

Ildiko Pete presented the preliminary findings on 12th August at CSET’19. Click here to access the full paper.