Depictions of cybercrime often revolve around the figure of the lone ‘hacker’, a skilled artisan who builds their own tools and has a deep mastery of technical systems. However, much of the work involved is now in fact more akin to a deviant customer service or maintenance job. This means that exit from cybercrime communities is less often via the justice system, and far more likely to be a simple case of burnout.
Infrastructure used to be regulated and boring; the phones just worked and water just came out of the tap. Software has changed all that, and the systems our society relies on are ever more complex and contested. We have seen Twitter silencing the US president, Amazon switching off Parler and the police closing down mobile phone networks used by crooks. The EU wants to force chat apps to include porn filters, India wants them to tell the government who messaged whom and when, and the US Department of Justice has launched antitrust cases against Google and Facebook.
Infrastructure – the Good, the Bad and the Ugly analyses the security economics of platforms and services. The existence of platforms such as the Internet and cloud services enabled startups like YouTube and Instagram to soar to huge valuations almost overnight, with only a handful of staff. But criminals also build infrastructure, from botnets to malware-as-a-service. There’s also dual-use infrastructure, from Tor to bitcoins, with entangled legitimate and criminal applications. So crime can scale too. And even “respectable” infrastructure has disruptive uses. Social media enabled both Barack Obama and Donald Trump to outflank the political establishment and win power; they have also been used to foment communal violence in Asia. How are we to make sense of all this?
I argue that this is not simply a matter for antitrust lawyers: computer scientists also have insights to offer, and the interaction between technical and social factors is critical. I suggest a number of principles to guide analysis. First, which actors or technical systems have the power to exclude? Such control points tend to be at least partially social, as social structures like networks of friends and followers have more inertia. Even where control points exist, enforcement often fails because defenders are organised in the wrong institutions, or lack the right incentives; many defenders, from payment systems to abuse teams, focus on process rather than outcomes.
There are implications for policy. The agencies often ask for back doors into systems, but these help intelligence more than interdiction. To really push back on crime and abuse, we will need institutional reform of regulators and other defenders. We may also want to complement our current law-enforcement strategy of decapitation – taking down key pieces of criminal infrastructure such as botnets and underground markets – with pressure on maintainability. It may make a real difference if we can push up offenders’ transaction costs, as online criminal enterprises rely more on agility than on long-lived, critical, redundant platforms.
This was a Dertouzos Distinguished Lecture at MIT in March 2021.
As cybercrime researchers we’re often focused on the globalised aspects of online harms – how the Internet connects people and services around the world, opening up opportunities for crime, risk, and harm on a global scale. However, as we argue in open-access research published this week in the Journal of Criminal Psychology – a collaboration between the Cambridge Cybercrime Centre (CCC), Edinburgh Napier University, the University of Edinburgh, and Abertay University – the enormous rise in reported cybercrime during the pandemic has paradoxically been dominated by issues with a much more local character. Our paper sketches a past, of cybercrime in a turbulent 2020, and a future, of the roles which state law enforcement might play in tackling online harm in a post-pandemic world.
I’ll be trying to liveblog the seventeenth Workshop on the Economics of Information Security (WEIS), which is being held online today and tomorrow (December 14/15) and streamed live on the CEPS channel on YouTube. The event was introduced by the general chair, Lorenzo Pupillo of CEPS, and the program chair Nicolas Christin of CMU. My summaries of the sessions will appear as followups to this post, and videos will be linked here in a few days.
With the recent United States presidential election, I have chosen to focus this Three Paper Thursday on extremism and radicalisation. This topic has received increasing media attention in the United States over the past six years, through both a general rise in the public prominence of far-right, racist rhetoric in political culture (often attributed to the Trump presidency) and a series of high-profile violent events associated with far-right extremism. These events range from the riots in Charlottesville, Virginia – which turned violent when rally attendees clashed with counter-protesters and a vehicle drove into a crowd marching through downtown, killing one protester (Heim, Silverman, Shapiro, & Brown, 2017) – to the recent arrest of individuals plotting to kidnap the Governor of Michigan. This far-right violence brought to light the continued existence of right-wing extremism in the United States, which has historical roots in well-known organisations such as the Ku Klux Klan (KKK), a secretive, racist, terrorist organisation founded in 1865 during Reconstruction as part of a backlash against the acquisition of civil rights by African-American people in the South (Bowman-Grieve, 2009; Martin, 2006).
In contemporary online societies, the landscape and dynamics of right-wing extremist communities have changed. These communities have learned how to exploit the capacities of online social networks for recruitment, information sharing, and community building. The sophistication and reach of online platforms have evolved rapidly from the bulletin board system (BBS) to online forums and now social media platforms, which incorporate powerful technologies for marketing, targeting, and disseminating information. However, the use of these platforms for right-wing radicalisation (the process through which an individual develops and/or accepts extreme ideologies and beliefs) remains under-examined in academic scholarship. This Three Paper Thursday pulls together some key current literature on radicalisation in online contexts.
Maura Conway, Determining the role of the internet in violent extremism and terrorism: Six suggestions for progressing research. Studies in Conflict & Terrorism, 40(1), 77-98. https://www.tandfonline.com/doi/full/10.1080/1057610X.2016.1157408.
The first paper comments on future directions for research in understanding and determining the role of the Internet in violent extremism and terrorism. After guiding readers through an overview of current research, the author argues that there is a lack of both descriptive and explanatory work on the topic, as the field remains divided. Some view the Internet as a mere speech platform and argue that participation in online radicalised communities is often the most extreme behaviour in which most individuals engage. Others acknowledge the affordances of the Internet but are uncertain of its role in replacing or strengthening other radicalisation processes. The author concludes that two major research questions remain to be answered: whether radicalisation can occur in a purely online context, and if so, whether it contributes to violence; if it does, the mechanisms merit further exploration. The author makes six suggestions for future researchers: a) widening current research to include movements beyond jihadism, b) conducting comparison research (e.g., between platforms and/or organisations), c) studying individual users in extremist communities and groups, d) using large-scale datasets, e) adopting an interdisciplinary approach, and f) examining the role of gender.
Yi Ting Chua, Understanding radicalization process in online far-right extremist forums using social influence model. PhD thesis, Michigan State University, 2019. Available from https://d.lib.msu.edu/etd/48077.
My doctoral dissertation examines the impact of participation in online far-right extremist groups on radicalisation. In this research, I applied social network analysis and integrated theories from criminology (social learning theory) and political science (the idea of the echo chamber) to understand the process of attitudinal change within social networks. It draws on a longitudinal database of threads saved from eight online far-right extremist forums. With the social influence model, which is a regression model with a network factor, I was able to include the number of interactions and attitudinal beliefs of user pairs when examining attitudinal changes across time. This model allows us to determine if, and how, active interactions result in the expression of more radical ideological beliefs. Findings suggested that online radicalisation occurred to varying degrees in six of seven forums, with a generally lower level of expressed extremism towards the end of the observed time period. The study found strong support for the proposition that active interactions with forum members and connectedness are predictors of radicalisation, while suggesting that other mechanisms, such as self-radicalisation and users’ prior beliefs, were also important. This research highlights the need for theory integration, detailed measures of online peer association, and cross-platform comparisons (e.g., Telegram and Gab) to address the complex phenomenon of online radicalisation.
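The core idea – regressing a user’s later attitude on their own prior attitude plus an interaction-weighted average of their contacts’ attitudes – can be illustrated with a toy sketch. This is not the thesis code: the network, attitudes, and coefficients below are all synthetic.

```python
# Toy illustration of a social influence model (NOT the thesis code):
# regress each user's attitude at time t+1 on their own attitude at time t
# and the interaction-weighted average of their contacts' attitudes.
import numpy as np

rng = np.random.default_rng(0)
n = 50

# Interaction counts between user pairs, row-normalised into influence weights
W = rng.poisson(2.0, size=(n, n)).astype(float)
np.fill_diagonal(W, 0.0)                 # no self-influence
W /= W.sum(axis=1, keepdims=True)

attitude_t = rng.normal(size=n)          # expressed extremism at time t
peer_signal = W @ attitude_t             # the "network factor"

# Simulate t+1 attitudes: intercept 0.2, inertia 0.5, peer influence 0.4
attitude_t1 = (0.2 + 0.5 * attitude_t + 0.4 * peer_signal
               + rng.normal(scale=0.05, size=n))

# Ordinary least squares recovers the three coefficients
X = np.column_stack([np.ones(n), attitude_t, peer_signal])
beta_hat, *_ = np.linalg.lstsq(X, attitude_t1, rcond=None)
print(beta_hat)                          # estimates near (0.2, 0.5, 0.4)
```

A positive, significant coefficient on the network factor in real data is what would indicate that peers’ expressed attitudes predict a user’s subsequent attitude shift.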
Magdalena Wojcieszak, ‘Don’t talk to me’: effects of ideologically homogeneous online groups and politically dissimilar offline ties on extremism. New Media & Society, 12(4) (2010) pp 637-655. https://journals.sagepub.com/doi/abs/10.1177/1461444809342775.
In this article, the author is interested in answering two questions: 1) does participation in ideologically homogeneous online groups increase extreme beliefs, and 2) how do offline strong and weak ties with dissimilar beliefs affect extreme beliefs? The author uses online survey data and posts from neo-Nazi online forums. The outcome is measured by respondents’ answers to 10 ideology-specific statements. Other variables in the analysis included the level of participation in online groups, perceived dissimilarity of offline ties, news media exposure, and demographics. Findings from a multivariate regression model indicate that participation in online groups was a strong predictor of support for racial violence after controlling for demographic factors and news media exposure. Forum members’ attitudes are subject to normative influence via punitive or rewarding replies. For individuals with politically dissimilar offline ties, the author finds a weakened participation effect.
Together, these papers highlight the complexity of assessing the role played by the Internet in the radicalisation process. The first paper offers six approaches through which researchers might tackle the question of whether violent radicalisation occurs online. The other two papers show support for online radicalisation while calling attention to the effect of other variables, such as the influence of offline relationships and users’ baseline beliefs prior to online participation. All of these papers cross academic disciplines, highlighting the importance of an interdisciplinary perspective.
Bowman-Grieve, L. (2009). Exploring “Stormfront”: A virtual community of the radical right. Studies in Conflict & Terrorism, 32(11), 989-1007.
Heim, J., Silverman, E., Shapiro, T. R., Brown, E. (2017, August 13). One dead as car strikes crowds amid protests of white nationalist gathering in Charlottesville; two police die in helicopter crash. The Washington Post. Retrieved from https://www.washingtonpost.com/local/fights-in-advance-of-saturday-protest-in-charlottesville/2017/08/12/155fb636-7f13-11e7-83c7-5bd5460f0d7e_story.html?utm_term=.33b6686c7838.
Martin, G. (2006). Understanding Terrorism: Challenges, Perspectives, and Issues. Thousand Oaks, California: Sage Publications.
Online underground marketplaces are an essential part of the cybercrime economy. They often act as a cash-out market, enabling the trade in illicit goods and services between pseudonymous members. To understand their characteristics, previous research has mostly relied on vendor ratings, public feedback, and sometimes private messages, friend status, and post content. However, most research lacks comprehensive (and important) data about transactions made by forum members.
Our recent paper (original talk here) published at the Internet Measurement Conference (IMC’20) examines how an online illicit marketplace evolves over time (especially its performance as an infrastructure for trust), including a significant shift through the COVID-19 pandemic. This study draws insights from a novel, rich and powerful dataset containing hundreds of thousands of contractual transactions made by members of HackForums — the most popular online cybercrime community. The data includes a two-year historical record of the contract system, originally adopted in June 2018 as an attempt to mitigate scams and frauds occurring between untrusted parties. As well as contractual arrangements, the dataset includes thousands of associated members, threads, and posts on the forum, which provide additional context. To study the longitudinal maturation of this marketplace, we split the timespan into three eras: Set-up, Stable, and COVID-19. These eras are delimited by two important external milestones: the enforcement of the forum’s new policy in March 2019, and the declaration of the global pandemic in March 2020.
We applied a range of analysis and statistical modelling approaches to outline the maturation of the market’s economic and social characteristics since the day it was introduced. We find the market has centralised over time, with a small proportion of ‘power users’ involved in the majority of transactions. In terms of trading activities, currency exchange and payments account for the largest proportion of both contracts and users involved, followed by giftcards and accounts/licenses. Other popular products include automated bots, hacking tutorials, remote access tools (RATs), and eWhoring packs. Contracts are settled faster over time, with the completion time dropping from around 70 hours in the early months to less than 10 hours during the COVID-19 Era in June 2020.
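The era-based comparison can be sketched as follows. This is not the paper’s code: the column names and the handful of records are hypothetical, and only the two milestone dates come from the study.

```python
# Illustrative sketch of the era split (NOT the paper's code): bucket contract
# records by the two external milestones and compare median completion times.
# The column names and the six records below are hypothetical.
import pandas as pd

contracts = pd.DataFrame({
    "opened": pd.to_datetime(["2018-07-01", "2018-09-15", "2019-05-10",
                              "2019-11-02", "2020-04-20", "2020-06-05"]),
    "completion_hours": [72.0, 65.0, 30.0, 24.0, 12.0, 8.0],
})

# Era boundaries: new-policy enforcement (March 2019) and the pandemic
# declaration (March 2020), as in the paper
bins = pd.to_datetime(["2018-06-01", "2019-03-01", "2020-03-01", "2021-01-01"])
contracts["era"] = pd.cut(contracts["opened"], bins=bins,
                          labels=["Set-up", "Stable", "COVID-19"])

median_by_era = contracts.groupby("era", observed=True)["completion_hours"].median()
print(median_by_era)
```

On the real dataset, the same grouping runs over hundreds of thousands of contracts rather than six toy rows.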
We quantitatively estimate a lower bound on total trading value of over 6 million USD across public and private transactions. As for payment methods used within the market, Bitcoin and PayPal dominate the others at all times, in terms of both trading value and number of contracts. A subset of new members joining the market face the ‘cold start’ problem: the difficulty of establishing and building up a reputation while initially having none. We find that the majority of these build up their profile by participating in low-level currency exchanges, while some instead establish themselves by offering products and services.
To examine the behaviours of members over time, we use Latent Transition Analysis to discover hidden groups among the forum’s members, including how members move between groups and how they change across the lifetime of the market. In the Set-up Era, we see users gradually shift to the new system with a large number of ‘small scale’ users involved in one-off transactions, and few ‘power-users’. In the Stable Era, we see a shift in the composition and scale of the market when contracts become compulsory, with a growth of ‘business-to-consumer’ trades by ‘power-users’. In the COVID-19 Era, the market further concentrates around already-existing ‘power-users’, who are party to multiple transactions with others.
Overall, the marketplace provides a range of trust capabilities to facilitate trade between pseudonymous parties, with control becoming further centralised as administrators act as third-party arbitrators. The platform is clearly being used as a cash-out market, with most trades involving the exchange of currencies. Across the three eras, the big picture shows two significant rises in the market’s activity in response to the two major events at the start of the Stable and COVID-19 eras. In particular, we observe a stimulus (rather than a transformation) in trading activity during the pandemic: the same kinds of transactions, users, and behaviours, but at increased volumes. Looking at the context of forum posts at that time, we see a period of mass boredom and economic change, when some members were no longer at school while others had become unemployed or were unable to go to work. A need to make money, and the time on their hands to do so, may be factors behind the increase in trading activity seen at this time.
Some limitations of our dataset include a lack of ground-truth verification: we have no way to verify whether transactions actually proceeded as set out in the contractual agreements. Furthermore, the dataset contains a large number of private contracts (around 88%), for which we can only observe minimal information. The dataset is available to academic researchers through the Cambridge Cybercrime Centre‘s data-sharing agreements.
For a slightly different Three Paper Thursday, I’m pulling together some of the work done by our Centre and others around the COVID-19 pandemic and how it, and government responses to it, are reshaping the cybercrime landscape.
The first thing to note is that there appears to be a nascent academic consensus emerging that the pandemic – or, more accurately, lockdowns and social distancing – has indeed substantially changed the topology of crime in contemporary societies, leading to an increase in cybercrime and online fraud. The second is that this large-scale increase in cybercrime appears to be the result of a growth in existing cybercrime phenomena rather than the emergence of qualitatively new exploits, scams, attacks, or crimes. This invites reconsideration not only of our understanding of cybercrime and its relation to space, time, and materiality, but also of our understanding of what to do about it.
Underground forums contain discussions and advertisements of various topics, including general chatter, hacking tutorials, and sales of items on marketplaces. While off-the-shelf natural language processing (NLP) techniques may be applied in this domain, they are often trained on standard corpora such as news articles and Wikipedia.
It isn’t clear how well these models perform with the noisy text data found on underground forums, which contains evolving domain-specific lexicon, misspellings, slang, jargon, and acronyms. I explored this problem with colleagues from the Cambridge Cybercrime Centre and the Computer Laboratory, in developing a tool for detecting bursty trending topics using a Bayesian approach of log-odds. The approach uses a prior distribution to detect change in the vocabulary used in forums, for filtering out consistently used jargon and slang. The paper has been accepted to the 2020 Workshop on Noisy User-Generated Text (ACL) and the preprint is available online.
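The underlying idea – comparing word frequencies in a recent window against a background corpus, smoothed by a prior and z-scored by an estimate of variance – can be sketched as below, in the spirit of the weighted log-odds approach of Monroe et al. (2008) which this line of work builds on. This is not the paper’s implementation, and the three toy corpora are invented, not forum data.

```python
# Minimal sketch of weighted log-odds with an informative Dirichlet-style
# prior: words that are unusually frequent in `target` relative to
# `background` (after smoothing by `prior`) get high z-scores.
import math
from collections import Counter

def log_odds_z(target, background, prior, alpha0=100.0):
    """z-scored log-odds of each word in `target` vs `background`, smoothed
    by `prior` frequencies scaled to `alpha0` total pseudo-counts."""
    n_t, n_b, n_p = sum(target.values()), sum(background.values()), sum(prior.values())
    scores = {}
    for w in set(target) | set(background):
        a = alpha0 * prior.get(w, 0) / n_p + 1e-9       # prior pseudo-count
        yt, yb = target.get(w, 0), background.get(w, 0)
        delta = (math.log((yt + a) / (n_t + alpha0 - yt - a))
                 - math.log((yb + a) / (n_b + alpha0 - yb - a)))
        var = 1.0 / (yt + a) + 1.0 / (yb + a)           # approximate variance
        scores[w] = delta / math.sqrt(var)
    return scores

prior = Counter("the booter stresser service the the market".split())
background = Counter("the market is quiet the usual chatter".split())
target = Counter("new booter booter stresser dropped the booter".split())

scores = log_odds_z(target, background, prior)
print(max(scores, key=scores.get))    # 'booter' bursts in the target window
```

The prior does the filtering work: consistently used jargon appears in the prior counts and is smoothed away, so only genuinely bursty terms score highly.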
Other commonly used approaches for identifying known and emerging trends range from simple keyword detection using a dictionary of known terms, to statistical methods of topic modelling including TF-IDF and Latent Dirichlet Allocation (LDA). In addition, the NLP landscape has been changing over the last decade, with a shift to deep learning using neural models, such as word2vec and BERT.
In this Three Paper Thursday, we look at how past papers have used different NLP approaches to analyse posts in underground forums, from statistical techniques to word embeddings, for identifying and defining new terms, generating relevant warnings even when the jargon is unknown, and identifying similar threads despite relevant keywords not being known.
Gregory Goth. 2016. Deep or shallow, NLP is breaking out. Commun. ACM 59, 3 (March 2016), 13–16. DOI: https://doi.org/10.1145/2874915
This is a guest post by Cassandra Cross.
Romance fraud (also known as romance scams or sweetheart swindles) affects millions of individuals globally each year. In 2019, the Internet Crime Complaint Center (IC3) in the USA recorded over US$475 million reported lost to romance fraud. Similarly, victims in Australia reported losing over AU$80 million, and British citizens reported over £50 million lost in 2018. Given the known under-reporting of fraud overall, and online fraud more specifically, these figures likely represent only a fraction of the actual losses incurred.
Romance fraud occurs when an offender uses the guise of a legitimate relationship to gain a financial advantage from their victim. It differs from a bad relationship, in that from the outset, the offender is using lies and deception to obtain monetary rewards from their partner. Romance fraud capitalises on the fact that a potential victim is looking to establish a relationship and exhibits an express desire to connect with someone. Offenders use this to initiate a connection and start to build strong levels of trust and rapport.
As with all fraud, victims experience a wide range of impacts in the aftermath of victimisation. While many believe these to be only financial, in reality they extend to a decline in both physical and emotional wellbeing, relationship breakdown, unemployment, homelessness, and in extreme cases, suicide. In the case of romance fraud, there is the additional trauma of grieving both the loss of the relationship and the loss of any funds transferred. For many victims, the loss of the relationship can be harder to cope with than the monetary aspect, with victims experiencing a profound sense of betrayal and violation at the hands of their offender.
Sadly, a large amount of victim blaming also surrounds both romance fraud and fraud in general. Fraud is unique in that victims actively participate in the offence, through the transfer of money, albeit under false pretences. As a result, they are seen as culpable for what occurs and are often blamed for their own circumstances. The stereotype of fraud victims as greedy, gullible and naïve persists, and presents a barrier to disclosure as well as inhibiting victims’ ability to report the incident and access support services.
Given the magnitude of the losses and impacts on romance fraud victims, there is an emerging body of scholarship that seeks to better understand how offenders successfully target victims, how they perpetrate their offences, and how victimisation affects the individuals themselves. The following three articles each explore a different aspect of romance fraud, to build a more holistic understanding of this crime type.
We have yet another “post-doc” position in the Cambridge Cybercrime Centre: https://www.cambridgecybercrime.uk (for the happy reason that Ben is leaving us to become a Lecturer in Digital Methods in Edinburgh).
Hence, once again, we are looking for an enthusiastic researcher to join us to work on our datasets of cybercrime activity, collecting new types of data, maintaining existing datasets and doing innovative research using our data. The person we appoint will define their own goals and objectives and pursue them independently, or as part of a team.
We are specifically interested in determining how cybercrime has changed in response to the COVID-19 pandemic, and our funding requires us to identify new trends, to collect (and share) relevant data, and to rapidly provide an analysis of what is happening, with the aim of assisting in optimising technical and policy responses. We are also expanding our data collection into examining the online activities of extremist groups — with a specific focus on pandemic-related issues.
An ideal candidate would identify datasets that can be collected, build the collection systems and then do cutting edge research on this data – whilst encouraging other academics to take our data and make their own contributions to the field. However, we recognise that candidates may be from a technical background and hence stronger at the collecting side, or from a social science background and hence stronger on providing compelling insights into what our data reveals. Along with a CV we expect to see a covering letter which sets out what type of research might be done and the skills which will be brought to bear, along with an indication where help would need to be sought from colleagues in our interdisciplinary environment.
Please follow this link to read the formal advertisement for details about exactly who and what we’re looking for and how to apply — and please pay special attention to our request for a covering letter.