Human HARMS: Threat modelling social harms against technical systems

by Kieron Ivy Turk, Anna Talas, and Alice Hutchings

When talking about the importance of cybersecurity, we often imagine hackers breaking into high-security systems to steal data or money, or to launch large-scale attacks. However, technology can also be used for harm in everyday situations. Traditional cybersecurity models tend to focus on protecting systems from highly skilled external attackers. While these models are effective against technical threats, they do not adequately address interpersonal threats, which often require no technical skill at all, such as those found in cases of domestic abuse.

The HARMS model (Harassment, Access and infiltration, Restrictions, Manipulation and tampering, and Surveillance) is a new threat modelling framework designed to identify the non-technical and human-factors harms that are often missed by popular frameworks such as STRIDE. We focus on how everyday technology, such as IoT devices, can be exploited to distress, control, or intimidate others.

Definitions and examples of each of these five elements are provided in Table 1.

The threat model can be used to consider how a device or application could be misused, and to identify ways it might be redesigned to make these harms harder to commit. Imagine, for example, a smart speaker in a shared home. An abusive individual could use it to send distressing messages to be read aloud, or to set alarms to go off in the middle of the night. Equally, if the smart speaker is connected to calendars, scheduled events could be changed or removed so that users miss meetings and appointments. Connected devices can also be controlled remotely or automatically through routines, causing changes the user does not understand and making them doubt their memory or even their sanity. An abuser could further monitor conversations through the built-in microphones, or use the device's logs to track the commands others have used.
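To illustrate how such an analysis might be recorded in practice, here is a minimal Python sketch of a HARMS checklist for the smart speaker above. It is our own illustration rather than anything prescribed by the paper; the `Harm` enum and `Threat` record are hypothetical names introduced for the example.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Harm(Enum):
    """The five elements of the HARMS model."""
    HARASSMENT = auto()
    ACCESS_AND_INFILTRATION = auto()
    RESTRICTIONS = auto()
    MANIPULATION_AND_TAMPERING = auto()
    SURVEILLANCE = auto()


@dataclass
class Threat:
    feature: str      # the device capability being misused
    scenario: str     # how an abusive user could exploit it
    harms: set[Harm]  # HARMS elements it falls under (may be several)


# Threats identified for a shared-home smart speaker, as in the text above.
SMART_SPEAKER_THREATS = [
    Threat("text-to-speech", "send distressing messages to be read aloud",
           {Harm.HARASSMENT}),
    Threat("alarms", "set alarms to go off in the middle of the night",
           {Harm.HARASSMENT, Harm.MANIPULATION_AND_TAMPERING}),
    Threat("calendar sync", "change or remove events so users miss appointments",
           {Harm.MANIPULATION_AND_TAMPERING}),
    Threat("routines / remote control", "trigger unexplained changes that make "
           "users doubt their memory", {Harm.MANIPULATION_AND_TAMPERING}),
    Threat("microphone", "monitor conversations in the home",
           {Harm.SURVEILLANCE}),
    Threat("activity logs", "review the commands other users have issued",
           {Harm.SURVEILLANCE}),
]

# A designer can then review coverage: which HARMS elements has the
# analysis touched, and which still need attention?
covered = {h for t in SMART_SPEAKER_THREATS for h in t.harms}
missing = set(Harm) - covered
print("Not yet analysed:", sorted(h.name for h in missing))
```

Enumerating threats this way makes gaps visible: the run above reports that Access and infiltration and Restrictions have not yet been analysed for this device.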

Importantly, any one type of harm is not constrained to a single category; in fact, many possible attacks span multiple components of HARMS. For example, a common yet severe online harm is doxxing, in which a malicious user obtains sensitive information about a victim and shares it online. This encompasses several aspects of the HARMS model: the information may be obtained through surveillance, but released with the intention of harassment. Any threat analysis using HARMS must therefore consider possible overlaps between elements to identify a broader set of attacks.
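In the checklist sketch above, this is why each `Threat` carries a set of harms rather than a single label: doxxing would be recorded as `{Harm.SURVEILLANCE, Harm.HARASSMENT}`, so the analysis flags both the monitoring and the abuse it enables.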

The human HARMS model approaches threat modelling from a different angle than widespread methodologies such as STRIDE. There are various overlaps between the two, which can be used to gain a fuller picture of possible attack types. The Surveillance component of HARMS concerns privacy, as does Information disclosure in STRIDE; however, surveillance covers malicious observation and monitoring of people, whilst information disclosure focuses on data storage and leaks. Other risks can only be identified through one model or the other, such as Harassment (HARMS) and Repudiation (STRIDE). We therefore recommend using multiple threat modelling methodologies to encourage improved analysis of the security, privacy, and possible misuse of novel systems.
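As a rough sketch of what using the two models together might look like, the mapping below cross-references each HARMS element with related STRIDE categories. Only the Surveillance/Information disclosure overlap and the Harassment/Repudiation gaps come from the discussion above; the remaining pairings are our own illustrative assumptions, not a mapping taken from the paper.

```python
# Cross-reference between HARMS elements and related STRIDE categories.
# Only the Surveillance <-> Information disclosure link is discussed in
# the text; the other pairings are illustrative assumptions.
HARMS_TO_STRIDE = {
    "Harassment": [],  # no direct STRIDE analogue
    "Access and infiltration": ["Spoofing", "Elevation of privilege"],
    "Restrictions": ["Denial of service"],
    "Manipulation and tampering": ["Tampering"],
    "Surveillance": ["Information disclosure"],
}
# Note: STRIDE's Repudiation has no HARMS counterpart, so a combined
# analysis should still walk through STRIDE in full as well.


def related_stride(harms_findings: dict[str, list[str]]) -> dict[str, list[str]]:
    """For each HARMS element where threats were found, suggest the
    related STRIDE categories to review next."""
    return {
        harm: HARMS_TO_STRIDE.get(harm, [])
        for harm, threats in harms_findings.items()
        if threats
    }


findings = {
    "Surveillance": ["microphone monitoring", "command logs"],
    "Harassment": ["alarms set for the middle of the night"],
}
print(related_stride(findings))
# {'Surveillance': ['Information disclosure'], 'Harassment': []}
```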

As smart home technology, connected devices, and online platforms continue to evolve, we must think beyond purely technical security. Our HARMS model highlights how technology, even when working as intended, can be used to control and harm individuals. By incorporating human-centred threat modelling into software development, alongside traditional threat modelling methods, we can build safer systems and help prevent their use for abuse.

Paper: Turk, K. I., Talas, A., & Hutchings, A. (2025). Threat Me Right: A Human HARMS Threat Model for Technical Systems. arXiv preprint arXiv:2502.07116.