8 thoughts on “Security and Human Behaviour 2022”

  1. Alice Hutchings started the workshop with a talk on “Airstrikes in Cyberspace”, describing changes in the patterns of DDoS and website defacement attacks following the Russian invasion of Ukraine. There was a spike of attacks from Ukraine on Russia seven hours after the invasion, and a response from Russia two days later. Alice and Anh have interviewed two of the perpetrators, who admitted to a mix of motives: financial gain as well as notoriety.

    Maria Bada has been using a PRISMA flow technique to analyse the gaps in our understanding of online offenders. This boiled down some 3,000 articles to 39 significant ones. There’s a significant shortage of work on offender psychology, as well as on the social factors affecting hackers and other types of offender.

    Jean Camp’s talk was on “From signalling to safety”. Security is a lemons market, with most people unable to discriminate between fraudsters and bankers, or between ransomware and Microsoft updates. Designers expect too much expertise from users, as well as constant vigilance. The result is a chronically unsafe system, especially for the elderly. The goal should be safe decision making, with systems that fail slowly and locally. Jean has been working on personalised blocking, which enables users to fingerprint the websites they visit, just as those websites fingerprint them. Here, local data can be more reliable, as it’s easier to poison a shared public data source. She’s been testing this with a variety of users.

    Yi Ting Chua has been studying the ways in which tech is incorporated into traditional crime, from bullying through stalking to murder. Police forces face increasing problems with securing and processing digital evidence, as social media, IoT and cloud services complicate digital forensics. If a student decides to become a digital forensic examiner, what do they do? There’s a huge gap in education, training, standards, certification. And that’s before we talk of prosecutors, judges and other players.

    Sergio’s topic is detecting multi-accounts in underground communities. How can we tell whether the same user is behind multiple pseudonyms in an underground crime forum? Sometimes we get lucky when snippets of text are quoted; even careful criminals make opsec mistakes, particularly when they want to be contacted or to get paid; and you can use stylometry. Sergio is developing a methodology to do this in a systematic way.

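    As a rough illustration of the stylometric part of such linking, the sketch below compares character n-gram profiles of posts from different pseudonyms; the account names, posts and similarity threshold are invented, and this is not Sergio’s actual methodology, which also draws on metadata such as contact details and timing.

```python
# Toy stylometric linking of forum pseudonyms via character n-gram profiles.
# Accounts, posts and the threshold are made up for demonstration purposes.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

posts_by_account = {
    "dark_seller":   "yo hmu on jabber if u want the dumps... payment upfront, no escrow",
    "shadow_vendor": "hmu on jabber for fresh dumps... upfront payment only, no escrow lol",
    "quiet_admin":   "Please keep marketplace discussion in the relevant subforum. Thank you.",
}

accounts = list(posts_by_account)
# Character 3-5-grams capture punctuation habits, spelling quirks and slang.
vectoriser = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5))
profiles = vectoriser.fit_transform(posts_by_account.values())
similarity = cosine_similarity(profiles)

THRESHOLD = 0.4  # arbitrary here; a real study would calibrate this on known pairs
for i in range(len(accounts)):
    for j in range(i + 1, len(accounts)):
        if similarity[i, j] > THRESHOLD:
            print(f"{accounts[i]} / {accounts[j]}: stylistically similar "
                  f"(cosine {similarity[i, j]:.2f})")
```
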
    Leonie Tanczer works on intimate partner abuse, and the ways in which tech can be used to monitor, control or coerce a partner. This is widespread, and domestic abuse victims and charities are struggling with it. She plans to use forum data and NLP to analyse abuser behaviour; she’s also working with a hospital and with homicide investigators to collect data on victims. Charities that provide cyber helplines are increasingly supporting abuse victims rather than fraud victims.

    The last speaker of the first session was Marie Vasek, who’s been analysing why cyber-criminals pretend to be from certain countries. Note that some “legit” businesses do this too, and there are grey areas between legit and outright criminal behaviour. Years ago, it was noted that 419 scammers claimed to be from Nigeria so as to filter out all but the most gullible prospects; romance scams also have a set of distinct patterns; rental scammers pretend to be local even when they’re not; and high-yield investment programs tended to have UK addresses despite being operated from overseas.

    Discussion started off on how bad people learn their tricks; there’s quite a lot of research on pathways to crime, and they’re mixed. How can police be motivated to do more about tech abuse? It’s hard, because their focus is on high-impact crimes such as homicide, even if the abuse contributes to that. There are structural shortages of experts because they get squeezed between the legal profession and the police, which grab much of the available money. Google has made some attempts to get rid of stalkerware but there are too many dual-use products. Parental control software conditions the next generation; and the behaviour of police and other government officials is also problematic. In theory a rape survivor should not have to show their phone to the police; the police can ask but the victim can refuse. However this may result in an investigation being abandoned. Social norms vary by country, though; Brits are comfortable with levels of surveillance that Germans find creepy. In the USA, cops have been testing the DNA of victims as well as assailants; if they find that the victims are wanted for a crime, they don’t test the assailant. California has now proposed to make this practice illegal. Another issue is online manipulation; chatbots are often programmed to keep people engaged, which is essentially the same as grooming. When we talk about normalising abusive discourse, we have to think of AI interactions too, and of normalising the reuse of data for different purposes than those for which it was provided.

  2. Zinaida Benenson started the second session with a talk on sysadmins and gender. She surveyed sysadmins. Sexism is well embedded in sysadmin interactions, with admins talking down to female clients and women’s technical abilities taken less seriously. To succeed, female techies adopt a variety of strategies including a rough conversational style and less feminine clothing. A critical factor is the number of women in the relevant team.

    Peter Grabosky has been studying the ways in which liberal democracies have been teaching tactics and methods to authoritarian states. France’s General Paul Aussaresses supervised torture in Algeria, then taught the Brazilian, Argentinian and other militaries. Similarly, the US Army School of the Americas taught repression to many Latin American officers; and the British Empire was just as guilty. The Nazi racial hygiene practices, including forced sterilisation, were justified by invoking those of California. So beware of using or sharing technologies of excess.

    Alan Mislove has been working on implied identity in ad auctions. When they end up discriminating, whether on age or race or gender, the most influential aspect seems to be the images. Alan ran a number of Facebook ads using images from Shutterstock and the same text. Ads with black images were 20% more likely to be delivered to black users; images of young women were more likely to be delivered to young women and old men. He tried again with StyleGAN-generated images, which he ran through the DeepFace classifier to estimate age and race. This enabled him to perturb the same image to make it older or younger, black or white, male or female. This confirmed the same biases, even more strongly.

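    The classification step of such a pipeline can be sketched with the open-source DeepFace library, roughly as below; the file names are placeholders, the StyleGAN generation and the perturbation along demographic directions are not shown, and this is an illustration rather than Alan’s actual code.

```python
# Illustrative sketch of the demographic-classification step only: estimate the
# apparent age and race of (generated) face images with the open-source
# DeepFace library. File names are placeholders; StyleGAN generation and the
# perturbation of images along age/race/gender directions are not shown.
from deepface import DeepFace

for path in ["face_original.png", "face_perturbed_older.png"]:
    analysis = DeepFace.analyze(img_path=path, actions=["age", "race"],
                                enforce_detection=False)
    # Recent DeepFace versions return a list with one entry per detected face.
    face = analysis[0] if isinstance(analysis, list) else analysis
    print(path, "-> estimated age:", face["age"],
          "| dominant race:", face["dominant_race"])
```
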
    Simon Parkin has been wondering how small to medium enterprises might, as a practical matter, “do more” about cybersecurity. With no governance structures, how can smaller companies marry their business experience with advice on, say, phishing attacks or ransomware? How can one generate conversations rather than edicts? Many business people may see ransomware as a kind of ghost that appears in their computers overnight. What channels are available to educate busy people?

    Awais Rashid’s question is “Designing for whom?” or “Why doesn’t Johnny write secure software?” Are devs stupid, or does the way we make them design software make them feel or act stupid? There are two gaps – the one between the toolsmiths and the devs, and the one between the devs and the users. Bridging the first needs a collaborative approach, while for the second the appropriate philosophy might be equitable privacy. Awais’s team is working on the first in the context of the big CHERI/Arm project currently funded by the UK government, and on the second by doing field work with vulnerable populations.

    Daniel Thomas is working on measuring Android security with a number of people from other universities and from Google. As long as customers can’t measure security we get stuck with a market for lemons. They have device farms, test labs and crowdsourced data. What should the output be? A single score? Something for journalists to put in reviews? A minimum standard for vendors (maybe part of ETSI TS 103 645)?

    Kami Vaniea has been working on providing useful phishing feedback. Edinburgh’s helpdesk was floored (see https://www.research.ed.ac.uk/en/publications/a-case-study-of-phishing-incident-response-in-an-educational-orga) when LastPass sent out an email that many people thought was a phish, and reported it. Genuine phish are more targeted and don’t do that; anything sent to everyone is blocked within minutes. So organisations want people to report phish but give contradictory advice. Kami is building a prototype system, PhishEd, which tries to make phish reporting a positive experience.

    The discussion started from the long history of women in system administration and moved to the difficulty of telling marketing mails from phish. It’s hard to measure the security of a phone as it evolves with updates and potentially risky apps, and there are multiple audiences for security metrics – vendors, experts, journalists and users (Google thinks it better that independent raters tell its OEMs which products are rubbish than that Google does so itself). There are also huge differences between countries; Bangladesh and Rwanda are much worse than the UK, for example. How do you deal with vendors that operate only in low-cost markets? And authoritarian governments, like those in the Middle East, are lucrative markets for firms that sell surveillance systems. Might sanctions help prevent this problem? Possibly, but rather hard to achieve, especially against countries that have oil or that are high-priority intelligence targets. On the corporate side, it’s not at all clear that cybersecurity management should be constantly brought to the attention of the boards of normal midsize companies.

  3. First up after lunch was Alessandro Acquisti, who’s been studying census-guided grants. There’s been a lot of theoretical work on privacy-preserving analytics such as differential privacy (DP) and homomorphic encryption, but do they do any real work? A lot of things are missing, such as how you deal with non-response, errors and robustness generally. Alessandro has been starting from census data to estimate the effects of DP on local measurement of child poverty. Out of $1.5tn of census-guided funding in 2021, child poverty data led to $11.7bn of “Title I” grants, of which $1.06bn was misallocated due to known errors in the data, but an order of magnitude less would be misallocated due to the noise added by DP.

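    To make the mechanism concrete, here is a toy sketch of how Laplace noise of the kind used for differential privacy can flip a threshold-based funding decision for small areas; the counts, epsilon and eligibility rule are all invented, not the Census Bureau’s actual parameters.

```python
# Toy sketch: Laplace noise of the kind used for differential privacy, added to
# small-area child-poverty counts, can flip threshold-based funding decisions.
# Counts, epsilon and the eligibility rule are invented for illustration only.
import numpy as np

rng = np.random.default_rng(2022)
true_counts = np.array([12, 48, 99, 101, 1_500])  # children in poverty per district
THRESHOLD = 100                                   # pretend eligibility cut-off
EPSILON = 0.5                                     # privacy-loss parameter

noisy_counts = true_counts + rng.laplace(scale=1 / EPSILON, size=true_counts.size)

truly_eligible = true_counts >= THRESHOLD
noisily_eligible = noisy_counts >= THRESHOLD
flipped = int((truly_eligible != noisily_eligible).sum())

print("true counts: ", true_counts)
print("noisy counts:", noisy_counts.round(1))
print(f"{flipped} of {len(true_counts)} districts change eligibility status;")
print("small districts near the cut-off are the ones most at risk of misallocation.")
```
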
    Nicolas Christin studies how users browse the web in order to understand how they’re exposed to scams and how they recover from compromise. Previous concepts of a browsing session need to be updated; people nowadays use a lot of tab switching, and operate within clusters of activity associated with work, email, social media and so on. Much of the danger lies in periphery sites, which are not in the Alexa top 10 million; although they represent only 5% of browsing time, they account for 89% of the Google Safe Browsing naughty list. What this means is that predictive analytics aren’t likely to help much.

    Bel Collier looked at an NCA influence operation to suppress DDoS-for-hire purchases using Google ads, and so started to look at who else was doing this. Quite a lot of influence operations are now being run to nudge a range of behaviours. Examples include ads aimed at men who’ve just been stabbed, and who might be tempted to seek revenge; fire safety advice for people who’ve just bought candles or matches; and misleading ads targeting men in Europe thinking of moving illegally to the UK. The private-sector marketing infrastructure is well developed and governments are starting to use it. Some examples are rather creepy, and the ethical framework seems to be lacking.

    Ahana Datta has plenty of stories from her time in government. She asked for them not to be recorded or blogged in an attributable way. Civil servants frequently make perverse decisions about IT because of internal incentives; people did not want to make any long-term effort if they were due to rotate to their next job next year. They switch from centralised to decentralised and back again; they decide to spend the £2m left in their budget on something they can show to a minister by the end of the year. For them, risk is a spreadsheet with a list of things that can go wrong and traffic lights next to them. It’s impossible to fix the incentives if nobody can understand the risks.

    Frank Krueger is a cognitive social neuroscientist at GMU; his research on trust is transdisciplinary and founded in neurobiology. You can trust either a person or a machine, either locally or remotely. To analyse this you need to be able to work at different levels; game theory perhaps at the behavioural level, down through the neurofunctional level to the neurochemical basis in the likes of oxytocin. The neurofunctional level is complex, involving several networks (salience, default-mode, central-executive); trust there can be based on calculus, knowledge or identification. There will be a BAA coming up for a Trust and Influence Program.

    Damon McCoy studies fake news, among other things. People engage six times more with misinformation than with regular news, and two-thirds of the information consumed by the far right is misinformation. The large tech firms appear overwhelmed; how can we do independent audits of their claims? Damon has been working on auditing ad disclosure, and found that Facebook had done retroactive deletion of political ads. His team’s access to the API of Facebook’s ad repository was pulled, so the discrepancies they discovered between public and API disclosures can’t be verified by anyone else any more. They also found misinformation campaigns, and spammy commercial ads that referred to politicians such as Trump, which the company refused to block until they went to the press. One thing Facebook would not make transparent was the targeting information; so Damon wrote “Ad Observer”, which volunteers could run to collect data on ad targeting. They found that Facebook’s political ad detection wasn’t much good, with precision of 0.45 and recall of 0.22; Facebook responded with a cease-and-desist letter. The conclusion is that self-regulation doesn’t work; government regulation is needed.

    Discussion started on the many definitions of trust, and what neuroscience might contribute; Frank’s answer was that an expectation of vulnerability is always in there, although he can sometimes tell how much someone will trust as a function of their genetics, their brain activity and whether they’re psychopathic. Philosophically this doesn’t add much but it gives us somewhere to start. Is trust in intimates quite different from trust in institutions? Maybe, but how do you measure trust in institutions? And do people who use Facebook trust Facebook, or do they trust the people they meet on Facebook? One might be able to do experiments looking for neural correlates. Trust does seem to play a role in disinformation, as it has to be established before people will believe the lie. But it may not be cardinal. People may engage with material because they like it, or believe it a bit already; what’s the interpretive frame, or the interaction frame, within which people engage? On the marketing side, you have to think of it in terms of ecosystems, of marketing consultancies entangled with policy entrepreneurs promoting each other. Government security failures have a different dynamic; failed security people move from one department to another, leaving their legacies behind but keeping their security clearances. In each case the organisational nature of “trust” is somewhat counterproductive; you end up creating a pond with resident alligators. Institutions are not just the sum of their members; nobody trusts journalists but everyone reads the news, and much the same holds for bankers and banks. For some, trust is a pleasant affective experience; it motivates people to behave cooperatively. To others, a trusted system or component is one that can screw you.

  4. Vaibhav Garg started Monday’s final session with a talk on reducing overhead for Comcast’s customers while still making progress on security and privacy. There are now quite a few security labels that a service provider can use to reassure customers. They can be binary seals of approval, like the BSI kitemark; graded labels, like those from Underwriters’ Labs; or complex and descriptive, like nutrition labels. These are probably in descending order of usefulness for users, while corporations might benefit more from the descriptive variety. Too many labels can confuse people. Vaibhav has been looking particularly at open source products they use, which are less likely to be certified; quite a few are relied on at Comcast much more than elsewhere, so they can’t reasonably hope that others will do the testing.

    Jens Grossklags has been wondering whether GDPR rights are effective. He collected data about 90 services and found that 27% did not have a data erasure process compliant with GDPR article 17; he’d ask for data to be erased, then later make a subject access request. There was wide variation across business sectors. Even worse was data portability where noncompliance was 71%.

    Susan Landau has been studying the privacy aspects of communications metadata. You can learn enormous amounts from a group’s metadata: religion, movements, even how extraverted users are. In the old days you just had browser characteristics; now you have magnetometers, gyroscopes, accelerometers and other things that can identify who’s in the same bus or subway car. The use of such data can’t practicably be limited by ideas of notice and consent; laws restricting data use will be necessary.

    Eliot Lear has, like Vaibhav, been thinking about the security labelling requirement that arrives with US Executive Order no. 14028. Given a software bill of materials and the national vulnerability database, you can in theory get a vulnerable package list; the problem is the 95% false positive rate, and the fact that the list “would stop your desk from flying away”. How can you evaluate vendor claims that a vulnerability doesn’t matter much?

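    The matching step Eliot describes can be sketched as follows; real SBoMs use formats such as SPDX or CycloneDX and are matched against the National Vulnerability Database via CPE or purl identifiers, so the hard-coded table below is a stand-in (though log4j-core 2.14.1 really is affected by CVE-2021-44228), and the point of the final comment is that most raw matches still need triage.

```python
# Minimal sketch of cross-referencing a (much simplified) software bill of
# materials against a table of known-vulnerable versions. Real SBoMs use SPDX
# or CycloneDX and are matched against the National Vulnerability Database;
# the second advisory below is a placeholder, while the log4j entry is real.
sbom = [
    {"name": "log4j-core", "version": "2.14.1"},
    {"name": "openssl",    "version": "3.0.7"},
    {"name": "left-pad",   "version": "1.3.0"},
]

# (package, vulnerable-version prefix) -> advisory
known_vulnerable = {
    ("log4j-core", "2.14"): "CVE-2021-44228 (Log4Shell)",
    ("openssl",    "1.0"):  "CVE-XXXX-YYYY (placeholder advisory)",
}

for component in sbom:
    for (pkg, bad_prefix), advisory in known_vulnerable.items():
        if component["name"] == pkg and component["version"].startswith(bad_prefix):
            print(f"{pkg} {component['version']} matches {advisory} -- "
                  "still needs triage: is the vulnerable code even reachable?")
```
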
    Tony Vance notes that boards are getting a lot of pressure, not just from peers who have been hacked but from regulators who are starting to ask whether boards have the cybersecurity expertise to ride herd on their CISO. He’s interviewed boards and CISOs from large US companies. Often the “C” is marketing; you’re only a partner if you go to the CEO’s staff meetings. Legitimacy has to be cultivated, and for that the CISOs have to spend time with the board. Can they help the board set up 2FA? Can they coach it? Unfortunately, they often don’t have what it takes.

    Discussion started around how well people understand their GDPR rights, on which Jens is doing more work. The next topic was the software bill of materials (SBoM); some firms might not want to know as they’re afraid they have GPL software in their systems. One could build a search engine to generate an SBoM automatically but who would pay for it, if it disclosed all their unlicensed or adversarially licensed software? Some might want to keep the SBoM confidential in case it made them more vulnerable, but the low-end attackers don’t bother with reconnaissance and just have a go at everything anyway. Perhaps we could do something rough and ready by just taking hashes of everything on people’s systems; but would that detect if a supplier had included something like log4j? An SBoM system can be gamed in numerous ways; for example, a software firm could make all its components appear to be new and unique so they would not get tainted by bug disclosures elsewhere, even if they ought to be. So it seems that this is an area where the regulation has got in front of the research. Might GDPR make a difference at scale? Jens doubts it; the data protection authorities just seem totally overwhelmed by the levels of noncompliance. The burden of cookie banners on users is an epic fail, even though that dark pattern tends to be a decision of the platform rather than the website. Why can’t the regulators tackle the platforms? It might be useful research to map out what comes from where in the ecosystem. Data brokers are also an issue in the US ecosystem of traffic data abuse. And finally, why don’t CISOs work well with boards? First, their technical training doesn’t give them the political savvy needed to operate there; and second, board members are fed up hearing of cybergeddon that never happens.

  5. Cassandra Cross started Tuesday’s sessions. She’s been collecting lots of data about romance fraud online, and analysing 509 reports to understand how people attempted to verify the identities of prospective romantic partners about whom they somehow became suspicious. There were generally two options. The most common was an Internet search that might turn up scammer websites, photos under a different name, the fact that a genuine person’s name and photos had been used for scams before, or merely the suspicious lack of an online presence; the second was asking third parties such as family, friends or NGOs. There are starting to be reports that suggest the use of deepfakes, where image searches don’t work at all.

    Christian Eichenmueller is a geographer working on the coloniality of smart cities who has done field work in Pune. His new research theme is adaptive environments, now that firms like Ströer are making advertising displays that greet you personally, as in Minority Report. Their acquisition of fine-grained behavioural spatial data via partnerships with Otto and T-Mobil supports the emergence of a dominant physical ad ecosystem in Germany. Christian is also a city councillor in Erlangen and has to deal with this at the policy level. Will this move us more towards nudges and appeals to affect? How does GDPR interact with this? What does it mean for public space, and the way class, race and gender play out there? What about high-risk users?

    Jonas Hielscher is working on human-centred security in organisations. Many firms are dealing with phishing by bombarding their staff with awareness campaigns and phishing simulations. But turning theoretical knowledge into unconscious behaviour is hard! Jonas highlights the need to get IT and compliance right first; if you’re forcing people to change passwords every 30 days and forbidding the use of password managers, you’re dead on arrival. Second, training must build self-efficacy rather than undermining it. Third, to change behaviour you had better eliminate the cues that trigger old, insecure behaviours at the same time as you urge new behaviours triggered by new cues.

    Anita Lavorgna has been studying public resistance to Covid tracing apps, as a simultaneous case study of resistance to surveillance, information pollution and conspiratorial thinking. Other issues include institutionalised nepotism, false advertising, scientific literacy and digital capital.

    Maryam Mehrnezhad has been wondering about the costs and benefits of femtech, from fertility monitors to smart milk pumps. Devices that claim to empower women to “take control of their bodies and lives” collect complex, sensitive personal data about the user and others, such as children; some of these data are GDPR special category data while others are less regulated but still highly sensitive. This is especially the case for excluded, neglected or “problem” groups. Threat actors include partners, ex-partners, crooks, medical companies and governments – in the USA, firms are already selling data to abortion vigilantes. So how can we protect women from online tracking?

    Jeff Yan has been studying gender bias in password managers. Some of these are built into browsers, while others come with operating systems and others are standalone. 73% of men use standalone products while 97% of women use products that came with the device. Women said they chose on brand and convenience, while men said security and features were most important. However they mostly only used two or three features beyond the basics of storage and autofill. And did the industry care? Suddenly, it did; the student on this project got a job offer from a vendor that had never thought of this aspect.

    Discussion started off on gender and software development, and moved on to the location and nature of security decision-making in organisations, which is as problematic in Germany as in the USA. Jean Camp noted that tech use is a function of expertise as well as gender; if you normalise for expertise, much of the gender bias disappears; confidence is also gendered. Maryam replied that although men express more confidence, their confidence is often misplaced. On femtech, we may be seeing a reflection of the fact that medical research has cared less about women for generations. Many health apps are unisex, and not that much good for pre-menopausal women who need to track periods; the femtech part may be an add-on or women may need to use a marginal app from a smaller vendor. The performance may be terrible if it depends on crude body temperature tracking. Public health communication is also tied up with power in complex ways.

  6. David Smith opened the second session with a talk on Monstrous Males: Gender and the Dehumanising Imagination. Dehumanisation, where we consider others as subhuman, is a psychological response to political forces, often tied to race and often gendered: it’s generally applied to males rather than females. Of the 4,467 recorded lynchings in the USA, 4,027 were of men and 99 of women; 3,265 of the men were described as black. David surveyed Nazi visual propaganda, and in every case where gender was clear, the Jew was a man. Jewish women were portrayed as ugly but not subhuman. The demonising labels now are terrorist, superpredator, thug, illegal immigrant and, as before, monster.

    Judith Donath works on signalling and deception in technology, and has been thinking about signalling in the context of the pandemic. People want to present themselves as healthy even if they don’t know they are; that’s at the heart of testing. Will such devices become embedded and pervasive? We already see diabetics wearing glucose monitors and insulin pumps. Might we see demands for third-party access to self-monitoring and life-hacking tech, leading to more general ambulatory health monitoring? Or might they be required of company staff, school pupils or others? How might advertising drive this?

    Jonathan Leader-Maynard has just written a book on Ideology and Mass Killing. Mass killing is puzzling as it’s strategically indeterminate and normatively extreme; most conflicts go by without one. Scholars debate the role of ideology in this, and Jonathan tried to resolve the debate with two main arguments. First, we need a better understanding of how ideology shapes behaviour; communism kept shaping Soviet behaviour years after people stopped believing in it, because it was institutionalised in various ways, so people were still guided by it. Second, the key role of ideology isn’t revolution or hatred, but that normal security politics around group defence and order becomes stretched and radicalised in certain ways.

    Sophie van der Zee notes that 92% of UK higher education institutions suffered a data breach or attack last year; and much the same happens elsewhere. A ransomware attack on Maastricht university disrupted operations for months despite payment of €200,000 in ransom. She’s been running an experiment with n=1359 to see what can help nudge staff and students to download and use a VPN. She tried security primes versus a social-norm tale versus a demand for some time commitment. The commitment condition was the one that worked, increasing uptake from 1.6% in the control condition to 3.3%. The other conditions actually decreased uptake. You can send an email to info@deceptionresearchsociety.org to sign up for their seminar on the first Tuesday of each month.

    Lydia Wilson studies when hate becomes mainstream. Joan Smith has collected data on the many mass killers who first committed misogynistic crimes, which might have served as warnings were it not for the fact that such crimes are both widespread and widely ignored. The Cambridge Cybercrime Centre has collected misogynistic material from underground forums, the far right and the manosphere; but the mainstream media publish increasingly misogynistic material, and who’s collecting and analysing that? As well as understanding the psychology of the perpetrators, we need to understand the enablers, and hold them to account.

    In discussion, there are different types of mass violence, ranging from people with a gun who got angry and started shooting, to those who planned in advance with a manifesto and a livestream; but data don’t get collected on impulsive shooters any more than on misogyny. The press may report overtly political violence, but the ideology is often there if you look for it. And what is one to make of the 56 members of the British parliament under investigation for sexual abuse? At least it explains how male MPs laugh off the attempts of female MPs to bring in bills against harassment. The reaction comes when a group that previously had little power starts getting some more, whether women or blacks or Chinese in the USA or Jewish groups in Europe after World War One, and this is seen as a violation of the natural order, leading to an in-group / out-group reaction. Then there’s a distinction between gendercide, where you kill the men, and genocide, where you prevent future out-group children by killing the women too. There are gaps between local and global; just as there are people who oppose global warming but still object to a windmill in their own backyard, there are people who support inclusion but bridle when a woman gets the job they sought or an out-group member tries to force a way into their in-group. One can distinguish between people’s formal ideology and their more enduring political psychology, which may be rooted in tribal reflexes. The practical utility of ideologies can include uniting a group and creating political power; but there’s an indeterminacy that creates a space for political choice. Gorbachev, for example, would not have done what Putin’s been doing to consolidate his power. This diversity of preferences includes the fact that some people have a preference for diversity while others find change threatening, and diverse groups are at a disadvantage to more cohesive ones. Researchers in this field need to be able to understand this high-level stuff, while tracking the mechanisms of ideological production and understanding the use and abuse of technology. We also have to beware of the difficult concepts; for starters, there are maybe twenty different definitions of ideology. Yet we sort-of know what ideology is, and bad ideologies can be remarkably resilient. Climate change will create enormous stress and vulnerability, creating an environment where the frozen conflicts and hatreds of centuries can thaw out along with the permafrost and go on the march.

  7. The penultimate session was opened by Luca Allodi, who’s been building a map of cognitive effects on social engineering research, inspired by the OSI model for systems. At the top there’s the cognitive level, and below that are attention, anomaly processing, heuristics and behaviour, among other things. Attention uses working memory and discriminates between foreground and background tasks, and so on. The second step was to map these to four actual social-engineering attacks, and then map the framework to the research literature. This revealed that there have been dozens of lab studies but only a handful of observational studies and almost nothing on targeted attacks, which are a big deal in the real world.

    Maria Brincker has been studying adversarial lessons from non-human animals. Her starting point is a paper on the biology of privacy by Klopfer and Rubinstein that sees privacy as the selective control of the flow of information to and from the animal or its group. This is in the context of camouflage and group behaviour, which in turn depend on the perceptual abilities and powers of potential predators and potential prey. In hawk-dove models of ethology, it’s natural for selection to attenuate the use of force and increase the use of signals, and this often works out through territoriality.

    Shannon French does not accept that we can algorithmise ethics. We still don’t have anything like the clarity that would be needed to specify software, and indeed the irreducibility is fundamental to the world. Despite attempts by Aristotle, Kant, Mill and others, formal frameworks lead to weird outcomes in too many cases. For example, Kant’s moral harmony theory assumes we can always do the right thing without violating any commitments; but this just isn’t true, as the world is too messy. Tragedies exist. Which of two drowning children do you save? In the real world, where you’re trying to nudge a military to be more ethical, you get problems we can’t even solve as people, let alone with software.

    Tom Holt has built an extremist cyber-crime database (ECCD) of cyber-attacks on US targets since 1998 related to political extremism, whether right-wing, left-wing, jihadist or single-issue. He’s found 18 jihadist, 18 environmental/animal, and 8 racial/ethnic attacks since then, with a large peak of 14 attacks in 2015 thanks to ISIS, and seven of the eight racial/ethnic incidents happening since 2016. The most common attacks are defacements, followed by doxxing and data breaches. The breaches tend to be done by environmental/animal groups who hack companies considered harmful, while the racists target educational institutions and the jihadists have a very wide range of targets. None of these actors have targeted healthcare or transportation.

    Rick Wash has been studying how people identify a phishing email or attempted scam. When they notice abnormalities and get an uncomfortable feeling, they start looking for phishing indicators. But how does this process fail? Rick has found multiple instances of five failure cases: (a) automation failures, where people are doing email so fast that they’re not paying attention; (b) discrepancy failures, where they don’t notice anything weird; (c) accumulation failures, where they come up with an explanation right away and the uncomfortable feeling doesn’t have time to accumulate; (d) alternative failures, where they get suspicious but don’t have an alternative theory of what the email might be trying to do; and (e) investigation failures, where they get uncomfortable and suspicious but turn up the wrong answer.

    Pam Wisniewski works on promoting adolescent online safety and privacy. She has an app, CO-oPS, which acts as a neighbourhood watch for your phone, letting friends alert you if an ad is tracking you; and another, Misu, which enables geofencing for adolescents’ phones, so that some sites might only be accessible when the parent is at home. She’s also been working on best practices for youth protection. Gaps include ecologically valid data and ground truth; so she has been collecting data by getting teens to complete a 30-day diary of their experiences. Other gaps include extracting the right features for risk detection, so she’s been getting volunteers to donate their Instagram data to the project. 200 youth have donated over 5m messages, of which over 2k were ‘unsafe’. The destination is safety by design, for which she’s building an app called ‘Circle of Trust’ which flags bad content and shows it to the parent.

    In discussion, Pam described the ecological validity of her data collection, which explicitly excluded some material such as pornography. One confounder is that teens often don’t flag things discussed with humour, even things like images of self-harm that adults might report; they also tend not to report abuse aimed at others. One way of looking at it is that they want to be private in public. There are also issues around filtering; for example, Tom had to separate the thousands of everyday DDoS and spam campaigns from those considered politically salient. That’s hard, as it can take a year or two to realise that a spam campaign was actually the side-effect of a nation-state attack. When it comes to ethical dilemmas, although humans cannot solve tragedies, at least they can feel the tragedy, and this provides some moral accountability. Where hybrid systems might help is in nudging people to pause and consider alternatives; however one must beware of automation bias, where people just follow the machine. To avoid this the machine might be better as a junior partner that just asks questions.

  8. I talked about what we can learn from studying which bugs get fixed and which don’t, using as an example the Bad Characters and Trojan Source vulnerabilities we disclosed last year. Why is it that most of the latter got patched, but only one of the former? The reasons are likely a mix: technical factors around the cost and speed of patching; cultural differences between coders and data scientists, and differing expectations of dependability; what the press like to report; market power; and the internal politics of large firms. We need to understand these factors better, as they determine what can practically be fixed and what can’t.

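    For context, Trojan Source abuses Unicode bidirectional control characters so that source code renders differently from how it compiles, while Bad Characters uses invisible and homoglyph characters against NLP systems. A minimal sketch of the kind of check that compilers and code-review tools added afterwards, scanning files for such characters, might look like this; it is an illustration, not the actual check any vendor shipped.

```python
# Minimal sketch of the kind of defence that followed the Trojan Source
# disclosure: scan source files for Unicode bidirectional-override controls
# (and a couple of the invisible characters used in the Bad Characters attack).
import sys
import unicodedata

BIDI_CONTROLS = {
    "\u202A", "\u202B", "\u202C", "\u202D", "\u202E",   # LRE, RLE, PDF, LRO, RLO
    "\u2066", "\u2067", "\u2068", "\u2069",             # LRI, RLI, FSI, PDI
}
INVISIBLES = {"\u200B", "\u200C", "\u200D", "\uFEFF"}   # zero-width chars, BOM

def scan(path):
    with open(path, encoding="utf-8") as f:
        for lineno, line in enumerate(f, start=1):
            for ch in line:
                if ch in BIDI_CONTROLS or ch in INVISIBLES:
                    print(f"{path}:{lineno}: suspicious character "
                          f"U+{ord(ch):04X} ({unicodedata.name(ch, 'unnamed')})")

if __name__ == "__main__":
    for source_file in sys.argv[1:]:
        scan(source_file)
```
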
    Richard Clayton studies clusters of potentially bad things online. In 2015 he wrote a paper Concentrating Correctly on Cybercrime Concentration which many people cite but most seem not to have read. In 2019 he wrote a paper on Measuring the cost of cybercrime which showed that not much had changed in a decade, and a paper on Booting the booters which measured the effects of police interventions. In 2020 he wrote Cybercrime is often boring, showing that most of the kids who do the sysadmin work earn little money and eventually burn out. In 2021 he wrote a paper Silicon Den describing the entrepreneurial aspect of cybercrime. This year’s paper debunks the notion that all cybercrime attacks are “sophisticated”.

    Mike Levi has been thinking about precautionary efforts, Covid-19 frauds and moral panics. He’s dug up a number of frauds and marketing drives around the Spanish flu epidemic in 1918-20, from a quack doctor who got ten years in jail to products like Oxo and Bovril that are still with us. This time we’re getting much better data on public health, and there was a big drive against online scams, but there seem to have been huge frauds against government, with alleged corruption involving ministers helping businessmen bypass normal purchasing processes. Fraud now accounts for most crime, and we might perhaps pay more attention to investment scams such as clone firms rather than looking at highly techie exploits. The pseudo-performance metrics for Action Fraud are unconvincing. But while there has been concern about scams against individuals, there seems to be none about the larger scams against taxpayers. Does this say anything about police legitimacy?

    Serge Egelman would like us to talk about developers. He’s surveyed the devs of 130 child-directed apps and done follow-up interviews with 30 to understand their compliance with COPPA in the USA and GDPR-K in Europe. 80% said they were aware of these laws (25% said they were also aware of FLIRPPA, a law that Serge made up); only half said they had some compliance process. Many were still clueless, thinking that consent wasn’t required if use was described in a privacy policy, or if they only collected “general” data like location. Others thought they didn’t need to check as Google surely does, or that they could leave it all to AdMob (which is owned by Google, but uses behavioural advertising, which is illegal without parental consent). Others thought they could just not list the app as child-directed. Half were not sure whether data were sent encrypted; 72% said they didn’t send data to third parties (and a quarter were wrong). Most of these issues were caused by third-party SDKs, which they don’t configure properly or at all. Some SDKs are deceptive; Measurement Systems also does lawful intercept and has a CA with roots everywhere. We need to change the law to put pressure on platforms, but the platforms will assume liability if they start checking.

    Steven Murdoch has been working on productivity nudges for enterprise security policies. Enterprise software is typically much more configurable than the consumer variety so firms can comply with sectoral regulations; this means thousands of parameters that sysadmins can set, and not just for PCs and servers but phones too. Microsoft has started to nudge users not to use features that are known to be harmful, such as password aging, but leaves these features in as some corporate customers insist on them. They also enable you to benchmark yourself against other organisations. Steven suggests stronger nudges such as “enabling password expiry will cause employees to spend 5,376 hours resetting passwords with a compliance cost of $325,028.57”. But how would you quantify the compliance costs of mock phishing, or the move to complex password rules? Does this matter when so many firms make decisions on the basis of really poor quality data?

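    The arithmetic behind such a nudge is simple enough to sketch; every figure below is invented for illustration, and the numbers quoted in Steven’s example are his own rather than the output of this sketch.

```python
# Back-of-the-envelope arithmetic of the kind behind such a nudge.
# Every figure here is invented for illustration; the numbers quoted in
# Steven's example are his, not derived from this sketch.
EMPLOYEES = 8_000
RESETS_PER_YEAR = 12            # monthly password expiry
MINUTES_PER_RESET = 4           # choosing, typing and propagating a new password
HELPDESK_CALL_RATE = 0.05       # fraction of resets that end up at the helpdesk
MINUTES_PER_CALL = 15
HOURLY_COST = 60.0              # fully loaded cost of an employee-hour, in dollars

reset_hours = EMPLOYEES * RESETS_PER_YEAR * MINUTES_PER_RESET / 60
helpdesk_hours = EMPLOYEES * RESETS_PER_YEAR * HELPDESK_CALL_RATE * MINUTES_PER_CALL / 60
total_hours = reset_hours + helpdesk_hours

print(f"Hours lost to password expiry per year: {total_hours:,.0f}")
print(f"Approximate annual cost: ${total_hours * HOURLY_COST:,.2f}")
```
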
    Bruce is due to deliver his hacking book in 68 hours. What happens when AIs start finding new loopholes by solving problems in ways their designers didn’t intend? You can think of this as the Midas problem: Midas asked Dionysus to turn everything he touched to gold, then died hungry and miserable as his food, drink and daughter all turned to gold. Such specification failures are generally OK in human interactions as we fill in the gaps, but AIs don’t do context well. There are many more examples of reward hacking. A real-world example might be the engine-control software cheating uncovered a few years ago; it maximised performance while passing tests, and wasn’t discovered for years because computers are hard to understand.

    Discussion started on AI loopholes; will we replace lawyers? AIs might find loopholes a lot faster than we can fix them, and some things are hard to fix for political and organisational reasons. The mapping between code and law is not exact; and judges will take the initiative to fix obvious flaws, while people often disregard rules to get better outcomes. The interaction between humans and AIs of various kinds is going to be complex; if the AIs start bullying people once embodied as robots, there will surely be a backlash. But could a tax AI find ways through the tax code, just as Google Maps finds us new ways from A to B? The linkage between code and law is also fragile in practice, as the developers don’t talk to the compliance people, as Serge has documented. So how can the authorities practically push back on abuses like investment fraud? Mike Levi has suggested that Google be prosecuted for running investment ads by firms not registered with the FCA, but the prosecutors appear fearful that it would exhaust their budgets. The effect of large budgets on the outcomes of court cases is too often overlooked by people outside the law and policy communities. The costs of compliance on smaller companies include a burden on developers to understand bad defaults in SDKs; this is basically dumped on the small developers by the platforms and toolsmiths. If we can’t go after the platforms, can we go after the data recipients? Or is it right that the liability sits squarely with the dev or the OEM? We’re not that keen on public-domain pharmaceuticals! And how do we constrain future AIs? When it comes to warfare, the rules are very different and they disappear when you’re losing. Unconscionable actions may become more common once it’s no longer humans who are pulling the trigger; if you make it too easy, the worst will happen. Leaders have worked for millennia to overcome our natural human reluctance to kill our fellow humans.
