Category Archives: Cybercrime

Visualizing Diffusion of Stolen Bitcoins

In previous work we have shown how stolen bitcoins can be traced if we simply apply existing law. If bitcoins are “mixed”, that is to say if multiple actors pool together their coins in one transaction to obfuscate which coins belong to whom, then the precedent in Clayton’s Case says that FIFO ordering must be used to track which fragments of coin are tainted. If the first input satoshi (atomic unit of Bitcoin) was stolen then the first output satoshi should be marked stolen, and so on.
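To make the rule concrete, here is a minimal sketch in Python of FIFO taint propagation across a single mixing transaction (the data representation is illustrative only, not Taintchain's actual format):

def fifo_taint(inputs, output_values):
    """inputs: list of (value_in_satoshis, label) pairs in transaction order.
    output_values: list of output values in transaction order.
    Returns, for each output, its list of (value, label) fragments."""
    queue = list(inputs)   # FIFO queue of (remaining_value, label)
    i = 0
    result = []
    for out_value in output_values:
        fragments = []
        needed = out_value
        while needed > 0 and i < len(queue):
            avail, label = queue[i]
            take = min(avail, needed)
            fragments.append((take, label))
            needed -= take
            if take == avail:
                i += 1                            # input fully consumed
            else:
                queue[i] = (avail - take, label)  # input partially consumed
        result.append(fragments)
    return result

# Three stolen satoshis mixed with five clean ones, paid out as 4 + 4:
# the first output carries all three stolen satoshis plus one clean satoshi,
# the second output is entirely clean.
print(fifo_taint([(3, "stolen"), (5, "clean")], [4, 4]))
# -> [[(3, 'stolen'), (1, 'clean')], [(4, 'clean')]]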

This led us to design Taintchain, a system for tracing stolen coins through the Bitcoin network. However, we quickly discovered a problem: while it was now possible to trace coins, it was harder to spot patterns. A good way of visualizing the data is important for making sense of the patterns of splits and joins that are used to obfuscate bitcoin transactions. We therefore designed a visualization tool that interactively expands the taint graph in response to user input. We first devised a way to represent transactions and their associated taint in a temporal graph. After realizing the sheer number of hops that some satoshis go through, and the high outdegree of some transactions, we switched to generating the graph on the fly, subject to restrictions on maximum hop length and outdegree.
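The on-the-fly generation can be thought of as a bounded breadth-first walk over the transaction graph, stopping at a maximum hop count and truncating nodes with very high outdegree. The sketch below is an illustration under those assumptions only; spending_transactions stands in for a blockchain index and is not part of our tool:

from collections import deque

def expand_taint_graph(seed_tx, spending_transactions,
                       max_hops=8, max_outdegree=20):
    """Bounded breadth-first expansion of the taint graph.
    seed_tx: transaction id to start from.
    spending_transactions(tx): hypothetical lookup returning the ids of
        transactions that spend tx's outputs.
    Returns a set of (parent_tx, child_tx) edges suitable for rendering."""
    edges = set()
    seen = {seed_tx}
    frontier = deque([(seed_tx, 0)])
    while frontier:
        tx, depth = frontier.popleft()
        if depth >= max_hops:
            continue                                  # hop-length cut-off
        children = spending_transactions(tx)
        for child in children[:max_outdegree]:        # outdegree cut-off
            edges.add((tx, child))
            if child not in seen:
                seen.add(child)
                frontier.append((child, depth + 1))
    return edges

In the interactive tool the expansion is driven by the user, so a walk like this would be re-run from whichever node they choose to expand rather than once over the whole graph.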

Using this tool, we were able to spot many of the common tricks used by bitcoin launderers. A summary of our findings can be found in the short paper here.

Hiring for the Cambridge Cybercrime Centre (again!)

As recently posted, we are currently advertising a post (details here) where “we expect that the best candidate will be someone from a sociology or criminology background who already has some experience analysing large datasets relating to cybercrime” — and now we have a second post for someone with a more technical background.

We seek an enthusiastic researcher to join us in collecting new types of cybercrime data, maintaining existing datasets and doing innovative research using our data. The person we appoint will define their own goals and objectives and pursue them independently, or as part of a team.

An ideal candidate would identify cybercrime datasets that can be collected, build the collection systems and then do cutting-edge research on this data – whilst encouraging other academics to take our data and make their own contributions to the field.

We are not necessarily looking for existing experience in researching cybercrime, although this would be a bonus, as would a solid technical background in networking and/or malware analysis. We do seek a candidate with strong programming skills — and experience with scripting languages and databases would be much preferred. Good knowledge of English and communication skills are important.

Details of this second post, and what we’re looking for, are in the job advert here: http://www.jobs.cam.ac.uk/job/19543/.

Hiring for the Cambridge Cybercrime Centre

We have a further “post-doc” position in the Cambridge Cybercrime Centre: https://www.cambridgecybercrime.uk.

We are looking for an enthusiastic researcher to join us to work on our datasets of posts made in “underground forums”. In addition to pursuing their own research interests regarding cybercrime, they will help us achieve a better understanding of the research opportunities that these datasets open up. In particular, we want to focus on establishing what types of tools and techniques will assist researchers (particularly those without a computer science background) to extract value from these enormous datasets (tens of millions of posts). We will also be looking to extend our collection, and need help to understand the most useful way to proceed.

We have an open mind as to who we might appoint, but expect that the best candidate will be someone from a sociology or criminology background who already has some experience analysing large datasets relating to cybercrime. The appointee should be looking to develop their own research, but also be prepared to influence how cybercrime research by non-technical researchers can be enabled by effective use of the extremely large datasets that we are making available.

Details of the post, and what we’re looking for, are in the job advert here: http://www.jobs.cam.ac.uk/job/19318/.

Symposium on Post-Bitcoin Cryptocurrencies

I am at the Symposium on Post-Bitcoin Cryptocurrencies in Vienna and will try to liveblog the talks in follow-ups to this post.

The introduction was by Bernhard Haslhofer of AIT, who maintains the graphsense.info toolkit and runs the Titanium project on bitcoin forensics jointly with Rainer Boehme of Innsbruck. Rainer then presented an economic analysis arguing that criminal transactions were pretty well the only logical app for bitcoin, as it’s permissionless and trustless; if you have access to the courts then there are better ways of doing things. However, in the post-bitcoin world of ICOs and smart contracts, it’s not just the anti-money-laundering agencies who need to understand cryptocurrency but also the securities regulators and the tax collectors. Yet there is a real policy tension. Governments hype blockchains; Austria uses them to auction sovereign bonds. Yet the only way in for the citizen is through the swamp. How can the swamp be drained?

How Protocols Evolve

Over the last thirty years or so, we’ve seen security protocols evolving in different ways, at different speeds, and at different levels in the stack. Today’s TLS is much more complex than the early SSL of the mid-1990s; the EMV card-payment protocols we now use at ATMs are much more complex than the ISO 8583 protocols used in the eighties when ATM networking was being developed; and there are similar stories for GSM/3G/4G, SSH and much else.

How do we make sense of all this?

Our paper Reconciling Multiple Objectives – Politics or Markets? was particularly inspired by Jan Groenewegen’s model of innovation, according to which the rate of change depends on the granularity of change. Can a new protocol be adopted by individuals, or does it need companies to adopt it en masse for internal use, or does it need to spread through a whole ecosystem, or – the hardest case of all – does it require a change in culture, norms or values?

Security engineers tend to neglect such “soft” aspects of engineering, and we probably shouldn’t. So we sketch a model of the innovation stack for security and draw a few lessons.

Perhaps the most overlooked need in security engineering, particularly in the early stages of a system’s evolution, is recourse. Just as early ATM and point-of-sale system operators often turned away fraud victims claiming “Our systems are secure so it must have been your fault”, so nowadays people who suffer abuse on social media can find that there’s nowhere to turn. A prudent engineer should anticipate disputes, and give some thought in advance to how they should be resolved.

Reconciling Multiple Objectives appeared at Security Protocols 2017. I forgot to put the accepted version online and in the repository after the proceedings were published in late 2017. Sorry about that. Fortunately the REF rule that papers must be made open access within three months doesn’t apply to conference proceedings that are a book series; it may be of value to others to know this!

Google doesn’t seem to believe booters are illegal

Google has a number of restrictions on what can be advertised on their advert serving platforms. They don’t allow adverts for services that “cause damage, harm, or injury” and they don’t allow adverts for services that “are designed to enable dishonest behavior”.

Google don’t seem to have an explicit policy that says you cannot advertise a criminal enterprise : perhaps they think that is too obvious to state.

Nevertheless, the policies they have written down might lead you to believe that advertising “booter” (or, as they sometimes style themselves to appear more legitimate, “stresser”) services would not be allowed. These are websites that allow anyone with a spare $5.00 or so to purchase distributed denial of service (DDoS) attacks.

Booters are mainly used by online game players to cheat — by knocking some of their opponents offline — or by pupils who take down the school website to postpone an online test, or just because they feel like it. But you can purchase an attack on any Internet system, for any reason you want.

These booter sites are quite clearly illegal — there have been recent arrests in Israel and the Netherlands, and in the UK Adam Mudd got two years (reduced to 21 months on appeal) for running a booter service. In the USA, a New Mexico man recently got a fifteen-year sentence for merely purchasing attacks from these sites (and for firearms charges as well).

However, Google doesn’t seem to mind booter websites advertising their wares on their platform. This advert was served up a couple of weeks back:

[Image: advert for a booter service]

I complained using Google’s web form — after all, they serve up lots of adverts and their robots may not spot all the wickedness. That’s why they have reporting channels to allow them to correct mistakes. Nothing happened until I reached out to a Google employee (who spends a chunk of his time defending Google from DDoS attacks) and then finally the advert disappeared.

Last week another booter advert appeared, but another complaint also made no difference, and this time my contact failed to have any impact either; at the time of writing the advert is still there.

It seems to me that, for Google, income is currently more important than enforcing policies.

Bitter Harvest: Systematically Fingerprinting Low- and Medium-interaction Honeypots at Internet Scale

Next week we will present a new paper at USENIX WOOT 2018, in which we show that we can find low- and medium-interaction honeypots on the Internet with a few packets. So if you are running such a honeypot (Cowrie, Glastopf, Conpot etc.), then “we know where you live” — and the bad guys might soon know as well.

In total, we identify 7,605 honeypot instances across nine different honeypot implementations of the most important network protocols: SSH, Telnet, and HTTP.

These honeypots rely on standard libraries to implement large parts of the transport layer, but those libraries were never intended to provide behaviour identical to that of the systems being impersonated. We show that fixing the identity string (so that the honeypot claims to be OpenSSH or Apache rather than the underlying library), or fixing other common identifiers such as error messages, is not enough: there are literally thousands of distinguishing protocol interactions, and part of the contribution of the paper is to show how to pick the “best” ones. Even worse, fingerprinting these honeypots does not require sending any credentials, so it will be hard to tell from the logs that you have been detected.
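To give a flavour of how cheap such a probe can be (this is a hand-rolled illustration, not the distinguisher-selection method from the paper), a single TCP connection is enough to record a server’s SSH identification string and its reaction to an unexpected client banner; comparing those raw bytes with the behaviour of a reference OpenSSH installation already separates many implementations:

import socket

def probe_ssh(host, port=22, timeout=5):
    """Record two cheap observables from an SSH server: its identification
    string, and its reaction to a deliberately odd client identification
    line. No credentials are sent, so little of interest appears in the
    honeypot's logs."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.settimeout(timeout)
        banner = s.recv(256)                      # the server speaks first
        s.sendall(b"SSH-9.9-NotARealClient\r\n")  # unusual client banner
        try:
            reaction = s.recv(256)  # error message, disconnect or key exchange?
        except socket.timeout:
            reaction = b"<timeout>"
    return banner, reaction

Run this against a suspected honeypot and against a stock OpenSSH server: differences in the reaction, multiplied across thousands of such protocol interactions, are what make low- and medium-interaction honeypots fingerprintable.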

We also find that many honeypots are deployed and then forgotten about: part of our fingerprinting has been to determine how many operators are not actively patching their systems! We find that 27% of the SSH honeypots have not been updated within the last 31 months, and only 39% incorporate improvements from 7 months ago. It turns out that security professionals are as bad as anyone.

We argue that our method is a ‘class break’, in that trivial patches cannot address the issue. We therefore need to move on from the current dominant architecture for low- and medium-interaction honeypots: Python libraries and Python programs. We have also developed a modified version of the OpenSSH daemon (sshd) which can front-end a Cowrie instance, so that the protocol-layer distinguishers no longer work.

The paper is available here.