Monthly Archives: June 2023

The Pre-play Attack in Real Life

Recently I was contacted by a Falklands veteran who was a victim of what appears to have been a classic pre-play attack; his story is told here.

Almost ten years ago, after we wrote a paper on the pre-play attack, we were contacted by a Scottish sailor who’d bought a drink in a bar in Las Ramblas in Barcelona for €33, and found the following morning that he’d been charged €33,000 instead. The bar had submitted ten transactions an hour apart for €3,300 each, and when we got the transaction logs it turned out that these transactions had been submitted through three different banks. What’s more, although the transactions came from the same terminal ID, they had different terminal characteristics. When the sailor’s lawyer pointed this out to Lloyds Bank, they grudgingly accepted that it had been technical fraud and refunded the money.

In the years since then, I’ve used this as a teaching example both in tutorial talks and in university lectures. A payment card user has no trustworthy user interface, so the PIN entry device can present any transaction, or series of transactions, for authentication, and the customer is none the wiser. The mere fact that a customer’s card authenticated a transaction does not imply that the customer mandated that payment.
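To make the point concrete, it helps to see that the card's cryptogram is computed only over fields the terminal supplies. The toy Python sketch below is not real EMV; the key handling, field layout and function names are invented for illustration. It simply shows that a MAC over terminal-chosen data proves the card was present, not that the customer approved what the display showed.

```python
# Toy illustration (not real EMV) of why a card cryptogram proves only that
# the card was present, not that the customer approved the amount they saw.
# The card MACs whatever transaction data the terminal supplies; the amount
# shown on the terminal-controlled display never enters the computation.
import hmac, hashlib

CARD_KEY = b"secret-key-shared-with-issuer"   # hypothetical card/issuer key

def card_authorise(amount_cents: int, currency: str, nonce: bytes) -> bytes:
    """Roughly what the card does: MAC the terminal-supplied fields."""
    data = f"{amount_cents}:{currency}:".encode() + nonce
    return hmac.new(CARD_KEY, data, hashlib.sha256).digest()

# Honest terminal: displays EUR 33.00 and asks the card to authorise EUR 33.00.
honest = card_authorise(3300, "EUR", b"nonce-0")

# Rogue terminal: displays EUR 33.00 but asks the card to authorise EUR 3,300.00,
# and does so ten times, each with a fresh nonce, for later submission.
rogue = [card_authorise(330000, "EUR", f"nonce-{i}".encode()) for i in range(10)]

# Every cryptogram verifies correctly at the issuer; nothing in any of them is
# bound to what the cardholder actually saw or intended.
print(honest.hex()[:16], [c.hex()[:8] for c in rogue])
```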

Payment by phone should eventually fix this, but meantime the frauds continue. They’re particularly common in nightlife establishments, both here and overseas. In the first big British case, the Spearmint Rhino in Bournemouth had special conditions attached to its licence for some time after a series of frauds; a second case affected a similar establishment in Soho; there have been others. Overseas, we’ve seen cases affecting UK cardholders in Poland and the Baltic states. The technical modus operandi can involve a tampered terminal, a man-in-the-middle device or an overlay SIM card.

By now, such attacks are very well known and there really isn’t any excuse for banks pretending that they don’t exist. Yet, in this case, neither the first responder at Barclays nor the case handler at the Financial Ombudsman Service seemed to understand such frauds at all. Multiple transactions against one cardholder, coming via different merchant accounts, and with delay, should have raised multiple red flags. But the banks have gone back to sleep, repeating the old line that the card was used and the customer’s PIN was entered, so it must all be the customer’s fault. This is the line they took twenty years ago when chip and PIN was first introduced, and indeed thirty years ago when we were suffering ATM fraud at scale from mag-stripe copying. The banks have learned nothing, except perhaps that they can often get away with lying about the security of their systems. And the ombudsman continues to claim that it’s independent.

Will GPT models choke on their own exhaust?

Until about now, most of the text online has been written by humans. But this text has been used to train GPT-3(.5) and GPT-4, and these have popped up as writing assistants in our editing tools. So more and more of the text will be written by large language models (LLMs). Where does it all lead? What will happen to GPT-{n} once LLMs contribute most of the language found online?

And it’s not just text. If you train a music model on Mozart, you can expect output that’s a bit like Mozart but without the sparkle – let’s call it ‘Salieri’. And if Salieri now trains the next generation, and so on, what will the fifth or sixth generation sound like?

In our latest paper, we show that using model-generated content in training causes irreversible defects in the resulting models. The tails of the original content distribution disappear. Within a few generations, text becomes garbage, as Gaussian distributions converge and may even become delta functions. We call this effect model collapse.
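A back-of-the-envelope way to see the effect is to fit a simple model to its own output repeatedly. The Python sketch below is a toy illustration rather than the experiments in the paper: each generation fits a Gaussian to a finite sample drawn from the previous generation's fit, and the estimated standard deviation drifts towards zero, so the tails vanish and the distribution heads towards a delta function.

```python
# Toy illustration of model collapse: recursively fit a Gaussian to samples
# drawn from the previous generation's fitted Gaussian. Finite-sample
# estimation error compounds and the fitted standard deviation drifts
# towards zero, i.e. the distribution collapses towards a delta function.
import numpy as np

rng = np.random.default_rng(0)

mu, sigma = 0.0, 1.0      # the "real" data distribution, N(0, 1)
n_samples = 100           # finite training set per generation (assumed)
n_generations = 3000

for gen in range(1, n_generations + 1):
    data = rng.normal(mu, sigma, n_samples)   # "train" on the previous model's output
    mu, sigma = data.mean(), data.std()       # fit the next generation's model
    if gen % 500 == 0:
        print(f"generation {gen:4d}: fitted sigma = {sigma:.3g}")
```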

Just as we’ve strewn the oceans with plastic trash and filled the atmosphere with carbon dioxide, so we’re about to fill the Internet with blah. This will make it harder to train newer models by scraping the web, giving an advantage to firms which already did that, or which control access to human interfaces at scale. Indeed, we already see AI startups hammering the Internet Archive for training data.

After we published this paper, we noticed that Ted Chiang had already commented on the effect in February, noting that ChatGPT is like a blurry jpeg of all the text on the Internet, and that copies of copies get worse. In our paper we work through the math, explain the effect in detail, and show that it is universal.

This does not mean that LLMs have no uses. As one example, we originally called the effect model dementia, but decided to rename it after objections from a colleague whose father had suffered dementia. We couldn’t think of a replacement until we asked Bard, which suggested five titles, of which we went for The Curse of Recursion.

So there we have it. LLMs are like fire – a useful tool, but one that pollutes the environment. How will we cope with it?

2023 Workshop on the Economics of Information Security

WEIS 2023, the 22nd Workshop on the Economics of Information Security, will be held in Geneva on July 5-7, with a theme of Digital Sovereignty. We now have a list of sixteen accepted papers; there will also be three invited speakers, ten posters, and ten challenges for a Digital Sovereignty Hack on July 7-8.

The deadline for early registration is June 10th, and we have discount hotel bookings reserved until then. As Geneva gets busy in summer, we suggest you reserve your room now!