Monthly Archives: September 2006

Closing in on suspicious transactions

After almost three years of problem-free banking in the UK, I recently received a letter from HSBC’s “Accounts Review Team”. It advised me that the HSBC group no longer wished to have me as a customer and that I had 30 days to move my banking to an establishment in no way connected to HSBC, after which they would close my account. Confident that I had not indulged in any illegal activity recently, and concerned about their reasons for taking such action, I tried several times to phone the number given in the letter, only to reach a “we are busy, please try again” recording each time. Visiting my home branch was not much more helpful, as the staff claimed that the information had not been shared with them. I was advised to make a written complaint, and was told that the branch had already referred the matter, as a number of customers had come in with similar letters.

After two written complaints and a phone call to customer services, a member of the “Team” finally contacted me. She enquired about a single international deposit into my account, which I explained was my study grant for the coming year. Upon hearing this, she told me that the bank would not close my account after all, and gave a vague explanation that the bank did not expect students to receive large deposits. I found this strange, since it had not been a problem in previous years, and stranger still because the grant had only cleared into my account two days after the letter was sent, so it could not have triggered the review. In terms of recent “suspicious” transactions, that left only two international deposits: one from my parents overseas and one from my own savings, neither of which could be classified as large. I am no expert on the behavioural-analysis and fraud-detection systems used in banking, but I would expect study grants and family support to be entirely unsurprising for students. Moreover, rather than being an isolated incident, HSBC’s “account review” seems to have affected a number of people in our student community, some of whom may choose not to question the decision and could be left without bank accounts. This should raise questions about the effectiveness of the bank’s fraud-detection system, or at least suggest a flawed behavioural model for a particular demographic.

My account is now restored, but I have still had no satisfactory explanation of why the decision was made to close it, nor do I know how this sorry affair will affect my future banking and credit rating. Would an attempt to transfer my account have caused HSBC’s negative opinion of me to spread to other institutions? A security mechanism that yields false positives, or that recommends a disproportionate reaction such as closing an account on the strength of a single unexpected transaction, should be regarded as flawed. The result is a system that runs on a guilty-until-proven-innocent premise, with the onus for correcting marginal calls placed on the customer. The bank will no doubt claim that these mechanisms are designed to protect the customer, but randomly threatening to close my account does not make me feel any safer.

Random isn’t always useful

It’s common to think of random numbers as an essential building block in security systems. Cryptographic session keys are chosen at random, then shared with the remote party. Security protocols use “nonces” for “freshness”. Randomness can also slow down information-gathering attacks, although there it is seldom a panacea. However, as George Danezis and I recently explained in “Route Fingerprinting in Anonymous Communications”, randomness can lead to uniqueness, which is exactly the property you don’t want in an anonymity system.
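As a toy illustration of how random choices can become identifying (the parameters below are made up for illustration, not taken from the paper), suppose each client of an anonymity network picks a fixed random set of three entry nodes from a pool of a hundred. The following Python sketch counts how many clients end up with a set that no other client shares:

```python
import random
from collections import Counter

# Toy model: 1000 clients each pick a fixed random set of 3 entry
# nodes out of a pool of 100 (illustrative parameters only).
random.seed(0)
routes = [frozenset(random.sample(range(100), 3)) for _ in range(1000)]

counts = Counter(routes)
unique = sum(1 for r in routes if counts[r] == 1)
print(f"{unique} of {len(routes)} clients have a node set nobody else shares")
```

With 161,700 possible three-node sets and only 1,000 clients, nearly every client’s choice is unique, so the set of nodes a client connects to acts as a fingerprint.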

Which services should remain offline?

Yesterday I gave a talk on confidentiality at the EMIS annual conference, and gained yet more insights into Britain’s disaster-prone health computerisation project. Why, for example, will it cost eleven figures, when EMIS writes the software used by 60% of England’s GPs to manage their practices on an annual development budget of only £25m?

On the consent front, it turns out that patients who exercise even the mildest form of opt-out from the national database (having their addresses stop-noted, which is the equivalent of going ex-directory — designed for celebs and people in witness protection) will not be able to use many of the swish new features we’re promised, such as automatic repeat prescriptions. There are concerns that providing a degraded health service to people who tick the privacy box might undermine the validity of consent to information sharing.

On the confidentiality front, people are starting to wrestle with the implications of allowing patients online access to their records. Vulnerable patients — for example, under-age girls who have had pregnancy terminations without telling their parents — could be at risk if they can access sensitive data online. They may be coerced into accessing it, or their passwords may become known to friends and family. So there’s talk of a two-tier online record — in effect, introducing multilevel security into record access. Patients would be asked whether they wanted some, all, or none of their records to be available to them online. I don’t think the Department of Health understands the difficulties of multilevel security. I can’t help wondering whether online patient access is needed at all. Very few patients ever exercise their right to view and get a copy of their records; making all records available online looks more and more like a political gimmick to get people to accept the agenda of central data collection.
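For concreteness, here is a minimal sketch of what a two-tier online record might look like; the labels and policy are hypothetical, my own invention rather than anything the Department of Health has specified:

```python
from dataclasses import dataclass
from enum import IntEnum

# Hypothetical sensitivity tiers for entries in a patient record.
class Tier(IntEnum):
    NONE = 0       # patient opts out of online access entirely
    ORDINARY = 1
    SENSITIVE = 2

@dataclass
class Entry:
    text: str
    tier: Tier

def online_view(entries: list, chosen: Tier) -> list:
    # The patient chooses to expose some, all, or none of the record
    # online; entries above the chosen tier are withheld.
    return [e for e in entries if e.tier <= chosen]

record = [Entry("flu vaccination", Tier.ORDINARY),
          Entry("termination of pregnancy", Tier.SENSITIVE)]
print(online_view(record, Tier.ORDINARY))  # the sensitive entry is withheld
```

Even this toy version hints at the difficulty: someone must label every entry correctly, and a coerced patient can simply be made to choose the most permissive tier.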

We don’t seem to have good ways of deciding what services should be kept offline. There’s been much debate about elections, and here’s an interesting case from healthcare. What else will come up, and are there any general principles we’re missing?

How many Security Officers? (reloaded)

Some years ago I wrote a subsection in my thesis (sec 8.4.3, p. 154), entitled “How Many Security Officers are Best?”, where I reviewed the various operating procedures I’d seen for Hardware Security Modules, and pondered why some people chose two separate parties to oversee a critical action and some chose three. Occasionally a single person is even deliberately entrusted with great power and responsibility, because there can then be no question of where to lay the blame if something goes wrong. So, “one, two, or three?”, I asked myself.

In the end I plumped for three… with some logic excerpted from my thesis below:

But three security officers do tighten security: a corrupt officer will be outnumbered, and deceiving two people in different locations simultaneously is next to impossible. The politics of negotiating a three-way collusion is also much harder: the two bad officers will have to agree on their perceptions of the third before approaching him. Forging agreement on character judgements when the stakes are high is very difficult. So while it may be unrealistic to have three people sitting in on a long-haul reconfiguration of the system, where the officers’ duties are short and clearly defined, three keyholders provide that extra protection.

Some time later, I mentioned the subject to Ross, and he berated me for my over-complicated logic. His argument ran along these lines: “The real threat for Security Officers is not that they blackmail, bribe or coerce one another, it’s that they help! Here, Bob, you go home early mate; I know you’ve got to pack for your business trip, and I’ll finish off installing the software on the key-loading PC. That sort of thing. Having three key custodians makes ‘helping’ and suchlike friendly tactics much harder – the bent officer must co-ordinate on two fronts.”

But recently my new job has exposed me to a number of real dual-control and split-knowledge systems. I was looking over the source code for a key-loading HSM command, in fact, and spotted code that took a byte array of key material and split it into three components, each with odd parity. It generated two fresh, totally random components with odd parity, and XORed both onto the key to form the third component. Hmmm, I thought, so the third component would carry the parity information of the original key: dangerous, a leak of information preferentially to the third key-component holder! But wrong… because in the case of a DES key the parity of the original key is known anyway: it’s always odd.
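A minimal Python sketch of that splitting scheme, assuming an 8-byte DES key that already carries odd parity in each byte (the function names are mine, not the vendor’s):

```python
import os

def force_odd_parity(data: bytes) -> bytes:
    # DES uses the low bit of each byte as a parity bit: flip it
    # wherever the byte has an even number of set bits.
    out = bytearray(data)
    for i, b in enumerate(out):
        if bin(b).count("1") % 2 == 0:
            out[i] ^= 1
    return bytes(out)

def split_key(key: bytes):
    # Two fresh, totally random components with odd parity...
    c1 = force_odd_parity(os.urandom(len(key)))
    c2 = force_odd_parity(os.urandom(len(key)))
    # ...XORed onto the key give the third component. Odd XOR odd XOR
    # odd is odd, so c3 inherits odd parity with no adjustment needed.
    c3 = bytes(k ^ a ^ b for k, a, b in zip(key, c1, c2))
    return c1, c2, c3
```

XORing the three components back together recovers the original key.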

I chatted to our chief technical bod about this, and he casually dropped a bombshell that shed new light on why three is best: an argument so simple and elegant that it must be true, yet it is faintly depressing now to believe that no-one agonised over the human psychology of the security-officer numbers issue as I did. When keys are exchanged, a Key Check Value (KCV) is calculated for each component by encrypting a string of binary zeroes with the component value. Old-fashioned DES implementations only accepted keys with odd parity, so to calculate KCVs on these components, each component must have odd parity, as must the final key itself. For the final key to retain odd parity when built from odd-parity components, there must be an odd number of components (the parity of the components could be adjusted, but this takes more lines of code, and is less elegant than just tweaking a counter in the ‘for’ loop). The smallest odd integer greater than one is three. This is why the most valuable keys are exchanged in three components, and not in two!
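The parity argument is easy to check for yourself; in this purely illustrative snippet, XORing n odd-parity bytes yields odd parity exactly when n is odd, because the parity of an XOR is the XOR of the parities:

```python
import os
from functools import reduce

def odd(x: int) -> bool:
    return bin(x).count("1") % 2 == 1

def random_odd_parity_byte() -> int:
    b = os.urandom(1)[0]
    return b if odd(b) else b ^ 1  # fix up the low (parity) bit

for n in (2, 3, 4, 5):
    combined = reduce(lambda a, b: a ^ b,
                      (random_odd_parity_byte() for _ in range(n)))
    print(f"{n} components -> {'odd' if odd(combined) else 'even'} parity")
# 2 and 4 components give even parity; 3 and 5 give odd parity.
```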

So the moral of the story for me is to apply Occam’s Razor more thoroughly when I try to deduce the logic behind the status quo. But I still think this raises interesting questions about how we share responsibility for critical actions. There still seems to me to be a marked, qualitative difference in the dynamics of how three people interact versus two, whatever the situation: be it security officers entering keys, pilots flying an aircraft, or even a ménage à trois! Just like the magnitude of the difference between 2D and 3D space.

If one, two and three are all magical numbers, qualitatively different, are there other qualitative boundaries higher up the cardinal numbers, and if so, where are they? In a security-critical process such as an election, can ten people adjudicate effectively in a way that thirty could not? Is there underlying logic, or just mysticism, behind the jury of twelve? Or, to take the jury example and my own tendency to over-complicate, was it simply that in the first proper courtroom, built a few hundred years ago, there happened to be space for only twelve men on the benches on the right-hand side!

A Study on The Value of Location Privacy

There is a Workshop on Privacy in the Electronic Society taking place at the beginning of November. We (George Danezis, Marek Kumpost, Vashek Matyas, and I) will present there the results of “A Study on the Value of Location Privacy”, which we conducted half a year ago.

We questioned a sample of over 1200 people from five EU countries, and used tools from experimental psychology and economics to elicit the value they attach to their location data. We compare this value across national groups, gender and technical awareness, and also examine the perceived difference between academic use and commercial exploitation. We provide some analysis of the self-selection bias of such a study, and look further at the valuation of location data over time, using data from another experiment.

The countries we gathered the data from were Germany, Belgium, Greece, the Czech Republic, and the Slovak Republic. As some of these countries retain their own currencies, we re-calculated the bids using a “value of money” coefficient, computed as a ratio of average salaries and price levels in the particular countries; this data was taken from Eurostat statistics.
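As a rough sketch of the kind of normalisation described (the figures below are placeholders, not the Eurostat data used in the study, and the precise coefficient is defined in the paper):

```python
def value_of_money(avg_salary: float, price_level: float) -> float:
    # The "value of money" coefficient: a ratio of average salary to
    # price level for a given country.
    return avg_salary / price_level

def normalised_bid(bid: float, coeff: float, ref_coeff: float) -> float:
    # Re-express a local bid relative to a reference country.
    return bid * ref_coeff / coeff

# Placeholder figures for illustration only.
czech = value_of_money(avg_salary=10000, price_level=0.60)
german = value_of_money(avg_salary=30000, price_level=1.05)
print(normalised_bid(50.0, coeff=czech, ref_coeff=german))
```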

We gathered bids for three auctions, or scenarios. The first and second bids were for one month of tracking: the former data were to be used for academic purposes only, the latter for commercial purposes. The third bids were for a scenario in which participants agreed to a year of tracking, with the data free for commercial exploitation. Let us start with the first bids.

Differences among Countries

The distributions of the first bids are shown in the following plot. Although there are differences between all the nations, the Greek bids were far beyond our expectations.

Distributions of bids in the first auction round.


The real hustle on BBC3: watch it!

For UK residents: BBC Three is re-running their wonderful 10-episode series “The Real Hustle”, in which three skilled con artists give, with hidden cameras, a revealing and entertaining guided tour of the most popular scams used to rip people off today. Some are computer-based (including keyloggers, bluejacking and bank card cloning); most are not.

This series should be required viewing for all security professionals and most definitely for all security students. The only way to understand security is by understanding what crooks actually do. It’s also great fun.

Each episode is re-broadcast several times, from prime time to middle-of-the-night, so you usually get several chances to set your digital video recorder if the programme overlaps with something else you want to watch or record. Check the EPG.

Hot or Not: Revealing Hidden Services by their Clock Skew

Next month I will be presenting my paper “Hot or Not: Revealing Hidden Services by their Clock Skew” at the 13th ACM Conference on Computer and Communications Security (CCS) held in Alexandria, Virginia.

It is well known that quartz crystals, as used for controlling system clocks of computers, change speed when their temperature is altered. The paper shows how to use this effect to attack anonymity systems. One such attack is to observe timestamps from a PC connected to the Internet and watch how the frequency of the system clock changes.

Absolute clock skew has previously been used to tell whether two apparently different machines are in fact running on the same hardware. My paper adds that, because the skew depends on temperature, a PC can in principle be located by finding out when its day starts and how long it is, or simply by observing that its pattern matches that of a computer in a known location.
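To give a flavour of how skew can be estimated (a simplified sketch with synthetic data, not the code used for the paper): collect pairs of local and remote time, and fit a line to the remote-minus-local offset; the slope is the skew.

```python
import numpy as np

# Synthetic example: a remote clock running 50 ppm fast, sampled once
# a second for an hour. Real samples would come from, e.g., TCP or
# ICMP timestamps.
rng = np.random.default_rng(0)
t_local = np.arange(0.0, 3600.0, 1.0)
t_remote = t_local * (1 + 50e-6) + rng.normal(0.0, 5e-4, t_local.size)

# The skew is the slope of the offset as a function of local time.
slope, _ = np.polyfit(t_local, t_remote - t_local, 1)
print(f"estimated skew: {slope * 1e6:.1f} ppm")
```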

However, the paper centres on hidden services, a feature of Tor which allows servers to be run without giving away the identity of the operator. These can be attacked by repeatedly connecting to the hidden service, causing its CPU load, and hence its temperature, to increase and so change the clock skew. The attacker then requests timestamps from all candidate servers and finds the one exhibiting the expected clock-skew pattern. I tested this on a private Tor network and it works surprisingly well.
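In outline, the attacker alternates busy and idle periods against the hidden service while sampling timestamps from every candidate host. A heavily simplified sketch of the load-modulation step follows; the URL and timings are hypothetical, and in practice the requests would be routed through a Tor SOCKS proxy:

```python
import time
import urllib.request

HIDDEN_SERVICE = "http://example.onion/"  # hypothetical address

def induce_load(seconds: float) -> None:
    # Hammer the hidden service to raise its CPU load, and hence its
    # temperature and clock skew.
    deadline = time.time() + seconds
    while time.time() < deadline:
        try:
            urllib.request.urlopen(HIDDEN_SERVICE, timeout=10).read()
        except OSError:
            pass

# Alternate hour-long busy and idle periods to imprint a pattern that
# should later show up in the clock skew of the real host.
for busy in (True, False, True, False):
    if busy:
        induce_load(3600)
    else:
        time.sleep(3600)
```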

In the graph below, the temperature (orange circles) is modulated by alternately exercising the hidden service and leaving it idle. This in turn alters the measured clock skew (blue triangles). The induced load pattern is clearly visible in the clock skew, and an attacker could use it to de-anonymise a hidden service. More details can be found in the paper (PDF, 1.5MB).

Clock skew graph

I happened upon this effect by lucky accident, while trying to improve upon the results of the paper “Remote physical device fingerprinting”. A previous paper of mine, “Embedding Covert Channels into TCP/IP”, showed how to extract high-precision timestamps from the Linux TCP initial sequence number generator. When I tested these timestamps, they did indeed improve the accuracy of clock-skew measurement, to the extent that I noticed an unusual peak at about the time cron caused the hard disk on my test machine to spin up. Eventually I realised the potential of this effect and ran the further experiments needed to write the paper.

After ID Cards…

The next government initiative to undermine privacy is the Children’s Database project. I am one of the authors of a report on this, which was written for the Information Commissioner and will be published later in September. Press interest is starting to mount (see the Telegraph, the Times, the Evening Standard and the Daily Mail), and there will be a TV programme on the subject today (More 4 at half past seven). If you’re in the UK and are interested in privacy or computer security, it may be worth watching.

The project aims to link up all the government systems that keep information on kids. Your kids’ schoolteachers will be able to see not just their school records but also their medical records, social work records, police records and probation records; see here for the background. I can’t reveal the contents of the report prior to publication, but I am reminded of Brian Gladman’s punchline in his talk at SFS8: ‘You can have scale, or functionality, or security. If you’re smart you can build a system with any two of these. But you can’t have all three.’

As well as the technical objections there are legal objections – and strong practical objections from social workers who believe that the project is putting children in harm’s way.