Passports and biometric certificates

A recurring media story over the past half year has been that “a person’s identity can be stolen from new biometric passports”, which are “easy to clone” and therefore “not fit for purpose”. Most of these reports began with a widely quoted presentation by Lukas Grunwald in Las Vegas in August 2006, and continued with a report in the Guardian last November and one in this week’s Daily Mail on experiments by Adam Laurie.

I closely followed the development of the ISO/ICAO standards for the biometric passport back in 2002/2003. In my view, the worries behind this media coverage are mainly based on a deep misunderstanding of what a “biometric passport” really is. The recent reports bring nothing to light that was not already well understood, anticipated and discussed during the development of the system more than four years ago.

It is important to understand that a “biometric passport” is something very different from a traditional passport. Calling both creatures “passport”, and tying them together physically into the same booklet, unfortunately misleads many people into thinking that they somehow share similar security requirements. “Biometric certificate” might have been a better name, which would have carried no connotation that having its content become public knowledge would represent a problem. So what is the difference?

A traditional passport is a security token, an object that grants you access by the mere fact of being in your possession. Therefore, like banknotes, passports are produced using fancy printing technology (laminates, holograms/kinegrams, Guilloché patterns, UV inks and threads, etc.), all aimed at making it as expensive as possible for a fraudster to modify the data on an existing passport or to produce something that looks very similar to a genuine passport from scratch. Traditional passports can be stolen, therefore they also carry some biometric data (photo, hand signature, in many countries also the height and eye colour of the person). But because humans are not very good at comparing the faces of strangers with old photos, it is not too difficult to find in a small group (say 30 people of similar sex, age and ethnicity) multiple pairs of similar-looking persons who would have little problem using each other’s old passports at border controls (especially if hair styles are adjusted to match the photo). Because the traditional form of photo ID is not very reliable to verify, an entire revocation infrastructure exists for reporting lost or stolen passports. Passports must be guarded against theft.

A “biometric passport” is just a computer file that lists a small number of commonly known attributes of a person (name, date and place of birth, nationality) together with a few administrative details (document serial number and expiry date) and some biometric data that can be used to verify the identity of a person (photo of face, fingerprints or iris). This entire file is digitally signed by the passport office, such that anyone can easily verify its authenticity.
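The signed-file model can be sketched in a few lines of code. Note that this is a toy illustration only: real ICAO passports use public-key signatures from the issuing state’s Document Signer key, whereas here an HMAC with a hypothetical shared key stands in for the signature, purely to show the idea of verifying the data file rather than the booklet.

```python
import hashlib
import hmac
import json

# Hypothetical signing key; a real passport office would use an
# asymmetric Document Signer key, not a shared secret.
ISSUER_KEY = b"passport-office-signing-key"

def issue(data: dict) -> dict:
    """Sign a canonical serialization of the holder's attributes."""
    blob = json.dumps(data, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, blob, hashlib.sha256).hexdigest()
    return {"data": data, "sig": sig}

def verify(cert: dict) -> bool:
    """Recompute the signature over the data and compare."""
    blob = json.dumps(cert["data"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, blob, hashlib.sha256).hexdigest()
    return hmac.compare_digest(cert["sig"], expected)

passport = issue({"name": "John Doe", "dob": "1970-01-01",
                  "nationality": "GBR"})
assert verify(passport)                # the genuine file verifies
passport["data"]["name"] = "Jane Doe"  # any tampering with the data...
assert not verify(passport)            # ...invalidates the signature
```

The point of the sketch is that the security lives entirely in the signature: copying the file wholesale produces another perfectly valid certificate for the same person, which is harmless, while altering any attribute breaks verification.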

Modern biometric identification algorithms have a much lower false-positive rate than humans. While you need to search in a group of about 1000 people to find someone whose face would pass a good comparison algorithm with a given passport photo, with iris photos or prints of all fingers, that group would exceed Earth’s population many times. Therefore, biometric passports (especially the second generation with iris or multiple fingerprint images) can quite securely be verified on their own. There is no need for fancy packaging and security printing.
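The group sizes above follow directly from the per-comparison false-accept rate: with a rate f, you expect to search roughly 1/f people before one falsely matches. A quick sketch, where the 1-in-1,000 face figure is the one from this post and the iris/fingers figure is an illustrative stand-in, not a number from the standards:

```python
# Expected number of strangers to search before one falsely matches a
# given template is roughly 1/FAR.
far = {
    "human face comparison": 1e-3,   # ~1 in 1,000 (figure from the post)
    "iris or ten fingers":   1e-12,  # illustrative: far beyond 1/population
}
for name, f in far.items():
    print(f"{name}: search ~{1 / f:,.0f} people for a false match")
```

With a per-comparison rate around one in a trillion, the expected search group is more than a hundred times Earth’s population, which is the sense in which such a passport can be verified on its own.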

In fact, some of the early proposals for the introduction of biometric passports suggested that the biometric passport should be handed out as a simple memory chipcard or even as a simple 2D barcode on a piece of paper that can be faxed. We could even carry them with us on USB sticks, mobile phones, or simply email them to embassies to apply for visas.

There is nothing wrong with copying or publishing all the data in a “biometric passport”, because it isn’t a secret. When I get mine, I’ll be quite happy to put all its data onto my web page. It is just another certificate, like my old PGP public key. A copy of my biometric passport should not allow you to impersonate me, because for that you would need to find someone whose fingerprints pass a comparison with those in my passport.

So much for the basic idea. There is of course much more to be said about the particular choice of contactless interface and optional access-control mechanisms that passport offices currently use to package the otherwise pretty harmless biometric certificate, but I’ll save that for other posts here.

About Markus Kuhn

I'm a Senior Lecturer at the Department of Computer Science and Technology, working on hardware and signal-processing aspects of computer security.

14 thoughts on “Passports and biometric certificates”

  1. While you are entirely correct that the biometric certificate carried in a passport is not intended to act as a security token, that does not mean that it is either desirable or advisable to publish or broadcast it.

    In the real world as it is today, the manner in which many business transactions are authenticated is by demonstrating that one has to hand various personal information. For those who commit fraud through impersonation their task is easier when they have more information. Knowledge of your home address plus the contents of the biometric certificate in your passport would probably suffice for the vast majority of business authentication challenges I come across. As such, I understand the concern people have over making this information more widely available.

    It is an unfortunate fact that at the moment there are many people who use possession of information as evidence in authentication. So, while the Passport Service may not have intended the biometric certificate to be any sort of authentication token the sad fact is that other people may use the information contained therein for exactly that task. While that state persists it’s probably worth not shouting your passwords from the rooftops.

  2. Markus,

    I agree with what you’re saying here, but it shows that the security properties of a passport have been radically changed, without the users of passports (citizens, police officers, immigration officials, hotel receptionists…) being made aware of the change.

    This kind of unadvertised change of security properties is often dangerous, because users continue employing “unofficial” protocols that previously worked, but are now broken.

    This might be termed the “microwaving your cat” problem. As the Urban Legend has it, some people were in the habit of drying out their cat in an electric oven when it got its fur wet in the rain; it is a bad idea to continue this practice when you have switched to a microwave.

    Possible example of this problem with passports:

    Old protocol: When a person has been charged with a crime but let out of jail pending trial, the police officers keep hold of their passport to make sure they don’t flee the country.

    New protocol: Now broken, because they might have made a copy of their biometric passport.

  3. The issue with biometrics lies in a false sense of security and the unofficial protocols individuals may employ. I believe Nicko is correct in that it probably isn’t wise to broadcast your biometric data. This information alone shouldn’t be used as any form of token or identification mark without being verified (matched against the live human), but that doesn’t mean it won’t be used that way. The same can be said about social security numbers in the United States; they were never designed to be unique identifiers or authorization and authentication means, but many now tend to view them that way. It’s treated like a shared secret: if you know it, you must be authentic.

    But I don’t see the large security flaw that Michael has. Your biometric information won’t solely be used to gain entry into a foreign country, but is used as a means to tie the physical passport (with all the difficult-to-forge watermarks, holograms, UV inks, etc. that Markus mentions) to a particular individual. The biometric data says, officially, “Yes, you are John Doe.” We have authenticated you, but we haven’t authorized you to do anything. The biometrics are just one more factor that will increase the security of the system if used in conjunction with other security measures. A failure in this last point is what I fear.

  4. Extending Nicko’s point about the threat being an unauthorised access to your private data that could be exploited in many scenarios of identity theft, another threat that RFID biometric passports introduce is that it is potentially easier to obtain your private information from the new passport than before.

    Previously, one would’ve had to be at least in the line of sight of your open passport (or gain physical access to it for a period of time), whereas now, as I understand it, it is possible to simply read the data off the passport chip in a covert way given enough proximity (standing in the immigration control queue might be quite productive).

  5. I guess the flaw in the biometric plan comes when a passport is initially obtained by a third party. Increased security also puts the traditional watchmen to sleep.

  6. Sticking with the theme of unauthorised access and broken protocols, one example that comes to mind is the (comprehensively discredited but still widely used) use of mother’s maiden name as a password.

    It’s easy to argue that the new system lowers the barrier to impersonators by providing easy access to name, date and place of birth, enabling automation of acquisition of birth certificates and hence mother’s maiden name.

  7. Two comments:

    A) The correct protocol for a certificate is to run some check which validates something (i.e. the passport) against the certificate. For comparison, a lot of open-source software comes with a PGP certificate. But natural behaviour is that people will not actually validate the certificate, since this requires effort. Then, mere possession will be enough.

    To ensure that possession of a certificate is not enough, the certificates should all be published in an easily accessible register. Is this more or less your point, Markus?

    B) The problem with fingerprints is that they are not exactly a certificate. Fingerprints are ‘something you are’. If a black hat gets a valid certificate but with my fingerprints in it then my own certificate will be compromised. How do I revoke my biometric data? I suppose I could use acid…

  8. I have argued this point about the ‘cloning’ of RFID passports with friends and colleagues, and generally I agree with Markus in that the ability to clone such passports is not a security hole in itself. A physical replica of the passport is still required, and this remains as difficult as ever to achieve, if not more so.

    However, if the data transmitted on the wireless link is sufficient to create a physical copy of the passport (and I plead ignorance here, as I have yet to read the standard) using a blank ‘template’ document (such as would be used by a professional forger) and filling in the blanks, then this is a new problem that has been created by this system. It allows someone to ‘steal’ my passport data, and use it to generate a replica without physical access.

    If the biometric data on the passports were checked, this would not be an issue, but although I have been through many customs counters with new RFID readers, I have yet to put through any procedures additional to those used before I had such a new fangled document.

    Ultimately I remain ambivalent about the new passports.

  9. Unless the biometric data is itself encrypted (and at least some of it, such as the photograph, apparently is not), “posting your data” (or otherwise allowing unrestricted access, such as through RFID) is probably NOT a good idea.

    As noted in the article, it is not too difficult to find/generate a facial-image duplicate, possibly augmented by makeup.

    However, giving persons unknown unlimited time to duplicate your fingerprints electronically seems extra scary.

    Sure, we are already vulnerable to well-demonstrated techniques for lifting and duplicating fingerprints onto “gummi-bear” finger-covers. But, in the case of a “random lift”, there is not necessarily any corresponding personal data with which to correlate the print. With all this data available upon, and broadcast by, a passport, you’re only asking for trouble from impersonators.

    Maybe they can’t effectively use a “cloned” passport against an alert and effective security-screening system. But, as others have noted, “alert and effective” is not necessarily the rule. However, even without reusing the passport per se, the impersonator now has a wealth of personal information, INCLUDING biometric data that (s)he can attempt to replicate.

    Certainly, if somebody shows up at a financial institution, basically looking like you, and apparently having your fingerprints (assuming the tester doesn’t look too hard at the fingertip surface itself), they are going to succeed at impersonating you to a degree that will probably stand up in a court of law.

  10. I think the claims made about biometrics don’t tally with real life performance figures for these solutions.

    Quote: “Modern biometric identification algorithms have a much lower false-positive rate than humans. While you need to search in a group of about 1000 people to find someone whose face would pass a good comparison algorithm with a given passport photo, with iris photos or prints of all fingers, that group would exceed Earth’s population many times”.

    But the real life performance of commercial biometrics is very rarely better than a 1% False Accept Rate. The best measured FAR in any literature I’ve seen is 0.0001% (iris); i.e. one in a million. That is at least 10,000 times worse than your suggestion of accuracy that “would exceed Earth’s population many times”.

    Quote: “Therefore, biometric passports (especially the second generation with iris or multiple fingerprint images) can quite securely be verified on their own. There is no need for fancy packaging and security printing … We could even carry them with us on USB sticks, mobile phones, or simply email them to embassies to apply for visas”.

    There are several problems here, including vulnerability to replay attack if templates are stolen, and the distinct possibility of reverse engineering a synthetic biometric original that will scan to generate the same template (demonstrated already I think with fingerprints).

    For unattended many-to-many matching (as suggested by the idea of emailing templates to embassies) you need really minuscule False Match Rates (perhaps less than one in a trillion) to avoid “Birthday Paradox” false alarms in large databases. And what are you going to do about those poor individuals who cannot enrol at all?
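    The birthday-paradox arithmetic behind this point is easy to check: cross-comparing everyone against everyone in a database of n people means C(n, 2) comparisons, each an independent chance of a spurious match. A short sketch (the database size and rates are illustrative):

    ```python
    from math import comb

    def expected_false_matches(n_people: int, fmr: float) -> float:
        """Expected spurious matches when cross-comparing all pairs:
        C(n, 2) comparisons, each falsely matching with probability fmr."""
        return comb(n_people, 2) * fmr

    # A 1-in-a-million per-comparison rate drowns a large register
    # in false alarms:
    print(expected_false_matches(1_000_000, 1e-6))   # ~500,000 false matches
    # whereas ~1-in-a-trillion keeps them rare:
    print(expected_false_matches(1_000_000, 1e-12))  # ~0.5
    ```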

    Cheers,

    Stephen Wilson
    Lockstep
    ——————-
    Lockstep provides independent specialist advice and analysis on identity management, PKI and smartcards, and is developing unique new smartcard technologies to address transaction privacy, phishing, pharming and spam.

  11. Stephen Wilson wrote that “[t]he best measured FAR in any literature I’ve seen is 0.0001% (iris); i.e. one in a million” and that therefore biometric matching is unsuitable for “unattended many-to-many matching”, such as finding duplicate entries in large databases. This is quite a common fallacy in discussions about biometric algorithms, namely to implicitly assume that the same decision threshold is used in all applications and that therefore the performance of the system can be described with a single false-accept rate. In practice, the threshold of any matching algorithm can be adjusted in order to balance the false-accept and false-reject rates against each other. It is a well-established practice to choose different thresholds to match the different needs of different applications.

    Daugman’s IrisCode, currently the only commercially deployed iris matching algorithm, adjusts its decision threshold automatically based on the size of the comparison set. So if the algorithm is used to deduplicate national identity databases, it will automatically use a decision threshold that makes spurious matches unlikely given the size of the database. This leads to astronomically small false-accept rates, at the expense of somewhat worse false-reject rates, which is exactly what is needed if the number of cross comparisons made is proportional to the square of the number of persons enrolled.

    On the other hand, if the algorithm is merely used to verify a single claimed identity (i.e., match a single live iris image against a single digitally signed iris image on a passport), it adjusts its threshold such that there is a 1 in a million false-accept rate. Most users consider this acceptable for such a one-to-one comparison, which then offers a much better false-reject rate (the primary practical concern in everyday applications) than in the database-deduplication application.

    The available data suggests that the receiver operating characteristic (the curve that shows all possible combinations of false-accept and false-reject probabilities as a parameter of the decision threshold) of IrisCode provides nice operating points for both applications, namely choosing the decision threshold at a Hamming distance of <22% for matches in off-line deduplication of national databases, and <33% for matches in everyday verification of a single identity.
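    Under the binomial impostor model from Daugman’s papers, unrelated IrisCodes disagree like roughly 249 fair coin flips (the 249 degrees-of-freedom figure is taken from that work and is an approximation), so the false-match probability at a given Hamming-distance threshold can be computed directly:

    ```python
    from math import comb

    def p_false_match(threshold: float, dof: int = 249) -> float:
        """Lower tail of Binomial(dof, 0.5): probability that an unrelated
        iris disagrees in at most a fraction `threshold` of the effective
        bits. dof=249 is the approximate figure from Daugman's papers."""
        k_max = int(threshold * dof)
        return sum(comb(dof, k) for k in range(k_max + 1)) / 2**dof

    print(p_false_match(0.33))  # one-to-one verification threshold
    print(p_false_match(0.22))  # stricter threshold for deduplication
    ```

    The model shows why one curve serves both applications: nudging the threshold from 33% down to 22% buys roughly ten further orders of magnitude in false-match probability, at the cost of more false rejects.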

    As far as experimental real-world evidence goes, this algorithm has been used already to cross-compare 632,500 different irises (United Arab Emirates immigration database) without a single false match, which bounds the experimentally observed false match rate to less than 1 in 200 billion cross comparisons.
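    The arithmetic behind that bound is simply the number of distinct pairs among the enrolled irises:

    ```python
    from math import comb

    # Number of distinct pairwise comparisons among 632,500 irises:
    pairs = comb(632_500, 2)
    print(pairs)      # 200,027,808,750, i.e. about 200 billion comparisons
    # Zero observed false matches therefore bounds the false match rate:
    print(1 / pairs)  # < 1 in ~200 billion
    ```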

    Reference: Computer Laboratory Technical Report UCAM-CL-TR-635

    I’m happy to agree that biometric identification is not yet suitable for unsupervised access-control applications, because none of the currently available sensors have strong mechanisms to distinguish between fabricated templates and live tissue, but I would hope that for the primary applications of passports, only supervised recordings will be used, i.e. where there is a human present who is trained and experienced in recognizing the difference between a real finger and a piece of gelatin.

  12. [Sorry, trying again …]

    Markus Kuhn explained that different thresholds can be used for different applications, and that Daugman has analysed all cross matches within the UAE immigration database, calculating a False Match Rate of 1 in 200 Billion.

    Yet something still doesn’t gel. Vendors’ own performance specs and ‘practical’ measurement trials are many, many orders of magnitude worse than Daugman’s results. For instance, the UK Passport Office Biometric Enrolment Trial of May 2005 reported “Iris verification success” of 96% (not that it’s clear what that means exactly; more on that below).

    See http://www.passport.gov.uk/downloads/UKPSBiometrics_Enrolment_Trial_Report.pdf

    Could it be, in part, that the 632,500 different irises in the UAE database were all measured under precisely identical conditions? Is this sort of data representative of the real-world application where repeat presentations are made in different environments?

    The smartcard trade media reported in 2005 some sobering test results from matching across different iris cameras.
    “Templates created on an OKI camera and matched on a Panasonic had the lowest false non-match rate at 2.297%. The highest false reject rate was from templates created on an OKI machines and matched on an LG devices with a 3.240% rate. False match rates were tiny with the OKI LG combination having the lowest rate at .00090%. The highest false match was .00199% with templates created on an LG camera and matched on a Panasonic.”
    http://www.cardtechnology.com/article.html?id=20050811DQB2XGLL.

    0.0009% might indeed be “tiny” but it is still hugely worse than 1 in 200 Billion. I guess that these off-the-shelf systems are not being tuned carefully, and a fixed threshold might be set in order to balance false accepts and false rejects, rather than varying the settings as Markus advises.

    So three practical questions arise:

    1. Why aren’t commercial systems making more use of threshold tuning?

    2. If biometric vendors are judging that it is too hard or too fraught to have their systems tunable, then why do they continue to quote academic performance figures? [My favorite academic figure is the theoretical false match rate of 1 in 10-to-the-78 quoted by at least one iris scanning vendor.]

    3. How goes progress towards standardised measurement protocols for false match, false reject, fail to enrol etc.? The actual procedures for acquiring images go unremarked in most reports. In a real world deployment, say in an immigration setting, what does a False Match Rate of X actually mean? How would it be measured? What variables are controlled in the measurement? How big is the database supposed to be?

    Oops, sorry, that was more than three questions.

    Cheers,

    Stephen Wilson.
    Lockstep Consulting.

  13. Not sure if anyone is still listening to this old thread?!

    I’ve been asking questions in other fora lately, getting very little satisfaction from biometrics’ advocates on fundamental questions like what to do about id theft and revocation, and what to do about the tension between sensitivity and specificity when banks come to use the biometrics in a National ID Card (the government might prefer lower false accept while banks might tend to prefer lower false reject).

    So I’d like to re-state the questions I raised on LBT over a year ago. In particular: What is the latest progress on standardising measurement of FRR, FMR and Fail to Enroll? When the UK National ID Card is rolled out and used in bank branches, these are going to be significant issues, because banks may choose different vendors, and if there is a mistake with all equipment working to manufacturer’s tolerances, liability will be a hot potato.

    I’m also still curious about the fine print behind the Daugman paper discussed last time. How meaningful are the “1 in 200 Billion” figures? Was the testing done on unrealistically homogeneous samples?

    Cheers,

    Stephen Wilson, Lockstep.

Comments are closed.