Moore's Law won't kill passwords

Computers are getting exponentially faster, yet the human brain is constant! Surely password crackers will eventually beat human memory…

I’ve heard this fallacy repeated enough times, usually soon after the latest advance in password-cracking hardware hits the news, that I’d like to definitively debunk it. Password cracking is certainly getting faster. In my thesis I charted 20 years of password-cracking improvements and found an increase of about 1,000× in the number of guesses per second per unit cost that could be achieved, almost exactly a Moore’s Law-style doubling every two years. The good news, though, is that password hash functions can (and should) co-evolve to get proportionately costlier to evaluate over time. This is a classic arms race, and keeping pace simply requires regularly increasing the number of iterations in a password hash.

We can even gain ground against password cracking over time by using memory-hard functions, because memory speeds aren’t increasing nearly as quickly and memory access is harder to parallelise. The scrypt key derivation function is a good implementation of a memory-hard password hash, and every high-security application should be using it or something similar.
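As a concrete sketch of what this looks like in practice, Python’s standard library exposes scrypt directly. The cost parameters below are illustrative, not a recommendation; the point is that `n` is a tunable knob you can raise as hardware improves:

```python
import hashlib
import os

def hash_password(password, salt=None):
    """Hash a password with scrypt; n controls the CPU/memory cost."""
    if salt is None:
        salt = os.urandom(16)  # fresh random salt per password
    # n=2**14, r=8, p=1 are illustrative parameters (~16 MiB of memory);
    # raise n over time to keep pace with cracking hardware
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

salt, digest = hash_password("correct horse battery staple")
# Verification re-derives the digest from the stored salt and compares
_, check = hash_password("correct horse battery staple", salt)
assert check == digest
```

Because the memory cost scales with `n` and `r`, an attacker building custom hardware must pay for memory as well as compute, which is exactly the asymmetry the post argues for.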

The downside of this arms race is that password hashing will never get any cheaper to deploy (even in inflation-adjusted terms). Hashing a password must remain as slow and costly in real terms 20 years from now as it is today, or security will be lower. Moore’s Law will never reduce the expense of running an authentication system, because security depends on this expense, and the expense must be non-negligible: achieving any real security requires that password verification take on the order of hundreds of milliseconds or even whole seconds. Unfortunately, this hasn’t been the experience of the past 20 years. MD5 was launched over 20 years ago and is still the most common implementation I see in the wild, yet it’s gone from being relatively expensive to evaluate to extremely cheap. Moore’s Law has indeed broken MD5 as a password hash, and no serious application should still use it. Human memory isn’t more of a problem today than it used to be, though. The problem is that we’ve chosen to let password verification become too cheap.
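One way to keep the expense non-negligible is to calibrate the work factor against current hardware rather than hard-coding it. A rough sketch, assuming PBKDF2 as the hash (the helper name and 250 ms target are my own choices for illustration):

```python
import hashlib
import os
import time

def calibrate_iterations(target_seconds=0.25, start=10_000):
    """Double the PBKDF2 iteration count until one hash takes ~target_seconds."""
    iterations = start
    salt = os.urandom(16)
    while True:
        t0 = time.perf_counter()
        hashlib.pbkdf2_hmac("sha256", b"benchmark", salt, iterations)
        if time.perf_counter() - t0 >= target_seconds:
            return iterations
        # Moore's Law in reverse: re-run this periodically and the
        # stored iteration count rises with hardware speed
        iterations *= 2

print(calibrate_iterations())
```

Re-running such a calibration every year or two, and storing the resulting iteration count alongside each hash, is precisely the “regularly increasing the number of iterations” the post calls for.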

10 thoughts on “Moore's Law won't kill passwords”

  1. Memory isn’t so much the problem as basic awareness in the first place. It is a pity there isn’t a doubling of human security awareness every two years. You can build slow password hash functions but will developers use them? And will users stop using guessable passwords that no hash function is capable of protecting? And will users stop using the same password across multiple services? It’s sort of a lost cause if you have an application that uses a slow hash function but a high percentage of your users are reusing their password on other systems that don’t. This seems like an arms race where 99% of one side is mostly clueless or doesn’t care.

  2. Nice example. Lots of evidence that this type of ‘security’ is par for the course.

    Large UK Supermarket chain: “…they had plain text password storage, emailed passwords, XSS, SQLi, mixed mode HTTPS, broken password validation and a dose of security misconfiguration yet they still insisted that their security was “robust” and “industry standard””.

    The pros and cons of different types of hashing and how to hash passwords securely is way off their radar screen.

    http://www.troyhunt.com/2013/01/the-problem-with-website-security-is-us.html

  3. Your point is right — provided that…
    a) … the password mechanism in use has some kind of agility so that the cost of an authentication can be regularly adapted over time during (mandatory!) password changes.
    b) …the system in question is designed to completely cut backward compatibility with earlier versions of the password mechanism (which is, sadly, something Microsoft decided not to do in its operating systems some 20 years ago).
    c) …users change their passwords significantly over time so that a password recovered from a “cryopreserved” password hash leaked years before does not give usable hints on the current password (which is, judging from my own practice, kind of wishful thinking).

  4. (seems I pressed the submit button few seconds too early…)

    Thinking about it, requirement a) above does not really require a mandatory password change. Simply put another “onion skin” of hashing around the stored password hash at regular intervals.
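The “onion skin” idea can be sketched as follows: because each new layer hashes the stored digest rather than the password, the server can strengthen every record offline, with no user interaction. All function names here are hypothetical, and PBKDF2 stands in for whatever hash the system uses:

```python
import hashlib
import hmac

def initial_hash(password, salt):
    # Layer 0: the original (possibly now-too-cheap) stored hash
    return hashlib.pbkdf2_hmac("sha256", password, salt, 10_000)

def add_layer(stored_digest, salt, iterations=50_000):
    # Wrap the existing digest in another hash; no plaintext password needed
    return hashlib.pbkdf2_hmac("sha256", stored_digest, salt, iterations)

def verify(password, salt, stored, layers):
    digest = initial_hash(password, salt)
    for _ in range(layers):  # replay the same layers in order
        digest = add_layer(digest, salt)
    return hmac.compare_digest(digest, stored)

salt = b"0123456789abcdef"
stored = initial_hash(b"hunter2", salt)
stored = add_layer(stored, salt)   # periodic strengthening, done offline
assert verify(b"hunter2", salt, stored, layers=1)
```

The trade-off is that the record must track how many layers (and which parameters) were applied, so verification can replay them exactly.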

  5. I don’t like the bit about the human brain being constant either. I used my brain a few weeks ago and found I was capable of learning new things.

    Regarding this specific problem, I don’t think humans are at all incapable of remembering cryptographic keys, but the proper memory techniques are not taught in school and neither society nor security economics directs the individual to wish to remember long strings of random numbers (whether static or changing on a regular basis). But if in future circumstances or incentives change, I’m sure the population will easily rise to the challenge and only a small tail of the distribution would struggle.

    There’s a yawning gap between “will not” and “cannot”.

  6. I guess it is almost impossible to make the internet entirely safe, even with hashed passwords. Perhaps Google’s plan to create ID tokens can help somewhat (for a while).

  7. It’s not just CPUs that are getting better. Disks are getting bigger as well and that makes it easier to store precomputed password hashes with a corresponding password.

    Note that I said “a corresponding password”, not “the corresponding password”. If the password has more bits than the hash, then password collisions are inevitable. Depending on the circumstances, an attacker may only care about getting a working password rather than the original password for the protected resource. For many password systems this is only a theoretical concern, because “salts” make such precomputed tables too large to store economically.

    Still if we are going to propose a new framework for passwords, it would be nice to allow for increasing the salt size over time without having to start over from scratch. Making it transparent to users so that the first time they login after the salt size is increased a new hash (with a new longer salt) was generated and stored as part of the login process seems doable and possibly even preferable to making a user type their password for a second time just because the administrator decided to increase the length of the salt.
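The transparent upgrade described above can be sketched as a rehash-on-login step: after a successful verification against the old record, the server happens to have the plaintext password in hand and can re-derive a new record with a longer salt and current parameters. The record layout and constants here are made up for illustration:

```python
import hashlib
import hmac
import os

CURRENT_SALT_LEN = 32          # new policy: longer salts
CURRENT_ITERATIONS = 200_000   # new policy: higher work factor

def make_record(password, salt_len, iterations):
    salt = os.urandom(salt_len)
    digest = hashlib.pbkdf2_hmac("sha256", password, salt, iterations)
    return {"salt": salt, "iterations": iterations, "digest": digest}

def login(password, record):
    """Verify against the stored record; upgrade it if parameters are outdated."""
    digest = hashlib.pbkdf2_hmac("sha256", password,
                                 record["salt"], record["iterations"])
    if not hmac.compare_digest(digest, record["digest"]):
        return False, record
    # The password is available right now, so upgrade transparently
    if (len(record["salt"]) < CURRENT_SALT_LEN
            or record["iterations"] < CURRENT_ITERATIONS):
        record = make_record(password, CURRENT_SALT_LEN, CURRENT_ITERATIONS)
    return True, record

old = make_record(b"s3cret", salt_len=8, iterations=50_000)
ok, upgraded = login(b"s3cret", old)
assert ok and len(upgraded["salt"]) == CURRENT_SALT_LEN
```

Storing the parameters inside each record is what makes the scheme agile: old and new records coexist, and each user is migrated the next time they log in, with no second password prompt.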

  8. To clarify a few points made by Bill: password collisions inevitably exist, but they are computationally infeasible to find if a proper cryptographic hash function is used. That’s essentially a non-issue.

    Salt size also doesn’t need to be updated over time since it doesn’t have to be guessed. The only goal of salt is to ensure a unique function is used for each user’s password, and 32 bits is usually fine for that purpose forever.
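The role of salt can be shown concretely: two users with the identical password end up with unrelated hashes once each has a random salt, so a single precomputed table can’t cover both. A minimal sketch using PBKDF2 (the 16-byte salt length is my choice for the example, larger than the 32 bits discussed above):

```python
import hashlib
import os

def salted_hash(password):
    salt = os.urandom(16)  # random per-user salt (illustrative length)
    return salt, hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)

salt_a, hash_a = salted_hash(b"password123")
salt_b, hash_b = salted_hash(b"password123")
# Same password, different salts: the attacker must crack each separately
assert hash_a != hash_b
```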
