Password Authenticated Key Exchange (PAKE) is one of the central topics in cryptography. It aims to address a practical security problem: how to establish secure communication between two parties solely based on their shared password without requiring a Public Key Infrastructure (PKI).

The solution to the above problem is very useful in practice — in fact, so useful that it has spawned a lot of “fights” over patents. Many techniques were patented, including the well-known Encrypted Key Exchange (EKE) and Simple Password Exponential Key Exchange (SPEKE). A secondary problem is technical: both the EKE and SPEKE protocols have subtle but worrying technical limitations (see the paper for details).

At the 16th Workshop on Security Protocols, held in Cambridge, UK, in April 2008, I presented a new solution (joint work with Peter Ryan) called Password Authenticated Key Exchange by Juggling (or J-PAKE). The essence of the protocol design inherits from earlier work on solving the Dining Cryptographers problem; we adapted the same juggling technique to the two-party case to solve the PAKE problem. To the best of our knowledge, this design is significantly different from all past PAKE solutions.

Intuitively, the J-PAKE protocol works like a juggling game between two people — if we regard a public key as a “ball”. In round one, each person throws two ephemeral public keys (“balls”) to the other. In round two, each person combines the available public keys with the password to form a new public key, and throws the new “ball” to the other.

After round two, the two parties can securely compute a common session key, provided they supplied the same password. Otherwise, the protocol reveals nothing more than the fact that the supplied passwords at the two sides are not the same. In other words, one can prove knowledge of the password without revealing it. A Java implementation of the protocol on a MacBook Pro laptop shows that the total computation time at each side is merely 75 ms.
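The two rounds described above can be sketched with Java’s BigInteger arithmetic. This is a toy illustration, not the reference implementation: the 11-bit group parameters below are assumptions chosen only so the numbers are readable, and the Schnorr knowledge proofs are omitted. A real deployment needs a much larger group (see the discussion of group parameters below) and must include the knowledge proofs.

```java
import java.math.BigInteger;
import java.security.SecureRandom;

// Toy sketch of the two J-PAKE rounds (knowledge proofs omitted).
public class JpakeSketch {
    // ILLUSTRATIVE parameters only: p = 2q + 1 with p, q prime.
    static final BigInteger p = BigInteger.valueOf(2039); // safe prime
    static final BigInteger q = BigInteger.valueOf(1019); // order of the subgroup
    static final BigInteger g = BigInteger.valueOf(4);    // generator of the order-q subgroup

    public static boolean runDemo(BigInteger sAlice, BigInteger sBob) {
        SecureRandom rnd = new SecureRandom();

        // Round 1: each side throws two ephemeral public keys ("balls").
        // For simplicity all four exponents are chosen in [1, q-1] here.
        BigInteger x1 = randExp(rnd), x2 = randExp(rnd); // Alice's secrets
        BigInteger x3 = randExp(rnd), x4 = randExp(rnd); // Bob's secrets
        BigInteger gx1 = g.modPow(x1, p), gx2 = g.modPow(x2, p);
        BigInteger gx3 = g.modPow(x3, p), gx4 = g.modPow(x4, p);

        // Round 2: combine the received keys with the password s.
        // Alice sends A = (g^(x1+x3+x4))^(x2*s); Bob sends B = (g^(x1+x2+x3))^(x4*s).
        BigInteger A = gx1.multiply(gx3).mod(p).multiply(gx4).mod(p)
                          .modPow(x2.multiply(sAlice).mod(q), p);
        BigInteger B = gx1.multiply(gx2).mod(p).multiply(gx3).mod(p)
                          .modPow(x4.multiply(sBob).mod(q), p);

        // Key material: each side independently derives K = g^((x1+x3)*x2*x4*s).
        BigInteger Ka = B.multiply(gx4.modPow(x2.multiply(sAlice).mod(q), p).modInverse(p))
                         .mod(p).modPow(x2, p);
        BigInteger Kb = A.multiply(gx2.modPow(x4.multiply(sBob).mod(q), p).modInverse(p))
                         .mod(p).modPow(x4, p);
        return Ka.equals(Kb); // true iff both sides used the same s
    }

    static BigInteger randExp(SecureRandom rnd) {
        BigInteger x;
        do { x = new BigInteger(q.bitLength(), rnd).mod(q); } while (x.signum() == 0);
        return x;
    }

    public static void main(String[] args) {
        System.out.println(runDemo(BigInteger.valueOf(123), BigInteger.valueOf(123))); // true
        System.out.println(runDemo(BigInteger.valueOf(123), BigInteger.valueOf(124))); // false (with overwhelming probability)
    }
}
```

Note how the password s only ever appears on the exponent of a group element; mismatched passwords simply produce different session keys rather than leaking information.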

We hope this protocol will be useful to security engineers. For example, compared with SSL/TLS, J-PAKE is potentially much more resistant to phishing attacks, not to mention that it is PKI-free. Since this protocol is the result of an academic research project, we didn’t — and have no intention to — patent it. As explained in the paper, J-PAKE even has technical advantages over the patented EKE and SPEKE in terms of security, with comparable efficiency. It has been submitted as a follow-up to the possible future extension of IEEE P1363.2.

We believe the PAKE research is important and has strong practical relevance. This post is to facilitate discussions on this subject. The paper can be viewed here. Any comments or questions are welcome.

**Update**

- **2008-06-28**: a crude J-PAKE demo source code (.java) by me. (link broken)
- **2008-11-04**: a more refined J-PAKE implementation in C and OpenSSL by Ben Laurie.
- **2008-11-11**: possible applications of J-PAKE in VPN and browser by James.
- **2009-02-08**: public group parameters for 112-bit and 128-bit security can be found in the comments.
- **2009-03-15**: fixed the broken link to the old Java file. Here is the link to the Java demo code.
- **2010-04-17**: a journal version of the paper is available on IACR. No technical change to the protocol.
- **2010-10-25**: the journal version of the paper has been accepted for publication in the TCS journal – Springer Transactions on Computational Science, special issue on “Security in Computing”, 2011.
- **2010-11-25**: Sebastien Martini reported an implementation issue of J-PAKE in OpenSSL and OpenSSH. The issue does not apply to the Java demo code that I wrote. As stated in the last paragraph of p. 11 in the paper, one shall check that the element lies within the specified group. Stefan Arentz implemented a fix in OpenSSL. Official OpenSSL and OpenSSH patches can be found here and here.
- **2011-01-11**: Mozilla built J-PAKE into the base product of Firefox 4 (beta 8 and later). More details here.
- **2012-01-18**: today, Mohsen Toorani uploaded a paper on IACR eprint claiming several attacks on J-PAKE. My response can be found here.
- **2012-07-21**: Phil Clay contributed a Java implementation of J-PAKE to bouncycastle.
- **2013-02-24**: J-PAKE included in bouncycastle 1.48.
- **2013-03-28**: a code example showing how to use J-PAKE in bouncycastle.
- **2013-05-21**: submitted two Internet Drafts to IETF (one on J-PAKE and the other on the Schnorr NIZK Proof).
- **2013-12-30**: a code example showing how to implement J-PAKE using Elliptic Curve (or ECDSA-like group setting).
- **2014-04-17**: J-PAKE included in the VMware NSS Cryptographic Module.
- **2014-10-27**: J-PAKE adopted by the ISO/IEC standard (11770-4) following the ISO/IEC SC27 meeting held in Mexico City, October 20-24, 2014.
- **2014-12-26**: my response to Mohsen Toorani’s IEEE ISCC’14 paper “Security Analysis of J-PAKE”.
- **2015-04-29**: J-PAKE included in Python.
- **2015-05-08**: standardization of J-PAKE in ISO/IEC 11770-4 in process. The first working draft (WD1) passed reviews by ISO/IEC SC27 at Kuching, Malaysia, May 5-8, 2015.
- **2015-05-19**: here is an independent formal analysis of J-PAKE by other researchers, published at Oakland ’2015. Their results are consistent with the original J-PAKE paper.
- **2015-05-30**: J-PAKE included in the BarcoSilex BA414E Public Key crypto engine.
- **2015-05-31**: Firefox is upgrading Sync 1.1 (which uses J-PAKE to transfer a full-entropy AES key between sync devices) to the new Sync 1.5 (which uses user-defined passwords as encryption keys). But Pale Moon has decided to stay with Sync 1.1.
- **2015-07-28**: the Thread technical white paper is public. It describes a technique that securely enrols a new device into the network based on J-PAKE. The technique is used in Google Nest thermostat products.
- **2017-10-06**: J-PAKE published in ISO/IEC 11770-4 as an international standard and also published in RFC 8236. See here.

Doesn’t WPA pre-shared key algo work like this?

Mutual authentication based on password, isn’t this the same TLS-PSK idea?

> Doesn’t WPA pre-shared key algo work like this?

Not really. In WPA PSK mode, a password is directly transformed into an encryption key. This is vulnerable to off-line brute-force attacks, which is exactly the problem that a PAKE protocol is designed to address.

> Mutual authentication based on password, isn’t this the same TLS-PSK idea?

The difference is that TLS-PSK assumes that a STRONG shared key already exists between two parties. But J-PAKE aims to bootstrap a strong key from a weak but memorable secret (i.e., password). Clearly, the two techniques are complementary, though not the same.

Would it make sense to implement this protocol as an alternative method of password authentication for SSH?

Phil, right. It could have two advantages: 1) the user never leaks the password to a remote server; 2) the authentication is mutual — not only does the user authenticate himself to the server, but the server must also authenticate itself to the user (based on sharing the same password).

This is not my field, but I’ll have a go.

I think this is somewhat vulnerable to brute force. Both sides carry out steps 1 and 2 without checking the other’s password (s). They only realise there is a problem when they compare session keys (or hashed session keys).

By that stage, both seem to have everything needed to generate K for any value of s, and test it locally using the session key which came from the other party, so if the password is very weak then a dictionary attack seems possible.

Comparing hashed session keys, rather than real session keys, may not help much assuming very weak (low-entropy) passwords, as the hash function cannot be very “lossy” without risking false matches.

Thanks to LBT for working on non-patented crypto. Lots of weird combinations of crypto primitives and implementations are possible, and really should not be patented, like mixing paint, or plant patents. Easy to generalize, hard to prove and work out all angles. Haven’t read the paper, or reflected on this much. Seems neat; however, a few additions might be very helpful.

Interesting, I would want some tracking to deal with a hostile Mallory/s, or detect/test/time_to_crack it after. Some type of covert channel with like watermarking, or split keys, etc. I’m no crypto dude, but this J-PAKE seems like a good start, for some real world needs. It’s the extra little crypto refinements where the big $ and focus really come into play. LBT helps non-crypto people to start to get the big picture.

Peter,

> By that stage, both seem to have everything needed to generate K for any value of s, and test it locally using the session key which came from the other party, so if the password is very weak then a dictionary attack seems possible.

The scenario you described is an active attack, which is addressed in Theorem 5 of the paper. The short answer to your question is that the attacker cannot exhaustively test passwords because of the square term on the exponent (see Eq. 2 in Theorem 5). Interestingly, this square term only vanishes when the two passwords are the same: s=s’.

I’d talk with lawyers about making sure this remains patent and license free. In the US you can seek a patent up to a year after publicly disclosing it.

Your best bet is to give the license for a sample implementation to the EFF.

Are you sure the knowledge proofs themselves don’t leak enough information to brute force the password?

In Step 2, Alice sends out a knowledge proof of x_2*s. If Schnorr signatures are used as suggested in the paper, this means Alice sends out {g^v, r = v – x_2 * s * h}. Eve, the attacker, computes y = g^r / g^v = g^(-x_2 * s * h). She then computes z = y^(1/h) = g^(-x_2 * s). Since Alice already sent out g^x_2 earlier, Eve can now take an offline guess s’ of the password and compute (g^x_2)^(-s’) and compare it against z to see if she got it right.

Did I miss anything in this analysis?

Do you plan to make the implementation available so others can look at what you have done in practical terms and apply to other systems?

Hi Tom,

Note that for the knowledge proof of x_2*s, the generator is no longer g; it’s g^(x1+x3+x4) for Alice, and g^(x1+x2+x3) for Bob. The new generator is computed by a simple multiplication: the general rule is to multiply the two received public keys by the first public key one generated. See the last paragraph of Section 2.2 for more details.

> Do you plan to make the implementation available so others can look at what you have done in practical terms and apply to other systems?

Yes, I’ve a Java prototype. I’ll make it available at some point.

The proofs in the paper don’t depend on the Schnorr KPs, and very little is said as to why they are needed (“[t]he necessity of the knowledge proofs is motivated by Anderson’s sixth principle in designing secure protocols [2]: “Do not assume that a message you receive has a particular form unless you can check this.””). That may be a good guiding principle, but it’s not rigorous. I’d be interested in how the use or non-use of Schnorr KPs affects the protocol.

I agree that Tom Wu’s proposed attack doesn’t work: g^x_2 is of no help in attacking the KP for g^(x_2*s) because the two generators differ.

Also, while it’s clear that x_2 and x_4 must not be 1, it’s not clear what one gains from allowing x_1 and x_3 to be 1. Can you elaborate?

Nico,

> The proofs in the paper don’t depend on the Schnorr KPs

The Schnorr KPs essentially ensure that the messages are indeed formatted as described. Without these KPs, the proofs would be meaningless, as we cannot take it for granted that a “public key” is actually a public key rather than a random string. As shown in the first line of Table 2, the Schnorr KP is an essential (and standalone) building block in the whole analysis.

Here is one question you might be interested to think of: can we remove the KPs for round 2 (not for round 1)?

In that case, the attacker would have greater freedom to fabricate the round-2 messages in any format they wish, so Theorem 5 won’t be valid in its existing form. However, the protocol MIGHT still be secure as I couldn’t think of possible attacks. But I couldn’t prove it either. In any case, I’m keeping these round-2 KPs in the protocol as removing them only gives a small improvement in performance (reducing 3 out of 14 exps).

> it’s not clear what one gains from allowing x_1 and x_3 to be 1

I guess you meant 0 instead of 1 here. We allow x_1 or x_3 to be 0, as there is no need for restriction here. What really matters is x1+x3+x4 != 0 (for Alice) and x1+x2+x3 != 0 (for Bob). Refer to the last paragraph of Section 2.2.

The computation of K depends on s (as well as the ephemeral exponents). Thus it should be sufficient to demonstrate that A and B have the same K; demonstrating that A and B possess the exponents for their exponentials seems superfluous.

I think my bigger issue is: why don’t we use KPs in, say, SSHv2?

You say that without the KPs an attacker has more freedom to modify the messages, but active attackers (including MITMs) were an assumption to begin with. Assuming a pre-agreed group, the messages from A and B look like concatenations of fixed-size bit strings (large numbers padded with zeros), and each bit string (ignoring the KPs) is indistinguishable from a randomly selected bit string. The KPs can be seen as providing integrity protection for all the exponentials, but, so what? If we don’t have the KPs then we can still detect transmission errors (or active attacks) later when A and B check that they have the same session key.

I think I can see now how the theorems depend on the KPs though, and just for that alone it may be worth keeping them, and it may be worth adding them elsewhere. On the other hand, the KP you use depends on a “secure hash function” (is reference #5 the one you meant for this?) and this probably means that the proofs don’t necessarily translate into perfect security in practice. I can’t think of what attacks the KPs protect A and B from either, but I’m not a cryptanalyst…

Oh yeah, I did mean 0, not 1.

Also, looking at table 1 I count 9 exponentiations (out of the total of 14) needed by Alice to generate and verify KPs, not three (yes, I know you were talking about the second round of messages specifically). That’s more than half the total 🙂 Now, I agree that because the exponents are all short the overall protocol compares well with other uses of DH in Internet protocols, so maybe it doesn’t matter. In any case, I’m not asking that you remove the KPs!

> I think my bigger issue is: why don’t we use KPs in, say, SSHv2?

That’s really a good question. I’m not advocating using KPs in every application, as it depends on the situation. For some messages, you might only need to check that they aren’t specific values like 0 or 1 (requiring 0 exps); for some, you might need to check whether the message is indeed an element of the prime-order group (requiring 1 exp in checking); for some more stringent cases, you might need a KP (requiring 1 exp in generation and 2 in checking). As you can see, the answer really lies in the trade-off between the computation and the level of freedom you can tolerate active attackers having. Then it goes back to Anderson’s 6th principle (I repeat it here as I think it is really useful): “Do not assume a message you receive has a particular form unless you can check this”.
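The middle option — checking that a received value is an element of the prime-order group — is the one later flagged in the OpenSSL/OpenSSH implementation issue, so here is a minimal sketch of it. The toy parameters in main are illustrative assumptions, not values from the paper:

```java
import java.math.BigInteger;

// Check that a received value X lies in the prime-order subgroup of Z_p*
// of order q. This is the "1 exp in checking" option: one modPow call.
public class GroupCheck {
    public static boolean inGroup(BigInteger X, BigInteger p, BigInteger q) {
        return X.compareTo(BigInteger.ONE) > 0          // exclude 0 and the identity 1
            && X.compareTo(p) < 0                       // must be a canonical element mod p
            && X.modPow(q, p).equals(BigInteger.ONE);   // X's order divides q
    }

    public static void main(String[] args) {
        // Toy group for illustration: p = 2q + 1, both prime.
        BigInteger p = BigInteger.valueOf(2039), q = BigInteger.valueOf(1019);
        System.out.println(inGroup(BigInteger.valueOf(4), p, q));      // true: 4 generates the order-q subgroup
        System.out.println(inGroup(p.subtract(BigInteger.ONE), p, q)); // false: p-1 has order 2
    }
}
```

Skipping this check is what allowed invalid (small-order) elements to be accepted in the affected implementations.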

> On the other hand, the KP you use depends on a “secure hash function” (is reference #5 the one you meant for this?) and this probably means that the proofs don’t necessarily translate into perfect security in practice.

Hmm…, I understand you. That’s an old and long-standing debate in crypto. Some crypto purists deem anything involving the random oracle, like Schnorr’s signature etc, as unattractive. I could be wrong, but I’m a security engineer, not keen on the “perfect security”. I think in practice, there are far more important factors to consider, e.g., efficiency, and how easy for other people to understand your protocol and proofs.

Ben Laurie has started a thread over on the cryptography mailing list.

Back to x_1 and x_3. If they are both == 0 (given that you allow it) then K = 1^(x_2 * x_4 * s) == 1. Not good. An eavesdropper will know this too.

Now, K = (g^(x_1 + x_3))^(x_2 * x_4 * s), so if x_1 + x_3 == 1 then we have K = g^(x_2 * x_4 * s), and since you allow for this one must wonder why bother with x_1 and x_3 at all: why not just exchange g^x and g^y and compute K = g^(x*y*s)?

If you think that g^(x*y*s) insufficient then you must constrain x_1 != 0, x_3 != 0 and x_1 + x_3 > 1. But if g^(x*y*s) is sufficient, then J-PAKE is overly complex and can be greatly simplified.

Similarly, x_2 and x_4 shouldn’t be == 1 either: if x_1 == x_3 == 0 and either or both of x_2 and/or x_4 == 1 then K = g^xs or just g^s, and that can be attacked off-line (passive attack).

Perhaps we need another guiding principle here: check for weak public keys and don’t generate them yourself.

So, Alice should check that g^x_3 and g^x_4 != 1 and != g, and Bob should do the same for g^x_1 and g^x_2.

Or did I miss something?

Also, there’s not enough discussion of SignerID (used in the Schnorr KPs). Since its purpose is to prevent replays, shouldn’t it just be a nonce offered by the other party?

And why should one protect against KP replays? The purpose of the KPs is to show that g^x_1, g^x_2, g^x_3 and g^x_4 are, in fact, public keys (that is, that there is an exponent in the group that produces the public key), so who cares if the KP is fresh? If g^x_i and its KP are replayed but the attacker doesn’t possess x_i then the subsequent round will fail; the computational cost of the subsequent round is smaller than the cost of generating and verifying the KPs.

I bring this up because it occurs to me that the only KPs where there may be value in protecting against replays are the ones in round 2, so you might as well have Alice and Bob exchange nonces in round 1 and use the other’s nonce as the SignerID in the KP. But then, since the keys exchanged in round 2 necessarily involve one’s randomly selected exponent then we know that the keys must be unique and there’s no realistic chance of a KP replay, so then nonces are not needed.

Adding nonces, for replay detection/prevention, as the SignerIDs in the KPs for the keys in round 1 would require a prior round to exchange the nonces, which would make the protocol slower. I’m not sure that the replay protection of KPs is needed at all, as you can see. So I’m not sure that SignerID is needed either, but if the SignerIDs are kept then they should be explicitly exchanged somewhere.

Nico,

> Back to x_1 and x_3. If they are both == 0 (given that you allow it) then K = 1^(x_2 * x_4 * s) == 1. Not good. An eavesdropper will know this too.

An honest player wouldn’t choose it to be zero; he would follow the protocol and choose a random value. The scenario you describe is to have two dishonest players communicating with each other. Of course, there is no protection for that.

> Also, there’s not enough discussion of SignerID (used in the Schnorr KPs). Since its purpose is to prevent replays, shouldn’t it just be a nonce offered by the other party?

Thanks for being observant here. The SignerID could be a person’s name or a machine’s MAC or IP, but it can’t be a nonce. First, it’s good practice to include the explicit signer’s ID in the digital signature. Second, and more importantly, it is required by the underlying logic in the security proofs. Without the SignerID, what happens if Bob replays Alice’s KP back to Alice and claims he knows the discrete logarithm when in fact he doesn’t? Well, this particular attack may not work here, but that’s irrelevant; it shows the logical importance of including the SignerID in the hash.

If you’re sharp (which I’m sure you are), you may ask what happens if Bob replays a third person’s KP, say Charlie’s, to Alice? It won’t affect the logic in the proof – from Alice’s perspective, she is talking to an abstract entity under the name “Bob” which knows the discrete logarithm. In fact, Bob gains no advantage by doing that, as he would be better off generating the x values himself.

(If you also read my earlier paper on the av-net protocol, it may become clearer about the importance of including the Signer’s ID in the hash)
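To make the role of the SignerID concrete, here is a minimal sketch of a Schnorr non-interactive knowledge proof with the SignerID bound into the challenge hash. The toy group parameters and the exact ordering of the hash inputs are assumptions of this sketch, not the precise encoding used in the paper:

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.SecureRandom;

// Minimal Schnorr NIZK proof of knowledge of x for X = g^x,
// over a toy 11-bit group (illustration only).
public class SchnorrKP {
    static final BigInteger p = BigInteger.valueOf(2039); // safe prime, p = 2q + 1
    static final BigInteger q = BigInteger.valueOf(1019); // subgroup order
    static final BigInteger g = BigInteger.valueOf(4);    // order-q generator

    // Prover: pick random v, send {V = g^v, r = v - x*h mod q}.
    public static BigInteger[] prove(BigInteger x, String signerId) {
        BigInteger v = new BigInteger(q.bitLength(), new SecureRandom()).mod(q);
        BigInteger V = g.modPow(v, p);
        BigInteger X = g.modPow(x, p);
        BigInteger h = challenge(V, X, signerId);
        BigInteger r = v.subtract(x.multiply(h)).mod(q);
        return new BigInteger[] { V, r };
    }

    // Verifier: accept iff g^r * X^h == V.
    public static boolean verify(BigInteger X, BigInteger[] proof, String signerId) {
        BigInteger V = proof[0], r = proof[1];
        BigInteger h = challenge(V, X, signerId);
        return g.modPow(r, p).multiply(X.modPow(h, p)).mod(p).equals(V);
    }

    // Challenge h = H(g || V || X || SignerID) mod q; the SignerID in the
    // hash is what stops a proof made for one identity verifying for another.
    static BigInteger challenge(BigInteger V, BigInteger X, String signerId) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            md.update(g.toByteArray());
            md.update(V.toByteArray());
            md.update(X.toByteArray());
            md.update(signerId.getBytes(StandardCharsets.UTF_8));
            return new BigInteger(1, md.digest()).mod(q);
        } catch (java.security.NoSuchAlgorithmException e) {
            throw new RuntimeException(e);
        }
    }
}
```

Verification works because g^r * X^h = g^(v - x*h) * g^(x*h) = g^v = V; replaying Alice’s proof under a different SignerID changes h, so the same {V, r} no longer verifies.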

Why would an honest player never ever choose x_1 == 0, x_2 == 1, x_3 == 0, or x_4 == 1 unless the protocol specs tell implementors not to? Or did I miss where your paper says not to do this? (Also, if one party picks weak keys then an active attack is possible.)

The KPs authenticate that the sender, or rather, whoever computed the public key and the KP, knows the exponent. The KPs don’t, however, authenticate the whole exchange. For this we need an authenticator proving possession of the session key K. So I’m not at all concerned about one player stealing some other player’s ephemeral public keys and KPs: without actually knowing the exponents the attacker gets nothing and the proof of possession of K will fail.

I agree that binding the IDs into the exchange is important, I just don’t see how doing it via the KPs gets you replay protection — it gets you protection against an attacker replaying KPs while pretending to be someone other than who they took the KPs from, but, so what? After all, the attacker won’t know the exponents, therefore the overall exchanges where he/she replays someone else’s keys/KPs will fail. Let’s be clear about what features the KPs provide: they prove the sender knows the private key to a public key, and provide one method (though there are other possibilities) of binding the parties’ IDs into the exchange, but they don’t provide replay protection unless you salt them with nonces.

> Why would an honest player never ever choose x_1 == 0, x_2 == 1, x_3 == 0, or x_4 == 1 unless protocol specs tell implementors not to?

An honest user would choose a value randomly from the allowed range (for his own benefit of privacy protection). In the protocol, at least one user must be honest; otherwise, we are solving an impossible task!

> but, so what? After all, the attacker won’t know the exponents, therefore the overall exchanges where he/she replays someone else’s keys/KPs will fail.

In fact, heuristically, I couldn’t think of attacks either (though I know there can be an attack in the av-net protocol if the signer id were not included in the hash). But we can’t be too sure about the heuristic analysis. So the primary reason for including Signer’s IDs in KP is that this is required by the logic of proofs (unless it can be proven that it’s still secure without the signer id).

You can end up with x_1 == 0 if you choose randomly from 0..r.

Nico,

> You can end up with x_1 == 0 if you choose randomly from 0..r.

How likely would that be? r is a 160-bit prime.

Statistically speaking, an honest party would never choose a 0 (or 1, or 2, or …) as his secret private key. So the real question is what if Alice is an active attacker who can fix x_1 to any value of her choice. As I explained earlier, if she chose x_1=0, it doesn’t affect the protocol security, as long as the other party is honest.

Not very (ignore the Debian OpenSSL entropy bug for the moment). But checking is cheap. The very special case x_1 == x_3 == 0 implies K == 1 which disproves the security of the protocol, but I believe the proofs stand otherwise. Better safe than sorry no?

Well, as far as the protocol design is concerned, I don’t see the necessity of doing that, even though the checking is cheap. And I don’t think that’s a valid attack. Having said that, I can understand in practice, you might want to add extra checking just to ensure the random number generator isn’t poorly implemented. But that’s the implementation issue.

As far as I could tell, the proofs in this paper are really only “hand waving”, in the sense that you do not prove security in any model for password-based key exchange (e.g., the one introduced by Bellare et al.), but instead only prove isolated properties. This doesn’t necessarily mean this work is “bad”, but the lack of a real proof should be fairly noted.

Hi Jonathan,

Thanks for your comment. That is a reasonable – though not unanticipated – critique. We are not concerned with whether the proofs are “hand waving” or not – our only concern is whether they make sense. To that end, we tried to keep the proofs simple and short, while capturing the essential logic. Hopefully, people find the paper easy to follow. In any case, security proofs are just a tool to aid understanding. They are not the answer to everything, and can’t replace the time-honored rule in protocol analysis: public scrutiny. We welcome any attacks in concrete terms. : )

Fair enough, I was mainly pointing out that you should have honestly admitted the issue. Also, it is important to specify a formal security definition (even if you do not have a proof) so that it is clear what attacks count as “valid”.

Over the past years, I have heard debates on the (un)importance of formal security definitions etc. I don’t have strong opinions on that. It is just that I have seen many theory papers in this area with over 20 pages of definitions and proofs. There is no doubt that they are important, but I often can’t help wondering whether they are overly complex and, as a result, whether few people actually read them.

I’m from an Engineering background, so I’m just attempting to tackle this PAKE problem from an engineer’s perspective. An engineer’s task is to find something that works – plain, simple, no fancy formulas. Some researchers might like to take it from here and add more “formalism” into the paper. I’m sure that will be a valuable addition in future work.

I tried to argue already that definitions have independent value even if there are no proofs. Proofs have independent value even if no one other than the authors reads them, since at least they force the authors to think through the security of their protocol. (Of course, this assumes the proof is correct in the first place, but even an effort at a proof is better than no effort at all.) Finally, proofs are important, especially for a protocol to be standardized, because it is more likely (and more time efficient) for someone to read your proof than to try to come up with random attacks against your protocol.

We can take the discussion off-line; please feel free to email me if you are interested. I hope I can convince you that security proofs *are* good engineering.

I’m not sure whether I can agree with you on the first two sentences. But you’re the expert : )

As for the very last sentence, I need to clarify that I agree (from the very beginning) that security proofs are good engineering, especially compared with heuristic analysis. I think our difference only lies in the exact approach, or how formal you want to make the proofs look. I tend to think that a useful security proof should be written in a way that readers find easy to understand (and be able to quickly spot a mistake, if there is any). I hope we made that clear in the paper.

I find the sentence “Since s has low entropy, we assume the value of s falls within [1, q-1]” troubling – this is a non sequitur; the range of s and its entropy are unrelated.

(Well, except inasmuch as the range obviously puts an upper bound on the entropy).

Hi kme,

> I find the sentence “Since s has low entropy, we assume the value of s falls within [1, q-1]” troubling

A password is a string of characters. It needs to be encoded into a numeric value before being used in the modular arithmetic. This sentence says that since the password has low entropy, it is possible to encode the string into a numeric value within [1, q-1] for a 160-bit q. E.g., if the password has fewer than 160/7 ≈ 22 chars, you could simply concatenate the 7-bit ASCII values to form a numeric value (Java has a ready API to construct a BigInteger from byte[]). Alternatively, for passwords of any length, you could just take a hash value.

We specify this range mainly to be mathematically precise, because s, s+q, s+2q, …, are equivalent values on the exponent. Otherwise, the claim of restricting exactly one guess of password per attempt would be incorrect (in the mathematical sense).

Are you interested in defining an EAP method based on J_PAKE ?

Sure, but I think you need to be precise with this kind of thing 🙂

Choosing a good encoding method is not entirely trivial. English phrases significantly longer than 22 characters can still have less than 160 bits of entropy. A naive implementation that simply took the first (or last) 22 characters would clearly allow testing many plausible low entropy passwords in one trial key exchange.

The obvious candidate is 160 bits of hash of the low-entropy input (at least in this case the relationships between passwords with equivalent s is non-trivial). What method do you recommend?

Perhaps it is better to simply restrict the claim to exactly one guess of s per attempt.

kme,

> Choosing a good encoding method is not entirely trivial.

I think it can be fairly straightforward. Note that a password typically has 8-10 chars. Some applications even limit users to no more than 14 chars for a password. Assuming a password is limited to under 22 chars, concatenating the ASCII values looks like a good method to me.

> The obvious candidate is 160 bits of hash of the low-entropy input

Yeah, using a hash is another way. You could use some function that looks like SHA1(password)%q to encode a password to a number s, but you may need to take care to ensure that s != 0 for non-empty passwords.

> Perhaps it is better to simply restrict the claim to exactly one guess of s per attempt.

That would be less meaningful in practical terms.
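The two encoding approaches discussed above can be sketched as follows. This is an illustrative sketch under stated assumptions: the class and method names are hypothetical, SHA-256 is used in place of the SHA1 mentioned above, and the hash variant maps into [1, q-1] so that s != 0 by construction.

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

// Illustrative sketch of encoding a password string into a number s.
public class PasswordEncoding {
    // Simple concatenation of the password's byte values; suitable only
    // when the password is short enough that the result stays below q.
    public static BigInteger concatEncode(String password) {
        return new BigInteger(1, password.getBytes(StandardCharsets.US_ASCII));
    }

    // Hash-based encoding for passwords of any length:
    // s = (H(password) mod (q-1)) + 1, guaranteeing 1 <= s <= q-1.
    public static BigInteger hashEncode(String password, BigInteger q) {
        try {
            byte[] h = MessageDigest.getInstance("SHA-256")
                    .digest(password.getBytes(StandardCharsets.UTF_8));
            return new BigInteger(1, h).mod(q.subtract(BigInteger.ONE)).add(BigInteger.ONE);
        } catch (java.security.NoSuchAlgorithmException e) {
            throw new RuntimeException(e);
        }
    }
}
```

As kme notes, whichever encoding is used, distinct plausible passwords should map to distinct values of s, so that one protocol run tests exactly one password guess.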

synp,

> Are you interested in defining an EAP method based on J_PAKE ?

Thanks for the suggestion. I looked through the EAP wiki, it occurs to me that it can be something very relevant. The J-PAKE protocol is based on public key juggling, and as the name “public key” suggests, all communication within the protocol is public. This makes it particularly suitable for key exchange in a wireless environment. We’ll see if there’s sufficient interest from the community.

Feng Hao,

EAP is used in wireless environments, but also in other things: it’s used to carry passwords in IKE (RFC 4306) as well as things like L2TP clients.

A good password method would be very useful for these things.

Well you would add the comment that iff the allowable passwords can be mapped onto the possible values of s such that at most one password maps to one value of s, only one password can be guessed per attempt.

Using a hash-derived value for s has the implication that the hash of the password becomes a valuable secret itself, which doesn’t matter within the context of J-PAKE only but may affect interactions with other systems that assume hashes can be public. An implementation would want to be careful in the way it salts the hash to avoid this. (Just thinking out loud here).

kme,

Thanks for the good comments. I agree with you. In the paper, we didn’t go down to the exact implementation detail of how to encode a password (normally 20-30 bits of entropy) into a number s (160 bits). As you correctly point out, some care needs to be taken if a hash is used. Still, I would prefer the simple concatenation for encoding (under the reasonable assumption that passwords are short strings).

The Java code for the J-PAKE demo is available (see the article). It would be helpful if someone could write a Javascript program for demonstrating the protocol execution more interactively. I’m not a Javascript guru.

This is really good information, since in my country Indonesia, we have only a little tutorial about cryptographic based security scheme.

I wrote about your protocol in my mother language: Password Authenticated Key Exchange. Thanks.

The Java demo code only gives an example of using 1024-bit p and 160-bit q for the key exchange – which gives 80-bit security. The parameters were taken from the DSA implementation in Sun’s Java package. However, Sun only specifies up to 1024-bit modulus.

For the prototyping purpose, that is fine. But for many real-world security applications, you may want to use at least 112-bit security.

To do that, simply replace the group parameters in the Java code with the ones published in the NIST DSA parameters (for 112-bit and 128-bit security). I’ve verified the values myself, and you’re encouraged to do the same before adding them to your code.
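If you do swap in different parameters, it is worth validating them programmatically before use. A sketch of my own of the basic checks (toy-sized numbers; trial division stands in for a proper primality test such as Miller-Rabin at real sizes):

```python
def is_prime(n: int) -> bool:
    # Trial division: fine for toy numbers; use Miller-Rabin at real sizes.
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def validate_group(p: int, q: int, g: int) -> bool:
    """Sanity-check Schnorr group parameters (p, q, g) before trusting them:
    p and q prime, q divides p - 1, and g generates the order-q subgroup."""
    return (is_prime(p) and is_prime(q)
            and (p - 1) % q == 0
            and 1 < g < p
            and pow(g, q, p) == 1)

# Toy example: the order-11 subgroup of Z_23^*
assert validate_group(23, 11, 4)
assert not validate_group(23, 11, 5)  # 5 does not lie in the order-11 subgroup
```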

As kme has pointed out, the problem is that nowadays password-based authentication systems do not have this kind of symmetric sharing of a secret; rather, one end knows the secret, and the other end knows just the value of a one-way function of that secret.

In such cases, your protocol suffers from the fact that the attacker doesn’t need to know the secret itself; knowing just the corresponding value is enough to authenticate.

There is something even worse: the attacker doesn’t need to know s; it is sufficient that he knows g^s, which may be revealed somehow by the context.

Hi Viet,

> the problem is that nowadays password-based authentication systems do not have this kind of symmetric sharing of a secret; rather, one end knows the secret, and the other end knows just the value of a one-way function of that secret.

What the paper presents is a generic solution based on symmetric sharing of the secret. You can easily extend it: let the server store H(server, password). In that case, the user enters the password and uses s = H(server, password) to perform J-PAKE. (Sometimes the hash also includes a salt.)

> In such cases, your protocol suffers from the fact that the attacker doesn’t need to know the secret itself; knowing just the corresponding value is enough to authenticate.

Yes, in the above case the attacker doesn’t need to know the password, but just needs to know H(server, password). But then what? The fundamental assumption in this research field is that the password is weak and subject to exhaustive search. If the attacker knows H(server, password), he can compute the password by exhaustive search.

> There is something even worse: the attacker doesn’t need to know s; it is sufficient that he knows g^s, which may be revealed somehow by the context.

I am not sure what the attack scenario described here is. But anyway, suppose the attacker knows g^s; then it’s a given in the analysis that he can compute s if s is low-entropy.

I think you’re probably entering the argument over whether augmented PAKE is better than balanced PAKE. If you think about it, you may realize that the benefits of the former are highly questionable (see the 3rd paragraph on p. 3 of the paper).

Of course, my main objection to augmented PAKE is that it claims “server compromise resistance” but in fact fails to deliver it. A false sense of security is worse than no security.

Does this do something that SRP doesn’t?

No, that’s the short answer.

All these techniques (SRP, EKE, SPEKE and J-PAKE) work toward the common goal, and differ only in how they achieve it. SRP and SPEKE mandate the use of safe primes (for security reasons). EKE and J-PAKE don’t have this limitation (nor does the original Diffie-Hellman protocol). EKE has a password-information leakage issue, due to the fact that the password is too short to be used securely as a symmetric key.

Regarding comments 20-28.

Based on my testing (over a small 16-bit group), if you allow (x1 + x3 = 0) then K = 1 is twice as likely as any other result for K. In order to get equal distribution of K over the group G, you need to make sure (x1 + x3 != 0).

This is not mentioned in the spec and should perhaps be added, for the sake of completeness. For example, the statement in Theorem 4, “the obtained session key is completely different from keys derived in other sessions”, is not entirely true if we allow (x1 + x3 = 0), since K = 1 is twice as likely as any other result. So session keys are not “completely different”; they are mostly different, with a bias towards K = 1.

In effect, allowing (x1 + x3 = 0) is the same as allowing either x2 or x4 to be 0.

Also note that completely random values for x1 and x3 can result in a sum of 0.
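Over a toy group this is easy to check exhaustively. The sketch below (my own; it assumes the session key K = g^{(x1+x3)·x2·x4·s} from the paper, with x2, x4 != 0 enforced) confirms that K = 1 arises exactly when x1 + x3 ≡ 0 (mod q):

```python
p, q, g, s = 23, 11, 4, 5  # toy order-11 subgroup of Z_23^*; toy secret s != 0

def session_key(x1, x2, x3, x4):
    # Both sides end up with K = g^{(x1+x3) * x2 * x4 * s} after round 2
    return pow(g, ((x1 + x3) * x2 * x4 * s) % q, p)

# Exhaustive check: with x2, x4 != 0, K == 1 exactly when x1 + x3 ≡ 0 (mod q)
for x1 in range(q):
    for x3 in range(q):
        for x2 in range(1, q):
            for x4 in range(1, q):
                K = session_key(x1, x2, x3, x4)
                assert (K == 1) == ((x1 + x3) % q == 0)
```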

Arinaya,

Thanks for your very good comment. There is some subtle interplay between theory and practice here.

Strictly speaking, you’re right: x1+x3=0 is a special case (and the only case) that results in K=1, while in other cases the K values are randomly distributed in G. Hence, it makes sense to exclude the case of x1+x3=0, and it is cheap to do so.

However, in practice, this is not essential, because the chance of x1+x3=0 is so small as to be negligible. (Note that x1+x3=0 happens purely by chance, since the ZKPs require that Alice and Bob actually know the values of x1 and x3 respectively.)

So, there is no practical security reason to add a check that x1+x3 != 0. But for precise provability in the theoretical sense, it’s worth stating x1+x3 != 0 (even though the probability of that event is exceedingly small). That’s interesting. We’re writing a journal version of the paper and also an RFC; I’ll add a note on this.

I agree with you (that with a large enough value for q the effects are negligible). But by following your reasoning, there is also no practical reason to check for x2, x4 = 0 or (x1+x3+x4), (x1+x2+x3) = 0, since these are no more likely than (x1+x3) = 0.

While this may or may not be true (perhaps it is best left up to the implementation), it is worth noting all 5 constraints in the spec: x2, x4 != 0 (easily constrained during key generation), and x1+x3, x1+x3+x4, x1+x2+x3 != 0 (easily checked by Bob before sending his round 1 keys to Alice). If these constraints are enforced in round one, we can assert (theoretically) random distribution of A, B and K over the group G. If we relax the constraints, then the distribution is skewed (in inverse proportion to the size of q) towards A, B, K = 1.

Looking forward to the revised documents. I may have an online demo soon (done in C#/Silverlight).

On a more practical note, I was wondering your thoughts on the following method of creating round one knowledge proofs.

I haven’t yet seen whether this introduces any opening for attack, although it does reduce the round one message size by almost 50% and eliminates two exponentiations on each side (since g^vᵢ is not used).

r1 = x2 - (x1 * H(g, g^x2, g^x1, id))

r2 = x1 - (x2 * H(g, g^x1, g^x2, id)).

Then Bob checks that g^r1 * (g^x1)^H(g, g^x2, g^x1, id) == g^x2, and vice versa.

I might be missing something obvious, but this doesn’t seem like it will leak anything about x1 or x2.

I can see the appeal, in theory, of using a new key for each knowledge proof (it makes for a cleaner model), but I haven’t been able to think of a good practical argument against doing the round one KPs as described above.

Hi Arinaya,

> But by following your reasoning, there is also no practical reason to check for x2, x4 = 0 or (x1+x3+x4), (x1+x2+x3) = 0, since these are no more likely than (x1+x3) = 0.

The case of x2, x4 = 0 is different: an attacker (say Bob) can easily fix x4 = 0. Hence, the protocol explicitly states that x2, x4 != 0. The conditions (x1+x3+x4) != 0 and (x1+x2+x3) != 0 are implicitly guaranteed by probability (see the last paragraph of Section 2.2), as is (x1+x3) != 0.

> While this may or may not be true (perhaps it is best left up to the implementation), it is worth noting all 5 constraints in the spec.

Yes, I agree. Whether the assurance is implicit or explicit, it’s worth noting these constraints explicitly in the spec. They are required by the proofs to assert the precise theoretical properties.

> On a more practical note, I was wondering your thoughts on the following method of creating round one knowledge proofs.

I’m afraid your method is not secure. It introduces correlations between the two ZKPs: one just needs to solve a system of two equations in the two unknowns x1 and x2.
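Concretely (a sketch of my own, using a toy prime q in place of the real group order): from the two published responses and the two public hash values, an eavesdropper recovers both x1 and x2 by solving the 2x2 linear system over Z_q:

```python
import secrets

q = 2**127 - 1  # toy prime group order (a real q would be chosen per the spec)

# Alice's two round-one private keys
x1 = secrets.randbelow(q - 1) + 1
x2 = secrets.randbelow(q - 1) + 1
# The two Fiat-Shamir challenges; both are public to an eavesdropper
h1 = secrets.randbelow(q)  # stands for H(g, g^x2, g^x1, id)
h2 = secrets.randbelow(q)  # stands for H(g, g^x1, g^x2, id)

# Published responses under the proposed scheme
r1 = (x2 - x1 * h1) % q
r2 = (x1 - x2 * h2) % q

# Solve the linear system over Z_q:
#   x2 - h1*x1 = r1
#   x1 - h2*x2 = r2
det = (1 - h1 * h2) % q            # invertible except with probability ~1/q
x2_rec = (r1 + h1 * r2) * pow(det, -1, q) % q
x1_rec = (r2 + h2 * x2_rec) % q

assert (x1_rec, x2_rec) == (x1, x2)  # both private keys recovered
```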

Thanks for your comments. I understand now how x2, x4 are different from the other 3 cases, and I’ll stick to using dedicated keys for the ZKPs.

One thing I am not clear on is: what do the ZKPs offer that the simple check (g^x)^q % p == 1 does not? The latter is all that is needed to satisfy Anderson’s sixth principle, since it validates that the message is in the expected format (i.e. we are expecting to receive a random element of the group G).

I understand that the security proofs in their current form depend on the ZKPs, but I don’t know that this is necessary. For example, in Lemma 1, Bob doesn’t gain anything by NOT knowing x3 and x4, as long as we can prove that g^x3 and g^x4 belong to the group G.

> One thing I am not clear on is: what do the ZKPs offer that the simple check (g^x)^q % p == 1 does not?

That’s a good question. I explained it in the last paragraph of Section 4 of the following paper:

http://eprint.iacr.org/2010/136

Briefly, the simple prime-order check of g^x can’t detect correlations between this term and others. In the example you gave, what if Bob sends g^x3 and g^x4 such that g^(x3+x4) = g^{-x1}? The attacker would force the session key K = 1.

Researchers (practical and theoretical) have tried all sorts of clever tricks to avoid ZKPs in PK-AKE and PAKE protocols, relying on either heuristics or formal analysis. But I’ve seen too many simple attacks and mistakes.

So, am I absolutely sure the ZKPs are indispensable? I can’t say. But if you do it properly, the cost of using ZKPs is actually comparable to schemes without them, as I’ve shown in both the J-PAKE and YAK protocols.

That’s an interesting case, but doesn’t it require that an attacker calculate the DL of g^x1?

And in that case (or if an attacker gets x1 by some other means), couldn’t he just send the fabricated g^x3, g^x4, along with valid knowledge proofs?

I don’t think the KPs can check against correlation any more than the prime-order check; they just tell us with fairly good certainty that the sender knows his own private key. They don’t guarantee that the sender is honest.

Perhaps another approach is simply to reject K=1, starting the protocol over in the (1/q) chance that this happens. So one in every 10^50 or so key agreements would take twice as long as normal. This seems like a better trade-off than having EVERY key agreement take twice as long.

I agree that conservative design is good in principle, but I have a hard time justifying the >50% cost of performing the knowledge proofs. Anyway, I have implemented it both with and without ZKPs and will continue testing.

> That’s an interesting case, but doesn’t it require that an attacker calculate the DL of g^x1?

Imagine the attacker (Bob) sends g^0 and g^{-x1}. The attacker doesn’t need to know the DL of g^{x1}.

> Perhaps another approach is simply to reject K=1, starting the protocol over in the (1/q) chance that this happens.

No, no, I used K=1 only as one example; K can in fact be an arbitrary value. For example, the attacker (Bob) can send g^1 and g^{-x1}. Then in round two, Alice will send g^{1·x2·s} = g^{x2·s}, which allows the attacker to find the secret s by exhaustive search, since s is a low-entropy value and g^{x2} is already public from round one.

If you’re really keen to improve the efficiency of the implementation, there are safe ways to do so; for example, see the last paragraph of Section VI (p. 9) in the above eprint paper for a hint. But not by removing the ZKPs …
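To make the attack above concrete, here is a toy simulation of my own (tiny group for illustration; in the real protocol the round-one ZKPs block this, because Bob cannot produce a valid knowledge proof for x4 = -x1):

```python
import secrets

p, q, g = 23, 11, 4             # toy order-11 subgroup; real groups are far larger
dictionary = list(range(1, q))  # stand-in for the small password space
s = secrets.choice(dictionary)  # Alice's low-entropy secret

# Round 1: Alice sends g^x1 and g^x2 (with ZKPs in the real protocol)
x1 = secrets.randbelow(q - 1) + 1
x2 = secrets.randbelow(q - 1) + 1
g_x1, g_x2 = pow(g, x1, p), pow(g, x2, p)

# Malicious Bob sends "g^x3" = g^1 and "g^x4" = (g^x1)^-1, i.e. x4 = -x1,
# for which he could never produce a valid knowledge proof.
g_x3 = g
g_x4 = pow(g_x1, -1, p)

# Round 2: Alice computes A = (g^x1 * g^x3 * g^x4)^(x2*s) = g^(x2*s)
A = pow(g_x1 * g_x3 * g_x4 % p, (x2 * s) % q, p)

# Offline search: Bob tests dictionary candidates against the public g^x2 and A
recovered = next(c for c in dictionary if pow(g_x2, c, p) == A)
assert recovered == s  # the low-entropy secret falls to exhaustive search
```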

> No, no, I used K=1 only as one example; K can in fact be an arbitrary value.

I realized this a while after posting; an attacker could pick any value of K and target that just as easily as K=1.

Thanks for explaining. I can be a little dense. The need for KPs is not explained very fully in the spec, but I think I see all the angles now.

I don’t think performance is a huge issue here. Most people are used to waiting a few seconds when they sign in. I’m getting just over 1s for a 2048/224-bit group (processing only, no network latency).

I will take a look at the speedup you mentioned. So basically this involves caching the results of the squarings on the first call to ModPow for a given base.
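Something like this, I assume (my own sketch with a toy modulus; the table caches g^(2^i) so later exponentiations with the same base skip the squarings):

```python
p = 23  # toy modulus; the same idea applies unchanged to 2048-bit groups

class FixedBase:
    """Precompute g^(2^i) mod p once, so every later exponentiation with
    this base is just multiplications against the cached table."""
    def __init__(self, g: int, p: int, bits: int):
        self.p = p
        self.pows = []  # pows[i] = g^(2^i) mod p
        v = g % p
        for _ in range(bits):
            self.pows.append(v)
            v = v * v % p

    def pow(self, e: int) -> int:
        assert e.bit_length() <= len(self.pows), "exponent exceeds cached table"
        result = 1
        i = 0
        while e:
            if e & 1:
                result = result * self.pows[i] % self.p
            e >>= 1
            i += 1
        return result

fb = FixedBase(4, p, 16)
assert fb.pow(11) == pow(4, 11, p) == 1  # 4 has order 11 mod 23
```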

Today, Mohsen Toorani uploaded a paper onto the IACR eprint to claim several attacks on the J-PAKE protocol. The paper can be found here: http://eprint.iacr.org/2012/021

As I started to read it, I immediately found that the author got the J-PAKE description wrong. The generator in the second-stage ZKP is not g: for Alice, it should be g^{x1+x3+x4}, and for Bob, it’s g^{x1+x2+x3}. In fact, this had been asked about and clarified four years ago.

I’ve sent an email to inform the author.

Toorani revised his paper on eprint today to correct the mistake that I informed him of. In the latest version (2012-01-19) of the paper, he still claims J-PAKE is vulnerable to a password compromise impersonation attack and a replay attack, and suffers from some further defects. I shall leave it to readers to read the paper and judge for themselves.

[ed. removed at comment author’s request, 2012-01-21]

> Regarding “Password Compromise Impersonation attack”, I could find 3740 hits on the Google!

I’m not concerned with how many hits there are; I’m only concerned with whether it makes sense, and it doesn’t. If an attacker has compromised Alice’s password, of course he can impersonate Bob, since the authentication is based on knowledge of the password. Your “attack” trivially breaks all PAKE schemes, including EKE and SPEKE.

> Regarding the replay attack, I think it is obvious while it could be easily prevented!

The problem here is not that it’s “obvious”, but that it trivially applies to other PAKE schemes, such as EKE and SPEKE.

For completeness of the discussion and for the benefit of your revision, let me comment on other parts of your paper (the version before your withdrawal).

– “Resilience to ephemeral key compromise impersonation attack”: This attack doesn’t make sense in the PAKE context, because if the (strong) ephemeral key is compromised, then the (weak) password secret would be at risk of being brute-forced. You have mixed it up with attacks on PKI-based key exchange.

– “It is better to exclude zero … Specifically, for x1 = x3 = 0, we have K = 1”: From the protocol’s perspective, such a check is unnecessary for two reasons: 1) the probability of x1 = x3 = 0 is negligible; 2) if you had to do that, then you should also exclude x1 = -1, x3 = 1 and so on, because those give K=1 as well. I hope you see my reasoning here: any small change to a protocol is never small.

– You complained that we didn’t consider the generation of random numbers when comparing with EKE and SPEKE. That cost is negligible compared with the exponentiations, and it only makes sense to compare the predominant cost items. This is standard practice in CS and engineering.

[ed. removed at comment author’s request, 2012-01-21]

[ed. removed at comment author’s request, 2012-01-21]

Mohsen,

> Regarding the “replay attack”, it has been proved in my paper that the J-PAKE protocol is vulnerable to this attack

For any key exchange protocol, there is always key confirmation, either implicit or explicit. Explicit key confirmation requires additional rounds. Doesn’t it ever occur to you that if an attack is too trivial, it might be wrong?

> FYI, we will not have x1=-1.

I had thought you knew it’s x1 = -1 mod q?

[ed. removed at comment author’s request, 2012-01-21]

Have you considered contributing a java implementation to the bouncycastle java library?

http://www.bouncycastle.org/

Under what license are you distributing your java demo?

Hi Phil,

I’d be happy to contribute. However, I don’t know how to contact the Bouncy Castle developers. Do you know if they would be interested?

My Java demo code is for illustration and is totally free. Anyone can freely modify it to suit their needs. They may want to acknowledge my name, but that’s not obligatory. I’m happy enough if someone finds it useful.

I think they would be interested.

They have the SRP6a primitives available (http://www.bouncycastle.org/viewcvs/viewcvs.cgi/java/crypto/src/org/bouncycastle/crypto/agreement/srp/)

I think a similar contribution of the J-PAKE primitives would be appropriate.

If I get some time, I’ll look into making a contribution. No promises though.

Cool, thanks for the link. I had a look at the existing implementation of SRP6a. I believe the J-PAKE code can be a lot simpler – e.g., there’ll be no client/server. Send me an email in private if you need my input, say for reviewing code or something.

Hi,

I have a question related to the authentication part of the protocol. If I understood correctly, the two parties involved in the communication need to share a common password. Based on this password, a common key is derived, which helps the parties authenticate each other.

Now let’s take a specific setup: say a server to which multiple clients can connect. How would the server be able to differentiate and authenticate each client using this protocol? I mean, is this possible? Of course, each client should have a unique ID and password. Say the server has a database of each password; but how would it know which password to use when communicating with a specific client? Do you have any idea if this is possible, or can you think of any solution for such a case?

Thanks in advance

Hi Carmen,

Yes, of course it’s possible. The server differentiates clients based on their SignerIDs, which uniquely identify each client. Does this answer your question?

Is there any specific reason why the Schnorr signature is used as the non-interactive ZKP? Why not ECDSA?

Hi Vania,

That’s a good question. The key requirement in the J-PAKE protocol design is to ensure the proper check of the Proof-of-Possession (PoP) of the sender’s ephemeral private key (based on Anderson-Needham’s 6th principle). The J-PAKE security proofs hold as long as the PoP is realized securely.

Using the Schnorr NIZK proof is just one of several ways to realize the PoP. You could also use ECDSA. Personally, I prefer the Schnorr NIZK proof, as it has easy-to-understand and widely accepted security proofs; equivalent proofs seem to be lacking for DSA/ECDSA.
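For illustration, here is a minimal sketch of the Schnorr NIZK proof of possession (my own toy parameters and a simplified hash-input encoding, not the exact format used in the J-PAKE code):

```python
import hashlib
import secrets

p, q, g = 23, 11, 4   # toy order-11 subgroup of Z_23^*; illustration only

def H(*vals) -> int:
    # Fiat-Shamir challenge: hash the proof context into Z_q
    data = b"|".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def schnorr_prove(x: int, signer_id: str):
    """Non-interactive proof of knowledge of x for X = g^x."""
    X = pow(g, x, p)
    v = secrets.randbelow(q - 1) + 1   # fresh ephemeral secret
    V = pow(g, v, p)
    h = H(g, V, X, signer_id)          # challenge binds the signer's identity
    r = (v - x * h) % q
    return X, V, r

def schnorr_verify(X: int, V: int, r: int, signer_id: str) -> bool:
    # Accept iff g^r * X^h == V, i.e. the prover knew x with V = g^v
    h = H(g, V, X, signer_id)
    return V == pow(g, r, p) * pow(X, h, p) % p

X, V, r = schnorr_prove(7, "alice")
assert schnorr_verify(X, V, r, "alice")
assert not schnorr_verify(X, V, (r + 1) % q, "alice")  # tampered response fails
```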

Hello Feng, I was about to implement SPEKE and I came across your (excellent) work. I am curious to know if you have come across, or have, any implementations of SPEKE (in either Python or Java) that you can share. I have not seen any examples of SPEKE implementations. I am interested in comparing J-PAKE and SPEKE implementations. I am a novice.

Thanks much!

In a paper entitled “Security Analysis of J-PAKE” by Mohsen Toorani published by IEEE ISCC’14, the author claims J-PAKE has no security proofs and is vulnerable to replay, unknown key-share, password compromise impersonation and key-replication attacks. I have the following comments.

On “Security problems in J-PAKE” in Section 4

4.1: Replay attack

I think the author got step 4) wrong. The verification of the replayed ZKP in the second round won’t go through, as the base generator is different from the one used in the replayed session.

4.2: Unknown key-share attack

The author argues that J-PAKE is potentially susceptible to the UKS attack “as there is no specific binding between identifiers of participants and authentication credentials, and the session key derivation function”, and then says the protocol is actually not vulnerable because the Schnorr ZKP includes the identities (as specified in the original J-PAKE paper). I don’t know what the author’s point is.

4.3: Password Compromise Impersonation attack

I don’t think the PCI attack is anything meaningful in the context of PAKE, as it trivially applies to all PAKE protocols.

The author justifies the need for resistance against PCI in PAKE by using the protocol in [20] as an example, saying that protocol was proved secure in the CK2001 formal model, but then states that the proof was disputed as incorrect in [22]. Again, I don’t know what the author’s point is.

4.4 Key-replication attack

The author claims a man-in-the-middle attack can force two sessions to have the same session key. However, no details are given on how the attack would actually work in the real protocol context (as opposed to an artificially simulated model).

4.5 Further defects

The author claims J-PAKE has further defects: needing too many exponentiations, being subject to DoS attacks, being a balanced PAKE, and needing a good random number generator for producing ephemeral private keys. I don’t feel I need to respond to any of these claims.

Non-repudiation is one of the most desirable properties of digital signatures. It is an essential property in most financial applications, so that a party cannot deny taking part in the process. What is the equivalent in J-PAKE?

Anitha, by “non-repudiation”, do you mean a party should not be able to deny his involvement in the key exchange process? In J-PAKE, he can always deny it, by claiming that someone else knows the shared secret password (or that the other communicating party simulated the key exchange transcript).

Yes, so the non-repudiation property is not satisfied in J-PAKE, whereas in a digital signature non-repudiation is provided via the private key.