This repository contains the xml source files for the IETF Internet Drafts I am working on. I gladly accept pull requests.
This is just a trivial consistency issue. In the derivation of the secret scalar w, we use "SPAKEsecret", while in the derivation of K'[n], we use "SPAKEKey". We should instead use "SPAKEkey", so that we are consistently not capitalizing any letters after SPAKE.
Our K'[n] derivation uses the concatenation of the following as the PRF+ input string:
We are not including the length of each field, instead relying on the fact that each field has either a fixed length or a self-describing length to prevent input string collisions (cases where the PRF+ input string is the same even though the tuple of input parameters is different).
The serialized K value and the transcript checksum are only fixed-length once the group number and initial reply key enctype are determined. By including those parameters as the second and third elements, we can guarantee that each tuple encodes to a different PRF+ input string.
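The construction above can be sketched as follows. This is a hypothetical illustration, not the draft's normative encoding: the pepper string and the 4-byte big-endian integer encodings are assumptions, and the field order is inferred from the text (pepper first, then the fixed-length group number and enctype, which pin down the lengths of the variable fields that follow).

```python
import struct

def prfplus_input(group, etype, ser_k, checksum):
    # Hypothetical sketch of the K'[n] PRF+ input string.  Because the
    # group number and initial reply key enctype come second and third,
    # the lengths of the serialized K value and transcript checksum are
    # determined before they are read, so distinct parameter tuples
    # cannot encode to the same input string.
    return (b"SPAKEkey"                  # fixed pepper (assumed label)
            + struct.pack(">i", group)   # group number, fixed 4 bytes
            + struct.pack(">i", etype)   # initial reply key enctype
            + ser_k                      # fixed length once group is known
            + checksum)                  # fixed length once enctype is known
```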
Without FAST, PA-DATA messages are not bound to KDC-REQ-BODY fields, and can be replayed with different KDC-REQ-BODY values. Encrypted cookies only help so much, since cookies can themselves be replayed (though probably only within a limited time window, if they contain a timestamp). PKINIT includes a checksum of the KDC-REQ-BODY in the PKAuthenticator to prevent this kind of attack.
Nathaniel's opening position was that this binding should be accomplished by the cookie. I assume he means that a cookie should contain a checksum of the KDC-REQ-BODY, and the KDC will reject a cookie which comes with a different body. I don't believe that RFC 6113 requires that body fields not change over the course of a preauth exchange, and recent versions of MIT krb5 clients do sometimes change some of the body fields--in particular, they may change the requested end time and other timestamps after learning the KDC time.
The reply key derivation in the current draft includes the client and server principal names, which are the two most important fields. But the body fields could also change in a subsequent second-factor round.
Nico's position, last I talked to him, is that the KDC should remember (in the cookie) the body that accompanied the SPAKEResponse message and ignore the body of subsequent hops. That's a workable position, but I think we still need to bind the full KDC-REQ-BODY into the reply key derivation. If we include the KDC-REQ-BODY encoding in the PRF+ input for key derivation, then we don't need to separately include the client and server principals, which will save on specification verbiage.
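The two approaches under discussion can be sketched side by side. Both functions are illustrative stand-ins, not the draft's mechanism: SHA-256 is assumed here only as a placeholder for whatever checksum a cookie would carry.

```python
import hashlib

def bind_body_into_prf_input(prf_input, kdc_req_body_der):
    # Binding approach: append the full KDC-REQ-BODY encoding to the PRF+
    # input for reply key derivation.  DER is self-describing, so no length
    # prefix is needed, and the client and server principal names no longer
    # have to be bound in separately.
    return prf_input + kdc_req_body_der

def cookie_matches_body(cookie_body_digest, kdc_req_body_der):
    # Cookie approach: the (encrypted) cookie carries a checksum of the
    # KDC-REQ-BODY and the KDC rejects a cookie presented with a different
    # body.  SHA-256 stands in for the actual checksum type.
    return cookie_body_digest == hashlib.sha256(kdc_req_body_der).digest()
```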
In the optimizations section, the second optimization is described as follows:
Second, clients MAY skip the first pass and send an AS-REQ with a
PA-SPAKE PA-DATA element using the support choice. KDCs MUST support
this optimization.
In this case, the client is able to do optimistic preauth without knowing the initial reply key because SPAKESupport does not require that knowledge. But the client will need to know the initial reply key in the next step. Therefore, the KDC must send an ETYPE-INFO2 padata element with its KDC_ERR_MORE_PREAUTH_DATA_REQUIRED message, which is not an obvious product of the KDC implementation. We need text specifying this requirement. (It is already implemented in MIT krb5 1.14.)
Issues with the current key derivation section:
We typically use explicit tags in Kerberos specs using ASN.1 (RFC 6560 aside). The current draft doesn't use tags at all. We have agreement to add explicit tags to the spec; we just need to do it.
We also need a complete ASN.1 module near the end of the draft, which should compile with asn1c.
We are deriving keys using PRF+(key, string) where key is the original reply key and string contains a high-entropy secret (K) and some values we want to bind in. I proposed this method because it relies only on RFC 3961 primitives and requires no additional algorithm negotiation, but we may need some list discussion to verify that it is safe.
Formally, a pseudo-random function is a function chosen (based on the key) from a family, such that an attacker cannot easily distinguish between the function and a random oracle without brute-forcing the key.
In this case, we are using the PRF with a low-entropy key, but feeding it a secret input and not directly revealing the output. In practice, the PRF is going to either hash the input string down to the cipher block size and block-encrypt it with a derivative of the key, or compute HMAC(Kp, S) for some derivative Kp of the key. I believe that either of these should be safe with a low-entropy key and a high-entropy input, but formally the security of those functions is posited in terms of a high-entropy key and a low-entropy input.
If we decide we have to align ourselves with the formal promises of the primitives, we will need to convert the high-entropy input into a key and then combine it with the low-entropy original reply key, probably using KRB-FX-CF2(). Unfortunately, the only way to do this is using an unkeyed hash function, which would require us to do alg agility on the hash function. (We could then use the same unkeyed hash function for the transcript checksum, but that's not really important.)
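For reference, the PRF+ construction in question (RFC 6113 style: a one-octet counter prepended to the input, iterated and truncated) looks roughly like this. HMAC-SHA256 stands in for the enctype's RFC 3961 PRF; a real implementation would dispatch to the enctype's actual PRF.

```python
import hashlib
import hmac

def prf_plus(key, s, nbytes):
    # RFC 6113-style PRF+ sketch: concatenate PRF(key, i || s) for a
    # one-octet counter i = 1, 2, ..., then truncate to nbytes.
    # HMAC-SHA256 is a stand-in for the enctype's RFC 3961 PRF.
    out = b""
    i = 1
    while len(out) < nbytes:
        out += hmac.new(key, bytes([i]) + s, hashlib.sha256).digest()
        i += 1
    return out[:nbytes]
```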
RFC 6113 sets some matter-of-form requirements for new preauth mechs, partly in an attempt to formalize the interactions between multiple mechs. Although SPAKE preauth is not a FAST factor, it should still conform to these requirements.
I need to do a read over 6113 to re-familiarize myself with what these requirements are and make a pull request with appropriate changes.
We may also want to talk about the "reply key" instead of the "client long-term key," and might want to try to use the RFC 6113 "strengthening reply key facility" (section 3.2) instead of replacing the reply key, to avoid ruling out combinations of SPAKE preauth with other mechanisms.
Kerberos implementations have a long history of getting cryptographic operations wrong when there aren't test vectors. We will want vectors for:
I will try to produce test vectors using a dummy Python implementation, and then we can verify them in the production C implementation for MIT krb5.
Currently there is no simple way to distinguish between, say, a SPAKESupport and SPAKEResponse message in a client's padata, or between a SPAKEChallenge and EncryptedData message in KDC padata.
There are several options:
Option 1 has a certain elegance, but is unfriendly to tools like Wireshark. Voice discussion converged on option 3 unless implementation issues make it too unwieldy.
The SPAKE2 algorithm requires a secret integer w. I don't think the current draft specifies how w is produced from the client long-term key. Using the bits of the client long-term key directly is not a good idea; for starters, there may not be the right number of them.
I believe that we should use a PRF+ invocation to derive w from the long-term key. (Nico also thought this was reasonable.) I'm not sure if there should be any additional transformations and whether that should depend on the curve. For example, Curve25519 implementations generally say to transform a random 32-byte value with v[0] &= 248; v[31] &= 127; v[31] |= 64; to produce a scalar. Watson's SPAKE2 draft doesn't provide much guidance. I need to do more research here.
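The Curve25519 transformation quoted above is just bit-masking on a 32-byte string (e.g. PRF+ output):

```python
def clamp25519(v):
    # Curve25519 scalar clamping as quoted above: clear the low 3 bits
    # (cofactor), clear the top bit, and set bit 254.  Whether SPAKE2's w
    # should receive this exact treatment is the open research question.
    v = bytearray(v)
    v[0] &= 248
    v[31] &= 127
    v[31] |= 64
    return bytes(v)
```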
While working on test vectors I noticed that the w value we produce from the long-term key will be similar for different groups, except in length. We generally try to avoid directly using the same keying material in multiple algorithms, so we should do something about that.
Once we have converted from OIDs to numbers for the group identifier, we should append the group number to the PRF+ pepper (currently just "SPAKEsecret").
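A group-specific pepper might look like the following sketch, assuming the appended identifier is the numeric group number as a 4-byte big-endian integer (the exact encoding is an assumption):

```python
import struct

def w_pepper(group):
    # Sketch: make the w-derivation pepper group-specific by appending the
    # group number, so different groups never feed PRF+ identical inputs
    # and thus never share w keying material (beyond length truncation).
    return b"SPAKEsecret" + struct.pack(">i", group)
```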
Right now we use the client key to produce the w value, to key the transcript checksum, and to key the PRF+ for encryption key derivation. We derive encryption keys of the same type as the client key.
So, while the PAKE overcomes the poor distribution of the password-derived key material, our transcript checksum, our EncryptedData keys, and our reply key all share the algorithmic weaknesses of the client key encryption type. If it's DES, we generate 56-bit keys. If it's DES3, we use 64-bit ciphertext blocks. If it's RC4, we get statistical biases in the ciphertext.
At the cost of a little complexity, we can sweep these problems away by upgrading any key of an enctype directly specified in RFC 3961 to aes128-cts. Just PRF+ 128 bits from the reply key, random-to-key() that to an aes128-cts protocol key, and use that instead of the reply key for the transcript checksum and encryption key derivation. (It doesn't matter whether or not we use it for the SPAKE w parameter.)
Of course we would have to add RFC 3962 support to the prereqs.
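The upgrade step is small; a sketch under stated assumptions follows. HMAC-SHA256 with a one-octet counter stands in for the reply key enctype's RFC 3961 PRF+, and the pepper string is hypothetical. For aes128-cts, random-to-key() is the identity function, so the first 128 bits of PRF+ output are the protocol key directly.

```python
import hashlib
import hmac

def upgrade_reply_key(reply_key_bytes):
    # Sketch: PRF+ 128 bits from the (possibly weak-enctype) reply key and
    # random-to-key() the result into an aes128-cts protocol key.  That key
    # then replaces the reply key for the transcript checksum and for
    # encryption key derivation.  "SPAKEupgrade" is a made-up label.
    prf_out = hmac.new(reply_key_bytes, b"\x01" + b"SPAKEupgrade",
                       hashlib.sha256).digest()
    return prf_out[:16]  # 128 bits; random-to-key is identity for AES
```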
The current draft allows curves of 128-bit strength (such as P-256) to be used with 256-bit keys. Implementations might choose to allow this for efficiency, but need to know what the consequences are on the strength of the exchange.
I believe an attacker who can solve ECDLP after the fact would best proceed as follows: first compute the discrete logs of M and N for the curve: m=M/G and n=N/G. Then compute t=T/G=x+wm and s=S/G=y+wn. I don't believe that knowing x+wm and y+wn is sufficient to trivially recover w or compute the shared PAKE key, but it is clearly enough to launch an offline attack against w. We should discuss this in the security considerations.
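In symbols, writing lowercase letters for discrete logs base G, the attacker who can solve ECDLP learns:

```latex
\begin{align*}
  m &= \log_G M, \qquad n = \log_G N, \\
  t &= \log_G T = x + w m, \\
  s &= \log_G S = y + w n.
\end{align*}
```

Two equations in the three unknowns x, y, w do not pin down w directly, but each candidate w' yields candidate values x' = t - w'm and y' = s - w'n, and hence a candidate PAKE key to test against the observed transcript, which is exactly the offline attack described above.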
Nico also wants us to note that the use of salting and PBKDF2 in the process of deriving w from a password should help to frustrate offline dictionary attacks to a degree. Of course that's nothing new; it applies to encrypted timestamp as well. But we can make a quick note about it.
(Personally, I'm of the opinion that barring quantum computing, 128-bit work factors will remain prohibitive forever, and that we should only use 256-bit primitives when birthday attacks are possible. Since the EC work factor effectively has birthday attacks built in, a curve like P-256 or Curve25519 should be suitable for any use. But not everyone shares my opinion.)
https://tools.ietf.org/html/draft-iab-crypto-alg-agility is, of course, only a draft and not yet a BCP, but it likely reflects the current thinking of the IAB with regard to algorithm agility. We should follow its recommendations where we can.
The main thing I see so far is that section 2.2 requires us to specify mandatory-to-implement algorithms. Right now we only have recommended groups.
Mandatory-to-implement algorithms are not mandatory to enable or use and can change over time, so we won't want to make these implicit in SPAKESupport or anything; this is purely a matter of wording.
We might also want to think about dropping P-384 from the mandatory/recommended set, just because OpenSSL doesn't seem to provide an optimized side-channel-resistant implementation of it. Also, although the 192-bit security level is interesting to Suite B consumers, it doesn't correspond to the brute force resistance of either 128-bit or 256-bit symmetric keys.
When an attacker sees a public value like T=X+wM, it doesn't directly enable an offline guessing attack because the attacker doesn't know X. However, if an attacker knows X from a previous exchange with a different initial reply key (by calculating T-w'M), an offline guessing attack becomes easy. Therefore, reusing X across multiple initial reply keys is not secure.
A KDC could reuse an X value for the same initial reply key; in that case the KDC will send the same public key, providing no new information about w. A KDC could even store x and T values in the database, at a cost to forward secrecy (which isn't really a goal).
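A toy model makes the X-reuse attack concrete. Integers modulo a prime stand in for curve points (with G = 1, "T = X + wM" becomes plain modular arithmetic); all the numbers are made up for illustration only.

```python
# Toy model of the attack: reusing X across different initial reply keys.
P = 2**255 - 19          # stand-in for the group order
M = 9                    # stand-in for the fixed point M

def public_value(x, w):
    # T = X + wM in the toy additive group.
    return (x + w * M) % P

# Exchange 1: the attacker later learns w1 (say, by cracking that user's
# password) and recovers the reused secret X as T - w1*M.
x, w1 = 1234567, 42
t1 = public_value(x, w1)
recovered_x = (t1 - w1 * M) % P

# Exchange 2 reuses x with a different initial reply key, hence a
# different w2.  The attacker can now test guesses offline: a guess w is
# correct exactly when T2 - w*M equals the recovered X.
w2 = 77
t2 = public_value(x, w2)
guesses = [5, 77, 99]
cracked = [w for w in guesses if (t2 - w * M) % P == recovered_x]
```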
I will submit the necessary changes to the document to take this into account.