No symmetric cipher (or mode) can be used with the same key indefinitely. TLS has a natural limit here of 2^64, the size of the sequence number space (which is 2^48 for DTLS), but it turns out that most ciphers degrade badly before that point. There is recent research on the limits for AES-GCM and ChaCha20Poly1305 (see http://www.isg.rhul.ac.uk/~kp/TLS-AEbounds.pdf). We don't have similar numbers for other ciphers, but there are some numbers in RC4 cryptanalysis papers (hint: these are not good), and we should try to find conservative numbers for other modes.

Created attachment 8746922 [details] [diff] [review] bug1268745.patch

Here's my first cut at this.

# Overview

The basic idea here is that NSS will abort when hitting a limit on either records sent or records received. I have not implemented key refresh or automatic renegotiation; these limits are moderately large for most cipher suites, and the risks and complexity of renegotiation are such that it is best to simply let the connection break. Any software that depends on high-bandwidth, long-lived connections will find that connections break occasionally. This is a condition that applications need to handle anyway, so the expectation is that this won't be too disruptive.

# Record Limits

I've decided to limit records rather than bytes, since that is most conservative (and we already maintain counters for records). Overall, I've tried to be very conservative. Based on the cited paper, I am aiming at a 2^-40 attack probability for those cipher suites. That leaves us with:

* AES-GCM at 2^34.5 records, which I round down to 0x5a<<28
* ChaCha20Poly1305 at 2^48-1 records (this would be 2^53 based on the paper, but I want to stop overflow in DTLS, and 2^48 is a very large number anyway)

(If we accepted a lesser margin, say 2^-32, we could increase the limit by a factor of sqrt(2) for each factor of two of safety given up, but these seem like reasonable limits for the moment.)

For the other cipher suites, we have less good information available. I've set the AES-CBC limits to the same as the GCM number, which likely isn't correct, but I believe it to be conservative enough. If I find some credible numbers, then these can be tweaked. I don't believe that key size is a significant factor, so 256 and 128 get the same values.

Unfortunately, the papers on RC4 that I found note that there are weaknesses that can be exploited after 2^26 or 2^24 bytes, which is a mere 2^10 records. That's at the point where this code would cause errors on very short time scales.
I've raised the number for RC4 to 2^20, on the basis that anyone using RC4 is already accepting insanity. The same logic applies to RC2 and DES, which should (and will) be removed completely soon.

I have found no good information on SEED, Camellia, and 3DES. In my limited searching, I didn't find any analysis that suggested weaknesses, but then I didn't find that much either. The values I've chosen are conservative estimates on my part. These will likely need revision.

Finally, the limit for the null cipher is 2^10. That's more than I think we should ever need to complete the handshake.

# Testing

I've tested the code with small limits. It works, but I haven't worked out how to test this change reliably without making intrusive changes.

ekr: https://codereview.appspot.com/309080043 I spent a little more time on this and added tests. Good thing that I did :) There are now LOTS more tests.

Created attachment 8782749 [details] [diff] [review] bug1268745-1.patch Wan-Teh, would you mind looking at the change in sslimpl.h, where I switch from a struct to PRUint64 for the TLS sequence number? Is this OK? We use PRUint64 elsewhere for storing sequence numbers. Rietveld here: https://codereview.appspot.com/309080043

Martin: I'm happy to help. I reviewed it and added some comments in Rietveld. In short, I suggest using a typedef for the TLS sequence number, which expresses the intent better than PRUint64 does.

https://hg.mozilla.org/projects/nss/rev/a1b0b7023e19

https://hg.mozilla.org/projects/nss/rev/71da21e9d6e9