Open
Bug 958786
Opened 11 years ago
Updated 2 years ago
PKIX_PL_AIAMgr_GetAIACerts should keep going if pkix_pl_AIAMgr_GetHTTPCerts or pkix_pl_AIAMgr_GetLDAPCerts fails with a non-fatal error
Categories: NSS :: Libraries, defect, P2
Tracking: (Not tracked)
Status: NEW
Target Milestone: 3.16.1
People: Reporter wtc, Unassigned
Attachments (1 file)
Patch (2.35 KB, patch): review- from ryan.sleevi
PKIX_PL_AIAMgr_GetAIACerts should keep going if pkix_pl_AIAMgr_GetHTTPCerts or
pkix_pl_AIAMgr_GetLDAPCerts fails with a non-fatal error. For example, an HTTP
URL for CA issuers may return a 404 response. Alexei also noted we should do
this in bug 528743 comment 9.
This patch improves the old patch I wrote in bug 528743 comment 3.
Question: should PKIX_PL_AIAMgr_GetAIACerts stop as soon as pkix_pl_AIAMgr_GetHTTPCerts
or pkix_pl_AIAMgr_GetLDAPCerts succeeds?
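For concreteness, here is a minimal sketch of the control flow this change is aiming for. The fetch_status enum, fetch_certs_from_url(), and the int *certs_out parameter are hypothetical simplifications for illustration, not the real libpkix types or signatures; the actual change is in attachment 8358718.

#include <stddef.h>

typedef enum { FETCH_OK, FETCH_NONFATAL, FETCH_FATAL } fetch_status;

/* Hypothetical stand-in for pkix_pl_AIAMgr_GetHTTPCerts /
 * pkix_pl_AIAMgr_GetLDAPCerts; not the real signature. */
fetch_status fetch_certs_from_url(const char *url, int *certs_out);

/* Proposed flow: skip entries that fail non-fatally, abort only on
 * fatal errors, and succeed if any entry produced certs. */
int get_aia_certs_proposed(const char **urls, size_t n_urls, int *certs_out)
{
    size_t i;
    int got_any = 0;

    for (i = 0; i < n_urls; i++) {
        switch (fetch_certs_from_url(urls[i], certs_out)) {
        case FETCH_OK:
            got_any = 1;
            break;          /* keep going: remaining URLs may add more certs */
        case FETCH_NONFATAL:
            continue;       /* e.g. an HTTP 404: skip this URL, try the next */
        case FETCH_FATAL:
            return -1;      /* e.g. out of memory: abort AIA processing */
        }
    }
    return got_any ? 0 : -1;
}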
Attachment #8358718 - Flags: superreview?(rrelyea)
Attachment #8358718 - Flags: review?(ryan.sleevi)
Comment 1•11 years ago
Comment on attachment 8358718 [details] [diff] [review]
Patch
Review of attachment 8358718 [details] [diff] [review]:
-----------------------------------------------------------------
This is something actively debated in the CA/Browser Forum.
For the record, I'm not a big fan of falling back and trying all of the AIA. I think the current behaviour - try one and fail if it fails - is more desirable for performance and for avoiding some of the 'round-robin' hacks that CAs have proposed.
On a technical level, this patch is correct, but I can't bring myself to r+ it, because I think it will end up doing more harm than good - especially with regards to performance.
Attachment #8358718 - Flags: review?(ryan.sleevi) → review-
Reporter
Comment 2•11 years ago
Comment on attachment 8358718 [details] [diff] [review]
Patch
Review of attachment 8358718 [details] [diff] [review]:
-----------------------------------------------------------------
Thanks for the comments.
The current behavior is:
- Try each one until any one fails.
- If any one fails, return no results.
If nothing fails, the current code will try *all of them*. So
this patch doesn't hurt performance in the normal case. On the
other hand, the current code doesn't support failover, which
seems bad.
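For contrast, a sketch of the current flow, reusing the same hypothetical fetch_certs_from_url() helper and fetch_status enum as the sketch in the description (again, not the real libpkix signatures): the first failure of any kind, even a 404, ends processing with no results.

/* Current flow: every entry is visited, but any failure discards all
 * results, so there is no failover to a later working URL. */
int get_aia_certs_current(const char **urls, size_t n_urls, int *certs_out)
{
    size_t i;

    for (i = 0; i < n_urls; i++) {
        if (fetch_certs_from_url(urls[i], certs_out) != FETCH_OK)
            return -1;  /* one bad URL discards everything fetched so far */
        /* success: keep going and fetch from every remaining URL too */
    }
    return 0;
}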
I can implement any reasonable behavior if you specify it.
Comment 3•11 years ago
Comment on attachment 8358718 [details] [diff] [review]
Patch
Review of attachment 8358718 [details] [diff] [review]:
-----------------------------------------------------------------
Oof. I misread the nbio case.
I'm of the view we should try the first URL for each type we support. Or at least support down-scaling timeouts. Or parallel fetches.
"Ideal" behaviour would be:
- Try the first URL whose type we support. If it succeeds, great. If it fails, too bad. End of AIA processing.
"Acceptable" behaviour would be:
- Try each URL
- If it fails, try next URL
- If it succeeds, stop processing
- Don't let total time exceed some threshold
Chromium currently tries to implement the last point of 'acceptable' by setting a 15-second timeout on each fetch. If the request exceeds that time, Chromium reports to libpkix that it failed. Because a failure aborts processing, this has the effect of limiting the total time spent doing AIA. This patch could increase the overall time spent for every bad URL, because these timeouts are not fatal processing errors. That's what I meant by negative performance implications.
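To make the 'acceptable' option concrete, here is a minimal sketch under stated assumptions: try_fetch() is a hypothetical per-URL fetcher (not an NSS or Chromium API), and the 15-second budget is only an illustrative value echoing the timeout mentioned above, applied here as a total budget rather than per fetch.

#include <stddef.h>
#include <time.h>

#define AIA_TOTAL_BUDGET_SECONDS 15.0   /* illustrative threshold, not an NSS constant */

/* Hypothetical per-URL fetcher; returns 0 on success, nonzero on failure. */
int try_fetch(const char *url, int *certs_out);

/* "Acceptable" flow: try each URL in order, stop at the first success,
 * and give up once the total time budget is exhausted. */
int fetch_aia_with_budget(const char **urls, size_t n_urls, int *certs_out)
{
    time_t start = time(NULL);
    size_t i;

    for (i = 0; i < n_urls; i++) {
        if (difftime(time(NULL), start) > AIA_TOTAL_BUDGET_SECONDS)
            return -1;          /* total AIA time budget exhausted */
        if (try_fetch(urls[i], certs_out) == 0)
            return 0;           /* first success ends AIA processing */
        /* failure: fall through to the next URL */
    }
    return -1;                  /* every URL failed within the budget */
}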
Reporter
Updated•11 years ago
Target Milestone: 3.16 → 3.16.1
Comment 4•3 years ago
The bug assignee hasn't logged in to Bugzilla in the last few months, and this bug has priority 'P2'.
:beurdouche, could you have a look please?
For more information, please visit auto_nag documentation.
Assignee: wtc → nobody
Status: ASSIGNED → NEW
Flags: needinfo?(bbeurdouche)
Updated•2 years ago
Severity: normal → S3
Comment 5•2 years ago
We have modified the bot to only consider P1 as high priority, so I'm cancelling the needinfo here.
Flags: needinfo?(bbeurdouche)