In addition to the static blacklist of servers, the dynamic blacklisting based on the pipelining feedback module, and the pipeline pretest, we can also add a downloadable list of host names known to be broken with pipelining. Opera does something very similar. It makes sense for this list to be retrieved as a side effect of the pretest (bug 603505).
Created attachment 485102 [details] [diff] [review] Dynamic blacklist of hostnames wrt pipelining v1 As mentioned in the summary, the list of hosts is piggybacked onto the pipeline-sanity check (it comes back as the first response body). How often to update that list is an open question, but one I hope to discuss in bug 603505.
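To make the piggybacking concrete, here is a minimal sketch of parsing such a response body into a hostname set. The format here (one hostname per line, `#` comments, blank lines ignored) is a hypothetical illustration, not the actual wire format used by the patch:

```python
def parse_blacklist(body):
    """Parse a pipelining-blacklist payload into a set of hostnames.

    Assumed (illustrative) format: one hostname per line,
    '#' starts a comment line, blank lines are ignored.
    """
    hosts = set()
    for line in body.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip comments and blank lines
        hosts.add(line.lower())  # hostnames compare case-insensitively
    return hosts
```

A consumer would fetch the pretest response, run it through a parser like this, and then check each origin server against the resulting set before enabling pipelining.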
Created attachment 487945 [details] [diff] [review] Dynamic blacklist of hostnames wrt pipelining v2 Minor update for an interface that should have taken nsCString instead of nsCAutoString.
Created attachment 495143 [details] [diff] [review] Dynamic blacklist of hostnames wrt pipelining v3 Updated for bitrot, conforms better to the style guide, updates based on experience (i.e. bug fixes and tweaks), etc.
Created attachment 513673 [details] [diff] [review] Dynamic blacklist of hostnames wrt pipelining v4
Created attachment 542084 [details] [diff] [review] hostname blacklist 5 Updated for bitrot on larch.
As I mentioned earlier in an email thread, we might consider basing this on a web service, something similar to the update check or Safe Browsing updates, but this mostly depends on an estimate of how long the list might be. If it is significantly long, then using the pretest to fetch it will probably not be optimal: the pretest could get blocked or skewed by a larger response. LevelDB is a good candidate for persisting the list; SQLite is overkill. This needs some thought:
- What data do we base the blacklist on? How do we collect it? Just from reported bugs?
- Should we allow running Firefox instances around the globe to report the hosts that actively fail to pipeline, along with other data like topology, etc.? (Probably a privacy issue.)
- How long do we believe the list might be? The publishing protocol should be designed based on that.
- How often are we going to update it?
- Could this be based on a Bloom filter, transferred compressed on the wire?
Comment on attachment 542084 [details] [diff] [review] hostname blacklist 5 Dropping the review flag. Feedback on this attachment is in comment 6.
I'm going to mark this as WONTFIX until we have a demonstrated need for it. The only model I have for it is Opera, which has a very short list that frankly appears out of date to me. There isn't a lot of value there compared to dynamic problem detection. I also have concerns about privacy and about adding dependencies on a production-level services implementation.