Download should be fully done in the parent process.
Categories
(Core :: Networking, enhancement, P2)
Tracking
firefox73: fixed
People
(Reporter: jya, Assigned: mattwoodrow)
References
Details
(Whiteboard: [necko-triaged])
Attachments
(6 files, 1 obsolete file)
Following bug 1556489, we now have all the building blocks required so that a download gets initiated and downloaded entirely from the DocumentChannel, in the parent process.
There's no longer a need to bounce back and forth between child and parent like we do now.
This is a follow-up task to bug 1556489.
Updated•6 years ago
Comment 1•6 years ago
Based on discussion elsewhere, we also want to handle content handlers in the parent process. Ideally we should do this at the same time, but some content handlers (e.g., the "application/x-xpinstall" one) may assume they'll have access to a docShell, so it may need to be spun off into a separate bug and wait until they can all be updated.
Assignee
Comment 2•6 years ago
(In reply to Kris Maglione [:kmag] from comment #1)
> Based on discussion elsewhere, we also want to handle content handlers in the parent process. Ideally we should do this at the same time, but some content handlers (e.g., the "application/x-xpinstall" one) may assume they'll have access to a docShell, so it may need to be spun off into a separate bug and wait until they can all be updated.
Yeah, we need to check all the potential content handlers (and run some of them in the parent). We also want to know about the case where nothing will handle the content, so we can fall back to the download handler.
Some analysis of nsDocumentOpenInfo:
We first check if the docshell (nsDSURIContentListener) can load the content, either via IsPreferred/CanHandleContent. This is basically just nsIWebNavigationInfo::IsTypeSupported (which we can check easily in the parent), except that editor code can set a parent content listener that returns false. I think that only happens in the parent process though, and we could probably change the interface to express the intent in a way that could be serialised.
We also have code to handle the case where nsIURIContentListener wants to handle the content type, but wants it to be converted first. I can't see anything that actually implements that though (at least within m-c), so maybe we could simplify that.
We then check the other nsIURIContentListener sources. None are ever registered on the docshell (for Firefox; Thunderbird has one, but that's not e10s). There's one registered by contract ID, PSMContentListener, which I think we could convert to run in the parent fairly easily.
We then look for nsIContentHandlers for the MIME type. I see the application/x-xpinstall one (should be easy to convert), and nsBrowserContentHandler. I think the latter could only be created for MIME types that were already handled by the docshell, so this shouldn't be possible to hit.
Finally, we check to see if any stream converters exist for the MIME type. If we find one, then we recurse into the whole process again to see if any of the above handlers want to handle the new type. However, we only ever attempt to create stream converters with the out type of */*. We can do the same in the parent to see if we get one, but there's no API to find out what the MIME type will be after conversion.
I think we have to add a new API to nsIStreamConverter to query the out type, and hope that it's implementable for all converters (there are a lot, I haven't audited).
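To keep the ordering straight, here's a rough standalone model of the dispatch flow described above (not the real Gecko code; all the types and helpers here are placeholders):

```cpp
// Toy model of the nsDocumentOpenInfo dispatch order described above.
// Not Gecko code; the types and helpers are placeholders for illustration.
#include <functional>
#include <optional>
#include <string>
#include <vector>

struct Handlers {
  // Step 1: docshell content listener, essentially nsIWebNavigationInfo::IsTypeSupported.
  std::function<bool(const std::string&)> docshellCanHandle;
  // Step 2: other nsIURIContentListener sources (e.g. the contract-ID-registered PSMContentListener).
  std::vector<std::function<bool(const std::string&)>> uriContentListeners;
  // Step 3: nsIContentHandler implementations for the MIME type (e.g. application/x-xpinstall).
  std::vector<std::function<bool(const std::string&)>> contentHandlers;
  // Step 4: a "<type> -> */*" stream converter, if one exists; returns the post-conversion type.
  std::function<std::optional<std::string>(const std::string&)> findConverterOutputType;
  // Fallback: hand the channel to the external helper app / download machinery.
  std::function<void(const std::string&)> download;
};

// Each step returns true if it consumed the content; otherwise we fall through.
void DispatchContent(const Handlers& h, const std::string& type, int depth = 0) {
  if (h.docshellCanHandle(type)) return;
  for (const auto& listener : h.uriContentListeners) {
    if (listener(type)) return;
  }
  for (const auto& handler : h.contentHandlers) {
    if (handler(type)) return;
  }
  if (depth < 4) {  // guard against converter loops; the real code is more careful
    if (auto converted = h.findConverterOutputType(type)) {
      // Recurse with the converted type, as nsDocumentOpenInfo does after inserting a converter.
      DispatchContent(h, *converted, depth + 1);
      return;
    }
  }
  h.download(type);  // nothing wanted it: fall back to the download path
}
```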
Assignee
Comment 3•6 years ago
(In reply to Matt Woodrow (:mattwoodrow) from comment #2)
> I think we have to add a new API to nsIStreamConverter to query the out type, and hope that it's implementable for all converters (there are a lot, I haven't audited).
I don't think it is :(
nsMultiMixedConv doesn't know the content type that it's going to output until it actually gets data. It also can create an inner nsUnknownDecoder, which also doesn't know the output content type until it gets data.
That means we can't really know if the content process docshell will be able to convert and handle the content until we've had OnDataAvailable.
Trying to partially feed all the OnStartRequest/OnDataAvailable messages through a chain of stream converters, while also batching them all, so that we can replay them into the HttpChannelParent (so that it can run the same chain in the content process) seems awful.
I think we're going to have to run all of nsDocumentOpenInfo in the parent process (including actually doing stream conversion and decompression), and do so in the listener chain before we get to DocumentChannelParent. That way DocumentChannelParent::OnStartRequest only happens if we actually want to forward to a content process, and we can proceed with switching channels.
Assignee
Comment 4•6 years ago
Honza, do you have any ideas for this?
It seems like doing all decompression and stream conversion in the parent process main-thread could be a real performance issue.
I don't see how we can handle downloading unknown content types in the parent until we've done all that work though to know what the final type is.
The only things I can think of are:
- Some hybrid solution where we decompress/convert enough data to know the final type, and then transition to forwarding the remainder of the stream as-is and letting the content process decompress the rest.
- Adding more threads and bouncing all the conversion work to a background thread, and then back again for IPDL.
Comment 5•6 years ago
I have to think. I believe you rediscovered the issues Dragana was facing a long time ago for a slightly different issue related to this code.
It's a nice feature to decompress on the child process. Right now we don't retarget nsHttpChannels paired to a HttpChannelChild because it's causing large scheduling harm. Hence we would have to do the decompression and sniffing on the main thread :(
I'll raise this issue in our Necko team meeting today. Maybe we will figure something out!
Comment 6•6 years ago
Matt, to update our conclusions from Thursday's meeting - Dragana is going to look at this. She was facing the same set of issues once. I'm afraid the solution will be something in between, with a different approach for different initial conditions (whether the content type is known in hand). In the worst case, when the content type has to be sniffed and there is also compression, we would peek and decompress some reasonable amount of the data in the parent process and run the UnknownDecoder on it; this will also probably be on the main thread, which is undesirable. If we find out that the content should render, we send the content, still compressed, to the child, maybe with some hints about what content type it is to save some work.
But please, this is not the final decision on the design! This needs more in-depth thinking before we start coding.
Assignee
Comment 7•6 years ago
One other thing to note is that nsDocumentOpenInfo only ever passes */* as the out type, when looking for a stream converter.
As far as I can tell, that means the only possible stream converters we can get are PdfStreamConverter, nsUnknownDecoder and nsMultiMixedConv.
PdfStreamConverter always has an output type of text/html, so we should be able to add a way to query that from the nsIStreamConverter(Service), and only do the actual conversion in the content process.
nsUnknownDecoder and nsMultiMixedConv seem relatively lightweight, so maybe doing them in the parent process isn't awful?
It seems like the real problem case is when the channel needs nsMultiMixedConv, but also has HTTP compression. In that case we'd need to SetApplyConversion(true) on the nsHttpChannel, and run both decompression and the multipart/mixed converter in the parent. Maybe that's rare enough to just accept for now?
Comment 8•6 years ago
Hmm... this is getting complicated.
If one of the parts in the mixed content is to be downloaded or externally processed (each part can have its own MIME type, and we may sniff it too), we really need to decompress, sniff and decide in the parent process, and then IPC-proxy the part channel to the child process for rendering.
- What is the expected path when the request is made by a web extension? I'm not sure which process is the consumer: the content process, the web-ext process, or both (in the case of e.g. filtering).
- And I don't recall what is expected to happen when a service worker interception is in play.
Stepping back a bit, one option is to let this (decompress/sniff/demux) happen off the parent process main thread (use a background thread in the parent process) as much as possible. Hence, we may need bug 1528285 for this first.
Another option may be to have a "navigational" process that will do this (decompress/sniff/demux), on whichever thread, for simplicity. After we enable the socket process, we will need to send the raw (uncompressed) data between the socket and parent processes anyway. If we send it between the socket and the nav process instead, it would not matter from the perf POV. The nav process would not be that restricted, so as to allow downloads, or we may have yet another process to only do downloads without any further processing (i.e. without running code with possible vulnerabilities).
Adding Shane and Andrew to let them know about what we are trying to do here.
Comment 9•6 years ago
For ServiceWorkers, the intercepted channel just replaces the underlying nsHttpChannel. I think the only interesting change in behavior is that synthesized responses always call SetApplyConversion(false) to disable content-encoding conversion, because any response a SW receives via fetch has always already been converted, and any other source of data (e.g. Blobs, strings) should never have been encoded in the first place.
Comment 10•6 years ago
I found the old bug; I think it could be useful: bug 1362564.
(I will keep my needinfo; I still need to look into the comments in this bug.)
Comment 11•6 years ago
(In reply to Matt Woodrow (:mattwoodrow) from comment #7)
> One other thing to note is that nsDocumentOpenInfo only ever passes */* as the out type, when looking for a stream converter. As far as I can tell, that means the only possible stream converters we can get are PdfStreamConverter, nsUnknownDecoder and nsMultiMixedConv.
> PdfStreamConverter always has an output type of text/html, so we should be able to add a way to query that from the nsIStreamConverter(Service), and only do the actual conversion in the content process.
> nsUnknownDecoder and nsMultiMixedConv seem relatively lightweight, so maybe doing them in the parent process isn't awful?
I think we already call at least nsUnknownDecoder in the parent process here. nsUnknownDecoder needs only 512 bytes of data, and has logic to only decompress a bit of the data, sniff it, and then send the compressed data to the next listener.
> It seems like the real problem case is when the channel needs nsMultiMixedConv, but also has HTTP compression. In that case we'd need to SetApplyConversion(true) on the nsHttpChannel, and run both decompression and the multipart/mixed converter in the parent. Maybe that's rare enough to just accept for now?
How much data does nsMultiMixedConv need? Can we use the code from nsUnknownDecoder to decompress a bit of data for sniffing only?
Assignee
Comment 12•6 years ago
(In reply to Dragana Damjanovic [:dragana] from comment #11)
>> It seems like the real problem case is when the channel needs nsMultiMixedConv, but also has HTTP compression. In that case we'd need to SetApplyConversion(true) on the nsHttpChannel, and run both decompression and the multipart/mixed converter in the parent. Maybe that's rare enough to just accept for now?
> How much data does nsMultiMixedConv need? Can we use the code from nsUnknownDecoder to decompress a bit of data for sniffing only?
I really know very little about nsMultiMixedConv, but that seems plausible.
So we'd run nsMultiMixedConv (or something similar that did the sniffing subset of it) in between nsHttpChannel and DocumentChannelParent (just like nsUnknownDecoder), so that when DocumentChannelParent::OnStartRequest is called, we know the actual content type?
How would we represent this sniffed content type? I assume changing the actual content type value on the nsIRequest isn't possible, since we still need to know the original type on the content process side to apply the conversion?
Assignee
Comment 13•6 years ago
Actually, no I don't think that makes sense, since the underlying content-type can be different for each part, so there's no way to sniff a single answer.
It also seems like we could want to download some parts, and not others, depending on content types. That'd mean we'd have to sniff the entire thing right up to the start of the last part (which I assume we can't identify until the actual OnStopRequest).
I think we really need to run nsMultiMixedConv in the parent to do that splitting into separate requests, and then decide what to do with each of them separately.
That seems fairly cheap, but if the channel is compressed, then we'd also have to do that, which could be painful.
So it seems like the choices are:
- Just run nsMultiMixedConv in the parent as-is, and hope that the combination of mixed+compressed is rare enough to not worry (we could measure this?).
- Run nsMultiMixedConv in the parent, but do so on a different thread. I guess that's bug 1528285?
Updated•6 years ago
Comment 14•6 years ago
It seems like in the multimixed (+compressed?) case we could detect that in OnStartRequest (on the main thread) and retarget OnDataAvailable to a background thread, so do all the streamconv work on the background thread. This isn't done in general because it leads to too much latency for normal browsing, as I understand it, but multimixed is very much an edge case... That shouldn't require bug 1528285, afaict.
Comment 15•6 years ago
(In reply to Boris Zbarsky [:bzbarsky, bz on IRC] from comment #14)
> It seems like in the multimixed (+compressed?) case we could detect that in OnStartRequest (on the main thread) and retarget OnDataAvailable to a background thread, so do all the streamconv work on the background thread. This isn't done in general because it leads to too much latency for normal browsing, as I understand it, but multimixed is very much an edge case... That shouldn't require bug 1528285, afaict.
That is right, we do not need bug 1528285...
Some things will be a bit tricky:
- To make nsMultiMixedConv use the background thread, we only need to make nsMultiMixedConv implement nsIThreadRetargetableStreamListener and make sure it is (or make it) thread safe.
- But (the tricky part) nsMultiMixedConv may call OnStartRequest/OnStopRequest multiple times, and those should be called on the main thread, so we will need to dispatch them back to the main thread (writing this down, it does not sound that tricky :) )
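A standalone sketch of the ordering argument (just a model, not the actual Gecko threading code; in the real patch OnDataAvailable itself would stay on the background thread, which is where the care is needed):

```cpp
// Toy model (not Gecko code): if the off-main-thread multipart parser dispatches
// every listener callback, in order, to a single FIFO "main thread" queue, then
// OnStartRequest -> OnDataAvailable* -> OnStopRequest ordering per part is
// preserved on the main thread automatically.
#include <functional>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>

class FifoTarget {  // stand-in for the main thread's event queue
  std::queue<std::function<void()>> mTasks;
  std::mutex mLock;
 public:
  void Dispatch(std::function<void()> aTask) {
    std::lock_guard<std::mutex> guard(mLock);
    mTasks.push(std::move(aTask));
  }
  void Drain() {  // run everything that has been dispatched, in FIFO order
    for (;;) {
      std::function<void()> task;
      {
        std::lock_guard<std::mutex> guard(mLock);
        if (mTasks.empty()) return;
        task = std::move(mTasks.front());
        mTasks.pop();
      }
      task();
    }
  }
};

int main() {
  FifoTarget mainThread;
  // "Background thread": finds part boundaries and emits callbacks in order.
  std::thread parser([&mainThread] {
    for (int part = 0; part < 2; ++part) {
      mainThread.Dispatch([part] { std::cout << "OnStartRequest, part " << part << "\n"; });
      mainThread.Dispatch([part] { std::cout << "OnDataAvailable, part " << part << "\n"; });
      mainThread.Dispatch([part] { std::cout << "OnStopRequest, part " << part << "\n"; });
    }
  });
  parser.join();
  mainThread.Drain();  // callbacks observed in exactly the order they were produced
  return 0;
}
```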
Comment 16•6 years ago
A note to the previous message: OnStartRequest MUST happen before OnDataAvailable, and OnStopRequest must be the last. We need to make sure that holds.
Assignee
Comment 17•6 years ago
My big concern here is that we then have multiple OnStartRequest calls happening to HttpChannelParent, which it doesn't currently support.
We'll need to decide if we create a new HttpChannelParent/Child pair for each part, or whether we support sending multiple parts through a single HttpChannelParent.
Docshell expects to deal with multipart by detecting the multipart wrappers, and looking up the http (HttpChannelChild) channel inside it. I guess that might just be able to go away if we split the multipart before we get to HttpChannelChild.
Comment 18•6 years ago
I have a question:
Can we disallow downloads for multipart/x-mixed-replace? We do not have any stats about this type; should we collect some?
Comment 19•6 years ago
(In reply to Dragana Damjanovic [:dragana] from comment #18)
> I have a question:
> Can we disallow downloads for multipart/x-mixed-replace? We do not have any stats about this type; should we collect some?
Sorry, I just realized that this will not solve the issue.
Comment 20•6 years ago
Well, it is worth checking what other browsers do with multipart/x-mixed-replace... If no one else supports it, we could try adding a use counter for it and see whether we can just drop support for it completely, I guess. It used to be that Bugzilla used it extensively, UA-sniffing for Firefox, but maybe that's stopped.
Comment 21•6 years ago
It looks like we are the last browser that supports multipart/x-mixed-replace well (or at all) for navigation rendering. Removing this would likely simplify a number of things. I used this example from one of the last mixed-replace bugs we fixed for navigation: http://mallory.csrf.jp/x-mixed-replace/csp/bad.php. It doesn't render in Chrome or Edge 44/EdgeHTML 18, and doesn't render the first part (only the second one) in Safari 13, which may be considered broken support.
Comment 22•6 years ago
And Bugzilla doesn't use it any more for the search-results waiting image.
Comment 23•6 years ago
If a navigated-to mixed-replace response delivers images as parts, then in Firefox, Chrome, and Safari they all render and update on the screen. Edge and IE download the content.
Assignee
Comment 24•6 years ago
Is it possible for different parts to have different mime types that require different handling?
I think if not, then we could have a sniffer (like nsUnknownDecoder) which just figures out the type, and then forwards to the content process or download handler.
If we do, then we really need to do all of the splitting in the parent, and handle each part correctly.
Comment 25•6 years ago
> Is it possible for different parts to have different mime types that require different handling?
Yes, and traditionally that was precisely the idea: one part which ends up with a download and one part which renders as HTML and serves as a confirmation page of some sort...
Assignee
Comment 26•6 years ago
Now that the external app helper works in the parent, and multipart/x-mixed-replace runs in the parent, it seems like the biggest remaining issue is stream converters.
From what I can tell, nsDocumentOpenInfo only ever passes */* as the output type, which restricts the possible set of converters (at least ones within the tree). I believe this means we'll never get chained converters from a single AsyncConvert call, since nothing takes */* as input.
Some stream converters can't know what content type they'll output until they've actually received input (like nsUnknownDecoder and nsMultiMixedConv). These ones need to run in the parent, so we can find out the resulting content type before we decide where to target the document.
Others only convert a fixed type, so even though we create them with a contract ID using */*, they could still tell us the output type in advance (like pdf.js).
Some stream converters seem very difficult to run in the parent, particularly PdfStreamConverter.jsm, which interacts with the Window and document.
Given that we now handle multipart and unknown content types in nsHttpChannel directly, my hope is that we can now only support stream converters (for nsDocumentOpenInfo) that will be able to tell us their content type. I'm experimenting with a new nsIStreamConverter API that queries the output type (and is unimplemented by most converters).
The parent side version of nsDocumentOpenInfo attempts to create a stream converter, checks the output type, and then checks to see if we can target that at the docshell (but doesn't bother trying to target that at other content handlers). If so, we send the data as is to the content process, and let the actual conversion happen.
That's only a subset of what we support right now, but it looks like it might be sufficient for the in-tree use cases we have.
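Roughly the parent-side decision I'm picturing (a standalone sketch, not the actual patch; queryOutputType stands in for the new nsIStreamConverter API, and the other helpers are placeholders):

```cpp
// Sketch of the parent-side decision described above. Not the actual patch;
// queryOutputType is a placeholder for whatever the new nsIStreamConverter API
// ends up being, and the other helpers are placeholders too.
#include <functional>
#include <optional>
#include <string>

enum class Target { ForwardToContentProcess, RunConverterInParent, Download };

struct ParentSideHelpers {
  // Is there a "<type> -> */*" converter registered for this type at all?
  std::function<bool(const std::string&)> haveConverterFor;
  // The proposed query: converters with a fixed output (pdf.js always produces
  // text/html) report it; data-dependent ones (nsUnknownDecoder, nsMultiMixedConv) can't.
  std::function<std::optional<std::string>(const std::string&)> queryOutputType;
  // Essentially nsIWebNavigationInfo::IsTypeSupported, checkable in the parent.
  std::function<bool(const std::string&)> docshellSupports;
};

Target DecideTarget(const ParentSideHelpers& h, const std::string& type) {
  if (h.docshellSupports(type)) {
    // The content process can render this directly; send the data as-is.
    return Target::ForwardToContentProcess;
  }
  if (h.haveConverterFor(type)) {
    if (auto out = h.queryOutputType(type)) {
      if (h.docshellSupports(*out)) {
        // Don't convert here; let the content process run the converter itself.
        return Target::ForwardToContentProcess;
      }
    } else {
      // Output type only known once data arrives (unknown decoder, multipart):
      // these have to run in the parent before we can decide.
      return Target::RunConverterInParent;
    }
  }
  // Nothing will render it: hand it to the external helper / download path.
  return Target::Download;
}
```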
Boris, does that sound somewhat sane? Am I missing something here?
Comment 27•6 years ago
> From what I can tell, nsDocumentOpenInfo only ever passes */* as the output type
Hmm. It could pass something else if we had an IsPreferred or CanHandleContent return a non-empty type-to-use type that does not match our incoming type. I just checked, and it looks like nothing in Firefox does that. In Thunderbird, nsURLFetcher::CanHandleContent does do that, requesting that "message/rfc822" be converted to "text/html"... That said, it seems to me that nsURLFetcher could just insert that converter itself in DoContent, just like it already does for some other types. Worth checking with the Thunderbird folks. If so, we could nix this whole "output a type to convert to" bit from nsIDSURIContentListener, and then */* would be the only output type this code uses, yes.
> which restricts the possible set of converters (at least ones within the tree)
True.
> I believe this means we'll never get chained converters from a single AsyncConvert call, since nothing takes */* as input.
Unfortunately, a converter that declares itself as converting to */* can produce any type as output. That includes "no type", followed by running the unknown decoder and sniffing a type, etc... So we can still get chained converters, since we use the actual type the converter outputs (whether directly, or via not setting one and sniffing happening) to do our next conversion, if one is needed.
OK, so what are the actual converters that convert to */*? Looks like:
- pdfjs: always produces text/html
- jsonview: always produces text/html
- Unknown decoder: could produce anything, unfortunately, but maybe we're not running it from uriloader code at this point?
- Various multipart things (multipart/byteranges, multipart/mixed, multipart/x-mixed-replace): do we just run those from inside necko now?
- The Thunderbird converter from message/rfc822. This one can do all sorts of stuff, I think... That said, it could in fact implement the "tell us what type you plan to output" API, as far as I can see, as long as that API is given the same context arg that AsyncConvertData gets. Unfortunately, that type might end up being UNKNOWN_CONTENT_TYPE (aka "application/x-unknown-content-type"); not sure how much that complicates life here.
I can't find anything else. Is that about right? As far as Thunderbird goes, it doesn't do e10s so far, so maybe we can only support whatever bits it needs in non-e10s mode, effectively? Does that help the situation any?
Assignee
Comment 28•6 years ago
(In reply to Boris Zbarsky [:bzbarsky, bz on IRC] from comment #27)
>> From what I can tell, nsDocumentOpenInfo only ever passes */* as the output type
> Hmm. It could pass something else if we had an IsPreferred or CanHandleContent return a non-empty type-to-use type that does not match our incoming type. I just checked, and it looks like nothing in Firefox does that. In Thunderbird, nsURLFetcher::CanHandleContent does do that, requesting that "message/rfc822" be converted to "text/html"... That said, it seems to me that nsURLFetcher could just insert that converter itself in DoContent, just like it already does for some other types. Worth checking with the Thunderbird folks. If so, we could nix this whole "output a type to convert to" bit from nsIDSURIContentListener, and then */* would be the only output type this code uses, yes.
Oh sorry, I forgot that I had a local patch that does exactly this based on the same conclusion.
Maybe I should just assert that we don't ever get a type-to-use when running under e10s, and file a thunderbird bug to first remove their usage of it.
> - pdfjs: always produces text/html
> - jsonview: always produces text/html
> - Unknown decoder: could produce anything, unfortunately, but maybe we're not running it from uriloader code at this point?
I don't think we are, but it's hard to prove that it couldn't be run. nsHttpChannel runs the unknown decoder manually, both for normal request and when it does multipart handling (runs it on each part if needed).
It's possible other protocols, or weird paths that I've missed are a source of unknown content types? I think we can just add nsUnknownDecoder to them too if they show up.
> - Various multipart things (multipart/byteranges, multipart/mixed, multipart/x-mixed-replace): Do we just run those from inside necko now?
multipart/x-mixed-replace is handled automatically by nsHttpChannel now. The other two are not, but I think that's just an oversight on my part.
> - The Thunderbird converter from message/rfc822. This one can do all sorts of stuff, I think... That said, it could in fact implement the "tell us what type you plan to output" API, as far as I can see, as long as that API is given the same context arg that AsyncConvertData gets. Unfortunately, that type might end up being UNKNOWN_CONTENT_TYPE (aka "application/x-unknown-content-type"); not sure how much that complicates life here.
> I can't find anything else. Is that about right? As far as Thunderbird goes, it doesn't do e10s so far, so maybe we can only support whatever bits it needs in non-e10s mode, effectively? Does that help the situation any?
Yeah I do think it helps the situation a lot. I think we can have 3 variants of nsDocumentOpenInfo, one being the current state (and non-e10s only), one for parent-process handling when using e10s, and one for content process handling.
The parent-process variant deals with deciding if the docshell will be able to handle the request (maybe after stream conversion), and retargeting the request to other consumers. The content-process variant applies the actual stream conversion, and forwards to the docshell.
Still trying to get my head around a sane way to share common code between these, but still making it clear as to which subset is actually in effect.
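One possible shape for that, just thinking out loud (standalone sketch, not real code; all names here are placeholders):

```cpp
// Standalone sketch only; all names are placeholders. The shared steps live in
// a base class, and each variant overrides only the pieces that differ, so it
// stays obvious which subset runs where.
#include <string>

class DocumentOpenInfoBase {
 public:
  virtual ~DocumentOpenInfoBase() = default;
  // Common driver: the same order of steps in every variant.
  void DispatchContent(const std::string& aContentType) {
    if (TryTargetDocshell(aContentType)) return;
    if (TryStreamConversion(aContentType)) return;
    HandleUnsupported(aContentType);
  }
 protected:
  // Parent (e10s): "would the docshell handle it?"; content: actually hand it over.
  virtual bool TryTargetDocshell(const std::string& aContentType) = 0;
  // Parent (e10s): only query the converter's output type; content: really convert.
  virtual bool TryStreamConversion(const std::string& aContentType) = 0;
  // Parent (e10s): external helper app / download; content: shouldn't be reached.
  virtual void HandleUnsupported(const std::string& aContentType) = 0;
};

// class ParentProcessDocumentOpenInfo : public DocumentOpenInfoBase { ... };
// class ContentProcessDocumentOpenInfo : public DocumentOpenInfoBase { ... };
// class NonE10sDocumentOpenInfo : public DocumentOpenInfoBase { ... };  // today's behaviour
```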
Thanks for looking into this!
Assignee
Comment 29•6 years ago
This was added for b2g, and hasn't been used since.
Updated•6 years ago
Assignee
Comment 30•6 years ago
IsTypeSupported requires a docshell in order to determine if plugins are allowed. This adds a static version that lets the caller provide their own value for allowing plugins.
Depends on D56131
Assignee
Comment 31•6 years ago
Depends on D56132
Assignee
Comment 32•6 years ago
We don't want to run stream conversion in the parent (since a lot of them require access to the document), so this instead adds a way to find out what their output type will be.
Depends on D56133
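Conceptually, the addition looks something like this (sketch only; the method name, signature, and error convention here are placeholders, not necessarily what lands):

```cpp
// Sketch of the idea only; not the exact nsIStreamConverter change.
#include <optional>
#include <string>

struct StreamConverterLike {
  virtual ~StreamConverterLike() = default;
  // Converters with a statically known output (e.g. pdf.js always produces
  // text/html) report it here without seeing any data. Converters that can only
  // decide once data arrives (nsUnknownDecoder, nsMultiMixedConv) return
  // std::nullopt, and the caller has to handle them differently.
  virtual std::optional<std::string> GetConvertedType(const std::string& aFromType) = 0;
};
```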
Assignee
Comment 33•6 years ago
Depends on D56134
Assignee
Comment 34•6 years ago
Depends on D56135
Assignee
Comment 35•6 years ago
Depends on D56136
Assignee
Comment 36•6 years ago
(In reply to Matt Woodrow (:mattwoodrow) from comment #28)
>> - Unknown decoder: could produce anything, unfortunately, but maybe we're not running it from uriloader code at this point?
> I don't think we are, but it's hard to prove that it couldn't be run. nsHttpChannel runs the unknown decoder manually, both for normal requests and when it does multipart handling (runs it on each part if needed).
> It's possible other protocols, or weird paths that I've missed, are a source of unknown content types? I think we can just add nsUnknownDecoder to them too if they show up.
This part ended up not being true, and we do get unknown content types in a few places.
I added a manual exception for doing nsUnknownDecoder stream conversions in the parent, while only doing the remainder (of ones that support the new query API) in the content process.
Updated•6 years ago
Comment 37•6 years ago
Comment 38•6 years ago
Comment 39•6 years ago
Comment 40•6 years ago
Comment 41•6 years ago
Backed out 9 changesets (bug 1574372) for bustage, wpt, crashtests, mochitest failures.
Push that started the failures: https://treeherder.mozilla.org/#/jobs?repo=autoland&resultStatus=pending%2Crunning%2Csuperseded%2Ctestfailed%2Cbusted%2Cexception%2Crunnable&revision=419b94b1210e4304609821a7a4a1cb94e5317f99
Failure log for build bustages: https://treeherder.mozilla.org/logviewer.html#/jobs?job_id=281441809&repo=autoland&lineNumber=35283
Failure log for ESlint: https://treeherder.mozilla.org/logviewer.html#/jobs?job_id=281441806&repo=autoland&lineNumber=55
Failure log for crashtests: https://treeherder.mozilla.org/logviewer.html#/jobs?job_id=281471851&repo=autoland&lineNumber=11797
Failure log for wpt: https://treeherder.mozilla.org/logviewer.html#/jobs?job_id=281449228&repo=autoland&lineNumber=3550
https://treeherder.mozilla.org/logviewer.html#/jobs?job_id=281474126&repo=autoland&lineNumber=4245
Failure log for mochitest: https://treeherder.mozilla.org/logviewer.html#/jobs?job_id=281475982&repo=autoland&lineNumber=3548
Backout: https://hg.mozilla.org/integration/autoland/rev/3017a49159fba1083467d676c98fe1e61ecc2eaf
Comment 42•6 years ago
Comment 43•6 years ago
bugherder
https://hg.mozilla.org/mozilla-central/rev/e9eb50f2348e
https://hg.mozilla.org/mozilla-central/rev/0a4b357c600d
https://hg.mozilla.org/mozilla-central/rev/4002a8984ead
https://hg.mozilla.org/mozilla-central/rev/67ccb856e9b0
https://hg.mozilla.org/mozilla-central/rev/3e9b4cd51f83
https://hg.mozilla.org/mozilla-central/rev/4f12433954ae
Assignee
Updated•6 years ago