Don't invalidate a preloaded style sheet HTTP response if encoding of the document isn't settled
Categories: Core :: CSS Parsing and Computation, defect, P3
People: Reporter: hsivonen, Unassigned
Despite transferring updates to the document character encoding via the speculative load queue when possible, it appears that we can process a style sheet preload when the Document object hasn't had its encoding updated according to either meta charset or chardetng.
At present, that logically seems like a bug in the HTML parser. However, bug 1701828 is going to make that a by-design situation, so it's worth addressing in the preload machinery.
In bug 1716290, I'm going to make UTF-8 the assumed encoding for preloads so that they succeed in the common case. However, ideally we wouldn't throw away a preloaded HTTP response if it arrives at a time when Document::mCharacterSetSource == kCharsetUninitialized.
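To illustrate the intent, here is a minimal, self-contained C++ sketch (not Gecko code; all type and function names below are hypothetical) of keeping the undecoded response around and deciding at use time which encoding to decode it with, instead of discarding it:

```cpp
#include <string>
#include <utility>

enum class CharsetSource {
  Uninitialized,    // corresponds to kCharsetUninitialized above
  Guessed,          // e.g. the UTF-8 assumption from bug 1716290
  DocumentSettled   // meta charset / chardetng has run
};

struct PreloadedSheet {
  std::string bytes;              // raw HTTP response body, kept undecoded
  std::string assumedEncoding;    // encoding assumed when the preload started
  CharsetSource sourceAtPreload;  // how settled the encoding was at that time
};

// Called when the parser actually needs the sheet. Instead of throwing the
// response away because the encoding was unsettled at preload time, pick the
// encoding to decode the kept bytes with now that the document has settled.
std::pair<std::string, std::string> TakePreloadedBytes(
    PreloadedSheet&& aSheet, const std::string& aDocEncoding) {
  if (aSheet.sourceAtPreload == CharsetSource::DocumentSettled ||
      aSheet.assumedEncoding == aDocEncoding) {
    // The assumption held; decode with the encoding used at preload time.
    return {std::move(aSheet.bytes), aSheet.assumedEncoding};
  }
  // The guess was wrong: keep the HTTP response and decode it with the
  // now-settled document encoding rather than re-fetching over the network.
  return {std::move(aSheet.bytes), aDocEncoding};
}
```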
Reporter
Comment 1•3 years ago
We should also double-check that we don't reject anything due to the document not yet having been upgraded to Standards Mode.
Comment 2•3 years ago
So part of the issue is that the stylesheet load is keyed by the charset guess made at creation time, but that can change.
To fix this, I believe we'd need to track not only the charset but also whether the charset was guessed or specified via <link charset> or the like. If it was guessed, I guess we could fix up the charset before parsing... But it seems non-trivial, because we could guess different charsets for different documents...
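A hypothetical sketch of that bookkeeping (names invented for illustration, not real Gecko identifiers): key the preload on the charset plus how the charset was chosen, so a guessed charset can be fixed up before parsing while an explicitly specified one is left alone.

```cpp
#include <string>

enum class SheetCharsetOrigin {
  Guessed,        // fallback/speculative guess made at preload-creation time
  LinkAttribute,  // explicitly specified, e.g. via <link charset="...">
  HttpHeader      // specified by the Content-Type response header
};

struct SheetPreloadKey {
  std::string url;
  std::string charset;
  SheetCharsetOrigin origin;
};

// Before parsing, a guessed charset may be replaced with the document's
// now-known encoding; an explicitly specified charset must be left alone.
void FixUpCharsetBeforeParse(SheetPreloadKey& aKey,
                             const std::string& aDocEncoding) {
  if (aKey.origin == SheetCharsetOrigin::Guessed &&
      aKey.charset != aDocEncoding) {
    aKey.charset = aDocEncoding;
  }
}
```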
Comment 3•3 years ago
Changing severity to S3 because the bad case is not terrible, but we should fix it nonetheless.
Reporter
Comment 4•3 years ago
AFAICT, in the case of scripts we only fetch speculatively and don't compile speculatively, so by code inspection, this problem doesn't arise for scripts. (I didn't actually test this.)
Reporter
Comment 5•3 years ago
Bug 1701828 exposes this bug on dom/tests/mochitest/general/file_resource_timing_nocors.html, dom/tests/mochitest/general/performance_timeline_main_test.html, and dom/tests/mochitest/general/resource_timing_main_test.html, so one way to write tests for this is to copy those tests and remove the encoding declarations that I'm adding in bug 1701828.
(In reply to comment #2)
> To fix this I believe we'd need to track not only the charset
We only need to track which encoding we used and compare that, at the time of use, to the encoding we would use at that point. The main complication is that @charset is part of what would be used, and we don't want to reparse that.
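A sketch of that comparison, under assumed names: only the document-fallback case needs rechecking, because an encoding that came from the sheet itself (BOM, HTTP charset, or @charset) outranks the document encoding and can't be invalidated by a later document-encoding change, so the already-consumed @charset doesn't have to be reparsed.

```cpp
#include <string>

enum class SheetEncodingSource {
  Bom,              // byte order mark in the sheet itself
  HttpCharset,      // charset parameter on the Content-Type header
  AtCharset,        // @charset rule (already consumed once; avoid reparsing)
  DocumentFallback  // inherited from the referring document's encoding
};

struct PreloadDecodeRecord {
  std::string usedEncoding;
  SheetEncodingSource source;
};

bool PreloadStillValid(const PreloadDecodeRecord& aRecord,
                       const std::string& aDocEncodingNow) {
  if (aRecord.source != SheetEncodingSource::DocumentFallback) {
    // BOM, HTTP charset and @charset outrank the document encoding, so the
    // earlier decode is still the one we would perform now.
    return true;
  }
  // The preload fell back to the document encoding; it is only reusable if
  // the document ended up with the same encoding we assumed.
  return aRecord.usedEncoding == aDocEncodingNow;
}
```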
Reporter
Comment 6•3 years ago
(In reply to Henri Sivonen (:hsivonen) from comment #4)
> AFAICT, in the case of scripts we only fetch speculatively and don't compile speculatively, so by code inspection, this problem doesn't arise for scripts. (I didn't actually test this.)
And it seems that script compilation depends on the size of the script, so I will need to take another look with scripts larger than the threshold.
Comment 7•3 years ago
I believe we do try to parse the scripts (off the main thread), but https://searchfox.org/mozilla-central/rev/5122357c497684e01c5bb2d4a9bf8be1fe97a413/dom/script/ScriptLoader.cpp#2144 may return null if something has changed.