Closed Bug 1507008 Opened 2 years ago Closed 2 years ago
Consider migrating FluentDOM to C++
Elements of FluentDOM:
* Arbitrary DOM operations (setAttributes/getAttributes) (in C++ already)
* Event/Prefs Observer (in C++ already)
* Initial Document Translation
* Mutation Observer
* DOM Overlays algorithm (bug 1503657)
One of the potential solutions to bug 1441035 is to migrate the critical parts from JS to C++, minimizing C++<-->JS communication on hot paths.
I know it's going to be hard, but I must ask everyone to stay calm and not get overly excited about the quality of this code.

There are three things that FluentDOM does:
1) Initial document localization
2) Localization on mutations
3) Rich localization using DOM Overlays (see bug 1503657)

This is a mock of (1), skipping Fluent parsing/resolving for now (replaced with a beautiful mock parser), which hopefully we'll get from Rust. It also doesn't cache the data for tpaint yet, and of course doesn't do any DOM Overlays, nor does it do language fallback or async (that's what Localization.jsm does). Nonetheless, it does load some messages and translates Preferences. I'll work a bit more on it and try to get some talos numbers.
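The "beautiful mock parser" mentioned above can be imagined as something like the following sketch (all names here are hypothetical, and real FTL syntax with attributes, placeables, and multiline patterns is ignored): it just splits `key = value` lines into a map.

```cpp
#include <map>
#include <sstream>
#include <string>

// Hypothetical stand-in for the mock parser: one `key = value` entry per
// line, comments and malformed lines skipped. Real Fluent parsing would
// come from fluent-rs.
std::map<std::string, std::string> MockParseFTL(const std::string& aSource) {
  std::map<std::string, std::string> entries;

  // Trim surrounding whitespace from a key or value.
  auto trim = [](std::string s) {
    size_t b = s.find_first_not_of(" \t");
    size_t e = s.find_last_not_of(" \t");
    return b == std::string::npos ? std::string() : s.substr(b, e - b + 1);
  };

  std::istringstream stream(aSource);
  std::string line;
  while (std::getline(stream, line)) {
    if (line.empty() || line[0] == '#') {
      continue;  // skip comments and blank lines
    }
    size_t eq = line.find('=');
    if (eq == std::string::npos) {
      continue;  // not a `key = value` line
    }
    entries[trim(line.substr(0, eq))] = trim(line.substr(eq + 1));
  }
  return entries;
}
```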
Attachment #9024888 - Attachment is obsolete: true
The mocked FluentDOM in C++ gives a nice performance boost to preferences: https://treeherder.mozilla.org/perf.html#/compare?originalProject=try&originalRevision=4dff1d19b36f&newProject=try&newRevision=e92a3a6e4bd1&framework=1

Notice that it doesn't have fluent-rs plugged in yet, nor DOM Overlays, nor language fallback, but it seems promising. Next, I'll test how this patchset works with browser-menubar.
Depends on D12048
And results for browser-menubar: https://treeherder.mozilla.org/perf.html#/compare?originalProject=try&originalRevision=4dff1d19b36f&newProject=try&newRevision=bf15f6a385b2&framework=1

I'm a bit confused as to why the about:preferences numbers here are even better (8% -> 12%). The only guess I have is that in the second patch I skip translation for URLs other than `about:preferences` and `browser.xul`, so maybe some of the about_preferences talos tests measure additional documents in preferences that use Fluent but are now skipped.

What's important is that I have the full browser-menubar localized without tpaint/ts_paint/sessionrestore cost. It's a mock, and a very brute-forced one, but it does hit all the major hot paths:
- read I/O (no caching!)
- quasi-parse (fluent-rs shouldn't be much slower for the most common hot paths)
- collect localizable elements from the document
- apply translations (DOM Overlays will be slower, but they affect a small percentage of elements)

This gives us something close to the lower bound of what we can get out of a C++/Rust migration. From here, we'd need to add things that will cost:
- the full fluent-rs parser
- loading 13 files instead of one (about:preferences)
- formatting message values (fluent-rs resolver or JS resolver)
- full DOM Overlays
- lazy locale fallback to match the current behavior

There are two things we can do to improve the performance:
- Cache the I/O at runtime. We currently do I/O in the mock for each of the 20 tpaint runs, but it would be trivial to do it only for the first of the 20 with some mock of L10nRegistry in C++/Rust.
- Cache the localized document.
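The first improvement (caching the I/O at runtime) could look roughly like this minimal sketch, where `ResourceCache` and its loader callback are hypothetical stand-ins for whatever a C++/Rust mock of L10nRegistry would actually use:

```cpp
#include <functional>
#include <string>
#include <unordered_map>
#include <utility>

// Hypothetical sketch: the first request for a resource performs the
// (expensive) read; later requests reuse the cached string, so 20 tpaint
// runs pay for I/O once instead of 20 times.
class ResourceCache {
 public:
  using Loader = std::function<std::string(const std::string&)>;

  explicit ResourceCache(Loader aLoader) : mLoader(std::move(aLoader)) {}

  const std::string& Get(const std::string& aResId) {
    auto it = mCache.find(aResId);
    if (it == mCache.end()) {
      // Cache miss: perform the load exactly once per resource id.
      it = mCache.emplace(aResId, mLoader(aResId)).first;
    }
    return it->second;
  }

 private:
  Loader mLoader;
  std::unordered_map<std::string, std::string> mCache;
};
```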
There's still some minor perf impact. I'll try to cache the I/O tomorrow and see how that affects the numbers.
For the record: if we were to go this route, I suspect the biggest questions lie around rewriting L10nRegistry from JS to Rust or C++. It would require us to decide how we'll do I/O (sync/async), how we'll store data (per process? IPC?), and how we'll communicate and fall back to Fluent. As for the last one: we currently use an async generator in L10nRegistry, which makes it easy to lazily fall back in Localization.jsm as needed. This will likely look different in C++/Rust, and if we decide not to do async, the generator doesn't have to be async either.
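A synchronous version of that lazy fallback could be as simple as walking the bundles in locale order and stopping at the first hit, so later (fallback) locales are only consulted on a miss. This is a hypothetical sketch (`Bundle` and `FormatWithFallback` are made-up names), not the actual L10nRegistry design:

```cpp
#include <map>
#include <optional>
#include <string>
#include <vector>

// Hypothetical: a bundle is just a message-id -> translation map, and the
// vector is ordered from the most-preferred locale to the last fallback.
using Bundle = std::map<std::string, std::string>;

std::optional<std::string> FormatWithFallback(
    const std::vector<Bundle>& aBundles, const std::string& aId) {
  for (const Bundle& bundle : aBundles) {
    auto it = bundle.find(aId);
    if (it != bundle.end()) {
      return it->second;  // found in this locale; no need to fall back
    }
  }
  return std::nullopt;  // missing in every locale in the fallback chain
}
```

An async design would instead resolve each bundle lazily (only loading the fallback locale's files when a miss actually occurs), which is what the async generator gives us for free in JS today.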
Depends on D12176
Attachment #9025445 - Attachment is obsolete: true
With string/AST caching we're green: https://treeherder.mozilla.org/perf.html#/compare?originalProject=try&originalRevision=0ee7975faaed&newProject=try&newRevision=b39aec372803&framework=1

That of course assumes that we're sync. If we are sync, it may be even better to parse as soon as the `<link/>` is hit and localize in the ContentSink as the nodes get attached to the DOM, rather than waiting for the document to be done and looking up all the elements. Alternatively, we could collect references to localizable elements as they get inserted into the DOM and localize them once the async I/O is complete, or once parsing is done. I don't know which strategy is better.
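The second strategy (collect references, then localize once I/O completes) could be sketched as below; `Element` and `PendingLocalization` are hypothetical simplifications standing in for the real DOM types:

```cpp
#include <map>
#include <string>
#include <vector>

// Hypothetical minimal stand-in for a DOM element with a data-l10n-id.
struct Element {
  std::string l10nId;
  std::string textContent;
};

// Elements are queued as the parser attaches them to the DOM, and
// translated in a single pass once the translations are available.
class PendingLocalization {
 public:
  // Called as each localizable node is inserted into the DOM.
  void Track(Element* aElement) { mPending.push_back(aElement); }

  // Called once async I/O / parsing is done; drains the queue and
  // returns how many elements were actually translated.
  size_t Flush(const std::map<std::string, std::string>& aTranslations) {
    size_t applied = 0;
    for (Element* element : mPending) {
      auto it = aTranslations.find(element->l10nId);
      if (it != aTranslations.end()) {
        element->textContent = it->second;
        applied++;
      }
    }
    mPending.clear();
    return applied;
  }

 private:
  std::vector<Element*> mPending;
};
```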
Status: NEW → RESOLVED
Closed: 2 years ago
Resolution: --- → DUPLICATE
Duplicate of bug: 1552714