Open Bug 1546847 Opened 5 years ago Updated 1 month ago

Multi-second UI jank/freeze when serializing SessionStore data over to the SessionFile worker

Categories

(Firefox :: Session Restore, defect, P2)

66 Branch
defect

Tracking


REOPENED
Performance Impact medium

People

(Reporter: mail, Assigned: alexical, NeedInfo)

References

(Blocks 1 open bug)

Details

(Keywords: perf:responsiveness, Whiteboard: [bhr:SessionWorker.js])

Attachments

(4 files)

User Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:66.0) Gecko/20100101 Firefox/66.0

Steps to reproduce:

Switch between 4-5 tabs quickly, scroll up and down heavily.

Sometimes, just type in a text box (like the one I'm entering this message into), and experience multi-second delays until the text appears.

I have many tabs open (800, of which all except 15 are unloaded due to a browser restart), but I've also observed this issue with far fewer tabs (e.g. 50, with most of them unloaded).

Firefox 66 on Ubuntu 16.04.

Actual results:

The whole Firefox UI freezes mid-scroll or mid-typing, for 1-4 seconds.

I have obtained 3 profiles:

You can immediately see a huge 100% CPU section spanning multiple seconds.

Most of the time is spent in function

std::_Function_base::_Base_manager<mozilla::dom::network::ConnectionProxy::Create(mozilla::dom::WorkerPrivate*, mozilla::dom::network::ConnectionWorker*)::{lambda()#1}>::_M_manager(std::_Any_data&, std::_Any_data const&, std::_Manager_operation)

That is likely this lambda:

https://searchfox.org/mozilla-central/rev/d143f8ce30d1bcfee7a1227c27bf876a85f8cede/dom/network/ConnectionWorker.cpp#26

doing

RefPtr<StrongWorkerRef> workerRef = StrongWorkerRef::Create(
    aWorkerPrivate, "ConnectionProxy", [proxy]() { proxy->Shutdown(); });

So it appears that proxy->Shutdown() is hanging for multiple seconds?

By the way, I have no proxy set in the settings, and I've made the first 2 profiles with the "use system proxy" setting and the third one with the "no proxy" setting.

Also suspicious is that an additional ~30% of the time is spent in the function shown just below in the profile: strerror_l().

That is an insane amount of time for this function. strerror_l()'s purpose is to convert errno integer values into localised error messages. I suspect that what's happening might be something like a tight loop around a syscall that repeatedly fails, with strerror_l() being called in that loop (which I'm not sure makes sense, as no localised error message is shown, so perhaps all that localisation effort is wasted?).

I have had this problem for many Firefox releases, but have now decided to investigate it and make a profile.

Expected results:

No multi-second Firefox freeze!

Suffice it to say, this bug makes using Firefox a total nightmare and has caused a vast amount of aggression in my mild heart :)

Attached file strace2.log.zip

I've done an strace recording while experiencing a hang (attached).

I suspect that the hangs coincide with large numbers of madvise(..., MADV_DONTNEED) calls in a specific thread (14955).

grep -E '^14955' strace2.log | less

From this I guess that this thread is doing some form of memory management, most likely freeing previously malloc'ed buffers.

These buffers are allocated from the OS using a few large mmap() calls of pretty much exactly 10 MB each, but for some reason they are returned to the OS in small chunks.

It makes some intuitive sense that a function called Shutdown() is associated with this kind of memory freeing, but I'm not familiar enough with Firefox to know what exactly it's doing. Shutdown() doesn't do much besides setting things to nullptr, so perhaps some expensive destructors are being called?

Hi @Niklas Hambüchen, I've tested the issue on Ubuntu 18.04 with 500 tabs opened. I observed a delay when switching between some tabs, but since I opened such a large number of tabs, I guess this is a fairly normal thing given the number of processes involved in rendering.
Further, I will set a component, and if this isn't the right one please feel free to change it. Additionally, I will ask Mike Conley to take a look and help us with this issue.

Thanks for your contribution.

Flags: needinfo?(mail)

Hi @Mike, can you please take a look at this issue? I've added a comment with my attempt at reproducing it. Thanks

Flags: needinfo?(mconley)
Component: Untriaged → General
Product: Firefox → Core

Hi Niklas,

You've done a wonderful job investigating this. Thank you for the profiles - yes, the issue seems to be pretty clearly a problem when posting a message to a worker. I've put this into the queue for investigation from someone on the DOM team.

Component: General → DOM: Workers
Flags: needinfo?(mconley)
Whiteboard: [qf]

Although it's also possible, since this is ConnectionProxy, that this might ultimately be a Necko / Networking bug. Let's start with DOM :: Workers though and see where we get.

mail@nh2.me, any chance you could try to capture profiles using Nightly? That might give better symbols.

Yeah, the profiles unfortunately look somewhat broken. KeyframeValueEntry is all about animations and has nothing to do with ConnectionProxy.

Whiteboard: [qf] → [qf:investigate]

There are several issues here:

  1. navigator.connection should be disabled on desktop. You should have dom.netinfo.enabled set to false.
    Niklas, have you enabled this feature by yourself? Just asking to know if the bug is elsewhere :)

  2. ConnectionProxy creates a StrongWorkerRef but that runs on a Web Worker, which is a separate OS thread, and it should not block the UI.

  3. There is no relationship between KeyframeValueEntry and ConnectionProxy...

BTW, ConnectionProxy is not related to network 'proxy'. The 'proxy' name is because we proxy data from the main-thread to the worker thread when needed.

Hey Niklas, do you think you'd be able to supply us with another set of profiles?

Hey everyone,

I was very busy this week, sorry for that, I'm back now to help figure this out.

First, I've made another profile, this time on Firefox 67, and on my Desktop instead of my Laptop (also running Ubuntu 16.04):

https://perfht.ml/2HUWzXf

As in the other profile, ConnectionProxy shows up during the big hang.

(In reply to Andrea Marchesini [:baku] from comment #9)

  1. navigator.connection should be disabled on desktop. You should have dom.netinfo.enabled set to false.
    Niklas, have you enabled this feature by yourself? Just asking to know if the bug is elsewhere :)

dom.netinfo.enabled is on its default (false) in about:config on both my machines.
Is there something in the profile that suggests otherwise?
I'm not quite sure what navigator.connection is, it doesn't seem to be a setting.

BTW, ConnectionProxy is not related to network 'proxy'. The 'proxy' name is because we proxy data from the main-thread to the worker thread when needed.

Thanks for clarifying.

(In reply to Olli Pettay [:smaug] (vacation May 27-31) from comment #7)

any chance you could try to capture profiles using Nightly. That might give better symbols.

Yes I can try that now.

Usually using Nightly gives me a fresh profile. What's the best way to import my current profile over to Nightly? Should I just copy the contents over in the file system?

Also hi @mconley, we met once working together on Reviewboard for a while :)

Nevermind my profile question, the profile chooser came up so this was easy.

Here is the new profile you've asked for:

https://profiler.firefox.com/public/ac57006d9c12f5faec93ee68407b599c794750cd/calltree/?globalTrackOrder=0-1-2-3-4-5&hiddenGlobalTracks=1-2-3-4&localTrackOrderByPid=28335-0~&thread=6&v=3

Indeed it appears more detailed than the non-Nightly version.

With a lot of time being spent in:

  • JSStructuredCloneWriter::writeString
  • mozilla::BufferList::AllocateBytes

and also _nss_passwd_lookup() in libc (but that was already present in my previous non-Nightly desktop profile).

Does this help?

Flags: needinfo?(mail) → needinfo?(bugs)

(on vacation, but putting [qf] back, so that someone will take a look at this.)

Flags: needinfo?(bugs)
Whiteboard: [qf:investigate] → [qf]

Relevant part of the profile: http://bit.ly/2Mi04g7

Looks like we (Firefox frontend code) are spending a long time in Worker.postMessage calling into JSStructuredCloneWriter::writeString. So I think we're just sending a ridiculously long string to a worker for some reason. Unfortunately I can't tell what JS is involved here -- that's not shown in the profile. Maybe smaug or mconley might know how to figure that out? (mconley, you've already looked at profiles here a bit -- do you see anything interesting in the new one?)

(I wonder if this is e.g. a serialization of the user's about:config preferences, which we're handing off to child processes, or something like that? I don't really know what we use worker.postMessage for in frontend code.)

and also _nss_passwd_lookup() in libc (but that was already present in my previous non-Nightly desktop profile)

From some googling, it sounds like that really might just be strlen or some other glibc string-related function which has been mis-identified:
https://stackoverflow.com/questions/41444097/whats-nss-passwd-lookup-call-that-i-see-in-profiler-output-about#comment76706650_41454127

Flags: needinfo?(mconley)
Whiteboard: [qf] → [qf:p2:responsiveness]

It sounds like this is probably SessionStore[1]. I did some analysis related to this previously at https://bugzilla.mozilla.org/show_bug.cgi?id=1537694#c25 in which I think I found that a data structure goes over, not just strings, but it wouldn't be surprising for there to be a lot of (big) strings in the data-structure since it also stores SessionStorage. I'm going to speculatively move this to SessionStore but will stay cc'ed on the bug in case worker-related stuff crops up again.

1: https://searchfox.org/mozilla-central/source/browser/components/sessionstore

Component: DOM: Workers → Session Restore
Product: Core → Firefox
Blocks: ss-perf
Flags: needinfo?(mconley)
Summary: Multi-second UI jank/freeze in ConnectionProxy::Create → Multi-second UI jank/freeze when serializing SessionStore data over to the SessionFile worker
Whiteboard: [qf:p2:responsiveness] → [qf:p2:responsiveness][fxperf]
Whiteboard: [qf:p2:responsiveness][fxperf] → [qf:p2:responsiveness][fxperf:p3]

I think that moving this over to sessionstore just because it seems like that's where the callsite is feels a bit premature. Why would passing anything over to a worker cause jank? Isn't the whole use case for using a worker to mitigate janking the UI / blocking the main thread?

Has the sessionstore team made a mistake in using a worker for expensive I/O ops? I mean, there's this and another issue (bug 1556614) that suggest workers don't GC well enough because, among other possible reasons, they don't have incremental GC enabled (bug 941794).
Are these enough possible footguns to start discouraging their use in code we ship to users?

Component: Session Restore → DOM: Workers
Product: Firefox → Core

Using postMessage necessarily performs the steps of the StructuredSerialize(Internal) algorithm[1] on the calling thread. This is what is causing the jank. The size of the object graph needs to be reduced in each call (possibly by making more calls spread out over more separate tasks) and/or more of it needs to be made transferable.

Presumably the state representation of session store is an object graph that is updated in small deltas. From my code reading, it appears that these deltas are applied on the main thread, and then the entire large object graph is transmitted to the Worker when it's time to snapshot and write the state. If, instead, the canonical SessionStore state was maintained on the worker, with the deltas sent to the worker via postMessage as they happen in small pieces, that would markedly improve the jank situation as well as the GC situation by reducing GC churn. (However, you are absolutely right, GC on workers is something that we really need to invest time in, if only to understand the status quo.)
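For illustration, a minimal sketch of that worker-owned-state idea; the message shapes, helper functions, and worker URL here are hypothetical, not the current SessionStore code:

// Worker script (hypothetical) - the worker owns the canonical session state.
const state = { tabs: {} };

self.onmessage = ({ data }) => {
  if (data.type === "update") {
    // Apply a small delta; no large object graph ever crosses the thread boundary.
    state.tabs[data.tabId] = Object.assign(state.tabs[data.tabId] || {}, data.delta);
  } else if (data.type === "flush") {
    writeToDisk(JSON.stringify(state)); // hypothetical write helper
  }
};

// Main thread - each postMessage only structured-clones a tiny delta.
const worker = new ChromeWorker("resource://example/session-state-worker.js"); // hypothetical URL
worker.postMessage({ type: "update", tabId: 42, delta: { scroll: "0,1200" } });
worker.postMessage({ type: "flush" });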

Another alternative is to use Blobs more extensively in the data-structure. Blobs provide by-reference handles to potentially large amounts of underlying data and therefore are serialized/deserialized much more efficiently for postMessage purposes. A byte copy cost is paid when creating the Blob, and a byte copy cost still needs to be paid when an aggregate series of Blobs are flattened to a stream, but ideally the Blobs are created spread out over time, and the Blob flattening will occur either on the Stream Transport Service thread pool or on the worker itself.
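As a rough sketch of that by-reference property (the message shape and the chunk variable are made up for illustration):

// Main thread: the byte copy happens when the Blob is constructed,
// not when it is posted; only a handle is structured-cloned.
const blob = new Blob([someLargeSerializedChunk]); // hypothetical large string
worker.postMessage({ type: "chunk", payload: blob });

// Worker: flattening the Blob back to text happens off the UI thread.
self.onmessage = async ({ data }) => {
  const text = await data.payload.text(); // Blob.text()
  // ...append `text` to the snapshot being assembled...
};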

Note that the Workers & Storage team is planning to begin work on persistent SessionStorage (built on top of LocalStorage) soon, and this should help reduce the object graph size in terms of number/size of strings as well as object graph churn.

I'm going to move this back to session store since there's nothing directly actionable for this particular problem in Workers. If you'd like to discuss trying to further optimize the structured clone implementation in Firefox, it's probably appropriate to file/reference a bug in the JS engine for that. Feel free to needinfo me on this bug if you have any additional questions about worker interaction. I'm also available to discuss this via Zoom call or in-person at the all-hands if it's helpful. I understand that in general serialization costs are not a well surfaced or publicized cost in the web platform world and this is something we should try and address in documentation and elsewhere. For example, people are frequently surprised that the async IndexedDB API can jank the main thread, again for the same reasons. (Also, for spec reasons, IDB serialization is even more expensive thanks to its index magic.)

1: https://html.spec.whatwg.org/#structuredserializeinternal

Component: DOM: Workers → Session Restore
Product: Core → Firefox

The priority flag is not set for this bug.
:mikedeboer, could you have a look please?

For more information, please visit auto_nag documentation.

Flags: needinfo?(mdeboer)

Apology for the late reply.

Thanks Andrew, for the great explanation! Indeed, it seems that how and when the cost of serialization becomes high is not well publicized, and thus not well known. I think it would be good to start by making the Mozilla frontend teams more aware - I'll mention it at the next fx-desktop team meeting with a link to your comment.

Now, for next steps for this bug: you're right that the sessionstore data is an object graph that's updated in small deltas. On the main thread.
I think it's good that we chose not to maintain state in the worker, because not too long ago I had to work around the lesson learnt that long-lived (Chrome)Workers can crash intermittently - see bug 1427007.
So I think using Blobs might be a good alternative. I don't think that it's practical to turn parts of the object graph into Blobs, because I wouldn't know where to start. Only for serialized principals, perhaps? (Yeah, those can be HUGE strings.)
What we could do in the relatively short term is, when it's time to flush state to disk:

  1. Get the full state graph,
  2. Turn it into a big Blob, potentially chunked so that we are sure not to block the main thread,
  3. Send (copy) the blob over to the worker,
  4. Let the worker deal with a Blob and as I understand it, cleaning that up will be much faster because it's a thread-local copy(?)

Question that I'm left with: how do I turn the whole object into a Blob without serializing it to a string first? It's a plain (JSON) object without references, but changing from Structured Clone to JSON.stringify serialization doesn't win us much, or does it?
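One partial answer, sketched here as an assumption rather than a recommendation: you can't avoid stringifying entirely, but you can avoid building one monolithic string by handing the Blob constructor an array of smaller stringified pieces (assuming `state` is the collected session state from step 1), e.g.:

// Hypothetical sketch of steps 2-3 above: stringify per-window (or per some
// other unit) and let the Blob aggregate the pieces, rather than producing
// one giant intermediate string on the main thread.
const parts = [];
parts.push('{"windows":[');
state.windows.forEach((win, i) => {
  if (i > 0) {
    parts.push(",");
  }
  parts.push(JSON.stringify(win));
});
parts.push("]}");
const blob = new Blob(parts, { type: "application/json" });
worker.postMessage({ type: "write", blob }); // the Blob itself is sent by reference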

Status: UNCONFIRMED → NEW
Ever confirmed: true
Flags: needinfo?(mdeboer) → needinfo?(bugmail)
Priority: -- → P3
Priority: P3 → P2

(In reply to Mike de Boer [:mikedeboer] from comment #19)

Now, for next steps for this bug: you're right that the sessionstore data is an object graph that's updated in small deltas. On the main thread.
I think it's good that we chose not to maintain state in the worker, because not too long ago I had to work around the lesson learnt that long-lived (Chrome)Workers can crash intermittently - see bug 1427007.

I skim-read the bug and its precursor bug 1402267, and it sounds like the problem is that OS.File at some point breaks, rather than the Worker itself? It seems possible that OS.File could have a bug that either is local to the global it runs in, or that is process-wide but is cleaned up by the worker's global getting torn down. An actual worker crash will take down the whole process. The only other type of emergent state problem that could cause worker weirdness would be if the stack limit started getting hit, but it's basically impossible in a worker to induce the type of nested event loop shenanigans that can happen on the main thread unless devtools is involved. (The devtools debugger fundamentally involves nested event loops.) Well, or if some code in the worker calls close() thinking that the method is in its scope but it really is calling WorkerGlobalScope.close() and thereby shutting down the worker.

Is there a test that we could use to attempt to reproduce the underlying leak by modifying a few constants? If the problem reproduces on Linux and we can make a test that fails when you decide to tear down the worker, it would be very easy to capture a pernos.co trace and investigate the problem. If it's Windows-specific, there are other investigations that a reproducing test would make easier. I would very much like to avoid people not trusting Workers. (That said, we do currently have outstanding bugs on improving when we trigger worker garbage collection which have also involved SessionWorker, and it's probably not a bad idea to be able to architect things so that the workers can be/are restarted periodically to help deal with any emergent leaks, etc. Plus it's nice for testing!)

So I think using Blobs might be a good alternative. I don't think that it's practical to turn parts of the object graph into Blobs, because I wouldn't know where to start. Only for serialized principals, perhaps? (Yeah, those can be HUGE strings.)
What we could do in the relatively short term is, when it's time to flush state to disk:

  1. Get the full state graph,
  2. Turn it into a big Blob, potentially chunked so that we are sure not to block the main thread,
  3. Send (copy) the blob over to the worker,
  4. Let the worker deal with a Blob and as I understand it, cleaning that up will be much faster because it's a thread-local copy(?)

Question that I'm left with: how do I turn the whole object into a Blob without serializing it to a string first? It's a plain (JSON) object without references, but changing from Structured Clone to JSON.stringify serialization doesn't win us much, or does it?

So I just looked at the OS.File API for workers and realized with some horror that the implementation still appears to be using js-ctypes. Which could very possibly explain the apparent instability you're seeing in workers if that has a bug. I hadn't realized that only the main-thread implementation was altered to use native code. Unfortunately, its API currently only takes a single ArrayBuffer. Also unfortunately, apparently the native (main-thread) implementation doesn't support compression. So we'll just take the status quo there for granted for right now.

I'm not sure Blobs will actually be helpful as a first step given that you eventually need to surface the data as a contiguous ArrayBuffer. They're more of a win when you can hand off the Blob directly to an API.

Chunking optimizations

I think we're agreed that bullet 2 above is where the complexity is and that's the best first step. There's no avoiding the need to incrementally consume the object graph. I would propose implementing a JS iterator that traverses your session graph and is capable of providing a sufficiently consistent result when consumed over multiple separate setTimeout()-chunked tasks. Performing a shallow clone of top-level data structures, or simply relying on existing JS iterator semantics, may be sufficient. The iterator would yield strings, interleaving JSON.stringify-ed small portions of your object graph plus the short JSON glue-strings. Your driver logic would accumulate those strings in an array until some length constant is reached, then postMessage that across to the worker, then re-schedule itself with a setTimeout(0).
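To make that concrete, here's a rough sketch under those assumptions; the generator shape, the CHUNK_BUDGET constant, and the message names are all made up for illustration:

// Generator that walks the session state and yields JSON fragments.
function* serializeSession(state) {
  yield '{"windows":[';
  let first = true;
  for (const win of state.windows) {
    if (!first) {
      yield ",";
    }
    first = false;
    yield JSON.stringify(win); // small per-window stringify, not the whole graph
  }
  yield "]}";
}

const CHUNK_BUDGET = 512 * 1024; // assumed budget, in characters per message

// Driver: accumulate fragments, post a batch, then yield back to the event loop.
function driveSerialization(iterator, worker) {
  const batch = [];
  let length = 0;
  for (const piece of iterator) {
    batch.push(piece);
    length += piece.length;
    if (length >= CHUNK_BUDGET) {
      worker.postMessage({ type: "chunk", pieces: batch });
      setTimeout(() => driveSerialization(iterator, worker), 0);
      return;
    }
  }
  worker.postMessage({ type: "chunk", pieces: batch });
  worker.postMessage({ type: "done" });
}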

The worker would receive the strings, concatenating them. (Our JS engine uses a "rope" implementation under the hood so that the concatenated strings need not involve new allocations but can instead refer to the already existing strings.) Once a final postMessage of null or something comes across the wire, it encodes to an ArrayBuffer and invokes the OS.File API.
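A matching worker-side sketch (the snapshot path, message names, and write options are assumptions):

// Worker side: accumulate fragments cheaply (SpiderMonkey ropes), then
// encode and write once the final message arrives.
let pending = "";

self.onmessage = ({ data }) => {
  if (data.type === "chunk") {
    for (const piece of data.pieces) {
      pending += piece; // builds a rope; no big reallocation yet
    }
  } else if (data.type === "done") {
    const bytes = new TextEncoder().encode(pending); // flatten + UTF-8 encode once
    OS.File.writeAtomic(snapshotPath, bytes, { tmpPath: snapshotPath + ".tmp" }); // assumed path
    pending = "";
  }
};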

Blob Optimizations

Once you have that, the question is how amenable your data structures are to reusing previously computed values and whether Blobs help simplify the effort required. Conceptually, you could get what Blobs give you by having a convention of sending messages like { define: ID, value: PAYLOAD }, { reuse: ID }, and { free: ID } over the wire and maintaining a Map on the receiving end. When you new Blob([PAYLOAD]) you're paying a somewhat similar cost to sending the hypothetical "define" message to the worker. (Although when the worker receives it, it doesn't pay the deserialization cost.) The benefit of the Blob is that you no longer need to do the book-keeping around sending a "free" message.
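A sketch of that manual define/reuse/free convention on the worker side (message and helper names are hypothetical):

const definedPayloads = new Map();

self.onmessage = ({ data }) => {
  switch (data.type) {
    case "define":
      definedPayloads.set(data.id, data.value); // pay the copy/deserialization once
      break;
    case "reuse":
      appendToSnapshot(definedPayloads.get(data.id)); // hypothetical consumer
      break;
    case "free":
      definedPayloads.delete(data.id); // the explicit book-keeping a Blob spares you
      break;
  }
};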

So for example, if you have a spot on your data-structure that could be, say, "cachedBlob" where:

  • When the magic iterator gets to the data-structure, if there's already a "cachedBlob", it just yields that. (This is why the driver logic would accumulate them in an array.) When the worker receives the Blob, it invokes text() on the Blob to asynchronously turn it into a string. This is a new method added in bug 1557121. (The worker logic would need to maintain a queue of received strings which it processes asynchronously or otherwise take care to ensure that it doesn't accidentally interleave strings incorrectly if a new postMessage message comes in before the text() call resolves.)
  • Otherwise, it invokes JSON.stringify() on the payload of the structure, news a Blob, puts the Blob on the data-structure and sends the Blob over the wire.
  • Whenever the data-structure is mutated, cachedBlob is set to null.

The net result would be that main-thread processing time should improve because of the caching and the mechanism doesn't introduce too many additional parts, plus you can keep restarting the SessionWorker without worrying that it needs to maintain some new, complex state. Additionally, if we manage to update the OS.File API to take a Blob or Stream, then your worker logic can stop having to invoke text() which will massively improve the GC burden in the worker. Probably the best hope is that we implement https://wicg.github.io/native-file-system/ and can expose some additional chrome privileged super-powers that session store can use. (Or maybe the C++ session store implementation will happen first?)
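For what it's worth, a rough sketch of the cachedBlob convention from the list above (field and function names are illustrative, not actual SessionStore code):

// Main thread: reuse a previously built Blob unless the entry was mutated.
function* serializeEntry(entry) {
  if (!entry.cachedBlob) {
    entry.cachedBlob = new Blob([JSON.stringify(entry.payload)]); // stringify once, cache it
  }
  yield entry.cachedBlob; // later flushes reuse the cached Blob for free
}

function mutateEntry(entry, changes) {
  Object.assign(entry.payload, changes);
  entry.cachedBlob = null; // invalidate so the next flush re-serializes
}

// Worker: fragments may be strings or Blobs; Blobs resolve asynchronously,
// so ordering relative to plain strings must be preserved by the caller.
async function resolveFragment(fragment) {
  return typeof fragment === "string" ? fragment : await fragment.text();
}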

Flags: needinfo?(bugmail)

Worker.postMessage shows up in the top reported hangs, as JSStructuredCloneWriter::startWrite, currently in spots #13 and #38 on this BackgroundHangReporter dashboard: http://queze.net/bhr/test/

This bug is causing serious UI responsiveness issues in the wild and needs to be treated with high priority.

(In reply to Markus Stange [:mstange] from comment #21)

This bug is causing serious UI responsiveness issues in the wild and needs to be treated with high priority.

But how? I'd love to move away from Workers - or OS.File in general - but I certainly don't have the resource(s) at my disposal to fit that into any kind of roadmap.

Can the perf team perhaps consider picking this up instead? I mean, fixing this requires little to no knowledge of Session Restore's inner workings other than that it throws a big JSON blob over the fence and requires it to be persisted. When or where that happens can be flexible.
For example, I'd be quite happy if we can use SQLite (using SQLite.jsm or IndexedDB as frontends) to perform this task instead.

Whiteboard: [qf:p2:responsiveness][fxperf:p3] → [qf:p2:responsiveness][fxperf:p3][bhr:SessionWorker.js]
Assignee: nobody → dothayer
Status: NEW → ASSIGNED

Read this as a first step. It's the easiest first step I could think of to
both reduce the quantity of stuff we serialize and ship to the worker as
well as to spread it out over multiple messages.

Anyway, the motivation is pretty simple. Taking a look at a session store
file on disk, a giant chunk of it is base64 encoded tab icons. I suspect
that in many cases these are not distinct. For my session store it's about
90% the same repeated searchfox icon over and over.

So what I did was I changed the "image" property of the tab to be a reference
into a deduplicated cache of objects (in this case strings). Whenever the tab
icon changes, we drop a reference to its cache entry and add a reference to a
new or existing entry. Each time a cache entry is added or deleted, we send
a message to the worker to update its own copy of the cache. This does
represent a memory hit, since the cache is maintained on the worker as well as
the main thread, but I think it's going to be minor, and it's only in one
process. Given the deduplication there is the possibility of an overall
reduction in memory use? This needs more testing.

Once it comes time to write the session data to disk, we send the payload with
"image" entries referencing IDs in the cache. When the worker gets the message
to write, it adds its internal cache to the object, which it then serializes
to JSON and writes to disk as usual.

When reading the data off disk, we take the cache items that had been written
and we slowly populate the worker's internal cache with them (to not overload
during startup with a giant message). And when populating tab icons of tabs in
the tab strip, we look up the image in the main thread copy of the cache. Also,
if we cannot find the entry, we assume that the image is just the raw
representation of the image. This ensures that we interpret a sessionstore file
from prior to this patch correctly.

Additionally, since we have the cache duplicated on both threads, if the worker
gets terminated for some reason, we rehydrate it with the snapshot of the cache
from when we noticed it was a problem.
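A simplified sketch of the scheme described above (the field names, message types, and refcounting details are illustrative, not necessarily what the actual patch does):

// Main-thread cache: icon data URI -> cache id, with reference counts.
const iconCache = new Map(); // id -> { value, refCount }
let nextCacheId = 0;

function internIcon(dataURI) {
  for (const [id, entry] of iconCache) {
    if (entry.value === dataURI) {
      entry.refCount++;
      return id; // deduplicated: reuse the existing entry
    }
  }
  const id = ++nextCacheId;
  iconCache.set(id, { value: dataURI, refCount: 1 });
  worker.postMessage({ type: "cache-add", id, value: dataURI }); // keep the worker's copy in sync
  return id;
}

function releaseIcon(id) {
  const entry = iconCache.get(id);
  if (entry && --entry.refCount === 0) {
    iconCache.delete(id);
    worker.postMessage({ type: "cache-remove", id });
  }
}

// At flush time each tab carries only `image: <id>`; the worker splices its
// copy of the cache back into the object before JSON-serializing to disk.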

I suspect some tests will need to be updated, or maybe many tests. However I
wanted to throw this patch past someone with more knowledge of the session
store's inner workings before throwing a bunch of time at that.

Attachment #9220015 - Attachment description: Bug 1546847 - Cache tab icons in worker to avoid serialization r?mikedeboer → Bug 1546847 - Cache tab icons in worker to avoid serialization r?nika

Leaving a needinfo on myself - I still need to get to fixing tests and whatnot.

Flags: needinfo?(dothayer)
Pushed by dothayer@mozilla.com:
https://hg.mozilla.org/integration/autoland/rev/e8ce702d4744
Cache tab icons in worker to avoid serialization r=kashav
Status: ASSIGNED → RESOLVED
Closed: 3 years ago
Resolution: --- → FIXED
Target Milestone: --- → 92 Branch
Regressions: 1723725

We're in soft freeze; should we consider backing this out to give ourselves more time to address the regressions?

(In reply to :Gijs (he/him) from comment #27)

We're in soft freeze; should we consider backing this out to give ourselves more time to address the regressions?

Yeah, I think so :(

Flags: needinfo?(dothayer)

So, annoyingly, a strict backout will cause Nightly users' sessions to lose favicons. I think I will need to add a little snippet into the session worker that repopulates images from the cache data if it exists.

Pushed by dothayer@mozilla.com:
https://hg.mozilla.org/integration/autoland/rev/5d6d9bae4dd6
Backed out bug 1546847 and related revs r=Gijs
https://hg.mozilla.org/integration/autoland/rev/405805c207af
SessionStore _cachedObjs fixup code for backout r=Gijs
Status: RESOLVED → REOPENED
Resolution: FIXED → ---
Target Milestone: 92 Branch → ---

I'm going to monitor for the next day of BHR data, but it's looking from the data from August 3 like the image caching did not have much of an impact: https://fqueze.github.io/hang-stats/#date=20210803&row=2

So I suspect this is session history stuff being a particular problem, and not images. I think the same cache built out for images should be usable by other things, but it will require a bit of investigation.

Performance Impact: --- → P2
Whiteboard: [qf:p2:responsiveness][fxperf:p3][bhr:SessionWorker.js] → [fxperf:p3][bhr:SessionWorker.js]
Depends on: 1752853

Doug, this should be substantially better now due to bug 1752853 and losing OS.File - are you still likely to work on this in the near future?

Flags: needinfo?(dothayer)
Severity: normal → --
See Also: → 1768946
See Also: → 1849393
Whiteboard: [fxperf:p3][bhr:SessionWorker.js] → [bhr:SessionWorker.js]