Open Bug 1391077 Opened 7 years ago Updated 2 years ago

Expanding a big array causes protocol slowness due to large packet size

Categories

(DevTools :: Console, defect, P3)


Tracking

(firefox57 fix-optional)

Tracking Status
firefox57 --- fix-optional

People

(Reporter: bgrins, Unassigned)

References

(Blocks 1 open bug)

Details

(Whiteboard: [newconsole-reserve])

Attachments

(1 file)

Blocking 1308566 for now but I'm not sure how much of this is also triggered by the variables view (the slowness is mostly on the backend for creating the Object Actors and sending the packet over).

STR:
Run this in the console: `Array(1e5).fill(0)`
Expand the Array

Profile:
https://perf-html.io/public/08792e9ab97ffa7f180ae5376cc38bf73be4375b/calltree/?hiddenThreads=&implementation=js&range=2.1041_3.2990&thread=2&threadOrder=0-2-3-4-1

About half of the time is spent in the onPrototypeAndProperties handler (ObjectActor) and the other half is spent sending the huge packet back to the parent process.
Attached file packet.txt
Here's the packet this ends up sending (gathered with devtools.dump.emit and browser.dom.window.dump.enabled prefs on).

A couple ideas (not mutually exclusive):
1) Turn the configurable/writable/enumerable properties in the packet into a bitmask or something much smaller than spelling each one out (a backwards-incompatible RDP change; sketched below)
2) Load only the buckets on the initial expansion and then wait to fetch properties until the buckets are expanded
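A minimal sketch of what idea 1 could look like; the constants and helper names here are hypothetical, not the actual RDP descriptor format:
```
// Hypothetical sketch for idea 1: pack the three boolean descriptor flags
// into one small integer instead of spelling out three key/value pairs per
// property. The flag names match standard property descriptors; the helpers
// are made up for illustration.
const CONFIGURABLE = 1 << 0;
const ENUMERABLE   = 1 << 1;
const WRITABLE     = 1 << 2;

function packDescriptorFlags({ configurable, enumerable, writable }) {
  return (configurable ? CONFIGURABLE : 0) |
         (enumerable   ? ENUMERABLE   : 0) |
         (writable     ? WRITABLE     : 0);
}

function unpackDescriptorFlags(flags) {
  return {
    configurable: !!(flags & CONFIGURABLE),
    enumerable:   !!(flags & ENUMERABLE),
    writable:     !!(flags & WRITABLE),
  };
}

// A plain array element (configurable/enumerable/writable all true) would be
// sent as the number 7 instead of three spelled-out booleans.
```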
Whiteboard: [console-html][triage]
The frontend part is already tracked in https://github.com/devtools-html/devtools-core/issues/552 .
I think this is the easiest way to go for now: the variables view was doing it that way, so we already have the server functions to load only slices of an object's properties.

Bug 1387823 is also related I think
See Also: → 1387823
See bug 1023386 for how this was implemented in the old console.
I replicated Chrome's behavior at the time I implemented that bug.
The slicing algorithm is quite simple: every time we have more than 2000 property names, we split them into 4 buckets. Since each level only ever shows 4 slices, we are sure we never display too many at once.
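A rough sketch of that splitting, using the numbers from this comment (the real implementation is the one referenced in bug 1023386):
```
// Recursive 4-way split, as described above: whenever a range holds more than
// 2000 property names, break it into 4 buckets and split those again.
const MAX_NAMES_PER_BUCKET = 2000;
const BUCKET_COUNT = 4;

function bucketize(names, offset = 0) {
  // Small enough: display the properties directly.
  if (names.length <= MAX_NAMES_PER_BUCKET) {
    return { type: "leaf", offset, names };
  }
  // Too many: create 4 buckets, each of which may split again, so no level of
  // the tree ever shows more than 4 slices.
  const size = Math.ceil(names.length / BUCKET_COUNT);
  const buckets = [];
  for (let i = 0; i < names.length; i += size) {
    buckets.push(bucketize(names.slice(i, i + size), offset + i));
  }
  return { type: "buckets", offset, buckets };
}

// For Array(1e5) this yields 4 top-level buckets of 25000 indices, each of
// which splits again until the leaves hold at most 2000 names.
```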

Otherwise, do you already use client.enumProperties and its `iterator` feature?
  http://searchfox.org/mozilla-central/source/devtools/shared/client/main.js#2607-2635
  http://searchfox.org/mozilla-central/source/devtools/shared/client/main.js#2767-2837

It looks like you do:
  http://searchfox.org/mozilla-central/source/devtools/client/webconsole/new-console-output/actions/messages.js#81
But without really using its main feature: you shouldn't fetch slices from 0 to iterator.count, but instead only the slices you actually have to display. The iterator actor has a "names" request which helps you find the name of the property at a given offset. I don't know if you added slicing for non-array objects; this also matters for supporting slicing on objects with many properties, like huge JSON objects.
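A hedged sketch of the flow being suggested here; the `enumProperties`, `count`, and `slice` names come from the comment and links above, but the exact signatures are approximate:
```
// Fetch only the slice of properties the UI is about to display, instead of
// asking for everything from 0 to iterator.count up front. Method shapes are
// approximate, not the exact client API.
async function fetchVisibleSlice(objectClient, start, count) {
  // Ask the server for a property iterator rather than all ownProperties.
  const { iterator } = await objectClient.enumProperties({});

  // iterator.count tells us how many properties exist without fetching them.
  const end = Math.min(start + count, iterator.count);

  // Pull only the visible range; per the comment above, the iterator also has
  // a "names" request to look up the property name at a given offset.
  const { ownProperties } = await iterator.slice(start, end - start);
  return ownProperties;
}
```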
Flags: qe-verify?
Priority: -- → P3
Whiteboard: [console-html][triage] → [reserve-console-html]
We could also consider compressing big packets before sending cross process - this particular packet is 7.4MB and after gzipping it is 270KB
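Purely as an illustration of the size win being described (not how Telemetry or the message manager would actually do it), gzipping a JSON packet with the modern CompressionStream API looks roughly like this:
```
// Illustration only: measure how much a JSON packet shrinks under gzip.
// Assumes a context where CompressionStream is available.
async function gzipPacket(packet) {
  const json = JSON.stringify(packet);
  const stream = new Blob([json])
    .stream()
    .pipeThrough(new CompressionStream("gzip"));
  const compressed = await new Response(stream).arrayBuffer();
  console.log(`raw: ${json.length} chars, gzipped: ${compressed.byteLength} bytes`);
  return compressed;
}
```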
(In reply to Brian Grinstead [:bgrins] from comment #4)
> We could also consider compressing big packets before sending cross process
> - this particular packet is 7.4MB and after gzipping it is 270KB

This is done for Telemetry: https://dxr.mozilla.org/mozilla-central/source/toolkit/components/telemetry/TelemetrySend.jsm#147
(In reply to Brian Grinstead [:bgrins] from comment #5)
> (In reply to Brian Grinstead [:bgrins] from comment #4)
> > We could also consider compressing big packets before sending cross process
> > - this particular packet is 7.4MB and after gzipping it is 270KB
> 
> This is done for Telemetry:
> https://dxr.mozilla.org/mozilla-central/source/toolkit/components/telemetry/
> TelemetrySend.jsm#147

Alex, have we thought about doing this before? And do you have any ideas how we could handle backwards compatibility for older clients that wouldn't support receiving compressed packets?
Flags: needinfo?(poirot.alex)
It may be worth looking at for cases where we really have to pass big chunks of data, like HTTP bodies in the Network Monitor.
But here, it looks like we should lazily load things.
FYI, merely creating the grips for all property values will slow things down a lot; this isn't only about RDP/the message manager being slow.

Also, instead of compressing, I think we should look into:
  http://searchfox.org/mozilla-central/source/dom/webidl/StructuredCloneHolder.webidl
and see if we can make specific optimizations for non-remote communications (i.e. on Firefox Desktop).
Flags: needinfo?(poirot.alex)
(In reply to Alexandre Poirot [:ochameau] from comment #7)
> It may be worth looking at for cases where we really have to pass big chunks
> of data, like HTTP bodies in the Network Monitor.
> But here, it looks like we should lazily load things.
> FYI, merely creating the grips for all property values will slow things down
> a lot; this isn't only about RDP/the message manager being slow.

I don't see it as an either / or for this particular bug. We should definitely lazily load the buckets, but even individual buckets can get quite large depending on the size and contents of the array, and other scenarios would benefit from improved transfer for big packets.

> Also, instead of compressing, I think we should look into:
> http://searchfox.org/mozilla-central/source/dom/webidl/StructuredCloneHolder.webidl
> and see if we can make specific optimizations for non-remote communications
> (i.e. on Firefox Desktop).

I'd be happy to focus optimizations on non-remote transport. It will get rid of backwards compatibility issues as well.
(In reply to Alexandre Poirot [:ochameau] from comment #7)
> Also, instead of compressing, I think we should look into:
> http://searchfox.org/mozilla-central/source/dom/webidl/StructuredCloneHolder.webidl
> and see if we can make specific optimizations for non-remote communications
> (i.e. on Firefox Desktop).

Looked a bit more into this and AFAICT the messageManager is already using this: https://dxr.mozilla.org/mozilla-central/source/dom/base/nsFrameMessageManager.cpp#625
See Also: → 1392180
Depends on: 1399460
Bug 1399460 made expanding the array from comment 0 fast. Should we re-purpose this bug to investigate the protocol side of things?
Flags: needinfo?(bgrinstead)
(In reply to Nicolas Chevobbe [:nchevobbe] from comment #10)
> Bug 1399460 made expanding the array from comment 0 fast. Should we
> re-purpose this bug to investigate the protocol side of things?

Yes, good idea. We should also investigate how chrome RDP handles this.
Flags: needinfo?(bgrinstead)
Summary: Expanding a big array is slow → Expanding a big array causes protocol slowness due to large packet size
Talked with billm about this, and most likely zipping won't help, since the cost of serializing the object in the first place will be similar to the structured clone. The best bet is to somehow shorten the packet (either simplify the descriptor format, have a way to specify repeated values, etc.). Also, let's check how the Chrome protocol handles object previews and see if it's smaller to start with.
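A hypothetical sketch of the "repeated values" idea: run-length encode identical descriptors, which are extremely common for a packet like the one from Array(1e5).fill(0). The wire format and function name here are invented for illustration:
```
// Collapse runs of identical property descriptors into one entry plus a
// repeat count. Format and function name are hypothetical.
function compressRepeatedDescriptors(descriptors) {
  const runs = [];
  for (const desc of descriptors) {
    const last = runs[runs.length - 1];
    if (last && JSON.stringify(last.descriptor) === JSON.stringify(desc)) {
      last.repeat++;
    } else {
      runs.push({ descriptor: desc, repeat: 1 });
    }
  }
  return runs;
}

// Array(1e5).fill(0) yields 100000 identical descriptors, which collapse into
// a single { descriptor, repeat: 100000 } entry.
```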
Could we have some sort of keys dictionary sent in the packet to reduce the size? Like:
```
{
  mapping: {
    0: "from",
    1: "to",
    2: "ownProperties",
    3: "prop-a",
    4: "prop-b",
  },
  0: "actor1",
  1: "actor2",
  2: {
    3: "x",
    4: []
  }
}
```
And we could then recreate the original object.
I'm not sure (1) that we want this for every packet, or (2) that it would improve performance, since we'd have to walk the whole tree to build the dictionary.
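For what it's worth, decoding such a packet back into the original shape could look like this (the packet layout is the hypothetical one from the example above):
```
// Rebuild the original object from the dictionary-keyed packet sketched above.
function decodePacket(packet) {
  const { mapping, ...data } = packet;
  const decode = value => {
    if (value === null || typeof value !== "object" || Array.isArray(value)) {
      return value;
    }
    const result = {};
    for (const [key, val] of Object.entries(value)) {
      result[mapping[key] ?? key] = decode(val);
    }
    return result;
  };
  return decode(data);
}

// Recreates { from: "actor1", to: "actor2",
//             ownProperties: { "prop-a": "x", "prop-b": [] } }
```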
Priority: P3 → P4
Blocks: console-perf
Flags: qe-verify?
Priority: P4 → P2
Whiteboard: [reserve-console-html] → [newconsole-mvp]
Priority: P2 → P3
Whiteboard: [newconsole-mvp] → [newconsole-reserve]
Depends on: 1450944
Product: Firefox → DevTools
Severity: normal → S3