Bug 1400915 (Open): opened 7 years ago, updated 2 years ago

stylo: Still have a lot of Display, Position, Border structs on the HTML5 page even with STYLO_THREADS=1

Categories

(Core :: CSS Parsing and Computation, defect, P3)

Version: 53 Branch
Type: defect

Tracking

Tracking Status
firefox57 --- wontfix
firefox58 --- affected

People

(Reporter: bzbarsky, Unassigned)

References

(Blocks 1 open bug)

Details

Per the about:memory output I just posted in bug 1397380, STYLO_THREADS=1 on the HTML5 page has, under servo-style-structs

  2.46 MB (16.10%) ── Display
  1.13 MB (07.41%) ── Position
  0.85 MB (05.55%) ── Border

The Gecko numbers for the no-stylo output are:

  0.86 MB (-5.65%) ── Display
  0.21 MB (-1.34%) ── Position
  0.02 MB (-0.13%) ── Border

The other reset structs are also much bigger on the stylo side, but these three are the largest in absolute terms.

Is that still expected with our TLS sharing patches?
Summary: stylo: Still have a lot of Display, Position, Border structs on the HTML5 page event with STYLO_THREADS=1 → stylo: Still have a lot of Display, Position, Border structs on the HTML5 page even with STYLO_THREADS=1
(In reply to Boris Zbarsky [:bz] (still digging out from vacation mail) from comment #0)
> Is that still expected with our TLS sharing patches?

I do know that, for STYLO_THREADS=1, the two-tiered cache (effectively a larger cache) made a big difference on the html5 spec. I decided not to land that though, since it didn't seem to move the needle on AWSY and the aforementioned testing conditions aren't very realistic.

That said, it would be good to verify that the cache is working properly. Is there some eviction parameter we can tune to see if the difference goes away with a larger cache?
Flags: needinfo?(emilio)
Flags: needinfo?(cam)
Priority: -- → P3
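For illustration, the "two-tiered cache" mentioned above amounts to a small primary tier backed by a larger fallback tier, and the primary capacity is exactly the kind of tunable parameter being asked about. A minimal Rust sketch, with hypothetical names (not the actual Stylo code):

  use std::collections::HashMap;
  use std::hash::Hash;

  struct TwoTierCache<K, V> {
      primary: HashMap<K, V>,
      secondary: HashMap<K, V>,
      // The tunable knob: raising it emulates a larger single-tier cache.
      primary_cap: usize,
  }

  impl<K: Hash + Eq + Clone, V: Clone> TwoTierCache<K, V> {
      fn new(primary_cap: usize) -> Self {
          TwoTierCache {
              primary: HashMap::new(),
              secondary: HashMap::new(),
              primary_cap,
          }
      }

      fn get(&mut self, key: &K) -> Option<V> {
          if let Some(v) = self.primary.get(key) {
              return Some(v.clone());
          }
          // Primary miss: consult the larger secondary tier, promoting hits.
          // (A stale copy may linger in the secondary tier; harmless here.)
          if let Some(v) = self.secondary.get(key).cloned() {
              self.insert(key.clone(), v.clone());
              return Some(v);
          }
          None
      }

      fn insert(&mut self, key: K, value: V) {
          if self.primary.len() >= self.primary_cap {
              // Demote the primary tier wholesale rather than dropping
              // entries; nothing is lost until the whole cache is cleared.
              self.secondary.extend(self.primary.drain());
          }
          self.primary.insert(key, value);
      }
  }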
The TLS struct cache doesn't evict anything during a traversal.  It's only cleared at the end of a traversal.

The rule node cache patches I had written added a few additional conditions beyond font-size/WritingMode, which helped with some of the struct overhead that came from the lack of anonymous box sharing.  I can try adding those conditions to the TLS cache.
Flags: needinfo?(cam)
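To make that lifecycle concrete, here is a minimal Rust sketch (hypothetical types and names, not Servo's actual style code) of a per-thread struct cache that grows freely during a traversal and is only cleared wholesale when the traversal ends:

  use std::cell::RefCell;
  use std::collections::HashMap;

  type StructKey = u64;        // stand-in for whatever conditions the cache keys on
  type CachedStruct = Vec<u8>; // stand-in for a shared style struct

  thread_local! {
      // One cache per worker thread; with STYLO_THREADS=1 there is one cache.
      static STRUCT_CACHE: RefCell<HashMap<StructKey, CachedStruct>> =
          RefCell::new(HashMap::new());
  }

  fn lookup_or_build(key: StructKey, build: impl FnOnce() -> CachedStruct) -> CachedStruct {
      STRUCT_CACHE.with(|cache| {
          // No eviction here: entries accumulate for the whole traversal.
          cache.borrow_mut().entry(key).or_insert_with(build).clone()
      })
  }

  fn end_traversal() {
      // The only point at which entries go away.
      STRUCT_CACHE.with(|cache| cache.borrow_mut().clear());
  }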
(In reply to Cameron McCormack (:heycam) from comment #2)
> I can try adding those conditions to the TLS cache.

OK, sounds good; probably best to postpone that to next week, though.
Yeah, also note that we don't share per struct, but per whole ComputedValues, which can affect sharing negatively.
Flags: needinfo?(emilio)
Oh, I didn't realize that.  I thought we had sharing per-struct, because we already had per-ComputedValues sharing via the existing sharing mechanism...
(In reply to Boris Zbarsky [:bz] (still digging out from vacation mail) from comment #5)
> I thought we had sharing per-struct, because we already had
> per-ComputedValues sharing via the existing sharing mechanism...

Nope, we share all the reset structs at once or none of them.

I suspect this may actually explain the discrepancy, but I haven't investigated it.
That's a great point.  I did see some increase in sharing in my patches when I made them share per struct, so that definitely sounds worth trying, in addition to the extra conditions.
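To illustrate the distinction, here is a minimal Rust sketch (hypothetical types, not Servo's actual ComputedValues layout) contrasting the two granularities. With all-or-nothing sharing, two elements that differ in any single reset struct end up duplicating every reset struct; per-struct sharing still deduplicates the structs that match:

  use std::sync::Arc;

  #[derive(Clone)] struct Display;
  #[derive(Clone)] struct Position;
  #[derive(Clone)] struct Border;

  // All-or-nothing: the reset structs live behind a single shared pointer.
  struct ResetBundle {
      display: Display,
      position: Position,
      border: Border,
  }
  struct ComputedValuesBundled {
      // Differ anywhere in the bundle and you duplicate all of it.
      reset: Arc<ResetBundle>,
  }

  // Per-struct: each reset struct is shared independently, so elements that
  // differ only in, say, Position can still share Display and Border.
  struct ComputedValuesPerStruct {
      display: Arc<Display>,
      position: Arc<Position>,
      border: Arc<Border>,
  }

The per-struct layout costs an extra pointer per struct, but each struct gains its own sharing opportunities, which is the increase in sharing described above.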
Severity: normal → S3