stylo: Servo CSS stylesheet parsing is 35% slower than Gecko's on myspace Talos testcase

Component: CSS Parsing and Computation
Reported: 7 months ago
Last updated: 5 days ago

(Reporter: bholley, Assigned: SimonSapin)

(Depends on: 3 bugs, Blocks: 3 bugs)

Firefox Tracking Flags: (Not tracked)


I did some quick profiling on Talos based on Shing's results in bug 1331578. I focused on the myspace testcase, where Stylo is 155% slower than stock Gecko.

One piece of low-hanging fruit I found was CSS parsing overhead. Servo spends 153 ms parsing the stylesheet:

While gecko spends 23 ms parsing the same stylesheet:

There's no reason I'm aware of that we shouldn't be able to hit parity here.
Err, this is a better view of the Servo CSS parser profile:

May help here.

In general, optimizing this is not super urgent, since it is unlikely to impact major design decisions, and is easy to separate out when measuring other parts of performance. Still, we should pick off any low hanging fruit we see sooner rather than later.
Manish says may also help here.

Comment 4

7 months ago
There’s certainly more to do, but how does this look with ?

Comment 5

7 months ago
( which has now merged and is published on as version 0.7.3.)
(In reply to Simon Sapin (:SimonSapin) from comment #5)
> ( which has now merged and
> is published on as version 0.7.3.)

This hasn't hit incubator yet, so we'll need to wait for the next sync to test.
This should be on incubator now.
Those patches seem to have helped a lot. Parse performance is now only 3x slower than Gecko (~65ms):
Summary: stylo: Servo CSS stylesheet parsing is 6-7 times slower than Gecko's on myspace Talos testcase → stylo: Servo CSS stylesheet parsing is ~3 times slower than Gecko's on myspace Talos testcase
A bottom-up profile of the time in Servo_StyleSheet_FromUTF8Bytes is at [1]. Deduplication tops the list.

I suspect using with_capacity instead of `let mut deduplicated = Vec::new();` might help the deduplication issue.
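A sketch of that suggestion, with `u32` standing in for the much larger PropertyDeclaration type (the `deduplicate` helper here is invented for illustration): preallocating to the declaration count means pushes never trigger a doubling reallocation.

```rust
// Illustrative sketch of the with_capacity suggestion above.
// `u32` stands in for the (much larger) PropertyDeclaration type.
fn deduplicate(decls: &[u32]) -> Vec<u32> {
    // Preallocate to the maximum possible length so that pushing
    // never triggers a doubling reallocation (and its memmove).
    let mut deduplicated = Vec::with_capacity(decls.len());
    for &d in decls {
        if !deduplicated.contains(&d) {
            deduplicated.push(d);
        }
    }
    deduplicated
}
```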

In general we need to box the heaviest properties so that we can drastically reduce the impact of a copy here.
Preallocation definitely helps a bit, shaving off about 3ms:

But it seems like there's still a fair amount of memmove traffic coming from deduplication.
Yeah, the memmove traffic is because we're unconditionally copying each decl even when there's no duplication. And our declarations are *large*, we need to box some of them up. It may be preferable to iterate twice instead; the first time only reading the vector.
Hm, a problem with iterating twice is that it will double the code size of this function. Though we can make it work with some conditionals if we need to.

Even better would be to have PropertyDeclaration::parse mark each property as seen and record whether any duplicate properties exist, and reuse this data to skip the deduplicate call in the common case.
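A minimal sketch of that seen-set idea (assuming property IDs fit in small integers; Servo has far more than 64 properties, so the real code would need a wider bitset than the single `u64` used here):

```rust
// Track which property IDs have already been parsed, and whether
// any duplicate was seen, so the expensive (copying) deduplication
// pass can be skipped entirely in the common no-duplicates case.
struct SeenSet {
    bits: u64, // one bit per property ID; sketch assumes < 64 IDs
    any_duplicate: bool,
}

impl SeenSet {
    fn new() -> Self {
        SeenSet { bits: 0, any_duplicate: false }
    }

    /// Record a parsed property; remember if it was already seen.
    fn insert(&mut self, property_id: u8) {
        let mask = 1u64 << property_id;
        if self.bits & mask != 0 {
            self.any_duplicate = true;
        }
        self.bits |= mask;
    }

    /// Only run deduplication when a duplicate actually occurred.
    fn needs_dedup(&self) -> bool {
        self.any_duplicate
    }
}
```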

Made a PR to pull in the easy fix of preallocation.
Oh, also filed
Depends on: 1339894
Assignee: nobody → bobbyholley
Priority: -- → P1

Comment 16

5 months ago
One thing that I *guess* could explain some of the parsing performance difference compared to Gecko is the cssparser::Parser::try method. It records the current position in the Unicode input, executes a closure, and if that closure returns an error "rewinds" to the recorded position. This is used a lot in Servo to implement "<foo> | <bar>" alternatives in CSS grammars.

It does make e.g. adding a new property easier, but it has a cost: the tokenizer ends up doing the same work multiple times. We could have a one-token cache, but that feels like a band-aid. Better would be to avoid rewinding entirely. Currently, rewinding is sort of necessary because the only way to access a token is to consume it and advance the current position. If we separate "access the current token" from "move to the next token", we should be able to refactor Servo to not need rewinding, and remove the `try` method.
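A toy illustration of that peek/advance split (this is not the actual cssparser API; `Token` and the grammar here are invented stand-ins): with a non-consuming peek, a "<foo> | <bar>" alternative decides which branch to take before consuming anything, so nothing ever needs to be rewound.

```rust
// Invented stand-in for real CSS tokens.
#[derive(Clone, Copy, PartialEq, Debug)]
enum Token {
    Ident,
    Number,
    Comma,
}

struct Parser {
    tokens: Vec<Token>,
    pos: usize,
}

impl Parser {
    /// Access the current token without consuming it.
    fn peek(&self) -> Option<Token> {
        self.tokens.get(self.pos).copied()
    }

    /// Move to the next token.
    fn advance(&mut self) {
        self.pos += 1;
    }

    /// "<ident> | <number>" without rewinding: decide by peeking,
    /// then consume exactly once.
    fn parse_ident_or_number(&mut self) -> Result<Token, ()> {
        match self.peek() {
            Some(t) if matches!(t, Token::Ident | Token::Number) => {
                self.advance();
                Ok(t)
            }
            _ => Err(()),
        }
    }
}
```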

However, taking ownership of a token (and of the `Cow<str>`s inside it, to avoid copies) would become more difficult. It might be beneficial to get rid of these `Cow`s and make tokens implement `Copy`. At the moment they only own buffers when a string differs from the source text, i.e. when the source contains backslash-escapes. We could replace that with on-demand unescaping (based on Iterator and/or Display), and make cssparser truly zero-copy.
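A rough sketch of what on-demand unescaping via Display could look like (heavily simplified: real CSS escapes also include hex code points and whitespace rules, which are omitted here; the `Unescaped` type is invented for illustration):

```rust
use std::fmt::{self, Write};

/// A raw token slice that may contain backslash escapes. The token
/// itself can stay a zero-copy `&str`; unescaping happens lazily,
/// only when the value is actually formatted.
struct Unescaped<'a>(&'a str);

impl fmt::Display for Unescaped<'_> {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        let mut chars = self.0.chars();
        while let Some(c) = chars.next() {
            if c == '\\' {
                // Emit the escaped character directly; a real
                // implementation would also decode hex escapes.
                if let Some(escaped) = chars.next() {
                    f.write_char(escaped)?;
                }
            } else {
                f.write_char(c)?;
            }
        }
        Ok(())
    }
}
```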

I want to try these ideas but it’s a time investment, especially to port parsing code in Servo.
> We could have a one-token cache

So for what it's worth, Gecko's setup here is that the API looks like this: GetToken/UngetToken.  GetToken gets a token.  UngetToken marks the current token as "not gotten", so the next GetToken call will just return it.  You can't UngetToken twice in a row.  So this is effectively a one-token cache under the hood.
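In Rust terms, that one-token cache might look like this (sketch only; `char` stands in for a real token type, and the names are invented rather than Gecko's actual signatures):

```rust
struct Scanner {
    input: Vec<char>, // stand-in for real tokenizer state
    pos: usize,
    cached: Option<char>, // the one-token "ungotten" cache
}

impl Scanner {
    fn get_token(&mut self) -> Option<char> {
        // Replay an ungotten token before touching the input.
        if let Some(tok) = self.cached.take() {
            return Some(tok);
        }
        let tok = self.input.get(self.pos).copied();
        if tok.is_some() {
            self.pos += 1;
        }
        tok
    }

    fn unget_token(&mut self, tok: char) {
        // Like Gecko's UngetToken: you can't unget twice in a row.
        assert!(self.cached.is_none(), "cannot unget twice in a row");
        self.cached = Some(tok);
    }
}
```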
With SimonSapin's recent fixes (and a new machine), I get 23 ms for Gecko and 31 ms for Servo, so Servo is still 35% slower, but we're closing in on it.

I'll dig into the profile a bit more after the call.
Summary: stylo: Servo CSS stylesheet parsing is ~3 times slower than Gecko's on myspace Talos testcase → stylo: Servo CSS stylesheet parsing is 35% slower than Gecko's on myspace Talos testcase
So I reprofiled, and 97% of the time is spent in parse_qualified_rule [1]. Simon, is that consistent with the analysis in comment 16?

[1] Everything inside is inline, so I can't get more granular than that, even though I inflated the overhead by parsing each stylesheet 100 times.
Flags: needinfo?(simon.sapin)

Comment 20

5 months ago
> 97% of the time is spent in parse_qualified_rule

That’s not surprising, and unfortunately not very informative. Qualified rules are any rule that doesn’t start with an @-keyword, which includes style rules. So the parsing of the selector list and the declarations for one given style rule is all under one call to parse_qualified_rule.
Flags: needinfo?(simon.sapin)
Here's the updated profile with more stuff out-of-line:
Seems like a decent amount of it is spent in URL parsing. We should try to get that fixed, I guess; stuff like is pretty sad.
To test Simon's theory from comment 16, I wrote up a patch to measure the wasted work. The basic idea is that, if the parse callback returns failure, we do the work a second time in an out-of-line function which we can then measure with the profiler.
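The trick can be sketched like this (illustrative only; the real patch instruments Servo's parsing callbacks rather than a generic closure):

```rust
/// Out-of-line on purpose: a sampling profiler can then attribute
/// the duplicated (wasted) work to this frame by name.
#[inline(never)]
fn wasted_work<T, E>(parse: impl Fn() -> Result<T, E>) -> Result<T, E> {
    parse()
}

/// Run a parse alternative; if it fails, redo the same work once
/// more in the measurable out-of-line function above.
fn try_parse<T, E>(parse: impl Fn() -> Result<T, E>) -> Result<T, E> {
    let result = parse();
    if result.is_err() {
        // Second run is discarded; it exists only to be profiled.
        let _ = wasted_work(&parse);
    }
    result
}
```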

The patch is here:
The resulting profile is here:

This shows us spending 82 ms in wasted_work, and 3012 ms total time in Servo_StyleSheet_FromUTF8Bytes, which is 2930 ms after subtracting out the redundant set of calls in wasted_work.

This means that the wasted work accounts for about 3% of total parse time, which is about 12% of the total remaining difference between Gecko and Servo. So it's definitely worth fixing, but may not be worth a boil-the-ocean refactor right now.

There's still bigger fish to fry in there somewhere.

Comment 24

5 months ago
Regarding URL parsing, I maintain that we should skip rust-url entirely for `data:` URLs. Emilio’s attempt at this ended up not landing, but I don’t remember why exactly.
Depends on: 1347408
Depends on: 1347435

Comment 25

5 months ago
> we should skip rust-url entirely for `data:` URLs

… or even for all URLs, in Stylo. (See discussion in Bug 1347435.)
Depends on: 1347719
Simon is looking at this.

Here's a breakdown I did of a couple of profiles:

Some ideas:
* Either shrink tokens or redesign the API to copy them less. Bug 1347408 may or may not play a role here. If it looks like a lot of work to redesign the API, we should try to get confidence beforehand that it's going to help.
* Avoid vec doubling by using a temporary smallvec when parsing property declarations and selectors.

It's also worth considering that, per discussions elsewhere, it's likely better for caching efficiency (during matching) to eliminate the linked list of selectors if we can, and use a Vec instead.
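The temporary-smallvec idea can be sketched like this. The real code would use the `smallvec` crate; this hand-rolled version just shows the mechanism: the first N elements live inline on the stack, and the heap allocator is only touched when a block is unusually large.

```rust
// Minimal stand-in for smallvec::SmallVec, for illustration only.
enum SmallBuf<T, const N: usize> {
    Inline { buf: [Option<T>; N], len: usize },
    Heap(Vec<T>),
}

impl<T, const N: usize> SmallBuf<T, N> {
    fn new() -> Self {
        SmallBuf::Inline { buf: std::array::from_fn(|_| None), len: 0 }
    }

    fn push(&mut self, value: T) {
        match self {
            // Common case: store inline, no allocation at all.
            SmallBuf::Inline { buf, len } if *len < N => {
                buf[*len] = Some(value);
                *len += 1;
            }
            // Uncommon case: spill everything to the heap once.
            SmallBuf::Inline { buf, .. } => {
                let mut spilled: Vec<T> =
                    buf.iter_mut().filter_map(|slot| slot.take()).collect();
                spilled.push(value);
                *self = SmallBuf::Heap(spilled);
            }
            SmallBuf::Heap(v) => v.push(value),
        }
    }

    fn len(&self) -> usize {
        match self {
            SmallBuf::Inline { len, .. } => *len,
            SmallBuf::Heap(v) => v.len(),
        }
    }

    fn spilled(&self) -> bool {
        matches!(self, SmallBuf::Heap(_))
    }
}
```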
Assignee: bobbyholley → simon.sapin

Comment 27

5 months ago
Bobby, I reproduced your benchmark at mozilla-central commit aaa0cd3bd620 (hg) / b09c8505d957 (git), and saw Stylo ~14% faster than Gecko. This is on Linux x64 with an Intel i7-5600U. Could you try at the same commit, to see how much of the difference with your results is actual Stylo improvement vs. differences in our setups?

Gecko opt: 907 ms in mozilla::css::Loader::ParseSheet
    6 runs: 907, 933, 908, 886, 880, 923. Average: 906 ms

Stylo opt: 818 ms in Servo_StyleSheet_FromUTF8Bytes
    6 runs: 818, 771, 748, 772, 764, 795. Average: 778 ms

With CPU frequency scaling disabled:
    echo performance | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
Interesting! My results still reproduce. I'm on a new-gen MBP with a 2.9 GHz i7-6920HQ, on macOS - so generally similar hardware.

The big difference that I can see is that allocation / doubling is basically negligible in your profile, whereas it's quite significant in mine (mostly in doubling when appending things to vecs, hence my suggestion in comment 26). I'm not sure why this is - but we should figure it out, and then try to replicate those characteristics for other platforms.

That said, the overhead in Parser::next looks pretty comparable between my profile and yours (250 ms on the profile you linked me, whereas mine varies between 210 and 230 ms), so that's definitely still something worth looking into.
Memory allocator performance may vary somewhat widely by OS.  free() is _definitely_ much more expensive on Mac than elsewhere...  I can't speak for malloc().
So, I am seeing:
150ms in malloc_zone_malloc
40ms in malloc_zone_realloc
10ms in free

So free isn't really the bottleneck.
Julian, would you also be able to help us look into this? This is in the (sequential) parsing phase, but it's still hot, and it would be really good if we could figure out (a) why we're spending so much more time malloc-ing on macOS than on Linux and (b) how to spend less time in malloc on macOS.
Flags: needinfo?(jseward)

Comment 32

5 months ago
Aren’t we using jemalloc on both platforms?
We are.  free() still has to do a bunch of extra work on mac to deal with some things the Mac OS system libraries expect from free() that most other OSes don't have to provide.

I have no idea why _malloc_ would take more time on Mac specifically, offhand.
Manish pointed out in the meeting that it's probably because jemalloc acquires a lot of mutexes, which are very slow on mac (at least IIRC).

So this means that we should probably just focus on reducing the number of calls to malloc/realloc. We can probably do this by:
(a) using a SmallVec when parsing property declaration blocks, which should eliminate the 40 ms of realloc overhead, and
(b) rearranging the Selector representation to avoid the linked list, which should also help with cache locality. See [1]

I'll file dependent bugs.

Flags: needinfo?(jseward)
Depends on: 1353239
Depends on: 1353240
If those three things give us the wins we expect without bad side effects, I imagine we'd have fast enough parser performance to ship without regression, at least on this testcase.
Depends on: 1354215
Blocks: 1354566
Blocks: 1243581


4 months ago
Depends on: 1354989


4 months ago
Depends on: 1355028
Duplicate of this bug: 1353240

Comment 37

3 months ago
I’ve submitted which will hopefully reduce `memmove` traffic during parsing.
Priority: P1 → --
We're still slower at parsing on a lot of the sites that I look at. Simon, what's the current status?
Flags: needinfo?(simon.sapin)
Priority: -- → P2

Comment 39

12 days ago
My work on this is on hold this week for Houdini and CSSWG meetings which I’m helping host in Paris.

Per bug 1384192 comment 5, I was/will be looking at making line-number counting happen during tokenization, to avoid looking at every byte again. The tokenizer would maintain the current line number and the start position of the current line. These would need to be stored for the purpose of Parser::reset, but are not necessary for, e.g., Parser::slice_from. So the SourcePosition / SourceLocation / PreciseParseError types may need a redesign.
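A sketch of what folding line counting into the tokenizer could look like (field and method names are invented; the real code involves cssparser's SourcePosition/SourceLocation types): the bookkeeping is updated as bytes are consumed, so computing a location never requires re-scanning the input.

```rust
struct Tokenizer<'a> {
    input: &'a str,
    pos: usize,        // current byte offset
    current_line: u32, // maintained incrementally during tokenization
    line_start: usize, // byte offset where the current line begins
}

impl<'a> Tokenizer<'a> {
    fn new(input: &'a str) -> Self {
        Tokenizer { input, pos: 0, current_line: 0, line_start: 0 }
    }

    /// Consume one byte, updating line bookkeeping on the fly.
    fn advance(&mut self) {
        if self.input.as_bytes()[self.pos] == b'\n' {
            self.current_line += 1;
            self.line_start = self.pos + 1;
        }
        self.pos += 1;
    }

    /// (line, column) of the current position, with no second pass
    /// over the bytes.
    fn location(&self) -> (u32, u32) {
        (self.current_line, (self.pos - self.line_start) as u32)
    }
}
```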
Flags: needinfo?(simon.sapin)
Depends on: 1386398