Closed Bug 1716179 Opened 3 years ago Closed 2 years ago

uBlock causes Firefox UI hang on a page with very high request frequency.

Categories

(WebExtensions :: General, defect)

Firefox 91
defect

Tracking

(firefox-esr78 unaffected, firefox-esr91 affected, firefox89 wontfix, firefox90 wontfix, firefox91 wontfix, firefox92 wontfix, firefox93 wontfix, firefox94 wontfix)

RESOLVED INCOMPLETE

People

(Reporter: mmis1000, Unassigned)

Details

(Keywords: regression)

Attachments

(1 file)

User Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101 Firefox/91.0

Steps to reproduce:

  1. Install uBlock Origin
  2. Open a YouTube live stream where a lot of messages with custom emoji are being spammed.
  3. Watch for a while (the hang usually happens within about 10-20 minutes).

https://www.youtube.com/watch?v=ZoO6WijuXB4
This seems to happen only with live streams.
This is already an archive, so you are no longer able to reproduce the problem with it.

Mainly happened on live karaoke streams with chat and custom emoji enabled (due to a lot of glow-stick emojis).

Actual results:

Stream stopped, chat stopped, and the tab did not respond to F12 to open the page devtools normally. Even if you opened the devtools before it hanged, the console just doesn't work.

Expected results:

Stream plays normally.

(The devtools in the screenshot are the global Browser Console rather than the page console.)
What makes it even worse is that the global Browser Console (Ctrl+Shift+Alt+I) is also not responding.
The attached screenshot was taken by opening the global Browser Console "before" the hang happened.
If I try to open it after that, it doesn't work and gets stuck at connecting forever.

It seems the browser is half dead for some reason (due to a deadlock or something similar?).

This bug seems similar to https://bugzilla.mozilla.org/show_bug.cgi?id=1533815 but is somehow even worse for some reason when uBlock is in use.

The same behavior does not happen when using uBlock with Chrome.

Hi mmis1000,
I could reproduce the issue on Windows 10, on both the Firefox Nightly 91.0a1 (2021-06-14) (64-bit) and Release 89 (64-bit) versions.
For further details, can you provide profile data from when the issue is occurring? Here you can take a look at how to capture a performance profile: https://developer.mozilla.org/en-US/docs/Mozilla/Performance/Reporting_a_Performance_Problem It could be helpful.
Please also note that this add-on works only on FF Nightly, so you need to be able to reproduce the issue on Nightly first.

I've assigned a component in order to get the dev team involved.
'WebExtensions' team: if the component is not relevant, please change it to a more appropriate one.

Regards.
Jerónimo.

Status: UNCONFIRMED → NEW
Component: Untriaged → General
Ever confirmed: true
Product: Firefox → WebExtensions

(In reply to Jerónimo Torti from comment #1)

I actually made a few attempts to capture a performance recording.
However, because the problem hangs the UI and the debugger itself,
I haven't successfully done that yet.

I may give it a try to see whether I can successfully capture it if I open the capture tool before the hang happens.
But given how this bug behaves, it is unlikely to succeed.

Another problem I encountered is that the hang takes a very long time to happen.
The new profiler tool seems stuck forever if I take such a big profile.
Is there any way to work around it?

I successfully captured one

https://profiler.firefox.com/public/ax1axsfjex8dw6z8f58a2x0kw8vh0sb7kvymwar/marker-chart/?globalTrackOrder=0-1-2-3-4-5-6-7-8-9-10-11-12-13&localTrackOrderByPid=20528-1-2-0~28820-0~24908-0~2592-0~4024-0~30820-0~20680-0~9460-0~21684-0~6932-0~13184-0~18780-0~8308-8-0-1-2-3-4-5-6-7~20840-0-1~&thread=0&timelineType=cpu-category&v=5

The hang starts at about the 130s position.
It seems all connections on that page are stuck waiting for a response for some reason.
Thus both chat and video no longer work.

Although that doesn't really explain why F12 breaks.

By the way:

Is a connection rejected by uBlock supposed to be shown as waiting for a response forever in the profiler?
If it isn't, should I open another issue for that?

Encountered this crash on exiting Firefox while a page was stuck by this:

https://crash-stats.mozilla.org/report/index/2cc71574-d9e0-4eec-b96e-3f26e0210620

Thanks a lot for trying to collect a new profile (and also for the link to the shutdown crash). I looked at both profiles, but unfortunately I wasn't able to spot anything that would point me in the right direction.

Today I looked at both attached profiles again with help from Mozilla colleagues more experienced than I am in reading Firefox profiles. Unfortunately, we couldn't reach any definitive conclusion about why the network requests get stuck and never resolve, but we identified a couple of additional details that may help us dig into this more. It would be great if you could try to collect them into a couple of new profiles:

  • collect the profile with screenshots (this should be configurable through the profiler settings), making sure to interact with the UI while the entire Firefox UI is hanging for you (to increase the chances of making it visible in the profile when that is happening)
  • collect a separate profile while using a speed test webpage (e.g. speedtest.net); this profile could be useful for comparison (if the issue isn't reproduced) or to confirm whether the issue is correlated with high pressure from many network requests (if it also reproduces the same issue that YouTube is triggering for you)

Another detail that I was thinking of is:

Do you have a high refresh rate monitor attached to the machine where you are experiencing the issue? If so, could you try temporarily reducing the monitor refresh rate (to around 60Hz) and double-check whether you are still able to reproduce the same issue?
I'm basically wondering if this may be related to Bug 1684703 plus a website that triggers a high amount of non-toplevel network requests.

Thanks for the help you are providing us in investigating this bug; it's very much appreciated.

Flags: needinfo?(mmis1000)

(In reply to Luca Greco [:rpl] [:luca] [:lgreco] from comment #8)

I don't have a high refresh rate screen, but I have a 4K@60Hz one along with a 1080p@75Hz one.
And I generally set the 1080p one to 4K resolution.
So there are two 4K screens from Firefox's perspective.

That is also why I didn't upload the screenshot.
Because the upload size limit is way too small for a 4K screen, I just couldn't upload it.

Flags: needinfo?(mmis1000)

Further explanation:

Most of the time, I open Firefox in a two-window setup,
with the main window (the one I am using to surf the web) on the 1st screen
and YouTube on the 2nd screen (so I am able to comment with ease).

Although the latest profile was created with only one window, so whether there are two windows probably doesn't matter too much.

https://drive.google.com/file/d/1Rf87ngD1y_6Gl88WPuHsp1ovOLM-eEt2/view?usp=sharing

This profile was captured at 1080p resolution at 75Hz.
So screen resolution doesn't matter;
it triggers even when not at 4K.

And can someone please fix the profile uploader?
The profile above always ends up with Error 502 if I try to upload it with screenshots,
so I uploaded it to Google Drive.

I looked into the profile attached in comment 11 and based on what I can see at the moment:

  • the parent process isn't that busy and there isn't any jank, so I would expect the browser UI (e.g. opening the app menu) not to be lagging.
  • the extension process also seems to be using almost no CPU; it seems to be almost always idle.
  • on the contrary, the content process where YouTube is playing is quite busy (which isn't super surprising, given that it is the process where most of the work related to what the YouTube page is doing actually happens).

All of this also seems to be the case in the other two profiles attached to comment 4 and comment 5 (besides the fact that in those two profiles there is a red bar suggesting a jank, but the activity traced in the same time slice doesn't seem to reflect that jank; it may be some kind of issue in the data received from the profiler).

Another detail that I noticed in this last profile (but not in the other two) is a "hole" in the "Extension Suspend" markers around the time when, based on the screenshots, I think the stream got stuck (between ~107s and ~130s); up to that point the markers seem to have been received and handled with pretty regular timing (and at that point the stream also seems to be suspiciously getting stuck).

So, at least at the moment, based on the details I have, my theory is that we may be hitting:

  • an issue that makes a request intercepted by the extension be suspended and never resume (under this condition there wouldn't be any "Extension Suspend" marker, because those markers are added when the request is resumed from ChannelWrapper::Resume).

  • or some other issue that makes the HTTP request get stuck, maybe without even triggering any webRequest listener, because it got stuck before any of the steps that would be notified to the extensions.

I tried to "force" the first of these two issues with a minimal test extension (one that suspends every intercepted webRequest once I click its browserAction button, so that the YouTube stream and live chat work as expected until I trigger the extension explicitly; a sketch of such an extension follows this comment), and the result was that:

  • the stream and live chat got stuck a few seconds after the extension started to suspend every webRequest and leave it suspended
  • the video element started to show a loading spinner

But in the screenshots collected in the profile attached to comment 11, I don't see the same spinner rendered on top of the video element when the stream got stuck, which makes me think that the minimal test extension wasn't reproducing the exact same issue.
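
A minimal sketch of such a test extension (the exact extension used above is not attached to this bug, so the button wiring and listener details here are illustrative assumptions, not the actual code):

// background.js; the manifest would need the "webRequest",
// "webRequestBlocking" and "<all_urls>" permissions plus a browser_action.
let suspendRequests = false;

// Start suspending requests once the browserAction button is clicked.
browser.browserAction.onClicked.addListener(() => {
  suspendRequests = true;
  browser.browserAction.setBadgeText({ text: "ON" });
});

// A blocking listener may return a Promise; if that Promise never resolves,
// the intercepted channel stays suspended indefinitely, so no
// "Extension Suspend" marker is ever emitted for it.
browser.webRequest.onBeforeRequest.addListener(
  () => {
    if (!suspendRequests) {
      return {}; // let the request continue normally
    }
    return new Promise(() => {}); // never resolves: request stays suspended
  },
  { urls: ["<all_urls>"] },
  ["blocking"]
);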

Some additional detail:

I noticed that if I disable the "Show the number of blocked requests on the icon" option in uBlock,
this issue happens less often (roughly dropping from once every 5 minutes to once every 30 minutes).
The issue does still happen, just less frequently.

Although I don't really have an idea why that would matter.
A random guess: probably because disabling it results in fewer RPC actions?
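
For illustration only (this is not uBO's actual code; the listener, counter, and filter logic here are assumptions), a per-request badge counter could look roughly like this, which shows why every blocked request would add an extra cross-process browserAction call:

// Hypothetical blocker background script, NOT uBO's implementation.
const blockedCounts = new Map(); // tabId -> number of blocked requests

function shouldBlock(url) {
  // Stand-in for a real filtering engine.
  return /doubleclick|adservice/.test(url);
}

browser.webRequest.onBeforeRequest.addListener(
  details => {
    if (!shouldBlock(details.url)) {
      return {};
    }
    const count = (blockedCounts.get(details.tabId) || 0) + 1;
    blockedCounts.set(details.tabId, count);
    // One more cross-process call for every single blocked request
    // on a page with a very high request frequency.
    browser.browserAction.setBadgeText({
      tabId: details.tabId,
      text: String(count),
    });
    return { cancel: true };
  },
  { urls: ["<all_urls>"] },
  ["blocking"]
);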

url: https://www.youtube.com/watch?v=0YjRYNhPtXA&t=1s
profile: https://share.firefox.dev/3zTp1Cz

Another try, this time with all threads captured and IPC capture enabled.
It seems this URL triggers the problem reliably, for some reason, even though it is an archive.

By the way, does Firefox come with a way to generate a thread dump so I can find out exactly what got stuck at the time the video was not playing?

It seems to be a regression. The video above starts to exhibit the hang behavior as of:

Differential Revision: https://phabricator.services.mozilla.com/D85550

Builds after this all hang within one hour of video play.
Disabling the feature enabled in that revision fixed the hang problem in the latest Nightly (both in the video mentioned above and in random live streams).

(At least I haven't encountered a single hang in about two hours, with two live streams opened at the same time. Or maybe I am just lucky tonight?)

(In reply to mmis1000 from comment #16)

Thanks a lot for bisecting this issue and pinpointing the potential regressing bug!

That's very helpful (actually invaluable)!

Hi Hiro, would you mind double-checking whether this may be a duplicate of some other issue already being tracked as a potential regression introduced by the feature enabled through the "layout.animation.prerender.partial" pref?

As a side note, I'm not sure yet (and still curious) what role uBlock plays in triggering this issue (and whether it has any actual role at all; e.g. maybe it applies some cosmetic changes to the page through CSS, and that, as a side effect, makes it easier to trigger the actual underlying bug?).

Flags: needinfo?(hikezoe.birchill)
Regressed by: 1656418

Another side note: YouTube does transform a very large paint area in the chatroom.
The chatroom is structured like:

<!-- scroll: short, fixed-height container, where the scroll bar lives -->
<div style="height: 400px; overflow-y: scroll"> 
  <!-- outer: long, fixed-height container, same height as the content -->
  <div style="height: 4000px">
    <!-- inner: holds the content -->
    <div style="transform: translateY(100px)">
        ... A lot of messages here ...
    </div>
  </div>
</div>

After adding a new message to the inner container, it translates the inner container to ease the message in without causing the scroll container to move at all (so your scroll bar stays at the bottom even when a message is added).
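
Roughly, the pattern described above could be implemented like this (an illustrative sketch only; the selectors, offsets, and timing are guesses, not YouTube's actual live-chat code):

// Illustrative sketch, not YouTube's actual implementation.
const scroller = document.querySelector(".chat-scroll"); // short, overflow-y: scroll
const inner = scroller.querySelector(".chat-inner");     // the translated message list

function appendMessage(node) {
  inner.appendChild(node);
  const offset = node.offsetHeight;

  // Start shifted by the new message's height, with no transition...
  inner.style.transition = "none";
  inner.style.transform = `translateY(${offset}px)`;
  inner.getBoundingClientRect(); // force a reflow so the start state applies

  // ...then animate back to 0 so the message eases in while the scroll
  // position of the outer container never changes.
  inner.style.transition = "transform 200ms ease-out";
  inner.style.transform = "translateY(0)";
}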

(In reply to Luca Greco [:rpl] [:luca] [:lgreco] from comment #18)

Hi Hiro, would you mind double-checking whether this may be a duplicate of some other issue already being tracked as a potential regression introduced by the feature enabled through the "layout.animation.prerender.partial" pref?

As a side note, I'm not sure yet (and still curious) what role uBlock plays in triggering this issue (and whether it has any actual role at all; e.g. maybe it applies some cosmetic changes to the page through CSS, and that, as a side effect, makes it easier to trigger the actual underlying bug?).

Probably it is a timing issue anyway? (Otherwise it wouldn't take such a long time to happen.)
uBlock changes some timing slightly and luckily (or unluckily) forces the issue to happen.
The issue may even mysteriously disappear after one or two YouTube UI revisions?

As far as I can see, there isn't a "YouTube stuck completely" type of bug on Bugzilla, nor is there any related thread in the layout.animation.prerender.partial ticket.

About other site-freeze reports:
There is one for YouTube TV (1712796), but I don't think YouTube TV has a chatroom?
And another one for bilibili (1713552), but that doesn't have a chatroom either (although bilibili does translate a lot of text to create the danmu effect).

Given the nature of this bug, it is likely users don't even think it is a Firefox bug (probably a network issue or something?), and reloading YouTube fixes it.

(In reply to Luca Greco [:rpl] [:luca] [:lgreco] from comment #18)

Hi Hiro, would you mind double-checking whether this may be a duplicate of some other issue already being tracked as a potential regression introduced by the feature enabled through the "layout.animation.prerender.partial" pref?

I don't think there's a dup of this issue. The effect of layout.animation.prerender.partial=true basically improves performance. It skips some main-thread work, e.g. styling, layout, display list building, etc. Those regressions are the result of skipping that work on the main thread, which causes incorrect painting in some cases.

As a side note, I'm not sure yet (and still curious) what role uBlock plays in triggering this issue (and whether it has any actual role at all; e.g. maybe it applies some cosmetic changes to the page through CSS, and that, as a side effect, makes it easier to trigger the actual underlying bug?).

That's the key point of this issue. And I guess there should be clues in the profile results. I will look at them carefully once I have time.

Flags: needinfo?(hikezoe.birchill)

Got a weird https://profiler.firefox.com/ crash while profiling this:

Error: index undefined not in UniqueStringArray

Ly@https://profiler.firefox.com/4d64dc50fc76c19f2b46.bundle.js:185:7868
div
My@https://profiler.firefox.com/4d64dc50fc76c19f2b46.bundle.js:185:5084
@https://profiler.firefox.com/4d64dc50fc76c19f2b46.bundle.js:165:12541
d@https://profiler.firefox.com/4d64dc50fc76c19f2b46.bundle.js:50:56260
div
div
li
Sv@https://profiler.firefox.com/4d64dc50fc76c19f2b46.bundle.js:185:33059
d@https://profiler.firefox.com/4d64dc50fc76c19f2b46.bundle.js:50:56260
ol
xv@https://profiler.firefox.com/4d64dc50fc76c19f2b46.bundle.js:185:30071
div
div
div
zv@https://profiler.firefox.com/4d64dc50fc76c19f2b46.bundle.js:185:44505
div
Dv@https://profiler.firefox.com/4d64dc50fc76c19f2b46.bundle.js:185:39838
d@https://profiler.firefox.com/4d64dc50fc76c19f2b46.bundle.js:50:56260
Zv@https://profiler.firefox.com/4d64dc50fc76c19f2b46.bundle.js:185:57460
@https://profiler.firefox.com/4d64dc50fc76c19f2b46.bundle.js:165:12541
d@https://profiler.firefox.com/4d64dc50fc76c19f2b46.bundle.js:50:56260
pw@https://profiler.firefox.com/4d64dc50fc76c19f2b46.bundle.js:185:67054
d@https://profiler.firefox.com/4d64dc50fc76c19f2b46.bundle.js:50:56260
div
s@https://profiler.firefox.com/4d64dc50fc76c19f2b46.bundle.js:11:10810
div
t@https://profiler.firefox.com/4d64dc50fc76c19f2b46.bundle.js:11:12459
div
div
vC@https://profiler.firefox.com/4d64dc50fc76c19f2b46.bundle.js:181:24594
Ew@https://profiler.firefox.com/4d64dc50fc76c19f2b46.bundle.js:185:69619
d@https://profiler.firefox.com/4d64dc50fc76c19f2b46.bundle.js:50:56260
vE@https://profiler.firefox.com/4d64dc50fc76c19f2b46.bundle.js:185:97582
d@https://profiler.firefox.com/4d64dc50fc76c19f2b46.bundle.js:50:56260
$p@https://profiler.firefox.com/4d64dc50fc76c19f2b46.bundle.js:159:2881
d@https://profiler.firefox.com/4d64dc50fc76c19f2b46.bundle.js:50:56260
div
Bw@https://profiler.firefox.com/4d64dc50fc76c19f2b46.bundle.js:185:71393
d@https://profiler.firefox.com/4d64dc50fc76c19f2b46.bundle.js:50:56260
tc@https://profiler.firefox.com/4d64dc50fc76c19f2b46.bundle.js:124:4527
Fk@https://profiler.firefox.com/4d64dc50fc76c19f2b46.bundle.js:185:121171
d@https://profiler.firefox.com/4d64dc50fc76c19f2b46.bundle.js:50:56260
$@https://profiler.firefox.com/4d64dc50fc76c19f2b46.bundle.js:50:54006
rh@https://profiler.firefox.com/4d64dc50fc76c19f2b46.bundle.js:159:5895
Ok@https://profiler.firefox.com/4d64dc50fc76c19f2b46.bundle.js:185:121794

Not sure whether this bug generates a broken profile or the profiler crashed on a normal profile.
Does this belong to this issue?

(In reply to mmis1000 from comment #22)

Got a weird https://profiler.firefox.com/ crash while profiling this:
...

Not sure whether this bug generates a broken profile or the profiler crashed on a normal profile.
Does this belong to this issue?

My guess would be that it doesn't belong strictly to this bug, and that it may be an issue with unexpected profiler data (it may also just be that the Firefox Profiler webapp was not refreshed, and an older webapp version may not be aware of some changes to the format of the data received; that is maybe more likely if the profile is being collected on Nightly and the webapp running in the tab is not the latest version yet).

You may try to report it to the GitHub repo of the profiler.firefox.com webapp: https://github.com/firefox-devtools/profiler/issues
The engineers actively working on that project are more likely to already be aware of what the issue is related to, or will be able to ask you for additional details if they need to pinpoint it first.

It seems I captured a profile without uBlock (so probably uBlock isn't actually the cause, and neither is layout.animation.prerender.partial; they just change the timing and make the issue appear more frequently?).

https://drive.google.com/file/d/1lv-krgi9n2NloyUoSkH72Hlov9CuZGGl/view?usp=sharing (The profile is too big to upload to profiler.firefox.com)
Between 72s and 75s

Another one that happened with uBlock:

https://drive.google.com/file/d/1m33w8DLvE320hQVtWKS3b_ojvkt013fB/view?usp=sharing (The profile is also too big to upload to profiler.firefox.com)
After 185s

There are several requests hanging.
But the recorded IPC indicates they actually received content from the network normally.
The socket thread of the web process actually received the corresponding PHttpBackgroundChannel::Msg_OnStatus and PHttpBackgroundChannel::Msg_OnStopRequest from the parent process.
But another message that normally happens right after them, PHttpBackgroundChannel::Msg___delete__, never happens at all.

(In reply to mmis1000 from comment #22)

Got a weird https://profiler.firefox.com/ crash while profiling this:

Error: index undefined not in UniqueStringArray

This is very likely the result of using a Nightly that already includes my patch from bug 1641298 but with a version of the profiler webapp that didn't include the changes we deployed this Wednesday. Sorry for the inconvenience; reloading the profiler once should fix it (and given that you were able to capture profiles in comment 24, I guess it did).

(In reply to mmis1000 from comment #24)

It seems I captured a profile without uBlock (so probably uBlock isn't actually the cause, and neither is layout.animation.prerender.partial; they just change the timing and make the issue appear more frequently?).

https://drive.google.com/file/d/1lv-krgi9n2NloyUoSkH72Hlov9CuZGGl/view?usp=sharing (The profile is too big to upload to profiler.firefox.com)
Between 72s and 75s

Here is a subset of this profile that was possible to upload: https://share.firefox.dev/3BSyJqx
Once the bug occurs, mozilla::image::ProgressTracker::OnUnlockedDraw takes 60% of the CPU time in the content process.
The most frequent stack is:

imgRequestProxy::GetImage(imgIContainer**)
ImageIsAnimated(imgIRequest*)
nsImageLoadingContent::OnUnlockedDraw()
imgRequestProxy::Notify(int, mozilla::gfx::IntRectTyped<mozilla::gfx::UnknownUnits> const*)
mozilla::image::ProgressTracker::OnUnlockedDraw()
mozilla::detail::RunnableFunction<`lambda at /builds/worker/checkouts/gecko/image/Image.cpp:534:53'>::Run()
mozilla::SchedulerGroup::Runnable::Run()
mozilla::TaskController::DoExecuteNextTaskOnlyMainThreadInternal(mozilla::detail::BaseAutoLock<mozilla::Mutex &> const&)
Task image::ImageResource::SendOnUnlockedDraw
nsThread::ProcessNextEvent(bool, bool*)

And this seems to spend its time trying to acquire and release a lock.

The profile in comment 14 looks similar: https://share.firefox.dev/3zXUKT3

Another one that happened with uBlock:

https://drive.google.com/file/d/1m33w8DLvE320hQVtWKS3b_ojvkt013fB/view?usp=sharing (The profile is also too big to upload to profiler.firefox.com)
After 185s

Here's also a subset of this profile (the screenshots were most of the file size): https://share.firefox.dev/37axiFI

The stack I mentioned above is also visible, but one more thing happens here: the content process starts receiving very little CPU time (we are missing a lot of samples, and when we do have samples, the CPU use since the previous sample was very low). This is as if the operating system priority of the process had been reduced, but it's not Firefox reducing it (we have a "Process Priority" marker that shows the process was still expected to be at the "FOREGROUND" priority at the time the profile was captured).

(In reply to Florian Quèze [:florian] from comment #26)

The reduced CPU is likely just because these two profiles were captured at the same time, and I finished the upper one first.
I tried to capture profiles with different settings at the same time, so my system pressure was somewhat high (80%+ CPU).

This bug is marked as regressed by bug 1656418, which is a Nightly-only feature, but comment #1 says that this was reproducible in the 89 release. Do we have the correct regressor, and should we care about this bug for release and ESR builds? Thanks

Flags: needinfo?(mixedpuppy)

Hi Jerónimo,
I was just re-reading your comment 1, and I'm wondering whether there could be a typo in it and you meant that you were not able to reproduce the bug in 89, or whether you were indeed able to reproduce it in the 89 release as well.
Would you mind confirming whether you were able to reproduce it in 89 as well?

Flags: needinfo?(jeronimo.torti)

Hi Luca,
I was able to reproduce it also in the 89 release.
Any other help or confirmation, please, let me know!

Regards,
Jerónimo.

Flags: needinfo?(mixedpuppy)
Flags: needinfo?(jeronimo.torti)

(In reply to Jerónimo Torti from comment #30)

Hi Luca,
I was able to reproduce it also in the 89 release.
Any other help or confirmation, please, let me know!

Thanks a lot for the confirmation, then (as Pascal was pointing out in comment 28) Bug 1656418 is unlikely to be the regressing bug.

Was the issue reproducible in a consistent enough way to allow you to run mozregression and try to double-check whether we can identify a particular regressing change?

Are there any other details related to the Windows 10 machine you were able to reproduce it on?
(E.g., did the machine have two screens like the reporter's machine? Could you share the info available from about:support opened on that Windows 10 machine?)

Flags: needinfo?(jeronimo.torti)
No longer regressed by: 1656418

I remember that the issue was reproducible about 1 in 3 times, and not in a consistent way.

As far as I can remember, I was also using two screens like the reporter. And since the test was 3 months ago, the about:support info available now could be very different due to system updates.

Flags: needinfo?(jeronimo.torti)

Hey Raymond, could you please look into this? Have you maybe received other similar reports recently?

I'll also try to look into the profile to see if I can spot something, so ni? for me as well.

Flags: needinfo?(tomica)
Flags: needinfo?(rhill)

Which profile should I check? I looked at the profiles posted in comment #27 and I couldn't see much of uBO's code being executed (I filtered using moz-extension) -- less than 1/3 of 1% in the content process.

Things to try to narrow down the issue:

  • use another blocker such as AdGuard or ABP and see if the issue is also reproducible
  • disable cosmetic filtering in uBO to find out whether specific CSS rule(s) are involved (tabs must be reloaded after turning off cosmetic filtering)

It seems to me that there is an undue amount of time spent in Element.setAttribute(), called by the page at line 1708 of https://www.youtube.com/s/desktop/0c58a82c/jsbin/live_chat_polymer.vflset/live_chat_polymer.js.

Flags: needinfo?(rhill)

While we explored many directions in this bug, there are no actionable courses of action that we can take.

In order to resolve this bug, we would need a reproducible test case, or at least a consistent profile, trace or log output that suggests a likely culprit. If you have that, please attach it and we'll reopen the bug.

Status: NEW → RESOLVED
Closed: 2 years ago
Flags: needinfo?(tomica)
Resolution: --- → INCOMPLETE
