Closed Bug 1645122 Opened 5 years ago Closed 4 years ago

Intermittent image/test/reftest/jpeg/webcam-simulacrum.mjpg == image/test/reftest/jpeg/blue.jpg | image comparison, max difference: 224, number of differing pixels: 799999

Categories

(Core :: Graphics: ImageLib, defect)

Type: defect
Priority: Not set
Severity: normal

Tracking

Status: RESOLVED FIXED
Target Milestone: mozilla79
Tracking Status
firefox79 --- fixed

People

(Reporter: intermittent-bug-filer, Assigned: emilio)

References

(Regression)

Details

(Keywords: intermittent-failure, regression, Whiteboard: [stockwell needswork:owner])

Attachments

(1 file)

Anyone able to pull an approximate regression range for this from treeherder results?

Actually, this is probably bug 1641682. It landed on the 11th but was backed out, then relanded late in the day on the 14th; the intermittent failures started on the 11th, stopped, and then started again on the 15th.

Flags: needinfo?(tnikkel)
Regressed by: 1641682

Not bug 1641682. We get about the same number of failures with bug 1641682 backed out.

No longer regressed by: 1641682

Bug 1599160 landed at both start points and touches the image loader.

Flags: needinfo?(emilio)
Regressed by: 1599160
Has Regression Range: --- → yes
Keywords: regression

Hmm, there are no stylesheets involved in those test cases though, only ImageDocument.css and TopLevelImageDocument.css, which don't load any images whatsoever... Plus the mjpg file is served with Cache-Control: no-cache.

Though I guess it could potentially make the "resource" URI for the image document fire the load event faster? The mjpg should block the load event anyhow... Aryx, is there a way to confirm that this is caused by bug 1599160? I'd find it a bit surprising; it seems like a pre-existing bug, if anything.

Flags: needinfo?(emilio) → needinfo?(aryx.bugmail)

None of the retriggers reproduced the issue.

Flags: needinfo?(aryx.bugmail)

I've pushed a couple of try pushes checking ASan reftests, where I've been able to reproduce this pretty easily, to see if bug 1599160 is involved.

One thing to note about this test that is unlike almost every other reftest: the image is a multipart image, and loading multipart images at the top level means we create a new (image) document for every frame/part of the multipart image.
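
As a rough illustration of what that means in practice, here is a minimal C++ sketch (hypothetical names only, not Gecko's actual classes) of a multipart stream where each arriving part spawns a fresh image document, and with it a fresh request for per-document resources such as ImageDocument.css:

    #include <iostream>
    #include <string>
    #include <vector>

    // One "part" of a multipart/x-mixed-replace stream, e.g. a single JPEG frame.
    struct Part {
      std::string contentType;
      std::vector<unsigned char> bytes;
    };

    // Stand-in for the top-level image document created to display one part.
    struct ImageDocument {
      explicit ImageDocument(const Part& aPart) {
        // Creating a document also kicks off per-document resources such as
        // ImageDocument.css, which is why the stylesheet cache gets involved
        // on every frame of the stream.
        std::cout << "new image document for " << aPart.contentType << " ("
                  << aPart.bytes.size() << " bytes)\n";
      }
    };

    int main() {
      // Simulate three frames arriving on the stream: each part replaces the
      // previous document instead of updating it in place.
      std::vector<Part> stream = {
          {"image/jpeg", std::vector<unsigned char>(1024)},
          {"image/jpeg", std::vector<unsigned char>(2048)},
          {"image/jpeg", std::vector<unsigned char>(512)},
      };
      for (const Part& part : stream) {
        ImageDocument doc(part);
      }
    }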

Try results strongly suggest that bug 1599160 is to blame. Using Windows ASan builds (which seem the most reliable way for me to reproduce):

0/19 failures with base revision 3ce9af6580b2748bd34837cb3d1aa074c6f76436 (changeset immediately before bug 1599160)
https://treeherder.mozilla.org/#/jobs?repo=try&revision=5fa54a13959beeec49640df043bfec7414d238ff

4/19 failures with base revision 419a897d7e4b9489ad51b51044f154e5bcbb605e (the first changeset with bug 1599160 landed)
https://treeherder.mozilla.org/#/jobs?repo=try&revision=092e01a3718f54c7f4bbc81586751d7a7640e1a9

Emilio, is there logging we can turn on to maybe shed more light on this?

Flags: needinfo?(tnikkel) → needinfo?(emilio)

If two loading documents hit the sheet cache and we coalesce the
resource load, there's currently nothing that prevents the load event
on the second document from firing before the coalesced load
finishes, and there should be.

While at it, also fix the handling of the pending load count, though
it has no correctness impact on the particular test we're fixing here...

We were never decrementing it, which is of course wrong. However, it
kind of ended up working, because the only effect was that we stopped
deferring more loads.

The new assertions and responsibility of the counter should ensure it
stays correct.
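
A minimal sketch of the bookkeeping the patch description above is getting at, using hypothetical names rather than the real CSS Loader code: every sheet load a document starts (or joins via coalescing) blocks that document's load event, and the pending load count is decremented again when the load finishes so the new assertions can hold.

    #include <cassert>
    #include <cstdint>
    #include <iostream>

    struct Document {
      std::uint32_t mOnloadBlockers = 0;

      void BlockOnload() { ++mOnloadBlockers; }

      void UnblockOnload() {
        assert(mOnloadBlockers > 0 && "unbalanced onload unblock");
        if (--mOnloadBlockers == 0) {
          std::cout << "firing load event\n";
        }
      }
    };

    struct SheetLoader {
      Document* mDocument = nullptr;
      std::uint32_t mPendingLoadCount = 0;

      // Called both for fresh loads and for loads coalesced with another
      // document's in-flight request. The bug was that the coalesced case
      // skipped BlockOnload(), so the second document's load event could
      // fire before its sheet had finished loading.
      void NoteStartedLoad() {
        mDocument->BlockOnload();
        ++mPendingLoadCount;
      }

      // Must mirror NoteStartedLoad(); the counter was previously never
      // decremented, which only "worked" because it merely made us stop
      // deferring further loads.
      void NoteFinishedLoad() {
        assert(mPendingLoadCount > 0 && "finished a load we never started?");
        --mPendingLoadCount;
        mDocument->UnblockOnload();
      }
    };

    int main() {
      Document doc;
      SheetLoader loader{&doc};
      loader.NoteStartedLoad();   // e.g. ImageDocument.css, possibly coalesced
      loader.NoteFinishedLoad();  // load event fires only once this runs
    }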

Assignee: nobody → emilio
Status: NEW → ASSIGNED

(In reply to Timothy Nikkel (:tnikkel) from comment #9)

> I've pushed a couple of try pushes checking ASan reftests, where I've been able to reproduce this pretty easily, to see if bug 1599160 is involved.
>
> One thing to note about this test that is unlike almost every other reftest: the image is a multipart image, and loading multipart images at the top level means we create a new (image) document for every frame/part of the multipart image.

So I loaded the reftest with MOZ_LOG=nsCSSLoader:5 and saw it coalesce various loads of ImageDocument.css across documents. I dug a bit and I think we were not properly blocking onload for those, so I sent a patch to do that.
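
For context, "coalescing" here means that a second document asking for the same in-flight stylesheet gets attached to the existing load instead of starting a new request. A rough sketch of that pattern, with hypothetical names rather than the real shared cache:

    #include <functional>
    #include <map>
    #include <string>
    #include <vector>

    struct PendingSheetLoad {
      // Callbacks for the documents waiting on this load; each of those
      // documents should have blocked its load event when it registered here.
      std::vector<std::function<void()>> mWaiters;
    };

    class SheetCache {
     public:
      // Returns true if the request was coalesced with an in-flight load.
      bool CoalesceOrStart(const std::string& aURI,
                           std::function<void()> aOnComplete) {
        auto it = mPending.find(aURI);
        if (it != mPending.end()) {
          // Coalesce: no new network request, just wait for the existing one.
          it->second.mWaiters.push_back(std::move(aOnComplete));
          return true;
        }
        // Start a new load and remember it so later requests can coalesce.
        mPending[aURI].mWaiters.push_back(std::move(aOnComplete));
        return false;
      }

      void OnLoadComplete(const std::string& aURI) {
        for (auto& waiter : mPending[aURI].mWaiters) {
          waiter();  // e.g. decrement the pending count / unblock onload
        }
        mPending.erase(aURI);
      }

     private:
      std::map<std::string, PendingSheetLoad> mPending;
    };

    int main() {
      SheetCache cache;
      cache.CoalesceOrStart("ImageDocument.css", [] { /* first document */ });
      cache.CoalesceOrStart("ImageDocument.css", [] { /* second, coalesced */ });
      cache.OnLoadComplete("ImageDocument.css");
    }

The bug was on the coalesced path: the second document's load event was not blocked while it waited on the shared load, which is what the patch fixes.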

Tim, is there any chance you can confirm that patch fixes the issue?

Flags: needinfo?(emilio) → needinfo?(tnikkel)

I can confirm that your patch appears to fix the problem on my try push. Thanks!

Flags: needinfo?(tnikkel)
Pushed by ealvarez@mozilla.com: https://hg.mozilla.org/integration/autoland/rev/d07d66ecd6b9 Properly block onload when coalescing loads with other documents. r=heycam
Status: ASSIGNED → RESOLVED
Closed: 4 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla79
Regressions: 1648095