Closed Bug 1692701 Opened 4 years ago Closed 1 year ago

Intermittent Btime perftest-results-handler Critical: TEST-UNEXPECTED-FAIL | Some visual metrics have an erroneous value of 0.

Categories

(Testing :: Raptor, defect, P5)

defect

Tracking

(firefox-esr91 unaffected, firefox97 unaffected, firefox98 unaffected, firefox99 wontfix)

RESOLVED INCOMPLETE
Tracking Status
firefox-esr91 --- unaffected
firefox97 --- unaffected
firefox98 --- unaffected
firefox99 --- wontfix

People

(Reporter: intermittent-bug-filer, Unassigned)

References

Details

(Keywords: intermittent-failure, Whiteboard: [stockwell unknown])

Attachments

(1 file, 6 obsolete files)

Filed by: nbeleuzu [at] mozilla.com
Parsed log: https://treeherder.mozilla.org/logviewer?job_id=329895887&repo=autoland
Full log: https://firefox-ci-tc.services.mozilla.com/api/queue/v1/task/fFng6IwCQw2_SWBWCUlP6A/runs/2/artifacts/public/logs/live_backing.log


[task 2021-02-13T19:49:10.396Z] INFO - [JOB-3] Extracting frames from /builds/worker/fetches/browsertime-results/google-sheets/pages/docs_google_com/spreadsheets/d/1jT9qfZFAeqNoOK97gruc34Zb7y_Q-O_drZ8kSXT-4D4/edit/query-ac1947fe/data/video/3.mp4 to /tmp/vis-VUM0O3
[task 2021-02-13T19:49:10.396Z] INFO - [JOB-3] {"SpeedIndex": 0, "FirstVisualChange": 0, "LastVisualChange": 0, "ContentfulSpeedIndex": 0, "videoRecordingStart": 440, "PerceptualSpeedIndex": 0, "VisualProgress": "0=100", "VisuallyComplete": 0}
[task 2021-02-13T19:49:10.396Z] ERROR - TEST-UNEXPECTED-FAIL | Some visual metrics have an erroneous value of 0.
[task 2021-02-13T19:49:10.396Z] INFO - Tests which failed: ['SpeedIndex', 'LastVisualChange', 'ContentfulSpeedIndex', 'PerceptualSpeedIndex']
[task 2021-02-13T19:49:10.396Z] ERROR - TEST-UNEXPECTED-FAIL | Some visual metrics have an erroneous value of 0.
[task 2021-02-13T19:49:10.396Z] INFO - Tests which failed: ['SpeedIndex', 'LastVisualChange', 'ContentfulSpeedIndex', 'PerceptualSpeedIndex']
[task 2021-02-13T19:49:10.396Z] INFO - Running command                cmd=['/usr/bin/python', '/builds/worker/fetches/visualmetrics.py', '-vvv', '--logformat', '[%(levelname)s] - %(message)s', '--video', '/builds/worker/fetches/browsertime-results/google-sheets/pages/docs_google_com/spreadsheets/d/1jT9qfZFAeqNoOK97gruc34Zb7y_Q-O_drZ8kSXT-4D4/edit/query-ac1947fe/data/video/5.mp4', '--orange', '--perceptual', '--contentful', '--force', '--renderignore', '5', '--json', '--viewport']
[task 2021-02-13T19:49:10.396Z] INFO - Running command                cmd=['/usr/bin/python', '/builds/worker/fetches/visualmetrics.py', '-vvv', '--logformat', '[%(levelname)s] - %(message)s', '--video', '/builds/worker/fetches/browsertime-results/google-sheets/pages/docs_google_com/spreadsheets/d/1jT9qfZFAeqNoOK97gruc34Zb7y_Q-O_drZ8kSXT-4D4/edit/query-ac1947fe/data/video/6.mp4', '--orange', '--perceptual', '--contentful', '--force', '--renderignore', '5', '--json', '--viewport']
[task 2021-02-13T19:49:10.396Z] ERROR - Failed to run visualmetrics.py error={'SpeedIndex': 0, 'FirstVisualChange': 0, 'LastVisualChange': 0, 'ContentfulSpeedIndex': 0, 'videoRecordingStart': 440, 'PerceptualSpeedIndex': 0, 'VisualProgress': '0=100', 'VisuallyComplete': 0} video_path=PosixPath('/builds/worker/fetches/browsertime-results/google-sheets/pages/docs_google_com/spreadsheets/d/1jT9qfZFAeqNoOK97gruc34Zb7y_Q-O_drZ8kSXT-4D4/edit/query-ac1947fe/data/video/1.mp4')
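For context, the failing check runs over the JSON that visualmetrics.py prints for each recording and flags metrics that came back as 0. The sketch below is a minimal, hypothetical version of that validation in Python: the metric names mirror the "Tests which failed" list in the log above, but the function and the exact set of metrics the real harness checks are assumptions, not the actual Raptor code.

# Hypothetical sketch of the zero-value sanity check, not the actual harness code.
# The metric set mirrors the "Tests which failed" list in the log above.
import json

METRICS_TO_CHECK = (
    "SpeedIndex",
    "LastVisualChange",
    "ContentfulSpeedIndex",
    "PerceptualSpeedIndex",
)


def find_zero_metrics(visualmetrics_json):
    """Return the names of checked metrics whose value is 0."""
    results = json.loads(visualmetrics_json)
    return [name for name in METRICS_TO_CHECK if results.get(name) == 0]


# The JSON printed by the failing job above:
failing = (
    '{"SpeedIndex": 0, "FirstVisualChange": 0, "LastVisualChange": 0, '
    '"ContentfulSpeedIndex": 0, "videoRecordingStart": 440, '
    '"PerceptualSpeedIndex": 0, "VisualProgress": "0=100", "VisuallyComplete": 0}'
)

bad = find_zero_metrics(failing)
if bad:
    print("TEST-UNEXPECTED-FAIL | Some visual metrics have an erroneous value of 0.")
    print("Tests which failed:", bad)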
Summary: Perma Btime [Tier2] ERROR - TEST-UNEXPECTED-FAIL | Some visual metrics have an erroneous value of 0. → Intermittent Btime [Tier2] ERROR - TEST-UNEXPECTED-FAIL | Some visual metrics have an erroneous value of 0.

The recent spike first started with this push. Failure log: https://treeherder.mozilla.org/logviewer?job_id=331792515&repo=autoland
Retriggers on a push that was initially all green show that this might be an off-tree issue. The failures only hit vismet jobs on Windows 10 x64 Shippable, and when they do occur they come in bunches. Normally they turn green after one or two retriggers.
Greg, could you have a look over this recent spike of failures? Thank you.

Flags: needinfo?(gmierz2)
Summary: Intermittent Btime [Tier2] ERROR - TEST-UNEXPECTED-FAIL | Some visual metrics have an erroneous value of 0. → Intermittent Btime ERROR - TEST-UNEXPECTED-FAIL | Some visual metrics have an erroneous value of 0.

Yup, I'm looking into it; thanks for digging into it.

So it looks like the videos are recording a blank white page in the tasks that are failing. In passing runs, we see the actual page loading up. What's odd is that this is only happening on non-webrender tests.

:aerickson, do you know of anything that may have changed in CI recently (since yesterday) for windows that could be causing this? The failures are very sporadic so I'm wondering if there might be an infrastructure issue.

Flags: needinfo?(aerickson)

I took a quick scan of https://github.com/mozilla-platform-ops/ronin_puppet/commits/master and didn't see anything, but I don't work on Windows.

Redirecting to Mark and Rob.

Flags: needinfo?(rthijssen)
Flags: needinfo?(mcornmesser)
Flags: needinfo?(aerickson)

I am not aware of any changes that could be related to this. Could someone point me to a few logs for failures on Windows? It looks like the log from comment 4 is from a task that ran on a Linux worker: https://firefox-ci-tc.services.mozilla.com/tasks/b-rjDNyFT0e4jLWyM2cMJg .

Flags: needinfo?(rthijssen)
Flags: needinfo?(mcornmesser)
Flags: needinfo?(csabou)
Attached video bad.mp4 (obsolete) —
Attached video good.mp4 (obsolete) —

The failures on the Windows vismet jobs tell us that the recordings are problematic. I've attached a good and a bad recording so you can see what I mean - there's a white screen being recorded for some reason. I don't see anything obvious in the logs; here's a link to one of them: https://firefox-ci-tc.services.mozilla.com/api/queue/v1/task/HpP-N9srStSX9p3FS7AN0w/runs/0/artifacts/public/logs/live_backing.log

You can find some of the failing windows tests here: https://treeherder.mozilla.org/jobs?repo=autoland&group_state=expanded&resultStatus=pending%2Crunning%2Csuperseded%2Csuccess%2Ctestfailed%2Cbusted%2Cexception%2Cretry%2Cusercancel&searchStr=win%2Cbrowsertime&tochange=9187a95143a1898fb60a0c039860c394ea2b3081&fromchange=c2533f316d5b10348e7de1c3a63d49eab5692dd6&selectedTaskRun=b-rjDNyFT0e4jLWyM2cMJg.0

The reason I think this is infra-related is how sporadic these errors are: they seem to happen in large clusters and then everything is fine for a while afterwards.

I've started looking at what machines this is failing on but I don't see a pattern yet (I've only gone through a few failures so far):

T-W1064-MS-117
T-W1064-MS-204
T-W1064-MS-061
T-W1064-MS-547
T-W1064-MS-038
T-W1064-MS-135
T-W1064-MS-086
T-W1064-MS-170
T-W1064-MS-409
T-W1064-MS-034
T-W1064-MS-134
T-W1064-MS-122
T-W1064-MS-217
T-W1064-MS-076
T-W1064-MS-152
T-W1064-MS-127
T-W1064-MS-417
T-W1064-MS-266
Flags: needinfo?(mcornmesser)

Rgr, I will take a look this afternoon.

Flags: needinfo?(csabou)

Thanks!

There have not been any recent changes that would cause an issue like this. Spot-checking the machines, there are no issues that jump out, and everything seems to be in place for the graphics cards themselves. I wonder if we are hitting some type of race condition or hardware failure.

Is there a way I can recreate what the task is doing at this point of the test? I think that is the next step to see if I can replicate this behavior.

Whiteboard: [stockwell disable-recommended] → [stockwell needswork:owner]

(In reply to Mark Cornmesser [:markco] from comment #15)

There have not been any recent changes that would cause an issue like this. Spot-checking the machines, there are no issues that jump out, and everything seems to be in place for the graphics cards themselves. I wonder if we are hitting some type of race condition or hardware failure.

Is there a way I can recreate what the task is doing at this point of the test? I think that is the next step to see if I can replicate this behavior.

Yes, if you're on the machine and it was setup to run the test, you can run this command on it (or something similar): C:/mozilla-build/python3/python3.exe run-task -- c:\mozilla-build\python\python.exe -u mozharness\scripts\raptor_script.py --cfg mozharness\configs\raptor\windows_config.py --chimera --browsertime --no-conditioned-profile --browsertime-no-ffwindowrecorder --browsertime-video --app=firefox --test=amazon --project=mozilla-central --browsertime-browsertimejs $MOZ_FETCHES_DIR/browsertime/node_modules/browsertime/bin/browsertime.js --browsertime-node $MOZ_FETCHES_DIR/node/node.exe --browsertime-geckodriver $MOZ_FETCHES_DIR/geckodriver.exe --browsertime-chromedriver $MOZ_FETCHES_DIR/{}chromedriver.exe --browsertime-ffmpeg $MOZ_FETCHES_DIR/ffmpeg-4.1.1-win64-static/bin/ffmpeg.exe --download-symbols ondemand

That command was taken from the log in this task: https://treeherder.mozilla.org/jobs?repo=mozilla-central&searchStr=browsertime&selectedTaskRun=RiLDzSS0TietZuluZbugUQ.0

When the tests here fail, they fail right from the start, with all video recordings showing a white screen rather than the pageload. I'm not sure what you'll see if you run the tests on the machine, so it's best to check the recordings that get produced to confirm whether you caught the issue, in case it's an ffmpeg problem.

:markco, would you know what's different between the windows10-shippable and the windows10-shippable-qr platforms or who I could ask about that? I'm wondering if there's a configuration difference causing this since I've read that ffmpeg can have issues recording when hardware acceleration is on (https://trac.ffmpeg.org/ticket/7718).

I'm going to get a patch to turn off ffmpeg on windows10-shippable tomorrow and leave it on for windows10-shippable-qr. This should solve the issue for now.
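Since the suspicion is that the recordings themselves are blank, one way to triage a downloaded .mp4 locally is to extract its first frame with ffmpeg and check whether it is essentially a solid white image. This is a hypothetical helper, not part of visualmetrics.py or the harness; it assumes ffmpeg is on PATH and Pillow is installed.

# Hypothetical triage helper: is the first frame of a browsertime recording blank white?
# Assumes `ffmpeg` is on PATH and Pillow is installed; not part of the actual harness.
import subprocess
import sys
import tempfile
from pathlib import Path

from PIL import Image


def first_frame_is_white(video, threshold=250):
    """Extract frame 1 with ffmpeg and return True if it is (nearly) all white."""
    with tempfile.TemporaryDirectory() as tmp:
        frame = Path(tmp) / "frame.png"
        subprocess.run(
            ["ffmpeg", "-y", "-i", str(video), "-frames:v", "1", str(frame)],
            check=True,
            capture_output=True,
        )
        # Convert to grayscale; a solid white frame has every pixel near 255.
        lo, _hi = Image.open(frame).convert("L").getextrema()
        return lo >= threshold


if __name__ == "__main__":
    for path in sys.argv[1:]:
        print(("BLANK?" if first_frame_is_white(path) else "ok"), path)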

:markco, would you know what's different between the windows10-shippable and the windows10-shippable-qr platforms or who I could ask about that? I'm wondering if there's a configuration difference causing this since I've read that ffmpeg can have issues recording when hardware acceleration is on (https://trac.ffmpeg.org/ticket/7718).

They are the same platform. It looks like both use the gecko-t-win10-64-1803-hw worker type, which is our standard Windows hardware tester.

Depends on: 1696423
Whiteboard: [stockwell disable-recommended]
Flags: needinfo?(mcornmesser)

The windows10-shippable issue is resolved now.

Flags: needinfo?(gmierz2)
Whiteboard: [stockwell disable-recommended]
Whiteboard: [stockwell disable-recommended]

There has been an increase in the intermittent failures since we increased the fission tests on autoland. We may need to re-record the wikipedia test.

Flags: needinfo?(afinder)

We may be seeing more of this failure: https://treeherder.mozilla.org/jobs?repo=mozilla-central&tier=1%2C2%2C3&searchStr=live%2Ccnn&revision=27315087fdc550a1fed32807d1702e227319e431

It looks like the issue is that the recording starts at a painted frame rather than the orange frame.
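For background, browsertime records an orange splash before navigation, and visualmetrics.py (run with --orange above) uses that marker to find the start of the recording; if the video starts on an already painted frame, the computed metrics can come out wrong. The heuristic below is a rough, hypothetical check for that case, reusing the frame-extraction idea from the earlier sketch; the thresholds are guesses and the exact orange colour browsertime uses is not taken from its source.

# Hypothetical heuristic, not from visualmetrics.py: does a frame look like the
# orange start marker? Roughly: high red, medium green, low blue on average.
from PIL import Image, ImageStat


def looks_orange(frame_path):
    r, g, b = ImageStat.Stat(Image.open(frame_path).convert("RGB")).mean
    return r > 150 and 50 < g < 180 and b < 100

# Usage idea: extract frame 1 of the recording as in the earlier sketch, then warn
# if not looks_orange(frame_path), i.e. the capture started after the marker.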

Depends on: 1735390

This started to fail frequently today. The only affected platform was Android 8.0 Pixel2 AArch64 Shippable WebRender.
Push with failures from mozilla-beta
Push with failures from autoland
Push with failures from mozilla-central

Could this be a new problem, or is it the same issue as in the past?

Flags: needinfo?(gmierz2)
Summary: Intermittent Btime ERROR - TEST-UNEXPECTED-FAIL | Some visual metrics have an erroneous value of 0. → High frequency Btime ERROR - TEST-UNEXPECTED-FAIL | Some visual metrics have an erroneous value of 0.

Some of the pageload cycles of the failing vismet jobs don't record the page load; instead the recording shows a "Testdroid Device Service" screen.

Attached video 10.mp4 (obsolete) —

There's nothing I can do for the moment, as the people who can help are on holidays. I don't think that temporarily disabling the tests on pixel2 is a good idea.

Andrew, any chance you could have a look at what's going on here? This looks more like an infra issue, as it fails across all trees and only on the android-hw-p2-8-0-android-aarch64-shippable-qr platform, as per comment 63. Thank you.
Link with failure rate: https://treeherder.mozilla.org/intermittent-failures/bugdetails?startday=2021-12-23&endday=2021-12-30&tree=trunk&bug=1692701
Failure log: https://treeherder.mozilla.org/logviewer?job_id=362644212&repo=mozilla-central

Flags: needinfo?(aerickson)
Whiteboard: [stockwell needswork:owner]
Whiteboard: [stockwell disable-recommended] → [stockwell needswork:owner]
Flags: needinfo?(afinder)

Bitbar hasn't changed anything regarding their infrastructure or these devices recently (log4j patching on Dec 22nd was the last change). I haven't changed devicepool or our middleware during this period.

The "Testdroid Device Service" shown in the movie is an application that Bitbar runs on the devices to manage/monitor them. It isn't new.

Have the vismet tests changed? All of the failures from the intermittent failure report's last 30 days are on m-c or autoland.

Flags: needinfo?(aerickson)

:aerickson, I think the testdroid stuff is interfering with the browsertime tests, i.e. something needs to change on the phones in order to allow the tests to run reliably. I assume this is something that Bitbar can change.

See Also: → 1748326

Sakari @ Bitbar noticed that the 'tap to beam' in the video in https://bugzilla.mozilla.org/show_bug.cgi?id=1692701#c66 is something new. He can disable NFC in Bitbar's device reset phase if we're not using it.

I'm pretty sure we don't use NFC in any of our tests. :jmaher or others, can you confirm?

Flags: needinfo?(jmaher)

confirmed: we do not use NFC

Flags: needinfo?(jmaher)

Sakari has disabled NFC and Android Beam on all of our devices.

:aerickson :jmaher thanks for taking care of this!

We still have 30 failures per day, which is high. Most of the failing recordings look like the attached videos.

Attached video 9.mp4 (obsolete) —
Attached video 4.mp4 (obsolete) —

From what I understand, bluetooth should be disabled and mock location enabled. In video 9.mp4, bluetooth is not enabled and there's only the warning about mock location being disabled.

Flags: needinfo?(aerickson)
Attached video 11.mp4 (obsolete) —

And a new one: USB timeout.

Whiteboard: [stockwell disable-recommended] → [stockwell needswork:owner]

I don't see new instances in the last ~2 days; where are you getting the data about ~30/day?

Flags: needinfo?(aionescu)

Comment 79, which is hidden.

Flags: needinfo?(aionescu)

That was Jan 3; there have been no new instances since 9pm PDT on Jan 3, which is 33 hours and counting. We should give this another day before calling it fixed.

Component: mozperftest → Raptor

(In reply to Alexandru Ionescu (needinfo me) [:alexandrui] from comment #83)

From what I understand, bluetooth should be disabled and mock location enabled. In video 9.mp4, bluetooth is not enabled and there's only the warning about mock location being disabled.

These tests were working before and now they seem to be again. Let's hold off on changing more things. I'm open to it if things go bad again.

Flags: needinfo?(aerickson)

Yeah, I was misreading the date of the latest failures.

The failure rate went down: 1 failure in the last 6 days.

Whiteboard: [stockwell disable-recommended]
Flags: needinfo?(gmierz2)
Whiteboard: [stockwell disable-recommended]

There are 85 total failures in the last 7 days on

[task 2022-02-22T00:49:12.656Z] WARNING - [JOB-1] Hero elements file is not valid: None
[task 2022-02-22T00:49:12.656Z] INFO - [JOB-4] libswscale      5.  3.100 /  5.  3.100
[task 2022-02-22T00:49:12.656Z] INFO - [JOB-1] {"SpeedIndex": 1527, "FirstVisualChange": 1527, "LastVisualChange": 1527, "VisualProgress": "0=0, 1527=100", "ContentfulSpeedIndex": 1527, "videoRecordingStart": 1829, "ContentfulSpeedIndexProgress": "0=0, 1527=100", "PerceptualSpeedIndex": 1527, "PerceptualSpeedIndexProgress": "0=0, 1527=100"}
[task 2022-02-22T00:49:12.656Z] INFO - [JOB-4] libswresample   3.  3.100 /  3.  3.100
[task 2022-02-22T00:49:12.656Z] INFO - [JOB-4] libpostproc    55.  3.100 / 55.  3.100
[task 2022-02-22T00:49:12.656Z] INFO - [JOB-4] Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '/builds/worker/fetches/browsertime-results/cnn-ampstories/pages/cnn_com/ampstories/us/why-hurricane-michael-is-a-monster-unlike-any-other/data/video/4.mp4':
[task 2022-02-22T00:49:12.657Z] INFO - [JOB-4] Metadata:
[task 2022-02-22T00:49:12.657Z] INFO - [JOB-4] major_brand     : mp42
[task 2022-02-22T00:49:12.657Z] INFO - [JOB-4] minor_version   : 0
[task 2022-02-22T00:49:12.657Z] INFO - [JOB-4] compatible_brands: isommp42
[task 2022-02-22T00:49:12.657Z] INFO - [JOB-4] creation_time   : 2022-02-22T00:35:42.000000Z
[task 2022-02-22T00:49:12.657Z] INFO - [JOB-4] com.android.version: 8.0.0
[task 2022-02-22T00:49:12.657Z] INFO - [JOB-4] Duration: 00:00:04.22, start: 0.000000, bitrate: 136 kb/s
[task 2022-02-22T00:49:12.657Z] INFO - [JOB-4] Stream #0:0(eng): Video: h264 (Baseline) (avc1 / 0x31637661), yuv420p(tv, smpte170m/bt470bg/smpte170m), 1080x1920, 129 kb/s, SAR 1:1 DAR 9:16, 2.60 fps, 90k tbr, 90k tbn, 180k tbc (default)
[task 2022-02-22T00:49:12.658Z] INFO - [JOB-4] Metadata:
[task 2022-02-22T00:49:12.658Z] INFO - Running command                cmd=['/usr/bin/python', '/builds/worker/fetches/visualmetrics.py', '-vvv', '--logformat', '[%(levelname)s] - %(message)s', '--video', '/builds/worker/fetches/browsertime-results/cnn-ampstories/pages/cnn_com/ampstories/us/why-hurricane-michael-is-a-monster-unlike-any-other/data/video/6.mp4', '--orange', '--perceptual', '--contentful', '--force', '--renderignore', '5', '--json', '--viewport', '--viewportretries', '60', '--viewportminheight', '100', '--viewportminwidth', '100']
[task 2022-02-22T00:49:12.658Z] INFO - [JOB-4] creation_time   : 2022-02-22T00:35:42.000000Z
[task 2022-02-22T00:49:12.658Z] INFO - [JOB-4] handler_name    : VideoHandle
[task 2022-02-22T00:49:12.658Z] INFO - [JOB-4] Stream mapping:
[task 2022-02-22T00:49:12.658Z] INFO - [JOB-3] compatible_brands: isommp42
[task 2022-02-22T00:49:12.658Z] INFO - [JOB-3] com.android.version: 8.0.0
[task 2022-02-22T00:49:12.658Z] INFO - [JOB-4] Stream #0:0 -> #0:0 (h264 (native) -> png (native))
[task 2022-02-22T00:49:12.658Z] INFO - [JOB-3] encoder         : Lavf58.20.100
[task 2022-02-22T00:49:12.658Z] INFO - [JOB-4] Press [q] to stop, [?] for help
[task 2022-02-22T00:49:12.658Z] INFO - [JOB-3] Stream #0:0(eng): Video: png, rgb24, 1080x1920 [SAR 1:1 DAR 9:16], q=2-31, 200 kb/s, 3.02 fps, 3.02 tbn, 3.02 tbc (default)
[task 2022-02-22T00:49:12.659Z] INFO - [JOB-4] Output #0, image2, to '/tmp/vis-XfmIeP/viewport.png':
[task 2022-02-22T00:49:12.659Z] INFO - [JOB-4] Metadata:
[task 2022-02-22T00:49:12.659Z] INFO - [JOB-3] Metadata:
[task 2022-02-22T00:49:12.659Z] INFO - [JOB-4] major_brand     : mp42
[task 2022-02-22T00:49:12.659Z] INFO - [JOB-3] creation_time   : 2022-02-22T00:34:33.000000Z
[task 2022-02-22T00:49:12.659Z] INFO - [JOB-4] minor_version   : 0
[task 2022-02-22T00:49:12.659Z] INFO - [JOB-3] handler_name    : VideoHandle
[task 2022-02-22T00:49:12.659Z] INFO - [JOB-3] encoder         : Lavc58.35.100 png
[task 2022-02-22T00:49:12.660Z] INFO - [JOB-3] frame=    1 fps=0.0 q=-0.0 Lsize=N/A time=00:00:00.33 bitrate=N/A speed=2.82x
[task 2022-02-22T00:49:12.660Z] INFO - [JOB-3] video:44kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
[task 2022-02-22T00:49:12.660Z] INFO - [JOB-4] compatible_brands: isommp42
[task 2022-02-22T00:49:12.660Z] INFO - [JOB-4] com.android.version: 8.0.0
[task 2022-02-22T00:49:12.660Z] INFO - [JOB-3] Extracting frames from /builds/worker/fetches/browsertime-results/cnn-ampstories/pages/cnn_com/ampstories/us/why-hurricane-michael-is-a-monster-unlike-any-other/data/video/3.mp4 to /tmp/vis-QTUQDO
[task 2022-02-22T00:49:12.660Z] INFO - [JOB-4] encoder         : Lavf58.20.100
[task 2022-02-22T00:49:12.660Z] INFO - [JOB-4] Stream #0:0(eng): Video: png, rgb24, 1080x1920 [SAR 1:1 DAR 9:16], q=2-31, 200 kb/s, 2.60 fps, 2.60 tbn, 2.60 tbc (default)
[task 2022-02-22T00:49:12.660Z] WARNING - [JOB-3] Hero elements file is not valid: None
[task 2022-02-22T00:49:12.660Z] INFO - [JOB-4] Metadata:
[task 2022-02-22T00:49:12.660Z] INFO - [JOB-3] {"SpeedIndex": 0, "FirstVisualChange": 1103, "LastVisualChange": 1942, "VisualProgress": "0=100, 1103=100, 1371=100, 1942=100", "ContentfulSpeedIndex": 1328, "videoRecordingStart": 1913, "ContentfulSpeedIndexProgress": "0=0, 1103=83, 1371=100, 1942=100", "PerceptualSpeedIndex": 1105, "PerceptualSpeedIndexProgress": "0=0, 1103=99, 1371=99, 1942=100"}
[task 2022-02-22T00:49:12.660Z] INFO - [JOB-4] creation_time   : 2022-02-22T00:35:42.000000Z
[task 2022-02-22T00:49:12.660Z] INFO - [JOB-4] handler_name    : VideoHandle
[task 2022-02-22T00:49:12.660Z] INFO - [JOB-4] encoder         : Lavc58.35.100 png
[task 2022-02-22T00:49:12.660Z] INFO - [JOB-4] frame=    1 fps=0.0 q=-0.0 Lsize=N/A time=00:00:00.38 bitrate=N/A speed=3.18x
[task 2022-02-22T00:49:12.660Z] ERROR - TEST-UNEXPECTED-FAIL | Some visual metrics have an erroneous value of 0.
[task 2022-02-22T00:49:12.661Z] INFO - Tests which failed: ['SpeedIndex']
[task 2022-02-22T00:49:12.661Z] INFO - [JOB-4] video:44kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
[task 2022-02-22T00:49:12.661Z] INFO - [JOB-4] Extracting frames from /builds/worker/fetches/browsertime-results/cnn-ampstories/pages/cnn_com/ampstories/us/why-hurricane-michael-is-a-monster-unlike-any-other/data/video/4.mp4 to /tmp/vis-XfmIeP
[task 2022-02-22T00:49:12.661Z] WARNING - [JOB-4] Hero elements file is not valid: None
[task 2022-02-22T00:49:12.661Z] INFO - [JOB-4] {"SpeedIndex": 0, "FirstVisualChange": 1067, "LastVisualChange": 1889, "VisualProgress": "0=100, 1067=100, 1387=100, 1889=100", "ContentfulSpeedIndex": 1323, "videoRecordingStart": 1898, "ContentfulSpeedIndexProgress": "0=0, 1067=80, 1387=100, 1889=100", "PerceptualSpeedIndex": 1069, "PerceptualSpeedIndexProgress": "0=0, 1067=99, 1387=99, 1889=100"}
[task 2022-02-22T00:49:12.661Z] ERROR - TEST-UNEXPECTED-FAIL | Some visual metrics have an erroneous value of 0.
[task 2022-02-22T00:49:12.661Z] INFO - Running command                cmd=['/usr/bin/python', '/builds/worker/fetches/visualmetrics.py', '-vvv', '--logformat', '[%(levelname)s] - %(message)s', '--video', '/builds/worker/fetches/browsertime-results/cnn-ampstories/pages/cnn_com/ampstories/us/why-hurricane-michael-is-a-monster-unlike-any-other/data/video/7.mp4', '--orange', '--perceptual', '--contentful', '--force', '--renderignore', '5', '--json', '--viewport', '--viewportretries', '60', '--viewportminheight', '100', '--viewportminwidth', '100']
[task 2022-02-22T00:49:12.661Z] INFO - Tests which failed: ['SpeedIndex']
[task 2022-02-22T00:49:12.661Z] ERROR - Failed to run visualmetrics.py error={'SpeedIndex': 0, 'FirstVisualChange': 1103, 'LastVisualChange': 1942, 'VisualProgress': '0=100, 1103=100, 1371=100, 1942=100', 'ContentfulSpeedIndex': 1328, 'videoRecordingStart': 1913, 'ContentfulSpeedIndexProgress': '0=0, 1103=83, 1371=100, 1942=100', 'PerceptualSpeedIndex': 1105, 'PerceptualSpeedIndexProgress': '0=0, 1103=99, 1371=99, 1942=100'} video_path=PosixPath('/builds/worker/fetches/browsertime-results/cnn-ampstories/pages/cnn_com/ampstories/us/why-hurricane-michael-is-a-monster-unlike-any-other/data/video/3.mp4')

Greg, can you please take a look? There seems to be a spike here starting with the 19th of February.
Retriggers: https://treeherder.mozilla.org/jobs?repo=autoland&group_state=expanded&selectedTaskRun=PyDqucBPTWuYrTeIxYIuWg.0&resultStatus=pending%2Crunning%2Csuccess%2Ctestfailed%2Cbusted%2Cexception%2Cretry%2Cusercancel&tochange=2b42abbdb0df38f31dfa1178fe3b5f773f8e4812&fromchange=fac8cd3d8e8debbb221b2df113852841a3192514&searchStr=android%2C8.0%2Cpixel2%2Caarch64%2Cshippable%2Cwebrender%2Copt%2Cbrowsertime%2Cperformance%2Ctests%2Con%2Cfirefox%2Ctest-vismet-android-hw-p2-8-0-android-aarch64-shippable-qr%2Fopt-browsertime-tp6m-geckoview-dailymail%2Cdailymail-vismet

Flags: needinfo?(gmierz2)
Whiteboard: [stockwell needswork:owner]

It looks like this fb-vismet tier 2 perma-failure with the same failure line comes from Bug 1675054. Valentin, can you please take a look? The same also shows up on tier 1.

Flags: needinfo?(valentin.gosu)
Flags: needinfo?(gmierz2)
Flags: needinfo?(valentin.gosu)
Regressed by: 1675054
No longer regressions: 1675054

Thanks for finding the cause Cristian!

Set release status flags based on info from the regressing bug 1675054

Bug 1675054 was fixed and relanded. This seems to have been failing a long time before it, so removing the regression tag.

Keywords: regression
No longer regressed by: 1675054
Summary: High frequency Btime ERROR - TEST-UNEXPECTED-FAIL | Some visual metrics have an erroneous value of 0. → Intermittent Btime ERROR - TEST-UNEXPECTED-FAIL | Some visual metrics have an erroneous value of 0.

Recent failures here are on Fenix.

:kimberlythegeek, do you have any updates on this?

Flags: needinfo?(ksereduck)

Apologies, I forgot to follow up on this. It looks like mcomella is on PTO for the next few days; I will reach out to Amedyne for another point of contact.

Flags: needinfo?(ksereduck)

:mleclair, are you able to help with this?

It appears that there may be a crash occurring. The failing tests have at least one frame/recording that just shows a black screen on the device.

wikipedia, for example: https://treeherder.mozilla.org/jobs?repo=fenix&revision=a2289c5b2f7c8dbce7452df0ff1c631c34387d88&selectedTaskRun=IHq16g62T7S_pz-iMsRmUg.0
The first frame in the artifacts is a black screen.

Here is the logcat for that test run: https://firefoxci.taskcluster-artifacts.net/IHq16g62T7S_pz-iMsRmUg/0/public/test_info/logcat-ZY322HN8G4.log

Flags: needinfo?(mleclair)
Flags: needinfo?(michael.l.comella)

:fdoty will find an owner for this issue.

Flags: needinfo?(mleclair)
Flags: needinfo?(michael.l.comella)
Flags: needinfo?(fdoty)

Hello, office-vismet started perma-failing with this push (Bug 1734997). Could you take a look to see what could've caused it? Thank you!

Flags: needinfo?(sefeng)
Summary: Intermittent Btime ERROR - TEST-UNEXPECTED-FAIL | Some visual metrics have an erroneous value of 0. → Perma Btime ERROR - TEST-UNEXPECTED-FAIL | Some visual metrics have an erroneous value of 0.

Follow up for Fenix should be brought to the Fenix/Perf triage to get this in front of that team for review (In reply to Kimberly Sereduck :kimberlythegeek from comment #116)

Flags: needinfo?(fdoty)

office-vismet started perma-failing because the page is broken when the scheduling API (bug 1734997) is enabled. I've filed bug 1766059 for that. However, I believe this has nothing to do with the root cause of this bug.

I am going to mark this bug as depending on bug 1766059, but again, this is not the root cause.

Depends on: 1766059
Flags: needinfo?(sefeng)

(In reply to Frank Doty [:fdoty] from comment #120)

Follow up for Fenix should be brought to the Fenix/Perf triage to get this in front of that team for review (In reply to Kimberly Sereduck :kimberlythegeek from comment #116)

Is there someone we can needinfo?

Flags: needinfo?(fdoty)

(In reply to Greg Mierzwinski [:sparky] from comment #122)

(In reply to Frank Doty [:fdoty] from comment #120)

Follow up for Fenix should be brought to the Fenix/Perf triage to get this in front of that team for review (In reply to Kimberly Sereduck :kimberlythegeek from comment #116)

Is there someone we can needinfo?

Updating with needinfo from csadilek@mozilla.com

Flags: needinfo?(fdoty) → needinfo?(csadilek)
Flags: needinfo?(csadilek)

This ticket is a year old linking to numerous bugs (e.g., bug 1734997, bug 1766059, bug 1675054) and resolutions unrelated to Fenix. Could someone please file a ticket for us with details you would like the Fenix team to look into? We will then prioritize it as part of the Android performance triage. Thank you!

(In reply to Sean Feng [:sefeng] from comment #121)

office-vismet started perma-failing because the page is broken when the scheduling API (bug 1734997) is enabled. I've filed bug 1766059 for that. However, I believe this has nothing to do with the root cause of this bug.

I am going to mark this bug as depending on bug 1766059, but again, this is not the root cause.

The office-vismet job is green after bug 1766059 landed here, thank you.

Summary: Perma Btime ERROR - TEST-UNEXPECTED-FAIL | Some visual metrics have an erroneous value of 0. → Intermittent Btime ERROR - TEST-UNEXPECTED-FAIL | Some visual metrics have an erroneous value of 0.
Attached video 1.mp4
Attachment #9206724 - Attachment is obsolete: true
Attachment #9206725 - Attachment is obsolete: true
Attachment #9257145 - Attachment is obsolete: true
Attachment #9257601 - Attachment is obsolete: true
Attachment #9257602 - Attachment is obsolete: true
Attachment #9257605 - Attachment is obsolete: true

(In reply to Christian Sadilek [:csadilek] from comment #124)

This ticket is a year old linking to numerous bugs (e.g., bug 1734997, bug 1766059, bug 1675054) and resolutions unrelated to Fenix. Could someone please file a ticket for us with details you would like the Fenix team to look into? We will then prioritize it as part of the Android performance triage. Thank you!

The issue is that the content in the browser is completely black while we are trying to run the test. Here's a link to the treeherder run that I got this from: https://treeherder.mozilla.org/jobs?repo=fenix&revision=a2289c5b2f7c8dbce7452df0ff1c631c34387d88&searchStr=wikiped&selectedTaskRun=IHq16g62T7S_pz-iMsRmUg.0

You can find the logcat logs here: https://firefox-ci-tc.services.mozilla.com/api/queue/v1/task/IHq16g62T7S_pz-iMsRmUg/runs/0/artifacts/public/test_info/logcat-ZY322HN8G4.log

Flags: needinfo?(csadilek)
Whiteboard: [stockwell disable-recommended]
Depends on: 1766127

(In reply to Greg Mierzwinski [:sparky] from comment #128)

The issue is that the content in the browser is completely black while we are trying to run the test. Here's a link to the treeherder run that I got this from: https://treeherder.mozilla.org/jobs?repo=fenix&revision=a2289c5b2f7c8dbce7452df0ff1c631c34387d88&searchStr=wikiped&selectedTaskRun=IHq16g62T7S_pz-iMsRmUg.0

I see. I misread based on the request for performance triage in comment #123. This is a bug and doesn't need to go through performance triage. I can reproduce this problem in Fenix Nightly and filed bug 1766127 to investigate and fix the regression.

Flags: needinfo?(csadilek)

That's great, I'm glad you were able to reproduce this. Thanks :csadilek!

Whiteboard: [stockwell disable-recommended]
Whiteboard: [stockwell disable-recommended]
Summary: Intermittent Btime ERROR - TEST-UNEXPECTED-FAIL | Some visual metrics have an erroneous value of 0. → Intermittent Btime perftest-results-handler Critical: TEST-UNEXPECTED-FAIL | Some visual metrics have an erroneous value of 0.

After bug 1767436 landed and exposed the failures, there has been an increase in the failure rate.
@Greg, can you look into this?

Flags: needinfo?(gmierz2)

Looking, thanks for the ni?

(leaving the ni? for myself)

The issue is that the failures are being triggered by metrics we don't care about:

[task 2022-05-06T13:55:50.496Z] 13:55:50    ERROR -  perftest-results-handler Critical: TEST-UNEXPECTED-FAIL | Some visual metrics have an erroneous value of 0.
[task 2022-05-06T13:55:50.496Z] 13:55:50     INFO -  perftest-results-handler Info: Visual metric tests failed: ['VisualReadiness', 'VisualReadiness', 'VisualReadiness']
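One way to scope the check so that metrics like VisualReadiness cannot fail a job on their own is to validate only the metrics we actually alert on. The sketch below is illustrative only; it is not the actual patch that landed in bug 1768199.

# Illustrative allowlist filter, not the actual bug 1768199 patch: only flag
# zero values for metrics we actually alert on.
SHOULD_ALERT = {
    "SpeedIndex",
    "ContentfulSpeedIndex",
    "PerceptualSpeedIndex",
    "FirstVisualChange",
    "LastVisualChange",
}


def failing_metrics(results):
    """Return zero-valued metrics, ignoring anything outside the alerting set."""
    return [
        name
        for name, value in results.items()
        if name in SHOULD_ALERT and value == 0
    ]


# e.g. failing_metrics({"VisualReadiness": 0, "SpeedIndex": 1527}) == []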
Depends on: 1768199

The patch in bug 1768199 has been pushed.

Flags: needinfo?(gmierz2)
Whiteboard: [stockwell disable-recommended]
Whiteboard: [stockwell disable-recommended]
Status: NEW → RESOLVED
Closed: 2 years ago
Resolution: --- → INCOMPLETE
Status: RESOLVED → REOPENED
Resolution: INCOMPLETE → ---
Status: REOPENED → RESOLVED
Closed: 2 years ago2 years ago
Resolution: --- → INCOMPLETE
Status: RESOLVED → REOPENED
Resolution: INCOMPLETE → ---

Hi :aerickson, could you please look into this?
This looks like it is consistently the a51-19 devices.
If you look at the video from the job, it seems to be hanging on the drag-down/notification.

Flags: needinfo?(aerickson)

I've quarantined a51-19 for now.

:kash, are you thinking it's the setup steps that are causing the panel to cover the screen? Or NFC/android beam (https://bugzilla.mozilla.org/show_bug.cgi?id=1692701#c77)?

Flags: needinfo?(aerickson) → needinfo?(kshampur)

Thanks for the quarantine!
I am not entirely sure if that would cause it (or, if it is, why only these two devices?), but it could be. It's not present in all videos, but in this video you can see there is a bit more activity going on in the setup (a progress bar before the 'finish setup' message).

:sparky, would you have a better idea of whether NFC/Android Beam is potentially causing this? (Bug 1692701, comment 77)

Flags: needinfo?(kshampur) → needinfo?(gmierz2)

I don't think this could be caused by NFC/Android Beam. I'm leaning towards there being something that broke in the setup steps as well since the phone should be finished setting up by the time we get to the tests.

Flags: needinfo?(gmierz2)

Thanks, sparky.
:aerickson, would Bitbar be able to check for us why exactly the setup step is not done before the tests start?

Flags: needinfo?(aerickson)

:kshampur, absolutely. I've asked them to check out those devices. Will report back with their findings.

Bitbar has completed setup on this device. Removing from quarantine.

Flags: needinfo?(aerickson)

Thanks for your help, :aerickson! The graph is looking good now.

Status: REOPENED → RESOLVED
Closed: 2 years ago2 years ago
Resolution: --- → INCOMPLETE
Status: RESOLVED → REOPENED
Resolution: INCOMPLETE → ---
Status: REOPENED → RESOLVED
Closed: 2 years ago2 years ago
Resolution: --- → INCOMPLETE
Status: RESOLVED → REOPENED
Resolution: INCOMPLETE → ---
Status: REOPENED → RESOLVED
Closed: 2 years ago2 years ago
Resolution: --- → INCOMPLETE
Status: RESOLVED → REOPENED
Resolution: INCOMPLETE → ---
Status: REOPENED → RESOLVED
Closed: 2 years ago1 year ago
Resolution: --- → INCOMPLETE
Status: RESOLVED → REOPENED
Resolution: INCOMPLETE → ---
Status: REOPENED → RESOLVED
Closed: 1 year ago1 year ago
Resolution: --- → INCOMPLETE
Status: RESOLVED → REOPENED
Resolution: INCOMPLETE → ---

Update

There have been 36 total failures within the last 7 days, all of them OS X 10.15 WebRender Shippable opt.

Recent failure log: https://treeherder.mozilla.org/logviewer?job_id=452018476&repo=autoland&lineNumber=1814

[task 2024-03-23T16:28:05.237Z] 16:28:05     INFO -  perftest-output Info: preparing browsertime results for output
[task 2024-03-23T16:28:05.237Z] 16:28:05     INFO -  perftest-output Info: turning off subtest alerting for measurement type: fnbpaint
[task 2024-03-23T16:28:05.238Z] 16:28:05     INFO -  perftest-output Info: turning on subtest alerting for measurement type: fcp
[task 2024-03-23T16:28:05.238Z] 16:28:05     INFO -  perftest-output Info: turning off subtest alerting for measurement type: dcf
[task 2024-03-23T16:28:05.238Z] 16:28:05     INFO -  perftest-output Info: turning on subtest alerting for measurement type: loadtime
[task 2024-03-23T16:28:05.238Z] 16:28:05     INFO -  perftest-output Info: turning on subtest alerting for measurement type: largestContentfulPaint
[task 2024-03-23T16:28:05.238Z] 16:28:05     INFO -  perftest-output Info: turning off subtest alerting for measurement type: cpuTime
[task 2024-03-23T16:28:05.239Z] 16:28:05     INFO -  perftest-output Info: turning on subtest alerting for measurement type: FirstVisualChange
[task 2024-03-23T16:28:05.239Z] 16:28:05     INFO -  perftest-output Info: turning on subtest alerting for measurement type: LastVisualChange
[task 2024-03-23T16:28:05.239Z] 16:28:05     INFO -  perftest-output Info: turning on subtest alerting for measurement type: SpeedIndex
[task 2024-03-23T16:28:05.239Z] 16:28:05     INFO -  perftest-output Info: turning on subtest alerting for measurement type: PerceptualSpeedIndex
[task 2024-03-23T16:28:05.239Z] 16:28:05     INFO -  perftest-output Info: turning on subtest alerting for measurement type: ContentfulSpeedIndex
[task 2024-03-23T16:28:05.239Z] 16:28:05     INFO -  perftest-output Info: turning off subtest alerting for measurement type: fnbpaint
[task 2024-03-23T16:28:05.240Z] 16:28:05     INFO -  perftest-output Info: turning on subtest alerting for measurement type: fcp
[task 2024-03-23T16:28:05.240Z] 16:28:05     INFO -  perftest-output Info: turning off subtest alerting for measurement type: dcf
[task 2024-03-23T16:28:05.240Z] 16:28:05     INFO -  perftest-output Info: turning on subtest alerting for measurement type: loadtime
[task 2024-03-23T16:28:05.240Z] 16:28:05     INFO -  perftest-output Info: turning on subtest alerting for measurement type: largestContentfulPaint
[task 2024-03-23T16:28:05.240Z] 16:28:05     INFO -  perftest-output Info: turning off subtest alerting for measurement type: cpuTime
[task 2024-03-23T16:28:05.240Z] 16:28:05     INFO -  perftest-output Info: turning on subtest alerting for measurement type: FirstVisualChange
[task 2024-03-23T16:28:05.241Z] 16:28:05     INFO -  perftest-output Info: turning on subtest alerting for measurement type: LastVisualChange
[task 2024-03-23T16:28:05.241Z] 16:28:05     INFO -  perftest-output Info: turning on subtest alerting for measurement type: SpeedIndex
[task 2024-03-23T16:28:05.241Z] 16:28:05     INFO -  perftest-output Info: turning on subtest alerting for measurement type: PerceptualSpeedIndex
[task 2024-03-23T16:28:05.241Z] 16:28:05     INFO -  perftest-output Info: turning on subtest alerting for measurement type: ContentfulSpeedIndex
[task 2024-03-23T16:28:05.247Z] 16:28:05     INFO -  perftest-output Info: PERFHERDER_DATA: {"framework": {"name": "browsertime"}, "suites": [{"name": "google-search", "type": "pageload", "extraOptions": ["fission", "cold", "webrender"], "tags": ["fission", "cold", "webrender"], "lowerIsBetter": true, "unit": "ms", "alertThreshold": 2.0, "subtests": [{"name": "ContentfulSpeedIndex", "lowerIsBetter": true, "alertThreshold": 2.0, "unit": "ms", "shouldAlert": true, "replicates": [333, 300, 234, 300, 300, 267, 267, 267, 200, 200, 267, 300, 300, 267, 234, 267, 300, 200, 300, 267, 267, 300, 300, 267, 233], "value": 267.0}, {"name": "FirstVisualChange", "lowerIsBetter": true, "alertThreshold": 2.0, "unit": "ms", "shouldAlert": true, "replicates": [333, 300, 234, 300, 300, 267, 267, 267, 200, 200, 267, 300, 300, 267, 234, 267, 300, 200, 300, 267, 267, 300, 300, 267, 233], "value": 267.0}, {"name": "LastVisualChange", "lowerIsBetter": true, "alertThreshold": 2.0, "unit": "ms", "shouldAlert": true, "replicates": [333, 300, 234, 300, 300, 267, 267, 267, 234, 200, 267, 300, 300, 267, 234, 267, 367, 200, 300, 267, 434, 300, 300, 267, 233], "value": 276.2}, {"name": "PerceptualSpeedIndex", "lowerIsBetter": true, "alertThreshold": 2.0, "unit": "ms", "shouldAlert": true, "replicates": [333, 300, 234, 300, 300, 267, 267, 267, 200, 200, 267, 300, 300, 267, 234, 267, 300, 200, 300, 267, 267, 300, 300, 267, 233], "value": 267.0}, {"name": "SpeedIndex", "lowerIsBetter": true, "alertThreshold": 2.0, "unit": "ms", "shouldAlert": true, "replicates": [333, 300, 234, 300, 300, 267, 267, 267, 200, 200, 267, 300, 300, 267, 234, 267, 301, 200, 300, 267, 269, 300, 300, 267, 233], "value": 267.2}, {"name": "cpuTime", "lowerIsBetter": true, "alertThreshold": 2.0, "unit": "ms", "shouldAlert": false, "replicates": [1802, 1797, 1758, 1734, 1716, 1739, 1744, 1719, 1766, 1843, 1770, 1806, 1809, 1787, 1776, 1725, 1746, 1781, 1802, 1731, 1771, 1766, 1751, 1746, 1747], "value": 1765.0}, {"name": "dcf", "lowerIsBetter": true, "alertThreshold": 2.0, "unit": "ms", "shouldAlert": false, "replicates": [302, 136, 128, 133, 143, 146, 134, 146, 138, 141, 147, 142, 133, 130, 132, 140, 137, 133, 135, 142, 145, 130, 147, 142, 130], "value": 142.2}, {"name": "fcp", "lowerIsBetter": true, "alertThreshold": 2.0, "unit": "ms", "shouldAlert": true, "replicates": [327, 162, 140, 148, 161, 142, 151, 142, 149, 152, 143, 157, 152, 146, 148, 154, 162, 159, 158, 154, 148, 146, 161, 155, 156], "value": 156.5}, {"name": "fnbpaint", "lowerIsBetter": true, "alertThreshold": 2.0, "unit": "ms", "shouldAlert": false, "replicates": [328, 164, 141, 149, 163, 144, 155, 143, 151, 153, 144, 160, 155, 150, 154, 156, 163, 160, 160, 155, 150, 153, 162, 156, 159], "value": 158.8}, {"name": "largestContentfulPaint", "lowerIsBetter": true, "alertThreshold": 2.0, "unit": "ms", "shouldAlert": true, "replicates": [329, 165, 142, 150, 164, 144, 156, 161, 151, 153, 145, 160, 156, 150, 154, 157, 164, 161, 160, 155, 150, 153, 163, 157, 159], "value": 160.0}, {"name": "loadtime", "lowerIsBetter": true, "alertThreshold": 2.0, "unit": "ms", "shouldAlert": true, "replicates": [742, 448, 251, 250, 247, 249, 255, 249, 246, 545, 255, 502, 451, 426, 473, 441, 463, 502, 535, 494, 445, 463, 494, 248, 249], "value": 374.5}]}, {"name": "google-search", "type": "pageload", "extraOptions": ["fission", "webrender", "warm"], "tags": ["fission", "webrender", "warm"], "lowerIsBetter": true, "unit": "ms", "alertThreshold": 2.0, "subtests": [{"name": "ContentfulSpeedIndex", "lowerIsBetter": 
true, "alertThreshold": 2.0, "unit": "ms", "shouldAlert": true, "replicates": [100, 100, 100, 33, 100, 167, 100, 234, 133, 100, 167, 133, 133, 133, 100, 134, 100, 100, 100, 133, 66, 100, 133, 66], "value": 108.5}, {"name": "FirstVisualChange", "lowerIsBetter": true, "alertThreshold": 2.0, "unit": "ms", "shouldAlert": true, "replicates": [100, 100, 100, 33, 100, 167, 100, 234, 133, 100, 167, 133, 133, 133, 100, 134, 100, 100, 100, 133, 66, 100, 133, 66], "value": 108.5}, {"name": "LastVisualChange", "lowerIsBetter": true, "alertThreshold": 2.0, "unit": "ms", "shouldAlert": true, "replicates": [100, 100, 100, 33, 100, 167, 100, 234, 133, 100, 167, 133, 133, 133, 100, 134, 100, 100, 100, 133, 66, 100, 133, 66], "value": 108.5}, {"name": "PerceptualSpeedIndex", "lowerIsBetter": true, "alertThreshold": 2.0, "unit": "ms", "shouldAlert": true, "replicates": [100, 100, 100, 33, 100, 167, 100, 234, 133, 100, 167, 133, 133, 133, 100, 134, 100, 100, 100, 133, 66, 100, 133, 66], "value": 108.5}, {"name": "SpeedIndex", "lowerIsBetter": true, "alertThreshold": 2.0, "unit": "ms", "shouldAlert": true, "replicates": [100, 100, 100, 33, 100, 167, 100, 234, 133, 100, 167, 133, 133, 133, 100, 134, 100, 100, 100, 133, 66, 100, 133, 66], "value": 108.5}, {"name": "cpuTime", "lowerIsBetter": true, "alertThreshold": 2.0, "unit": "ms", "shouldAlert": false, "replicates": [1064, 1046, 1017, 1053, 1070, 1049, 1071, 1021, 1060, 1075, 1099, 1102, 1074, 1034, 1031, 1087, 1060, 1063, 1103, 1091, 1069, 1065, 1078, 1116, 1033], "value": 1064.9}, {"name": "dcf", "lowerIsBetter": true, "alertThreshold": 2.0, "unit": "ms", "shouldAlert": false, "replicates": [70, 71, 67, 69, 76, 74, 72, 75, 73, 72, 72, 73, 75, 69, 72, 70, 72, 69, 71, 71, 71, 68, 72, 75, 71], "value": 71.6}, {"name": "fcp", "lowerIsBetter": true, "alertThreshold": 2.0, "unit": "ms", "shouldAlert": true, "replicates": [86, 76, 78, 82, 91, 78, 77, 79, 91, 75, 79, 80, 95, 87, 79, 77, 77, 82, 92, 77, 90, 74, 79, 92, 86], "value": 82.1}, {"name": "fnbpaint", "lowerIsBetter": true, "alertThreshold": 2.0, "unit": "ms", "shouldAlert": false, "replicates": [88, 77, 78, 83, 92, 79, 78, 81, 92, 76, 80, 81, 95, 87, 80, 77, 78, 83, 93, 77, 91, 75, 80, 93, 87], "value": 83.0}, {"name": "largestContentfulPaint", "lowerIsBetter": true, "alertThreshold": 2.0, "unit": "ms", "shouldAlert": true, "replicates": [88, 78, 79, 84, 93, 80, 79, 81, 93, 77, 81, 82, 96, 88, 80, 78, 78, 83, 93, 78, 91, 75, 80, 94, 88], "value": 83.7}, {"name": "loadtime", "lowerIsBetter": true, "alertThreshold": 2.0, "unit": "ms", "shouldAlert": true, "replicates": [129, 131, 122, 126, 135, 132, 131, 134, 130, 131, 305, 295, 129, 127, 129, 306, 131, 128, 302, 312, 130, 128, 131, 132, 125], "value": 153.7}]}], "application": {"name": "firefox", "version": "126.0a1"}}
[task 2024-03-23T16:28:05.247Z] 16:28:05     INFO -  perftest-output Info: results can also be found locally at: /opt/worker/tasks/task_171120747378779/build/raptor.json
[task 2024-03-23T16:28:05.247Z] 16:28:05    ERROR -  perftest-results-handler Critical: TEST-UNEXPECTED-FAIL | Some visual metrics have an erroneous value of 0.

Retriggers and backfills might point to Bug 1884000.
Kash, any chance you could take a look?
Thank you.
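When triaging logs like the one above, it can help to parse the PERFHERDER_DATA line directly and report which suite/subtest produced a zero. The helper below is hypothetical, not part of the harness, and assumes the PERFHERDER_DATA JSON sits on a single line in the raw live_backing.log (it is only wrapped in the excerpt above).

# Hypothetical log-triage helper: find PERFHERDER_DATA in a live_backing.log and
# report any subtest with a zero replicate or a zero aggregate value.
import json
import sys

MARKER = "PERFHERDER_DATA: "


def report_zero_subtests(log_path):
    with open(log_path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            if MARKER not in line:
                continue
            try:
                data = json.loads(line.split(MARKER, 1)[1])
            except ValueError:
                continue  # line was truncated or not valid JSON
            for suite in data.get("suites", []):
                for subtest in suite.get("subtests", []):
                    zeros = [r for r in subtest.get("replicates", []) if r == 0]
                    if zeros or subtest.get("value") == 0:
                        print(
                            "%s / %s: %d zero replicate(s), value=%s"
                            % (suite["name"], subtest["name"], len(zeros), subtest.get("value"))
                        )


if __name__ == "__main__":
    report_zero_subtests(sys.argv[1])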

Flags: needinfo?(kshampur)
Whiteboard: [stockwell needswork:owner]
See Also: → 1888219

Bug 1888219 should hopefully resolve this once it lands.

Flags: needinfo?(kshampur)
Status: REOPENED → RESOLVED
Closed: 1 year ago1 year ago
Resolution: --- → INCOMPLETE