Closed Bug 1509835 Opened 3 years ago Closed 3 years ago

Raptor test raptor-tp6-amazon-chrome is not rendering a completely loaded amazon site

Categories

(Testing :: Raptor, defect, P1)

Tracking

(firefox68 fixed)

RESOLVED FIXED
mozilla68

People

(Reporter: acreskey, Assigned: Bebe)

References

Details

Attachments

(3 files)

Attached image amazon-in-chrome.png
When examining the page rendered in the raptor-tp6-amazon-chrome test you can see that not all of the style sheets have been loaded. (See attachment).

Since this test is used to compare performance between browsers, it's necessary that both Firefox and Chrome are rendering the same page.

Note: I'm quite sure this used to work, so perhaps a mozregression bisect can quickly isolate the problem.
Thanks Andrew.

I tried this on my OSX machine with my version of Chrome and I see the same thing. The page definitely looks better on Firefox. Wouldn't this more likely be an issue with Chrome? I can't see how this would be a mitmproxy issue; we haven't changed the recording that is played back, or any playback options, etc.
It's a bit of a mystery to me at the moment, Rob :)

One thing I noticed is that when running the page on Chrome there are an awful lot of HTTP 404 errors in the playback.
It's possible that Florin's fix here will resolve these.
https://bugzilla.mozilla.org/show_bug.cgi?id=1511029
FYI, I tried Florin's patch but it does not resolve this issue.

Looks like missing style sheets.

Might need to debug this request by request.

Or try on an older version of Chrome?
There is a small difference between the requests from Chrome and the requests from Firefox when generating the hash.
https://searchfox.org/mozilla-central/source/testing/raptor/raptor/playback/alternate-server-replay.py#64
When generating the hash we have a different character in the path.

Not sure how to fix this, though, or why we have this difference.

Example:

Chrome
see: 61hpgixl7UL._RC%7C01evdoiemkL

['443', 'https', 'GET', '/images/I/61hpgixl7UL._RC%7C01evdoiemkL.css,01K+Ps1DeEL.css,31Dj+6BjA7L.css,11PuQQlCaSL.css,11UGC+GXOPL.css,21LK7jaicML.css,11L58Qpo0GL.css,21EuGTxgpoL.css,01Xl9KigtzL.css,21GwE3cR-yL.css,019SHZnt8RL.css,01KHSUOoAjL.css,11vZhCgAHbL.css,21Mne54CsmL.css,11WgRxUdJRL.css,01dU8+SPlFL.css,11DGn6WmpTL.css,01SHjPML6tL.css,111-D2qRjiL.css,01QrWuRrZ-L.css,31WnKks7R1L.css,114KWZGKCVL.css,01cbS3UK11L.css,21zId9c5Z5L.css,01cNnXK5MbL.css,11R1EhdqF-L.css_.css', 'images-na.ssl-images-amazon.com', '?']


Firefox
see: 61hpgixl7UL._RC|01evdoiemkL

['443', 'https', 'GET', '/images/I/61hpgixl7UL._RC|01evdoiemkL.css,01K+Ps1DeEL.css,31Dj+6BjA7L.css,11PuQQlCaSL.css,11UGC+GXOPL.css,21LK7jaicML.css,11L58Qpo0GL.css,21EuGTxgpoL.css,01Xl9KigtzL.css,21GwE3cR-yL.css,019SHZnt8RL.css,01KHSUOoAjL.css,11vZhCgAHbL.css,21Mne54CsmL.css,11WgRxUdJRL.css,01dU8+SPlFL.css,11DGn6WmpTL.css,01SHjPML6tL.css,111-D2qRjiL.css,01QrWuRrZ-L.css,31WnKks7R1L.css,114KWZGKCVL.css,01cbS3UK11L.css,21zId9c5Z5L.css,01cNnXK5MbL.css,11R1EhdqF-L.css_.css', 'images-na.ssl-images-amazon.com', '?']
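
A minimal Python sketch of why these two requests end up with different hashes, and how decoding the percent-escapes before hashing would make them match; make_hash and the abbreviated paths are illustrative only, not the actual code in alternate-server-replay.py:

    import hashlib
    from urllib.parse import unquote

    def make_hash(fields):
        # Illustrative only: digest the ordered request fields into one key,
        # conceptually what the replay addon does when looking up a response.
        return hashlib.sha256(repr(fields).encode("utf-8")).hexdigest()

    # Abbreviated paths from the examples above:
    chrome_path = "/images/I/61hpgixl7UL._RC%7C01evdoiemkL.css_.css"   # Chrome escapes the pipe
    firefox_path = "/images/I/61hpgixl7UL._RC|01evdoiemkL.css_.css"    # Firefox sends it literally

    chrome = ["443", "https", "GET", chrome_path, "images-na.ssl-images-amazon.com", "?"]
    firefox = ["443", "https", "GET", firefox_path, "images-na.ssl-images-amazon.com", "?"]

    print(make_hash(chrome) == make_hash(firefox))      # False: no recorded response is matched
    print(make_hash([unquote(f) for f in chrome]) ==
          make_hash([unquote(f) for f in firefox]))     # True: unquoting %7C back to | makes them match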
:rwood in this case can we record the request on chrome and use a separate recording?
Flags: needinfo?(rwood)
(In reply to Florin Strugariu [:Bebe] from comment #5)
> :rwood in this case can we record the request on chrome and use a separate
> recording?

I really prefer not to use separate recordings for Google Chromium and Firefox if we can avoid it. It would add more overhead to tooltool, standing up new tests, re-recording when we upgrade the pages/mitmproxy, etc. I also think since we are comparing head-to-head we should use the same recording on both, IMO.
Flags: needinfo?(rwood)
Priority: -- → P1

(In reply to Robert Wood [:rwood] from comment #6)
> (In reply to Florin Strugariu [:Bebe] from comment #5)
> > :rwood in this case can we record the request on chrome and use a separate
> > recording?
>
> I really prefer not to use separate recordings for Google Chromium and
> Firefox if we can avoid it. It would add more overhead to tooltool, standing
> up new tests, re-recording when we upgrade the pages/mitmproxy, etc. I also
> think since we are comparing head-to-head we should use the same recording
> on both, IMO.

When recording, can you load the page once using Firefox, then load it again using Chrome, and then save the output? This can (hopefully) ensure that if different resources are requested by one browser and not the other because of UA sniffing or similar, the recording will still have them available.

Is this still an issue? Thanks.

Flags: needinfo?(acreskey)

Yes, just checked and I still see the missing images / web fonts etc on Chrome.

Flags: needinfo?(acreskey)

I helped :marauder retest this issue and the behavior from comment 4 is still reproducible.

As a workaround we recorded both browsers in the same recording.
This way we get both sets of requests and the replay works fine on both Chrome and Firefox.

:davehunt :rwood please approve this change

Flags: needinfo?(rwood)
Flags: needinfo?(dave.hunt)

:bebe and I discussed this earlier. It's due to Firefox not URL-encoding pipes (|) in its requests, so the hashes of these requests do not match the hashes of the requests Chrome sends. Bebe is going to look into how we might fix this in mitmproxy. I'm not keen on opening the can of worms of separate recordings, but I suppose that would be the alternative. Problems with this would be sites that are very different between Firefox and Chrome (an extreme example currently being https://web.skype.com/), or knowing where we draw the line. Would we re-record all sites in Firefox and Chrome? Would we also have device-specific recordings, etc.?

Flags: needinfo?(dave.hunt)

I agree, it would be best if this can be fixed in mitmproxy instead.

Flags: needinfo?(rwood)

To avoid the escaping difference on Chrome we can undo it in mitmproxy.

When we receive a request, we check whether the URL contains %7C and replace it with |.
I know it's not the most elegant solution, but it's the only thing we can do at the moment.

:rwood :tarek what do you think about it?

try build:
https://treeherder.mozilla.org/#/jobs?repo=try&revision=9143294f77662f93eec394e82f59a0337cc70511

One of the issues I see at the moment is a URL that legitimately contains %7C being changed by mistake.
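
A sketch of the targeted workaround described in this comment, assuming a small helper runs over the request path before the hash is generated; normalize_pipes is a hypothetical name, not part of the actual patch:

    def normalize_pipes(path):
        # Chrome percent-encodes the pipe while Firefox sends it literally,
        # so map Chrome's %7C (upper or lower case) back to | before hashing.
        return path.replace("%7C", "|").replace("%7c", "|")

The caveat above still applies: a URL that genuinely contains the literal text %7C would also be rewritten.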

Flags: needinfo?(tarek)
Flags: needinfo?(rwood)

Why special-case %7C? Unless I am missing something, it seems to me that we want to unquote the URL in case other escaped characters are present. I commented on the patch assuming this.
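
The more general approach suggested here would unquote the whole path rather than special-case one escape sequence; a minimal sketch (not the landed patch), using Python's standard library:

    from urllib.parse import unquote

    def normalize_path(path):
        # Decode every percent-escape (%7C -> |, %2C -> ',', ...) so that a
        # Firefox request and a Chrome request for the same resource hash to
        # the same key regardless of which characters the browser encoded.
        return unquote(path)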

Flags: needinfo?(tarek)

Deferring to :tarek as he's more familiar with this (thanks!)

Flags: needinfo?(rwood)
Pushed by fstrugariu@mozilla.com:
https://hg.mozilla.org/integration/autoland/rev/d55785a2bf02
Raptor test raptor-tp6-amazon-chrome is not rendering a completely loaded amazon site r=tarek
Status: NEW → RESOLVED
Closed: 3 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla68
Assignee: nobody → fstrugariu
Regressions: 1544356

If anyone else can reproduce this I'll log a new bug -- I'm seeing missing images on amazon-cold through raptor.
