Closed
Bug 855044
Opened 12 years ago
Closed 12 years ago
Tp4 Mobile NoChrome regression on Mar 26
Categories
(Firefox for Android Graveyard :: General, defect)
Tracking
(Not tracked)
RESOLVED
DUPLICATE
of bug 877779
People
(Reporter: mfinkle, Unassigned)
References
Details
mozilla-inbound is showing a regression Mar 26 16:18
http://graphs.mozilla.org/graph.html#tests=[[84,63,20]]&sel=1363725435734,1364330235734&displayrange=7&datatype=running
Looks like a change from ~715 ms to ~850 ms.
Comment 1•12 years ago
The regressing changeset looked to me like https://hg.mozilla.org/integration/mozilla-inbound/rev/2cadff91837b, but it seems implausible for that change to affect tpn.
I re-triggered tpn on that rev + earlier revisions, but all of the re-triggers produced the regressed value. Did Talos change? I don't see anything checked in to the talos repo today, nor do I see evidence of a new talos deployment today.
Comment 2•12 years ago
talos hasn't changed since 3/23 (bug 853868). This might be a real product regression.
Reporter
Comment 3•12 years ago
(In reply to Geoff Brown [:gbrown] from comment #1)
> The regressing changeset looked to me like
> https://hg.mozilla.org/integration/mozilla-inbound/rev/2cadff91837b, but it
> seems implausible for that change to affect tpn.
>
> I re-triggered tpn on that rev + earlier revisions, but all of the
> re-triggers produced the regressed value. Did Talos change? I don't see
> anything checked in to the talos repo today, nor do I see evidence of a new
> talos deployment today.
I also see what Geoff noticed: all re-triggers of "tpn" are regressed. It seems to have nothing to do with the changesets themselves.
Comment 4•12 years ago
:armenzg, did anything change on our foopies or talos that you are aware of this week? Last week we deployed a new talos.zip, but that was March 22/23.
Comment 5•12 years ago
Callek, could the switch to Linux foopies have triggered these?
(We have had Linux foopies for a while, though.)
Our last reconfiguration of the masters was 4 days ago.
Do these tests hit any web servers?
Comment 6•12 years ago
This test hits the bm-remote web server. So do the other tests, but this one would be affected more than the others.
Comment 7•12 years ago
FTR, it seems that we're back to normal:
http://graphs.mozilla.org/graph.html#tests=[[84,63,20],[84,11,20],[84,23,20],[84,52,20]]&sel=1364171703354.3687,1364394524407&displayrange=7&datatype=running
It was most likely an infra issue, since the data points for Aurora and Try also went up, and the Aurora data points came back down. Unfortunately, I don't have data for the Firefox/mobile branch.
I don't see any spikes on our bm-remote web hosts or anything significant in the last week.
As a side note, bm-remote-talos-webhost-03 gets most (if not all) of the load out of the three heads.
Our foopies are now in other data centers compared to our web hosts.
Maybe a network change between the two locations? Maybe network load?
arr, do you know of anything that could have changed yesterday, or anything we might have been undergoing?
(I might be looking at, or asking, the wrong questions.)
Comment 8•12 years ago
It seems I got it wrong: the Linux foopies are in MTV as well.
Reporter
Comment 9•12 years ago
Yes. Things seem to have un-regressed.
http://graphs.mozilla.org/graph.html#tests=[[84,63,20]]&sel=1363795689900,1364400489900&displayrange=7&datatype=running
I am fine with closing this as WORKSFORME, unless we need to keep it open for further investigation.
Updated•12 years ago
Status: NEW → RESOLVED
Closed: 12 years ago
Resolution: --- → WORKSFORME
Updated•12 years ago
Resolution: WORKSFORME → DUPLICATE
Assignee
Updated•4 years ago
Product: Firefox for Android → Firefox for Android Graveyard