[tracking bug] automate running talos on mobile linux builds

Status

Status: VERIFIED FIXED
Product: Release Engineering
Component: General
Priority: P2
Severity: normal
Reported: 10 years ago
Last modified: 5 years ago

People

(Reporter: alice, Assigned: aki)

Tracking

Keywords: mobile

Firefox Tracking Flags

(Not tracked)

Details

(Reporter)

Description

10 years ago
Tracking bug for talos code improvements for mobile testing.
Component: Release Engineering: Talos → Release Engineering: Future
(Reporter)

Updated

10 years ago
Assignee: nobody → anodelman
Priority: -- → P2
(Reporter)

Updated

10 years ago
Depends on: 419947
Per a meeting with dougt and alice, removing some blockers that are nice-to-have but not actually blocking.
No longer depends on: 403835, 408093, 408228, 419776
(Reporter)

Updated

10 years ago
Depends on: 432764
Component: Release Engineering: Future → Release Engineering: Talos
Summary: tracking bug for talos on mobile work → [tracking bug] automate running talos on mobile linux builds
Blocks: 448073
No longer depends on: 448073
(Assignee)

Updated

10 years ago
No longer blocks: 448073
Depends on: 448073
No longer depends on: 430200

Updated

10 years ago
Duplicate of this bug: 439052

Updated

10 years ago
Blocks: 448073
No longer depends on: 448073
(Assignee)

Comment 3

10 years ago
Update:

 * I'm changing the branch name from 'fennec' to 'mobile': Talos looks for the browser process by name, and having a 'fennec' branch alongside the 'fennec' process was causing issues.  If I should avoid the 'mobile' name as well, let me know.

 * The date was off on n810-talos05; now fixed.

 * The browser_wait seems to need a different value for each test: setting it too long breaks the test, and so does setting it too short.  That points to an underlying issue, but for now I'm investigating the best browser_wait and timeout for each test (a rough sketch of the kind of per-test table I mean follows this list).  If anyone has ideas about what numbers work best, please let me know.

 * Adding the "Unknown platform" bug to the dependency list.

 * Three of the n810s are running tp3 at one cycle and the rest at three cycles, which makes for two separate groupings of results on graphs-stage.  I'll probably set them all to three cycles once I can reliably run tp3 without browser_wait being too long or too short and without it crashing mid-stream.
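
For illustration, here is a minimal sketch (in Python, which is what the Talos harness is written in) of the kind of per-test table I have in mind.  The test names are real suites, but the numbers, the table, and the helper are placeholders made up for discussion, not values any of the n810s actually run with:

    # Sketch only -- not real Talos code or config.
    # One place to keep per-test browser_wait/timeout values while we figure
    # out what each test can tolerate.  All numbers below are placeholders.
    PER_TEST_TIMING = {
        # test name: (browser_wait seconds, timeout seconds)
        "ts":     (20, 300),
        "tdhtml": (30, 600),
        "tp3":    (60, 1800),
    }
    DEFAULT_TIMING = (30, 600)

    def timing_for(test_name):
        """Return (browser_wait, timeout) for a test, falling back to a default."""
        return PER_TEST_TIMING.get(test_name, DEFAULT_TIMING)

    if __name__ == "__main__":
        for test in ("ts", "tp3", "some_new_test"):
            wait, timeout = timing_for(test)
            print("%s: browser_wait=%ss timeout=%ss" % (test, wait, timeout))

The point is just to keep the tuning in one spot so it's obvious which tests have custom values and which fall back to the default.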

I'll be out of the office next week.  John knows how to reboot them if they hang and he will check them daily.
Depends on: 469721
(Assignee)

Comment 4

10 years ago
re: browser_wait times: still custom per test on n810-talos{03,04,05}.  However, they look a lot healthier now that I've reduced the display brightness, which is helping with heat.
(Assignee)

Updated

9 years ago
Depends on: 471465
(Assignee)

Updated

9 years ago
Depends on: 471599
All talos suites (except Tp) have been running fine on graphs-stage.m.o for a while now. We'll start posting results for all suites except Tp to production graphs.m.o as soon as bug#469721 is fixed in production (later today/Monday).

The crash in Tp is being tracked in bug#471585.
(Assignee)

Updated

9 years ago
Depends on: 468521
(Assignee)

Updated

9 years ago
Assignee: anodelman → aki
(Assignee)

Updated

9 years ago
No longer depends on: 454731
(Assignee)

Comment 6

9 years ago
This has been running long enough that I'm going to call it fixed.  The JIT crash/hang and the config merge are tracked in separate bugs.
Status: NEW → RESOLVED
Last Resolved: 9 years ago
No longer depends on: 471585, 471599
Resolution: --- → FIXED

Updated

9 years ago
Component: Release Engineering: Talos → Release Engineering
verified with beta3
Status: RESOLVED → VERIFIED
Product: mozilla.org → Release Engineering