Closed
Bug 1324470
Opened 6 years ago
Closed 5 years ago
Add mach command to access historical test results
Categories
(Testing :: General, defect)
Tracking
(firefox54 fixed)
RESOLVED
FIXED
mozilla54
| | Tracking | Status |
|---|---|---|
| firefox54 | --- | fixed |
People
(Reporter: gbrown, Assigned: gbrown)
References
(Blocks 1 open bug)
Details
Attachments
(1 file, 3 obsolete files)
11.39 KB, patch
gps: review+
ekyle: review+
Details | Diff | Splinter Review
ActiveData has very comprehensive data about test results, but accessing that data can be awkward. I'd like a simple mach command that provides a report of commonly useful information like how many times a test has failed, and how long it takes to run. Pass/fail counts for a branch/platform/time-period are of interest when trying to determine when/why a test regressed. (Fail counts are already available in OrangeFactor, but not per-test pass counts.) Test durations are of interest when investigating test timeouts.
Comment 1•6 years ago (Assignee)
This is a little rough still, but does most of what I want.

$ ./mach test-info devtools/client/webconsole/test/browser_console_native_getters.js

Test results for devtools/client/webconsole/test/browser_console_native_getters.js
on mozilla-central,mozilla-inbound,autoland
between today-week and now

linux32/debug: 0 failures in 22 runs
linux32/opt: 0 failures in 29 runs
linux32/opt-e10s: 0 failures in 32 runs
linux32/pgo: 0 failures in 19 runs
linux32/pgo-e10s: 0 failures in 17 runs
linux64/asan-chunked: 0 failures in 150 runs
linux64/asan-e10s: 0 failures in 72 runs
linux64/debug-chunked: 0 failures in 65 runs
linux64/opt: 0 failures in 21 runs
linux64/opt-chunked: 0 failures in 23 runs
linux64/opt-e10s: 0 failures in 51 runs
linux64/pgo: 0 failures in 20 runs
linux64/pgo-chunked: 0 failures in 29 runs
linux64/pgo-e10s: 0 failures in 29 runs
macosx64/debug: 0 failures in 27 runs
macosx64/debug-e10s: 0 failures in 26 runs
macosx64/opt: 0 failures in 29 runs
macosx64/opt-e10s: 0 failures in 30 runs
win32/debug: 0 failures in 25 runs
win32/debug-e10s: 0 failures in 29 runs
win32/opt: 0 failures in 27 runs
win32/opt-e10s: 0 failures in 24 runs
win32/pgo: 0 failures in 26 runs
win32/pgo-e10s: 0 failures in 23 runs
win64/debug: 0 failures in 23 runs
win64/debug-e10s: 0 failures in 21 runs
win64/opt: 0 failures in 32 runs
win64/opt-e10s: 0 failures in 33 runs
win64/pgo: 0 failures in 19 runs
win64/pgo-e10s: 0 failures in 19 runs

Test durations for devtools/client/webconsole/test/browser_console_native_getters.js
on mozilla-central,mozilla-inbound,autoland
between today-week and now

linux32/debug: 44.02 s (38.38 s - 47.45 s over 22 runs)
linux32/opt: 6.50 s (5.61 s - 7.74 s over 29 runs)
linux32/opt-e10s: 7.47 s (6.10 s - 10.26 s over 32 runs)
linux32/pgo: 5.44 s (5.06 s - 5.87 s over 19 runs)
linux32/pgo-e10s: 5.97 s (5.30 s - 6.71 s over 17 runs)
linux64/asan-chunked: 16.52 s (13.78 s - 21.41 s over 150 runs)
linux64/asan-e10s: 16.95 s (15.36 s - 21.41 s over 72 runs)
linux64/debug-chunked: 19.46 s (18.58 s - 23.33 s over 65 runs)
linux64/opt: 6.40 s (5.29 s - 8.66 s over 21 runs)
linux64/opt-chunked: 3.53 s (3.27 s - 3.90 s over 23 runs)
linux64/opt-e10s: 6.06 s (3.27 s - 7.82 s over 51 runs)
linux64/pgo: 5.31 s (4.88 s - 5.83 s over 20 runs)
linux64/pgo-chunked: 3.22 s (2.96 s - 3.52 s over 29 runs)
linux64/pgo-e10s: 4.87 s (3.16 s - 6.44 s over 29 runs)
macosx64/debug: 10.06 s (8.50 s - 18.18 s over 27 runs)
macosx64/debug-e10s: 10.14 s (8.32 s - 17.00 s over 26 runs)
macosx64/opt: 3.03 s (2.76 s - 3.35 s over 29 runs)
macosx64/opt-e10s: 2.91 s (2.45 s - 3.46 s over 30 runs)
win32/debug: 15.74 s (13.80 s - 18.05 s over 25 runs)
win32/debug-e10s: 13.16 s (12.51 s - 14.40 s over 29 runs)
win32/opt: 3.52 s (3.36 s - 3.77 s over 27 runs)
win32/opt-e10s: 3.11 s (3.05 s - 3.41 s over 24 runs)
win32/pgo: 2.82 s (2.35 s - 3.02 s over 26 runs)
win32/pgo-e10s: 2.09 s (1.88 s - 2.47 s over 23 runs)
win64/debug: 21.25 s (15.20 s - 25.91 s over 23 runs)
win64/debug-e10s: 15.41 s (14.56 s - 17.58 s over 21 runs)
win64/opt: 3.29 s (3.10 s - 3.49 s over 32 runs)
win64/opt-e10s: 2.94 s (2.83 s - 3.10 s over 33 runs)
win64/pgo: 2.71 s (2.48 s - 2.89 s over 19 runs)
win64/pgo-e10s: 1.99 s (1.89 s - 2.26 s over 19 runs)

jmaher -- What do you think? Is there other info you'd like added?

ekyle -- Could you have a look at my queries? Is there a better way to implement query_results()?
This works fine for mochitests, but I never find data for other test types, like xpcshell:

$ ./mach test-info dom/indexedDB/test/unit/test_metadataRestore.js

Test results for dom/indexedDB/test/unit/test_metadataRestore.js
on mozilla-central,mozilla-inbound,autoland
between today-week and now

No general test data found.

Test durations for dom/indexedDB/test/unit/test_metadataRestore.js
on mozilla-central,mozilla-inbound,autoland
between today-week and now

No test durations found.

Is that expected?
Attachment #8819926 - Flags: feedback?(klahnakoski)
Attachment #8819926 - Flags: feedback?(jmaher)
Comment 2•6 years ago
This is really useful! Useful additions (as subcommands or additional commands/info) would be:

- data found in OrangeFactor (take data from the last 2 weeks), and any bugs with a title matching the leafname (to help find related bugs, or link to bugs for relevant discussion)
- hg history, specifically who to ping and when more recent changes took place
- a link to the component, and the manifest which contains the test (so we can see what other tests are run at the same time)

I also don't see android in that list - is that expected?

One other cool thing would be to figure out, for the runtimes, whether we are close to the max runtime for the harness/test. That would mean if we have a 45 second max runtime, we flag any config that is running >= 34 seconds on average (see the sketch after this comment).

I believe we have issues with some harnesses not submitting structured logs - maybe Kyle can enlighten us on what test harness/configurations are lacking complete data in ActiveData.
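[Editor's sketch] A minimal version of that near-timeout check, using the 45 s / roughly-75% numbers from the comment above; the ratio and names here are illustrative assumptions, not anything from the patch:

> # flag configs whose average runtime is close to the harness max runtime
> def near_timeout(avg_seconds, max_runtime=45.0, ratio=0.75):
>     # 34 s out of a 45 s limit is about 75%, per the suggestion above
>     return avg_seconds >= ratio * max_runtime
>
> assert near_timeout(34.0)       # flagged as risky
> assert not near_timeout(6.5)    # typical opt runtime, fine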
Updated•6 years ago
Attachment #8819926 - Flags: feedback?(jmaher)
Comment 3•6 years ago (Assignee)
(In reply to Joel Maher (:jmaher) from comment #2)
> I also don't see android in that list - is that expected?

That's just because this test doesn't run on Android. Thanks for all the enhancement ideas; I may leave some of them for follow-up bugs.
Comment 4•6 years ago
I will answer the easy questions first:

* OrangeFactor is in ActiveData, but it is not useful unless the Bugzilla ES cluster is also queried. Matching will be hard because it is fuzzy. There may be quick wins, but I have not given it much thought.
* Since hg is in ActiveData, I believe one query can provide a list of people that touched a file or directory, and when they last did so (including on try): https://activedata.allizom.org/tools/query.html#query_id=2VvYeJ8R
* If we are interested in a specific test, then we can query for all other tests in that chunk. The tests do seem to move often enough that the current manifest is probably out of date. This may not be an issue because the manifest is in tree.
* We can annotate tests with more information, like what the suite timeout value is; then we can query all tests that are approaching that timeout value. We already have the test/suite start/end times, so we know how far into the suite a test is running. I will need someone to point me to the metadata (in tree?) so it can be used to annotate these tests.
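[Editor's sketch] A rough guess at what the hg query mentioned above might look like in ActiveData's query language; the "repo" table and "changeset.*" field names are unconfirmed assumptions for illustration - the linked saved query is authoritative:

> {
>     "from": "repo",
>     "groupby": ["changeset.author"],
>     "select": [{"name": "last_touched", "value": "changeset.date", "aggregate": "max"}],
>     "where": {"regexp": {"changeset.files": "devtools/client/webconsole/.*"}},
>     "format": "list"
> }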
Comment 5•6 years ago
Comment on attachment 8819926 [details] [diff] [review]
work in progress

Looking at the `format*` functions:

> def format_branches(self, branches):
>     body = ''
>     for branch in branches.split(','):
>         if len(body):
>             body = body + ','
>         body = body + '{"eq":{"build.branch":"%s"}}' % branch
>     return '{"or":[%s]}' % body

May I suggest these functions return a JSON-izable structure instead? Something along the lines of:

> def format_branches(self, branches):
>     return {"or": [{"eq": {"build.branch": branch}} for branch in branches.split(',')]}
>
> def format_times(self, start, end):
>     return {"and": [
>         {"gt": {"run.timestamp": {"date": start}}},
>         {"lt": {"run.timestamp": {"date": end}}}
>     ]}

Furthermore, the "in" operator can be used to simplify `format_branches` further:

> def format_branches(self, branches):
>     return {"in": {"build.branch": branches.split(',')}}

Looking at `query_results()`, the query building can also go from string construction to JSON-izable structures:

> query = """
> {
>     "from":"unittest",
>     "limit":100,
>     "groupby":["build.platform","build.type","run.type"],
>     "select":[{"aggregate":"count"}, "0"],
>     "where":{"and":[
>         {"eq":{"result.test":"%s"}},
>         %s,
>         %s
>     ]}
> }
> """ % (test, self.format_branches(branches), self.format_times(start, end))
> all = self.submit(query)

...can be rephrased as...

> query = {
>     "from": "unittest",
>     "limit": 100,
>     "groupby": ["build.platform", "build.type", "run.type"],
>     "select": [{"aggregate": "count"}, "0"],
>     "where": {"and": [
>         {"eq": {"result.test": test}},
>         self.format_branches(branches),
>         self.format_times(start, end)
>     ]}
> }
> all = self.submit(query)

...and since the `format*` functions are trivial, we can inline them...

> query = {
>     "from": "unittest",
>     "limit": 100,
>     "groupby": ["build.platform", "build.type", "run.type"],
>     "select": [{"aggregate": "count"}, "0"],
>     "where": {"and": [
>         {"eq": {"result.test": test}},
>         {"in": {"build.branch": branches.split(',')}},
>         {"gt": {"run.timestamp": {"date": start}}},
>         {"lt": {"run.timestamp": {"date": end}}}
>     ]}
> }
> all = self.submit(query)

I noticed you are joining `all` and `failures`; they can be merged into a single query:

> query = {
>     "from": "unittest",
>     "limit": 100,
>     "groupby": ["build.platform", "build.type", "run.type"],
>     "select": [
>         {"aggregate": "count"},
>         {
>             "name": "failures",
>             "value": {"case": {"when": {"eq": {"result.ok": "F"}}, "then": 1}},
>             "aggregate": "sum",
>             "default": 0
>         }
>     ],
>     "where": {"and": [
>         {"eq": {"result.test": test}},
>         {"in": {"build.branch": branches.split(',')}},
>         {"gt": {"run.timestamp": {"date": start}}},
>         {"lt": {"run.timestamp": {"date": end}}}
>     ]}
> }
> all = self.submit(query)

_____________________________________________________________
Below this line, my suggestions have dubious value.

The column indices can make it hard to follow the intention of the code, for example:

> data = self.query_results(test_name, branches, start, end)
> if data and len(data) > 0:
>     for record in data:
>         e10s = ("-%s" % record[2]) if record[2] else ""
>         platform = "%s/%s%s:" % (record[0], record[1], e10s)
>         runs = record[3]
>         failures = record[4] if record[4] else 0
>         print("%-30s %6d failures in %6d runs" % (platform, failures, runs))
> else:
>     print("No general test data found.")

It is also prone to breakage should the query change. If you add `{"format":"list"}` to all queries, you will get an array of objects (instead of an array of tuples), which will save you from keeping column indices aligned with your code.

Unfortunately, ActiveData uses many dots (`.`) in column names, which results in inner objects that are awkward to access from Python without a library to deal with them. I suggest renaming your columns to exclude the dots. So...

> "groupby": ["build.platform", "build.type", "run.type"],

...gets rewritten to...

> "groupby": [
>     {"name": "platform", "value": "build.platform"},
>     {"name": "build_type", "value": "build.type"},
>     {"name": "run_type", "value": "run.type"}
> ],

...then the code reads like...

> data = self.query_results(test_name, branches, start, end)
> if data and len(data) > 0:
>     for record in data:
>         e10s = ("-%s" % record['run_type']) if record['run_type'] else ""
>         platform = "%s/%s%s:" % (record['platform'], record['build_type'], e10s)
>         runs = record['count']
>         failures = record['failures']
>         print("%-30s %6d failures in %6d runs" % (platform, failures, runs))
> else:
>     print("No general test data found.")

The above suggestions mess with your `submit` function. So, change it to something like:

> def submit(self, query):
>     import requests
>     response = requests.post("http://activedata.allizom.org/query",
>                              data=json.dumps(query),
>                              stream=True)
>     response.raise_for_status()
>     data = response.json()["data"]
>     data.sort(key=lambda r: (r['platform'], r['build_type'], r['run_type']))
>     return data

ActiveData will produce lots of nulls; you may find a null-coalescing function helpful. Instead of...

> e10s = ("-%s" % record['run_type']) if record['run_type'] else ""

...you can write...

> e10s = coalesce(record['run_type'], "")

...where `coalesce` is defined as...

> def coalesce(*args):
>     for a in args:
>         if a is not None:
>             return a
>     return None

Using `or` also works in this specific case:

> e10s = record['run_type'] or ""

but in general, coalescing Nones is more consistent than `or`-ing together falsey values.
Comment 6•6 years ago
Comment on attachment 8819926 [details] [diff] [review]
work in progress

Review of attachment 8819926 [details] [diff] [review]:
-----------------------------------------------------------------

See https://bugzilla.mozilla.org/show_bug.cgi?id=1324470#c5
Attachment #8819926 - Flags: feedback?(klahnakoski) → feedback-
Comment 7•6 years ago (Assignee)
Updated with Kyle's feedback - thanks! Mostly working the way I want, I think.
Attachment #8819926 - Attachment is obsolete: true
Comment 8•6 years ago
(In reply to Geoff Brown [:gbrown] from comment #7)
> Created attachment 8822769 [details] [diff] [review]
> work in progress

Looking at:

> searches = [
>     self.full_test_name,
>     self.test_name,
>     ".*%s" % self.test_name,
>     self.short_name,
>     self.robo_name,
>     ".*%s.*" % self.test_name,
>     ".*%s.*" % self.short_name
> ]
> for re in searches:
>     print("Searching for '%s' in ActiveData..." % re)
>     query = {
>         "from": "unittest",
>         "format": "list",
>         "limit": 10,
>         "groupby": ["result.test"],
>         "where": {"regexp": {"result.test": re}}
>     }
>     data = self.submit(query)
>     if data and len(data) > 0:
>         self.activedata_test_name = re
>         break

It is faster to search for all combinations at once. It is more verbose, but using "eq" instead of "regexp" will make the query run faster, and there are fewer queries:

> searches = [
>     {"eq": {"result.test": self.full_test_name}},
>     {"eq": {"result.test": self.test_name}},
>     {"regexp": {"result.test": ".*%s" % self.test_name}},
>     {"eq": {"result.test": self.short_name}},
>     {"eq": {"result.test": self.robo_name}},
>     {"regexp": {"result.test": ".*%s.*" % self.test_name}},
>     {"regexp": {"result.test": ".*%s.*" % self.short_name}}
> ]
> print("Searching for '%s' in ActiveData..." % json.dumps(searches))
> query = {
>     "from": "unittest",
>     "format": "list",
>     "limit": 10,
>     "groupby": ["result.test"],
>     "where": {"or": searches}
> }
> data = self.submit(query)
> if data and len(data) > 0:
>     self.activedata_test_name = self.test_name

The `searches` array can be simplified:

> searches = [
>     {"in": {"result.test": [
>         self.full_test_name,
>         self.test_name,
>         self.short_name,
>         self.robo_name
>     ]}},
>     {"regexp": {"result.test": ".*%s.*" % self.test_name}},
>     {"regexp": {"result.test": ".*%s.*" % self.short_name}}
> ]

Your strategy of looping does have the advantage of prioritized lookup, which is lost here. Combining into a single query also has the disadvantage of returning multiple matching test names, which you may not care for.

On another subject: you may want to store the full activedata_test_name, which will make your subsequent queries faster:

> if data and len(data) > 0:
>     self.activedata_test_name = data[0].result.test

Then replace:

> {"regexp": {"result.test": self.activedata_test_name}},

with:

> {"eq": {"result.test": self.activedata_test_name}},
Comment 9•5 years ago (Assignee)
(In reply to Kyle Lahnakoski [:ekyle] from comment #8)

Thanks again, Kyle. I've made updates now to use fewer regexps based on your comments. I did not collapse the loop, as I think there is value in the prioritized lookup: I would prefer to use the full test name if that works, the specified test name if the full test name doesn't work but the test name does, and so on.
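[Editor's sketch] A minimal version of that prioritized lookup, assuming the helper names from comment 8 (`self.submit`, `self.full_test_name`, and so on); the actual patch may differ in detail:

> # try exact names first, then fall back to increasingly fuzzy regexps
> searches = [
>     {"eq": {"result.test": self.full_test_name}},
>     {"eq": {"result.test": self.test_name}},
>     {"eq": {"result.test": self.short_name}},
>     {"regexp": {"result.test": ".*%s.*" % self.test_name}},
>     {"regexp": {"result.test": ".*%s.*" % self.short_name}}
> ]
> for where in searches:
>     data = self.submit({
>         "from": "unittest",
>         "format": "list",
>         "limit": 10,
>         "groupby": ["result.test"],
>         "where": where
>     })
>     if data:
>         # store the full name found, so later queries can use a fast "eq"
>         self.activedata_test_name = data[0]['result']['test']
>         break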
Comment 10•5 years ago (Assignee)
This is working well for me, and I'm finding the reports very useful when I am triaging test failure bugs. Requests for the simple, short test name generally work, and I have tried all the common test flavors: mochitest, reftest, xpcshell, robocop. I see some variation in ActiveData response times, but most of my test-info requests generate a complete report in less than 10 seconds -- good enough for me.

:ekyle -- I hope I've implemented all your suggestions for ActiveData improvements, modulo comment 9.

:gps -- Looking for your mach expertise; also wondering if you would suggest a different approach for 'hg log' -- it looks like there isn't a great Mercurial API.
Attachment #8822769 - Attachment is obsolete: true
Attachment #8825207 - Flags: review?(klahnakoski)
Attachment #8825207 - Flags: review?(gps)
Comment 11•5 years ago
:gbrown
If you did go with a single request to ActiveData, the multiple responses could be prioritized with:

> if data and len(data) > 0:
>     self.activedata_test_name = [
>         d['result']['test']
>         for p in searches
>         for d in data
>         if re.match(p + "$", d['result']['test'])
>     ][0]  # first match is best match
I am also wondering if any of the characters in test names would mess with the regular expressions. Maybe use `re.escape` on them?
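[Editor's sketch] That `re.escape` suggestion, applied to the fuzzy patterns (names as in comment 8):

> import re
>
> # escape regex metacharacters (e.g. '.' in file names) before building patterns
> searches = [
>     {"regexp": {"result.test": ".*%s.*" % re.escape(self.test_name)}},
>     {"regexp": {"result.test": ".*%s.*" % re.escape(self.short_name)}}
> ]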
Comment 12•5 years ago
Comment on attachment 8825207 [details] [diff] [review]
implement |mach test-info|

Review of attachment 8825207 [details] [diff] [review]:
-----------------------------------------------------------------

This looks *extremely* useful and has tons of potential! I'm pretty sure a lot of people will love using this when it lands. Only a preliminary review so far...

Our policy is to support Git for client-side development. So, anything doing VCS on developer machines needs to support Mercurial *and* Git. This becomes a bit painful when e.g. remote data is indexed against Mercurial changesets. Fortunately, we've solved this problem for artifact builds. See the code in artifacts.py. Unfortunately, that code in artifacts.py isn't really in a reusable state since it is part of the Artifacts class. (We're ideally supposed to put this reusable VCS code in python/mozversioncontrol.) Anyway, I'm not a fan of scope bloating, so Git support doesn't need to be in the first iteration. But if you announce this without Git support, prepare yourself for a mob of angry Git users. You have been warned :)

On to the design. It's worth noting we already have a `mach file-info` command. I'm a bit sensitive to introducing new mach commands in general because, well, I already think we have a bit too many. I feel like we could shoehorn this test info functionality into `mach file-info`. That command is actually a collection of sub-commands that show various "views" of information for a given file. So perhaps `mach file-info test-info` (or something along those lines)?

Related to the command dispatch, the command as implemented already does a few things. I could easily see more queries and views being added. This could lead to lots of complexity around e.g. command arguments and could make the command difficult to use. How do you feel about separating each query/view/goal into its own mach sub-command?

Finally, you query out to some external services. While the wheel reinvention today is small and tolerable, there is a Bugsy Python package you may want to look at for querying Bugzilla. I'm not sure why it isn't vendored into mozilla-central yet, but it could be (we use it heavily in version-control-tools for Bugzilla interaction). There may be something similar for ActiveData. I'm not sure.

I'll look at the code in more detail later, hopefully tomorrow.

::: testing/mach_commands.py
@@ +1074,5 @@
> +        # Report mercurial changes to test for time range
> +        print("\nChanges in hg to %s between %s and %s:" %
> +              (self.full_test_name, self.start, self.end))
> +        dates = '%s to %s' % (self.start, self.end)
> +        cmd = ['hg', 'log', '-d', dates, '--pager', 'never',

The dates in Mercurial (or Git) commits can't be trusted. For example, I can today author a commit and claim it is 1970. You really need to find the revision range you care about, then search between those using '-r startrev::endrev'.
Comment 13•5 years ago
oh! Also, do not accept test name input less than N characters; too short and your Python process may not like the volume returned.
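[Editor's sketch] A minimal guard along those lines; the exact threshold is an arbitrary assumption:

> MIN_NAME_LENGTH = 6  # hypothetical minimum; tune to keep result volume sane
> if len(test_name) < MIN_NAME_LENGTH:
>     print("'%s' is too short; use a longer, more specific test name." % test_name)
>     return 1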
Comment 14•5 years ago (Assignee)
Comment on attachment 8825207 [details] [diff] [review]
implement |mach test-info|

Thanks for the preliminary comments. I think that's enough for now... will try to respond to issues and get a new version out soon.
Attachment #8825207 - Flags: review?(klahnakoski)
Attachment #8825207 - Flags: review?(gps)
Comment 15•5 years ago (Assignee)
(In reply to Kyle Lahnakoski [:ekyle] from comment #11)
> If you did go with a single request to ActiveData, the multiple responses
> could be prioritized with:
>
> > if data and len(data) > 0:
> >     self.activedata_test_name = [
> >         d['result']['test']
> >         for p in searches
> >         for d in data
> >         if re.match(p + "$", d['result']['test'])
> >     ][0]  # first match is best match
>
> I am also wondering if any of the characters in test names would mess with
> the regular expressions. Maybe use `re.escape` on them?

Thanks. I had to make some minor adjustments, but that basically works - will be in the next version.
Comment 16•5 years ago (Assignee)
(In reply to Gregory Szorc [:gps] from comment #12)
> The dates in Mercurial (or Git) commits can't be trusted. For example, I can
> today author a commit and claim it is 1970. You really need to find the
> revision range you care about, then search between those using '-r
> startrev::endrev'.

Are dates frequently wrong, though? The "main" part of the report - test results and test durations - covers a certain range of dates, and I think the hg log should be consistent with that. For many test results/durations scenarios, it's natural to request a date range, and it would be nice to see what hg/git changes were made in that same range. Can I map a requested date range to a revision range reasonably? (A possible approach is sketched below.)

If that's troublesome, perhaps I'll:
- remove the hg log part of the report entirely
- list recent changes instead: hg log -l 3 ?
- dump a dxr / hg / git link to the test?
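[Editor's sketch] One way to approximate that date-to-revision mapping with Mercurial revsets. Note the boundary resolution here still trusts commit dates, which is exactly gps's concern, so pushlog data would be more robust; `start_date` and `end_date` are hypothetical variables:

> import subprocess
>
> def hg_rev(revset):
>     # resolve a revset expression to a single short node
>     return subprocess.check_output(
>         ['hg', 'log', '-r', revset, '--template', '{node|short}']).strip()
>
> startrev = hg_rev("first(date('>%s'))" % start_date)  # e.g. '2017-01-20'
> endrev = hg_rev("last(date('<%s'))" % end_date)       # e.g. '2017-01-27'
> log = subprocess.check_output(
>     ['hg', 'log', '-r', '%s::%s' % (startrev, endrev), '--pager', 'never'])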
Comment 17•5 years ago (Assignee)
(In reply to Gregory Szorc [:gps] from comment #12)
> It's worth noting we already have a `mach file-info` command. [...] So
> perhaps `mach file-info test-info` (or something along those lines)?
>
> Related to the command dispatch, [...] How do you feel about separating each
> query/view/goal into its own mach sub-command?

I have two broad concerns about putting this in file-info:

1. Today's file-info applies to source files generally, while test-info is about tests only. |mach file-info test-info dom/animation/Animation.cpp| could reasonably report 'Unable to find test information for Animation.cpp', but it feels like test-info has a slightly different domain.

2. Command arguments and help. I can split the test-info report into specific sub-commands, but I still need optional arguments that are specific to test-info sub-commands. |mach file-info test-durations <test-name|file-name>| kind of works, but to make this useful, I need to offer options to restrict branches and/or dates - options that do not apply to file-info generally. If nothing else, |mach help file-info| quickly becomes complicated.
Comment 18•5 years ago (Assignee)
(In reply to Geoff Brown [:gbrown] from comment #16)
> If that's troublesome, perhaps I'll:
> - remove the hg log part of the report entirely
> - list recent changes instead: hg log -l 3 ?
> - dump a dxr / hg / git link to the test?

I have removed the 'hg log' part of the report. I wasn't finding much value in it; most of the time, it was just cluttering the report.
Comment 19•5 years ago (Assignee)
:ekyle - I think I have incorporated all of your previous feedback now.

:gps - Thanks much for reminding me about git and pointing me at artifacts.py; I'm using a similar approach here and feel good about handling both hg and git now. I had a look at bugsy but am comfortable going straight to the Bugzilla API. I only make the one simple Bugzilla query, and the API does the job for me.

This version still implements 'test-info' rather than extending 'file-info' - see my thoughts in comment 17.
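[Editor's sketch] For reference, a minimal example of the kind of single Bugzilla REST query described above. The endpoint is the public Bugzilla REST API, but the parameter choices and the `short_name` variable are illustrative assumptions, not the patch's actual code:

> import requests
>
> # search for bugs whose summary mentions the test's short name
> response = requests.get(
>     'https://bugzilla.mozilla.org/rest/bug',
>     params={
>         'quicksearch': short_name,  # hypothetical variable
>         'include_fields': 'id,summary,product,component',
>     })
> response.raise_for_status()
> for bug in response.json()['bugs']:
>     print('Bug %s: %s' % (bug['id'], bug['summary']))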
Attachment #8825207 - Attachment is obsolete: true
Attachment #8830392 - Flags: review?(klahnakoski)
Attachment #8830392 - Flags: review?(gps)
Updated•5 years ago
Attachment #8830392 - Flags: review?(klahnakoski) → review+
Comment 20•5 years ago
Comment on attachment 8830392 [details] [diff] [review]
implement |mach test-info|

Review of attachment 8830392 [details] [diff] [review]:
-----------------------------------------------------------------

I still feel like there is a way to integrate this into `mach file-info` and/or to split the functionality into sub-commands. But perfect is the enemy of good. This is good enough for a first iteration. We can see where things land and refactor later if needed.
Attachment #8830392 - Flags: review?(gps) → review+
Comment 21•5 years ago
Pushed by gbrown@mozilla.com:
https://hg.mozilla.org/integration/mozilla-inbound/rev/edd70bcf7b44
Add support for |mach test-info|; r=gps,ekyle
Comment 22•5 years ago
bugherder
https://hg.mozilla.org/mozilla-central/rev/edd70bcf7b44
Status: NEW → RESOLVED
Closed: 5 years ago
status-firefox54: --- → fixed
Resolution: --- → FIXED
Target Milestone: --- → mozilla54