Closed Bug 1215955 (opened 10 years ago, closed 9 years ago)

Clear heap snapshots

Categories

(DevTools :: Memory, defect, P3)

Version: 43 Branch
Type: defect

Tracking

(firefox46 verified)

Status: VERIFIED FIXED
Target Milestone: Firefox 46
Tracking Status: firefox46 --- verified

People

(Reporter: jsantell, Assigned: jdescottes)

References

(Blocks 1 open bug)

Details

(Whiteboard: [polish-backlog])

Attachments

(2 files, 8 obsolete files)

Probably fine with just clearing them all out, like the performance tool does.
Blocks: 1215400
Assignee: jsantell → nobody
Status: ASSIGNED → NEW
No longer blocks: memory-tools-fx44
Should be a matter of resetting the React state. Probably a good first bug. Jordan, what do you think?
Flags: needinfo?(jsantell)
A bit more than that: we'd have to see how it handles clearing while a snapshot is being taken or a census is in flight, as those operations all expect the snapshot entry to exist in state.
Flags: needinfo?(jsantell)
Handling in-flight snapshots should be as easy as:

  snapshots.filter(s => s.state !== states.SAVED_CENSUS)

However, to avoid leaking memory, we also want to send a message to the worker to clear out the corresponding HeapSnapshot instance we read from disk into memory earlier. This should be easy as well; see `devtools/shared/heapsnapshot/HeapAnalyses{Client,Worker}.js`.
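In other words, a rough sketch of the UI-side action (purely illustrative; the action and task names here are placeholders, and `deleteHeapSnapshot` in particular only exists once the worker patch attached below lands):

  // Illustrative sketch of a task-middleware style action, as used elsewhere
  // in devtools/client/memory/actions/.
  const { states } = require("devtools/client/memory/constants");

  const clearSnapshots = function (heapWorker) {
    return function* (dispatch, getState) {
      // Only fully saved snapshots are removed; in-flight ones stay in state
      // so their pending save/census requests still find the entry they expect.
      const saved = getState().snapshots
        .filter(s => s.state === states.SAVED_CENSUS);

      // Ask the worker to release each HeapSnapshot it read from disk earlier,
      // otherwise that in-memory copy would leak.
      for (const snapshot of saved) {
        yield heapWorker.deleteHeapSnapshot(snapshot.path);
      }

      // A reducer can then drop the corresponding entries from the store.
      dispatch({ type: "CLEAR_SNAPSHOTS" });
    };
  };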
Has STR: --- → irrelevant
Whiteboard: [polish-backlog]
Assignee: nobody → jdescottes
Priority: -- → P3
Attached patch heapworker-patch.diff (obsolete) — Splinter Review
This patch adds a deleteHeapSnapshot task to the HeapAnalysesWorker, which deletes a single snapshot from the snapshots referenced by the worker.
Attachment #8706149 - Flags: feedback?(nfitzgerald)
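For context, the worker-side task has roughly the following shape. This is reconstructed from the hunks quoted in the review below, with the reviewer's later `foo[bar] = undefined` style preference applied, so it is not the literal patch content:

  // Inside devtools/shared/heapsnapshot/HeapAnalysesWorker.js, next to the
  // existing readHeapSnapshot/takeCensus tasks.
  workerHelper.createTask(self, "deleteHeapSnapshot", ({ snapshotFilePath }) => {
    const snapshot = snapshots[snapshotFilePath];
    if (!snapshot) {
      // Nothing was read from this path, so there is nothing to release.
      // (The review below asks for a throw here instead, for API consistency.)
      return;
    }

    // Also drop any dominator tree computed from this snapshot.
    const dominatorTreeId = dominatorTreeSnapshots.indexOf(snapshot);
    if (dominatorTreeId !== -1) {
      dominatorTreeSnapshots[dominatorTreeId] = undefined;
      dominatorTrees[dominatorTreeId] = undefined;
    }

    // Finally, release the reference to the HeapSnapshot itself.
    snapshots[snapshotFilePath] = undefined;
  });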
Attached patch ui-patch.diff (obsolete) — Splinter Review
This patch adds the "Clear" button to the memory tool UI. I followed your suggestion to only delete snapshots in the SAVED_CENSUS state; I think it makes more sense, otherwise "clear" becomes a mix of "clear" and "cancel". It relies on the deleteHeapSnapshot task from the previous patch. In this patch we call the HeapAnalysesClient sequentially to delete each heap snapshot. Alternatively, we could add another task on the worker to delete several snapshots at the same time. Let me know which approach you prefer.
Attachment #8706150 - Flags: feedback?(nfitzgerald)
Attached patch heapworker-patch.v2.diff (obsolete) — Splinter Review
Updated the worker part:
- no longer throwing an error if the snapshot filepath matches no existing snapshot
- added more tests
Attachment #8706149 - Attachment is obsolete: true
Attachment #8706149 - Flags: feedback?(nfitzgerald)
Attachment #8706251 - Flags: feedback?(nfitzgerald)
Comment on attachment 8706251 [details] [diff] [review]
heapworker-patch.v2.diff

Review of attachment 8706251 [details] [diff] [review]:
-----------------------------------------------------------------

This patch looks really good, thanks Julian! A few nitpicks below, but I want to emphasize that this was a very nice patch and my comments are very much nitpicking. Flag me for review on the next version and then we can get a try push and land this :)

::: devtools/shared/heapsnapshot/HeapAnalysesClient.js
@@ +54,5 @@
> /**
> + * Tell the worker to delete all references to the snapshot and dominator trees
> + * linked to the provided snapshot file path.
> + *
> + * @param {String} snapshotFilePath

Super nit: add a "@return Promise<undefined>" here.

::: devtools/shared/heapsnapshot/HeapAnalysesWorker.js
@@ +37,5 @@
> + */
> +workerHelper.createTask(self, "deleteHeapSnapshot", ({ snapshotFilePath }) => {
> +  let snapshot = snapshots[snapshotFilePath];
> +  if (!snapshot) {
> +    return;

Nit: The other request handlers throw if we don't find any snapshot with the given path, and I think we should do that here for API consistency (even though the promise rejection can largely be ignored).

@@ +45,5 @@
> +
> +  let dominatorTreeId = dominatorTreeSnapshots.indexOf(snapshot);
> +  if (dominatorTreeId != -1) {
> +    delete dominatorTreeSnapshots[dominatorTreeId];
> +    delete dominatorTrees[dominatorTreeId];

Another super nit: as a style preference, we prefer `foo[bar] = undefined` to `delete foo[bar]`. Using delete removes the property from the object, which is some extra hoops for SpiderMonkey to jump through and disables some optimizations on that object forever after. It totally doesn't matter here in practice, but whenever I see a `delete` I have to spend a minute thinking about whether it matters for this specific case, so I prefer we avoid it. As we discussed on IRC, we can make this issue (and others) go away by using Map (after my big dominator trees patch lands).

::: devtools/shared/heapsnapshot/tests/unit/test_HeapAnalyses_deleteHeapSnapshot_01.js
@@ +14,5 @@
> +  yield client.readHeapSnapshot(snapshotFilePath);
> +  ok(true, "Should have read the heap snapshot");
> +
> +  let creationTime = yield client.getCreationTime(snapshotFilePath);
> +  notEqual(creationTime, null, "Should get snapshot creation time");

We don't need to test creation time in this test.

@@ +17,5 @@
> +  let creationTime = yield client.getCreationTime(snapshotFilePath);
> +  notEqual(creationTime, null, "Should get snapshot creation time");
> +
> +  let dominatorTreeId = yield client.computeDominatorTree(snapshotFilePath);
> +  ok(dominatorTreeId === 0, "dominatorTreeId should be 0");

We shouldn't rely on id generation having deterministic behavior. This test would break if we switched to using UUIDs rather than an incrementing counter. Instead, we should assert that we got something/anything not undefined/null back, especially since computing dominator trees is tested in other tests and we just happen to be also doing it for this test.

@@ +23,5 @@
> +  yield client.deleteHeapSnapshot(snapshotFilePath);
> +  ok(true, "Should have deleted the snapshot");
> +
> +  creationTime = yield client.getCreationTime(snapshotFilePath);
> +  equal(creationTime, null, "Should no longer get snapshot creation time");

Does getCreationTime really return null when given a path to a snapshot that is not loaded in the worker? We should fix that to make it consistent with the other request handlers. That could be a quick and easy next bug for you, if you're interested: bug 1238639.

@@ +31,5 @@
> +    dominatorTreeId = yield client.computeDominatorTree(snapshotFilePath);
> +  } catch (_) {
> +    threw = true;
> +  }
> +  ok(threw, "error expected, computeDominatorTree should throw an error");

Great! Let's also test attempting to take a census on a snapshot that has been deleted.

::: devtools/shared/heapsnapshot/tests/unit/test_HeapAnalyses_deleteHeapSnapshot_02.js
@@ +21,5 @@
> +  let creationTime = yield client.getCreationTime(snapshotFilePath);
> +  notEqual(creationTime, null, "Should get snapshot creation time");
> +
> +  let dominatorTreeId = yield client.computeDominatorTree(snapshotFilePath);
> +  ok(dominatorTreeId === 0, "dominatorTreeId should be 0");

I don't think we need anything in this test but the attempt to delete a non-existent snapshot. The other stuff is all covered by other tests.
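To illustrate the `delete` point with a throwaway example (not from the patch):

  const trees = { 0: "treeA", 1: "treeB" };

  delete trees[0];       // removes the property entirely; mutating the object's
                         // shape like this can pessimize later accesses to `trees`
  trees[1] = undefined;  // keeps the property; callers simply observe undefined,
                         // and checks like `if (!trees[1])` behave the same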
Attachment #8706251 - Flags: feedback?(nfitzgerald) → feedback+
Comment on attachment 8706150 [details] [diff] [review]
ui-patch.diff

Review of attachment 8706150 [details] [diff] [review]:
-----------------------------------------------------------------

Looks good -- again mostly nitpicks. Looking forward to the next iteration of these patches :)

::: devtools/client/memory/actions/snapshot.js
@@ +216,5 @@
> + */
> +const clearSnapshots = exports.clearSnapshots = function (heapWorker) {
> +  return function*(dispatch, getState) {
> +    let snapshots = getState().snapshots.filter(
> +      s => s.state === states.SAVED_CENSUS);

I think we can also remove snapshots whose state === ERROR.

@@ +220,5 @@
> +      s => s.state === states.SAVED_CENSUS);
> +
> +    for (let snapshot of snapshots) {
> +      yield heapWorker.deleteHeapSnapshot(snapshot.path);
> +    }

This doesn't really matter (this isn't very perf sensitive and the worker handles requests serially anyway), but how about we avoid waiting on the round trip for each request to complete before starting the next one:

  yield Promise.all(snapshots.map(s => heapWorker.deleteSnapshot(s.path)));

Additionally, we should take care to wrap requests to the worker in try/catch and reportException + dispatch SNAPSHOT_ERROR on failures. See readSnapshot in this file as an example.

Finally, I think we should have DELETE_START and DELETE_END, like we do with other worker requests. We can optimistically remove snapshots from our state in the reducer for DELETE_START, and use DELETE_END for testing, etc.

::: devtools/client/memory/test/browser/browser.ini
@@ +15,5 @@
> [browser_memory_no_allocation_stacks.js]
> [browser_memory_no_auto_expand.js]
> skip-if = debug # bug 1219554
> [browser_memory_percents_01.js]
> +[browser_memory-clear-snapshots.js]

We will see in the try push, but this might need the skip-if that other tests in here have.

::: devtools/client/memory/test/unit/test_action-clear-snapshots_02.js
@@ +30,5 @@
> +  ok(true, "clear snapshot deleted one snapshot");
> +
> +  let remainingSnapshot = getState().snapshots[0];
> +  notEqual(remainingSnapshot.state, states.SAVED_CENSUS,
> +    "remaining snapshot doesn't have the SAVED_CENSUS state");

Great test, thanks!
Attachment #8706150 - Flags: feedback?(nfitzgerald) → feedback+
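Putting those suggestions together, the action could end up looking roughly like this (assuming `actions` and `states` from devtools/client/memory/constants; the DELETE_SNAPSHOTS_START/END action names, the SNAPSHOT_ERROR payload shape, and the reportException usage are modeled on readSnapshot rather than taken from the final patch):

  const clearSnapshots = exports.clearSnapshots = function (heapWorker) {
    return function* (dispatch, getState) {
      // Both saved and errored snapshots are safe to remove.
      const snapshots = getState().snapshots.filter(
        s => s.state === states.SAVED_CENSUS || s.state === states.ERROR);

      // Optimistically drop them from the store before the worker replies.
      dispatch({ type: actions.DELETE_SNAPSHOTS_START, snapshots });

      // Fire all worker requests at once instead of waiting on each round trip
      // before starting the next one. reportException is assumed to be in
      // scope, as in the other actions in this file.
      yield Promise.all(snapshots.map(snapshot => {
        return heapWorker.deleteHeapSnapshot(snapshot.path).catch(error => {
          reportException("clearSnapshots", error);
          dispatch({ type: actions.SNAPSHOT_ERROR, id: snapshot.id, error });
        });
      }));

      // Mostly useful so tests can wait for the worker requests to settle.
      dispatch({ type: actions.DELETE_SNAPSHOTS_END, snapshots });
    };
  };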
Attached patch heapworker-patch.v3.diff (obsolete) — Splinter Review
Thanks for the nice feedback! This patch should address the points you raised.

> Does getCreationTime really return null when given a path to a snapshot that
> is not loaded in the worker? [...] That could be a quick and easy next bug
> for you, if you're interested: bug 1238639.

Sure, thanks for opening it.

> Using delete removes the property from the object, which is some extra hoops
> for SpiderMonkey to jump through and disables some optimizations on that
> object forever after

Thanks for the insight!
Attachment #8706251 - Attachment is obsolete: true
Attachment #8706898 - Flags: review?(nfitzgerald)
Attached patch ui-patch.v2.diff (obsolete) — Splinter Review
Here is the new version of the UI patch.

> We will see in the try push, but this might need the skip-if that other tests
> in here have.

Indeed, my first try push had failures due to leaked objects for this test.

I added a few new tests to check the error handling and the support for snapshots in the ERROR state. In test_action-clear-snapshots_05.js I used a fake/mocked HeapAnalysesClient to simulate errors; let me know if I should approach this differently.
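The mock mentioned above can be as simple as an object whose deleteHeapSnapshot always rejects; this is a guess at the shape rather than the actual test code:

  // Hypothetical stand-in for HeapAnalysesClient: every delete fails, so the
  // action's error path (reportException + SNAPSHOT_ERROR dispatch) is exercised.
  const failingHeapWorker = {
    deleteHeapSnapshot(snapshotFilePath) {
      return Promise.reject(new Error("deleteHeapSnapshot failed for " + snapshotFilePath));
    }
  };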
Attachment #8706150 - Attachment is obsolete: true
Attachment #8706905 - Flags: review?(nfitzgerald)
Comment on attachment 8706898 [details] [diff] [review]
heapworker-patch.v3.diff

Review of attachment 8706898 [details] [diff] [review]:
-----------------------------------------------------------------

Great!
Attachment #8706898 - Flags: review?(nfitzgerald) → review+
Comment on attachment 8706905 [details] [diff] [review]
ui-patch.v2.diff

Review of attachment 8706905 [details] [diff] [review]:
-----------------------------------------------------------------

Also great! Thanks, Julian!

::: devtools/client/themes/memory.css
@@ +114,3 @@
> }
>
> +

Nit: remove this extra blank line.
Attachment #8706905 - Flags: review?(nfitzgerald) → review+
Attached patch ui-patch.v3.diff (obsolete) — Splinter Review
Thanks for the reviews! Carrying over r+ from previous patch.
Attachment #8706905 - Attachment is obsolete: true
Attachment #8706998 - Flags: review+
All dt tests are green; adding checkin-needed.
Keywords: checkin-needed
Does the ui-patch also need checkin? In that case, it failed to apply:

applying ui-patch.v3.diff
patching file devtools/client/locales/en-US/memory.properties
Hunk #1 FAILED at 62
1 out of 1 hunks FAILED -- saving rejects to file devtools/client/locales/en-US/memory.properties.rej
patching file devtools/client/memory/app.js
Hunk #1 FAILED at 5
Hunk #2 FAILED at 63
2 out of 2 hunks FAILED -- saving rejects to file devtools/client/memory/app.js.rej
patching file devtools/client/memory/components/toolbar.js
Hunk #1 FAILED at 9
Hunk #2 FAILED at 61
2 out of 2 hunks FAILED -- saving rejects to file devtools/client/memory/components/toolbar.js.rej
patching file devtools/client/memory/constants.js
Hunk #1 FAILED at 40
1 out of 1 hunks FAILED -- saving rejects to file devtools/client/memory/constants.js.rej
patching file devtools/client/memory/reducers/snapshots.js
Hunk #1 FAILED at 74
1 out of 1 hunks FAILED -- saving rejects to file devtools/client/memory/reducers/snapshots.js.rej
patching file devtools/client/memory/test/browser/browser.ini
Hunk #1 succeeded at 15 with fuzz 2 (offset 2 lines).
patch failed, unable to continue (try -v)
patch failed, rejects left in working directory
errors during apply, please fix and qrefresh ui-patch.v3.diff
Flags: needinfo?(jdescottes)
Keywords: checkin-needed
Attached patch heapworker-patch.v4.diff (obsolete) — Splinter Review
Rebased after dominator tree patch landed. Carry over r+
Attachment #8706898 - Attachment is obsolete: true
Attachment #8707815 - Flags: review+
Attached patch ui-patch.v4.diff (obsolete) — Splinter Review
Rebased after dominator tree patch landed. Carry over r+
Attachment #8706998 - Attachment is obsolete: true
Attachment #8707816 - Flags: review+
> Does the ui-patch also need checkin?

Yes, both patches need to be applied. I just rebased them both and uploaded them here.

Try push: https://treeherder.mozilla.org/#/jobs?repo=try&revision=3d538dcf1586
Flags: needinfo?(jdescottes)
The failures on try are unrelated, and the Windows 8 tests seem to be stalled; neither is related to this patch. Adding checkin-needed.
Keywords: checkin-needed
Status: NEW → RESOLVED
Closed: 9 years ago
Resolution: --- → FIXED
Target Milestone: --- → Firefox 46
I had to back this out as a possible cause of frequent devtools test failures in browser_profiler_tree-abstract-02.js:
https://treeherder.mozilla.org/logviewer.html#?job_id=6636210&repo=fx-team
https://hg.mozilla.org/mozilla-central/rev/324c50cbc40a
Status: RESOLVED → REOPENED
Resolution: FIXED → ---
The UI commit is the one with which the intermittent failures start. The same diff was tested in https://treeherder.mozilla.org/#/jobs?repo=try&revision=3d538dcf1586, with no failures on dt6 e10s there.
Removing the tests seems to fix the issue. I compared the dt6 test logs with and without my test: with my tests added, browser_profiler_tree-abstract-01.js runs in dt6; without them, it runs in dt5. My guess is that browser_profiler_tree-abstract-01.js triggers the failure of browser_profiler_tree-abstract-02.js when both are executed in the same job on Linux x64 opt e10s. I don't know how tests are allocated to each job, so the issue might come back as more tests get added to the devtools suite.
https://treeherder.mozilla.org/#/jobs?repo=try&revision=9fa903f52c0d

This is a try push where an "empty" mochitest was added on top of commit 03a60b9513da (for which we have no intermittent failures). It shows frequent dt6 failures, which confirms that the intermittent failures are due to browser_profiler_tree-abstract-01.js being allocated to dt6 instead of dt5.
Depends on: 1240368
Attached patch heapworker-patch.v5.diff — Splinter Review
Carry over r+.
Attachment #8707815 - Attachment is obsolete: true
Attachment #8709339 - Flags: review+
Attached patch ui-patch.v5.diff — Splinter Review
Attachment #8707816 - Attachment is obsolete: true
Attachment #8709341 - Flags: review+
Unrelated intermittent errors on try. Please check in both patches: heapworker-patch.v5.diff & ui-patch.v5.diff.
Keywords: checkin-needed
Status: REOPENED → RESOLVED
Closed: 9 years ago
Resolution: --- → FIXED
[bugday-20160323]
Status: RESOLVED,FIXED -> UNVERIFIED
Comments: STR: Not clear. Developer specific testing.
Component: Firefox
Version: 46.0b9
Build ID: 20160322075646
Update Channel: beta
User Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:46.0) Gecko/20100101 Firefox/46.0
OS: Windows 7 SP1 x86_64
Expected Results: Developer specific testing
Actual Results: As expected
Status: RESOLVED → VERIFIED
Product: Firefox → DevTools