Created attachment 511372 [details] [diff] [review]
Patch

It looks like starting Weave costs a few megabytes of RAM, and we are doing that even if Weave is not used at all: WeaveGlue.init, in Fennec's UIReadyDelayed handler, calls Weave.Service, which seems to load a bunch of stuff. The patch turns the pref browser.sync.enabled to false by default and adds some checks to ensure we're not starting anything in this case. The only effect in the UI can be seen in the preferences panel.
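A minimal sketch of the pref-guard idea described above. This is not the actual patch: `prefs` is a plain stand-in for the chrome `Services.prefs` object, and `weaveGlue` stands in for the real WeaveGlue module, so the shape is an assumption.

```javascript
// Hypothetical sketch: only start Sync when browser.sync.enabled is true.
// `prefs` stands in for Services.prefs; `weaveGlue.init` stands in for the
// real WeaveGlue.init entry point that pulls in Weave.Service.
function maybeInitWeave(prefs, weaveGlue) {
  // Default to disabled so a missing pref never pulls in Weave.
  let enabled = false;
  try {
    enabled = prefs.getBoolPref("browser.sync.enabled");
  } catch (e) {
    // Pref missing or wrong type: treat as disabled.
  }
  if (!enabled)
    return false;     // skip loading Weave entirely
  weaveGlue.init();   // only now does Weave.Service get loaded
  return true;
}
```

The point is that the expensive import happens inside init(), so skipping the call skips the memory cost.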
Attachment #511372 - Flags: review?(mark.finkle)
For info, what I'm doing to see that is calling the garbage collector and reading /proc/self/statm before and after the WeaveGlue.init call. This simple and dirty way of reading the memory tells me there is more than 5 MB of memory allocated for Weave even if the pref is disabled, and a bit more than 6 MB allocated if the pref is turned on. With the patch there is 0 MB allocated if the pref is turned off, and still 6+ MB of memory allocated if the pref is turned on.
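The measurement above can be sketched roughly like this. The field layout of /proc/self/statm (whitespace-separated counts in pages, first field = total program size) is from proc(5); the 4096-byte page size is an assumption (it is the common Linux default, but not guaranteed).

```javascript
// Hypothetical sketch of the "simple and dirty" measurement: sample
// /proc/self/statm before and after an expensive call and report the
// delta in MB. Values in statm are in pages; page size assumed 4096.
const PAGE_SIZE = 4096;

function parseStatm(text) {
  // First whitespace-separated field is total program size in pages.
  const pages = Number(text.trim().split(/\s+/)[0]);
  return pages * PAGE_SIZE; // bytes
}

function deltaMB(beforeText, afterText) {
  return (parseStatm(afterText) - parseStatm(beforeText)) / (1024 * 1024);
}
```

In the real setup you would read the file, run the GC, call WeaveGlue.init, then read it again and compare.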
Summary: Lazy load Weave is it is unused → Lazy load Weave if it is unused
We really can't use this approach. We have to enable Sync by default. We can file a bug with the Sync team about the 5MB on startup.
Attachment #511372 - Flags: review?(mark.finkle) → review-
Created attachment 511431 [details] [diff] [review]
Patch v0.2

If I understand the Weave code correctly, no username means we're not able to retrieve any information related to that account, so we don't bother accessing anything from Weave.Service.
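The v0.2 check can be sketched as a username test. This is an illustration, not the patch itself: `prefs` again stands in for `Services.prefs`, and the pref name "services.sync.username" is an assumption based on Sync's usual pref naming.

```javascript
// Hypothetical sketch of the v0.2 approach: if no username has ever been
// set, there is no account to sync, so don't touch Weave.Service at all.
// The pref name is an assumption (Sync's "services.sync.username" style).
function haveSyncAccount(prefs) {
  try {
    return prefs.getCharPref("services.sync.username") !== "";
  } catch (e) {
    return false; // pref never set: no account configured
  }
}
```

Reading one char pref is essentially free compared with importing Weave.Service, which is why this avoids the 5 MB cost for users without an account.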
(In reply to comment #2)
> We really can't use this approach. We have to enable Sync by default. We can
> file a bug with the Sync team about the 5MB on startup.

I have changed my mind about this and will review the patches.
Comment on attachment 511431 [details] [diff] [review]
Patch v0.2

This is all we need to remove the 5 MB use? Have you tested this with a real Sync account? Also try disconnect and re-connect. r+ if you have covered the testing.
Attachment #511431 - Flags: review?(mark.finkle) → review+
This is what about:memory is reporting:

- Memory used without any patch:
  * first launch: ~30 MB | ~30 MB
  * next launch: ~28 MB | ~28 MB
- Memory used with patch 1:
  * first launch: ~27 MB | ~25 MB
  * next launch: ~26 MB | ~24 MB
- Memory used with patch 2 (a few kilobytes more than patch 1):
  * first launch: ~27 MB | ~25 MB
  * next launch: ~26 MB | ~24 MB

How to read this data:
- "first launch" is the launch right after a rebuild
- "next launch" is simply relaunching the binary
- the first number in each row is the memory use reported by about:memory; the second number is the same value reported after a call to the garbage collector

I've tried setting up a Weave account and everything seems to work for me; if Weave is set up, the memory used is the same as without the patch.
Assignee: nobody → 21
Status: NEW → RESOLVED
Last Resolved: 7 years ago
Resolution: --- → FIXED
Tony, can you please verify this?
Note to self for steps to verify:

tchung: vingtetun: hi there. how can i best verify the lazy weave patch?
[10:38am] vingtetun: you need to have yesterday's build and today's build
[10:39am] vingtetun: then create an empty profile and load about:memory for each build
[10:39am] vingtetun: then check the amount of memory reported there
[10:39am] tchung: vingtetun: can i keep an existing profile, but just have sync disabled?
[10:40am] tchung: or is that not the same effect
[10:40am] vingtetun: not the same effect
[10:40am] vingtetun: sorry
[10:41am] tchung: vingtetun: heh, np. okay what value should i look for in about:memory to do the comparison
[10:41am] vingtetun: hmm, i'm unsure I can do a fix for what you mentioned
[10:41am] vingtetun: personally i'm looking at the value named "Memory in use"
Created attachment 511782 [details] [diff] [review]
Patch - followup

Tony made me think that we probably don't want the RAM overhead if the user has explicitly disabled Weave but still has an account. The attached patch handles that.
Attachment #511782 - Flags: review?(mark.finkle)
Nexus S, clean profile

2/10 Fennec build:
- Memory mapped: 33,554,432
- Memory in use: 28,215,328

2/11 Fennec build:
- Memory mapped: 29,360,128 <-- lower!
- Memory in use: 25,694,056 <-- lower!

I'll retest this again with an existing profile and sync disabled after the followup patch has landed.
Comment on attachment 511782 [details] [diff] [review]
Patch - followup

I don't think I want to take this, though. Adding WeaveGlue.init(); to every place where we need to make sure Sync is initialized is too fragile. I think we are at the point where the Sync team needs to look for ways to reduce the memory usage.
Attachment #511782 - Flags: review?(mark.finkle) → review-
Mike, Philipp - Any ideas why Weave is using so much memory even before we connect?
(In reply to comment #13)
> Mike, Philipp - Any ideas why Weave is using so much memory even before we
> connect?

I don't know the Weave code in detail, but I've seen that loading the crypto component takes more than 2 MB of RAM.
(In reply to comment #13)
> Mike, Philipp - Any ideas why Weave is using so much memory even before we
> connect?

Sadly not. ~2 MB seems to be due to the crypto component. Other than that, Sync just creates a bunch of objects, registers a bunch of observers, and creates lazy getters for XPCOM services. Hardly anything expensive. While I'm not ruling out that we're doing something stupid somewhere, I think we need better data. Maybe this is due to the JIT? Or we initialize some XPCOM service that's memory intensive?

(In reply to comment #14)
> I don't know the Weave code in detail, but I've seen that loading the crypto
> component takes more than 2 MB of RAM.

When you import resource://services-crypto/WeaveCrypto.js, it doesn't really do anything other than define the WeaveCrypto constructor, so the culprit has to be one of the modules WeaveCrypto imports. I suspect ctypes (resource://gre/modules/ctypes.jsm). A completely unscientific test on Minefield has shown that memory jumps by about 2 MB when I import that.
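The "completely unscientific test" above can be sketched as a before/after resident-memory delta around a loader call. In Firefox chrome code the loader would be the Cu.import of ctypes.jsm; the version below is a generic stand-in that measures any callback, using Node's process.memoryUsage() in place of about:memory.

```javascript
// Hypothetical re-creation of the unscientific test: measure resident
// memory (RSS) before and after running a loader function and return
// the delta in MB. In the real test the loader was
// Cu.import("resource://gre/modules/ctypes.jsm"); here it can be any
// callback that allocates.
function measureDeltaMB(loader) {
  const before = process.memoryUsage().rss;
  loader();
  const after = process.memoryUsage().rss;
  return (after - before) / (1024 * 1024);
}
```

As the comment notes, this is noisy (GC and allocator behavior both move RSS), so it is only good for spotting multi-megabyte jumps like the ~2 MB ctypes one.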