Closed
Bug 894834
Opened 12 years ago
Closed 12 years ago
[Buri][Email] Email app exits automatically when downloading several large attachments at the same time
Categories
(Firefox OS Graveyard :: Gaia::E-Mail, defect, P2)
Tracking
(blocking-b2g:1.3+)
People
(Reporter: sync-1, Assigned: asuth)
Details
AU_LINUX_GECKO_ICS_STRAWBERRY.01.01.00.019.158
Firefox OS v1.1
Mozilla build ID: 20130709070206
DEFECT DESCRIPTION:
The email app exits automatically when downloading several large attachments at the same time.
REPRODUCING PROCEDURES:
1. Receive a mail with three 4 MB attachments.
2. Tap the three download buttons to download all three attachments at the same time.
3. Wait a moment; the email application exits and the attachments fail to download. --> KO
EXPECTED BEHAVIOUR:
The attachments should be downloaded successfully.
REPRODUCING RATE: 3/3
Comment 1•12 years ago
Can the product team provide info about the device's memory expectations, what size of attachment we should be able to download without hitting OOM issues, and any other info that could help set the criteria for v1.1 in this situation?
blocking-b2g: leo? → -
Flags: needinfo?(ffos-product)
Assignee
Comment 2•12 years ago
I've sent a message to dev-b2g asking about the more elegant options here: https://groups.google.com/d/msg/mozilla.dev.b2g/N8aHf8AN1mk/qgIxJ9CwoRcJ
Currently, and especially for v1.1, I suspect the platform limits what we can do pretty extensively, making most comprehensive fixes complicated and a lot of work, and therefore unlikely for v1.1.
Excerpting from the bottom of the message:
"I expect we might just resort to a series of stop-gap measures like not offering download of files above a certain size and doing more to ensure that we minimize our maximum live memory footprint. We will currently definitely end up with the entire base64-encoded string in memory (built incrementally using += so there could be funky rope stuff happening) at the same time the decoded string is in memory. We could reduce that via chunking the decode more. And depending whether the Blob will reuse our String's internal rep (no? or if not no, then presumably the Blob's payload gets structured clone when we sent it from our worker to the main page thread?), we could probably get the decode string into Blob form a lot faster, and possibly onto the main thread in smaller chunks faster too depending on the cost profile there."
The only absurdly simple options on hand are:
- Refuse to download attachments above a certain size by just not showing the download button and letting users guess why.
- Force an idle period between attachment downloads so that we give the garbage collector more time to clean up after us. We can help it a little more by zeroing out mparser._currentNode.content after we return it, to decrease the liveness of the decoded string too (see the sketch below).
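A sketch of what those two stop-gaps could look like together; this is illustrative only, and downloadOne, queue, and GC_IDLE_MS are assumptions rather than the real Gaia code (only mparser._currentNode.content comes from the comment above):

    // Hypothetical sketch of the two stop-gap measures, not actual
    // Gaia code; GC_IDLE_MS and downloadOne are illustrative.
    var GC_IDLE_MS = 5000; // arbitrary breathing room for the GC

    function downloadSequentiallyWithIdle(queue, mparser, downloadOne) {
      function step() {
        if (!queue.length) {
          return;
        }
        downloadOne(queue.shift(), function onDone() {
          // Drop our reference to the decoded payload as soon as we
          // have handed it off so the GC can reclaim it sooner.
          if (mparser._currentNode) {
            mparser._currentNode.content = null;
          }
          // Force an idle period before the next download to give the
          // garbage collector a chance to actually run.
          setTimeout(step, GC_IDLE_MS);
        });
      }
      step();
    }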
Comment 3•12 years ago
Can we force one attachment download at a time in email?
Assignee
Comment 4•12 years ago
(In reply to Peter Dolanjski [:pdol] from comment #3)
> Can we force one attachment download at a time in email?
Attachments already download sequentially. A deficiency of our implementation that I didn't think of in that message (and realized while working on the streaming-send logic) is that we are not forcing the BodyInfo structure to be re-loaded from the database, so the memory-backed Blobs we are holding onto keep increasing our memory footprint even though the data is already available on disk.
This would explain why downloading multiple attachments in sequence has the potential to kill us.
We want to make sure that our attachment download jobs force a flush of the body record and then re-fetch it before returning any additional info about the body. More specifically, we want to make sure there is zero opportunity for the memory-backed Blob to get sent to the front-end. I think I'll be adding a special method to FolderStorage for this idiom anyway, for draft purposes.
We may still want to do chunked writing as part of the process, but just making this one change should have a sizable impact.
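For illustration, the flush-then-refetch idiom might look roughly like the following; flushAndGetBody, flushBodyRecord, and _evictBodyFromCache are invented names for this sketch, not the real FolderStorage API:

    // Hypothetical sketch of the idiom described above: persist the
    // body record, drop the cached copy, and re-read it so the Blobs
    // handed out are disk-backed instead of memory-backed.
    FolderStorage.prototype.flushAndGetBody = function(suid, date, callback) {
      var self = this;
      this.flushBodyRecord(suid, date, function flushed() {
        // Evict the in-memory record (and its memory-backed Blobs).
        self._evictBodyFromCache(suid);
        // The next read comes from the database; its Blobs reference
        // the on-disk data, so the attachment bytes no longer count
        // against our process's memory footprint.
        self.getMessageBody(suid, date, callback);
      });
    };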
blocking-b2g: - → koi?
Comment 5•12 years ago
It probably makes sense to pull this into our next sprint if possible.
Flags: needinfo?(ffos-product)
Comment 6•12 years ago
Hi all, marking leo? again because this is a critical problem. Please pull it into the next sprint, thanks.
blocking-b2g: koi? → leo?
Comment 7•12 years ago
Jack - while this might be a critical problem, it's out of scope for the final stages of v1.1, and we cannot add it to a sprint when there are none left; we're closing out final blockers and getting through IOT. This will have to get a proper engineering solution in v1.2 or later.
blocking-b2g: leo? → koi?
Updated•12 years ago
blocking-b2g: koi? → koi+
Assignee
Comment 8•12 years ago
The multiple-small-blobs-kill-us problem discussed in comment 4 is being fixed as part of my fix for the streaming memory-death bug 871897. The more difficult single-giant-blob-kills-us issue will need to be spun off, especially because there are platform issues that would ideally be addressed so we're not just adding more kludges.
Assignee: nobody → bugmail
Assignee
Updated•12 years ago
Status: NEW → ASSIGNED
Updated•12 years ago
Target Milestone: --- → 1.3 Sprint 3 - 10/25
Comment 10•12 years ago
(In reply to Preeti Raghunath(:Preeti) from comment #9)
> What are the next steps here?
This bug will be fixed by bug 871897 (currently in review). Per comment #8 here, it looks like we'll need a different bug for handling a single large attachment, but that will not be in scope for 1.2.
Flags: needinfo?(bugmail)
Comment 11•12 years ago
Given the latest developments in bug 871897, this one is too risky for 1.2 at this point in the cycle -- we'll keep working on it but push it out to 1.3.
blocking-b2g: koi+ → 1.3+
Updated•12 years ago
Target Milestone: 1.3 Sprint 3 - 10/25 → 1.3 Sprint 4 - 11/8
Assignee
Comment 12•12 years ago
The fix discussed in comment 4 and comment 8 has now landed as part of bug 871897. As previously mentioned, I am marking this fixed.
I have filed bug 943798 as the spin-off about the next logical steps for addressing this problem.
Updated•11 years ago
Flags: in-moztrap?
Updated•11 years ago
Flags: in-moztrap? → in-moztrap+