Bug 968132 (Closed) - Opened 10 years ago, Closed 10 years ago

Calculate the hard limit for decoded image buffer instead of a hard-coded preference

Categories: Core :: Graphics: ImageLib (defect)
Platform: ARM / Gonk (Firefox OS)
Priority: Not set
Severity: normal
Status: RESOLVED WONTFIX
Reporter: schien
Assignee: Unassigned
Attachments: (1 file)

Per Milan's suggestion in Bug 878577 comment #8, it would be better to provide a reference design that estimates the buffer size limit at runtime.
Changes the preference to specify a percentage of total system memory
Attachment #8526752 - Flags: review?(schien)
Attachment #8526752 - Flags: review?(milan)
Comment on attachment 8526752 [details] [diff] [review]
Bug-968132-Calculate-the-hard-limit-for-decoded-imag.patch

Review of attachment 8526752 [details] [diff] [review]:
-----------------------------------------------------------------

I'll defer to Seth, but this doesn't quite feel right. Since we have the device-specific preferences that OEMs can set, it is easy enough to set the original value in KB to something different for the larger devices. 25% may be too little or too much for anything but 256 MB devices; would we really want 128 MB on 512 MB devices, for instance? If we were to decide we want a percentage of system memory, then I'd rather see a separate, new preference which, when set to 0, uses the old value (in KB), and when set to > 0, computes the limit as a percentage. But I'm not sure we actually want to move away from what we have, my comment that may have started this notwithstanding.
Attachment #8526752 - Flags: review?(milan) → review?(seth)
Hi Fredrik, thanks for working on this!

Using a percentage of total system memory sounds like a good start. The only concern I have is that you are using a linear function here. For a 1 GB platform 25% seems sufficient, but on a 128 MB platform we might need up to 50% to ensure a good browsing experience on image-intensive websites/apps.

In this bug we are trying to avoid doing this preference adjustment on every product/platform, and the hard limit we have now is too conservative for a memory-rich platform. My suggestion is that we can still provide a memory limit in KB, tuned for our most memory-constrained platform, as a minimum guarantee, and use a percentage when more memory is available.

The equation will look like this:

  limit_we_choose = max([pre-configured memory size], (total_memory * [pre-configured percentage]))
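A minimal sketch of this proposal in C++ (the function name, parameter names, and the example pref values below are hypothetical illustrations, not taken from the actual patch):

```cpp
#include <algorithm>
#include <cstdint>

// Hypothetical sketch of the proposed limit: guarantee a pre-configured
// floor in KB (tuned for the most memory-constrained platform), and scale
// up as a percentage of total memory on devices that have more.
uint64_t DecodedImageHardLimitKB(uint64_t aTotalMemoryKB,
                                 uint64_t aMinLimitKB,  // pre-configured floor
                                 double aPercentage)    // pre-configured fraction
{
  const uint64_t scaled =
      static_cast<uint64_t>(aTotalMemoryKB * aPercentage);
  return std::max(aMinLimitKB, scaled);
}
```

For example, with a hypothetical 32768 KB floor and a 25% fraction, a 128 MB device (131072 KB) gets max(32768, 32768) = 32768 KB, while a 1 GB device (1048576 KB) gets max(32768, 262144) = 262144 KB.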
Comment on attachment 8526752 [details] [diff] [review]
Bug-968132-Calculate-the-hard-limit-for-decoded-imag.patch

Review of attachment 8526752 [details] [diff] [review]:
-----------------------------------------------------------------

I'm r-'ing this because the DiscardTracker-based hard limit is about to go away totally. (Indeed, the entire DiscardTracker is being removed.)

The SurfaceCache (which is replacing the DiscardTracker) already has a hard limit that works exactly in this way, as a percentage of main memory.
Attachment #8526752 - Flags: review?(seth)
Attachment #8526752 - Flags: review?(schien)
Attachment #8526752 - Flags: review-
(In reply to Shih-Chiang Chien [:schien] (UTC+8) from comment #3)
>   limit_we_choose = max([pre-configured memory size], (total_memory *
> [pre-configured percentage]))

The SurfaceCache uses min([maximum limit set by pref], (total_memory * [percentage set by pref])).
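For contrast, the min-based formula described above caps the percentage-derived value rather than flooring it. A sketch (names and values are illustrative, not the actual SurfaceCache code):

```cpp
#include <algorithm>
#include <cstdint>

// Sketch of a SurfaceCache-style hard limit: a percentage of total
// memory, capped by a pref-set maximum, so large devices cannot grow
// the cache without bound.
uint64_t SurfaceCacheStyleLimitKB(uint64_t aTotalMemoryKB,
                                  uint64_t aMaxLimitKB,  // cap set by pref
                                  double aPercentage)    // fraction set by pref
{
  const uint64_t scaled =
      static_cast<uint64_t>(aTotalMemoryKB * aPercentage);
  return std::min(aMaxLimitKB, scaled);
}
```

With a hypothetical 262144 KB cap and a 25% fraction, a 4 GB device (4194304 KB) is capped at 262144 KB, while a 128 MB device (131072 KB) gets 32768 KB.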
Status: NEW → RESOLVED
Closed: 10 years ago
Resolution: --- → WONTFIX
Fredrik, I do want to thank you for working on this. It's really unfortunate timing that DiscardTracker is getting removed just as you posted the patch. Thanks so much for taking the time to look into making this happen!