Closed Bug 945161 Opened 11 years ago Closed 10 years ago

Decode big jpeg image to a lower resolution version for low end devices

Categories

(Core :: Graphics: ImageLib, defect, P2)

All
Gonk (Firefox OS)
defect

Tracking


RESOLVED DUPLICATE of bug 1045926
tracking-b2g backlog

People

(Reporter: sinker, Assigned: kanru)

References

(Blocks 4 open bugs)

Details

(Keywords: perf, Whiteboard: [c=memory p= s= u=tarako] [MemShrink:P2][demo])

Attachments

(8 files, 2 obsolete files)

23.32 KB, patch (seth: review+)
8.00 KB, patch (seth: review+)
1.63 KB, patch (seth: review+)
2.81 KB, patch (seth: review+)
14.16 KB, patch (seth: review+)
2.37 KB, patch (seth: review+)
3.22 KB, patch
1.85 KB, patch (seth: review+)
Story: Some web sites embed full-size images in their pages. This can exhaust the memory of low-end devices and make the browser app unusable, often crashing it. Solution: We should handle extreme cases of this kind and not decode images that exceed a threshold, e.g. 2 MB. The browser app would then stay usable, with a degraded service.
See Also: → 854795
I don't know the first thing about image decoding... is it possible to decode a low-res version of an image? So quality would be reduced but memory consumption would be lower.
For now, AFAIK, libjpeg always decodes the full-size image before down-sampling. We are looking for someone to improve libjpeg so it can produce a low-res version directly. Once libjpeg is ready, a low-res version is the better solution.
I believe that libjpeg already supports this. CCing the imagelib people and moving this to a component where they will see it.
Component: General → ImageLib
Product: Firefox OS → Core
Keywords: perf
Whiteboard: [MemShrink]
Bug 854795 is relevant, as it discusses possible mechanisms for dealing with this problem. AFAICT nobody is currently working on it, unfortunately.
(In reply to Kyle Huey [:khuey] (khuey@mozilla.com) from comment #3)
> I believe that libjpeg already supports this. CCing the imagelib people and
> moving this to a component where they will see it.

Yup, libjpeg does it for power-of-two downscaling. Bug 854795 deals with "decode into downsampled size"; this bug suggests we ignore images larger than a certain size (resolution instead?). Or can it be made a duplicate of bug 854795?
If we had the flexibility to decode into different sizes with ease I'd say we should implement this feature in terms of that, although the exact policy we should use is debatable (and some parts of that debate have already played out in bug 854795). If web pages were totally static and didn't share images we'd just want to select a decoded size using a formula like min(image_size_on_screen, absolute_max_size). This would nicely handle cases like image galleries and forums that require us to decode huge images to display tiny thumbnails.

There are sticky issues around questions like "what do we do if the on-screen size of the image changes?" and "what if the same image is displayed multiple times at different sizes?" that require solving before this can be handled in a transparent way, though. The approach that was winning in bug 854795 sidestepped those issues by requiring that web developers specifically request that the image be decoded to a smaller size, which handles B2G's problems but not so much the problem in this bug.

Given that nobody is actively working on solving those problems, I think it is reasonable to keep this bug separate. It would be better to refuse to decode large images than OOM, and we can get that a _lot_ more easily than we can get even bug 854795 done. Someday we may have the infrastructure to do much better, though.
Agreed with Seth on absolute_max_size. We should put no limit on the input image size, but always decode UP TO a certain size. Although not as memory efficient as the approach in bug 854795, it requires no API changes and is still able to cover extremely large images. By the way, isn't it possible to decode JPEGs into n/8 (0<n<=8) downscaling directly?
(In reply to Ting-Yuan Huang from comment #7) > By the way, isn't it possible to decode jpegs into n/8 (0<n<=8) downscaling > directly? Yes, I believe libjpeg supports that, and IIRC that's the approach that Shih-Chiang Chien was taking in bug 854795. There's a patch up already but it needs more work.
Looks like the latest libjpeg supports n/16 (0<n<=16) downscaling now. http://dxr.mozilla.org/mozilla-central/source/media/libjpeg/jdmaster.c#101
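For illustration, here is a minimal sketch of how a decoder can ask libjpeg for a scaled decode. The scale_num/scale_denom fields and jpeg_calc_output_dimensions() are standard libjpeg API; the surrounding setup (data source, error manager, output buffer) is assumed to exist elsewhere and is not part of any patch in this bug.

#include <stdio.h>      // jpeglib.h needs FILE
#include <jpeglib.h>

// Minimal sketch: ask libjpeg to decode at num/16 of the original size.
// Assumes cinfo already has a data source and error manager installed;
// only the scaling-related calls are shown.
void RequestScaledDecode(jpeg_decompress_struct* cinfo, unsigned num /* 1..16 */)
{
  jpeg_read_header(cinfo, TRUE);

  // Classic libjpeg accepts N/8; newer versions (see jdmaster.c linked
  // above) accept N/16.
  cinfo->scale_num = num;
  cinfo->scale_denom = 16;

  // output_width/output_height are only valid after this call.
  jpeg_calc_output_dimensions(cinfo);

  // The caller would then allocate an output_width x output_height buffer,
  // call jpeg_start_decompress() and read scanlines as usual.
}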
After discussing with Ting-Yuan and Shih-Chiang, we reached a consensus on some ideas (a code sketch of this policy follows below):

- We need 2 thresholds, IMAGE_BIG_SIZE & IMAGE_SUPER_SIZE.
- Images whose total number of pixels is larger than IMAGE_BIG_SIZE are downscaled when decoding.
- Images larger than IMAGE_SUPER_SIZE are not decoded at all.
- IMAGE_BIG_SIZE & IMAGE_SUPER_SIZE are the vendor's decision (compile time or preference).
- Downscale the image when decoding if the on-screen size is smaller than the image size.

We also discussed the case of sprites. A sprite is usually an image of icons or animation frames laid out side by side, and only a part of it, one frame or icon, is shown at a time. Icon sprites are usually not too big to handle; it is very rare for them to hit the IMAGE_SUPER_SIZE case. Animation sprites are more prone to the IMAGE_BIG_SIZE and IMAGE_SUPER_SIZE cases. In the IMAGE_BIG_SIZE case the quality of the animation is downgraded, but still at an acceptable level. In the IMAGE_SUPER_SIZE case the image would be downscaled so much that it is hard to recognize when drawn on screen at the original scale, so it is better to stop decoding it.

It was also suggested to open another bug for improving the quality of sprites. If we find that an image is requested to be drawn at the same place but shifted from time to time, it is probably an animation; we could decode only the needed part of the image for each frame. That slows the animation down, but at least it works. We don't have partial decoding for now (?), so it should be a long-term improvement for low-end devices.
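A hedged sketch of the two-threshold policy described above. The names IMAGE_BIG_SIZE and IMAGE_SUPER_SIZE come from this comment; the concrete values, the enum, and the helper are hypothetical and would really be vendor-supplied preferences, not anything landed in Gecko.

#include <cstdint>

enum class DecodePolicy { FullSize, Downscaled, Refuse };

constexpr uint64_t IMAGE_BIG_SIZE   = 2u * 1024 * 1024;   // ~2 Mpixels (assumed value)
constexpr uint64_t IMAGE_SUPER_SIZE = 16u * 1024 * 1024;  // ~16 Mpixels (assumed value)

DecodePolicy ChoosePolicy(uint32_t aWidth, uint32_t aHeight)
{
  const uint64_t pixels = uint64_t(aWidth) * aHeight;
  if (pixels > IMAGE_SUPER_SIZE) {
    return DecodePolicy::Refuse;      // too large even after downscaling
  }
  if (pixels > IMAGE_BIG_SIZE) {
    return DecodePolicy::Downscaled;  // decode at a reduced resolution
  }
  return DecodePolicy::FullSize;
}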
Why do we need vendor constants for this? Can't we look at the available RAM and figure out the right settings?
I guess you are not talking about detecting the available RAM size on the fly. It is not easy to figure out available RAM correctly since b2g is not the only thing running on the device: drivers and other daemons may grow over their lifetime, and their behavior may vary from device to device. We would get better settings from experiments on real devices, but we should provide recommendations for the common 128MB and 256MB cases, or for reference devices. That is my reason for constant values; I am not opposed to a fancier function for these two values if we could do it correctly.
If we could decode lazily and scale/crop on the fly, this problem could be avoided. Assuming the screen or canvas size is always reasonable, the sizes of images shown on the canvas, which are never larger than the canvas, are always reasonable too. Obviously performance will be a problem; caching could be a solution. If multiple resolutions of an image exist, we only keep the largest one. We could also apply heuristics to predict how large the image will need to be, to avoid decoding repeatedly; for example, doubling the size of the image buffer whenever the requested one is larger.
(In reply to Ting-Yuan Huang from comment #13)
> If we could decode lazily and scale/crop on the fly, this problem can be
> avoided. Assuming that the screen or canvas size is always reasonable, the
> sizes of images shown on the canvas, which are never larger than canvas, are
> always reasonable, too.

We don't support decoding with cropping now. If it is easy to implement, it would be a good solution; but if it takes a lot of time, the risk to the milestone should be considered, and maybe it should be an improvement for a later stage. So we need more feedback about cropping in imagelib from someone who knows this better.
Whiteboard: [MemShrink] → [MemShrink][tarako]
Blocks: 128RAM
blocking-b2g: --- → 1.3?
Let's talk about this for 1.4. If it's done, then we can uplift for 128mb device.
blocking-b2g: 1.3? → 1.4?
Assignee: nobody → kchen
> - We need 2 threshold, IMAGE_BIG_SIZE & IMAGE_SUPER_SIZE.
> - For the images that the number of all pixels is bigger than
>   IMAGE_BIG_SIZE, they are down scaled when decoding.
> - For the images bigger than IMAGE_SUPER_SIZE, they are not to be decoded.
> - IMAGE_BIG_SIZE & IMAGE_SUPER_SIZE are vendor's decisions. (compile time
>   or preference).
> - Down scale the image when decoding if the on-screen size is smaller than
>   image size.

Surely downsampling all images is better than not displaying the super-sized images?
Whiteboard: [MemShrink][tarako] → [MemShrink:P2][tarako]
So the plan is to use the following algorithm:

0. Do a size decode so we can compute layout.
1. Send constraints to the decoder based on the screen size, frame size, available memory, etc.
2. The decoder allocates memory and decodes a downsampled image according to the constraints.
3. Re-decode the image if the constraints change.

The image source might be quite large as well, so we would like not to preserve the image source buffer. To do that we need to read from the stream every time we decode the image, which requires us to cache the image file (for the session only, if the image is not cacheable).
Whiteboard: [MemShrink:P2][tarako] → [c=memory p= s= u=tarako] [MemShrink:P2][tarako]
Hi Kanru, can you advise on the effort and ETA for this bug? We'd like to consider it for Tarako. Thanks
blocking-b2g: 1.4? → 1.3T?
Flags: needinfo?(kchen)
Whiteboard: [c=memory p= s= u=tarako] [MemShrink:P2][tarako] → [c=memory p= s= u=tarako] [MemShrink:P2]
Let's kill this bug in the Tarako work week.
Flags: needinfo?(kchen)
triage: 1.3T+
blocking-b2g: 1.3T? → 1.3T+
Whiteboard: [c=memory p= s= u=tarako] [MemShrink:P2] → [c=memory p= s= u=tarako] [MemShrink:P2][demo]
Attached patch WIP patch (obsolete) — Splinter Review
Use fixed width/height constraint currently
Summary: Stop decode big image for the browser app for low end devices → Decode big image to a lower resolution version for low end devices
Status: NEW → ASSIGNED
Priority: -- → P2
Attachment #8376013 - Flags: feedback?(seth)
Comment on attachment 8376013 [details] [diff] [review]
WIP patch

Review of attachment 8376013 [details] [diff] [review]:
-----------------------------------------------------------------

Looks good, Kan-Ru. Sorry for my slow response.

You're planning on getting feedback from layout eventually, instead of relying on media fragments, right?

::: image/src/Decoder.cpp
@@ +456,5 @@
> +{
> +  mFrameSizeConstraint.width = std::max(mFrameSizeConstraint.width,
> +                                        aFrameSizeConstraint.width);
> +  mFrameSizeConstraint.height = std::max(mFrameSizeConstraint.height,
> +                                         aFrameSizeConstraint.height);

Don't you really want to work in terms of area and not width/height independently? Maximizing the coordinates independently can give you a much larger area than any of the aFrameSizeConstraints you got passed had individually.

Also, I don't understand why you're using max and not min here. Isn't the goal to satisfy all constraints simultaneously? That'd imply min to me.

::: image/src/ImageFactory.cpp
@@ +185,5 @@
> +    nsAutoCString ref;
> +    aURI->GetRef(ref);
> +    mozilla::net::nsMediaFragmentURIParser parser(ref);
> +    if (parser.HasResolution()) {
> +      newImage->SetRequestedResolution(parser.GetResolution());

Is this based on bug 854795, or is there just some code missing from the patch?

::: image/src/RasterImage.h
@@ +722,5 @@
> 
>   inline bool CanQualityScale(const gfx::Size& scale);
>   inline bool CanScale(GraphicsFilter aFilter, gfx::Size aScale, uint32_t aFlags);
> +  inline bool IsDownsampled(nsIntRect aFrameRect) {
> +    return (aFrameRect.width < mSize.width || aFrameRect.height < mSize.height);

The problem is that for animated images, individual frames may not take up the whole area of the image. They may be smaller than mSize without actually being downsampled.

We probably shouldn't try to infer this from the size of the imgFrame.
Attachment #8376013 - Flags: feedback?(seth)
(In reply to Seth Fowler [:seth] from comment #22)
> Comment on attachment 8376013 [details] [diff] [review]
> WIP patch
>
> Review of attachment 8376013 [details] [diff] [review]:
> -----------------------------------------------------------------
>
> Looks good, Kan-Ru. Sorry for my slow response.
>
> You're planning on getting feedback from layout eventually, instead of
> relying on media fragments, right?

Yes. The idea is to infer a nice downsampling ratio from layout.

> ::: image/src/Decoder.cpp
> @@ +456,5 @@
> > +{
> > +  mFrameSizeConstraint.width = std::max(mFrameSizeConstraint.width,
> > +                                        aFrameSizeConstraint.width);
> > +  mFrameSizeConstraint.height = std::max(mFrameSizeConstraint.height,
> > +                                         aFrameSizeConstraint.height);
>
> Don't you really want to work in terms of area and not width/height
> independently? Maximizing the coordinates independently can give you a much
> larger area than any of the aFrameSizeConstraints you got passed had
> individually.

Hmm... you're right. I should use the area here.

> Also, I don't understand why you're using max and not min here. Isn't the
> goal to satisfy all constraints simultaneously? That'd imply min to me.

I use max here because otherwise the image with the largest area will appear very blurry. The auto downsampling is only triggered if the image is larger than a certain size, and we want to limit the max size while maintaining some rendering quality. The math here is just temporary; other heuristics could also be used.

> ::: image/src/ImageFactory.cpp
> @@ +185,5 @@
> > +    nsAutoCString ref;
> > +    aURI->GetRef(ref);
> > +    mozilla::net::nsMediaFragmentURIParser parser(ref);
> > +    if (parser.HasResolution()) {
> > +      newImage->SetRequestedResolution(parser.GetResolution());
>
> Is this based on bug 854795, or is there just some code missing from the
> patch?

This patch is based on SC's work in bug 854795, but I changed how the buffer size is reported from the decoder so that each decoder can implement downsampling separately.

> ::: image/src/RasterImage.h
> @@ +722,5 @@
> >
> >   inline bool CanQualityScale(const gfx::Size& scale);
> >   inline bool CanScale(GraphicsFilter aFilter, gfx::Size aScale, uint32_t aFlags);
> > +  inline bool IsDownsampled(nsIntRect aFrameRect) {
> > +    return (aFrameRect.width < mSize.width || aFrameRect.height < mSize.height);
>
> The problem is that for animated images, individual frames may not take up
> the whole area of the image. They may be smaller than mSize without actually
> being downsampled.
>
> We probably shouldn't try to infer this from the size of the imgFrame.

I thought individual frames all have the same size. I could compare mSampledSize with mSize here.
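To make the area-vs-dimensions discussion above concrete, here is a hedged sketch of one way the merging could work; the class and member names are hypothetical and this is not the actual patch. It keeps the largest area any consumer requests, capped by an absolute limit, so the biggest on-screen use stays sharp while memory stays bounded.

#include <algorithm>
#include <cstdint>

// Hypothetical sketch: consumers report the pixel area they will draw the
// image at; the decoder targets the largest of those, capped by a limit.
class FrameSizeConstraint {
public:
  explicit FrameSizeConstraint(uint64_t aMaxDecodedArea)
    : mMaxDecodedArea(aMaxDecodedArea), mRequestedArea(0) {}

  void AddConstraint(uint32_t aWidth, uint32_t aHeight) {
    mRequestedArea = std::max(mRequestedArea, uint64_t(aWidth) * aHeight);
  }

  // Area (in pixels) the decoder should actually decode to.
  uint64_t TargetArea() const {
    return std::min(mRequestedArea, mMaxDecodedArea);
  }

private:
  uint64_t mMaxDecodedArea;  // absolute cap, e.g. from a preference
  uint64_t mRequestedArea;   // largest area any consumer asked for
};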
Kanru, when do you think you can provide a patch for review? We're running out of time.
Flags: needinfo?(kchen)
Flags: needinfo?(kchen)
Summary: Decode big image to a lower resolution version for low end devices → Decode big jpeg image to a lower resolution version for low end devices
(In reply to James Ho from comment #24)
> Kanru, when do you think you can provide a patch for review? We're running
> out of time.

Will focus on JPEG for this bug. To use the layout information we will only enable this when decode-on-draw is true. I think I can come up with a patch for review in one or two days.
Attachment #8376013 - Attachment is obsolete: true
Attachment #8390352 - Flags: review?(seth)
Blocks: 983050
Blocks: 983051
Blocks: 983056
Comment on attachment 8390352 [details] [diff] [review]
Decode big image to a lower resolution version for low end devices

Review of attachment 8390352 [details] [diff] [review]:
-----------------------------------------------------------------

I like where this is going! As usual, there are a couple of aggravating corner cases in imagelib, though.

::: image/public/imgIContainer.idl
@@ +339,5 @@
>    * resetAnimation(), or requestRefresh() is called for the first time.
>    */
>   [notxpcom] void setAnimationStartTime([const] in TimeStamp aTime);
> +
> +  void addSizeConstraint([const] in nsIntSize aSize);

Needs a comment.

::: image/public/imgIRequest.idl
@@ +158,4 @@
>   void requestDecode();
>   void startDecoding();
> 
> +  void addSizeConstraint([const] in nsIntSize aSize);

Needs a comment.

::: image/src/ImageFactory.cpp
@@ +193,5 @@
> +    aURI->GetRef(ref);
> +    mozilla::net::nsMediaFragmentURIParser parser(ref);
> +    if (parser.HasResolution()) {
> +      newImage->SetRequestedResolution(parser.GetResolution());
> +    }

Are you still using the media fragment solution? I was hoping we could drop that from this patch, now that you have imgIRequest::AddSizeConstraint.

::: image/src/ImageWrapper.cpp
@@ +315,5 @@
> 
> +NS_IMETHODIMP
> +ImageWrapper::AddSizeConstraint(nsIntSize const& aSize)
> +{
> +  return mInnerImage->AddSizeConstraint(aSize);

You'll need to override this for OrientedImage and ClippedImage, or you'll get wrong results. Those wrappers change the size that the image seems to have from the perspective of layout. The aspect ratio might even be different!

If time is short and you need a quick fix, you could just refuse to pass AddSizeConstraint on to the inner image for those wrappers. Neither of those cases is very common.

::: image/src/RasterImage.cpp
@@ +2599,5 @@
> +  if (IsDownsampled()) {
> +    gfx::Size downsampledScale = GetDownsampledScale();
> +    userSpaceToImageSpace.Multiply(gfxMatrix().Scale(downsampledScale.width,
> +                                                     downsampledScale.height));
> +    subimage.ScaleRoundOut(downsampledScale.width, downsampledScale.height);

So we never do high quality downscaling if we're downsampled? Is that what we want?

::: image/src/imgRequest.cpp
@@ +828,5 @@
>                          static_cast<uint32_t>(mInnerWindowId));
> 
> +  if (mSizeConstrainted) {
> +    mImage->AddSizeConstraint(mSizeConstraint);
> +  }

So in the multipart/x-mixed-replace case, the different parts can have different sizes. I'm frankly not entirely sure how we should implement handling this, because I'm not sure exactly how the layout code ends up reacting.

It's kind of a corner case anyway, so we don't need to spend too much effort on it as part of this bug, but at a minimum we should make sure that test_bug733553.html is hitting your size constraint code. If it is and that test passes, we're probably OK, but unless you're 100% sure there's no problem you should still add a comment about this issue. That way if it ever bites us we can track it down more easily.

::: image/src/imgRequest.h
@@ +253,4 @@
>   bool mIsInCache : 1;
>   bool mBlockingOnload : 1;
>   bool mResniffMimeType : 1;
> +  bool mSizeConstrainted : 1;

Should be "mSizeConstrained".
Attachment #8390352 - Flags: review?(seth) → review-
(In reply to Seth Fowler [:seth] from comment #27) > Comment on attachment 8390352 [details] [diff] [review] > Decode big image to a lower resolution version for low end devices > ::: image/src/ImageFactory.cpp > @@ +193,5 @@ > > + aURI->GetRef(ref); > > + mozilla::net::nsMediaFragmentURIParser parser(ref); > > + if (parser.HasResolution()) { > > + newImage->SetRequestedResolution(parser.GetResolution()); > > + } > > Are you still using the media fragment solution? I was hoping we could drop > that from this patch, now that you have imgIRequest::AddSizeConstraint. Are you in favor of removing the media fragment solution entirely? Or add a size constraint based on the requested sample size. > ::: image/src/RasterImage.cpp > @@ +2599,5 @@ > > + if (IsDownsampled()) { > > + gfx::Size downsampledScale = GetDownsampledScale(); > > + userSpaceToImageSpace.Multiply(gfxMatrix().Scale(downsampledScale.width, > > + downsampledScale.height)); > > + subimage.ScaleRoundOut(downsampledScale.width, downsampledScale.height); > > So we never do high quality downscaling if we're downsampled? Is that what > we want? We could do high quality downscaling when the CPU is idling since we still have the source data. That would require us to decode to a higher resolution then HQ downscale it so we should put memory into consideration too. This will be a follow up work. > ::: image/src/imgRequest.cpp > @@ +828,5 @@ > > static_cast<uint32_t>(mInnerWindowId)); > > > > + if (mSizeConstrainted) { > > + mImage->AddSizeConstraint(mSizeConstraint); > > + } > > So in the multipart/x-mixed-replace case, the different parts can have > different sizes. I'm frankly not entirely sure how we should implement > handling this, because I'm not sure exactly how the layout code ends up > reacting. > > It's kind of a corner case anyway, so we don't need to spend too much effort > on it as part of this bug, but at a minimum we should make sure that > test_bug733553.html is hitting your size constraint code. If it is and that > test passes, we're probably OK, but unless you're 100% sure there's no > problem you should still add a comment about this issue. That way if it ever > bites us we can track it down more easily. All 18 tests passed but they were not hitting the size constraint code. We could ignore the size constraint for mMultipart and mAnim first if we are not sure what to do for the corner cases.
(In reply to Kan-Ru Chen [:kanru] from comment #28) > Are you in favor of removing the media fragment solution entirely? Or add a > size constraint based on the requested sample size. If there's reason to keep it, we should keep it for now. If not, I am in favor of removing it entirely. Is your patch enough to do the job without the media fragment solution? > > ::: image/src/RasterImage.cpp > > @@ +2599,5 @@ > > > + if (IsDownsampled()) { > > > + gfx::Size downsampledScale = GetDownsampledScale(); > > > + userSpaceToImageSpace.Multiply(gfxMatrix().Scale(downsampledScale.width, > > > + downsampledScale.height)); > > > + subimage.ScaleRoundOut(downsampledScale.width, downsampledScale.height); > > > > So we never do high quality downscaling if we're downsampled? Is that what > > we want? > > We could do high quality downscaling when the CPU is idling since we still > have the source data. That would require us to decode to a higher resolution > then HQ downscale it so we should put memory into consideration too. This > will be a follow up work. Sounds good. If you file a followup bug about it (please CC me on it), I think that's enough. > All 18 tests passed but they were not hitting the size constraint code. > We could ignore the size constraint for mMultipart and mAnim first if we are > not sure what to do for the corner cases. I'd prefer not to ignore the size constraints for all multipart images, because the vast majority of such cases will have the same size between all parts. Size changes are _very_ rare. How about this: in RasterImage::SetSize, if we have a multipart image and the size has changed, we reset the size constraints and set a flag to ignore all future calls to AddSizeConstraint. That way we get the advantage of decoding to lower resolution for almost all multipart images, but we should still get reasonable results if the parts do change in size.
Blocks: 984759
David, could you try whether these patches work for you? They remove -moz-samplesize support and downsample when you set a width and height smaller than the original image. Note that you have to turn image.mem.use_decode_size_constraint on.
Flags: needinfo?(dflanagan)
Attachment #8393300 - Flags: review?(seth)
Attachment #8393348 - Flags: review?(seth)
Comment on attachment 8393348 [details] [diff] [review]
Fix compiler warnings

Review of attachment 8393348 [details] [diff] [review]:
-----------------------------------------------------------------

::: image/decoders/nsJPEGDecoder.cpp
@@ +271,4 @@
> 
>   // Size constraint changed, request a new frame
>   nsIntRect bounds = mCurrentFrame->GetRect();
> +  if (bounds.width > 0 && bounds.height > 0 &&

Should we assert that bounds.width/bounds.height are >= 0?
Attachment #8393348 - Flags: review?(seth) → review+
Comment on attachment 8392732 [details] [diff] [review]
Revert Bug 854795.-Add-support-for-moz-samplesize

Review of attachment 8392732 [details] [diff] [review]:
-----------------------------------------------------------------

R++++++++++++++++++++++++++

Best patch ever!
Attachment #8392732 - Flags: review?(seth) → review+
Comment on attachment 8392731 [details] [diff] [review]
Decode big image to a lower resolution version for low end devices

Review of attachment 8392731 [details] [diff] [review]:
-----------------------------------------------------------------

Nice!

::: image/src/RasterImage.cpp
@@ +3187,5 @@
> +    return NS_OK;
> +  }
> +
> +  if (sMaxBufferWidth != 0 && sMaxBufferHeight != 0 &&
> +      (aSize.width >= sMaxBufferWidth || aSize.height >= sMaxBufferHeight)) {

It seems to me we should be using area rather than separate settings for width and height here as well. r+ with that change.
Attachment #8392731 - Flags: review?(seth) → review+
Comment on attachment 8393300 [details] [diff] [review]
Set Preference variables

Review of attachment 8393300 [details] [diff] [review]:
-----------------------------------------------------------------

::: b2g/app/b2g.js
@@ +922,5 @@
>  pref("dom.wakelock.enabled", true);
> +
> +// Should be set on a per-device basis
> +pref("image.mem.max_buffer_width", 0);
> +pref("image.mem.max_buffer_height", 0);

As I mentioned in the other patch, I think we should be using area here. r+ with that change.
Attachment #8393300 - Flags: review?(seth) → review+
Comment on attachment 8392731 [details] [diff] [review]
Decode big image to a lower resolution version for low end devices

>diff --git a/layout/generic/nsImageFrame.cpp b/layout/generic/nsImageFrame.cpp
>index 78e10c2..5fc69e1 100644
>--- a/layout/generic/nsImageFrame.cpp
>+++ b/layout/generic/nsImageFrame.cpp
>@@ -855,6 +855,11 @@ nsImageFrame::Reflow(nsPresContext* aPresContext,
>                  getter_AddRefs(currentRequest));
>     if (currentRequest) {
>       currentRequest->GetImageStatus(&loadStatus);
>+      if (mState & IMAGE_SIZECONSTRAINED) {
>+        currentRequest->AddSizeConstraint(
>+          nsIntSize(aPresContext->AppUnitsToDevPixels(aMetrics.Width()),
>+                    aPresContext->AppUnitsToDevPixels(aMetrics.Height())));
>+      }
>     }
>   }
>   if (aPresContext->IsPaginated() &&

Seems like this would make more sense lower down in the function?

>@@ -908,6 +913,12 @@ nsImageFrame::ReflowFinished()
> {
>   mReflowCallbackPosted = false;
> 
>+  if (mImage) {
>+    mImage->AddSizeConstraint(
>+      nsIntSize(PresContext()->AppUnitsToDevPixels(GetSize().width),
>+                PresContext()->AppUnitsToDevPixels(GetSize().height)));
>+  }
>+
>   nsLayoutUtils::UpdateImageVisibilityForFrame(this);
> 
>   return false;

I don't think the size can change after this frame's Reflow function has finished, so this is unnecessary.
(In reply to Timothy Nikkel (:tn) from comment #40) > >+ currentRequest->AddSizeConstraint( > >+ nsIntSize(aPresContext->AppUnitsToDevPixels(aMetrics.Width()), > >+ aPresContext->AppUnitsToDevPixels(aMetrics.Height()))); Oh, and this won't give you the actual device pixel size (despite what it says) the image is drawn at if there are any transforms that have non-1 scales involved at all.
Do we have a plan for how to handle the situation where we've decoded an image at a smaller size, but then later we need a larger size of it?
I'm getting lost on what we want in 1.5 between this and bug 854795. This bug is in 1.3, bug 854795 is in 1.4, what's our long term solution?
Comment on attachment 8392731 [details] [diff] [review]
Decode big image to a lower resolution version for low end devices

>@@ -855,6 +855,11 @@ nsImageFrame::Reflow(nsPresContext* aPresContext,
>+      if (mState & IMAGE_SIZECONSTRAINED) {
>+        currentRequest->AddSizeConstraint(
>+          nsIntSize(aPresContext->AppUnitsToDevPixels(aMetrics.Width()),
>+                    aPresContext->AppUnitsToDevPixels(aMetrics.Height())));
>+      }

Sorry to pile on. aMetrics.Width/Height() include the border and padding too, not just the content box (where the image is actually drawn into).
(In reply to Timothy Nikkel (:tn) from comment #40) > Comment on attachment 8392731 [details] [diff] [review] > Decode big image to a lower resolution version for low end devices > > >diff --git a/layout/generic/nsImageFrame.cpp b/layout/generic/nsImageFrame.cpp > >index 78e10c2..5fc69e1 100644 > >--- a/layout/generic/nsImageFrame.cpp > >+++ b/layout/generic/nsImageFrame.cpp > >@@ -855,6 +855,11 @@ nsImageFrame::Reflow(nsPresContext* aPresContext, > > getter_AddRefs(currentRequest)); > > if (currentRequest) { > > currentRequest->GetImageStatus(&loadStatus); > >+ if (mState & IMAGE_SIZECONSTRAINED) { > >+ currentRequest->AddSizeConstraint( > >+ nsIntSize(aPresContext->AppUnitsToDevPixels(aMetrics.Width()), > >+ aPresContext->AppUnitsToDevPixels(aMetrics.Height()))); > >+ } > > } > > } > > if (aPresContext->IsPaginated() && > > Seems like this would make more sense lower down in the function? Maybe even earlier in the function? Just after we get the IMAGE_SIZECONSTRAINED and computed size because we want to know the size constraint but not the borders, padding and pagination. > >@@ -908,6 +913,12 @@ nsImageFrame::ReflowFinished() > > { > > mReflowCallbackPosted = false; > > > >+ if (mImage) { > >+ mImage->AddSizeConstraint( > >+ nsIntSize(PresContext()->AppUnitsToDevPixels(GetSize().width), > >+ PresContext()->AppUnitsToDevPixels(GetSize().height))); > >+ } > >+ > > nsLayoutUtils::UpdateImageVisibilityForFrame(this); > > > > return false; > > I don't think the size can change after this frames Reflow function has > finished, so this is unnecessary. On a second thought.. this indeed is unnecessary. (In reply to Timothy Nikkel (:tn) from comment #41) > (In reply to Timothy Nikkel (:tn) from comment #40) > > >+ currentRequest->AddSizeConstraint( > > >+ nsIntSize(aPresContext->AppUnitsToDevPixels(aMetrics.Width()), > > >+ aPresContext->AppUnitsToDevPixels(aMetrics.Height()))); > > Oh, and this won't give you the actual device pixel size (despite what it > says) the image is drawn at if there are any transforms that have non-1 > scales involved at all. See below. (In reply to Timothy Nikkel (:tn) from comment #42) > Do we have a plan for how to handle the situation where we've decoded an > image at a smaller size, but then later we need a larger size of it? Currently we assume the initial size is the desired size to display so any transform or change of size will just stretch the downsampled image. Later we could add heuristics to redecode if the size changes too much.
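If such a redecode heuristic is added later, it could be as simple as the sketch below; this is hypothetical, not part of any patch in this bug. The idea is to tolerate moderate stretching of the downsampled frame and only redecode once the requested draw area exceeds the decoded area by some factor.

#include <cstdint>

// Hypothetical redecode heuristic: only redecode when the requested draw
// area is much larger than the area we decoded to.
bool ShouldRedecode(uint64_t aDecodedArea, uint64_t aRequestedArea)
{
  constexpr uint64_t kRedecodeFactor = 4;  // assumed tuning knob
  return aRequestedArea > aDecodedArea * kRedecodeFactor;
}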
(In reply to Kan-Ru Chen [:kanru] from comment #45) > Currently we assume the initial size is the desired size to display so any > transform or change of size will just stretch the downsampled image. Later > we could add heuristics to redecode if the size changes too much. Transforms can be completely static and present from the beginning of the load of the document, so I think dynamic changes and transform are separate issues.
(In reply to Milan Sreckovic [:milan] from comment #43) > I'm getting lost on what we want in 1.5 between this and bug 854795. This > bug is in 1.3, bug 854795 is in 1.4, what's our long term solution? IMO this is our long term solution.
(In reply to Kan-Ru Chen [:kanru] from comment #33) > David, could you try if these patches work for you? These patches remove > -moz-samplesize support and downsample when you set width and height that is > smaller than the original image. Note you have to turn > image.mem.use_decode_size_constraint on. Building and trying out a custom gecko takes me a long time, and I don't have any time right now. So I'm just going to have to trust that what you're doing will work. I never got beyond a wip patch with -moz-samplesize so it is not like I'll have to redo much to use this new approach. I assume that you've thought through the fact that the media fragment approach works for CSS background-image URLs and that this approach does not, right?
Flags: needinfo?(dflanagan)
(In reply to David Flanagan [:djf] from comment #48) > I assume that you've thought through the fact that the media fragment > approach works for CSS background-image URLs and that this approach does > not, right? There is no particular reason why this can't be made to work for CSS background images, especially in combination with background-size. It's true that this patch doesn't handle that case right now, though.
I changed all the nsIntSize constraints to a single uint32_t constraint that represents the area. I'll fold the patches into one patch later.
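As an illustration of how a single area constraint can drive the JPEG decoder, here is a hedged sketch; the helper is hypothetical and is not the folded patch. It picks the largest n/16 fraction supported by libjpeg whose output still fits within the constrained area.

#include <cstdint>

// Hypothetical helper: choose the libjpeg scale numerator (denominator 16)
// so that the downsampled output fits within aMaxArea pixels.
unsigned PickJpegScaleNum(uint32_t aSrcWidth, uint32_t aSrcHeight, uint64_t aMaxArea)
{
  for (unsigned num = 16; num > 1; --num) {
    // Round up, as libjpeg does when computing output dimensions.
    const uint64_t w = (uint64_t(aSrcWidth)  * num + 15) / 16;
    const uint64_t h = (uint64_t(aSrcHeight) * num + 15) / 16;
    if (w * h <= aMaxArea) {
      return num;  // largest fraction that satisfies the constraint
    }
  }
  return 1;        // smallest supported output; may still exceed aMaxArea
}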
Attachment #8394667 - Flags: review?(seth)
Blocks: 987016
kchen/Seth, do you mind updating the status of the bug? thanks
Flags: needinfo?(seth)
Flags: needinfo?(kchen)
Since this will affect bug 949779, which is already using #-moz-samplesize, we need some testing against that use case. I'll talk to Steve about this.
Flags: needinfo?(kchen)
Hi Kan-Ru, would you mind giving us the updates? thanks!
Flags: needinfo?(kchen)
Oops, this does not work with Canvas. nsLayoutUtils::SurfaceFromElement gets the frame from the imgIContainer directly and does not know it's downsampled.
Flags: needinfo?(kchen)
Comment on attachment 8394667 [details] [diff] [review]
Use single area size constraint everywhere

Review of attachment 8394667 [details] [diff] [review]:
-----------------------------------------------------------------

Very nice!
Attachment #8394667 - Flags: review?(seth) → review+
(In reply to Kan-Ru Chen [:kanru] from comment #54)
> nsLayoutUtils::SurfaceFromElement get frame from imgContainer directly and
> does not know it's downsampled.

Heh, yeah, that's a good point. You can use ClippedImage as a guide to know which imgIContainer methods you may need to think about, since ClippedImage has to deal with the same problem of presenting a different size than the 'real' size of the underlying imgIContainer.

This situation is very ugly to handle with the current imgIContainer API, since things like Canvas request the actual image surface. There are a couple of options when you need to return a surface at the original size of the image but you have only the downsampled version:

(1) Rasterize to a temporary surface. This is what ClippedImage does.

(2) Update the caller to be aware of this specific issue. This might well be the best bet here. We may just want to say that callers cannot count on the size of the surface returned from GetFrame being the same as the nominal size of the image.

It may also be that there's another change to the imgIContainer API that would make this cleaner; I haven't looked at this in depth.
Flags: needinfo?(seth)
BTW, I think it's OK if we don't fix every problem in this bug. Let's make sure we don't regress functionality, but for example if we want to go for option 2 in comment 56, we can definitely start with option 1, get something landed, and then work on option 2 in a followup bug.
Priority: P2 → P1
Whiteboard: [c=memory p= s= u=tarako] [MemShrink:P2][demo] → [c=memory p= s= u=tarako] [MemShrink:P2][demo][ETA 4/16]
IMO we shouldn't block 1.3T on this. It's risky to apply such a large change without much testing given our limited time frame.
blocking-b2g: 1.3T+ → 1.3T?
ni? Thinker/Fabrice
Flags: needinfo?(tlee)
Flags: needinfo?(fabrice)
Yep, let's move that to backlog.
blocking-b2g: 1.3T? → backlog
Flags: needinfo?(fabrice)
agree
Flags: needinfo?(tlee)
Note about the XXX: We can't get the PresContext until the <img> is attached to a document. But I wonder, can't we use the document that created the <img>?
Attachment #8410803 - Flags: review?(seth)
Implement option (1) as described in comment 56.
Attachment #8410805 - Flags: review?(seth)
Priority: P1 → P2
I've started seriously using #-moz-samplesize in 1.3T. See bug 989290 and bug 1001724 for example. Some (most) of this 1.3T code will also eventually land on master. I know an earlier version of this bug removed the #-moz-samplesize media fragment. If you continue to do that, there will be significant work required in Gaia to adapt to the new system. If there is any way you can make this patch work for all image types with a special-case fast path for jpeg, that would be a huge win over the existing JPEG-only media fragment. I'd be happy to switch from the media fragement to this new approach if I get extra image types with it. The case for making the necessary Gaia changes is much less compelling if the solution is still jpeg-specific. My point, I guess, is that if all you can do is jpeg, please don't remove the #-moz-samplesize code.
Actually, another place where this patch could add useful features over the moz-samplesize approach is to allow the downsample numerator to be set instead of just the denominator. With moz-samplesize, the smallest downsample I can do (with #-moz-samplesize=2) reduces the total number of pixels by a factor of 4. I think libjpeg supports downsample ratio like 7/8ths and 4/5ths, and it might be useful to be able to downsample images just a little bit, rather than so dramatically. needinfo Jeff: am I right about what libjpeg can do? Have you thought about supporting a fractional #-moz-samplesize where you'd handle a n/m syntax? (If there was no / in the value, then samplesize n would mean 1/n)
Flags: needinfo?(jmuizelaar)
David, we're discussing that issue in bug 1004908, right? I've removed the needinfo for Jeff. Currently these patches *do* remove -moz-samplesize, but whether it will end up landing that way depends on whether what these patches offer is enough to meet Gaia's immediate needs. Removing -moz-samplesize requires no immediate work on the part of Gaia devs, and indeed I'd recommend that the code that already uses it not be changed right away. All that would happen is that it would stop doing anything. If we can turn these patches on, turn -moz-samplesize off, and get results that are just as good (if not better), then we can call that a win and we'll definitely rip out -moz-samplesize. If that happens then Gaia devs can remove the non-useless references to -moz-samplesize whenever they get around to it. It won't do any harm in the meantime.
Flags: needinfo?(jmuizelaar)
Comment on attachment 8410803 [details] [diff] [review]
Part 4. Try set size constraint when img width/height are set

Review of attachment 8410803 [details] [diff] [review]:
-----------------------------------------------------------------

Looks good!

::: content/html/content/src/HTMLImageElement.cpp
@@ +218,5 @@
> +  }
> +  const nsAttrValue* width = GetParsedAttr(nsGkAtoms::width);
> +  const nsAttrValue* height = GetParsedAttr(nsGkAtoms::height);
> +  if (width->Type() != nsAttrValue::eInteger ||
> +      height->Type() != nsAttrValue::eInteger) {

This is kind of a bummer, but presumably percentage cases get handled in nsImageFrame, right?
Attachment #8410803 - Flags: review?(seth) → review+
Comment on attachment 8410804 [details] [diff] [review]
Part 5. Allocate new frame & set size when doing sync decode

Review of attachment 8410804 [details] [diff] [review]:
-----------------------------------------------------------------

Looks good but I could use a little clarification.

::: image/decoders/nsJPEGDecoder.cpp
@@ +198,5 @@
> {
> +  if (aBuffer && aCount) {
> +    mSegment = (const JOCTET *)aBuffer;
> +    mSegmentLen = aCount;
> +  }

Just from looking at the patch I don't understand this change. Why was this needed?

@@ +417,5 @@
>     mState = JPEG_START_DECOMPRESS;
> +    if (NeedsNewFrame()) {
> +      // Yield to allocate new frame
> +      return;
> +    }

Same here.
Comment on attachment 8410805 [details] [diff] [review]
Part 6. GetFrame returns a scaled frame when the original frame is downsampled

Review of attachment 8410805 [details] [diff] [review]:
-----------------------------------------------------------------

Looks good.
Attachment #8410805 - Flags: review?(seth) → review+
Whiteboard: [c=memory p= s= u=tarako] [MemShrink:P2][demo][ETA 4/16] → [c=memory p= s= u=tarako] [MemShrink:P2][demo]
Kan-Ru, did you see comment 69? I'd like to get this patch r+'d but I need a response from you first.
Flags: needinfo?(kchen)
(Actually, I'm going to unset the review flag. Please re-request review once you've responded to comment 69.)
Attachment #8410804 - Flags: review?(seth)
Any news on this? Is this cross-platform? Because this will definitely fix bug 1142294 on Android. The image will be downscaled, but when you save the image, will it save the original (the one from the URL)?
Yeah, we're still working on this, but we ended up not using the code in this bug. The new approach is called "downscale-during-decode" and it's happening in bug 1045926. (And yes, this should have no effect on what happens when you save the file.)
Status: ASSIGNED → RESOLVED
Closed: 10 years ago
Resolution: --- → DUPLICATE
(Most of the bugs blocked by this bug are obsolete too. When I get a chance, I need to clean those up.)
Flags: needinfo?(kchen)
blocking-b2g: backlog → ---