Closed Bug 1004908 Opened 10 years ago Closed 7 years ago

#-moz-samplesize media fragment is counter-intuitive and can't handle scales > 0.5

Categories

(Core :: Graphics: ImageLib, defect)

Platform: x86 macOS
Type: defect
Priority: Not set
Severity: normal

Tracking

Status: RESOLVED INVALID

People

(Reporter: djf, Unassigned)

References

Details

Attachments

(1 file, 1 obsolete file)

I've been making heavy use of the #-moz-samplesize media fragment (see bug 854795) for the 1.3T FirefoxOS branch, where it has proved indispensable for the Camera and Gallery apps on the low-memory Tarako device.

Even though I requested the feature, was involved in its design, and have been using it for the last two weeks, I realized today that I did not actually understand how it works.

After examining the gecko patch and libjpeg, I've discovered that when I specify a sample size of n, gecko requests a scale of 1/n from libjpeg.  The libjpeg API allows us to specify both the numerator and the denominator of the scale value. Gecko always uses 1 for the numerator and uses the samplesize as the denominator.

Unfortunately, the libjpeg implementation does not support arbitrary sample sizes like 1/3rd or 1/5th.  What it does is take the fraction you pass and map it to the nearest larger fraction of the form m/8.  So for #-moz-samplesize of 3 we don't get 1/3rd, but 3/8ths, which is the closest larger eighth to what we requested.

In tabular form:

 samplesize  requested scale  actual scale
 ------------------------------------------
     2           1/2             4/8
     3           1/3             3/8
     4           1/4             2/8
     5           1/5             2/8
     6           1/6             2/8
     7           1/7             2/8
     8           1/8             1/8
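For reference, here's the mapping in code as I understand it (a minimal sketch of the arithmetic implied by the table; the function name is mine, this is not the gecko code):

  // Sketch: how a #-moz-samplesize value of n appears to become an actual
  // decode scale. libjpeg rounds the requested 1/n up to the nearest m/8.
  function actualScale(samplesize) {
    var m = Math.ceil(8 / samplesize);
    return m + '/8';
  }

  for (var n = 2; n <= 8; n++) {
    console.log('samplesize ' + n + ' -> ' + actualScale(n));
  }
  // samplesize 2 -> 4/8, 3 -> 3/8, 4 -> 2/8, 5 through 7 -> 2/8, 8 -> 1/8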

This seems wrong to me:

 - there are seven possible values but only four distinct scales we can get.

 - values 2, 4, and 8 give us scales 1/2, 1/4, and 1/8, so it is natural to
   expect that the value 3 would give 1/3rd.  3/8ths just seems bizarre.

 - there is no way to get scales that are larger than 1/2 but less than 1.
   libjpeg offers scales like 5/8, 3/4 and 7/8, and these scales would be
   useful to me in Gaia, but I can't access them with #-moz-samplesize.

I think it would be better to use the value from the media fragment as the numerator of the fraction and use 8 as the denominator. Then a value of 1 would be a scale of 1/8th, 2 would be 2/8ths, etc.

The name "samplesize" might still make sense with this new system.  If we specify 1, then we're sampling 1 out of every 8 pixels.  If we specify 4, then we're sampling 4 of every 8.

It may not be too late to change the meaning of #-moz-samplesize like this. I think only the sms, gallery and camera apps are using it at this point. Or, if we want to retain compatibility with the old syntax, I guess we could implement a distinct new syntax where "#-moz-samplesize=n/8" gives us the new interpretation.

As a general note about this feature, I'm finding that for my work in gallery and camera, I want to be able to predict with some certainty what size an image will be if I apply the media fragment. So I want a syntax with values that I know will work.  It wouldn't work for me to convert #-moz-samplesize to accept an arbitrary float value. I would still need to rely on the implementation to support only scales of the form n/8.  (So really, I think that the media apps might be best served by a dedicated image manipulation API for downsampling, cropping, and so on. But for now I need to use the media fragment.)
Jeff and Jonas: what do you think? Can we change this?  Is there any chance of changing it in 1.3T?
Flags: needinfo?(jonas)
Flags: needinfo?(jmuizelaar)
This attachment shows a utility function that I'm working on to encapsulate the weirdness of #-moz-samplesize. It also demonstrates the typical use cases I've found for it so far.
Attachment #8416358 - Attachment description: downsample.js → downsample.js: utilities for working with #-moz-samplesize
See Also: → 854795
I forgot to mention that my data on the actual downsampling ratio we get from #-moz-samplesize is from jpeg_core_output_dimensions in media/libjpeg/jdmaster.c
Attachment #8416358 - Attachment is obsolete: true
Julien thinks that Jeff is OOO and suggests that I bring this bug to the attention of Seth.
Flags: needinfo?(seth)
To back up David's comment 0, I made this: http://everlong.org/mozilla/samplesize/
Obviously you need to enable image.mozsamplesize.enabled in about:config.
(In reply to David Flanagan [:djf] from comment #0)
> I think it would be better to use the value from the media fragment as the
> numerator of the fraction and use 8 as the denominator. Then a value of 1
> would be a scale of 1/8th, 2 would be 2/8ths, etc.

I think that sounds reasonable.

> It may not be too late to change the meaning of #-moz-samplesize like this.
> I think only the sms, gallery and camera apps are using it at this point.
> Or, if we want to retain compatibility with the old syntax, I guess we could
> implement a distinct new syntax where "#-moz-samplesize=n/8" gives us the
> new interpretation.

Let's avoid that if possible.

> As a general note about this feature, I'm finding that for my work in
> gallery and camera, I want to be able to predict with some certainty what
> size an image will be if I apply the media fragment.

The media fragment solution is pretty bad, and we're working right now on replacing it. -moz-samplesize is not something we could realistically standardize, at least in my view.

It seems out of the scope of this bug, but would you mind sending an email to me, Kan-Ru, Shih-Chiang, and Jeff describing what you've learned after trying to use -moz-samplesize in B2G? I'd particularly like to hear some examples of situations where you need to be able to predict the final size of the image exactly, or where you otherwise need more control than you're getting right now. This will help us greatly in planning our work down the road.
Flags: needinfo?(seth)
So beyond those general comments, I would be happy to change the meaning of -moz-samplesize in the way you recommend in comment 0. Since this is a nonstandard property I don't see backwards compatibility as a significant concern, and personally I'd much rather we just change the meaning of the syntax if we've found that the current approach doesn't meet our needs. From the platform side of things, this is fine.

However, presumably there are people on the B2G end who may care about this. David, who do we need to consult with to get the go-ahead to break this code, or to make the decision about when it's appropriate to land this change?
Flags: needinfo?(dflanagan)
Short answer: Yes, I think we should change this to be whatever is the most useful to reduce memory usage. The only constraint we have is the riskiness of the patch, i.e. what's the likelihood of this breaking the 1.3T release.

I've always found the samplesize syntax wrong, but since that's what was asked for in bug 854795 I let it be.

I always thought that what we really wanted was to be able to say "decode to an X pixels by Y pixels bitmap", where X and Y would be mapped to the size at which we expect to display the picture on screen.

I.e. I would have thought that the common case is "we want to display a user-provided image in application UI, but we don't want to waste resources keeping an unnecessarily large bitmap in memory". The most expressive way would be to do something like:

<img src="img.jpg#decodeto=200x320">

This would decode into the optimal resolution for a picture which is displayed at a 200x320 resolution. I.e. never upscale a picture as it's being decoded, but downscale as seen fit to not waste memory, while keeping the aspect ratio intact, of course.
Flags: needinfo?(jonas)
IMO we should have no syntax at all, because the platform should be able to infer this automatically. For the example you give, ideally this should just work:

> <img src="img.jpeg" style="width:200; height:320">

Or even better, to retain the image's intrinsic ratio, something like:

> <img src="img.jpg" style="max-width:200; max-height:320">

That's the direction we're working in now. That's why I want to hear from David about his use cases, because I'd like to know if there are situations where we really can't infer this stuff. So far I haven't heard a convincing example.

The existing media fragment approach is really just a stopgap, because doing it right is a harder (but definitely solvable) problem.
(In reply to Seth Fowler [:seth] from comment #10)
> IMO we should have no syntax at all, because the platform should be able to
> infer this automatically. For the example you give, ideally this should just
> work:
> 
> > <img src="img.jpeg" style="width:200; height:320">
> 
> Or even better, to retain the image's intrinsic ratio, something like:
> 
> > <img src="img.jpg" style="max-width:200; max-height:320">
> 
> That's the direction we're working in now. That's why I want to hear from
> David about his use cases, because I'd like to know if there are situations
> where we really can't infer this stuff. So far I haven't heard a convincing
> example.

The examples that you gave control the display width and height. I'm not sure that combining display width and height and decode width and height into a single value is a good idea. I expect most authors would be surprised if changing an image's width caused a redecode. Further, animating these properties seems relatively common.
(In reply to David Flanagan [:djf] from comment #0)
> After examining the gecko patch and libjpeg, I've discovered that when I
> specify a sample size of n, gecko requests a scale of 1/n from libjpeg.  The
> libjpeg API allows us to specify both the numerator and the denominator of
> the scale value. Gecko always uses 1 for the numerator and uses the
> samplesize as the denominator.
> 
> Unfortunately, the libjpeg implementation does not support arbitrary sample
> sizes like 1/3rd or 1/5th.  What it does is take the fraction you pass and
> map it to the nearest larger fraction of the form m/8.  So for
> #-moz-samplesize of 3 we don't get 1/3rd, but 3/8ths, which is the closest
> larger eighth to what we requested.
> 

So the current API is basically a direct copy of Android's inSampleSize in BitmapFactory.Options. Libjpeg originally only supported denominators of 1, 2, 4, and 8, and even now that it supports more ratios these are still the most efficient to decode to (there is no SIMD for the other ratios).

Are the other sizes valuable enough that you'd rather do a slower decode? How often do you need the other ratios?
Flags: needinfo?(jmuizelaar)
(In reply to Jeff Muizelaar [:jrmuizel] from comment #12)
> (In reply to David Flanagan [:djf] from comment #0)
> > After examining the gecko patch and libjpeg, I've discovered that when I
> > specify a sample size of n, gecko requests a scale of 1/n from libjpeg.  The
> > libjpeg API allows us to specify both the numerator and the denominator of
> > the scale value. Gecko always uses 1 for the numerator and uses the
> > samplesize as the denominator.
> > 
> > Unfortunately, the libjpeg implementation does not support arbitrary sample
> > sizes like 1/3rd or 1/5th.  What it does is take the fraction you pass and
> > map it to the nearest larger fraction of the form m/8.  So for
> > #-moz-samplesize of 3 we don't get 1/3rd, but 3/8ths, which is the closest
> > larger eighth to what we requested.
> > 
> 
> So the current API is basically a direct copy of Android's inSampleSize in
> BitmapFactory.Options. Libjpeg originally only supported denominators of
> 1, 2, 4, and 8, and even now that it supports more ratios these are still
> the most efficient to decode to (there is no SIMD for the other ratios).
> 
> Are the other sizes valuable enough that you'd rather do a slower decode?
> How often do you need the other ratios?

Jeff: I'm fine with always having a denominator of 8. But I want more numerator choices. The current implementation gives me 1/8, 2/8, 3/8, and 4/8, but it gives them to me in a weird way by setting the denominator.  3/8 works fine, and I'd like to have 5/8, 6/8 and 7/8 as well, or at least 6/8.  

There are cases on Tarako where I have to reduce the size of an image more than necessary because samplesize doesn't have fine enough granularity.  On Tarako, I don't decode an image that is larger than 2mp. If the user puts a 2.4mp image on their phone, my only choice is to use -moz-samplesize=2 on it and decode it as a 0.6mp image.  If I had a samplesize that would give me 7/8ths, that would allow me to decode it at 49/64ths of full size, which would be under 2mp, but would be big enough that the user probably wouldn't even notice the difference.
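To put numbers on that (the helper below is mine, just to show the arithmetic):

  // Sketch: decoded size in megapixels, given a linear scale that applies
  // to both dimensions.
  function decodedMegapixels(mp, scale) {
    return mp * scale * scale;
  }

  decodedMegapixels(2.4, 1/2);  // 0.6mp: the best -moz-samplesize=2 can do
  decodedMegapixels(2.4, 7/8);  // ~1.84mp: under the 2mp budget, barely smaller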

If the intent of #-moz-samplesize was to allow only 1/2, 1/4 and 1/8, then perhaps we should change the code so that it does not turn the value 3 into 3/8ths.  And since the name "samplesize" is based on existing usage in Android, maybe we should pick a new name if we create a media fragment that sets the numerator instead of setting the denominator.

If you think that downsampling performance for 5/8, 6/8, and 7/8 would be really bad, or if you think that it will not be possible to ever implement those downsampling ratios for other image types, then I'll reconsider this request.  Note, though, that a slowly decoded image is arguably better than an image that is smaller than desired, and also better than an OOM!
Flags: needinfo?(dflanagan)
(In reply to Seth Fowler [:seth] from comment #10)
> IMO we should have no syntax at all, because the platform should be able to
> infer this automatically. For the example you give, ideally this should just
> work:
> 
> > <img src="img.jpeg" style="width:200; height:320">
> 
> Or even better, to retain the image's intrinsic ratio, something like:
> 
> > <img src="img.jpg" style="max-width:200; max-height:320">
> 
> That's the direction we're working in now. That's why I want to hear from
> David about his use cases, because I'd like to know if there are situations
> where we really can't infer this stuff. So far I haven't heard a convincing
> example.
> 

Jeff makes good points about this approach above. Here's what I'd add:

This seems like the webby approach to it. I somehow specify my desired decode size (possibly in addition to my desired display size) and gecko does its best to save memory and get me the size I asked for.  

That doesn't work for me in the Gallery app, however. With the relatively large image sizes (i.e. photos, not typical web content images) and severely limited memory (on Tarako) I need more control.  If I have a 1600x1200 image it is not enough to give me a way to request it at 480x360.  If I asked for that, gecko would probably decode it at 1/2 (800x600) or 3/8 (600x450) and either leave it at that size or downsample it again to the size I asked for.  In either case, the peak memory required is going to be higher than what is required for 480x360 pixels.  Or as another example, suppose I've got a 3mp 2000x1500 image on the phone. I know I don't have enough memory to decode it at full size, so I ask for a 2mp 1600x1200 version because the API allows me to do that.  But under the covers, the implementation can only downsample by 1/2, 3/8ths, 1/4, or 1/8. 1000x750 would be too small, so gecko just gives me the full 3mp and causes an OOM.
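To make the kind of control I need concrete, here's roughly the decision Gallery has to be able to make deterministically (a sketch with invented names, not a platform API):

  // Sketch: pick the largest m/8 scale that keeps the decoded image under
  // a pixel budget, or give up if even 1/8 is too big.
  function pickScale(width, height, maxPixels) {
    for (var m = 8; m >= 1; m--) {
      var w = Math.ceil(width * m / 8);
      var h = Math.ceil(height * m / 8);
      if (w * h <= maxPixels)
        return m / 8;
    }
    return null;  // too large to decode at all
  }

  pickScale(2000, 1500, 2e6);  // 6/8: 1500x1125 is ~1.69mp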

Basically, for Gallery, I need complete transparency in the API. I want to know exactly what is going on when my image is decoded. Kind of the opposite of typical web APIs.
(In reply to Seth Fowler [:seth] from comment #8)
> 
> However, presumably there are people on the B2G end who may care about this.
> David, who do we need to consult with to get the go-ahead to break this
> code, or to make the decision about when it's appropriate to land this
> change?

I believe that so far #-moz-samplesize is used only by the gallery, camera and sms apps. So Julien and I are the only ones that would need to adapt to a change.  Given that Jeff picked "samplesize" to mirror an Android thing, we should pick a new name for the new version, and for a short while allow the two versions to live alongside each other so we don't run into version skew issues where gecko implements one and gaia expects the other.
(In reply to Seth Fowler [:seth] from comment #10)
> IMO we should have no syntax at all, because the platform should be able to
> infer this automatically. For the example you give, ideally this should just
> work:
> 
> > <img src="img.jpeg" style="width:200; height:320">
> 
> Or even better, to retain the image's intrinsic ratio, something like:
> 
> > <img src="img.jpg" style="max-width:200; max-height:320">
> 
> That's the direction we're working in now. That's why I want to hear from
> David about his use cases, because I'd like to know if there are situations
> where we really can't infer this stuff. So far I haven't heard a convincing
> example.
> 
> The existing media fragment approach is really just a stopgap, because doing
> it right is a harder (but definitely solvable) problem.

I said above that this won't work for Gallery. But I do think that it would be valuable for web content on memory-constrained devices, and I hope you continue working on it.

I should also add that all of the approaches discussed so far (except perhaps this max-width, max-height option) still require me to be able to figure out the full size of an image. Typically the only way to do that is by decoding it, but if decodes can cause OOMs, I'm going to have to continue to parse the image files myself to figure out the image size.  Not something that a JS programmer would typically expect to be able to do.

I also agree that the media fragment approach feels like a stopgap.  I'll take whatever I can get to make Gallery work better.  But the deeper I get into it, the more features I want: the ability to determine the size of an image without decoding it, to read and write EXIF data, to do lossless jpeg rotation, to crop and downsample in one operation (and not just for jpeg images), and to encode an image without having to copy it to a canvas first (since that doubles the memory requirement, I think). Basically, I want a JS API to an image manipulation library, possibly based on the ImageBitmap work being done with the canvas API.

For now, though, #-moz-samplesize, or the enhanced version we're discussing in this bug is going to be the best I can get.
(In reply to Seth Fowler [:seth] from comment #7)
> I'd particularly like to hear some
> examples of situations where you need to be able to predict the final size
> of the image exactly, or where you otherwise need more control than you're
> getting right now. This will help us greatly in planning our work down the
> road.

I think I haven't responded explicitly to this point.  The specific thing that led me to discover that #-moz-samplesize=3 meant 3/8ths and not 1/3rd, and caused me to file this bug, is my MediaFrame class in shared/js/media/media_frame. This displays an image and lets the user zoom in and pan. To save memory, though, it starts off with an EXIF preview image and then loads the full-size image if the user zooms in.  In order to make this smooth, I need to know the full size of the image, because the first thing I do when the user zooms is resize the preview image to match the full size of the image.  Then, when that full-size image is loaded, I can just swap it in.  All the details suddenly become clearer, but everything stays in the same place.

This worked fine when the full-size image was decoded at full size.  But for Tarako I sometimes needed to decode full-size images at less than full size, and that only works if I know in advance what size the image is going to be.  My bug was that I was predicting I'd get an image scaled down to 1/3rd size, so when I got an image that was 3/8ths instead, it was positioned incorrectly.

I suppose that as long as I have a lower bound I'm okay. I could have fixed the bug by displaying the 3/8ths image at 1/3 size.  Still, as described above, I also need an upper bound. I need to know that when I try to downsample an image to avoid an OOM, gecko really will downsample it enough.
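For what it's worth, the prediction MediaFrame needs amounts to something like this (a sketch based on my current understanding of libjpeg's rounding, not a platform API):

  // Sketch: predict the decoded dimensions for a given samplesize so the
  // preview can be resized to match before the full image is swapped in.
  function predictDecodedSize(fullWidth, fullHeight, samplesize) {
    var m = Math.ceil(8 / samplesize);  // libjpeg rounds 1/n up to m/8
    return {
      width: Math.ceil(fullWidth * m / 8),
      height: Math.ceil(fullHeight * m / 8)
    };
  }

  // My bug: I assumed #-moz-samplesize=3 meant 1/3, but it decodes at 3/8.
  predictDecodedSize(1200, 900, 3);  // { width: 450, height: 338 }, not 400x300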
Hey everyone. The discussion here seems to have stalled. We're now getting requests from partners to use this feature in order to reduce memory in partner apps. So I'd like to understand:

* Are *we* still planning to use this feature?
* Would we like to see some of the improvements discussed here? At the very
  least the ability to use 5/8ths, 6/8ths, and 7/8ths sampling?
* How urgently if so?

Obviously the perfect solution here would be for gecko to simply automatically detect the render size and scale as appropriate to that. However I got the impression that that was going to take a while before we could make that happen.

The second best solution to me seems to be to have the webpage stick an expected render size into the fragment. So something like "img.jpg#-moz-decodesize=500x300". Then as underlying libraries gain features we can do a better and better job at decoding directly to the requested size.

Potentially, if needed, we could also add "img.jpg#-moz-decodesize=500x300&-moz-optimize-for-memory" which would decode to the requested size even if it means that we're not able to use SIMD instructions.

However, I don't know if this last piece is needed. I.e. I don't know if memory usage is ever so important that we're willing to sacrifice performance for it. OTOH, is image resampling really CPU bound rather than memory bound?
(In reply to Jonas Sicking (:sicking) from comment #18)
> Hey everyone. The discussion here seems to have stalled. We're now getting
> requests from partners to use this feature in order to reduce memory in
> partner apps. So I'd like to understand:
> 
> * Are *we* still planning to use this feature?
> * Would we like to see some of the improvements discussed here? At the very
> least the ability to use
>   5/8ths, 6/8ths, and 7/8ths sampling?
> * How urgently if so?
> 
> Obviously the perfect solution here would be for gecko to simply
> automatically detect the render size and scale as appropriate to that.
> However I got the impression that that was going to take a while before we
> could make that happen.

I'm not necessarily convinced that is the best solution. I think it makes sense to have different presentation sizes and decode sizes. Using a single size for this will make it difficult to give predictable performance.

> The second best solution to me seems to be to have the webpage stick an
> expected render size into the fragment. So something like
> "img.jpg#-moz-decodesize=500x300". Then as underlying libraries gain
> features we can do a better and better job at decoding directly to the
> requested size.

I don't expect libjpeg or libpng will gain the ability to decode at an arbitrary size. If we want that functionality we'd probably need to add separate scale-during-decode functionality.

> 
> Potentially, if needed, we could also add
> "img.jpg#-moz-decodesize=500x300&-moz-optimize-for-memory" which would
> decode to the requested size even if it means that we're not able to use
> SIMD instructions.
> 
> However, I don't know if this last piece is needed. I.e. I don't know if
> memory usage is ever so important that we're willing to sacrifice
> performance for it. OTOH, is image resampling really CPU bound rather than
> memory bound?

The boundedness really depends on how you're resampling the image. The shortcut that libjpeg takes just does less work because it's using fewer DCT coefficients. Regular image sampling is still probably CPU bound instead of memory bandwidth bound.
(In reply to Jeff Muizelaar [:jrmuizel] from comment #19)
> I'm not necessarily convinced that is the best solution. I think it makes
> sense to have different presentation sizes and decode sizes. Using a single
> size for this will make it difficult to give predictable performance.

Fair enough. I agree that getting predictable performance would be hard.

In any case it feels like we agree that the "do everything automatically in the platform" approach is not a viable solution right now (and maybe ever).

> > The second best solution to me seems to be to have the webpage stick an
> > expected render size into the fragment. So something like
> > "img.jpg#-moz-decodesize=500x300". Then as underlying libraries gain
> > features we can do a better and better job at decoding directly to the
> > requested size.
> 
> I don't expect libjpeg or libpng will gain the ability to decode at an
> arbitrary size. If we want that functionality we'd probably need to add
> separate scale-during-decode functionality.

I think you're misunderstanding my proposal.

My proposal is that the page indicates a size that it is planning to render to. All we'd do is ensure that the intrinsic size of the resulting image is predictable. The actual size of the internal buffer doesn't really affect the page, so we are free to choose whatever size is convenient for us.

In other words, for something like <img src="img.jpg#-moz-decodesize=500x300">, as long as the resulting picture is rendered as 500 by 300 pixels, the actual resolution of the buffer doesn't matter.

Likewise, for something like <img src="img.jpg#-moz-decodesize=500x300" style="width: 100px">, as long as the resulting rendered picture size is 100 by 60 pixels, the page won't break.

Making the intrinsic size of the image what it needs to be is something that we can ensure in dom/layout code.

In fact, it might be better if the intrinsic size matched the original size of the image. That will provide better fallback behavior in other browsers that simply ignore the fragment. 

> > Potentially, if needed, we could also add
> > "img.jpg#-moz-decodesize=500x300&-moz-optimize-for-memory" which would
> > decode to the requested size even if it means that we're not able to use
> > SIMD instructions.
> > 
> > However, I don't know if this last piece is needed. I.e. I don't know if
> > memory usage is ever so important that we're willing to sacrifice
> > performance for it. OTOH, is image resampling really CPU bound rather than
> > memory bound?
> 
> The boundedness really depends on how you're resampling the image. The
> shortcut that libjpeg takes just does less work because it's using fewer
> DCT coefficients. Regular image sampling is still probably CPU bound
> instead of memory bandwidth bound.

Ok, happy to drop this feature for now.
(In reply to Jonas Sicking (:sicking) from comment #18)
> Hey everyone. The discussion here seems to have stalled. We're now getting
> requests from partners to use this feature in order to reduce memory in
> partner apps. So I'd like to understand:
> 
> * Are *we* still planning to use this feature?

Yes. The gallery app is utterly dependent on it. Camera resolutions are going up faster than device memory is, and we have to be able to display large photos (either from our own camera or those received from other cameras by SMS, NFC, email, etc.) without OOM.  I assume that Android has some similar capability for displaying large images in their gallery app.


> * Would we like to see some of the improvements discussed here? At the very
> least the ability to use
>   5/8ths, 6/8ths, and 7/8ths sampling?

Right now, if an image is just slightly over our "too big" threshold, we have to throw out 75% of the pixels and display it at half width and half height.  

> * How urgently if so?

So it would be nice to be able to use these other decode sizes, but not urgent.

> 
> Obviously the perfect solution here would be for gecko to simply
> automatically detect the render size and scale as appropriate to that.
> However I got the impression that that was going to take a while before we
> could make that happen.

In many of our current use cases, the image is rendered to a canvas with a drawImage() call and is never inserted into the document, so any solution will have to work with that less webby scenario.
 
> The second best solution to me seems to be to have the webpage stick an
> expected render size into the fragment. So something like
> "img.jpg#-moz-decodesize=500x300". Then as underlying libraries gain
> features we can do a better and better job at decoding directly to the
> requested size.

In order to maintain the aspect ratio with this -moz-decodesize, you'll need code to determine the size of an image without decoding it, unless this media fragment is also going to have "contain" heuristics and resize the image to be as large as possible while still fitting within the region.  It turns out that I already need (and already have) that code to determine image sizes without decoding, even with #-moz-samplesize.
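For reference, the "contain" computation I'd otherwise have to do in the app looks something like this (a sketch; note that -moz-decodesize is still just a proposal here):

  // Sketch: compute a decode size that fits within a bounding box while
  // preserving aspect ratio, without ever upscaling. Requires knowing the
  // image's full size up front, which is exactly the code I already have.
  function containSize(imgWidth, imgHeight, boxWidth, boxHeight) {
    var scale = Math.min(boxWidth / imgWidth, boxHeight / imgHeight, 1);
    return {
      width: Math.round(imgWidth * scale),
      height: Math.round(imgHeight * scale)
    };
  }

  containSize(2000, 1500, 500, 300);  // { width: 400, height: 300 }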

I think -moz-decodesize would work for my needs. I'd probably have to change a lot of code around to use it, though.  It certainly seems easier to understand and use in the general case.

> Potentially, if needed, we could also add
> "img.jpg#-moz-decodesize=500x300&-moz-optimize-for-memory" which would
> decode to the requested size even if it means that we're not able to use
> SIMD instructions.
> 
> However, I don't know if this last piece is needed. I.e. I don't know if
> memory usage is ever so important that we're willing to sacrifice
> performance for it. OTOH, is image resampling really CPU bound rather than
> memory bound?

On low-end FirefoxOS devices, I'd say that memory usage is always more important than performance. If the app is slow, I can display a spinner, but if I OOM that is game over. On low-memory devices, I need a solution that is guaranteed to reduce memory use.
I'll add that it would be awesome to get #-moz-samplesize or its replacement to work with pngs.

Also, I recently discovered that #-moz-samplesize works for progressive jpeg images in the sense that it returns a downsampled image. But it turns out that it does not reduce the memory required to decode pjpegs: they still take the full amount of memory and cause OOMs.

I suspect that this is a fundamental limitation of libjpeg, but if not, it would be great to fix it. As it stands now, we can use moz-samplesize to downsample large jpegs. But not to display large pngs or large pjpegs. For those image types, the gallery app just has to reject the image as too large to display.
David, I'd like to understand what your preferred solution is. Both from the perspective of "what does our gallery app need", and from the perspective of "what do you think would be most useful for the web if we were to standardize it".

I'm less interested in "what could you make work if you were forced to". My goal is to create a good platform, not one that we can make do with.

By the sounds of it, your preferred solution, at least from the "what does our gallery app need" perspective, is to keep -moz-samplesize with its current behavior, but just tweak it a little to get access to the 5/8ths, 6/8ths and 7/8ths resolutions. Is that correct? Is that also what you think would be most useful for the web at large?
moz-samplesize was removed in bug 1311246.
Status: NEW → RESOLVED
Closed: 7 years ago
Resolution: --- → INVALID