Closed Bug 500500 (JPEG-XR) Opened 15 years ago Closed 4 years ago

Add support for JPEG-XR/HD Photo

Categories

(Core :: Graphics: ImageLib, enhancement)

Type: enhancement
Priority: Not set
Severity: normal

Tracking


VERIFIED WONTFIX

People

(Reporter: jrmuizel, Unassigned)

References

Details

(Keywords: feature, parity-ie, student-project)

The first step in making this possible is to develop a free library for decoding JPEG-XR. Ideally it would be released under the same license as libjpeg.

Information on the format is available at: http://www.microsoft.com/whdc/xps/hdphotodpk.mspx
Keywords: student-project
See http://en.wikipedia.org/wiki/HD_Photo#Licensing for info on licensing, specifically related to the Device Porting Kit code and copyleft licenses.
The developer preview for IE9 has support for jpeg-xr
OS: Mac OS X → All
(In reply to comment #1)
> See http://en.wikipedia.org/wiki/HD_Photo#Licensing for info on licensing,
> specifically related to the Device Porting Kit code and copyleft licenses.

(In reply to comment #2)
> The developer preview for IE9 has support for jpeg-xr
And?
Sorry that I get so upset, but mentioning this seems to mean one of two things:
- Either you belong to the big horde of people who are too lazy to follow links (I mean, it's one click with the mouse, come on, that's unbelievable).
- Or you didn't understand that hard-to-read legal stuff. It's not that hard, although I admit it can be confusing.

So, here is my simplified explanation: Mozilla is published under a mix of GNU- and BSD-like licenses, but the MS license explicitly forbids combining GNU-like licenses with the JPEG XR source kit.
So: anyone who builds or publishes a Mozilla Firefox with the JPEG XR kit compiled in is infringing and can be sued by MS. They would have every right to do so. That's it.

The real solution here is JPEG2000, which sadly isn't implemented either.

If you meant developing a library from scratch, just from the specification: why the effort? There are already ready-made JPEG2000 libs with compatible licenses. Why put so much work into an MS format that only an MS browser supports, and which isn't even released yet?
(In reply to comment #3)
Or perhaps comment 2 was just suggesting the parity-IE9 tag. Also, AFAIK, no JPEG2000 library currently supports progressive decoding, which would be desired for use in a browser. (Note that I'm not a fan of JPEG-XR, and would love to see JPEG2000 support -- just being the "devil's advocate" here.)

See also: Bug 36351
It's worth mentioning that XR has alpha channel support, which would address the need driving people to periodically beg for JNG.
(In reply to comment #5)
> It's worth mentioning that XR has alpha channel support, ...
Exactly as JPEG2000 has. Transparency _and_ alpha channels.
(In reply to comment #7)
> http://www.microsoft.com/interop/cp/default.mspx
This isn't really helping. You still have to maintain a function-wise exact copy of the MS lib.
I may be wrong here, but it seems the Wikipedia article is out of date regarding licencing problems.

ITU/ISO/IEC in July this year released source code for a JPEG-XR encoding/decoding library ( http://www.itu.int/rec/T-REC-T.835 ) with the following licence:


/*************************************************************************
*
* This software module was originally contributed by Microsoft
* Corporation in the course of development of the
* ITU-T T.832 | ISO/IEC 29199-2 ("JPEG XR") format standard for
* reference purposes and its performance may not have been optimized.
*
* This software module is an implementation of one or more
* tools as specified by the JPEG XR standard.
*
* ITU/ISO/IEC give You a royalty-free, worldwide, non-exclusive
* copyright license to copy, distribute, and make derivative works
* of this software module or modifications thereof for use in
* products claiming conformance to the JPEG XR standard as
* specified by ITU-T T.832 | ISO/IEC 29199-2.
*
* ITU/ISO/IEC give users the same free license to this software
* module or modifications thereof for research purposes and further
* ITU/ISO/IEC standardization.
*
* Those intending to use this software module in products are advised
* that its use may infringe existing patents. ITU/ISO/IEC have no
* liability for use of this software module or modifications thereof.
*
* Copyright is not released for products that do not conform to
* to the JPEG XR standard as specified by ITU-T T.832 |
* ISO/IEC 29199-2.
*
* Microsoft Corporation retains full right to modify and use the code
* for its own purpose, to assign or donate the code to a third party,
* and to inhibit third parties from using the code for products that
* do not conform to the JPEG XR standard as specified by ITU-T T.832 |
* ISO/IEC 29199-2.
* 
* This copyright notice must be included in all copies or derivative
* works.
* 
* Copyright (c) ITU-T/ISO/IEC 2008, 2009.
***********************************************************************/


I don't know if there's anything here that is incompatible with the Mozilla licence or makes developers wary of legal issues, but it seems much more usable than the licence bundled with the Device Porting Kit.
This points to the concern, I believe:
> * Those intending to use this software module in products are advised
> * that its use may infringe existing patents. ITU/ISO/IEC have no
> * liability for use of this software module or modifications thereof.
(In reply to comment #9)
> * Copyright is not released for products that do not conform to
> * to the JPEG XR standard as specified by ITU-T T.832 |
> * ISO/IEC 29199-2.

Not sure how well that'd fly, either.
(In reply to comment #10)
> This points to the concern, I believe:
> > * Those intending to use this software module in products are advised
> > * that its use may infringe existing patents. ITU/ISO/IEC have no
> > * liability for use of this software module or modifications thereof.

As far as I can tell this is just ITU/ISO/IEC covering their asses in case someone in the future decides they own patents that are used in JPEG-XR, as has happened in the past with jpeg. Microsoft have added jpeg-xr to their community promise ( http://www.microsoft.com/interop/cp/default.mspx ) so I wouldn't expect any issues from them.

(In reply to comment #11)
> (In reply to comment #9)
> > * Copyright is not released for products that do not conform to
> > * to the JPEG XR standard as specified by ITU-T T.832 |
> > * ISO/IEC 29199-2.
> 
> Not sure how well that'd fly, either.

As I understand it, the explanation for this is to stop people from trying to make their own fork of the standard, on top of making people confident that all JPEG-XR software can read all JPEG-XR images etc. I wouldn't worry about this one.
I don't really think the reference implementation is suitable for use in Firefox for technical reasons so its copyright license doesn't really matter here.
(In reply to comment #12)
> (In reply to comment #11)
> > (In reply to comment #9)
> > > * Copyright is not released for products that do not conform to
> > > * to the JPEG XR standard as specified by ITU-T T.832 |
> > > * ISO/IEC 29199-2.
> > 
> > Not sure how well that'd fly, either.
> 
> As I understand it, the explanation for this is to stop people from trying to
> make their own fork of the standard, on top of making people confident that all
> JPEG-XR software can read all JPEG-XR images etc. I wouldn't worry about this
> one.
I can understand their reasoning, I just don't think it's GPL-compatible.

(In reply to comment #13)
> I don't really think the reference implementation is suitable for use in
> Firefox for technical reasons so its copyright license doesn't really matter
> here.
Well, it could presumably provide a good starting point.
Well, the reference implementation could indeed be a good first starting point.

Where I'm really unsure:
Does the posted part of the license cover the whole reference implementation? Or does it include other licenses, or reference other libraries with other licenses, which don't work together with GPL and BSD?
(In reply to comment #15)
> Well, the reference implementation could be indeed used for a first good
> starting point.
> 
> Where I'm really unsure:
> Does the posted part of the license covers the whole reference implementation?
> Or does it have other licenses included, or references other libraries with
> other licenses, which do not work together with GPL and BSD?

After having a look through the code myself, the only other library included with the reference code is my_getopt, which is under an MIT licence. The licence I posted earlier (which is the licence in its entirety) covers all of the other code.
Jeff: why is the reference implementation not suitable for FF? Could you shed some light on the technical reasons?
(In reply to comment #17)
> Jeff: Why the reference implementation is not suitable for FF. Could you put
> some light on technical reasons?

Two main reasons:

1. Performance. From my initial tests, it looks like the reference JPEG-XR implementation is about 10x slower to decode than libjpeg.

2. Security. I don't know whether the reference implementation has been hardened against malicious inputs, and from my quick look it appears that it has not.
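For what it's worth, a "roughly 10x slower" claim is the kind of thing that's easy to sanity-check with a small timing harness. A minimal sketch, with the caveat that the two stand-in callables below are placeholders I made up, not real decoder bindings; a real measurement would swap in actual libjpeg and jxrlib wrappers:

```python
import time

def mean_decode_time(decode, payload, iterations=100):
    """Mean wall-clock seconds per call for a decoder callable."""
    start = time.perf_counter()
    for _ in range(iterations):
        decode(payload)
    return (time.perf_counter() - start) / iterations

# Hypothetical stand-ins for libjpeg and the JPEG-XR reference decoder.
def stand_in_fast(data):
    return len(data)

def stand_in_slow(data):
    return sum(data)  # deliberately does more work per byte

payload = bytes(range(256)) * 64
slowdown = mean_decode_time(stand_in_slow, payload) / mean_decode_time(stand_in_fast, payload)
```

Running each decoder over the same image corpus and reporting the ratio is enough to confirm or refute a figure like the one above.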
Status: NEW → RESOLVED
Closed: 13 years ago
Resolution: --- → WONTFIX
I don't think this is a WONTFIX. We're not going to ship the reference implementation, but we may very well write our own implementation.

At the very least, I think more discussion is in order.
Status: RESOLVED → REOPENED
Resolution: WONTFIX → ---
My mistake, thanks! I was misusing WONTFIX.
Just wondering if there's been any change in momentum for having this bug resolved, particularly in light of the latest version of Flash Player containing native support for JPEG-XR. Interestingly, JPEG-XR support was added to Flash Player primarily for games made with the Molehill 3D API, since it can compress textures more efficiently and supports alpha transparency natively. More can be read about their reasons for implementing the standard at: http://blog.kaourantin.net/?p=116

I can see this being highly relevant to HTML5 games and games which use OpenGL. I think moving forward there'll be more and more demand for an image format that supports lossy compression and alpha transparency in these areas. It's interesting, too, that JPEG-XR is the format chosen for storing all of the textures in id Software's Rage, which uses the id Tech 5 engine. I imagine they chose the format due to its ability to decompress regions of a JPEG-XR image without having to decode the entire file. This could potentially allow for some nice optimisations down the road for OpenGL games which render from a large JPEG-XR texture file... or perhaps not.

I'm sure there may be other workarounds for lossy textures that need alpha channels, without having to add support for a new image format. I'm curious, though, to hear if anyone has more thoughts on the matter?
(In reply to myutwo33 from comment #21)

One more thing about JPEG-XR possibly benefiting OpenGL games and applications: the fact that JPEG-XR supports many different bit depths sounds like a great advantage. Lower bit depths such as RGB555 give developers an opportunity to save GPU memory (and download time), while higher bit depths such as float32/64bpp open the door to high-dynamic-range textures in OpenGL applications. To me, these sound like exciting scenarios that JPEG-XR would be extremely well suited to enabling, and which would be very difficult to enable with the existing supported image formats.
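The GPU-memory argument is simple arithmetic: uncompressed texture size scales linearly with bits per pixel. A quick illustration (pure arithmetic, no JPEG-XR API involved):

```python
def texture_bytes(width, height, bits_per_pixel):
    """Uncompressed size in bytes of a width x height texture."""
    return width * height * bits_per_pixel // 8

w, h = 1024, 1024
rgb555  = texture_bytes(w, h, 16)   # RGB555 packs into 16 bits/pixel
rgba32  = texture_bytes(w, h, 32)   # conventional 8-bit RGBA
rgba128 = texture_bytes(w, h, 128)  # float32 per channel (HDR)
# 2 MiB vs 4 MiB vs 16 MiB for the same 1024x1024 texture
```

A format that can carry the lower- and higher-depth variants natively saves the conversion step at upload time.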
(In reply to Jeff Muizelaar [:jrmuizel] from comment #0)
> The first step in making this possible is to develop a free library for
> decoding JPEG-XR. Ideally it would be released under the same license as
> libjpeg.

They released one the other day (or at least I found one the other day).

Blog post: http://hdview.wordpress.com/2013/04/11/jpegxr-photoshop-plugin-and-source-code/

BSD-licensed library: https://jxrlib.codeplex.com/
(In reply to xo2yo2zo2 from comment #23)
> BSD-licensed library: https://jxrlib.codeplex.com/

Thanks for pointing this out. I'll take a look.
Microsoft have posted some new metrics for JPEG XR vs WebP using jxrlib 1.1. Seems to show JPEG XR in a more favourable light. I'm not good enough at reading these kinds of metrics to know if they're badly skewed or not. Food for thought?

http://hdview.wordpress.com/2013/05/30/jpegxr-updates/
Are there any plans to support this codec? JPEG XR seems pretty stable, the patent issues seem to be sorted out with MS's Community Promise (http://archive.is/DZfS4), the licensing issues are null with the release of the BSD licensed library (https://jxrlib.codeplex.com/), and it seems to offer desired improvements to JPEG including:

* increased compression
* better color accuracy
* transparency, through an alpha channel

Are there any objections to JPEG XR, or is the issue just a lack of manpower?
JPEG XR is a codec that is about more than compression efficiency, and I would love to have this to maintain e.g. dynamic range from RAWs. Photographers regularly deal with 12- and 14-bit-per-channel RAW photos, but common JPEG implementations really just compress down to 8 bits per colour channel. This is one of the core differences between a RAW photo and a JPEG photo. Of course, WebP also supports more bits per channel, so that's another option here. JPEG and HEVC-MSP, however, don't.
(In reply to Jonas Nordlund from comment #28)
> WebP also supports more bits per channel
WebP actually doesn't support more colour depth than JPEG. Like JPEG, it only supports 8 bits per channel. I consider this to be one of the major flaws in the format.
JPEG XR library is BSD licensed.

I hope Mozilla implements a JPEG XR decoder soon.
Support for JPEG-XR should be added to Firefox soon. Stay tuned.
See also:

Chromium Issue 56908: Add JPEG XR support
https://code.google.com/p/chromium/issues/detail?id=56908
Alias: JPEG-XR
Keywords: feature
Hardware: x86 → All
Whiteboard: [parity-ie]
(In reply to Kailas from comment #17)
> Jeff: Why the reference implementation is not suitable for FF. Could you put
> some light on technical reasons?

The reference implementation did not support decoding a partially available image, nor did it support progressive decoding. If you use IE on a slow connection you will see that its JPEG-XR decoder is lame. Apparently they modified it to do progressive decoding (when the image is in a corresponding layout), but it does not work as it should. And when an image is in SPATIAL mode, or has a planar alpha layer, it simply downloads the entire image before decoding it.
I fixed hundreds of bugs in the reference implementation and added support for progressive decoding, as well as for decoding images that are only partially available. The code is now being tested by Microsoft Open Technologies Inc. and hopefully will become part of the Firefox build soon.
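To make the requirement concrete: progressive/partial decoding in a browser boils down to a streaming contract, i.e. feed bytes as they arrive from the network and expose whatever scan lines are complete so far. A toy model of that contract (the class, its methods, and the fixed row size are illustrative only; this is not jxrlib's or ImageLib's actual API):

```python
class IncrementalDecoder:
    """Toy model of a streaming image decoder: accepts arbitrary
    byte chunks and exposes only the fully received scan lines."""

    def __init__(self, width, height, bytes_per_pixel=4):
        self.row_size = width * bytes_per_pixel
        self.height = height
        self.buffer = bytearray()

    def feed(self, chunk):
        """Append bytes as they arrive from the network."""
        self.buffer.extend(chunk)

    def rows_ready(self):
        """Number of complete rows available for display so far."""
        return min(len(self.buffer) // self.row_size, self.height)

dec = IncrementalDecoder(width=4, height=3)   # rows of 16 bytes
dec.feed(b"\x00" * 20)
assert dec.rows_ready() == 1                  # one full row so far
dec.feed(b"\x00" * 40)
assert dec.rows_ready() == 3                  # capped at image height
```

A decoder that can only run once the whole file is buffered (as described for SPATIAL mode above) fails this contract, which is what makes it a poor fit for a browser image pipeline.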
(In reply to Please Ignore This Troll (Account Disabled) from comment #15)
> Well, the reference implementation could be indeed used for a first good
> starting point.
> 
> Where I'm really unsure:
> Does the posted part of the license covers the whole reference
> implementation? Or does it have other licenses included, or references other
> libraries with other licenses, which do not work together with GPL and BSD?

Like I mentioned earlier, a Firefox decoder for JPEG-XR images is now being tested by MS OpenTech. I am not sure when/whether it will be handed to the Mozilla Foundation. The decoder itself is about 2500 lines of code. A modified decoding part of JXRLib comes with it. Until the decision is made by Microsoft Open Technologies, I can't share the patch with anybody. But I can share the binaries (Win32, Linux64, Mac OS X and Android).
(In reply to Ted Kapustin from comment #34)
> Like I mentioned earlier, a Firefox decoder for JPEG-XR images is now being
> tested by MS OpenTech. I am not sure when/whether it will be handed to
> Mozilla Foundation. The decoder itself is about 2500 lines of code. A
> modified decoding part of JXRLib comes with it. Until the decision is made
> by Microsoft Open Technologies, I can't share the patch with anybody. But I
> can share the binaries (Win32, Linux64, Mac OS X and Android),

Ted, is there anything Mozilla can do to help MS OpenTech move forward with your patch? Can you share contact information for someone at MS OpenTech?
Just to clarify, the current feeling is that JPEG-XR's compression performance is generally worse than JPEG's, and thus even though JPEG-XR adds additional features it's probably not worth adding support for.
(In reply to Jeff Muizelaar [:jrmuizel] from comment #36)
> Just to clarify, the current feeling is that JPEG-XR's compression
> compression performance is generally worse than JPEG's and thus even though
> JPEG-XR adds additional features it's probably not worth adding support for.

Could you elaborate? I've never encountered a claim that JPEG-XR has worse compression performance than JPEG, and given its algorithms that would seem almost impossible. Is there some data out there on this?

This paper indicates that JPEG-XR compresses better than JPEG and JPEG-2000 (at 4:2:0), though it's a very small study: http://mmspg.epfl.ch/files/content/sites/mmspl/files/shared/QoE/IQA/SPIE09.pdf

Their data is here: http://mmspg.epfl.ch/iqa

JPEG-XR is somewhat obscure (though not quite as obscure as JPEG-2000 – even though Safari supports it, Apple makes no mention of it anywhere on their website, perhaps for patent-related reasons). However, JPEG-XR is an official ISO standard, and Edge is now including JPEG-XR in its accept header. Relatedly: https://blogs.windows.com/msedgedev/2015/10/07/using-extended-srcset-and-the-picture-element-to-tailor-your-image-to-every-device-and-layout/

Right now it seems to have a small efficiency advantage over JPEG (pending any data you were thinking of that indicates that it's *worse* than JPEG.) It would be interesting to find out how much headroom there is for further improvement. For example, a great deal of effort has been devoted to both lossless and lossy optimization of JPEGs, over a span of 20 years. There are a plethora of tools to that end. No such effort has been directed toward JPEG-XR. There is one publicly available encoder that has not been updated in 3 years. It's plausible that a quick review could yield significant improvements to encoding efficiency, on the order of MozJPEG's nice wins. If JPEG-XR is already more efficient than JPEG, a single round of improvements might open up a significant delta. I don't know how much headroom there is – I'm just saying maybe a codec expert should take a closer look at it.

(webp is getting better too. It would probably be worth taking a deep look at both formats and their potential for further improvement. There is no standard or spec for webp. That makes me a bit nervous – it feels too slapdash to entrust image assets to.)
Here's a study of image formats that we did:
https://people.mozilla.org/~josh/lossy_compressed_image_study_july_2014/#psnr-hvs-m-data

JPEG-XR shows no obvious advantage over JPEG, and JPEG is better according to some metrics (if measuring SSIM, mozjpeg can be tuned for SSIM and basically match JPEG-XR's performance).
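For readers unfamiliar with the metrics used in studies like the one linked above: PSNR-HVS-M and SSIM are refinements of plain PSNR, which is itself just a log-scaled mean squared error. A self-contained sketch of the base metric (the sample pixel values are made up for illustration):

```python
import math

def psnr(original, compressed, max_value=255):
    """Peak signal-to-noise ratio in dB between two equal-length
    pixel sequences; higher means closer to the original."""
    mse = sum((a - b) ** 2 for a, b in zip(original, compressed)) / len(original)
    if mse == 0:
        return float("inf")        # identical images
    return 10 * math.log10(max_value ** 2 / mse)

ref = [52, 55, 61, 66, 70, 61, 64, 73]
out = [54, 55, 60, 66, 69, 62, 64, 72]
score = psnr(ref, out)             # ≈ 48.13 dB here (MSE = 1.0)
```

The catch the thread keeps circling is that each codec's encoder can be tuned toward whichever metric is being measured, which is why single-metric comparisons between codecs are so contentious.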
(In reply to Jeff Muizelaar [:jrmuizel] from comment #38)
> Here's a study of image formats that we did:
> https://people.mozilla.org/~josh/lossy_compressed_image_study_july_2014/
> #psnr-hvs-m-data
> 
> JPEG-XR shows no obvious advantage over JPEG and JPEG is better according to
> some metrics (if measuring SSIM mozjpeg can be tuned for SSIM and basically
> match JPEG-XR's performance)

Yes, I read the two MozJPEG studies. Your statement about JPEG-XR seems to rest only on the Mozilla studies, taking no account of all the other evidence out in the world. That's a strange way to approach this question.

On the MozJPEG studies... I took another look. Something that strikes me is how much they lack ecological validity. They're based entirely on PNG source images, and those images are broadly dissimilar to the images that people will be converting into JPEG-XR, webp, etc. for the web. The Tecnick images, which dominate the set, are artificially normalized for other purposes, and are all exactly the same dimensions. The Kodak images were apparently taken by film cameras some decades ago, then converted into Photo CD format.

I would not use any of these images for testing image codecs for the web. I definitely wouldn't draw any conclusions or make any decisions for a browser used by millions of people based on that kind of research. The decisions Mozilla makes re: image formats will have massive implications for the bandwidth use, energy use, and page load times for thousands and millions of site owners and visitors. I would want some serious, valid data before making any such decisions.

So, if I wanted to decide on browser image format support for the next few years, I would:

1. Look at the use cases, workflows, and asset pipelines as they are right now, in July, 2016. This unpacks into:
-- What kinds of images would be converted to alternate formats? Where do the images come from? What stage are they in the pipeline?

I assume the answer to this is that a lot of images are born on smartphones as JPEGs. Professional assets are presumably born on DSLRs as a raw format and converted to JPEG with pro tools like Photoshop or Lightroom. (And logos and simple graphics are PNGs or SVGs, with a strong performance preference for PNGs -- see AMP HTML for example.)

It seems like the main focus of any format should be to convert JPEGs into something smaller and more efficient. Or another way of looking at it – we could really use a format designed *just for that purpose*.

2. Project how the use cases, workflows, and pipelines will change, if at all, over the next few years.
-- What do we expect to be happening in the near future as far as image generation and destination?

Probably more HDR, though smartphones seem to be able to implement HDR while producing normal JPEGs (as opposed to actual HDR formats like OpenEXR).

And dual lens cameras like the new HTC and iPhone 7. Not sure how any of this will impact image formats...

I'd care a lot about decode performance. MozJPEG looks great on that score. JPEG-XR is supposed to be faster, less complex than JPEG-2000, but not necessarily JPEG. webp decode performance is a mystery, and Google won't provide any real world data.

In any case, the data I'm aware of on JPEG-XR, webp, and JPEG-2000 adds up to essentially zero data from a scientific standpoint. The strange images used by the MozJPEG studies may or may not have impacted the results – I'd want to find out, and do about twelve times as much research as that before making decisions for millions of users.

For any new decoders, I'd architect them for modern CPUs/GPUs exclusively, on the observation that old browsers will not be adding support for a new image format, thus only new browsers and platforms are in play. I'd forget about conventional scalar programming, and approach the codec as though SIMD (or the GPU) were the only instructions available. That might not strictly work, but the exercise could spark some nice performance wins nonetheless. Something like this applied to JPEG-XR or webp might be cool: https://blogs.msdn.microsoft.com/ie/2013/09/12/using-hardware-to-decode-and-load-jpg-images-up-to-45-faster-in-internet-explorer-11/
I agree it would be great to have more data and would be happy to reconsider our position in light of such data. However, given the data that we have we can't justify doing this research ourselves. I suggest you encourage Microsoft to do this work as they seem most invested in JPEG-XR.
Here is some good data: http://www.slideshare.net/JoeDuarte/clipboards/akamai-jpeg-xr-savings-slide

Akamai took 2308 JPEG images from 100 different websites and converted them to JPEG-XR. The conversion saved between 26.7 and 29.1%.

That's a good example of ecologically valid research. Do what users will do – convert JPEGs. Akamai's study is far more authoritative than Mozilla's. Mozilla took a bunch of artificially and uniformly modified PNGs, some of which descended from 1980s film cameras, and converted them to JPEG-XR. No user will do anything like that, so the study design is bizarre.

The savings Akamai saw are what we'd expect given what we know about the JPEG-XR codec and algorithms. It almost can't be worse than JPEG, mathematically speaking. In fact, Mozilla's results are so strange that they really ought to be replicated. In any case, we have the Akamai data, which is much richer and more valid than anything I knew about before. You could also get data from Cloudinary and imgix if you needed more.

Note also that JPEG-XR is already implemented in Windows, so couldn't Firefox just use the Windows implementation? I'm not altogether sure why browsers don't use OS-provided codecs more – Firefox might have briefly used Windows' H.264 implementation, but that seems rare. If it's already in Windows, and the codec is open source, isn't this an easy win?
(In reply to José Duarte from comment #41)
> Here is some good data:
> http://www.slideshare.net/JoeDuarte/clipboards/akamai-jpeg-xr-savings-slide

I can't access this...

> 
> Akamai took 2308 JPEG images from 100 different websites and converted them
> to JPEG-XR. The conversion saved between 26.7 and 29.1%.
> 
> That's a good example of ecologically valid research. Do what users will do
> – convert JPEGs. Akamai's study is far more authoritative than Mozilla's.
> Mozilla took a bunch of artificially and uniformly modified PNGs, some of
> which descended from 1980s film cameras, and converted them to JPEG-XR. No
> user will do anything like that, so the study design is bizarre.

Converting from JPEG isn't a valid methodology because the converted images don't have the same quality as the originals. You could take 2308 JPEG images and compress them 30% with mozjpeg to achieve the same result.

> The savings Akamai saw are what we'd expect given what we know about the
> JPEG-XR codec and algorithms. It almost can't be worse than JPEG,
> mathematically speaking. In fact, Mozilla's results are so strange that they
> really ought to be replicated.

The studies that we did have the full source code for replicating them. You're welcome to replicate them on which ever set of images you like.

> In any case, we have the Akamai data, which
> is much richer and more valid than anything I knew about before. You could
> also get data from Cloudinary and imgix if you needed more.
> 
> Note also that JPEG-XR is already implemented in Windows, so couldn't
> Firefox just use the Windows implementation? I'm not altogether sure why
> browsers don't use OS-provided codecs more – Firefox might have briefly used
> Windows' H.264 implementation, but that seems rare. If it's already in
> Windows, and the codec is open source, isn't this an easy win?

OS-provided codecs are not necessarily available on all the platforms that we support, and since we don't have the ability to update them, they create a bad situation if there are security bugs.
> I can't access this...

Apologies. Should be fixed now. Also, the slide can be accessed at its source here: http://www.slideshare.net/safruti/extreme-image-optimizations/9

> > 
> > Akamai took 2308 JPEG images from 100 different websites and converted them
> > to JPEG-XR. The conversion saved between 26.7 and 29.1%.
> > 
> > That's a good example of ecologically valid research. Do what users will do
> > – convert JPEGs. Akamai's study is far more authoritative than Mozilla's.
> > Mozilla took a bunch of artificially and uniformly modified PNGs, some of
> > which descended from 1980s film cameras, and converted them to JPEG-XR. No
> > user will do anything like that, so the study design is bizarre.
> 
> Converting from JPEG isn't a valid methodology because the converted don't
> have the same quality as the originals. You could 2308 JPEG images and
> compress them 30% with mozjpeg to achieve the same result.
> 

1. Converting from JPEG must be a valid methodology since that is what people need a new codec for, and what they will in fact use it for. That's why we're here. It's not to convert from DNGs or Amiga formats. (Well, converting from raw like DNGs could be a use case – it's what JPEG-XR was aimed at. They wanted it in cameras, as the original format of a photo. But most web assets are JPEG.)

2. Yes, you can lossily reduce JPEGs by 30%, which reduces their quality. I assume Akamai knows this, and that the gains they report from JPEG-XR and webp were realized while maintaining higher quality levels than the alternative of just chopping JPEGs by 30%. Otherwise, there would be no point in doing what they did, in converting to other formats. The whole point of new image formats is to achieve better quality/size tradeoffs, so normally when people report size savings they're holding quality constant.
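To make the size-savings framing concrete, the percentage reductions quoted in this exchange are simple arithmetic over byte counts (the sample sizes below are hypothetical, chosen to mirror the ~27% figure attributed to Akamai):

```python
def savings_pct(original_bytes, converted_bytes):
    """Percent saved by the conversion, quality held constant."""
    return 100.0 * (1.0 - converted_bytes / original_bytes)

# e.g. a 100 KB JPEG re-encoded as a 73 KB JPEG-XR of equal quality
saving = round(savings_pct(100_000, 73_000), 1)   # → 27.0
```

The disputed part, of course, is not the arithmetic but whether "equal quality" actually held in the conversion being measured.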
Just my $0.02 here:

1) Converting existing JPEGS isn't a good measure. First off, you're taking jpeg encoding artifacts and re-encoding them -- that will make things worse and you *will* lose quality in the process. No way around that. Any website that decides to take jpegs and recompress them will be losing quality -- quite likely a lot of it.
A real comparison of different lossy codecs should be taking raw or lossless compressed images and encoding both with different lossy codecs. I believe that is what has been done here in Mozilla's test.
2) "Quality" is relative. If you compress with different codecs you cannot assume that whatever arbitrary mathematical quality value is used actually compares between codecs. At most it compares different encodings with the same codec. If you measure file size by arbitrary quality then it will always favor whatever codec happens to have better compression at the same arbitrary calculation value, but that has *nothing* to do with visual/perceptual quality. The latter is what would be important here if you compare codecs.
(In reply to José Duarte from comment #41)
> That's a good example of ecologically valid research. Do what users will do
> – convert JPEGs. 

That's an excellent reason to drop jpeg XR right now :-).

A hint for the future outlook might be that MS hasn't tuned the encoder since renaming the format to HD Photo from Windows Media Photo. Quite unlike Google's regular updates to WebP – which features an excellent near-lossless mode, too. The high-bpp feature of XR isn't that important for browser usage, and alpha alone is not enough to justify changing to an incompatible format... so there's a reason why JPEG XT is now the focus of development: https://jpeg.org/jpegxt/index.html
Mass bug change to replace various 'parity' whiteboard flags with the new canonical keywords. (See bug 1443764 comment 13.)
Keywords: parity-ie
Whiteboard: [parity-ie]
Type: defect → task
Type: task → enhancement

Development of both IE and the original (Spartan) Edge has been discontinued. No other browsers (including Chromium-based Edge) support JPEG-XR. Chromium wontfix'ed their issue. I don't think it's worth supporting JPEG-XR now.

Of course Chromium wontfix'ed their issue because they had WebP instead, which for the web offers practically the same functionality (a lossy format with transparency, the main gap JPEG-XR was filling before WebP existed; the wide-gamut support of JPEG-XR has never seen much real desire in the web world despite hardware being able to support it; even simple ICC colour correction has not been a priority at all, as evidenced by Firefox's state).

I think at this point in time, looking at other formats that don't bring anything significant to the table compared to what browsers already offer now with gif/jpg/webp/png/apng is folly, no matter how much better the formats could be for certain use cases; that includes new players like PIK, HEIF, JPEG-XL, etc.

Without anything significant in an image format for the web, i.e. filling another gap that is widely desired by web designers/users (whether in clear perceptive difference or file size/bandwidth or some other feature that will be coveted), I don't think it's realistic to expect any browser to add further new image format support.

I therefore suggest this and other new image format addition bugs should all be wontfix'ed unless they really do meet the criteria of significant advantage for the web and its typical consumptive use on desktop&mobile.

Now there is a completely new and open C++ library for JPEG-XL here: https://gitlab.com/wg1/jpeg-xl , with Apache 2.0 License.

Important: this is a completely new standard, XL, not XR.

Added dev-doc-needed, as if this format ever gets implemented, documentation, BCD, and possibly other databases (hopefully by then we'll have a "file format compatibility" database) will need updating to recognize it.

Keywords: dev-doc-needed

Edgium doesn't seem to support JPEG-XR anymore, so I think the chances of JPEG-XR being broadly supported are basically zero.

Status: NEW → RESOLVED
Closed: 13 years ago → 4 years ago
Resolution: --- → WONTFIX