Closed
Bug 1196066
Opened 9 years ago
Closed 9 years ago
Make the ICO decoder work correctly when it receives data in very small chunks
Categories: Core :: Graphics: ImageLib, defect
Status: RESOLVED FIXED
Target Milestone: mozilla43
Tracking | Status
---|---
firefox43 | fixed
People: (Reporter: seth, Assigned: seth)
Whiteboard: [gfx-noted]
Attachments (4 files, 1 obsolete file)
7.84 KB, patch (tnikkel: review+)
22.65 KB, patch (tnikkel: review+)
985 bytes, patch (tnikkel: review+)
38.67 KB, patch (tnikkel: review+)
In bug 1196065, I could not enable the ICOMultiChunk test, because it currently fails. This means that the ICO decoder does not decode ICO files correctly when they're received in very small chunks.
We need to fix that.
Updated•9 years ago
Whiteboard: [gfx-noted]
Updated•9 years ago
Assignee: nobody → seth
Comment 1•9 years ago (Assignee)
OK, we need to go ahead and get this fixed; the issues in the ICO decoder are
probably responsible for bug 1155722 and they've made it nearly impossible to
write a correct patch for bug 1201796.
This part is a binary-only patch. In part 3, where I fix the issues in the ICO
decoder, we'll become somewhat stricter about which ICO files we accept. Right now,
we only really care about the offset specified for a resource in its ICO
directory entry. The ICO decoder moves to that offset and, if it's a PNG, feeds
the entire rest of the ICO file (including any number of other resources!) to
the contained decoder. We need to stop doing that. For one thing, supporting it
would make the code in part 3 substantially uglier, but more importantly, in bug
1201796 we'll need to decode multiple resources from the same ICO file in one
pass, and that means that we *can't* feed the entire file to the contained
decoder. So from now on, we're going to require that the *size* specified in the
ICO directory entry for a resource is also correct. This is web-compatible,
because Blink also has this behavior, and won't render these reftests without
the corrections to the ICO directory entry sizes that I've made in this patch.
Indeed, even ImageMagick treats these files as corrupt.
To sum it up, then, this patch alters two PNG-in-ICO reftests to make the sizes
specified in their ICO directory entries correct. In part 3, we'll require this,
or else we'll fail to decode these files. Our new behavior is the same as Blink
and ImageMagick.
Attachment #8660480 - Flags: review?(tnikkel)
Comment 2•9 years ago (Assignee)
So the fundamental problem in the ICO decoder is that streaming is not handled
well. Data may come in from the network split into chunks of arbitrary size, and
the decoder needs to produce the same results no matter how the data is split
up. Unfortunately, right now, the ICO decoder doesn't always produce the same
results.
I don't think this is something that we should be reimplementing for every
decoder in ImageLib. This is a concern that is separable, and in this patch I've
done just that. An image decoder is fundamentally a lexer, so what we need is a
framework for writing a lexer that takes care of the streaming problem so the
code that implements a specific image decoder doesn't have to worry about it.
This means that each image decoder will be simpler and more readable, that bugs
in stream handling can be fixed in one place, and that we can test stream
handling independent of any specific image decoder.
This patch adds a new class to ImageLib: StreamingLexer. An image decoder hands
off all of its input to a StreamingLexer and provides it with a lambda that
implements a state machine. StreamingLexer calls the lambda, providing a state
and the data that should be processed during that state. The lambda does the
processing and returns a new state and the number of bytes that need to be
processed during that state (via Transition::To()), or terminates processing
(via Transition::Terminate()). StreamingLexer implements buffering internally so
that all of the data needed for a given state is available when the lambda is
invoked, so the image decoder itself does not need to deal with buffering or
streaming at all.
One additional wrinkle is that sometimes it's useful to process the input
unbuffered. This is mostly helpful for skipping sections of the input (it
wouldn't make sense to buffer large amounts of input just to ignore it) and for
passing the input to another decoder that handles its own buffering. For these
reasons, Transition::ToUnbuffered() allows the image decoder to perform
unbuffered reads when it's required.
The details of the API are explained in StreamingLexer.h. Also included are a
suite of tests for StreamingLexer.
It's worth noting that the main method of StreamingLexer, Lex(), is a bit of a
bear. I've experimented with splitting it into smaller methods, but I wasn't
pleased with the results; there's so much mutation of temporary state that you
end up having to pass tons of references around, and the code doesn't feel any
cleaner.
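As a rough illustration of the pattern described above, here is a heavily simplified sketch, not the real StreamingLexer API (`MiniLexer`, its `Transition` type, and all signatures are invented for exposition). The key idea is that the lexer buffers incoming chunks and only invokes the state-machine lambda once all the bytes the current state asked for have arrived:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Simplified sketch of the buffering idea, NOT the actual StreamingLexer.
// The template parameter is the decoder's state enum; the lambda passed to
// Lex() implements the state machine.
template <typename State>
class MiniLexer {
 public:
  struct Transition {
    bool terminate;
    State nextState;
    size_t bytesNeeded;
    // Move to state `s`, which needs `n` bytes of input to process.
    static Transition To(State s, size_t n) { return {false, s, n}; }
    // Stop lexing entirely.
    static Transition Terminate() { return {true, State(), 0}; }
  };

  MiniLexer(State start, size_t bytesNeeded)
      : mState(start), mBytesNeeded(bytesNeeded) {}

  // Accept an arbitrarily sized chunk. The lambda only ever sees a complete
  // run of exactly the bytes its current state asked for, so its behavior
  // cannot depend on how the network split up the data.
  template <typename Func>
  void Lex(const uint8_t* data, size_t len, Func f) {
    mBuffer.insert(mBuffer.end(), data, data + len);
    while (!mDone && mBuffer.size() >= mBytesNeeded) {
      Transition t = f(mState, mBuffer.data(), mBytesNeeded);
      mBuffer.erase(mBuffer.begin(), mBuffer.begin() + mBytesNeeded);
      if (t.terminate) {
        mDone = true;
      } else {
        mState = t.nextState;
        mBytesNeeded = t.bytesNeeded;
      }
    }
  }

  bool Done() const { return mDone; }

 private:
  State mState;
  size_t mBytesNeeded;
  bool mDone = false;
  std::vector<uint8_t> mBuffer;
};
```

Feeding the same stream byte-by-byte or all at once yields the identical sequence of lambda invocations, which is exactly the invariant the ICO decoder was violating. (The real class also supports the unbuffered reads described above; this sketch omits them.)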
Attachment #8660482 - Flags: review?(tnikkel)
Comment 3•9 years ago (Assignee)
This part rewrites nsICODecoder to use StreamingLexer. Unfortunately this
requires major changes to the entire file, but I hope you'll agree that the
result is cleaner and easier to understand.
Behavior is identical except that, as mentioned in the comments for part 1, we
require that each resource's ICO directory entry has the correct size, and we'll
only send that amount of data to the contained decoder. Well, there's also one
other difference: this version works correctly. =)
Attachment #8660484 - Flags: review?(tnikkel)
Comment 4•9 years ago (Assignee)
After part 3, nsICODecoder can decode an ICO no matter how the data is split
into chunks, so we can enable the ICOMultiChunk test.
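The property such a multi-chunk test checks can be stated generically: a streaming decoder must produce identical output no matter how its input is partitioned into chunks. A toy harness sketching that idea (illustrative only; `SumDecoder` and `DecodeInChunks` are invented, not ImageLib code) might look like:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// Toy stand-in for a streaming decoder: its output must not depend on how
// the input bytes were split across Write() calls.
struct SumDecoder {
  uint32_t total = 0;
  void Write(const uint8_t* data, size_t len) {
    for (size_t i = 0; i < len; ++i) total += data[i];
  }
};

// Feed `data` to a fresh decoder in chunks of at most `chunkSize` bytes.
uint32_t DecodeInChunks(const std::vector<uint8_t>& data, size_t chunkSize) {
  SumDecoder decoder;
  for (size_t i = 0; i < data.size(); i += chunkSize) {
    decoder.Write(&data[i], std::min(chunkSize, data.size() - i));
  }
  return decoder.total;
}
```

A real test would run the actual decoder with several chunk sizes, including one byte at a time, and assert the decoded results match.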
Attachment #8660485 - Flags: review?(tnikkel)
Comment 5•9 years ago (Assignee)
Updated•9 years ago
Attachment #8660480 - Flags: review?(tnikkel) → review+
Comment 6•9 years ago (Assignee)
Doh, tests didn't run on most platforms due to a couple of warnings causing the
build to fail. This updated version of part 3 fixes those warnings.
Attachment #8660522 - Flags: review?(tnikkel)
Updated•9 years ago
Attachment #8660484 - Attachment is obsolete: true
Attachment #8660484 - Flags: review?(tnikkel)
Comment 7•9 years ago (Assignee)
Updated•9 years ago
Attachment #8660485 - Flags: review?(tnikkel) → review+
Updated•9 years ago
Attachment #8660482 - Flags: review?(tnikkel) → review+
Updated•9 years ago
Attachment #8660522 - Flags: review?(tnikkel) → review+
Comment 8•9 years ago (Assignee)
Comment 9•9 years ago (Assignee)
Thanks for the reviews on this. I know part 3 was a beast!
Comment 10•9 years ago
Backed out along with the other bug from that push in https://hg.mozilla.org/integration/mozilla-inbound/rev/9244da13f5e8 for mulet gij28 bustage: https://treeherder.mozilla.org/logviewer.html#?job_id=14348078&repo=mozilla-inbound
Flags: needinfo?(seth)
Comment 11•9 years ago (Assignee)
Comment 12•9 years ago (Assignee)
(Whoops, forgot to clear needinfo on this. This bug wasn't responsible for the gij28 bustage.)
Flags: needinfo?(seth)
Comment 13•9 years ago
https://hg.mozilla.org/mozilla-central/rev/2a2dbca2c1b8
https://hg.mozilla.org/mozilla-central/rev/c5b9434adaba
https://hg.mozilla.org/mozilla-central/rev/854cd37aee76
https://hg.mozilla.org/mozilla-central/rev/88bfad4990d8
Status: NEW → RESOLVED
Closed: 9 years ago
status-firefox43: --- → fixed
Resolution: --- → FIXED
Target Milestone: --- → mozilla43
Comment 14•9 years ago
I filed Bug 1229720 about a problem with ICO files that, for me, started in FF 43; maybe it's related.