Closed
Bug 915001
Opened 12 years ago
Closed 12 years ago
[B2G][Helix][gallery][yangshiqi] cannot save the edited picture (>1M).
Categories
(Firefox OS Graveyard :: Gaia::Gallery, defect, P1)
Firefox OS Graveyard
Gaia::Gallery
Tracking
(blocking-b2g:hd+, b2g-v1.1hd fixed, b2g-v1.2 fixed)
RESOLVED
FIXED
blocking-b2g: hd+
People
(Reporter: lecky.wanglei, Assigned: djf)
Details
Attachments
(6 files, 1 obsolete file)
User Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; aff-kingsoft-ciba; .NET4.0C; .NET4.0E)
Steps to reproduce:
【Detail Description*】: [B2G][Helix][gallery][yangshiqi] cannot save the edited picture (>1M).
1. Enter the Gallery app
2. Choose an effect and edit the picture
3. Press the save icon at the top right of the screen
【Expect Result*】: The picture is saved
【Real Result*】: The picture is not saved
【Test Count*】:5
【Found Count*】:5
【Gaia commit ID*】:c0ea0a4943dc8d3751b07f5b5c5d3abe06364a14
【Gecko commit ID*】: 170f9e477571127cd40997fa2abe262ed43f0e4d
【Log*】:20130911152902382.rar
【Network environment】:
【Resume operation】:
【Carrier】:
Severity: normal → critical
blocking-b2g: --- → hd?
Priority: -- → P1
Comment 1•12 years ago
Dominic, can you check this gallery bug, or point to someone who can?
Flags: needinfo?(wchang) → needinfo?(dkuo)
Updated•12 years ago
Severity: critical → normal
blocking-b2g: hd? → ---
Comment 2•12 years ago
I used unagi and saw some errors about WebGL, here are the logs:
[JavaScript Warning: "Error: WebGL: GL error 0x505 occurred during OpenGL context initialization, before WebGL initialization!" {file: "app://gallery.gaiamobile.org/js/ImageEditor.js" line: 1151}]
[JavaScript Warning: "Error: WebGL: Error during OpenGL initialization" {file: "app://gallery.gaiamobile.org/js/ImageEditor.js" line: 1151}]
It looks like the attached image triggers errors in WebGL, and we need some help from Gecko developers.
Flags: needinfo?(dkuo)
Hi BEATRIZ,
Could you accept this issue from Telefonica on v1.1HD?
Jerry, please help check the context initialization failure.
Flags: needinfo?(cku) → needinfo?(hshih)
Comment 6•12 years ago
I assume this is not happening in images captured by the camera but only on images imported from other sources. If that is the case I don't think we should block on this bug.
Comment 7•12 years ago
I have reproduced this problem on my Helix device.
Checking...
Flags: needinfo?(hshih)
Comment 8•12 years ago
(In reply to Daniel Coloma:dcoloma from comment #6)
> I assume this is not happening in images captured by the camera but only on
> images imported from other sources. If that is the case I don't think we
> should block on this bug.
If the image is captured by the camera, it does not produce this problem on my Helix device,
since the image size is smaller than 1M, according to lecky's observation.
Comment 9•12 years ago
The Gallery app uses WebGL to accelerate the picture effect editing.
The app fails to create a GL context at 2100*2100 for the 2100*2100 image in attachment 802828 [details],
but succeeds with smaller sizes.
There might not be enough memory.
I will check the GL context creation flow.
Comment 10•12 years ago
GL error code:
http://www.opengl.org/wiki/OpenGL_Error
GL error 0x505 => GL_OUT_OF_MEMORY
Comment 11•12 years ago
We will get GL error 0x505 after calling fFramebufferRenderbuffer() at line 1541.
https://git.mozilla.org/?p=releases/gecko.git;a=blob;f=gfx/gl/GLContext.cpp;h=bb6d95f06e38cbdae22f215325c8f164d0abc5f7;hb=v1.1.0hd#l1541
We also get warning from driver.
W/Adreno200-ES20(16024): <qgl2DrvAPI_glFramebufferRenderbuffer:2569>: GL_OUT_OF_MEMORY
According to the GLES 2.0 spec, FramebufferRenderbuffer() doesn't return the GL_OUT_OF_MEMORY error code.
It's weird.
Comment 12•12 years ago
CJ,
This error message comes from the Adreno GPU driver.
W/Adreno200-ES20(16024): <qgl2DrvAPI_glFramebufferRenderbuffer:2569>: GL_OUT_OF_MEMORY
Do we have a QCOM contact for GPU driver problems?
Flags: needinfo?(cku)
Comment 13•12 years ago
Diego, are you the right person for this?
Flags: needinfo?(cku) → needinfo?(dwilson)
Comment 14•12 years ago
I'm working with our graphics guys.
Is there a limit on how big a picture we can edit? If not, should there be one?
Flags: needinfo?(dwilson)
Comment 15•12 years ago
(In reply to Diego Wilson [:diego] from comment #14)
> I'm working with our graphics guys.
>
> Is there a limit on how big a picture we can edit? If not, should there be
> one?
I queried the Adreno 200 GPU's maximum supported texture size, and it is 4096*4096.
So the 2100*2100 texture in the test case should be fine.
Certainly, we need to check whether we have enough memory to create a 2100*2100 texture.
But I have a question about the glFramebufferRenderbuffer function.
We created the texture object earlier without any GL error code.
What does it mean when glFramebufferRenderbuffer returns GL_OUT_OF_MEMORY?
The message:
W/Adreno200-ES20(16024): <qgl2DrvAPI_glFramebufferRenderbuffer:2569>: GL_OUT_OF_MEMORY
Comment 16•12 years ago
Here's the response from the graphics team:
The render target being allocated in this case is at the very edge of the
capabilities of the hardware. The rendering fails because it is unable to
find a valid bin configuration. Specifically, it can't find a configuration
that doesn't violate the limit of five bin ID groups. It is a consequence of
the bin configuration algorithm that the allocation of a 2560x1920 render
target succeeds but a 1920x2560 render target fails. There are internal
padding and alignment requirements that make these two configurations
substantively different.
Note that although we support render target sizes up to 4096x4096 in
dimension, that is no guarantee that every possible configuration of bit depth
and z-buffer can be accommodated at that maximum size. It only means that
a size of up to 4096x4096 is not inherently invalid. There is no way to
directly calculate the maximum image size that the driver can support.
Comment 17•12 years ago
The error also happens on v1.2 hamachi when patches in Bug 905882 are applied.
Comment 18•12 years ago
(In reply to Diego Wilson [:diego] from comment #16)
> Note that although we support render target sizes up to 4096x4096 in
> dimension, that is no guarantee that every possible configuration of bit
> depth
> and z-buffer can be accommodated at that maximum size. It only means that
> a size of up to 4096x4096 is not inherently invalid. There is no way to
> directly calculate the maximum image size that the driver can support.
If we can't get the maximum render target size that the driver can support,
how can we limit the maximum size of the images that we can edit?
Comment 19•12 years ago
(In reply to Diego Wilson [:diego] from comment #16)
> Note that although we support render target sizes up to 4096x4096 in
> dimension, that is no guarantee that every possible configuration of bit
> depth
> and z-buffer can be accommodated at that maximum size. It only means that
> a size of up to 4096x4096 is not inherently invalid. There is no way to
> directly calculate the maximum image size that the driver can support.
I have another possible implementation of the gallery effect editing.
Since we might not be able to create a canvas as big as the image, and we don't need to show the full-size processed image on screen, we can create a small canvas (e.g. 16x16) just to provide the WebGL context, then create a full-size WebGL framebuffer object (FBO) as the render target for processing. Finally, we can read the processed image back from the FBO.
The only problem is that I can't save the processed raw RGBA image data in JPEG format.
The encoder is buried inside the canvas.toBlob() function; we can't call the encoder directly.
Could we have a new set of APIs for image data encoding/decoding?
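In code, the idea above would look roughly like this. This is a minimal sketch with hypothetical names (it is not the actual ImageEditor.js code), and the JPEG encoding of the returned pixels is exactly the part that is still missing:

// A tiny canvas only provides the WebGL context; the full-size render target is an offscreen FBO.
var tinyCanvas = document.createElement('canvas');
tinyCanvas.width = tinyCanvas.height = 16;
var gl = tinyCanvas.getContext('webgl');

function processOffscreen(gl, fullWidth, fullHeight, drawEffect) {
  // Full-size texture used as the color attachment of the offscreen framebuffer.
  var texture = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, texture);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, fullWidth, fullHeight, 0,
                gl.RGBA, gl.UNSIGNED_BYTE, null);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);

  var fbo = gl.createFramebuffer();
  gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
  gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                          gl.TEXTURE_2D, texture, 0);
  gl.viewport(0, 0, fullWidth, fullHeight);

  drawEffect(gl); // caller issues the usual shader setup and drawArrays calls

  // Read the processed pixels back as raw RGBA; encoding them to JPEG is the open problem.
  var pixels = new Uint8Array(fullWidth * fullHeight * 4);
  gl.readPixels(0, 0, fullWidth, fullHeight, gl.RGBA, gl.UNSIGNED_BYTE, pixels);
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);
  return pixels;
}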
Flags: needinfo?(dflanagan)
Comment 20•12 years ago
(In reply to Jerry Shih[:jerry] from comment #18)
> If we can't get the maximum size of render target that the driver can
> support,
> how can we limit the maximun size of image that we can edit?
Yes, that is unfortunate. The image resolution we're working with here is right at the edge of what the chipset can handle. Can the application be on the lookout for GL_OUT_OF_MEMORY when calling glFramebufferRenderbuffer() and try to fall back to a CPU solution?
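A minimal sketch of that check, assuming a CPU fallback path exists (the renderbuffer variable and fallbackToCpuProcessing() are hypothetical, not code from this bug's patches):

gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                           gl.RENDERBUFFER, renderbuffer);
// 0x0505 is GL_OUT_OF_MEMORY; also verify the framebuffer is actually usable.
if (gl.getError() === gl.OUT_OF_MEMORY ||
    gl.checkFramebufferStatus(gl.FRAMEBUFFER) !== gl.FRAMEBUFFER_COMPLETE) {
  fallbackToCpuProcessing(); // e.g. redo the effect with a 2D canvas / plain JS
}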
Assignee
Comment 21•12 years ago
Jerry,
Thanks for bringing this bug to my attention. I wrote the current WebGL code that does image processing, but am very inexperienced with WebGL, so I'm not sure I understand what you are proposing or what you are asking me.
I'm guessing that you're asking whether the Gallery app can break large images up into tiles and process those tiles separately to avoid out-of-memory issues.
Tom: what do you think about this? Can we process large images in chunks? If we tried, I think we'd stop doing the borders in webgl.
Flags: needinfo?(dflanagan) → needinfo?(therold)
Comment 22•12 years ago
@David: I think what Jerry is suggesting is to render the effects to a framebuffer, i.e. an offscreen texture, and then save that. However, it seems like toBlob() can't save/encode offscreen textures.
@Jerry/Diego: Are you saying that Canvas.toBlob() only returns the content of the front buffer (i.e. the buffer/image that we actually see on the screen)? Would doing a two-pass rendering help? 1) Render the image effect to a framebuffer and 2) copy the framebuffer to the screen?
@all: I am not sure splitting the image is a good idea. If I am not mistaken, passing multiple image fragments to the shader wouldn't help us: we would have no way to associate the fragments with their texture coordinates, i.e. all texture fragments would be drawn on top of each other? I don't know enough about texture coordinate mapping to make a call.
Flags: needinfo?(therold)
Comment 23•12 years ago
What one could do, instead of having just one surface (i.e. rectangle) to draw on, is use four of them (so the image has to be split into four parts as well) and render each texture fragment to its own geometry. I have some reservations about this method though:
1) Splitting an image into four parts probably requires a 2D canvas? We will probably run into memory/speed issues again.
2) There may be rendering artifacts along the edges where the rectangles meet. This can be caused by floating-point inaccuracies, or probably a number of other things as well.
Comment 24•12 years ago
Sorry for the confusion.
This is my implementation.
1. As Tom said, I render the effects to an offscreen framebuffer texture.
2. Then I save the offscreen texture to the device via MediaDB.
Canvas.toBlob() only encodes the content of the front buffer.
So my offscreen framebuffer texture can't be encoded to JPEG.
If I want to encode to JPEG, I have to create a full-image-size 2D canvas (we can't use a 3D canvas, or we will hit the same problem), redraw this offscreen buffer into the 2D canvas, and then use toBlob() to encode.
If we had an encoder API, we could do that without using canvas.toBlob().
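That readback-and-encode path would look roughly like this (a sketch: width, height, and pixels stand for the dimensions and the Uint8Array read back from the FBO, and saveToMediaDB() stands in for the real save code):

var canvas2d = document.createElement('canvas');
canvas2d.width = width;
canvas2d.height = height;
var ctx = canvas2d.getContext('2d');
var imageData = ctx.createImageData(width, height);
imageData.data.set(pixels);          // copy the raw RGBA bytes into the ImageData
ctx.putImageData(imageData, 0, 0);
canvas2d.toBlob(function(blob) {     // JPEG encoding happens inside toBlob()
  saveToMediaDB(blob);
}, 'image/jpeg');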
(In reply to Tom Herold from comment #22)
> @David: I think what Jerry is suggesting, is to render the effects to a
> FrameBuffer, i.e. an offscreen texture and then safe that. However, it seems
> like toBlob() can't save/encode offscreen textures.
Yes, toBlob() only handles the canvas's front buffer.
> @Jerry/Diego: Are you saying that Canvas.toBlob() only returns the content
> of the front buffer (i.e. the buffer/image that we actually see on the
> screen?) Would doing a two-pass rendering help? 1) Render image effect to
> frame buffer and 2) copy frame buffer to screen?
We can copy the processed offscreen texture to a 2D canvas,
but a new encoder API might be better.
> @all: I am not sure if splitting the image is a good idea. If I am not
> mistaken, passing multiple image fragments to the shader wouldn't help us.
> We would not have way to associate certain fragements with their texture
> coordinate. I.e. all texture fragments would drawn on top of each other? I
> don't know enough about texture coordinate mapping to make a call.
I don't suggest splitting the image; we might get artifacts at the split edges.
What's the maximum image size the Gallery app should support?
If people can see their images in Gallery, I think Gallery should handle those images without any problem,
but we still need a limit on image size.
Flags: needinfo?(therold)
Comment 25•12 years ago
I think right now the camera is limited to 2MP images because of memory limitations. If I recall correctly, @djf added a compile-time option to set/change this limit in bug 900399.
Flags: needinfo?(therold)
Comment 26•12 years ago
I will upload my simple prototype using an FBO later.
I have some problems with the JavaScript. :(
Should we consider a new encode/decode API instead of using canvas.toBlob()?
Flags: needinfo?(dflanagan)
Assignee
Comment 27•12 years ago
(In reply to Jerry Shih[:jerry] from comment #26)
> I will upload my simple prototype using fbo later.
> I have some problem with javascript. :(
If you can do the webgl, I'm happy to help with the JavaScript! :-)
>
> Do we think about the new encode/decode api instead of using canvas.toBlob()?
Is there any proposal already made for a new API? If so, I don't know anything about it. What did you have in mind?
To do this right, we'd need to discuss with the WebAPI people, and then discuss on the whatwg mailing lists, etc. I'd love to have a JPEG encoder API that allowed EXIF metadata to be specified, and have publicly proposed that we somehow add that to the toBlob() function. But I am not actively working on anything like this.
If we can solve this problem with toBlob(), I think we should do that. If the pixel data that we get back from WebGL is in the right format, we can just pass it to the 2d canvas putImageData() method and then call toBlob(), right? Does that mean that we'll be using 2 times the memory, though?
If we can't get something like this to work, we might have to give up on webgl and rewrite all of our image editing effects in JS using hand-written asm.js I don't want to have to maintain both GPU (webgl) and CPU (asm.js) versions of all the editing code.
Flags: needinfo?(dflanagan)
Updated•12 years ago
Assignee: nobody → hshih
Comment 28•12 years ago
1. create small canvas 3d for webgl context
2. draw image to fbo
3. copy fbo data to canvas 2d
4. save canvas 2d to mediadb
The test case in attachment 802828 [details] is finally ok.
The readback seems to cost a lot. :(
David, could you take a look at this JS code?
I am a newbie at JS.
Attachment #816331 - Flags: feedback?(dflanagan)
Assignee
Comment 29•12 years ago
Comment on attachment 816331 [details] [diff] [review]
0001-WIP-use-fbo-for-image-processing.patch
Review of attachment 816331 [details] [diff] [review]:
-----------------------------------------------------------------
Jerry,
I don't really understand webgl, so I can't give this a really thorough review. My two main comments are:
1) I'm not sure I understand how this saves any memory. Is there a distinction between graphics memory and regular memory? I can see how it saves graphics memory, but it looks like it doubles the amount of regular memory needed. Won't we just be trading one kind of OOM for another kind?
2) I worry that calling readPixels() will block the main thread for a noticeable amount of time. Is there any way to make that async?
Note that I haven't actually tried running the patch. Does it work well?
::: apps/gallery/js/ImageEditor.js
@@ +583,5 @@
> ImageEditor.prototype.getFullSizeBlob = function(type, callback) {
> + console.log('bignose gallery call getFullSizeBlob');
> +
> + // RAW RGBA image buffer for processed image
> + var imageResult = new Uint8Array(this.source.w * this.source.w * 4);
I don't understand how we save memory with this change, if we have to allocate this big array here. Doesn't this take up about the same size as the canvas does?
@@ +599,4 @@
> this.source.x, this.source.y, this.source.w, this.source.h,
> 0, 0, this.source.w, this.source.h,
> + this.edits,
> + this.source.w, this.source.h, imageResult);
Do you really need to pass this.source.w and this.source.h again? Is there a use case for changing the image size here?
Also, and more significantly, is there any way to make this async? If I'm understanding this correctly, this call must block while the image processing is done, and that seems really bad.
I had assumed that the old processor.draw() did not block and the call to canvas.toBlob() blocked until webgl processing was done.
But maybe I just don't understand webgl...
@@ +613,5 @@
> + console.log('bignose gallery create canvas 2d context');
> + var context2D = canvas2D.getContext('2d');
> + var image = context2D.createImageData(canvas2D.width, canvas2D.height);
> +
> + image.data.set(imageResult);
Can you create the canvas and the image data first and then pass image.data directly to drawOffscreen()?
As it stands, it looks like you've got two copies of the data here.
@@ +1309,5 @@
> + dx, dy, dw, dh,
> + options,
> + offscreenWidth,
> + offscreenHeight,
> + resultBuffer)
Can any of the code in this function be shared with code in draw()?
@@ +1332,5 @@
> + gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
> + gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
> + gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
> + gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, image.width, image.height,
> + 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
nit: indentation is off.
@@ +1338,5 @@
> + console.log('bignose gallery draw use fbo:create fbo');
> + var FBO = gl.createFramebuffer();
> + gl.bindFramebuffer(gl.FRAMEBUFFER, FBO);
> + gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D,
> + FBOTexture, 0);
indentation
@@ +1387,5 @@
> + gl.drawArrays(gl.TRIANGLES, 0, 6);
> +
> + // Read the result data from gl buffer
> + console.log('bignose gallery read gl fbo to array start');
> + gl.readPixels(0, 0, sw, sh, gl.RGBA, gl.UNSIGNED_BYTE, resultBuffer);
This is the part where I'm worried about this being synchronous. Does webgl allow this to be async in any way?
Also, note that you're using sw and sh here. Shouldn't this be offscreenWidth and offscreenHeight, or am I misunderstanding?
Attachment #816331 - Flags: feedback?(dflanagan)
Comment 30•12 years ago
Jerry,
Can you confirm if the patch is good for 5mpixel images? This is what the device camera should support.
Please ni me with the answer so I get this back on my list, we need to consider if this should be hd+ for this reason.
Flags: needinfo?(hshih)
Comment 31•12 years ago
(In reply to David Flanagan [:djf] from comment #29)
> 1) I'm not sure I understand how this saves any memory. Is there a
> distinction between graphics memory and regular memory? I can see how it
> saves graphics memory, but it looks like it doubles the amount of regular
> memory needed. Won't we just be trading one kind of OOM for another kind?
According to Diego's comment 16, we might not be hitting an OOM problem; instead, it might be a driver issue.
The original code creates a 3D canvas context as large as the image.
Context creation might fail because the driver can't support such a large render target (including the z-buffer).
> 2) I worry that calling readPixels() will block the main thread for a
> noticeable amount of time. Is there any way to make that async?
I notice that the toBlob() function also blocks the JS thread.
When I press the save button, both the original code and my implementation block for a while, especially for large images. Can JS run toBlob() on another thread?
Comment 32•12 years ago
I disabled the WebGL depth and stencil attributes.
Everything works fine. :)
In B2G, the 3D canvas also uses an FBO internally.
We might have a problem creating a large depth or stencil buffer.
Do we have a try server or a QA test flow for Gaia code?
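For reference, requesting a context without those buffers looks like this (a sketch using the standard WebGL context attributes; I'm assuming this mirrors what the attached patch does):

// Ask for a WebGL context with no depth or stencil buffer, so the driver
// does not have to allocate large z/stencil storage for the big render target.
var gl = canvas.getContext('webgl', { depth: false, stencil: false }) ||
         canvas.getContext('experimental-webgl', { depth: false, stencil: false });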
Attachment #816331 - Attachment is obsolete: true
Attachment #817004 - Flags: feedback?(dflanagan)
Comment 33•12 years ago
Performance test for 1024*1024 image
E/GeckoConsole( 3538): Content JS LOG at app://gallery.gaiamobile.org/js/ImageEditor.js:613 in anonymous: bignose gallery: draw and toBlob process time=1321
E/GeckoConsole( 3538): Content JS LOG at app://gallery.gaiamobile.org/js/ImageEditor.js:604 in anonymous: bignose gallery:process time=10
Comment 34•12 years ago
This is my 5-megapixel (2592*1944) test image from the internet.
Gallery can handle this image now.
I have a question about the 5-megapixel figure:
2592*1944 = 5038848
5038848/1024/1024 = 4.805
Is this size enough?
Flags: needinfo?(hshih) → needinfo?(wchang)
Comment 35•12 years ago
HD+'ing this bug as Helix comes with a 5-megapixel camera and we need to at least support images taken with it.
Feel free to let me know if I am mistaken here.
(In reply to Daniel Coloma:dcoloma from comment #6)
> I assume this is not happening in images captured by the camera but only on
> images imported from other sources. If that is the case I don't think we
> should block on this bug.
Daniel, I think this comment was only valid prior to enabling the camera's 5MP capacity. Previously we had the camera restricted to 2 megapixels.
blocking-b2g: --- → hd+
Flags: needinfo?(wchang)
Comment 36•12 years ago
(In reply to Wayne Chang [:wchang] from comment #35)
> HD+ing this bug as helix comes with 5mpixel camera and we need to at least
> support images taken on this.
> Feel free to let me know if i am mistaken here.
>
> (In reply to Daniel Coloma:dcoloma from comment #6)
> > I assume this is not happening in images captured by the camera but only on
> > images imported from other sources. If that is the case I don't think we
> > should block on this bug.
>
> Daniel, I think this comment is only valid prior to enabling the camera's
> 5mp capacity. Previously we had the camera restricted at 2mpixel.
I don't see how we can do this at this point. We'll need to restrict the camera down to 2mp or disable the image editing.
Comment 37•12 years ago
(In reply to Milan Sreckovic [:milan] from comment #36)
> > Daniel, I think this comment is only valid prior to enabling the camera's
> > 5mp capacity. Previously we had the camera restricted at 2mpixel.
>
> I don't see how we can do this at this point. We'll need to restrict the
> camera down to 2mp or disable the image editing.
I agree, if this change takes significant work it feels out of scope for 1.1. We should re-target fixing that in 1.2/1.3 in that case.
Comment 38•12 years ago
All the edits that ImageEditor.js can make seem to only affect one pixel at a time. It should be straightforward to tile the edits -- the code should be able to operate on a 512x512 (for example) tile of the source image at a time, pasting the results into a destination 2D canvas (ideally non-GL/Skia accelerated, no reason!) which finally has toBlob called on it to compress back to jpeg.
This will both make it more responsive (since we can go back to main loop in between tiles) and able to handle any size source image, assuming we can keep at least the original image (in a texture, not in the <img> tag -- ~20MB assuming 2600x2000 8bpp RGBX) + 1 512x512 tile (1MB*3 buffers for WebGL -- again assuming 8bpp RGBX) + readPixels buffer (1MB) and the buffer destination canvas (20MB again). So we're looking at around 45MB total, which should be fine memory wise. (The webgl context, textures, etc. can all go away before we do the jpeg compression, leaving us with just the 20MB 2D canvas.)
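A rough sketch of that tiling loop (hypothetical names; edge tiles that are not full size are elided for brevity, and a real implementation would yield to the event loop between tiles rather than looping synchronously):

var TILE = 512;
var dest = document.createElement('canvas');     // plain 2D destination canvas
dest.width = srcWidth;
dest.height = srcHeight;
var dctx = dest.getContext('2d');
var tile = dctx.createImageData(TILE, TILE);
var pixels = new Uint8Array(TILE * TILE * 4);    // readPixels buffer, reused for every tile

for (var ty = 0; ty < srcHeight; ty += TILE) {
  for (var tx = 0; tx < srcWidth; tx += TILE) {
    drawEditedTile(gl, tx, ty, TILE, TILE);      // run the effect shader for this tile only
    gl.readPixels(0, 0, TILE, TILE, gl.RGBA, gl.UNSIGNED_BYTE, pixels);
    tile.data.set(pixels);
    dctx.putImageData(tile, tx, ty);             // paste the result at the tile's position
  }
}
dest.toBlob(function(blob) { saveBlob(blob); }, 'image/jpeg'); // compress back to JPEG once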
Assignee
Comment 39•12 years ago
Vlad,
Thanks for weighing in here. I suggested tiling in comment 21, but didn't pursue the idea. If we add blurring or sharpening effects in the future then that might not work, but you're right that for now it should.
Do you think we need to do an explicit readPixels() out of the 3d canvas, or can we just do a copyImage from the small 3d tile canvas into the 2d canvas? That would be simpler to code, if the memory implications are the same.
Borders become a little harder to handle in the tiles case. But the current border options suck anyway, and UX is considering removing them in 1.3. If we can just disable borders in 1.1hd, then I suspect that a tiled implementation is pretty straightforward.
Comment 40•12 years ago
Yup, you can just drawImage straight from the webgl canvas to the 2d one; I forgot about that. It'll do a readpixels under the hood and will be more efficient anyway.
For borders -- not a lot more work to do borders, you just have to do some math to get them into the right place on the tiles, or just draw them on the 2D canvas after the effects.
For effects that sample beyond one pixel -- just need to include enough of a border region in the tiles to cover the filter size and to only copy the relevant region to the destination. So should be doable even in the future as this evolves.
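Concretely, pasting a tile with drawImage as suggested above needs no explicit readPixels() or ImageData (a sketch; glCanvas is the small WebGL tile canvas and dctx is the full-size 2D destination context, both hypothetical names):

drawEditedTile(gl, x, y, tileWidth, tileHeight);   // render this tile into glCanvas
dctx.drawImage(glCanvas,
               0, 0, tileWidth, tileHeight,        // source rect in the WebGL canvas
               x, y, tileWidth, tileHeight);       // destination rect in the 2D canvas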
Status: UNCONFIRMED → NEW
Ever confirmed: true
Comment 41•12 years ago
(In reply to Alex Keybl [:akeybl] from comment #37)
> I agree, if this change takes significant work it feels out of scope for
> 1.1. We should re-target fixing that in 1.2/1.3 in that case.
Agreed, if this work is going to be significant it should not be targeted for v1.1hd, but what would our remedy be here?
Can we downsize the image as it gets opened by the editor?
Restricting the camera capture size is likely not a viable option here.
vladv, djf,
Can you estimate the amount of work/landing effort required here with your suggestion in comment 38?
flagging product for input, please see comments 35 - 37
Flags: needinfo?(vladimir)
Flags: needinfo?(ffos-product)
Flags: needinfo?(dflanagan)
Comment 42•12 years ago
Not I, I don't know the code enough here -- I'll let djf decide how easy he wants it to sound ;)
Flags: needinfo?(vladimir)
Comment 43•12 years ago
(In reply to David Flanagan [:djf] from comment #39)
> Vlad,
>
> Thanks for weighing in here. I suggested tiling in comment 21, but didn't
> pursue the idea. If we add blurring or sharpening effects in the future then
> that might not work, but you're right that for now it should.
>
> Do you think we need to do an explict readPixels() out of the 3d canvas, or
> can we just do a copyImage from the small 3d tile canvas into the 2d canvas?
> That would be simpler to code, if the memory implications are the same.
>
> Borders become a little harder to handle in the tiles case. But the current
> border options suck anyway, and UX is considering removing them in 1.3. If
> we can just disable borders in 1.1hd, then I suspect that a tiled
> implementation is pretty straightforward.
From a product perspective, this is a reasonable tradeoff for users to take full advantage of the camera sensor on the Helix device. Let's disable borders for 1.1HD if necessary.
However, it sounds like we may be able to still do borders per comment 40.
Updated•12 years ago
Flags: needinfo?(ffos-product)
Reporter
Comment 44•12 years ago
Hi Wayne,
As we discussed in the weekly meeting, this issue is a blocking case for HD. We have modified the severity. Thanks!
Comment 45•12 years ago
David,
In attachment 817004 [details] [diff] [review], I only disable the WebGL depth and stencil tests.
The WebGL processing code doesn't seem to use depth or stencil testing.
It can handle a 5-megapixel image.
Assignee
Comment 46•12 years ago
I have a work-in-progress patch based on Vlad's suggestion above at https://github.com/mozilla-b2g/gaia/pull/12905
It is not working reliably for me, but I'm testing it on a Unagi, and graphics on the unagi are unstable these days.
Jerry, does this work for you on the helix? Can you see anything wrong with the code?
Flags: needinfo?(dflanagan) → needinfo?(hshih)
Assignee
Comment 47•12 years ago
Jerry: if you can get my code to work, feel free to fix it up and find someone to review it!
Comment 48•12 years ago
We now have more devices with a 5MP camera running v1-train that are suffering from this issue. Changing the flag nomination to leo? to cover them as well.
blocking-b2g: hd+ → leo?
Comment 49•12 years ago
Hi David,
Just looking for clarification here: does what Jerry has in attachment 817004 [details] [diff] [review] help here?
He showed me this working with a couple of 5MP images, including some taken with the camera; the saving performance is not optimal, but it seems to work.
Jerry,
Can you otherwise also look at David's WIP and see if you can follow up?
Thanks!
Flags: needinfo?(dflanagan)
Assignee
Comment 50•12 years ago
Wayne,
The only working device I've got right now is a unagi, and it is so flaky that I can't meaningfully test anything. If Jerry's patch fixes the bug, then it is better than nothing. The concern I have with it is that it appears to increase the overall memory use, and I worry that it will lead to more OOMs. It works for 5MP images, but we don't know, for example, that it would work for 8 or 10MP images.
The approach Vlad suggested (and that I've attempted to implement) reduces memory use, should work with much larger images, and, by processing the image in smaller batches, should also improve the responsiveness of the phone while an image is being saved.
Flags: needinfo?(dflanagan)
Comment 51•12 years ago
Thanks David. I guess that means if we cannot get Vlad's proposal implemented in time, we can fall back to Jerry's patch and perhaps restrict the image size to something meaningful.
I'll follow up with Jerry today to see how he's getting on with your WIP.
Reporter
Comment 52•12 years ago
I agree with wayne.
I will test this patch on my device and see its performance.
Comment 53•12 years ago
To clarify, who's working on the patch based on Vlad's suggestion, which sounds like the right path forward here? Is that you, djf? Thanks.
Comment 54•12 years ago
(In reply to David Flanagan [:djf] from comment #46)
> I have a work-in-progress patch based on Vlad's suggestion above at
> https://github.com/mozilla-b2g/gaia/pull/12905
>
> It is not working reliably for me, but I'm testing it on a Unagi, and
> graphics on the unagi are unstable these days.
>
> Jerry, does this work for you on the helix? Can you see anything wrong with
> the code?
I'm working on another issue.
I will try this patch during Paris rendering work week.
Flags: needinfo?(hshih)
Comment 55•12 years ago
ni? assignee to get attention.
HD+'ing for now to get some priority on this, but we'll need this for leo as well if any other partner releases another v1.1 build on a 5MP device.
Dominic, can you check the patch that Jerry provided in attachment 817004 [details] [diff] [review]? This should be a fallback in case we cannot follow through on David's WIP in time.
blocking-b2g: leo? → hd+
Flags: needinfo?(hshih)
Comment 57•12 years ago
(In reply to Wayne Chang [:wchang] from comment #55)
> ni? assignee to get attention.
>
> HD+'ing for now to get some priority on this, but we'll need this for leo as
> well if any other partner will release another v1.1 software on a 5mp device.
>
> Dominic, Can you check the patch that Jerry provided in attachment 817004 [details] [diff] [review]
> [details] [diff] [review] ? This should be a fallback in case we cannot
> follow through David's WIP in time.
Sorry for the late reply.
[:djf]'s patch splits the image into several subimages and processes them step by step.
On my Helix device, I notice some artifacts at the split edges (the red area in attachment 821385 [details]).
Another problem is that I can't see the processed image after I press the save button in Gallery; the image is not saved.
I need to add some logging to check these problems.
Reporter
Comment 58•12 years ago
I have tested the patch Jerry provided in attachment 817004 [details] [diff] [review] on our device.
It works and saves the edited result as another image file.
The drawback is that saving is slow: it takes more than 5 seconds (for an image taken with the rear camera) without any progress indication, which can easily confuse the user.
Comment 59•12 years ago
Hi [:djf],
JavaScript throws an error at the line below when we process the second strip.
https://github.com/davidflanagan/gaia/blob/3450f20cc40c1cd127dda2e3da02ecbc32c92a81/apps/gallery/js/ImageEditor.js#L1338
I will check this error in b2g.
Assignee
Comment 60•12 years ago
(In reply to Jerry Shih[:jerry] from comment #57)
> [:djf]'s patch splits the image into several subimage and process them step
> by step.
> At my helix device, I notice that there are some artifacts at the split
> edge(the red area in attachment 821385 [details]).
> Another problem is that I can't see the processed image after I press the
> save button in galley. The image is not saved.
> I need to add some log to check these problem.
I see these artifacts in edit mode on a Helix even without my patch applied (this is on an m-c build). So I think that is some other WebGL bug on the Helix, not related to this patch.
Assignee
Comment 61•12 years ago
I've had some time to look at this now. I've improved my tiling algorithm, and can now process a 5MP image in ~5 seconds (on a Helix running 1.1hd). I think this is the same speed that Jerry was reporting for his patch, but because the tile-based solution allows us to return to the event loop between tiles, we could display a progress bar while saving.
5 seconds still seems surprisingly slow, though, so I'm going to try a couple of more things before I accept that as the best we can do.
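The per-tile scheduling would look roughly like this (a sketch with hypothetical names; the actual patch in the pull request is structured differently):

function saveTiled(tiles, onProgress, onDone) {
  var i = 0;
  (function nextTile() {
    if (i >= tiles.length) {
      onDone();
      return;
    }
    processTile(tiles[i]);                // WebGL draw + copy into the 2D destination canvas
    i++;
    onProgress(i / tiles.length);         // update the progress bar
    setTimeout(nextTile, 0);              // yield to the event loop between tiles
  })();
}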
Comment 62•12 years ago
Thanks for the update, David. Can we say that by the end of next week (Nov 8) we land this if you don't find anything better?
5 s with a progress bar indication is much better than what we have now.
(In reply to David Flanagan [:djf] from comment #61)
> I've had some time to look at this now. I've improved my tiling algorithm,
> and can now process a 5mp image in ~5 seconds (on a Helix running 1.1hd) I
> think this is the same speed that Jerry was reporting for his patch, but
> because the tile based solution allows us to return to the event loop
> between tiles, we could display a progress bar while saving.
>
> 5 seconds still seems surprisingly slow, though, so I'm going to try a
> couple of more things before I accept that as the best we can do.
Assignee
Comment 63•12 years ago
Wayne,
Yes, November 8th sounds like a good deadline.
Comment 64•12 years ago
David, can I see your improved algorithm at https://github.com/mozilla-b2g/gaia/pull/12905/ ?
I will try to profile the 5-megapixel use case (JPEG encoding? memory copies?).
Assignee
Comment 65•12 years ago
Jerry,
This patch converts the process of saving an edited file to use tiles, and it adds a progress bar during the saving process.
The complicated part of the patch is getting borders right, but I think the rest is pretty straightforward.
Note that this patch is for the v1.1.0hd branch because I wanted to test it on that release. After you have reviewed this, I will create versions for master and v1.2.
Assignee: hshih → dflanagan
Attachment #826120 - Flags: review?(hshih)
Assignee
Comment 66•12 years ago
This patch would be better if the header bar changed from "Edit" to "Saving..." while the file is being saved. But that would be late-l10n, so I didn't include it in the pull request. It is easy to add if localization changes are still allowed on 1.1.0hd, however.
Comment 67•12 years ago
Comment on attachment 826120 [details] [review]
link to patch on github
comment at:
https://github.com/mozilla-b2g/gaia/pull/13300
We have a progress bar now. The performance improvement can be handled in another bug.
Attachment #826120 - Flags: review?(hshih) → review+
Comment 68•12 years ago
comment at:
https://github.com/mozilla-b2g/gaia/pull/13300
We have a progress bar now. The performance enhancement can be handled in another bug.
Updated•12 years ago
Attachment #817004 - Flags: feedback?(dflanagan)
Assignee
Comment 69•12 years ago
Thanks for the quick review Jerry. I've addressed your review comments, except for this one:
>We can create a shared GL texture containing the image data when we enter the edit mode.
>For each tile processing, we can share the same GL texture. Only modify the texture coordinate in rectangle vertex.
>It reduces memory copy and texture uploading time in draw() function.
My webgl skills aren't good enough to make that change. Feel free to file a followup bug if you would like to try it. I worry, though, that what you are proposing has two problems:
1) Doesn't it double the memory usage because we'd have image data in the texture and then another copy in a 2d canvas so that we can call toBlob() on it? Gallery has so many OOM issues that having an extra decoded copy of the image seems bad.
2) Does WebGL allow us to create arbitrary-size textures? If not, then the approach you suggest will stop working at some point as we start increasing the resolution of the camera. It might work for 5MP images, but will it work for 8MP images?
Comment 70•12 years ago
Just a couple of drive-by replies to comment 69:
- it is really important for performance to avoid uploading textures every time you draw;
- webgl can have textures of any size up to NxN where N is the value returned by webgl.getParameter(webgl.MAX_TEXTURE_SIZE), which you can safely assume to always be >= 2048 on any real-world hardware, and is 4096 on the Adreno 200 Enhanced GPU found in hamachi. Anyway, the preview in the gallery app should never have to use textures bigger than the screen resolution (e.g. typically 320x480), right? You do have a guarantee that WebGL textures can be at least as large as that, on all devices.
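For example, that limit can be checked up front before deciding how to tile or downsample (a sketch; the image-size variables are hypothetical):

var maxSize = gl.getParameter(gl.MAX_TEXTURE_SIZE);   // e.g. 4096 on the Adreno 200
if (imageWidth > maxSize || imageHeight > maxSize) {
  // the image cannot live in a single texture on this GPU: tile it or downsample first
}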
Assignee
Comment 71•12 years ago
I've been testing my patch with the attached image. When I crop on the top or left, there are sometimes artifacts along the bottom (where the final row of tiles is not full size). This doesn't seem to happen when I crop only on the bottom or right.
I suspect a floating-point issue: I should probably ensure that I round everything to integer pixels. It would be interesting to see whether cropping to an even height behaves differently from cropping to an odd height, too.
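The fix being considered is just snapping each tile's rectangle to whole pixels before drawing, something like this (a sketch with hypothetical names; the actual change lives in the updated pull request):

var sx = Math.round(tileRect.x), sy = Math.round(tileRect.y);
var sw = Math.round(tileRect.w), sh = Math.round(tileRect.h);
// Adjacent tiles now share exact integer edges, so no seams appear between them.
dctx.drawImage(glCanvas, 0, 0, sw, sh, sx, sy, sw, sh);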
Assignee
Comment 72•12 years ago
As I suspected, it was a rounding error. I've updated the pull request with this fixed and Jerry's review comments addressed.
After a discussion on IRC today, I understand why performance is so bad here. It seems that webgl isn't really the right solution for editing large images, and in the future we might want to use webgl for the previews, but use asm.js and regular JS for editing and saving the full-size image.
Comment 73•12 years ago
(In reply to David Flanagan [:djf] from comment #72)
> As I suspected, it was a rounding error. I've updated the pull request with
> this fixed and Jerry's review comments addressed.
>
> After a discussion on IRC today, I understand why performance is so bad
> here. It seems that webgl isn't really the right solution for editing large
> images, and in the future we might want to use webgl for the previews, but
> use asm.js and regular JS for editing and saving the full-size image.
I think WebGL is a potentially quite good way to do this, but the way we appear to go about it looks really sub-optimal. In particular, canvas1.drawImage(canvas2) is really slow, and will be extremely slow on tiling architectures if we iterate like [drawTile, readTile, drawTile, readTile]. Instead, we should be doing [drawTile, drawTile, ..., readTile, readTile, ...]. Even still, using drawImage is unoptimized, so we'll get readback from WebGL (slow) and if we're using skia-gl for canvas2d, texture upload for canvas2d (also slow). canvas.getImageData is also quite slow.
It might be helpful to sit down together to go over how to do this with better performance.
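In sketch form, the ordering Jeff describes queues all the GPU work before any readback (hypothetical names; this assumes each tile keeps its own render target alive until it has been read back):

tiles.forEach(function(t) {
  drawTileToFbo(t);        // submit GPU work for every tile; no stall yet
});
tiles.forEach(function(t) {
  readTileBack(t);         // do the slow readbacks only after all draws are queued
});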
Assignee
Comment 74•12 years ago
Fixed on the v1.1.0hd branch: https://github.com/mozilla-b2g/gaia/commit/d19550e2fb7331d5e4a54e8e998753378e7fd44a
Next will apply the patch to 1.3 and then uplift to 1.2.
status-b2g-v1.1hd: --- → fixed
status-b2g-v1.2: --- → affected
Assignee
Comment 75•12 years ago
(In reply to Jeff Gilbert [:jgilbert] from comment #73)
> (In reply to David Flanagan [:djf] from comment #72)
> > As I suspected, it was a rounding error. I've updated the pull request with
> > this fixed and Jerry's review comments addressed.
> >
> > After a discussion on IRC today, I understand why performance is so bad
> > here. It seems that webgl isn't really the right solution for editing large
> > images, and in the future we might want to use webgl for the previews, but
> > use asm.js and regular JS for editing and saving the full-size image.
>
> I think WebGL is a potentially quite good way to do this, but the way we
> appear to go about it looks really sub-optimal. In particular,
> canvas1.drawImage(canvas2) is really slow, and will be extremely slow on
> tiling architectures if we iterate like [drawTile, readTile, drawTile,
> readTile]. Instead, we should be doing [drawTile, drawTile, ..., readTile,
> readTile, ...]. Even still, using drawImage is unoptimized, so we'll get
> readback from WebGL (slow) and if we're using skia-gl for canvas2d, texture
> upload for canvas2d (also slow). canvas.getImageData is also quite slow.
>
> It might be helpful to sit down together to go over how to do this with
> better performance.
In this case, it is an hd+ blocker, so we just have to get a fix in.
I'd love to improve the performance, but Gallery has so many OOM problems that I can't afford to have two decoded copies of the full-size image at once. If saving an edited copy of a 5MP image allocates 40MB, that will probably kill the user's background apps.
Assignee
Comment 76•12 years ago
The patch for v1.3 is here: https://github.com/mozilla-b2g/gaia/pull/13393
I'm going to wait for Travis to run on it and then land.
It looks like automatic linting on commit is not on in the 1.1hd branch, and I introduced gjslint errors in ImageEditor.js for the hd branch. So I suppose I should do another pull request to fix those.
Assignee
Comment 77•12 years ago
And here's the pull request for v1.2: https://github.com/mozilla-b2g/gaia/pull/13394
Assignee
Comment 78•12 years ago
Backed my patch off the hd branch because of lint: https://github.com/mozilla-b2g/gaia/commit/abeba7177be443214cc0f037f7b2e8556ed89759
Assignee
Comment 79•12 years ago
Relanded to v1.1.0hd, with lint fixed: https://github.com/mozilla-b2g/gaia/commit/8b4330eb3575ac7496b2f124de9438edfce07358
Assignee
Comment 80•12 years ago
Comment 81•12 years ago
Awesome! Thanks to David, Jerry, and everyone who gave input on this :)
Reporter
Comment 82•12 years ago
It has caused a new issue: saving the picture now takes 5-6 seconds.