Closed Bug 120809 Opened 23 years ago Closed 14 years ago

Save as function refetches data or images that are in the cache

Categories

(Core Graveyard :: File Handling, defect, P3)

x86
All
defect

Tracking

(Not tracked)

RESOLVED WORKSFORME

People

(Reporter: johann.petrak, Unassigned)

References

(Depends on 3 open bugs, Blocks 1 open bug)

Details

(Keywords: perf)

Attachments

(1 file)

Using 2002011708/linux: the save image function seems to refetch the image
although it is already there (it is shown on the HTML page or is even shown
by itself) and is therefore probably
in the cache. This is a big problem if the server the image comes from
is slow and the image is large. Having the user wait endlessly to refetch
something from the server that is already there seems, um, strange.
I have the impression that this does not happen every time an image is saved,
but I don't know when the image is refetched and when not.
The image in the URL field *does* seem to get refetched every time, though.
Doing the same with NS 4.x does not refetch the image.
We should not be re-getting from the net if it's in the cache... we _do_ need the
headers, but those are in the cache, right?

reporter, are you sure the image is in cache?  Does about:cache list it?
Yes, it is listed in both the memory and disk cache lists, and if, after
looking there and finding it, I do another save, the image seems to
get refetched yet again. This happens both if I do "Save Image As"
and "Save Page As..." (as the current page is just the image).
The headers also seem to get refetched from the server, as there is
a considerable delay before the
Save As dialog is shown (see http://bugzilla.mozilla.org/show_bug.cgi?id=118719).
This seems to be the same as http://bugzilla.mozilla.org/show_bug.cgi?id=118487,
and both might be related to the infamous
http://bugzilla.mozilla.org/show_bug.cgi?id=40867?
Possibly a dup of bug 115174.

Basically, the persist object is not being told to use the cache, so it fetches
from the network each time.
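(For illustration only: with the nsIWebBrowserPersist interface, "being told to use
the cache" would roughly amount to setting the PERSIST_FLAGS_FROM_CACHE flag before
kicking off the save. The snippet below is a sketch from privileged JavaScript, not
the actual Save As code; "uri" and "targetFile" are assumed to already exist.)

    // Sketch only -- not the real Save As implementation.
    var persist = Components.classes["@mozilla.org/embedding/browser/nsWebBrowserPersist;1"]
                            .createInstance(Components.interfaces.nsIWebBrowserPersist);
    // Ask the persist object to read from the cache rather than hit the network.
    persist.persistFlags |= Components.interfaces.nsIWebBrowserPersist.PERSIST_FLAGS_FROM_CACHE;
    // "uri" is the nsIURI to save; "targetFile" is the nsILocalFile destination.
    persist.saveURI(uri, null, null, null, null, targetFile);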
Could somebody confirm this bug? It might be necessary to copy
the URL to a new browser window, as the server denies access
from other hosts (a different referrer host is not allowed).
But once the GIF is shown, it will be listed in both the memory
and disk cache, yet it will still get refetched from the server
every time a "Save Image As" or "Save Page As" is done.
I believe this is a dup of bug 84106
Depends on: 40867
As a variation, Mozilla won't even use the cached image for refreshing a page.

Try this one (or another game at the site):
   http://www.dragongoserver.net/game.php?gid=1136
Galeon and Opera will present the whole board very quickly, and are still faster on
refreshing or making a move, but Mozilla will do it one ... image ... at ... a
... time ... every ... time ...
   The network monitor shows clearly that it's refetching every single duplicate.

Based on Adam's comment, I'm reassigning this to Ben.  Once we're doing what's
necessary to use the cache, then if it's still broken it's a network/cache
problem.  But first we need to make sure everything is lined up properly before
we even get to the networking code.
Assignee: law → ben
Summary: Save as function refeches data or images that are in the cache → Save as function refetches data or images that are in the cache
Status: UNCONFIRMED → NEW
Ever confirmed: true
Michael, I don't see the wrong behavior you describe with the URL you gave:
all instances of the same image appear at the same time (i.e., all white
pieces), and from the speed of saving those images it *looks* as if they
are not refetched.
They do seem to get refetched on page reload, but this is probably what
is desired for a reload (I don't know the spec for reload).

How can the HTTP requests that actually occur be traced by an end user
(i.e., without fiddling with the code and recompiling)?
Status: NEW → ASSIGNED
Priority: -- → P3
Target Milestone: --- → mozilla1.0
Moving Netscape owned 0.9.9 and 1.0 bugs that don't have an nsbeta1, nsbeta1+,
topembed, topembed+, Mozilla0.9.9+ or Mozilla1.0+ keyword.  Please send any
questions or feedback about this to adt@netscape.com.  You can search for
"Moving bugs not scheduled for a project" to quickly delete this bugmail.
Target Milestone: mozilla1.0 → mozilla1.2
Bug 40867 is now fixed. Does this bug still exist?
The bug is still there: when I want to save a page concerning my bank account, I
get an invalid session error (which shouldn't happen if the page were retrieved
from the cache). Similarly, "View source" still doesn't work. :(
This bug is still alive and kicking.  Bug 40867 does not really affect this bug
in any way, since that put in the architecture for getting a _page_ from cache,
not the images/scripts/css associated with the page.
*** Bug 146133 has been marked as a duplicate of this bug. ***
*** Bug 150835 has been marked as a duplicate of this bug. ***
Keywords: perf
*** Bug 164636 has been marked as a duplicate of this bug. ***
There seems to be a variation: when you can see an image, it must be somewhere in
memory, yet Page Info might reveal that it is not cached. Even then we should be
able to save it without refetching.

pi
How about the target milestone?

pi
QA Contact: sairuh → petersen
Depends on: 177329
Depends on: 84106
I may have missed something in the discussion, but on general principle,
shouldn't the "Save Image As..." function save from the current browser window
regardless of cache status?  I mean, if the browser received and rendered the
image, shouldn't all the information needed to save the image still be in the
browser's regular memory (at least until the window is closed or a different
page is rendered)?
> I mean, if the browser received and rendered the image, shouldn't all the
> information needed to save the image still be in the browser's regular memory

No.  The in-memory representation _could_ be converted back to an image on disk,
but it would not be the same image (just one that looks similar).  The original
image data is not stored in memory once the decoding process finishes; only the
decoded data is.
Hmm.  While the ability to "dump" a rendered graphic may be orthogonal to "Save
Image As..." on cached objects being broken (which I've been seeing too), I for
one would find "dumping" very useful.  Does this sound like a useful function to
add, or is there some mechanism for doing so as Mozilla currently stands?
How does one "dump" an animated graphic, exactly?

The back end to dump any frame of an image exists (see imgIRequest;
imgIContainer; gfxIImageFrame interfaces and in particular the scriptable
getImageData function on the last one).  There is no UI, nor should there be by
default; an add-on to do this could be written trivially in JavaScript.
*** Bug 200625 has been marked as a duplicate of this bug. ***
*** Bug 203531 has been marked as a duplicate of this bug. ***
This seems to happen on Win XP too (with 20030426 Mozilla Firebird/0.6).
OS->All, based on duplicates. Clearing milestone.
OS: Linux → All
Target Milestone: mozilla1.2alpha → ---
A serious side-effect of this bug is that there is no way at all to save certain
images, e.g., the comic at <http://seattlepi.nwsource.com/fun/hagar.asp>.  On
that page, the image can only be downloaded if the URL of the current web page
is transmitted to the image-hosting site when requesting the graphic; hence
"Save Image As" does not work, since it tries to redownload the image without
the appropriate Referer.  (The download will at first appear to work, but upon
opening the image, you'll find it has been replaced with a different image
saying "image unavailable".)
*** Bug 177036 has been marked as a duplicate of this bug. ***
Discovered this problem while at http://hubblesite.org/newscenter/archive/
today.   Images were refetched when I saved them.  I remember this same problem
having been fixed in Mozilla back in the pre-v1.0 beta era.   Checked my
preferences and found that my disk cache was set to 1,500KB (I must have
fat-fingered this setting weeks ago).   Changed my disk cache to 40,000KB and tried
it again.  The images are now saved from the cache.   I then tried the go game
on dragongoserver and I do not see the creation times of the dragongoserver cache
items being changed, nor do I see enough bandwidth being used to download
everything again.
I don't want to tweak my cache setting. I think that if I can see an image or
page displayed, the browser should be able to save the image or page, or display
the page's source, without redownloading it. I don't care if it's cache proper or
it's ass. If I can see it, then the data is there, and there's no need to make
me reconnect to the net to refetch the data.
The fact that you can see it does not mean the data is there in its original form.

For example, there is no a priori reason that a browser should keep the comments
in the original HTML around when it's showing the page (and XML processors are
explicitly allowed to drop comments altogether).
Ah, I see. It probably applies to images as well -- certain parts that contain
comments and other things that are not essential to displaying the image may be
dropped. Then I will increase my cache as this bug (is it still a bug
considering that increasing cache fixes it?) has driven me nuts.
It's important to understand that the image displayed on the screen is not the
JPG that was downloaded.  The JPG is just a bunch of bits in a file from which
an image was rendered.  The Save Image As function apparently applies to the
source data, not the rendered image.  If the cache settings are not large enough
to store the original file, then the save function gets the original from the
server.   Saving the image from what is displayed would require processing to
create a JPG from a bitmap, and would give you the image degradation associated
with sequential applications of lossy compression.
If that is the case, why is it that images saved using the "Save Image As"
function will be corrupted if the image displayed on screen is corrupted? In
fact, the saved image file looks exactly the same as the one displayed on
screen, i.e. the same corruption.
*** Bug 214887 has been marked as a duplicate of this bug. ***
*** Bug 146550 has been marked as a duplicate of this bug. ***
*** Bug 216584 has been marked as a duplicate of this bug. ***
This is a Moz killer.  And it remains in 20030815.  It makes Moz very slow to
save images and, as noted, often saves them in a corrupted form.  This needs a
big fix, since nothing like this happens in Opera or NS4.8 when saving the same images.

Beker

I got this with a 50,000 KB cache while downloading a 1 MB JPG file.

Although I had several browser windows open, I never saw this with
Mozilla; only since I switched to Firebird.

Since I run on a modem line this was not very funny. Maybe the default cache
size is too small, but I have never needed to change it before
to save images or pages.

Also, since I work as a web developer, I do hope that Save Page and View Source use
the originally loaded page, as comments are vital for debugging pages.
*** Bug 208687 has been marked as a duplicate of this bug. ***
Blocks: 227714
*** Bug 229258 has been marked as a duplicate of this bug. ***
Hello:

I too thought that 'Save Image As ...' refetches images from the server.  But at least in Firefox 0.8, this is not true.  I confirmed this by doing an image save with my firewall blocking all traffic; it works fine.  Firefox (Mozilla?) does, however, treat the image as a download, and adds it to the download manager log.

This was the cause of the problem for me.  The download manager log had grown so large that Firefox would hang for a few seconds every time it tried to update it.  To see if this is the cause of the delay (which leads to the suspicion that the image is being refetched), go to your profile directory and delete the file 'downloads.rdf'.  The browser will create a new (empty) copy and the delay will disappear!  At least it did for me.

The question is, should 'Save Image As ...' add to the download log?  Even if the answer is yes, there should be a way to turn off the tray-area popup notification (Windows).

HTH,
Tamer
In Mozilla/5.0 (Windows; U; Windows NT 5.0; en-US; rv:1.6) Gecko/20040113, Save
As refetches --- I watched the lights blink on the Ethernet hub while it did it
(a PDF in this case), just tonight.  It was weird because even the Save button in
the Acrobat menus wouldn't work --- it complained about the file connection
having disappeared or something (sorry I didn't record it, but if someone wants
the specific message I'll replicate the incident).
Comment 42 is pretty much unrelated to this bug.  This bug is about the fact
that there _are_ circumstances when it refetches, not that it refetches every
single time.  Sometimes it doesn't.
(In reply to comment #44)
> Comment 42 is pretty much unrelated to this bug.  This bug is about the fact
> that there _are_ circumstances when it refetches, not that it refetches every
> single time.  Sometimes it doesn't.

I wouldn't say it's unrelated.  The original reporter of this bug used the phrase "seems to refetch" and I don't see specific instructions for replicating the bug (except possibly for the disk cache being set too small, in which case the behavior is by design, and I might add reasonable).  It's very possible many of the people commenting were experiencing the same problem I was experiencing and thought the delay was due to a refetch.
The instructions are trivial:

1)  Turn on Mozilla's HTTP logging (or just put printfs in the relevant code)
2)  Go to a site that sends images with no-cache headers
3)  Look at the log output.
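(For step 1, no recompiling is needed; HTTP logging can usually be turned on from the
shell before launching the browser. The module name and level below are the commonly
documented ones, so treat this as a sketch and adjust the log path to taste:)

    export NSPR_LOG_MODULES=nsHttp:5
    export NSPR_LOG_FILE=/tmp/http.log
    ./mozilla
    # then count the GET requests for the image's URL in /tmp/http.log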

We will refetch on save.  We shouldn't be doing that, even if the cached data is
expired, as long as it's in the cache.  Arguably, we shouldn't refetch even if
the data is NOT in the cache -- we should just not save that image.

But feel free to keep spamming this bug (with a browser that has a broken
implementation of "wrap", no less).
(In reply to comment #46)
> The instructions are trivial:
> 
> 1)  Turn on Mozilla's HTTP logging (or just put printfs in the relevant code)
> 2)  Go to a site that sends images with no-cache headers
> 3)  Look at the log output.

This is the first mention of the 'no-cache' header in this thread.
Considering that people use that header for a reason, Moz/Firefox's behavior
is not unreasonable (is it even different from IE's? Not that that's a
standard!).

> But feel free to keep spamming this bug (with a browser that has a broken
> implementation of "wrap", no less).

This time I put in carriage returns.
And the broken browser is Firefox 0.8.  Go figure!
Hitting the server during a "save the page I am viewing" operation is
unreasonable, in my opinion.

As for no-cache headers, that's just the simplest way to reproduce this bug. 
There are other ways, which are pretty clear if you know anything about the save
as code in Mozilla.

And please file a bug on firebird about that wrap thing.  Just make sure you
don't have some random-ass extension causing it first.
>Hitting the server during a "save the page I am viewing" operation is
>unreasonable, in my opinion.

of course that image may not be in the cache anymore...

LOAD_ONLY_FROM_CACHE may be a possibility, but what if the image is not in the
cache anymore?
I would say that we need to prompt the user in that case, the same way we do
when we need to refresh on back.
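(A rough sketch of what the LOAD_ONLY_FROM_CACHE idea could look like when the save
code sets up its channel; the flag names come from nsICachingChannel/nsIRequest, but
everything else is illustrative rather than the actual implementation, and "uri" and
"listener" are assumed to exist.)

    // Sketch only: refuse to touch the network, and fail (or prompt) if the entry is gone.
    var ios = Components.classes["@mozilla.org/network/io-service;1"]
                        .getService(Components.interfaces.nsIIOService);
    var channel = ios.newChannelFromURI(uri);
    channel.loadFlags |= Components.interfaces.nsICachingChannel.LOAD_ONLY_FROM_CACHE |
                         Components.interfaces.nsIRequest.LOAD_FROM_CACHE;
    // "listener" would stream the cached bytes to the target file; on failure, ask the user.
    channel.asyncOpen(listener, null);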
*** Bug 243691 has been marked as a duplicate of this bug. ***
*** Bug 247210 has been marked as a duplicate of this bug. ***
*** Bug 247452 has been marked as a duplicate of this bug. ***
>Hitting the server during a "save the page I am viewing" operation is
>unreasonable, in my opinion.

I think so as well. I was very disappointed when visiting this page:
http://www.geo.ulg.ac.be/edusat/fr/ikonos/bruxelles-ikonos-pan.htm

You get a bitmap image fetched by a CGI script called with POST arguments,
and when you try to save the image, you get only a little HTML text saying that
the link doesn't exist, because Mozilla has tried to download the URL of the
script without the arguments.

For an "ordinary user", this sort of incident is impossible to understand.
Also, I find that saving images is very slow; if it could be made faster by
avoiding refetching the image, that would be a real plus.
(In reply to comment #54)
> >Hitting the server during a "save the page I am viewing" operation is
> >unreasonable, in my opinion.
> 
> I think so as well. I was very disappointed when visiting this page :
> http://www.geo.ulg.ac.be/edusat/fr/ikonos/bruxelles-ikonos-pan.htm
...
> For an "ordinary user", this sort of incident seems out of understanding.
> Also, I find that saving images is very slow ; if it could be faster by 
> avoiding to recall the image, it could be a must. 

I too have delays of about 8 seconds whenever Save Image As is running. I
deleted the downloads.rdf file, made sure my cache was huge (50 MB), and Mozilla
1.7.3 is a reasonably late model, all running on Win XP SP2. All to no avail.

My workaround was to go into my personal cache (using ZTree), find the
cached file and save it as a .bmp file (Win XP here). Then it can be looked at easily.

I notice this when trying to save any kind of inline image whose URL is actually
a "random image" CGI script, for example a banner ad. The script is called again
when I save the image, and the file I end up with is a different image than the
one I'm looking at in the browser (unless I luck out and get the same random
image again). This makes such images effectively impossible to save. The image
does show up as being in "Disk Cache" when I look at the Page Info.
Blocks: 288462
*** Bug 312059 has been marked as a duplicate of this bug. ***
*** Bug 289478 has been marked as a duplicate of this bug. ***
*** Bug 302691 has been marked as a duplicate of this bug. ***
Blocks: 216490
This bug is a real pest. Yesterday it cost me another 15 minutes in time-critical circumstances.
Could somebody please post an interim message advising on the state of affairs?
Thank you!
Update to this bug:
As of today's 2.0.0.6 release of Firefox,

I still have problems saving images from Firefox IF the image is no longer on the server:

1- I have some tabs OPEN with only images, no HTML, just images.
2- The image has been deleted from the server!!!
3- I try to save the images by drag and drop or by Save As... and I get a zero-size JPG the first time I try to save one, and a JPG containing the HTML of some 404 error the second time I try to save it!!!
4- The only thing I can do is copy the image, paste it into MS Paint, then save it. :)

How to reproduce the bug: open an image from your hard disk, delete or rename it, and try to save it; you can't!
After six years, still present in Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.9) Gecko/20071025 Firefox/2.0.0.9.

MSIE at least offers to save a .bmp file if it accidentally throws away the original image file.
Saving Page As on http://oodt.jpl.nasa.gov/better-web-app.mov will download the file again, even though saving it from the Media tab in Page Info will copy from the disk cache like it should.

Mozilla/5.0 (Windows; U; Windows NT 5.1; nl; rv:1.8.1.11) Gecko/20071127 Firefox/2.0.0.11
Please wish a happy sixth birthday to bug 120809!!
DEVS!  The answer is already here.  Do the same thing as whatever you use to save it from the Media tab in Page Info.  It will copy the file the right way every time.

I am trying to find the bug that describes why the filename is set to "goto.htm" instead of the actual filename of the resource when using right-click, Save As.
(In reply to comment #71)
> I am trying to find the bug that describes why the filename is set to
> "goto.htm" instead of the actual filename that the resoource is called when
> using the Right Mouse click, Save As.

Bug 311742?
A workaround for anyone who uses Firefox to pay bills and wants to save receipts: view the source, copy it, and save it to a file. This is what helps me. Otherwise (if I save the page as a normal user would), I end up with **** in the file because Firefox refetches the page.
I am astonished that SeaMonkey 1.1.10 works and saves link targets and images etc., whereas SeaMonkey 1.1.11 silently fails to save anything; note: on the same machine, installed from the binary tar.gz, on Scientific Linux 5.0->5.2 i686 on AMD64.

I solve this by reverting to 1.1.10.

There may be inconsistencies in my installation;
SeaMonkey 1.1.10 often stops/crashes with
./run-mozilla.sh: line 131:  4410 Segmentation fault      "$prog" ${1+"$@"}

which mostly happens when I request "save link target" or "save page".
I have been suffering from this bug for months.  Firefox never saves images which I am viewing when I choose Save As; it always re-downloads them.  For example on GMail.  Very frustrating on large images.  Even worse when you have a page with 300+ images, all loaded, and you tell it to save the page and all images -- it spends the next 20 minutes redownloading them all.  Very nasty for the server, doubling all bandwidth costs.  Very nasty for the user, wasting huge amounts of time downloading something that is already visible on the screen.
I should mention that I am also unable to save credit card purchase receipt pages, unable to save HTML files, and on and on.  This is still happening in Firefox 3.0.4, seven years after this bug was reported, and nine years after it was originally brought up (and supposedly fixed) in another report.

I don't understand how something so integral to both web developers and average users can go unfixed for nearly a decade.
Obviously this bug is very important to you, so you'll be fixing it. Welcome to open source.
Jerry Goodwin, obviously you are not a Firefox developer.  At least, not one who cares about its quality.  However, thank you for calming my fears that open source software will replace software that people get paid to write.
Assignee: bugs → nobody
Status: ASSIGNED → NEW
So per comment 0 this bug is about images, right?  Are we talking full-page images or images included via <img>?

People seem to have been duping all sorts of stuff here, ignoring the "one bug per problem" setup we're supposed to have.  Please reopen any bugs that are not actually duplicates of THIS BUG AS FILED (which as I said is about images).  Other kinds of content:

1) Are not saved via the same codepath
2) Use a totally different cache

so they need to be dealt with separately.

That's assuming the goal is to fix the issues, not just to complain, reduce the number of bugs by any means possible, and obfuscate issues.

Again, I would love details on reproducing this bug on a trunk nightly (needs to be trunk, since the image cache has been completely rewritten recently and now pins all visible images in the cache).
It's images included via <img> (at least for me). The problem appears chaotically: in the image properties it says "Image size: unknown (not cached)", but some images save immediately, showing just "scanning for viruses", while others (most) are reloaded. The cache is not full, and the images also cannot be found in the disk cache immediately after they have loaded in the page; is that correct? I guess not. I'm only talking about a single site so far. Netscape 9.0.0.6 acts just the same. Well, guys, having this bug in the 3rd release is really a shame :).
Comment 84 just happens to not have the one thing I asked for: steps to reproduce.  Can you point me to a specific site and a specific image that shows the problem?

Of course it's also explicitly talking about images that are NOT in the cache, whereas this bug is about images that ARE. So perhaps it should be filed as a separate bug, eh?
(In reply to comment #83)
> So per comment 0 this bug is about images, right?  Are we talking full-page
> images or images included via <img>?
> 
> People seem to have been dupping all sorts of stuff here, ignoring the "one bug
> per problem" setup we're supposed to have.  Please reopen any bugs that are not
> actually duplicates of THIS BUG AS FILED (which as I said is about images). 
> Other kinds of content:
> 
> 1) Are not saved via the same codepath
> 2) Use a totally different cache
> 
> so they need to be dealt with separately.
> 
> That's assuming the goal is to fix the issues, not just to complain, reduce the
> number of bugs by any means possible, and obfuscate issues.
> 
> Again, I would love details on reproducing this bug on a trunk nightly (needs
> to be trunk, since the image cache has been completely rewritten recently and
> now pins all visible images in the cache).

Here are steps to reproduce this bug in Firefox 3 on Windows Vista:
1. Open an image file stored on a server in the Firefox browser.  If the image is still on the server, you can right-click -> "Save Image As" and be able to save the file.
2. Have the server remove the image file while the image is still in your Firefox browser's cache.
3. If you right-click -> "Save Image As" for this cached image in your browser, your browser will save this image as a corrupted or invalid 2.01 KB picture file (I tested this with a JPEG file).

Strangely enough, in Firefox 2 on Windows Vista, doing the above steps will correctly save the image from your cache instead of saving the image as a corrupted image file.

In Firefox 3, if I use the "Save Images" plug-in, I can save cached images that have been removed from the server by using the "save images from current tab" feature.

This bug in Firefox 3 can be very annoying on imageboard websites where the image retention may be low.
Very well.  I just performed these steps:

1)  Loaded http://web.mit.edu/bzbarsky/www/test.png in Firefox 3.
2)  sshed to that server.
3)  Did "mv test.png test2.png" on the server.
4)  Right-clicked the image in Firefox and selected "Save image as".

The correct image was saved.

Can you point me to a server that actually shows the problem, perhaps?
I also did that test with Firefox 3 on Windows Vista and couldn't reproduce the buggy behavior. I even tried to load the no-longer-existing image in a different browser to make sure that the web server doesn't serve a cached copy, and then I saved the image being displayed in Firefox with no problems.

I also did the same test with the image being part of an HTML file. Again, no problem noticed.

I used a HostGator server (Apache on Linux).
Depends on: 486921
I have the same bug as #84 with g.e-hentai.org (:)). The site seems to have some copy protection, with the result that some images aren't loading. (Win XP SP2) Also, in Opera it's impossible to save some of the images (the "save" option for the image is disabled), but those that it allows to save are never reloaded. Firefox 3 reloads the images most of the time when saving them, and in the properties it shows exactly as in #84. A fresh Safari, though, works just fine with the site.
I see this problem on imagefap.com. Often, when you click save, it reloads an image that was already loaded.
The files with this problem always show "Image size: unknown (not cached)" in the properties.
Rodrigo, George:

Can you please give me specific steps you use?  Which exact url you load, which exact image you save, exactly how you save it.

Please note that this bug is about images that _are_ in the cache and are not being saved, by the way.
Trying to reproduce the bug only made me discover another one...
I use a Greasemonkey script to load all images (original size) in one page, instead of having to save them one at a time (and I can save the page to get all images).
I loaded one gallery, then cleared the cache (I verified that the cache was empty). When I reloaded the page, all the images appeared without downloading but showed "Image size: unknown (not cached)" in the properties. Maybe they are in some cache other than the disk cache?
I will test more and come back with steps to reproduce soon.
There are at least 3 different caches images might be in.  Some cache the bytes off the network, some cache decoded image data.  Not all are cleared by the "clear cache" thingie.
Using Minefield 3.6a1pre 20090405

Here is the link:
http://imagefap.com/gallery/1594591
There is NO porn here (if you have adblock) :)

and the script I use is:
http://userscripts.org/scripts/show/15603
(needs Greasemonkey)

(alternatively you can open many images using "open link in new tab", then save them after they load; some will download again, despite being in cache)

Using the latest nightly, Firefox shows "0 KB (0 bytes)" in the properties of some files even after they load.
Right-clicking and choosing "View Image" loads the image again (refetches it from the server).

Another set of steps:
1. Load the imagefap page from the link above with the Greasemonkey script installed
2. Go back to the link (F5 doesn't work) so as to load the script again
3. Some images load from cache while others are refetched

Expected result:
All images load from cache
> some will download again, despite being in cache

Are you sure they're in the cache?  Those URIs look like exactly the sort of thing that would trigger hashtable collisions on our cache and cause some of the images to evict others.

That's certainly what I see happening over here.
I haven't tested about:cache today, but I remember that images showed up there and were still refetched.
OK, well everything I've seen refetch so far has not been cached.  I'd still love steps to reproduce the problem, including which specific images you're opening in a separate window, in which specific order, which ones you're saving, in which specific order.  Whatever is needed for me to do EXACTLY what you're doing.
Based on more tests, I am not sure whether the cache is the problem. I have new steps:
1. Open the link: http://imagefap.com/gallery/1594591
(same as before)
2. "Open link in new tab" for 10 images (this number is arbitrary)
3. Right-click and "View Image" on one image
4. Sometimes the image loads from cache with the correct file name "12345.jpg"     OR
it refetches (reloads) from the server and the file name is "getimg.php"
(the file name as it appears at the top of the Firefox window, not when you click "Save As")

Sometimes 10 images is not enough to see getimg.php; just open more images.
Attached image what I reproduced —
I clicked on one single image on imagefap and got this.

What I see here:
1. The image is in disk cache but it has 0 bytes.
2. The image is not in memory cache (not shown on pic)
3. It's not a cache collision because cache collisions are marked as (not cached) in "page info"
4. In page info the image is shown immediately (not from network). I'm guessing it's from decoded-image cache.
5. After highlighting the URL and pressing Enter, the image reloaded from the network but was put in the disk cache correctly this time around.

Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.9.2a1pre) Gecko/20090406 Minefield/3.6a1pre

Is there anything I can do to debug it further?
Huh.  That's pretty weird (what comment 99 is describing).  Mind filing a separate bug on that, with directions on how I can get into that state?  That's certainly not a properly cached image, somehow.  ;)
Blocks: 487883
Filed bug 487883 with Radek's steps (comment 99).
Blocks: 303268
Blocks: 499255
QA Contact: chrispetersen → file-handling
Reported: 2002-01-18

Innovations and bleeding-edge development.
FlashGot grabs images from the cache, and it's GPL. Use its code?
Mozilla/5.0 (Windows NT 6.1; WOW64; rv:2.0b8pre) Gecko/20101107 Firefox/4.0b8pre
still happens all the time
(In reply to comment #107)
> Mozilla/5.0 (Windows NT 6.1; WOW64; rv:2.0b8pre) Gecko/20101107
> Firefox/4.0b8pre
> still happens all the time

Hence still being marked NEW as opposed to FIXED or INVALID. There's still a hope that this could make it into Firefox 4 for the polish phase though.
I would urge everyone commenting to:

1)  Read comment 97.
2)  Read https://bugzilla.mozilla.org/etiquette.html
3)  Provide some actionable steps to reproduce like comment 97 asks for.
(In reply to comment #97)
> OK, well everything I've seen refetch so far has not been cached.  I'd still
> love steps to reproduce the problem, including which specific images you're
> opening in a separate window, in which specific order, which ones you're
> saving, in which specific order.  Whatever is needed for me to do EXACTLY what
> you're doing.

1: Go to http://artgerm.deviantart.com/art/Birds-of-Prey-issue-7-179662205?q=&qo=
2: Click image to enlarge
3: Open download manager/dialog thing
4: Watch image redownload, scan for viruses and then the toaster popup
(In reply to comment #111)
Image is set to not be cached by the server.

Request URL: http://fc06.deviantart.net/fs71/f/2010/261/8/1/birds_of_prey___issue_7_by_artgerm-d2yys8t.jpg

Request Method: GET
Status Code: HTTP/1.1 200 OK

Request Headers
 12:50:25.805
 Accept:image/png,image/*;q=0.8,*/*;q=0.5
Accept-Charset:ISO-8859-1,utf-8;q=0.7,*;q=0.7
Accept-Encoding:gzip, deflate
Accept-Language:en-us,en;q=0.5
Cache-Control:no-cache
Connection:keep-alive
Host:fc06.deviantart.net
Keep-Alive:115
Pragma:no-cache
Referer:http://artgerm.deviantart.com/art/Birds-of-Prey-issue-7-179662205?q=&qo=
User-Agent:Mozilla/5.0 (X11; Linux x86_64; rv:2.0b8pre) Gecko/20101109 Firefox/4.0b8pre
I think there is a clash of USER EXPECTATIONS and specmanship here.

I know that I, as a user, want to click-right on a displayed thingy that I can see right in front of my eyes in a browser window and have the thing I see saved to disk.  I don't want to be referred to an RFC, and I don't want some dimwit external webmaster's confused (or deliberately unhelpful) Apache options to get in my way.

I really can't imagine a scenario in which a user requesting "Save as..." for anything which the user can see, bright as day, right there in living pixels on a display, should ever be refetched from anywhere, regardless of what nonsense the data-supplying web site specifies.  This of course is "nonsense" from a USER perspective; I fully understand that browser implementers are obliged to pay attention to this cache-busting, time-wasting nonsense for all sorts of reasons elsewhere.

The fact that so many of us get so frustrated by this bug (a bug from a user perspective) should indicate that following various HTTP server-browser communication spec to the letter isn't necessarily the correct behaviour where browser-user expectations are involved.

Think of it this way: if I run a screen-grabber program and record the pixels displayed in a browser window, I don't expect and I don't want and I would be furious if that caused the browser to go off and spend a whole lot of time and bandwidth grabbing _different_ pixels.  By analogy, when I as a browser user request the browser to record what I see (via some "Save As..." menu or the like), then I expect exactly that HTML/JPEG/etc that was used to produce the pixels I saw to end up saved on my local computer.
(In reply to comment #112)
> (In reply to comment #111)
> Image is set to not be cached by the server.
> 
> Request URL:
> http://fc06.deviantart.net/fs71/f/2010/261/8/1/birds_of_prey___issue_7_by_artgerm-d2yys8t.jpg
> 
> Request Method: GET
> Status Code: HTTP/1.1 200 OK
> 
> Request Headers
>  12:50:25.805
>  Accept:image/png,image/*;q=0.8,*/*;q=0.5
> Accept-Charset:ISO-8859-1,utf-8;q=0.7,*;q=0.7
> Accept-Encoding:gzip, deflate
> Accept-Language:en-us,en;q=0.5
> Cache-Control:no-cache
> Connection:keep-alive
> Host:fc06.deviantart.net
> Keep-Alive:115
> Pragma:no-cache
> Referer:http://artgerm.deviantart.com/art/Birds-of-Prey-issue-7-179662205?q=&qo=
> User-Agent:Mozilla/5.0 (X11; Linux x86_64; rv:2.0b8pre) Gecko/20101109
> Firefox/4.0b8pre

This one? http://jaynekeller.tumblr.com/post/1501553792
(In reply to comment #113)

Could not have said it better, and I want to make it well known that he's not the only one who feels that way.
(In reply to comment #113)
For me, I understand the concept of the bug in that you should respect the
administrator's wishes and re-download the content on each run. However, if
you have yet to leave a page, it should be in a kind of temporary cache that
means you can access the content quicker. You shouldn't have to load the same
image twice, which the current saving process does. However, it would seem that
we're pegged into this behaviour based on the way the browser behaves.

In terms of conditionals, it should be as simple as: IF the item is loaded, MOVE it
to the user-specified location, ELSE load it.

As things stand, we're currently just wasting bandwidth.
Richard, your issue is NOT THIS BUG.  Your issue is that you want things to be pinned in cache or that all resources should be stored somewhere other than the HTTP cache in addition to being in the HTTP cache.  We have separate existing bugs for this, which I suggest you read to see the tradeoffs that are involved.

THIS bug is specifically about not using the cache even when the content to be saved is in the cache.  Let's please stop adding unrelated things to it?  It's already so long as to be nearly useless.

> This one? http://jaynekeller.tumblr.com/post/1501553792

When I save the big image on this page (which is the one I assume you meant, given lack of actual steps to reproduce), using a clean profile and today's trunk build, I see it come from the cache.  There is no second network request.  (And please don't overquote?  It just adds to the length problem.)
(In reply to comment #117)
1: Go to http://jaynekeller.tumblr.com/post/1501553792
2: Open download manager/dialog thing
3: Watch image redownload, scan for viruses and then the toaster popup

Those were the reproduction steps. I watched it download and scan for viruses.

NEW CASE STR:
1: Go to http://tindink.tumblr.com/post/1525703940/i-was-at-sea-the-other-day-and-loads-of-meat
2: Open download manager/dialog thing
3: Watch image redownload, scan for viruses and then the toaster popup

EB:
1: Go to http://tindink.tumblr.com/post/1525703940/i-was-at-sea-the-other-day-and-loads-of-meat
2: Open download manager/dialog thing
3: Watch image scan for virus and then the toaster popup without redownloading.
I'm clearly missing something in comment 118...  I load http://tindink.tumblr.com/post/1525703940/i-was-at-sea-the-other-day-and-loads-of-meat then I open the download manager (Tools > Download manager).  Then what?  Is there a missing "save image as" step somewhere?  On the assumption that there is, if I "save image as" the polar bear image from that page, I see it get read from cache. My browser does NOT go out to the network to reget it.
Open the page, open the download manager, then go to the tab with the page open. Right click on the largest image on the page (the polar bear as a wave image) and click save image as. Moving/copying the image from the caches should be instantaneous. Instead I see in the download manager "Downloading image" and am forced to wait as it downloads before it pops up a toaster. Perhaps it's a matter of perception and the download manager should be tweaked to describe the behaviour properly. But the process seems to me to be redownloading the image. If it isn't redownloading the image, why is the process of saving such a long (length of time, not steps) one?
The image in the cache is not stored in a separate file.  So saving it from the cache means reading image from disk and writing to a different file on disk.  This is done asynchronously, so as not to lock up the UI in the process waiting on disk operations.  So wall-clock time will depend on the speed of your disk, the latency of your file writes for non-disk-related reasons (antivirus, etc), and so forth.

Reducing the real and perceived time of the save operation is a valid bug (or pair of bugs), but unless your network sniffer shows actual _network_ access it's not THIS bug.  I would suggest filing those bugs on the download manager, for sure.
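(For the record, a rough sketch of that "read from the cache, write to a different file,
asynchronously" path, using the old cache API from privileged JavaScript; the entry key,
output-stream setup, and error handling are simplified assumptions, not the real
download-manager code, and "url" and "targetFile" are assumed to exist.)

    // Sketch only: copy a cached HTTP entry to disk without any network access.
    Components.utils.import("resource://gre/modules/NetUtil.jsm");
    var cacheService = Components.classes["@mozilla.org/network/cache-service;1"]
                                 .getService(Components.interfaces.nsICacheService);
    var session = cacheService.createSession("HTTP",
                      Components.interfaces.nsICache.STORE_ANYWHERE, true);
    var entry = session.openCacheEntry(url, Components.interfaces.nsICache.ACCESS_READ, true);
    var input = entry.openInputStream(0);
    var output = Components.classes["@mozilla.org/network/file-output-stream;1"]
                           .createInstance(Components.interfaces.nsIFileOutputStream);
    output.init(targetFile, -1, -1, 0);  // default flags: create/truncate for writing
    // Asynchronous disk-to-disk copy, so the UI is not blocked while writing.
    NetUtil.asyncCopy(input, output, function (status) { entry.close(); });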
Hopefully I'm not adding noise, but I don't see instructions on how to tell if the image is being saved from cache or is being downloaded again. I think that would be very helpful!
https://developer.mozilla.org/en/HTTP_Logging (works in release builds and everything) and then seeing how many GET requests we send for the image.  The LiveHttpHeaders will likely let you get this info too.  Or any packet sniffer (ethereal, wireshark, ettercap, pick one of your choice).
Depends on: 610775
Depends on: 610776
(In reply to comment #121)
Those are both filed; are we saying that this is invalid based on those two?
I'm just saying that this bug certainly used to be a problem, but seems to work for me now, as filed.  Unless someone can actually reproduce this as filed, it should be resolved as "worksforme" and separate bugs filed for other issues.
Hi,

Let me make a note which will immediately result in the usual Mozilla nastygram and referrals to etiquette pages and all that uninteresting and tiresome jive.  (Don't bother: I've heard it and I don't care.)

There's a pattern which some people might perceive at work, and that almost certainly nobody who could change it cares enough to observe or to care about, but here it is for reference purposes:

101 people -- USERS of the software -- vote for something as being a bug that drives them up the wall, or troubles them enough to make the real life effort to report.  It takes effort for them to determine there is a problem, to carefully search the bug database to try to locate other reports of the same symptoms that bother them as users, to register with the bug reporting system, and to add their "me too", either in the comments or through voting.  Now perhaps the users don't have the incredible levels of bugzilla-fu that the masters of XULXMLRDFSQLXPCOMETC have attained, and perhaps they don't split hairs exactly right or carefully check which of 20 similar sounding bugs is _really_ their problem, but they did try to do so and they did try to make their own contribution to the usability and value of the community-authored software they are using.

At some time the software system's active developers decide, based on some criterion or another, technically valid or splitting hairs or perfectly legitimate or otherwise, that what the naive users may have thought is a bug is really some OTHER bug (perhaps CLOSED or WORKSFORME or WONTFIX, ideally!) that they have split off for internal development purposes, some new bug with no votes and no comments and no priority for anybody.

The original bug is closed.  100 users are ticked off that another irritating Mozilla screw-up reported a decade ago will effectively never be addressed, and the new bug sits unwanted, uninvestigated, and unloved for another 10 years.

Repeat.

Anyway, that's one software user's perspective of a pattern that sometimes appears to exist here in bugzilla.mozilla.org land at some times.

PS Let me save you some time: https://bugzilla.mozilla.org/etiquette.html
Status: NEW → RESOLVED
Closed: 14 years ago
Resolution: --- → WORKSFORME
Facebook has ruined me; now I want a "Like" button for bugzilla comments.

But more usefully...
If the annoyed users receive comments posted to this bug, could someone more knowledgeable post links to bugs those users should vote for to properly indicate support for the bugs that do fit with how the developers work?  Comment 121 through comment 125 indicate that such bugs exist in this case, but there is no link.  I hope that if the voting users do vote for those bugs, their votes will prevent the new bugs from being uninvestigated for such a long time.

(It would be even better if bugzilla had a feature to help users transfer votes from closed bugs to replacement bugs, but I'm not holding my breath on that; just pointing them in the right direction to do it with the current system would be a help.)
I reported very long times when saving to flash USB pen drives.

Apparently that bug was marked as a duplicate of this bug. I doubt they are the same, because saving to hard disk is far speedier than saving to USB.
(In reply to comment #129)
> I reported very long times when saving to flash USB pen drives.
> 
> Apparently that bug was marked as a duplicate of this bug. I doubt they are the
> same, because saving to hard disk is far speedier than saving to USB.

I forgot to add: saving to a USB pen drive is NOT resolved. It still takes far more time than saving to the hard disk, and this is not related to slower USB writing, because other browsers (like IE, in many versions) do not have this problem.
(In reply to comment #117)
> Richard, your issue is NOT THIS BUG.  Your issue is that you want things to be
> pinned in cache or that all resources should be stored somewhere other than the
> HTTP cache in addition to being in the HTTP cache.  We have separate existing
> bugs for this, which I suggest you read to see the tradeoffs that are involved.
[...]

Oh no, not again! There have been bugs that were marked as duplicates of this bug for what Richard is describing (e.g. bug 164636 -- see the "you're viewing" in the bug title). This is really confusing, in particular when one reaches such bugs via a search. Could the duplicates be re-duplicated to the correct bugs, please?
Product: Core → Core Graveyard

This is still currently present.
