Closed Bug 69938 Opened 19 years ago Closed 3 years ago

Downloads are stored in $TMPDIR|$TMP|$TEMP|/tmp first and then moved to the selected path only after the download finishes / location is selected

Categories

(Firefox :: File Handling, defect)

x86
All
defect
Not set

Tracking


RESOLVED WONTFIX

People

(Reporter: nitro_jawt, Unassigned)

References

(Depends on 2 open bugs, Blocks 3 open bugs)

Details

(Keywords: dataloss, helpwanted, Whiteboard: INVALID? | PLEASE READ BUG 55690 BEFORE COMMENTING)

Attachments

(1 file)

From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux 2.4.1 i686; en-US; 0.8) Gecko/20010215
BuildID:    2001021503

When downloading a file (any file), regardless of the path selected
in the download file dialogue, the file is created with a temp name
in the /tmp dir. This is incorrect behaviour, and leads to broken
downloads, full /tmp dir, and possibly worse. Many systems (mine
included) have a restricted size for /tmp, so trying to download
any file larger than that size will fail - even though you have
selected to save it into a path that DOES have enough space.

Reproducible: Always
Steps to Reproduce:
1. Restrict /tmp size (separate partition etc.)
2. Download a file larger than /tmp capacity
3. Watch /tmp fill up and the download fail

Actual Results:  A file of random size (less than the original size) is created in the
selected destination. The temp file appears to remain in the /tmp dir.

Expected Results:  Mozilla should save the file DIRECTLY into the selected path,
rather than saving it into /tmp and then moving it.
Although it might as well be a dupe of bug 59808.
Assignee: asa → neeti
Component: Browser-General → Networking: Cache
QA Contact: doronr → gordon
Actually, it's not a dupe of bug 59808.... Mozilla should
write the file directly to the selected path, NOT into
ANY temp dir, be it /tmp or ~/.mozilla/tmp or whatever else.
CONFIRMED Linux 20010215

Status: UNCONFIRMED → NEW
Ever confirmed: true
Assignee: neeti → gordon
Target Milestone: --- → mozilla1.0
Cache bugs to Gordon

*** This bug has been marked as a duplicate of 59808 ***
Status: NEW → RESOLVED
Closed: 19 years ago
Resolution: --- → DUPLICATE
This bug is NOT a duplicate of bug 59808.

59808 pertains to the incorrect placement of temp files.

This bug pertains to the incorrect USAGE of ANY temp dir.

ie, if you fix bug 59808, this bug will NOT be resolved.

Let me explain this bug again. Say for instance I have two drives
in my system, one mounted as root (/), and one mounted at /home/ftp.
Now, let's say that the root file system has 150Meg free, and the
/home/ftp file system has 3Gig free. So, I'm logged onto this system
as some user, who happens to have write access to the /home/ftp
directory tree. I decide I'd like to download the latest RedHat iso
images, so I use Mozilla to go to the RedHat site, and find the files to
download. Mozilla asks me where I want to put those files, and I say
in /home/ftp. Now, Mozilla, instead of writing those files into /home/ftp,
starts writing them into a temp dir (any temp dir, I don't care). Chances
are, that temp dir is NOT under /home/ftp. So what happens? Mozilla happily
fills up the root file system with (partial) files I'd asked it to store
on a different file system.

This, I consider a bug in Mozilla. One that is very different from bug 59808.

Thank you,
- Andrew
Status: RESOLVED → REOPENED
Resolution: DUPLICATE → ---
Doug, this doesn't look like it has anything to do with the cache.  Can you take 
it?
necko does not enforce where files should be downloaded to.  

Bill, does the xfer do something like this?
Assignee: gordon → dougt
Status: REOPENED → NEW
Component: Networking: Cache → Networking: FTP
I agree with Andrew.
2 more problems that can pop up with this bug are:
Attempting to save to a directory in which you don't have write access causes
mozilla to save the file to /tmp.  Once the file has been downloaded and mozilla
can't move the file to the target directory, mozilla pops up with an error
message saying that it can't write to the target location and deletes the file
from /tmp.  This may not be a problem with a small file, but if you've spent the
last hour downloading the file it might cause some irritation. :)

I haven't tested this scenario but there could be a problem here:  If /tmp is
larger than the target location mozilla will successfully download the file to
/tmp.  I would then imagine that mozilla will complain about not being able to
move the file to the target location.  I don't know if mozilla leaves the file
in /tmp or deletes the file.

I have also experienced the situation experienced by Andrew (the case of /tmp
being too small to save the file, but the target location being big enough).  I
think mozilla keeps on downloading the file, even after /tmp fills up.

If the user doesn't have access to /tmp, mozilla downloads the first Megabyte of
the file, waits for a while and then announces that the download is complete.
The file isn't written to the target location (not even the first Megabyte).
This seems to be closely related (if not a dup of) bug 55690.  It's the way the
uriloader works.
*** Bug 79637 has been marked as a duplicate of this bug. ***
Component: Networking: FTP → Networking
*** Bug 87494 has been marked as a duplicate of this bug. ***
what is milestone "mozilla1.0" anyway?  Moving to future.
Target Milestone: mozilla1.0 → Future
Win32 build saves a downloaded file in \Windows\Temp with a random name,
as bug 79637 reported.
Status: NEW → ASSIGNED
OS: Linux → All
As long as all files are saved to /tmp, it would be nice if Mozilla checked the
size of the file against the space available in /tmp before spending hours on a
clearly fruitless download.  Though it would be still nicer if it just
downloaded to the correct location directly, as originally requested.
-> bill
Assignee: dougt → law
Status: ASSIGNED → NEW
It seems to me that anyone evaluating Mozilla and finding it has this kind of
problem will never see it as a serious browser. Why not put the
Mozilla 1.0 milestone back as a target? It should not be hard to fix. The file
could at least be given the requested name (it is not on Win32 systems).
to file handling.
Assignee: law → neeti
QA Contact: gordon → benc
*** Bug 105392 has been marked as a duplicate of this bug. ***
really -> filehandling
Assignee: neeti → law
Component: Networking → File Handling
QA Contact: benc → sairuh
Downloading to a temp filename first also slows down the download process.  Last
night I downloaded a 90 MB file, and the copy from the temp file to the actual file
took almost as long as the download itself!

4.x would download directly to the file specified.  If the download was
canceled, it would just clean up that file.  Why do we need to go through the
extra trouble of downloading to an intermediate file?  This is probably a
remnant of starting the file download before a location is chosen, but I don't
believe we do this anymore, do we?

I'm on Windows, so this is definitely not specific to Linux.
*** Bug 107129 has been marked as a duplicate of this bug. ***
It sounds like my problem is probably a dupe of this, so I won't add a new bug
for this behavior.  However:

on Windows 98, build 2001120703 (however, I have noticed this behavior for a
very long time: since at least April), when I am given a dialog box that asks
"Enter name of file to save to..."  no matter what the file extension, here is
what happens:

when I choose a filename and directory and press OK, a dialog appears
indicating that the file is being saved to something like:
"C:\WINDOWS\TEMP\618mgc1k.exe"

when the download completes, the download dialog goes away, but the file is not
copied into the location I requested.  Instead, only the temp directory file
exists.  I then have to go manually move it into my target directory.
Just wanted to add that I've seen this bug under Mac OS X at least since today,
1/11/2002. Rather, until now I had thought downloading files didn't work under
OS X as far back as 11/28/2001, but upon finding this bug exists I can confirm
that the downloads do work, get written to /tmp, but do not show up where the
user (me) has requested the file to be downloaded to.

This is pretty serious as a usability error, as the casual user will not know
to look in a /tmp or /temp folder for their downloads and will think "Mozilla is
broken, it can't download files. Better fire up Internet Explorer"

I do not know (yet) if this error appears in the Mozilla CFM-OS X build, but it
definitely appears in the Mozilla macho-OS X build.
This bug would be easily fixed if the predownloaded file were placed in either
the user defined default directory (bug 7840) or in the last used download
directory. The predownloaded file could have a name like
"MozillaTempDownload_01.exe.moz". Then, as soon as the user select the actual
path and filename he want, the predownloaded file is renamed (and moved, if
necessary) to the user chosen name/location.

This has many benefits:

1) It would allow Mozilla to use predownloading (bug 55690), because the partial
file would be where the user expects it to be and no (restricted) temp folder
(or RAM) would be filled up.

2) If the download fails, the user will know where the partial file is (if
mozilla were to fail to delete it). Maybe, the user could even resume the
download by doubleclicking on the file.

3) Once a Download Manager (bug 75364) is implemented with resumable downloads,
the user could doubleclick on a partially downloaded file (recognizable by the
".moz" extension) to resume the download (assuming both predownloaded files AND
partial downloads have a ".moz" extension).
Blocks: 75364
According to comment #21 and comment #6 I'm adding keywords: dataloss and perf.
Keywords: dataloss, perf
*** Bug 116505 has been marked as a duplicate of this bug. ***
*** Bug 121880 has been marked as a duplicate of this bug. ***
To work around the problem described in bug 121880 I made
~/.mozilla/default/Cache a symbolic link to a directory on another file system.
 This was initially fine but the link has since been removed and replaced with a
directory.

This makes this more serious than I originally thought, since you can't even
conveniently work around it.

(Also I'm unclear why 121880 was marked as a duplicate of this bug, as /tmp is
not involved in any way.)
Me too, I want to report a bug that might be connected to this problem. I have
updated from 0.9.6 to 0.9.7, yesterday (actually, I have removed the old
binaries and installed the new ones): Mozilla/5.0 (X11; U; Linux i686; en-US;
rv:0.9.7) Gecko/20011221.

When I click on a Link that points to a tar.gz then the "what to do/save to"
dialogs are entirely skipped and the file is downloaded to the tmp directory
immediately. Once I tried to copy the downloaded file but the tar was broken so
I never tried this again (instead I tried "save link as" which did not work,
either, and then I got back to konqueror).

One more thing: I have the feeling that in the beginning (after installing)
some download procedures worked, but now I get more and more cases like the one
described above. That sounds a bit unreal, so I rather suppose it depends on the
link? E.g. I can't download the mozilla tar.gz (be it 0.9.7 or 0.9.6) from
www.mozilla.org anymore.

I did not have this problem with 0.9.6.
chantal.ackermann@web.de, chances are the following is happening:  1)  You have
"never ask" set for the gzip mime type and 2) your mailcap file (/etc/mailcap
most likely) has gunzip set as a handler for the type.  Try going to helper app
preferences and clicking the "reset" button to see whether that makes the "what
do I do?" dialog appear.
Boris, you are perfectly right. I checked /etc/mailcap. Thanks for the hint - I
will get 0.9.8 at once. -- Chantal
So... is this still an issue then?
*** Bug 125479 has been marked as a duplicate of this bug. ***
*** Bug 126650 has been marked as a duplicate of this bug. ***
Just a note; my macho mozilla build from 2/19/2002 still downloads mangled files
in /tmp, but my 2/22/2002 build doesn't download at all. It may be unrelated to
this bug per se. Any activity on this front? It's been very pleasant to see the
NSS ported to the Darwin-macho build (yay secure sites!) and would be even more
pleasantly surprised to see this bug get resolved for a 1.0 release...

Unless the download manager is going to get rid of all these issues altogether?
Blocks: 129923
*** Bug 134524 has been marked as a duplicate of this bug. ***
*** Bug 136570 has been marked as a duplicate of this bug. ***
*** Bug 137466 has been marked as a duplicate of this bug. ***
*** Bug 138969 has been marked as a duplicate of this bug. ***
*** Bug 139988 has been marked as a duplicate of this bug. ***
> Downloading to a temp filename first also slows down the download process. 
> Last night I downloaded a 90 MB file and copy from the temp file to the
> actual file took almost as long as the download itself!

In case it's not obvious to everyone how this crazy thing could be happening, I
thought I might as well spell it out: someone is trying to be clever by first
downloading a file to a tmp location; then, when the download is known to be
complete and good, renaming it to its eventual location, thus avoiding
overwriting an existing file of that name until the time when we know we have
the whole file.  That's all well and good, but it doesn't work unless your temp
file and the destination file are on the same partition.  And realistically, the
*only* way to be sure they are on the same partition is to put them in the same
directory.  So if you want to do this, you can't write to "/tmp/foo.tmp" and
then rename to "~/xyz/foo", you have to write to "~/xyz/foo.tmp" and then rename
to "~/xyz/foo".  (Though, on Unix, using a dot-file like "~/xyz/.foo.tmp" would
be more sociable.)
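For the record, a minimal sketch of that same-directory approach (plain
standard C++, not the actual Mozilla code; the function name and path
handling are illustrative only):

#include <cstddef>
#include <cstdio>
#include <string>

// Save downloaded bytes via a dot-file in the *destination* directory.
// rename() is atomic only within one filesystem; staging in /tmp and
// renaming into $HOME fails (EXDEV on POSIX), which is the whole point.
bool SaveViaSameDirTemp(const std::string &destDir,
                        const std::string &name,
                        const char *data, std::size_t len)
{
    std::string tmpPath   = destDir + "/." + name + ".tmp";  // ~/xyz/.foo.tmp
    std::string finalPath = destDir + "/" + name;            // ~/xyz/foo

    FILE *f = std::fopen(tmpPath.c_str(), "wb");
    if (!f)
        return false;
    std::size_t written = std::fwrite(data, 1, len, f);
    if (std::fclose(f) != 0 || written != len) {
        std::remove(tmpPath.c_str());   // drop the partial file
        return false;
    }
    // Same directory => same partition, so the rename never crosses devices.
    if (std::rename(tmpPath.c_str(), finalPath.c_str()) != 0) {
        std::remove(tmpPath.c_str());
        return false;
    }
    return true;
}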
The more sociable hidden-file version is dangerous.  In the event things go awry,
having 90 MB hidden files cluttering up your drive is annoying (where did my
space go?)  However, I agree with the main point, which was that using the
destination folder is a pretty good idea and the only simple way to avoid the
partition issue.
*** Bug 143728 has been marked as a duplicate of this bug. ***
*** Bug 145661 has been marked as a duplicate of this bug. ***
*** Bug 146716 has been marked as a duplicate of this bug. ***
I must agree that this is a dangerous bug.
I just came in to file this as a new bug report, because it didn't even occur to
me that this could be a long-standing problem, and I discovered that not only is
it a longstanding problem... there's apparently no intention of fixing it!

I am... astounded. really.

At the very least there should be an option to disable predownloading until this
bug is fixed.
Hmmm. I changed my "temp" directory in prefs.js and it's ignored. What fun.

Alternate fix: modify download dialog to:

  Attempting to download from [url]. Click here to copy URL to clipboard
  and use a non-broken download agent (wget, Internet Explorer, Netscape 4.x)
  to fetch the file.

   [OK]

if "temp" directory isn't on the same partition as the destination.
*** Bug 151736 has been marked as a duplicate of this bug. ***
*** Bug 151873 has been marked as a duplicate of this bug. ***
*** Bug 152269 has been marked as a duplicate of this bug. ***
*** Bug 152433 has been marked as a duplicate of this bug. ***
Isn't this just a dupe of 55690, rather than blocked by it? There's only one fix
I know of for either bug, and it's the same thing.
*** Bug 133404 has been marked as a duplicate of this bug. ***
Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.0rc2) Gecko/20020510

On this version of Mozilla downloads go into the cache first, which for me is in
my home directory and under quota. This means I can't download anything larger
than my quota, even though I do have enough space in a different filesystem
that's not under quota and is meant to be used exactly for this purpose.

Even worse, when I clicked Cancel, the partially-downloaded file remained in the
cache - using up my limited disk space - and I had to find it and delete it by hand.

Can somebody explain the reasoning behind the current design please?
*** Bug 155240 has been marked as a duplicate of this bug. ***
*** Bug 155991 has been marked as a duplicate of this bug. ***
*** Bug 157895 has been marked as a duplicate of this bug. ***
Blocks: 158464
*** Bug 159683 has been marked as a duplicate of this bug. ***
*** Bug 162009 has been marked as a duplicate of this bug. ***
The suggestion in comment #49 would not work. Often, you have to have a cookie
set to download something. For example, if you want to download Oracle from
www.oracle.com, you need to register and agree to the license first before it
lets you download. Attempting to use a download manager results in it
downloading the login page instead of the file.

Also, simply changing the temp path would definitely not work (i.e. as
suggested in bug 55690). The file should be saved to the user-specified
destination and not anywhere else. My suggestion would be to save the file
with an .INCOMPLETE extension and then rename it to the proper name.
Predownloaded files should be moved to the directory specified in the
"save as" dialog; alternatively, predownloading could be disabled.
*** Bug 162327 has been marked as a duplicate of this bug. ***
Blocks: majorbugs
I can't believe that this bug has been around for 18 months. How did Mozilla 1.0
get released with it in there, let alone 1.1? I find it appalling that I have to
go back to NS4 or Konqueror to download files... I have lots of disk space on
other partitions, why on earth do we go through /tmp.

If there really is a technical reason why it has to be done this way or can't
easily be fixed, can we at least have control over where the temp directory is
located?
This is indeed very annoying. Our situation: Mozilla is running locally,
/home is on remote server. So what happens is :
1. Mozilla sucks file from network, stores on local /tmp
2. In the background, file goes into Cache on /home/.mozilla over slow NFS
3. Mozilla copies local /tmp file over slow NFS to server /home.
 
If /tmp is small, not only the file gets lost, but also the host gets into 
severe trouble, so 'critical' is really needed here.
If /home is small, file gets lost (that's the problem of the user), but
Mozilla could have saved a lot of time downloading it to /tmp just
by checking if space on the target filesystem is sufficient.

Downloading first to a 'secret file' on the same partition as the
target filesystem would be a solution, but only if the file is not
copied, but moved (renamed) to the target name, otherwise 
we would need twice the space (and bandwidth) to copy it, that 
would be stupid, too. The 'secret file' also cannot be too secret, 
otherwise (if things get hosed) the user would not be able to find 
leftovers sucking up the filespace.

We have lived with the old netscape approach (directly store it where 
it belongs) a long time, and those dull and stupid idiots tackling the file 
before it has downloaded completely certainly have died out by natural 
selection over the years. So what is the reasoning of not doing it the 
way it worked for decades ?
I'm looking at the code; you can set your temporary directory through the
environment variable TMPDIR, so that should keep some of us happy for a bit.

Any additional opinions on what the default behavior should be?  Is
$HOME/.mozilla OK?
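(For illustration, the lookup order the summary describes, $TMPDIR then $TMP
then $TEMP then /tmp, amounts to something like the sketch below. This is not
the actual NS_GetSpecialDirectory logic, just a picture of why setting TMPDIR
redirects the staging area.)

#include <cstdlib>
#include <initializer_list>
#include <string>

// Resolve the temp directory in the order the bug summary describes.
std::string ResolveTempDir()
{
    for (const char *var : {"TMPDIR", "TMP", "TEMP"}) {
        if (const char *val = std::getenv(var))
            return val;
    }
    return "/tmp";  // hard-coded Unix fallback
}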
I basically want the file to go directly to the final directory. Rename if
really need be, but do not move it around any more than absolutely needed.

$TMPDIR seems suboptimal to me. I have it set to /tmp/... (lots of space on a
local disk), but most of the time I want the stuff that I download to end up on
my NFS-mounted home directory anyway, so why not put it there? If I really want
to go to /tmp, I'll explicitly tell Mozilla about it.

I do NOT want the file to go into ~/.mozilla by default. For one, because of
this: if the file is to go to /tmp, it should go there without passing over the
network (think shared 10Mb ethernet!) 3 times. In addition, using .mozilla would
effectively completely prohibit me from ever downloading any file that I cannot
fit into my file server disk quota. With the quota policy we have over here,
this is a real issue.

I second the suggestion in comment #67. It describes exactly our situation here,
too, and anything else just does not work very well and makes me glad we have
'wget' installed.
I agree. Any temp file should be saved in the same location that the user
specified the file to be download to in the "File Save" dialog. Anything else is
confusing and misleading for the user. (ESPECIALLY when the user gets an "out of
disk space" error after downloading a file for 20 minutes when they specifically
selected a disk partition that had enough space.)

I believe the reason for the current behavior is so the file can begin
downloading before you have selected a destination.  This can be handy for small
files, where by the time you select the destination the download has finished. 
But it's pretty obnoxious for big files, when the temp partition is space or
quota limited.

One obvious solution is not to start the download until the destination is
selected.  Another solution might be to copy the partial download to the
permanent home as soon as it is selected, and carry on downloading from there,
though it may cause a bit of a "hang" if you wait a long time to select the file
name and it's running over NFS or something.  Yet another would be to disable
this behavior for large files or files larger than the available space, though
that really takes it out of the hands of the user.

Overall, I think I'd like a checkbox to enable this and a temporary directory
selector in the prefs dialog, so I can leave it on in general but turn it off
for .ISOs or whatever.
I think the file should go directly to /some/destination, just where I told
Mozilla to store it to. It is not a temporary file, it's a *download*.
Downloaded files should go straight to where the user specified, under the  
same filename.  If you don't want dumb windows users to click on the file  
before it's downloaded, save it with a special extension (.moz).  You could  
even have some special program that pops up a message box saying "this file is  
not complete yet" and make it handle .moz files.  
  
As for predownloading (before the selection box): it's not necessary.  If you  
really want to keep it, make the browser download it to /tmp and then move the  
file when the user selects and resume the download.  Waiting a few seconds for 
the file to copy is not half as bad as having a 500MB download fail at 200 MB 
because there wasn't enough space in /tmp. 
I have read many debates on this bug and it has many implications.

Currently Mozilla downloads a file (with a random name) into a temporary file
stored in a temporary directory, then moves that file to where the user selected it
to go, under the proper name.

In addition to all the arguments and problems listed here, there are a few
more things that get in the way with this behaviour:

1) while download is in progress, if user looks into the folder where file was
selected to be saved, he sees nothing about that file -> possible confusion on
the part of a less experienced user.

2) if one starts a download with a certain filename, then starts another
download from a different URL, but SAME filename (think mirrors), Mozilla will
happily accept the second filename to be saved and proceeds with the download.
I never waited to see what happens when the second download finishes, because I
stopped it, realizing the stupidity of the situation.

Here's my suggestion on how the proper behaviour should be and why:
These are for at least windows and linux platforms.

right click -> save target as.

1a) immediately pop up the save as dialog for the user to select a destination
location and a filename.
1b) at the same time, in the background, mozilla should, in a separate thread,
connect to the server and start the download (even if the user has not yet decided
on the location or filename, even if download finishes before this) and place
the downloaded data into the cache.
2) user selected filename and location, file should be created at that location
(not in any temp dir, with any random name). If data is already available in the
cache, it should copied to the file and as more data arrives, it should be
placed both into the cache and the file.

Steps above are consistent with Netscape 3.x and 4.x behaviour and personally I
consider them superior to the current behaviour of Mozilla, or of MSIE.

The following problems may arise:
- cache space may become full before user selects filename (on fast lans this is
possible). The background thread that performs the download should initiate a
cache cleanup (also in bg) and continue. If downloaded data exceeds cache space
and no more cleanup is possible, download should pause/stall until user selects
filename.
- if the object already exists in the cache (download requested by file->save as,
right click->save image as) and it is complete, a background thread should start and
check if the file needs to be downloaded again, and proceed with doing so if required
(prefs handling of cache checking, expiration of cache object), regardless of
when the user selects the destination and filename. The rest should proceed as
described above.
- if the object is currently being downloaded (download requested by file->save as,
right click->save image as), after the user selects a filename, mozilla should
mirror the already existing data to the file right away and mirror incoming data
as it arrives.
- the background thread that initiates the download must also start collecting
statistical data about ETA and transfer rate (speed), which will be displayed
after the user selects the filename, in the save as progress window.
 Netscape 3.x and 4.x start collecting these statistics only when the save as
progress window is first displayed and initially may show huge speeds on slow links.

The behaviour described above is what I consider the most efficient, in terms of
fast response of the browser to user requests and of how proper, concise
information is given by the UI back to the user.
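An illustrative sketch of the "mirror the cache into the file, then tee new
data" step from point 2 above (plain C++, not Mozilla code; all names are
hypothetical):

#include <cstddef>
#include <cstdio>
#include <vector>

struct TeeDownload {
    std::vector<char> cache;  // bytes received before a filename was chosen
    FILE *out = nullptr;      // null until the user picks a destination

    // Called once the user selects path + filename: mirror what the
    // cache already holds into the file.
    bool OpenDestination(const char *path) {
        out = std::fopen(path, "wb");
        if (!out)
            return false;
        return std::fwrite(cache.data(), 1, cache.size(), out) == cache.size();
    }

    // Called for each incoming network chunk: keep caching, and also
    // write to the file once one has been chosen.
    bool OnData(const char *data, std::size_t len) {
        cache.insert(cache.end(), data, data + len);
        if (out)
            return std::fwrite(data, 1, len, out) == len;
        return true;
    }
};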
Trying to outsmart the user isn't smart. Pre-downloading is completely against
expected behaviour - this is such a bad thing I can't stress it enough - and
the tmpdir/tmpfile approach causes very real and extremely annoying problems for
many users, advanced and beginners alike. 
I don't understand the whole point of saving a few seconds at all, but if it's
so necessary to some of you developers, PLEASE at least give the rest of us a
prefs checkbox to revert back to the most basic, simple and straightforward
Netscape 4.x behaviour.

PS. On non-Microsoft systems, even incomplete files (even open ones) are
sometimes extremely useful. But surely you didn't already know this, no.

Using copy-link-location & wget is oh so much fun...
Please don't confuse storing a partially completed download in the TEMP
directory with pre-downloading. They are separate issues.

I agree that partially completed downloads should NOT go in the temp directory.
They should go where the user wants the finished file to go. Please also see
comment #25.

Some ideas:

+- Open from / Save to Directory ------------------------+
|                                                        |
| (*) Prompt for download location                       |
|     (*) Default to last download directory             |
|     ( ) Default to [ D:\downloads\         ]  (browse) |
|                                                        |
| ( ) Always download to [ D:\downloads\     ]  (browse) |
|                                                        |
+--------------------------------------------------------+


+- Save As ---------------------------------------------+
|                             ____________________      |
| Recently used directories: |____________________|\/|  | <-- bug 24625  :)
| ----------------------------------------------------- |
|           __________________                          |
| Save in: |_MyDownloads______|\/| [^]  [Favorites]     | <-- bug 115981 :)
| +---------------------------------------------------+ |
| | files (or favorites) listed here                  | |
| |                                                   | |
| |                                                   | |
| |                                                   | |
| |                                                   | |
| +---------------------------------------------------+ |
|                      _______________________________  |
| Predownloading File ||||||||____37_%________________| | <-- this bug :)
|                                                       |
| The predownloaded file will be moved to the path and  |
| filename you select once you click SAVE, or deleted   |
| if you click CANCEL.                                  |
|                                                       |
|                              [ SAVE ]    [CANCEL ]    |
+-------------------------------------------------------+
*** Bug 141790 has been marked as a duplicate of this bug. ***
*** Bug 171906 has been marked as a duplicate of this bug. ***
*** Bug 173681 has been marked as a duplicate of this bug. ***
Keywords: 4xp
*** Bug 174308 has been marked as a duplicate of this bug. ***
There seems to be a lot of debate going on here, but not a great deal of bug
fixing. Given that the most serious problem here IMO is the DATALOSS issue, why
not just check the available disk-space in /tmp before starting the download?

Something along the lines of...

NS_IMETHODIMP nsExternalAppHandler::OnStartRequest(nsIRequest *request,
                                                   nsISupports *aCtxt)
{
  NS_ENSURE_ARG(request);
  nsCOMPtr<nsIChannel> aChannel = do_QueryInterface(request);

  // How much space is free in the temp directory?
  PRInt64 DiskSpace;
  NS_GetSpecialDirectory(NS_OS_TEMP_DIR, getter_AddRefs(mTempFile));
  nsCOMPtr<nsILocalFile> tempdir = do_QueryInterface(mTempFile);
  nsresult result = tempdir->GetDiskSpaceAvailable(&DiskSpace);

  // Get content length.
  if (aChannel)
  {
    aChannel->GetContentLength(&mContentLength);
  }

  // Abort up front if the download cannot possibly fit in the temp dir.
  if (DiskSpace < mContentLength)
  {
    nsAutoString tempFilePath;
    mTempFile->GetPath(tempFilePath);
    SendStatusChange(kLaunchError, NS_ERROR_FILE_NO_DEVICE_SPACE, request,
                     tempFilePath);
    mCanceled = true;
  }
  // ... rest of OnStartRequest as before ...
}

... would at least prevent a user wasting their time trying to download
something which is (eventually) going to fail!
Adrian, your comment #80 almost looks like a patch. Do you know which source file it
should reside in? Adding the HelpWanted keyword in case you don't know.

If you do know, can you attach it as a patch and remove the HelpWanted keyword.
Keywords: helpwanted
I think GetDiskSpaceAvailable has some outstanding bugs that need to be fixed 
before that could go in.  I seem to remember disk quotas not being checked and 
Windows mounting drives into folders causing incorrect values.
Also, now that I think of it, not checking for free space before downloading is 
bug 65770
While GetDiskSpaceAvailable may be buggy, is it better than the current situation?

Also, marking this bug as dependent (NOT duplicate) on bug 65770.
Depends on: 65770
David,

Re: Comment 81:
The file is /uriloader/exthandler/nsExternalHelperAppService.cpp, Line 1161 onwards.

I deliberately didn't upload this as a patch - I don't have the type of in-depth
knowledge of Mozilla or C++ to turn this into something that is "production
quality" (error messages galore in the debug console !).
I cobbled this together just to check it was technically possible!

My main concern was just to get this bug moving forward - this bug has gradually
turned into a debate forum (for understandable reasons) rather than a place for
developers to collaborate in trying to fix an irritating bug.

Re: Comment 82 & Comment 83:
Do you want me to post this as a patch for discussion in bug 65770 instead?
I once posted a suggested fix for bug 106708, and before I could blink I became
the owner of the bug - which is frightening bearing in mind my aforementioned
lack of C++/Mozilla expertise :-)
I REALLY don't want to become the owner of this patch ;-)

Re: Comment 84:
IMO, anything is better than nothing!

I'm a Network Administrator, and I can just imagine the reaction of some of my
non-techie users when they are informed an hour into a download that their
download cannot be completed because C:\windows\temp is full.

1) They will ask "Why is it saving to C:\windows\temp rather than my D Drive"
and I will answer "Because the download hasn't finished yet" (Terry Pratchett
calls this "lies to children").
2) They will ask "Why didn't it tell me there wasn't room before I started
downloading?" to which I will answer "Um, there is no technical reason why it
shouldn't, but it doesn't at the moment!" which is NOT a very satisfactory
answer when you think about it.
I've nominated this, and 65770 for Mozilla 1.2b.
Keywords: mozilla1.2
I've read over this bug a couple times now, and I must say that I really don't
get it.  I apologise if I'm missing something completely basic, but this is the
fundamental issue as I understand it:

Mozilla begins downloading files as soon as possible, and saves them in the
wrong place.

The summary doesn't really match this, but it can be read more liberally to
support it than the summary of bug 55690, which I think should simply be marked
invalid.  And here's why:

The fundamental problem is that you simply don't know where the file is
ultimately going to go, so you pick someplace essentially at random: $TMPDIR,
~/.mozilla/Cache, /tmp, or wherever.  Most likely there is no $TMPDIR, /tmp is
too small or is filling up by some other user, ~/.mozilla is also too small (see
comment 56), etc.  We used to run cron scripts to delete everyone's cache once a
week so that other work could get done.  The bottom line is that there is no
safe directory, no matter what bloat^Wprefs you add.  Comment 25 suggests having
the ability to resume a download, but comment 62 states that even this isn't
always possible, so you're still no better off, with *.moz or otherwise.

So what is the solution?

Don't save the file.

Now, here's where I get laughed at, I'm sure, but just don't do it.  If you're
sitting on your private T1 downloading from your peer, you're getting about
150k/sec.  That's what, 8, let's call it 10, seconds per megabyte?  And surely
if you have your own T1 you can afford $25 for 512MB RAM.  Keep it in memory! 
Allocate 1MB RAM for buffer - that give you 10 seconds to find a place to save
it.  Allocate 10MB RAM - you get well over a minute to find a directory.  If
you're low on memory, you're probably downloading a lot slower as well, and
Mozilla is hardly lean-and-mean on the RAM requirments as it is.  Who's going to
care if it uses another 10MB RAM?  Embedded people can change the download
buffer pref.  If it overflows, let it block.  No big deal ...

Further, as long as you have this in RAM, hitting Cancel can do some sort of
lazy cache save.  Keep it around for garbage collection just in case, and you
even have resumability.  Exit the browser and it's gone because we've already
said we're not saving it anywhere to disk.  Then you don't have to worry about
dead session cookies either.

The simple combination of facts that:
 *) No disk space is guaranteed safe
 *) The only safe place to save it is RAM
 *) RAM is cheap
 *) Swap is cheaper
says to me to leave it in RAM.  Save it to disk when the user tells you where to
put it, and not an instant before.  If they don't like where they put it and hit
cancel and then download it again, resume from where you left off in RAM and
repeat the process.  Otherwise, just flush it to disk and continue the download.

This bug looks to be all over the map and simply out of control.  I hope this
suggestion puts some sanity into the ordeal and doesn't simply add to the
confusion.  I now see that bug 129923 comment 3 even mentions this idea, but for
some reason that I can't see, it's been completely ignored ...
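A hypothetical sketch of the bounded in-RAM buffer suggested above (the class,
names, and cap are illustrative, not an existing Mozilla API): the network
reader appends until the cap is hit, then blocks until the user picks a
destination, letting TCP backpressure stall the transfer.

#include <condition_variable>
#include <cstddef>
#include <mutex>
#include <vector>

class BoundedDownloadBuffer {
public:
    explicit BoundedDownloadBuffer(std::size_t maxBytes) : max_(maxBytes) {}

    // Network thread: blocks once max_ bytes are buffered, so the
    // kernel's TCP window eventually stalls the sender.
    void Append(const char *data, std::size_t len) {
        std::unique_lock<std::mutex> lock(mu_);
        notFull_.wait(lock,
                      [&] { return buf_.size() + len <= max_ || drained_; });
        buf_.insert(buf_.end(), data, data + len);
    }

    // UI thread, once a destination is chosen: hand over the buffered
    // bytes (to be flushed to the chosen file) and unblock the reader.
    std::vector<char> TakeAndUnblock() {
        std::lock_guard<std::mutex> lock(mu_);
        drained_ = true;  // afterwards Append passes straight through
        notFull_.notify_all();
        return std::move(buf_);
    }

private:
    std::mutex mu_;
    std::condition_variable notFull_;
    std::vector<char> buf_;
    std::size_t max_;
    bool drained_ = false;
};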
GetDiskSpaceAvailable, or any such function for that matter, buggy or not, is
fundamentally unreliable due to the underlying OS and its problems.

- On windows boxes with fat32 that experienced a crash and did not run scandisk, 
everybody knows that free disc space reported by the OS can be anything (usually 
much less the actual free space). 
- On windows boxes, at least win98 and win95, a shared drive is reported to be
2GB in size, regardless of what the actual disk size is. If this information is
used...
- one of the most obvious ways to prove this is the following setup:
have a linux box, sharing a folder with samba. the folder resides on a partition 
with very little disk space (even few megs). Samba allows user to follow 
symlinks and a symlink exists in the shared folder to another folder on a 
partition with lot's of disk space. A windows box mapping this shared folder 
will ALWAYS report those few megs as free space, even though if going into the 
symlinked folder, one can write gigs of data...
Any save that will (as it should) check for free space available and compare it 
to the file size, if not enough disk space, should give the user the following 
options: 
- abort save and let the user choose another location, or cancel the save 
completely.
- continue saving (see my previous comment on mapping folders with symlinks to 
other partitions), continuously checking that the amount of data written equals
the amount of data requested to write for each disk-write operation and alert 
the user that save failed due to lack of disk space if these 2 values do not 
match.
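A minimal sketch of that per-write check, using plain stdio for illustration
(not the actual Mozilla stream code):

#include <cstddef>
#include <cstdio>

// Verify each write instead of trusting the reported free space; a short
// write is treated as "disk full" the moment it happens.
bool WriteChecked(FILE *out, const char *data, std::size_t len)
{
    if (std::fwrite(data, 1, len, out) != len)
        return false;               // fewer bytes written than requested
    return std::fflush(out) == 0;   // flush so a shortfall surfaces early
}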
<rant>
I can't quite believe what I've read in Comment 88!

> GetDiskSpaceAvailable, or any such function for this matter, buggy or not, is
> fundamentally unreliable due to underlying OS and it's problems. 

And Mozilla's behaviour now is considered RELIABLE, is it?
Surely GetDiskSpaceAvailable is _more_ reliable than the current behaviour?

If I ran my network this way we'd still be using Dumb Terminals over RS232
connected to our old 4MHz 68000 Unix machine, because "Windows is unreliable".
</rant>

Comment 87:
This is a good idea in theory, but Mozilla needs to set an upper limit after
which it HAS to start writing to _somewhere_ - here's an extreme example:

User clicks on link for REDHAT ISO image and goes to lunch, forgetting about the
Save Box. Result: Mozilla fills up Physical memory and then Virtual Memory. User
comes back after lunch to find his machine has either 1) ground to a halt or 2)
has crashed completely.

Jamie Zawinski makes a good point in Bug 59808, comment 10:
> It seems to me that there is already a well-defined place for Mozilla to write
> large amounts of temporary data: ~/.mozilla/*/Cache/.

Wouldn't this be a better place to store temporary downloads as well?

What about a TEMPDIR setting in prefs.js which can be set to
$HOME/$TEMP/$MOZILLA_CACHE/HARDCODEDPATH?
Rather than "hard coding" the behaviour of Mozilla by insisting on using $TEMP,
this at least would give users a choice!
For the future, the installer could even try to "best-guess" this setting!
Hello All,

Normally I don't download large files; that's why I never noticed this bug.

But yesterday I tried to download an ISO image and it didn't work. It just
took a few seconds to determine what happened. The first thought I had was
"Hmmm.. did I use MSIE or Mozilla ..."
I always ask myself why MS uses this stupid function in MSIE, but I was really
shocked to see Mozilla does this also.
After I read the comments on this bug, I know why a temp file / cache file is
needed, but I also agree with the people who have better solutions than this MSIE
behavior.

I hope you fix this bug really soon and please don't tell me "It's not a bug its
a feature".

Markus



Replying to comment 90:

: Comment 87:
: This is a good idea in theory, but Mozilla needs to set an upper limit after
: which it HAS to start writing to _somewhere_ - here's an extreme example:
: User clicks on link for REDHAT ISO image and goes to lunch, forgetting about
: the Save Box. Result: Mozilla fills up Physical memory and then Virtual

Absolutely.  I need to learn to become more terse.  I wrote in comment 87:

:: Mozilla is hardly lean-and-mean on the RAM requirments as it is.  Who's
:: going to care if it uses another 10MB RAM?  Embedded people can change the
:: download buffer pref.  If it overflows, let it block.  No big deal ...

My understanding of this feature is to not wait the 20 seconds of directory
navigation to start downloading the file.  Allocating 10MB is essentially a
no-op on most OSes until the RAM is actually needed, and there's no problem in
setting a limit (yes, it can even be a pref for Embedders.  Mayhaps 0...)  You
click to download an ISO and go to lunch, it can fill your buffer up to the
maximum you've set and then block until you return.  I don't think this feature
should be (ab)used as described in your scenario.

The key point here is that in this case, the download just blocks at some time,
and then continues when you tell it what to do.  Saving the file anywhere on
disk creates a much worse situation.  Again, see the latter half of comment 56.
 Saving anything to disk before knowing where to put it invites trouble for no
benefit, especially no benefit over keeping it in RAM.

The only concern is how Mozilla is architected in this arena.  Hopefully the
loaduri handler (if that is the correct entrypoint) returns a memory pointer and
not a file descriptor.  (Returning a fd seems like a very bad way to go when
caching is involved, but I really don't know what happens.  I read that there
was some voodoo involved but a cursory glance was unenlightening.)

: Jamie Zawinski makes a good point in Bug 59808, comment 10:
: > It seems to me that there is already a well-defined place for Mozilla to 
: write
: > large amounts of temporary data: ~/.mozilla/*/Cache/.
:
: Wouldn't this be a better place to store temporary downloads as well?

I'm as big a fan of jwz as anyone, but, again, comment 56.  I think there was
another comment along these lines somewhere, but my point is that simply moving
the problem around doesn't solve the problem.

: What about a TEMPDIR setting in prefs.js which can be set to
: $HOME/$TEMP/$MOZILLA_CACHE/HARDCODEDPATH?
: Rather than "hard coding" the behaviour of Mozilla by insisting on using
: $TEMP, this at least would give users a choice!

This I don't understand: instead of $TEMP use $TEMPDIR?  Again, you're not
solving the problem.  Most simply: the problem is saving the file anywhere on
disk before you know where it goes because you may be wrong.  It is more simple
to avoid a problem than solve one.  *Avoid* saving to disk.  Keep it in RAM. 
Add an autodownload-max-buffer-size pref, set reasonably.

: For the future, the installer could even try to "best-guess" this setting!

Here's a thought on reasonability: How do you determine "free" disk space?  Do a
df of a live system with a shared /tmp?  Figure out the current quota for the
current user in $HOME?  Picking some value out of /dev/urandom is probably saner
than any of these.

Or: Figure a download rate of 200k/s and 30 seconds to select a directory.  Set
max-buffer to 6MB.  If you're downloading at 200k/sec, you're probably not
terribly concerned about a few wasted seconds.  If you're downloading at 5k/sec
and it takes you 10 minutes to select a directory, that's only 3MB of data
downloaded.

Which makes more sense to you?
QA Contact: sairuh → petersen
Comment #90: ~/.mozilla/*/cache is NOT a good place to store large files. The
.mozilla directory exists for storing small config files and the cache (which
is configurable), but it is most certainly not the place where large files
should be stored. I have a special directory to which I always download stuff,
and it is on a separate filesystem from /home. Writing to ~/.mozilla/cache is
even worse than writing to /tmp. In fact, this is one of the most idiotic
proposals so far.

Making it configurable could work, but it would be a hassle and would still
not always work (what if you want to save to removable media and don't have
space anywhere else?). Also, it doesn't fix the broken behavior but just
masks it.

How about downloading the first 50 megs or so to RAM/temp directory, then
pausing if the server supports it or cancelling the download if the user still
hasn't specified the filename? It seems to me like very logical behavior and
would not require very drastic changes.

Finally, to people who think GetFreeDiskSpace is not reliable: if people have
a corrupted file system (didn't run scandisk), they shouldn't be downloading
large files. And it seems to work fine in all other cases.
My last word on the subject - I promise!

This is a highly visible and embarrassing bug - 62 Votes and DATALOSS speaks
volumes for me.

Spooling a download to a memory buffer is likely to be a non-trivial task, with
many ramifications (sorry about the pun).
OTOH, as a SHORT-TERM measure giving users the ability to change /tmp to a
user-definable location via a preference IS trivial.

In fact it is so trivial it can even be done in Windows without touching the
Mozilla code at all! Simply manually alter the TMP and TEMP environment
variables, as in the following batch file:

set TEMP=<user defined path>  e.g. D:\
set TMP=<user defined path>  e.g. D:\
"C:\program files\mozilla.org\mozilla\mozilla.exe"

This will only work if a user is starting from scratch - i.e. has not enabled
Quickstart and Mozilla is not already running.
Excuse me, but right NOW I have big trouble downloading ISO images.
Can we expect that, very shortly, Mozilla will wait for the user input and then
download the file into the specified directory?
For sure, it works. Maybe it wastes a bit of time (at least 10 seconds), but
for sure IT WORKS and I can download my linux distributions.
 The performance penalty the user perceives due to this bug is huge for someone
using multiple windows and saving linked files and/or images from these pages, 
constantly switching between them.
 Although I like Mozilla a lot and I use it quite often, I always revert back to 
ns4 when I need the fast response times it has when browsing and saving lots of
data.
Blocks: 157548
Another performance issue is if the download is finished and Mozilla moves the
file from the temp dir to the target location Mozilla does not respond to any
user action and is unusable for that time (even no GUI redraw). This is
noticeable on large files, especially if you move from one partition to another
on the same physical disk or to a network drive. On my old PII 400 system
Mozilla hangs >30 sec for a 100 MB file. Other programs are responsive during
that time.

I don't know if this behavior is only a side effect of the bug or the cause.
Referring to my comment 97, I found the bug for the Mozilla lock during
moving. It's bug 103766.
On comment #97 : 

The apparent lock is more extensive than described in #97:

Scenario: a browser window containing only an image (not even a very large one).
The image has completed loading.
Right click -> save image as.
On a P4, 1.9GHz there is a noticeable delay (at least .5 sec) until the dialog
appears. Why? The image should be in the cache by now.
After I press save, mozilla stops responding completely. Other browser windows,
if present, do not respond if focus is requested. Scroll bars are unmovable. In
short: no response.
After some time (on average 1.5-2 sec on my box), the save as window
appears, but in the first stage only the titlebar and the contour are visible.
After a noticeable delay (.5 sec at least, on average), the save as window gets
painted, and in the progress meter I see the striped blue hash filling, often
drawn broken into several segments.
After the save as window has closed, there is still a noticeable delay until
mozilla starts responding.

- If I request browser window close while the save as window still exists, mozilla
may stop responding for anything between 2 and 20 seconds.
- If I request browser window close immediately after the save as window closed,
mozilla still does not respond for anything between 0.5 and about 5 sec.
- If I request browser window close about 0.5-1 sec after the save as window closed,
there are good chances that mozilla will, by this time, respond.
*** Bug 177810 has been marked as a duplicate of this bug. ***
*** Bug 178428 has been marked as a duplicate of this bug. ***
*** Bug 179780 has been marked as a duplicate of this bug. ***
*** Bug 181196 has been marked as a duplicate of this bug. ***
In addition to the 100+ previous comments, mostly cogent, I recently hit the case
where both the TMP directory & the final location had enough space & the
download still failed after about 1.5 hours for lack of space.  As best I could
determine Mozilla download manager was putting 2 copies of the file in TMP!

If 2 years (minus about 2 months) of discussion can't resolve this basic issue,
just don't start the download till the location & file name are selected. 
Having something that ALWAYS works & in an expected manner is much better than
the current situation.  Checking the file size & space available would be nice,
after the above is implemented.  If the operating system has bugs in the value
it returns for the free space available then it looks to me that trying to fix
this in Mozilla is probably a fruitless task.  Ignoring the file size/space
problem because some of the cases in some of the many operating systems don't
return good values seems to me a mis-ordering of priorities.  In years of
experience, pre-downloading is only occasionally nice.  Mostly, it doesn't make
any difference.  I ended up using IE6 to get the file.  
I see this bug could lead to total rewrites in caching code ;)

So I would add:

You should have some way of handling incomplete downloads of files of any type.
If it is an image, it is a pity when after a page refresh they all disappear and
you can't save half of the image (I'm using dialup).

So:
1. Keep info about the source URL, downloaded size and actual size (if known)
  for every cached item.
2. Allow "continue downloading" in the "Download manager" even if mozilla was
restarted. Do not show "100%" if the file is actually incomplete.
3. This would allow you to implement nice "pre-downloading" - first download to
/tmp, then "pause" it (after the user has selected a path), move the partially
downloaded file to the new location, and "continue" downloading.
4. It will allow you to behave nicely if disk space vanishes suddenly during a
download.
5. Add info about the downloading process to the image "Properties" dialog.
6. (something nice to have) Integrate the "download manager" into "page info" to
see the downloading process of all items on the current page.

Should there be a new bug for all this and the previously discussed items?!
Before filing a new bug report, take a look at the dependents of bug 129923.
Most of the proposed issues are already present in other bug reports.
As said in comment #104
"If 2 years (minus about 2 months) of discussion can't resolve this basic 
issue, [...]"

I've suggested Mozilla to many people, and I know some of them tried it, then 
discarded it because of the horrible behaviour related to downloading and saving 
files.

Somebody, please, put some order into this chaos!
*** Bug 183635 has been marked as a duplicate of this bug. ***
*** Bug 185738 has been marked as a duplicate of this bug. ***
*** Bug 187051 has been marked as a duplicate of this bug. ***
I have only 20MB free in /tmp. This makes it impossible to download anything 
larger than that using Mozilla. I am forced to use MSIE on my laptop for those 
downloads, then FTP them over to my BSD box. My old Netscape Communicator 4.7 
used to handle this perfectly.

Please fix this inane behaviour ASAP. There is absolutely no reason to download 
files first to /tmp and then copy them to the destination. If I wanted my 
browser on my FreeBSD box to work the way Windows works, I'd be running Windows 
on that box :)

This is absolutely intolerable. Fix it, or I'll have to patch my local source 
myself. This is a SEV 1. Is anyone listening??????
Yes, I am listening! I think you are absolutely right. Storing downloads under /tmp
is absolutely not needed and is a BIG minus!
As for comment #111: I can't get your point. If you are able to solve the 
problem on your box, why don't you post a patch here?
*** Bug 189352 has been marked as a duplicate of this bug. ***
*** Bug 189334 has been marked as a duplicate of this bug. ***
I just entered bug 189334 which was just marked a duplicate.  I just wanted to
add one thing to the discussion here, which was the second part of my bug report.

While copying from /tmp to /path_that_I_specified, Mozilla hangs (on Windows
2000) until the copy is done.  I can't use Download Manager to open "Explorer"
for other downloads that have completed, and I can't get back to the Mozilla
window to do more browsing.

If the root cause of this problem is fixed (i.e., Mozilla should download to
where I told it to, not an intermediate location) then the hanging will stop. 
However, if it's decided that it should still copy to /tmp, then perhaps running
the save in a separate thread would be useful in order to avoid the hang.
Ken, you're talking about bug 103766
I tried to understand why this bug is not solved yet. It seems to me that
no satisfactory solution has been found. The main reason: there is a nice
feature of downloading ahead which it would not be nice to sacrifice...

What I have seen recently is that a download manager utility would ask
at installation: where would you like the tmp files to be placed?
It offered a first and second choice...

To me the cleanest solution would be to allow the user to set the path for
tmp files. So it will be possible for the user to place it where it is
suitable according to the given system configuration.

This should work on all OSs and platforms.
> To me the cleanest solution would be to allow the user to set the path for
> tmp files.

You can already do this in your system's setup ($TMPDIR/$TEMP on Unix, %TEMP% on
Windows, etc).

> The main reason: there is a nice
> feature of downloading ahead which would not be nice to sacrifize...

This is not the main reason.  Please read bug 55690 for in-depth discussion of
the real problem -- dropped connections.
I don't think dropped connections are a real problem.  If they were, there would
have been lots of complaints about NS 4.  I would be glad to get the NS 4
behaviour back.
If you read the bug I pointed to, you will see that NS4 maintains the connection
but throttles it.  This is impossible in the current Mozilla architecture. 
That's being worked on, however.
I hate to be so dense, but is there something fundamentally wrong with what I
suggested in comment 87?  Or is this simply the case of "that's a different
problem - file a different bug"?
The fundamental problem there is your assumption that lots of network == lots of
ram.  Think thin clients on a LAN.  100Mb/s download speeds, and maybe 64MB of
RAM.  Not an uncommon scenario.
No no - I understand ... thin client on gigabit downloading an ISO: this thin
client must have access to disk space then, no?  And that would then justify the
assumption that there is swap available ... Let's assume that there is no other
network activity, 80% network efficiency, and the pref called
"Advanced->Cache->Memory Cache" vanishes.  At 80MB/sec, you would fill your
available RAM pretty quickly and adversely impact the machine performance, granted.

However, besides the fact that most "thin clients" run apps on the beefy server
with lots and lots of RAM, I also said that once your memory cache is maxed out,
you block the download.  Just stop reading - TCP will deal with it.  If you're
on a fast network, do you really care that you're losing the difference in time
between how long it takes to select a location and the time to fill the cache? 
Literally, you're talking seconds.

There will still be a delay while the cache is flushed to disk, but this is
minimal compared to the sudden, unexpected, unobvious freeze when an ISO is
copied 10 minutes after you started the download and moved on to watching a
flash movie; it won't seem strange - just that clicking "save" takes a second to
acknowledge; and there's no mysterious "cannot download" messages when you run
out of disk space even though you told it to save to a blank 120GB HDD.
> Just stop reading - TCP will deal with it

That's the big question.  Will it?  People testing my patch in bug 55690 which
does exactly that (you _did_ read the bug as I asked, right?) report dropped
connections.  See comment 119 and 121 of this bug.

This "feature" is _not_ being kept around to make downloading faster.  It's
being kept around for lack of a good way to stall the connection.  And that will
soon not be an issue once bug 176919 is fixed (again, you _did_ read the
dependency list of this bug and the bugs that block it, right?).  At that point,
I plan to turn off this pre-downloading thing.
I'm just going to profess ignorance.  I've been following a few of these related
bugs, and will assume that I misunderstood some very low level aspect of the
problem.  It looks like this is what's happening (if I've read everything
correctly):

You need to know whether to display or save content, which you can't do unless
you click on a link.  Now, you have this data streaming in.  Somewhere (I wasn't
able to find it) someone reported a bug because Mozilla cached to RAM and then
the connection dropped, so "pre-downloading" was "invented".

The holy grail now seems to be async I/O to handle this, while there is no
direct effort (since it's a tangent) to addressing the other listed complaints,
such as that of /tmp (or $TEMP or %TEMP% or ...) being too small to handle a
download and it getting aborted [see comment 0].

If that's the case, it still seems to me that (in this extreme case) letting the
connection drop should be fine, as long as you do something intelligent like
continue the download when the user hits 'save'.  This should work for HTTP, and
FTP should not have been an issue to begin with ... but with my record so far,
I'll just leave everyone be.  Sadly, reading != comprehension.  :-\
> as long as you do something intelligent like continue the download when the user
> hits 'save'

Not if there was POST data... reposting is bad, bad, bad.  This is why we do not
want to drop connections.  ;)

But yes, the basic goal is to have proper limited buffering in RAM combined with
stalling the TCP stream so that we do not have to buffer on disk.  That's what
the async I/O patch does.
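To illustrate the idea (a minimal sketch, not the actual patch; the names and
the 4MB cap are invented here): if the receiver simply stops calling recv()
once a bounded buffer is full, TCP flow control closes the receive window and
the sender stalls until reading resumes.

  #include <sys/socket.h>
  #include <cstdio>
  #include <vector>

  const size_t kBufferLimit = 4 * 1024 * 1024;  // arbitrary 4MB cap

  // Returns false when the connection is closed.
  bool pump(int sock, std::vector<char>& buffer, bool destinationKnown,
            FILE* destination) {
      if (!destinationKnown && buffer.size() >= kBufferLimit)
          return true;  // buffer full: do NOT recv(); the kernel stalls the peer

      char chunk[16384];
      ssize_t n = recv(sock, chunk, sizeof chunk, 0);
      if (n <= 0)
          return false;

      if (destinationKnown) {
          // Flush the backlog first, then the fresh chunk.
          fwrite(buffer.data(), 1, buffer.size(), destination);
          buffer.clear();
          fwrite(chunk, 1, (size_t)n, destination);
      } else {
          buffer.insert(buffer.end(), chunk, chunk + n);
      }
      return true;
  }

Whether real-world servers tolerate being stalled this way is exactly the open
question from bug 55690.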
If the user hits right-click Save-as, none of this should even be an issue.
Mozilla knows beforehand to save, and never needs to buffer anything.
Yeah, and if the user does that, we don't store anything in the temp directory....
I'd give up any form of pre-downloading to have it save in the place I tell it
to, and I'm sure many others would as well. Perhaps an option is in order?

In addition, any form of RAM buffering is a bad idea IMO, as it will affect
Phoenix as well as Mozilla. Phoenix is made to be a lot lighter on RAM usage,
and creating some large buffer for that is the wrong solution.
Could people please do each other a favor and stop making inane comments?

Read the bug.  Especially comment 125.

Boris, will it help to flame back?  You're only accusing people of making inane
comments because you're approaching the problem with a closed mind!

You seem to be saying that the current behavior is because bytes start flowing
on the download before the user selects a destination file, and due to the
current architecture, those bytes must be consumed immediately, thus you need
some place to put them, thus you stuff them in a temp file until the user picks
a new file.

It seems to me there are solutions other than asynchronous I/O.  For example,
you could wait to actually start the bytes flowing until the user picks a file,
or switch over from the temp file to the real file as soon as it is selected.
I'm not flaming... I'm just very tired of repeating the same facts over and over
again....

Fact:  HTTP sends the headers and the start of the data in a single TCP/IP packet
Fact:  We need the headers before we even know whether to put up the dialog,
       since the headers contain the Content-Type and the dialog only comes up
       if we can't handle that type.
Conclusion: We cannot "wait to actually start the bytes flowing until the user
            picks a file", since the bytes are already flowing when we find out
            we need to ask the user.
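
To make that concrete, here is an illustrative first packet of a download
response (header names real, numbers invented) -- the body bytes arrive glued
to the very headers we need to inspect:

  HTTP/1.1 200 OK
  Content-Type: application/octet-stream
  Content-Length: 734003200

  <first bytes of the file body, in this same packet>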

As for moving the file, that is indeed a viable option.  It's discussed at
length in bug 55690, which I'm sure you read (there's a reason I keep telling
people to read it).  It's not a particularly good solution when working with
some filesystems (eg the Mac one).

I didn't call Collin's comment inane because it does not raise good ideas.  It
does.  It's inane because those good ideas have already been raised, discussed
to death, evaluated, sifted through, pored over, etc for years now.
Whiteboard: PLEASE READ BUG 55690 BEFORE COMMENTING
> Conclusion: We cannot "wait to actually start the bytes flowing until the user
>            picks a file", since the bytes are already flowing when we find out
>            we need to ask the user.

Then drop the connection immediately, "lose" the bytes and start over when the
user picks the file location.  It will be much kinder to the remote host when my
ISO image dies at 512MB and I have to reload it all over again using a different
http or ftp agent that saves files where I specify instead of dropping them in
my temp directory.

If you MUST keep this behavior (I know people seem to like it for small
downloads and slower links), just make it an option.  Call it "pre-downloading"
or something and let the users turn the bloody thing off if they so desire.  

If only the minority of people are downloading large files and suffering the
temp fillup (or the obnoxious system freeze during the file move upon
completion), then make "pre-downloading" the default.

"Solving" this problem by manhandling the data (a move is fine on the same the
partition when it's just a file descriptor change, but that idea is totally sunk
when the save location is on another parition) is pretty much doomed to failure.
 So, why bother?
> Then drop the connection immediately, "lose" the bytes and start over

Translation:  "repost the POST data and make that purchase over again".  Right.
 That's exactly what we should do.

John, based on the fact that your comment addresses issues that I explicitly
stated (in comment 125 and comment 131)  will no longer exist soon, I must
surmise that you did not take the time to read this bug before commenting.  Or
even to read the last 5 comments.  Please do read the bug in the future; that
will help you not waste people's time....  I understand that 10 minutes of your
time are more important to you than 1 minute each of 50 other people's time, but
reading the bug before commenting is really considered the polite thing to do in
Bugzilla...
What about the following algorithm?

1) Start DLing into /tmp, just as now
2) When, a few minutes later, the user finally picks the final file location (xxx
bytes are already downloaded), do new_file=fopen(newpath), fseek(new_file,xxx)
and start writing the data from the HTTP stream right there;
3) in background, overwrite the first xxx bytes of the target file with those
from the file in /tmp
4) erase the file in /tmp.

That would work, if we had a way to fseek.  Unfortunately, it seems that fseek
is not sufficiently portable, so code that uses it directly is not allowed in
high-level parts of Mozilla (like the code in question).
Though... nsISeekableStream may do what you want
How would this work with download resuming?  Do we have something that can
remember that we need to copy bytes from the partial fragment in addition to the
actual download location?
Let me illustrate it:

Step 1: link is clicked, data is being loaded
/tmp/tempfile:             |bbbbbbbbbb|
                                      ^ stream "pointer" for HTTP data

Step 2: New location has just been picked:
/tmp/tempfile:             |bbbbbbbbbb|
/final/location/file.dat:  |0000000000|
                                      ^ HTTP-in

Step 3: One thread keeps processing the HTTP stream, the other one 
        has just started data relocation
/tmp/tempfile:             |bbbbbbbbbb|
/final/location/file.dat:  |0000000000BBBBBBB|
          "reloc" "pointer" ^                ^ HTTP-in

Step 4: Random snapshot of the process
/tmp/tempfile:             |bbbbbbbbbb|
/final/location/file.dat:  |bbbbbb0000BBBBBBBBBBBBBB|
                          "reloc" ^                 ^ HTTP

Step 5: Relocation complete
/tmp/tempfile:             |bbbbbbbbbb|
/final/location/file.dat:  |bbbbbbbbbbBBBBBBBBBBBBBBBBBBBBB|
                                                           ^ HTTP

Step 6: Temp-file erased
/final/location/file.dat:  |bbbbbbbbbbBBBBBBBBBBBBBBBBBBBBBBBBBB|
                                                                ^ HTTP


So, if download is interrupted _before_ step 2 starts, then we resume the
downloading of tempfile. If it's interrupted _after_ step 2, we resume the
"final" file. "Relocation" thread is not supposed to be interrupted ever; if it
is (due to emergency conditions), it should just destroy the target file (yes we
lose data, but we're SUPPOSED to lose data in emergency after all, or it
wouldn't be an emergency). For instance, we can have that "autopurge-flag" on
the "final' file setuntil step 5, so it will be automatically erased by OS after
the crash.
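For illustration only, here is a minimal sketch of that relocation step in
plain C-style C++ (the function name and buffer size are invented; real code
would need locking between the HTTP thread and this one, and Mozilla proper
would go through nsISeekableStream rather than raw fseek):

  #include <algorithm>
  #include <cstdio>

  // Copies the first prefixLen bytes of tmpPath over the zero-filled
  // front of destPath (steps 3-5 above).  The live HTTP thread is
  // assumed to already be appending to destPath at offsets >= prefixLen.
  bool relocatePrefix(const char* tmpPath, const char* destPath, long prefixLen) {
      FILE* tmp  = std::fopen(tmpPath, "rb");
      FILE* dest = std::fopen(destPath, "r+b");  // existing file, write in place
      if (!tmp || !dest) {
          if (tmp)  std::fclose(tmp);
          if (dest) std::fclose(dest);
          return false;
      }
      char buf[64 * 1024];
      long copied = 0;
      bool ok = true;
      while (ok && copied < prefixLen) {
          size_t want = std::min<long>(sizeof buf, prefixLen - copied);
          size_t got  = std::fread(buf, 1, want, tmp);
          if (got == 0) { ok = false; break; }      // short temp file: give up
          std::fseek(dest, copied, SEEK_SET);       // the portability sore spot
          ok = (std::fwrite(buf, 1, got, dest) == got);
          copied += (long)got;
      }
      std::fclose(tmp);
      std::fclose(dest);
      if (ok) std::remove(tmpPath);                 // step 6: erase the temp file
      return ok;
  }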
What about ditching this 'relocation'?
Just merge the files after completing the download.

This would help get rid of nasty problems if the 'reloc thread' got
interrupted... and the file could be reassembled on the next download attempt.
Additionally, a pref to control maximum preload size, preload dir (cache, tmp,
something else) and reassembly.
What about disabling preload completely on POST requests, and just resuming?
In response to comment 141:

Why relocation should not be ditched:

1) "merging parts" of size X (on "tmp" drive) and size Y (on dest drive) will
require at least (X+Y) ADDITIONAL free space after the downloading, totaling at
(X+2*Y), because no OS known to me has means for a programmer to point at two
parts of a file and say "concatenate these without hauling data back and forth".
Thus, you'll have to concatenate them by writing them byte-by-byte to the output
stream, which will also be slower, because proposed "relocation thread" has to
process only X bytes, and your "concatenation function" should process X+Y
bytes. Required free space is a very serious issue, because "Why I cant DL a 1Gb
file if I have 1.1Gb free?" is a common complain from Mozilla users.

2) With "relocation thread" you need to store only one path at a time. With
"merging", you have to store two (tmp-file and dest-file) until the very end of
download, because you'll need them at that moment to do the merging. With
relocation thread, you only remember "temp-path" BEFORE the final location is
picked, then pass both temp-path and dest-path to the relocation thread and
from then on remember only dest-path.
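To put numbers on that (invented for illustration): with X = 200MB already in
/tmp and Y = 800MB already at the destination, a byte-by-byte merge needs
another X+Y = 1000MB free while both the parts and the concatenation exist, so
the destination drive must briefly hold Y + (X+Y) = 1.8GB for a 1GB download.
The relocation thread, by contrast, only rewrites the first 200MB in place and
needs no extra space at all.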
See bug 55690 comment 238.  All the proposed solutions so far are failing to
solve that problem, which makes them unacceptable.
From reading through this bug and bug 55690, it seems like the real problem 
that is keeping these bugs from being fixed is that people disagree about 
the right solution. One camp says we should wait to open a file to save 
into until they pick the file/path, risking timing out the connection if 
they're really really slow about it. The other camp says we've got to keep 
he data flowing, and need a file to put it into until they choose, even if 
it means that this might prevent them from downloading the whole file 
after they waste a lot of time filling up their /tmp partition.

Both solutions are valid, both have risks, and different people would 
rather take different risks. So make it a pref and be done with it. Both of
those solutions are easy to implement (first one just waits to do something 
it already does, second one keeps going like it does now). Making a pref 
and having both of those options available would seem to let pretty much
everyone get their gripes resolved.

Regarding Boris' comment 125, that says that just not reading from the
connection while the user chooses a file causes the connection to drop: Is there
more information available about those drops? Any reasonable tcp or server
timeout should be greater than the normal time to choose a location for a file
(5-15 seconds).

Personally, I feel that if we make it a pref, slow users can choose to have it
the way it is now, and users who download files bigger than their /tmp can
choose to not start writing the file until they've chosen its location. A user
can even switch back and forth depending on what we're doing. If people have a
pref available, and a connection drops because they took too long, it is their
problem.

And it would at least be some form of progress on these two bugs that are over 2
years old. Not doing anything because there isn't a perfect solution is lame.
Giving people a choice of which solution is better for them is progress.

Also, will somebody change the status from NEW to ASSIGNED? This isn't a new bug.
> Is there more information available about those drops?

You know everything I do -- it's all in bug 55690.  I've never been able to
reproduce the drops myself, so....

ASSIGNED means someone is working on it.  That would be a lie.  No one is, since
no one seems to care to actually DO something (and since I don't have the time
to work on this until the school year is over).

Please see my comments in bug 55690; repeating the same comments in multiple
bugs is bad form....
I just went through this mess trying to download some RH ISOs. Why don't we
simply put a check box on the save dialog that pops up after the download starts
that says: "don't use tmp"? If it is checked just start the download over
bypassing tmp. Using 1.3b

Konq has the same problem. I finally resorted to gftp to get the files.
*** Bug 192357 has been marked as a duplicate of this bug. ***
*** Bug 195455 has been marked as a duplicate of this bug. ***
*** Bug 196038 has been marked as a duplicate of this bug. ***
Imho (and for a start), having some properly documented entry in prefs.js
would be enough for me, so that I know how I can adjust Mozilla in this
respect and eg. choose a "standard" partition where I "always" have enough
space. I'd go for "vi prefs.js" to adjust it if only I knew how.

In the longer run, one might be interested in having alternative ways to
deal with it, like other commentators outlined:
- start downloading somewhere and move/overwrite when the user has picked
  a different location (sounds fragile to me - who guarantees that the
  computer doesn't crash while in the process of overwriting the first few
  megs? how do we have two independent file descriptors/handles/streams
  that allow independent positioning and writing?)
- have some UI and a pref that the user can select to say "start downloading
  no earlier than I've selected the target location"
- ...
Keywords: mozilla1.2, perf
Summary: Downloads are stored in /tmp regardless of path selection → Downloads are stored in $TMPDIR|$TMP|$TEMP|/tmp first and moved to the selected path after the download finishes
Two years old.
How's it going?
*** Bug 203564 has been marked as a duplicate of this bug. ***
*** Bug 204027 has been marked as a duplicate of this bug. ***
I read the comments, and I could suggest three options, easy to implement and
that would not modify the general behaviour of the program:
 1. Most of the complaints come from downloading large files, which turn out
most of the time to be iso images or installation programs.
  ASK the user to set a list of extensions and/or mime-types that should
always be "downloaded" to a file. For example, extensions such as .iso .exe
.tgz .zip .tar .bin could be a "preset list", with corresponding mime-types.
When such files are found, the download is not started until the user has
selected the file to write.
 2. On the right-click menu, add beside "save link as..." also "download as...".
The difference would be that in the latter case mozilla will pop up the file
selection window, and after having received the filename would launch a download
directly into that file, without any caching or temporary file.
 3. The two combined ....
Re: comment 154: 'On the right-click menu, add beside "save link as..." also
"download as...".
The difference would be that in the latter case mozilla will pop up the file
selection window, and after having received the filename would launch a download
directly into that file, without any caching or temporary file.'

This is what "save link as..." already does: comment 129

To my mind this is the whole point;  I had tremendous problems caused by this
"bug" until I learned that left-clicking a link is the wrong way to download a
big file.  It is conceptually simple to grasp that clicking a link means "load
this into mozilla", giving you a chance to save it afterwards, and that "save
link as" means "save this directly from the remote site to where I specify". 
However, it is not well reported or widely understood.

If mozilla could detect that a download had failed due to lack of space, and pop
up an error pointing out the distinction between "save link" and normal
left-click, that might be the best solution.  Maybe then a user would only have
the problem once.

The file extension idea would help in the majority of cases, but it could be a
real problem in cases where a normal html page has a weird extension.
I have been bitten by this problem a few times myself.
This needs a fix in terms of user-configurable behavior ASAP, IMHO.

One thing missing from the discussion, or at least only partially
touched on by it, is the failure of mozilla to use the
"resume" feature of ftp transfers. (Sure, not all ftp daemons
support this, but many do today.)

Since mozilla as of now doesn't seem to CARE what the user
specifies for the final filename (and there is no relation
to the temporary filename used by mozilla for initial storage),
mozilla has not offered me, for quite a while,
the use of the re-get or resume
feature of ftp transfers even if the ftp daemon would
support it. It simply starts transferring
from the beginning of the file if I am trying
to transfer a large file which I tried to copy and failed
in a previous attempt, even though a large partial file is already on my disk.

This is a waste of bandwidth, and user time.

So my idea for a fix, from the end user perspective, is

 1. Restore the old Netscape behavior in terms of the filename
     and where the intermediate storage is (that is, identical).

 2. For those adventurous types, offer the use of temporary files
     or whatever for IMMEDIATE download as soon as the download
     is chosen, as is done by mozilla now.

And orthogonal to the above fix (offering a selection of behavior),
make sure ftp's resume feature is properly implemented and
offered if applicable! (Especially for users who choose behavior 1
above in the first place.)

Transferring big files whose URL is
ONLY offered via some clever web CGI interface after
I hit a button accepting some agreements or so makes
the web browser important as an ftp agent, but
the lacking resume feature kills me when I need to work over
a 64kbps connection.

(Comment #37 of Bug 55690 rings a bell.)

PS: BTW, when was the last time the ftp resume feature actually worked?
It is a pity that it doesn't seem to work any more.
Looking through bugzilla, I found bug 128359.
It seems that the ftp resume feature has not worked for
more than a year or so.
I wish it were implemented correctly.

Maybe the "optimization" discussed here, like the
use of random temporary files instead of the user-chosen
file for intermediate storage, might have
invalidated some implicit assumption on which the ftp resume
feature was built in mozilla before.
So whoever works on this bug 69938 ought to consider that
the reworking in the past might have broken this important
ftp feature in the process, and might break it again.
I'd like to add that some other programs have found ways to show that the
download is as of yet incomplete by using a predictable temporary file name.
Rsync springs to mind. The comment by ishikawa@yk.rim.or.jp seems very valid to
me, although implementing the "full size" solution in comment #76 may be too
much. Perhaps, for a start, one should be able to adjust the behaviour in the
preferences dialogue like this:

Have one button to enable/disable pre-download with the following behaviour:

- If pre-download is on, download to "MOZILLA_FIVE_PREDOWNLOAD" or so, or to
  TEMP if the former is undefined, or to /tmp if both are undefined.
- If pre-download is off, download directly to the target location, probably
  using a temporary file name like rsync.
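If something like this were adopted, setup could be as simple as (the
MOZILLA_FIVE_PREDOWNLOAD variable is this comment's proposal, not anything
mozilla actually reads today):

  export MOZILLA_FIVE_PREDOWNLOAD=/big/scratch   # pre-download staging area
  export TEMP=/var/tmp                           # fallback if the above is unset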
*** Bug 206132 has been marked as a duplicate of this bug. ***
Let me sketch it briefly:

p1:
show a [Save As...] dialog as before;

if (exists(file)) {
    // do not show an (Overwrite? Yes/No) dialog,
    // but you may show a dialog asking [Restart/Resume/Cancel]
    switch (choice) {
        case RESUME:  resume();  break;
        case RESTART: get();     break;
        case CANCEL:  goto p1;   // I'm not a programmer :-)
    }
} else {
    get();  // use the selected path
}
IMHO the best implementation would be the following:

1. Flow starts, and the file is downloaded into $TEMP
2. Meanwhile, the user picks a location in the SaveAs dialog
3. The partially downloaded $TEMP file is closed,
4. The partially downloaded $TEMP file is copied to the final destination
  (note that if something goes wrong now, e.g. write permissions, the user gets
   an immediate notification as a consequence of the user interaction performed)
5. The final file is reopened in append mode and the download flow redirected there

How does the nasty IE actually handle this ?
Any suggestion ?
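For what it's worth, steps 3-5 can be sketched in a few lines of modern C++
(the function name is invented here; this illustrates the idea and is not
actual Mozilla code - a real implementation would also have to cope with the
download continuing while the copy runs):

  #include <cstdio>
  #include <filesystem>

  // Returns a FILE* positioned at the end of the copied prefix, ready for
  // the download to keep appending, or nullptr on failure.
  FILE* moveAndReopen(const std::filesystem::path& tmpFile,
                      const std::filesystem::path& finalFile) {
      std::error_code ec;
      // Step 4: copy what we have so far to the user-chosen location.
      // Any permission or disk-space problem surfaces right here, while
      // the user is still looking at the dialog.
      std::filesystem::copy_file(tmpFile, finalFile,
          std::filesystem::copy_options::overwrite_existing, ec);
      if (ec) return nullptr;
      std::filesystem::remove(tmpFile, ec);
      // Step 5: reopen in append mode and redirect the stream here.
      return std::fopen(finalFile.string().c_str(), "ab");
  }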

The suggestion in comment 160 may not always be the best option...
If you are on a fast link you could actually get a lot of data while looking
for a place to store the file.
In the meantime you could hog the server for a file that may not be of any
use [say: you discover that you have another copy of the same file!].
Useful options:
 1. set a parameter for the maximum quantity of data before the actual filename
is chosen
 2. set a parameter for the maximum speed at which data is retrieved before the
filename is chosen
(in both cases, if you are on a fast link, in the worst case you wait just
a fraction of the time you spent setting the name; if you are on a slow link,
neither option would go into effect.)
*** Bug 207644 has been marked as a duplicate of this bug. ***
Attached file browser.download.dir
Just saw a download fail after half an hour - /tmp filled up, where of course
I had requested download to a place with lots of room.
Searching earlier reports I see that hundreds, probably thousands, had the same
experience. Unbelievable that default Mozilla still cannot download. But something
that I have not seen mentioned: removing the line
 user_pref("browser.download.dir", "/tmp");
from prefs.js fixes this problem for me (build 2003050714).
It is appalling that this hasn't been fixed after such a long time!

Bill Law hasn't even commented on this bug since 2001-05-02, the only comment he
has given, btw.

Anyway, I'm setting the tmp variable to the same drive as I usually download
stuff to; that way no copying is done after the download, I think.

But why is the file *also* saved in the disk-cache?

It wipes out the cache and wastes space, it's annoying to put it mildly.
What version are you using?

I have 1.4 beta and

browser.download.progressDnldDialog.keepAlive = false
 (see Location: "about:config")

but really it is in browser cache.

Maybe there should be some option about max cache size? Or a switch in the
download SaveAs dialog - cache or not?
methinks that downloads should generally bypass the cache
There is no point whatsoever in caching downloads, downloading already is
storing objects into a cache, only manually and by choice. This, though, has
nothing to do with Feature 69938 (also mislabeled as Feature 55690). 
But what's going on with the legendary Feature 69938? Nothing, it seems. It's
not a bug at all, but Oh Boy is there a problem, YES! Hardware: all, actually.
OS: all. Severity: Critical. Bug report filed in February 2001. 105 votes.

I still argue that Feature 69938 has everything to do with predownloading, since
using a temporary directory/filename is justifiable only if downloading is
started before the target directory/filename is known. Thus, the problem is
efficiently dealt with by giving users the choice and documenting the behaviour
(in both cases) well enough. The problem needs to be disposed of NOW, not in the
Target Milestone: Future. The predownloading scheme with its infinite problems
can be perfected LATER! Just as the world may be perfected later. 
Perfectionism comes with an infinitely high price tag.

When starting a download
...
( ) Prompt for the target location first, then start downloading into it.
(x) Immediately start downloading (to $TMPDIR) in background, then prompt for
    the location to which the file will be moved when the download is complete.

(Additionally, a possibility to choose the temporary directory wouldn't hurt.)


"Beyond your tunnel vision reality fades, like shadows into the night..."
                                              -- Gilmour/Samson, Lost For Words
PLEASE READ ALL THE COMMENTS BEFORE SPOUTING OFF.  The problem is NOT due to the lack of 
some dialog box, but with HTTP 1.1 and already-open connections.
I don't know much about the Moz architecture, but I would like to re-advocate my
previous statements in comment 87.  I didn't realize that these downloads were
invalidating the cache, and if that is indeed the case, I have a possible
suggested implementation, which is basically to push all this stuff into the
cache layer.  Would it be possible, then for Mozilla to operate in two "modes"?
 In mode 1, the user right-clicks and selects save, or shift-clicks, and nothing
is downloaded until the destination is specified.  In mode 2, where we don't
know what to do until we have already begun a transfer or are using keepalive,
we do what I suggest in comment 87 by pushing the functionality into the cache.
 In this case, for each new http connection or request, we request a 'cache
object' from the cache manager.  The cache manager handles any size constraints
or reallocations needed during the download.  At some point, we can call
"commit()" or "persist()" on the object.  If no name is specified, the cache
manager handles it internally in the cache.  If a name is specified, the file
should be saved there (or return an error).  If "close()" is called without a
commit, the object data is not saved and does not count against memory or
persistent cache limits.  [Or, this could live outside the cache altogether and
when we want to persist the object we simply hand it off as a parameter to a
cache object constructor, etc...]

Does this seem a reasonable option?
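As a rough sketch of what such an interface might look like (names taken from
the proposal above; nothing like this is claimed to exist in the tree):

  #include <cstddef>

  // Hypothetical interface for the proposed "cache object".
  class DownloadCacheObject {
  public:
      virtual ~DownloadCacheObject() = default;
      // Append bytes as they arrive; the cache manager handles any size
      // constraints or reallocations behind this call.
      virtual bool Write(const char* data, std::size_t len) = 0;
      // Keep the data: internally in the cache if path is null, otherwise
      // persisted to the given file (which may fail).
      virtual bool Commit(const char* path /* or nullptr */) = 0;
      // Discard without counting against memory or disk cache limits.
      virtual void Close() = 0;
  };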
Okay. After rereading (yet again) this bug and Bug 55690, as well as Bug 65770
(the other one this one depends on), I'm going to try and say something useful.

First, in response to comment #168, and [many] others like it in these bugs, I
want to say something about choices and consequences. The comments that shoot
down many of the proposed solutions in these bugs keep saying things like "we
can't do it that way because the connection is already open and we can't stall
it or it will drop". Do you know how long it takes for a connection to drop
because data stopped flowing on it? In 99.9%+ of cases, it has got to be at
least a minute. And anyone who takes that long to choose a place to save a file
will just have to learn to deal with a dropped connection here or there. 

We have the choice to _fix_ this bug, in any of the simple ways that have been
proposed, and even some of the patches that have been made (for this and for Bug
55690), and the consequences of that are that some people (a few, at most) may
occasionally experience a dropped connection. On the other hand, we could choose
to continue _not_ fixing these bugs, with the consequences being that lots of us
will _continue_ to keep suffering with these ancient bugs. Which of these
consequences would we rather live with? (Please, let the keywords "critical" and
"dataloss" influence your choice!)

I bet the "dropped connection" problem can even be solved, at least to a degree,
simply by giving the user some _choices_, and letting them choose their own
consequences.

If everyone keeps shooting down every single proposal because it isn't perfect,
then I think this bug will live on for at least another 2.5 years without being
fixed, because a "perfect" solution will be hard to find. Please, let's only
shoot down proposals that would be _worse_ than the bug.

There are plenty of ideas in these bugs on how to fix them. Many of them are
very reasonable. Let's just pick the best one we can (or the best three, and
make it a preference!), and do it. I'm not the first person to point out the
~150 votes for Bug 69938 and Bug 55690, nor the nearly 5 years of combined
existence of these bugs. I'm also not the first one to point out that even NS4
did this better than the current version of Mozilla, which frankly, is totally
embarrassing.

Again, we don't need a _perfect_ solution to these bugs, just _anything_ better
than what we've got now.
RE: Comment 170 -- Amen!
> I'm also not the first one to point out that even NS4
> did this better than the current version of Mozilla,
> which frankly, is totally embarrassing.

I'm still keeping NS4 about so that I can use it
when I need to download ISOs and the like. I can't
even start to express how lame it is that this bug
keeps being knocked forward.
bill is not currently working on this bug.  
Assignee: law → nobody
May I add that I think that the least we could do is minimize the chance of
hitting a situation where this bug surfaces.

When a link has a type attribute, mozilla ignores it and contacts the server
to find out that the data should be saved - something it could have detected
from the type in the anchor.

Example:
From the link:
  <a href="http://localhost/my-zip.zip" type="application/x-zip">
mozilla can conclude that a download is going to follow and should offer the
download dialog immediately, just as if the right mouse click and "Save Link
Target As" had been selected.

Instead, mozilla contacts localhost to find out that the incomming data has
content-type application/x-zip and hits the problems described in this bug.
First: If no-one is working on this bug, who is going to take over for this? 
Who can and is willing to take this on?

Second: Whatever happened to the idea of simply pre-downloading to the
destination directory and not $TMP?
*** Bug 212167 has been marked as a duplicate of this bug. ***
*** Bug 212170 has been marked as a duplicate of this bug. ***
*** Bug 212173 has been marked as a duplicate of this bug. ***
*** Bug 216400 has been marked as a duplicate of this bug. ***
Several people do not notice this problem since they seldom download big files.
But a good example is downloading OpenOffice 1.1. For Windows, it takes 65428253
bytes, and it is zipped. I also remember StarOffice to be 95 Mbyte big...
I've been using Safari now for a while... though I keep making builds of Firebird, Mozilla, and 
Camino at random intervals, and by request (if you need a copy über optimized for OS X, look <a 
href="http://homepage.mac.com/michael2k/FileSharing9.html">here</a>.) and I noticed 
something about Safari...

It doesn't have any cache settings, possibly under the assumption that the user of such a product, 
unlike Photoshop or Final Cut Express, need not worry about cache location, cache size, and cache 
speed.

Instead, I have observed, Safari uses ram and relies on the OS being intelligent enough to cache 
pages out to disk, as part of the virtual memory subsystem. I notice this because occasionally I will 
see 2gb of swapfiles in my swap partition... Which all go away as soon as I restart Safari, or close 
one of 15 pages... or so.

As for the file download, I think it buffers some of it in memory, and places the file in the 'default 
location', which *is* a preference. Again, it relies on the robustness of the memory subsystem to 
be sufficient to handle any file of any size.

So I suppose my comment on cache would need to be directed elsewhere, but why not have a file 
saved to buffer in memory, and store it there until the user can make a decision where to put it? 
And if nothing else, put it at a user defined 'default' like the desktop... I know I've read similar 
comments like this over a year ago, here, but I figure it's time to resuggest what I thought was the 
most useful suggestion I had heard. The Safari anecdote is only an example of a system where 
such a system was implemented, and successfully. Of course the real problem seems to be that no 
one is assigned to this bug, anymore...
A little OT, but I have my browser.cache.memory.capacity set to just 3000 to
rely on the OS (Linux) to cache files. It helps reduce waste of memory.
*** Bug 217143 has been marked as a duplicate of this bug. ***
I recently submitted the SAME bug for Windows. The temporary file is stored in
the directory given by the %temp% environment variable on Windows 98/ME. But this
Windows temporary directory may be located on a disk with insufficient available
space.

Therefore, the bug is NOT an implementation problem under Linux, but a bug in
the conception of the Downloading process, whatever the system.
+1 Vote to fix it
*** Bug 219260 has been marked as a duplicate of this bug. ***
*** Bug 219672 has been marked as a duplicate of this bug. ***
I've resorted to using the program wget to download large files.
It will save the file using the underlying URL of the link, which can
be a problem with filenames kept with long random numbers to hide them from
crackers, but otherwise it writes the file to disk in small chunks, which
is what people here are suggesting Mozilla do.
So this is a workaround suggestion until somebody steps up and fixes
this bug/feature.
*** Bug 223522 has been marked as a duplicate of this bug. ***
*** Bug 225378 has been marked as a duplicate of this bug. ***
*** Bug 226829 has been marked as a duplicate of this bug. ***
This really needs a fix now. There's an additional problem with the workaround
'right click-> save as...': A lot of websites now present downloads via a php
script where you just can't right click->save. So left click->(do something else
while mozilla downloads,copies, and blocks...) is the only choice there, with all
the implications given here. I think the predownloading stuff should be switched
off entirely for downloads; it does not save much and overly complicates the
situation to get a quick fix here.
We get hurt here by this problem every day because we must download large amounts
of data which is only PHP-provided, therefore we must use Netscape 4.x to get
on with our work in due time.
This is the last of the most visible and most annoying bugs, esp. since
it could be fixed easily if we just would go back one step to keep things 
simple and easier to maintain.
The same problem here. I have a memory disk for /tmp and it's really annoying
that Firebird saves files in /tmp instead of my destination directory. Many
users only have a 256MB partition for /tmp. Or they have no special partition
for /tmp and Firebird fills up /. (I know, my English isn't the best. <: )

The second problem is that every file must be moved from /tmp to the
destination. That's really funny, if you want to download a 300MB file.

Bear with me if this has already been mentioned here, I've really tried hard to
read everything, but I may have overlooked it.
There's an additional problem related to this bug. Mozilla actually uses *twice*
the size of any downloaded file, because it will create it in tmp *and* in the
cache, no matter what size your cache is set to.
So if you try to download a file of 500MB, and have "only" 900MB free on your HD
(assuming that Mozilla's cache and tmp sit on the same drive), you'll be unable
to download the file completely, as it will fill up the disk with 1GB (twice the
downloaded file). 
What a waste of resources.
RE: comment 196: That is Bug 55307 "Downloads are stored in the cache first".
IMHO these two bugs should be one, along with all the others dealing with the
location of the download files.
AND to voice my opinion, the downloaded files should be stored immediately, and
*only* in the user-specified target directory, nowhere else. To avoid the user
trying to use the downloading file before it's complete, just give it a temp.
name and rename it when done. Though frankly, this is IMHO a non-issue, and
could as well be ignored completely. A user that tries to use a file that is
still downloading deserves getting an error, and not some artificial workaround
for their stupidity. :-)
> A user that tries to use a file that is still downloading 
> deserves getting an error, and not some artificial workaround
> for their stupidity. :-)

Using a file which is not yet complete isn't always stupidity. For example, I
could want to start listening to a music file before it is completely downloaded.

More importantly, we should ask the user before removing incompletely
downloaded files, if the download happens to fail or gets cancelled by the user
- it could be possible to continue downloading that file using other utilities
(eg. "wget").
The natural solution to me seems to be extended cache options plus an extra
download option:
The Cache should have additional options like "Don't cache objects smaller than
..", and "Don't cache objects larger than ..". That should be rather easy to
implement.
The download dialog should have an additional check box "bypass cache (don't
store in cache)". That may be a little bit more difficult to implement.
And, of course, the directory that is "/tmp" should be configurable per user or
profile.
Doesn't that sound as simple as it is reasonable?
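Expressed as prefs (names invented here for illustration; none of these are
claimed to exist), that might look like:

  user_pref("browser.cache.min_object_size", 4);        // KB, hypothetical
  user_pref("browser.cache.max_object_size", 51200);    // KB, hypothetical
  user_pref("browser.download.bypass_cache", true);     // hypothetical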
Referring to the comment 199 about "use of file before download is complete":
Use of hardcoded directory "/tmp" is different from using a random temporary
filename for download. Mozilla could rename such a temporary file after download
is complete. A temporary name makes it harder for anybody to use incomplete
files. However, truly random temporary names might require a temporary directory.
I'm not voting for keeping /tmp however.
Please read the topic of this bug carefully. It's about NOT having anything
downloaded to /tmp or whatever first. And, as a side requirement, not to 
'rename' (which always means a time consuming, blocking copy for practical
reasons) anything, but to directly store it to where the user wants it to
be stored. The request is to remove ANY intermediate steps to stop the
resource wasting Mozilla currently does (I hope I got this right, Andrew ?).
It should be noted that in selecting where to temporarily place the downloading
files it DOES NOT follow the explicitly defined Set TEMP= or Set TMP=
environment variables.  I have explicitly reserved a 4GB partition for swap and
temp in order to resolve these issues on the overused C drive.
Your tailor is rich :)

4GB - I prefer to install software or create my own documents in such space. You
can waste 4GB of hdd, I cannot (my own quota on disk is only 3GB).
Yes, this _BUG_ is really, really annoying. I have to keep a big /tmp partition
only because Firebird uselessly fills it with big download temp files, to move
them later across the whole hard disc to the correct location. Stupid behavior!
Unnecessary! Unacceptable! A waste of resources.
*** Bug 229240 has been marked as a duplicate of this bug. ***
For me, this bug is absolutely critical and must be taken care of ASAP. 
I am writing XML code to help people **easily** manage their local documents 
and Mozilla's handling of the file just runs counter to that purpose. 
I wonder if this problem is connected to another strange behavior in the 
"Helper application" feature: I have registered a file type
("application/x-my_app"). If I chose "Open with default application", the file 
does open - after being copied to the temp directory alas - but if I chose 
the "Open with" option the program is launched, but it opens an empty document 
instead of the desired results.
My temporary solution is to fiddle with the "OnDDECommand" (in Visual C) to try
and reconstruct the original path of the file: not ideal... 
The bug is all the more regrettable given that I have taken care to specify the
"file:///" protocol in the file's path. Shouldn't that be enough of a clue to not
create a temp file?
Sorry I can't help more but please, give us a speedy resolution!
Alain

I cannot believe the amount of comments on this bug and nothing has been done.

Proposal:

Why don't we leave some decisions up to the user. Like:
1) Where he wants the file to be saved immediately. If there isn't enough space,
tough!
2) Chances are that if the file didn't download completely, the user will find
it out when he tries to use it.  Leave it up to him to delete the incomplete files.

What can we maybe do to improve the above tasks of the user:
1) In the dialog where you choose the save directory, we can display the amount
of diskspace available on that partition along with the size of the file to be
downloaded. 
2) As someone else said before, we can start files with a certain extension and
then rename on completion.  This way it is easy for the user to search for
incomplete files.  In fact, a download manager could actually use those files
for its entries that can be completed.
3) In 1) above we can even start downloading in a cache directory (this will
give us the file size for the dialog), and upon the user making his choice, move
the already downloaded portion to the chosen destination and continue from there.
(In reply to comment #208)
> I cannot believe the amount of comments on this bug and nothing has been done.

I quite agree. I'm not a Mozilla developer, but I am a Windows and Un*x user and
C programmer, and I'm fairly sure I could fix this, but keep feeling that the
Mozilla developers must be working on this, and that the various issues involved
are sufficiently convoluted that without being intimately familiar with the
Mozilla code I couldn't make an attempt at it. It may be that the Mozilla
developers don't think this is a major bug because they all have big discs, or
because of the widespread awkward nature of it they think it's Someone Else's
Problem, but this really does need to be fixed; once upon a time the Netscape
browser was the non-Microsoft user's friend, but with this bug it isn't any more.

Technically there are a lot of things to be looked at here, to do with use of
cache, inspection (where available) of free space, moving between filesystems,
variable FTP or HTTP server behaviour, and many more, but somebody (preferably
someone very familiar with the Mozilla code) has to take the lead both in
proposing a workable solution and in co-ordinating the fix, and if they do I
think lots of people (like me) will volunteer to make the appropriate changes in
smaller parts of the code.

> Why don't we leave some decisions up to the user. Like:
> 1) Where he wants the file to be saved immediately. If there isn't enough space,
> tough!

Not good enough, if the user runs out of quota (or is liable to run out of
quota, or we reach the specified cache size, or whatever other issue) in his
cache dir, we ought to suspend the download not cancel it, at least while the
file selection dialogue is popped up, and even if the download is cancelled, we
should seek a way to keep a partial download in disc cache so that we can resume
it later.

> 2) Chances are that if the file didn't download completely, the user will find
> it out when he tries to use it.  Leave it up to him to delete the incomplete
files.

Not if it's in the cache dir, no; if s/he's chosen a target and we run out of
space, certainly leave a .part sort of file there, but if it's gone to cache we
should know about it and either resume when asked again or flush from cache in
the usual way. Actually I think it'd be better to remember that a .part has gone
somewhere too, so that if the user tries the download again s/he can try it
again to a different target (perhaps with more space available) and we can use
the .part (if still present) to shorten the download retry.

> What can we maybe do to improve the above tasks of the user:
> 1) In the dialog where you choose the save directory, we can display the amount
> of diskspace available on that partition along with the size of the file to be
> downloaded. 

This is one of the big multi-platform nasties, in that the ways of finding this
out on different platforms and particularly in environments where filesystems
are shared are all different. Also, displaying the space available while the
dialogue is available is not the same as knowing how much space there will be a
few minutes later.

> 2) As someone else said before, we can start files with a certain extension and
> then rename on completion.  This way it is easy for the user to search for
> incomplete files.  In fact, a download manager could actually use those files
> for its entries that can be completed.

Agreed but still the part downloaded before a target is selected will need to be
handled differently from the part afterwards.

> 3) In 1) above we can even start downloading in a cache directory (this will
> give us the file size for the dialog), and upon the user making his choice, move
> the already downloaded portion to the chosen destination and continue from there.

I agree completely, but as noted previously this is a seriously complex issue!

Come on Mozilla developers, get your fingers out, or flame me if you wish...

Cheers,

John.
farewell spam
I have downloaded the nightly build from 23/01/2004. The file seems to be
downloaded directly into the destination directory without prestoring in a temp
buffer. The partially downloaded file is named filename.part.

It seems that the problem is resolved.
(In reply to comment #211)

I have looked at the CVS and it appears that version 1.240 of the
nsExternalHelperAppService.cpp file contains a fix that moves the file away from
temp once the user has picked his destination.

Would this automatically go into Firebird?

*** Bug 232662 has been marked as a duplicate of this bug. ***
For some reason this bug had never hit me with Firebird .7, but I just got it
while trying to download a big movie in Firefox .8. Since the movie is bigger
than the available space on my C: drive (where Firefox's temp lives,
apparently), I can not use Firefox to download the file. Period. This means I
have to open another browser to get the file. The option should at least exist
(even in about:config) for me to change the default temp directory to a drive
with more space, although I'd rather it just save the file where I tell it to.
*** Bug 232006 has been marked as a duplicate of this bug. ***
*** Bug 236356 has been marked as a duplicate of this bug. ***
(In reply to comment #212)
>
>(In reply to comment #211)
>
> I have looked at the CVS and it appears that version 1.240 of the
> nsExternalHelperAppService.cpp file contains a fix that moves the file away from
> temp once the user has picked his destination.

Sure, because Bug 55690 is fixed. I would therefore recommend that the summary
should be changed to reflect this (the file is now moved after location is chosen).
Since the file gets moved once a location is selected from Bug 55690, which
seemed to be the original issue in this bug, what changes remain to be made for
this to be considered closed? With the other bugs that this has a dependency on,
it would seem to indicate that this bug is more of a tracker bug for handling
download problems, as one dependency is for disk-full checking and the other for
dealing with streams and resuming, I believe.
Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.7b) Gecko/20040229 Firefox/0.8.0+
(daihard: XFT+GTK2; opt. for P4/SSE-2)

I just wanted to say that this bug, in my experience, has definitely been fixed
- partial downloads are saved as the selected file name with a .part extension
in the directory where I told Firefox to put it. If the /tmp directory is still
used, then it isn't used very much.

I think this bug can be VERIFIED FIXED.
This bug is about the fact that the downloads end up in /tmp at all.  Read
comment 0.  None of those issues are really addressed by bug 55690, unless you
move _fast_ to choose your final download location.
So where should the download go before you select a destination?  The cache
directory?
How about nowhere?
(In reply to comment #223)
> How about nowhere?

Seems to me that this can be solved if we let the user configure his/her browser:
 - Start downloads immediately to /tmp
 - Start downloads immediately to cache
 - Do not start download while the saving location is unknown
All of these download problems are for me a huge regression from Netscape 4.  Why
not just see what Netscape 4 does in this situation and do it like that?
> Seems to me that this can be solved if we let the user configure his/her browser:

That one gets my vote too, if only to get this thing fixed at last after more
than 3 (!) years. Wherever the files are placed, there will always be real-life
scenarios in which somebody will run into a disk space, disk quota, network
bandwidth, or whatever else problem and have no way to work around it.

If this can't get fixed properly, then at least give people the choice of what
they want! Personally I don't want any pre-downloading at all: the file should
go there where I told Mozilla to drop it and nowhere else, because I have both
disk quota and local ethernet bandwidth problems when downloading large files.
Since we now move the file once a path is selected, this bug (as described by 
comment 0) is now invalid.

The only possible remaining bugs from comment 0 are if we fail to download the 
file successfully when $TEMP fills up before the user has picked a new 
destination, and if we leave the download file there after it is moved. Can 
someone test these cases (and if they fail, file new bugs)?

The entire _point_ of $TEMP is to store temporary data. Data being stored on 
disk while being downloaded from the wire before it is moved to the right place 
on disk is exactly that. Temporary. If you want the file put somewhere else, 
change the value of $TEMP. That is, after all, the entire point of that 
environment variable.
Status: NEW → RESOLVED
Closed: 19 years ago16 years ago
Resolution: --- → INVALID
That was just what I thought - because .part files are generated to store the
downloaded file after a destination is chosen, this doesn't really apply anymore.

Removing self from CC list.
> The entire _point_ of $TEMP is to store temporary data.

I have it set. But guess what: on my local office box, I have: /tmp, /var/tmp,
and (a local convention) /scratch. Each of these has different characteristics
with respect to size, clean up policy etc. 

Now go back and read what I wrote in comment 67: sometimes I do NOT want any
"mozilla temporary data" to go to my nice and fast local disk. AT ALL. Not for
disk space reasons, but for network bandwidth reasons. I can also very well
imagine people wanting to make sure that all the bits out of some
security/privacy sensitive file that actually hit any disk end up on an
encrypted file system. So here's yet another set of potential values for $TEMP
and they're ones that I definitely do not want to be the default for everything
I do.

For normal use I do not care where the stuff ends up. But when I do, I do not
want to have to restart Mozilla. Mozilla already has a dialog box asking for a
location and should honour it from the start. If people insist on enforcing
pre-downloading, the target location must be changeable from within Mozilla
without requiring a restart.
Shouldn't that be FIXED instead of INVALID, since it was indeed fixed? 
I would be fine with a hidden pref to control where we put the temporary file
during a download. That would be a separate bug though.
> Since we now move the file once a path is selected, this bug (as described by
> comment 0) is now invalid.

Actually, I have to, technically, disagree.  Looking at the original bug
description, this is what was asked for in terms of how it should behave:

> Expected Results:  Mozilla should save the file DIRECTLY into the
> selected path, rather then save it into /tmp then move it

Currently, it still doesn't do that.  It saves into /tmp until a destination is
picked and then moves it.  (Which is better than it was in terms of satisfying
what was requested, but still not 100%.)  Obviously, to satisfy the conditions of
the bug description, no saving should be done at all until a destination is given.

Which is exactly what Boris said in comment 221.

At the very least, this should not have been resolved as INVALID.  Code was
checked in that changed Mozilla behaviour.  If it's deemed to now be
"satisfactory" (which is still in question) it should have been resolved as FIXED.

Reopening since INVALID is wrong, and there's still obviously some debate as to
what the original intent of this bug was.  (Andrew, are you still following
this?  Does the current resolution satisfy what you'd wanted?)
Status: RESOLVED → REOPENED
Resolution: INVALID → ---
If you think the bug is valid (I don't, given the description, but whatever)
then this bug is WONTFIX.
Status: REOPENED → RESOLVED
Closed: 16 years ago16 years ago
Resolution: --- → WONTFIX
<sigh>.  I would have thought that what I said in comment 221 was enough, but
clearly it wasn't....

Reopening.  Unless you happen to be an owner or peer for the relevant module,
don't touch the resolution of this bug again.  We need to fix this to fix other
correctness issues (as mentioned in bug 55690 comment 238 and on) and we plan to
fix it once we resolve the necko bugs currently blocking it.  In the meantime it
needs to stay open so the problem is tracked.  Either that, or file a new bug on
the issue (since this one is sorta noise-ridden). Until then, this stays open.

If we decide to handle our content-encoding issues in some other way (and not
until then), this bug may become WONTFIX.
Status: RESOLVED → REOPENED
Resolution: WONTFIX → ---
(In reply to comment #227)
> The only possible remaining bugs from comment 0 are if we fail to download the 
> file successfully when $TEMP fills up before the user has picked

Yes, we do.  The only way to fix this is to not use $TEMP.

Hixie,

your arguments for resolving this bug as invalid or wontfix are not shared by me,
and I'm not alone.

You say:
"The only possible remaining bugs from comment 0 are if we fail to download the 
file successfully when $TEMP fills up before the user has picked a new 
destination"

This was the point for this bug in the first place, right?

Is it fixed Hixie?

Why did you ask others to test it if it's still a problem and then resolve the
bug invalid and wontfix before you even got an answer?

I agree with Boris Zbarsky that it shouldn't use temp at all. Choose
target-destination, *then* download.
> Yes, we do.  The only way to fix this is to not use $TEMP.

You could also fix it by pausing the download when you are near to running out
of room before you know the final destination.


This bug doesn't fall under dbaron's guidelines for when a bug should be
confirmed (I haven't a clue, from reading the comments in this bug, including
those mentioned just recently, what the actual bug is). But if you think having
it open instead of having clear bugs filed on any remaining issues is the way to
go, then go wild.
Whiteboard: PLEASE READ BUG 55690 BEFORE COMMENTING → INVALID? | PLEASE READ BUG 55690 BEFORE COMMENTING
(In reply to comment #237)
> > Yes, we do.  The only way to fix this is to not use $TEMP.
> 
> You could also fix it by pausing the download when you are near to running out
> of room before you know the final destination.
> 

KISS!

> 
> This bug doesn't fall under dbaron's guidelines for when a bug should be
> confirmed (I haven't a clue, from reading the comments in this bug, including
> those mentioned just recently, what the actual bug is). But if you think having
> it open instead of having clear bugs filed on any remaining issues is the way to
> go, then go wild.

If you don't know what the actual bug is, how can you resolve it to anything?

This bug was opened on 2001-02-23 and no one is going to mess it up when we
finally see some progress.
Nice to see something happening with this issue. I'll try to summarize a bit.

There really should be an option to disable what is known as predownloading; I
can't see anything else ever closing this case.

We users do not see, understand, or even care what actually happens when a link
is clicked and - uh-oh - the resulting content type requires user input. We just
see the Wrong Place filling up and feel the heavy I/O when the predownloaded
part is moved, if the download didn't already fail miserably before that.

There is no Holy Location of Partial Downloads; it might change every time.
What did NS4 do behind the scenes? However inelegant, downright dirty,
bandwidth-wasting, and connection-hogging NS4's approach was, it just worked.

55690 is solved, thanks for that. "Move-after-user-input" is a major improvement
compared to sucking whole objects to the Wrong Place, but it isn't a complete
solution to this bug.

The summary of this bug is slightly outdated, but serves users of older versions
well.

-- 
jk
Updating summary for clarity - as per comment 0 all the way to the most recent one.
Summary: Downloads are stored in $TMPDIR|$TMP|$TEMP|/tmp first and moved to the selected path after the download finishes → Downloads are stored in $TMPDIR|$TMP|$TEMP|/tmp first and then moved to the selected path only after the download finishes / location is selected
*** Bug 238086 has been marked as a duplicate of this bug. ***
*** Bug 240094 has been marked as a duplicate of this bug. ***
Severity: critical → normal
*** Bug 242166 has been marked as a duplicate of this bug. ***
*** Bug 242195 has been marked as a duplicate of this bug. ***
*** Bug 219099 has been marked as a duplicate of this bug. ***
Sorry guys, trying to get removed, but bugzilla didn't show me in cc first time.
*** Bug 245203 has been marked as a duplicate of this bug. ***
Wouldn't the severity of this bug qualify as critical? We have data loss here
*shrug*
It seems to me that the best solution to this bug would be to create a bool
preference, browser.download.predownload, to determine if the download begins as
soon as you click on the link, or as soon as you select the path.  If
browser.download.predownload is false, wait until the user specifies a path,
then begin downloading the file.  If browser.download.predownload is true, begin
downloading to the default download directory.  Once the user specifies the
path, begin writing to a temporary file in that path, transferring the
contents of the default-directory temp file to the user-specified path either at
that time or at the completion of the download.  Pre-downloading should pause if
the temp file size approaches the limit of the default directory.
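
(A toy sketch, in Python rather than Mozilla code, of the flow this comment
proposes; the browser.download.predownload preference is the commenter's
hypothetical, and stream / ask_user_for_path are illustrative stand-ins, not
Firefox internals:)

----
import shutil
import tempfile

def start_download(predownload_enabled, stream, ask_user_for_path):
    # ask_user_for_path() stands in for the Save-As dialog and blocks
    # until the user has chosen a destination.
    if not predownload_enabled:
        path = ask_user_for_path()       # choose first, download second
        out = open(path, "wb")
    else:
        out = tempfile.NamedTemporaryFile(delete=False)  # predownload target
    for chunk in stream:
        out.write(chunk)
    out.close()
    if predownload_enabled:
        path = ask_user_for_path()       # may already have been answered
        shutil.move(out.name, path)      # transfer once the path is known
----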
I'm still absolutely sure that all this pre- and whatever-downloading stuff is
unnecessary and overcomplicated. The only one who knows where the stuff he
downloads should be placed, and where there is space for it, is the user
himself. So give him the one and ONLY decision and then download. As a couple of
comments already stated, you essentially don't win anything with predownloading:
as soon as the user decides, you again have to copy something, and the whole
problem this bug is about starts over. All you win is overly complicated code
for a marginal, unneeded feature. To cite a famous guy here: Keep it simple,
stupid!
Pre-downloading (downloading in the background) benefits everyone who pays per
connection time, provided it finishes the download earlier than not
pre-downloading would.
In that case, preloading to memory should be enough. Per-time connection
pricing occurs mostly on slow lines, so the download won't consume much memory
before the user chooses where to store the file.
Please consider another fact: if I have limited bandwidth and am downloading
files from other places, I don't want my downloads to take longer because some
other files are still being downloaded in the background. Mozilla isn't the only
application that uses the internet connection. We also have FTP programs, mail
applications, and many others. If every application claimed bandwidth like
this, how long would I have to wait for the transfer I actually care about?
To add to the information in comment #252:

Where I am, pricing on connection time applies all the way up to 256kbps (ADSL).
Calm down, people.  The solution is rather obvious:  Leave the default behavior
as-is, and make an option to switch in the options dialog.
How about this: pre-download to the default temp location, and then once a path
is selected, start a new file in the new location which is padded with zeros up
until the point where the predownload finished, and continue writing to the file
from there. As this is happening, spawn off another thread that moves the
predownloaded content to fill the zeros in the new file.

If you do this, bug 103766 will need to be fixed in order for it to go smoothly.
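
(A rough Python sketch of the zero-pad-and-backfill idea above; illustrative
only, not Mozilla code. stream stands for the remaining network data, and
bytes_so_far is how much the predownload already fetched:)

----
import shutil
import threading

def backfill(temp_path, dest_path):
    # Copy the predownloaded bytes over the zero-filled region at the
    # start of the destination file.
    with open(temp_path, "rb") as src, open(dest_path, "r+b") as dst:
        shutil.copyfileobj(src, dst)

def switch_destination(temp_path, dest_path, bytes_so_far, stream):
    with open(dest_path, "wb") as dst:
        dst.truncate(bytes_so_far)       # leading region reads as zeros
        filler = threading.Thread(target=backfill,
                                  args=(temp_path, dest_path))
        filler.start()                   # fill the gap in parallel
        dst.seek(bytes_so_far)
        for chunk in stream:             # keep writing the new network data
            dst.write(chunk)
        filler.join()
----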
Ummm...

I am the original reporter of this bug, three years ago.

When I filed my one and only Bugzilla bug report up to now, I expected to get
some emails about the progress and, finally, about a corrected version to
download. As I do not have time to participate in the browser development, I
usually deleted the Bugzilla mails regarding this thread, as there was no
progress.

After years and hundreds of emails I wanted to stop this spam.

So I sighed and decided to look into why that bug is still not resolved.
I delved into bugzilla and tried to find out what happened, why my suggested
solution was not implemented and what is to be done.

The first thing I wondered about was why another person is named as "reporter".
Probably a mistake like mis-setting fields in the database or some such.
This is a stupid problem anyway, as I cannot update the bug's keywords or
correct the (wrong) modification that someone made to the summary, etc.
But that's not the point of my comment.

I want the bug resolved, to get rid of that Bugzilla spam and to be able to use
Mozilla to download huge files regardless of the space in the $tmp directory.

My research results:

This IS a CRITICAL bug.
There are some 50 duplicate bug references alone in the above comments.

When I today began to research in bugzilla, I found that this bug is OLD.
There are dozens of open and probably many, many more closed threads about this
problem.

So please set it to high priority and start to fix it!

Dear Devs, there are some particularly constructive and instructive comments you
should read on how to proceed.

#25
#48
#66
#70
#75
#250
#255

(many more constructive messages inbetween, but basically repetitions of the
problem situation, the relation to the predownload feature issues, and solution
ideas)

I conclude my reading this way:

| SOLUTION:
|
| PROVIDE A PREFERENCES OPTION FOR ENABLING/DISABLING "FILE PRELOAD" MODE
|
| When enabled, preload files, storing intermediate files in the Mozilla cache.
|      (In any case, don't use $TMP for content storage.
|       This is bad for stability and security reasons.)
|
| When disabled, downloaded files are directly stored in
|       the user-provided path, NOT using $TMP space.
|      (This was my originally suggested solution, and at that time I was
|       not aware of the conflict with the preload feature.)

Please don't forget that disabling preloading is also an important option when
shared bandwidth or traffic costs are an issue.
Another argument for implementing this option is that it removes other
side-effect problems of preloading in some particular, though not very common,
contexts.

Dear devs, if you do not think this bug is critical, PLEASE read at least
comment #37 of parallel bug thread 7840.

So please FIX this now and let Bill Gates be unhappy.
Thanks.


Note to devs regarding whiteboard status:

This is a DIFFERENT issue than bug 55690. The problem there regards the
treatment of mime handlers and NOT the treatment of user-initiated downloads
(i.e., those "save file..." contexts).
This issue is EASILY confusable with the problem my bug report was about.

Another parallel thread is bug 59808. See its comment #8 (which is very ignorant
of the problems and needs of hardware- or OS-wise limited systems like Windows)
and contrast it with the mentioned comment #37 of bug 7840. You see, there are
many reasons to finally fix the problem.

Urgh!

When I wrote my previous message I didn't realize that the Bugzilla parser would
add links to my references to comment numbers in OTHER bug threads.

This means that ALL links in my previous comment #257 are WRONG :(

So please go to the bug thread links instead and search the quoted comment there.

For your convenience I here repeat the list of the comments in the link form,
which were not linked in the original posting because of the missing leading
"comment". 

So, if you didn't already scroll thru this thread to read the most informative
comments, please do this now using these (correct) links:

comment #25
comment #48
comment #66
comment #70
comment #75
comment #250
comment #255

Sorry :(
Stefan:
I'm unhappy to disappoint you, but you were _not_ the reporter of this bug. You
reported bug 179780 (more than 100,000 bugs after this one was reported).
And when your bug was resolved as duplicate, your e-mail address was added to
the cc field of this bug.
So the solution to your first problem, the bug spam, might be just to remove
yourself from the cc list.

The solution to the second problem, the bug itself... well, in Open Source you
cannot force anybody to solve the bugs _you_ hate in _their_ spare time. The
best you can do is help as good as you can, either fixing it yourself, or
providing information to help others fix it. You did the latter; this is good
and _might_ make somebody more inclined to fix this. (But please note, there is
still no obligation for anybody, as also described in the Bugzilla Etiquette.)
If I may point out, the solution I suggested in comment #256 would work for
everybody without the need for any new configuration options. IMO it would be
the perfect solution.

Also, for cleanliness' sake, you could add a check to see whether the temp dir
fills up during the predownload, and if it does, pause the predownload until the
save destination is entered (it's either that or you get an error before you
even select a download location).
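
(The detect-and-pause idea might look roughly like this; a Python sketch with
illustrative names only -- destination_chosen would be a threading.Event set by
the Save-As dialog, not a Firefox internal:)

----
import shutil
import tempfile
import time

MIN_FREE = 50 * 1024 * 1024   # pause when less than ~50MB would remain

def predownload(stream, temp_file, destination_chosen):
    for chunk in stream:
        while shutil.disk_usage(tempfile.gettempdir()).free < MIN_FREE:
            if destination_chosen.is_set():
                return            # caller continues at the real destination
            time.sleep(1)         # wait for space instead of failing
        temp_file.write(chunk)
----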
No longer blocks: 57342
*** Bug 231825 has been marked as a duplicate of this bug. ***
No longer blocks: majorbugs
Flags: blocking-aviary1.1?
Flags: blocking-aviary1.1? → blocking-aviary1.1-
I AM the original reporter of this bug :)

Looking at this from the perspective of a Firefox 1.06 user, I
would conclude that this bug is NOT completely fixed; however, it
is getting better. When I first reported this bug, the immediate
problem I faced was related to a small /tmp dir, but the crux
of the problem for me was always that Mozilla/Firefox should ONLY
put the file where you tell it... There are many other scenarios
where creating the file elsewhere is bad...

My current issue with Firefox's behaviour in this regard is a little
different, but it's the same basic problem. If you select a link that
leads to a download, the download is started somewhere other than
where I chose, then moved into the user-chosen directory. The upshot
of this is that under NTFS the file gets the permissions of the temp
dir, NOT the destination dir. This is just another example of why the
file should be CREATED in the user-selected dir...

There is, however, a workaround... If you right-click a link and select
"Save Link As", Firefox does actually create the file in the chosen
directory. I suspect the difference has to do with the mechanism that
spawns external apps when a link is selected, versus just telling the
browser to save the file...
(In reply to comment #263)
This bug is clearly a WONTFIX (IMO).

When you left click on a link, Firefox will already start downloading the file
when the Save/Open dialog pops up, so it does not yet know the destination folder.
Once you choose an actual download folder, the file will be moved there.

When you right click > Save As, the download is not started yet, because we
don't send any headers to the server, and only after choosing the destination
location will the download start.
(In reply to comment #264)
> This bug is clearly a WONTFIX (IMO).

I strongly disagree, FWIW.

> When you left click on a link, Firefox will already start downloading the file
> when the Save/Open dialog pops up, so it does not yet know the destination folder.
> Once you choose an actual download folder, the file will be moved there.

IMO, this "predownload" feature should be optional, as many others have
suggested already. With a small /tmp, this can easily cause problems. Others
have already pointed out the issues with permissions that are caused by creating
it in /tmp and moving it, though there are probably ways around that.

> When you right click > Save As, the download is not started yet, because we
> don't send any headers to the server, and only after choosing the destination
> location will the download start.

Is there any good reason why this couldn't be the way it works when you click on
a link too, when you've got the predownload feature disabled?

I agree with others who have expressed their gratitude that something is finally
being done about this 4 year old bug.
(In reply to comment #265)
> (In reply to comment #264)
> > When you left click on a link, Firefox will already start downloading the file
[...]
> IMO, this "predownload" feature should be optional

Well, you don't find out that it's a download until you start the HTTP
connection and get the content type; the predownload feature means we don't have
to either block or cancel the HTTP connection, which is more polite to servers
and routers.

I'm inclined to think predownloads should go to the user's configured cache
directory, not some other tmp directory, and should block if the cache gets full
 before the user picks a final destination. The cache size is a browser
preference so we can avoid portability, partition size and quota issues.
Simple solution: why not use the cache area Mozilla already uses? That one is
already configurable.
(In reply to comment #265)
> (In reply to comment #264)
> > This bug is clearly a WONTFIX (IMO).
> 
> I strongly disagree, FWIW.
> 
> > When you left click on a link, Firefox will already start downloading the file
> > when the Save/Open dialog pops up, so it does not yet know the destination
> > folder.
> > Once you choose an actual download folder, the file will be moved there.
> 
> IMO, this "predownload" feature should be optional, as many others have
> suggested already. With a small /tmp, this can easily cause problems. Others
> have already pointed out the issues with permissions that are caused by creating
> it in /tmp and moving it, though there are probably ways around that.

This could easily be fixed by copying instead of moving (copied files always
inherit the destination's permissions), and then deleting the original file.
(In reply to comment #267)
> Simple solution: why not use the cache area Mozilla already uses? That one is
> already configurable.

I don't see how this fixes one of the problems associated with this bug, which
is that files saved to a remote location cause the browser to freeze during the
transfer; unless you propose the cache be moved every time someone wants to save
to a remote location??
Is it possible to start the download in the background without creating a temp file, storing the (beginning of the) download in memory?
Once the user has chosen the download location, the data already downloaded to memory can be written from memory to the download location (without creating a temp file).
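
(This idea is easy to prototype in bounded form: Python's
tempfile.SpooledTemporaryFile keeps data in RAM only up to a size cap and
transparently rolls over to a disk file beyond it. A sketch, with stream
standing in for the incoming network data:)

----
import tempfile

def buffer_download(stream, max_ram=8 * 1024 * 1024):
    # Keeps up to max_ram bytes in memory; anything beyond that is
    # transparently spooled to an on-disk temp file.
    buf = tempfile.SpooledTemporaryFile(max_size=max_ram)
    for chunk in stream:
        buf.write(chunk)
    buf.seek(0)    # ready to be copied to the user's chosen destination
    return buf
----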
The pre-downloaded data can be very large, especially if the user goes away without waiting for the dialog and/or the connection is high speed. Suppose they start downloading a 600MB CD image and leave before the dialog appears? Are you going to use 600MB of RAM for it? Yes, that might be unusual, but you have to have a reasonable response for it, and filling up their swap file is not enough.

Summarising the arguments so far:

The fact is that there is NOWHERE that is guaranteed to be safe/big enough to pre-download to, and yet pre-downloading is pretty much required: a) to find out the kind of content and the action to take, b) it's the polite thing to do on HTTP, c) it's convenient, d) it avoids possible side effects of restarting downloads (e.g. being charged twice).

Some kind of compromise is obviously needed, because there are good arguments for every approach, but they conflict heavily. 

In some ways it seems that the current behaviour is the best compromise possible... TMP/TMPDIR is intended for roughly these kinds of things; the only issue is that the size of the downloaded file is effectively unlimited.

BTW: One reason not to use the cache: if I download a 50MB file, do I want my entire 50MB cache flushed out just because I forgot to wait for the download dialog? In any case, the cache is limited in size by a browser preference, which leaves you with the same issue as the TMP dir, and you could still run out of space on the cache drive before hitting the preference setting. The only difference is that this setting is visible in preferences.

Although my strongest inclination would be KISS - disable pre-download - it doesn't seem possible to meet the requirements that way, so some kind of more complex solution might be needed.

*** Bug 340202 has been marked as a duplicate of this bug. ***
I just lost a 4GB DVD ISO download because of this bug. Now I have to wait till next month to download it because of my ISP's bandwidth caps.
Just use dedicated downloading software:
wget, ReGet, GetRight, or any other. They are just better for this case.

BTW, is there a 4GB limit on file length in Mozilla?

https://bugzilla.mozilla.org/show_bug.cgi?id=238853
"FTP fails to copy files over 4,294,967,295 bytes (4 GB)"
Even if Moz does not have a download file length limit, your file system has a limit to how big a file it can store: http://en.wikipedia.org/wiki/Comparison_of_file_systems#Limits

If Mr. Baker's file system is FAT32 he may be running into that file system's 4G file size limit.

To bring this on-topic, this, I think, is another example of a situation where the tmp location may be less than ideal depending on the user's file system. Ex: a system that came installed with FAT32, to which the user later adds a second drive formatted in NTFS (16 exabyte file size limit.)
NTFS
(In reply to comment #273)
> I just lost a 4GB DVD ISO download because of this bug.

Specifically, how do you blame that on this bug? Why do you not provide details or useful info?

Mozilla WILL resume HTTP downloads. I'm not sure which versions do, but I have resumed failed downloads many times. Leave the Download Manager window open and don't close it 'til you check the downloaded file; if it's the wrong size, simply redownload it, overwriting the existing file. I haven't tested this well enough to know under what conditions it works, or whether it works over FTP, but it does work, so Mozilla does have the capability to resume (the bytes to download are sent in the request header). I'm not sure that Mozilla does a file-length check, though, so a connection close may fool it into thinking the file is done, and (I believe) that is the case in which it can be resumed.
I have also used GetRight to resume failed Mozilla downloads. Once I was forced to use IE6 to download some large files; I wanted to stop one and resume it later, but I couldn't figure out where the file was being stored, and it was just a waste of time looking for it. Also, a failed download either gets deleted or written over, and IE does not support resume over HTTP. But I hate when someone compares IE to Mozilla, so s'cuse me while I hit myself.
For downloading files, FTP is the way to go, and it almost always supports resume (browsers are primarily HTTP clients, not FTP clients). A good download manager, as already mentioned, will support FTP, HTTP, and resume for both.
I would, personally, love to have a nice FTP extension for SeaMonkey that supports all this (or even some of it).

Sorry, but I don't think any of this has any relation to this tmp dir bug.



Wrt comment #274: Use curl, not wget. While I like the handling of wget more than that of curl, I recently bumped into a 4GB limit of wget, too, which resulted in a garbled file. Curl was able to handle that file correctly.
(In reply to comment #278)
> While I like the handling of wget more
> than that of curl, I recently bumped into a 4GB limit of wget, too, which
> resulted in a garbled file. 

That means your wget is outdated. Use version 1.10 and up.
Well, I actually did use wget 1.10.2.
On comment #274: I recently stumbled over the USB modem cable when a download was 80% finished, and the download aborted. I wonder which browser will be the first to offer "continue incomplete download" (from the download manager?). Why should I need a thing like "wget -c" for that? I admit I DO use "wget -c" for that, but only because the browser's functionality is way back at 1993's FTP "reget" ;-)
This showed up in 2001; why the heck has nobody fixed it?
The bug is still active and will stay active indefinitely because

1) people decided that predownloading is necessary, therefore
2) the browser has to "decide" where this predownload has to go, and therefore
3) any possible automatically selected location (including cache, memory, etc.)
   will a) fill up prematurely for one user or another, and
        b) often not be on the device where the file should end up
           after the user has made his selection, causing copying instead
           of moving of data, which blocks the browser.

There are 3 ways out of this situation:

1) omit the requirement of predownload (this is done when going via the right
   mouse button 'Save link target as...', as far as I remember) and ask the 
   user immediately.
2) give the user a choice in the preferences of where the predownload area
   is, or honour $TMPDIR (but that may conflict with other requirements for
   the system); however, he will probably hit the problem at least once
   before someone hints that this has to be configured (maybe a popup the
   first time a download is done, to set the preference?)
3) pause a predownload if space runs out and ask the user how to proceed, or
   wait until he has selected the final location.
(In reply to comment #283)
> 1) omit the requirement of predownload (this is done when going via the right
>    mouse button 'Save link target as...', as far as I remember) and ask the 
>    user immediately.

This can only be done when the user knows in advance that it will be a download.  When the download status is determined by looking at the headers, that requirement cannot be avoided.

> 2) give the user a choice in the preferences of where the predownload area
>    is, or honour $TMPDIR (but that may conflict with other requirements for
>    the system); however, he will probably hit the problem at least once
>    before someone hints that this has to be configured (maybe a popup the
>    first time a download is done, to set the preference?)

That is not something the average user would use.  The average user would probably not understand those options.  I think that configuration could be provided as an extension but should not be in the standard distribution.

> 3) pause a predownload if space runs out and ask the user how to proceed, or
>    wait until he has selected the final location.

I think this is more trouble than anything else.  As long as "Save Target As" works as expected (without downloading to tmp and then doing a copy), it would be sufficient for the average user to have a dialog saying: "Your download failed because there is no room left on the tmp device; use 'Save Target As' to restart your download and avoid this problem".
(In reply to comment #284)
> This can only be done when the user knows in advance that it will be a
> download.  When the download status is determined by looking at the headers,
> that requirement cannot be avoided.

We always know whether it will be downloaded or displayed. If we didn't, we wouldn't know when to show a download dialog. As soon as a download dialog is called for, you can pause or close the connection.
(In reply to comment #284)
> (In reply to comment #283)

> > 2) give the user a choice in the preferences of where the predownload area
> >    is, or honour $TMPDIR (but that may conflict with other requirements for
> >    the system); however, he will probably hit the problem at least once
> >    before someone hints that this has to be configured (maybe a popup the
> >    first time a download is done, to set the preference?)
> 
> That is not something the average user would use.  The average user would
> probably not understand those options.  I think that configuration could be
> provided as an extension but should not be in the standard distribution.
> 
> > 3) pause a predownload if space runs out and ask the user how to proceed, or
> >    wait until he has selected the final location.
> 
> I think this is more trouble than anything else.  As long as "Save Target As"
> works as expected (without downloading to tmp and then doing a copy), it would
> be sufficient for the average user to have a dialog saying: "Your download
> failed because there is no room left on the tmp device; use 'Save Target As'
> to restart your download and avoid this problem".
> 

Why do you want to limit the software to the capabilities of the mythical "average user"?
Why not put the predownload in the disk cache and move it as soon as the final destination is known (which is much more likely to be on the same filesystem as the destination than $TMPDIR is)?
I seldom if ever use "download directories"; my downloads go to far-off directories and into new folders 99% of the time.  I easily download ~1000 files or ~500GB/week through a browser.  I depend on file tracking, and I get it only if the files download to the final destination.

Who has never had something crash or mess up one's downloads??

In TEMP directories, the proper file names and locations are probably missing.  Putting files exactly where they belong increases recoverability by a large factor for me.  I would prefer to see the file.part in the destination folder; that would tell me which file came from what URL, and I could download it again.

I would really feel heaps of gratitude for the fixer of this particular bug.
I think downloading to the system-wide temp directory is not only annoying but may be a severe security risk, too. Imagine a user downloading his bank account statements. He set his download directory to an encrypted partition, so he is totally sure that nobody other than himself will ever see them. Oh, pardon. Before moving them onto his encrypted partition, Firefox escrows a copy into a world-readable directory.
If Firefox does it halfway sanely, then this should be a lesser problem under *nix, because then only the user who downloads this stuff should have access to the file; but the fact that, and maybe what, you are downloading is still disclosed (or rather, "suggested") to the rest of the system in the form of a file with a "speaking" name that belongs to you.
(In reply to comment #290)
> If Firefox does it halfway sanely, then this should be a lesser problem under
> *nix, because then only the user who downloads this stuff should have access
> to the file

File permissions only matter to the operating system under which they were created. Somebody with a boot CD can read whatever they want. Don't discount the threat, it happens a lot. Firefox is leaking information and the user has no way to stop it short of modifying the source.

Granted, but still, that system temp directory (still under *nix) will usually be empty if you boot with a CD, because it's typically memory-based anyway. Also, if it isn't, and if there are encrypted partitions to be used, it should already be possible to point the system temp location there directly via the TEMP environment variable. But then, this is also only a guess; I would consider a program that did not honour that environment variable in the absence of other (overriding) configuration to be in need of a significant (and simple!) improvement anyway.
Of course it is possible to circumvent this problem by configuring the system appropriately (i.e., by encrypting the whole system). Still, an application that handles sensitive data should not scatter it around without need. But the most serious problem, in my opinion, is that a user selecting a secure download directory will be misled into a false sense of security.
As far as I can tell, the current behavior is that a click which brings up the "What should Firefox do with this file?" dialog downloads to the temp directory in the background, with the file being moved once the download is complete. This is explained extensively in bug 55690 and unlikely to change.

However, if the user right-clicks the link and selects "Save Link As..." from the context menu, the download *does* go straight to the selected destination. So I don't see a bug here.
Yes. It seems quite pointless to leave this "bug" open in its current state, because:
- it's working as designed
- no one has come up with an overall better design
- it's just getting repeats of the same ideas as before, because people are not reading the existing discussion.

I suggest we simply close it (again) and wait for people to submit MORE SPECIFIC bugs/suggestions. Two possibilities from the previous discussion:
a) Provide an advanced preference - probably something for about:config (there is no better default than $TMP, but at least those who have a reason to care can change it)
b) Provide a specific error message if the $TMP directory runs out of space during pre-download. Perhaps this has already been done?

> it's working as designed
It's quite pointless to distinguish whether a security weakness is caused by design or by implementation, in my humble opinion. It should be fixed.

> no one has come up with an overall better design
Simply disable pre-downloads by default until the user enables them and provides a suitable location. Alternatively, the download directory would be a reasonable choice. As a nice side effect, this reduces the possibility of soul-destroying inter-filesystem moves and disk space problems to almost zero.

> Provide an advanced preference - probably something for about:config
Absolutely useless in my opinion. The inexperienced user is the one saving his bank account statements, or something even more sensitive, into the $TMP directory of a semi-public pool computer at his university/company/library/school. He will never suspect that something like about:config even exists.

> Provide a specific error message if the $TMP directory runs out of space
> during pre-download
Disk space is really the last thing I would bother about in this issue
> I suggest we simply close it (again) and wait for people to submit MORE
> SPECIFIC bugs/suggestions.
[...]
> Disk space is really the last thing I would bother about in this issue

Perhaps bug 121880 should not be marked as a duplicate of this bug, in that case.  I always thought equating "silently ignores write errors, corrupting your data" with "downloads to an arguable location" rather a strange choice.
Holger, please read the full discussion. Your points have been repeatedly answered.
It is not possible to disable the pre-downloads, due to the basic design of Mozilla (and web browsers in general). The download directory is likewise not an option, as it is not known until the user chooses.
Boris mentioned earlier the possibility of throttling the pre-download so it can be kept small and in RAM, which NS4 used to do. I can't find any conclusion to that discussion; perhaps the development is still in progress. Personally, it seems like that would still have potential problems.
(In reply to comment #296)
> > no one has come up with an overall better design
> Simply disable pre-downloads by default until the user enables them and
> provides a suitable location.

That would be ok.

> Alternatively, the download directory would be a reasonable choice. As a
> nice side effect, this reduces the possibility of soul-destroying
> inter-filesystem moves and disk space problems to almost zero.

Huh?

> > Provide an advanced preference - probably something for about:config
> Absolutely useless in my opinion.

Seconded. "about:config" seems to be the core of what makes Firefox "user-friendlier" than SeaMonkey...

> > Provide a specific error message if the $TMP directory runs out of space
> > during pre-download
> Disk space is really the last thing I would bother about in this issue

You *would* bother about it if you also had stranded downloads that fail after, say, half an hour of filling out license forms and getting 400MB through a rather narrow-band pipe ("reload" does not work). Also, just because big disks are so common nowadays does not mean that all the 20GB and smaller disks are already out of service.
How about this "solution":

When the user clicks a link to a file, Mozilla starts downloading it in the background (just like now) into whatever directory it currently uses.

While this happens the user is presented with a "Choose destination" form to select the directory. (still current behavior)

Now, when the user has decided on the location, the download is immediately stopped, the file is immediately moved, and the download resumes at the new location.

This would allow the browser to start downloading right away while at the same time fixing the problem with downloading to the tmp directory. (or whatever else)

The worst-case scenario is that the download is not resumable. In that case the browser should just start redownloading from the beginning, but this time to the correct destination. (Essentially, you only lose the pre-download time.)

Good? Bad?

Harald
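
(A hedged sketch of this stop-move-resume idea, using HTTP Range requests via
the third-party requests library; url and the paths are placeholders, and a
206 "Partial Content" status means the server honoured the Range header:)

----
import os
import shutil
import requests

def resume_at_destination(url, temp_path, dest_path):
    shutil.move(temp_path, dest_path)    # keep the bytes we already have
    offset = os.path.getsize(dest_path)
    r = requests.get(url, headers={"Range": "bytes=%d-" % offset}, stream=True)
    mode = "ab" if r.status_code == 206 else "wb"   # resume, or start over
    with open(dest_path, mode) as f:
        for chunk in r.iter_content(64 * 1024):
            f.write(chunk)
----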
No, not good enough; whatever the default or last download directories are, they may not be either large enough or secure enough for the current download.

The $TMP etc. variables are there for saying where you want temporary files to go. Perhaps there ought to be some kind of config setting within Firefox et al. to override it, and perhaps there is room for improved behaviour if the $TMP directory runs out of room; but essentially the facility is there to choose the temporary download location to suit whatever the circumstances are, so I no longer think this is a bug.
(In reply to comment #302)
> No, not good enough; whatever the default or last download directories are,
> they may not be either large enough or secure enough for the current download.

The same applies to $TMP.

> The $TMP etc. variables are there for saying where you want temporary files
> to go. Perhaps there ought to be some kind of config setting within Firefox
> et al. to override it, and perhaps there is room for improved behaviour if
> the $TMP directory runs out of room; but essentially the facility is there to
> choose the temporary download location to suit whatever the circumstances
> are, so I no longer think this is a bug.

If you think that downloads are temporary files, then perhaps this isn't a bug. I don't believe downloaded files qualify as a temporary file except for the fact that Firefox makes them temporary by virtue of its bad behavior. Temporary files are scratch files and the like, not file repositories for placing things you don't know what else to do with.

And, you can't make the argument that running out of room on the last download directory is any worse than running out of room on $TMP. One argument in favor of last download directory is that the user may not be in control of $TMP, but they are going to be in control of the last download directory.

I still haven't seen a cogent argument for using $TMP, only arguments against suggested alternatives.
Essentially this appears insoluble with the current set of requirements and assumptions.

I believe downloading only to RAM and then throttling, as mentioned above, would effectively allow the browser to get the content type and other headers without having to handle the whole download in advance. Supposedly this worked in NS4.

So perhaps we need to find or create a bug to implement throttling, and set this one to depend on it. Anyone? Bug 353182 perhaps?
Duplicate of this bug: 438905
Duplicate of this bug: 490802
QA Contact: chrispetersen → file-handling
What does this bug have to do with bug 280661?  I see that this isn't the only bug in which you add this link.
Duplicate of this bug: 627826
(In reply to comment #304)
> I believe downloading only to RAM and then throttling, as mentioned above,
> would effectively allow the browser to get the content type and other
> headers without having to handle the whole download in advance. Supposedly
> this worked in NS4.

I agree this solution would be the best. It prevents a false failure due to lack of space in some temporary location and obtains the header info before the user is prompted for the destination, yet it still downloads the file exactly once (unlike some other possible means of satisfying those first two). The user requested one download, and if AT ALL possible, Firefox should do one download. For all Firefox knows, the server may only allow the user to access the file a certain number of times.
Duplicate of this bug: 669089
Did anyone notice the 10th anniversary of this bug? [SCNR]
By the way, 12b4, which is currently set to download files into the download folder, is downloading files into the Windows temp directory on a newer Windows OS. I've got plenty of space and I didn't change any permissions.

Earlier, 12b1 on another Windows computer, set to save to a directory I choose, was downloading to random locations which had apparently been used before.
I'd like to add an informational comment about why this behavior can be a *good* thing.

First -- the behavior is consistent with downloads in Internet Explorer.
(Not a justification, just a note that this is not abnormal practice on
*Windows*. I think it is more abnormal on Linux, though the same reasons it
was done this way on Windows can apply to Linux.)

Reasoning (besides not overwriting current local files with fragments, and not
leaving fragments in the destination dir -- a problem I've had with downloads
saved directly to the destination, which seems to be what happens on my FF for
reasons I don't quite understand): disk fragmentation/allocation.

Downloading is a relatively slow process -- not as much when the decision to do it this way was first made (modem speeds), but still, considerably slower than local disk speeds.  Thus while something is downloading -- if other apps are running and writing to the destination disk, the resulting downloaded file can end up getting stored in fragments.  

Say it does something ridiculously stupid like downloads and buffers only 4k at a time (oops, did I say that?) .. it writes out 4k.  Other file activity happens, a second later another 4k, .. etc... you end up with a very badly fragmented file -- this is compounded and made worse when an application uses a small write size (like 4k)... vs. writing, say, 1Meg at a time... at least that would give large chunks .. but 4k chunks can cause a high level of fragmenting (not to mention the horrid performance penalty of doing only 4k reads on the network).

FWIW, my local network's ideal transfer size (it's a 1Gbit ethernet), for optimal speed -- is 16MB.  That's for all out straight linear read/writes yielding 125/119MB/s respectively.  4k reads slow that down to about 3-6MB/s --- it's worse if the local file store is on a network hard disk because then you pay the penalty twice!

Anyway -- Fragmenting is a real issue on NTFS and FAT32.  

A way I'd like to see this solved -- do 16MB read/writes (to IMAP, POP,
whatever), and out to disk (or the max data available if <16MB). This solves
the fragmentation problem and speeds up network communication by 20-50x. (Try
sending a large file through sendmail @ 4k/chunk... it's abysmally slow when
the file is in the 100+MB range. Something that takes seconds to copy takes
minutes to send to sendmail using small packets.)

But then -- what would be the reason to write to tmp first?

Behavior should be: 1) open the file for writing in the destination dir
(verifies writability), then download the file into <filename>.temp. When the
.temp finishes, rename it to the desired filename, letting it overwrite the
empty placeholder. If it crashes mid-download, you end up with an empty target
and a partial target.temp. There's no question that it failed. (Lately I've
seen downloads of large files fail -- 200MB, where it downloaded 200+, but the
file was 200+N, so there was no clue that it failed until you tried to use the
file.)

Then you have a rename within the same dir -- very likely to be on the same
device (except in pathological situations where someone mounts over a single
filename -- a real option on Linux -- so it can't be **assumed** to always be
true for the code to be safe!).

(Failing at the start of the download with an error ("can't create tmp file on
same device as destination file") would be an acceptable (though not ideal)
alternative, since it avoids the issue of the download completing, the rename
failing, and the entire download potentially being wasted. I.e., the user will
know it's because they have mounted over that one file name, and that can be an
unsupported configuration. I'd do it that way for initial development --
quicker -- as the alternative would be a manual copy from src to dst at the end
of the download: more code than a simple 'rename'...)

Combine this with the larger buffer size, hopefully used in all transactions
(network I/O + local disk I/O; though my optimal local disk buffer size is much
larger, it isn't practical to use that, whereas 16MB -- or 32MB for double
buffering -- is reasonable/practical).

Both FF and TB would benefit from this -- in terms of efficiency/speed on
downloads, and in keeping the local disk from developing lots of small
fragments.

Would that be a doable and/or reasonable solution?
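
(A minimal Python sketch of the behaviour proposed above, under the same
assumptions -- stream stands for the network data, and os.replace is an atomic
same-directory rename on POSIX:)

----
import os

def save(stream, dest_path):
    temp_path = dest_path + ".temp"
    open(dest_path, "wb").close()      # 1) verifies the destination is writable
    with open(temp_path, "wb") as f:   # 2) download next to the final name
        for chunk in stream:
            f.write(chunk)
    os.replace(temp_path, dest_path)   # 3) same-directory rename at the end
----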
(In reply to L A Walsh from comment #315)
> Downloading is a relatively slow process -- not as much when the decision to
> do it this way was first made (modem speeds), but still, considerably slower
> than local disk speeds.  Thus while something is downloading -- if other
> apps are running and writing to the destination disk, the resulting
> downloaded file can end up getting stored in fragments.  

File allocation shouldn't work like that. When the file size is known the application should request the OS allocate the space for the whole file at once. 

> try sending a large file through sendmail -- @ 4k/chunk... it's abysmally slow 

The largest possible network data "chunk" is about 1,500 bytes in most cases. It doesn't matter how your software buffers the data, it ain't sending it out in pieces any larger than that unless the route between hosts is made up exclusively of routers that support large Ethernet frames.
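
(For reference, the preallocation described above is a single call on POSIX
systems; a minimal Python sketch, Unix-only, with path and size as
placeholders:)

----
import os

def preallocate(path, size):
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o644)
    try:
        os.posix_fallocate(fd, 0, size)   # reserve `size` bytes up front
    finally:
        os.close(fd)
----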
(In reply to Jerry Baker from comment #316)
> (In reply to L A Walsh from comment #315)
>> Downloading is a relatively slow process -- ... 
> File allocation shouldn't work like that. When the file size is known the
> application should request the OS allocate the space for the whole file at
> once. 
---
	Shouldn't isn't a good excuse for why something doesn't deal with
"reality".  I would tend to agree about allocating the whole space if it is
known in advance, but MS has tended toward putting *partially* downloaded
files in tmp because they are not the finished file.  Thus tmp storage is used
to hold partial file contents -- only when the download is complete can you
guarantee that you aren't downloading and leaving **** at the destination.
FF leaves **** all the time -- a download doesn't complete (for whatever
reason), and **** is left.  It can look like the right file (big -- multi-meg)
but be missing the last 100 bytes, as an extreme bad case.  The only way to
ensure no ****, is to use tmp space.


> 
>> try sending a large file through sendmail -- @ 4k/chunk... it's abysmally slow 
> 
> The largest possible network data "chunk" is about 1,500 bytes in most cases.
> It doesn't matter how your software buffers the data, it ain't sending it out
> in pieces any larger than that unless the route between hosts is made up
> exclusively of routers that support large Ethernet frames.

---
	Modern networking has come a long way since 1500 byte packets.  For one,
you are forgetting the 9K packet size on my network (or 16K on others) between me and my mail & file & IMAP server.

	Second, there is something called the TCP window size -- you can have TCP
buffers that are many megabytes long.  Thus Mozilla could write a 600MB file in
one write -- and the OS (and often TCP offload!) can easily handle splitting it
into TCP packets, which are usually limited to 1500 bytes (only raw net packets
have that limit -- even UDP packet sizes can be up to 64K).  A standard Intel
gigabit network card can easily handle 9KB frames (9014 bytes, actually,
counting the header) AND easily have 2000 transmit and receive buffers --
allowing 18 million bytes (<18MB) to be handled entirely by the network card at
maximum speed.

	If you have bad software that writes small chunks, it jacks up the
transfer time tremendously, because if a network packet has to even touch the
Windows OS it adds significant delay -- let alone having to reactivate tbird,
tell it to wake up from its 4k read, and then wait for tbird to send another
4k.  That drives the max transfer speed down easily by over 100x.

	It will be much worse on 10Gb networks.  I found the optimal size for
file transfers over a 1Gb net to be 16MB writes, which makes sense considering
that's exactly what the card can buffer.

	That buffer size should be configurable, but should default to at least
64KB-128KB.
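
(The big-buffer point in miniature: Python's shutil.copyfileobj takes an
explicit buffer size, so a copy loop can move data in 16MB chunks instead of
4KB ones:)

----
import shutil

def copy_in_big_chunks(src, dst, bufsize=16 * 1024 * 1024):
    # Reads and writes in 16MB chunks rather than the small default.
    shutil.copyfileobj(src, dst, bufsize)
----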
I feel this bug is being ignored: in Firefox 16.0.2 on Linux, Firefox starts a download in /tmp while the file selector is open. Then a popup appears, saying there's not enough space on /tmp, effectively preventing me from selecting a directory in the file selector (if I hit "OK" on the popup, the file selector closes as well).
Duplicate of this bug: 856400
I do not see how bug 856400 is a duplicate.

When saving to a USB drive, Firefox does NOT save to /tmp first. Bug 856400 is not about Firefox saving to /tmp; it's about the dialog box / error message running the wrong code before popping up.
By default I am using a 256M tmpfs /tmp.
Got hit by this bug when I opened a link that was slow to connect, got afk.
Got back with the download dialog and another dialog that the destination was out of space.
This out of space condition failed to free the tmpfs but deleted the file in /tmp. (Got space back by restarting Firefox. This additional bug I have yet to repeat.) So, now my /tmp was full and any further download attempt failed immediately with the dialogs.

So, any user filling up /tmp can prevent a running Firefox process from downloading anything.

I'd like to have an option in preferences to select the preloading destination, so I would not need to _restart_ Firefox with the TMP= environment variable set to use another place.

For now, I've remedied this with the Download Manager (S3) extension.
(In reply to Jaakko Perttilä from comment #321)
> By default I am using a 256M tmpfs /tmp.
> Got hit by this bug when I opened a link that was slow to connect, got afk.
> Got back with the download dialog and another dialog that the destination
> was out of space.
> This out of space condition failed to free the tmpfs but deleted the file in
> /tmp. (Got space back by restarting Firefox. This additional bug I have yet
> to repeat.) So, now my /tmp was full and any further download attempt failed
> immediately with the dialogs.
---

Your description is a bit unclear: what do you mean, the "out of space
condition failed to free the tmpfs" but deleted the file in /tmp?

Answering what I think you meant, let me explain, and you can decide if you
still think your condition was a bug.  There are two things going on here (I
think -- things may have changed in browsers newer than the one I am using).

AFAIK, when you start to download FF asks you where you want to download it, but after a timeout (I think: "browser.download.saveLinkAsFilenameTimeout"=time in "centiseconds", so 1000=10sec, 300=3sec) it will start the download to a tmp file while waiting for you to answer the question of where to put it.  

Thus if you don't answer for a long time, it may have the entire file downloaded before you tell it where to put it.
If you try to put it in the same place (the tmp dir), and what you downloaded was, say, a CD-image (~500-700MB), and your tmp dir is only 256MB, the background downloader may have already filled up /tmp, but may be "waiting" to see if space becomes available ("out of space" conditions can be transitory (something else might free up space) or it may just pause the download to wait for you to enter a "real destination" on a different disk).

Either way, your /tmp will be full.  If you try to store the final file there, you'll be told no space.

The second issue is that even if you delete a tmp file that is *in use* by a
program, on unix(linux/bsd), the space for that file is NOT freed until the
*program* closes the file.  On *nix systems, names are basically a "hard link"
to space somewhere, and a program's open "handle" to a file is also a "hard
link".  The space won't be freed until the "hard link count" to the space goes
to 0.  So even if you delete the name, as long as some program is running and
hasn't closed its "handle" to the file, the space is still reserved -- that's
why you got the space back when you restarted FF.  Closing FF forces it to
release all of its open handles.  That's when the OS can actually free the
space.
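
(This deleted-but-open behaviour is easy to demonstrate; a small Python
example, *nix-only:)

----
import os
import tempfile

f = tempfile.NamedTemporaryFile(delete=False)
f.write(b"x" * (1024 * 1024))   # 1MB of data
f.flush()
os.unlink(f.name)   # the name is gone; `du` no longer counts it
# ...but `df` still shows the space as used, because this process still
# holds an open handle.  Only now are the blocks actually reclaimed:
f.close()
----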

256MB for a /tmp dir seems very low to me.  I do a "little unsafe" thing in
that I put my /tmp on my "/var" partition (since /var gets written to a lot
with tmp information, as in /var/tmp and /var/log, I thought I'd combine the
two).  What I did is create a /var/rtmp (rtmp standing for "root" tmp).  Both
my "/var" and my "/tmp" partitions are mounted very early in the boot process.
I made my /var 8GB.  As soon as /var is mounted, I mount "/var/rtmp" on "/tmp"
with an "rbind", using a line in my /etc/fstab:

/var/rtmp /tmp none rbind 0 0 

(Don't use a symlink, and don't use an existing directory in /var -- e.g.,
don't mount /var/tmp on /tmp; they are used for different purposes.)  Make sure
your /var/rtmp is created with mode 1777 and owned by root.root (same as the
real /tmp).

If you want to use some memory to speed up your /tmp, make a ramdisk (like you
have now using tmpfs), but use the "FSCACHE" option (I've never used it, but it
should work for what you want).  The kernel config option's help text says:
----
CONFIG_FSCACHE:

This option enables a generic filesystem caching manager that can be
used by various network and other filesystems to cache data locally.
Different sorts of caches can be plugged in, depending on the
resources available.

See Documentation/filesystems/caching/fscache.txt for more information.
----
(The Documentation dir is in the root of the Linux source tree, so you need a
copy of the source to read that file.)

Hope this explains things, and maybe helps you minimize the problem...?
Thanks, I'll try to elaborate.

(In reply to L A Walsh from comment #322)
> (In reply to Jaakko Perttilä from comment #321)
> > By default I am using a 256M tmpfs /tmp.
> > Got hit by this bug when I opened a link that was slow to connect, got afk.
> > Got back with the download dialog and another dialog that the destination
> > was out of space.
> > This out of space condition failed to free the tmpfs but deleted the file in
> > /tmp. (Got space back by restarting Firefox. This additional bug I have yet
> > to repeat.) So, now my /tmp was full and any further download attempt failed
> > immediately with the dialogs.
> ---
> 
> Your description is a bit unclear: what do you mean, the "out of space
> condition failed to free the tmpfs" but deleted the file in /tmp?
> 
> Answering what I think you meant, let me explain, and you can decide if you
> still think your condition was a bug.  There are two things going on here (I
> think -- things may have changed in browsers newer than the one I am using).
> 
> AFAIK, when you start to download FF asks you where you want to download it,
> but after a timeout (I think:
> "browser.download.saveLinkAsFilenameTimeout"=time in "centiseconds", so
> 1000=10sec, 300=3sec) it will start the download to a tmp file while waiting
> for you to answer the question of where to put it.  

I was not using the Save Link As... context menu option, just clicking a link to a download.

> Thus if you don't answer for a long time, it may have the entire file
> downloaded before you tell it where to put it.
> If you try to put it in the same place (the tmp dir), and what you
> downloaded was, say, a CD-image (~500-700MB), and your tmp dir is only
> 256MB, the background downloader may have already filled up /tmp, but may be
> "waiting" to see if space becomes available ("out of space" conditions can
> be transitory (something else might free up space) or it may just pause the
> download to wait for you to enter a "real destination" on a different disk).
> 
> Either way, your /tmp will be full.  If you try to store the final file
> there, you'll be told no space.

Just opening the link immediately begins the download to the TMP dir.
The first time, it filled the space and gave the out-of-space dialog atop the "what do you want to do with this file" dialog, so I could only acknowledge the out-of-space dialog. That one closes the rest of the dialogs for the download.

The first time this happened, the space was not released, but the file was gone from the TMP (du showed 44k used, df 0% free).
I resized the tmpfs (-o remount,size=512M) before restarting Firefox, but I was not able to fill up the rest of the tmpfs similarly with some other downloads.

> The second issue is that even if you delete a tmp file that is *in use* by a
> program, on unix(linux/bsd), the space for that file is NOT freed until the
> *program* closes the file.  On *nix systems, names are basically a "hard
> link" to space somewhere, and a program's open "handle" to a file is also a
> "hard link".  The space won't be freed until the "hard link count" to the
> space goes to 0.  So even if you delete the name, as long as some program is
> running and hasn't closed its "handle" to the file, the space is still
> reserved -- that's why you got the space back when you restarted FF.  Closing
> FF forces it to release all of its open handles.  That's when the OS can
> actually free the space.
> 
This freeing-the-space issue will go into another bug, if I can reproduce it when I have more time to test.

Basically, my concern in this bug is that all downloads that are initiated without the "Save Link As..." option fail immediately when TMP is full.
One cannot select what to do with the file or the destination to save to, when the "out of space" dialog is given at the beginning of a download.
This gives a minor DoS condition within a multi-user system.

So, there should be an option in the program to set the TMP folder. Restarting Firefox with TMP set in the environment is not a solution.
(In reply to Jaakko Perttilä from comment #323)
>
> 
> Just opening the link immediately begins the download to the TMP dir.
> The first time, it filled the space and gave the out-of-space dialog atop
> the "what do you want to do with this file" dialog, so I could only
> acknowledge the out-of-space dialog. That one closes the rest of the dialogs
> for the download.
> 
> The first time this happened, the space was not released, but the file was
> gone from the TMP (du showed 44k used, df 0% free).
---
Because the running process that did the download still had the handle open.


> > The second issue is that even if you delete a tmp file that is *in use* by a
> > program, on unix(linux/bsd), the space for that file is NOT freed until the
> > *program* closes the file. 
> > 
> This space-freeing issue will go into another bug, if I can reproduce it
> when I have more time to test.
===
But it is not a bug -- it is a design requirement.  If you open a file that
uses exactly 500MB on a 512MB disk, it is a "design paradigm" for programs
wanting "inaccessible tmp files" to do exactly this type of thing: open the
file, keep it open, but delete the filename on disk.  This leaves a 500MB
file open for the application, which it can keep using until it closes the
file or the application ends.

du looks at how much space is attached to the names that it "sees".  df looks
at the free-space area.  Until the application closes that file handle, the
free space won't return.  It can be worse if other applications are blindly
waiting to write out data (i.e. they might go to sleep and wait until space
is available); then they wake up and can immediately fill the space.
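
A minimal shell sketch of this behavior (run in bash; the path and size are purely illustrative):

  dd if=/dev/zero of=/tmp/bigfile bs=1M count=100   # create a 100MB file on /tmp
  exec 3</tmp/bigfile    # hold it open on file descriptor 3
  rm /tmp/bigfile        # the name is gone, so du no longer counts it...
  df -h /tmp             # ...but df still shows the 100MB as used
  exec 3<&-              # close the handle; the link count drops to 0
  df -h /tmp             # now the space is actually freed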

> Basically, my concern in this bug is that all downloads initiated without
> the "Save Link As..." option fail immediately when TMP is full.
---
Even without Save Link As, they will fail, because the download still TRIES
to start in tmp.


> One cannot select what to do with the file, or the destination to save to,
> when the out-of-space dialog is given at the beginning of a download.
> This creates a minor DoS condition on a multi-user system.
---
You can use "lsof" or "fuser" on tmp to see which applications are holding
open files... then kill them off; that should free up the space.
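
For example (both are standard utilities; exact output will vary):

  lsof +L1 /tmp    # open-but-deleted files (link count 0) on the filesystem mounted at /tmp
  fuser -vm /tmp   # processes holding files open on that filesystem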

Other than that, 500MB is awfully small for a /tmp.

If I download a distro DVD image, those can easily be >4G
(which is why I combined /tmp with 'var', so they can 'share' the space).


> So, there should be an option in the program to set the TMP folder.
> Restarting firefox with TMP set in the environment is not a solution.
---
Maybe there's an option in about:config?  "browser.download.lastDir" contains
the last download dir... and "browser.download.useDownloadDir" says whether
or not to use the download dir... (not an ideal interface, but if it works...)
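
For instance, a sketch of setting such prefs persistently via user.js -- the profile directory name varies per installation, and the destination path here is purely illustrative:

  # append prefs to the profile's user.js; xxxxxxxx.default is a stand-in
  P=~/.mozilla/firefox/xxxxxxxx.default
  echo 'user_pref("browser.download.useDownloadDir", true);' >> "$P/user.js"
  echo 'user_pref("browser.download.dir", "/home/downloads");' >> "$P/user.js"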
Thanks, LA Walsh, for the suggestions and insight, but I am not really looking for support on this issue. I'm just adding my notes to the bug; they were triggered by the out-of-space condition (caused by another bug I couldn't reproduce with either the same or a clean profile).

Some additional things I observed with further download tests:
- If TMP is bogus, an alert is shown: "could not be saved, because an unknown error occurred. Try saving to a different location". There is no way to save or select a location, as the download has already failed.
- If TMP is missing any of the rwx permissions for the user, the download fails silently (visible only in the downloads window). A repro sketch follows below.
- If TMP fills up before the target is selected with "Save As...", a zero-byte file is created/left in the target dir, and the download fails silently (also visible in the downloads window).
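
A hypothetical repro for the permissions case (the directory name is made up; the temp location is pointed elsewhere through the environment):

  mkdir /tmp/no-write && chmod a-w /tmp/no-write   # a temp dir the user cannot write to
  TMPDIR=/tmp/no-write firefox                     # with no instance already running; downloads should fail silently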

Config options browser.download.dir and browser.download.useDownloadDir work only for file types that are set to always be saved. The rest will fail, as they show the "What to do..." dialog and start downloading into TMP.

In my opinion, there should be an in-app option for setting the TMP dir, rather than always using the system one.
Product: Core → Firefox
Target Milestone: Future → ---
Version: Trunk → unspecified
WONTFIX?
Flags: needinfo?(paolo.mozmail)
Status: REOPENED → RESOLVED
Closed: 16 years ago → 3 years ago
Flags: needinfo?(paolo.mozmail)
Resolution: --- → WONTFIX