Closed Bug 134163 Opened 23 years ago Closed 23 years ago

Can't download files to FAT filesystems under Linux

Categories

(Core :: Networking: File, defect)

x86
Linux
defect
Not set
major

Tracking

()

VERIFIED FIXED
mozilla1.0

People

(Reporter: graham.knap, Assigned: brendan)

References

()

Details

Attachments

(4 files, 3 obsolete files)

This morning I tried to download a new Mozilla build, using my existing Mozilla (Linux, 2002-03-18/08), onto a FAT drive. It failed; here's what happened.

1. It downloaded the entire file (probably into /tmp).
2. I received a dialog box stating: "/fat-e/download/linux/mozilla-i686-pc-linux-gnu-sea_2002-03-28_08.tar.gz could not be saved, because an unknown error occurred. Sorry about that. Try saving to a different location."
3. The file does exist, but it is zero bytes long.

I am 100% sure that the "unknown error" was that Mozilla tried to set the ownership and permissions on the file, which you can't do on a FAT drive... What I would like to see Mozilla do is issue a warning similar to what Konqueror does: something along the lines of "file permissions could not be set". I know FAT is a brain-dead filesystem, but unfortunately I do still need to use it, because it's all that Windoze can understand.

If anyone's curious about how I have my FAT drives set up under Linux, here it is. It works well; if anyone knows of a "better" way, let me know. :-)

1. Create a group "fat-disk".
2. Add myself to that group.
3. In /etc/fstab, add "gid=fat-disk,umask=002" to the mount options for each FAT filesystem.
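A sketch of what step 3 of that setup might look like in /etc/fstab — the device name and mount point here are placeholders, only the gid/umask options are the ones described above:

```
# hypothetical device and mount point; gid=fat-disk,umask=002 as described
/dev/hda5   /fat-e   vfat   defaults,gid=fat-disk,umask=002   0   0
```

With umask=002, members of the fat-disk group get read/write access to everything on the mount.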
WFM using a loop device, created using:

mkfs.vfat -C /tmp/disk.vfat 102400
mount -t vfat -o loop,gid=cdwriter,umask=002 /tmp/disk.vfat /mnt/disk
mkdir -p /mnt/disk/download/linux

Downloaded a file from http://ftp.mozilla.org/pub/mozilla/releases/mozilla0.9.9/. The file is saved correctly; didn't see a warning. Could be because I am using a loop device/vfat.

Reporter: Can you add a "quiet" to your /etc/fstab, remount, and try again? This will prevent the error generated when using chown. Do you mount using the vfat or the fat filesystem?
I will give that a try when I get home later today. Until then -- I mount the drives using vfat, to support Winblows 95's long filenames. Two are fat16; one is fat32.

What you're suggesting sounds like a good workaround, at least in my case. However, "man mount" says of the "quiet" flag: "use with caution" -- so I suppose there are situations where you would NOT want to use the "quiet" flag. In these cases, Mozilla should still be able to save the file, but should warn the user that chown/chmod failed.

On that note -- just in case anyone asks: assuming the "quiet" flag eliminates the problem, I do NOT want to see a pref in Mozilla to disable such warnings. There are several good reasons for this, as I'm sure you can imagine... security, code bloat, kludginess (it's the wrong way to get rid of the warnings).
This reminds me of another bug involving symlinks... might be the same issue, but in different circumstances... I'm unable to locate that bug right now, though. But I suspect this might be a dupe of it.
Whiteboard: dupeme
Sounds like bug 132971 to me.... and I can't reproduce this bug (nor that one). My setup is pretty much identical to yours (I'm in a group that's the group for the FAT partition and the umask is 007). I'd be very interested in seeing what the results of the "quiet" test are...

Is this happening for all files? Only big files? Only small files? How are you actually saving the file? (Did you click and then select the "Save to disk" radio button? Did you right-click on the link and select "Save Link As"? Did you shift-click?)
Assignee: asa → law
Component: Browser-General → File Handling
Depends on: 132971
QA Contact: doronr → sairuh
Ok, results are in. Adding the "quiet" flag does indeed allow Mozilla to save to a FAT drive. We now know the source of the problem, and we have a good workaround. However, as I said, even if the "quiet" flag is not there, I'd still like to see Mozilla save the file properly, and warn the user that chown/chmod failed. I did have a look at bug 132971 also. On my system, symlinks do not cause trouble... but FAT drives without the "quiet" flag do.
I can reproduce the bug when I click and select 'Save this file to disk'. I cannot reproduce the bug when using:

- right-click, 'Save Link As'
- 'Save this file to disk', mounted using the 'quiet' flag

I've downloaded a few files from: http://ftp.mozilla.org/pub/js/older-packages/

Here are the results when downloading using 'Save this file to disk':

- CORBAConnect.zip, 28KB: I do not get an error message
- js-1.3-1.tar.gz, 470KB: I get the 'unknown error' message

When I restart the browser and try js-1.3-1.tar.gz again, I do NOT get the error message until I clear the cache. Clearing the cache and trying the above files using 'Save Link As', or 'Save this file to disk' mounted with the 'quiet' flag, works ok.

Marking NEW, because I haven't used a symlink and bug 132971 is unconfirmed.
Status: UNCONFIRMED → NEW
Ever confirmed: true
Whiteboard: dupeme
To answer bz's questions, with a non-"quiet" FAT drive:

- Starting the download with a single left click, or a right-click and "Save link as", gives the failure I described. Starting the download with a shift-left-click allows the file to be downloaded successfully!
- File sizes also seem to matter. Using right-click, "Save link as" fails with Mozilla builds (around 12MB), but succeeds with smaller files (up to a few megabytes).

Weird, weird, weird... I'm finding that if the file is saved to /tmp while being downloaded, then moving it to a non-"quiet" FAT drive will fail. But if it is saved only to the mozilla Cache directory, then it will move happily, "quiet" or not. The file size AND the method of initiating the download seem to affect this. Shift-clicking seems to never save the file in /tmp. The other methods seem to drop it in /tmp if it is above a certain size... though I haven't a clue what that "certain size" is.

Something else I noticed that is extremely strange: I left-clicked to download a file, then watched the contents of both /tmp and my Cache directory. Mozilla was saving the file to BOTH locations simultaneously, effectively eating twice as much disk space as should be necessary!

I can't come up with any other meaningful test cases at the moment, but I'll be watching this bug, and I'd be glad to help wherever possible...
No longer depends on: 132971
*** Bug 132971 has been marked as a duplicate of this bug. ***
To Networking:File. When clicking and saving, we spool to a temp file and then MoveTo() the file to the right place. nsLocalFileUnix::MoveTo() tries a rename() call; when that fails (since the files are on different filesystems), it copies by using CopyTo(). nsLocalFileUnix::CopyTo() (at http://lxr.mozilla.org/seamonkey/source/xpcom/io/nsLocalFileUnix.cpp#697) creates the new file, then does:

769     // set the permissions of newFile to original files
770     if (NS_FAILED(rv = newFile->SetPermissions(myPerms)))
771         return rv;

After that it starts copying data over, but when newFile is on a FAT partition we never get to that code. In my opinion, failing to set permissions should not be a fatal error here... (though the fact that the default mount options throw an error on chmod is very broken in my mind).
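The rename-then-copy fallback described above can be sketched in plain POSIX terms. This is an illustrative helper, not the actual nsLocalFileUnix code, and the paths used with it are hypothetical:

```cpp
// Sketch of a MoveTo()-style fallback: try rename(2) first; when source and
// destination live on different filesystems, rename fails with errno == EXDEV
// and we must fall back to copying the bytes and removing the original.
#include <cassert>
#include <cerrno>
#include <cstdio>

// Returns true if the file ended up at 'to'.
bool move_file(const char* from, const char* to) {
    if (std::rename(from, to) == 0)
        return true;             // same filesystem: cheap directory-entry move
    if (errno != EXDEV)
        return false;            // a real error, not a cross-device move
    // Cross-device: copy the data, then unlink the source.
    // (The real CopyTo also tries to carry the permissions over, which is
    // exactly the step that fails on a non-"quiet" FAT mount.)
    std::FILE* in = std::fopen(from, "rb");
    std::FILE* out = std::fopen(to, "wb");
    if (!in || !out) {
        if (in) std::fclose(in);
        if (out) std::fclose(out);
        return false;
    }
    char buf[4096];
    size_t n;
    while ((n = std::fread(buf, 1, sizeof buf, in)) > 0)
        std::fwrite(buf, 1, n, out);
    std::fclose(in);
    std::fclose(out);
    return std::remove(from) == 0;
}
```

The same-filesystem path exercises only rename(); the EXDEV branch is what a click-and-save to a FAT partition goes through.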
Assignee: law → dougt
Component: File Handling → Networking: File
Keywords: mozilla1.0
QA Contact: sairuh → benc
cc'ing some nsLocalFileUnix gurus. Any idea why SetPermissions is failing?
It fails because the FAT filesystem driver does not support changing permissions of individual files. As a result, all chmod() calls on files on a FAT filesystem fail.
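The failure mode can be seen with a minimal wrapper around chmod(2). On an ordinary filesystem a chmod on a file you own succeeds; on a vfat mount without the "quiet" option the driver rejects it (the mount point used for such a test would be site-specific, so this sketch only shows the errno-checking pattern):

```cpp
// chmod(2) either succeeds or sets errno; on a non-"quiet" vfat mount the
// driver refuses permission changes (typically EPERM), even for the owner.
#include <cassert>
#include <cerrno>
#include <cstdio>
#include <sys/stat.h>

// Returns 0 on success, or the errno value on failure.
int try_chmod(const char* path, mode_t mode) {
    return chmod(path, mode) == 0 ? 0 : errno;
}
```

On a file in /tmp this returns 0; pointed at a file on a default-mounted FAT partition it would return the error Mozilla is tripping over.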
Keywords: nsbeta1
Comment on attachment 77257 [details] [diff] [review] Boris's Suggestion r=bzbarsky
Attachment #77257 - Flags: review+
this fix is ok but we should still add a dialog (as a warning) that explains the problem of being "unable to set permissions" on the downloaded file. clearly that part can wait until after 1.0. Approving this for beta1+. Doug: can you file a new bug for the warning message? Thx.
Keywords: nsbeta1 → nsbeta1+
Target Milestone: --- → mozilla1.0
If I'm not mistaken then that patch would have Mozilla completely ignore "unable to set permissions" errors whenever they occur. I don't mean to be a pain but I do not like it when an application does not report a failure. On my workstation, where I am the only user, I set my FAT drives to mount with the "quiet" flag. I don't need to see these errors. But what if you're dealing with a failure on a non-FAT filesystem... or a system with more users...? I'm wondering if it would be a good idea to report the failure to the user (but yes, continue to save the file). If you want to go with ignoring the error for now, that's fine; it's a big improvement over failing to save the file. But I would like to eventually see the error reported to the user.
> It fails because the FAT filesystem driver does not support changing
> permissions of individual files. As a result, all chmod() calls on
> files on a FAT filesystem fail.

It's somewhat more complicated. The FAT filesystem has no concept of owner or group, so the kernel driver has to invent them for the system. Normally it uses values from the process that mounted the fs, but there are options. It is quite possible for one user to create a file on a FAT fs but have it immediately become owned by another user. The user who owns the file can, in fact, change at least some of the permissions. To do this right, mozilla would have to see why chmod failed. It would probably have to stat the file to see if it exists at all and, if so, who owns it. If the mozilla process no longer owns the file, then either the fs doesn't permit it or some asynchronous process has done mozilla wrong.
Is there a way to determine the file system type is a clean cross-'nix way?
Nothing even close, sadly.
right. lets just get downloading to fat working and worry about slightly failing on SetPermissions later. Note that the new file is created with read, write, execute by owner. Thoughts?
s/slightly/silently
Attached patch untested patch to SetPermissions (obsolete) — Splinter Review
How about changing SetPermissions to check for the EPERM case. It's not the best solution but should work for the moment.
Comment on attachment 77845 [details] [diff] [review] untested patch to SetPermissions Do we really want to ignore EPERM _everywhere_? It seems like there might be some bad cases for things like saving temp files and whatnot, with possible security ramifications. (Attacker creates writeable file with appropriate name, we ignore the permission error, and write data to it. In the absence of something like O_EXCL exposed through our API, I think we want to be pretty cautious.) I'd be a lot happier with something that just ignored the permission error in the download code.
are there known attacks against a file copy / download that morphs the permissions from whatever to read, write, execute by owner? I mean.... I think that this attack is stretching it a bit....

a. attacker would have to get something to call into CopyTo(). Currently only profile management and downloading use this, IIRC.
b. somehow the attacker would have to get setPermissions to fail. Note that the downloaded temp file already has its permissions set.
c. then the attacker would have to trick the user into launching the exe.

I don't claim to be an "expert", but there are much much easier ways to exploit.
So the first issue is that you're not just changing the behaviour of the download/CopyTo case, you're making all setPermission calls fail silently if they fail with EPERM, probably because the user doesn't own the file. Unless I'm really misreading the patch, I guess -- it _is_ 2AM. And, yes, there is a long and painful history of temp file races on Unix, which is why people use O_EXCL for creating temp files nowadays. http://www.linuxdoc.org/HOWTO/Secure-Programs-HOWTO/avoid-race.html, section 6.10.1.2 has some more detail. Given that we name our files quite predictably in, for example, the helper-app and saved-attachment case, we're in the same old, leaky boat. What are the permissions we try to set in the download case, via CopyTo? (If it's OK for us to silently fail to set them, why not just not bother to try? I suspect the answer is "we don't want the files mutable/readable by default", but maybe we should just rely on the user's umask for such things?) It _might_ be OK to fail if the permissions we're trying to set include world read, though that still leaves alteration (rather than the easier spying case). I'm not convinced, but I bet Mitch could sway me. Generally, I think that it's not a good idea for us to (silently!) ignore a failure in a call that we seem to only be making for security reasons.
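The O_EXCL idiom referred to above looks like this in raw POSIX calls (the path is a hypothetical example): creating with O_CREAT|O_EXCL fails with EEXIST if something — say, an attacker's planted symlink or file — already occupies the name, instead of silently writing through it.

```cpp
// Exclusive creation: open(2) with O_CREAT|O_EXCL refuses to reuse an
// existing name, which is the standard defense against temp-file races.
#include <cassert>
#include <cerrno>
#include <fcntl.h>
#include <unistd.h>

// Returns a descriptor for a freshly created file, or -1 with
// errno == EEXIST if the name was already taken.
int create_exclusive(const char* path) {
    return open(path, O_WRONLY | O_CREAT | O_EXCL, 0600);
}
```

A second call on the same name demonstrates the race protection: the first creator wins, and every later attempt gets an explicit error rather than the attacker's file.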
Mike, I agree with you. Patch 77845 is not the way to go. I am talking about the first patch, 77257, which only changes the behaviour of the CopyTo. The file that we are trying to set permissions on is set to be S_IRWXU. What scares me is that it has the executable bit set. Maybe we should also only create it as S_IRUSR | S_IWUSR: http://lxr.mozilla.org/seamonkey/source/xpcom/io/nsLocalFileUnix.cpp#763 (note that there is a race condition present between the create and the open a few lines above this).
If you want to push the error test up higher, that's easy enough, but you would have to create a NS_ERROR_FILE_? value and return that. You could modify nsresultForErrno as well, but that would have a slight performance penalty. Of course, this is just an ugly hack that I'm sure won't work as desired.

It's true that file name issues are a real problem, but mozilla already has the file open, so it could eliminate most of them by using the descriptor-based system calls like fchmod, fstat, etc. Assuming mozilla can get the underlying os file descriptor (if it can't, it's in deep yogurt anyway), it should have something like a SetPermissionsDescriptor method. Then it could do an fchmod. If that failed and errno == EPERM, then it could fstat the descriptor and see if the process euid and egid matched the descriptor's. If they do, then just return an error. If not, then it's possible the target fs is doing something odd, and only then is it worth considering ignoring the error.

On some platforms mozilla uses statfs, so for these it could examine the struct statfs for the target. If the structure has an f_type field, then mozilla could compare it against the source fs and decide what to do. On a system like Linux with a /proc fs, you could get really paranoid and open /proc/self/fd and see what's going on. How does one get the real file descriptor?

BTW, don't rely on umask values. Mozilla loads lots of shared libraries and any one of them, including 3rd-party plugins, can change the umask value arbitrarily.
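The descriptor-based check suggested above can be sketched as a small helper. The function name and result codes are hypothetical (nothing like them exists in the Mozilla tree); the point is the fchmod-then-fstat sequence:

```cpp
// Try fchmod(2) on an already-open descriptor; on EPERM, fstat(2) the same
// descriptor to see whether the process even owns the file. If we own it and
// chmod still failed, the filesystem (e.g. vfat without "quiet") is likely
// refusing permission changes outright, and ignoring the error may be safe.
#include <cassert>
#include <cerrno>
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>

enum ChmodResult { CHMOD_OK, CHMOD_NOT_OWNER, CHMOD_FS_REFUSED, CHMOD_ERROR };

ChmodResult set_permissions_fd(int fd, mode_t mode) {
    if (fchmod(fd, mode) == 0)
        return CHMOD_OK;
    if (errno != EPERM)
        return CHMOD_ERROR;         // some other failure: report it
    struct stat st;
    if (fstat(fd, &st) != 0)
        return CHMOD_ERROR;
    // EPERM although we own the file: the fs is doing something odd.
    return (st.st_uid == geteuid()) ? CHMOD_FS_REFUSED : CHMOD_NOT_OWNER;
}
```

Using the descriptor rather than the path also sidesteps the rename/symlink races discussed earlier, since both calls are guaranteed to talk about the same file.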
I would like some kind of fix for this problem in the next day or so. I do not think that it is a big deal if changing permissions fails, if the file is created with only user bits set. Please point out where I am wrong. Mitch?
I don't think that would be a big problem either. That's not what your patch does, though: it _always_ ignores a certain class of chmod failure, in a routine that we call from more than just the download/file-created-with-user-bits-only case. (Why does the download code need to chmod anyway? Shouldn't it be passing an appropriate mode to Create in the first place?)
mike, there are two patches attached to this bug. I am suggesting that we go with 77257.
Duh. I need to read for content, sorry. So why aren't we just using myPerms in the Create call? (And why do we have PR_RDWR, instead of PR_WRONLY, in the OpenNSPRFileDesc call, while I'm asking?) Then we don't need to change the permissions at all.

rv = newFile->Create(NORMAL_FILE_TYPE, myPerms);
PRInt32 openFlags = PR_WRONLY | PR_CREATE_FILE | PR_TRUNCATE;
newFile->OpenNSPRFileDesc(openFlags, myPerms, &newFD);

We should probably use an nsCOMPtr for newFile, too, since we currently leak newFile and newFD if IsSpecial fails.
I didn't understand that either. However, I didn't want to mess with it so close to mozilla 1.0. If you're up for the risk, and can supply the enema.
OK, how does this grab you?
Attachment #77845 - Attachment is obsolete: true
what about what mike said: drop the chmod altogether and pass the permissions to OpenNSPRFileDesc()?

wtc - are there any problems with PR_Open() which would require an explicit call to chmod() to modify the file permissions set on a file? What we are doing is:

fd = PR_Open(path, openFlags, S_IRWXU);
chmod(path, newPermissions);
Doug: I don't have time to read all the comments in this bug. Based on your question and the platform and OS of the bug, my answer is no. On Linux, PR_Open should be able to set the file permissions on the new file and a separate chmod call should not be required. However, the summary of the bug mentions FAT filesystems, which don't seem to have the notion of users and permissions. So maybe that's the problem?
Mike, tenthumbs - thoughts?
rv = newFile->Create(NORMAL_FILE_TYPE, myPerms); is the way it used to be . . . S_IRWXU was added to fix bug 107641. Did that fix cause this bug? --pete
Comment on attachment 78896 [details] [diff] [review] another suggestion squeezed into this patch. r=dougt
Attachment #78896 - Flags: review+
mike, brendan, can you review patch 78896?
I'm a little confused. I ran a few tests and Linux seems quite happy to let me open a file for writing but without write permission bits (e.g. 0400). I'm wondering what bug 107641 really fixed. This seems to say that attachment 79640 [details] [diff] [review] should work. It's certainly preferable. However, Linux isn't the only unix platform.
> I'm wondering what bug 107641 really fixed.

If I remember correctly . . . If a dir was readonly, and you were copying the contents, the precursor implementation wouldn't copy its children. A readonly dir w/ children would copyTo just an empty dir. --pete
> If a dir was readonly, and you were copying the contents, the > precursor implementation wouldn't copy it's children. Yes that makes sense but SetPermissions is only called for non-directories.
I see what's going on. nsLocalFile::CopyTo calls nsLocalFile::Create which calls exclusive_create for a normal file. All exclusive_create does is use the system open call with the passed-in permissions and then *closes* the file descriptor. nsLocalFile::CopyTo then tries to reopen the file with nsLocalFile::OpenNSPRFileDesc. If write permission wasn't there originally, OpenNSPRFileDesc always fails. Why all of this? Mozilla gets a system file descriptor, why not use it?
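The failure sequence described above can be reproduced with plain system calls (the path is hypothetical): create a file with mode 0400, close the descriptor the way exclusive_create does, then try to reopen it for writing.

```cpp
// Create-with-0400, close, reopen-for-write: the reopen fails for an
// unprivileged user because the write bit was never set. Had the original
// descriptor been kept open, writing would have worked -- POSIX checks
// permissions at open() time, not on each write().
#include <cassert>
#include <cerrno>
#include <fcntl.h>
#include <unistd.h>

// Returns the reopened descriptor, or -1 (errno EACCES when unprivileged).
int create_then_reopen(const char* path) {
    int fd = open(path, O_WRONLY | O_CREAT | O_EXCL, 0400);
    if (fd < 0)
        return -1;
    close(fd);                    // what exclusive_create does today
    return open(path, O_WRONLY);  // fails: no write permission on the file
}
```

Keeping the descriptor from the creating open(), as suggested in the next comment, avoids the reopen entirely.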
Lame layering? exclusive_create and exclusive_mkdir (mkdir is exclusive on Unix-y systems, by definition) hide the fd. Yuck. Fix by keeping the fd in a member variable, inline-expanding exclusive_create and exclusive_mkdir in the process? Those little static helpers are called-once bloat, if you ask me. /be
Keywords: mozilla1.0 → mozilla1.0+
Attached patch my stab at a fix, untested (obsolete) — Splinter Review
If this works, feel free to r= it and reassign to me. A freebie! /be
*** Bug 138143 has been marked as a duplicate of this bug. ***
Is PR_EXCL (or O_EXCL) really the right thing to do? It prevents mozilla from writing to a dangling symlink. I think there's a bug on that. It also prevents mozilla from writing to a fifo which isn't right either.
I patched, so I'm taking. /be
Assignee: dougt → brendan
tenthumbs: the nsIFile.idl doc comment for create pretty clearly defines that method as doing an exclusive create. Your comment says nsLocalFileUnix.cpp's use of PR_EXCL/O_EXCL prevents Mozilla from writing to certain types of existing files, but I think you mean specifically that CopyTo can't write to such nodes, precisely because it uses Create on the target. That's a bug, I agree, but not in create -- in copyTo. Patch coming up (I'm also de-JavaDoc-ifying comments that are not doc-comments, but that use /** ... */). /be
Blocks: 138000
Status: NEW → ASSIGNED
Attached patch proposed fix, v2 (obsolete) — Splinter Review
Hoping dougt can grab this instead of the last patch, but either's fine for testing the main change. This one does as tenthumbs suggests, and avoids PR_EXCL when opening the destination file in CopyTo. /be
Attachment #79825 - Attachment is obsolete: true
pete, tenthumbs, when you have a moment, please take a look at patch 80100.
Comment on attachment 80100 [details] [diff] [review] proposed fix, v2 do we want to open up the |this| file with myPerm, or is just the user read bit okay? I don't think it matters. let's make sure that this fixes the download to FAT and doesn't regress pete's won't-copy-read-only-files bug. r=dougt
Attachment #80100 - Flags: review+
The myPerms (mode) parameter matters only if PR_CREATE_FILE is given, so there is no issue with passing them to the OpenNSPRFileDesc call that opens oldFD in CopyTo -- they don't help or hurt (we could pass 0), so I'll leave that code as it was. Testing buddies much appreciated. I'll test this weekend. /be
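The point about the mode parameter can be checked directly: open(2) consults the mode argument only when it actually creates the file. A sketch with a hypothetical helper and path:

```cpp
// The mode argument to open(2) matters only together with O_CREAT. Reopening
// an existing file with a different mode leaves its permissions untouched,
// which is why passing myPerms to the non-creating open is harmless.
#include <cassert>
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>

// Creates 'path' with 'create_mode', reopens it passing 'other_mode' without
// O_CREAT, and returns the file's actual permission bits afterwards.
mode_t perms_after_reopen(const char* path, mode_t create_mode, mode_t other_mode) {
    int fd = open(path, O_WRONLY | O_CREAT | O_EXCL, create_mode);
    close(fd);
    fd = open(path, O_WRONLY, other_mode);  // other_mode is ignored here
    close(fd);
    struct stat st;
    stat(path, &st);
    return st.st_mode & 07777;
}
```

Creating with 0600 and reopening with 0777 still leaves the file at 0600 (modulo the process umask, which cannot add bits).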
compiled, tested on FreeBSD. No apparent regressions w/ read-only copy fix. Looks good to me. Brendan hacked this bug so he can s/\/\*\*$/\/\*/g and get rid of those javadoc comments once and for all . . . --pete
Brendan: actually I do think Create is buggy. It's just the equivalent of touch, which is occasionally useful but not as a general purpose function. Its name is wrong. It should be CreateAndClose, which more clearly defines its function. Sure, there is a comment in the idl file, but who reads comments? :-) As I'm sure you know, nsIFile.idl thinks a file can only be a regular file or a directory, so it has a simplistic world view, which I believe is one of the main reasons mozilla has so many unix problems. But that's not this bug. The patch looks good to me. I do suggest renaming the exclusive* functions since they're no longer exclusive. Maybe CreateAndKeepOpen could just be CreateAndOpen. It probably can't hurt to add the nsError.h change from attachment 78896 [details] [diff] [review]; we'll eventually need it.
tenthumbs: I agree with you about nsIFile.idl being broken-by-design (which is not the same as "buggy" in the operates-contrary-to-design sense commonly used, please note), but we're not going to change it for 1.0, we're going to live with it, dammit. Therefore, the comment does rule, and CreateAndKeepOpen makes more sense given the fixed Create name we're freezing than CreateAndOpen (because we're not changing Create to CreateAndClose -- the salient difference between CreateAndKeepOpen and Create[AndOpen] is the Keep part, I want that in the name). I'll fold in the EPERM special case, though -- thanks. /be
Attached patch proposed fix, v3Splinter Review
I didn't add a new nsError.h entry for EPERM. Instead I fixed nsresultForErrno to map EACCES to the existing NS_ERROR_FILE_ACCESS_DENIED, and I mapped EPERM to the same error result. There's no point now (this late in 1.0) trying to teach the rest of the world about the Unix-only distinction between EACCES (access denied due to lack of directory search (x) permission) and EPERM (chmod failed due to process euid not matching file owner). I'll move the r= stamp unless someone objects, fast. Still looking for sr=. /be
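The errno mapping described above is, morally, a two-way collapse. This sketch uses stand-in result names rather than the real nsresult codes from nsError.h:

```cpp
// Map Unix errno values to a coarse result the way the v3 patch describes:
// EACCES (no search permission on a path component) and EPERM (e.g. chmod by
// a non-owner) both collapse to a single "access denied" result.
#include <cassert>
#include <cerrno>

enum Result { RES_OK, RES_ACCESS_DENIED, RES_UNKNOWN };  // stand-ins for nsresult

Result result_for_errno(int err) {
    switch (err) {
        case 0:      return RES_OK;
        case EACCES:
        case EPERM:  return RES_ACCESS_DENIED;
        default:     return RES_UNKNOWN;
    }
}
```

Collapsing the two avoids inventing a new error code for a Unix-only distinction this late in the 1.0 cycle, at the cost of callers not being able to tell the cases apart.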
Attachment #80100 - Attachment is obsolete: true
Changing nsresultForErrno will cause a small performance penalty. Is it measurable?
Why do we care about tiny failure-case overhead? If a failure is suppressed in a tight loop on the critical path of some layout or user-level task, maaaaybe. Not likely! /be
Comment on attachment 80293 [details] [diff] [review] proposed fix, v3 As promised, r= bumpalong. Need an sr= now. /be
Attachment #80293 - Flags: review+
Comment on attachment 80293 [details] [diff] [review] proposed fix, v3 sr=waterson.
Attachment #80293 - Flags: superreview+
I merged up to top of trunk, resolved conflicts, rebuilt, and checked into trunk. Asking drivers for a= for the branch. /be
Comment on attachment 80293 [details] [diff] [review] proposed fix, v3 a=asa (on behalf of drivers) for checkin to the 1.0 branch
Attachment #80293 - Flags: approval+
makingtest: I'll find some disk space and make this partition in our lab... someday.
Does all of this kludginess go away when file saving works in some sane way on UNIX? (ie, not explicitly coming up with bogus permissions to give the file). You guys have made quite a mess of the whole process.
> Does all of this kludginess go away when file saving works in some sane way
> on UNIX?

It goes away for the file-saving case, but CopyTo() is still broken without this patch even if it's not used for file saving. Also, the "guys" involved in this bug and the "guys" involved in the uriloader file-saving setup are two pretty distinct groups of people....
jmd, watch yer mouth. I patched this bug in order to help 1.0. I have nothing to do with kludges with permissions. My patch here doesn't farble permissions at all, it uses good Unix open semantics (whereby you can write to a readonly file that you create). /be
Fixed in branch and trunk. /be
Status: ASSIGNED → RESOLVED
Closed: 23 years ago
Resolution: --- → FIXED
adding branch resolution keyword.
Keywords: fixed1.0.0
+qawanted: If anyone can "confirm & verify" (make sure it broke on a old build, then verify it works on a fixed branch build) that would be great. I'm very busy this week + my Linux is B-, so the lead time to verifying could be long.
Keywords: qawanted
*** Bug 142347 has been marked as a duplicate of this bug. ***
Keywords: verifyme
VERIFIED: Redhat 8, FAT partition, Mozilla 1.6f.
Status: RESOLVED → VERIFIED