Closed Bug 16797 Opened 24 years ago Closed 24 years ago

[DOGFOOD] tree view for file:// urls not showing content

(SeaMonkey :: General, defect, P1)
Windows NT
(Not tracked)
(Reporter: jud, Assigned: waterson)
(Whiteboard: [PDT-])
(1 file)

The file channel stuff has been changing recently (I think). Sounds like maybe
an OnData or OnStop isn't getting through.
Summary: tree view for file:// urls not showing content → [DOGFOOD] tree view for file:// urls not showing content
Target Milestone: M11
Can you please explain to us WHY you think this is dogfood?  We'll check in on
this bug again tomorrow 5pm.  Thanks!
*** Bug 15925 has been marked as a duplicate of this bug. ***
Whiteboard: [PDT-]
Not needed for dogfood, putting on PDT-
Target Milestone: M11 → M12
*** Bug 17523 has been marked as a duplicate of this bug. ***
Blocks: 18471
This is not working under Linux either. If I type in file:/u, I get a blank
page. If I type in /u or /u/ the browser generates an http url.
This is because the webshell stuff should try http:/u, and if that fails try
file:/u. What happens if you type file:/u? Does that load? If not, then there's
another problem in necko beyond the webshell stuff.
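As a hypothetical sketch of that fallback (illustrative names only, not the actual webshell code), the fixup for a bare path like /u might look like:

```cpp
#include <cassert>
#include <string>

// Hypothetical fixup sketch: input with no scheme gets http: guessed
// first; if that load fails, fall back to file:. Not the real webshell
// code -- just the behavior described above.
std::string FixupURL(const std::string& input, bool httpLoadSucceeds) {
    if (input.find(':') != std::string::npos)
        return input;                 // already has a scheme; load as-is
    if (httpLoadSucceeds)
        return "http:" + input;       // first guess
    return "file:" + input;           // fallback when http: fails
}
```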
Priority: P3 → P1
I just tried typing file://c|/ into the location bar and didn't get a file list
that way either, so I suspect that there's more to this than just trying the
file: protocol.

It could be related to the nsFileSpec bug that's reported elsewhere (bug
I dug a bit deeper, and believe that this is related to changes introduced in
r1.33 of nsFileTransport.cpp. Specifically, with this change, it seems like we
place too much credence in a file's length as reported by stat(). We need to be
able to handle files that have no size (e.g., stuff in the /proc filesystem),
but can generate a content stream. The HTTP Index stuff used to do this: it
reports a directory's "content length" as zero, but then coughs up bytes when
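A minimal illustration of that failure mode (hypothetical stand-in types, not the nsFileTransport code): a source whose stat()-reported length is zero can still yield bytes, so a transfer loop keyed to the reported length moves nothing at all:

```cpp
#include <algorithm>
#include <cassert>
#include <cstring>
#include <string>

// Hypothetical stand-in for a /proc-style file or a generated directory
// listing: stat() reports length 0, but Read() still produces bytes.
struct ZeroLengthSource {
    std::string data = "generated directory listing\n";
    size_t pos = 0;
    size_t ReportedLength() const { return 0; }   // what stat() claims
    size_t Read(char* buf, size_t max) {          // what the stream delivers
        size_t n = std::min(max, data.size() - pos);
        std::memcpy(buf, data.data() + pos, n);
        pos += n;
        return n;
    }
};

// A loop that trusts the reported length never reads a byte, so no
// OnDataAvailable-style notification would ever fire.
size_t TransferTrustingStat(ZeroLengthSource& src) {
    size_t remaining = src.ReportedLength();      // 0 here
    size_t total = 0;
    char buf[64];
    while (remaining > 0) {
        size_t n = src.Read(buf, std::min(remaining, sizeof(buf)));
        if (n == 0) break;
        total += n;
        remaining -= n;
    }
    return total;                                 // 0: content never arrives
}
```

Draining the stream until Read() returns zero, by contrast, recovers the full content.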
Attached patch: proposed fix
warren: take a look at the fix and let me know if it looks ok. The idea is to
allow mTransferAmount to be zero, in which case, we'll just treat the stream as
being done when we've got no more data to read out of it; i.e., writeAmt == 0
from the mBufferOutputStream.

I removed the termination condition where mTransferAmount == 0, because this
should (?) be covered by writeAmt == 0. I guess that this could cause an extra
pass through the read loop, though. We could further optimize by maintaining
some more state. Is it worth it?
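The idea can be sketched like this (illustrative only; the names echo the comment above, not the actual nsFileTransport.cpp code). mTransferAmount == 0 is taken to mean "length unknown", and the loop ends when a pass moves no bytes:

```cpp
#include <algorithm>
#include <cassert>
#include <cstring>
#include <string>

// Hypothetical in-memory stream standing in for the file transport's input.
struct Stream {
    std::string data;
    size_t pos = 0;
    size_t Read(char* buf, size_t max) {
        size_t n = std::min(max, data.size() - pos);
        std::memcpy(buf, data.data() + pos, n);
        pos += n;
        return n;
    }
};

// Sketch of the fixed loop: mTransferAmount == 0 means "length unknown,
// read to EOF"; termination is writeAmt == 0, not mTransferAmount == 0.
size_t Transfer(Stream& in, std::string& out, size_t mTransferAmount) {
    char buf[16];
    size_t total = 0;
    for (;;) {
        size_t want = sizeof(buf);
        if (mTransferAmount != 0)     // bounded transfer: cap the read
            want = std::min(want, mTransferAmount - total);
        size_t writeAmt = in.Read(buf, want);
        if (writeAmt == 0)            // no data moved this pass: done
            break;                    // (costs one extra pass when the
                                      //  length was known exactly)
        out.append(buf, writeAmt);
        total += writeAmt;
    }
    return total;
}
```

Note the writeAmt == 0 check subsumes the old mTransferAmount == 0 termination test, at the cost of the one extra pass through the read loop that the comment anticipates.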
I need the ability to pass -1 into the AsyncRead() method as the read amount. We
used to be able to do this, right?
Closed: 24 years ago
Resolution: --- → FIXED
Moving all UE/UI bugs to new component: User Interface: Design Feedback
UE/UI component will be deleted.
Component: UE/UI → User Interface: Design Feedback
No longer blocks: 18471
Component: User Interface Design → Browser-General
Product: Browser → Seamonkey