Closed Bug 83960 Opened 23 years ago Closed 23 years ago

HTTP 1.0: Incorrect "Content-Length:"

Categories

(Core :: Networking: HTTP, defect, P1)

x86
Windows 98
defect

Tracking


VERIFIED FIXED
mozilla0.9.2

People

(Reporter: jay, Assigned: darin.moz)

References


Details

(Whiteboard: r=gagan, sr=dougt, a=asa)

Attachments

(3 files)

Win98SE Moz Build 2001060308 33.6 Dialup Connection

Some pages on my server will not parse completely; information is left out at 
the bottom of the page, including the counter and graphics.

Click on any recipe in the list.

Server is a Mac running WebStar software.

Same pages parse/display 100% using Communicator and IE.
The URL given has, on line 92:

<a href="appetizers.html">Appetizers | <a href="breads.html">Breads</a> | <a 
href="cajun.html">Cajun Cuisine</a>

Note the missing </a> after "Appetizers".  We should be recovering from this, 
though...

Could someone who has a build with Document Inspector look at the DOM to see 
what that looks like down there?
I think that this is lower level than the parser.
If you do a 'save as', which I do not think goes through the parser, the saved
HTML file dies in the middle of the final img tag. The same thing seems to
happen with the recipes.  This appears to be a browser/server failure to
communicate.
Also, tidy complains about a missing <tr> at line 38. Perhaps the </tr>
on line 37 is a typo.
Fixed the missing </a> as well as the missing <tr>. Same problem, still doesn't 
parse the entire page.

As a side note, the entire site (recipes) and related pages work flawlessly in 
NS 6.5b0

I'm thinking this is related to the "images not loading" bug reported a while 
back, forgot the bug #
This is a reasonably benign symptom of what looks to be a much more serious bug.
The nsPipe class is getting out of sync.


Steps to reproduce.

1) Start Mozilla in the VC6 debugger (Windows 2000, recent source).
2) Make sure your cache is clear.
3) Break on line 2322 of nsParser.cpp (in nsParser::OnDataAvailable), on the line
that reads
    result = pIStream->ReadSegments(ParserWriteFunc, (void*)&pws, aLength, &totalRead);
4) Load http://www.gatewayno.com/cuisine/recipes/oriental/ham_chicken_rolls.html
5) Step in until you get to nsPipe2 line 246, i.e. just inside nsPipe::GetReadSegment.
6) Look at mWriteCursor; you will see the missing last 82 characters
of ham_chicken_rolls.html.  As nsPipe seems to use the standard tail-chasing
pipe algorithm, the reader is oblivious to the data past the write cursor.

Comments:
If you merely set a breakpoint and run to GetReadSegment, the extra 82 characters
will be clobbered, which leads me to suspect some thread interaction.  (Note:
this does not make sense to me, but I have jumped directly enough times to be
sure. You are welcome to tell me I'm hallucinating.)
I am not at all sure why this only seems to happen at this site, and I am way out
of my league trying to debug the socket end of the pipe.
I hope this helps.
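
To make the cursor arithmetic above easier to follow, here is a minimal conceptual
sketch of a tail-chasing pipe. The names are hypothetical and this is not the actual
nsPipe code; it only illustrates why the reader never sees bytes past the write cursor.

#include <cstddef>

// Conceptual sketch only (hypothetical names, not nsPipe): the consumer is
// handed the span [readCursor, writeCursor), so bytes the producer has not
// yet committed by advancing the write cursor are invisible to it.
struct TailChasingPipe {
    const char* buffer;
    size_t      capacity;
    size_t      readCursor;   // next byte the consumer may read
    size_t      writeCursor;  // one past the last byte the producer committed

    // Return the readable segment; wrap-around is ignored for brevity.
    const char* GetReadSegment(size_t* segmentLen) const {
        *segmentLen = writeCursor - readCursor;
        return buffer + readCursor;
    }
};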
OK, I finally understand.  This is a server problem. 
The server sends the page with the following header.

HTTP/1.0 200 OK
MIME-Version: 1.0
Server: WebSTAR/2.1.1 ID/33333
Message-Id: <bec68db7.7783@ns1.gatewayno.com>
Connection: close
Date: Tue, 05 Jun 2001 11:35:44 GMT
Last-Modified: Mon, 04 Jun 2001 06:20:51 GMT
Content-Type: text/html
Content-Length: 3135

The content, however, is 3218 bytes long.  Not surprisingly, the content is also
83 lines long, and the lines are terminated with CR-LF pairs.
This looks like it should --> Evangelism.
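
As a side note, the 83-byte shortfall matches the line count exactly, which is
consistent with the server computing Content-Length over single-byte line endings.
That reading is an assumption, but the arithmetic is easy to check:

#include <cstdio>

// Throwaway check of the figures quoted above. The "one byte lost per CR-LF
// line" interpretation is an assumption about the server, not a confirmed fact.
int main() {
    const int reportedLength = 3135;  // Content-Length claimed in the header
    const int actualLength   = 3218;  // bytes actually received
    const int lineCount      = 83;    // lines in the page, each ending in CR-LF

    std::printf("missing bytes: %d (line count: %d)\n",
                actualLength - reportedLength, lineCount);
    return 0;
}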

It is worrisome that more data is being written to the pipe than is being
reported, though it looks like we will avoid buffer overruns. 
OK, you say it's a server problem. I've been running WebStar Server software 
since 1996 and have not experienced this problem until Mozilla came along. The 
entire site runs just fine through Navigator/Communicator and NS 6.xx as well as 
IE. I'm all ears for a server adjustment...
OK, on investigation, I think that you are correct. Your server is only
claiming HTTP/1.0, and RFC 1945 says (section 7.2.2):

Note: Some older servers supply an invalid Content-Length when
sending a document that contains server-side includes dynamically
inserted into the data stream. It must be emphasized that this
will not be tolerated by future versions of HTTP. Unless the
client knows that it is receiving a response from a compliant
server, it should not depend on the Content-Length value being
correct.

So this does look like a mozilla bug. Note: I am no expert, just trying to help
route the bug to the right place. 

An experiment that could be tried to verify my theory would be to insert about
83 spaces before the </html> tag and see if the page looks better (NOTE: This
should not be a long term fix).  If it does, change the component of this bug to
Networking: HTTP, and simultaneously send an email to WebSTAR telling them that
their Content-Length is incorrect.

I would change the component but I am insufficiently godlike.

Adding the 83 lines worked like a charm. The page works in Mozilla now. But of 
course this is not the correct fix, dunno where to go from here. Perhaps an 
upgrade to the latest WebStar is in order.

Rerouting to Networking: HTTP
Component: Browser-General → Networking: HTTP
The attached code fixes things.  It allows mozilla to load pages from old,
arguably broken servers, at the cost of slowing mozilla down when loading pages
from old, possibly broken servers.
Reassigning to get someone to review.
Assignee: asa → neeti
QA Contact: doronr → benc
--darin
Assignee: neeti → darin
david: the patch is not quite right... we can only ignore the Content-Length
header for HTTP/1.0 if the server does not send a "Connection: keep-alive" header.
Attached patch: An updated patch
You are, of course, correct. As far as I can gather from Googling and looking at
your code, we also need to check Proxy-Connection. The awful code I wrote that
peers into the header should probably be replaced by

if (mConnection && !mConnection->CanReuse()) {
    mPossiblyMendaciousServer = PR_TRUE;
}

but I cannot guarantee that that is correct.  Are the ReuseCount and IdleTimeout
functions of nsHttpConnection going to be connected to anything?

I do not know of a server serving HTTP/1.0 with "Connection: keep-alive", so I
must plead guilty to submitting this pretty much untested.
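
For reference, a minimal sketch of the rule being discussed, assuming plain flags
instead of peering at mConnection (hypothetical helper, not the patch that actually
landed):

#include "prtypes.h"  // PRBool / PR_TRUE / PR_FALSE, as used in the snippet above

// Hypothetical helper, not the code that landed: Content-Length can only be
// ignored when the body can instead be delimited by the connection closing,
// i.e. an HTTP/1.0 response on a connection that will not be reused (no
// "Connection: keep-alive" and no "Proxy-Connection: keep-alive").
static PRBool
ShouldIgnoreContentLength(PRBool isHttp10,
                          PRBool connectionKeepAlive,
                          PRBool proxyConnectionKeepAlive)
{
    if (!isHttp10)
        return PR_FALSE;   // HTTP/1.1 servers are expected to get it right
    if (connectionKeepAlive || proxyConnectionKeepAlive)
        return PR_FALSE;   // reading to EOF would swallow the next response
    return PR_TRUE;        // the connection will close; EOF marks the end
}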
Status: NEW → ASSIGNED
Priority: -- → P1
Target Milestone: --- → mozilla0.9.2
Summary: Pages Not Parsed 100% → HTTP 1.0: Incorrect "Content-Length:"
my patch simplifies some of the logic for determining when to ignore the
content-length header.  it also makes sure that we completely ignore it.
there seem to be some images (GIFs) on the page that do not display at all.
this does not seem to be bug 77072 since netstat does not show a bunch of
stagnant sockets as in bug 77072.  this seems to be some kind of problem
with imagelib.  seeing this on linux with a recent trunk build.
That's probably bug 74313 you're seeing.
r=gagan
Whiteboard: r=gagan, sr=?, a=?
sr
Whiteboard: r=gagan, sr=?, a=? → r=gagan, sr=dougt, a=?
a= asa@mozilla.org for checkin to the trunk.
(on behalf of drivers)
Blocks: 83989
Whiteboard: r=gagan, sr=dougt, a=? → r=gagan, sr=dougt, a=asa
fix checked in.
Status: ASSIGNED → RESOLVED
Closed: 23 years ago
Resolution: --- → FIXED
The fix for this bug seems to have caused a problem for me on linux.  Now when I
am loading a page and hit stop (or click a link) while the page is loading, I
get a segfault.
I backed out the 5 files that Darin checked in on 6/13/2001 15:44, and it no
longer crashes when I do that.
If this is unrelated, let me know and I'll file a new bug.
Here's a backtrace:
#0  0x4051eec5 in main_arena () from /lib/libc.so.6
#1  0x407f021b in nsHttpConnection::OnTransactionComplete ()
   from mozilla/dist/bin/components/libnecko.so
#2  0x407f210b in nsHttpTransaction::Cancel ()
   from mozilla/dist/bin/components/libnecko.so
#3  0x407f62a9 in nsHttpChannel::Cancel ()
   from mozilla/dist/bin/components/libnecko.so
#4  0x40ce8d17 in imgRequest::Cancel ()
   from mozilla/dist/bin/components/libimglib2.so
#5  0x40ce8c58 in imgRequest::RemoveProxy ()
   from mozilla/dist/bin/components/libimglib2.so
#6  0x40cea12a in imgRequestProxy::Cancel ()
   from mozilla/dist/bin/components/libimglib2.so
#7  0x407cf5b9 in nsLoadGroup::Cancel ()
   from mozilla/dist/bin/components/libnecko.so
#8  0x408c0089 in nsDocLoaderImpl::Stop ()
   from mozilla/dist/bin/components/liburiloader.so
#9  0x408bf087 in nsURILoader::Stop ()
   from mozilla/dist/bin/components/liburiloader.so
#10 0x40905348 in nsDocShell::StopLoad ()
   from mozilla/dist/bin/components/libdocshell.so
#11 0x4090d9df in nsDocShell::InternalLoad ()
   from mozilla/dist/bin/components/libdocshell.so
#12 0x4091481a in nsWebShell::HandleLinkClickEvent ()
   from mozilla/dist/bin/components/libdocshell.so
#13 0x40914218 in HandlePLEvent ()
   from mozilla/dist/bin/components/libdocshell.so
#14 0x400c5a77 in PL_HandleEvent ()
   from mozilla/dist/bin/libxpcom.so
#15 0x400c5993 in PL_ProcessPendingEvents ()
   from mozilla/dist/bin/libxpcom.so
#16 0x400c6908 in nsEventQueueImpl::ProcessPendingEvents ()
   from mozilla/dist/bin/libxpcom.so
#17 0x406acd03 in event_processor_callback ()
   from mozilla/dist/bin/components/libwidget_gtk.so
#18 0x406aca7d in our_gdk_io_invoke ()
   from mozilla/dist/bin/components/libwidget_gtk.so
#19 0x40345aca in g_io_unix_dispatch () from /usr/lib/libglib-1.2.so.0
#20 0x40347186 in g_main_dispatch () from /usr/lib/libglib-1.2.so.0
#21 0x40347751 in g_main_iterate () from /usr/lib/libglib-1.2.so.0
#22 0x403478f1 in g_main_run () from /usr/lib/libglib-1.2.so.0
#23 0x4026b5b9 in gtk_main () from /usr/lib/libgtk-1.2.so.0
#24 0x406ad1d0 in nsAppShell::Run ()
   from mozilla/dist/bin/components/libwidget_gtk.so
#25 0x4068f8ea in nsAppShellService::Run ()
   from mozilla/dist/bin/components/libnsappshell.so
#26 0x804e6b9 in main1 ()
#27 0x804eea5 in main ()
#28 0x404489cb in __libc_start_main (main=0x804ed78 <main>, argc=1,
    argv=0xbffff934, init=0x804b120 <_init>, fini=0x8050b18 <_fini>,
    rtld_fini=0x4000aea0 <_dl_fini>, stack_end=0xbffff92c)
    at ../sysdeps/generic/libc-start.c:92
i've filed bug 85806 for this crash.  thanks for the stack trace!
Verified by investigation of bug 91025, which has been seen cross-platform.
Status: RESOLVED → VERIFIED