Closed Bug 193921 · Opened 22 years ago · Closed 21 years ago
unable to http publish because not prompted for a password
Categories: SeaMonkey :: Composer, defect, P3
Tracking: (Not tracked)
Status: RESOLVED FIXED
Target Milestone: mozilla1.4beta
People: (Reporter: Brade, Assigned: darin.moz)
Details: (Keywords: dataloss, regression, verifyme; Whiteboard: [adt1][ETA: 2003-06-06])
Attachments (5 files, 1 obsolete file)
- 216.18 KB, text/plain
- 72.76 KB, application/zip
- 33.76 KB, text/plain
- 43.51 KB, text/plain
- 1.23 KB, patch (Brade: review+, alecf: superreview+, jesup: approval1.4+)
I just realized that when I attempt to http publish with the trunk builds, I am
not being prompted for my password (and publishing fails silently). This is a
regression from mozilla 1.0.
Reporter
Comment 1•22 years ago
Reporter
Comment 2•22 years ago
Reporter
Updated•22 years ago
Attachment #114820 - Attachment is obsolete: true
Attachment #114820 - Attachment is patch: false
Comment 3•22 years ago
Composer triage team: nsbeta1+/adt1
Assignee
Comment 4•22 years ago
this log file unfortunately does not include log output for nsHttp:5. i
mentioned this to brade, and she said she would generate a new log.
Comment 5•22 years ago
Using my own Windows debug build from the trunk (tree updated yesterday), I did
this:
set NSPR_LOG_MODULES=nsHttp:5,nsSocketTransport:5,nsStreamPump:5,nsPipe:5
and then logged 3 different scenarios. In all 3 cases, publishing always
appears to succeed each time I publish. Only on the 2nd try of scenario 3 did
the file actually get published.
1. 0ftp-log-httptest-no-prompt.txt // 1 try: no prompt
2. 1ftp-log-httptest-twice.txt // 2 tries: no prompt, then prompt
3. 2ftp-log-httptest-in-dialog-twice.txt // 2 tries, pwd entered in pub settings
In scenario 3, there is no prompting expected or experienced because I entered
my password in the Settings tab of the Publish dialog.
Also note that in scenarios 2 and 3 I published to a different file name each
time but with the same server, user id, and password.
Comment 6•22 years ago
Related to comment #5: I forgot to mention that I clicked the Browse button on
the composer toolbar after each publishing attempt. The only other HTTP activity
is the opening of my home page (http://www.mozilla.org/).
Assignee
Updated•22 years ago
Status: NEW → ASSIGNED
Target Milestone: --- → mozilla1.3final
Assignee
Comment 7•22 years ago
ok, what's going on here is that the very first publish attempt was for an URL
like this:
http://mcs@rocknroll.mcom.com/...
at first, we just try publishing the document without any username or password.
the server challenges us with a 401, and then we look to see if we have a
username and password to send. we find that the URL contains a username, so we
try sending the username with no password. the result is strange. what we get
looks like a HTTP/0.9 response. that is, there is no discernable HTTP status
line. we treat all HTTP/0.9 responses as HTTP/1.0 200 OKs. as a result, we
"think" that the username="mcs" and password="" is valid and moreover correct
for rocknroll.mcom.com. we then automatically use that username and password
for all future publishing attempts, making it seem like nothing is working.
now, i don't know if we are really getting a HTTP/0.9 response. i'll have to
try publishing to rocknroll myself to see what it gives me. it's
possible/probable that we are misinterpreting the response. i'm sure rocknroll
would send back another 401.
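A minimal sketch (standard C++, not the actual nsHttpTransaction parser; names are illustrative) of the HTTP/0.9 fallback comment 7 describes: a reply whose first bytes are not an HTTP/1.x status line is treated as a headerless HTTP/0.9 body with an implied 200 OK, which is how stray bytes become a phantom success.

#include <cstddef>
#include <cstring>

struct ResponseHead {
    int status;     // implied 200 for HTTP/0.9
    bool isHttp09;  // true when no status line was found
};

// Sketch only: classify the first bytes read off the socket.
ResponseHead parseResponseStart(const char* buf, size_t len) {
    static const char kPrefix[] = "HTTP/";
    if (len < sizeof(kPrefix) - 1 ||
        memcmp(buf, kPrefix, sizeof(kPrefix) - 1) != 0) {
        // No status line at all: per pre-1.0 convention, the whole stream
        // is the entity body and the request is assumed to have succeeded.
        return ResponseHead{200, true};
    }
    // ... a real client parses "HTTP/1.x <status> <reason>" here ...
    return ResponseHead{0, false};
}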
Reporter
Comment 8•22 years ago
Darin--In the past, I thought this was the sequence (Maybe I'm not remembering
correctly though):
attempt to connect without any login/pw
challenge
attempt to connect with given login and prompted password
Assignee
Comment 9•22 years ago
kathy: i don't think any of the auth code has changed recently, so this must be
something related to the fact that the second challenge response is malformed or
malparsed.
Comment 10•22 years ago
Here is a TCP trace of an HTTP publishing attempt. Look at Frames 7, 8, and 10
(which make up the HTTP response to the initial "unauthorized" PUT). 7 and 8
look OK to me, but 10 looks like extra output sent by the HTTP server. Very
strange. Is this a HTTP server bug or misconfiguration? Even if it is, it is
very dangerous to assume that an HTTP response that can't be parsed is a
"success" response, at least when publishing.
Comment 11•22 years ago
Comment on attachment 115036
TCP trace of publishing attempt
Also worth noting: in Frame 13 (the 2nd PUT) we see this:
Authorization: Basic bWNzOg==\r\n
Base64-decoding the credentials produces:
mcs:
(my user id followed by a zero-length password). I am not sure what behavior is
expected, but if a 401 error is returned I would expect to be prompted for my
password.
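To verify the decoding by hand, here is a small self-contained C++ sketch (illustrative only, no Mozilla code) that base64-decodes the credential blob from the trace:

#include <cstdint>
#include <iostream>
#include <string>

// Tiny base64 decoder: enough to inspect an Authorization header by hand.
std::string base64Decode(const std::string& in) {
    static const std::string tbl =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
    std::string out;
    uint32_t buf = 0;
    int bits = 0;
    for (char c : in) {
        if (c == '=') break;              // padding: done
        size_t v = tbl.find(c);
        if (v == std::string::npos) continue;
        buf = (buf << 6) | static_cast<uint32_t>(v);
        bits += 6;
        if (bits >= 8) {
            bits -= 8;
            out.push_back(static_cast<char>((buf >> bits) & 0xFF));
        }
    }
    return out;
}

int main() {
    // "bWNzOg==" is the credential blob from the trace above.
    std::cout << base64Decode("bWNzOg==") << '\n';  // prints "mcs:" (empty password)
}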
Comment 12•22 years ago
For completeness, here is a TCP trace of a successful HTTP GET for a document
that requires authentication (I replaced the credentials with *'s). No strange
extra data from the HTTP server in this case, and I get prompted for my user id
and password and all works perfectly.
Assignee
Comment 13•22 years ago
ok, reducing severity. this is a server bug. we might end up trying to work
around the problem by assuming the user didn't mean for us to send an empty
password. remember, empty passwords can be valid.
pushing off to 1.4
Severity: critical → minor
Target Milestone: mozilla1.3final → mozilla1.4alpha
Comment 14•22 years ago
Another interesting tidbit: using Netscape 7.02, HTTP publishing to this server
does work. Why? Because necko closes the HTTP connection between PUT attempts,
which means the extra data sent back by the web server in response to the first
PUT attempt is ignored.
Comment 15•22 years ago
One more data point: using Netscape Communicator 4.78, HTTP publishing to this
same server works fine and I do not see an extra HTML document returned by the
web server after the first HTTP PUT. Very strange. Of course what Communicator
4.78 sends is slightly different; it speaks HTTP 1.0, not 1.1 etc.
Assignee
Comment 16•22 years ago
oh man! i think you just hit the nail on the head. we are not supposed to be
reusing the connection when we get a 401 (or 407) response. this is not a
server bug after all. we must not be canceling the connection properly or something.
back to 1.3 final... this needs to be fixed. i suspect this will impact any
site using HTTP auth.
Severity: minor → major
Flags: blocking1.3?
Priority: -- → P2
Target Milestone: mozilla1.4alpha → mozilla1.3final
Updated•22 years ago
Flags: blocking1.3? → blocking1.3+
Assignee
Comment 17•22 years ago
i spoke too soon. another look at the log files shows that this is a HTTP/1.1
server, and the connection can be reused. my previous statement was based on
the fact that we always cancel reading the transaction which resulted in a 401,
so that we don't waste cycles reading the error page sent by the server.
canceling the transaction usually results in the connection being killed.
however, in this case the error page is very small and we actually snarf it all
up before processing the 401 (these happen on different threads). as a result,
the connection is already in the idle (can be reused) list when the transaction
is canceled. hence, the death of the transaction has no impact on the
connection which was used.
so, this isn't really a regression, and i'm back to believing that the server is
faulty. this bug may only be appearing now because perhaps necko is doing a
better job of utilizing its background thread to read data quickly off the
socket (that was the point of bug 176919).
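A rough sketch of the thread ordering comment 17 describes, with illustrative names only (this is not the Mozilla connection manager): the socket thread can finish reading the small error page and park the connection for reuse before the main thread's cancel arrives.

#include <atomic>

std::atomic<bool> connectionInIdleList{false};

// Socket thread: reads the 401 status line, headers, and the entire small
// error body, then hands the keep-alive connection back for reuse.
void socketThreadReadsResponse() {
    connectionInIdleList = true;  // connection is now eligible for reuse
}

// Main thread: sees the 401 and cancels the transaction to skip the error
// page. If the socket thread already finished, the cancel no longer tears
// down the connection, and the next PUT will pick it up from the idle list.
void mainThreadHandles401() {
    bool cancelKillsConnection = !connectionInIdleList.load();
    (void)cancelKillsConnection;  // true only if the cancel won the race
}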
Assignee
Comment 18•22 years ago
hmm... i'm not able to duplicate this problem. i tried publishing to:
http://rocknroll.mcom.com/users/mcs/publish/demos/test1.html
i entered username=mcs with no password (obviously i don't know your password),
and i was prompted to enter your password.
build id 2003-02-21-08 linux trunk. your log files indicate that you can repro
this with the 2003-02-18 build under win2k. i'll give that a try.
Comment 19•22 years ago
I think I've been seeing bugs in the Calendar that are related to this bug.
Keywords: calendar
Assignee
Comment 20•22 years ago
hmm.. so, i was able to repro this bug under windows (build 2003022308).
Assignee
Comment 21•22 years ago
ok, this problem is not 100% reproducible because of the issues i mentioned in
comment #17. however, i can confirm that the server is definitely at fault
here. from attachment 115036, observe that frame #7 contains the 401 response
headers along with the header "Content-Length: 223" and frame #8 contains
exactly 223 bytes of data. so far so good. however, notice that frame #10
contains an additional 147 bytes of data. that is a big problem! the server is
sending us some garbage here. it looks like the server is sending us both the
error page for a 401 as well as the error page for a "bad request". as a
result, after writing the revised PUT request with basic auth credentials, we
immediately read the 147 bytes as if it were the response to our second PUT
attempt. the 147 bytes are parsed as a HTTP/0.9 response (which has no status
line or headers), which is treated just as if it were a HTTP/1.0 200 response.
i can try to work around this bad server behavior, but nonetheless i don't think
this bug should be blocking 1.3 final. we aren't doing anything wrong AFAICT.
i'll try to come up with some sort of workaround for 1.4.
Flags: blocking1.3+ → blocking1.3-
Target Milestone: mozilla1.3final → mozilla1.4alpha
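One defensive check suggested by the framing analysis in comment 21, sketched for a hypothetical hand-rolled client (Necko's real bookkeeping is more involved): only return a connection to the idle pool when the body consumed matches Content-Length exactly and nothing extra is buffered.

#include <cstddef>

// Sketch: the stray 147 bytes in frame #10 would show up as surplus
// buffered data after the declared 223-byte body was consumed, so the
// framing is broken and the connection must not be reused.
bool safeToReuseConnection(size_t declaredContentLength,
                           size_t bodyBytesConsumed,
                           size_t surplusBytesBuffered) {
    return bodyBytesConsumed == declaredContentLength &&
           surplusBytesBuffered == 0;
}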
Comment 22•22 years ago
I will try to find out if the web server in question is misconfigured or just
buggy. I think it is a bug in the Mozilla HTTP code to treat data it does not
recognize as meaning "success" when doing an HTTP PUT (I can understand why
you'd do that for HTTP GET, but not for PUT).
I understand the reasoning for not holding up 1.3 for this, but because of this
bug and bug 193908 (FTP) I can't publish at all to one important server
(rocknroll) inside the Netscape firewall using recent Mozilla builds. So it is
really disappointing that both bugs may be deferred. Is there anything I can do
to help get these fixed? I suppose I could work on a patch....
Comment 23•22 years ago
One of the Netscape Enterprise Server engineers has confirmed that there is a
bug in their server. They are working on a fix.
But I also think it is very bad for the Mozilla code to assume that a
garbage/unparseable response means "success" if it comes in response to a PUT or
similar HTTP verb. I am going to see if it is easy to patch the Mozilla code to
at least recognize this as a failure rather than reporting success. It would be
good to not fail silently, because users will think everything is working fine
even though their document is not being published at all (and therefore they
will lose data).
Assignee
Comment 24•22 years ago
mark: the workaround would probably be to disable keep-alive when publishing.
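A hypothetical illustration of that workaround (a hand-built request, not the Necko API): sending Connection: close on each PUT gives every publish attempt a fresh socket, so stray bytes from a buggy server die with the old connection.

#include <string>

std::string buildPutRequest(const std::string& host, const std::string& path,
                            const std::string& body) {
    std::string req;
    req += "PUT " + path + " HTTP/1.1\r\n";
    req += "Host: " + host + "\r\n";
    req += "Content-Length: " + std::to_string(body.size()) + "\r\n";
    req += "Connection: close\r\n";  // disable keep-alive for this request
    req += "\r\n" + body;
    return req;
}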
Reporter
Comment 25•22 years ago
Darin--yes, you could fix it in that way. However, isn't it a bad assumption
for necko to treat an error condition (during PUT) as success?
Assignee
Comment 26•22 years ago
spoke with kathy over IRC. looks like we can assume that a HTTP/0.9 server
would only understand a GET request (though we might want to extend that to POST
as well since it is so easy to write a CGI script that outputs HTTP/0.9). this
means that we should be able to get away with treating a HTTP/0.9 response to a
PUT request as a failure. there's code in nsHttpTransaction that detects 0.9
responses.
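A sketch of the shape that check could take, with illustrative names rather than the actual nsHttpTransaction members: 0.9-style responses stay acceptable for GET (and POST, per the CGI caveat above) but become hard failures for PUT.

// Sketch only; method names and error handling are illustrative.
enum class Method { GET, HEAD, POST, PUT };

bool acceptHttp09Response(Method requestMethod) {
    // A genuine HTTP/0.9 server could only have understood GET, and a
    // headerless CGI reply to POST is common enough to tolerate; anything
    // else (notably PUT) must not be promoted to an implied 200 OK.
    return requestMethod == Method::GET || requestMethod == Method::POST;
}

// Caller sketch: on a parsed-as-0.9 response,
//   if (!acceptHttp09Response(method)) -> fail the transaction with an
//   error so Composer reports the publish failure instead of silence.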
Comment 27•22 years ago
Treating HTTP/0.9 responses to PUT as an error would probably work. However,
note that we're getting "Bad Request" from the server. That's a 400, not a 401,
so it's definitely a server bug.
OTOH, the workaround shouldn't be able to break anything, so.... I'm still not
sure what you'd do when you get an HTTP/0.9-style response from the PUT - you
can't really try again, because you'd theoretically get the same response, right?
Assignee
Updated•22 years ago
Target Milestone: mozilla1.4alpha → mozilla1.4beta
Assignee
Updated•22 years ago
Priority: P2 → P3
Assignee
Comment 28•22 years ago
mark: any update on a server side fix?
Comment 29•22 years ago
I think the Netscape Enterprise Server engineers have a fix in hand. I am not
sure when it will be released, or if Sun's web server has the same bug (probably
it does).
Updated•22 years ago
Whiteboard: [adt1] → [adt1][ETA needed]
Comment 30•22 years ago
Darin is on vacation and staff is wondering if this bug is indeed a server bug
or something that can be fixed on the client side. Any suggestions would be
appreciated.
Reporter
Comment 31•22 years ago
Doug--comments 23-27 are the most relevant pieces to the current proposal for
addressing this bug on the client side.
There is a server bug but it is also a client bug. The client is not reporting
an error when it fails to PUT the file (the client gets confused by the server's
bad response). (see last part of comment 23)
The client could do one of these things which I believe would allow us to synch
with the servers again:
#1--close the connection after the initial response (before the bogus server
response is received); establish a new connection for 2nd put attempt [I'm not
sure on the feasibility here]
#2--return errors for PUT attempts that have HTTP/0.9 responses (comment 23 and
comment 26)
#3--Disable keep-alive for PUT (see Darin's comment 24)
#4--something else :-)
bbaetz's notes in comment 27 may be helpful in determining which of the above
(or other) solutions to use.
bbaetz--the goal in the short run is just to report a problem and close the
connection. I'm happy with that behavior when dealing with buggy servers. :-)
Comment 33•22 years ago
This is what darin suggested in comment 26. Note: you still can't publish to
this website.
Comment 34•22 years ago
Adding ETA per Cathleen: end of this week (so 2003-06-06).
Whiteboard: [adt1][ETA needed] → [adt1][ETA: 2003-06-06]
Reporter
Comment 35•22 years ago
Comment on attachment 124770
workaround patch
r=brade
Attachment #124770 - Flags: review+
Updated•22 years ago
Attachment #124770 - Flags: superreview?(alecf)
Attachment #124770 - Flags: approval1.4?
Comment 36•22 years ago
Comment on attachment 124770
workaround patch
sr=alecf
Attachment #124770 - Flags: superreview?(alecf) → superreview+
Comment 37•22 years ago
Comment on attachment 124770
workaround patch
Please add fixed1.4 when this is checked in. Also, please leave this bug open
and assigned to Darin for him to check after he returns - to see if the fix
needs to be extended or revised.
Attachment #124770 - Flags: approval1.4? → approval1.4+
Comment 38•22 years ago
Checking in nsHttpTransaction.cpp;
/cvsroot/mozilla/netwerk/protocol/http/src/nsHttpTransaction.cpp,v <--
nsHttpTransaction.cpp
new revision: 1.69; previous revision: 1.68
done
Checking in nsHttpTransaction.cpp;
/cvsroot/mozilla/netwerk/protocol/http/src/nsHttpTransaction.cpp,v <--
nsHttpTransaction.cpp
new revision: 1.68.4.1; previous revision: 1.68
done
Darin - please review. there should be a better way to deal with broken servers.
Assignee: dougt → darin
Keywords: fixed1.4
Comment 39•21 years ago
Verified on win32 and macho (2003-06-11-05) branch builds.
Keywords: fixed1.4 → verified1.4
Comment 40•21 years ago
RESOLVED/FIXED:
http://bonsai.mozilla.org/cvsview2.cgi?diff_mode=context&whitespace_mode=show&root=/cvsroot&subdir=mozilla/netwerk/protocol/http/src&command=DIFF_FRAMESET&file=nsHttpTransaction.cpp&rev2=1.69&rev1=1.68
+verifyme - if anyone still has an old HTTP 0.9 server, you should get an error
message if you publish to it.
Keywords: verifyme
Comment 41•21 years ago
Resolved/Fixed
Based mostly on comment 40.
If you feel this should remain open, feel free to hit me with a trout.
Status: NEW → RESOLVED
Closed: 21 years ago
Resolution: --- → FIXED
Updated•20 years ago
Product: Browser → Seamonkey