Closed Bug 118404 Opened 23 years ago Closed 23 years ago

External JS files not loading; fixed by making new Profile

Categories

(Core :: DOM: HTML Parser, defect)

x86
Windows 98
defect
Not set
major

Tracking


RESOLVED FIXED
mozilla0.9.9

People

(Reporter: d_king, Assigned: harishd)

References


Details

(Keywords: regression, relnote, Whiteboard: [driver:brendan])

Attachments

(8 files)

From Bugzilla Helper:
User-Agent: Mozilla/4.79 [en] (Win98; U)
BuildID:    2002010403

The JavaScript that creates a menu list on the left side of the page isn't 
working. It does work under Netscape 6.2.1 and I recall it worked under Mozilla 
0.9.7 (although I haven't retested that yet).

Reproducible: Always
Steps to Reproduce:
1. Open page


Actual Results:  No Menu List

Expected Results:  Menu list on this page, and on
https://teldrug.healthcare.cigna.com/healthcare/teldrug/app/public/PrescriptionCenter.jsp?n=11130
which is accessed via the "Customer Service" link.
Works for me; 2002010506/Linux. Are you sure you've enabled all the appropriate JS
options in Edit/Prefs/Advanced/Scripts & Windows?
WFM, win98SE, 2002010403
My Edit-Pref settings are as follows.

Under Advanced - "Enable Java" is set, and "Enable JavaScript for Mail &
Newsgroups" is not set.

Under Scripts & Windows - Everything is set except for "Open windows by themselves".
Just in case, I just uninstalled and reinstalled 2002010403. No luck.

However, on the JavaScript Console I am seeing lots of warnings, and two errors,
one about "init" not being defined and one about "leftnav" not being defined.

Based on my limited JavaScript skills, I would think this means that the <SCRIPT
SRC> isn't pulling in the remote source files.
hmm, for the record, I'm not seeing any js errors.
It seems we've been having trouble loading external JS files recently.
See bug 117827 - creating a new Profile seems to fix the problem.

David: could you try the site with a new profile? To do this, launch
Mozilla from the command line as follows:
 
       [(path to Mozilla)] ./mozilla.exe -profilemanager


When the Profile Manager comes up, there's a button called "Create Profile".
I'll bet that with the new profile, the problem goes away, as with bug 117827.

In the meantime, reassigning to Browser-General : this is not a problem
with the JavaScript Engine. If anyone knows a bug for the problem of 
external JS files not loading, please post it here - thanks.
Assignee: rogerl → asa
Status: UNCONFIRMED → NEW
Component: Javascript Engine → Browser-General
Ever confirmed: true
QA Contact: pschwartau → doronr
Creating a new profile did fix the problem. Now, to ask the same question as was
asked in bug 117827, WHY?

From reading that bug, it looks like it's something in prefs.js, I'm attaching
my old 'broken' one in case someone can spot the problem.
David: thanks for verifying this so quickly.

Asa: we have a major problem on our hands. As evidenced by this bug
and bug 117827, recent builds can fail to load external JS files
unless the user creates a new Profile.

Is this perhaps related to the new Preferences we've recently added?
(Edit > Preferences > Advanced > Scripts and Windows)
Summary: JavaScript Menu list doesn't work → External JS files not loading; fixed by making new Profile
One of the files in my old profile directory is causing the problem. So far I've
ruled out prefs.js and a few others. I'm currently going through a process of
elimination to find which file is breaking it.

The reason I'm doing this is once I found that a new profile fixed the problem,
I copied all the files from the old profile, and it broke again. So, which file
was it.....well I hope to know soon.
OK, I'm going to say it's a caching problem. I copied all the files, one by one, from
the old profile to the new one, and everything was fine. But when I copied the
cache directory from the old profile, the problem was back.

So, either my cache is corrupt (which I doubt as I cleared it via Prefs less
than 12 hours ago) or the new JavaScript options don't work well with existing
cache files.

Well, that's it from me...
David: thanks again!

As a result of David's findings, reassigning to Networking:Cache 
for further analysis. cc'ing Asa so he can continue to follow this -
Assignee: asa → gordon
Component: Browser-General → Networking: Cache
QA Contact: doronr → tever
More info, this time not very good...

If you take a look at http://www.metrocast.net/~dgk/ (my horrible home page) you
will see it has some JavaScript SRC as well.

<script src="http://wordsmith.org/words/word.js"></script>

Which doesn't seem to work, even with a new profile. I verified the origin file
using Netscape 4.79 which ran it fine.

Is this a different problem (as it appears within a <TABLE></TABLE>)?
Note: such problems can be caused by a missing tag of some sort,
tolerated in NN4.7 but not in Mozilla. When I tried to validate 
http://www.metrocast.net/~dgk/ via http://validator.w3.org/,
I was unable to because the validator says there is no <DOCTYPE>. 

David: would you be able to add a <DOCTYPE> and see if your page
passes the HTML validator? If it should pass, would you be able to make
a copy of the page and reduce it to the smallest possible HTML that
shows the bug? Then you can attach it to this bug; that would be great - 
Oops - the validator allows you to select a doctype and a charset.
When I select HTML 4.01 Transitional and iso-8859-1, errors are detected:

http://validator.w3.org/check?uri=http%3A%2F%2Fwww.metrocast.net%2F%7Edgk%2F&charset=iso-8859-1+%28Western+Europe%29&doctype=HTML+4.01+Transitional
Note this comment by the validator:

Line 121, column 6: 

          </p>
             ^

Error: end tag for element "P" which is not open; try removing the end tag
or check for improper nesting of elements

It's that last bit that's suggestive...
I tried

https://teldrug.healthcare.cigna.com/healthcare/teldrug/app/public/PrescriptionCenter.jsp?n=11130

with

Mozilla/5.0 (Windows; U; Win95; en-US; rv:0.9.7+) Gecko/20020110

and got the error instantly.

I have experienced lately in multiple different cases an unreliability in
loading content of all kinds, even accompanied by crashes.
Maybe I get more of this because my computer is rather slow.

So far, only in one case have I been able to produce a testcase from any of
those failures:

Refer to bug 114827, which appears to me simple and critical enough to suggest
that others such as this might depend on it.
I'm still working on getting a minimal piece of HTML to test my second
problem/example. However, my cable modem connection is very, very slow
tonight, and the provider's tech support has a 3-hour wait time before responding
(they're rather busy, apparently).

Once things are back to normal, I'll also take a look at bug 114827 for
similarities.

By the way, to BHT, hi from a fellow NZ'er.
I've added a new attachment with minimal HTML to demonstrate the problem I
describe in #13 as requested in #14.

validator.w3.org complains about two items in this HTML. An empty <HEAD> (no pun
intended), and the <SCRIPT> doesn't have a TYPE specified.
I've confirmed that my attachment works as expected in Netscape 6.2.1, but
doesn't work in Mozilla 2002011103. 

As the test case removes frame info, I don't think it is related to bug 114827
which is related to frames and seems to be a FRAME SRC using JavaScript problem
rather than a SCRIPT SRC not being loaded problem (sorry BHT).
*** Bug 119541 has been marked as a duplicate of this bug. ***
Note: in the duplicate bug 119541 we tried the following experiment:

1. Bring up Mozilla with the bad profile
2. Go to Edit | Preferences | Advanced | Cache
3. Hit "Clear Memory Cache" and "Clear Disk Cache"
4. Hit OK
5. Close Mozilla
6. Bring up Mozilla with the bad profile again
7. Load the URL

But that did NOT solve the problem -
I think it's highly unlikely that this is a problem with the cache.  Phil, are
you able to reproduce this?
No - I've never been able to reproduce this. Our contributors report
that making a new Profile fixes this. I got the idea that it might be 
Networking:Cache from the work David did in Comment #10, Comment #11.

But I'm not sure what's doing this -
I'm still seeing this once in a while, and replacing my cache files fixes it.

When one clears the disk cache, do the following files remain? :-
_CACHE_001_
_CACHE_002_
_CACHE_003_
_CACHE_MAP_

I figure these are index files to the various cache files. What does Clear Disk
Cache do to these files? Is that the problem?
Those files are normally present after the disk cache has been cleared.  They
contain an index and metadata, but they are reinitialized when the cache is
cleared.  They may not get physically smaller, however.

The comments from #10 and #11 simply indicate there may be a document in the
cache that is causing problems, not that the cache service has a bug in it or
even that the disk cache is corrupt.

It might be interesting to get zip archive of the cache directory when it's in
that state.  We might be able to analyse it for problems.
I believe this is the cache directory for the profile that has the problem I
reported in Bug 119541.
I've created two zip archives, one with the "bad" cache files, and one with the
"good" cache files. I couldn't upload one of them due to its size, so I've put
them on my web page at :-

http://www.metrocast.net/~dgk/badcachefiles.zip

http://www.metrocast.net/~dgk/goodcachefiles.zip

I hope these are of help.
I've just tested using the "bad" cache files under Mozilla 0.9.7 and things work
as expected. As such, I see this as a regression that will need to be fixed for
0.9.8.
Keywords: regression
I see the problem when I use the "bad" profile in both 0.9.7 (2001122106) and
0.9.7+.
I just double-checked on 0.9.7 with win98SE

I deleted the contents of the cache folder.
I unzipped badcachefiles.zip into that folder
I went to http://www.teldrug.com

The menus on the left side of the screen appear as they should.
That *really* makes it sound like this isn't a cache problem.  The format of
the cache files has not changed since 0.9.7, so there isn't any difference in
how they are presented to the cache client (http, imglib, etc.).  Something
higher up is interpreting the data differently.
A thought...which may be pure rubbish...but I'll say it anyway.

If the file specified in the <SCRIPT SRC=***> code was cached, would the
JavaScript engine retrieve it from cache, or get it from the server again?

Several assumptions being made there, but I thought it might help sort out this
problem.
This problem also seems to affect the Bugzilla quick search feature.  When I use
the bad profile, Quick Search reports

 Error
 
The bug number is invalid. If you are trying to use QuickSearch, you need to
enable JavaScript in your browser. To help us fix this limitation, look here.

The JavaScript console reports:
Error: QuickSearch is not defined
Target Milestone: --- → mozilla0.9.9
Linking to 0.9.8's tracking bug, and wondering if Phil or anyone nearby can
reproduce the problem so it can be debugged?

Cc'ing jst.

/be
Blocks: 115520
More info:

The file specified in <SCRIPT SRC=> is, in fact, being downloaded. I can tell
this by finding the file in the cache, deleting that file, and reloading the
page. No, I didn't modify the index files.

So, once in cache, is the JavaScript engine not reading it from there?
More info:

The file specified in <SCRIPT SRC=> is, in fact, being downloaded. I can tell
this by finding the file in the cache, deleting that file, and reloading the
page. No, I didn't modify the index files.

So, once in cache, is the JavaScript engine not reading it from there?
Hmmm, #37 and #38 are dups.....now I wonder how I managed that?
The code that reads the JS files that are referenced on HTML pages doesn't know
the first thing about the cache, it just tells necko to open the URI and feed
back the data. If the data isn't loaded it would indicate that this is a necko
problem to me.
OK, so why are the JavaScript files cached? Seems like a wasted effort if the
code isn't going to use the cache.

Also, if it's a necko problem, then why does creating a new profile fix the
problem? Or, to be more specific, why does replacing the cache directory in the
profile fix the problem?

Just because the code that loads the JS files doesn't know anything about the
cache doesn't mean that the files never come from the cache. The cache is a
transparent layer as far as this necko client is concerned. If necko caches the files,
then they should be loaded from the cache, but the code that loads them doesn't
know, nor care, if that's what really happens or if the files come off the
network again.
cc'ing darin.  Something seems to be getting lost between the cache and js. 
Darin, take a look at comment #32.
Javascript not executed even though it is loaded?

That sounds tricky but familiar.

Maybe a handy testcase can help: one with JavaScript that is definitely loaded
but does not get executed, even when loaded from a local filesystem (until a
browser resize):

<HTML>
<HEAD>
</HEAD>
<FRAMESET cols="30%,*,30%">
    <FRAME src="javascript:'<BODY bgcolor=blue>'">
    <FRAME src="javascript:'<BODY bgcolor=black>'" scrolling=no marginwidth=1>
    <FRAME src="javascript:'<BODY bgcolor=black>'" scrolling=no marginwidth=1>
</FRAMESET>
</HTML>

The case started failing at about the same time this bug was reported.
I suggest including something similar in a build QA check such as the
smoketest. Please excuse my ignorance: I have only heard the term and am not
familiar with the smoketest itself.
bht, is there a bug filed on that frame src="javascript:'foo'" regression yet? 
It may be a dup or related, but it should be reported separately.

/be
The frameset problem is filed as bug 114827, and that's unrelated to this
problem. In that case the JS does get executed, but the browser doesn't react to
the background in the <body> until a resize reflow.
I just made some tests that confirm jst's statements.
Whiteboard: [driver:brendan]
it would be a great help if someone able to reproduce this bug could generate
and attach a HTTP log file.  this is done by setting the following environment
variables.

under win98:

1) open up a DOS prompt
2) type the following:

    set NSPR_LOG_MODULES=nsHttp:5
    set NSPR_LOG_FILE=http.log
    cd \path\to\mozilla\installation
    .\mozilla.exe

3) reproduce the bug
4) you should find a file called http.log in the current directory which will
contain a juicy log of what you did from the perspective of the networking code.

from this log file we should be able to determine if anything funny is happening
in the networking code.
I don't know if this sheds any light or not, but when this happened to me, I was
working only with documents using the "file" scheme.  No real networking
involved at all.  I can't reproduce the error, however, because I stupidly
replaced my profile instead of just making a new one.

--scole
Attached file HTTP.LOG as requested
Here is a http.log file as per instructions.

This should demonstrate both problems I've reported. One with Wordsmith.org and
one with www.teldrug.com
If this bug occurs with the file: scheme, then it has nothing to do with the
cache whatsoever.  The file: protocol doesn't use the cache.
some interesting things from the log file:

the toplevel document (http://www.teldrug.com/) contains:

<HEAD>

<SCRIPT LANGUAGE="JavaScript">
window.location = "http://www.cigna.com/consumer/services/pharmacy/tel_drug.html";
</SCRIPT>

and while the document's nsHttpChannel is calling the parser's OnDataAvailable,
the channel is canceled with NS_BINDING_ABORTED, and the parser is returning
NS_ERROR_HTMLPARSER_STOPPARSING from OnDataAvailable.

i'm not sure if this is related to the problem, but hopefully someone in the
parser group can investigate this further.  i doubt this is a networking bug.

-> parser
Assignee: gordon → harishd
Component: Networking: Cache → Parser
QA Contact: tever → moied
>while the document's nsHttpChannel is calling the parser's OnDataAvailable,
>the channel is canceled with NS_BINDING_ABORTED, and the parser is returning
>NS_ERROR_HTMLPARSER_STOPPARSING from OnDataAvailable.

What we need to find out here is why the channel is getting canceled. If a
document load is stopped abruptly then the parser has to stop parsing. Darin, do
you think that it's a bad idea to propagate error messages from the parser? 

FYI: I'm not able to reproduce the problem whatsoever (probably because I don't
have a bad profile). David (reporter), I need a good testcase to investigate the
bug further.


harish:

necko is designed to allow error codes to result from OnStartRequest and
OnDataAvailable -- these just result in canceling the channel.  error codes
resulting from OnStopRequest generally get ignored.  in this case, it appears
that the channel has been canceled already by the time STOPPARSING results from
OnDataAvailable.  the original cancel appears to be with a status of
NS_BINDING_ABORTED, which is unfortunately used in many different contexts. 
but, this is the error code that actually causes the cancelation.  the
STOPPARSING error is ignored.

so, you might check to see if there are any parser conditions that result in
both an explicit Cancel with NS_BINDING_ABORTED and OnDataAvailable returning
NS_ERROR_HTMLPARSER_STOPPARSING.
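To illustrate the convention described above, here is a hedged sketch of a stream listener (the class and member names are hypothetical; the real parser sink is more involved): a failure code returned from OnDataAvailable leads necko to cancel the channel, while a failure returned from OnStopRequest is generally ignored.

    // Hypothetical listener, for illustration only.
    NS_IMETHODIMP
    MyListener::OnDataAvailable(nsIRequest *aRequest, nsISupports *aContext,
                                nsIInputStream *aStream,
                                PRUint32 aOffset, PRUint32 aCount)
    {
        if (mMustStopParsing) {
            // Returning an error here causes the channel to be canceled
            // (unless it was already canceled with some other status,
            // e.g. NS_BINDING_ABORTED, in which case this code is ignored).
            return NS_ERROR_HTMLPARSER_STOPPARSING;
        }
        return NS_OK;
    }

    NS_IMETHODIMP
    MyListener::OnStopRequest(nsIRequest *aRequest, nsISupports *aContext,
                              nsresult aStatus)
    {
        // aStatus carries the cancelation status; an error returned from
        // OnStopRequest is generally ignored by necko.
        return NS_OK;
    }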
Unfortunately, it is not easy to find the condition without a reproducible
testcase.
It's reproducible on my computer.  I'll be happy to send any files or run any
tests you want.
Matt, this sounds very promising.
Could you please attach a .js file and a master HTML in that order to this bug
in such a way that the error reproduces for you on this server without reference
to local files or URLs on other servers?

I would suggest that as many people as possible report their test results,
together with CPU speed, estimated connection bandwidth, network latency and
possibly other relevant qualified information. Maybe there is a pattern which we
wouldn't find without such information gathering.

At the very least this would avoid losing the testing environment if the
original "offending" site is ever updated.
Personally, my guess is that the bug is cache related (hence the new-profile
fix). Assuming that's the case, we'd probably need info from about:cache... I'm
not sure who can explain what bits we'd need.
Harish,

You asked for more info from me; based on subsequent messages, do you still need
it? At this stage, I would point you to #29, which gives you a set of cache
files to duplicate the problem at www.teldrug.com.
I do not know how to create "a .js file and a master HTML".  However, I can
reproduce the same bug on the mozilla.org server by using the Bugzilla
QuickSearch feature: 

OS:  Windows NT 4 SP6a
Build: 2002012203

Steps to reproduce:
1.  Go to http://bugzilla.mozilla.org/
2.  Enter "testing" in the field labelled  "Enter a bug # or some search terms:"
3.  Click the "Show" button

Actual results:  The following error is displayed:

 Error
 
The bug number is invalid. If you are trying to use QuickSearch, you need to
enable JavaScript in your browser. To help us fix this limitation, look here.

Expected results:  A list of bugs containing the term "testing"

Additional information:
The JavaScript console reports:
    Error: QuickSearch is not defined

Additional information:

I have found that this problem does not occur on milestone build 0.9.7 with the
same profile.  Further, if I run build 0.9.7, and then run a recent 0.9.7+
build, such as 2002012203, the problem with Bugzilla QuickSearch disappears. 
The Bugzilla problem will not return even if I reboot.  However, if I uninstall
the 0.9.7+ build and install a new 0.9.7+ build in a new directory, the Bugzilla
problem will return again until I run build 0.9.7 again, at which point the
problem again disappears.

Even more oddly, even though the Bugzilla problem disappears after running
0.9.7, the Petco problem I reported in bug 119541 persists regardless of whether
I have run 0.9.7 or not.
Matt,

I was driving myself crazy trying to duplicate the problem on a different
machine which had 0.9.7, and I upgraded to 2002012304, and I couldn't get
www.teldrug.com to fail.

At this point, I have to agree that it isn't solely a cache problem (yes, I
know, others have already told me this). However, you just saved me lots of time
trying to figure out how to dup the problem.
Matt: I followed the steps described in #61, with build 2002012203, but did not
see any JS error. The resulting page displayed 29 bugs. Do I still need a bad
profile to reproduce the problem?
teldrug fails for me with build ID 2002011604, Win95, 200MHz CPU speed, 28800
bps modem, ca 1s network latency.
It fails also when I run the teldrug files from a local http server.
This bug needs a testcase. The HTML is so convoluted and messy that it could
well be that the issue is simply an application problem.
It would take me a whole day to make a testcase for this.
Although I have done this many times for Mozilla I don't have the time right now.
Just in case: A testcase for this issue should be minimal, just a few lines of
code without all the anchors and images and generated style sheet code etc.
Example of an unrelated testcase that fails in Mozilla:

<HTML>
<HEAD>
<SCRIPT>
function handler(t,u,n){
    alert("handler.caller = " + handler.caller);
}

window.onerror = handler;

noSuchFunction();

</SCRIPT>

</HEAD>
</HTML>

The error shows in the JavaScript console.
The testcase is already 1 level too complicated because it tests window.onerror
AND Function.caller. Only Function.caller is the problem here.
Yes, you need a bad profile.  My good profile works fine.

I just downloaded and tried 2002012304.  I can no longer reproduce the Bugzilla
QuickSearch problem using the bad profile with this build.  However, using the
bad profile, I can still reproduce the Petco problem, and I can reproduce a
problem with the Bugzilla Helper:

1.  Go to http://www.mozilla.org/quality/help/bugzilla-helper.html
2.  In the field labelled "Search Bugzilla" enter "testing"
3.  Click the "Search" button

Results:  I am returned to the top of the Bugzilla Helper page

JavaScript console
Error: SearchForBugs is not defined
Error: PrefillBugInfoForm is not defined
Attachment #66175 - Attachment mime type: application/octet-stream → application/x-zip-compressed
Does it come down to Edit | Preferences | Advanced | Scripts and Windows | Allow
JavaScript to Change Images? My test case attachment seems to say that it is.
Change Images doesn't fix www.teldrug.com for me.
Andrew (comment #68),
Your testcase may introduce a new issue that we may not want to cover here.
Proof: I can reproduce teldrug.com with that option turned on.
If you find that a fresh profile gets generated with this option turned off then
you may want to file a new bug for this.

Someone needs to reduce the teldrug.com files to a rock-solid testcase.

I think there is probably an infinite number of testcases that produce results
like Andrew's testcase. Maybe one of them even solves the problem ...??? :)

Has anybody seriously analysed the teldrug.com code?
Maybe after all Mozilla's behaviour is just a fair response to flaky JavaScript
coding and this issue cannot even be considered a bug.

The profile thing should IMHO be looked at only after it is established what the
offending issue is.
Attached file <SCRIPT SRC=this file>
This attachment is the external file (as retrieved from my cache directory)
called commonnav.js which I think is the one that is meant to give us the left
side menu.

This is in response to the question about whether the JS code is valid. My JS skills
are minimal, so someone else will need to take a look at this.
*** Bug 121804 has been marked as a duplicate of this bug. ***
I noticed that in attachment 63775 [details] there exists the line
user_pref("intl.charset.default", "");
Note the empty string value.

I ran into some problems before when I had this line in my prefs.js - for
instance, the url of links would not show up in the status bar when I moused
over them, and occasionally external JS files would not load.  Deleting that
line cleared them up.  I haven't tried the urls supplied here with that pref in,
so I could be wrong, but it may be related.

Could those who are seeing this bug check for this line in their prefs.js and
try deleting it please.
What Jason describes in Comment #73 sounds like the discussion following
http://bugzilla.mozilla.org/show_bug.cgi?id=120060#c8
Jason,

Your theory was correct.  My "bad" profile contained the following "intl." lines:

     user_pref("intl.charset.default", "");
     user_pref("intl.charset.detector", "");
     user_pref("intl.charsetmenu.browser.cache", "us-ascii, windows-1251, windows-1252, UTF-8, x-mac-roman");
     user_pref("intl.charsetmenu.browser.static", "ISO-8859-1, x-user-defined");

The only "intl." line my "good" profile contained was:

     user_pref("intl.charsetmenu.browser.cache", "ISO-8859-1");

Commenting out the "intl.charset.default" line in the bad file allowed me to see
the Petco popup I described in bug 119541.
You beauty...

I removed the line in my prefs.js as suggested in #73 (note, I left all my other
"intl" settings alone). It fixed my original problem with
http://www.teldrug.com, and with the problem mentioned in #13.

I can't tell if this is the same issue as mentioned in #74, I haven't had any
coffee yet, so I'll leave that to the more informed.

A big thank you to Jason for finding a way around this that didn't involve
creating a new profile.

I've noticed that this bug isn't marked as a blocker for 0.9.8, which I will now
accept, as long as the release notes mention the problem, and its fix as
detailed in #73.
Glad my hunch was correct!

So now that we have the cause narrowed down, can someone find where in the code
an empty string for intl.charset.default would be preventing javascript from
loading/running?  I did a quick search in lxr for that pref but with my limited
knowledge of C++ couldn't see what's going wrong.

Having the workaround now is a plus, but Mozilla should be able to handle the
empty string gracefully.  I don't know what the requirements are for release
notes; is this a candidate as David suggests?  Or is it already too late for the
0.9.8 release notes?
Jason, good catch.  Here's what happens:

1)  When the HTML document is loading it calls
    nsHTMLDocument::TryWeakDocTypeDefault as part of determining its default
    charset.
2)  This calls 
    GetLocalizedUnicharPref("intl.charset.default", getter_Copies(defCharset));
    which happily returns an empty string and a success value.
3)  This empty string is set as the document's character set, which is
    henceforward an empty string.
4)  The script loader uses the document character set if there is no charset
    specified in the <script> tag or the http headers.  It has no further
    fallbacks.  It tries to use this charset to decode the script and fails to
    get the decoder, so the script is skipped.  (see code at
http://lxr.mozilla.org/seamonkey/source/content/base/src/nsScriptLoader.cpp#687
    and on.)

So possible solutions:

1)  Make the script loader fall back to ISO-8859-1 if the document charset is
    empty (or even if we just fail to get a decoder for the charset we think we
    should be using?).
2)  Fix nsHTMLDocument::TryWeakDocTypeDefault to not set an empty character set
   (it could still set a bogus one, of course).

I'd recommend both, I think...
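For concreteness, a minimal sketch of what those two guards might look like (variable names and string handling are approximations added for illustration, not the eventual patch):

    // Hypothetical sketch only.

    // 1)  In the script loader: fall back to ISO-8859-1 if no charset was
    //     found on the <script> tag, the HTTP headers, or the document.
    if (charset.IsEmpty())
        charset.Assign(NS_LITERAL_STRING("ISO-8859-1"));

    // 2)  In nsHTMLDocument::TryWeakDocTypeDefault: only accept a
    //     non-empty pref value as the document's default character set.
    nsXPIDLString defCharset;
    rv = prefs->GetLocalizedUnicharPref("intl.charset.default",
                                        getter_Copies(defCharset));
    if (NS_SUCCEEDED(rv) && defCharset.get() && *defCharset.get())
        aCharset.Assign(defCharset.get());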
I have a third option. Assuming it's the prefs.js that is wrong, rather than
code interpreting it wrong, an enhancement to Mozilla would be a way to verify
prefs.js settings.

As this is outside the scope of this bug, as it will include any and all
prefs.js settings, I will file a separate bug report for this.

In any event, the two options in #78 should be done, as a prefs.js verifier
would be for 1.0.

Now, my next question. Can either of the suggestions in #78 be done in time for
0.9.8? I think the first one would be a quick hack that shouldn't cause too many
problems (but I'm not a C programmer so read that with a large grain of salt).
This is not going to get fixed for 0.9.8, but I added the relnote keyword, and
Asa is cc'd -- so it should get release-noted.

This bug should get fixed in 0.9.9, it seems easy enough to do both of the
winning robustitude things that Boris suggests.

Harish, should this be your bug still?  Or jst's?  Or both?

/be
Keywords: relnote
This fixes the site in the URL field (and yes, I made sure I set the bogus
pref)
Comment on attachment 67458 [details] [diff] [review]
patch with both fixes

sr=jst

I'm fine with taking this bug if bz or harishd doesn't want it on their plate.
Thanks everyone who helped track this down!
Attachment #67458 - Flags: superreview+
Comment on attachment 67458 [details] [diff] [review]
patch with both fixes

r=harishd
Attachment #67458 - Flags: review+
checked into trunk.
Status: NEW → RESOLVED
Closed: 23 years ago
Resolution: --- → FIXED
*** Bug 120955 has been marked as a duplicate of this bug. ***
Comment on attachment 67458 [details] [diff] [review]
patch with both fixes

bz, can you please check these two into the MOZILLA_0_9_8_BRANCH.  (I believe
these files are unmodified from their branch points on the branch.)

a=brendan@mozilla.org for drivers, and thanks.

/be
Attachment #67458 - Flags: approval+
checked into branch
*** Bug 123051 has been marked as a duplicate of this bug. ***
Perhaps related (or not) is bug 105636. There, with the following pref, the
reporter gets multiple POST DATA warnings when entering a form:

user_pref("intl.charset.detector", "ja_parallel_state_machine");

Any ideas?
Re #89

I don't think this is the same for two reasons.

1/ If you are using 0.9.8 (or later builds) you shouldn't see the problem.

2/ The problem in this bug was due to an empty string "", rather than yours
where the value has been set.

Of course, if you're not using 0.9.8 or later, I suggest you do so.
> 1/ If you are using 0.9.8 (or later builds) you shouldn't see the problem.

Sure you should.  We just fixed the empty string.  Other bogus values could
still cause this.  Is the value in question non-bogus?  Why's it set?

If other settings in prefs.js could cause the behaviour described in this bug,
then why is this bug marked FIXED? If only a temporary patch has been applied,
then the bug should still be open until a permanent solution is found.
Because the only way to tell that a value is "causing this bug" is to fail to
get a Unicode decoder in the script loader.  As I said, we _could_ fall back to
ISO-8859-1 but we should think about that carefully (in a bug filed specifically
on that issue).  Maybe we just can't decode it because it really _is_ served
with a charset we don't support?  Should we decode it as ISO-8859-1 then?  That
could lead to some wacky behavior.... (possibly worse than what exists).
OK, I understand now. (mental note to self - learn more about
internationalisation issues).

So, back to the comment/question in #89, is this a dup?
No.  The problem is likely the same and comments to that effect could help that
bug out.  But the real questions in that bug should be why is the pref set to
that value (it looks like a decoder name...) and why do we not have that decoder?
Spam: Related future problems could be prevented by work in bug 123027 (verifier
for prefs.js).