Old annoying-content prefs (set using Firefox<1.0) still halt script execution

Opened 11 years ago; last modified 8 years ago
(Reporter: Jesse Ruderman, Assigned: dveditz)


Mac OS X

Firefox Tracking Flags: (Not tracked)


Description

11 years ago
Old versions of Firefox (before 1.0) had annoying-content preferences hooked up to CAPS, e.g. setting capability.policy.default.Window.status to noAccess.  The resulting CAPS exceptions broke scripts, so in bug 117707, new preferences were introduced and the UI was redirected to use them.
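To illustrate the breakage (a minimal sketch, not Firefox code: the fake window object and the handler name are invented), this simulates how a setter that throws, as a CAPS-denied Window.status assignment did, aborts the rest of a handler unless the page defends itself with try/catch:

```javascript
// Simulate CAPS denying Window.status: the property setter throws.
const fakeWindow = {};
Object.defineProperty(fakeWindow, "status", {
  set() { throw new Error("Permission denied to set property Window.status"); },
});

function onLinkHover(win) {
  const steps = [];
  try {
    win.status = "Loading...";   // throws when the setter is blocked
    steps.push("status set");
  } catch (e) {
    steps.push("status blocked");
  }
  // Without the try/catch above, the exception would abort the handler
  // here and any remaining work would silently never run.
  steps.push("rest of handler ran");
  return steps;
}
```

Real-world pages of the era rarely wrapped window.status assignments like this, which is why the old prefs broke them outright.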

It seems like a significant number of users still have the old prefs, and it's leading to sites breaking.  For example, see bug 165692 comment 2.  More than a few bugs complaining about this issue have been duped to bug 122866, but based on mstoltz's comments there I think that bug is too vague/general.

bht@actrix.gen.nz uses onerror to gather statistics about exceptions seen on his sites, and errors from setting window.status seem to be among the more common ones.  (Some of these errors are mysterious, though.  See comments in bug 398893 for details.)
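A minimal sketch of what such onerror-based collection might look like (all names here are invented; this is not bht's actual system):

```javascript
// Build a handler suitable for window.onerror that tallies how often
// each (message, url, line) combination is seen.
function makeErrorCollector() {
  const counts = new Map();
  function onerror(message, url, line) {
    const key = message + " @ " + url + ":" + line;
    counts.set(key, (counts.get(key) || 0) + 1);
    return true;  // returning true suppresses the default error reporting
  }
  return { onerror, counts };
}

// In a page one would install it as:
//   const collector = makeErrorCollector();
//   window.onerror = collector.onerror;
```

Tallies collected this way are what would surface the window.status exceptions described above, though the line-0 reports mentioned later make attribution harder.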

See bug 117707 comment 93 for a list of CAPS prefs that were set by the old UI.  Note that some of them (methods and window.status-related) don't end in ".set", so fixing this isn't a simple matter of making CAPS-blocked assignment be a no-op.
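For illustration only, old-style CAPS prefs in prefs.js looked roughly like this (only the Window.status pref is taken from this report; the other names are hypothetical, and bug 117707 comment 93 has the authoritative list):

```js
// Property pref without a ".set" suffix -- blocks assignment outright:
user_pref("capability.policy.default.Window.status", "noAccess");
// Hypothetical method pref -- methods also carry no ".set" suffix:
user_pref("capability.policy.default.Window.open", "noAccess");
// Hypothetical setter-style pref, the kind that does end in ".set":
user_pref("capability.policy.default.Location.href.set", "noAccess");
```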
Comment 1

> new preferences were introduced

Isn't this a profile migration issue, then?  Not sure why this is filed in CAPS.

> this isn't a simple matter of making CAPS-blocked assignment be a no-op.

Doing that would be bad in any case.

Comment 2

11 years ago
Thanks, Jesse, for filing this. To some extent we can use our exception-reporting system as an early-warning system for Firefox.

We installed this system mainly because it had proven impossible to get information from the users themselves. As you write, some of the errors are mysterious.

From years of experience with this system, I can report that it can take months, if not years, to get to the root cause (with a testcase) of some mysterious errors. I can only propose eliminating the more obvious, easy-to-fix culprits as soon as possible, because they have the potential to cause other errors in unknown combinations.

By the way, one of our biggest nightmares is content-inserting and content-modifying proxies/firewalls. They insert scripts before and after the original document. There is little one can do against these; it is a kind of war to restore the event handlers that they disable. Some go as far as scanning the whole document for event handlers and javascript: links, and yes, they replace them with their own code.

Of course the line numbers of any errors in our code (if any) are offset by the number of lines of proxy code inserted before the original document.

Maybe some genius will find a way to eliminate these toxic scripts, or at least introduce an easy way to detect them.

I am not joking; it is a big issue. If the Accept-Encoding header is missing, or its "gzip" value is missing, that is already a warning sign, because at this stage it appears that none of these "active" firewalls supports compression.
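The Accept-Encoding heuristic above can be sketched as a small server-side check (the function name is invented; this is an illustration of the heuristic as described, not a reliable proxy detector):

```javascript
// Flag a request as possibly passing through a content-modifying
// proxy/firewall: per the observation above, such middleboxes tended to
// strip the Accept-Encoding header or drop gzip from it.
function looksLikeInterceptingProxy(headers) {
  const accept = headers["accept-encoding"];
  if (!accept) return true;            // header stripped entirely
  return !/\bgzip\b/i.test(accept);    // header present but no gzip support
}
```

A site could use such a flag to discount error reports from those requests, since the script that actually ran may not be the script it served.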

So there we have another reason to remove any aggressive settings that throw exceptions. Neither the users nor the site authors have full control of what code hits the browser eventually.

What I can tell so far is that in this context all of these errors are reported on line number 0, which is rare, and they seem to occur in the first statement of an event handler such as onmouseover, onclick, or onload, or in a setTimeout() callback.


Comment 3

11 years ago
Serving your page using https would take care of those MITM attacks ;)
Is this still a problem, or can this be closed?