Closed Bug 563439 Opened 14 years ago Closed 14 years ago

Failed image img GET is cached

Categories: Core :: Graphics: ImageLib (defect)
Platform: x86 Windows XP
Priority: Not set
Severity: normal

Tracking

Status: RESOLVED FIXED
blocking1.9.2: needed
status1.9.2: .9-fixed
blocking1.9.1: needed
status1.9.1: .12-fixed

People: Reporter: antithesis; Assigned: joe
Keywords: regression
Attachments: 1 file

User-Agent:       Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3
Build Identifier: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1.9) Gecko/20100315 Firefox/3.5.9

I described the problem here:

https://bugzilla.mozilla.org/show_bug.cgi?id=425558#c8

If an <img> fails to load (not found), the browser will not try again to load the url for the image, even when a new IMG element is constructed (via DOM methods or assigning to innerHTML).  Thus there is no way to attempt again to load the image after some time has elapsed.  In the intervening time, the url may no longer be returning a failure, but the browser refuses to go try the image url again.  This cache state is cleared when the page is reloaded (F5 is sufficient, Ctrl+F5 not needed).
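
For example, a minimal sketch of the failing retry pattern (the file names here are hypothetical):

// photo.jpg does not exist yet; the first load fails.
var img = document.createElement("img");
img.src = "photo.jpg";
document.body.appendChild(img);

// Later, after photo.jpg has been created on the server:
setTimeout(function () {
  var retry = document.createElement("img");  // a brand-new element...
  retry.src = "photo.jpg";    // ...but no new GET is issued; the failure
  document.body.appendChild(retry);           // appears to be remembered
}, 60000);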

I checked about:cache and neither the Memory nor Disk cache have a record for the failed url.  Yet the page continues to cache the failure result for the url, even if the page (via JavaScript) builds new IMG elements with the url.  Even once the image url becomes valid and would no longer return failure, the browser refuses to check.

I think it is erroneous to cache the failure (not found) state for a url.  Each time some element on the page requests the url, the browser must try the failed url again.  Is there an applicable RFC on this question?

Note that IE6, IE8, and latest Google Chrome did not exhibit this problem.

I understand that if you have multiple requests for the same failed url on the same page, it may be a performance enhancement not to pound the url multiple times.  But once the onload() event for the body has fired, you should reset this internal cache and force reloading of failed urls.  Because this is an error condition, you want to err on the side of scripting being able to recover from the error, versus performance.

Reproducible: Always
> But once the onload() event for the body has fired, you should reset
> this internal cache, and force reloading of failed urls.

Why?  What's magic about onload?

> you want to err on the side of scripting being able to recover from the
> error, versus performance

That's not obvious to me.  What matters is how common the case is of there being a performance hit in this situation.

I _would_ fully support having a way for the page to clear the cache for a given image, or for all images on the page, as a recovery method.

In any case, this isn't a network cache issue; there's an image cache above the network layer, as described in the HTML5 spec.
Status: UNCONFIRMED → NEW
Component: Networking: Cache → ImageLib
Ever confirmed: true
QA Contact: networking.cache → imagelib
Here are applicable HTTP specifications:

http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5
http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.11

Note that 410 is cache-able, and 404 apparently is not.  Thus the browser should not be caching the 404 result until the next page reload.

I assume my Apache server is returning 404 when an image file is not found.  How would I verify this from within FF3.5.9?
I have some more testing to do.  This problem might be in my server configuration.  When I try this:

http://kongtact.com/img/user/2/doesnotexit.JPG

My .htaccess file rewrites all 404s to the home page, so the browser is receiving text/html instead of an image content-type.  Might this be causing the browser to cache the result?
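
Roughly, I need to change the rewrite to something like this (a sketch only; my actual rules differ):

# Hypothetical .htaccess sketch: rewrite missing pages to the home page,
# but let missing images fall through to Apache's normal 404.
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_URI} !\.(jpe?g|gif|png)$ [NC]
RewriteRule ^ / [R=302,L]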

I will remove the redirection for image file types, re-test, and report back.
In the interim, I will respond to Boris on the hypothetical that the bug remains after I fix my server.

I am not familiar with the HTML5 image cache spec.  Does it specify what to do for HTTP 404?  If not, wouldn't the HTTP spec take precedence?

Given some ambiguity about how long a 404 result can be reused (cached), one has to weigh the greater cost.

What is the worst case of reloading every 404 for every instance on a page?  A hit on performance and bandwidth for the error case of a page having many 404 elements (not very likely for a correctly functioning page).

What is the cost of not reloading every 404 for every instance on the page?  The correctly functioning page now breaks.  That is the case we have now, with Kongtact.com's Add Photo function working in IE6, IE8, and Chrome, but failing in FF3.5.9 (it did not fail in 3.5.3; this is new in 3.5.9).

What Kongtact.com wants to do is attempt to show the uploaded photo to the user as soon as possible, with minimal handshaking with the server (in order to be robust).  So I set the onerror event for the img tag to show a busy image.  Then, when the server's response to the submission in the hidden iframe is complete, I regenerate the img elements, so they have a chance to reload if they were 404 before.  This gives the image a chance to display sooner (asynchronously versus forcing synchronous).  I suppose I could show the busy image until the confirmation of upload from the server is received, but this wouldn't solve the case where, for example, the image file gets deleted by another process (say another browser window open with the same user logged in) and then the current user uploads it again, but the browser is still caching the 404 result because the page hasn't been reloaded.  Thus it is silly for me to change the way I am doing it, because it is erroneous to cache the 404 while the script is running on the page.
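
In outline, the script does something like this (simplified; the real function and element names differ):

// Simplified sketch of the Add Photo flow; names are illustrative.
function showPhoto(url) {
  var img = document.createElement("img");
  img.onerror = function () { this.src = "busy.gif"; };  // not uploaded yet
  img.src = url;
  document.getElementById("photos").appendChild(img);
}

// The form posts to a hidden iframe; when the server's response arrives,
// rebuild the <img> elements so any earlier 404 gets a fresh attempt.
function onUploadConfirmed(url) {
  document.getElementById("photos").innerHTML = "";  // drop the busy images
  showPhoto(url);  // should issue a new GET, but FF3.5.9 serves the cached 404
}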

And that is why I say cache the 404 only until onload: after that, scripts start running on the page, and you really cannot make any assumptions about the validity of the 404 result.  At least before onload, you can make the argument that consistency for all elements referring to the same url is important, because onload is sort of a demarcation between the static initial page and the start of the dynamic page, correct?
> Thus the browser should not be caching the 404 result until the next page
> reload.

The image cache doesn't quite implement HTTP caching semantics; it's an object cache, not an HTTP data cache.  It's closer to the semantics described at http://www.whatwg.org/specs/web-apps/current-work/multipage/urls.html#fetching-resources step 3 paragraph 1, but images are treated as "already being downloaded" in a slightly wider range of cases, I think.
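
Very roughly, it behaves like a per-URL object cache.  Here's a toy model (illustrative only, not the actual imagelib code; fetchOverNetwork is a made-up helper):

// Toy model; NOT the actual imagelib implementation.
var imageCache = {};  // url -> image object, including failed loads

function loadImage(url, callback) {
  if (imageCache[url]) {          // cache hit: reuse the object, even if it
    callback(imageCache[url]);    // records a load failure -- no new GET
    return;
  }
  fetchOverNetwork(url, function (result) {  // hypothetical network helper
    imageCache[url] = result;     // failures are stored alongside successes
    callback(result);
  });
}
// Reloading the page corresponds to discarding imageCache entirely.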

> How would I verify this from within FF3.5.9?

Either use the LiveHTTPHeaders extension, or follow the instructions at https://developer.mozilla.org/en/HTTP_Logging

The url in comment 4 seems to return 404 for me now.
> A hit on performance and bandwidth for the error case of a page having many 404
> elements (not very likely for a correctly functioning page).

There are a _lot_ of pages out there on which this would be a huge hit.

> because onload is sort of a demarcation between the static initial page and
> the start of the dynamic page, correct?

Not really, no.  There's tons of dynamic stuff that happens before onload, and lots of (broken) static content dumped into pages after onload...
http://www.whatwg.org/specs/web-apps/current-work/multipage/urls.html#fetching-resources

I see Ian Hixie's last call for comments there (2009-10-23).  Btw, he and I have a not-so-smooth history (the XUL and style-conflation debates).

I can't grok some of it quickly ("origin origin"?).

Reading step 3 very carefully, two keywords are "idempotent" action and "download".  HTTP is clear that 404 is NOT idempotent and it is not a download:

http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4

"the server SHOULD include...whether it is a temporary or permanent condition"

http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5

"No indication is given of whether the condition is temporary or permanent. The 410 (Gone) status code SHOULD be used if the server knows...that an old resource is permanently unavailable..."

Thus it is quite clear that you are not allowed to cache a 404 result, unless the user agent has some extra information about the world context (multiple synchronous processes) that allows it to know the condition is permanent for the duration within which it caches it.

(Correction: comment 5 erroneously has synchronous and asynchronous juxtaposed.)

Thus I am not required (although I will after this paragraph) to debate the priorities of which way it should be: the HTTP specification is 100% clear, and IE6, IE8 (probably IE7 and IE5.5 too) and Chrome (i.e. Safari/Webkit) are all apparently not caching 404 error condition.

I hope Mozilla doesn't leave me no other choice than to pop up an alert for each site visitor, explaining why Firefox is broken for Add Photos (with a link to this bug) and suggesting a download of Chrome or a click of the IE icon.  (3.5.3 and older don't cache the images and thus are excruciatingly slow, https://bugzilla.mozilla.org/show_bug.cgi?id=425558#c5, so it is already tempting to advise visitors to upgrade, except now this bug.)  Note I am targeting this new site at Asia (where I am), and mostly at people using internet cafes, where it is normal to have both IE and FF installed on a Windows box, so it won't be too much trouble for people to click the IE icon.  I really don't want to do this (I prefer to be browser agnostic rather than pick fights) and hope I can convince you.  Or I hope you can convince me why I am wrong, so that I will go jump through extra hoops to assume that browsers can cache 404.

Seems to me that Mozilla has been a huge proponent of adhering to standards, and HTTP is a pretty important and well-established standard.  How can you possibly read HTTP to say that 404 can be cached?  I am not trying to create animosity, but HTTP really does say very clearly that no assumption is to be made about whether 404 is temporary or permanent.  How much clearer can that be, especially given that the opening section for 4xx says 404 is an exception to the rule that the server SHOULD specify whether a 4xx error is temporary or permanent?

Let me explain, from a design philosophy of correct composability, why I should not change my code to work around this bug.  The basic enemy of future composability of code is programming code snippets that have side (global) effects.  This makes it impossible to re-use a code snippet without considering how it impacts the side-effects of other code snippets.  Right now, the assumption about whether the image is 404 is placed in the <img> tag using an onerror script to replace the img.src with a "busy" image.  This is thus a local snippet with minimal side-effects.  It is robust (interoperable and composable), because if the img is loaded and src exists, no outside code has to be aware of this.  Whereas, to work around this bug, I would have to introduce code at a higher level, which knows all the global conditions under which img.src would 404, and places a "busy" image instead.  This hoists a huge tangle of spaghetti into the code (one that will grow and spread tentacles over time), all because Mozilla does not respect that 404 is not cache-able.
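
Concretely (illustrative code, not my actual implementation; page404Registry is a hypothetical piece of page-wide bookkeeping):

// Local and composable: the error handling travels with the element.
html += '<img src="' + url + '" onerror=\'this.src="busy.gif"\'>';

// Global workaround this bug forces: a higher layer must track every url
// that may have 404ed and decide up front what each <img> shows.
if (page404Registry[url]) {
  html += '<img src="busy.gif">';
} else {
  html += '<img src="' + url + '">';
}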

>> A hit on performance and bandwidth for the error case of a page having many 404
>> elements (not very likely for a correctly functioning page).
>
>There are a _lot_ of pages out there on which this would be a huge hit.

They were getting hit in 3.5.3 and they are getting hit in IE6, IE8, and Chrome (and probably Safari, IE7, and Opera).  Chrome has a reputation of being a very fast browser.

Are you missing in your logic that this will only be a performance and bandwidth hit when there are multiple copies of the same 404 uri (url) on the same page?  That is, multiple duplicates of the same "resource not found" uri, an extreme error condition; not multiple 404s for different uris.  Why are we optimizing speed and bandwidth for pages that have tons of duplicate resource errors, when HTTP clearly says that if the server has knowledge about the duration of the error, it should return 410 or another 4xx?

Shouldn't we instead be prioritizing correct adherence to the HTTP standard (so that we make no assumptions about the unknown world context for 404 duration), at the cost of not optimizing extreme error cases?  The extreme error cases will still load correctly; they just won't be speed and bandwidth optimized, and as a positive benefit, the HTTP standard won't be violated and the orthogonality (composability) of scripts won't be impacted.

The reason that 404 exists in the HTTP standard is, as stated, that there are cases where the server does not know the world context.  The whole point is to not force side-effect (global) assumptions about duration.  In the case where the server does know the global context, it SHOULD return a different 4xx error.

P.S. HTML5 caching has an error then:

http://www.whatwg.org/specs/web-apps/current-work/multipage/urls.html#concept-http-equivalent-get

"The HTTP GET method is equivalent to the default retrieval action of the protocol. For example, RETR in FTP. Such actions are idempotent and safe, in HTTP terms."

That needs to be qualified.  Not all GET responses are idempotent; 404 is an exception, as clearly stated by the standard.  In fact, all GET responses have to be qualified by the cache-control and Expires headers, so the statement above is grossly incorrect.  Sorry I did not comment on the HTML5 proposal sooner, but my input to the standards process has not been so warmly received in the past (although Hixie did include my input for columns in CSS2) and I simply am not keeping up with all the various proposals that die in the marketplace.

I think that with the WWW becoming mature, you need to be super careful with such far-reaching things as a change in the way caching works.  Why wasn't this brought in front of more people first?  Or is this still open to input from the community?

Again, I am not wanting to create animosity.  It will be interesting to read your logical response.  I am being overly verbose to be sure I am not misunderstood and do not create animosity.

>> because onload is sort of a demarcation between the static initial page and
>> the start of the dynamic page, correct?
>
> Not really, no.  There's tons of dynamic stuff that happens before onload,
> and lots of (broken) static content dumped into pages after onload...

Yeah, I think you are correct; you are not going to be able to cache 404 ever.  I was trying to find a way that you could cache static webpages that have a zillion copies of some image (say a bullet icon) that is 404, without breaking the 404 standard.

Maybe you can find a way to detect when scripting of any kind starts, and cache up to that point.  It still breaks the standard, because you don't know whether the 404 condition has ended during the page load, but it is much less likely to cause a page to break.

And notwithstanding the HTTP standard, breaking popular web sites is much worse, than not optimizing the speed of pages that have a plethora of 404 errors.
> and Chrome (i.e. Safari/Webkit) are all apparently not caching 404 error
> condition.

As a note, Chrome and Safari don't share any cache code that I know of.

> Chrome has a reputation of being a very fast browser.

Yes, it does.  And sometimes it's very fast.  And sometimes it's very slow.  That's just life in browser-land.

> that have a zillion copies of some image (say a bullet icon) that is 404,

Yep, that's the most common failure case I've seen.

I assume you're still seeing the problem now that your server is returning 404s?  If so, can you please point me to a page showing the problem?  While the image cache doesn't quite implement HTTP caching semantics, it _does_ try to more or less implement parts of them; it may be possible to adjust it to handle your case better.
I was typing the following while you were posting, so let me go ahead and add it to the brainstorm we are having... (note I haven't yet fixed my server; I was sleeping, so I will go do that now and reply to your latest comment afterward)

You might argue (probably the only reasonable retort someone can make) that since the duration of 404 is unknown, my script should make no assumptions about the duration for which the user agent (browser) will cache the 404 response.  That would be a strong point, but...

However, my script is not making any such assumption.  It is requesting that the browser load a resource, and it is expecting the browser to correctly load it if it exists.  That is not an assumption about the length of caching of the 404 response (caching is supposed to be transparent to the endpoints); it is an assumption that a browser will load a resource when it exists, which is fundamental to HTTP and the correct operation of the WWW.

If user agents change from past behavior and start caching 404, introducing variable adherence to the expectation that existent resources will be loaded (notwithstanding other network errors), then there is no way for a script to know if it is ever going to work correctly: even if it polls the server before it requests a resource from the browser, the script has no way to know if the resource will still be there by the time the browser sends its request.

Isn't one of the most fundamentally important features of HTTP that it is synchronous unless headers say otherwise?  Isn't GET only idempotent when it succeeds and when there is a cache or expires header?
Apologies for the delay, as I got sidetracked.

My server now correctly returns the default Apache 404 response for images.

The problem in this bug report persists.

Until this problem is fixed, I will probably decide to pop up an alert for all Firefox visitors, advising them to use IE6+ or Google Chrome, or otherwise suffer very slow page updates (< 3.7.3 not caching images, https://bugzilla.mozilla.org/show_bug.cgi?id=425558#c7 ) or broken image uploads (this bug report).

Once this problem is fixed, I may decide to remove the popup or change the message to advise upgrading to the latest version of Firefox.

Thank you for trying to help resolve this issue.  If I can be of further assistance, please email me.
Can you link to the actual page showing the problem please?
Please log in as follows:

http://kongtact.com

Email: bzbarsky@mit.edu
Password: firefox

do not check "remember me" (I have a bug with that at the moment)

You will be taken to the "Photos" page.  Click "Add Photo" and select a JPG image to upload.  Notice the image is never displayed, only the busy.gif is displayed instead.

Log out and log in again.  Notice the image is still not displayed; Firefox 3.5.9 may still be caching the 404 error condition, which fires the <img ... onerror='this.src = "busy.gif"'>.  Note this does not happen in 3.5.3, nor in IE6, IE8, nor Chrome.

Reload the page, and log in again.  Notice the image is NOW displayed correctly.

Thus I conclude that FF3.5.9 is caching the 404 state.  This is because when you first submit "Add Photo", the <img ... onerror='this.src = "busy.gif"'> fires, because the image is not yet uploaded (in the few milliseconds between the time the upload starts and the page is refreshed by JavaScript).  Then subsequent instances of the same <img...> created in JavaScript do not attempt to load the image from the server (even though the image has by then been uploaded).  Only a reload of the page appears to clear the cached 404 state for the image url.

If the above is unclear, please ask me to attempt to clarify.
Try not to be turned off by the site name, funky graphics, and incomplete implementation.  We have a top-tier name, coolpage.com; we haven't decided yet.  Working on technical issues first.
> Notice the image is never displayed, only the busy.gif is displayed instead.

I just tried this in the following browsers:

1)  Firefox 3.5.9
2)  Firefox 3.6.3
3)  Firefox trunk

In 3.5.9 I don't see the busy.gif.  In 3.6.3 I see the behavior you describe.  On trunk, the image I chose shows up.  Let me check when the behavior changed here.
Trunk fix range: http://hg.mozilla.org/mozilla-central/pushloghtml?fromchange=ff3496b1f6c7&tochange=bf0fdec8f43b

Looks like bug 435296 added a RemoveFromCache call in imgRequest::Cancel, which effectively fixes this bug.
Status: NEW → RESOLVED
Closed: 14 years ago
Resolution: --- → FIXED
(Shelby, note that the "FIXED" only means it's fixed on trunk.)

So, if I'm reading this right, we need a fix for 3.6 only?
Probably a good idea, yes.  Can we just backport that one RemoveFromCache call?
blocking1.9.2: --- → ?
Thanks guys.

Excuse my slight confusion and unfamiliarity with the release process, but do we know now what official release version (or greater than which version number) this will appear in?

Respecting Murphy's Law but not being familiar enough to look at source code, I am assuming the imgRequest::Cancel is invoked when a DOM image node is garbage collected?  Would the timing of garbage collection not make the behavior (timing) non-deterministic?  If not the case, can anyone grok offhand what is the condition that is invoking the imgRequest::Cancel?

> do we know now what official release version (or greater than which version
> number) this will appear in?

Definitely Gecko 1.9.3 (Firefox 4).  Possibly 1.9.2.x (Firefox 3.6.x) for some x, depending on when and whether we backport.

> I am assuming the imgRequest::Cancel is invoked when a DOM image node is garbage
> collected?

That's one thing that can call it, yes.  It happens to also be called from imgRequest::OnDataAvailable if we discover that the data isn't something we can handle as an image.
This is the relevant hunk, and it makes sense that it would fix this bug.

@@ -332,19 +326,17 @@ void imgRequest::Cancel(nsresult aStatus
     LOG_MSG(gImgLog, "imgRequest::Cancel", "stopping animation");
 
     mImage->StopAnimation();
   }
 
   if (!(mImageStatus & imgIRequest::STATUS_LOAD_PARTIAL))
     mImageStatus |= imgIRequest::STATUS_ERROR;
 
-  if (aStatus != NS_IMAGELIB_ERROR_NO_DECODER) {
-    RemoveFromCache();
-  }
+  RemoveFromCache();
 
   if (mRequest && mLoading)
     mRequest->Cancel(aStatus);
 }
 
 void imgRequest::CancelAndAbort(nsresult aStatus)
 {
   LOG_SCOPE(gImgLog, "imgRequest::CancelAndAbort");
> Definitely Gecko 1.9.3 (Firefox 4).

I've read a target release date of Q4 2010.

>  Possibly 1.9.2.x (Firefox 3.6.x) for some
> x, depending on when and whether we backport.

That would help me, but I have not yet thought of a widespread bug that makes backporting this fix critical.  Intuitively I expect a compelling case is lurking.

> It happens to also be called from
> imgRequest::OnDataAvailable if we discover that the data isn't something we can
> handle as an image.

So then HTTP 404 results for images are never cached.  To reiterate, I think that is correct; we have HTTP 410 for caching the "don't ask me again" case.  I hope someone doesn't go to improve that later and forget this issue.

Thanks again.  I end my comments here.
> So then HTTP 404 results for images are never cached.

Not exactly.  If your 404 has an image type, it'll get cached.
> > So then HTTP 404 results for images are never cached.
> 
> Not exactly.  If your 404 has an image type, it'll get cached.

Ah, interesting; I am tempted to reply then.

So those who want to configure their server to minimize bandwidth hits for missing images should return 410 (to encourage all agents to cache it) with an image content type, and override that server-side for cases such as my usage.
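
For illustration only (a sketch in Node.js, not my actual server; isPermanentlyGone and placeholderGifBytes are hypothetical helpers):

// Sketch: permanently-gone images get a cacheable 410 with an image
// content type; conditions of unknown duration get a plain 404.
var http = require('http');
http.createServer(function (req, res) {
  if (isPermanentlyGone(req.url)) {
    res.writeHead(410, { 'Content-Type': 'image/gif',
                         'Cache-Control': 'max-age=86400' });
    res.end(placeholderGifBytes);  // e.g. a 1x1 transparent gif buffer
  } else {
    // Unknown duration (my upload race): 404 as text/html, which agents
    // should not cache.
    res.writeHead(404, { 'Content-Type': 'text/html' });
    res.end('<h1>Not Found</h1>');
  }
}).listen(8080);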

By default, Apache returns text/html, which is why the fix works for my usage:

HTTP/1.0 404 Not Found
Date: Fri, 14 May 2010 04:01:52 GMT
Server: Apache/2.2.15
Content-Length: 209
Content-Type: text/html; charset=iso-8859-1
Connection: close

<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN"> 
<html><head> 
<title>404 Not Found</title> 
</head><body> 
<h1>Not Found</h1> 
<p>The requested URL /img/any.jpg was not found on this server.</p> 
</body></html>
I'm pretty sure this doesn't "block" the next release but does want a fix. Joe, please create a branch patch along the lines of comment 23 for us.

With the different behavior described in 3.5.9 do we need this patch there too? A different one?
Assignee: nobody → joe
blocking1.9.1: --- → ?
blocking1.9.2: ? → needed
status1.9.1: --- → ?
Depends on: 435296
Keywords: regression
blocking1.9.1: ? → needed
Any chance of getting this for the branches?
Would it help if I incentivized someone with a $100 PayPal transfer to get this fixed in a 3.6.x official release within 2 months or less?  Or does it cost more time than that offer is worth?  Or is it not a priority at any price?

We are preparing to go live with the (targeting high-traffic) dating site (renamed asiadear.com) this month, and I would much prefer not to show the "run IE instead" alert to those running a fixed version of Firefox, changing the message instead to "run IE or upgrade your Firefox".
I have a patch ready to go for this. What release would you like it in, Christian?
Oh right, I have to upload a patch and get it reviewed for stable branch release drivers to decide what releases it should go in. :)
Attachment #465270 - Flags: review?(bzbarsky)
Attachment #465270 - Flags: approval1.9.2.9?
Attachment #465270 - Flags: approval1.9.2.10?
Attachment #465270 - Flags: approval1.9.1.13?
Attachment #465270 - Flags: approval1.9.1.12?
Attachment #465270 - Flags: review?(bzbarsky) → review+
Thank you so much.

If someone can nudge me here once it has been accepted and is surely in the next stable branch release, then I will be happy to send the PayPal token of my appreciation (I assume to the email address of record here?).  Again, thank you.
Comment on attachment 465270 [details] [diff] [review]
Always remove from cache when we're cancelled

a=LegNeato for 1.9.2.9 and 1.9.1.12.

Please note that for these releases code-freeze is TONIGHT @ 11:59 pm PST. This needs to be landed as soon as possible.
Attachment #465270 - Flags: approval1.9.2.9?
Attachment #465270 - Flags: approval1.9.2.9+
Attachment #465270 - Flags: approval1.9.2.10?
Attachment #465270 - Flags: approval1.9.1.13?
Attachment #465270 - Flags: approval1.9.1.12?
Attachment #465270 - Flags: approval1.9.1.12+
If I understand correctly, this fix will first appear in 3.5.12 and 3.6.9.

Thank you.  Joe, please check your email for the PayPal transfer as promised.

Thanks to everyone who contributed, especially Boris.
I now understand that 3.5.12 and 3.6.9 will correspond to 1.9.1.12 and 1.9.2.9 respectively.
So the most comprehensive criterion is that a Gecko build date greater than 20100811 will contain this fix.
Comment 38 is wrong.  The build date is just the build date.  Any Gecko built after today, no matter what source it's built from, will have a build date after 20100811.
Ditto Boris.  Using the Gecko build date may appear to work a lot of the time for your users on Mozilla-produced builds, since we're not currently shipping anything on older branches.  But some Linux vendors, for example, are still shipping 3.0.x security updates and will almost certainly skip this non-security bug but produce a build after that date.

Check the "rv:" string instead. Also note that Firefox 4 is changing the format of the user-agent (details not final) but you will have to update whatever sniffing you do when it does. Or just put up a notice "If this looks wrong upgrade your Firefox" and don't bother sniffing.

re  = /\Wrv:(\d+)(?:\.(\d+))?(?:\.(\d+))?(?:\.(\d+))?/
ver = re.exec(navigator.userAgent);
iscompat = ver[1] > 1 || ver[2] > 9 || ver[3] > 2 ||
           (ver[3] == 1 && ver[4] >= 12) ||
           (ver[3] == 2 && ver[4] >= 9);

Be careful of simple string compares, "10" < "9"
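
For example:

"10" < "9"               // true:  compared character by character as strings
"10" < 9                 // false: coerced to a number when one side is a number
parseInt("10", 10) < 9   // false: explicit numeric compare is safest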

I guess some earlier versions were working for you; you could add those similarly.  The above is just an example.

You may have to modify the regexp for non-Gecko browsers that don't support the full spec, or only run the code when you already know you've got Gecko. In particular I don't know the compatibility story for the non-capturing parens (?:pattern). You could leave out the '?:' and then deal with the additional matches in the result array.
Thank you for the correction and the suggested code.

I made the assumption that the build date was determinate (otherwise why publish it).

To be comprehensive, I should test the Gecko version, not that of some particular application (e.g. Firefox) employing Gecko.

Is the rv the ascending Gecko source code version, and if so, why is it not after the 'Gecko/'?  Or is there no ascending Gecko source code version in the user agent string?  What practical use is a build date that does not correlate to a source code version?  Or is a record of build dates and source code correlations maintained in some public database?  Seems to me that a publicized build date with no public correlation to source code is mathematically meaningless/indeterminate (or am I missing something?).  Time is meaningless without a point of reference, and the build date is arbitrary or random with respect to all points of reference.

Assuming rv is the Gecko version number, then an improved sniff:

ver = navigator.userAgent.match( /\Wrv:(\d+)(?:\.(\d+)(?:\.(\d+)(?:\.(\d+))?)?)?/ )
iscompat = navigator.userAgent.search( /\WGecko\// ) >= 0 && ver != null &&
            (ver[1] > 1 || ver[2] > 9 || ver[3] > 2 ||
            (ver[3] == 1 && ver[4] >= 12) ||
            (ver[3] == 2 && ver[4] >= 9))

(no need for line-ending semicolons)
Correction:

ver = navigator.userAgent.match(
/\Wrv:(\d+)(?:\.(\d+)(?:\.(\d+)(?:\.(\d+))?)?)?/ )
iscompat = navigator.userAgent.search( /\WGecko\// ) < 0 ||
            (ver != null &&
             (ver[1] > 1 || ver[2] > 9 || ver[3] > 2 ||
             (ver[3] == 1 && ver[4] >= 12) ||
             (ver[3] == 2 && ver[4] >= 9)))
Oh, and I forgot to say that it is unfortunate if the user agent string version test will be broken in version 4 of Firefox.  Will I (and others) remember to update our tests before 4 is released?  What an enormous global economic hit (misallocation of human capital) otherwise.  If you do that, I hope you successfully get the word out.  Maybe spamming every person who ever filed a bug report may be the most targeted means.

Assuming my prior comment about determinacy was not incorrect, I suppose you need to put the Gecko source code version after the 'Gecko/', so you won't have to break it ever again.  You could append the build date to the version number if it has some other form of useful determinacy.
Incorrect:

ver[1] > 1 || ver[2] > 9 || ver[3] > 2

Corrected:

ver = navigator.userAgent.match( /\Wrv:(\d+)(?:\.(\d+)(?:\.(\d+)(?:\.(\d+))?)?)?/ )
iscompat = navigator.userAgent.search( /\WGecko\// ) < 0 ||
            (ver != null &&
             (ver[1] > 1 ||
              (ver[1] == 1 &&
               (ver[2] > 9 ||
                (ver[2] == 9 &&
                 (ver[3] > 2 ||
                  (ver[3] == 1 && ver[4] >= 12) ||
                  (ver[3] == 2 && ver[4] >= 9)
                 )
                )
               )
              )
             )
            )
Apologies for the numerous comments in rapid succession.

If one argued that the build date differentiates between builds of the same version number, I would retort that if there is no correlation to a publicly recorded source code version, there is no public determinacy that a build with one build date is not built from the same source code as a build with any other build date and the same version number.  Rather, an MD5 of the binary executable would be determinate.