Closed Bug 103082 Opened 23 years ago Closed 23 years ago

Re-enable links toolbar but make it hidden by default and have no perf hit

Categories

(SeaMonkey :: UI Design, defect)

Type: defect
Priority: Not set
Severity: normal

Tracking

(Not tracked)

Status: VERIFIED FIXED
Target Milestone: mozilla0.9.5

People

(Reporter: hyatt, Assigned: sballard)

Details

(Keywords: perf)

Attachments

(3 files)

The links toolbar is causing a 2-3% impact on startup/new window and a 5% slowdown on page load times. The page load hit in particular is unacceptable. This bug is tracking the disabling and removal of the toolbar from navigator.xul until such time as the feature can be demonstrated not to slow page load times.
Comment on attachment 52027 [details] [diff] [review] Remove the overlay and get rid of the toolbar. r=doron
Attachment #52027 - Flags: review+
Keywords: perf
OS: Windows 2000 → All
Hardware: PC → All
sr=hewitt
Comment on attachment 52027 [details] [diff] [review] Remove the overlay and get rid of the toolbar. a=asa (on behalf of drivers) for checkin to 0.9.5
Attachment #52027 - Flags: approval+
Status: NEW → ASSIGNED
Target Milestone: --- → mozilla0.9.5
Fixed.
Status: ASSIGNED → RESOLVED
Closed: 23 years ago
Resolution: --- → FIXED
Is it perhaps just the auto-show feature which is causing the additional load times? I can imagine this to be the case where the test suite contains a mixture of link/non-link bearing sites so the links toolbar keeps popping in and out, causing additional reflows.
It should be possible to architect this so that there's zero page-load-time overhead if the toolbar is "hidden always" (hidden="true"). If we can either verify that this is true or modify the patch until it is, would we have a chance to get this re-enabled (but hidden by default) in 0.9.5? My particular thought as to how to do this would be to condition the adding of the onload/onunload handlers on isLinkToolbarEnabled(), and then add or remove them as appropriate in toggleLinkToolbar(). That should eliminate the 5% completely when the toolbar's disabled at the expense of a fraction of a percent on startup/newwindow. Would that be acceptable?
(If it would, I'm prepared to produce the trivial patch to do so)
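For the curious, a minimal sketch of what that could look like - isLinkToolbarEnabled() and toggleLinkToolbar() are named in the comment above, the "appcontent" listeners come from the patch discussed below, and the install/remove helper names are purely illustrative, not the actual patch:

  // Hypothetical helpers: hook the content area only while the toolbar is
  // enabled, so a disabled toolbar adds no work at page load time.
  function installLinkToolbarHandlers()
  {
    var contentArea = document.getElementById("appcontent");
    contentArea.addEventListener("load", linkToolbarUI.refresh, true);
    contentArea.addEventListener("unload", linkToolbarUI.clear, true);
  }

  function removeLinkToolbarHandlers()
  {
    var contentArea = document.getElementById("appcontent");
    contentArea.removeEventListener("load", linkToolbarUI.refresh, true);
    contentArea.removeEventListener("unload", linkToolbarUI.clear, true);
  }

  // At window startup, register only if the toolbar is enabled...
  if (isLinkToolbarEnabled())
    installLinkToolbarHandlers();

  // ...and in toggleLinkToolbar(), add or remove the listeners whenever
  // the enabled state changes.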
Does it seem reasonable to say that an additional feature (one required for full HTML support) must have zero marginal cost? I reckon you could reduce page load times by throwing out images as well. It seems to me that you are really saying this can't go in unless it consumes no CPU cycles, in which case no new feature will ever be added to the browser, even one that has spent 33 months in development and undergone unprecedented amounts of peer review.
Jeffrey: In general I agree with you - the link toolbar was held up for an unprecedented amount of time with petty bureaucracy and has had an unbelievable amount of review. But a 5% impact on page load times is a big deal - if you look at jrgm's graphs in n.p.m.performance, the uptick is bigger than anything that's gone in in months. It's entirely possible to eliminate this cost for documents that don't have any links, and hyatt has given some good pointers on how to do it in bug 102992 and probably bug 103097 in the future. Also, the patch hasn't been backed out, just temporarily deactivated until the performance issues can be solved. I'm disappointed myself, but this just makes me more determined to do what's necessary to get this reactivated - and better performance is always good.
The attached patch should (theoretically - I don't have the capability of doing performance tests) ensure that there is zero overhead on page-load if the toolbar is disabled, because the onload and onunload handlers aren't even installed at that point. The patch also re-adds the overlay (effectively backing out the prior patch in this bug).

What the patch does not do, because I'm not sure how but I'm sure someone does, is default the toolbar to hidden status. If this can be done and someone can do tests to confirm that this does not add any performance overhead over not having the toolbar there at all, I'd like to see this patch in 0.9.5 if at all possible. Comments?

(Note that this does not address the 2-3% startup and new-window performance penalty and in fact may increase it by a tiny tiny amount due to the additional number of lines of code and an extra function call or two. But hyatt and brendan agreed over in bug 102992 that that aspect of the performance penalty was not enough to warrant backing out the toolbar)
I very much disagree with this disablement, which was implemented over a period of an hour and a half at midnight PST. It would have been polite at least to wait until this morning so that I knew it was going on. No-one's shipping a product tomorrow, are they?

BTW, the problem cannot be auto-hide, because JRGM's page loader tests from yesterday did not include auto-hide. The problem could also be a number of other things that landed on the same day, though - hyatt himself made the point to me that so much landed that jrgm's standard tests would not be conclusive. He _also_ said that if I committed to XBLifying the toolbar, I didn't need to go and make performance measurements (implying, at least to me, that if there was a small short-term impact, it wouldn't matter.)

jwbaker makes a good point - improving our support for HTML is never going to come at zero cost. However, hyatt is correct in saying that the cost could be reduced, by XBLifying the toolbar (Bug 102992).

Even if this should be turned off on the trunk, leaving it enabled for a few more days would have helped us shake the obvious bugs out of it. Now, no-one else can test it.

Lastly, I want it turned back on on the 0.9.5 branch. Getting feedback on this new UI feature is extremely important, because of its visibility, and I think our milestone-downloaders can probably live with a 2-3% increase in page load time. Milestones are for testing and talkback, right?

Stuart came up with a patch to fix this problem in record time. If people have concerns about page load, how quickly would you like someone to fix them? I'm not sure it could have been done any quicker.

Gerv
sballard: your patch looks good. There is one place where you could save a few cycles:

   <script type="application/x-javascript">
   <![CDATA[
  -    var contentArea = document.getElementById("appcontent");
  -    contentArea.addEventListener("load", linkToolbarUI.refresh, true);
  -    contentArea.addEventListener("unload", linkToolbarUI.clear, true);
  +    linkToolbarUI.initHandlers();
   ]]>
   </script>

  [...]

  +LinkToolbarUI.prototype.initHandlers =
  +function()
  +{
  +  var contentArea = document.getElementById("appcontent");

You could save a call to getElementById() by passing contentArea as the argument to initHandlers(). With that change, r=jwbaker@acm.org.

I think that you could also save a couple of calls to getElementById("linktoolbar") by saving a reference to that DOM node in the LinkToolbarUI object. Then you could just reference this.toolBar. It's a small improvement, but I presume that traversing the tree with getElementById is relatively expensive.
Ignore me, I did not notice that you removed the var contentArea from the first block of code. r=jwbaker@acm.org
jwbaker: Thanks for the review :) Btw, getElementById should be damn fast in any sensible implementation because it would go straight to it in a hashtable. I'm sure that moz's implementation is sensible :) We need to verify that this patch does eliminate the pageload overhead before we can start considering asking for a= to get it checked in (although sr would be nice :) ). To that end I've emailed jrgm to ask for testing on this patch with the toolbar disabled to see whether it really fixes the problem (it should, because these handlers were the only thing happening at page load time).
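jwbaker's remaining suggestion (keeping a reference to the "linktoolbar" node on the LinkToolbarUI object) might look roughly like this; the lazy getter is my own illustration rather than anything in the attached patches:

  // Look the toolbar element up once and cache it, instead of calling
  // document.getElementById("linktoolbar") on every refresh/clear.
  LinkToolbarUI.prototype.getToolbar =
  function()
  {
    if (!this.toolBar)
      this.toolBar = document.getElementById("linktoolbar");
    return this.toolBar;
  }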
Gerv, a 5% page load spike is unacceptable. Look at the other times that has happened in the last year (where the offending patch was conclusively identified), and you will see a consistent pattern of immediate backout. Our conversation took place when the links toolbar's perf impact was minimal and before the perf problem with page load had been identified. A 5% perf spike with page load is huge. It's the single largest spike we've seen in months. We have a zero-tolerance bar for spiking page loads, and this has been established for months. I do not believe this feature should be re-enabled until it no longer plugs in as a load listener (or if it is fully hidden by default).
Hyatt: two things -

1) My patch above removes the page-load listener when the toolbar is disabled. With this patch and a change to default the status to disabled, would you be in favor of allowing the patch back in? If so, we can argue about the other aspects later, but let's do that at least, right?

2) Do you really think that *any* page load listener is too much? If so, why is the overhead for doing something so simple so high? I've been assuming that the problem is simply that we do too much in the page load listener - specifically, go find the head, trawl for link items, populate the toolbar, and determine whether to show or hide it. If I could cut this down to something more like "flag=true; if (boolean1 != boolean2) { dosomething() }", where the condition would be false unless links were present, would that still be too much overhead for you?
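To make the second question concrete, here is a hypothetical shape for such a minimal load listener; linkToolbarHandler.hasItems is mentioned later in this bug, while toolbarShowingLinks and updateToolbarVisibility are invented names used only for illustration:

  LinkToolbarUI.prototype.refresh =
  function(event)
  {
    // Cheap flag comparison first: only touch the toolbar when the
    // "document has links" state actually changes, so pages without
    // <link> elements pay almost nothing.
    var hasLinks = linkToolbarHandler.hasItems;
    if (hasLinks == this.toolbarShowingLinks)
      return;
    this.toolbarShowingLinks = hasLinks;
    this.updateToolbarVisibility(hasLinks);
  }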
Yeah, I'm fine as long as the default way we ship 0.9.5 (as well as the trunk) is to not incur a page load penalty. I don't even have a problem with having the 5% page load penalty be incurred when the links toolbar is on, as long as the default setting is off.
Hmm, perhaps I'll open a new bug: "Fire whoever backed out Links Toolbar in a hissy fit like a little baby." What dorks.
Then let's get some testing to ensure that this patch really does give zero page-load penalty when the toolbar is turned off, then get it r/sr/a'd and get it checked in, turned off. We can try to get the work done that would make turning it on by default acceptably fast, but let's *at least* get it IN first, right? Is there anything I can do beyond emailing jrgm to make some performance testing on this patch happen?
Very constructive, Greg. Thanks for contributing.
QA Contact: sairuh → claudius
Patch looks good to me, but don't you need a hidden="true" on the toolbar itself in the XUL overlay file to make it hidden by default?
Probably - does setting a value in the XUL file get overridden by a document.persist()ed value? If so, then I'll add that to the patch; if not, we need to find an appropriate location to set the value if it wasn't persisted.
Yes, the persisted value overrides.
Dave is right - the way to have it hidden by default is to change the current default status of "maybe" to "hidden". This is overridden by a persisted value, so if people change it to "always" or "maybe" it will stick. I agree it should be hidden by default on the trunk. Its status on the branch (from Friday onwards) is an ongoing discussion. I've talked to, and mailed hewitt about getting sr= for attachment 52053 [details] [diff] [review]. Hopefully he should do this before he leaves for Portugal for two weeks at 3pm this afternoon :-) The patch to change "maybe" to hidden is trivial; I'll attach that now. Gerv
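As a rough illustration of how the default interacts with persistence (the "linktoolbar" id and the attribute values come from the comments above; the function name is invented, and this is not the attached patch):

  // The overlay ships the toolbar with hidden="true", so new profiles see it
  // hidden by default. When the user changes the mode, persist the attribute;
  // the persisted value then overrides the XUL default on the next window.
  function setLinkToolbarHiddenState(state)  // e.g. "true" or "maybe"
  {
    var toolbar = document.getElementById("linktoolbar");
    toolbar.setAttribute("hidden", state);
    document.persist("linktoolbar", "hidden");
  }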
Why is this marked fixed? Was it checked in? My trunk build still shows the toolbar.
You mean change hidden="maybe" to hidden="true", right? :) Btw, 3pm in what timezone? 3pm where I am is 25 minutes from now... ;)
This bug has morphed. Gerv
Status: RESOLVED → REOPENED
Resolution: FIXED → ---
Summary: Disable links toolbar → Re-enable links toolbar but make it hidden by default and have no perf hit
Reassigning. Gerv
Assignee: hyatt → sballard
Status: REOPENED → NEW
sr=hyatt on the last patch. Let's check it in and do another round of tests.
Any time, David. Any time. It's obviously needed.
Gerv wrote:
>jwbaker makes a good point - improving our support for HTML is never going to
>come at zero cost.

That's simply not true. Our support for various standards involves writing code and modifying existing code. We can make things faster, both in new code and how it hooks up to existing code, and we must do so in general, or things will start to suck. We must also fix existing code that's too slow independent of new code. We can even trade the two: if new code for a new feature slows things down by 3% but we can make an across-the-board 3% improvement in old code that always runs during page load, then the new code (if it's as fast as we can make it) plus the speedup to the old code could go in. It depends on how universally demanded the new feature is (if more people would prefer to take just the 3% gain and not tread water, then perhaps there's a way to be lazy, or to let users opt into the new feature).

We have existing code that needs to be sped up. We also have unknown and hard to predict interactions among existing subsystems, with nonlinear and remote effects from small changes. We have confounding variables, and insufficient signal-to-noise in our few regular performance tests. All of these say to me that we should not tolerate identifiable performance regressions. (I'll talk about privacy problems injected during Netscape 6.0 development some other time.)

Having said that, I have to say that I was surprised to see this bug filed and patched so suddenly. I was content to let Gerv do the back out or disabling change. I don't see why hyatt felt compelled to do it in the dark of night (or so it seemed -- probably just when hyatt works). Hyatt, that's the second time you've backed something out recently -- are you buttering us up to do more back-outs? (I'm serious, shaver and others argue convincingly that we don't back out deleterious changes often enough.)

I would much prefer it if the regression perpetrator himself did the back-out. I don't think you (hyatt) want to take that on as a general duty, or to let others off the hook. Let's uphold individual responsibility.

/be
"Hyatt, that's the second time you've backed something out recently -- are you buttering us up to do more back-outs? (I'm serious, shaver and others argue convincingly that we don't back out deleterious changes often enough.)" Absolutely.
As we approach 1.0, we have to continue to tighten our standards. It should become increasingly more painful to land code, and our tolerance for performance regressions should be zero. People have to not take backouts personally. In this case, the code wasn't even backed out, simply disabled. It's trivial to re-enable it. I didn't see any need to wait on the original author to do the backout when we have a known zero tolerance policy for dramatic page load performance regressions. In this case, I got an r/sr/a and followed all appropriate policy for disabling the code. The fact that it happened quickly simply underscores that the r/sr/a all were in immediate agreement with my decision to do the disablement.
Attachment #52065 - Flags: approval+
Comment on attachment 52065 [details] [diff] [review] Patch including change to make toolbar hidden by default a=asa (on behalf of drivers) for checkin to 0.9.5 reviewers, super-reviewers, please use the Patch Tracker to note your reviews. Thanks
hyatt, perhaps we should take this to a newsgroup, but I still think you're on dangerous ground by acting as high back-out sheriff. If everyone did that for a 5% performance regression, I'd be happier. If only you do it, you risk becoming bad-cop, and (what's more important) others get to be victims. Even if you can manage to avoid being capricious (what, you didn't back out the NSS changes that have slowed down startup horrendously?! Why not?!!), you'll do a disservice to others who need to back-out responsibly, too.

I still don't see why this had to happen so fast that Gerv or sballard couldn't do it. What am I missing?

I like zero tolerance of performance regressions (as opposed to "zero tolerance policies" misapplied in other domains, e.g., weapons bans leading to frisks for nail clippers and expulsions for pointing one's finger). I think the policy needs to be clearly articulated by staff@mozilla.org, and I strongly believe that individuals who (wittingly or not) regress performance need to have a chance to make good, if the regression is not so horrible that we can't live with it for a day.

/be
Anyone should be allowed to back out 5% page load perf regressions, and this policy should be clearly stated. I'm not the only one who has done backouts of known perf regressions. BTW, there is another serious problem with the link toolbar overlay that should be fixed. It is doing a second include of navigator.css, which will substantially bloat the # of rules in the main navigator.xul window. That line should be removed from the overlay.
To relieve bugspam pressure I transplanted this to n.p.m.general
(Last comment from me.) Hyatt, you're not responding to my point, but I'll respond to yours: *everyone with CVS commit access* is "allowed" to back out anything, and I never said otherwise, as you know. All of us with commit access are acting "on our honor" -- there is no prior restraint. I'll even agree with the strawman that anyone with commit access should back out a 5% performance regression -- *but only after they make a good-faith effort to get the perpetrator to do the back-out* (I'm using back-out generally here, it includes lesser disablement). Get my drift? /be
I doubt very much that hyatt reads n.p.m.general. The reason navigator.css is included is that hewitt told me to put the show/hide style rule there. If all this requires is removing that line, let's do that. If just doing that is going to break stuff, I need more guidance. Gerv
Gerv: I think that since the overlay content is inserted inside navigator.xul which already has navigator.css as a stylesheet, you don't need to explicitly include it in the overlay. That's how I assumed things worked, anyway. So leave the autoshow rule in navigator.css, but remove the reference to navigator.css from linkToolbarOverlay.xul. Another trivial perf improvement that could be added would be if (!linkToolbarHandler.hasItems) return; inside either linkToolbarUI.clear() or linkToolbarUI.doClear(). That would improve performance a little in both enabled modes - possibly even by a lot, as linkToolbarHandler.clearAllItems() looks potentially expensive.
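A minimal sketch of that early return, assuming doClear() is where the work happens; only hasItems and clearAllItems() are named in the comment above, the rest is illustrative:

  LinkToolbarUI.prototype.doClear =
  function()
  {
    // No <link> items were collected for this document, so there is
    // nothing to tear down; skip the potentially expensive clear.
    if (!linkToolbarHandler.hasItems)
      return;
    linkToolbarHandler.clearAllItems();
  }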
"*but only after they make a good-faith effort to get the perpetrator to do the back-out*" I disagree with this point. I see no reason why, for regressions of the magnitude that we're talking about, that we have to rely on the original perpetrator of the checkin to do the backout.
This is in, right? So this bug should be resolved/fixed?
Yep. Missed this one. Gerv
Status: NEW → RESOLVED
Closed: 23 years ago
Resolution: --- → FIXED
mass-verifying claudius' Fixed bugs which haven't changed since 2001.12.31. if you think this particular bug is not fixed, please make sure of the following before reopening: a. retest with a *recent* trunk build. b. query bugzilla to see if there's an existing, open bug (new, reopened, assigned) that covers your issue. c. if this does need to be reopened, make sure there are specific steps to reproduce (unless already provided and up-to-date). thanks! [set your search string in mail to "AmbassadorKoshNaranek" to filter out these messages.]
Status: RESOLVED → VERIFIED
Product: Core → Mozilla Application Suite