Closed Bug 108161 Opened 23 years ago Closed 22 years ago

Delay in onmouseover

Categories

(Core :: Graphics: ImageLib, defect, P1)


Tracking


RESOLVED FIXED
mozilla0.9.8

People

(Reporter: markushuebner, Assigned: pavlov)

References


Details

(Keywords: testcase, top100, topembed)

Attachments

(3 files, 3 obsolete files)

Although the flickering images (bug 92248) were fixed, there
is now a huge delay causing major usability problems.

example:
http://studio.adobe.com/resources/main.html
in the top navigation you must leave the cursor over the image for at
least a second to see the mouse-over effect.
That's unfortunately way too long.
It also happens that several items end up highlighted at once.

What I discovered: everything on site A is really slow ...
then going to site B ... surfing a bit around ... going back
to site A and now everything is completely fine! :)

There might also be a connection to bug 93461
OS: Windows 2000 → All
Blocks: 104864
No longer blocks: 104864
Hrm.  I just tried this and I can't reproduce it.  Looks nice and smooth to me.
WFM 2001101914 WinXP -- mouseovers are instantaneous

(Please correct me if wrong, but isn't the testcase keyword
for bugs that already _have_ a testcase?)
marking worksforme.
Status: NEW → RESOLVED
Closed: 23 years ago
Resolution: --- → WORKSFORME
WFM win XP build 2001103103, win 98 build 2001110603, linux build 2001110612 and 
Mac OS X build 2001110605, marking verified
Status: RESOLVED → VERIFIED
Using build 2001110603 on Win2k, I noted that on the very first visit to the page
http://studio.adobe.com/resources/main.html the mouseover works fine, but when
reloading there is a noticeable delay - quite annoying.
Reopening
Status: VERIFIED → REOPENED
Resolution: WORKSFORME → ---
I tested that one too with yesterday's build of Mozilla and indeed there's this 
delay of almost a half second when overmousing (mouseovering?).
Another example would be http://www.theregister.co.uk ... the navigation on the
right-hand side. If you move a bit faster over the menu items there is no
mouseover effect at all.

Also seeing this on WinXP with 2001110603
Yeah, the key is to do a reload after the page has been loaded once.  It really
is slow.

[ Adding some people who know about loading. ]
Is bug 107879 a dupe of this? 

(If it is a dupe, there's a nice testcase in there)
*** Bug 107879 has been marked as a duplicate of this bug. ***
Reassigning
Status: REOPENED → NEW
Changed summary. It seems we have some strange side effect that only gets
activated when a reload is done.
Summary: Delay in onmouseover → Delay in onmouseover (after reload)
Here's a minimal testcase from bug 107879. Of the two images in the testcase,
only the one with "align=right" is susceptible to flickering :-/.
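For reference, here is a minimal sketch of the kind of rollover markup involved
(the actual attachment isn't reproduced here; the image file names below are placeholders):

    <html>
    <body>
      <img src="normal.gif"
           onmouseover="this.src='hover.gif';"
           onmouseout="this.src='normal.gif';">
      <img src="normal.gif" align="right"
           onmouseover="this.src='hover.gif';"
           onmouseout="this.src='normal.gif';">
    </body>
    </html>

The delay in this bug shows up as the time between onmouseover firing and the
swapped-in image actually being painted.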
There is a delay [left navigation items] at http://www.sony.com already on the 
first load, hence changing the summary.
Using build 2001110803 (win2k)
Summary: Delay in onmouseover (after reload) → Delay in onmouseover
*** Bug 110741 has been marked as a duplicate of this bug. ***
I tried studio.adobe.com, and this bug does not always reproduce there.  Because 
I use some large images, I thought I would offer a URL to help track down this bug:

    http://www.realmspace.com/unicode/onmouseover_slow.html

I had delays of up to 8 full seconds...  This is ungodly slow.  Both IE5 and NS4 
only took 1 second or less.
A couple more examples. It's not just that the image you land on is slow; the
strange thing for a user is that images you may have passed over stay in the
"blank" state for a couple of seconds:

http://cbs.marketwatch.com/support/default.asp?siteid=mktw
http://www.sec.noaa.gov/SWN/index.html

One thing I notice is that NS6.2 progressively renders mouseover images, whereas
the Moz trunk waits for the whole image to arrive. That, combined with some
bizarro image caching, could well explain the slowdown.
On the above-mentioned website:

   http://www.sec.noaa.gov/SWN/index.html

I tried this out with IE5.  I noticed that this site overall seems slow.  
However, it is interesting how IE5 handles the scenario vs. Mozilla.  Mozilla 
loads a black box in place of the image until it loads, which does indeed 
appear awkward.  IE5, however, simply does not swap in the image until it is fully 
downloaded.  Thus the user sees the original image, and the onMouseOver 
triggers visually only after the image is finally downloaded.

This behavior, though, seems different from the overall bug.  In my case, the 
images were all rapidly downloaded, but IE and NS4 took 1 second to render the 
image, whereas Mozilla took 8 seconds.  In this scenario, either communication is 
slower than in NS4/IE5 or the image rendering engine is slower...
Keywords: mozilla0.9.7
All the testcases here seem to work fine in build 2001112104 win32 on the first
load for me. When I reload, it slows down a lot. If I go to another page and come
back, it is fast again.
Created a new testcase for this bug. 
Its URL is http://www.world-direct.com/mozilla-testcases/108161/index.html
Modified the testcase at
http://www.world-direct.com/mozilla-testcases/108161/index.html
by adding an image preload function for MSIE.
As far as I can see it doesn't really make a difference; in this example MSIE 
(5.5 @ Win2K) still seems to be only a very little bit faster than Mozilla 
(0.9.5/20011106 @ Win2k).
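For reference, the preload trick mentioned above is typically something like this
(a sketch only, not the exact code in the testcase; the image file names are placeholders):

    var preloaded = new Array();
    function preloadImages() {
      var files = new Array("normal.gif", "hover.gif");
      for (var i = 0; i < files.length; i++) {
        preloaded[i] = new Image();      // create an off-screen image object
        preloaded[i].src = files[i];     // assigning src makes the browser fetch and cache it
      }
    }

The idea is that the rollover images are already in the browser cache before the
first onmouseover fires, so the swap should be instantaneous if caching behaves.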
Again modified the testcase at
http://www.world-direct.com/mozilla-testcases/108161/index.html
by replacing the JPG images with GIFs. It doesn't seem to make a difference, 
though.
The left navigation on
http://www.tagesschau.de/
is another good example ... 
Status: NEW → ASSIGNED
Target Milestone: --- → mozilla0.9.7
Due to a server configuration issue the images lately weren't cached, so the 
whole thing was just as slow in MSIE as in Mozilla. The configuration has now 
been changed, and the difference is now visible. 
The testcase still resides at:
http://www.world-direct.com/mozilla-testcases/108161/index.html
Keywords: mozilla1.0
Priority: -- → P2
Attached patch part 1 (obsolete) — Splinter Review
This patch doesn't really "fix" the problem; it only hides it by making
VALIDATE_ALWAYS not reload images and instead use the cached version. Part 2 will be
some code that actually fires HTTP requests and looks for 304s.
Attached patch Part 1 and 2 (obsolete) — Splinter Review
Here are parts 1 and 2.  This patch does HTTP checks for 304s if VALIDATE_ALWAYS
is set.  This patch depends on bug 114286 being fixed; I hacked some of HTTP
to give me back what I want for now.  Now on to part 3... only check the first
request for the same URL on a loadgroup...
Attachment #60782 - Attachment is obsolete: true
Depends on: 114286
Blocks: 112899
This isn't going to make it for 0.9.7. :(  It should be ready to go as soon as 
the tree opens for 0.9.8.
Target Milestone: mozilla0.9.7 → mozilla0.9.8
Blocks: 71857
Attached patch Part 1, 2, and 3 (obsolete) — Splinter Review
This patch works (still needs a real fix for 114286) but some parts of it are
sorta nasty.

what this patch does (top down):

Changes the current behavior, which currently causes the image to get completely
reloaded each time a request comes in with LOAD_BYPASS_CACHE (shift-reload) or
VALIDATE_ALWAYS (reload).  This patch fixes VALIDATE_ALWAYS to only hit the
server once, doing a 304, and resulting in only 1 copy of the image data.

What happens now is that we use aCX to do a pointer compare to see if we have
already validated or doomed a request, so that we only kick off one 304 request or
doom the request once.  aCX is probably not the best thing to use, but it
is the only thing we can currently use (without adding an additional param to
LoadImage, which I will probably end up doing).  Ideally, we could *always* get
to the default load from the loadgroup, or the loadgroup would be unique between page
loads (the object is reused, so the pointer address never changes), but the
default load doesn't exist once the page finishes loading, so JS mouseovers,
for example, don't have a default request in the loadgroup.  aCX happens to be
the prescontext, and it is unique between page loads, so it works out.  So we
store a void* that points to aCX on the imgRequest object so that we can
compare the current one coming in to the one in the request we pulled from the
cache.
If request && validateRequest, then we need to do some good stuff, some of it
very similar to what we do when we have to do a new load.  The biggest
difference is that we have to create an httpValidateChecker object and pass it
off to the channel we created... then we wait for an OnStartRequest to check
for 304s... more on this later.

Previously, when imgRequestProxy::Init was called, it sent all the
notifications immediately to the listener that is passed in to
imgLoader::LoadImage.  Now, the notifications have been moved into a separate
function that has to be called manually.  This allows us to not send them
immediately when we have to validate.

If we get a cache hit, then we have to update the cached request's loadid
(aCX) so that the next one can compare to that.

... then I removed the MULTIPART_MIXED_REPLACE stuff and used literal strings
to break the build dependency on mimetype. ...

Back to httpValidateChecker:

httpValidateChecker can be held on to by the imgRequest so that multiple proxies
can get added to it without much hassle.  When the channel comes back with an
OnStartRequest, we check its response status.  If it is a 304, then we know we
can use the imgRequest we have, so we go ahead and send off notifications and
update the loadid, then we cancel the channel.  (Canceling the channel
currently causes the http cache to doom the cache entry... darin has a bug on
that.)  Then we null out the imgRequest's mValidator pointer (which points to
'this') since it is no longer needed.  The channel and the imgRequest are the
only 2 people that hold refs to an httpValidateChecker object, so canceling the
channel will release one and nulling out the imgRequest's copy will release
the other.

If we didn't have a 304, then we have to do some fancy stuff:
start off by dooming mRequest's mCacheEntry, since we're about to throw away
this imgRequest and create a new one.  Go ahead and create a new one, toss
it into the cache, and init it.  Create a ProxyListener for multipart (as
done in imgLoader::LoadImage).  Since we already have a channel (we're in
OnStart), we will use it.  Now we have to walk through the list of proxies that
are waiting and change their owner (swap imgRequest objects).  Then fall
through and let the OnStartRequest be passed in to the new ProxyListener object,
which will then pass it to the imgRequest object.  I made imgRequest::AddProxy
and imgRequest::RemoveProxy both take an aNotify param that tells them whether
they should send notifications, and moved both functions up in the file.

Sooo... that's about it.
Attachment #61059 - Attachment is obsolete: true
Hmm, is httpValidateChecker different from nsIURIChecker (which checks other
protocols as well as http)?  Can we consolidate those two classes?
They are somewhat different.  nsIURIChecker doesn't currently have any way to
tell me whether the URL has been modified (304, etc.).  That's easy enough to add,
though.  The bigger problem is that I don't always want the channel canceled,
since I want to be able to reuse it; this makes things a bit more complicated,
since you are returning status and whatnot in OnStop and I need it in OnStart.
It's not important to my use whether the data comes in during onStart or onStop, so
if you want to change it to use onStart for that, it's okay with me (as long as
you give me a little warning so I can change my code too).  All the editor needs
is a list of URLs that didn't work, with some general indication of what sort of
failure it was that we can pass along to the user.
Right... I still think these things are pretty different.  nsIURIChecker was
designed to tell you whether a URL exists or not.  I need to play tricks with HTTP
to find out whether I can use cached content, and to continue using the stream if not.
I don't see much reason to try to incorporate what I've done into this interface.
Keywords: patch
*** Bug 85978 has been marked as a duplicate of this bug. ***
Attached patch Part 1, 2, and 3Splinter Review
This patch uses the code that darin checked in for bug 114286.
Attachment #61505 - Attachment is obsolete: true
Priority: P2 → P1
darin, can you r= or sr= this?
Comment on attachment 61893 [details] [diff] [review]
Part 1, 2, and 3

r=bryner
Attachment #61893 - Flags: review+
sr=darin with some changes.  checked in.
Status: ASSIGNED → RESOLVED
Closed: 23 years ago → 23 years ago
Resolution: --- → FIXED
Blocks: 78019
Blocks: 79020
Blocks: 92722
Blocks: 100016
Blocks: 103432
Blocks: 103558
Blocks: 94594
Looks like this caused a 6% speedup in page load times on btek -- nice one!!
...due to the fact that we now timeout on four of our worst-performing pages (so
the results aren't included, artificially reducing the sample and depressing the
page-load time). :-)

(According to jrgm on IRC last night.)

Just built from CVS and tested with The Register (www.theregister.co.uk). The
toolbar (upper right side of the screen) has buttons which are supposed to
'lowlight' (turn black) on mouseover. With this latest build, I have to keep the
mouse over a button for quite some time before the buttons turn black. So, I'm
afraid, this problem has not been resolved completely yet...

Tested with Linux CVS build (CVS pulled 200201151420 GMT)
The Register testcase WFM using the win32 2002011503 build under WINNT
Blocks: 103902
Blocks: 98429
Keywords: topembed
Is this one back?!
http://www.medianet.at/
http://www.tagesschau.de/
with build 2002032203 on WinXP, 1.1 GHz, 327ram.
When moving over the menu items on the left, the old behaviour is back.
Status: RESOLVED → REOPENED
Resolution: FIXED → ---
http://www.medianet.at for example worked great in 0.9.9
So it seems to have regressed (again) recently.
please file a new bug if this is back
Status: REOPENED → RESOLVED
Closed: 23 years ago → 22 years ago
Resolution: --- → FIXED
Filed a new one - bug 113217

sorry - the correct # is bug 133217
