Bug 414513 - determine proper chunk size
Status: RESOLVED WONTFIX (closed)
Opened 17 years ago · Closed 13 years ago
Component: Firefox :: Address Bar (defect)
Reporter: sspitzer; Assignee: Unassigned
from faaborg's quoted research we know that 100 ms is the maximum time we can fail to respond before people notice lag.
sane plan:
1) figure out an acceptable baseline machine
2) on that machine, figure out how many places we can get through in 100 ms, using a search term that doesn't match anything, and where all places have bookmarks (assume 1 bookmark each, although multiple bookmarks per place is valid)
3) use that as our default chunk size in our .js file
crazy plan #1:
we can add timing code into the product that changes the chunk size
dynamically.
haven't fully thought this out, but say on machine X we determine that
AutoCompleteFullHistorySearch() with a chunk size of 100 takes 10 ms. Clearly
the chunk size on that fast machine can be bigger.
but on machine Y, a chunk size of 100 takes 250 ms. that would be bad, and we
need to do less per chunk.
I'm suggesting we add the timing code (we have something with good enough
precision, PR_IntervalNow() or something, right?) and keep track of the high
water mark of how long AutoCompleteFullHistorySearch() takes in a member
variable.
then, the next time we call nsNavHistory::StartSearch(), if the high water mark
is > 100 ms, we decrease the pref for chunk size. if the high water mark is
< 100 ms, we increase it.
less crazy plan #3:
alternatively, there might be a way (through PR_SysInfo or something else?) to
figure out the RAM / CPU of the machine, and use that to determine whether the chunk size should be (for example) 50, 100 or 200.
Comment 1 (Reporter) • 17 years ago
as for why a bigger chunk size is better, to summarize earlier bug comments:
for a search that results in very few hits, the bigger the chunk size, the less total time it takes for us to see all the results and the less time until we see the first hit.
Comment 2

Seems to me there are way too many confounding factors for #3 to work.
This box has a 2.4 GHz CPU, but it's an ancient entry-level Celeron, so it's really slow for its speed. Plus, I've got perhaps the oldest legacy app in the western world running, sucking up cycles like there's no tomorrow. Plus, it's serving a couple of thin clients. I'm just a user, but I'd guess that basing chunk size on CPU/RAM wouldn't be optimal in my case.
If you're going with adaptive/dynamic stuff in the url bar, go whole hog, sez I. Sane is boring, but #1 would sure be cool.
Comment 3 • 13 years ago
the chunking code was removed a long time ago
Status: NEW → RESOLVED
Closed: 13 years ago
Resolution: --- → WONTFIX