Closed Bug 631098 (Opened 13 years ago, Closed 13 years ago)

Slow arrays should always (after creation) have explicit length properties

Category: Core :: JavaScript Engine, defect
Priority: Not set
Severity: normal
Status: RESOLVED FIXED
Target Milestone: mozilla2.0b12
Tracking Flags: blocking2.0 --- final+
Reporter: Waldo
Assigned: Waldo
Whiteboard: [fixed-in-tracemonkey][softblocker]

Attachments: (1 file)

Attached patch Patch
'abcabc'.match(/abc/) creates an array which doesn't have a Shape* for a length property.  The length property just happens to work by an unholy combination of array_lookupProperty and array_getProperty.  This is no good: if a native object observably has a property, it seems entirely natural that a native lookup for that property should return the property.  We should fix this.
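
To make the observable state concrete, here is a minimal sketch for the JS shell (using the shell's assertEq helper; illustration only, not part of the patch):

  var a = 'abcabc'.match(/abc/);  // match-result array
  // length is observably an own property of the array...
  assertEq(Object.prototype.hasOwnProperty.call(a, 'length'), true);
  // ...and reading it works -- but only because array_lookupProperty and
  // array_getProperty conspire to answer for it, not because a Shape* backs it.
  assertEq(a.length, 1);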

There's no way to test this without diving into privates to get JSObject::nativeLookup, which seems inappropriate for a jsapi-test, so no test.

This is equivalent to creating an empty, dense array and then converting it to a slow array, which any number of operations can do.  Object.defineProperty does that right now, and if you pass in "length" it does so with no other observable effect.  So this produces slow arrays with characteristics identical to those of arrays created in many other ways, reducing complexity and bug surface and removing the need to specially fuzz arrays whose length properties aren't backed by a Shape*.
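
A sketch of that conversion path, again for the JS shell (a hypothetical example, not taken from the patch):

  var a = [1, 2, 3];  // created dense
  // Redefining length with an attribute it already has changes nothing
  // observable but internally converts the array to a slow array.
  Object.defineProperty(a, 'length', { writable: true });
  assertEq(a.length, 3);
  a[3] = 4;           // the now-slow array still behaves like a normal array
  assertEq(a.length, 4);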
Attachment #509308 - Flags: review?(brendan)
Patch looks great, thanks.

Fixes from tm are still flowing into m-c, so this shouldn't land until after fx4 branches (clones to a repo, whatever).

Any effect on perf via push to try with "try: -a" or whatever the message is?

/be
I hope to land this now, not wait until after 4.0, to reduce complexity, bug surface, and attack surface.  Your comment in the [].length redefinition patch suggests that this is an optimization.  But as far as I've been able to discover by following blame, preserving the length-less state after the shared hack was removed was never a deliberate optimization.  Indeed (see below) it is observably not an optimization.  So I believe we should fix these concerns now, treating them as the trap your patch exposes them to be, rather than hacking around them.

I've run the SunSpider and V8 tests (both in the SunSpider suite) against the patch.  Neither shows any performance difference outside of noise.  Try results are here:

http://perf.snarkfest.net/compare-talos/index.html?oldRevs=45c647b78709&newRev=5410767a8853&tests=%20tdhtml,tdhtml_nochrome,tp4,tp4_memset,tp4_pbytes,tp4_rss,tp4_shutdown,tp4_xres,dromaeo_basics,dromaeo_css,dromaeo_dom,dromaeo_jslib,dromaeo_sunspider,dromaeo_v8,tsspider,tsspider_nochrome,ts,ts_cold,ts_cold_generated_max,ts_cold_generated_max_shutdown,ts_cold_generated_med,ts_cold_generated_med_shutdown,ts_cold_generated_min,ts_cold_generated_min_shutdown,ts_cold_shutdown,ts_places_generated_max,ts_places_generated_max_shutdown,ts_places_generated_med,ts_places_generated_med_shutdown,ts_places_generated_min,ts_places_generated_min_shutdown,ts_shutdown&submit=true

I have no idea how I'm supposed to interpret these.  The variance in the changes looks large enough that I'm inclined not to trust any of it and to go with the numbers that much narrower shell testing revealed.
blocking2.0: --- → ?
blocking2.0: ? → final+
Whiteboard: softblocker
Whether it was deliberate has nothing to do with it, but the original shared/permanent length was an optimization, and defining length per-instance could be a deoptimization dating from the very old shavarray days. That's all, and that's why I asked for perf testing, which seems like enough to address it.

But this bug is in no way a blocker, so why should we take it? General words about bug surface, attack surface, etc. do not make the case. It's a good patch, but are we serious about being in the end-game or not? There are real blockers to spend time on, including your bug 631135.

/be
should this not be marked as a blocker?
At this point, neither this bug nor the bug it blocks (now), bug 598996, should block Firefox 4. We can probably take this one without much risk, but it looks too late to work on bug 598996. It didn't get early-enough attention.

This bug could ride along, and I bet we'd be no better or worse off. It has some opportunity cost (I'm still glaring at bug 631135, but maybe Luke should take that one). Dave, what do you think?

/be
Blocks: 598996
(In reply to comment #5)
> At this point, neither this bug nor the bug it blocks (now), bug 598996, should
> block Firefox 4. We can probably take this one without much risk, but it looks
> too late to work on bug 598996. It didn't get early-enough attention.
> 
> This bug could ride-along and I bet we'd be no better or worse off. It has some
> opportunity cost (I'm still glaring at bug 631135 but maybe Luke should take
> that one). Dave, what do you think?

I don't have a strong opinion on this one. It seems low-risk, so there's not much reason not to take it, but by itself it doesn't seem to buy us much, so there's not much reason to take it. Given that the patch is already done and it apparently cleans up the design a bit, I guess it seems worth taking.
It buys us not having to worry about ''.match(//) arrays being anything other than normal slow arrays created by conversion from dense ones.  If we don't fix this, Jesse and other fuzzers will need to specially and deliberately test arrays created this way; regular testing of general-purpose dense and slow arrays won't be sufficient.
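
For example (a sketch; with the patch, these two arrays should be indistinguishable slow arrays):

  var viaMatch = ''.match(//);   // slow array built directly by the regex code
  var viaConvert = [''];         // dense array...
  Object.defineProperty(viaConvert, 'length', { writable: true });  // ...made slow
  assertEq(viaMatch.length, 1);
  assertEq(viaConvert.length, 1);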

Bug 631135 wasn't even filed at the time that I wrote this patch or submitted it to try server.  (Or so my memory goes -- looks like the try repo got reset this morning, so I can no longer check.)  I wrote the bug comment here before taking that bug, only waiting to submit it until this morning to be sure the try perf run showed no clear regression.  Any effort spent here to push to TM (that's nearly all that's left per comment 1) will not affect that bug's progress.
(In reply to comment #7)
> It buys us not having to worry about ''.match(//) arrays being anything other
> than normal slow arrays created by conversion from being dense.  If we don't
> fix this, Jesse and other fuzzers need to specially and deliberately test
> arrays created this way.  Regular testing of general-purpose dense and slow
> arrays won't be sufficient.

Is there a symptom for this bug? I'm curious, since we used to have all slow arrays delegate to Array.prototype.length. Something could have broken, but I don't know of anything.

We should test the different paths anyway, black-box style. Knowing something about the (lack of) code factoring is good motivation for testing, but that knowledge doesn't mean we won't still need those tests once the common code has been factored into a common subroutine.

> Any effort spent here to push to TM
> (that's nearly all that's left per comment 1) will not affect that bug's
> progress.

No, TANSTAAFL still applies. In release end-games we leave bugs unfixed, esp. cleanup bugs. Cleanup always costs and it carries some non-zero but probably tiny risk. This week, ok. Next week, not ok. Get it?

I'll r+ now.

/be
Attachment #509308 - Flags: review?(brendan) → review+
http://hg.mozilla.org/tracemonkey/rev/c08656eb9e39
Whiteboard: softblocker → [fixed-in-tracemonkey][softblocker]
Status: ASSIGNED → RESOLVED
Closed: 13 years ago
Resolution: --- → FIXED