Right now, DOUBLE_TO_JSVAL/JS_NewNumberValue only assert that the given double is a canonical NaN. It seems that, with js-ctypes or other places where code can do scary things outside of our control, non-canonical NaNs may arise and break the JS engine. Since DOUBLE_TO_JSVAL/JS_NewNumberValue are only called from outside the engine (js::Value::setDouble is used inside), adding this check shouldn't affect perf too much. Does this sound right?
Created attachment 462653 [details] [diff] [review]
Yeah, I think this is a very good idea. It's a crazy world.
Sayre: I think this should be pulled into the beta as well.
Comment on attachment 462653 [details] [diff] [review]
> JS_NewNumberValue(JSContext *cx, jsdouble d, jsval *rval)
>+ d = JS_CANONICALIZE_NAN(d);
> return JS_TRUE;
Why is this not done at the setNumber level? For example, how do we know that the Math implementation generates only canonical NaNs?
(In reply to comment #3)
> Why is this not done at the setNumber level? For example, how do we know that
> the Math implementation generates only canonical NaNs?
Because we usually know that a given double is not a non-canonical NaN. A non-canonical NaN is a qNaN with a non-zero payload, and since a qNaN's payload is intended to mean something to the user (for example, to explain where the NaN came from), it seems that cos/sin/+/- etc. take care not to generate qNaNs with random payloads. This is not fixed in the standard, but it looks like JSC depends on it as well.
It might be worth adding it to all the math library calls and measuring whether it even makes a measurable difference. If it's cheap, it might be worth it just for peace of mind.
(In reply to comment #4)
> It might be worth adding it to all the math library calls and measuring whether
> it even makes a measurable difference. If it's cheap, it might be worth it just
> for peace of mind.
I still think it would be better to do that at the setNumber level while adding something like setCanonicalNumber, and to convert the codebase to the latter in places where we know for sure.
Add assertions to start. We can dial in if any botch.
We already have assertions in DOUBLE_TO_JSVAL_IMPL.
Adding JS_CANONICALIZE_NAN to setDouble (which includes setNumber):
Sunspider: *1.010x as slow* 538.0ms +/- 0.4% 543.5ms +/- 0.3%
V8: *1.014x as slow* 5193.9ms +/- 0.3% 5267.9ms +/- 0.5%
cube: *1.108x as slow* 32.0ms +/- 1.9% 35.5ms +/- 3.1%
fannkuch: *1.031x as slow* 40.9ms +/- 0.9% 42.1ms +/- 1.5%
earley-boyer: *1.023x as slow* 1755.3ms +/- 0.7% 1795.0ms +/- 0.5%
(In reply to comment #6)
> Add assertions to start. We can dial in if any botch.
Assertions are not very useful here. They will quite likely be hit only when someone, for example, tries to exploit a bug in libc math functions. At that point we would need to rush out a patch.
(In reply to comment #8)
> cube: *1.108x as slow* 32.0ms +/- 1.9% 35.5ms +/- 3.1%
> fannkuch: *1.031x as slow* 40.9ms +/- 0.9% 42.1ms +/- 1.5%
> earley-boyer: *1.023x as slow* 1755.3ms +/- 0.7% 1795.0ms +/- 0.5%
This does not indicate whether the slowdown comes from the Math.sin etc. implementations or from other places where setDouble is used and where one can guarantee that the NaN is canonical.
To be clear, I was arguing the point that just changing setNumber to canonicalize is a slowdown. I did a second measurement that canonicalized *only* for the libc math calls in jsmath.cpp, and the entire slowdown remained.
Was this potential issue already in the code, or was it introduced during the recent development cycle(s)? If the problem you are fixing with this bug has always been there, what historical data is available to indicate a real-world problem? The point is, why take a 1 - 1.5% performance hit to fix a problem that isn't really a problem?
(In reply to comment #12)
> Was this potential issue already in the code, or was it introduced during the
> recent development cycle(s)?
It was introduced with recent changes in SM, so we have no historical data.
> The point is, why take a 1 - 1.5% performance hit to fix a problem
> that isn't really a problem?
It is not known if there is a problem. The current code assumption is that libc math functions never return non-canonical NaNs, and the current observations confirm this. But that does not mean that the libc math function implementations on all supported platforms are free of bugs that, for a specific input, would return a bad, exploitable NaN. Such bugs are especially potent since the C/C++ standards do not insist on canonical NaNs, and bad NaNs were never problematic until recently, so there was never any pressure to fix them.
Given that, my opinion is to take the performance hit until we have reasonable assurances that math behaves properly.
Created attachment 463187 [details]
How about a less pessimistic solution:
While sin(x) could return a non-canonical NaN for any of its 2^64 inputs, for most of those inputs that would be wrong according to the spec: these libc math functions only have a choice in the matter for a select few input values, mostly infinities, zeroes, and (canonical) NaNs. Thus, we could be reasonably confident by testing just these weird double values as part of, say, jsapi-tests. Incidentally, I wrote such a test two days ago (when I started getting paranoid), although it's just an ad hoc test, not a jsapi-test.
I am in agreement with Luke. If no data is available, do the needed testing to acquire it. You are all doing a tremendous amount of work (and are doing a great job, btw) to improve the overall JS performance. It seems very counter-productive to be chipping away at perf improvements on the backend without doing the due diligence Luke is suggesting.