User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; de-DE; rv:1.7.5) Gecko/20041122 Firefox/1.0
Build Identifier: Mozilla/5.0 (Windows; U; Windows NT 5.1; de-DE; rv:1.7.5) Gecko/20041122 Firefox/1.0
Strange .toString(16) conversion together with sign handling, e.g.:
(~(~0xFFFFFF00)).toString(16) .. -100 (expected: the original value)
(~0xfedcba).toString(16) .. -fedcbb (a -> b .. looks like a base complement, but here?!)
Steps to Reproduce:
1. unexpected "-" sign in a hexadecimal value
2. ~~x != x
3. unexpected '+1' in the textual representation of the hexadecimal value of ~x (to be
expected only on base-complement machines), e.g. ~0xfedcba .. -fedcbb
(4. on a one's-complement machine I would expect the actual implementation to give a
different textual representation, e.g. ~0xfedcba .. -fedcba)
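The reported results can be reproduced in any ECMA-262 engine; a minimal sketch:

```javascript
// Reproducing the reported behavior: ~ coerces its operand through
// ToInt32, so the result is a signed 32-bit value, and toString(16)
// prints the sign and magnitude of that value.
console.log((~0xfedcba).toString(16));       // "-fedcbb": minus sign and the '+1'
console.log((~(~0xFFFFFF00)).toString(16));  // "-100", not the original "ffffff00"
console.log(~~0xFFFFFF00 === 0xFFFFFF00);    // false: ~~x != x outside int32 range
```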
The results above are not really wrong (it's a Mozilla extension), but more than
a bit surprising, and not what a C programmer expects (an unsigned mask,
e.g. via ToUint32).
I don't see any documentation of this behavior.
In a beginners' course, I wanted to show why the & 0xFFFFFF is necessary in
(~color) & 0xFFFFFF, and it wasn't easy to explain the results ...
(This may be seen as a minor bug, but it can provoke hidden errors, and there is
no really easy way to invert 32-bit masks.)
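The classroom example mentioned above can be sketched like this (color values are assumed to be 24-bit RGB):

```javascript
// Why the & 0xFFFFFF is necessary when inverting a 24-bit color:
// ~ alone yields a negative signed 32-bit value.
var color = 0x00FFFF;                 // cyan
var inverted = ~color;                // -65536: the high byte is now set
var masked = (~color) & 0xFFFFFF;     // 0xFF0000: the intended red
console.log(inverted.toString(16));   // "-10000" -- surprising in class
console.log(masked.toString(16));     // "ff0000"
```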
Created attachment 171627 [details]
example to show the effect
>1. unexpected "-" sign in a hexadecimal value
Then what did you expect for (-256).toString(16)?
>2. ~~x != x
~ only returns a signed 32-bit integer. If x is not representable as a signed
32-bit integer then this is expected.
>3. unexpected '+1' in the textual representation of ~x hexadecimal value (to be
>expected only on base complement machines), e.g. ~0xfedcba .. -fedcbb
Actually this is completely expected, given 1. and 2.
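To spell out why 3. follows from 1. and 2.: in two's complement, ~x === -(x + 1), so toString(16) shows a minus sign and a magnitude that is one larger than x:

```javascript
// The apparent '+1' is just the two's-complement identity ~x === -(x + 1).
var x = 0xfedcba;
console.log(~x === -(x + 1));     // true
console.log((~x).toString(16));   // "-fedcbb", i.e. -(0xfedcba + 1)
```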
(In reply to comment #2)
> >1. unexpected "-" sign in a hexadecimal value
> Then what did you expect for (-256).toString(16)?
I wasn't thinking of negative hexadecimal values; the "-" operation should be
thought of as undefined on bit masks.
But ~x should have 1-bits where x has 0-bits, in either of the two usual
internal representations of x, two's complement or one's complement.
comment: of course, there is no "perfect" solution. But in the 35 years I have
been working in the software business, I have very often worked with hexadecimal
values, and apart from teaching and basic I/O software like toString I remember
no usages other than for masking purposes. And these masks very often
included the "sign bit".
256 = 100(16); ~256 = "all bits inverted" - in a 32-bit word, this is of course
FFFFFEFF(16); the value of this number as a negative number is, I think, of minor
importance.
E.g.: the inverse color value to 00FFFF is definitely FF0000, and not -FFFF or,
even worse, -10000(16).
Depending on the internal representation, "-256" would then have one of the
values "FFFFFF00" or "FFFFFEFF" - but a programmer making his masks using
negative numbers .. ?!?
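For what it's worth, the unsigned 32-bit bit pattern can be recovered in JavaScript with the standard `>>> 0` idiom (the unsigned right shift coerces through ToUint32); a sketch showing both values named above:

```javascript
// x >>> 0 coerces through ToUint32 and yields the unsigned
// 32-bit bit pattern, so the mask becomes visible again.
console.log(((-256) >>> 0).toString(16));  // "ffffff00" (two's complement of -256)
console.log(((~256) >>> 0).toString(16));  // "fffffeff" (bitwise NOT of 0x100)
```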
actual state in Mozilla:
(0xffffffff).toString(16) .. ffffffff (as I expected)
(~0xffffffff).toString(16) .. 0 (as I expected)
"0x" + (0xefdcba).toString(16) .. 0xefdcba (as I expected)
"0x" + (~0xefdcba).toString(16) .. 0x-efdcbb (and perhaps 0x-efdcba on a
one's-complement machine)
I think it should give 0xff102345, representing a 32-bit mask, for two's- and
one's-complement machines alike.
> >2. ~~x != x
> ~ only returns a signed 32-bit integer. If x is not representable as a signed
> 32-bit integer then this is expected.
Yes, but 0xffffffff is the canonical hexadecimal representation of -1 on a
two's-complement computer and of -0 on a one's-complement computer with 32-bit
words. This and only this representation is also used in every computer science
course for beginners.
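The round-trip failure is easy to demonstrate; any value outside the signed 32-bit range is folded by ToInt32 before the inversion:

```javascript
// ~~x restores x only when x is representable as a signed 32-bit integer.
console.log(~~1234 === 1234);     // true: in range, round-trips
console.log(~~0xffffffff);        // -1: ToInt32(0xffffffff) is already -1
console.log(~~Math.pow(2, 31));   // -2147483648: 2^31 wraps around
```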
After studying ECMA-262 3rd edition I see that I am in error.
But the user documentation should be amended.
It is not enough to show the syntax of numbers.
It would be valuable to describe the number concept, which is based on
64-bit reals (IEEE 754 doubles), where the 32-bit operations simulate 32-bit
two's-complement integers on a subset of this format, embedded in a 'best
possible' way into this 64-bit environment.
Unluckily "~" is defined by ECMA to work on signed 32-bit integers. This should
be documented in the end-user documentation.
In all cases "~" therefore changes the sign:
- the description "the bit values are inverted" says too little about the
resulting value without mentioning "two's complement" (a user on a
one's-complement computer would expect ~x == -x, and in fact the bit values are
only thought of as inverted)
- output of such a value through a concept wider than 32 bits (e.g. via
.toString(16)) gives a surprising effect and may cause ugly programming errors
- especially users of .toString(16) should be informed about what they really
get (e.g. ~0 .. -1, not 0xffffffff as expected)
(a bit-inverting operation on unsigned 32-bit integers would cause fewer
problems with bit masks; alternatively an output operation like toStringU32(16);
both could also help to document the situation)
Per comments, marking invalid. I will see about adding a comment about this to
the JS references when they become available from DevEdge.
The section on Bitwise Operators needs more exposition on the effects of the
32-bit conversion, negative numbers, etc.
sorry for the spam.
Fixed (but I can't close bugs)
marking as fixed.