Closed Bug 465754 Opened 16 years ago Closed 15 years ago

[redux] 64-bit interpreter atom representations are incorrect

Categories

(Tamarin Graveyard :: Virtual Machine, defect, P2)


Tracking

(Not tracked)

VERIFIED FIXED
flash10.1

People

(Reporter: lhansen, Assigned: stejohns)

Details

Attachments

(1 file, 1 obsolete file)

The 64-bit interpreter currently creates int atoms that are compatible with the 32-bit interpreter, i.e., they have a 29-bit sign-extended payload. The correct representation on 64-bit systems is a 32-bit sign-extended payload. Some programs may detect this problem, though it's unlikely: it requires storing an atom with a 32-bit payload from native code into a variable from which interpreted code picks it up and misinterprets it. (In addition, given the structure of the interpreter this will sometimes yield the right result.)
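As a rough illustration of the difference (a minimal sketch; the helper names and ranges below are assumptions drawn from the description above, not the real avmplus definitions), the deciding question is which values can be boxed as int atoms at all:

// Minimal sketch, not actual Tamarin code: helper names and ranges are
// taken from the bug description above.
#include <cstdint>

// 32-bit interpreter: the int atom payload is a 29-bit sign-extended value,
// so only [-2^28, 2^28 - 1] can be represented without promoting to a double atom.
inline bool fitsInIntAtom32(int64_t v) {
    return v >= -(int64_t(1) << 28) && v <= (int64_t(1) << 28) - 1;
}

// 64-bit interpreter: the payload should be a full 32-bit sign-extended value,
// so the whole int32 range fits (comment 7 below notes that uint32 values fit too).
inline bool fitsInIntAtom64(int64_t v) {
    return v >= INT32_MIN && v <= INT32_MAX;
}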
Assignee: nobody → lhansen
Flags: flashplayer-triage+
Flags: flashplayer-qrb+
Blocks: 467776
No longer blocks: 467776
Flags: in-testsuite?
Flags: flashplayer-qrb?
Flags: flashplayer-qrb+
Depends on: 490565
Lars, is this still accurate? I don't recall seeing this behavior last time I was poking at the interpreter.
I believe that description is still correct.
Flags: flashplayer-qrb? → flashplayer-qrb+
Priority: -- → P2
Target Milestone: --- → flash10.x
Attached patch Patch (obsolete) — Splinter Review
This patch is a giant roll-up of pretty much all outstanding 64-bit-specific bugs, mostly interpreter-only. Specifically: 482506, 485375, 491489, 493922, 493924, 487253, 467776.
Assignee: lhansen → stejohns
Attachment #382219 - Flags: review?(lhansen)
No longer depends on: 490565
Bug 492798 is also fixed by this patch.
Attached patch Patch #2 — Splinter Review
Updates the first patch to fix some Win64 and Linux64 compile issues, and to fix an outright bug in the negate operation: it didn't handle the case where -2147483648 must overflow into a double when negated. This didn't show up on Mac64 (only on Win64 and Linux64) because the Mac version of doubleToAtom_sse2() was wrong and always clipped int atoms to 29 bits (even on 64-bit architectures), meaning we were actually overflowing to double atoms... so I fixed that, and attempted to clean up the surrounding code a bit while I was there. (More cleanup would be nice in that mess, but that's scope creep...)
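For reference, one plausible shape of the negate fix, as a sketch only (the function name is hypothetical, not the actual patch): the single value whose negation does not fit back into a signed 32-bit payload is INT32_MIN, and that case has to fall back to a double atom.

// Hypothetical sketch of the integer fast path for negate; not the actual patch.
#include <cstdint>

// Returns true and writes the negated value when the result still fits in int32;
// the caller is expected to fall back to building a double atom otherwise.
inline bool tryNegateInt32(int32_t v, int32_t* out) {
    if (v == INT32_MIN) {
        // -(-2147483648) == 2147483648, which does not fit in a signed 32-bit int,
        // so this case must overflow into a double.
        return false;
    }
    *out = -v;
    return true;
}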
Attachment #382219 - Attachment is obsolete: true
Attachment #382371 - Flags: review?(lhansen)
Attachment #382219 - Flags: review?(lhansen)
Comment on attachment 382371 [details] [diff] [review]
Patch #2

I don't understand the definition of NEEDS_SIGN_EXTEND - should it not be true if the operand is less than zero?
Attachment #382371 - Flags: review?(lhansen) → review+
(In reply to comment #6)
> (From update of attachment 382371 [details] [diff] [review])
> I don't understand the definition of NEEDS_SIGN_EXTEND - should it not be true
> if the operand is less than zero?

Yeah, it's confusing; consider that in 64-bit builds, an int atom can have the range -2147483648 (0xffffffff80000000) to 4294967295 (0x00000000ffffffff). (Actually shifted up 3 bits, but you get the idea.)

If the value is <= 2147483647 then the atom is treated as a signed int; otherwise it's considered unsigned.

NEEDS_SIGN_EXTEND is used only for convert_i, which converts all types to a signed int; if the source is an unsigned int, it retains the same 32-bit pattern but is treated as signed instead of unsigned. So what we have to do is ensure that int atoms in the range 0x0000000080000000..0x00000000ffffffff get their sign bit propagated so that they are treated as signed. Checking for ">0" is overkill (it handles atoms that don't need it) but is likely to be cheaper than a more precise check.
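In other words (a sketch under the assumptions above; the helper name and the direct 64-bit payload manipulation are illustrative, not the real interpreter code):

// Illustrative only: propagate bit 31 of the 32-bit payload so that atoms in
// 0x0000000080000000..0x00000000ffffffff are treated as signed by convert_i.
#include <cstdint>

inline int64_t signExtendForConvertI(int64_t payload) {
    if (payload > 0) {                        // the ">0" test: overkill but cheap
        uint32_t low = uint32_t(payload);     // keep only the 32-bit pattern
        payload = int64_t(int32_t(low));      // reinterpret it as signed and widen
    }
    return payload;
}

// For payloads already <= 2147483647 the cast is a no-op, which is why the
// coarse ">0" check is harmless.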
Pushed to redux as changeset 2006:d5654b2ac8b2
Status: NEW → RESOLVED
Closed: 15 years ago
Resolution: --- → FIXED
I'm having some problems generating a test case that will catch the negate problem mentioned for the 64-bit builds.

var x:int = -2147483648;
var y = -x;
trace(x);
trace(y);
if (x == -2147483648) trace("x == -2147483648");
if (y ==  2147483648) trace("y ==  2147483648");
++x;
++y;
trace(x);
trace(y);
if (x == -2147483647) trace("x == -2147483647")
if (y ==  2147483649) trace("y ==  2147483649")

tc rev 700 on Win64 will output:
-2147483648
-2147483648
x == -2147483648
y ==  2147483648
-2147483647
-2147483647
x == -2147483647
y ==  2147483649

As you can see, the checks on the values are true for 'y' when they really should not be.
1) y traces out as '-2147483648' yet is also == 2147483648 
2) y after ++y traces out as '-2147483647' yet is also == 2147483649 

How can I actually verify that the old behavior is no longer happening if the checks on these values do not fail properly using an older vm?
(In reply to comment #9)
> How can I actually verify that the old behavior is no longer happening if the
> checks on these values do not fail properly using an older vm?

I don't know if this will help, but we could add methods to the System class that either perform representation checking or return a ByteArray with the exact bits of the value. (Note too that a bogus integer representation in an atom can confuse the number formatter, which may account for some of the apparent behavior.)
Resolved fixed engineering / work item that has been pushed.  Setting status to verified.
Status: RESOLVED → VERIFIED