Closed Bug 536673 Opened 15 years ago Closed 15 years ago

Crash [@ UpvarArgTraits::interp_get (fp=0x0, slot=0)]

Categories

(Core :: JavaScript Engine, defect)

Hardware: x86_64
OS: Linux
Type: defect
Priority: Not set
Severity: critical

Tracking


RESOLVED DUPLICATE of bug 535930

People

(Reporter: hidenosuke, Unassigned)

Details

(Keywords: crash)

Crash Data

Attachments

(1 file)

Steps to reproduce:
1. Open http://www.jcom.co.jp/
2. Click the "マイエリアを設定" (Set My Area) button

Actual Result:
Firefox crashes

Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.3a1pre) Gecko/20091224 Minefield/3.7a1pre
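
For readers hitting this stack for the first time: the signature "interp_get (fp=0x0, slot=0)" means an upvar getter dereferenced a null interpreter frame pointer while reading argument slot 0. Below is a minimal, self-contained sketch of that failure pattern; the type and function names (StackFrame, FindFrameAtDepth) are hypothetical stand-ins rather than the actual TraceMonkey source, and the guard at the end is shown only for illustration.

#include <cstdint>
#include <cstdio>

// Hypothetical stand-ins for the interpreter's value and frame types.
typedef uint64_t Value;

struct StackFrame {
    Value* argv;   // arguments of the active call
};

// Traits object that reads an upvar out of an interpreter frame.
// If the caller fails to locate the frame and passes fp == nullptr,
// the dereference below matches the crash in the summary.
struct UpvarArgTraits {
    static Value interp_get(StackFrame* fp, int32_t slot) {
        return fp->argv[slot];   // crashes when fp == nullptr
    }
};

// Illustrative lookup: computes which frame holds the upvar and can
// return nullptr instead of a frame when the target is out of range.
static StackFrame* FindFrameAtDepth(StackFrame* frames, int depth, int upvarLevel) {
    int target = depth - upvarLevel;
    return (target >= 0) ? &frames[target] : nullptr;
}

int main() {
    Value args[] = { 42 };
    StackFrame frames[1] = { { args } };

    // Correct case: the frame is found and slot 0 is read safely.
    StackFrame* fp = FindFrameAtDepth(frames, 0, 0);
    printf("slot 0 = %llu\n", (unsigned long long) UpvarArgTraits::interp_get(fp, 0));

    // Failure case: the lookup misses, fp is nullptr, and an unguarded
    // call to interp_get would fault at "interp_get (fp=0x0, slot=0)".
    fp = FindFrameAtDepth(frames, 0, 1);
    if (fp)   // illustrative guard; the actual fix is tracked in bug 535930
        UpvarArgTraits::interp_get(fp, 0);
    return 0;
}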
Attachment #419115 - Attachment mime type: application/octet-stream → text/plain
I just hit this as well, also in an x86_64 Linux debug build.
Assignee: nobody → general
Component: General → JavaScript Engine
Product: Firefox → Core
QA Contact: general → general
This appears to already be fixed on tracemonkey.  The fix landed in this range (based on Linux x86_64 nightlies):
http://hg.mozilla.org/tracemonkey/pushloghtml?fromchange=7fe46c012f25&tochange=593b65940589
Hmmm.  I wasn't logged in when I did my first search for this bug, so I missed security bugs.

(Why are we marking nightly-only crashes security-sensitive?)
Status: NEW → RESOLVED
Closed: 15 years ago
Resolution: --- → DUPLICATE
Keywords: crash
Summary: Crash [UpvarArgTraits::interp_get (fp=0x0, slot=0)] → Crash [@ UpvarArgTraits::interp_get (fp=0x0, slot=0)]
(In reply to comment #5)
> (Why are we marking nightly-only crashes security-sensitive?)

I don't know -- scrupulous rule-following plus excessive paranoia?

I have argued against restricting tm bugs that both stand alone (they do not reveal anything about an independent vulnerability) and will be fixed quickly (and before the next tm -> m-c merge). The latter can catch people by surprise, but sayrer is all over the bugs and waits to merge till tm is "known good".

Still, this is a weak point (human in the loop). Yet I agree that on balance we have been over-using the s-s restriction.

/be
(In reply to comment #6)
> 
> Still, this is a weak point (human in the loop). Yet I agree that on balance we
> have been over-using the s-s restriction.

I would worry more about overusing the s-s restriction if I thought we were shutting out folks that could possibly fix the restricted bugs. As things are now, I don't think it's one of our bigger problems.
(In reply to comment #7)
> (In reply to comment #6)
> > 
> > Still, this is a weak point (human in the loop). Yet I agree that on balance we
> > have been over-using the s-s restriction.
> 
> I would worry more about overusing the s-s restriction if I thought we were
> shutting out folks that could possibly fix the restricted bugs. As things are
> now, I don't think it's one of our bigger problems.

All I know is (1) I've had to cc: people like Luke who are not s-g members and do not have s-s bug access, quite a bit; (2) dbaron wasn't logged in and missed this bug at first. These are enough to tip the balance for me, but agreed it's hard to be sure without more data. This is something we should just try to reach quick agreement on, or else keep being paranoid about.

/be
1) argues for adding more members, particularly the entire TraceMonkey team (is Luke the only omission?) given the fire we play with.  (There's an argument to be made that the s-g idea is a mere formalism in cases like this; it seems a more fundamental concern which should be argued separately, in its own right, if people think that's the case.  I think it's a slowly creeping concern, but I don't have a better idea now.)  2) is a slightly larger concern, but I think we have to recognize that you fundamentally can't get a true view of TM without including security issues, whether caught early and fixed fast or not.

I agree with keeping secret until successful merge to trunk (or merge to branch plus branch release, if that be the case), plus a little fuzz (few days to a week) to allow most stragglers to catch up with updates.  I don't agree with filing visible security-issue bugs, except if the issue is fixed so quickly that it never appears in a nightly.
"The TraceMonkey team?" No, we're an open source project, we do not have a work-for-the-paid-team-or-you're-on-the-outside rule for s-s bug access.

Exploitable bugs in a tm nightly could and arguably should be "caveat user", since we do not make them available as branded builds or promote them in any way. Hackers are not targeting them as far as we know.

The restrictiveness costs; we're here expending yet more time recounting how it costs. We need every efficiency and bit of programmer productivity we can get from the tm/m-c repo split and other technology and people wins, in order to compete.

This is not a negligible issue, although it's not the only issue. It is bigger than the try-server email not linking to exact results pages or waterfall table cells, IMHO.

Unfortunately the apples-to-oranges trade, *if* a tm user were actually hacked, entails incomparable values. Productivity vs. identity theft, not one for us to decide.

But risk = probability * cost. The question remains, is the risk's probability demonstrably more than epsilon above zero?

Again, let's stipulate that the bug in question does not telegraph exploitable flaws elsewhere that won't be fixed promptly.

All we know, from direct experience, is that the restrict-by-default policy costs us. The asserted benefits are based only on fear of exploits no one has demonstrated. They need evidence. Argumentum ad ignorantiam is still a fallacy. I think we should look at a "caveat user" policy.

/be
(In reply to comment #10)
> "The TraceMonkey team?" No, we're an open source project, we do not have a
> work-for-the-paid-team-or-you're-on-the-outside rule for s-s bug access.

True, and I had not intended to imply any such policy.  By the "team" I referred to regular hackers on it, which happens to coincide with people also paid to work on it.  If any person shows up and contributes regularly and sufficiently often, that's a deciding factor in determining whether I'd assent to granting access (assuming a standard portion of communication ability and prudence, which I deem necessary preconditions).  I have some difficulty thinking of cases where it wouldn't be for others, given the nature of what we maintain/improve.


> Exploitable bugs in a tm nightly could and arguably should be "caveat
> user", since we do not make them available as branded builds or promote
> them in any way.

If I thought we truly used TM builds as only our own private playground, that would be reasonable.  However, we've publicized them in the past for experimenting with things or checking out the new hotness, and we still do very slightly (if necessarily) when tracking regression ranges.  If we were to decide to abstain entirely from any publicization beyond their necessary use to track regressions, or if we accompanied special publicization by explicit notice of a policy of not hiding transient security bugs, I think I might be able to go along with not temporarily hiding TM-only bugs.


> The restrictiveness costs; we're here expending yet more time recounting
> how it costs.

If we don't know for sure which policy is better (I'm not sure I do), better reasoned debate than a decision without chance to consider the benefits and costs, no?


> Unfortunately the apples-to-oranges trade, *if* a tm user were actually
> hacked, entails incomparable values. Productivity vs. identity theft, not
> one for us to decide.

Who should at least estimate that, if not us, when determining our own policy?  Why are they completely incomparable?


> But risk = probability * cost. The question remains, is the risk's
> probability demonstrably more than epsilon above zero?

Here's a danger of your proposal beyond just attacker targeting TM users: we forget about a deliberately unhidden security bug, and it makes its way into m-c.  I expect this to happen, we're only human.  From there it's a moderate step to such making its way into a release -- improbable but, I think, probable by more than epsilon.  We could mitigate with a [sg:tm] strawman annotation, but that itself introduces a process burden (the worse for being sui generis).  I think there are real costs to what you propose which we must balance against the benefits when considering any change.
(In reply to comment #10)
> 
> All we know, from direct experience, is that the restrict-by-default policy
> costs us. The asserted benefits are based only on fear of exploits no one has
> demonstrated.

I mostly worry about a test case or some slight variant being exploitable on a shipping version of Firefox. We got burned real badly when milw0rm ran with an open-access security bug after 3.5 was released.
(In reply to comment #11)
> ... if we accompanied special publicization by explicit
> notice of a policy of not hiding transient security bugs, I think I might be
> able to go along with not temporarily hiding TM-only bugs.

That's what I'm proposing, with caveats about looking hard at the bug to avoid releasing info of the kind sayrer mentions in comment 12.

We generate almost all such restricted reports. The policy fails if we cannot tell which bugs are isolated and quickly fixable, and which point to weaknesses we don't want to reveal to the world yet per the security bug policy.

> > The restrictiveness costs; we're here expending yet more time recounting
> > how it costs.
> 
> If we don't know for sure which policy is better (I'm not sure I do), better
> reasoned debate than a decision without chance to consider the benefits and
> costs, no?

No one questioned debate, it takes two and I'm participating freely :-P. My point was we have costs without demonstrable benefits.

> > Unfortunately the apples-to-oranges trade, *if* a tm user were actually
> > hacked, entails incomparable values. Productivity vs. identity theft, not
> > one for us to decide.
> 
> Who should at least estimate that, if not us, when determining our own policy? 
> Why are they completely incomparable?

I have apples, you have oranges, we each want the other kind of fruit, we barter or in the modern world use money. The problem is coming up with dollar values for productivity vs. exploits. It can be done only via markets and tort systems in the real world.

Anyway, we have costs without demonstrable benefits, so I'm one of the "us" who is indeed estimating here and finding the current policy too costly :-/.

> Here's a danger of your proposal beyond just attacker targeting TM users: we
> forget about a deliberately unhidden security bug, and it makes its way into
> m-c.  I expect this to happen, we're only human.  From there it's a moderate
> step to such making its way into a release -- improbable but, I think, probable
> by more than epsilon.  We could mitigate with a [sg:tm] strawman annotation,
> but that itself introduces a process burden (the worse for being sui generis). 

This is a problem in general, at least it has been historically: we have not always restricted crash bugs that were probably exploitable (FMR on virtual method call, e.g.). We've gotten better. You have a point that failure can have a bad worst case (P=1, C=high, R=high).

> I think there are real costs to what you propose which we must balance against
> the benefits when considering any change.

Costs to exploited tm users where the bug was quickly fixed? I doubt it.

Costs due to mistakes of the kind you mention above, and sayrer mentions in comment 12 -- you have a point.

To open up bug access to more people, the virtual open-source "team", seems hard right now. Is it possible without convening security-group@mozilla.org?

/be
(In reply to comment #13)
> To open up bug access to more people, the virtual open-source "team", seems
> hard right now. Is it possible without convening security-group@mozilla.org?

Don't think so.  If you feel we have issues right now, particularly wrt Luke, feel free to propose there and I'll second (among others, I'm sure).  I haven't personally felt the need enough to propose yet, but if someone else does I'm more than willing to second to meet someone else's need assuming a certain level of good interaction.
I suspect s-g people will say "just cc: those who need to know" and I think that is a reasonable response. But it still leaves us with the problem that we don't cc: people often enough or presciently enough.

I know I've had to cc: Luke, but what about others not on s-g? This is a case where you don't know what you're missing out on until someone mentions a bug on IRC, or you see a dependency status change but can't access the other bug, or maybe not till the bad thing happens and we ship something bogus the un-cc'ed person could have spotted.

The over-use of s-s restriction I've seen has been on self-filed and/or fuzzer-found, obscure, probably exploitable but not easily exploited bugs. If we keep on restricting access to these we should have a quick way of cc'ing a standard set of trusted people. A pseudo-user whom we all watch?

/be
Doesn't work, that'd be an easy hole if you weren't an s-g member.  :-)  I can't think of holes if it were a mailing list, tho...
(In reply to comment #16)
> Doesn't work, that'd be an easy hole if you weren't an s-g member.

Sure, my point was to get the mediation of access out of s-g and into bugzilla, which already allows delegation via cc'ing. If the bugzilla pseudo-user were not watchable except by users on a list admin'ed through b.m.o, say...

> I can't think of holes if it were a mailing list, tho...

Even easier!

/be
Bugzilla doesn't work that way.

Anyone can watch any account, but security permissions are applied based on the actual user, not the watched account.

You could set up a mailing list, convince s-g to add that list to the group, and then dynamically change who is on that list.

I would personally vote, as an s-g member, against granting your mailing-list user access to Bugzilla s-g :).

Personally, I only get bugmail for public bugs (this is my choice), which means I, like dbaron, will not see such a bug.

If we have advertised t-m builds in the past, that's somewhat of a mistake. I agree with brendan: caveat emptor or something; users who use tracemonkey need to be aware that they're taking a risk, just as anyone with Minefield or older incarnations of Gecko is told "hey, this thing might eat your hard drive".
Crash Signature: [@ UpvarArgTraits::interp_get (fp=0x0, slot=0)]