Closed Bug 1335122 Opened 7 years ago Closed 7 years ago

Crash in mozilla::dom::CreateInterfaceObjects

Categories

(Core :: DOM: Core & HTML, defect, P3)

45 Branch
x86
Windows
defect

Tracking


RESOLVED INCOMPLETE
Tracking Status
firefox-esr45 --- wontfix
firefox51 --- wontfix
firefox52 --- wontfix
firefox-esr52 --- wontfix
firefox53 --- wontfix
firefox54 --- wontfix
firefox55 --- unaffected
firefox56 --- unaffected

People

(Reporter: jesup, Unassigned)


This bug was filed from the Socorro interface and is about crash
report bp-dfd0eb0c-6864-48fb-aeee-6725f2170124.
=============================================================

We have a lot of DOM binding crashes with random addresses in CreateInterfaceObjects(), including (like here) some clear UAFs.  Almost all are on Windows (all but one in the last month), and they come from a variety of DOM binding creation call sites.  Many are EXEC crashes, which is even worse.  This points to a likely flaw in refcounting or lifetimes in the binding code (though I'm guessing blindly here) - but in any case it points to a serious, perhaps-exploitable problem.

Not sure who should look at it.  Smaug?  Who should be cc'd?
Flags: needinfo?(bugs)
Flags: needinfo?(peterv)
Flags: needinfo?(bzbarsky)
Flags: needinfo?(bugs)
I've tried to look into some of these in the past (or at least related), iirc.  The one linked here is not really usable because it claims the crash is on the Rooted<> declaration, but something like https://crash-stats.mozilla.com/report/index/b120ac57-4a62-494c-a1f0-39ea42170130 is more like what I recall: it's trying to write to the proto and iface cache and crashing.

How we manage _that_ I do not know: the proto and iface cache just hangs off the global, we never create the global (not far enough to be running code like what https://crash-stats.mozilla.com/report/index/b120ac57-4a62-494c-a1f0-39ea42170130 shows) if allocating it fails, and for mainthread (which is what this stack is) there isn't even any lazy allocation inside the proto and iface cache.

Oh, and the crash says EXCEPTION_ACCESS_VIOLATION_READ but the thing being read is the stack variable "interface".

I don't have a usable Windows box right this moment, but it might be worth someone trying to figure out from minidumps which actual instruction is crashing so we would have _some_ idea of where to go digging.  I'm left wondering about GC bugs of some sort, for lack of anything else concrete.  :(
Flags: needinfo?(bzbarsky)
Andrew, can you poke around on a couple of these?  Or poke someone else to?  Thanks
Flags: needinfo?(continuation)
I'm not familiar with debugging minidumps. Maybe you can look at this, Kan-Ru, when you get back? Your help was very valuable in understanding bug 1328768. Thanks.
Flags: needinfo?(continuation) → needinfo?(kchen)
Group: core-security → dom-core-security
(In reply to Boris Zbarsky [:bz] (still a bit busy) from comment #1)
> I don't have a usable Windows box right this moment, but it might be worth
> someone trying to figure out from minidumps which actual instruction is
> crashing so we would have _some_ idea of where to go digging.  I'm left
> wondering about GC bugs of some sort, for lack of anything else concrete.  :(

The assembly is heavily inlined and reordered so I have no confidence in deciphering all the instructions. The crash offset for bp-b120ac57-4a62-494c-a1f0-39ea42170130 looks like

        return;
      }
   
      *protoCache = proto;
  61CE42F0  test        edx,edx  
  61CE42F2  je          `js::irregexp::RegExpEmpty::GetInstance'::`2'::`dynamic atexit destructor for 'instance''+9E223h (61CE4303h)  
  61CE42F4  and         edx,edi  
  61CE42F6  cmp         dword ptr [edx+0FFFF8h],0  
  61CE42FD  jne         mozilla::dom::CreateInterfaceObjects+78h (61738721h)  
  61CE4303  push        esi  
  61CE4304  call        js::gc::StoreBuffer::putCell (616C35A7h)  
  61CE4309  jmp         mozilla::dom::CreateInterfaceObjects+78h (61738721h)  
  61CE430E  and         edx,edi  
  61CE4310  mov         ecx,dword ptr [edx+0FFFF8h]  
  61CE4316  test        ecx,ecx  
  61CE4318  je          mozilla::dom::CreateInterfaceObjects+78h (61738721h)  
  61CE431E  push        esi  
  61CE431F  call        js::gc::StoreBuffer::unputCell (616C3AC8h)  
  61CE4324  jmp         mozilla::dom::CreateInterfaceObjects+78h (61738721h)  
        if (protoCache) {
  61CE4329  test        esi,esi  
  61CE432B  je          mozilla::dom::CreateInterfaceObjects+0E5h (6173878Eh)  
          // If we fail we need to make sure to clear the value of protoCache we
          // set above.
          *protoCache = nullptr;
  61CE4331  mov         eax,dword ptr [esi]  
  61CE4333  and         dword ptr [esi],0  
  61CE4336  test        eax,eax  
  61CE4338  je          mozilla::dom::CreateInterfaceObjects+0E5h (6173878Eh)  
  61CE433E  and         eax,edi  
  61CE4340  mov         ecx,dword ptr [eax+0FFFF8h]  
  61CE4346  test        ecx,ecx  
  61CE4348  je          mozilla::dom::CreateInterfaceObjects+0E5h (6173878Eh)  
  61CE434E  push        esi  
  61CE434F  jmp         `js::irregexp::RegExpEmpty::GetInstance'::`2'::`dynamic atexit destructor for 'instance''+9E29Eh (61CE437Eh)  
        }
        return;
      }
      *constructorCache = interface;
  61CE4351  and         eax,edi  
->61CE4353  cmp         dword ptr [eax+0FFFF8h],0  
  61CE435A  jne         mozilla::dom::CreateInterfaceObjects+0E5h (6173878Eh)  
  61CE4360  jmp         mozilla::dom::CreateInterfaceObjects+0D3h (6173877Ch)  
  61CE4365  test        eax,eax  
  61CE4367  je          mozilla::dom::CreateInterfaceObjects+0E5h (6173878Eh)  
  61CE436D  and         eax,edi  
  61CE436F  mov         ecx,dword ptr [eax+0FFFF8h]  
  61CE4375  test        ecx,ecx  
  61CE4377  je          mozilla::dom::CreateInterfaceObjects+0E5h (6173878Eh)  
  61CE437D  push        edx  
  61CE437E  call        js::gc::StoreBuffer::unputCell (616C3AC8h)  
  61CE4383  jmp         mozilla::dom::CreateInterfaceObjects+0E5h (6173878Eh)

eax is null so we try to dereference 0xffff8, which matches the crashing address. edi is 0xfff00000. interface is optimized away.

It looks like it's inside http://searchfox.org/mozilla-central/rev/d20e4431d0cf40311afa797868bc5c58c54790a2/js/src/jsobj.h#655 or somewhere around it.
Flags: needinfo?(kchen) → needinfo?(bzbarsky)
I suspect edi=0xfff00000 is ~ChunkMask, and the "and eax,edi" instructions are addr &= ~ChunkMask, like you see in Cell::chunk() (Heap.h).

storeBuffer() is "return chunk()->trailer.storeBuffer;"
Hmm.  So I agree that being inside writeBarrierPost is not an unreasonable place to end up for this assignment.  ChunkMask is definitely 0xfffff so on a 32-bit system, 0xfff00000 could in fact be ~ChunkMask.

ChunkSize is 0x100000.  Things are padded so that the ChunkTrailer (what chunk()->trailer refers to) is up against the end of the chunk.  ChunkTrailer contains a ChunkLocation (uint32_t), "padding" (uint32_t), and two pointers.  On a 32-bit system that should give it a sizeof(ChunkTrailer) == 16.  That means the offset of the ChunkTrailer within the chunk is 0xffff0.  And 0xffff8 is the offset of "storeBuffer" in the chunk.

OK, so next->storeBuffer() will return *((next & 0xfff00000) + 0xffff8), which is what we see going on here.

This means that our incoming pointer for "next" or "prev" was bogus, such that next & 0xfff00000 came out 0.  But next wasn't 0 itself, since that's null-checked (for both "next" and "prev").

OK.  In this case, "next" is "interface" and "prev" is whatever used to live in *constructorCache.

"interface" comes from CreateInterfaceObject.  The only values that returns are nullptr and whatever is returned by JS_NewObjectWithGivenProto.

The thing that used to live in *constructorCache... "constructorCache" comes from things like &aProtoAndIfaceCache.EntrySlotOrCreate(constructors::id::StyleSheet) (because we're coming through StyleSheetBinding::CreateInterfaceObjects).  This is clearly mainthread, so that means we went through ArrayCache::EntrySlotOrCreate, which returns (*this)[i].  ArrayCache is defined like so:

  class ArrayCache : public Array<JS::Heap<JSObject*>, kProtoAndIfaceCacheCount>

Array has a "T mArr[Length]", with T == JS::Heap<JSObject*>.  So the default Array ctor should default-construct all the JS::Heaps here.

Heap's default ctor calls init(GCPolicy<T>::initial()), which means init(nullptr) in this case (see GCPointerPolicy).

So my best guess at this point is that we got a bitflip such that "next" or "prev" no longer tests falsy but is a small enough number to give 0 when anded with ~ChunkMask.  :(
Flags: needinfo?(bzbarsky)
(In reply to Boris Zbarsky [:bz] (still a bit busy) from comment #6)
[great analysis deleted]

> So my best guess at this point is that we got a bitflip such that "next" or
> "prev" no longer tests falsy but is a small enough number to give 0 when
> anded with ~ChunkMask.  :(

So, it seems weird that these values would be subject to such a continual stream of bitflips that just happen to land in these pointers.  Unless this is the sign of an active RowHammer attack targeted at this value (or at some other value that hits this one frequently by mistake) - which seems even less likely than plain bitflips, and those seem unlikely to begin with.  (A handful, maybe - a continual stream of them?  less likely.)

Perhaps we can land a runtime assertion to catch bad ptrs here (or bitflips) and run that in beta?
> So, it seems weird that these values would be subject to such a continual stream of bitflips

Yes, I agree that this is highly suspicious...

Jon, Steve, any other good ideas on how a GCThing can end up with the sort of pointer we're seeing here?

We can certainly add runtime assertions; that would at least allow us to tell whether it's the next or prev value (or both) that is bogus.
Flags: needinfo?(sphink)
Flags: needinfo?(jcoppeard)
I looked at other crashes that have different crash addresses.

In https://dxr.mozilla.org/mozilla-release/rev/142c26e6e6e46b64653bf84db27ad1ed85d6dfc8/dom/bindings/BindingUtils.cpp#915 the crash usually happens at line 956 or line 966. I'm not sure about line 956, but line 966 is usually due to an invalid protoCache address, sometimes a poisoned address like 0xe5e5e501

-> 956	  JS::Rooted<JSObject*> proto(cx);
   957	  if (protoClass) {
   958	    proto =
   959	      CreateInterfacePrototypeObject(cx, global, protoProto, protoClass,
   960	                                     properties, chromeOnlyProperties,
   961	                                     unscopableNames, isGlobal);
   962	    if (!proto) {
   963	      return;
   964	    }
   965	
-> 966	    *protoCache = proto;
   967	  }
   968	  else {
   969	    MOZ_ASSERT(!proto);
   970	  }

I also noticed that these crashes only happen on old releases, with 52 beta accounting for 3 crashes, 51 accounting for 31 crashes, 50 accounting for 40 crashes and a long tail of older releases in the past month. Maybe it's not worth spending more time on this signature for now.
(In reply to Kan-Ru Chen [:kanru] (UTC+8) from comment #9)

> I also noticed that these crashes only happen on old releases, with 52 beta
> accounting for 3 crashes, 51 accounting for 31 crashes, 50 accounting for 40
> crashes and a long tail of older releases in the past month. Maybe it's not
> worth spending more time on this signature for now.

That's more-or-less what I would expect from daily usage patterns; that doesn't read to me as anything but a pretty constant rate of crashes (when compared to usage).
> I also noticed that these crashes only happen on old releases

This just means that almost no one is using nightly or aurora or beta, compared to the number of people using releases.

Invalid protoCache address is pretty weird.  Especially if it's poisoned...  Those crashes were mainthread, not workers, right?  That pointer is into an array hanging off the global, so if the array is dead either the global is dead or we have something deeply messed up.  :(
(In reply to Boris Zbarsky [:bz] (still a bit busy) from comment #8)
> Jon, Steve, any other good ideas on how a GCThing can end up with the sort
> of pointer we're seeing here?

Not beyond the usual (heap corruption, bad RAM, etc).  We removed all the code which gave out low value pointers that had special meanings.
Flags: needinfo?(jcoppeard)
Hm, the above assembly is just a *little* too short. All of the code before the crash address maps pretty directly to the code (with the post barrier inlined in), not even reordered much at all (save for a shared call to unputCell, which is used after pushing two different registers).

But I don't really follow the code at the crash address. My first assumption is that eax would be next aka interface. Except that it jumps to 6173878Eh if eax->storeBuffer() is nonzero, and 6173878Eh is the address basically used to drop off the end of the function. If it *is* zero, it jumps to 6173877Ch, and I don't know what that is. The way the code is written, you'd expect it to first test

    if (next && (buffer = next->storeBuffer())) {

and if eax==next then I'd expect a nonzero eax->storeBuffer() to proceed to test prev instead of jumping to 6173878Eh. Also, the following code (starting at 61CE4365) looks like a totally normal

    if (prev && (buffer = prev->storeBuffer()))
        buffer->unputCell(static_cast<js::gc::Cell**>(cellp));

with eax==prev. But I don't know what jumps there. Maybe 6173877Ch could jump back to it? Under that hypothesis, 6173877Ch would load *constructorCache into eax and jump back to 61CE4365. Hm, all that would make sense.

But instead of hurting my head with that, I'm thinking that it would be nice to see what jumps to 61CE4351, which from the code looks like it ought to be the

    if (!interface) {

line. That would easily determine what register 'interface' is in, and in particular if it's eax. It does seem weird for eax to be *constructorCache -- why would the compiler reorder that? -- but it's also weird that I don't see the *constructorCache dereference in the assembly.

Oh, and also, it would make sense for 'interface' to be in eax since it was just returned from CreateInterfaceObject before the if (!interface) check, so why shuffle it into a different register when it's already in eax?

Is the code before what was pasted in comment 4 accessible? ni?kanru
Flags: needinfo?(sphink) → needinfo?(kchen)
Anyway, if I just assume that eax is interface there, then the question is how it passed a null check yet came out null after ANDing with 0xfff00000.

The crashes kanru pointed to in comment 9, on line 956 in the Rooted constructor, make me suspicious of something going haywire in the rooting mechanism. 'interface' is stored in a Rooted before being returned. The Rooted constructor registers the stack address with the root lists. For the sake of argument, say the compiler messed up the LIFO ordering of the Rooteds when inlining, or maybe just mixed up stack addresses so that the 'stack' field got mixed up with 'interface', and when the possibly-inlined CreateInterfaceObject "returned" it ended up returning &roots[JS::MapTypeToRootKind<T>::kind] instead, and roots had a small address... or maybe 'stack' was fine, but when it ran ~Rooted and did *stack=prev, prev pulled a weird address out of roots[]... ok, I'm doing way too much speculation here. And I don't *really* want to blame this on miscompilation. Still, it does seem possible that if the root list were somehow mangled, we might end up in this situation.
sfink: thanks!

So I was rooting (no pun intended) around in the other reports, and noticed some additional patterns.

The crashes occur on all platforms, but not surprisingly the crash line differs between them.  This may be a big hint: looking at the different platforms and the code generated for each could constrain the problem down more (it also tends to speak against compiler bugs).

On mac, as in https://crash-stats.mozilla.com/report/index/b8f9334a-cfa2-4c9a-bdda-222db2170203 you see it in the RootedAPI.h code (with 0x1 as the crash address).  In 45.7esr, it's always an EXEC crash on the CreateInterfacePrototypeObject() for the protoclass (and there are some e5e5 crashes). On 52 and 51 on Windows, it appears to be (mostly) READ errors on the "*constructorCache = interface;" line.

Since it's not unreasonable to presume there's a common problem here (even if it is memory trashing or bitflips, which I mostly doubt), looking at the way the problem moves around across versions and the different code layouts for each should greatly narrow the possibilities down.
Yeah, when you sort by version, the patterns are very obvious.  Most versions crash in a single manner (READ, WRITE, EXEC) at a single line.  50.1.0 crashes either on the "*constructorCache = interface;" line or (with e5e5 addresses, note) on the Rooted<> line.
The full assembly is here: https://pastebin.mozilla.org/8978021

(In reply to Steve Fink [:sfink] [:s:] from comment #13)
> Hm, the above assembly is just a *little* too short. All of the code before
> the crash address maps pretty directly to the code (with the post barrier
> inlined in), not even reordered much at all (save for a shared call to
> unputCell, which is used after pushing two different registers).

They were split into two parts. Now I included all of them.

The code around the crash address looks like:

6173874B  call        mozilla::dom::CreateInterfaceObject (617388DCh)
61738750  mov         ecx,eax
>    eax = interface
>    ecx = interface
61738752  add         esp,24h
61738755  test        ecx,ecx
61738757  je          61CE4329
6173875D  mov         edx,dword ptr [constructorCache]
>    edx = constructorCache
61738760  mov         eax,dword ptr [edx] 
>    eax = *constructorCache
61738762  mov         dword ptr [edx],ecx
>    *constructorCache = interface
61738764  and         ecx,edi
61738766  mov         ecx,dword ptr [ecx+0FFFF8h]
>    ecx = interface->storeBuffer()
6173876C  test        ecx,ecx
6173876E  je          61CE4365
>    ecx != 0
61738774  test        eax,eax
61738776  jne         61CE4351
>    eax aka old *constructorCache != 0
61CE4351  and         eax,edi
61CE4353  cmp         dword ptr [eax+0FFFF8h],0
>    crash
61CE435A  jne         6173878E


EAX = 00000000 EBX = 0570D000 ECX = 0570D808 EDX = 11D334C8 ESI = 11D3297C
EDI = FFF00000 EIP = 61CE4353 ESP = 003BD080 EBP = 003BD0A0 EFL = 00210246
Flags: needinfo?(kchen)
Tracking for 51 and 52 since this is rated sec-critical. Seems unlikely we will fix this on 51, though.
Steve, given recent comments (15, 16, 17), do you have any further ideas here?
Flags: needinfo?(sphink)
We're heading into the 52 RC build today, so I'm marking this wontfix for 51.
I don't see any crashes on versions later than 52 in the last week.
Ok, Kan-Ru's comment 17 pretty much blew my theory out of the water. eax is *constructorCache. CreateInterfaceObject is not inlined, so its Rooteds should be dead and gone when this code runs.

*constructorCache is getting initialized or corrupted to be something nonzero but small (so that *constructorCache & 0xFFF00000 is zero, i.e. somewhere in 0x00000001 through 0x000fffff).  bz covered that well in comment 6.

Given comment 21, I'd say we ignore this for now, and if it returns, release assert that *constructorCache is sane.
Flags: needinfo?(sphink)
Still no crashes > 52.
We are still seeing some current crashes, not sure if this is actionable.
All 5 recent 54 beta crashes are in mozilla::dom::SVGViewElementBinding::CreateInterfaceObjects, if that means anything.
I tried to look at some more cases with windbg, basically there are two different types of crashes

   944	void
   945	CreateInterfaceObjects(JSContext* cx, JS::Handle<JSObject*> global,
   946	                       JS::Handle<JSObject*> protoProto,
   947	                       const js::Class* protoClass, JS::Heap<JSObject*>* protoCache,
   948	                       JS::Handle<JSObject*> constructorProto,
   949	                       const js::Class* constructorClass,
   950	                       unsigned ctorNargs, const NamedConstructor* namedConstructors,
   951	                       JS::Heap<JSObject*>* constructorCache,
   952	                       const NativeProperties* properties,
   953	                       const NativeProperties* chromeOnlyProperties,
   954	                       const char* name, bool defineOnGlobal,
   955	                       const char* const* unscopableNames,
   956	                       bool isGlobal)
   957	{
   ...
   985	
   986	  JS::Rooted<JSObject*> proto(cx);
   987	  if (protoClass) {
   988	    proto =
   989	      CreateInterfacePrototypeObject(cx, global, protoProto, protoClass,
   990	                                     properties, chromeOnlyProperties,
   991	                                     unscopableNames, isGlobal);
   992	    if (!proto) {
   993	      return;
   994	    }
   995	
-->996	    *protoCache = proto;
   997	  }
   998	  else {
   999	    MOZ_ASSERT(!proto);
  1000	  }
  1001	
  1002	  JSObject* interface;
  1003	  if (constructorClass) {
  1004	    interface = CreateInterfaceObject(cx, global, constructorProto,
  1005	                                      constructorClass, ctorNargs,
  1006	                                      namedConstructors, proto, properties,
  1007	                                      chromeOnlyProperties, name,
  1008	                                      defineOnGlobal);
  1009	    if (!interface) {
  1010	      if (protoCache) {
  1011	        // If we fail we need to make sure to clear the value of protoCache we
  1012	        // set above.
  1013	        *protoCache = nullptr;
  1014	      }
  1015	      return;
  1016	    }
->1017	    *constructorCache = interface;
  1018	  }
  1019	}

The first category contains the crashes with a crashing address ending in 0xffff8. As in bz's analysis in comment 6, this is because we are reading |chunk()->trailer.storeBuffer| on line 996 or line 1017. However, note that it's not always a small number yielding 0 when ANDed with ~ChunkMask; sometimes it's an invalid value yielding an invalid address when ANDed with ~ChunkMask.

Addresses are like: 0xffff8, 0x9ffff8, 0xe5effff8, 0x10ffff8, 0x2fffff8

https://crash-stats.mozilla.com/signature/?address=%24ffff8&signature=mozilla%3A%3Adom%3A%3ACreateInterfaceObjects&date=%3E%3D2017-04-28T06%3A32%3A00.000Z&date=%3C2017-05-05T06%3A32%3A00.000Z&_columns=date&_columns=product&_columns=version&_columns=build_id&_columns=platform&_columns=reason&_columns=address&_columns=install_time&_sort=-date&page=1#reports

The second category contains the crashes with other crashing addresses. The crashing address is the address of |protoCache| or |constructorCache| itself, and we crash when we dereference this invalid address. The addresses are read from parameters on the stack, so either they were already wrong when passed in or the stack was corrupted.

https://crash-stats.mozilla.com/signature/?address=%21%24ffff8&signature=mozilla%3A%3Adom%3A%3ACreateInterfaceObjects&date=%3E%3D2017-04-28T06%3A32%3A00.000Z&date=%3C2017-05-05T06%3A32%3A00.000Z&_columns=date&_columns=product&_columns=version&_columns=build_id&_columns=platform&_columns=reason&_columns=address&_columns=install_time&_sort=-date&page=1
Flags: needinfo?(kchen)
> Addresses are like: 0xffff8, 0x9ffff8, 0xe5effff8, 0x10ffff8, 0x2fffff8

0xffff8: consistent with small number.
0x9ffff8: Not sure; involves at least 0x900000 as starting point, which is 2 bits flipped from 0.
0xe5effff8: Likely started out as 0xe5e5e5e5 which according to firebot is "jemalloc freed junk memory"
0x10ffff8: Could have started off as 0x1000000 which is a memory bitflip from 0.
0x2fffff8: Could have started off as 0x2f00000 which is a bunch of bits flipped from 0...  no idea.

Those 0xe5effff8 crashes are not happy-looking.  :(
Are we not seeing any 55 crashes?
(In reply to Al Billings [:abillings] from comment #29)
> Are we not seeing any 55 crashes?

No 55 crashes. The latest versions with reports are 52.1.1esr, 53.0.2, and 54.0b5.
At this point, we're not going to fix it for 54. I'll leave this open in case it shows up again once 55 goes to beta.
Still no crashes on 55 or 56.
Flags: needinfo?(sphink)
Flags: needinfo?(peterv)
Priority: -- → P3
This doesn't seem actionable. I do see a few crashes on 55 beta, and one on 55 release in the last week.
Status: NEW → RESOLVED
Closed: 7 years ago
Resolution: --- → INCOMPLETE
Group: dom-core-security
Component: DOM → DOM: Core & HTML