The Hacker Webzine reports that Firefox goes into recursive frame creation with code like
+'ing. Making this a P3.
So there are two things going on here.
1) The null thing. I'll see what's up with that, if it's quick.
2) "Fixing" the testcase in full generality requires solving the halting
problem. Consider replacing the JS with something a bit more
sophisticated: a bit of self-reproducing code that makes the subframe
source be a data: URI just slightly different from the data: URI of the
parent page, and loading a subframe with the same self-reproducing code.
Put another way, once you're running JS you can create lots of frames.
Heck, you could make it all simpler and just use createElement()!
JS allowed to insert nodes in the DOM! Horrors!
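To make the "solving the halting problem" point concrete, here is a hypothetical sketch of the self-reproducing data: URI idea. The function name and markup are invented for illustration; the real attack would embed a script that regenerates the next URI at load time rather than eagerly. The key property is that every level's URI differs slightly from its parent's, so a simple ancestor-URI equality check can never fire:

```javascript
// Hypothetical sketch (names invented): each page's data: URI embeds a
// payload that, when run, would build the next level's URI with depth + 1
// and point a freshly created subframe at it.
function nextFrameURI(depth) {
  const html =
    '<body onload="/* regenerate this page with depth=' + (depth + 1) + ' */">' +
    "depth " + depth + "</body>";
  return "data:text/html," + encodeURIComponent(html);
}

// Successive URIs are all distinct, so comparing a frame's URI against its
// ancestors' URIs finds no match at any level.
console.log(nextFrameURI(0) !== nextFrameURI(1)); // true
```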
Does someone want to reply to the full-disclosure list and point out that the recursion protection is designed to protect against accidents, not malice? That is, it's not a security precaution, but rather a workaround for sites with broken 404 pages.
OK, there's nothing special about null. Here's another "exploit":
When I put that in an html file I get 9 nested frames (on the 1.8 branch). Pops up pretty fast. When I use two iframes like that, the CPU bogs down and the machine hangs for a long while, but it does eventually give me two sets of 9 nested frames.
Why 9? I don't understand the recursive frame creation at all, but if we start doing it why stop?
Why is the performance impact so much worse when there's two nested sets?
When I try to load the testcase as a data: uri I don't get any nesting at all. Again, why? (Though this is the correct behavior, so I guess the other ones are the real questions.)
We have two kinds of recursion protection. (We used to have 3, but we removed one because it was breaking some IBM web apps; sordid details available in CVS history if desired.)
The two kinds we have are:
1) If the original URI of a subframe is identical to the URI of any of its
ancestors, where the comparison ignores the ref part of the URI, if any,
then the load is not performed. Note that this check does not affect loads
performed by setting window.location, which is what this bug is about.
2) If the frame tree depth is 10, a load is not performed in the subframe,
period. Again, this only applies to the src attribute. A load can be
performed using window.location. In this testcase, this protection kicks in.
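The two checks above can be sketched as a single predicate. This is a model of the described behavior, not Gecko's actual code; all names here are invented, except `MAX_DEPTH_CONTENT_FRAMES`, which the thread mentions below:

```javascript
const MAX_DEPTH_CONTENT_FRAMES = 10;

// Ignore the ref (#fragment) part when comparing URIs, per protection 1.
function stripRef(uri) {
  return uri.split("#")[0];
}

// Would a src-attribute load be allowed for a frame with this URI and
// these ancestor document URIs (root first, immediate parent last)?
function mayLoadSrcFrame(frameURI, ancestorURIs) {
  // Protection 1: refuse if the URI matches any ancestor's, ref ignored.
  if (ancestorURIs.some(a => stripRef(a) === stripRef(frameURI))) {
    return false;
  }
  // Protection 2: refuse once the tree is 10 documents deep. A frame with
  // 10 ancestors (root + 9 nested subframes) is the one that gets blocked.
  if (ancestorURIs.length >= MAX_DEPTH_CONTENT_FRAMES) {
    return false;
  }
  return true;
}
```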
So in order:
> Why 9?
Because MAX_DEPTH_CONTENT_FRAMES == 10
> I don't understand the recursive frame creation at all
Resolving a relative URI of "" against any base that allows relative URIs gives the base URI, so the subframe's location is being set to the URI of the parent document in this case. The same happens in the other cases from comment 0.
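In modern terms, the WHATWG URL API shows the same resolution behavior (Gecko's internal URI code isn't identical, but the outcome matches):

```javascript
// Resolving the relative URI "" against a hierarchical base yields the
// base itself (minus any fragment), which is why src="" effectively points
// a subframe at its parent's own document.
const base = "http://example.com/page.html";
const resolved = new URL("", base).href;
console.log(resolved); // "http://example.com/page.html"
```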
> Why is the performance impact so much worse when there's two nested sets?
Because we limit the depth to 10 docshells (original document plus 9 nested subframes), but we don't limit the total number of subframes or the tree breadth. In particular, with two subframes we get 1023 total docshells (1
original document, 2 subframes, 4 subframes at the next nesting level, etc down to 512 subframes at the innermost nesting level). Scrolling around in the subframes should show that each subdocument here has two subframes, not one.
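The count is just a geometric series; a quick check of the arithmetic:

```javascript
// Total documents in a full frame tree of branching factor b with d
// levels (the root counts as level 0):
// 1 + b + b^2 + ... + b^(d-1) = (b^d - 1) / (b - 1).
function totalDocshells(breadth, levels) {
  return (breadth ** levels - 1) / (breadth - 1);
}

console.log(totalDocshells(2, 10)); // 1023: 1 root + 2 + 4 + ... + 512
```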
> When I try to load the testcase as a data: uri I don't get any nesting at all.
Resolving a relative URI against a base that doesn't allow relative URIs (such as data:) doesn't work. In this situation, to be precise, it means that we fail to even get a scheme for the new URI, so newURI fails out, aborting the load.
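The same failure is visible through the WHATWG URL API: a data: URI has an opaque path, so there is no hierarchy to resolve a relative reference against, and parsing fails outright (analogous to newURI failing and the load aborting):

```javascript
// Resolving "" against a data: base fails at the parsing stage -- the
// parser can't even derive a scheme for the new URI.
let failed = false;
try {
  new URL("", "data:text/html,<iframe src=''>");
} catch (e) {
  failed = true; // TypeError: Invalid URL
}
console.log(failed); // true
```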
For background reading, see bug 98158 (maybe others too; that one was the latest one).
For a problem similar to this bug that we fixed by a quick hack, see bug 303163.
For future plans, which might in fact mitigate the situation described here if they don't break the web, see bug 305524.
Historically, the reason we have the recursive-URI protection is that some 404 pages have subframes that point right back to the same 404 page as a result of server misconfiguration. Since IE prevents the recursion, the server admins never discover this, so we had to implement something similar.
So in other words the CPU spike is akin to doing
The third way was a global cap on the number of total docshells in a docshell tree (capped at 10,000 or so, iirc). It wouldn't help with the CPU spike for two iframes here, but 3 iframes would give (3^10 - 1)/2 docshells, which is on the order of 29,500.
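Checking that figure with the same geometric-series sum:

```javascript
// Breadth 3, 10 levels: (3^10 - 1) / (3 - 1) total docshells.
console.log((3 ** 10 - 1) / 2); // 29524
```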
> So in other words the CPU spike is akin to doing
Yes, or more likely, to avoid the slow script dialog, sticking all the iframes in a single div (can't possibly take long enough to hit the slow script thing with just 1023 iframes) and then inserting the div. Well, and s/1024/1023/ since the 1023 count includes the original document loaded. ;)
Some web apps were blowing past 10,000 docshells?!
Maybe it was 1000? But yeah. It was a pretty wild kinda thing.