Closed Bug 1507410 Opened 4 years ago Closed 1 year ago

Crash in OOM | large | NS_ABORT_OOM | nsGenericHTMLElement::GetURIAttr


(Core :: DOM: Core & HTML, defect, P3)

63 Branch



Tracking Status
firefox-esr60 --- unaffected
firefox63 + wontfix
firefox64 + wontfix
firefox65 --- wontfix
firefox66 --- wontfix


(Reporter: philipp, Unassigned)


(Keywords: crash, regression, Whiteboard: [MemShrink:P3])

Crash Data


(1 file)

This bug was filed from the Socorro interface and is for crash report bp-98e94876-8217-4fd7-9e85-9265e0181115.

Top 10 frames of crashing thread:

0 xul.dll NS_ABORT_OOM xpcom/base/nsDebugImpl.cpp:628
1 xul.dll nsGenericHTMLElement::GetURIAttr dom/html/nsGenericHTMLElement.cpp:1517
2 xul.dll static bool mozilla::dom::HTMLImageElement_Binding::get_src dom/bindings/HTMLImageElementBinding.cpp:178
3 xul.dll mozilla::dom::binding_detail::GenericGetter<mozilla::dom::binding_detail::NormalThisPolicy, mozilla::dom::binding_detail::ThrowExceptions> dom/bindings/BindingUtils.cpp:3172
4  @0x2eba00f1 
5 mmdevapi.dll _CxxThrowException 
6 xul.dll [thunk]:nsContentList::Length`adjustor{72}'  
7 xul.dll js::num_toString js/src/jsnum.cpp:778
8  @0x1ae9a7 
9  @0x2e9654f1 


This crash signature has been spiking since yesterday on 32-bit Windows installations and on Android. The spike is limited to cs builds, and all the comments point towards actions performed on an online banking service (George).

There were 1000 crash reports processed yesterday, accounting for three quarters of all content crashes from cs builds.
Hello, as a part of the George support team, I must say this issue is very critical for us: some clients can't log in to their internet banking using Firefox. Can you confirm that the issue is directly in the Firefox browser? Is there anything we can tell our clients other than "use another web browser"? According to the graph in the details of this bug, the problem appears to have been detected since September 5th, when Firefox 62 was released.

Can you recommend any changes we can make on our side to prevent these crashes?

Thank you.
Flags: needinfo?(madperson)
The graph in the bug report unfortunately lags two days behind; the real-time graph is at . The issue obviously started getting much worse yesterday, so if you deployed any changes in that timeframe, it may be worth looking at them or reverting them for the moment.

Regarding the technical questions you asked, I'm not the right person to answer, but I'll try to escalate this and hope someone gets back to you quickly.
Flags: needinfo?(madperson)
Is there some way to reproduce this?
Flags: needinfo?(jdvorak)
Thank you for the prompt reply. The easiest way to reproduce this problem is below:
1) Enter the website - you're redirected to URL:
2) Spend some time on this site; you can type something into the Client number / Username field.
3) The crash appears (sometimes it takes just a few seconds, sometimes a few minutes).

I reproduced this on an Android device (Samsung Galaxy S8), but it should be reproducible on 32-bit Windows as well.
Flags: needinfo?(jdvorak)
Any updates? Did you try to reproduce this issue and investigate where the problem could be? Thank you.
Wontfix for 63, as we are unlikely to have another dot release before 64 ships in three weeks; tracking for 64.
Cornel, could your team find the regression range for this bug? Thanks!
Flags: needinfo?(cornel.ionce)
I think it's not a real regression but a signature shift, as I'm also able to reproduce OOM crashes on the login page of this bank with older versions of Firefox.
Crash Signature: [@ OOM | large | NS_ABORT_OOM | nsGenericHTMLElement::GetURIAttr] → [@ OOM | large | NS_ABORT_OOM | nsGenericHTMLElement::GetURIAttr] [@ OOM | large | NS_ABORT_OOM | AppendUTF8toUTF16 | mozilla::dom::HTMLEmbedElementBinding::get_src] [@ OOM | large | NS_ABORT_OOM | AppendUTF8toUTF16 | static bool mozilla::dom::HTMLEmbed…
Whiteboard: [MemShrink]
Attached file memory-report.json.gz
Snapshot from about:memory after opening the site and letting it sit for a bit.
Thanks for the memory report. It has 1.1GB or so of strings. Most of that is images encoded as data URIs.
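To put that 1.1GB of strings in perspective, here is a rough sketch (Python, with made-up sizes) of how an embedded image balloons once it is base64-encoded into a data: URI and then copied into a 2-byte-per-character UTF-16 string, which is roughly what happens when script reads an element's src:

```python
import base64

# Hypothetical size, for illustration only: a 1 MB image embedded as a
# base64 data: URI, then materialized as a UTF-16 string (Gecko's
# string classes use 2-byte code units).
raw_bytes = 1_000_000
encoded = base64.b64encode(b"\x00" * raw_bytes)

base64_chars = len(encoded)     # ~4/3 of the raw size
utf16_bytes = base64_chars * 2  # each ASCII char takes 2 bytes in UTF-16

print(base64_chars)  # 1333336 -> roughly 1.33x the raw bytes
print(utf16_bytes)   # 2666672 -> roughly 2.67x the raw bytes
```

With several large carousel images inlined this way, and a fresh copy potentially allocated on each access, multi-hundred-megabyte string totals add up quickly on a 32-bit process.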
I managed to manually look for a regression range for this issue on the mentioned affected platforms. It is reproducible starting with 63.0a1 (20180707100052), when [@ OOM | large | NS_ABORT_OOM | AppendUTF8toUTF16 | mozilla::dom::HTMLImageElement_Binding::get_src ] occurred (see the related crash report
Flags: needinfo?(cornel.ionce)
FWIW, the changes to AppendUTF8toUTF16() since August should not have changed the memory requirements for the ASCII case, which is the case data: URLs should exercise.
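To illustrate the ASCII case: a data: URL contains only ASCII characters, so the UTF-8 to UTF-16 conversion is an exact doubling of the byte count. A minimal sketch (the URI below is a hypothetical example; the real conversion is Gecko's AppendUTF8toUTF16):

```python
# A data: URL is pure ASCII, so converting its UTF-8 form to UTF-16
# exactly doubles the storage: each 1-byte character becomes one
# 2-byte code unit.
src = "data:image/png;base64,iVBORw0KGgo="  # hypothetical tiny data URI

utf8_len = len(src.encode("utf-8"))
utf16_len = len(src.encode("utf-16-le"))  # explicit endianness, no BOM

assert utf16_len == 2 * utf8_len
print(utf8_len, utf16_len)  # 34 68
```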
Priority: -- → P3
Last Friday we downsized the images used on the login page (the carousel rotating next to the username field); according to the crash graph, the workaround works.
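For a sense of why downsizing helps: a bitmap's raw size scales with width times height, so halving both dimensions cuts the data to roughly a quarter before any encoding, and the base64 and UTF-16 string overheads then multiply a much smaller base. A back-of-the-envelope sketch with hypothetical sizes:

```python
# Hypothetical numbers: the base64/UTF-16 footprint of a data: URI
# scales linearly with the raw image bytes, and raw bytes scale with
# width * height, so halving both dimensions roughly quarters it.
def data_uri_utf16_bytes(raw_image_bytes: int) -> int:
    """Approximate UTF-16 footprint of a base64 data: URI string."""
    base64_chars = -(-raw_image_bytes // 3) * 4  # ceil(n / 3) * 4
    return base64_chars * 2                      # 2 bytes per ASCII char

full = data_uri_utf16_bytes(4_000_000)  # e.g. a 4 MB image file
half = data_uri_utf16_bytes(1_000_000)  # half width x half height

print(full, half)  # 10666672 2666672
assert abs(4 * half - full) < 32  # a quarter, modulo base64 padding
```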
Whiteboard: [MemShrink] → [MemShrink:P3]
Wontfix for 64 at this point; it looks like the big spike was stopped by the site change in comment 14.
Low volume crash, has a priority set. 
Marking fix-optional to remove this from regression triage. 
Happy to still take a patch in nightly.

Low-volume crash; the big spike has stopped since comment 14. Wontfix for 65 this late in the cycle.

Component: DOM → DOM: Core & HTML

Following the reporter's steps, I can confirm that the issue doesn't happen anymore on Windows 10 and 7 on any of the current versions of Firefox: Nightly 87.0a1 (2021-02-16), beta 86.0, and release 85.0.2. No crashes occur on the given site.

Closing this issue as Resolved > Worksforme.
Feel free to reopen or file a new bug if this issue reoccurs.

Closed: 1 year ago
Resolution: --- → WORKSFORME