Bug 1854179 Comment 0

A high-volume main process crash starting with the 116 release, mostly from Ubuntu users. This main process crash is the consequence of a crash in a child process whose annotations are unreasonably sized.

Example crash: [here](https://crash-stats.mozilla.org/report/index/1b4b2277-b3a2-47f2-b250-c4b2d0230920)

Shared call stack for almost all crashes we received:

```
 0 firefox!MOZ_Crash(char const*, int, char const*) @ /build/firefox/parts/firefox/build/mfbt/Assertions.h:281
 1 firefox!mozalloc_abort @ /build/firefox/parts/firefox/build/memory/mozalloc/mozalloc_abort.cpp:35
 2 firefox!mozalloc_handle_oom(unsigned long) @ /build/firefox/parts/firefox/build/memory/mozalloc/mozalloc_oom.cpp:51
 3 libxul!mozglue_static::oom_hook::hook @ /build/firefox/parts/firefox/build/mozglue/static/rust/lib.rs:137
 4 libxul!rust_oom @ library/std/src/alloc.rs:355
 5 libxul!__rg_oom @ library/alloc/src/alloc.rs:423
 6 libxul!__rust_alloc_error_handler
 7 libxul!alloc::alloc::handle_alloc_error::rt_error @ library/alloc/src/alloc.rs:389
 8 libxul!alloc::alloc::handle_alloc_error @ library/alloc/src/alloc.rs:393
 9 libxul!thin_vec::header_with_capacity @ /build/firefox/parts/firefox/build/third_party/rust/thin-vec/src/lib.rs:414
 a libxul!thin_vec::ThinVec<T>::with_capacity @ /build/firefox/parts/firefox/build/third_party/rust/thin-vec/src/lib.rs:557
 b libxul!mozannotation_server::retrieve_annotations @ /build/firefox/parts/firefox/build/toolkit/crashreporter/mozannotation_server/src/lib.rs:80
 c libxul!mozannotation_retrieve @ /build/firefox/parts/firefox/build/toolkit/crashreporter/mozannotation_server/src/lib.rs:44
 d libxul!CrashReporter::OnChildProcessDumpRequested(void*, google_breakpad::ClientInfo const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) @ /build/firefox/parts/firefox/build/toolkit/crashreporter/nsExceptionHandler.cpp:3253
 e libxul!google_breakpad::CrashGenerationServer::ClientEvent(short) @ /build/firefox/parts/firefox/build/toolkit/crashreporter/breakpad-client/linux/crash_generation/crash_generation_server.cc:322
 f libxul!google_breakpad::CrashGenerationServer::Run() @ /build/firefox/parts/firefox/build/toolkit/crashreporter/breakpad-client/linux/crash_generation/crash_generation_server.cc:189
10 libxul!google_breakpad::CrashGenerationServer::ThreadMain(void*) @ /build/firefox/parts/firefox/build/toolkit/crashreporter/breakpad-client/linux/crash_generation/crash_generation_server.cc:379
```

Shared error for almost all crashes we received:

```
out of memory: 0x0000000773594008 bytes requested
```

We are trying to allocate around 29.8 GiB of memory when we retrieve annotations for the child process crash (`0x0000000773594008` is 32,000,000,008 bytes, i.e. exactly 32 GB plus 8 bytes). We should put an arbitrary limit on [the `length` of the annotations `ThinVec`](https://searchfox.org/mozilla-central/source/toolkit/crashreporter/mozannotation_server/src/lib.rs#80) and investigate what explains this recurrent allocation size.
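
To illustrate the kind of guard being proposed, here is a minimal Rust sketch, not the actual `mozannotation_server` code: the length read out of the (possibly corrupted) child process is validated against an arbitrary cap before `ThinVec::with_capacity` is called. `MAX_ANNOTATION_COUNT`, `AnnotationEntry` and `allocate_annotations` are hypothetical names, and the sketch assumes a 64-bit target and the `thin_vec` crate; the real check would go at the linked `lib.rs` location.

```
// Sketch only: reject implausible annotation-vector lengths instead of
// letting the allocation OOM-abort the main process.
use thin_vec::ThinVec;

/// Arbitrary upper bound on how many annotation entries a child process can
/// plausibly report; anything larger is treated as corrupted data.
const MAX_ANNOTATION_COUNT: usize = 1024;

/// Placeholder for whatever the annotation vector actually stores.
struct AnnotationEntry {
    _key: u32,
    _value: Vec<u8>,
}

/// Validate the length reported by the crashed child before reserving
/// memory for it.
fn allocate_annotations(reported_len: usize) -> Result<ThinVec<AnnotationEntry>, &'static str> {
    if reported_len > MAX_ANNOTATION_COUNT {
        return Err("annotation vector length read from the child is implausibly large");
    }
    Ok(ThinVec::with_capacity(reported_len))
}

fn main() {
    // A plausible count succeeds.
    assert!(allocate_annotations(32).is_ok());
    // A bogus length like the one seen in these reports is rejected up front
    // rather than triggering an OOM abort in the parent.
    assert!(allocate_annotations(0x0000_0007_7359_4008).is_err());
}
```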

I wonder if this crash could be related to bug 1685642, which also started spiking in 116?
