Crash in [@ java.lang.IndexOutOfBoundsException: at org.mozilla.fenix.historymetadata.HistoryMetadataMiddleware.createHistoryMetadata(HistoryMetadataMiddleware.kt)]
Categories
(Fenix :: History, defect, P1)
Tracking
(firefox128 disabled, firefox129 disabled, firefox130 disabled, firefox131 disabled, firefox132- disabled, firefox133+ fixed)
People
(Reporter: mstange, Assigned: kaya)
References
(Blocks 1 open bug)
Details
(Keywords: crash, regression, Whiteboard: [fxdroid][group1])
Crash Data
Attachments
(2 files, 1 obsolete file)
Crash report: https://crash-stats.mozilla.org/report/index/b6add386-945d-4407-a375-7c2340240711
Top 10 frames:
0 jdk.internal.util.Preconditions outOfBounds Preconditions.java:64
1 jdk.internal.util.Preconditions outOfBoundsCheckIndex Preconditions.java:70
2 jdk.internal.util.Preconditions checkIndex Preconditions.java:266
3 java.util.Objects checkIndex Objects.java:359
4 java.util.ArrayList get ArrayList.java:434
5 org.mozilla.fenix.historymetadata.HistoryMetadataMiddleware createHistoryMetadata HistoryMetadataMiddleware.kt:94
6 org.mozilla.fenix.historymetadata.HistoryMetadataMiddleware invoke HistoryMetadataMiddleware.kt:346
7 mozilla.components.lib.state.internal.ReducerChainBuilder$build$1$1 invoke ReducerChainBuilder.kt:14
8 mozilla.components.feature.media.middleware.LastMediaAccessMiddleware invoke LastMediaAccessMiddleware.kt:22
9 mozilla.components.lib.state.internal.ReducerChainBuilder$build$1$1 invoke ReducerChainBuilder.kt:14
Comment 1•4 months ago
Caused by: java.lang.IndexOutOfBoundsException: Index 2 out of bounds for length 2
This bug has the same crash signature as bug 1760937, but the "Caused by" stack traces look different, e.g https://crash-stats.mozilla.org/report/index/b7747cea-c8c5-467e-8616-5b0010240712
Comment 2•4 months ago
The bug is linked to a topcrash signature, which matches the following criteria:
- Top 10 AArch64 and ARM crashes on nightly
- Top 10 AArch64 and ARM crashes on beta
For more information, please visit BugBot documentation.
It usually happens for me when I click an internal link on Twitter or Reddit.
Does anyone have any hypothesis about this crash?
Comment 4•4 months ago
Krzysztof, I haven't been able to reproduce this on internal Reddit links. Do you have a specific link you can consistently reproduce this on?
Dear Ladies and Gentlepeople,
Cathy Lu [:calu] pointed me to this bug as it appears I'm affected; see, among other crash reports, this one.
I'm able to reliably crash Firefox Fenix on my FP4 and, if need be, am happy to help out by providing info/data.
Reporter
Comment 6•4 months ago
Could you make a tryserver build with extra logging that we could run with our regular Firefox Nightly profiles? I cannot reproduce this on demand but it happens around twice a day for me.
I don't have 100% working STRs.
I don't know if it's useful, but I recorded my browsing session on Twitter -> https://drive.google.com/file/d/1HJxnd5_onIqx19Z_M2kdUgxS7n1aAoe5/view?usp=drive_link . The file is around 60 MB.
Comment 8•4 months ago
I may have a potential code culprit.
There's only one Array accessor within HistoryMetadataMiddleware.
There's an index, previousUrlIndex, that is used to obtain some tab history data. Based on the stack trace in Sentry, it looks like the code could be trying to use previousUrlIndex with an index larger than the size of the tab's history items.
Based on this, potentially either some tab history data got asynchronously updated while fun createHistoryMetadata was running, OR HistoryState.currentIndex was not updated from some previous history Action and has an incorrect value.
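A minimal sketch of the suspected access pattern, assuming previousUrlIndex is derived from HistoryState.currentIndex. The helper name and the defensive bounds check are hypothetical illustrations, not the middleware's actual code:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the access pattern suspected above: an index is
// computed from the tab's current history index and used to look up a
// previous entry. If the list shrank (or currentIndex is stale),
// list.get(previousUrlIndex) throws IndexOutOfBoundsException, e.g.
// "Index 2 out of bounds for length 2".
public class HistoryIndexSketch {
    // Returns the previous URL, or null when the index is out of range,
    // instead of letting the exception propagate.
    static String previousUrlOrNull(List<String> historyItems, int currentIndex) {
        int previousUrlIndex = currentIndex - 1;
        if (previousUrlIndex < 0 || previousUrlIndex >= historyItems.size()) {
            return null; // stale currentIndex or concurrently shrunk list
        }
        return historyItems.get(previousUrlIndex);
    }

    public static void main(String[] args) {
        List<String> items = new ArrayList<>(List.of("a.com", "b.com"));
        // Consistent state: currentIndex 1 -> previous entry is items[0].
        System.out.println(previousUrlOrNull(items, 1)); // prints a.com
        // Stale state: currentIndex 3 on a 2-element list would have crashed;
        // with the check it degrades to null instead.
        System.out.println(previousUrlOrNull(items, 3)); // prints null
    }
}
```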
Comment 9•4 months ago
Comment 10•4 months ago
In order to help with debugging this, I have attached a link to download an Android app bundle with the local changes.
For transparency, I've also made a phabricator patch (attached above) so anyone can see what logging I've added.
Comment 11•4 months ago
(In reply to Noah Bond [:007] from comment #10)
Created attachment 9413394 [details]
Android app bundle
In order to help with debugging this, I have attached a link to download an Android app bundle with the local changes.
For transparency, I've also made a phabricator patch (attached above) so anyone can see what logging I've added.
So you would like me to install the AAB (containing additional facilities for logging) on the device and then try to reproduce the behavior? Am I understanding correctly?
Comment 12•4 months ago
(In reply to Ruben89 from comment #11)
So you would like me to install the AAB (containing additional facilities for logging) on the device and then try to reproduce the behavior? Am I understanding correctly?
If you, or anyone, have the time to do so and have been experiencing this crash, yes, that would be a great help!!
After installing the AAB and experiencing the crash, we just need the logs.
Reporter
Comment 13•4 months ago
I haven't had any luck so far installing the build in a way that keeps my current Firefox data. Do you have any suggestions for how to pull this off? I added a job to the try push to get a signed apk here: https://treeherder.mozilla.org/jobs?repo=try&revision=a326d02269ee38242267ea721e5fda3a18bd673b&selectedTaskRun=X4ehmy7XSyKn2yUAw3C1iw.0
But here's what I get when I install it:
% adb install /Users/mstange/Downloads/target.arm64-v8a.apk
Performing Streamed Install
adb: failed to install /Users/mstange/Downloads/target.arm64-v8a.apk: Failure [INSTALL_FAILED_UPDATE_INCOMPATIBLE: Existing package org.mozilla.fenix signatures do not match newer version; ignoring!]
I don't really want to uninstall the old build in case some data in my Firefox profile is needed to reproduce the problem.
Could we land the logging on Nightly?
Comment 14•4 months ago
I don't see an issue with that, but let me check with the team to make sure I'm not missing anything. I'll get back to this thread next week.
Assignee
Comment 15•4 months ago
(chiming in as Bas had brought this crash to my notice recently)
I also agree with what Noah stated in comment 8: potentially some error regarding these values in the state update JSON while navigating and receiving DOMTitleChanged, which would trigger a session state update.
Markus, if you can consistently reproduce the issue, in addition to Noah's logs, you can also try enabling MOZ_LOG modules related to session history on the current production app to debug the native layer; we can then inspect the session history size and the current index at the time of the navigation and possibly at the crash itself. To do that:
1- Navigate to about:config.
2- Search for the logging.config.modules pref and input nsSHistory:5 there.
3- Stay in the same session (restarting the app will reset this config).
4- Run adb shell logcat -c in your terminal to clear the logcat before reproducing the bug.
5- Reproduce the bug. (If you cannot consistently reproduce the issue, maybe you can leave the app open in the background and try to catch the error from time to time. Closing the app will reset the pref, and we'd lose the logging in that case.)
6- Run adb shell logcat | grep "nsSHistory" in your terminal. The output will contain lines similar to [Child 4810: Main Thread]: D/nsSHistory EvictOutOfRangeWindowDocumentViewers(index=1), Length()=2. Safe range [0, 1]. You can report back whether the index is in the safe range or not.
If our suspicion is correct, the index and the safe range values should tell us whether there's a bug in the state update JSON we are receiving from the native layer.
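To make the safe-range check mechanical rather than eyeballed, here is a hypothetical helper that parses the log line format shown above. It assumes all EvictOutOfRangeWindowDocumentViewers lines follow that exact format, which is only inferred from the single example:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Parse an EvictOutOfRangeWindowDocumentViewers line from the nsSHistory
// MOZ_LOG output and report whether the logged index lies inside the logged
// safe range. The line format is assumed from the example in this thread.
public class SHistoryLogCheck {
    static final Pattern LINE = Pattern.compile(
        "EvictOutOfRangeWindowDocumentViewers\\(index=(\\d+)\\), "
        + "Length\\(\\)=(\\d+)\\. Safe range \\[(\\d+), (\\d+)\\]");

    // Returns true when the index is within the safe range, false when it is
    // not; throws if the line does not match the expected format.
    static boolean indexInSafeRange(String logLine) {
        Matcher m = LINE.matcher(logLine);
        if (!m.find()) {
            throw new IllegalArgumentException("unexpected log format: " + logLine);
        }
        int index = Integer.parseInt(m.group(1));
        int lo = Integer.parseInt(m.group(3));
        int hi = Integer.parseInt(m.group(4));
        return index >= lo && index <= hi;
    }

    public static void main(String[] args) {
        String sample = "D/nsSHistory EvictOutOfRangeWindowDocumentViewers"
            + "(index=1), Length()=2. Safe range [0, 1]";
        System.out.println(indexInSafeRange(sample)); // prints true
    }
}
```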
Comment 16•3 months ago
(In reply to Noah Bond [:007] from comment #8)
I may have a potential code culprit.
There's only one Array accessor within HistoryMetadataMiddleware.
There's an index, previousUrlIndex, that is used to obtain some tab history data. Based on the stack trace in Sentry, it looks like the code could be trying to use previousUrlIndex with an index larger than the size of the tab's history items.
Based on this, potentially either some tab history data got asynchronously updated while fun createHistoryMetadata was running, OR HistoryState.currentIndex was not updated from some previous history Action and has an incorrect value.
Fwiw, this bug is essentially making the browser unusable for me. It crashes frequently enough, often in the middle of filling in forms, for example, so there's a fair amount of data loss here. I'm currently traveling but will try to get the logs regardless.
Fwiw, if this data is accessed asynchronously, how come it's not protected by a Mutex? I don't see any obvious access synchronization here.
Comment 17•3 months ago
I have a logcat of the bug reproducing. nsSHistory does not obviously appear in it. I need to board now but will look more later.
Comment 18•3 months ago
Looks like the logging parameter inadvertently got reset. You called it:
07-27 08:03:40.686 7458 7517 I Gecko : [Parent 7458: Main Thread]: D/nsSHistory EvictOutOfRangeWindowDocumentViewers(index=16), Length()=17. Safe range [13, 16]
07-27 08:03:40.873 7458 7458 I GeckoSession: handleMessage GeckoView:PageStop uri=null
Comment 19•3 months ago
There's more:
07-27 08:03:57.512 9813 9859 I Gecko : [Child 9813: Main Thread]: D/nsSHistory nsHistory::Go(-1)
07-27 08:03:57.520 7458 7517 I Gecko : [Parent 7458: Main Thread]: D/nsSHistory LoadEntry(16, 0x4, 0)
07-27 08:03:57.528 7458 7517 I Gecko : [Parent 7458: Main Thread]: D/nsSHistory EvictOutOfRangeWindowDocumentViewers(index=16), Length()=18. Safe range [13, 17]
There's a bunch more of this. The last one, appearing right before the crash, appears to be in range:
07-27 08:09:55.566 7458 7517 I Gecko : [Parent 7458: Main Thread]: D/nsSHistory EvictOutOfRangeWindowDocumentViewers(index=19), Length()=20. Safe range [16, 19]
Reporter
Comment 20•3 months ago
Have we found out which patch caused this to happen so frequently? Can we back it out for now?
Comment 21•3 months ago
We have unfortunately not been able to reproduce this locally yet, so we have no candidate for the code/patch culprit yet.
That being said, we are actively working on getting a signed build to Bas so they can reproduce the bug locally and send us some logs so we can continue the investigation.
Reporter
Comment 22•3 months ago
These steps to reproduce work for me at the moment:
- Go to https://www.reddit.com/r/askTO/comments/1e9qagy/grammar_or_spelling_mistakes_around_toronto_area/?chainedPosts=t3_13vsv4h
- In the banner at the bottom, confirm that you want to see the page in Firefox.
- Scroll down.
- Tap "View more comments"
- Scroll down to the comment starting with "There was this one".
- Tap the reddit URL link in that comment.
100% reproducible crash.
% adb shell logcat | grep "nsSHistory"
08-11 16:53:12.380 29378 29424 I Gecko : [Parent 29378: Main Thread]: D/nsSHistory EvictOutOfRangeWindowDocumentViewers(index=2), Length()=3. Safe range [0, 2]
08-11 16:53:23.348 29378 29424 I Gecko : [Parent 29378: Main Thread]: D/nsSHistory EvictOutOfRangeWindowDocumentViewers(index=3), Length()=4. Safe range [0, 3]
Reporter
Comment 23•3 months ago
Actually, that might be a different crash.
Assignee
Comment 24•3 months ago
Markus, your and Bas' nsSHistory logs seem correct, so the native layer appears to be working fine. We've provided Bas an APK, and it looks like we are getting logs from a JS module (GeckoViewSessionStore) that we should not be getting. GeckoViewSessionStore should not be in use in the current production code; it's used by the SHIP code I've implemented recently.
I suspect two changes at the moment:
1- The SHIP refactors as a whole. Ideally, the SHIP code should never be triggered atm, which raises the question of whether we are somehow leaking state that is not gated off by the prefs. (Bug 1677190)
2- There was a pref refactor for the SHIP-related prefs. Maybe that caused the situation I am suspecting in the first point. (Bug 1883278)
I suspect it is a combination of two patches, where we first refactored the prefs and then refactored the SHIP-on-Android code in the codebase.
I'll dig into these two tickets to identify the culprit and link the regression afterwards.
Comment 25•3 months ago
For me, this looks related to SHIP having been enabled in my build (due to information I was providing related to bug 1896663). So this appears to have been my own fault. After resetting the pref, the crash appears to be resolved. I'll remark here if it happens again, but for now I suggest the severity is likely lower (resetting for triage), unless we're seeing large volumes elsewhere?
Reporter
Comment 26•3 months ago
My build also had the SHIP pref set (fission.disableSessionHistoryInParent was false). I've set it back to true now.
Assignee
Comment 27•3 months ago
Thanks a lot for the inputs you've provided. It looks like the crashes are happening due to the SHIP pref being enabled. Thanks for catching this regression related to SHIP. I'll link the SHIP ticket as a regression for now.
Though, I am still curious why we face this crash out in the wild, and whether the others who experience this issue have any local changes to the SHIP/session store related prefs (fission.disableSessionHistoryInParent, browser.sessionstore.disable_platform_collection), or whether any of the issues I mentioned in comment #24 apply.
Comment 28•2 months ago
(In reply to Kayacan Kaya [:kaya] from comment #27)
Thanks a lot for the inputs you've provided. It looks like the crashes are happening due to the SHIP pref being enabled. Thanks for catching this regression related to SHIP. I'll link the SHIP ticket as a regression for now.
Though, I am still curious why we face this crash out in the wild, and whether the others who experience this issue have any local changes to the SHIP/session store related prefs (fission.disableSessionHistoryInParent, browser.sessionstore.disable_platform_collection), or whether any of the issues I mentioned in comment #24 apply.
For me (3 devices tested), reproducing the issue entails scrolling through a long list of items (like Google Images, or a long list of products in the webstore of my local pharmacy; the latter always triggers a crash).
Fission being enabled, via prefs or otherwise (on all 3 devices: Fairphone 4, Firefox Beta / Pixel 6a, Firefox Nightly / Fairphone 5, Firefox Beta), seems to be the common denominator. With these prefs set:
fission.autostart | true
dom.ipc.process.Prelaunch.enabled | true
is when I experience the crashes.
Conversely, when (about:support shows) Fission is disabled, I do not experience crashes when scrolling long lists of items.
Regarding changes to the SHIP/session store related prefs, for me:
I did not change these two prefs in about:config by hand.
For all three devices/browsers:
fission.disableSessionHistoryInParent is set to true
browser.sessionstore.disable_platform_collection is set to false
FWIW: I always use Delete browsing data on quit (turned 'on' in settings, with all options for deletion enabled); I treat Fenix like a desktop browser, and when I'm done, I tap quit.
So in that sense I never keep browser history, cache, etc., and always start with a 'clean' session.
(P.S.
My apologies for the delay in responding to you all, I had hoped to have been of more assistance.
I've had 'a lot going on' IRL, lately.
I'll try and answer any further questions or comments in a more timely fashion.)
Assignee
Comment 29•2 months ago
Hi :Ruben89,
Thank you very much for the detailed reply.
This crash happens due to a bug in SessionHistoryInParent (SHIP).
In your case, even though you have fission.disableSessionHistoryInParent set to true, the fission.autostart pref is overriding the fission.disableSessionHistoryInParent pref and auto-enabling SHIP. That's why you're experiencing the crash as well.
Currently, I am also able to reproduce this crash locally and I am working on fixing that. Thanks again for all your inputs, I'll be updating the ticket according to the state of the fix.
Comment 30•2 months ago
(In reply to Markus Stange [:mstange] from comment #22)
These steps to reproduce work for me at the moment:
- Go to https://www.reddit.com/r/askTO/comments/1e9qagy/grammar_or_spelling_mistakes_around_toronto_area/?chainedPosts=t3_13vsv4h
- In the banner at the bottom, confirm that you want to see the page in Firefox.
- Scroll down.
- Tap "View more comments"
- Scroll down to the comment starting with "There was this one".
- Tap the reddit URL link in that comment.
100% reproducible crash.
% adb shell logcat | grep "nsSHistory"
08-11 16:53:12.380 29378 29424 I Gecko : [Parent 29378: Main Thread]: D/nsSHistory EvictOutOfRangeWindowDocumentViewers(index=2), Length()=3. Safe range [0, 2]
08-11 16:53:23.348 29378 29424 I Gecko : [Parent 29378: Main Thread]: D/nsSHistory EvictOutOfRangeWindowDocumentViewers(index=3), Length()=4. Safe range [0, 3]
I tried these STR, and this is a different crash for me. Bas and Markus, do you have reliable STR for this crash?
Also, Bas, do you happen to have the full logcat (you posted some excerpts above, but I'd love to see the whole thing)?
Reporter
Comment 31•2 months ago
I don't have reliable STR for this crash but I think Kaya does?
Comment 32•2 months ago
(In reply to Chris Peterson [:cpeterson] from comment #1)
Caused by: java.lang.IndexOutOfBoundsException: Index 2 out of bounds for length 2
This bug has the same crash signature as bug 1760937, but the "Caused by" stack traces look different, e.g https://crash-stats.mozilla.org/report/index/b7747cea-c8c5-467e-8616-5b0010240712
I think this is unrelated. The one in bug 1760937 crashes because of the tab index, not the tab history state.
Comment 33•2 months ago
(Removed the dependencies until we can have STR and debug this.)
(In reply to Ruben89 from comment #5)
Dear Ladies and Gentlepeople,
Cathy Lu [:calu] pointed me to this bug as it appears I'm affected; see, among other crash reports, this one.
I'm able to reliably crash Firefox Fenix on my FP4 and, if need be, am happy to help out by providing info/data.
Can you let us know what the steps to reproduce are? Also, how do you know it's the same crash? And finally: did you happen to flip any prefs in about:config (I presume this happens in Nightly for you)? Thanks!
Assignee
Comment 34•2 months ago
Markus, I do not have any obvious STR for this, unfortunately. The SHIP pref needs to be enabled; fast back/forward navigations between Google image search results reproduced it in my case in less than 5 minutes.
Technically speaking:
This crash happens when an incremental session history update is forwarded from the backend instead of a full history update, but the frontend does not handle incremental updates and behaves as if they were full history updates.
Full history update -> when you navigate between pages, each time we update the history of the tab, we reconstruct the full history list from scratch rather than editing the existing history items list.
Incremental history update -> when navigating between pages, we sometimes edit the existing history list for the tab by splicing and concatenating instead. (The non-SHIP code, SessionStateAggregator, does not use this; it collects the history entries from scratch every time.)
On Android, even though the SHIP code introduced incremental history updates, GeckoSession.SessionState does not have a mechanism to consume them as incremental updates; it acts as if they were full updates by reconstructing the history list (aka the historychange bundle) from scratch. As a result, the history entries for that tab/session become smaller than expected, and once we try to retrieve an item from that list with a larger index, we get this IndexOutOfBounds exception.
ni'ng Peter, in case there's any consistent STR for receiving the incremental/partial history update.
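The mismatch described in this comment can be illustrated with a small, self-contained sketch. This is not Gecko code; the splice semantics, method names, and payload shape are assumptions for illustration only:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the described bug: the backend sends an incremental
// update meant to be spliced into the existing history list, but the frontend
// treats the partial payload as the complete list. The resulting list is
// shorter than the real session history, so a later lookup using the real
// current index goes out of bounds.
public class HistoryUpdateSketch {
    // Correct handling (assumed semantics): splice the partial entries into
    // the existing list starting at fromIndex.
    static List<String> applyIncremental(List<String> existing, int fromIndex,
                                         List<String> partial) {
        List<String> merged = new ArrayList<>(existing.subList(0, fromIndex));
        merged.addAll(partial);
        return merged;
    }

    // Buggy handling: rebuild the whole list from the partial payload alone,
    // as if it were a full history update.
    static List<String> applyAsFull(List<String> partial) {
        return new ArrayList<>(partial);
    }

    public static void main(String[] args) {
        List<String> history = List.of("a.com", "b.com", "c.com");
        List<String> partial = List.of("c2.com"); // incremental: replaces entry 2
        int currentIndex = 2;

        List<String> good = applyIncremental(history, 2, partial);
        List<String> bad = applyAsFull(partial);
        System.out.println(good.get(currentIndex)); // prints c2.com
        // bad has size 1, so bad.get(currentIndex) would throw
        // IndexOutOfBoundsException: Index 2 out of bounds for length 1.
        System.out.println(bad.size()); // prints 1
    }
}
```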
Comment 35•2 months ago
(In reply to [:owlish] 🦉 PST from comment #33)
(Removed the dependencies until we can have STR and debug this.)
(In reply to Ruben89 from comment #5)
Dear Ladies and Gentlepeople,
Cathy Lu [:calu] pointed me to this bug as it appears I'm affected; see, among other crash reports, this one.
I'm able to reliably crash Firefox Fenix on my FP4 and, if need be, am happy to help out by providing info/data.
Can you let us know what the steps to reproduce are? Also, how do you know it's the same crash? And finally: did you happen to flip any prefs in about:config (I presume this happens in Nightly for you)? Thanks!
Dear :Owlish,
Adding to reply #34 from :kaya, and as partially described in comment #28, it is the following that consistently crashes the browser for me:
Steps To Reproduce:
- Install Fenix Beta or Nightly (both do the 'trick' for me).
- Go to about:config and set the following prefs to true (to enable Fission):
fission.autostart | true
dom.ipc.process.Prelaunch.enabled | true
(As kindly explained by :kaya - I'm no software engineer - enabling Fission in the browser force-enables SHIP, which is the process or mechanism that, due to an OOB, eventually crashes the browser.)
- Close the browser on your Android device of choice to let the changes take effect. (For me this entails: closing the browser > Force Stop Firefox in the Android Settings > Apps menu > Clear cache, and then not touching the Firefox app for about 30 seconds.)
- Verify that Fission is enabled and active: in about:support under Application Basics, if the Fission Windows line contains Enabled by user, then Fission should be enabled.
- Use said browser (with the modifications mentioned above, Fission being enabled and activated) to browse the web store of the Dutch pharmacy 'Etos':
- Go to: Etos.nl | Bodycare Products | View our range of products (in Dutch: Verzorgingsproducten kopen? Bekijk ons aanbod | Etos).
- Scroll the list of products and continue loading more items as you go.
- Scroll to the bottom of the list of products/items listed on the webpage; you will find a black 'Load More' button (labeled 'Meer laden') at the bottom right of the list. Tap/click it; more items will appear.
- After the next batch of products/items has appeared, scroll to the end of the list and, again, tap/click 'Load More' ('Meer laden'). Repeat.
- Result: after some time of scrolling and loading more items/products (usually within a minute or two), Fenix should crash.
(Note:
Not sure as to why, but the webpage and actions mentioned above allow me to reproduce the crash, within minutes, 90% of the time.
FWIW: my guess is that the design of this site, which allows one to load a large number of items within a single page, plus the fact that the items listed are fairly similar every time, contribute to the consistency in reproducing the crash.)
Assignee
Comment 37•2 months ago
Collecting the full history updates from the parent process until the partial history collection is introduced for SHIP on Android.
Comment 38•2 months ago
We don't need to uplift this fix to Beta because this crash only affects SHIP and SHIP is disabled by default.
Comment 39•2 months ago
Based on the topcrash criteria, the crash signatures linked to this bug are not in the topcrash signatures anymore.
For more information, please visit BugBot documentation.
Comment 40•2 months ago
Adding tracking+ for 132 since this blocks the SHIP experiment rollout.
Comment 41•2 months ago
Ruben, are you still able to reproduce this crash in a recent Fenix Nightly version using your steps to reproduce in comment 35? I'm trying, but haven't crashed yet.
Kaya has a possible fix for this crash, but we want to make sure the crash is still reproducible so we can verify that his fix works.
Comment 42•1 month ago
I was not able to reproduce this crash using the STR from here.
Tested on Fenix Nightly 132.0a1 from 9/18, with SHIP enabled and all the prefs modified as per the Comment 35 above.
Tested on the following devices:
- Samsung Galaxy S24 (Android 14),
- Samsung Galaxy Note 8 (Android 9),
- Realme GT Master Edition (Android 13),
- Samsung Galaxy A14 (Android 13), and
- OnePlus 5 (Android 10).
Comment 43•1 month ago
(In reply to Chris Peterson [:cpeterson] from comment #41)
Ruben, are you still able to reproduce this crash in a recent Fenix Nightly version using your steps to reproduce in comment 35? I'm trying, but haven't crashed yet.
Kaya has a possible fix for this crash, but we want to make sure the crash is still reproducible so we can verify that his fix works.
Chris,
I'll try again and report back (I will try to do so by the end of this week).
Comment 44•1 month ago
(In reply to Ruben89 from comment #43)
(In reply to Chris Peterson [:cpeterson] from comment #41)
Ruben, are you still able to reproduce this crash in a recent Fenix Nightly version using your steps to reproduce in comment 35? I'm trying, but haven't crashed yet.
Kaya has a possible fix for this crash, but we want to make sure the crash is still reproducible so we can verify that his fix works.
Chris,
I'll try again and report back ( will try and report back by the end of this week).
Chris, et al.
I've tried this on all devices, again and again, and again.
(my thumbs are sore, and I now have detailed knowledge of the product range of Etos...)
However, I'm unable to reproduce the issue at this time.
--
If there's anything more I can do, to be of assistance, please do let me know.
With kind regards,
Ruben
Comment 45•25 days ago
Hi Kaya, what's the status of this for 132 as we're nearing the end of early Beta. Does this still impact the planned SHIP experiment?
Assignee
Comment 46•25 days ago
We are waiting for the confirmation of the crash on Nightly by the SHIP-enabled Nightly users out in the wild in order to ship the fix (patch attached to this bug) and let it prove itself. Due to the rarity of the crash, we are still waiting for crash reports for confirmation on v133 Nightly. Once we see reports coming consistently, we'll ship the fix and observe the existence of the crash again. So I'd say, we'd probably not proceed for v132 at least until Beta 8 (Oct 16). I am hoping we observe some reports within this week and ship the fix to v133 early next week and possibly uplift it in v132 Beta 8/9.
Chris please correct me if my assumption about our plan is wrong.
Comment 47•20 days ago
Assignee
Comment 48•20 days ago
Adding leave-open as the fix patch needs confirmation out in the wild. Once it proves itself, I'll close the ticket manually.
Comment 49•20 days ago
bugherder
Comment 50•18 days ago
Per discussion with Kaya today, we're not planning to run SHIP experiments on 132 at this point.
Assignee
Comment 51•13 days ago
I can confirm that we no longer see the crash happening starting from the build id: 20241015093200.