Closed Bug 1536596 Opened 5 years ago Closed 11 months ago

Crash in [@ mozilla::dom::quota::`anonymous namespace'::PrincipalVerifier::Run]

Categories

(Core :: Storage: Quota Manager, defect, P2)

67 Branch
defect

Tracking


RESOLVED WORKSFORME
Tracking Status
firefox-esr60 --- unaffected
firefox-esr68 --- wontfix
firefox-esr78 --- wontfix
firefox66 --- unaffected
firefox67 - wontfix
firefox68 --- wontfix
firefox69 --- wontfix
firefox70 --- wontfix
firefox79 --- wontfix
firefox80 --- wontfix
firefox81 --- wontfix
firefox82 --- wontfix
firefox83 --- wontfix

People

(Reporter: philipp, Unassigned)

References

(Depends on 1 open bug, Blocks 1 open bug)

Details

(Keywords: crash, regression)

Crash Data

Attachments

(5 files)

This bug is for crash report bp-7e612ee9-a4aa-48ac-983b-126c80190319.

Top 10 frames of crashing thread:

0 xul.dll nsresult mozilla::dom::quota::`anonymous namespace'::PrincipalVerifier::Run dom/quota/ActorsParent.cpp:7892
1 xul.dll nsThread::ProcessNextEvent xpcom/threads/nsThread.cpp:1179
2 xul.dll NS_ProcessNextEvent xpcom/threads/nsThreadUtils.cpp:482
3 xul.dll void mozilla::ipc::MessagePump::Run ipc/glue/MessagePump.cpp:110
4 xul.dll MessageLoop::RunHandler ipc/chromium/src/base/message_loop.cc:308
5 xul.dll MessageLoop::Run ipc/chromium/src/base/message_loop.cc:290
6 xul.dll nsBaseAppShell::Run widget/nsBaseAppShell.cpp:137
7 xul.dll nsAppShell::Run widget/windows/nsAppShell.cpp:411
8 xul.dll nsresult nsAppStartup::Run toolkit/components/startup/nsAppStartup.cpp:271
9 xul.dll nsresult XREMain::XRE_mainRun toolkit/xre/nsAppRunner.cpp:4589

Crash reports with this signature and MOZ_RELEASE_ASSERT(IsPrincipalInfoValid(principalInfo)) are newly appearing in Firefox 67 after the patches from bug 1526891 landed.

Version: 64 Branch → 67 Branch
Crash Signature: [@ mozilla::dom::quota::`anonymous namespace'::PrincipalVerifier::Run] → [@ mozilla::dom::quota::`anonymous namespace'::PrincipalVerifier::Run] [@ mozilla::dom::quota::(anonymous namespace)::PrincipalVerifier::Run]

There are several other dom::quota manager crash signatures appearing right now in Nightly 68.

Blocks: 1517090

Jan, is this something you can look into?

Flags: needinfo?(jvarga)

FWIW, the plan ATM is to hold the new localStorage (bug 1517090) to EARLY_BETA_OR_EARLIER (bug 1534736), so this won't affect users of the 67 release. That being said, we want this to be fixed ASAP, and this is in the list of issues surfaced by bug 1517090 (Jan's working on bug 1535221 right now).

Hm, we need to know which origins don't match. Currently we report that info to the system console and also to the browser console:
https://searchfox.org/mozilla-central/rev/2c912888e3b7ae4baf161d98d7a01434f31830aa/dom/quota/ActorsParent.cpp#8159
Not sure if we can report origin strings in crash stats.

Andrew, what do you think?

Flags: needinfo?(jvarga) → needinfo?(bugmail)

Jan just told me this morning that this is a diagnostic assertion (https://searchfox.org/mozilla-central/rev/2c912888e3b7ae4baf161d98d7a01434f31830aa/dom/quota/ActorsParent.cpp#8191 ATM), so while it's an issue that we have mismatching suffixes, this won't affect release users even when we do ship the new localStorage. As such, I'm marking 67 and 68 as unaffected, but more precisely it's the 67 and 68 release builds that are unaffected.

Priority: -- → P2

I took a quick look at the minidumps to see if we happened to have any relevant info in the data captured by the minidump, but it did not appear so. (I think if the printf string's autostring allocation were larger, it might have.)

In terms of reporting the info in the crash, yes we can, and I think it's probably appropriate, noting that:

  • We need to use MOZ_CRASH_UNSAFE or MOZ_CRASH_UNSAFE_PRINTF or whatever at https://searchfox.org/mozilla-central/rev/a7315d78417179b151fef6108f2bce14786ba64d/mfbt/Assertions.h#295
  • We need to get data steward review. They may be able to help us make sure that any information exposed requires special crash-stats privileges. I'm very confused about the difference between the sanitized and unsanitized crash reasons.
  • I think it makes sense for us to try and sanitize the origins even if crash-stats can help us keep the origins private. We don't really care about specific origins, we just care about the protocol scheme in use and perhaps differences in base domain calculation.
  • I think it would make sense for the crash to be explicit about whether it's originNoSuffix or the baseDomain that mismatched, which does imply having the check function take an argument about whether we should crash/assert on mismatch.

I'd propose an algorithm where we:

  • Retain the characters through the first colon intact and normalize the characters after that.
  • Flatten all alphabetic characters to 'a'.
  • Flatten all numeric characters to 'D'.

This lets us see:

  • What the protocol scheme was.
  • If the base domain calculation went wrong. While it would be interesting to know what the actual public suffix was in play, I think just knowing there's an inconsistency would potentially be enough to go on.

A JS mockup of this and usages would look like:

function sanitizeOrigin(o) {
  const idxColon = o.indexOf(':');
  const specPrefix = o.slice(0, idxColon + 1);
  const suffix = o.slice(idxColon + 1);
  const normSuffix = suffix.replace(/[A-Za-z]/gu, 'a').replace(/[0-9]/g, 'D');
  return specPrefix + normSuffix;
}
$ undefined
sanitizeOrigin('http://foo.bar.com')
$ "http://aaa.aaa.aaa"
sanitizeOrigin('file://UNIVERSAL_FILE_ORIGIN')
$ "file://aaaaaaaaa_aaaa_aaaaaa"
sanitizeOrigin('ftp://foo.bar:123')
$ "ftp://aaa.aaa:DDD"
sanitizeOrigin('about:home')
$ "about:aaaa"
sanitizeOrigin('https://www.google.com')
$ "https://aaa.aaaaaa.aaa"
sanitizeOrigin('https://www.google.co.au')
$ "https://aaa.aaaaaa.aa.aa"

Flags: needinfo?(bugmail)

That looks really great. I like your proposal. Thanks!

Assignee: nobody → jvarga
Status: NEW → ASSIGNED

The crashes just appeared on 67 beta 5. The pushlog for this build is:
https://hg.mozilla.org/releases/mozilla-beta/pushloghtml?fromchange=77536919b121&tochange=a89261364728

So it's probably a consequence of the patch for bug 1535995.

No longer blocks: 1517090
Blocks: 1540402

(In reply to Jan Varga [:janv] from comment #7)
> That looks really great. I like your proposal. Thanks!

Are you going to use this bug for that work, Jan, or should we file another one to track it?

Flags: needinfo?(jvarga)

Yeah, I think we can use this bug.

Flags: needinfo?(jvarga)

Jan, this is tracking 67. Could you please provide an update here and an estimate of when the patch would be ready for beta uplift?

Flags: needinfo?(jvarga)

I don't understand how a diagnostic assertion can be hit on beta.

Calixte, where did you see it exactly?

Flags: needinfo?(jvarga) → needinfo?(cdenizet)

:janv, just from reading the pushlog [1], the only patch I see related to the top file we have in the backtrace is [2].

[1] https://hg.mozilla.org/releases/mozilla-beta/pushloghtml?fromchange=77536919b121&tochange=a89261364728
[2] https://hg.mozilla.org/releases/mozilla-beta/rev/3cbb28dba0fa

Flags: needinfo?(cdenizet)

But this crash can't happen in beta. I'm not saying we shouldn't fix this bug, but the crash can't be hit in beta because the assertion is diagnostic.

There's a good description of MOZ_DIAGNOSTIC_ASSERT here:
https://searchfox.org/mozilla-central/rev/dd7e27f4a805e4115d0dbee70e1220b23b23c567/mfbt/Assertions.h#382
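
For readers following along, the three assertion flavors differ roughly as follows (illustration only; the real gating lives in mfbt/Assertions.h and is more nuanced):

// Illustration only; see mfbt/Assertions.h for the actual definitions.
#include "mozilla/Assertions.h"

void CheckSomething(bool aOk) {
  MOZ_ASSERT(aOk);             // compiled into debug builds only
  MOZ_DIAGNOSTIC_ASSERT(aOk);  // also compiled into Nightly and early-beta
                               // builds; disabled in late beta and release
  MOZ_RELEASE_ASSERT(aOk);     // compiled into all builds, including release
}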

:janv, it's because the crashes happen in devedition:
https://bit.ly/2DaQdl7 (facet Release Channel)

Ah, aurora or dev edition or early beta, is that right?
If so, then the next beta will have diagnostic assertions disabled, because it will no longer be early beta.

Anyway, we can always change it to a plain MOZ_ASSERT until we have a real fix for this.

The volume on 67 beta is way down and no longer deserves tracking for 67.

Priority: P2 → P1

Too late for a fix in 68 but we can still take a patch for 69/70.

Fairly low volume at this point; this may not need to be P1 anymore, but I'll leave that to the team/assignee.
Marking fix-optional to remove this issue from weekly regression triage.

Assignee: jvarga → nobody
Status: ASSIGNED → NEW
Priority: P1 → P2
Attached file prefs.js

If I replace my usual prefs.js in my Nightly profile folder with this one and restart Nightly, I get this crash.

I see a MOZ_DIAGNOSTIC_ASSERT(IsPrincipalInfoValid(principalInfo)); failing. IsPrincipalInfoValid(principalInfo) has many ways to return false and was introduced only for this diagnostic assert. Perhaps individual diagnostic asserts for each possible reason could reveal some more information here?
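
A minimal sketch of that suggestion, assuming it is acceptable to split the check; the individual predicates below are invented placeholders, not the real contents of IsPrincipalInfoValid in dom/quota/ActorsParent.cpp:

// Sketch only: one diagnostic assert per failure reason, so each produces a
// distinct crash reason in crash-stats instead of the single
// IsPrincipalInfoValid(principalInfo) message.
#include "mozilla/Assertions.h"

struct PrincipalInfoChecks {
  bool mHasSupportedType;   // placeholder: principal type we expect
  bool mHasWellFormedSpec;  // placeholder: spec/origin parses correctly
  bool mOriginMatchesSpec;  // placeholder: origin is consistent with the spec
};

void AssertPrincipalInfoValid(const PrincipalInfoChecks& aChecks) {
  MOZ_DIAGNOSTIC_ASSERT(aChecks.mHasSupportedType);
  MOZ_DIAGNOSTIC_ASSERT(aChecks.mHasWellFormedSpec);
  MOZ_DIAGNOSTIC_ASSERT(aChecks.mOriginMatchesSpec);
}

Because the crash reason includes the stringified condition, each failing check would then show up separately in crash reports.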

Flags: needinfo?(jvarga)

And comment 17 suggests turning this into a normal assert. As there are no occurrences in Nightly, this would probably hide the problem entirely; I'm not sure that's what we want. I did not dig into comment 21 to check whether it makes the crash reproducible (and what the difference to the default prefs is here).

Assignee: nobody → sgiesecke
Status: NEW → ASSIGNED

When I look at a recent crash report (https://crash-stats.mozilla.org/report/index/4d60611b-83b8-4a9c-8d7d-f02540200823#tab-metadata) I see several "ExperimentalFeatures" enabled:

browser.startup.homepage.abouthome_cache.enabled,network.cookie.sameSite.laxByDefault,network.cookie.sameSite.noneRequiresSecure,network.cookie.sameSite.schemeful,layout.css.constructable-stylesheets.enabled,layout.css.focus-visible.enabled,layout.css.grid-template-masonry-value.enabled,devtools.inspector.color-scheme-simulation.enabled,devtools.inspector.compatibility.enabled,devtools.webconsole.input.context,devtools.debugger.features.windowless-service-workers,apz.allow_zooming,image.avif.enabled,dom.media.mediasession.enabled,dom.input_events.beforeinput.enabled,dom.forms.inputmode,network.preload,dom.webgpu.enabled

Can we evaluate all crash reports to see whether they have some experimental feature in common? Would dom.indexedDB.experimental show up there as well if it were enabled?

Attachment #9172166 - Attachment description: Bug 1536596 - Change AnonymizedOriginString into a function. r=#dom-workers-and-storage → Bug 1536596 - Change AnonymizedOriginString and AnonymizedCString into functions. r=#dom-workers-and-storage
Pushed by sgiesecke@mozilla.com:
https://hg.mozilla.org/integration/autoland/rev/9a7e633be69a
Change AnonymizedOriginString and AnonymizedCString into functions. r=dom-workers-and-storage-reviewers,janv
https://hg.mozilla.org/integration/autoland/rev/566e09d02986
Added unit tests for AnonymizedOriginString. r=dom-workers-and-storage-reviewers,janv
Status: ASSIGNED → RESOLVED
Closed: 4 years ago
Resolution: --- → FIXED
Target Milestone: --- → 82 Branch

The main patch hasn't landed yet and needs data review, which I am going to request.

Status: RESOLVED → REOPENED
Flags: needinfo?(sgiesecke)
Resolution: FIXED → ---

Also, I am removing the in-testsuite flag again and adding leave-open. While unit tests are needed for some utility functions here, we won't have tests for the main functionality, which is something we don't have STR for. The purpose of the patch will be to better understand the conditions under which that happens.

Flags: in-testsuite+
Keywords: leave-open
Attached file Data Review Request
Flags: needinfo?(sgiesecke)
Attachment #9172637 - Flags: data-review?(twsmith)
Attachment #9172168 - Attachment description: Bug 1536596 - Crash with details on principal verification failures in all builds. r=#dom-workers-and-storage → Bug 1536596 - Crash with details on principal verification failures in early-beta-or-earlier builds. r=#dom-workers-and-storage
Attachment #9172637 - Flags: data-review?(twsmith) → data-review?(tdsmith)

Comment on attachment 9172637 [details]
Data Review Request

lgtm; sorry about the crossed wires!

  1. Is there or will there be documentation that describes the schema for the ultimate data set in a public, complete, and accurate way?

Yes; this populates the MozCrashReason field documented as a crash annotation and in the crash ping.

  2. Is there a control mechanism that allows the user to turn the data collection on and off?

Yes, the Firefox telemetry opt-out.

  3. If the request is for permanent data collection, is there someone who will monitor the data over time?

Temporary collection; :sg will monitor.

  4. Using the category system of data types on the Mozilla wiki, what collection type of data do the requested measurements fall under?

Category 1, technical data.

  5. Is the data collection request for default-on or default-off?

Default-on.

  6. Does the instrumentation include the addition of any new identifiers (whether anonymous or otherwise; e.g., username, random IDs, etc. See the appendix for more details)?

No.

  7. Is the data collection covered by the existing Firefox privacy notice?

Yes.

  8. Does there need to be a check-in in the future to determine whether to renew the data?

:sg & co are responsible for removing the collection as they deem necessary.

  9. Does the data collection use a third-party collection tool?

No.

Attachment #9172637 - Flags: data-review?(tdsmith) → data-review+
Pushed by sgiesecke@mozilla.com:
https://hg.mozilla.org/integration/autoland/rev/578caad8681c
Crash with details on principal verification failures in early-beta-or-earlier builds. r=dom-workers-and-storage-reviewers,asuth

We have a report (https://crash-stats.mozilla.org/report/index/fe5d28aa-ff62-4a80-a104-57a200200912) with the annotation now:

Invalid principal infos found: originNoSuffix (file://aaaaaaaaa_aaaa_aaa_aaaaaa) doesn't match passed one (file:///a:/aaaaaaaa/aaaa-aaaaaa/DD-aaaaa-aaaa-aaa-aaaaaaaaaa/)! 

Both are file: URLs, but the first (which is from the created principal) differs from the second (which is from the principal info that was used to create the principal). The pattern of the first matches the generic originNoSuffix generated for file URIs (file://UNIVERSAL_FILE_URI_ORIGIN) here: https://searchfox.org/mozilla-central/rev/62c443a7c801ba9672de34c2867ec1665a4bbe67/caps/ContentPrincipal.cpp#116

So this case is actually a bug in the logic of PrincipalVerifier, as it doesn't account for this handling. I am going to provide a patch that does.
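
Purely as an illustration of the special case described above (not the actual patch), the verification could accept the universal file origin along these lines; the helper name, parameters, and literal comparison are assumptions made for the sketch:

// Hypothetical sketch: when the strict file origin policy is disabled, all
// file: URLs share the single file://UNIVERSAL_FILE_URI_ORIGIN origin, so a
// mismatch against a specific file:///... origin is expected rather than a
// verification failure.
#include <string>

static bool OriginsConsistent(const std::string& aFromPrincipal,
                              const std::string& aFromPrincipalInfo,
                              bool aStrictFileOriginPolicy) {
  if (aFromPrincipal == aFromPrincipalInfo) {
    return true;
  }
  const bool bothFileUrls = aFromPrincipal.rfind("file://", 0) == 0 &&
                            aFromPrincipalInfo.rfind("file://", 0) == 0;
  return !aStrictFileOriginPolicy && bothFileUrls &&
         aFromPrincipal == "file://UNIVERSAL_FILE_URI_ORIGIN";
}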

Well, this looks to me like the problem which :asuth was already describing some time ago. There's a pref "security.fileuri.strict_origin_policy" which affects origin generation for file URIs and the report indicates that one origin was generated when the pref was set to true, and the other one when it was set to false.

Hm, so you don't think it's a "local" problem in PrincipalVerifier? And rather that the principal info passed to it should already contain file://UNIVERSAL_FILE_URI_ORIGIN if !nsScriptSecurityManager::GetStrictFileOriginPolicy()? (I was under the assumption that this always fails if !nsScriptSecurityManager::GetStrictFileOriginPolicy() because the principal info systematically did not go through ContentPrincipal::GenerateOriginNoSuffixFromURI, but this assumption might be wrong)

OTOH, if it's like you think, I guess we should probably support the case that someone flips security.fileuri.strict_origin_policy, right?

We currently use nsIPrincipal and MozURL to get the origin string. PrincipalVerifier exists to catch any inconsistencies between those two. The strict file origin policy is checked here:
https://searchfox.org/mozilla-central/rev/62c443a7c801ba9672de34c2867ec1665a4bbe67/caps/ContentPrincipal.cpp#116
https://searchfox.org/mozilla-central/rev/62c443a7c801ba9672de34c2867ec1665a4bbe67/netwerk/base/mozurl/src/lib.rs#286

So yes, the flipping of the pref on the fly is unfortunate and we need to do something with it. I'm not sure if we do a snapshot of the setting and just use that during the entire session. It needs to be investigated.

> We currently use nsIPrincipal and MozURL to get the origin string. PrincipalVerifier exists to catch any inconsistencies between those two.

Is that the sole purpose of PrincipalVerifier? In that case, it seems a quite odd place to perform this check.

> So yes, the flipping of the pref on the fly is unfortunate and we need to do something with it. I'm not sure if we do a snapshot of the setting and just use that during the entire session. It needs to be investigated.

Ah, so this only happens when the pref is flipped during a session? I thought it might be happening because it was flipped between the time some QM-managed storage was created and it is accessed, which might be across sessions. If it only happens when the pref is flipped during a session, then the solution might be to trigger re-initialization of all QM clients in that case (or even further re-initialization).

(In reply to Simon Giesecke [:sg] [he/him] from comment #40)
> > We currently use nsIPrincipal and MozURL to get the origin string. PrincipalVerifier exists to catch any inconsistencies between those two.
> Is that the sole purpose of PrincipalVerifier? In that case, it seems a quite odd place to perform this check.

Where would you do it instead?
nsIPrincipal can't be used off the main thread, and we can't create a helper that would synchronously block the current thread while we wait for processing on the main thread (that would lead to deadlocks because LSNG is a synchronous API).

> > So yes, the flipping of the pref on the fly is unfortunate and we need to do something with it. I'm not sure if we do a snapshot of the setting and just use that during the entire session. It needs to be investigated.
> Ah, so this only happens when the pref is flipped during a session? I thought it might be happening because it was flipped between the time some QM-managed storage was created and it is accessed, which might be across sessions. If it only happens when the pref is flipped during a session, then the solution might be to trigger re-initialization of all QM clients in that case (or even further re-initialization).

That sounds like a lot of work and I'm not sure what that would look like; what about existing operations?

I think we need something simpler.

(In reply to Jan Varga [:janv] from comment #41)
> (In reply to Simon Giesecke [:sg] [he/him] from comment #40)
> > > We currently use nsIPrincipal and MozURL to get the origin string. PrincipalVerifier exists to catch any inconsistencies between those two.
> > Is that the sole purpose of PrincipalVerifier? In that case, it seems a quite odd place to perform this check.
> Where would you do it instead?

First, this doesn't seem like something that's specific to the quota manager.

Second, it doesn't seem like it's necessary to do at runtime, but should rather be comprehensively done with tests.

> > > So yes, the flipping of the pref on the fly is unfortunate and we need to do something with it. I'm not sure if we do a snapshot of the setting and just use that during the entire session. It needs to be investigated.
> > Ah, so this only happens when the pref is flipped during a session? I thought it might be happening because it was flipped between the time some QM-managed storage was created and it is accessed, which might be across sessions. If it only happens when the pref is flipped during a session, then the solution might be to trigger re-initialization of all QM clients in that case (or even further re-initialization).
> That sounds like a lot of work and I'm not sure what that would look like; what about existing operations?

Existing operations could be cancelled. Can't we just reload all affected tabs (in this case all with file URIs) when the pref is flipped? Isn't this a general issue with several prefs?

(In reply to Simon Giesecke [:sg] [he/him] from comment #42)
> (In reply to Jan Varga [:janv] from comment #41)
> > (In reply to Simon Giesecke [:sg] [he/him] from comment #40)
> > > > We currently use nsIPrincipal and MozURL to get the origin string. PrincipalVerifier exists to catch any inconsistencies between those two.
> > > Is that the sole purpose of PrincipalVerifier? In that case, it seems a quite odd place to perform this check.
> > Where would you do it instead?
> First, this doesn't seem like something that's specific to the quota manager.

I don't think we want to check all PrincipalInfos which go through IPC, and I'm not aware of other code that uses nsIPrincipal and MozURL in such a way.

> Second, it doesn't seem like it's necessary to do at runtime, but should rather be comprehensively done with tests.

We have tests for it:
https://searchfox.org/mozilla-central/rev/62c443a7c801ba9672de34c2867ec1665a4bbe67/netwerk/test/gtest/TestMozURL.cpp#310
The list of URLs is quite long, but it can happen that something is missing and the runtime check exists to catch such cases.

Without a runtime check, how would we know there's an origin mismatch?
Actually, in this case it's not an inconsistency between MozURL and nsIPrincipal, it's just the problem with the pref.

> > > > So yes, the flipping of the pref on the fly is unfortunate and we need to do something with it. I'm not sure if we do a snapshot of the setting and just use that during the entire session. It needs to be investigated.
> > > Ah, so this only happens when the pref is flipped during a session? I thought it might be happening because it was flipped between the time some QM-managed storage was created and it is accessed, which might be across sessions. If it only happens when the pref is flipped during a session, then the solution might be to trigger re-initialization of all QM clients in that case (or even further re-initialization).
> > That sounds like a lot of work and I'm not sure what that would look like; what about existing operations?
> Existing operations could be cancelled. Can't we just reload all affected tabs (in this case all with file URIs) when the pref is flipped? Isn't this a general issue with several prefs?

Doable in theory, but still sounds like overkill.

If there are any other prefs like that, we should try to latch them as well.

(In reply to Jan Varga [:janv] from comment #43)
> (In reply to Simon Giesecke [:sg] [he/him] from comment #42)
> > (In reply to Jan Varga [:janv] from comment #41)
> > > (In reply to Simon Giesecke [:sg] [he/him] from comment #40)
> > > > > We currently use nsIPrincipal and MozURL to get the origin string. PrincipalVerifier exists to catch any inconsistencies between those two.
> > > > Is that the sole purpose of PrincipalVerifier? In that case, it seems a quite odd place to perform this check.
> > > Where would you do it instead?
> > First, this doesn't seem like something that's specific to the quota manager.
> I don't think we want to check all PrincipalInfos which go through IPC, and I'm not aware of other code that uses nsIPrincipal and MozURL in such a way.

Hm, ok.

> > Second, it doesn't seem like it's necessary to do at runtime, but should rather be comprehensively done with tests.
> We have tests for it:
> https://searchfox.org/mozilla-central/rev/62c443a7c801ba9672de34c2867ec1665a4bbe67/netwerk/test/gtest/TestMozURL.cpp#310
> The list of URLs is quite long, but it can happen that something is missing and the runtime check exists to catch such cases.
>
> Without a runtime check, how would we know there's an origin mismatch?

I am trying to understand what cases "origin mismatch" might cover:

  • An inconsistency between nsIPrincipal and MozURL... it still doesn't seem right to me to assert that at runtime. Couldn't we ensure that they both use the same algorithm? (which obviously is not the case right now)
  • An inconsistency because of some ... state change, such as the pref flipping.
  • ... are there other cases?

The PrincipalVerifier is quite complex, for the reasons you explained, and all that for essentially just implementing an assertion.

> Actually, in this case it's not an inconsistency between MozURL and nsIPrincipal, it's just the problem with the pref.

Yes, that's true.

> > > > > So yes, the flipping of the pref on the fly is unfortunate and we need to do something with it. I'm not sure if we do a snapshot of the setting and just use that during the entire session. It needs to be investigated.
> > > > Ah, so this only happens when the pref is flipped during a session? I thought it might be happening because it was flipped between the time some QM-managed storage was created and it is accessed, which might be across sessions. If it only happens when the pref is flipped during a session, then the solution might be to trigger re-initialization of all QM clients in that case (or even further re-initialization).
> > > That sounds like a lot of work and I'm not sure what that would look like; what about existing operations?
> > Existing operations could be cancelled. Can't we just reload all affected tabs (in this case all with file URIs) when the pref is flipped? Isn't this a general issue with several prefs?
> Doable in theory, but still sounds like overkill.
>
> If there are any other prefs like that, we should try to latch them as well.

Sorry I didn't say that literally, but I was thinking more generally about the effects of changing prefs on components both related and unrelated to storage. I am just wondering if there isn't a more general approach to handle that.

We should make the security.fileuri.strict_origin_policy pref mirrored only once[1]. We dug into this previously in bug 1286798, which resulted in bug 1511209 being filed to clarify why reftests use the preference. While having the preference at all is not great, we can probably further refine things so that it can't change at runtime. As we can see from the list of places where the preference is explicitly referenced at https://searchfox.org/mozilla-central/search?q=security.fileuri.strict_origin_policy&path= there aren't that many situations where it gets manipulated at runtime. Existing tests that depend on the pref flip happening at runtime can be modified to either run in a situation where the preference is already initialized when Gecko starts up, or they can be disabled with explicitly filed bugs that make it the responsibility of the code owners to do something better.

1: Which also means stopping listening to preference changes in addition to changing StaticPrefList.yaml
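
For reference, the suggested change would look roughly like the following StaticPrefList.yaml entry (a sketch; the existing entry's exact type and current mirror kind may differ):

# Sketch of the suggested change, not the actual current entry in
# modules/libpref/init/StaticPrefList.yaml.
- name: security.fileuri.strict_origin_policy
  type: bool
  value: true
  mirror: once  # read once at startup; later pref flips no longer take effect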

Blocks: 1674954
See Also: → 1689130
Assignee: sgiesecke → nobody

A crash report with this signature has shown up with potentially interesting mac_crash_info:

bp-90ab5997-bc5e-4dc8-951e-aac180210610

    {
      "num_records": 1,
      "records": [
        {
          "message": "\u0080\u00a32\u0085\u0001",
          "module": "/System/Library/PrivateFrameworks/CoreUI.framework/Versions/A/CoreUI"
        }
      ]
    }

Yes, that "message" needs interpretation. I'll dig into the CoreUI framework to try to provide it.

(Following up comment #46)

In the CoreUI private framework, it's an internal _CUILog() method that writes to the "message" field in "mac crash info". I used a disassembler to look through many of that framework's calls to _CUILog(), and none of them comes close to matching the output here. "mac crash info" string fields (like "message") are UTF8 strings. I suspect this one is in non-Roman script, and Socorro's stackwalker is unable to interpret it.

So this "message" is useless here.

The crash reason there (it requires the bits to see private crash data and is sanitized with the alpha/digit collapse to a/D), however, is interesting: the baseDomain is for what is certainly an IMAP message URI, but the passed one is clearly for a null principal GUID. This is likely specifically down to Thunderbird's message display logic and does not generalize to Firefox, because it's likely just Thunderbird violating invariants.

There are a few crashes from Firefox, but they all seem to be the potentially expected case where the expected originNoSuffix is (sanitized) file://UNIVERSAL_FILE_URI_ORIGIN but what was received is the (sanitized) specific file path, indicating the pref inconsistency discussed above.

QA Whiteboard: qa-not-actionable

The leave-open keyword is there and there is no activity for 6 months.
:jstutte, maybe it's time to close this bug?

Flags: needinfo?(jstutte)

Comment 48 and comment 45 still apply. We should just do it one day. Jan-Rio, do you want to take a look here?

Flags: needinfo?(jstutte) → needinfo?(jkrause)

(In reply to Jens Stutte [:jstutte] from comment #50)
> Comment 48 and comment 45 still apply. We should just do it one day. Jan-Rio, do you want to take a look here?

Oh, sorry, it is actually bug 1665056 that needs to be solved. IIRC we kind of always waited for bug 1730535 in order not to need this.

Flags: needinfo?(jkrause)
See Also: → 1730535

The leave-open keyword is there and there is no activity for 6 months.
:jstutte, maybe it's time to close this bug?
For more information, please visit auto_nag documentation.

Flags: needinfo?(jstutte)
Severity: critical → S2

Since the crash volume is low (less than 5 per week), the severity is downgraded to S3. Feel free to change it back if you think the bug is still critical.

For more information, please visit auto_nag documentation.

Severity: S2 → S3
Depends on: 1810412

This can be very severe, rendering a profile completely nonfunctional even in safe mode.

Yesterday, I set security.fileuri.strict_origin_policy to false, so that I could write some <script type=module> without messing around with HTTP servers.

A bit over 27 hours later, Firefox suddenly crashed while I was in the middle of using a different program, and would no longer start successfully—it’d crash within a second of startup, even in safe mode. I was only able to rescue my profile because the crash reason shown in the crash reporter (“Invalid principal infos found: originNoSuffix (file://aaaaaaaaa_aaaa_aaa_aaaaaa) doesn't match passed one (file:///aaaa/aaaaa/aaaa/a/aaaaaa/aaaaaa/aaa/aaaaa_aaaaa/aaaaaaa/aaaaa.aaaa)!”) reminded me of the pref change I made yesterday.

The solution was to manually find the profile directory, edit its prefs.js, and remove the line user_pref("security.fileuri.strict_origin_policy", false);.

Status: REOPENED → NEW
Target Milestone: 82 Branch → ---

The principal verifier has been removed in bug 1810412.

Status: NEW → RESOLVED
Closed: 4 years ago → 11 months ago
Resolution: --- → WORKSFORME