Closed Bug 1925117 Opened 1 year ago Closed 9 months ago

Folder Compaction shouldn't operate on live DB

Categories

(MailNews Core :: General, defect, P1)

Thunderbird 128

Tracking

(thunderbird_esr128+ affected, thunderbird136 wontfix)

RESOLVED FIXED
137 Branch

People

(Reporter: benc, Assigned: benc)

References

(Regression)

Details

(Keywords: leave-open, regression)

Attachments

(4 files)

A good post-Bug 1890448 analysis from Kai:

The new folder compact strategy uses a new approach for writing new/temporary files.
In the past it was:
(1) write new mbox and new msf to temporary files
(2) rename old.msf to old-1.msf
(3) rename temporary files, overwriting old files
(4) remove old-1.msf
The new logic seems to be:
(1) copy msf to msf.compact-backup
(2) write new mbox to a temporary file, but do an in-place overwrite of the msf file
(3) on success, rename the temporary mbox file to its final name, remove msf.compact-backup
(3b) on failure, close the new msg file, and try to move msf.compact-backup back to the permanent msf filename
I think the new approach introduces new risks.
What if something bad happens while we're writing the new msf file and it cannot complete, or the user exits?
Then a new index (possibly incomplete) will point to positions in the old file.
I also see at least one place in the new code that says "rollback not possible, return error"
I think the old strategy was safer. If something bad happens during compact, the old files will still be in place, unmodified

There's still the (possible but really pretty contrived) issue of getting a failure while replacing the DB and mbox files - there's no way to atomically install both at once, so a failure could always hit after the first one lands. For local folders it's probably best to delete the old db first, install the new mbox, then the new db: a missing DB will just be rebuilt by scanning the mbox. I'll have to think about imap et al.
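
(A sketch of that ordering for local folders, with hypothetical filenames; os.replace() is an atomic, overwriting rename when source and destination are on the same filesystem:)

import os

# Local folders only: a missing .msf is rebuilt by scanning the mbox,
# so the old db is the safe file to sacrifice if we fail mid-sequence.
os.remove("folder.msf")                      # 1. drop the old db
os.replace("folder.mbox.new", "folder")      # 2. install the new mbox
os.replace("folder.msf.new", "folder.msf")   # 3. install the new db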

Version: unspecified → Thunderbird 128
Depends on: 1925825

Just to note: the old folder compactor code did write directly to the live DB for IMAP - it didn't take a copy to work on.
There was separate compaction code for local and IMAP folders. The local version did work on a copy of the DB but the IMAP one did not.

Creating a separate copy of a DB can only be done for local folders at the moment - there's currently no way to create a standalone IMAP msgDB without the folder. I'll be working on that in Bug 1921694.

Severity: -- → S2
Priority: -- → P1
See Also: → 1928540

I played with an environment where the disk was almost full.

I once ran into a situation where compaction was apparently started (because I have a file INBOX.msf.compact-backup),
and the disk size checks didn't prevent the compact (note this might also happen if something fills the disk in parallel AFTER the check was made),
and I arrived at the following state of files on disk:

  • INBOX.msf.compact-backup - size 0
  • INBOX - truncated, it contains a message with one attachment, and the data is cut off in the middle, and the MIME end terminator is missing
  • INBOX.msf is in an incomplete state, because my mork dumper tool gets stuck trying to parse (never completes), which suggests some data is missing?

FYI, it's tricky to reproduce. That's because our code contains some checks for available disk size, and only attempts to compact when it believes it's safe to do so (sufficient space available, based on guessing the required space).

I was kind of lucky to get into this situation once, and I couldn't give you clear instructions on how to reproduce it.

IMHO there's only one way to protect against disk-full-scenarios:
Write new files, and only if those can be completely written and closed, proceed to replace the existing files in a way that doesn't require writing any more data to disk (limited to updating filenames).

See Also: → 1934584

I thought about a safe strategy for updating the files when compacting.
I suggest the following; a code sketch of the whole sequence follows the step list.

starting point:

- foldername
- foldername.msf


when compacting, write new files to

(WT)
- foldername.tmp
- foldername.msf.tmp


when new files are complete and closed

(RN)
- rename foldername.tmp => foldername.new
- rename foldername.msf.tmp => foldername.msf.new


The above two renames may be done in any order.

Once the .new files are available, use the following strict order for
moving to the new files

(RO)
- rename foldername => foldername.old
- rename foldername.msf => foldername.msf.old

(RP)
- rename foldername.new => foldername
- rename foldername.msf.new => foldername.msf

(DO)
- delete foldername.old
- delete foldername.msf.old
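
(A minimal Python sketch of the whole sequence; the two write_* callbacks are hypothetical stand-ins for the real compaction writers, and os.replace() is an atomic, overwriting rename on a single filesystem:)

import os

def compact(folder, write_mbox, write_msf):
    # folder is the mbox path; the callbacks must fully write and close
    # their output file before returning.
    msf = folder + ".msf"

    # (WT) write complete new files under .tmp names
    write_mbox(folder + ".tmp")
    write_msf(msf + ".tmp")

    # (RN) both files are complete and closed; these two may go in any order
    os.replace(folder + ".tmp", folder + ".new")
    os.replace(msf + ".tmp", msf + ".new")

    # (RO) retire the primaries - strict order from here on
    os.replace(folder, folder + ".old")
    os.replace(msf, msf + ".old")

    # (RP) promote the new files
    os.replace(folder + ".new", folder)
    os.replace(msf + ".new", msf)

    # (DO) clean up
    os.remove(folder + ".old")
    os.remove(msf + ".old")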


Transaction recovery

The following steps are done for each pair of files, when either of the
two files is accessed (read or write) for the first time in the process.

Once the steps have been executed, the process will remember for the
remainder of the lifetime of the process that those files are in a
correct state and the safety handling need not be run again.

- unconditionally delete file foldername.tmp
- unconditionally delete file foldername.msf.tmp

- test if file foldername.new exists
- test if file foldername.msf.new exists

- if none of them exists, it means they were not created in the previous
  run, or they have already been successfully renamed to the primary names.

  If they were already successfully renamed to the primary names,
  it's possible that .old files are still present (were not yet deleted).

  For the sake of cleanup:
  - unconditionally delete foldername.old
  - unconditionally delete foldername.msf.old

  (If we had failed after renaming to .old, we should still have .new files.
   If we don't have any primary files currently, and also no .new files,
   the folder might have been deleted; in that scenario it's fine to
   do the cleanup and remove the .old files, too.)

- if both .new files exist, the previous process has exited unexpectedly,
  we know it must have happened before or during the renaming of primary
  files to .old files
  
  - if foldername exists, rename it to foldername.old
  - if foldername.msf exists, rename it to foldername.msf.old
  - continue the usual operation at position (RP)
  
- if only one .new file exists, the previous process has exited unexpectedly,
  and we don't know yet whether it happened during (RN) or (RP).

  If it happened during (RN), then the primary files must still exist.
  If both primary filenames exist, then delete the one .new file, done.

  If only one primary filename exists, do the following to finish (RP).
  
  If foldername.new exists (and foldername.msf.new is missing),
  then we expect that foldername is missing (and foldername.msf exists).
  
  In theory, based on the previous checks
  (only one of each primary and .new exists),
  we could see that both foldername.new and foldername exist,
  and neither foldername.msf.new nor foldername.msf exist (or vice versa).

  This should never happen with our update strategy, but we should define
  how we handle that potential scenario.
  
  Suggestion is to keep the existing primary file and delete the .new file, done.

  If foldername.new exists and foldername.msf exists,
  then rename foldername.new to foldername, continue with (DO).
  
  Or, if foldername.msf.new exists and foldername exists,
  then rename foldername.msf.new to foldername.msf, continue with (DO).

When doing regular file updates (not compact), which will update
foldername and/or foldername.msf in place, check for the
presence of the .tmp files or .new files. That is, every time we
open those files for modification, we check for those compact-temporary files.
If they are present, it means we are experiencing unexpected concurrency
during compact. If that is detected, enforce a crash.
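
(The recovery steps above, sketched in the same Python notation; this is an illustration of the procedure, not the shipped implementation:)

import os

def _remove(path):
    try:
        os.remove(path)
    except FileNotFoundError:
        pass

def recover(folder):
    # Run once per pair of files, on first access in this process.
    msf = folder + ".msf"
    _remove(folder + ".tmp")              # half-written files are garbage
    _remove(msf + ".tmp")

    new_mbox = os.path.exists(folder + ".new")
    new_msf = os.path.exists(msf + ".new")

    if not new_mbox and not new_msf:
        # compact never reached (RN), or completed: .old files are junk
        _remove(folder + ".old")
        _remove(msf + ".old")
    elif new_mbox and new_msf:
        # died before or during (RO): finish retiring, then resume at (RP)
        if os.path.exists(folder):
            os.replace(folder, folder + ".old")
        if os.path.exists(msf):
            os.replace(msf, msf + ".old")
        os.replace(folder + ".new", folder)   # (RP)
        os.replace(msf + ".new", msf)
        _remove(folder + ".old")              # (DO)
        _remove(msf + ".old")
    else:
        # exactly one .new file: died during (RN) or (RP)
        if os.path.exists(folder) and os.path.exists(msf):
            _remove(folder + ".new")          # (RN) interrupted; primaries OK
            _remove(msf + ".new")
        elif new_mbox and os.path.exists(msf) and not os.path.exists(folder):
            os.replace(folder + ".new", folder)   # finish (RP)
            _remove(folder + ".old")              # (DO)
            _remove(msf + ".old")
        elif new_msf and os.path.exists(folder) and not os.path.exists(msf):
            os.replace(msf + ".new", msf)         # finish (RP)
            _remove(folder + ".old")              # (DO)
            _remove(msf + ".old")
        else:
            # the "should never happen" case: keep the existing primary
            _remove(folder + ".new")
            _remove(msf + ".new")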

We should consider downgrades.

Let's say we deploy this in 115.n+1, then the user experiences a process
termination with an incomplete transaction, and prior to restarting,
the user downgrades to 115.n

Version 115.n would not be able to handle the situation where
primary files are missing.

To protect against that, the suggestion is that we ship version 115.n
with the code as explained above under "Transaction recovery".

In a later version, either 115.n+1 or later,
we ship the code that uses the new file writing strategy.

This way we also prevent older versions from incorrectly interpreting
the .new, .old or .tmp files as individual mailboxes.

I think the described approach would:

  • protect against process termination at any point during compact
  • retain a consistent state during disk full scenarios
    (assuming we abort compact if any failure is experienced during the file writing operations;
    once we are done writing the new files, the rename and delete operations should succeed)
  • potentially detect problematic concurrency during compact that we haven't found yet

It would be good to get this reviewed and discussed.

Flags: needinfo?(toby)

Will this help Bug 1867762 - The folder ... cannot be compacted because another operation is in progress (gmail.com) ?

Not sure, but in reading some of the above, the horror show of incomplete compacts filling the hard drive then compounded with losing its mind once full is not a good look for Thunderbird. Hope this all gets figured out and handled before the inevitable might happen.

(In reply to Worcester12345 from comment #11)

Not sure, but in reading some of the above, the horror show of incomplete compacts filling the hard drive then compounded with losing its mind once full is not a good look for Thunderbird. Hope this all gets figured out and handled before the inevitable might happen.

See: bug 1878541 and bug 1911076 comment 56

The Comment 5 suggestion looks doable, I think.

I can see it getting a bit complicated (eg I'm not confident on where we'd hook in the Transaction recovery, and I think there's lots of scope for nasty hard-to-track-down corner case bugs from folder rename code, which is all over the place) - but I do agree it's a robust approach.

But before committing to implementing it, I'd like to rule out simpler approaches and nail down the exact failure cases we're worried about.

Ultimately, we're trying to atomically update two files in sync, without the OS supporting that.
The .tmp/.old/.new approach in Comment #5 addresses the failure modes that could happen, and provides a separate recovery procedure.

The only OS "guarantee" is that renaming a file is atomic (as long as it's on the same filesystem). This can include replacing an existing file.

So, assuming we had a properly atomic database Commit() (which we could do, but our interfaces don't really expose that at the moment), our compaction would look like this:

read from "mbox", write compacted data to "mbox.tmp".
update db with new storeToken values
success = db.Commit()
if success:
    rename "mbox.tmp" to "mbox"
else:
    delete "mbox.tmp"

The failure modes:

  1. The user could lose power between the Commit() and before the rename.
  2. The rename could fail.

I think 2 is unlikely - we've just been reading from "mbox", we can check our write access to it in advance, so things would have to be pretty catastrophic for it to fail.
Regarding 1, given the time window involved, I think you'd have to be exceedingly unlucky to be hit by this.

So the question becomes: are these acceptable risks or not?
My gut feeling is: "Yes. It's a vanishingly small risk, there are a million other places in the codebase subject to the same issues, and chances are that it'd only happen when something catastrophic occurred and recovery is a non-option anyway (eg hard drive failure) ".
But I'm open to be persuaded otherwise.

The Comment 5 suggestions mitigate these risks (not completely?), but at the cost of more complex code with some icky edge-cases that scare me (interactions with folder renaming).

One issue not covered yet is that putting .tmp/.old/.new in the same dir clashes with user-named folders that end with .tmp/.old/.new.
This is a more general issue, not specific to Comment 5.
But it's something we should keep in mind. The slack way we deal with filenames and folder names means we've already got quite enough folder-naming issues :-) We don't need any more (see Bug 124287).

I think the way around it would be to create a sub-directory for temp files.
It'd be on the same filesystem, so we retain the OS atomic-rename capability.

Kai, what are your thoughts on Ben's suggested approach? He's back later this week and will want to get back into this.

Flags: needinfo?(toby)

(In reply to Ben Campbell from comment #13)

I can see it getting a bit complicated (eg I'm not confident on where we'd hook in the Transaction recovery,

IMHO, part of this work is to ensure that all code that wants to open these files needs to go through a single piece of code. If there isn't that place yet, it needs to be added.

The transaction recovery must be done in every code path that accesses the files.

We can minimize the number of times we go through that logic. We can have a global list of all files that have already gone through the recovery attempt during the current process execution. If we're trying to open the files for a second or later time, we know the files must be in a sane state, and we can skip the recovery.
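
(One way to sketch that once-per-process bookkeeping, reusing the recover() routine sketched earlier:)

_already_recovered = set()      # folder paths vetted during this process

def open_folder_files(folder):
    if folder not in _already_recovered:
        recover(folder)                   # the transaction-recovery steps
        _already_recovered.add(folder)
    # ...proceed to open mbox/msf as usual; recovery is skipped from now on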

However, as part of this work, we must also make sure that all file closing operations are properly tracked. We must never open a file when it's already open.

and I think there's lots of scope for nasty hard-to-track-down corner case bugs from folder rename code, which is all over the place) - but I do agree it's a robust approach.

Sorry to hear about this additional complication. Sounds like the (new) central code that tracks recovery, files opened and closed, should strictly ensure that a file isn't currently open while it's being renamed.

Maybe we could introduce some additional filenames for renaming; something to think about. For example, rename both files to *.pre-rename? We should avoid copying the mbox file, but it might be OK to copy the msf file. So, here is a first idea (sketched in code after the list):

  • rename mbox to mbox.pre-rename
  • rename msf to msf.pre-rename
  • if it's an IMAP rename, perform the IMAP rename operation
  • copy msf.pre-rename to new-msf.pre-rename
  • rename mbox.pre-rename to new-mbox
  • delete msf.pre-rename
  • rename new-msf.pre-rename to new-msf
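
(The same idea in sketch form; imap_rename is a stand-in for the protocol-level rename, and os.replace()/shutil.copy2() do the renames and the msf copy:)

import os, shutil

def rename_folder(mbox, new_mbox, imap_rename=None):
    msf, new_msf = mbox + ".msf", new_mbox + ".msf"
    os.replace(mbox, mbox + ".pre-rename")
    os.replace(msf, msf + ".pre-rename")
    if imap_rename is not None:
        imap_rename()                         # protocol-level rename, if IMAP
    # the msf may be copied; the mbox is only ever renamed
    shutil.copy2(msf + ".pre-rename", new_msf + ".pre-rename")
    os.replace(mbox + ".pre-rename", new_mbox)
    os.remove(msf + ".pre-rename")
    os.replace(new_msf + ".pre-rename", new_msf)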

The transaction recovery would have to check for an incomplete rename as an additional scenario when looking for a situation that needs to be rolled back or forward.

But before committing to implementing it, I'd like to rule out simpler approaches and nail down the exact failure cases we're worried about.

I'm worried about the failure cases:

  • disk full, or the disk's index nodes (inodes) exhausted, which can mean that our operation might fail at any step
  • the application crashing (because of another thread), or the application being interrupted and never continuing (e.g. the laptop going to sleep and never waking up because of a failure, with a hard reboot happening instead), which could also happen at any point in time

Ultimately, we're trying to atomically update two files in sync, without the OS supporting that.
The .tmp/.old/.new approach in Comment #5 addresses the failure modes that could happen, and provides a separate recovery procedure.

The only OS "guarantee" is that renaming a file is atomic (as long as it's on the same filesystem). This can include replacing an existing file.

So, assuming we had a properly atomic database Commit() (which we could do, but our interfaces don't really expose that at the moment), our compaction would look like this:

As I understand it, our intention is to eventually remove the use of mork.
If that's our plan, it might be better to use solutions that don't use more of mork.

read from "mbox", write compacted data to "mbox.tmp".
update db with new storeToken values
success = db.Commit()
if success:
    rename "mbox.tmp" to "mbox"
else:
    delete "mbox.tmp"

The failure modes:

  1. The user could lose power between the Commit() and before the rename.
  2. The rename could fail.

I think 2 is unlikely - we've just been reading from "mbox", we can check our write access to it in advance, so things would have to be pretty catastrophic for it to fail.
Regarding 1, given the time window involved, I think you'd have to be exceedingly unlucky to be hit by this.

I'd prefer if our data storage strategy didn't require being lucky.
We don't have to talk about catastrophic, we can talk about common operations.
Besides losing power, it's also possible that the commit() worked, but then the application crashes before the code reaches rename().
Or, instead of crashing, the user closes the laptop lid and the machine fails to wake up.
Or, our code reaches the rename(), and the filesystem code begins to execute the rename() operation, but then the system crashes/sleeps before the operation completes.
And on reboot, the incomplete filesystem transaction and its rename is rolled back.
I don't consider that a catastrophic event. I consider that a regular event that legitimately and occasionally happens.
When you have 30 million users and lots of transactions per day, I don't consider it unlikely at all.
I don't want to say that the 20 users who experience that per day have to live with having been unlucky.
Our software should work reliably in all common scenarios.

So the question becomes: are these acceptable risks or not?

In my opinion: No, they are not.

My gut feeling is: "Yes. It's a vanishingly small risk, there are a million other places in the codebase subject to the same issues, and chances are that it'd only happen when something catastrophic occurred and recovery is a non-option anyway (eg hard drive failure) ".

Do we have a million other places that change our on-disk storage in risky ways?

Many hard drive failures will be handled automatically by smart logic that switches to working sectors.
Slowly degrading hard drives can be detected, and the drive can be replaced while it is still sufficiently working.
Some people also mitigate the risk of hard drive failure by using RAID.
I think that the possibility of a hard drive failure cannot be a justification for software occasionally misbehaving and losing data on well-working hardware.

The Comment 5 suggestions mitigate these risks (not completely?), but at the cost of more complex code with some icky edge-cases that scare me (interactions with folder renaming).

It's our job to make code that works reliably, even if it requires complexity.
In the same way that the operating system programmers also have to write complex code to make rename() atomic (but cannot guarantee that it will be executed if the system crashes or power fails).
OS file system programmers also cannot say, "if the disk drive fails, data will be gone anyway, so why should we worry"?

(In reply to Kai Engert [:KaiE:] from comment #16)

(In reply to Ben Campbell from comment #13)

I can see it getting a bit complicated (eg I'm not confident on where we'd hook in the Transaction recovery,

IMHO, part of this work is to ensure that all code that wants to open these files needs to go through a single piece of code. If there isn't that place yet, it needs to be added.

The transaction recovery must be done in every code path that accesses the files.

I basically agree, although xpcom refcounting and the way we currently handle mork make it tricky (very easy to end up in bad situations). eg Bug 1795178

BTW Mork operates entirely in-RAM as far as I can see. It only writes to disk when you do one of the three Commit() variations.
Rolling back a Mork DB involves just ditching it and re-reading from disk. It's actually kind of elegant in its simplicity, but we don't take advantage.
Open DBs do seem to hold a file handle open though, which causes a lot of issues (on Windows, particularly). I don't know why this is done, but I'd like to stop it :-) So there's loads of places in the code which try to hack around this by nulling out db pointers (in the hope it's the last refcount and that the db will close) or calling ForceDBClosed().

As I understand it, our intention is to eventually remove the use of mork.
If that's our plan, it might be better to use solutions that don't use more of mork.

That was my intention in using a Commit()/Rollback() approach - we'll need to move to that model at some point, so it seems good to align with that now if we can.
But it doesn't preclude the recovery procedure you outlined. And if it turns out to be too hard I'll just go back to operating on a second DB file.

So I'm going to approach this in two steps:

  1. ensuring no changes are visible to the system until they've all succeeded, using temporary files in a way that lets us detect failed switch-overs.
  2. figure out the recovery procedure and how to hook it in (i.e. where dbs are opened). Not totally sure of the scope of this yet.

Part 1 gets us most of the way there - we'd still be vulnerable to loss in the period between installing the updated database (commit), and renaming the updated mbox file.

The complication is that recovery-detection requires a specific temporary filename scheme. And this can clash with other folders.
Folder naming is a shambles - it's perfectly possible a user will have folders which clash with names we'd like to use: "invoices", "invoices.tmp", "invoices.new" etc ...
Previously we didn't worry about that - the temp file generation could just keep trying new names until it found one that was free. In fact, the current compaction code uses NS_NewSafeLocalFileOutputStream(), which never even exposes the filename.

But for recovery, we need to be more specific, as we have to be able to map the temp name back to the original name.
So I think the only way to do it is to use a temp directory inside the mail folder - has to be on the same filesystem so we get atomic renames. So we'd have full control over the name inside that dir. There's still the issue of the temp directory itself clashing with another folder, but that's a bit more tractable (https://searchfox.org/comm-central/search?path=&q=nsShouldIgnoreFile).
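
(A sketch of the kind of temp-directory scheme meant here; the directory name ".compact-tmp" is purely illustrative:)

import os

def compact_temp_paths(mbox_path):
    # A dot-named subdir inside the same mail directory stays on the
    # same filesystem, so renames back out of it remain atomic.
    tmpdir = os.path.join(os.path.dirname(mbox_path), ".compact-tmp")
    os.makedirs(tmpdir, exist_ok=True)
    base = os.path.basename(mbox_path)
    # Temp names derive from the folder's own filename, so a recovery
    # pass can map each temp file back to its original.
    return (os.path.join(tmpdir, base + ".tmp"),
            os.path.join(tmpdir, base + ".msf.tmp"))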

Then we can focus on a recovery step.

Here's an idea for potentially avoiding the separate directories.

I've tried to use a folder name with a leading dot, e.g. .test

When doing so, I see that Thunderbird uses a different filename instead; on my Linux system I got 41a1a959.

If Thunderbird is guaranteed to always use that kind of special filenames for folder names that start with a dot, and never allows filenames with a leading dot for permanent files, then we could use the leading dot as a marker for temporary filenames.

Every time you post things like that I get pulled off into another rabbit hole :-)
Folder creation and naming is quite... convoluted.

To answer your question, it looks like filenames for folders with a leading dot are always hashed (at least local folders - I think IMAP just doesn't allow folders with a leading dot at all).
The name-hashing is done by NS_MsgHashIfNecessary() and it's called from various places in a pretty ad-hoc way. Sometimes in conjunction with other special-character-escaping schemes. So it wouldn't surprise me if there were a few places that let it slip through... but it's probably about as close to a guarantee as we can get.

I think it's probably still better to use a temp directory (with a leading-dot name!) to keep the clash-surface smaller - some other processes also make assumptions about files and don't obey the leading-dot rule (eg mbox<->maildir conversion).
But it'll be easy enough to switch to leading-dot files instead, if some other problem arises.

MboxScanner and MboxCompactor are helper classes, only used by
nsMsgBrkMBoxStore. They were hidden away inside nsMsgBrkMBoxStore.cpp,
but I decided they're easier to work with in separate files.
No code changes here, just splitting up files (and culling some includes).

This nails down specific rules for compacting mbox files.
It doesn't address the database side of folder compaction (that's part 2!),
but it does provide the hooks to allow the low-level mbox compaction and
higher-level (database) updates to synchronise robustly.
It aims to minimise potential issues due to catastrophic crashes or power
failures, while providing for a recovery procedure (either automatic
or manual) if the worst does occur.

Pushed by mkmelin@iki.fi:
https://hg.mozilla.org/comm-central/rev/0a3c8ad9c854
Split mbox scan and compaction helper classes out from nsMsgBrkMBoxStore.cpp. r=mkmelin

Attachment #9464739 - Attachment description: Bug 1925117 - Add license to mailnews/local/src/MboxScanner.cpp. r=aleca DONTBUILD → Bug 1925117 - Add license to mailnews/local/src/MboxScanner.cpp. r=aleca
Pushed by heather@thunderbird.net:
https://hg.mozilla.org/comm-central/rev/0feb7e402a8a
Add license to mailnews/local/src/MboxScanner.cpp. r=aleca DONTBUILD
Depends on: 1948809
Depends on: 1948810
Depends on: 1948812

The part 2 patch here is done and working (just needs more testing and documenting).

...but I've still got the problem that the GUI makes all the messages disappear if I install a new (post-compaction) database on the open folder.
If I click on another folder then back to the freshly compacted one it all works fine.
I have no idea how to make the GUI side cope with this automatically... clues and patches welcome!

At the beginning of compaction, I call nsIMsgFolder.forceDBClosed() which also nulls nsIMsgFolder.msgDatabase.
I copy the DB file to another location, open it, apply the compaction changes, commit and force-close it, then copy it over the old DB file.
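
(That flow as a sketch; open_db and apply_changes are hypothetical stand-ins for the real msgDB open and the compaction edits, and an atomic rename is shown for the final install, assuming the working copy lives on the same filesystem:)

import os, shutil

def compact_db(live_db_path, workdir, open_db, apply_changes):
    work = os.path.join(workdir, os.path.basename(live_db_path))
    shutil.copy2(live_db_path, work)   # work on a copy, never the live DB
    db = open_db(work)
    apply_changes(db)                  # e.g. the new storeToken values
    db.commit()
    db.force_close()                   # no open handle may remain (Windows!)
    os.replace(work, live_db_path)     # install over the old DB file
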
When the compaction is complete, there are a couple of notifications which are fired:
https://treeherder.mozilla.org/jobs?repo=try-comm-central&selectedTaskRun=cw9PbJNTRiWJJBzNB2ilxw.0&revision=6350952675f672f9350fcbd22ec4c6abed30dc4b

These two notifications were just cargo-culted over from the old old compaction code - DBViewWrapper seems to handle CompactCompleted sensibly by calling a refresh() function, and I think gloda hooks into one, but honestly I'm just grepping about in the dark there.

Flags: needinfo?(geoff)

It works on local folders but not on IMAP, so I looked at the DBViewWrapper code and discovered this. The IMAP folder code isn't even firing the notifications when asked.

Flags: needinfo?(geoff)

(In reply to Geoff Lankow (:darktrojan) from comment #28)

It works on local folders but not on IMAP, so I looked at the DBViewWrapper code and discovered this. The IMAP folder code isn't even firing the notifications when asked.

Ahh, thanks! I did see that in DBViewWrapper, but wasn't sure if that was what was used for the main view.

Sighs... sooooo many protocol-specific hacks in the folder code. Will need to pick my way through nsMsgLocalMailFolder::NotifyCompactCompleted() and do some code archeology to figure out what's what and why.

Depends on: 1949605
Attachment #9466739 - Attachment description: WIP: Bug 1925117 - Part 2 of 2. Perform compaction using another db file, not the live one. → Bug 1925117 - Part 2 of 2. Perform compaction using another db file, not the live one. r=#thunderbird-back-end-reviewers

(In reply to Kai Engert [:KaiE:] from comment #16)
...

However, as part of this work, we must also make sure that all file closing operations are properly tracked. We must never open a file when it's already open.
...
Sorry to hear about this additional complication. Sounds like the (new) central code that tracks recovery, files opened and closed, should strictly ensure that a file isn't currently open while it's being renamed.

Under Windows, a rename operation fails while the file is open.
So with enough tests to create various situations, we can be certain that the code does not create such a situation.
(Actually, I remember I had to modify some code for correct error handling due to this.
Under Linux, we can remove/rename a file while it is still open, but under Windows we can't.
Thus I was forced to re-order operations to close the file first. I think it was one of the temporary files created during certain operations, maybe download.
It took me a few months to figure out WHAT file was being renamed, by inserting many dumps and running many jobs on try-comm-server.
Then it dawned on me that the restriction of Windows got in the way.)

As for open, depending on the read/write nature of the open, I think Windows similarly balks at a second open.
So it might be possible, again, to have enough tests to see if TB tries to open some files twice in a manner that Windows thinks harmful.
I think that covers all the cases discussed here.

To be honest, on this second point I am not 100% sure, but for the first one, I am sure.
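
(A small illustration of the first point: the rename can only succeed once every handle on the file is closed.)

import os

f = open("INBOX.tmp", "wb")
f.write(b"compacted data")
f.close()                          # must come first: on Windows, replacing
os.replace("INBOX.tmp", "INBOX")   # a still-open file raises PermissionError;
                                   # on Linux the rename would succeed anyway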

Target Milestone: --- → 137 Branch

Pushed by john@thunderbird.net:
https://hg.mozilla.org/comm-central/rev/632894a5bcf0
Part 1 of 2. Tighten up commit/rollback handling for mbox compaction. r=mkmelin, darktrojan
https://hg.mozilla.org/comm-central/rev/d4685f39abde
Part 2 of 2. Perform compaction using another db file, not the live one. r=mkmelin

Part 2 of 2 was backed out for causing test fails in mailnews/base/test/unit/test_folderCompact.js:
https://treeherder.mozilla.org/jobs?repo=comm-central&collapsedPushes=1593425&selectedTaskRun=YkoJtHmGTfO_vsPsAMRWfQ.0

Backout by john@thunderbird.net:
https://hg.mozilla.org/comm-central/rev/7853911748e0
Backed out changeset d4685f39abde (Bug 1925117, D238551) for causing test fails. r=backout

Pushed by geoff@darktrojan.net:
https://hg.mozilla.org/comm-central/rev/c7c389da2a5f
Part 2 of 2. Perform compaction using another db file, not the live one. r=mkmelin

Hey Ben, does this still need to stay open?

Flags: needinfo?(benc)

No, I was just waiting for a patch to be backed out because one of the more obscure unit tests failed every second Tuesday on leap years when running on windows 9 with Belgian keyboard settings or something :-) But it hasn't happened yet, so...

Status: ASSIGNED → RESOLVED
Closed: 9 months ago
Flags: needinfo?(benc)
Resolution: --- → FIXED
See Also: → 1952463
Regressions: 1952503
Regressions: 1952463
See Also: 1952463

Unless I'm mistaken, there was a glitch in landing this code. I cannot tell whether that could have caused any negative consequences.

Patches "part " and "part 2" were landed into c-c for 137.
Then "part 2" was backed out for causing test failures.
Then, the merge day happened, and 137 was branched into beta.
Then, "part 2" was landed only into c-c for 138.

Unless I'm mistaken, only "part 1" of this work was contained in the 137 release cycle.

138 is scheduled for release today though, and that should have both parts.

Regressions: 1963490
Regressions: 1965686
Regressions: 1959858