Closed Bug 1362483 Opened 7 years ago Closed 5 years ago

Gloda loops high CPU. Error console gloda.index_msg WARN "Observed header that claims to be gloda indexed but that gloda has never heard of during compaction". Activity mgr: "Determining which messages to index". Repair moves issue to a different folder.

Categories

(MailNews Core :: Database, defect)

Type: defect
Priority: Not set
Severity: critical

Tracking

(thunderbird_esr60 69+ fixed, thunderbird68 fixed, thunderbird69 fixed, thunderbird70 fixed)

RESOLVED FIXED
Thunderbird 70.0

People

(Reporter: merike, Assigned: jorgk-bmo)

References

(Regression)

Details

(Keywords: perf, regression, regressionwindow-wanted, Whiteboard: [regression:TB54?/TB55?])

Attachments

(4 files, 2 obsolete files)

Noticed higher than usual CPU activity and traced it back to gloda. It's spamming warnings like the one in the summary, hundreds per second.

The message mentioned is present in the folder and has been for 2 years already. If I repair said folder, it moves on to another one and claims there's also a message with a sketchy key there. I tried repairing a few folders and it came back to the initial folder, only to claim another message key was sketchy.

So, probably some kind of gloda corruption. I'll restart Thunderbird and see if it recurs without resetting gloda, but if anyone has ideas how it might have gotten stuck in that indexing loop, feel free to fix it :)
Thanks for reporting. I think the standard answer is: Delete global-messages-db.sqlite.
Bug 535516 is the only other place I've seen "Observed header that claims to be gloda indexed but that gloda has never heard of during compaction" mentioned.

If it's just sucking CPU but not actually impacting responsiveness, I'd ignore it until we decide how to go about solving it. And I must wonder whether your case is a somewhat recent regression, on the assumption that if this had been happening previously you would have noticed.
Keywords: perf
Andrew, what might we make of this gloda.index_msg warning?
Flags: needinfo?(bugmail)
Summary: Gloda stuck in a loop with "Observed header that claims to be gloda indexed but that gloda has never heard of during compaction" → Gloda stuck in a loop with high CPU and error console gloda.index_msg WARN "Observed header that claims to be gloda indexed but that gloda has never heard of during compaction". Folder repair moves the issue to a different folder.
See Also: → 1349915
I've gone cold turkey on Thunderbird, apologies.
Flags: needinfo?(bugmail)
(In reply to Wayne Mery (:wsmwk, NI for questions) from comment #2)
> And, I must wonder
> whether your case is a somewhat recent regression - on the assumption that
> if this had been happening previously that you would have noticed

Maybe. Just ran into it again yesterday and deleted gloda db once more. It's noticeable when your otherwise silent laptop starts up the fan for more than a short while :)
This bug is not just running up CPU.  It also prevents message filters from running.
Just witnessed it happening for at least the 4th time since reporting this bug. Disabling gloda for now; it's too annoying not to.
See Also: → 1406653
Seeing this bug now on 59.0b1: 25% CPU, 100 MB memory, no disk activity. The browser console shows this message once per second.
2018-02-25 17:15:10	gloda.index_msg	WARN	Observed header that claims to be gloda indexed but that gloda has never heard of during compaction. In folder: mailbox://nobody@Local%20Folders/DNA%20Project sketchy key: 13 subject: Fwd: FW: SNP report from Barry McCain
2018-02-25 17:15:11	gloda.index_msg	WARN	Observed header that claims to be gloda indexed but that gloda has never heard of during compaction. In folder: mailbox://nobody@Local%20Folders/DNA%20Project sketchy key: 13 subject: Fwd: FW: SNP report from Barry McCain

The message above references a Local Folder whose retention setting (via account settings) is "Don't delete." The message is dated 2015 and is neither the oldest nor the newest message in the folder.
Just a thought: this may be related to messages moved or deleted from the inbox before gloda indexing occurs. I'm using FiltaQuilla to trash some types of spam. The trash is not included in the global search. POP3, shared inbox, local folders.

Gloda indexing should log such an error at most once, ignore the entry, and not continuously retry.
This bug is definitely an issue in 60.0b1. No extensions enabled. Very obvious in editor performance: a serious delay before keystrokes appear because of the siphoned-off processor cycles.
Continues to be an issue in 60.0b2.
(In reply to doug2 from comment #12)
> Continues to be an issue in 60.0b2.

We need more useful data than that to make progress :)

bug 1406653 was reported against 56 beta
bug 1362483 was reported against 54 beta (this bug)

So, can anyone reproduce this with version 52, or 53?
Flags: needinfo?(dmccammishjr)
Tell me how to get those versions and I'll try to reproduce the problem with them.
I read bug 1406653 and it sounds like a duplicate to me, but I may not be seeing the nuances. While watching the error console, the red numbers are not sequential and seem to be dynamic - i.e., somehow the number in red changes within a single error message.

After reading bug 1406653, I was able to clear the Gloda error by "repairing" the Inbox folder. (Previously, I shut down TB and restarted it.) I too have a bunch of Local Folders, but they don't seem to be involved. The problem seems to occur when mail is deleted, but that could just be the most common database action.
Ran 54.0b3 for a day, 56.0b4 (auto-updated twice) for a day or more - no observed Gloda issues (but reported crashes). Have downloaded 53.0b2 and will run for a few days.
Flags: needinfo?(dmccammishjr)
On 53.0b2 I'm not seeing the repeated Gloda error, but I do see an error for each received message. Below is the error log transcript.
I will continue to monitor 53.0b2.
IndexedDB UnknownErr: ActorsParent.cpp:594  (unknown)
undefined  Promise-backend.js:920
1523713830754	addons.xpi-utils	WARN	Could not find source bundle for add-on {972ce4c6-7e08-4474-a285-3208198ce6fd}: [Exception... "Component returned failure code: 0x80004005 (NS_ERROR_FAILURE) [nsIFile.persistentDescriptor]"  nsresult: "0x80004005 (NS_ERROR_FAILURE)"  location: "JS frame :: resource://gre/modules/addons/XPIProvider.jsm -> resource://gre/modules/addons/XPIProviderUtils.js :: parseDB :: line 629"  data: no] Stack trace: parseDB()@resource://gre/modules/addons/XPIProvider.jsm -> resource://gre/modules/addons/XPIProviderUtils.js:629 < syncLoadDB()@resource://gre/modules/addons/XPIProvider.jsm -> resource://gre/modules/addons/XPIProviderUtils.js:563 < checkForChanges()@resource://gre/modules/addons/XPIProvider.jsm:3758 < startup()@resource://gre/modules/addons/XPIProvider.jsm:2793 < callProvider()@resource://gre/modules/AddonManager.jsm:272 < _startProvider()@resource://gre/modules/AddonManager.jsm:756 < startup()@resource://gre/modules/AddonManager.jsm:938 < startup()@resource://gre/modules/AddonManager.jsm:3129 < observe()@resource://gre/components/addonManager.js:65
IndexedDB UnknownErr: ActorsParent.cpp:594  (unknown)
UnknownError  indexed-db.js:58:9
undefined  Promise-backend.js:920
UnknownError  indexed-db.js:58:9
2018-04-14 09:20:35	gloda.datastore	ERROR	got error in _asyncTrackerListener.handleError(): 19: constraint failed
  log4moz.js:690
2018-04-14 09:20:35	gloda.datastore	ERROR	got error in _asyncTrackerListener.handleError(): 19: constraint failed

Async statement execution returned with '1', 'no such table: moz_favicons' nsPlacesExpiration.js:691
2018-04-14 10:30:34	gloda.datastore	ERROR	got error in _asyncTrackerListener.handleError(): 19: constraint failed
  log4moz.js:690
2018-04-14 10:30:34	gloda.datastore	ERROR	got error in _asyncTrackerListener.handleError(): 19: constraint failed

Async statement execution returned with '1', 'no such table: moz_favicons'  nsPlacesExpiration.js:691
Async statement execution returned with '1', 'no such table: moz_favicons' nsPlacesExpiration.js:691
2018-04-14 12:11:04	gloda.datastore	ERROR	got error in _asyncTrackerListener.handleError(): 19: constraint failed
  log4moz.js:690
2018-04-14 12:11:04	gloda.datastore	ERROR	got error in _asyncTrackerListener.handleError(): 19: constraint failed

Async statement execution returned with '1', 'no such table: moz_favicons' nsPlacesExpiration.js:691
2018-04-14 12:41:34	gloda.datastore	ERROR	got error in _asyncTrackerListener.handleError(): 19: constraint failed
  log4moz.js:690
2018-04-14 12:41:34	gloda.datastore	ERROR	got error in _asyncTrackerListener.handleError(): 19: constraint failed
Not seeing the Gloda problem in 53.0b2.  Going to 56.0b4.
The gloda problem occurred this morning in 56.0b4 "spontaneously" (I was not using TB at the time; it may have been compacting or receiving).
Repairing the inbox stopped the gloda error repeats. Cancel that! The gloda error repeats restarted shortly after "Repair." Tried again and they immediately restarted again. Had to shut down TB. MS Superfetch went wild for a while; very high disk use.
Is this problem Windows-specific? For other performance reasons, I shut down the "Superfetch" process while running TB 56.0b4 and have not had further problems. Could there be some sort of timing conflict? Now back to 60.0b3. Will continue to monitor.
(In reply to doug2 from comment #20)
> Is this problem Windows-specific?
No, the initial report is on Linux.
Gloda problem exists on 60.0b5.
>> We need more useful data than that to make progress :)

No, you really don't. We don't need the underlying cause fixed. We need the endless retry and logging of an unknown object fixed. If the gloda scan just skipped missing entries and moved on, it wouldn't be hogging the CPU.
(In reply to doug2 from comment #22)
> Gloda problem exists on 60.0b5.

clearly :)

doug2, thanks for your effort and progress trying to get the regression range.  So in summary:
* 53.0b2 you don't see the problem
* 54.0b3 you don't see the problem
* 56.0b4 you see the problem
* 55.0bx you don't mention 

Is that correct?

That should be sufficient for you to further narrow down the regression range using https://mozilla.github.io/mozregression/quickstart.html. I would start with "good" date 2017-03-01 and "bad" date 2017-06-01 (which is 55.0a1, so I'm sort of guessing).
Flags: needinfo?(dmccammishjr)
(In reply to A Capesius from comment #23)
> >> We need more useful data than that to make progress :)
>
> No, you really don't.  We don't need the underlying cause fixed. 

True, having a developer examine some hundreds or thousands of lines of code is one approach. However, it is generally not successful or attractive. It depends how long you want to wait for results: wait for one of the 5-6 developers capable of debugging it to have time to do so, if more critical bugs aren't on their plate (which might be months or years), versus utilizing dozens or hundreds of users who, with a little elbow grease and some guidance, can often narrow down the problem (which can take an hour or less).

So we tend to solve regressions indirectly, not from code examination by developers (who typically cannot reproduce the issue), but by leveraging available user manpower (who can easily reproduce) to get a one day regression range of code changes. We then can typically identify just a few lines of code that changed which are causing the problem.
Correct. I don't think I tested 55.0bx. Note that the problem did occur in 60.0bx, which I'm guessing is an offshoot branch.
I will work the regression range per above dates 2017-03-01 to 2017-06-01.
Ran bisection via mozregression from 2017-02-01 to 2017-06-01 and did not encounter the Gloda issue. Actually, the last daily tested was 2017-05-11 (I think) [i.e., mozregression did not find a daily past 2017-05-11?].  Also of note, I did not encounter the Gloda issue in my parallel 56.0bx install reading/writing daily "real" mail.  During the bisection, I tried to load the daily with mail, but fewer than 50 pieces. I moved a number of messages to trash, compacted the Inbox, etc.  If someone can give guidance on using mozregression with a larger dataset, I will run it again.  I want to take a copy of my "real," very large dataset and run the dailies against it. Does this require using the command line version?  The only active mail would be one test address, but I could pre-populate the Inbox, Trash, etc.?  Need guidance on setting that up. (Stop auto retrieve of mail for all id's except the test one, etc.)
Merike, is this highly reproducible for you?
Flags: needinfo?(merikes.lists)
No, at least not recently. According to timestamps last time may have been on March 23rd as I delete gloda when it bothers me. At the time of reporting it was more like once a month. So if anything it's happening less often than it used to and definitely not often enough to narrow the range :(
Flags: needinfo?(merikes.lists)
>> we tend to solve regressions indirectly, not from code examination by developers (who typically cannot reproduce the issue), but by leveraging available user manpower (who can easily reproduce) to get a one day regression range of code changes. We then can typically identify just a few lines of code that changed which are causing the problem.

Perhaps I wasn't clear enough. My point was that there are two bugs here: whatever is causing gloda to still have moved/deleted items in its index, and the compactor routine going into an endless loop when it can't find an item. Fixing the former is not going to fix the latter, it will just obscure it until next time. And the looping problem is what is causing our issue. This looping bug could go back to the original code for all I know and is just now exposed by some other indexing issue.

While the item going missing could be happening anywhere, the looping bug is clearly localized around the error message it generates, so 10000 lines of code don't need to be searched: comm-central/mailnews/db/gloda/modules/index_msg.js, line 1292. It looks to me like case 2 should delete the gloda entry the way case 1b does, but I'm no expert on the internals. The goal would be to generate one message instead of thousands.
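To make that concrete, here is a rough sketch (not actual gloda code; the helper, the set, and the stand-in logger are made-up names) of a warn-once-and-skip behavior that would keep the log to one line per problem header:

  // Sketch only: warn once per (folder, sketchy key), then skip instead of retrying.
  const warnedSketchyKeys = new Set();

  function warnSketchyOnce(log, folderURI, messageKey, subject) {
    const id = folderURI + "#" + messageKey;
    if (warnedSketchyKeys.has(id)) {
      return false;                       // already reported: skip, don't retry
    }
    warnedSketchyKeys.add(id);
    log.warn(
      "Observed header that claims to be gloda indexed but that gloda has " +
      "never heard of during compaction. In folder: " + folderURI +
      " sketchy key: " + messageKey + " subject: " + subject
    );
    return true;
  }

  // Stand-in logger for the sketch:
  const log = { warn: (msg) => console.warn(msg) };
  warnSketchyOnce(log, "mailbox://nobody@Local%20Folders/Inbox", 13, "Fwd: FW: SNP report"); // logs once
  warnSketchyOnce(log, "mailbox://nobody@Local%20Folders/Inbox", 13, "Fwd: FW: SNP report"); // skipped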
I understood your earlier point.  The code in question isn't at fault unless it changed, which you can determine yourself using annotated mode.  And if it didn't change then you are back to my point of finding the regression range.
Condition dependent bugs don't always manifest up front.  That code section handles other cases of indexing issues, it just doesn't handle this particular one correctly.  It really is two separate issues.
I wish there were a better way to discuss these side issues. I have a question about TBird user population. Is this a POP-only issue? Is POP a very minor part of TBird use?  I switched back to POP many years ago because MAPI was terrible about messages with large attachments. Bottom line - are we worrying about a problem that only affects 1% of TBird users?  I've been using TBird a LONG time and don't want to switch.  Is TBird losing users (as well as developers)?
#2 As soon as I can, I will figure out how to use the command line mozregression and build a large test dataset and run another regression with it.  My last Gloda issue occurrence is noted in comment 22 - 8 days ago. No change in software or the size of my dataset.  Use of TBird about the same. Did not delete and rebuild the index. No idea at this point why issue "went away."
Is this issue POP-only? I don't know. The other protocol is IMAP (not MAPI).

If this only affects 1% of users, then it affects 250,000 users (the estimated user base is ~25 million). (FWIW, over the last several years TB has been gaining users, and in the past year the number of developers increased significantly - but I don't see what that has to do with this bug.)

Side issues which are unrelated to this bug ... see https://www.thunderbird.net/en-US/get-involved/#communication
- support questions should be posted in SUMO https://support.mozilla.org/en-US/questions/thunderbird
- general thunderbird questions https://discourse.mozilla.org/c/thunderbird

Thanks for continuing to look into mozregression.
That's great news. I was concerned we were working on a bug nobody cared about. I will work the mozregression.
Do you have saved search folders?
Flags: needinfo?(alanc)
(In reply to Wayne Mery (:wsmwk) from comment #36)
> Do you have saved search folders?

No
Flags: needinfo?(alanc)
No
Flags: needinfo?(dmccammishjr)
Finally got back to mozregression and figured out how to use it with a large profile, etc. (not hard, I just wasn't looking).
So far, 2017-11-20, 2017-08-04 and 2017-06-09 failed, ** BUT ** the 2017-06-09 error log is different, so I'm attaching it.
With the ability to use a stored profile that happened to have an old mail id with lots to download, received 360 pcs mail, moved (script)to inbox, marked as read, deleted 200+, then compacted.  Appears to create Gloda issue quickly.  
Will continue the regression.
Update: 2017-05-12 failed - immediately on compacting the inbox, where I had just accepted a bunch of mail from another folder (by script), then marked it as read, then deleted most of it.
Note the gloda errors at the end of the file are NOT the same as the repeating gloda errors that consume resources.
2017-04-26 good, 2017-05-04 good - i.e. not able to force failure as above. 
2017-05-07 bad, 2017-05-06 bad, 2017-05-05 did not fail. Attached error log.
My process for making TBird fail with the gloda problem is not perfect. That is, it's easy if it fails, but the "good" cases may just be ones that did not respond to my test process. One time, I followed the process above (comment 40) and it did not fail, but then I copied/moved more messages into the Inbox, deleted some (not the same ones), compacted again, and it failed.
By the way, the current Beta, 60.0b6, fails at the "drop of a hat" with compact.
Note that the original report was on 54.
Good catch. I will back up regression start to something before that and run again. I did try several times to "force" 2017-05-05 (and 05-04) to fail.  Note that 2017-04-26 was "good" also.
Able to fail all 2017-03 builds, plus 04-01, 04-07, 04-19. 2017-02-12, 2017-02-14 and 2017-02-21 failed.
Not able to fail 2017-01-29, 2017-02-09, 2017-02-11.  
Again - doesn't mean they won't fail, but I tried.
Just more testing.  Tested 2017-01-29 and 2017-02-09 again.  No failure.
Also tested a December 2016 build and the 2017-01-01 build. Also no failure. Confidence increasing in the range above.
I know you are already debugging it, but I would like to add a detail: I have noticed that if you put Thunderbird in offline mode, the problem ceases. As soon as you resume online status, Thunderbird starts throwing gloda errors in the error console again.
Interesting instance this evening on 60.0b9. Gloda error running amok. Stopping and starting TBird did not stop the errors, even with a 20-minute interval. Finally used the "repair folder" option on the main Inbox and that stopped it. Seems like that points to an indexing error. The previous suggestion about deleting the index file would work also?
Probably obvious, but if people are complaining about the "editor" being slow, it is really that this bug is taking all the cpu.  Sometimes the editor will be 4 or 5 keystrokes behind, and I'm a slow typist.
Observation/empirical data/whatever. Needed to change the retention on my trash mail from 60 to 90 days.  Since then, no Gloda incidents in 48 hours. Really unusual. Makes me wonder if it's not auto-compression that's triggering the Gloda error, but the process of emptying trash.  I plan to keep watching.
Amend that. Apparently, the compaction process has been modified in 60.0b1 so that I get 1 Gloda warning per mail item, and it is from compaction (not trash). Got 119 messages (warnings) over a few seconds from my (automatic) Local Folders compaction. Major step forward that we only get one warning per item. Partial folder name: "mailbox://nobody@Local%20Folders".
The folder has 179 pieces of mail and no sub-folders, so less than 1 warning per item (119 / 179).
The folder is not often used (not a lot of ins and outs).  I did a manual compaction on the folder with no warnings. Also a "repair" with no warnings or errors.
Ugh. Just had the "old" Gloda error on the Inbox. Infinite repetition of the error on one header (#251) with an orange non-repeating number. The error messages look the same as in comment 54, but the header # does not change. "Repair" stopped the messages. Definitely in compaction, but Inbox compaction seems to encounter infinite errors, while the local folder errors were sequential to the end of the folder. Different copy of the same code?
Based on what? I filed it for 54, therefore it sort of should be an issue since at least 54, including 55 dailies? I don't see it often enough to do any kind of regression range hunt based on daily usage. If anything I'm starting to suspect it might have vanished as I haven't seen it for a long time now on beta.
Flags: needinfo?(merikes.lists)
Using mozregression on the Windows 32-bit build dated 2017-05-05, I was able to make it fail quickly by deleting random blocks of emails from the Inbox and then compacting. (1) Also, I see the Gloda problem in the current builds regularly, although it seems to be slightly less frequent now (see comments 53, 54, 55 above).
(1) I add large blocks of emails copied from trash to Inbox, and then delete random smaller blocks, assuming this will create a "messy" pointer situation. It doesn't take much to make Gloda upchuck.
(In reply to Merike Sell (:merike) from comment #57)
> Based on what? I filed it for 54, therefore it sort of should be an issue
> since at least 54, including 55 dailies? I don't see it often enough to do
> any kind of regression range hunt based on daily usage. If anything I'm
> starting to suspect it might have vanished as I haven't seen it for a long
> time now on beta.

Did you report using 54 beta?  Or daily?  
The last 54 daily was https://archive.mozilla.org/pub/thunderbird/nightly/2017/03/2017-03-07-03-02-16-comm-central/ almost two months before you reported this bug.
Flags: needinfo?(merikes.lists)
Doug, given that you now have a solid procedure, does it fail using today's daily?

And https://archive.mozilla.org/pub/thunderbird/nightly/2017/03/2017-03-07-03-02-16-comm-central/ - the last 54 daily?
Mozregression downloaded 2018-09-19 and it failed with Gloda errors, but differently.  I'll try to find a later build in a while.  Differently = the Gloda errors appear but sometimes are not repeated infinitely and other times are repeated (blue oval with number).  More later.
Mozregression downloaded 2018-09-22 and it failed quickly with the process described earlier.  Only difference is the numbers generated for each warning are blue instead of orange.
(In reply to Wayne Mery (:wsmwk) from comment #59)
> Did you report using 54 beta?  Or daily?  
> The last 54 daily was
> https://archive.mozilla.org/pub/thunderbird/nightly/2017/03/2017-03-07-03-02-
> 16-comm-central/ almost two months before you reported this bug.
I'd guess beta, since I'm usually using that. However, the closest telemetry ping files before and after reporting that I could find in the profile state 52.0b4 and 53.0b2, so there's a chance I made a mistake while reporting and it should have been 53 instead. Also, 54.0b1 seems to have been released after this report. Or maybe I was testing out some 54 daily for a specific bug or fix. Hard to tell now that the update log doesn't cover the period and the relevant crash reports cannot be fetched either.
Flags: needinfo?(merikes.lists)
New evidence (which I hope will help, but I'm not surprised if it doesn't). Gloda errors were being generated at a high rate, visible in the separate browser error console. I killed the main TBird process (X on the window). It continued generating Gloda errors. I assume that means the error is coming from a separate compact/cleanup process of some kind - which you all probably already knew.
This thing drives me nuts some days when I can't even run TBird for a few minutes without it going to 50% or more CPU from Gloda. It slows down the editor keyboard response to a crawl.

What languages, technologies, etc. would I need to learn to jump in and try to debug / fix it? What components? Could I build a stand-alone compaction engine on my PC and feed it a profile somehow to watch it fail, or would I have to have a large environment installed? I have time and I have programmed a lot (mostly C, FORTRAN (you can guess how many years ago), Visual Basic, and a very little Java and JavaScript). I started to learn Python but got sidetracked.
But does this one fail?  https://archive.mozilla.org/pub/thunderbird/nightly/2017/03/2017-03-07-03-02-16-comm-central/ - the last 54 daily
Flags: needinfo?(dmccammishjr)
Severity: normal → major
See Also: → 1348662
Whiteboard: [regression:TB54?/TB55?]
Yes.  It was hard to make that nightly (2017-03-07) fail, but it did eventually.  I got the gloda error a few times, but not the loop repeat until I had been pounding on it (mozregression) for a while.
Flags: needinfo?(dmccammishjr)
Came here due to Wayne's reference to bug 1348662.

While programming with gloda searches, I came across the fact that there are a few (a lot of) search results returned where gloda reports a result but there is no messageheaderID related to the search result. So a fake/outdated message ID is returned. Could that be related? Is there anything I could look for in the code?
Responding to comment 68: encountered the Gloda error on 64.0b3. Captured a bit of the error console stream.
2018-12-10 07:19:36	gloda.index_msg	WARN	Observed header that claims to be gloda indexed but that gloda has never heard of during compaction. In folder: mailbox://dmccammishjr%40gmail.com@mail.gmail.com/Inbox sketchy key: 6 subject: Thoughts from the Frontline - China's Command Innovation
2018-12-10 07:19:36	gloda.index_msg	WARN	Observed header that claims to be gloda indexed but that gloda has never heard of during compaction. In folder: mailbox://dmccammishjr%40gmail.com@mail.gmail.com/Inbox sketchy key: 7 subject: Thoughts from the Frontline - Chinese Growth Spurt
2018-12-10 07:19:36	gloda.index_msg	WARN	Observed header that claims to be gloda indexed but that gloda has never heard of during compaction. In folder: mailbox://dmccammishjr%40gmail.com@mail.gmail.com/Inbox sketchy key: 8 subject: Thoughts from the Frontline - China for the Trade Win?
<<<<<<<<<<<<<snip 3 more copies of above "sketchy key: 8", timestamp :38>
2018-12-10 07:19:40	gloda.index_msg	WARN	Observed header that claims to be gloda indexed but that gloda has never heard of during compaction. In folder: mailbox://dmccammishjr%40gmail.com@mail.gmail.com/Inbox sketchy key: 8 subject: Thoughts from the Frontline - China for the Trade Win?
<<<<<<<<<per earlier comments, the same email reported in all errors, some increment in sketchy key until last few>
<<<<<<<<<***** Repair of InBox" initiated ****>
2018-12-10 07:19:43	gloda.index_msg	ERROR	Exception while attempting to mark message with gloda state afterdb commit [Exception... "Component returned failure code: 0x80070057 (NS_ERROR_ILLEGAL_VALUE) [nsIMsgDBHdr.getUint32Property]"  nsresult: "0x80070057 (NS_ERROR_ILLEGAL_VALUE)"  location: "JS frame :: resource:///modules/gloda/index_msg.js :: PendingCommitTracker_commitCallback :: line 156"  data: no]
log4moz.js:680
2018-12-10 07:19:43	gloda.index_msg	ERROR	Exception while attempting to mark message with gloda state afterdb commit [Exception... "Component returned failure code: 0x80070057 (NS_ERROR_ILLEGAL_VALUE) [nsIMsgDBHdr.getUint32Property]"  nsresult: "0x80070057 (NS_ERROR_ILLEGAL_VALUE)"  location: "JS frame :: resource:///modules/gloda/index_msg.js :: PendingCommitTracker_commitCallback :: line 156"  data: no]
2018-12-10 07:19:43	gloda.index_msg	ERROR	Exception while attempting to mark message with gloda state afterdb commit [Exception... "Component returned failure code: 0x80070057 (NS_ERROR_ILLEGAL_VALUE) [nsIMsgDBHdr.getUint32Property]"  nsresult: "0x80070057 (NS_ERROR_ILLEGAL_VALUE)"  location: "JS frame :: resource:///modules/gloda/index_msg.js :: PendingCommitTracker_commitCallback :: line 156"  data: no]
log4moz.js:680
<snip 2 more identical copies of above exception, without "log4moz.js:680">
log4moz.js:680
2018-12-10 07:19:43	gloda.index_msg	ERROR	Exception while attempting to mark message with gloda state afterdb commit [Exception... "Component returned failure code: 0x80070057 (NS_ERROR_ILLEGAL_VALUE) [nsIMsgDBHdr.getUint32Property]"  nsresult: "0x80070057 (NS_ERROR_ILLEGAL_VALUE)"  location: "JS frame :: resource:///modules/gloda/index_msg.js :: PendingCommitTracker_commitCallback :: line 156"  data: no]
<<<<<<<<<<<<***** Second Repair of InBox" initiated ****>
Expanded exception from above
2018-12-10 07:19:43	gloda.index_msg	ERROR	Exception while attempting to mark message with gloda state afterdb commit [Exception... "Component returned failure code: 0x80070057 (NS_ERROR_ILLEGAL_VALUE) [nsIMsgDBHdr.getUint32Property]"  nsresult: "0x80070057 (NS_ERROR_ILLEGAL_VALUE)"  location: "JS frame :: resource:///modules/gloda/index_msg.js :: PendingCommitTracker_commitCallback :: line 156"  data: no]
log4moz.js:680
CApp_append
resource:///modules/gloda/log4moz.js:680:7
Logger_log
resource:///modules/gloda/log4moz.js:374:7
Logger_error
resource:///modules/gloda/log4moz.js:382:5
PendingCommitTracker_commitCallback
resource:///modules/gloda/index_msg.js:172:9
gloda_ds_pch_handleCompletion
resource:///modules/gloda/datastore.js:52:11
ftc_editFolder
chrome://messenger/content/folderPane.js:2573:5
oncommand
chrome://messenger/content/messenger.xul:1:1
Interesting observation: TBird Gloda failure three times this PM (unusually high frequency). Each time, I corrected it with "Repair Folder" and left the debug window open; this was followed by very little email activity (fewer than 10 pieces of mail, no manual compactions, etc.) and then it failed again spontaneously - as if Repair Folder does not complete the repair?
(In reply to doug2 from comment #67)
> Yes.  It was hard to make that nightly (2017-03-07) fail, but it did
> eventually.  I got the gloda error a few times, but not the loop repeat
> until I had been pounding on it (mozregression) for a while.

How about https://archive.mozilla.org/pub/thunderbird/nightly/2017/02/2017-02-11-03-02-04-comm-central/thunderbird-54.0a1.en-US.win32.installer.exe ?   (tested in bug 1406653)
Flags: needinfo?(dmccammishjr)
See Also: → 1515262
Flags: needinfo?(dmccammishjr)
No "repeating Gloda error" failure on 2017-02-11 so far. As always, continued pounding might trip it.  Lots of Gloda and other errors (file attached) but not the infinitely repeating type.
I've seen this on 64.0b3. I also reported here: bug 1515262

I'm seeing these issues as well. Using version 60.4.0. Error rate in the console is one line per second, for the same mail.

The likely cause in my case is a plugin called Awesome Auto Archive (in fact, I came across this issue while trying to troubleshoot why the plugin stopped archiving my mails months ago).

Apart from using Thunderbird's "archive" function, I am also using the plugin to move mail from my server's draft and sent-mail folders to my local account. The fact that I'm getting those Gloda errors for those two local folders only is hardly a coincidence.

(So this basically is a +1 for comment 10, that this issue can be triggered by using alternative means to move mails between folders.)

I have not seen any Gloda errors in 65.0b3 (20190121) since installation (keeping the error console open 100% of the time). Same for 65.0b2. I do see bug 711204 (While viewing mail from list ...). Since I don't see ANY Gloda errors (much less the infinite repeats), it would appear that the problem is fixed (not just the symptom). If the fix then re-exposed bug 711204, that's OK.
Later today, I'll try to get the build for 65.0b3 into mozregression and beat on it a while. If it breaks, I'll post.

I'm still stuck with this issue in 65.0b3. It is now eating around 70% of the CPU for hours without ever stopping.

Saw this issue in 65.0b4 "in the wild" this morning. Ran the regression against several 2019-01 builds and was able to fail at least 2 (2019-01-25, 2019-01-28). You have to be careful to reset the error filter: apparently, if the Gloda errors are filtered out, they do not affect PC performance as much, giving the false impression that there are no errors.

Saw this again this morning on 66.0b1, with a permanent 70% CPU.
Interestingly, the first line in the Activity window just shows: null

Still the same in 67.0b

Agree. Seems to happen more often now. Happened a few minutes ago while auto-compressing at "wake up."

The combination of the Gloda problem's higher frequency and the new "wakeup" delay is making 67.0b1 unusable.

No need to regression test 67.0b2. It goes into Gloda spin spontaneously and often. Repair folder sometimes stops it for a while.

Aceman, I think testing a try build with a backout of bug 1337872 might be useful. Can you produce one for us?

doug does not reproduce using the 2017-02-11 build per comment 47, and neither does mgoldey from bug 1406653, which implies bug 1337872 may be at fault.

Flags: needinfo?(acelists)

Hmm, so according to comment #47, 2017-02-11 didn't fail, but 2017-02-12 did?
So https://hg.mozilla.org/comm-central/pushloghtml?startdate=2017-02-11&enddate=2017-02-13

Well, I wonder if we can back out https://hg.mozilla.org/comm-central/rev/0186ecd2dfab4e5754de6f88ddd988f4bff990dc from bug 1337872, or whether we re-introduce a feature M-C has removed since and the whole thing will fall apart.

We should look into where the errors from comment #69 and bug 1406653 comment #13 come from.

67.0b3 no change. The new "feature" cleanup process that runs at wake-up is just about guaranteed to set off the Gloda loop.

Alta88, you are poking around in gloda these days - do you know what this error is talking about?

Flags: needinfo?(acelists) → needinfo?(alta88)

I'm using TB 60.6.1 on a Win 10 box. Was responding today to request from Aceman regarding bug 1305207. In testing for that bug I experienced the gloda bug described above. I got the infinite loop going with this msg:

gloda.index_msg WARN Observed header that claims to be gloda indexed but that gloda has never heard of during compaction. In folder: mailbox://pooder@mail.../Inbox sketchy key: 93867544 subject: A service reminder for your Subaru

I deleted that email about my Subaru, which was near oldest in that Inbox, tested, looped again, so deleted the next email... Repeated a few times until it no longer threw that error. (It is curious that the indicated email was never the oldest, but 2 or 3 before oldest.)

I am no longer getting the gloda loop. Seems I stumbled into it and now it won't repeat. Perhaps corrupted db that regenerated, dunno. If want to know more let me know, I may have a few other details that might be of use...

(BTW, I swear by, not at, TB. Won't use anything else.)

(In reply to Jorg K (GMT+2) from comment #87)
> Hmm, so according to comment #47, 2017-02-11 didn't fail, but 2017-02-12 did?
> So https://hg.mozilla.org/comm-central/pushloghtml?startdate=2017-02-11&enddate=2017-02-13
>
> Well, I wonder if we can back out https://hg.mozilla.org/comm-central/rev/0186ecd2dfab4e5754de6f88ddd988f4bff990dc from bug 1337872, or whether we re-introduce a feature M-C has removed since and the whole thing will fall apart.

Yeah, the iterator is gone - Bug 1098412 (Remove iterator and the legacy Iterator constructor) in version 57: https://bugzilla.mozilla.org/show_bug.cgi?id=1098412#c35
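For context, a rough sketch of the two iteration protocols involved (illustrative only, not the actual gloda worker code): the legacy SpiderMonkey protocol removed by bug 1098412 returned values directly from next() and signaled exhaustion by throwing StopIteration, while the ES6 protocol returns { value, done } objects, which appears to be the shape the bug 1337872 rewrite switched to (see the before/after fragments quoted later in this thread).

  // Old, legacy-protocol loop shape (no longer runs in Gecko 57+, shown only as a comment):
  //
  //   try {
  //     msgHdr = headerIter.next();   // returned the header, or threw StopIteration
  //   } catch (ex) {
  //     msgHdr = null;                // iterator exhausted
  //   }
  //
  // ES6-protocol equivalent, runnable today:
  function* headerEnumerator(headers) {
    yield* headers;                    // stand-in for the folder's header enumerator
  }

  const headerIter = headerEnumerator(["hdr-5", "hdr-9", "hdr-13"]);
  let result = headerIter.next();
  while (!result.done) {
    console.log(result.value);         // process the header
    result = headerIter.next();
  }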

Blocks: 1406653
See Also: 1406653

(In reply to :aceman from comment #89)

> Alta88, you are poking around in gloda these days - do you know what this error is talking about?

The 'sketchy key' error is easy to reproduce; I haven't seen any cpu intense looping - it could be because the folder is marked dirty in certain cases. Gloda assumes per folder headerMessageID is unique, which is certainly not guaranteed, and the cause of that error; gloda compaction cleanup assumes messageKey increments by 1, which is also false (for imap at least) and leaves junk in the db. Gloda assuming headerMessageID is unique per folder means if you copy something into another folder, it will only be found in the original folder, on search.

Gloda has no transactional integrity as it listens for changes, and is always 'sweeping things up'. Forget about true db concepts like 2 phased commits in the system as a whole.

Compacting, repairing the folder, and rebuilding the gloda db is unfortunately the only way to achieve a true synced state across the db and msgDatabase and message store.

Edit: clarify headerMessageID.
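To illustrate the described copy-into-another-folder symptom (sketch only, with assumed data shapes and assumed first-wins dedup; gloda's real schema and logic differ):

  // A lookup keyed on headerMessageID alone makes a copy of the same message
  // in a second folder invisible to search, matching the behavior described above.
  const indexByHeaderMessageID = new Map();

  function indexMessage(folderURI, headerMessageID, messageKey) {
    if (!indexByHeaderMessageID.has(headerMessageID)) {
      // Assumed first-wins behavior: the later copy never gets its own entry.
      indexByHeaderMessageID.set(headerMessageID, { folderURI, messageKey });
    }
  }

  indexMessage("mailbox://nobody@Local%20Folders/Inbox", "<abc@example.com>", 13);
  indexMessage("mailbox://nobody@Local%20Folders/Archive", "<abc@example.com>", 42);

  console.log(indexByHeaderMessageID.size);                     // 1
  console.log(indexByHeaderMessageID.get("<abc@example.com>")); // only the Inbox copy is findable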

Flags: needinfo?(alta88)


I am now getting gloda errors. I have updated to TB 60.7.0. I have browser console open with mailnews.database.dbcache.logging.dump and
mailnews.database.dbcache.logging.console set to Show. The following repeats indefinitely until I close TB and restart it.

2019-05-22 23:12:12 gloda.index_msg WARN Observed header that claims to be gloda indexed but that gloda has never heard of during compaction. In folder: mailbox://WallaceWebDesign.../Orders sketchy key: 228 subject: The Salseria: A New Order has Arrived (00110)

I had deleted that email from a folder, emptied the trash, then compacted folders. CPU was normal until I compacted folders, then it immediately rose to 27-32% and stayed there until I closed TB.

Closed then restarted TB. Deleted an email, emptied the trash, then compacted folders. The gloda error did not appear. Go figure.


Please let me know if there are any specific tests or info that would help troubleshoot this.


With the Error Console open, I deleted emails then compacted folders. Here are the console messages leading up to the gloda warning, with duplicates deleted. (I'm routinely getting the NS_ERROR_FAILURE; not sure if that's significant.)

NS_ERROR_FAILURE: Couldn't decrypt string crypto-SDR.js:179

2019-05-23 13:15:50 gloda.index_msg WARN Observed header that claims to be gloda indexed but that gloda has never heard of during compaction. In folder: mailbox://pooder@....com/Inbox sketchy key: 93869679 subject: New airspace concepts floated for ‘nontraditional’ entrants

2019-05-23 13:15:50 gloda.index_msg ERROR Exception while attempting to mark message with gloda state afterdb commit [Exception... "Component returned failure code: 0x80070057 (NS_ERROR_ILLEGAL_VALUE) [nsIMsgDBHdr.getUint32Property]" nsresult: "0x80070057 (NS_ERROR_ILLEGAL_VALUE)" location: "JS frame :: resource:///modules/gloda/index_msg.js :: PendingCommitTracker_commitCallback :: line 156" data: no]

2019-05-23 13:15:50 gloda.index_msg WARN Observed header that claims to be gloda indexed but that gloda has never heard of during compaction. In folder: mailbox://pooder@....com/Inbox sketchy key: 93869679 subject: New airspace concepts floated for ‘nontraditional’ entrants

Jorg, does this answer your questions...

(In reply to Jorg K (GMT+2) from comment #87)
> Hmm, so according to comment #47, 2017-02-11 didn't fail, but 2017-02-12 did?
> So https://hg.mozilla.org/comm-central/pushloghtml?startdate=2017-02-11&enddate=2017-02-13
>
> Well, I wonder if we can back out https://hg.mozilla.org/comm-central/rev/0186ecd2dfab4e5754de6f88ddd988f4bff990dc from bug 1337872, or whether we re-introduce a feature M-C has removed since and the whole thing will fall apart.
>
> We should look into where the errors from comment #69

As described by alta88 in comment 92: https://searchfox.org/comm-central/rev/5a670c59f9004ef9be4874cfbfe57ec2ef3b260f/mailnews/db/gloda/modules/index_msg.js#1250

> and bug 1406653 comment #13 come from.

"mark message with gloda state after db commit": https://searchfox.org/comm-central/rev/5a670c59f9004ef9be4874cfbfe57ec2ef3b260f/mailnews/db/gloda/modules/index_msg.js#151

(note - there is a space missing between "after" and "db")

Is there a test for this situation that is disabled or failing?

Provisionally marking this as regressed by bug 1337872

Flags: needinfo?(jorgk)
Regressed by: 1337872

Did I ask a question or questions? I was wondering whether the requested backout was feasible, and apparently it's not since, as suspected, the underlying M-C feature was removed. I think alta88 gave an interesting insight into the deficiencies of Gloda. Personally, for me the Gloda code is an insurmountable thicket and I'm glad it's still mostly(?) working after all those years.

Flags: needinfo?(jorgk)

I believe there is some external configuration or environmental contribution to this bug. I have two machines (Laptop & Desktop) both running latest Win10, both with TBird as primary mail app, both with mail as (separate) windows. Laptop is running beta releases now but has seen the problem in both beta and released versions. Desktop has NEVER seen the Gloda problem. Both have large numbers of mail items running through daily and large numbers of mail items in trash and in dozens of other folders. I cannot yet point to a significant difference in configuration.

On my TB Error Console, I see:

2019-06-25 13:10:19 gloda.index_msg WARN Observed header that claims to be gloda indexed but that gloda has never heard of during compaction.
In folder: mailbox://nobody@Local%20Folders/Drafts
sketchy key: 90197 subject: Extension signing blog post

thousands and thousands of times. The error console compresses these messages and sums them up, and I often have 1350 identical messages in a second, just to be followed by exactly the same message 1750 times, and so on. That has continued for days now. It's always exactly the same message, same key, same subject. Thousands of times per second.

  • TB 69a1 trunk build from 2019-06-03
  • Linux
  • My production profile

Recovery:

  1. Deleting the mentioned message did not fix it. It just changed the message to ..." key 90197 subject: " (empty subject).
  2. Then, restarting TB fixed it. (But you have to know that the problem exists, as it appears only in the Error Console, yet it still slows down TB dramatically and keeps your computer busy without you knowing why.)

Running Beta 68.0b1. Comment 100 above is correct. The problem is the worst I've seen. Morning wake-up: read 1 message; open 2nd message, message window closes immediately, Gloda runs wild. Sometimes can stop Gloda by "repair folder" on Inbox. Happens many times during the day. Something has aggravated the bug and that may be a clue.

(In reply to doug2 from comment #99)

> I believe there is some external configuration or environmental contribution to this bug. I have two machines (Laptop & Desktop) both running latest Win10, both with TBird as primary mail app, both with mail as (separate) windows. Laptop is running beta releases now but has seen the problem in both beta and released versions. Desktop has NEVER seen the Gloda problem. Both have large numbers of mail items running through daily and large numbers of mail items in trash and in dozens of other folders. I cannot yet point to a significant difference in configuration.

Above is FALSE. I just had not noticed the problem on the desktop since it is not often used. As soon as I brought up the debug console, I could see Gloda running wild. Repair folder on Inbox usually stops it, but not for long.

Running 68.0b3. So sensitive that Gloda runs away almost any time I delete a message. Is there a shortcut to Repair Folder? Seriously.

Running TB 60.8.0 (updated yesterday) and I have the browser error console open with mailnews.database.dbcache.logging.dump and mailnews.database.dbcache.logging.console set to Show. Emptied the trash, then compacted folders. I am NOT getting the Gloda errors. Restarted TB to be sure. Deleted a few more emails, emptied the trash again and compacted folders. Still no Gloda errors. Could it be that it's fixed? I'll check back in if something changes, but for now it appears fixed on my end.

After doing all of that here are the error msgs in error console:

  • While creating services from category 'profile-after-change', could not create service for entry 'calendar-backend-loader', contract ID 'service,@mozilla.org/calendar/backend-loader;1'

  • Use of Mutation Events is deprecated. Use MutationObserver instead. calendar-widgets.xml:512:20

  • TypeError: this.relayout is not a function[Learn More] calendar-multiday-view.xml:597:54

  • NS_ERROR_FAILURE: Couldn't decrypt string crypto-SDR.js:179
    2019-07-11 15:51:07 mailnews.database.dbcache INFO Periodic check of cached folder databases (DBs), count=9
    2019-07-11 15:51:07 mailnews.database.dbcache DEBUG Skipping, DB open in window for folder: Facebook Stuff
    2019-07-11 15:51:07 mailnews.database.dbcache INFO DBs open in a window: 1, DBs open: 8, DBs already closing: 0

Running Beta TB 68.0b4. Held my breath for several hours hoping it was fixed, but have had Gloda run away several times "spontaneously." Almost any time new mail shows up and I start to read & dispose of it. Running 60.8.0 on "production" machine and no Gloda errors so far. Weird to fix production but not beta???

Going to switch from Beta to Production on this machine and run my test sequences to see if I can make it fail.

Guys, Gloda has absolutely not seen any changes or fixes in years, neither on beta nor production. So whatever you're writing here makes no sense or you're not looking at a reproducible case.

Re: Jorg's email. Running mozregression on 2019-07-09. Maybe nobody touched it, but I can't make it fail - yet. Been trying for 45 minutes, which is usually more than sufficient. Also running 60.8.0 for "real" mail. I'll try more tomorrow.

Jorg K: I don't know what to tell you. I had been able to reproduce it over the past two months, and with the 60.8.0 release I am not able to reproduce the errors. As mentioned previously by others, and as can be the case with code, it could be some other bit of the code playing poorly with Gloda, as I'm sure you know better than I. I'm by no means well versed in the TB code and will leave it to the experts. All I know, from a knowledgeable user's perspective, is that TB is not throwing the errors it was previously, which I first noticed on my machine in May (I don't know how long it had been going on before I noticed it).

Nuts! It took several hours, but 60.8 failed this morning "spontaneously." So Jorg is correct: not fixed. Maybe a little more stable.
I really wish I could build a Gloda test bed and "artificially" create the problem. First, I'd like to understand Gloda, but apparently nobody does and everybody is afraid of it. No design documents. No nothing. That doesn't bode well for it ever getting fixed. Sad.

For me 60.8 did not change anything - compaction of the inbox still causes >50% processor load.
Honestly, I do not understand why everybody is blaming Gloda. The code that causes the loop is identified in the index_msg.js file (see comment #97). The code is commented extensively, and it is clear that some exceptional case is handled incorrectly, causing the loop. It should be a few hours of work to read the code, understand what is causing the loop, and then fix it. I can do the first part; however, I have never tried to compile TB, so I will not be able to do the second part (fixing it).
It is really hard to understand why it takes years to fix this issue. Did I misunderstand something?

I too would be happy to read the code if someone will point me to it. I assume that, once the real problem is identified and a fix proposed, someone can tell us how to build (not compile) a test version of TB and run it. I'm assuming the segment is all JavaScript, and it should be (in very relative terms) possible to run it in a sandbox and watch what happens. I may be naive, but I'm experienced in my naivete. ;)) It might take me months, but otherwise it ain't getting fixed.

Shoot, at this point I'd be happy with just a way to append the "repair folder" code at the end of compaction. Even a hot key for repair folder would help.

Just following comment #97
"Well, I wonder if we can back out https://hg.mozilla.org/comm-central/rev/0186ecd2dfab4e5754de6f88ddd988f4bff990dc from bug 1337872, or whether we re-introduce a feature M-C has removed since and the whole thing will fall apart."
Commit 0186ecd2dfab4e5754de6f88ddd988f4bff990dc removed the line "keepIterHeader = false", so once the exceptional case happens and keepIterHeader = true is set on line 1249, there is nothing that sets it back to false. Most likely, this is what is causing the loop.

The code fragment starting with line 1140 should be

  if (!keepIterHeader) {
    let result = headerIter.next();
    if (result.done) {
      headerIter = null;
      msgHdr = null;
      // do the loop check again
      continue;
    }
    msgHdr = result.value;
  } else {
    keepIterHeader = false;
  }

to be equivalent to the logic before commit 0186ecd2dfab4e5754de6f88ddd988f4bff990dc.
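To see why the missing reset matters, here is a small standalone sketch (not the real gloda worker; the merge logic is heavily simplified and every name is an assumption) that mimics the loop shape. With the reset in place the sweep advances and finishes; without it, once a stale DB row is hit the same header is examined forever, which is where the real code keeps emitting the "sketchy key" warning:

  // Sweep the folder's header keys against the gloda DB's key rows.
  function sweep(headerKeys, dbKeys, withReset) {
    const headerIter = headerKeys[Symbol.iterator]();
    let msgKey = null;
    let dbIndex = 0;
    let keepIterHeader = false;
    const matched = [];

    for (let pass = 1; pass <= 100; pass++) {  // safety cap so the sketch terminates
      if (!keepIterHeader) {
        const result = headerIter.next();
        if (result.done) {
          return "finished after " + pass + " passes, matched " + matched;
        }
        msgKey = result.value;
      } else if (withReset) {
        keepIterHeader = false;                // the reset that went missing
      }

      if (dbIndex < dbKeys.length && dbKeys[dbIndex] < msgKey) {
        // DB row refers to a message that no longer exists: advance the DB side
        // and look at the same header again on the next pass.
        dbIndex++;
        keepIterHeader = true;
      } else if (dbIndex < dbKeys.length && dbKeys[dbIndex] === msgKey) {
        matched.push(msgKey);                  // header and DB row agree
        dbIndex++;
      }
    }
    return "hit the 100-pass cap: stuck on the same header";
  }

  console.log(sweep([5, 9, 13], [2, 5, 7, 9, 13], true));
  // finished after 6 passes, matched 5,9,13
  console.log(sweep([5, 9, 13], [2, 5, 7, 9, 13], false));
  // hit the 100-pass cap: stuck on the same header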

Thanks for looking into this. Interesting observation you have there.
We got a hint from the gloda author to remove that keepIterHeader=false branch in bug 1337872 comment 2.
But sure, we can try to put it back and see what happens.

andriusr, you could test if your fix helps directly in the Thunderbird version you have available locally, by backing up omni.ja file in your Thunderbird installation folder, then opening it as a zip file and editing the index_msg.js file with your change.

Sorry, I am on MacOSX and omni.ja fails to unzip. What exact archive type is this file? 7zip says it's a COFF file; however, it fails to unarchive it.

It's a jar file, renaming before unpacking might work for you.

It seems that I would need some help. Running this in Terminal:
jar -xfv ./omni.jar ./
gives output
java.util.zip.ZipException: invalid CEN header (bad signature)
at java.util.zip.ZipFile.open(Native Method)
at java.util.zip.ZipFile.<init>(ZipFile.java:219)
at java.util.zip.ZipFile.<init>(ZipFile.java:149)
at java.util.zip.ZipFile.<init>(ZipFile.java:120)
at sun.tools.jar.Main.extract(Main.java:1004)
at sun.tools.jar.Main.run(Main.java:305)
at sun.tools.jar.Main.main(Main.java:1288)
MacOSX does not provide any options for handling jar files. I have Java 8 update 201 on my computer.

You could also try 7zip or winrar to extract it.

OK, I got all of omni copied into a separate directory and created a new version of index_msg.js with the added else clause. Now what? Can I just replace that JS file somewhere, or do I need to create a new compressed distribution and install it? (Win 10)

Just pack it all back into the omni.ja file (zip format) and replace the original file in Thunderbird installation folder.

Hello,
Thanks for keeping this report alive.
I just tried changing index_msg.js in omni.ja (in modules/gloda) and relaunching Thunderbird 60.7.2.
But no success; Thunderbird still consumes half a CPU after the end of compaction.
I'm not sure I made the modification correctly. Can you provide it as a patch?

The change may not have been picked up by TB because of caching. Try finding the 'startupCache' folder (somewhere in temporary or local application files) and deleting it. Note there may be another one if you also use Firefox; delete the one belonging to your Thunderbird profile.

Hello,
No more success.

Please double-check the code in comment #116 above. TB is crashing with the code added. It is easily possible that I did not make the change correctly. In TB 60.8, the fragment begins at line 1169. The outer "if (keepIterHeader) {" is the one to which we are adding the else clause? I.e., if (keepIterHeader) is true, then do stuff (including an if clause and the msgHdr assignment), else make keepIterHeader false? That doesn't seem to make sense?? I hope I'm just not seeing something.

Yes, the original code was:

  if (!keepIterHeader)
    msgHdr = headerIter.next();
  else
    keepIterHeader = false;

See https://bugzilla.mozilla.org/page.cgi?id=splinter.html&ignore=&bug=1337872&attachment=8836250 .

The code in 60.8 (before changes) is:

  if (!keepIterHeader) {
    let result = headerIter.next();
    if (result.done) {
      headerIter = null;
      msgHdr = null;
      // do the loop check again
      continue;
    }
    msgHdr = result.value;
  }

Yes, but that is the current code, which is said to be bad and to produce this bug.
Before that there was the code I posted; you can see the full change at the link I posted.
It seems andriusr is trying to put back part of the old code, namely the 'keepIterHeader = false;' branch.

(In reply to :aceman from comment #125)

The change may not have been picked up by TB because of caching. Try finding the 'startupCache' folder (somewhere in temporary or local application files) and delete it. Note that another one may exist if you also use Firefox; then delete the one belonging to your Thunderbird profile.

Running TB 68.8.0 on Win10 ver 1903 build 18362.239
First, empty TB trash, compact folders. No Gloda loop.
startupCache folder in Firefox profile.
Delete folder.
startupCache folder in Thunderbird profile.
Delete folder.
Then restart TB. Delete a few emails. Empty trash & compact folder. No Gloda loops.

However, it did throw 3 of the same JS errors, copied below. (This is not the same as the endless "Gloda" loops seen in the past.)

  • 2019-07-12 18:01:33 gloda.index_msg ERROR Exception while attempting to mark message with gloda state afterdb commit [Exception... "Component returned failure code: 0x80070057 (NS_ERROR_ILLEGAL_VALUE) [nsIMsgDBHdr.getUint32Property]" nsresult: "0x80070057 (NS_ERROR_ILLEGAL_VALUE)" location: "JS frame :: resource:///modules/gloda/index_msg.js :: PendingCommitTracker_commitCallback :: line 156" data: no]
    log4moz.js:680
    CApp_append resource:///modules/gloda/log4moz.js:680:7
    Logger_log resource:///modules/gloda/log4moz.js:374:7
    Logger_error resource:///modules/gloda/log4moz.js:382:5
    PendingCommitTracker_commitCallback resource:///modules/gloda/index_msg.js:172:9
    gloda_ds_pch_handleCompletion resource:///modules/gloda/datastore.js:52:11

I will periodically empty trash and compact folders and report if I get any of the "Gloda" loops.

(P.S. I recognize the prior comment that this may or may not have anything to do with Gloda. I don't know. I think at this point we all at least use that as a common reference, as inaccurate as it may be. Apologies if it's inaccurate.)

On my system (MacOSX 10.14.5) my code addition fixed the issue.

  1. With unzip I was able to extract the archive. I should note that unzip produced warnings:
    Archive: omni.jar
    warning [omni.jar]: 47573259 extra bytes at beginning or within zipfile
    (attempting to process anyway)
    error [omni.jar]: reported length of central directory is
    -47573259 bytes too long (Atari STZip zipfile? J.H.Holm ZIPSPLIT 1.1
    zipfile?). Compensating...
    extracting: chrome.manifest
    and it continued with the extraction.
    When I zipped the modified files back, the archive was just 14.4 MB instead of the original file's 48 MB...
  2. It took quite a while to find where the startup cache is located on MacOSX. It is here:
    ~/Library/Caches/Thunderbird/Profiles/myprofile.default/startupCaches
    The Library and default profile folders are hidden on MacOSX, so one should enable visibility of hidden files to find them... Somebody should really update https://wiki.mozilla.org/Thunderbird:Start_Hacking - it is completely useless at the moment.
  3. I put an extra log message in for case 1a, where keepIterHeader = true is set. When running "Compact" on the Inbox, I am seeing many (several hundred) messages like this in the console:
    Exception case 1a In folder: mailbox://nobody@Local%20Folders/Inbox, msgHdr.messageKey key: 13078, idBaseHeader.messageKey key: 389 subject: Po LWOP2019
    Then Compact finishes with the message
    2019-07-13 08:06:27 gloda.indexer WARN Problem during [job:folderCompact id:7 items:0 offset:0 goal:null], bailing: TypeError: msgHdr is null
    and the CPU load drops back to normal after the compact finishes.

Correction to my previous message:
cache is located in
~/Library/Caches/Thunderbird/Profiles/myprofile.default/startupCache

Diff of my patch:

--- Z:/Desktop/Backup/modules/gloda/index_msg.js	Fri Jan 01 01:00:00 2010
+++ Z:/Desktop/TB_fix/modules/gloda/index_msg.js	Sat Jul 13 09:35:18 2019
@@ -1175,6 +1175,8 @@
             continue;
           }
           msgHdr = result.value;
+        } else {
+        	keepIterHeader = false;
         }
       }
 
@@ -1281,6 +1283,11 @@
           // Take another pass through the loop so that we check the
           //  enumerator header against the next message in the gloda
           //  database.
+           this._log.warn("Exception case 1a" +
+                         " In folder: " + msgHdr.folder.URI +
+                         ", msgHdr.messageKey key: " + msgHdr.messageKey +
+                         ", idBaseHeader.messageKey key: " + idBasedHeader.messageKey +
+                         ", subject: " + msgHdr.mime2DecodedSubject);
           keepIterHeader = true;
         }
         // - Case 2
@@ -1289,7 +1296,7 @@
         //  header claiming to be indexed by gloda that gloda does not
         //  actually know about.  This is exceptional and gets a warning.
         else if (msgHdr) {
-          this._log.warn("Observed header that claims to be gloda indexed " +
+          this._log.warn("Case 2 - Observed header that claims to be gloda indexed " +
                          "but that gloda has never heard of during " +
                          "compaction." +
                          " In folder: " + msgHdr.folder.URI +

I need some help. Running 60.8 on Win 10. I have unpacked (unzipped) omni.ja and made the changes per the comments above, and reviewed the changes multiple times. I zipped it back up, renamed it back to .ja, and replaced omni.ja in the Thunderbird directory. Running TB crashes. I must be doing something wrong when repacking omni.ja. Is there a length check? Thanks

To my knowledge and last I looked, omni.ja uses a special ZIP format:
https://developer.mozilla.org/en-US/docs/Mozilla/About_omni.ja_(formerly_omni.jar)
They say: The correct command to pack omni.ja is: zip -qr9XD omni.ja *

If you don't succeed with this, attach a patch to the bug, or "diff text", and I'll build a special version for you on our servers.

Attached patch gloda.patch (obsolete) — Splinter Review

Patch from comment #134. I'll do a try build for Windows.

Assignee: nobody → jorgk
Attached patch gloda.patch (obsolete) — Splinter Review

Fixed bad indentation.

Attachment #9078028 - Attachment is obsolete: true

Looking at the patch again, Andrius, do you think the extra warning is worth keeping, or is it just for debugging? If we keep it, we should adjust it a bit; one says "Exception case 1a", the other "Case 2 - ...". Anyway, we'll worry about this once the fix is confirmed. A build for Windows based on TB 60.8.0 is in progress:
https://treeherder.mozilla.org/#/jobs?repo=try-comm-central&revision=b8821f080327a626c401a612e3ae0e67d3a7a2e5
Will be done in about an hour.

Jorg, sorry for the messy patch; indeed, the wording needs to be fixed. I made changes to both cases just to make sure that the new file is used (since cleaning up the cache is not trivial). Once the fix is confirmed, the warnings can be removed.
On the other hand, if there had been a warning message for case 1a, I believe the issue would have been fixed much faster. While it seems that the loop issue is fixed, I think the compaction logic needs to be checked as well: in the log I am seeing many warnings, and they reappear every time I run compact. It seems that the database indexes are not really fixed (or something is broken on my system; folder repair also does not remove the warnings).

Well, you can attach another diff ;-)

The binary to try is here:
EDIT: DO NOT USE THIS ONE: https://queue.taskcluster.net/v1/task/JUz_exY-Q4WJqa29XZrT9w/runs/0/artifacts/public/build/install/sea/target.installer.exe

However, if you click on
https://treeherder.mozilla.org/#/jobs?repo=try-comm-central&revision=b8821f080327a626c401a612e3ae0e67d3a7a2e5&selectedJob=256441775
you will see an orange X1. Click on it and see that the patch caused a test failure:
TEST-UNEXPECTED-FAIL | comm/mailnews/db/gloda/test/unit/test_index_compaction.js

So we can't accept the patch as it is; someone would need to investigate why it is causing a test failure now.

OK, I've done just that. On trunk, I applied the patch without the additional logging, so just

           }
           msgHdr = result.value;
+        } else {
+          keepIterHeader = false;
         }

And lo and behold, the test passes. So I looked at the log
https://taskcluster-artifacts.net/NhmpHXTFR7KPHKJyNdNJTA/0/public/logs/live_backing.log
and saw
19:15:17 INFO - PID 1428 | 2019-07-14 19:15:17 gloda.index_msg DEBUG Entering folder: mailbox://nobody@Local%20Folders/gabba2
19:15:17 INFO - PID 1428 | 2019-07-14 19:15:17 gloda.indexer DEBUG Exception in batch processing: TypeError: msgHdr is null
19:15:17 INFO - 2019-07-14 19:15:17 gloda.indexer WARN Problem during [job:folderCompact id:6 items:0 offset:0 goal:null], bailing: TypeError: msgHdr is null

So your logging is wrong and makes the whole thing fall over :-(

Let me do another build just with the "real" code change and without the additional logging.

https://treeherder.mozilla.org/#/jobs?repo=try-comm-central&revision=dad52a799972e86b5e0de1792fb96816462a315f

Will post another binary here in a while. I don't recommend using the one before since I assume the JS error will cause unpredictable results.

19:15:17 INFO - PID 1428 | 2019-07-14 19:15:17 gloda.indexer DEBUG Exception in batch processing: TypeError: msgHdr is null
19:15:17 INFO - 2019-07-14 19:15:17 gloda.indexer WARN Problem during [job:folderCompact id:6 items:0 offset:0 goal:null], bailing: TypeError: msgHdr is null

Indeed, my extra log line does not check for msgHdr being null (the same message I was seeing in my log).
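
For reference, a null-guarded variant of that extra warning could look roughly like the sketch below. This is hypothetical helper code for illustration only, not the landed change (the extra logging was dropped from the final patch).

// Hypothetical sketch: the "case 1a" warning from the diff above, guarded so
// it cannot throw when msgHdr (or idBasedHeader) is null -- the TypeError
// seen in the test log.
function warnCase1a(log, msgHdr, idBasedHeader) {
  if (!msgHdr || !idBasedHeader) {
    return; // nothing meaningful to report; avoid the TypeError
  }
  log.warn("Exception case 1a" +
           " In folder: " + msgHdr.folder.URI +
           ", msgHdr.messageKey key: " + msgHdr.messageKey +
           ", idBasedHeader.messageKey key: " + idBasedHeader.messageKey +
           ", subject: " + msgHdr.mime2DecodedSubject);
}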

Many thanks. Now running new "daily." Will report any incidents.

I had a crash during compaction, but it was probably my fault. I have Windows folder protection turned on and needed to set an exception for Daily. It also did not allow moving a message from the inbox to a local folder; now corrected.

So far, so good. I did a manual Compact on the Inbox and got 2 pairs of identical errors (so much better than before).

2019-07-14 20:03:18 gloda.index_msg ERROR Exception while attempting to mark message with gloda state afterdb commit [Exception... "Component returned failure code: 0x80070057 (NS_ERROR_ILLEGAL_VALUE) [nsIMsgDBHdr.getUint32Property]" nsresult: "0x80070057 (NS_ERROR_ILLEGAL_VALUE)" location: "JS frame :: resource:///modules/gloda/index_msg.js :: PendingCommitTracker_commitCallback :: line 156" data: no]
log4moz.js:680
2019-07-14 20:03:18 gloda.index_msg ERROR Exception while attempting to mark message with gloda state afterdb commit [Exception... "Component returned failure code: 0x80070057 (NS_ERROR_ILLEGAL_VALUE) [nsIMsgDBHdr.getUint32Property]" nsresult: "0x80070057 (NS_ERROR_ILLEGAL_VALUE)" location: "JS frame :: resource:///modules/gloda/index_msg.js :: PendingCommitTracker_commitCallback :: line 156" data: no]

2019-07-14 20:03:18 gloda.index_msg ERROR Exception while attempting to mark message with gloda state afterdb commit [Exception... "Component returned failure code: 0x80070057 (NS_ERROR_ILLEGAL_VALUE) [nsIMsgDBHdr.getUint32Property]" nsresult: "0x80070057 (NS_ERROR_ILLEGAL_VALUE)" location: "JS frame :: resource:///modules/gloda/index_msg.js :: PendingCommitTracker_commitCallback :: line 156" data: no]
log4moz.js:680
2019-07-14 20:03:18 gloda.index_msg ERROR Exception while attempting to mark message with gloda state afterdb commit [Exception... "Component returned failure code: 0x80070057 (NS_ERROR_ILLEGAL_VALUE) [nsIMsgDBHdr.getUint32Property]" nsresult: "0x80070057 (NS_ERROR_ILLEGAL_VALUE)" location: "JS frame :: resource:///modules/gloda/index_msg.js :: PendingCommitTracker_commitCallback :: line 156" data: no]

(In reply to doug2 from comment #146)

Many thanks. Now running new "daily." Will report any incidents.

It's TB 60.8.0 ESR plus the fix here. Since it's a special build, it says "Daily", but it's not a trunk build (trunk is currently TB 70).

Severity: major → critical
Summary: Gloda stuck in a loop with high CPU and error console gloda.index_msg WARN "Observed header that claims to be gloda indexed but that gloda has never heard of during compaction". Folder repair moves the issue to a different folder. → Gloda loops high CPU. Error console gloda.index_msg WARN "Observed header that claims to be gloda indexed but that gloda has never heard of during compaction". Activity mgr: "Determining which messages to index". Repair moves issue to a different folder.

I've been running the patched version of 60.8 (thanks to Jorg K) for about 24 hours with no loops. The debug console shows 6 "gloda index message WARN" incidents, 2 "gloda.collection ERROR caught exception ..." and 1 "gloda.index_msg ERROR Exception while attempting ...".
A "manual" folder compact resulted in the 6th WARN incident this morning, with no loop.
I strongly recommend including this patch in beta and in the next possible release of TB.

Note that the gloda warning / error problem itself still exists, but the infinite loop and high CPU issue appears to be resolved. (Ref comment 30, etc.)

OK, let's take this without the additional logging.

Attachment #9078029 - Attachment is obsolete: true
Attachment #9078449 - Flags: review+
Attachment #9078449 - Flags: approval-comm-esr60+
Attachment #9078449 - Flags: approval-comm-beta+

Pushed by mozilla@jorgk.com:
https://hg.mozilla.org/comm-central/rev/97c198f44bf5
Fix Gloda hang. r=jorgk DONTBUILD

Status: NEW → RESOLVED
Closed: 5 years ago
Resolution: --- → FIXED
Target Milestone: --- → Thunderbird 70.0

I am the initial reporter for Bug 1406653 which likely has the same root cause re: endless error messages, and is still present in the current 68.8.0. I would be happy to test this patched version of 68.8.0 against Bug 1406653 if someone could please direct me to a downloadable version of it. (I'm just a civilian, not a software developer....) Thanks.

See comment #145 for a Windows installer based on TB 60.8.0. I'll ship it in TB 68 beta 5 in the next few days.

Now running trial Windows build (32b) on two machines. Will switch to 68.0b5 when available. MANY thanks to Jorg K.

I can't find the bug report for the basic gloda error. I've read a bunch of gloda bugs and compact-folder bugs, but the error we are seeing did not seem to be there.

Reporting that the patch in comment 145 resolves the repetitive error reporting noted in Bug 1406653 (Hurray! Thank you!) but not the underlying gloda problem.

Doug2, I believe that the gloda error here is the same one as reported in Bug 1406653. In a nutshell, the sqlite database refers to message keys that no longer match messages in the mail store. Something like this:

2019-07-16 14:05:03 gloda.index_msg WARN Observed header that claims to be gloda indexed but that gloda has never heard of during compaction. In folder: mailbox://[user]@mail.[domain].[tld]/Inbox/purchases sketchy key: 49818944 subject: Your AmazonSmile order of Chocolate Kennedy Half... and 2 more items.

The referenced message about chocolate Kennedy Half dollars very much does exist.

Deleting the sqlite database forces it to regenerate, and then the errors go away until messages are deleted or moved. Sooner or later, the db gets borked again.
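
To illustrate what that mismatch looks like, here is a tiny sketch. The object shapes are invented for illustration (not gloda's actual schema); the key values are borrowed from the case-1a log line earlier in this bug.

// Hypothetical illustration of the reported state: the folder header still
// claims to be gloda indexed, but gloda's row records a different, stale
// messageKey, so the lookup during compaction misses ("sketchy key").
const folderHeader = {
  folderURI: "mailbox://nobody@Local%20Folders/Inbox",
  messageKey: 13078,   // key in the folder database after compaction
  glodaId: 4242,       // hypothetical id; the header says "I am indexed"
};

const glodaRow = {
  folderURI: "mailbox://nobody@Local%20Folders/Inbox",
  messageKey: 389,     // stale key recorded before messages were moved/compacted
  glodaId: 4242,
};

const consistent =
  folderHeader.folderURI === glodaRow.folderURI &&
  folderHeader.messageKey === glodaRow.messageKey;

console.log(consistent
  ? "keys agree"
  : "header claims to be gloda indexed but gloda has never heard of it");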

This may or may not be related. I am currently working with gloda in an add-on. In my collection, I am finding many messages where glodamsg.messageheaderID is null. According to old documentation, that state is allowed, but it could still wreak havoc on compaction or other actions.
Reading 'Observed header that claims to be gloda indexed but that gloda has never heard of during compaction': that could be another way of saying that the header is null.
In my case, gloda returns all the db entries, but with an empty messageheaderID. So it claims the message is indexed but would not be able to find it in the message store.
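
If it helps anyone reproduce that observation, a rough add-on-side check could look like the sketch below. This is hypothetical: the collection object and the property name are taken from this comment as written, not verified against the current gloda API.

// Hypothetical sketch: count collection items whose header link is missing,
// using the property name as reported in this comment.
function countUnlinkedGlodaMessages(collection) {
  let unlinked = 0;
  for (const glodaMsg of collection.items) {
    if (!glodaMsg.messageheaderID) {
      unlinked++;
    }
  }
  return unlinked;
}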

(In reply to MHW from comment #131)


Just reporting that I've had the so-called Gloda loop again today for the first time in 25 days. The only thing ostensibly different this time is that I had deleted a large number of emails (500+). I did have the error console open before I deleted and compacted folders. I've been watching it for the past few weeks and had no issues until today. I noticed it's been very quiet here, with no posts for 3 weeks...

No instances here. Two machines running the current beta with the fix. I have not gone this long without a loop in 2 years.

(In reply to MHW from comment #160)


Just checking in... Still getting Gloda loops when compacting folders. It is one particular POP account; the other 2 POP accounts don't trigger it. Closing TB of course stops it; otherwise it runs endlessly, consuming about 33% of CPU. I will be interested to see whether 60.9 resolves it and will of course report back once that's released and installed. Thanks to everyone working on the issue. Here's the first line of errors:
2019-08-18 13:12:33 gloda.index_msg WARN Observed header that claims to be gloda indexed but that gloda has never heard of during compaction. In folder: mailbox://p...@ mail_xmission_com/Inbox sketchy key: 93871593 subject: Touching Base

Still getting Gloda loops when compacting folders

You'll have to say which precise Thunderbird version you are using. If you're using TB 60.8, then of course you would still see the bug, because it doesn't have the fix. You need at least TB 68 beta 5.

Given that TB 60.9 might be the last TB 60, it would be good to test this (on Daily or beta) before it's released in TB 60.9, especially if you regularly see the bug, because most of us do not and cannot verify the fix.

This was tested by the relevant reporters on TB 60.8 + the fix; see around comment #145.

Just installed TB 60.9.0 (32-bit). Emptied trash. Compacted. No issues. Thanks to all who have been working on the issue. You are appreciated!

I received an update to the 68.0 release.
I launched a compact operation; CPU usage was 9% during compaction, then returned to 0%.
Thanks all.

(In reply to MHW from comment #167)

Just installed TB 60.9.0 (32-bit). Emptied trash. Compacted. No issues. Thanks to all who have been working on the issue. You are appreciated!

Still not seeing any issues with periodic empty trash and compact. Thanks again to all who have worked on this issue. :-)
