Bug 1726319 Comment 175 Edit History

Note: The actual edited comment in the bug view page will always show the original commenter’s name and original timestamp.

Gene,

Why folderCache.json holds open many more DB file handles than panacea.dat, and why they are not closed, remain a mystery; I could not figure it out either.

Does an nsMsgDatabase::Close exist (similar to fclose for fopen) that is either not called to close an opened DB file when it is no longer needed, or that fails to close the file without raising an error? Maybe, if such a function is used, it is not applied to the right file pointer, or it runs in a different process or thread that does not have access to the opened file pointer?
If such a function exists, it could be interesting to look at what it closes and whether it succeeds in doing so...
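
To make the failure modes I have in mind concrete, here is a minimal, purely hypothetical C++ sketch; MsgDatabaseLike is invented for illustration and is not Thunderbird's actual nsMsgDatabase. The point is that a Close() only helps if every caller invokes it and checks its result.

```cpp
#include <cstdio>

// Hypothetical stand-in for a message database object that owns a file handle.
class MsgDatabaseLike {
 public:
  bool Open(const char* path) {
    mFile = std::fopen(path, "rb");
    return mFile != nullptr;
  }

  // If callers forget to call this, the OS handle stays open until the
  // object is destroyed, or forever if something keeps the object alive.
  bool Close() {
    if (!mFile) {
      return true;  // already closed, nothing to do
    }
    int rv = std::fclose(mFile);
    mFile = nullptr;
    // If rv != 0 and the caller ignores the return value, the failure is silent.
    return rv == 0;
  }

  ~MsgDatabaseLike() { Close(); }  // last-resort cleanup

 private:
  std::FILE* mFile = nullptr;
};
```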

Or does no such function exist, and the garbage collector (or some other mechanism) is relied on to close opened files that are no longer needed, and it either never does so (thinking the files are still in use), does so via the wrong process and fails silently, or does so but not fast enough?
Are DB files opened faster than they are closed, if they are closed at all?
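
Here is an equally hypothetical sketch of the "opened faster than closed" case, assuming a cache that never evicts its entries; the names are invented and this is not the real folder cache code:

```cpp
#include <cstdio>
#include <map>
#include <string>

// Assumed global cache, for illustration only.
std::map<std::string, std::FILE*> gFolderCache;

std::FILE* GetDatabaseHandle(const std::string& path) {
  auto it = gFolderCache.find(path);
  if (it != gFolderCache.end()) {
    return it->second;  // reuse the already-open handle
  }
  std::FILE* f = std::fopen(path.c_str(), "rb");
  if (f) {
    // Nothing ever evicts or fcloses this entry, so every new folder adds
    // one open handle until the per-process file-descriptor limit is hit.
    gFolderCache[path] = f;
  }
  return f;
}
```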

I think that with your reproduction tests and logging you are on the path to discovering why DB files are opened and not closed before the limit is reached, which seems to be the root cause of the issue.
While your fix may be a nice workaround that temporarily addresses the issue by extending the limit that is being hit, a follow-up bug may need to be raised to investigate why the limit is reached in the first place.
I suppose that may be your plan, unless it is figured out altogether here.
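
Just to illustrate what "extending the limit" amounts to in practice, here is a small POSIX-only sketch that raises the soft RLIMIT_NOFILE up to the hard limit; it buys headroom but does not address whatever is leaving the DB handles open:

```cpp
#include <cstdio>
#include <sys/resource.h>

int main() {
  struct rlimit rl;
  if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
    std::perror("getrlimit");
    return 1;
  }
  std::printf("open-file limit: soft=%llu hard=%llu\n",
              (unsigned long long)rl.rlim_cur,
              (unsigned long long)rl.rlim_max);

  rl.rlim_cur = rl.rlim_max;  // raise the soft limit as far as allowed
  if (setrlimit(RLIMIT_NOFILE, &rl) != 0) {
    std::perror("setrlimit");
    return 1;
  }
  return 0;
}
```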
