Closed Bug 410747 Opened 17 years ago Closed 16 years ago

IMAP code deadlocks when it can't reach the network

(MailNews Core :: Networking: IMAP, defect, P4)

(Not tracked)

(Reporter: bzbarsky, Assigned: bugmil.ebirol)

(Keywords: dogfood)

(2 files, 5 obsolete files)

This morning, when my DSL connection died, mailnews deadlocked the very next time I tried to view a message (after I dismissed the dialog it popped up saying it couldn't load it).

I killed the process with -SEGV, and the resulting breakpad incident is <>.  You want to view the frames, and then show all threads.  Another similar hang from later today is <>

The problem is that on the main thread we're in the middle of running JS for an event handler:

1  	ClaimScope  	 mozilla/js/src/jslock.c:525
2 	js_LockScope 	mozilla/js/src/jslock.c:1088
3 	js_LockObj 	mozilla/js/src/jslock.c:1246
4 	js_LookupPropertyWithFlags 	mozilla/js/src/jsobj.c:3279
5 	js_LookupProperty 	mozilla/js/src/jsobj.c:3248
6 	js_FindProperty 	mozilla/js/src/jsobj.c:3442
7 	js_Interpret 	mozilla/js/src/jsinterp.c:4184
8 	js_Invoke 	mozilla/js/src/jsinterp.c:1378
9 	nsXPCWrappedJSClass::CallMethod(nsXPCWrappedJS*, unsigned short, XPTMethodDescriptor const*, nsXPTCMiniVariant*) 	mozilla/js/src/xpconnect/src/xpcwrappedjsclass.cpp:1441


1  	ClaimScope  	 mozilla/js/src/jslock.c:525
2 	js_GetSlotThreadSafe 	mozilla/js/src/jslock.c:599
3 	JS_GetGlobalForObject 	mozilla/js/src/jsapi.c:1707
4 	XPCWrappedNativeScope::FindInJSObjectScope(XPCCallContext&, JSObject*, int) 	mozilla/js/src/xpconnect/src/xpcwrappednativescope.cpp:705

while at the same time on the IMAP thread we're running JS because we fired an onload event(!):

1  	ClaimScope  	 mozilla/js/src/jslock.c:525
2 	js_GetSlotThreadSafe 	mozilla/js/src/jslock.c:599
3 	JS_ResolveStandardClass 	mozilla/js/src/jsapi.c:1485
4 	nsWindowSH::NewResolve(nsIXPConnectWrappedNative*, JSContext*, JSObject*, long, unsigned int, JSObject**, int*) 	mozilla/dom/src/base/nsDOMClassInfo.cpp:5717

The real issues here might be twofold:

First, IMAP code is firing a DOM event on a random thread like so:

39 	nsLoadGroup::RemoveRequest(nsIRequest*, nsISupports*, unsigned int) 	mozilla/netwerk/base/src/nsLoadGroup.cpp:688
40 	nsImapMockChannel::Close() 	mozilla/mailnews/imap/src/nsImapProtocol.cpp:7982
41 	nsImapProtocol::CloseStreams() 	mozilla/mailnews/imap/src/nsImapProtocol.cpp:1002
42 	nsImapProtocol::TellThreadToDie(int) 	mozilla/mailnews/imap/src/nsImapProtocol.cpp:1091
43 	nsImapProtocol::CreateNewLineFromSocket() 	mozilla/mailnews/imap/src/nsImapProtocol.cpp:4370
44 	nsImapProtocol::EstablishServerConnection() 	mozilla/mailnews/imap/src/nsImapProtocol.cpp:1248
45 	nsImapProtocol::ProcessCurrentURL() 	mozilla/mailnews/imap/src/nsImapProtocol.cpp:1370
46 	nsImapProtocol::ImapThreadMainLoop() 	mozilla/mailnews/imap/src/nsImapProtocol.cpp:1192

This is guaranteed to break things, since it touches the DOM from off the main thread.  We need to fix this.
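The general shape of the fix being asked for - never touch main-thread-only state from the protocol thread; hand the work to the main thread instead - can be sketched in plain C++. This is a hypothetical stand-in for Mozilla's event-target machinery, not the actual nsIEventTarget API:

```cpp
#include <deque>
#include <functional>
#include <mutex>

// Sketch: the IMAP thread never touches main-thread-only state (the
// DOM) directly; it posts a task that only the main thread drains.
class MainThreadQueue {
public:
  // May be called from any thread, including the IMAP thread.
  void Dispatch(std::function<void()> task) {
    std::lock_guard<std::mutex> lock(mMutex);
    mTasks.push_back(std::move(task));
  }

  // Must be called only on the main thread; runs the queued tasks there.
  void ProcessPending() {
    std::deque<std::function<void()>> tasks;
    {
      std::lock_guard<std::mutex> lock(mMutex);
      tasks.swap(mTasks);  // take the batch under the lock, run it outside
    }
    for (auto& task : tasks)
      task();
  }

private:
  std::mutex mMutex;
  std::deque<std::function<void()>> mTasks;
};
```

Nothing posted via Dispatch runs until the main thread drains the queue, so the DOM-equivalent state is only ever touched on one thread.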

That said, shaver thinks the deadlock is also due to the request model not being used in some places where it should perhaps be...  Not sure where those are in this case, because I'm still not clear on when exactly we need to be using the request model.  We definitely do the JS_ResolveStandardClass call inside a request, so the problem, if there is one, would have to be in the code running on the main thread.
Note that the last time I was poking at deadlocks in IMAP code, I added this comment on CloseStreams:

981 // called from UI thread.
982 // XXXbz except this is called from TellThreadToDie, which can get called on
983 // either the UI thread or the IMAP protocol thread, per comments.

(the first line was there before me).  That case is _exactly_ the case we're hitting here: TellThreadToDie being called from the protocol thread.
Flags: blocking1.9?
Not sure we need to block on this for 1.9 - but I don't know the SM/TBird release schedules well enough to know if they'll do a release off of 1.9 or a 1.9.x.
Flags: wanted1.9+
Flags: blocking1.9?
Flags: blocking1.9-
Priority: -- → P4
It doesn't so much need to block 1.9 as TB3, and I don't know what those will be based on either...

For what it's worth, I was hitting this about twice a day, so I've stopped using trunk mailnews as a result.
Flags: blocking-thunderbird3?
FWIW, I upgraded to trunk TB recently and am getting this often too (about every time I switch WiFi networks, so ~2 times a day).
err, you're not supposed to let dom objects be touched by other threads.

i had a patch somewhere that enforced this (it'd break venkman and domi, but at least this couldn't happen)....
as for the request model, assuming spidermonkey is using CHECK_REQUEST completely, then anyone w/ a debug build should instantly recognize if the request model is being violated (because it will trigger instant death).

JS_BeginRequest is present

DefinePropertyIfFound (called from spidermonkey in a request) is called from:
JS_STATIC_DLL_CALLBACK(JSBool) XPC_WN_NoHelper_Resolve
JS_STATIC_DLL_CALLBACK(JSBool) XPC_WN_ModsAllowed_Proto_Resolve

dolske: how about running tb-dbg ? :) -- [or tb-nightly-w32 if there's a symbol server]
Note that this initially got filed in bug 330273 almost two years ago.  Also note that this is also causing what look like memory-corruption crashes: bug 422178.
Blocks: 422178
Depends on: 330273
Severity: normal → critical
Flags: wanted-thunderbird3.0a1?
Flags: blocking-thunderbird3?
Flags: blocking-thunderbird3+
Keywords: dogfood
Flags: blocking-thunderbird3.0a1?
Blocking 3.0a1+.
Flags: blocking-thunderbird3.0a1? → blocking-thunderbird3.0a1+
Flags: wanted-thunderbird3.0a1?
I have tried a couple of things, including proxying m_mockChannel (m_channelListener is already a proxy) and delegating the function call to the main thread using the event queue mechanism. Although the event queue approach is not the right thing to do, both methods work and prove Boris' comments. The only problem is that I got the following assertions both ways:

###!!! ASSERTION: nsStreamListenerTee not thread-safe: '_mOwningThread.GetThread() == PR_GetCurrentThread()', file /Users/ebirol/Projects/mozilla/netwerk/base/src/nsStreamListenerTee.cpp, line 43
###!!! ASSERTION: nsWeakReference not thread-safe: '_mOwningThread.GetThread() == PR_GetCurrentThread()', file nsWeakReference.cpp, line 146

So, I tried another approach and proxied the nsIImapProtocol impl in TellThreadToDie instead:

nsImapProtocol::TellThreadToDie(PRBool isSafeToClose)
{
  nsresult rv = NS_OK;
  nsCOMPtr<nsIImapProtocol> imapProtoProxy = nsnull;
  rv = NS_GetProxyForObject(NS_IsMainThread() ? nsnull : NS_PROXY_TO_MAIN_THREAD,
                            NS_GET_IID(nsIImapProtocol),
                            (nsIImapProtocol*)this, NS_PROXY_SYNC,
                            (void**)getter_AddRefs(imapProtoProxy));
  //XXXeb then what?
  //if (NS_FAILED(rv))

As you might guess, DoTellThreadToDie is the original method. Not sure whether the NS_IsMainThread() ? nsnull : NS_PROXY_TO_MAIN_THREAD expression is necessary, but based on NS_GetProxyForObject's documentation it should be ok. Currently I am not sure what to do if rv indicates failure.

It looks like this approach works fine and doesn't cause any assertion. Before I submit a patch, I wanted to get your opinions...

fwiw, imo the only proper time/place to create proxies is before you send things to a thread. i.e. you should get and store your proxy (PROXY_ALWAYS) very early as a member, and simply use it here where you need it.

this means you don't have to worry about:
//XXXeb then what?

because if you fail to create the proxy at the beginning, you simply refuse to create the thread.
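The create-proxies-before-the-thread rule above can be sketched like this. Plain C++, with std::function standing in for an XPCOM proxy; Protocol, Start, and MainThreadProxy are made-up names for illustration:

```cpp
#include <functional>
#include <thread>
#include <utility>

// Stand-in for the result of NS_GetProxyForObject.
using MainThreadProxy = std::function<void()>;

struct Protocol {
  MainThreadProxy mCloseStreamsProxy;  // stored as a member, created early
  std::thread mThread;

  // Acquire the proxy *before* spawning the worker. If proxy creation
  // fails there is no thread yet, so there is no "then what?" error
  // path inside the thread to handle.
  bool Start(MainThreadProxy proxy) {
    if (!proxy)
      return false;                    // refuse to create the thread
    mCloseStreamsProxy = std::move(proxy);
    mThread = std::thread([this] {
      // ... protocol loop; on fatal error, use the pre-made proxy ...
      mCloseStreamsProxy();
    });
    return true;
  }

  void Join() {
    if (mThread.joinable())
      mThread.join();
  }
};
```

The point is purely about ordering: by the time the worker thread exists, the proxy it will need already does, so the worker never has to cope with proxy-creation failure.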
When I tried to fix this, I tried to add a proxied method to nsIImapServerSink, CloseProtocolStreams, to make sure that the closing of the streams happened on the UI thread. That seemed to work, but I ran into deadlock problems in RetryUrl. I don't remember if that was a related issue or not. In general, I think it would be better to use the existing proxied interfaces, if possible, instead of doing your own ad hoc proxy stuff, but perhaps timeless disagrees.
If I recall correctly, IMAP does its own proxying because that code was written before nsISupports proxies existed.  I also seem to remember hearing that the IMAP custom proxying stuff was lighter weight.  Are those things correct?

Possibly relevant here is the bigger picture question about whether we think switching to nsISupports proxying would make the code more maintainable, if for no other reason than the fact that people new to the IMAP code wouldn't have to deal with a separate proxying implementation there.  Thoughts?
I think you're thinking of the hand-rolled proxying we did - I got rid of all that. Now we use xpcom proxying
David, do you suggest using something similar to the following instead:

      // grab a monitor so m_mockChannel doesn't get cleared out
      // from under us.
      nsAutoCMonitor mon(this);
      if (m_mockChannel)
      {
        // Proxy the release of the channel to the main thread.  This is
        // something that the xpcom proxy system should do for us!
        nsCOMPtr<nsIThread> thread = do_GetMainThread();
        nsIImapMockChannel *doomed = nsnull;
        m_mockChannel.swap(doomed);
        NS_ProxyRelease(thread, doomed);
      }

oh, i have no opinion about which interfaces you use. that's an implementation detail.

just don't create proxies late, create them before you create threads. that's all i ask.
Emre, I was thinking of adding a method to nsIImapServerSink, and using that to close streams, so that the imap thread can make streams get closed on the ui thread.

Maybe I'm unclear on what Dan was saying, but all of the sink interfaces are used by the imap thread to call into the ui thread, automatically using proxying. I thought that was called nsISupports proxying, and that's the only kind of proxying the imap code does.

The CloseStreams() method releases lots of ImapProtocol members such as m_channelListener, m_channelContext, m_server, etc. If I move the CloseStreams() method into ImapServerSink (which is implemented by nsImapIncomingServer), then I have to pass the nsImapProtocol instance pointer to ImapServerSink::CloseStreams() as an argument, and have to declare ImapServerSink as a friend class of ImapProtocol.

First, do I understand your suggestion correctly? If so, is there a better way to implement it (one that doesn't increase the class coupling, perhaps)?
I have spent some time understanding what's going on under the hood of the imap protocol implementation. These are my findings related to this particular bug:

o nsImapProtocol is an active class that runs its own thread internally to manage the server communication. The UI thread creates an instance of this class for each imap folder opened by the user. The number of instances, and therefore the number of threads, is limited to 5. After this limit is reached, the UI recycles the connections.

o The TellThreadToDie() method is an interface method of the imap protocol class that allows the UI thread to kill the protocol's internal thread when necessary. This method is also used by the imap protocol thread to kill itself when an error occurs - connection error, etc.

o The TellThreadToDie() method calls the CloseStreams() method internally to release the resources used by this thread.

++ When TellThreadToDie() is executed by the imap protocol thread instead of the UI thread, the system crashes.


Since this is a blocker and we don't have much time to wait, I propose the attached patch, which simply dispatches every TellThreadToDie call to the UI thread.
This approach should be enough to fix every possible thread boundary problem we might get in such a context.

Some QA time is needed after it lands on trunk.
Attachment #315658 - Flags: superreview?(dmose)
Attachment #315658 - Flags: review?(bienvenu)
Emre, your diagnosis is basically correct. But the solution doesn't really use the same mechanisms as the rest of the imap code with respect to handling cross-threading issues, i.e., xpcom proxying.  Is there a reason you didn't use xpcom proxying? In your previous comment, you worried about having to make nsImapProtocol and nsImapIncomingServer friend classes, but you could simply add a method to nsIImapProtocol.idl to do the actual work...
Actually, my first patch candidate (comment #9) was standard xpcom proxying via a new method added to nsIImapProtocol.idl, but it didn't get any love. The last patch is a lightweight alternative to the previous one - essentially the same idea, but a different implementation.

Would you be ok with a solution that makes all internal TellThreadToDie() calls through a proxied imapprotocol object? Or do you suggest adding a CloseStreams()-like method to the idl?
No, I didn't intend to proxy through nsImapProtocol at all - I would use the server sink to proxy over to the UI thread to close the streams, and pass in the nsIImapProtocol interface pointer, and add a method to the nsIImapProtocol interface to do the cleanup. I can try to whip up a patch to demonstrate what I mean...
I definitely understand your approach, and will do it that way. Given that you have already tried this and have doubts, I suggest adding a method like KillThread() to the serversink and calling imapprotocol::TellThreadToDie() instead.


one perhaps easy thing to do is simply proxy release the mock channel to the ui thread, and let the ui thread do the close stream when the mock channel gets deleted.

Is proxy releasing synchronous or async?
> one perhaps easy thing to do is simply proxy release the mock channel to the ui
> thread, and let the ui thread do the close stream when the mock channel gets
> deleted.

Agreed, but the other null assignments in the CloseStreams() method cause too many "not thread-safe" assertions when it is called by the protocol thread. Is it ok to ignore them?

> Is proxy releasing synchronous or async?
It's a parameter; imapprotocol.cpp uses the NS_PROXY_SYNC | NS_PROXY_ALWAYS arguments.

Note: I am going to submit a patch for the approach you've recommended. I'm going to test both a CloseProtocolStreams() and a KillThread() method on the imapserversink interface, to be called by imapprotocol internally.
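For reference, the sync/async distinction being discussed can be sketched in plain C++. DispatchSync and DispatchAsync are hypothetical stand-ins for what NS_PROXY_SYNC and NS_PROXY_ASYNC proxies do, not Mozilla APIs:

```cpp
#include <functional>
#include <future>
#include <thread>
#include <utility>

// NS_PROXY_SYNC-style: the caller blocks until the target thread has
// finished running the task.
void DispatchSync(std::function<void()> task) {
  std::promise<void> done;
  std::thread target([&] {       // stand-in for the UI thread
    task();
    done.set_value();
  });
  done.get_future().wait();      // caller blocks until the task ran
  target.join();
}

// NS_PROXY_ASYNC-style: the caller continues immediately after
// queueing the task; completion happens some time later.
void DispatchAsync(std::function<void()> task, std::thread& target) {
  target = std::thread(std::move(task));
}
```

The practical difference for this bug: after a sync proxy call returns, the caller may rely on the work being done; after an async one, it may not.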
hmm, in my tree, I don't see those assertions, but I've done a lot of other things.

I was asking about NS_ProxyRelease, which is different from XPCOM interface proxying.

I'll attach the patch I was working on, just to give you an idea of what I was thinking...
this gives you an idea of what I was thinking - maybe you can run with some of the ideas.

I'd like the UI thread to not call TellThreadToDie directly, so I wrote SignalThreadToDie for the UI thread to call.  It does some cleanup, and then sets m_threadToDie. The IMAP thread will notice this and shut itself down.

If the imap thread notices a bad connection, it proxies a call over to the UI thread to CloseProtocolStreams, which cleans up the ui thread part of the connection, and then signals back to the thread to tell it to go away.

In nsImapProtocol::CloseStreams, I proxy the release of the channel over to the UI thread, as bz suggested.
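The SignalThreadToDie idea - the UI thread only sets a flag, and the worker notices it and tears itself down on its own thread - can be sketched like this. A generic C++ sketch; Worker and the member names are illustrative, not the actual nsImapProtocol code:

```cpp
#include <atomic>
#include <chrono>
#include <thread>

// The UI thread never tears down worker-owned state directly; it only
// sets a flag. The worker checks the flag at a safe point in its own
// loop and does all cleanup on its own thread before exiting.
class Worker {
public:
  void Run() {  // the worker thread's main loop
    while (!mThreadShouldDie.load(std::memory_order_acquire)) {
      // ... process one unit of protocol work ...
      std::this_thread::sleep_for(std::chrono::milliseconds(1));
    }
    // Thread-local cleanup happens here, on the worker thread itself.
    mCleanedUp = true;
  }

  void SignalThreadToDie() {  // safe to call from any thread
    mThreadShouldDie.store(true, std::memory_order_release);
  }

  bool CleanedUp() const { return mCleanedUp; }

private:
  std::atomic<bool> mThreadShouldDie{false};
  bool mCleanedUp = false;    // written only by the worker thread
};
```

Because cleanup only ever runs on the worker thread, none of the "not thread-safe" owning-thread assertions can fire.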
Reassigning to Emre, since he's doing the work of driving the patch into a final form.
Assignee: bienvenu → ebirol
Whiteboard: [needs updated patch]
This one is very similar to David's attempts. The thread interaction changes as follows:
 o Added CloseProtocolStreams to imapServerSink
 o Added CloseProtocolStreams to imapProtocol
 o I kept TellThreadToDie callable by the UI thread (and by the imap protocol thread) as originally designed, but imapProtocol::CloseStreams() is not called from TellThreadToDie anymore.
 o In the original code, the imap protocol thread calls CloseStreams twice -- which is redundant -- once in TellThreadToDie, and once after leaving the main imap thread loop. Removing CloseStreams from TellThreadToDie guarantees that it is called by the imap protocol thread only once, just before the thread dies. -- Note that mockChannel->Cancel also calls TellThreadToDie internally.
 o Instead of calling CloseStreams from the imap protocol thread, I used imapServerSink->CloseProtocolStreams (which eventually calls imapProtocol::CloseStreams) to proxy over to the UI thread.
 o Also cleaned up some code in the imap protocol

Although more tests are needed, my test results show that TB doesn't crash or deadlock anymore, and the weak reference assertions are gone. It looks like this is the minimal change needed to fix this bug.

David, if you're convinced that this might be a good patch for the short term, let me know; I am going to clean up some comments in the code before submitting the final one.

IMHO, in the long run we need to think thoroughly about the thread interaction in the imap module. It makes the code very difficult to approach and error-prone.
Attachment #316961 - Flags: superreview?(dmose)
Attachment #316961 - Flags: review?(bienvenu)
Attachment #315658 - Attachment is obsolete: true
Attachment #315658 - Flags: superreview?(dmose)
Attachment #315658 - Flags: review?(bienvenu)
Comment on attachment 316961 [details] [diff] [review]
Proposal for proxying CloseStreams over UI thread

Can't really stamp sr+ on this until we have a final cleaned up version, but here are some thoughts:

>Index: mailnews/imap/public/nsIImapProtocol.idl
>RCS file: /cvsroot/mozilla/mailnews/imap/public/nsIImapProtocol.idl,v
>retrieving revision 1.14
>diff -u -8 -p -r1.14 nsIImapProtocol.idl
>--- mailnews/imap/public/nsIImapProtocol.idl	25 Sep 2007 20:11:25 -0000	1.14
>+++ mailnews/imap/public/nsIImapProtocol.idl	22 Apr 2008 05:40:00 -0000
>@@ -94,12 +94,15 @@ interface nsIImapProtocol : nsISupports 
>+  // closes the imap connections
>+  void CloseProtocolStreams();

It's probably useful to also have this comment talk about what thread it's expected to be called from.

>+  // do cleanup on the UI thread
>+  void closeProtocolStreams(in nsIImapProtocol aProtocol);

It would be helpful if this comment were made clearer about whether it's expected to only be called from the UI thread, or whether it is expected to be called from some other thread in order to cause the proxying to happen.

>+  CloseStreams();    
>+  return NS_OK;

Is a wrapper method really necessary here?  Seems like the code would be simpler if there were simply a single method exposed via the IDL.
Attachment #316961 - Flags: superreview?(dmose)
Whiteboard: [needs updated patch] → [needs review bienvenu]
(In reply to comment #29)
> >+  // closes the imap connections
> >+  void CloseProtocolStreams();
> It's probably useful to also have this comment talk about what thread it's
> expected to be called from.

The implementation should also assert that it's on that thread, ideally in a fatal way. If NS_ABORT_IF_FALSE were actually fatal, I would counsel its use here.
>Can't really stamp sr+ on this until we have a final cleaned up version
Right, this is just a proposal anyway.

>Is a wrapper method really necessary here?  Seems like the code would be
>simpler if there were simply a single method exposed via the IDL.

Right again, it's not. I just wanted to make it clear that CloseStreams() is not a natural interface of the imap protocol class - since it should be called only by the UI thread. I think that once it's part of the interface, there is no way in XPCOM to limit which thread calls it. My original thought was making CloseStreams private - not part of the nsIImapProtocol interface - and making nsImapServerSink a friend of the nsImapProtocol class so it could call it through a native pointer. Not sure if this is common practice in XPCOM. I think the interface clarity issue should definitely be addressed, mildly confusing as it is.

Thank you both; I am going to incorporate your remarks into the final version.
I'll try it out - one nit - it's better to just remove code instead of commenting it out, at least for checkin.
I tried running with this patch - it does get rid of some thread-safety assertions but most of the time, I can't shut down. I get this warning:

WARNING: nsExceptionService ignoring thread destruction after shutdown: file C:/mozilla-build/msys/tbirdtrunk/mozilla/xpcom/base/nsExceptionService.cpp, line 19

and then I'm stuck with no windows open but the app is still running. This is on windows.

I'll try without the patch...
I am getting this error too with a non-patched trunk build:

WARNING: nsExceptionService ignoring thread destruction after shutdown: file /Users/ebirol/Projects/mozilla/xpcom/base/nsExceptionService.cpp, line 194
--DOMWINDOW == 0 (0x150cf1bc) [serial = 1] [Outer = 0x0]
I mean I get the same error you mentioned: "I'm stuck with no windows open but the app is still running"
Emre: which platform are you seeing it on?
w/ my patch, I don't see the error, and I don't see the failure to shut down. W/ no patch, I believe I saw the error, but it did shut down - but I only tried a few times. With your patch, I saw the error every time, and it failed to shut down every time. It may be that I didn't try a large enough sample, but I have a feeling that your patch exposes/exacerbates an existing race condition. I think at shutdown time, we can't be doing the proxying over to the UI thread...
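The shutdown race David describes - a synchronous proxy call blocking on a target thread whose event loop is no longer running - can be demonstrated with a small C++ sketch. SyncProxyCall is a made-up stand-in; a timeout is used only so the failure shows up as a return value instead of a hang:

```cpp
#include <chrono>
#include <future>
#include <thread>

// A NS_PROXY_SYNC-style call blocks the caller until the target
// thread's event loop runs the task. If that loop has already shut
// down, the caller waits forever; the timeout here exists purely so
// the demonstration can report the failure instead of hanging.
bool SyncProxyCall(bool targetLoopAlive, std::chrono::milliseconds timeout) {
  std::promise<void> done;
  auto finished = done.get_future();
  std::thread target([&] {
    if (targetLoopAlive)
      done.set_value();  // a live event loop eventually runs the task
    // a shut-down loop never runs it, and the caller stays blocked
  });
  bool ok = finished.wait_for(timeout) == std::future_status::ready;
  target.join();
  return ok;
}
```

This is why proxying over to the UI thread during shutdown is dangerous: once the UI thread stops spinning its event loop, a pending sync proxy call from the imap thread can never complete.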
Whiteboard: [needs review bienvenu] → [needs new patch]
This patch adds an interface named nsIImapProxyHelper (couldn't come up with a better name) to the imap module. Its sole purpose is to expose methods that MUST be proxied over to the UI thread, such as CloseStreams() - currently there is one method, but if need be, we can add more. In order to fix this bug, I changed TellThreadToDie() to call CloseStreams() via this proxied interface instead of calling it natively.

I have tested TB version 3.0a1pre (2008042422) on Mac OS 10.5.2 with and without this patch, under normal conditions and in case of connection error, with multiple (3) gmail imap accounts. I am going to attach the console output for each combination of these cases.

Additionally, I have tested the same patch on Windows XP VM (TB version 3.0a1pre (2008042511)) for similar cases, you can also find their console output attached.

o In all cases, TB shut down gracefully; I haven't experienced any exit failure.
o In case of no error, trunk with and without the patch shows similar output.
o In case of connection error, trunk without this patch outputs lots of assertions related to threading issues.
o In case of connection error, trunk with this patch doesn't show any extra assertions.
o Trunk with the patch doesn't crash in case of connection error.
o "WARNING: nsExceptionService ignoring thread destruction after shutdown: .." is there before and after the patch.
o The current code outputs a couple of threading-related assertions with and without this patch.
o Memory leaks occur/increase usually after I have selected and opened a message. My observation is that just selecting folders doesn't cause memory leaks. -- this is an observation, not tested thoroughly

To Do:
1) Somebody else (preferably David or Dmose, since we know the previous version of this patch caused exit failures on their machines) should test the patch for exit failure.
2) Emre: Going to test this patch on native Windows XP with different hardware specs - just to make sure
Attachment #315861 - Attachment is obsolete: true
Attachment #316961 - Attachment is obsolete: true
Attachment #318230 - Flags: review?(bienvenu)
Attachment #316961 - Flags: review?(bienvenu)
The console outputs that I mentioned in the previous comment.
Comment on attachment 318230 [details] [diff] [review]
Patch for proxying CloseStreams() over UI thread

I think you copied the license stuff from an old file. We use tab width of 2, and basic offset of 2. And the copyright date would be 2008.

You should fix the braces style here:

+  if (m_imapProxyHelper) {
+    m_imapProxyHelper->CloseStreams();
+    m_imapProxyHelper = nsnull;
+  }

re the name of the interface, maybe something like nsIImapProtocolSink? That would match the other sinks...

But the most important thing is that this patch does seem to work well.
davida: any suggestion about the license?
Rev 4 of the patch. It has been changed according to the suggestions.
Attachment #318230 - Attachment is obsolete: true
Attachment #318318 - Flags: review?(bienvenu)
Attachment #318230 - Flags: review?(bienvenu)
Whiteboard: [needs new patch] → [needs final review bienvenu]
Comment on attachment 318318 [details] [diff] [review]
Patch for proxying CloseStreams() over UI thread rev4

looks good, thanks!
Attachment #318318 - Flags: review?(bienvenu) → review+
Whiteboard: [needs final review bienvenu] → [needs sr dmose]
Attachment #318318 - Flags: superreview?(dmose)
Comment on attachment 318318 [details] [diff] [review]
Patch for proxying CloseStreams() over UI thread rev4

In general, this looks good, but it seems to me that creating the extra interface isn't actually necessary and makes the code more complex than it needs to be.  I'd suggest just adding this method to the nsIImapProtocol interface itself, naming the member variable something else, and keeping the rest identical.  Also, remember to use doxygen/javadoc-style  comments when documenting IDL methods.
Attachment #318318 - Flags: superreview?(dmose) → superreview-
creating the extra interface makes it clear which method(s) can be called from other threads. Are you suggesting proxying the whole nsIImapProtocol interface, but only ever calling the single method between threads?
David: yeah, just documenting in the IDL that that method should be called only on the UI thread (through a proxy if necessary).  The thing that I found confusing is that there's now a second unnecessary interface on something which is guaranteed to only ever be a single object.  That said, this is perhaps 6-of-one, half-dozen-of-the-other.  So if you'd prefer keeping it the way it is, I can live with that.
Assignee: bugmil.ebirol → bienvenu
After giving this a bit more thought, I've decided this just isn't that important, and since we've got a working patch, let's go with it.  Reversing to sr+, conditional on changing the IDL comments to be doxygen style.
Assignee: bienvenu → bugmil.ebirol
Attachment #318318 - Flags: superreview- → superreview+
Whiteboard: [needs sr dmose] → [needs updated patch]
The IDL comment has been changed to doxygen style.
Attachment #318318 - Attachment is obsolete: true
Attachment #318514 - Flags: superreview+
Attachment #318514 - Flags: review+
Keywords: checkin-needed
Whiteboard: [needs updated patch] → [check-in needed]
Attachment #318514 - Attachment description: Patch for proxying CloseStreams() over UI thread rev5 → Patch for proxying CloseStreams() over UI thread rev5 (checked in)
Fix checked in.
Closed: 16 years ago
Keywords: checkin-needed
Resolution: --- → FIXED
Whiteboard: [check-in needed]
Summary: Imap code deadlocks when it can't reach the network → IMAP code deadlocks when it can't reach the network
Product: Core → MailNews Core