Right now, fakeserver lives in the current process, which requires us to spin the event loop to get anything done because multithreaded JS sucks. Moving fakeserver into a separate process would reduce all the event-loop worries to standard asynchronous callbacks.

How can this be done? I've written a small script with JS proxies that appears to be sufficient to proxy a daemon: every set, delete, and define-property operation ends up getting run on both sides via a pipe, so both processes have a mirror copy. The only potential downfall is passing functions in properties, since the variables they close over may not be in scope on the other side, but using functions sparingly should work well enough. In other words, running fakeserver in another process amounts to:

a. Turn the daemon object into an object mirrored in both the test process and the server process.
b. We probably need to mirror the handlers as well, so createHandler can be run in the test process. Actually, given my setup, it should be possible to do a one-way mirror, such that modifications on the test side are reflected on the server side but not vice versa (for speed).
c. Something I just realized: we end up with race conditions if we just mirror sets, since the code that processes the set may still be pending when we do the corresponding get. So the test side needs to force the get anyway.
d. nsMailServer becomes a more complicated framework handling startup and teardown via remote proxying calls.

I don't want to put the cart before the horse, but there are some things to think about along the way:

1. Windows process creation is relatively expensive... should we just load the fakeserver process before tests start and keep it loaded for all of make xpcshell-tests?
2. What files should be loaded in the fakeserver process? mailnews/test/fakeserver/*.js seems minimally necessary.
3. IPC mechanism. stdin/stdout piping is my preferred option, but Mozilla doesn't have a native way to do this (there is http://hg.mozilla.org/ipccode, but requiring that for tests seems a bit much). IPDL-based IPC seems possible, but that would require us to write a C++ shell framework and, moreover, would require core m-c changes. Essentially, everything appears to be ruled out except socket-based IPC, unless we write C++ code.
4. If we do write things in C++, we probably gain the ability to use "rawer" socket access, which would make getting STARTTLS into tests easier.
5. Assuming that we eventually want to enable protocol tests in mozmill (or other test frameworks), I figure that the standard daemon setup we do in head_*.js might also want to be available to mozmill people, allowing us to use a unified framework for all test frameworks. That probably means we need to recentralize fakeserver tests.

Dependency on bug 656984, since I'm not going to try to implement this bug without sanifying that API first. ;-)
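To make the mirroring idea above concrete, here is a minimal sketch of the proxy approach, under stated assumptions: `makeMirrored`, `applyOp`, and `sendToServer` are hypothetical names of mine, and the "pipe" is simulated in-process with a JSON round-trip standing in for real IPC serialization.

```javascript
// Sketch of one-way mirroring: every mutation on the test-side proxy is
// serialized and replayed on a mirror object, which stands in for the
// server process's copy. sendToServer is a stand-in for the real pipe.
function makeMirrored(target, sendToServer) {
  return new Proxy(target, {
    set(obj, prop, value) {
      obj[prop] = value;                               // local write
      sendToServer({ op: "set", prop, value });        // mirror it
      return true;
    },
    deleteProperty(obj, prop) {
      delete obj[prop];
      sendToServer({ op: "delete", prop });
      return true;
    },
    defineProperty(obj, prop, desc) {
      Object.defineProperty(obj, prop, desc);
      sendToServer({ op: "define", prop, desc });
      return true;
    },
  });
}

// Receiving side: replay each operation on the mirror copy.
function applyOp(mirror, msg) {
  switch (msg.op) {
    case "set":    mirror[msg.prop] = msg.value; break;
    case "delete": delete mirror[msg.prop]; break;
    case "define": Object.defineProperty(mirror, msg.prop, msg.desc); break;
  }
}

// Demo with an in-process "pipe" (JSON round-trip simulates serialization;
// note this is exactly where function-valued properties would be lost).
const serverCopy = {};
const daemon = makeMirrored({}, msg =>
  applyOp(serverCopy, JSON.parse(JSON.stringify(msg))));
daemon.greeting = "220 fake ready";
delete daemon.greeting;
daemon.port = 1143;
console.log(serverCopy.port); // → 1143
```

The JSON round-trip also illustrates the function-property caveat from the comment above: anything that doesn't serialize cleanly (closures, getters) disappears at the process boundary, which is why functions in daemon properties should be kept to a minimum.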
I've done some more research, so I think I better understand the decisions that need to be made. We have three object classes, with the following requirements:

Handlers:
- The test process worries only about creation, thanks to bug 656984.
- The server process uses these a lot, for just about everything.
- The server process needs to call the creator, but the creator wants to run in the test process.

nsMailServer:
- The test process uses:
-- new, setDebugLevel, start, playTransaction, stop
-- performTest, resetTest: obsoleted by this bug and bug 656984, respectively
- The server process needs everything else. Actually, it's pretty disjoint who uses what.
- Well, _daemon is used by both sides, but that's already the daemon object we worry about big time.

Daemons:
- Initially created by the test process.
- The server process mostly just reads it, but can write it in cases like posting or necessary server-side manipulation.
- The test process generally ignores it after creation and setup, but occasionally needs to query or modify a control parameter.
-- Some function calls are done more on the fly.

Fixing this is pretty simple, except for the daemons. Basically, nsMailServer can be split into distinct objects on the test and server sides, with functions being RPC calls. createHandler is an RPC from the server to the test process, which basically means we clone the return values.

The sticking point is the daemon. RPC implies I need a stream- or socket-based form of communication, which immediately means that gets may have to be asynchronous. If I use shared memory, everything becomes a lot more complicated, but we can make gets synchronous.

There is another design option to consider with daemons: how much to proxy? One option is to proxy all data and make function calls local. Another option is to proxy all function calls onto the server process and make data local (so direct data access in the test process is stale).
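The "proxy all function calls" option can be sketched as follows. This is an assumption-laden illustration, not the bug's implementation: `makeRpcStub`, `dispatch`, and the toy daemon are hypothetical names, and the transport is a local function standing in for the socket or pipe.

```javascript
// Sketch of the function-call-proxying option: the test process holds a
// stub whose methods are forwarded as RPC messages to the server process.
function makeRpcStub(methodNames, transport) {
  const stub = {};
  for (const name of methodNames) {
    // Each stub method just serializes the call; the transport decides
    // how (and how synchronously) the result comes back.
    stub[name] = (...args) => transport({ method: name, args });
  }
  return stub;
}

// Server side: dispatch an incoming call onto the real object.
function dispatch(obj, msg) {
  return obj[msg.method](...msg.args);
}

// Demo: a toy daemon living "in the server process", with an in-process
// transport standing in for real IPC.
const realDaemon = {
  messages: [],
  post(msg) { this.messages.push(msg); return this.messages.length; },
};
const daemonStub = makeRpcStub(["post"], msg => dispatch(realDaemon, msg));
console.log(daemonStub.post("Hello")); // → 1
```

Under this option, direct data reads on the test side would go stale (as noted above), so anything the test needs fresh has to go through a forwarded call as well.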
Should gets be asynchronous (requiring a function that receives the answer in a callback) or synchronous (which may imply pumping the event loop, unless we do shared memory)?
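For reference, the asynchronous shape would look roughly like the sketch below: instead of `let v = daemon.prop`, the test awaits a reply from the server process. `asyncGet` and `fakeTransport` are hypothetical names; `setImmediate` merely simulates IPC latency here.

```javascript
// Sketch of an asynchronous get: the request goes out over the transport
// and the answer arrives via a callback, wrapped in a Promise.
function asyncGet(prop, transport) {
  return new Promise(resolve => transport({ op: "get", prop }, resolve));
}

// Simulated server side, answering get requests on a later event-loop turn
// to mimic a real cross-process round trip.
const serverDaemon = { banner: "220 fake ready" };
function fakeTransport(msg, reply) {
  setImmediate(() => reply(serverDaemon[msg.prop]));
}

asyncGet("banner", fakeTransport).then(value => {
  console.log(value); // → "220 fake ready"
});
```

The synchronous alternative would hide the same round trip behind either a spun event loop (the thing this bug is trying to escape) or shared memory, which is why the question is worth settling before the daemon mirroring is designed.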