Closed
Bug 539532
Opened 15 years ago
Closed 7 years ago
Garbage collection still slow in Firefox 3.6
Categories
(Core :: JavaScript Engine, defect)
RESOLVED
WORKSFORME
People
(Reporter: limi, Unassigned)
References
Details
Attachments
(1 file)
4.87 KB,
text/plain
From http://hacks.mozilla.org/2010/01/javascript-speedups-in-firefox-3-6/:

"In Fx3.5, I see frequent pauses and the animation looks noticeably jerky. In Fx3.6, it looks pretty smooth, and it’s hard for me even to tell exactly when the GC is running."

I'm running a modern 2GHz Core 2 Duo Mac Mini with 4GB of RAM, and no other processes running. In Firefox 3.6 RC, I still see significant jerkiness, and GC delays of around 80-100ms. According to the description, this shouldn't be happening anymore. :)
Comment 1•15 years ago
Nobody ever said that it shouldn't happen anymore; it should happen less.

Do you have an explicit bug report, with measurements? Or any suggestions for how to improve? If not, this is an invalid bug report.
Comment 2•15 years ago
The GC performance improved a lot. Measurements, for example, for the Chrome Experiment "canopy": http://www.chromeexperiments.com/detail/canopy/

Time for marking, sweeping, marking + sweeping, and total time spent in the GC function [ms]:

Firefox 6/1/2009:
mark: 38.419304, sweep: 91.084648, m+s: 129.503952, end-enter: 140.846568
mark: 35.990072, sweep: 129.592184, m+s: 165.582256, end-enter: 168.059344
mark: 107.222688, sweep: 19.347560, m+s: 126.570248, end-enter: 133.150704
mark: 35.916024, sweep: 159.106288, m+s: 195.022312, end-enter: 197.909720
mark: 35.312520, sweep: 158.193832, m+s: 193.506352, end-enter: 195.481792
mark: 90.258304, sweep: 7.256320, m+s: 97.514624, end-enter: 99.182816
mark: 35.025816, sweep: 143.162464, m+s: 178.188280, end-enter: 180.361592
mark: 35.554696, sweep: 150.175992, m+s: 185.730688, end-enter: 187.696336
mark: 89.520920, sweep: 23.924560, m+s: 113.445480, end-enter: 115.305800
mark: 35.414552, sweep: 160.274816, m+s: 195.689368, end-enter: 197.612304
mark: 35.564120, sweep: 165.512928, m+s: 201.077048, end-enter: 203.002480
mark: 34.899472, sweep: 162.344568, m+s: 197.244040, end-enter: 199.154784
mark: 35.761560, sweep: 147.017320, m+s: 182.778880, end-enter: 184.691560

Firefox 1/14/2010:
mark: 28.399248, sweep: 14.472976, m+s: 42.872224, end-enter: 55.180832
mark: 25.625744, sweep: 54.187568, m+s: 79.813312, end-enter: 82.046840
mark: 58.042976, sweep: 44.032624, m+s: 102.075600, end-enter: 111.238808
mark: 25.009120, sweep: 58.149824, m+s: 83.158944, end-enter: 85.367112
mark: 24.757432, sweep: 59.628688, m+s: 84.386120, end-enter: 86.042168
mark: 42.172072, sweep: 6.834800, m+s: 49.006872, end-enter: 50.698944
mark: 25.072536, sweep: 58.421032, m+s: 83.493568, end-enter: 85.145376
mark: 24.808304, sweep: 59.534608, m+s: 84.342912, end-enter: 86.003264
mark: 25.090440, sweep: 58.494168, m+s: 83.584608, end-enter: 85.252736
mark: 24.811264, sweep: 60.443688, m+s: 85.254952, end-enter: 86.913240
mark: 24.990128, sweep: 57.546752, m+s: 82.536880, end-enter: 84.193536
mark: 25.002568, sweep: 59.622496, m+s: 84.625064, end-enter: 86.257240
mark: 24.888880, sweep: 59.736168, m+s: 84.625048, end-enter: 86.405688

But of course we still have a lot to do if we look at the WebKit numbers. (The reset function is called on the heap and performs a marking of all objects and a size adjustment if necessary.)

reset: mark: 0.408704, sweep: 0.000064, m+s: 0.408768, end-enter: 1.194832
reset: mark: 0.207768, sweep: 0.000064, m+s: 0.207832, end-enter: 0.211784
reset: mark: 0.318864, sweep: 0.000072, m+s: 0.318936, end-enter: 0.322176
reset: mark: 1.166136, sweep: 0.000064, m+s: 1.166200, end-enter: 1.995144
reset: mark: 2.072360, sweep: 0.000064, m+s: 2.072424, end-enter: 2.077872
reset: mark: 3.245304, sweep: 0.000064, m+s: 3.245368, end-enter: 4.153016
mark: 3.436960, sweep: 2.597072, m+s: 6.034032, end-enter: 6.041720
reset: mark: 3.096448, sweep: 0.000064, m+s: 3.096512, end-enter: 3.103248
reset: mark: 4.571888, sweep: 0.000072, m+s: 4.571960, end-enter: 5.462640
reset: mark: 4.403984, sweep: 0.000072, m+s: 4.404056, end-enter: 4.412240
mark: 4.633896, sweep: 4.084576, m+s: 8.718472, end-enter: 8.726768
mark: 5.796280, sweep: 1.924552, m+s: 7.720832, end-enter: 8.578632
reset: mark: 6.114288, sweep: 0.000064, m+s: 6.114352, end-enter: 7.622936
reset: mark: 6.824504, sweep: 0.000064, m+s: 6.824568, end-enter: 8.476432
reset: mark: 6.858968, sweep: 0.000064, m+s: 6.859032, end-enter: 6.876120
reset: mark: 7.025680, sweep: 0.000064, m+s: 7.025744, end-enter: 7.042352
reset: mark: 7.111608, sweep: 0.000064, m+s: 7.111672, end-enter: 7.129528
reset: mark: 6.960560, sweep: 0.000064, m+s: 6.960624, end-enter: 6.978616
reset: mark: 7.038176, sweep: 0.000064, m+s: 7.038240, end-enter: 7.055352
reset: mark: 7.184928, sweep: 0.000064, m+s: 7.184992, end-enter: 7.202224
reset: mark: 7.111504, sweep: 0.000064, m+s: 7.111568, end-enter: 7.128496
reset: mark: 7.251312, sweep: 0.000064, m+s: 7.251376, end-enter: 7.269152
reset: mark: 7.193072, sweep: 0.000064, m+s: 7.193136, end-enter: 7.210848
reset: mark: 7.023200, sweep: 0.000064, m+s: 7.023264, end-enter: 7.040072
reset: mark: 6.937768, sweep: 0.000064, m+s: 6.937832, end-enter: 6.954632
reset: mark: 7.166176, sweep: 0.000064, m+s: 7.166240, end-enter: 7.184048
reset: mark: 7.319296, sweep: 0.000064, m+s: 7.319360, end-enter: 7.336048
reset: mark: 7.207064, sweep: 0.000064, m+s: 7.207128, end-enter: 7.225304
reset: mark: 7.251400, sweep: 0.000064, m+s: 7.251464, end-enter: 7.268592
reset: mark: 7.072728, sweep: 0.000064, m+s: 7.072792, end-enter: 7.089416
mark: 7.382672, sweep: 7.916176, m+s: 15.298848, end-enter: 15.316104
Comment 3•15 years ago
Nice stats! Summarizing a bit:

                mark  sweep  total
Fx 6/ 1/2009      35    190    225
Fx 1/14/2010      25     60     85
WebKit             7      0      7

So our pause times are about 1/3 of what they used to be, which is great. But they are still 10x WebKit's.

I'm told that WebKit uses separate heaps per page or something like that, which I would think would help mark time a lot. I wouldn't be surprised if we could get our mark time similar to theirs by separating heaps appropriately.

Our sweep time seems way too long, though. I thought the rule of thumb was that sweep time should be < 1/10 mark time. WebKit seems to be at about that level, although it is distributed so that on 9 of 10 GCs the sweep time is zero, and on the other it's about equal to mark time. They may be GCing too often on this test. Anyway, it seems to me we should strive for sweep time to be <= mark time.
Comment 4•15 years ago
(In reply to comment #3)
> I'm told that WebKit uses separate heaps per page or something like that, which
> I would think would help mark time a lot. I wouldn't be surprised if we could
> get our mark time similar to theirs by separating heaps appropriately.

There is still low-hanging fruit in the GC code, like inlining various hot paths in the marking/finalizing code. IMO we should try that first to see if we can get some quick results. We have not even tried to inline various hot paths in the mark code.

> Our sweep time seems way too long, though. I thought rule of thumb was that
> sweep time should be < 1/10 mark time.

AFAIK that rule applies to pure GC systems. In SpiderMonkey, objects and strings contain pointers to malloced data. That inevitably makes the sweep phase longer, as the GC needs to run the finalization phase, if not to free the memory then at least to detect which GC things need extra free calls and to delegate those to a separate thread. If everything were allocated through the GC heap, finalization would not be necessary. But the price for that is a longer marking phase, as for live things the GC would need to mark the data pointers, which is currently avoided.
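The trade-off described above can be sketched in a few lines. This is a hypothetical toy model, not SpiderMonkey code: each GC cell carries a mark bit, but its payload is a separate malloc'ed buffer, so sweeping cannot just flip mark bits; it must run a finalize step (the free call) for every dead cell.

```cpp
#include <cassert>
#include <cstdlib>
#include <vector>

// Toy GC cell (hypothetical): the mark bit lives in the cell, but the
// payload is out-of-line malloc'ed data, as with the objects/strings
// discussed above.
struct Cell {
    bool marked = false;
    void* payload = nullptr;   // only free() can reclaim this
};

// Sweep a heap of such cells. Unmarked cells with a payload need an
// explicit finalize (free) step; marked cells just get their bit reset.
// In a pure GC heap (payload == nullptr everywhere), the loop body
// degenerates to flipping mark bits, which is why the "sweep < mark/10"
// rule of thumb only holds for pure GC systems.
size_t sweep(std::vector<Cell>& heap) {
    size_t finalized = 0;
    for (Cell& c : heap) {
        if (c.marked) {
            c.marked = false;        // survivor: clear the mark bit
        } else if (c.payload) {
            std::free(c.payload);    // dead: finalize the malloc'ed data
            c.payload = nullptr;
            ++finalized;
        }
    }
    return finalized;
}
```

Moving the payload into the GC heap removes the free calls from sweep, at the cost of marking through the payload pointers during the mark phase, exactly the trade-off weighed above.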
Comment 5•15 years ago
(In reply to comment #4)
> (In reply to comment #3)
> > I'm told that WebKit uses separate heaps per page or something like that, which
> > I would think would help mark time a lot. I wouldn't be surprised if we could
> > get our mark time similar to theirs by separating heaps appropriately.
>
> There are still low-hanging fruits in the GC code like inlining various hot
> paths in the marking/finalizing code. IMO we should try that first to see if we
> can get some quick results.

That makes sense to me, as long as the low-hanging fruit gives as good a bang for the buck as the big fix. Smaller heaps seem to offer more potential for a big improvement; what I don't know is the relative amount of work. Over the past 7 months, we've reduced the total pause time from about 185 to 85 in this test, for a reduction of 100 ms. It seems to break down something like this:

  local optimizations in marking paths       10 ms
  move finalizer frees off the main thread  100 ms

That supports my expectation that bigger system-level changes have much more potential than local optimization, but of course we have to watch out for confirmation bias.

> We have not even tried to inline various hot paths in the mark code.
>
> > Our sweep time seems way too long, though. I thought rule of thumb was that
> > sweep time should be < 1/10 mark time.
>
> AFAIK that rule aplied to pure GC systems.

I think so too.

> In SpiderMonkey objects and strings
> contain pointer to malloced data. That inevitably makes the sweep phase longer
> as the GC needs to run the finalization phase if not to free the memory but at
> least detect which GC things needs extra free calls and to delegate that to a
> separated thread. If everything would be allocated through the GC heap, then
> the finalization would not be necessary. But the price for that is longer
> markinging phase as for live things the GC would need to mark the data pointers
> which currently is avoided.

I wonder where we are in the trade-off space. Is it easy to do an experiment with GCing object fslots or something like that?
Comment 6•15 years ago
More details for a typical GC run of my example [ms]:

mark: 31.794968, sweep: 70.057136, m+s: 101.852104

js_SweepAtomState(cx) +
CloseNativeIterators(cx) +
js_SweepWatchPoints(cx):           1.98
Finalize Objects and Functions:   18.39
Finalize Doubles:                  6.19
js_SweepScopeProperties:           0.82
js_SweepScriptFilenames:           0.06
DestroyGCArenas:                  42.55
submitDeallocatorTask:             0.06
Comment 7•15 years ago
(In reply to comment #6)
> More details for a typical GC run of my example [ms]:
>
> mark: 31.794968, sweep: 70.057136, m+s: 101.852104
>
> js_SweepAtomState(cx); +
> CloseNativeIterators(cx); +
> js_SweepWatchPoints(cx); 1.98
> Finalize Objects and Functions: 18.39
> Finalize Doubles: 6.19
> js_SweepScopeProperties: 0.82
> js_SweepScriptFilenames: 0.06
> DestroyGCArenas: 42.55
> submitDeallocatorTask: 0.06

That clearly suggests moving DestroyGCArenas (or, perhaps, DestroyGCChunk) to the deallocator thread.
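The "move it to the deallocator thread" idea amounts to the sweep phase only queueing dead memory, with a helper thread doing the slow free/unmap calls. A minimal sketch follows; the names (Deallocator, g_freed) and the use of std::free as a stand-in for munmap are invented for illustration and do not reflect SpiderMonkey's actual deallocator:

```cpp
#include <atomic>
#include <condition_variable>
#include <cstdlib>
#include <mutex>
#include <queue>
#include <thread>

std::atomic<int> g_freed{0};  // instrumentation for this sketch only

// Hypothetical background deallocator: the GC's sweep phase only
// *queues* dead chunks; a helper thread performs the expensive
// free/unmap work off the main thread, shortening the observable pause.
class Deallocator {
    std::queue<void*> pending_;
    std::mutex lock_;
    std::condition_variable cv_;
    bool shutdown_ = false;
    std::thread worker_;

    void run() {
        std::unique_lock<std::mutex> guard(lock_);
        for (;;) {
            cv_.wait(guard, [this] { return shutdown_ || !pending_.empty(); });
            while (!pending_.empty()) {
                void* chunk = pending_.front();
                pending_.pop();
                guard.unlock();
                std::free(chunk);   // the slow part, now off the GC thread
                ++g_freed;
                guard.lock();
            }
            if (shutdown_) return;
        }
    }

public:
    Deallocator() : worker_(&Deallocator::run, this) {}

    // Called from sweep: an O(1) handoff instead of a free()/munmap() call.
    void submit(void* chunk) {
        std::lock_guard<std::mutex> guard(lock_);
        pending_.push(chunk);
        cv_.notify_one();
    }

    ~Deallocator() {                // drain the queue, then stop the worker
        { std::lock_guard<std::mutex> guard(lock_); shutdown_ = true; }
        cv_.notify_one();
        worker_.join();
    }
};
```

The design point is that submit() holds the lock only for a queue push, so the pause charged to the GC is bounded by queue bookkeeping rather than by the OS calls.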
Comment 8•15 years ago
(In reply to comment #6)
> Finalize Objects and Functions: 18.39
> Finalize Doubles: 6.19

"Finalize Doubles" here means the time between the last call to FinalizeArenaList and the call to js_SweepScopeProperties, right?
Comment 9•15 years ago
Those are some superb numbers. Where do we spend so much time in DestroyGCArenas? Is it the unmap syscall?
Comment 10•15 years ago
(In reply to comment #8)
> (In reply to comment #6)
> > Finalize Objects and Functions: 18.39
> > Finalize Doubles: 6.19
>
> "Finalize Doubles" means here the time between the last call to
> FinalizeArenaList and the call to js_SweepScopeProperties, right?

Yes. It's the time spent in the "while ((a = *ap) != NULL)" loop.
Comment 11•15 years ago
(In reply to comment #9)
> Those are some superb numbers. Where do we spend so much time in
> DestroyGCArenas? Is it the unmap syscall?

Yes. If DestroyGCArenas takes 42.58ms, we spend about 36.05ms in DestroyGCChunk!
Comment 12•15 years ago
Great, that's an easy short-term fix (it doesn't eliminate the need for a general overhaul, though, just as dmandelin pointed out). We should wire up DestroyGCChunk to the background thread.
Comment 13•15 years ago
(In reply to comment #1)
> Nobody ever said that it shouldn't happen anymore. should happen less.
>
> Do you have any explicit bug report, with measurements? Or any suggestions how
> to improve? If not, this is an invalid bug report.

These kinds of "faster please" reports are usually not invalid, just fodder for more work. Great to see numbers showing real, juicy, low-hanging fruit!

dmandelin, is there a bug on thread-local GC heaps, with API control over the number per thread? In Gecko we could even try cx-local heaps, since each DOM window has its own nsJSContext/JSContext, IIRC. You'd need wrappers or proxies among heaps for inter-window refs, but mrbkap et al. know how to do those. You would need a more global (JSRuntime, or perhaps JSThread in Gecko) GC to collect the proxies, but we have that today. This all seems doable with enough sweat.

Thanks for the bug, :limi!

/be
Comment 14•14 years ago
With a very simple path where we only return GCChunks at shutdown, we reduce a typical GC time from 90ms to 58ms for this test. I am using a different machine, so these numbers are not 1:1 comparable with the old numbers.

Current:
Time in DestroyGCArenas: 31.730652, mark: 34.656966, sweep: 56.171250
Total GC: 90.895761

Don't return GCChunks:
Time in DestroyGCArenas: 4.484034, mark: 31.478373, sweep: 26.588961
Total GC: 58.135635

Another win worth mentioning: GC chunks also have to be allocated again once they are returned to the OS, so we also reduce allocation time if we don't return them. I will measure this next.
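The "only return GCChunks at shutdown" path is essentially a chunk pool: released chunks go onto a free list instead of back to the OS, so both the release (no munmap) and the next allocation (no mmap) become cheap pointer moves. A minimal sketch, with malloc/free standing in for the OS mapping calls and an assumed 64 KB chunk size (the class name and size are invented for illustration):

```cpp
#include <cassert>
#include <cstdlib>
#include <vector>

// Hypothetical chunk allocator that caches released chunks instead of
// returning them to the OS. Real code would cap the pool and age
// entries out; this sketch only releases memory at shutdown, matching
// the "very simple path" measured above.
class ChunkPool {
    static constexpr size_t kChunkSize = 64 * 1024;  // assumed chunk size
    std::vector<void*> free_chunks_;

public:
    void* allocate() {
        if (!free_chunks_.empty()) {                 // reuse: no OS call
            void* c = free_chunks_.back();
            free_chunks_.pop_back();
            return c;
        }
        return std::malloc(kChunkSize);              // stand-in for mmap
    }

    void release(void* chunk) {                      // stand-in for chunk destroy
        free_chunks_.push_back(chunk);               // keep it, don't unmap
    }

    size_t pooled() const { return free_chunks_.size(); }

    ~ChunkPool() {                                   // give memory back at shutdown
        for (void* c : free_chunks_) std::free(c);
    }
};
```

This captures both wins reported in this comment: DestroyGCArenas time drops because release() is trivial, and later allocations get faster because allocate() usually hits the pool.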
Comment 15•14 years ago
(In reply to comment #14)
> Current:
> Time in DestroyGCArenas: 31.730652, mark: 34.656966, sweep: 56.171250
> Total GC: 90.895761
>
> Don't return GCChunks:
> Time in DestroyGCArenas: 4.484034, mark: 31.478373, sweep: 26.588961
> Total GC: 58.135635

What would happen if you bump GC_ARENAS_PER_CHUNK to 64/128/256/512?
Comment 16•14 years ago
(In reply to comment #15)
> What would happen if you bump GC_ARENAS_PER_CHUNK to 64/128/256/512?

I meant for the current tip, which still releases unused chunks.
Comment 17•14 years ago
Increasing GC_ARENAS_PER_CHUNK also blurs the measurements for the current tip. The attachment shows all numbers with the old and new approach and increased GC_ARENAS_PER_CHUNK. I tried to pick a "typical" run to summarize:

Tip:
64:  Time in DestroyGCArenas: 25.034769, mark: 28.394154, sweep: 44.784315, total GC: 73.256526
128: Time in DestroyGCArenas: 22.861467, mark: 34.241733, sweep: 44.946027, total GC: 79.271289
256: Time in DestroyGCArenas: 22.868892, mark: 35.404425, sweep: 44.492103, total GC: 80.034192

New approach:
64:  Time in DestroyGCArenas: 3.675042, mark: 35.322102, sweep: 26.649243, total GC: 62.049915
128: Time in DestroyGCArenas: 4.527333, mark: 34.661817, sweep: 27.025209, total GC: 61.760565
256: Time in DestroyGCArenas: 3.808224, mark: 29.448756, sweep: 23.656320, total GC: 53.179101

It looks like more arenas per chunk would help us with the new approach.
Comment 18•14 years ago
SunSpider also wins due to less allocation time if we don't return GCChunks. These numbers are from buildmonkey-right:

TEST                   COMPARISON          FROM                TO                DETAILS
=============================================================================
** TOTAL **:           1.010x as fast   677.1ms +/- 0.2%   670.3ms +/- 0.1%     significant
=============================================================================
3d:                    1.013x as fast   108.1ms +/- 0.4%   106.7ms +/- 0.6%     significant
  cube:                -                 34.4ms +/- 1.1%    34.4ms +/- 1.1%
  morph:               1.048x as fast    22.0ms +/- 0.0%    21.0ms +/- 0.0%     significant
  raytrace:            -                 51.7ms +/- 0.9%    51.3ms +/- 0.7%
access:                1.018x as fast    91.3ms +/- 0.5%    89.7ms +/- 0.4%     significant
  binary-trees:        1.024x as fast    17.0ms +/- 0.0%    16.6ms +/- 2.2%     significant
  fannkuch:            -                 43.3ms +/- 0.8%    43.2ms +/- 0.7%
  nbody:               1.053x as fast    20.0ms +/- 1.7%    19.0ms +/- 0.0%     significant
  nsieve:              1.009x as fast    11.0ms +/- 0.0%    10.9ms +/- 2.1%     significant
bitops:                -                 29.8ms +/- 1.5%    29.6ms +/- 1.2%
  3bit-bits-in-byte:   *1.50x as slow*    1.0ms +/- 0.0%     1.5ms +/- 25.1%    significant
  bits-in-byte:        ??                 7.7ms +/- 4.5%     8.0ms +/- 0.0%     not conclusive: might be *1.039x as slow*
  bitwise-and:         -                  2.6ms +/- 14.2%    2.4ms +/- 15.4%
  nsieve-bits:         1.045x as fast    18.5ms +/- 2.0%    17.7ms +/- 2.0%     significant
controlflow:           -                  6.5ms +/- 5.8%     6.4ms +/- 5.8%
  recursive:           -                  6.5ms +/- 5.8%     6.4ms +/- 5.8%
crypto:                -                 43.9ms +/- 0.9%    43.5ms +/- 0.9%
  aes:                 -                 24.8ms +/- 1.2%    24.7ms +/- 1.4%
  md5:                 -                 12.5ms +/- 3.0%    12.5ms +/- 3.0%
  sha1:                -                  6.6ms +/- 5.6%     6.3ms +/- 5.5%
date:                  1.014x as fast    96.4ms +/- 0.4%    95.1ms +/- 0.7%     significant
  format-tofte:        1.016x as fast    49.4ms +/- 0.7%    48.6ms +/- 0.8%     significant
  format-xparb:        1.011x as fast    47.0ms +/- 0.0%    46.5ms +/- 0.8%     significant
math:                  1.020x as fast    25.1ms +/- 0.9%    24.6ms +/- 1.5%     significant
  cordic:              -                  8.1ms +/- 2.8%     8.0ms +/- 0.0%
  partial-sums:        -                 11.8ms +/- 2.6%    11.4ms +/- 3.2%
  spectral-norm:       -                  5.2ms +/- 5.8%     5.2ms +/- 5.8%
regexp:                *1.017x as slow*  35.8ms +/- 0.8%    36.4ms +/- 1.0%     significant
  dna:                 *1.017x as slow*  35.8ms +/- 0.8%    36.4ms +/- 1.0%     significant
string:                1.008x as fast   240.2ms +/- 0.2%   238.3ms +/- 0.3%     significant
  base64:              -                 10.7ms +/- 3.2%    10.5ms +/- 3.6%
  fasta:               1.014x as fast    51.4ms +/- 0.7%    50.7ms +/- 0.7%     significant
  tagcloud:            -                 76.8ms +/- 0.9%    76.6ms +/- 0.8%
  unpack-code:         1.009x as fast    69.1ms +/- 0.3%    68.5ms +/- 0.5%     significant
  validate-input:      -                 32.2ms +/- 0.9%    32.0ms +/- 0.0%
Comment 19•14 years ago
These results are great! Especially the pause reduction. Gregor, can you file a new bug on not immediately returning GCChunks?
Comment 20•14 years ago
(In reply to comment #19)
> These results are great! Especially the pause reduction. Gregor, can you file a
> new bug on not immediately returning GCChunks?

Filed as bug 541140.
Comment 21•14 years ago
A new comparison of our GC approach versus WebKit. I was running the same page in Firefox and WebKit in parallel; GDocs synchronizes the pages, so I should get the same behavior. I created a big GDocs spreadsheet with about 7200 cells that heavily depend on each other, and once I change one cell, all other cells are recalculated. The GC numbers represent (rdtsc) cycles * 1E6.

Current tip:
GC Total: 143.475300  Mark: 100.023638, Sweep: 38.703385  Finalize Obj: 30.365630, Doubles: 0.009986
GC Total: 142.622251  Mark: 97.160354, Sweep: 40.842017  Finalize Obj: 32.560828, Doubles: 0.010227
GC Total: 145.069503  Mark: 97.180648, Sweep: 43.080435  Finalize Obj: 34.771116, Doubles: 0.010280
GC Total: 144.991038  Mark: 97.155136, Sweep: 43.024544  Finalize Obj: 34.696260, Doubles: 0.010338
GC Total: 149.818112  Mark: 103.512206, Sweep: 41.282014  Finalize Obj: 33.152985, Doubles: 0.010232
GC Total: 150.453284  Mark: 103.643774, Sweep: 41.666369  Finalize Obj: 33.516051, Doubles: 0.010220
GC Total: 146.757963  Mark: 103.181916, Sweep: 39.019825  Finalize Obj: 30.932012, Doubles: 0.010303
GC Total: 154.735430  Mark: 109.019121, Sweep: 40.630714  Finalize Obj: 32.472317, Doubles: 0.010208

This time the allocation and deallocation of chunks are not critical at all. The WebKit numbers are (also cycles * 1E6):

reset: mark: 37.623088, sweep: 0.000064, m+s: 37.623152, end-enter: 37.682448
reset: mark: 41.558384, sweep: 0.000064, m+s: 41.558448, end-enter: 41.618320
reset: mark: 36.938152, sweep: 0.000064, m+s: 36.938216, end-enter: 36.999968
reset: mark: 39.090192, sweep: 0.000064, m+s: 39.090256, end-enter: 39.148624
reset: mark: 40.519840, sweep: 0.000064, m+s: 40.519904, end-enter: 40.580944
reset: mark: 41.723000, sweep: 0.000064, m+s: 41.723064, end-enter: 41.781576

They use lazy destruction of objects and therefore don't have to call the finalizers during sweeping. Once I close the page, the GC that runs all the finalizers looks like:

mark: 10.622440, sweep: 174.034520, m+s: 184.656960, end-enter: 190.025592

Maybe we should also think of a lazy-finalize approach where we call the finalizer once we get the object from the freelist or when we give the page back to the OS...
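The lazy-finalize idea sketched at the end of this comment can be made concrete in a few lines. This is a hypothetical toy model (the names LazyCell and LazyFreeList are invented): sweep only links dead cells onto a freelist, and the finalizer runs later, when the allocator hands the cell out again, so the cost moves out of the GC pause and is amortized over allocation.

```cpp
#include <cassert>
#include <vector>

// Toy GC cell with a deferred finalizer (hypothetical).
struct LazyCell {
    bool needs_finalize = false;
    int finalize_count = 0;          // instrumentation for this sketch

    void finalize() { needs_finalize = false; ++finalize_count; }
};

class LazyFreeList {
    std::vector<LazyCell*> free_;

public:
    // Sweep path: O(1) per dead cell, no finalizer call inside the pause.
    void add_dead(LazyCell* c) {
        c->needs_finalize = true;
        free_.push_back(c);
    }

    // Allocation path: finalize the previous incarnation just in time.
    LazyCell* allocate() {
        if (free_.empty()) return nullptr;
        LazyCell* c = free_.back();
        free_.pop_back();
        if (c->needs_finalize) c->finalize();
        return c;
    }
};
```

The page-close measurement above shows the catch: deferred finalizers still have to run eventually, so a teardown event can concentrate all that work into one long sweep.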
Comment 22•14 years ago
(In reply to comment #21)
> Maybe we should also think of a lazy finalize approach where we call the
> finalizer once we get the Object from the freelist or when we give the page
> back to the OS...

We cannot simply call the finalizer from another thread, as various parts of xpconnect assume that the finalizer hooks are called during the normal GC phase. Still, we should be able to delay finalization for the standard objects and arrays. But that requires, AFAICS, that JSScope is allocated via GC.
Comment 23•14 years ago
> We cannot simply call the finalizer from another thread as various parts of
> xpconnect assumes that the finalizer hooks are called during the normal GC
> phase. Still we should be able to delay finalization for the standard objects
> and arrays. But that requires AFAICS that JSScope is allocated via GC.
What if the finalizer was only called from the main thread? Or do many objects depend specifically on being called during GC?
Comment 24•14 years ago
(In reply to comment #22)
> We cannot simply call the finalizer from another thread as various parts of
> xpconnect assumes that the finalizer hooks are called during the normal GC
> phase. Still we should be able to delay finalization for the standard objects
> and arrays. But that requires AFAICS that JSScope is allocated via GC.

Bug 505004.

(In reply to comment #23)
> What if the finalizer was only called from the main thread? Or do many objects
> depend specifically on being called during GC?

Some do, because the GC callbacks are used (e.g. by XPConnect) to run their own mark/sweep GC of their data structures. Also, the XPCOM refcounting model means last Release is an eager pre-mortem finalizer (resurrection not allowed). This is used to do tree-like cascading destruction, and some code "knows" about the order of operations in the cascade.

/be
Comment 25•14 years ago
The chunk bottleneck for high-throughput applications should be solved now. Brendan, you said "some" objects have to be finalized right away. Is there a chance to identify them and, even more interesting, perform lazy finalization for the rest?
Comment 26•14 years ago
(In reply to comment #25)
> The chunk bottleneck for high throughput applications should be solved now.
> Brendan you said "Some" objects have to be finalized right away. Is there a
> chance to identify them and even more interesting perform lazy finalization for
> them?

All the wrappers for main-thread-only, single-threaded XPCOM objects. XPConnected stuff, IOW, including all DOM objects.

/be
Updated•10 years ago
Assignee: general → nobody
Comment 27•7 years ago
Is GC still an issue? Driving by (apologies if this is not relevant): quantumflow – Ehsan Akhgari <https://ehsanakhgari.org/tag/quantumflow> etc.
Comment 28•7 years ago
Let's just close this. Incremental and generational GC landed and things should be much better these days. Please file new bugs if there are still GC perf issues.
Status: NEW → RESOLVED
Closed: 7 years ago
Resolution: --- → WORKSFORME