I'm likely fine with 50M (67108864?) if it works for you, Florian*. If it turns out not to work for me, I can always adjust it as I have been.
However, I'm realizing that a sensible default may not actually be the way to go about this. The core problem is that it's not particularly obvious from a profile how much memory the profiler is using, and whether that is bottlenecking the system. I could easily see 50M being too much again if conditions change, either from more processes or larger entries(?) or <thing I don't know about>, and at that point we could start seeing spurious profiles with no clear indication that they are spurious. I only noticed the original problem because I was scratching my head at a profile and happened to notice the lack of available memory when I reprofiled. Someone else could easily come to a bad conclusion and post the profile in a bug somewhere, and there would be no indication from that profile that the underlying cause was just paging.
So I'm happy with this as a band-aid, but maybe this is the place to start the conversation about what the long-term solution should be. I see a few options:
- Track total free physical memory on the system
- Track page faults
- Just include total physical memory as well as memory used specifically by the profiler in the Platform section of the top right dropdown
  - This would allow us to compute the remaining memory available, and if it's below some heuristic threshold, just display a warning to the user
The last option seems like the cheapest to implement, and would generally protect us from bad conclusions (rough sketch below). The first two would be nice, but if necessary someone can manually collect those numbers on their system and cross-reference.
* EDIT: Florian pointed out that 50M is significantly higher than what we have now, so this comment is based on a misunderstanding. I think the point mostly stands on its own, but I need to investigate further why I'm seeing such different results on my own system.