From Vlad's perf doc, investigate: finer-grained startup metrics, feature performance metrics, a new tab-switch histogram, and so on. There are hundreds of performance histograms in Histograms.json.

Proposed methodology: simply diffing all histogram distributions for the e10s vs. non-e10s populations will almost certainly surface some interesting insights. We do something similar in our "Medusa" Telemetry regression-detection system. I'm not saying any regression found this way should be a release blocker, but we do need to run the diff to see whether we overlooked anything big. I could go through the 1000+ histograms in Histograms.json by hand to find any I forgot that measure important aspects of performance, correctness, or stability, but doing one giant diff seems like a better use of our time.
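A minimal sketch of what such a diff could look like (the data shapes and histogram names here are hypothetical, not the real Telemetry API): given per-histogram bucket counts aggregated for the two populations, normalize each histogram to a distribution and rank histograms by how far the two populations diverge, so the biggest differences bubble to the top for human review.

```python
def normalize(counts):
    """Convert raw bucket counts into a probability distribution."""
    total = sum(counts)
    return [c / total for c in counts] if total else [0.0] * len(counts)

def total_variation(p, q):
    """Total variation distance: 0 = identical distributions, 1 = disjoint."""
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

def rank_histogram_diffs(pop_a, pop_b):
    """pop_a / pop_b: dicts of histogram name -> list of bucket counts
    (same bucket layout assumed for both populations).
    Returns (name, distance) pairs, largest differences first."""
    diffs = []
    for name in pop_a.keys() & pop_b.keys():
        d = total_variation(normalize(pop_a[name]), normalize(pop_b[name]))
        diffs.append((name, d))
    return sorted(diffs, key=lambda t: t[1], reverse=True)

# Toy example with made-up bucket counts for two hypothetical histograms:
e10s = {"FX_TAB_SWITCH_TOTAL_MS": [10, 40, 50], "STARTUP_MS": [30, 30, 40]}
non_e10s = {"FX_TAB_SWITCH_TOTAL_MS": [50, 40, 10], "STARTUP_MS": [30, 30, 40]}
print(rank_histogram_diffs(e10s, non_e10s))
```

In practice you'd feed this the aggregated submissions for each population and then eyeball the top of the ranked list for anything alarming; the choice of distance metric (total variation, chi-squared, KS, etc.) matters less than having a single ordering to triage.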