The ElasticSearch response could be huge, up to a gigabyte; Python probably can not handle that in one piece. The query response translator has an `aggs_iterator()`, which implies the code already deals with the ES response as a stream, but we know the low-level JSON decoder requires all the JSON to be in memory, so `aggs_iterator()` has lots of unused potential. Find a JSON streaming library, or make one, for use by the query response translator. Since existing JSON streaming appears to be event-listener-driven madness, I suggest creating a JSON streaming library that accepts a query and operates on the stream just like any other data set.

https://github.com/klahnakoski/ActiveData/blob/74902f1d993e8bfde563952309686e1c709560ad/pyLibrary/queries/es14/aggs.py#L479

For example, ijson (https://pypi.python.org/pypi/ijson) is event-driven:

```python
for prefix, event, value in parser:
    if (prefix, event) == ('earth', 'map_key'):
        stream.write('<%s>' % value)
        continent = value
    elif prefix.endswith('.name'):
        stream.write('<object name="%s"/>' % value)
    elif (prefix, event) == ('earth.%s' % continent, 'end_map'):
        stream.write('</%s>' % continent)
stream.write('</geo>')
```
Around the week of Sept 21, Test Informant was failing due to this problem. Either the data was too big for the ActiveData service to digest (which causes a crash and restart), or the service delivered enough JSON to push 32-bit Firefox over its memory limit, causing a browser crash.
Also, note that buildbot JSON files can reach 250 MB (uncompressed) or more, which is also a little too big to be processed en masse with 32-bit Python.  http://builddata.pub.build.mozilla.org/builddata/buildjson/
done long ago