Davehunt has configured the logcat from the flame.ui.sanity test runs to upload to Treeherder; however, somewhere along the way it is being truncated in the wrong places and is not landing on Amazon S3 in its entirety. In the logcat linked below, searching for TEST-START and TEST-END finds the end of the penultimate test and the start of the final test, but no TEST-* tags before or after them. Logcat file: https://gaiatest.s3.amazonaws.com/18c5ee2b03cc5418a611fd6592437fc6f0ede5926cf42819b7f22ce5ef5b064718351c0fada81875f6a37e13739fb5ec4326797e1f67d6da84e86ff469f5a5a4 CI build: http://jenkins1.qa.scl3.mozilla.com/job/flame.b2g-inbound.ui.functional.sanity/1713/ Treeherder job: https://treeherder.allizom.org/ui/#/jobs?repo=b2g-inbound&revision=9f72c199944f
I think this is because the logcat is a circular buffer, which holds a maximum of 256 KB (on Flame) at any point in time. To capture more, we'd either need to somehow increase the buffer size or stream the logcat to disk. We could also grab the state of the logcat at the end of each failed test and include it in the HTML report.
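As a rough illustration of the streaming option, the following sketch spawns `adb logcat` as a separate process and redirects its output to a file on the host, so the capture isn't limited by the device's circular buffer. The function name and the `cmd` parameter are illustrative (the parameter exists here only so the command can be swapped out), and this assumes `adb` is on PATH with a single device attached.

```python
import subprocess

def start_logcat_stream(out_path="logcat.txt",
                        cmd=("adb", "logcat", "-v", "threadtime")):
    """Spawn a logcat process and stream its output to a file on the host.

    Returns the Popen handle and the file object so the caller can
    terminate the process and close the file when the run finishes.
    """
    out = open(out_path, "w")
    proc = subprocess.Popen(list(cmd), stdout=out, stderr=subprocess.STDOUT)
    return proc, out

# Usage: start before the suite, stop after it.
# proc, out = start_logcat_stream("logcat.txt")
# ... run tests ...
# proc.terminate(); out.close()
```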
I thought we had some tooling to capture data from the circular buffer and stream it to disk? :mdas, do you know where/if that code exists?
mozrunner does this by starting a separate process that listens to logcat and stores the output in a file: https://mxr.mozilla.org/mozilla-central/source/testing/mozbase/mozrunner/mozrunner/devices/base.py#137
(In reply to Malini Das [:mdas] from comment #3)
> mozrunner does this by starting a separate process listening to logcat and
> storing the output in a file
> https://mxr.mozilla.org/mozilla-central/source/testing/mozbase/mozrunner/mozrunner/devices/base.py#137

In that case we should be able to use this once bug 1038868 is resolved. We'll need to provide a --logcat-dir command line option, and it looks like we rotate the log file every time we connect to the device, up to a maximum of 3 files. As we restart between tests, each time we'd get a new log file, and by the end of the suite we'd have only a few log files. We could either store these log files in the HTML report alongside failed tests, or we could upload multiple files to S3 (and increase the maximum number), or perhaps look into a way to continuously write to the same file.

Andrew: Do you have any thoughts that might help us here? We were initially wanting to upload a full logcat from the entire test run to S3 and for inclusion in the Treeherder results. At the moment we only get the current buffer.
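To make the rotation idea concrete, here's a hedged sketch of handing out a fresh, sequentially numbered log file per test (with the test name in the file name, as suggested later in this bug) while pruning the oldest files when a maximum is set. The function name, the `--logcat-dir`-style `logcat_dir` argument, and the `max_files` option are illustrative and not part of the actual mozrunner API.

```python
import os

def next_logcat_path(logcat_dir, test_name, max_files=None):
    """Return a fresh logcat path for test_name, pruning old files.

    Files are numbered sequentially so lexicographic sort order matches
    creation order. With max_files set, the oldest files are deleted so
    at most max_files remain once the new file is written; max_files=None
    means 'unlimited', keeping every file.
    """
    os.makedirs(logcat_dir, exist_ok=True)
    existing = sorted(f for f in os.listdir(logcat_dir) if f.endswith(".log"))
    # Next index follows the highest existing one.
    index = int(existing[-1].split("-", 1)[0]) + 1 if existing else 0
    if max_files is not None:
        # Leave room for the new file: keep at most max_files - 1 old ones.
        for old in existing[:max(0, len(existing) - (max_files - 1))]:
            os.remove(os.path.join(logcat_dir, old))
    return os.path.join(logcat_dir, "%03d-%s.log" % (index, test_name))
```

With `max_files=None` this behaves like the "unlimited" setting discussed below; with `max_files=3` it mirrors the current rotation limit.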
Current buffer meaning the logcat for the current test? Not really sure how much I can help; if you restart between tests, then we'll just need to deal with multiple logcats. Personally, dumping all logcats into one file seems confusing. I'd make that magic 3 configurable, set it to 'unlimited', and then upload the whole directory. You'd probably also want the test name somewhere in the logcat file name.
(In reply to Andrew Halberstadt [:ahal] from comment #5)
> Current buffer meaning the logcat for the current test?

Meaning whatever the current buffer contains, which is usually just enough to cover a single test.

> Not really sure how much I can help, if you restart between tests, then
> we'll just need to deal with multiple logcats. Personally dumping all
> logcats into one file seems confusing.

I would agree. If we had a single logcat we would at least need to indicate clearly the start of each test.

> I'd make that magic 3 configurable, set it to 'unlimited' and then upload
> the whole directory. You'd probably also want the test name somewhere in
> the logcat file name.

The problem is how to represent this in Treeherder. Perhaps it would be worth archiving the logcat files and providing a link to the archive. I also think that it would be worth including the logcat for each failed test in the HTML report.
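The archiving idea above could look something like the following sketch: bundle every per-test logcat file into one zip so Treeherder only needs a single artifact link. The function name and paths are illustrative, not an existing API.

```python
import os
import zipfile

def archive_logcats(logcat_dir, archive_path):
    """Bundle every .log file in logcat_dir into a single zip archive.

    Returns the archive path, suitable for uploading to S3 and linking
    from the Treeherder job.
    """
    with zipfile.ZipFile(archive_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for name in sorted(os.listdir(logcat_dir)):
            if name.endswith(".log"):
                # arcname keeps the archive flat (no host directory paths).
                zf.write(os.path.join(logcat_dir, name), arcname=name)
    return archive_path
```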
B2G related, so closing.