Closed Bug 572791 Opened 15 years ago Closed 8 years ago

Processor should truncate over long stacks for all threads, not just the crashing thread

Categories

(Socorro :: General, task)

Platform: x86, macOS
Type: task
Priority: Not set
Severity: normal

Tracking

(Not tracked)

RESOLVED WONTFIX

People

(Reporter: ozten, Unassigned)

Details

1.3 MB crash dumps like f2bef626-bb69-4c08-af5a-7bcab2100617 should have their fields truncated in a reasonable manner. @ted noted in IRC that the truncation routines already exist.
This is actually about truncating stack frames in the output of minidump_stackwalk. The processor already slices the middle out of over-long stack listings for the crashing thread; we've just hit the case of a non-crashing thread having an over-long frame list. The existing truncation routines that act on the crashing thread should be extended to act on all threads.
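A minimal sketch of the "slice the middle out" truncation described above, applied to every thread rather than only the crashing one. All names here (`truncate_frames`, `truncate_all_threads`, the `frames` / `frames_truncated` keys, and the limits) are hypothetical illustrations, not Socorro's actual API:

```python
def truncate_frames(frames, max_frames=100, head=50):
    """Keep the first `head` and the last `max_frames - head` frames of an
    over-long stack, dropping the middle. Returns (frames, was_truncated).
    Hypothetical helper; not Socorro's real routine."""
    if len(frames) <= max_frames:
        return frames, False
    tail = max_frames - head
    return frames[:head] + frames[-tail:], True

def truncate_all_threads(threads, max_frames=100):
    """Apply the truncation to every thread's frame list, not just the
    crashing thread's. Each thread is assumed to be a dict with a
    'frames' list (an assumption for this sketch)."""
    for thread in threads:
        thread["frames"], thread["frames_truncated"] = truncate_frames(
            thread["frames"], max_frames
        )
    return threads
```

Keeping the top of the stack (where the crash-relevant frames usually are) plus the bottom (thread entry points) while dropping the repetitive middle is what bounds the JSON size without losing the frames people actually look at.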
Summary: Processor should use the existing Breakpad stack truncation routines → Processor should truncate over long stacks for all threads, not just the crashing thread
This might be more urgent than first thought. xstevens' numbers indicate the median jsonz we are storing in HBase is 1.2 MB, so there must be a lot of these.
My first set of numbers were wrong due to some nuances with Hadoop Writables. Here are the updated calculations (bytes):

Min.: 759
1st Qu.: 30761
Median: 36946
Mean: 37529
3rd Qu.: 42898
Max.: 2070278

These are calculated like so: value.toString().getBytes("UTF-8").length
the fix is in place for v1.8 - do you need a back port to 1.7.x?
No. Since the 3rd-quartile numbers are still small, we aren't in any danger and can wait for 1.8 as far as HBase goes. Let someone else answer about potential UI problems.
Component: Socorro → General
Product: Webtools → Socorro
Status: NEW → RESOLVED
Closed: 8 years ago
Resolution: --- → WONTFIX