Closed Bug 1291173 Opened 6 years ago Closed 6 years ago

Show important info from memory reports in crash-stats

Categories

(Socorro :: Backend, task)


Tracking

(Not tracked)

RESOLVED FIXED

People

(Reporter: n.nethercote, Assigned: n.nethercote)

References

Details

Attachments

(4 files, 5 obsolete files)

Some crash reports contain a memory report (when ContainsMemoryReport=1). The memory report is full of interesting information, particularly when looking at OOM crashes. But it's currently very hard to view -- you have to go to the unredacted crash viewer and save the crash report, then load it locally in about:memory, and you can only do this if you have minidump access permissions.

It would be nice to have a processing step that analyzes the memory report when present and pulls out a small number of important things from it -- crucial fields, anything that looks suspicious, etc. -- and adds it to the processed crash report, for display in crash-stats's "Details" tab.

I'm not sure yet what the important things would be; determining them will require analyzing the data requested in bug 1291068.
Blocks: 1291174
Blocks: 1289676
Ted, how is this related to your idea of reprocessing with conditionals?

(a project, by the way, that I was going to lead but which had to take third place behind 1) migrating away from Persona and 2) sending crashes to Telemetry)
Flags: needinfo?(ted)
I don't think this is related. This is just an additional set of annotations that happens to live in a separate JSON document. We don't need to do any processing on it, simply load it and display some data.
Flags: needinfo?(ted)
As comment 0 mentions, when a crash report contains a memory report, I want
to extract a few key measurements from the memory report:
explicit, resident, vsize, heap-unclassified, etc.

These should be usable in the usual two ways.

1. They should be displayed in the individual crash report view on crash-stats.

2. They should be searchable, which means they must be added to the SuperSearch
   API.

One complication is that, with e10s, a memory report contains measurements for
multiple processes, and the number of processes can vary. Displaying 
measurements (case 1) for multiple processes is easy, but searching on
measurements (case 2) for 1..n processes is hard, because search fields need to
be fixed ahead of time.

The obvious thing to do is to only use the measurements for the process that
crashed. (Extracting the measurements for that process would require adding the
pid of the crashing process to the crash report, but that's not hard.) 

This idea makes sense for virtual OOM crashes, where if a process crashes it is
its own fault. It makes less sense for physical/page file OOM crashes, where
the OOM is caused by the interaction of all processes on the system. However,
the low memory heuristic that triggers the saving of a memory report in a crash
report only considers virtual memory! (One consequence of this is that we never
get memory reports for crashes on Win64.) So things fall out nicely.

Steps required to implement this:

1. Add the pid to the crash report.

2. Decide which memory report measurements are "key" and should be extracted.
   The ones in attachment 8789672 [details] are a good starting point: explicit,
   resident, vsize, vsize-max-contiguous, private, heap-allocated,
   resident-unique, heap-unclassified, top-non-detached, system-heap-allocated,
   ghost-windows.

3. Write a Python script to extract the key measurements from a crash report.
   It'll be similar to the one attached to bug 1291068.

4. Insert that script into Socorro's crash report processing pipeline, and use
   it to augment the processed crash report.

5. Add those key measurements to the SuperSearch API.

6. Show those key measurements in the crash-stats UI.

7. Add the key measurements to things considered by Marco's correlation tools.
   This would allow us to, for example, get correlations between ghost-windows
   and extensions!
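For concreteness, step 3 might look something like the following sketch. This is not the real script from bug 1291068: the set of key paths is abbreviated, and the heap-unclassified derivation is simplified. The field names (`process`, `path`, `kind`, `units`, `amount`) follow the about:memory memory-report JSON format, in which heap-classified reports have kind 1.

```python
import re

# Simplified sketch of the key-measurement extraction (step 3).
KEY_PATHS = {"resident", "resident-unique", "vsize", "private",
             "heap-allocated", "system-heap-allocated", "ghost-windows"}
KIND_HEAP = 1  # about:memory's KIND_HEAP

def extract_key_measurements(memory_report, pid):
    """Return {path: amount} for the process with the given pid."""
    pid_re = re.compile(r"\(pid %d\)" % pid)
    found = {}
    heap_classified = 0
    for r in memory_report["reports"]:
        if not pid_re.search(r["process"]):
            continue
        if r["path"] in KEY_PATHS:
            found[r["path"]] = r["amount"]
        if r["kind"] == KIND_HEAP:
            heap_classified += r["amount"]
    if not found:
        raise ValueError("no measurements found for pid %d" % pid)
    # heap-unclassified is derived: heap-allocated minus everything
    # that heap reporters accounted for.
    if "heap-allocated" in found:
        found["heap-unclassified"] = found["heap-allocated"] - heap_classified
    return found
```

The real script also computes `explicit` (a sum over the explicit/ tree) and handles the error cases shown in the attached bad*.json files.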

Of these, I am able to do 1, 2, 3, and 6. I will need help with 4 and 5, and
Marco would need to do 7.

I am needinfo'ing a whole bunch of people for feedback because there are
multiple pieces to this and I don't want to do a bunch of work only to discover
that they won't fit together for some reason. Please let me know if this plan
has any holes in it.
Flags: needinfo?(ted)
Flags: needinfo?(peterbe)
Flags: needinfo?(mcastelluccio)
Flags: needinfo?(erahm)
Flags: needinfo?(continuation)
Flags: needinfo?(chris.lonnen)
Flags: needinfo?(adrian)
(In reply to Nicholas Nethercote [:njn] from comment #3)
> 1. Add the pid to the crash report.

The process ID can be in the MDRawMiscInfo struct in the dump, though I don't know if it is reliably there:
https://chromium.googlesource.com/breakpad/breakpad/+/master/src/google_breakpad/common/minidump_format.h#769

I assume it is on Windows, and that's the only place we currently send memory reports anyway. (It wouldn't be hard to fix that if it's not present on other platforms.)
Flags: needinfo?(ted)
I checked a sample of dumps I have locally, and the PID seems to be reliably available on Windows, at least. We don't expose that from the stackwalker right now, but it would be very easy to do so.
(In reply to Nicholas Nethercote [:njn] from comment #3)
> As comment 0 mentions, when a crash report contains a memory report, I want
> to extract a few key measurements from the memory report:
> explicit, resident, vsize, heap-unclassified, etc.
> 
> These should be usable in the usual two ways.
> 
> 1. They should be displayed in the individual crash report view on
> crash-stats.
> 

This is https://bugzilla.mozilla.org/show_bug.cgi?id=1061371 which is in PR review right now (https://github.com/mozilla/socorro/pull/3468)
However, it's only going to make it easy to *download* the memory report. It won't display it as pretty-printed JSON.

> 2. They should be searchable, which means they must be added to the
> SuperSearch
>    API.
> 

This is non-trivial since they're large. It's the same reason ElasticSearch only indexes the basic fields and converts the `json_dump` into a summary/redacted version. 

I can do an investigation to try to figure out how many bytes it would add. 
I would do that by first scanning a sample of 1,000 Firefox crashes and see how many have ContainsMemoryReport=1 and based on that, download them all locally and measure their size. 


A first step would perhaps be to make it easier to extract the memory_report using the API.
E.g. you give it a UUID (or several) and it returns the memory report as JSON. Then you'd be able to download reports locally and run some scripts to do your own analysis.
Flags: needinfo?(peterbe)
The Gecko side of this plan sounds reasonable to me.
Flags: needinfo?(continuation)
That said, I'm not sure if there's ever been much useful information gotten from crash dump reports. Maybe it would be worth spending time digging through a few hundred manually downloaded reports to see if there's something useful there before investing all of this effort in a giant new reporting system.
> from crash dump reports.
Sorry, I meant from the memory reports contained in crash dumps.
In fairness we haven't really ever made it easy to get at the memory report information. Once bug 1061371 goes live you'll at least be able to download the raw memory-report.json.gz to load in about:memory, so we can do some manual testing with that.
I didn't realize, when writing my comment, that the memory report IS available in the processed crash. AND it is sent into ElasticSearch. How to query it in SuperSearch I don't know. That's for Adrian to explain.
(In reply to Peter Bengtsson [:peterbe] from comment #6)
> >
> > 1. They should be displayed in the individual crash report view on
> > crash-stats.
> 
> This is https://bugzilla.mozilla.org/show_bug.cgi?id=1061371 which is in PR
> review right now (https://github.com/mozilla/socorro/pull/3468)
> However, it's only going to make it easy to *download* the memory report. It
> won't display it in pretty print JSON. 
>
> > 2. They should be searchable, which means they must be added to the
> > SuperSearch API.
> 
> This is non-trivial since they're large. It's the same reason ElasticSearch
> only indexes the basic fields and converts the `json_dump` into a
> summary/redacted version. 

Please read comment 3 again, especially step 2 of my proposed solution: my plan is to add a processing step that pulls out a small number of key measurements from the memory report and adds them to the processed crash report. (I have listed 11 possible key measurements; that list may change slightly but it is indicative.)

Those key measurements are what I want to be displayed in the webapp UI; not the entire memory report. (Having a nice way to download the full memory report for detailed analysis is still a good thing, so I fully support bug 1061371!) 

And because the number of key measurements is small and fixed, adding it to the SuperSearch API should be straightforward.

> I can do an investigation to try to figure out how many bytes it would add. 
> I would do that by first scanning a sample of 1,000 Firefox crashes and see
> how many have ContainsMemoryReport=1 and based on that, download them all
> locally and measure their size. 

This isn't necessary. I'm just talking about adding 11 or so fields to a processed crash. 
 
> One first step would perhaps be to make it easier to extract the
> memory_report more easily using the API. 
> E.g. you give it a UUID(s) and returns the memory report in JSON. Then you'd
> be able to download them locally and run some scripting to do your own
> analysis.

Again, simpler access to the full memory report would be a good thing, but that's orthogonal to this plan.
(In reply to Andrew McCreight [:mccr8] from comment #8)
> That said, I'm not sure if there's ever been much useful information gotten
> from crash dump reports. Maybe it would be worth spending time digging
> through a few hundred manually downloaded reports to see if there's
> something useful there before investing all of this effort in a giant new
> reporting system.

I analyzed ~4,500 such reports in bug 1291068 and concluded that this is worthwhile. I could be wrong, but we won't know for sure if we don't try.
(In reply to Peter Bengtsson [:peterbe] from comment #11)
> I didn't realize, when writing my comment, that the memory report IS
> available in the processed crash. AND it is sent into ElasticSearch. How to
> query it in SuperSearch I don't know. That's for Adrian to explain.

One thing worth mentioning that isn't obvious: the memory report JSON isn't suitable for direct querying, because it wasn't designed for that. The format is basically a big unsorted array of individual measurements, so you need to iterate over every entry in the JSON to pull out any of them, and some of the key measurements (e.g. explicit, heap-unclassified) are a function of multiple individual measurements. So even if it currently is in the database, it's stored in a way that isn't usefully queryable. That's why a processing step is required.
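To illustrate, here is the rough shape of a memory report (the paths and amounts below are made-up examples; the field names follow the about:memory JSON format). Entries for all processes are interleaved in one flat array, so even reading a single figure for a single process requires a full scan:

```python
# Made-up example data in the about:memory memory-report shape.
memory_report = {
    "version": 1,
    "reports": [
        {"process": "Main Process (pid 1122)",
         "path": "explicit/js-non-window/gc-heap", "kind": 1, "units": 0,
         "amount": 31457280, "description": "..."},
        {"process": "Web Content (pid 3344)", "path": "resident",
         "kind": 2, "units": 0, "amount": 104857600, "description": "..."},
        {"process": "Main Process (pid 1122)", "path": "resident",
         "kind": 2, "units": 0, "amount": 209715200, "description": "..."},
    ],
}

# Pulling out one measurement for one process means scanning every entry:
resident = next(r["amount"] for r in memory_report["reports"]
                if r["path"] == "resident" and "(pid 1122)" in r["process"])
```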
The plan looks all good to me. If you need help implementing it, let me know!
Flags: needinfo?(adrian)
Sounds good to me, I'll need some help to define the discretization of the measurements (my correlation tool works with discrete values).
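One possible discretization would bucket the byte-valued measurements into a handful of labeled ranges. The bucket boundaries below are made up for illustration; the comment above only says the correlation tool needs discrete values.

```python
# Hedged sketch: turn a byte-valued measurement into a discrete bucket
# label the correlation tool could work with. Boundaries are illustrative.
def discretize_bytes(n):
    buckets = [(64 << 20, "< 64 MB"), (256 << 20, "64-256 MB"),
               (1 << 30, "256 MB - 1 GB"), (2 << 30, "1-2 GB")]
    for limit, label in buckets:
        if n < limit:
            return label
    return ">= 2 GB"
```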
Flags: needinfo?(mcastelluccio)
Flags: needinfo?(chris.lonnen)
Attached file extract-key-memory-measurements.py (obsolete) —
Here is a revised script that extracts key memory measurements from a crash report that has a `memory_report` field. Using the plan from comment 3, this is step 3. You invoke it with two arguments, a crash report filename and a pid, and it produces output like this:

> resident-unique         1,248,493,568
> explicit                1,067,751,198
> resident                1,312,665,600
> heap-unclassified          65,177,160
> system-heap-allocated      25,666,977
> vsize-max-contiguous       62,849,024
> heap-allocated            615,807,696
> vsize                   1,957,425,152
> private                 1,233,248,256
> top-non-detached          150,643,984
> ghost-windows                       0

erahm, can you please check the script for correctness? I'd also be interested to hear if you think anything else should be measured. (I have an "njn:" comment in the script with some additional suggestions.)

lonnen, can you please help me with step 4 of the plan, i.e. inserting this script into Socorro's processing pipeline? I'm happy to change the script's output, e.g. I could print JSON instead if that would help. But I don't know how Socorro needs modification to get this script into its processing pipeline, so I will need help with that.
Attachment #8804975 - Flags: feedback?(erahm)
Attachment #8804975 - Flags: feedback?(chris.lonnen)
This will probably be a transform rule or similar. Adrian can help.
Flags: needinfo?(erahm) → needinfo?(adrian)
(In reply to Nicholas Nethercote [:njn] from comment #17)
> 
> erahm, can you please check the script for correctness? I'd also be
> interested to hear if you think anything else should be measured. (I have an
> "njn:" comment in the script with some additional suggestions.)

Overall it looks good, you might want to restructure into a proper class/function + a main entry point. I'd also suggest adding handling of gzipped memory reports (although maybe that's not an issue in socorro). I could see a slightly more generalized version of this being useful elsewhere, something like:

> memory_totals((things you care about), pid=None)

As far as values to look at, maybe add these as well:

>  - host-object-urls       <- essentially blob leaks due to CreateObjectURL
>  - gfx-textures           <- graphics leaks
>  - sum(images/)           <- image decoding leaks / cache eviction issues
>  - sum(js-main-runtime/)  <- JS memory overhead

Number of windows sounds interesting; heap-overhead might be a good proxy for fragmentation, although we already have vsize-max-contiguous.

As far as output goes it might be nice to add a flag to either just output in json or pretty print. If you just care about script processing I'd go with json.
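The generalized `memory_totals((things you care about), pid=None)` helper suggested above might look roughly like this. The subtree-summing behavior for trailing-slash paths (e.g. "images/", "js-main-runtime/") is my reading of the sum(...) notation above, not a spec:

```python
# Rough sketch of the suggested memory_totals helper. A trailing '/'
# in a requested path is treated here as "sum the whole subtree";
# that behavior is an assumption.
def memory_totals(reports, paths, pid=None):
    totals = {p: 0 for p in paths}
    for r in reports:
        if pid is not None and ("(pid %d)" % pid) not in r["process"]:
            continue
        for p in paths:
            if r["path"] == p or (p.endswith("/") and r["path"].startswith(p)):
                totals[p] += r["amount"]
    return totals
```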
Attachment #8804975 - Flags: feedback?(erahm) → feedback+
I'm going to start rewriting your code as a Socorro processor rule so that we can use it easily there.
Flags: needinfo?(adrian)
This is what I have so far: https://github.com/mozilla/socorro/compare/master...adngdb:1291173-processor-rule-memory-report?expand=1

It lacks good unit tests, and we still don't have the pid in our data. :njn, do you have good examples for input/output that I could use in the tests?
Attachment #8804975 - Flags: feedback?(chris.lonnen)
Attached file good.json (obsolete) —
Here is a sample file that can be used for unit testing. erahm suggested extracting some more measurements, but since you've started rewriting my script let's put that on hold for the moment.

Here is the expected output from my original script. (Note that I have sorted the output because the script is non-deterministic in the order it prints!)

> $ ./extract-key-memory-measurements.py ~/good.json 11620
> explicit                  232,227,872
> ghost-windows                       7
> heap-allocated            216,793,184
> heap-unclassified         171,114,283
> private                   182,346,923
> resident                  330,346,496
> resident-unique           253,452,288
> system-heap-allocated         123,456
> top-non-detached           45,678,901
> vsize                   1,481,437,184
> vsize-max-contiguous        2,834,628
> 
> $ ./extract-key-memory-measurements.py ~/good.json 11717
> explicit                   20,655,576
> ghost-windows                       0
> heap-allocated             20,655,576
> heap-unclassified          20,655,576
> private                             0
> resident                  123,518,976
> resident-unique            56,209,408
> system-heap-allocated         234,567
> top-non-detached                    0
> vsize                     905,883,648
> vsize-max-contiguous        5,824,618

Also, if you run the script with a pid not equal to 11620 or 11717, you'll get an error like "ValueError: no measurements found for pid 12345".
Attached file bad1.json (obsolete) —
Run the script over this file with any pid and you'll get this error:

"ValueError: File does not contain recognisable memory reports"
Attached file bad2.json (obsolete) —
Run the script over this file with pid=11620 and you'll get this error:

"ValueError: bad units for an explicit/ report: explicit/foo, 1"
Attached file bad3.json (obsolete) —
Run the script over this file with pid=11620 and you'll get this error:

"ValueError: bad kind for an explicit/ report: explicit/foo, 2"
Thanks for your files Nicholas! I have tests now, and they pass, see the branch: https://github.com/mozilla/socorro/compare/master...adngdb:1291173-processor-rule-memory-report?expand=1

I believe the only thing we now need is the pid?
> I believe the only thing we now need is the pid?

Yes. Ted wants to extract it from the minidump, which is complicated. I've started trying to get it, but am having link problems with the stackwalker code :(
Depends on: 1318481
Notes to self:
* change how to pull the pid from the crash data
* update SuperSearchFields with new data
* update the JSON schema with new data so it goes to Telemetry
* add fields to /report/index page
Here's a pull request that should be working!
Commit pushed to master at https://github.com/mozilla/socorro

https://github.com/mozilla/socorro/commit/59f84872e58f10d9a5159a7964badea5737edee7
Bug 1291173 - Added a rule to extract key measurements from memory_report. (#3635)

* Bug 1291173 - Added a rule to extract key measurements from memory_report.

* Added tests for memory measurements.

* Get pid from the right place. Added the new rule to the list of rules to apply.

* correct import

* handle KeyError, better predicate.

* Added doc about constants.
What should these fields be exposed as in Super Search? 

I propose we call them `memory_*`, so `explicit` would become `memory_explicit` and `ghost-windows` would become `memory_ghost_windows`. Does that look good?
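The proposed mapping is mechanical: prefix with `memory_` and turn dashes into underscores. A throwaway sketch:

```python
# Mechanical renaming proposed above: "memory_" prefix, dashes become
# underscores so the names fit Super Search field conventions.
def supersearch_name(measurement):
    return "memory_" + measurement.replace("-", "_")
```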
> I propose we call them `memory_*`, so `explicit` would become
> `memory_explicit` and `ghost-windows` would become `memory_ghost_windows`.
> Does that look good?

Sounds fine.

Note that I intend to expand the extraction code to get a few more values from the memory report (as per comment 19). Hopefully I'll get to that on Friday. We should wait until then to do the remaining steps (e.g. Super Search) so that we don't end up doing those steps twice. Thanks!
Depends on: 1330819
I think the remaining tasks are these:

1. update SuperSearchFields with new data
2. update the JSON schema with new data so it goes to Telemetry
3. add fields to /report/index page
4. update correlations tool to handle the new fields

I'm happy to do 3. Adrian, do we have example crashes in Socorro that have the new fields present? I will need an example crash to work with.

Also, Adrian, can you do 1 and 2?
Flags: needinfo?(adrian)
I'll do 1. and 2. on Tuesday next week. Points 3 and 4 are blocked respectively by points 1 and 2.
Flags: needinfo?(adrian)
Alright, I added the following fields on stage:
* memory_explicit
* memory_gfx_textures
* memory_ghost_windows
* memory_heap_allocated
* memory_head_overhead
* memory_heap_unclassified
* memory_host_object_urls
* memory_images
* memory_js_main_runtime
* memory_private
* memory_resident
* memory_resident_unique
* memory_system_heap_allocated
* memory_top_non_detached
* memory_vsize
* memory_vsize_max_contiguous

I've made them all number fields, stored as `long` in Elasticsearch. Nick, could you please verify they work, and provide some more details? Notably, can you tell me if any of these fields should be something other than a `long`, and could you give me descriptions of each one? I'll then use that in our Super Search documentation and in the crash report JSON Schema that we use to send data to telemetry.
Flags: needinfo?(n.nethercote)
(In reply to Adrian Gaudebert [:adrian] from comment #35)
> Alright, I added the following fields on stage:
> * memory_explicit
> * memory_gfx_textures
> * memory_ghost_windows
> * memory_heap_allocated
> * memory_head_overhead

This should be memory_heap_overhead. Hopefully just a typo in the Bugzilla comment?

> * memory_heap_unclassified
> * memory_host_object_urls
> * memory_images
> * memory_js_main_runtime
> * memory_private
> * memory_resident
> * memory_resident_unique
> * memory_system_heap_allocated
> * memory_top_non_detached
> * memory_vsize
> * memory_vsize_max_contiguous
> 
> I've made them all number fields, stored as `long` in Elasticsearch. Nick,
> could you please verify they work, and provide some more details?

How do I verify? E.g. I looked at https://crash-stats.allizom.org/report/index/860259cd-8f9b-4ae7-a59d-b09ee2170127#tab-details and I can't see the fields. Not sure what you're asking me to do here.

> Notably,
> can you tell me if any of these fields should be something other than a
> `long`

`long` is fine.

> and could you give me descriptions of each one? I'll then use that
> in our Super Search documentation and in the crash report JSON Schema that
> we use to send data to telemetry.

The easy thing for descriptions is to just say, for "memory_foo", "The 'foo' measurement from about:memory." Because about:memory itself has descriptions...
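That description scheme can be generated mechanically from the `memory_*` field names proposed earlier, assuming the fields keep the underscore-for-dash convention:

```python
# Mechanical description generator for the memory_* Super Search fields:
# strip the "memory_" prefix and map underscores back to the dashed
# about:memory measurement names.
def field_description(field):
    measurement = field[len("memory_"):].replace("_", "-")
    return "The '%s' measurement from about:memory." % measurement
```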
Flags: needinfo?(n.nethercote) → needinfo?(adrian)
(In reply to Nicholas Nethercote [:njn] from comment #36)
> (In reply to Adrian Gaudebert [:adrian] from comment #35)
> > * memory_head_overhead
> 
> This should be memory_heap_overhead. Hopefully just a typo in the Bugzilla
> comment?

Yes, luckily I noticed that while adding the field, but not here in my comment. 

> How do I verify? E.g. I looked at
> https://crash-stats.allizom.org/report/index/860259cd-8f9b-4ae7-a59d-
> b09ee2170127#tab-details and I can't see the fields. Not sure what you're
> asking me to do here.

They are available in Super Search. You can facet on them, etc. But it might not be that important. 

> The easy thing for descriptions is to just say, for "memory_foo", "The 'foo'
> measurement from about:memory." Because about:memory itself has
> descriptions...

That sounds good!
Flags: needinfo?(adrian)
All fields have been added to Super Search on both stage and production! That means step 1. is completed and we can proceed with step 3.
Nick, I just realized while working on adding those fields to our crash report JSON schema that their names break our convention of using underscores (and not dashes). I thus propose that we merge this before going forward, while it's still quite easy to fix. This will require me to change all of those fields in our Super Search Fields lists, but that's fine. I'd rather we keep a consistent naming convention across our documents, especially in Telemetry.
Attachment #8832078 - Flags: review?(n.nethercote)
This is the work for step 2. from Nick's plan. It must not be merged before the previously linked PR though, as it relies on the fields changing names.
Comment on attachment 8832078 [details] [review]
Link to Github pull-request: https://github.com/mozilla/socorro/pull/3664

The conversion to underscores is fine, but there's an easier way to do it, as I noted on GitHub.
Attachment #8832078 - Flags: review?(n.nethercote) → feedback+
Adrian, one other comment: we're using "memory_" as the common prefix for all these measurements. But there is also a "memory_report_error" field, which looks similar but means something different. Maybe we can live with this, but it is a source of potential confusion.
Another one: memory_top_non_detached should be memory_top_none_detached. In fact, we've accidentally used "non" instead of "none" in numerous places, all of which need fixing.
(In reply to Nicholas Nethercote [:njn] from comment #42)
> Adrian, one other comment: we're using "memory_" as the common prefix for
> all these measurements. But there is also a "memory_report_error" field,
> which looks similar but means something different. Maybe we can live with
> this, but it is a source of potential confusion.

We can still easily change the name of those fields in Super Search. Do you have a suggestion? Maybe something like `mem_*` instead of `memory_*`? So we would have e.g. `mem_top_none_detached` instead of `memory_top_none_detached`.
> We can still easily change the name of those fields in Super Search. Do you
> have a suggestion? Maybe something like `mem_*` instead of `memory_*`? So we
> would have e.g. `mem_top_none_detached` instead of
> `memory_top_none_detached`.

Hmm, not sure that's much better. A different thought: do we need to expose the "memory_report_error" field? Doesn't seem that useful?
Comment on attachment 8832078 [details] [review]
Link to Github pull-request: https://github.com/mozilla/socorro/pull/3664

New patch on GitHub looks good.
Attachment #8832078 - Flags: feedback+ → review+
Commit pushed to master at https://github.com/mozilla/socorro

https://github.com/mozilla/socorro/commit/f6d577af0f0feebd26679c25012cf6a6f70b0be1
Bug 1291173 - Renamed memory measures keys to have underscores. (#3664)

* Bug 1291173 - Renamed memory measures keys to have underscores.
These keys had dashes in their names, leading to inconsistent naming accross our crash documents. That was to make extracting values from the memory report simpler, but we should not sacrifice naming conventions. With this all keys use underscore naming.

* Better code, and renamed top-none-detached
`memory_report_error` contains the error message if a memory_report could not be loaded. I believe it is useful to people trying to debug them, but maybe it isn't. The idea was mainly to not put the error message inside the memory_report field, as that gave it a non-JSON value which would cause problems with some scripts. 

But now that I think about it, it wasn't exposed in prod until last week and no one complained, so I can just remove it.
Attachment #8809254 - Attachment is obsolete: true
Attachment #8809255 - Attachment is obsolete: true
Attachment #8809256 - Attachment is obsolete: true
Attachment #8804975 - Attachment is obsolete: true
Attachment #8809253 - Attachment is obsolete: true
Commit pushed to master at https://github.com/mozilla/socorro

https://github.com/mozilla/socorro/commit/8206a13386037ce4ae1028cf434a2d17626a44bc
Bug 1291173 - Show key measurements from the memory report for each crash report. (#3666)
> 4. update correlations tool to handle the new fields

This is the only thing remaining! Marco, are you able to do this?
Flags: needinfo?(mcastelluccio)
If it's any use, host-object-urls and ghost-windows are both counts (i.e. unitless) and the other 14 measurements are measured in bytes.
Commit pushed to master at https://github.com/mozilla/socorro

https://github.com/mozilla/socorro/commit/5559de74654697828dd4cceba6b0cd8130150102
[DNM] Bug 1291173 - Added all new memory measurement fields to our crash re… (#3665)

* Bug 1291173 - Added all new memory measurement fields to our crash report schema.

* Fixes top_none_detached name.

* Changed descriptions.
Comment on attachment 8833127 [details] [review]
Link to Github pull-request: https://github.com/mozilla/socorro/pull/3666

Adrian merged this on GitHub, which implies r+ :)
Attachment #8833127 - Flags: review?(adrian) → review+
I've opened bug 1341132 for the correlations part.
Flags: needinfo?(mcastelluccio)
This bug is done! Thank you, everyone.
Status: ASSIGNED → RESOLVED
Closed: 6 years ago
Resolution: --- → FIXED
It's speculative, but https://github.com/mozilla/socorro/commit/5559de74654697828dd4cceba6b0cd8130150102#diff-82e3b30a9c5737b0a5e278a3d095bf27 may have broken the JSON Schema validation.

Also see https://bugzilla.mozilla.org/show_bug.cgi?id=1343623
Status: RESOLVED → REOPENED
Depends on: 1343619
Resolution: FIXED → ---
Commit pushed to master at https://github.com/mozilla/socorro

https://github.com/mozilla/socorro/commit/1c56ef48db6cddf9d9d885a8c6a1977c575cb37f
Fixes bug 1291173 and bug 1311647 by readding the new fields back with correct types (#3688)

* Revert "fixes bug 1343619 - Revert crash_reports.json to state BEFORE Feb 2017"

* Fixes bug 1291173 and bug 1311647 by readding the new fields back with correct types
Status: REOPENED → RESOLVED
Closed: 6 years ago6 years ago
Resolution: --- → FIXED