Closed
Bug 1101337
Opened 10 years ago
Closed 10 years ago
Hang opening reftest analyzer
Categories
(Core :: XPCOM, defect)
Tracking
RESOLVED
FIXED
mozilla37
People
(Reporter: BenWa, Assigned: ehsan.akhgari)
Attachments
(4 files, 2 obsolete files)
6.66 KB, text/plain
218 bytes, text/html
11.16 KB, patch (froydnj: review+)
2.79 KB, patch
Works fine with TBPL but not Treeherder:
https://tbpl.mozilla.org/?tree=Try&rev=ebe810b66428
vs.
https://treeherder.mozilla.org/ui/#/jobs?repo=try&revision=ebe810b66428
Reporter
Comment 1•10 years ago
Comment 2•10 years ago
I've seen this too.
Reporter
Comment 3•10 years ago
Total guess: TBPL only feeds the short log to the reftest analyzer, while Treeherder feeds it everything. Maybe the platform needs to handle large text in a textbox faster, or maybe Treeherder needs to feed only the short log.
Comment 4•10 years ago
Agree this is most likely due to downloading the whole log, which is covered by bug 1063640.
We can either dupe this to there, or else morph this to be about platform improvements (to happen in parallel). Can I leave it up to you to decide if the latter is useful? :-)
Comment 6•10 years ago
From Philor in IRC:
philor> camd: if 1101337 isn't P1, then just removing the UI until it isn't fatal should be
philor> it's just too tempting, having it right there
Comment 7•10 years ago
Duping per comment 4.
I don't think this is a P1, I only hang for a second or two. Unless I've just not found a suitable example? (The links in comment 0 don't work any more)
Status: NEW → RESOLVED
Closed: 10 years ago
Flags: needinfo?(emorley)
Resolution: --- → DUPLICATE
Comment 8•10 years ago
I hit one today that made me force kill my browser. That wasn't nice.
Comment 9•10 years ago
In which case happy for you to make bug 1063640 a P1 :-)
Reporter
Comment 10•10 years ago
Nope, the platform shouldn't hang for a text workload in the 100 MB range (I believe logs are smaller). This is a platform bug.
You guys are welcome to work around it in bug 1063640 until we fix it in the platform.
Status: RESOLVED → REOPENED
Component: Treeherder → Editor
Flags: needinfo?(ehsan.akhgari)
Product: Tree Management → Core
Resolution: DUPLICATE → ---
Version: --- → Trunk
Reporter
Comment 11•10 years ago
Assignee
Comment 12•10 years ago
This is caused by nsContentUtils::PlatformToDOMLineBreaks being O(n^2). I have a test fix on the try server that makes that function O(n).
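For context, the quadratic pattern being described looks roughly like this; a simplified sketch using std::string rather than the XPCOM strings, not the actual Gecko code:

#include <string>

// Each match shifts the entire remaining tail of the string, so m matches
// over an n-character string cost O(n*m) character moves overall.
void naiveReplace(std::string& s, const std::string& target, const std::string& repl)
{
  if (target.empty()) {
    return;
  }
  size_t i = 0;
  while ((i = s.find(target, i)) != std::string::npos) {
    s.replace(i, target.size(), repl);  // moves everything after position i on every hit
    i += repl.size();
  }
}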
Assignee: nobody → ehsan.akhgari
Component: Editor → DOM
Flags: needinfo?(ehsan.akhgari)
Assignee
Comment 13•10 years ago
Benoit, please note that other things elsewhere in Gecko may still be slow in such circumstances. If you hit this in TreeHerder again after my patches land, please file follow-ups with profiles. Thanks!
Assignee
Comment 14•10 years ago
ReplaceSubstring() is an O(n^2) algorithm because we have to move the
remainder of the string, search it again, and potentially memmove most of
it again as we find more matches. It is easy, however, to implement a
linear algorithm that can only handle shrinking replacements (where the
length of the pattern to look for is not smaller than the length of the
replacement string). This patch adds such an API to the XPCOM string
classes. We will use this API in various parts of the platform later.
Note that we currently don't build TestStrings.cpp, so the test case in
this patch is not run automatically, but the test case has been verified
to pass separately by moving the test function into Gecko and calling it
during startup and stepping through it in the debugger.
Assignee
Comment 15•10 years ago
This will help with test cases such as assigning a value containing
a lot of line breaks to a textarea, as done in TreeHerder, for
example.
Assignee
Comment 16•10 years ago
Comment on attachment 8534786 [details] [diff] [review]
Part 1: Add a linear substring replace API to XPCOM strings for the special case where the replacement is a shrinking one; r=froydnj
Nathan, please review this mercilessly. :-)
Specifically, please let me know if you can think of other cases to test. I'd like to have as many different tests here as possible, even though we can't run them on the infra yet.
Attachment #8534786 - Flags: review?(nfroyd)
Assignee
Updated•10 years ago
Attachment #8534787 - Flags: review?(jst)
Comment 17•10 years ago
Thank you for digging into this :-)
Assignee
Comment 18•10 years ago
(In reply to Ed Morley (moved to Treeherder) [:edmorley] from comment #17)
> Thank you for digging into this :-)
Totally selfish! A few years ago I spent around six months of my life making exactly this much faster. Just finishing the job. :-)
Comment 19•10 years ago
Comment on attachment 8534786 [details] [diff] [review]
Part 1: Add a linear substring replace API to XPCOM strings for the special case where the replacement is a shrinking one; r=froydnj
Why not modify ReplaceSubstring to use the linear time algorithm in cases where we can? And/or why not modify ReplaceSubstring to be linear in all cases, not just shrinking ones?
Flags: needinfo?(ehsan.akhgari)
Assignee
Comment 20•10 years ago
(In reply to Nathan Froyd (:froydnj) from comment #19)
> Why not modify ReplaceSubstring to use the linear time algorithm in cases
> where we can?
I hate it when functions change their performance characteristics depending on their arguments, etc. I think a choice between a linear and a quadratic algorithm should be made explicitly by the caller. Note that in a lot of cases the string being searched is small and the quadratic algorithm doesn't hurt us that much, as evidenced by how long this has lived in the code base without anyone complaining.
> And/or why not modify ReplaceSubstring to be linear in all
> cases, not just shrinking ones?
Doing the linear algorithm in all cases is more involved, since it would have to deal with the doubling growth strategy and I would also need to decide how to handle the various storage mechanisms, whereas here I can just call MutatePrep() once with the maximum needed capacity. Also, the general-case linear algorithm wouldn't help me with the issue at hand, so I decided to stick with the simpler case and leave the rest to someone else. :-)
Flags: needinfo?(ehsan.akhgari)
Comment 21•10 years ago
(In reply to :Ehsan Akhgari (not reading bugmail, needinfo? me!) from comment #20)
> > And/or why not modify ReplaceSubstring to be linear in all
> > cases, not just shrinking ones?
>
> Doing the linear algorithm in all cases is more involved since it will deal
> with the doubling growth strategy and I would also need to deal with how to
> handle the various storage mechanisms, whereas here I can just call
> MutatePrep() once with the maximum needed capacity.
What's wrong with something like:
// Determine all locations of the target string.
nsAutoTArray<uint32_t, 16> positions;
uint32_t i = 0;
while (i < mLength) {
  int32_t offset = FindSubstring(mData + i, mLength - i, ...);
  if (offset == kNotFound) {
    break;
  }
  positions.AppendElement(offset);
  i = offset + aTarget.Length();
}

// Anything to do?
if (positions.IsEmpty()) {
  return;
}

// Compute the length of the resulting string.

// Single call to MutatePrep, to give us a new buffer.

// Copy non-matching portions of the string + replacements into new buffer,
// as the current patch does.
?
> Also the general case
> linear algorithm wouldn't help me with the issue at hand, so I decided to
> stick with the simpler case and leave the rest to someone else. :-)
The first part of this statement doesn't sound right to me: you want a linear algorithm, why does it matter whether the algorithm is general or handles only the shrinking case?
Flags: needinfo?(ehsan.akhgari)
Comment 22•10 years ago
(In reply to Nathan Froyd (:froydnj) from comment #21)
> // Determine all locations of the target string.
Actually might be better to record all the pieces that *don't* match; I think that makes the later parts of the algorithm be somewhat simpler.
Reporter
Comment 23•10 years ago
If we want to have a special case for shrinking that is faster, then that's fine.
But I think having an O(n^2) string replace is a big bug, and we should focus on fixing that first.
Assignee
Comment 24•10 years ago
(In reply to Nathan Froyd (:froydnj) from comment #21)
> (In reply to :Ehsan Akhgari (not reading bugmail, needinfo? me!) from
> comment #20)
> > > And/or why not modify ReplaceSubstring to be linear in all
> > > cases, not just shrinking ones?
> >
> > Doing the linear algorithm in all cases is more involved since it will deal
> > with the doubling growth strategy and I would also need to deal with how to
> > handle the various storage mechanisms, whereas here I can just call
> > MutatePrep() once with the maximum needed capacity.
>
> What's wrong with something like:
>
> // Determine all locations of the target string.
> nsAutoTArray<uint32_t, 16> positions;
> uint32_t i = 0;
> while (i < mLength) {
> int32_t offset = FindSubstring(mData + i, mLength - i, ...);
> if (offset == kNotFound) {
> break;
> }
>
> positions.AppendElement(offset);
> i = offset + aTarget.Length();
> }
>
> // Anything to do?
> if (positions.IsEmpty()) {
> return;
> }
>
> // Compute the length of the resulting string.
>
> // Single call to MutatePrep, to give us a new buffer.
>
> // Copy non-matching portions of the string + replacements into new buffer,
> // as the current patch does.
>
> ?
There is nothing wrong with it, it's just not necessarily better than what we have now. ReplaceSubstring can very well be used for very small strings so doing this can slow things down for such consumers. One can of course measure all of the cases that happen in real life and then choose what to do for the general case, but given that I haven't seen any profile to suggest that such an improvement can make a difference in practice, I'd rather not scope creep this bug.
> > Also the general case
> > linear algorithm wouldn't help me with the issue at hand, so I decided to
> > stick with the simpler case and leave the rest to someone else. :-)
>
> The first part of this statement doesn't sound right to me: you want a
> linear algorithm, why does it matter whether the algorithm is general or
> handles only the shrinking case?
Please look at the second part of the patch. The performance problem that I am trying to solve is the shrinking-only case. Doing a linear algorithm for the general case is strictly slower because you either need to realloc or you need to do two passes over the string. I just can't think of something that doesn't have those downsides.
Assignee
Comment 25•10 years ago
(In reply to Benoit Girard (:BenWa) from comment #23)
> But I thinking having a O(n^2) string replace is a big bug and we should
> focus on fixing that first.
Fixing that won't address this bug any better than my current patch does. Can we please move the discussion about the general algorithm to a follow-up bug?
Flags: needinfo?(ehsan.akhgari)
Assignee
Comment 26•10 years ago
OK, so I had a lengthy discussion about this with BenWa on IRC today. He was arguing that we should just make the general case linear and not worry about doing multiple passes on the string, and while I didn't agree with him initially, he got me curious to implement the two approaches, so I went ahead and did that. It turns out that with the test case on this bug, with either of the two approaches, nsContentUtils::PlatformToDOMLineBreaks won't show up in the profile!
So, BenWa was right, and we should fix this issue in XPCOM for all ReplaceSubstring() calls.
Component: DOM → XPCOM
Assignee
Updated•10 years ago
Attachment #8534786 - Attachment is obsolete: true
Attachment #8534786 - Flags: review?(nfroyd)
Assignee
Updated•10 years ago
Attachment #8534787 - Attachment is obsolete: true
Attachment #8534787 - Flags: review?(jst)
Assignee
Comment 27•10 years ago
ReplaceSubstring() is an O(n*m) algorithm (n being the length of the
string and m being the number of occurrences of aTarget) because we have
to move the remainder of the string, search it again and potentially
memmove most of it again as we find more matches. This patch rewrites
that function to make it O(n+m).
Note that we currently don't build TestStrings.cpp, so the test case in
this patch is not run automatically, but the test case has been verified
to pass separately by moving the test function into Gecko and calling it
during startup and stepping through it in the debugger.
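The overall shape of the O(n+m) approach, sketched in plain C++ for illustration; the real patch works against nsTSubstring and MutatePrep(), and all names here are assumptions:

#include <string>
#include <utility>
#include <vector>

// One pass records the non-matching segments, the new length is computed up
// front, and a single new buffer is filled with segments and replacements --
// no repeated tail shifting.
std::string replaceLinear(const std::string& s, const std::string& target,
                          const std::string& repl)
{
  if (target.empty()) {
    return s;
  }
  // Phase 1: record (start, length) of every non-matching segment.
  std::vector<std::pair<size_t, size_t>> segments;
  size_t i = 0, matches = 0;
  while (i <= s.size()) {
    size_t hit = s.find(target, i);
    size_t end = (hit == std::string::npos) ? s.size() : hit;
    segments.emplace_back(i, end - i);
    if (hit == std::string::npos) {
      break;  // a trailing (possibly empty) segment acts as the edge case
    }
    ++matches;
    i = hit + target.size();
  }
  // Phase 2: single allocation, then copy segments and replacements in order.
  std::string out;
  out.reserve(s.size() - matches * target.size() + matches * repl.size());
  for (size_t k = 0; k < segments.size(); ++k) {
    out.append(s, segments[k].first, segments[k].second);
    if (k + 1 < segments.size()) {
      out.append(repl);
    }
  }
  return out;
}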
Assignee
Updated•10 years ago
Attachment #8535388 - Flags: review?(nfroyd)
Reporter
Comment 28•10 years ago
(In reply to :Ehsan Akhgari (not reading bugmail, needinfo? me!) from comment #27)
> Note that we currently don't build TestStrings.cpp, so the test case in
> this patch is not run automatically, but the test case has been verified
> to pass separately by moving the test function into Gecko and calling it
> during startup and stepping through it in the debugger.
Might be a good candidate to port to GTest; it looks like simple but somewhat tedious work. Might be a decent first bug.
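For anyone picking that up, a GTest port of one such case might look roughly like this (illustrative only; the test name and string literals are assumptions):

#include "gtest/gtest.h"
#include "nsString.h"

// Hypothetical GTest port of one ReplaceSubstring case from TestStrings.cpp.
TEST(Strings, ReplaceSubstring_Shrinking)
{
  nsCString s("abc\r\ndef\r\n");
  s.ReplaceSubstring("\r\n", "\n");
  EXPECT_TRUE(s.EqualsLiteral("abc\ndef\n"));
}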
Comment 29•10 years ago
Comment on attachment 8535388 [details] [diff] [review]
Make the ReplaceSubstring() XPCOM string API linear; r=froydnj
Review of attachment 8535388 [details] [diff] [review]:
-----------------------------------------------------------------
BenWa for XPCOM string maintainer! :D I'm glad he had that conversation with you so I didn't have to. (And credit to NeilAway on IRC for prompting me to ask why we were adding a second function in the first place.)
::: xpcom/string/nsTStringObsolete.cpp
@@ +466,5 @@
> if (aTarget.Length() == 0)
> return;
>
> + // Remember all of the non-matching parts.
> + nsAutoTArray<std::pair<uint32_t, uint32_t>, 16> nonMatching;
I dislike std::pair for its lack of readability when using members. Could you please just write out the five lines of struct definition for the element type here?
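For illustration, the kind of element type being requested might look like this (member names are assumptions, not from the actual patch):

struct Segment
{
  uint32_t mBegin;  // offset of the first character of the non-matching run
  uint32_t mEnd;    // offset just past its last character
};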
@@ +482,5 @@
>
> + newLength += aNewValue.Length();
> + i += r + aTarget.Length();
> + if (i >= mLength) {
> + // Add an auxilary entry at the end of the list to help as an edge case
Nit: "auxiliary"
@@ +488,5 @@
> + nonMatching.AppendElement(std::make_pair(mLength, mLength));
> + break;
> + }
> + }
> +
I think it'd be good to insert a check here for:
// If there's only one non-matching segment, then the target string was not
// found, and there's nothing to do.
if (nonMatching.Length() == 1) {
  // Maybe MOZ_ASSERT the dimensions of the non-match are what we expect.
  return;
}
as that behavior matches the original. No sense heap-churning things in the non-matching case.
@@ +504,5 @@
> + if (aTarget.Length() >= aNewValue.Length()) {
> + // In the shrinking case, start filling the buffer from the beginning.
> + const uint32_t delta = (aTarget.Length() - aNewValue.Length());
> + for (i = 1; i < nonMatching.Length(); ++i) {
> + memmove(mData + nonMatching[i].first - i * delta,
There ought to be a way to name the reused-below subexpressions here to make the code clearer. Something like:
char_type* segmentPtr = mData + nonMatching[i].first;
would be a good start (bikeshedding on the name(s) welcome).
Can you also add a comment, something like:
// When we move the i'th non-matching segment into position, we need to account for the
// characters deleted by the previous |i| replacements by subtracting |i * delta|.
@@ +506,5 @@
> + const uint32_t delta = (aTarget.Length() - aNewValue.Length());
> + for (i = 1; i < nonMatching.Length(); ++i) {
> + memmove(mData + nonMatching[i].first - i * delta,
> + mData + nonMatching[i].first,
> + sizeof(char_type) * (nonMatching[i].second - nonMatching[i].first));
This is the only computation you perform with .second (duplicated below); it seems like you might as well just make .second be the length of the non-matching substring, rather than the end of the non-matching substring, and avoid this computation.
@@ +516,5 @@
> + const uint32_t delta = (aNewValue.Length() - aTarget.Length());
> + const uint32_t numNonMatching = nonMatching.Length();
> + if (numNonMatching > 1) {
> + for (i = numNonMatching - 1; i > 0; --i) {
> + memmove(mData + nonMatching[i].first + i * delta,
Same comments from the above case apply, but in the reverse sense ("adding" instead of "deleting") for |i * delta|.
Attachment #8535388 - Flags: review?(nfroyd) → review+
Assignee
Comment 30•10 years ago
Assignee
Comment 31•10 years ago
And backed out since ASAN discovered a buffer overflow. :(
Comment 32•10 years ago
Comment on attachment 8535388 [details] [diff] [review]
Make the ReplaceSubstring() XPCOM string API linear; r=froydnj
>+ // Add an auxilary entry at the end of the list to help as an edge case
>+ // for the algorithms below.
Would including the null terminator as part of the string data save you an edge case?
>+ if (!MutatePrep(newLength, &oldData, &oldFlags))
>+ return;
>+ if (oldData) {
>+ // Copy all of the old data to the new buffer.
(I'm sure there's a way to avoid copying all the old data to the wrong place and then fixing it up afterwards, but I can't quite think of it offhand.)
>+ memmove(mData + nonMatching[i].first - i * delta,
>+ mData + nonMatching[i].first,
>+ sizeof(char_type) * (nonMatching[i].second - nonMatching[i].first));
>+ char_traits::copy(mData + nonMatching[i].first - i * delta - aNewValue.Length(),
>+ aNewValue.Data(), aNewValue.Length());
Sticking to char_traits::copy would be slightly more readable and consistent.
(The second copy actually precedes the first copy in memory; I think the processor's cache might prefer you to write to the destination in strictly ascending order.)
>+ if (numNonMatching > 1) {
If the substring was not actually found in the original string, wouldn't it be better to return much earlier than this?
Assignee
Comment 33•10 years ago
(In reply to :Ehsan Akhgari (not reading bugmail, needinfo? me!) from comment #31)
> And backed out since ASAN discovered a buffer overflow. :(
OK, the problem was that in the shrinking case we could allocate a smaller buffer, which means losing access to part of the source data. So instead of allocating a newLength-sized buffer I need to allocate an XPCOM_MAX(mLength, newLength)-sized buffer.
(In reply to neil@parkwaycc.co.uk from comment #32)
> Comment on attachment 8535388 [details] [diff] [review]
> Make the ReplaceSubstring() XPCOM string API linear; r=froydnj
>
> >+ // Add an auxilary entry at the end of the list to help as an edge case
> >+ // for the algorithms below.
> Would including the null terminator as part of the string data save you an
> edge case?
No, the edge case is handling the last (in case of shrinking and first in case of growing) segment being empty. We still need to run the last iteration of the for loop in order to copy the replacement bytes into place.
> >+ if (!MutatePrep(newLength, &oldData, &oldFlags))
> >+ return;
> >+ if (oldData) {
> >+ // Copy all of the old data to the new buffer.
> (I'm sure there's a way to avoid copying all the old data to the wrong place
> and then fixing it up afterwards, but I can't quite think of it offhand.)
Yes. But I think having only two code paths helps the readability of this code.
> >+ memmove(mData + nonMatching[i].first - i * delta,
> >+ mData + nonMatching[i].first,
> >+ sizeof(char_type) * (nonMatching[i].second - nonMatching[i].first));
> >+ char_traits::copy(mData + nonMatching[i].first - i * delta - aNewValue.Length(),
> >+ aNewValue.Data(), aNewValue.Length());
> Sticking to char_traits::copy would be slightly more readable and consistent.
That is not an option since char_traits::copy uses memcpy and on the first copy the ranges are overlapping in memory, so you really do need memmove!
> (The second copy actually precedes the first copy in memory; I think the
> processor's cache might prefer you to write to the destination in strictly
> ascending order.)
Good point. Fixed.
> >+ if (numNonMatching > 1) {
> If the substring was not actually found in the original string, wouldn't it
> be better to return much earlier than this?
Nathan already suggested this, but I forgot to remove this now stale branch.
https://hg.mozilla.org/integration/mozilla-inbound/rev/95377313608b
Comment 34•10 years ago
(In reply to Ehsan Akhgari from comment #33)
> (In reply to comment #32)
> No, the edge case is handling the last (in case of shrinking and first in
> case of growing) segment being empty.
Meaning you deliberately trigger the code that handles that case?
> That is not an option since char_traits::copy uses memcpy and on the first
> copy the ranges are overlapping in memory, so you really do need memmove!
I've just discovered char_traits::move
Backed out in https://hg.mozilla.org/integration/mozilla-inbound/rev/afd27046e00c for Linux build bustage:
https://treeherder.mozilla.org/ui/logviewer.html#?job_id=4611334&repo=mozilla-inbound
Flags: needinfo?(ehsan.akhgari)
Assignee
Comment 36•10 years ago
(In reply to neil@parkwaycc.co.uk from comment #34)
> (In reply to Ehsan Akhgari from comment #33)
> > (In reply to comment #32)
> > No, the edge case is handling the last (in case of shrinking and first in
> > case of growing) segment being empty.
> Meaning you deliberately trigger the code that handles that case?
Yes, that's how I don't need to special case the other end!
> > That is not an option since char_traits::copy uses memcpy and on the first
> > copy the ranges are overlapping in memory, so you really do need memmove!
> I've just discovered char_traits::move
Awesome! This makes the code less ugly.
This was innocent, will reland.
Flags: needinfo?(ehsan.akhgari)
Comment 37•10 years ago
As well as avoiding the extra copy when a new buffer is created, I save the lengths of the gaps instead of their positions and use them to advance along the string as I make the replacements. (I used short variable names to save typing.)
Comment 38•10 years ago
Status: REOPENED → RESOLVED
Closed: 10 years ago → 10 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla37
Assignee
Comment 39•10 years ago
(In reply to neil@parkwaycc.co.uk from comment #37)
> Created attachment 8536004 [details] [diff] [review]
> For posterity this is what my version looks like
>
> As well as avoiding the extra copy when a new buffer is created, I save the
> lengths of the gaps instead of their positions and use them to advance along
> the string as I make the replacements. (I used short variable names to save
> typing.)
This looks good based on a quick look. Please put it up on a bug for review!