Closed Bug 1262671 Opened 8 years ago Closed 8 years ago

IPC::Message data need not be contiguous

Categories

(Core :: IPC, defect)

Type: defect
Priority: Not set
Severity: normal

Tracking

Status: RESOLVED FIXED
Target Milestone: mozilla49

Tracking Status
e10s + ---
firefox48 --- wontfix
firefox49 --- fixed

People

(Reporter: billm, Assigned: billm)

References

Details

(Whiteboard: btpp-active)

Attachments

(7 files, 8 obsolete files)

3.66 KB, patch, froydnj: review+
203.29 KB, patch, billm: review+
32.07 KB, patch, froydnj: review+
74.82 KB, patch, froydnj: review+
12.82 KB, patch, froydnj: review+
20.08 KB, patch, froydnj: review+
23.76 KB, patch, froydnj: review+
I don't think there's any reason that the data in an IPC message needs to be stored contiguously. We end up reading the data out into other data structures via ParamTraits; otherwise the data is never used. So I think we could (with a lot of work) rewrite all our ParamTraits functions to handle non-contiguous input. We might also be able to pull a trick like in bug 1253131 and re-allocate the buffer to be contiguous if any of the ParamTraits needed for a particular message doesn't support non-contiguous memory.
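
For illustration (MyPoint is a made-up type, and the iterator is shown as the PickleIterator* this bug later introduces), a ParamTraits implementation that only goes through the generic WriteParam/ReadParam helpers never touches the message storage directly, so it is already agnostic to whether the bytes behind the iterator are stored contiguously:

struct MyPoint { int32_t x; int32_t y; };

template<>
struct ParamTraits<MyPoint>
{
  typedef MyPoint paramType;

  static void Write(Message* aMsg, const paramType& aParam)
  {
    // Each field goes through the generic helper; no raw buffer access.
    WriteParam(aMsg, aParam.x);
    WriteParam(aMsg, aParam.y);
  }

  static bool Read(const Message* aMsg, PickleIterator* aIter, paramType* aResult)
  {
    return ReadParam(aMsg, aIter, &aResult->x) &&
           ReadParam(aMsg, aIter, &aResult->y);
  }
};

The problematic cases are the specializations that bypass these helpers and pull a raw pointer/length straight out of the message.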
Can you just use nsSegmentedBuffer?
There's also SegmentedVector, though it isn't a generic data buffer.
(In reply to Bill McCloskey (:billm) from comment #0)
> So I think we could (with a lot of work) rewrite all our ParamTraits functions to handle
> non-contiguous input.

Is it necessary to change any ParamTraits functions? There are only a few low-level accessor Pickle functions, and the only callers are a handful of low-level IPC functions. I think ParamTraits implementations go through the various ReadFoo() methods, so I think that once those are set up everything should work. I could be wrong, though.
(In reply to Andrew McCreight [:mccr8] from comment #3)
> (In reply to Bill McCloskey (:billm) from comment #0)
> > So I think we could (with a lot of work) rewrite all our ParamTraits functions to handle
> > non-contiguous input.
> 
> Is it necessary to change any ParamTraits functions? There are only a few
> low-level accessor Pickle functions, and the only callers are a handful of
> low-level IPC functions. I think ParamTraits implementations go through the
> various ReadFoo() methods, so I think that once those are set up everything
> should work. I could be wrong, though.

Most things should Just Work, but we definitely have a couple ParamTraits that require contiguous input:

http://mxr.mozilla.org/mozilla-central/source/ipc/glue/IPCMessageUtils.h#288 (and the nsAString equivalent)
http://mxr.mozilla.org/mozilla-central/source/ipc/glue/IPCMessageUtils.h#412 (for POD data, essentially)

I don't know how widespread WriteBytes and ReadBytes usage is, but it's something to consider.

Another thing I've thought about doing, for IPDL methods that take ns{,C}String, is constructing nsDependent{,C}Strings in the generated code, instead of eagerly copying out of the message buffer.  This change would also depend on contiguous memory in IPC::Message.  In the best case, this would save us from making copies...but it'd also be a lot of work, and browsing through just a few cases yesterday, it seems like most ns{,C}String arguments are stashed away and would require a copy anyway.  So I'm not sure how useful this would be, and it doesn't need to be a blocker.
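
Roughly, the generated code would go from copying into an owned string to wrapping the in-message bytes; a minimal sketch, assuming a contiguous buffer and a hypothetical ReadStringView helper (the msg__/iter__ names just mirror the generated-code style):

const char* ptr;
uint32_t len;
// Hypothetical accessor returning a pointer/length into the message buffer
// instead of copying; only valid while the Message (and its contiguous
// storage) stays alive.
if (!msg__->ReadStringView(&iter__, &ptr, &len)) {
  return false;
}
nsDependentCString arg(ptr, len);  // borrows the bytes, no copy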
(In reply to Nathan Froyd [:froydnj] from comment #4)
> I don't know how widespread WriteBytes and ReadBytes usage is, but it's
> something to consider.

There aren't a ton of callers outside of core IPC code: https://dxr.mozilla.org/mozilla-central/search?q=%2Bcallers%3A%22Pickle%3A%3AWriteBytes%28const+void+%2A%2C+int%2C+uint32_t%29%22

But you are right, I guess there are at least a few.

> Another thing I've thought about doing, for IPDL methods that take
> ns{,C}String, is constructing nsDependent{,C}Strings in the generated code,
> instead of eagerly copying out of the message buffer.  This change would
> also depend on contiguous memory in IPC::Message.

Yeah, I filed bug 1262841 earlier today along these lines. To do it right you'd want to encapsulate the data in something besides a string, to give greater flexibility in the underlying representation, but that could end up requiring a lot of changes.
(In reply to Andrew McCreight [:mccr8] from comment #5)
> But you are right, I guess there are at least a few.

Though I don't see why ReadBytes is incompatible with non-contiguous IPC::Message. You could just split it across blocks.
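
A rough sketch of what a block-aware read could look like (the iterator methods here are made-up names, and the ReadBytesInto that eventually landed differs in detail):

// Needs <algorithm> for std::min and <cstring> for memcpy.
bool ReadBytesInto(BlockIterator* aIter, char* aOut, size_t aLength)
{
  while (aLength) {
    // Copy whatever is left in the current block, then hop to the next one.
    size_t chunk = std::min(aLength, aIter->RemainingInBlock());
    if (chunk == 0) {
      return false;  // ran off the end of the message
    }
    memcpy(aOut, aIter->Data(), chunk);
    aIter->Advance(chunk);
    aOut += chunk;
    aLength -= chunk;
  }
  return true;
}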
Whiteboard: btpp-fixlater
I looked at this a little bit to scope how hard it might be to change the public API of Pickle() to be compatible with discontinuity.

Two methods in Pickle() directly expose a contiguous data block to outside users, and will have to be fixed:
1. data(), which is used in two places.
a) First, in Histogram::SerializeHistogramInfo(), which turns histogram info into a std::string, but that method seems to be unused, so it could be removed (along with DeserializeHistogramInfo()).
b) Second, it is used in ChannelImpl::ProcessOutgoingMessages(), to actually get the raw block of data to pass into sendmsg. This system call takes an array of iovec, so I think it could be made to pass in something using the same underlying storage as the segmented Pickle data structure. However, the Windows version of this method calls WriteFile(), and from a quick glance at the docs that doesn't seem to support passing in more than one block at a time, so that will be harder to fix up.
2. BeginWriteData(int). This looks to be unused, except for TestActorPunning.cpp. Hopefully the test could be rewritten.

The other problem with the API is that the void* iterator used for the Read*() methods is pointer-sized, but I think we will need it to be at least two pointers (block + offset). I guess the Read methods could be changed to do a new() of some iterator data structure, and then EndRead() could delete() the iterator, and no API changes would be needed. I don't know offhand how strong the invariant is that EndRead() gets called for every read, but at least our IPDL-generated code seems to do it.

Then of course there is all of the work to actually fix the Pickle() and Message() implementation to actually use a segmented buffer.
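
For the iterator point, the read cursor would grow from a single void* to something like this (field names made up):

// Hypothetical replacement for the old void* read cursor: which block we are
// in, plus how far into that block we have read.
struct SegmentedReadIterator
{
  size_t mSegment;  // index into the Pickle's list of blocks
  size_t mOffset;   // byte offset within that block
};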
(In reply to Andrew McCreight [:mccr8] from comment #7)
> The other problem with the API is that the void* iterator used for the
> Read*() methods is pointer-sized, but I think we will need it to be at least
> two pointers (block + offset). I guess the Read methods could be changed to
> do a new() of some iterator data structure, and then EndRead() could
> delete() the iterator, and no API changes would be needed. I don't know
> offhand how strong the invariant is that EndRead() gets called for every
> read, but at least our IPDL-generated code seems to do it.

If we were going to do this, it would be splendid if we could change the API here to actually be typed so we didn't have to check for a null iterator every. single. time. we read a data element.  (Though I get that said change is more work than retaining the void* interface we have now.)
Also, looks like WriteFileGather might do what we want on Windows:

https://msdn.microsoft.com/en-us/library/windows/desktop/aa365749(v=vs.85).aspx

Though that only works asynchronously, and I'm assuming the WriteFile calls we have are synchronous ones...
I'm probably going to start working on this soon.
Assignee: nobody → wmccloskey
Great!
tracking-e10s: --- → ?
Just out of curiosity, has this come up in the Chromium impl? I'm surprised FF has so much trouble with memory fragmentation and large message sizes...
The bulk of our giant messages are from message manager, which they don't have to deal with. They may also be better about appropriately using shared memory to move large pieces of data.
Attached patch void** -> PickleIterator* (obsolete) — Splinter Review
Here's the first part of this. I used PickleIterator* rather than PickleIterator& to make it obvious that the iterator is being modified and to keep consistency with how the code used to work. I could go the other way though.
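
For example, at a call site the pointer form makes the mutation explicit (illustrative snippet; the reader name follows the existing Pickle API):

PickleIterator iter(msg);
uint32_t len;
if (!msg.ReadUInt32(&iter, &len)) {  // '&iter' makes it obvious that iter advances
  return false;
}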
Attachment #8743670 - Flags: review?(nfroyd)
Comment on attachment 8743670 [details] [diff] [review]
void** -> PickleIterator*

Review of attachment 8743670 [details] [diff] [review]:
-----------------------------------------------------------------

This is great, thank you!
Attachment #8743670 - Flags: review?(nfroyd) → review+
(In reply to Bill McCloskey (:billm) from comment #14)
> Here's the first part of this. I used PickleIterator* rather than
> PickleIterator& to make it obvious that the iterator is being modified and
> to keep consistency with how the code used to work. I could go the other way
> though.

I think PickleIterator& would be a little nicer; seeing as how we only ever iterate over things in generated code, I'm not sure the visual '&' hint is that helpful.  rs+ if you want to make that change.
This isn't quite ready yet, but I want to post some stuff to make it easier to sync up with Kan-Ru in bug 1264642.
Attached patch BufferList patch (obsolete) — Splinter Review
This data structure is intended to be used to store the data for a Pickle as well as for structured clone data. The reason I didn't use SegmentedVector is because we need to be able to do the following:

1. Read an IPC message into a segmented buffer.
2. Create another segmented buffer that references the part of the message holding structured clone data.
3. Pass this inner segmented buffer to the JS engine to transform into JS objects.

It's important that step 2 should not just copy the data into a new data structure. Copying would be very costly since structured clone data is often multiple megabytes. Instead it should borrow it. That's how our IPC code works now (except that everything is contiguous so it's a lot easier). For non-contiguous data, we need a segmented data structure that (1) doesn't necessarily own its data, and (2) may hold buffers of varying sizes. That's what BufferList is.

I'm asking for feedback on this early since it will be shared with bug 1264642. If it looks okay, then Kan-Ru can rebase his changes there on it.

Note that the FlattenBytes function won't be needed once we have bug 1264642. Its only purpose is to allow our current structured clone code to get a contiguous view into the message.
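
To make the borrow-don't-copy point concrete, here is a tiny self-contained sketch of the shape of a non-owning segmented view (this is not the BufferList API itself, just the idea):

#include <cstddef>
#include <vector>

// A segment that merely points at bytes owned by someone else.
struct SegmentView {
  const char* data;
  size_t size;
};

// A non-owning "buffer list": cheap to construct, copies nothing, but the
// owner of the underlying storage must outlive it.
struct BorrowedBuffers {
  std::vector<SegmentView> segments;
  size_t totalSize = 0;

  void Append(const char* aData, size_t aSize) {
    segments.push_back(SegmentView{aData, aSize});
    totalSize += aSize;
  }
};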
Attachment #8751534 - Flags: feedback?(nfroyd)
Comment on attachment 8751113 [details] [diff] [review]
Store pickle data in a vector of buffers

Review of attachment 8751113 [details] [diff] [review]:
-----------------------------------------------------------------

Just two comments I saw while skimming this.

::: ipc/chromium/src/base/pickle.cc
@@ +469,3 @@
>    if (padding) {
> +    MOZ_RELEASE_ASSERT(padding <= 8);
> +    char padding_data[8] = {

|static const|, please.

@@ +485,5 @@
>    // memory.
> +  uint32_t padding = AlignInt(length) - length;
> +  if (padding) {
> +    MOZ_RELEASE_ASSERT(padding <= 4);
> +    char padding_data[4] = {

|static const|, please.
Whiteboard: btpp-fixlater → btpp-active
Comment on attachment 8751534 [details] [diff] [review]
BufferList patch

Review of attachment 8751534 [details] [diff] [review]:
-----------------------------------------------------------------

Canceling feedback, not because I don't think it's useful, but because I haven't really looked at the uses to see whether this is the correct API or not.

This thing could really, really use some tests.

I assume this has to be in MFBT because the JS structured clone stuff is going to need to know about BufferLists?  If that's not the case, I'd prefer that we keep BufferList in ipc/glue/ or similar.  I guess we can't use nsSegmentedBuffer because of the ownership constraint (and because nsSegmentedBuffer is kinda clunky...).

::: mfbt/AllocPolicy.h
@@ +11,5 @@
>  
>  #ifndef mozilla_AllocPolicy_h
>  #define mozilla_AllocPolicy_h
>  
> +#include "mozilla/Attributes.h"

This is dead code from an earlier experiment?

::: mfbt/BufferList.h
@@ +36,5 @@
> +     : mData(aData),
> +       mSize(aSize),
> +       mCapacity(aCapacity)
> +    {
> +    }

If you're going to use copy and move methods for Segment, please make them explicit by |= default|'ing them.  (Or it might make sense to have the move variants null out mData and zero out mSize and mCapacity.)

@@ +56,5 @@
> +  // (N % Align == A % Align) if Align == 2, 4, or 8.
> +  static const size_t kSegmentAlignment = 8;
> +
> +  // Allocate a BufferList. If aOwnership == OWNING, then the BufferList will
> +  // free all its buffers when it is destroyed. An initial buffer of size

This constraint worries me a little bit, because it's not obvious to me that a NONOWNING user can easily free buffers.  I guess you're supposed to do something like:

Iter iter...

while (iter.HasMore()) {
  char* segmentBuffer = iter.Data();
  iter.Advance(list, iter.RemainingInSegment(list));
  free(segmentBuffer); // XXX should be using |list|'s AllocPolicy?
}

?  If so, what's the point of having non-owning lists?  I guess you would accumulate them, FlattenBytes, and then pass the resulting buffer to something else?  That doesn't seem to address the memory issues.

Or perhaps you'd accumulate them, and then pass the buffers into a message-passing or networking stack, which would take care of the free'ing?

Also, in passing, it seems un-ergonomic to me to have to constantly pass |list| into the iterator's methods.

@@ +78,5 @@
> +  }
> +
> +  // Copies a BufferList. If aOther owns its buffers, then all the buffers will
> +  // be copied to new buffers. If it doesn't, then the data will be borrowed.
> +  BufferList(const BufferList& aOther)

Given problems that we've had with memory allocations and copying in the IPC stack, do we even want to support copy operations?

@@ +100,5 @@
> +     mSegments(Move(aOther.mSegments)),
> +     mSize(aOther.mSize),
> +     mStandardCapacity(aOther.mStandardCapacity)
> +  {
> +    aOther.mSize = 0;

I think aOther.Clear() might be more appropriate, though that would require being careful about buffer ownership.  A little easier to keep invariants straight if the sum of the sizes of all the segments in mSegments == mSize.

@@ +106,5 @@
> +
> +  // Assignment operator. Ownership works similar to the copy constructor.
> +  BufferList& operator=(const BufferList& aOther)
> +  {
> +    mSize = aOther.mSize;

Presumably you want to copy over the capacity as well?  What if the capacities of the two lists don't match (and therefore the capacities of the segments you just copied into this list), is that going to cause problems elsewhere?

@@ +115,5 @@
> +        memcpy(data, segment.mData, segment.mSize);
> +      }
> +    } else {
> +      MOZ_RELEASE_ASSERT(mSegments.reserve(aOther.mSegments.length()));
> +      mSegments.infallibleAppend(aOther.mSegments.begin(), aOther.mSegments.end());

This isn't really assignment so much as appending to what's already there, isn't it?  We should either change assignment to do actual assignment, or we should |= delete| assignment and have an |Append()| method.

What happens when the owning-ness of the lists doesn't match?  MOZ_RELEASE_ASSERT?

@@ +122,5 @@
> +  }
> +
> +  BufferList& operator=(BufferList&& aOther)
> +  {
> +    mSegments = Move(aOther.mSegments);

This is an actual assignment...but what about freeing buffers if we're OWNING?  And what about non-matching-ness of mOwning?

@@ +123,5 @@
> +
> +  BufferList& operator=(BufferList&& aOther)
> +  {
> +    mSegments = Move(aOther.mSegments);
> +    mSize = aOther.mSize;

What about assigning the capacity?

@@ +124,5 @@
> +  BufferList& operator=(BufferList&& aOther)
> +  {
> +    mSegments = Move(aOther.mSegments);
> +    mSize = aOther.mSize;
> +    aOther.mSize = 0;

aOther.Clear() here too, with similar concerns about buffer ownership, as in the move constructor.

@@ +169,5 @@
> +    // Returns true if the memory in the range [Data(), Data() + aBytes) is all
> +    // part of one contiguous buffer.
> +    bool HasRoomFor(const BufferList& aBuffers, size_t aBytes) const
> +    {
> +      MOZ_RELEASE_ASSERT(mSegment < aBuffers.mSegments.length());

I applaud the aggressive use of MOZ_RELEASE_ASSERT in this code.

@@ +177,5 @@
> +      MOZ_RELEASE_ASSERT(mData < segment.End() || Done(aBuffers));
> +
> +      const char* end_of_region = mData + aBytes;
> +      // Watch out for overflow in pointer calculation, which wraps.
> +      return (mData <= end_of_region) && (end_of_region <= segment.End());

Maybe we should just do this in CheckedInt<uintptr_t> arithmetic?  It took me a little while to realize that the first check was the overflow check.  Or make the IterImpl store the offset in the current segment, and then we could reason about things in integer arithmetic and wouldn't have pointer arithmetic gotchas (Chromium has switched to integer checks in their Pickle implementation rather than pointers for this reason, I think.)

@@ +199,5 @@
> +    // end of a buffer, it will be moved to the beginning of the next buffer
> +    // unless it is the last buffer.
> +    void Advance(const BufferList& aBuffers, size_t aBytes)
> +    {
> +      MOZ_ASSERT(HasRoomFor(aBuffers, aBytes));

I think this should be MOZ_RELEASE_ASSERT.  Otherwise...there will be problems.

@@ +203,5 @@
> +      MOZ_ASSERT(HasRoomFor(aBuffers, aBytes));
> +
> +      const Segment& segment = aBuffers.mSegments[mSegment];
> +      char* updated = mData + aBytes;
> +      if (updated >= segment.End() && mSegment + 1 < aBuffers.mSegments.length()) {

I think you're technically not allowed to make this pointer check, because you're only allowed to look at the end of an allocated block of memory, not some unspecified amount beyond it.  And what would happen if we tried to advance by aBytes that was more than the amount remaining when we're on the last segment?  Wouldn't we...

@@ +209,5 @@
> +        mSegment++;
> +        const Segment& nextSegment = aBuffers.mSegments[mSegment];
> +        updated = nextSegment.Start();
> +      }
> +      mData = updated;

...be storing bogus data here?  I guess we might assert somewhere else, but if somebody decided to downgrade to MOZ_ASSERT instead of MOZ_RELEASE_ASSERT without auditing all the rest of the code...

@@ +219,5 @@
> +    bool AdvanceAcrossSegments(const BufferList& aBuffers, size_t aBytes)
> +    {
> +      size_t bytes = aBytes;
> +      while (bytes) {
> +        size_t toAdvance = std::min(bytes, RemainingInSegment(aBuffers));

These std::min and similar need an <algorithm> include somewhere in this file.

@@ +352,5 @@
> +
> +  Segment* p = &mSegments[startIter.mSegment + 1];
> +  Segment* end = p + fullSegmentCount;
> +  for (Segment* iter = p; iter < end; iter++) {
> +    this->free_(iter->mData);

This isn't valid if the list is non-owning, is it?

@@ +361,5 @@
> +  p = &mSegments[startIter.mSegment + 1];
> +  MOZ_RELEASE_ASSERT(mSegments.insert(p, Segment(buffer, aSize, aSize)));
> +
> +  Segment& firstSegment = mSegments[startIter.mSegment];
> +  firstSegment.mSize -= startIter.RemainingInSegment(*this);

I think a few guiding comments in this method, and particularly in this segment/data shuffling, would be most welcome.
Attachment #8751534 - Flags: feedback?(nfroyd)
Comment on attachment 8751534 [details] [diff] [review]
BufferList patch

Review of attachment 8751534 [details] [diff] [review]:
-----------------------------------------------------------------

::: mfbt/BufferList.h
@@ +64,5 @@
> +  BufferList(Ownership aOwnership,
> +             size_t aInitialSize,
> +             size_t aInitialCapacity,
> +             size_t aStandardCapacity)
> +   : mOwning(aOwnership == Ownership::OWNING),

Note that in JS the AllocPolicy's constructor needs an argument. The argument is a cx used to report out-of-memory errors.

For example current mfbt::Vector has this constructor:

  explicit Vector(AllocPolicy = AllocPolicy());

  template<typename T, size_t N, class AP>
  Vector<T, N, AP>::Vector(AP aAP)
    : AP(aAP)
  { }

and js::TempAllocPolicy has this

  MOZ_IMPLICIT TempAllocPolicy(JSContext* cx) : cx_((ContextFriendFields*) cx) {}
  MOZ_IMPLICIT TempAllocPolicy(ContextFriendFields* cx) : cx_(cx) {}


BufferList will need to support this. It also needs to support borrowing from a BufferList with a different AllocPolicy; otherwise IPC can't share data with JS.
Attached patch Sentinel checking (obsolete) — Splinter Review
This patch isn't strictly related to this bug, but it's really nice for debugging refactorings like this. It changes IPDL to add a bunch of constants in between things like function parameters and struct members. The constant chosen is based on a hash of the name of the parameter (or struct member, etc.).

I'm on the fence about whether we should turn this on only for nightlies or for everyone. Given the overhead of IPC, I doubt it has much effect on performance. But it does make messages bigger, which is already a problem for us. Anyway, it's on by default for now.

Here's a sample of the generated output:

auto PContentParent::SendPBrowserConstructor(...) {
    ...
    IPC::Message* msg__ = PContent::Msg_PBrowserConstructor(MSG_ROUTING_CONTROL);
    Write(actor, msg__, false);
    // Sentinel = 'actor'
    (msg__)->WriteSentinel(875202478);
    Write(tabId, msg__);
    // Sentinel = 'tabId'
    (msg__)->WriteSentinel(3419081923);
    Write(context, msg__);
    // Sentinel = 'context'
    (msg__)->WriteSentinel(3514529014);
    Write(chromeFlags, msg__);
    // Sentinel = 'chromeFlags'
    (msg__)->WriteSentinel(299515804);
    Write(cpId, msg__);
    // Sentinel = 'cpId'
    (msg__)->WriteSentinel(2452595622);
    Write(isForApp, msg__);
    // Sentinel = 'isForApp'
    (msg__)->WriteSentinel(2024231582);
    Write(isForBrowser, msg__);
    // Sentinel = 'isForBrowser'
    (msg__)->WriteSentinel(2051565587);
    ...
}
Attachment #8751111 - Attachment is obsolete: true
Attachment #8753095 - Flags: review?(nfroyd)
This patch removes the pickle methods that provide direct access to the message buffer. There are a few cases where we're going to use memcpy where we weren't before, but it doesn't seem like a huge deal.

I also added the FlattenBytes function, which will make the pickle buffer contiguous and return a pointer. We'll be able to eliminate this function with bug 1264642, but I'd like to keep the two bugs independent. FlattenBytes is just needed for some structured clone stuff.

Finally, I was able to remove some unused code from histogram.cc that used pickling.

This patch doesn't actually make the message buffer non-contiguous. It's just setup work for that.
Attachment #8751112 - Attachment is obsolete: true
Attachment #8753099 - Flags: review?(nfroyd)
Attached patch BufferList patch (obsolete) — Splinter Review
Here's an updated patch for BufferList. I'll respond to your previous comments in a separate comment.
Attachment #8751534 - Attachment is obsolete: true
Attachment #8753101 - Flags: review?(nfroyd)
This is sort of the main patch. It changes Pickle to use BufferList and modifies the IPC channel code accordingly. The channel code actually gets a little less crazy I think.

A few things to watch out for:
1. I don't really understand why the code for Copier takes a void** rather than a void* or char*. I hope I haven't broken anything by changing it.

2. I changed a bunch of size types from int to uint32_t. This seems clearer to me. I didn't use size_t because chromium code really seems to prefer uint32_t. I was kind of worried about overflow behavior, but I checked everything over and it seems okay to me.

3. I still haven't converted the Windows channel code. This is harder than I realized since WriteFileGather probably isn't going to work. It's not clear that it works on pipes, and it requires the buffers to be 4K aligned. So we'll just do multiple WriteFile calls, although this adds some complexity. I'd rather work on it while getting some feedback on the rest of the code.
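
A rough sketch of the multiple-WriteFile idea from point 3 (synchronous and simplified; the real channel code is overlapped/asynchronous, and the iterator method names are only approximations of the BufferList iterator):

// Walk the message's segments and issue one WriteFile per contiguous chunk.
while (!iter.Done()) {
  const char* chunk = iter.Data();
  DWORD chunkSize = static_cast<DWORD>(iter.RemainingInSegment());
  DWORD written = 0;
  if (!WriteFile(pipe, chunk, chunkSize, &written, nullptr)) {
    return false;  // real code must handle ERROR_IO_PENDING and partial writes
  }
  iter.Advance(written);
}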
Attachment #8751113 - Attachment is obsolete: true
Attachment #8753104 - Flags: review?(nfroyd)
Comment on attachment 8751109 [details] [diff] [review]
Remove unused TrimWriteData

Review of attachment 8751109 [details] [diff] [review]:
-----------------------------------------------------------------

::: ipc/chromium/src/base/pickle.h
@@ +175,5 @@
>    // Returns NULL on failure.
>    //
>    // The returned pointer will only be valid until the next write operation
>    // on this Pickle.
>    char* BeginWriteData(int length);

Can you remove BeginWriteData as well?  A followup is fine.  It looks like it's only used in tests, and very dodgily at that.
Attachment #8751109 - Flags: review?(nfroyd) → review+
Attached patch Windows fixesSplinter Review
Here's the patch for Windows support. It turned out to be very similar to posix. I may look at unifying this code a bit more too. It's crazy how much duplication there is.
Attachment #8754514 - Flags: review?(nfroyd)
Comment on attachment 8753095 [details] [diff] [review]
Sentinel checking

Review of attachment 8753095 [details] [diff] [review]:
-----------------------------------------------------------------

I don't know when to turn this on.  In pathological cases, this change is going to double message size...but in non-pathological cases (even passing one string), I don't have a good feel for what the overhead is here.  Maybe 5-20%?  I'm inclined to say we should turn it on for non-release builds (and all the time for debug builds?), and if we get any interesting data, we can see about turning it on for release.

f+ only because I'd like to see what the conditionalizing for non-release builds looks like.

::: ipc/ipdl/ipdl/lower.py
@@ +51,5 @@
>  ## Helper code
>  ##
>  
> +def hashfunc(value):
> +    h = hash(value) % 2**32

Is this mod really necessary?  |hash| is just defined to "return an integer" and AFAICT from the Python manual, "integer" typically means "32-bit integer" (aka "plain integer").  There are "long integers" which the language reference classifies (unhelpfully) as "integers", but the library reference says "plain integers" are abbreviated as "integers".  So I feel pretty good about saying |hash| returns a 32-bit integer here.   We could assert that if you like...or have you seen cases where it returns long integers?

@@ +4739,5 @@
>          if this:  read = ExprSelect(this, '->', read.name)
>          return self.maybeAddNullabilityArg(
>              ipdltype, ExprCall(read, args=[ expr, from_, iterexpr ]))
>  
> +    def checkedWrite(self, ipdltype, expr, msgvar, sentinelKey, this=None):

Please add an:

  assert sentinelKey

here.

@@ +5400,5 @@
> +        block = Block()
> +        block.addstmt(ifbad)
> +
> +        if sentinel:
> +            block.addstmt(Whitespace('// Sentinel = ' + repr(sentinelKey) + '\n', indent=1))

Please add:

  assert sentinelKey

at the start of this Python block so something reasonable happens if somebody makes a mistake.

@@ +5405,5 @@
> +            read = ExprCall(ExprSelect(msgexpr, '->', 'ReadSentinel'),
> +                                  args=[ iterexpr, ExprLiteral.Int(hashfunc(sentinelKey)) ])
> +            ifsentinel = StmtIf(ExprNot(read))
> +            ifsentinel.addifstmts(errfn('Error deserializing \'' + paramtype + '\''))
> +            block.addstmt(ifsentinel)

WDYT about trying to get more information in this message (which I think can get passed into crash reports)?
Attachment #8753095 - Flags: review?(nfroyd) → feedback+
Sorry, I forgot to respond to this earlier even though I said I would.

(In reply to Nathan Froyd [:froydnj] from comment #25)
> Comment on attachment 8751534 [details] [diff] [review]
> BufferList patch
> 
> Review of attachment 8751534 [details] [diff] [review]:
> -----------------------------------------------------------------
> 
> I assume this has to be in MFBT because the JS structured clone stuff is
> going to need to know about BufferLists?  If that's not the case, I'd prefer
> that we keep BufferList in ipc/glue/ or similar.  I guess we can't use
> nsSegmentedBuffer because of the ownership constraint (and because
> nsSegmentedBuffer is kinda clunky...).

Yes, it needs to be accessible from the JS engine so that's why I put it in mfbt.

> ::: mfbt/AllocPolicy.h
> @@ +11,5 @@
> >  
> >  #ifndef mozilla_AllocPolicy_h
> >  #define mozilla_AllocPolicy_h
> >  
> > +#include "mozilla/Attributes.h"
> 
> This is dead code from an earlier experiment?

I happened to notice that AllocPolicy.h uses MOZ_MUST_USE, so I added this include so we IWYU.

> @@ +56,5 @@
> > +  // (N % Align == A % Align) if Align == 2, 4, or 8.
> > +  static const size_t kSegmentAlignment = 8;
> > +
> > +  // Allocate a BufferList. If aOwnership == OWNING, then the BufferList will
> > +  // free all its buffers when it is destroyed. An initial buffer of size
> 
> This constraint worries me a little bit, because it's not obvious to me that
> a NONOWNING user can easily free buffers.  I guess you're supposed to do
> something like:

I added a Borrow method that I expect will be used in the structured clone bug. Hopefully this makes the non-owning use case clearer. Basically, for every non-owning BufferList, there should be an owning BufferList that outlives it.

> Also, in passing, it seems un-ergonomic to me to have to constantly pass
> |list| into the iterator's methods.

I eliminated some of this, but it's sort of necessary. The alternative is to put a pointer to the BufferList in the iterator, but then the BufferList can't move around without invalidating iterators. That seems like a footgun. SegmentedVector handles this by using a linked list of segments rather than an array (so it has next/prev pointers). But then I would have to dynamically allocate each BufferList::Segment, which seems a little inefficient.

> @@ +122,5 @@
> > +  }
> > +
> > +  BufferList& operator=(BufferList&& aOther)
> > +  {
> > +    mSegments = Move(aOther.mSegments);
> 
> This is an actual assignment...but what about freeing buffers if we're
> OWNING?  And what about non-matching-ness of mOwning?

I added a Clear() at the top here to fix this. I also ended up copying mOwning since that made sense for the test. Not sure that's the right way to go though. We could just do an assert that they're the same and change the test somehow.

> @@ +123,5 @@
> > +
> > +  BufferList& operator=(BufferList&& aOther)
> > +  {
> > +    mSegments = Move(aOther.mSegments);
> > +    mSize = aOther.mSize;
> 
> What about assigning the capacity?

BufferList doesn't save any kind of total capacity. Only segments have a capacity.

> @@ +177,5 @@
> > +      MOZ_RELEASE_ASSERT(mData < segment.End() || Done(aBuffers));
> > +
> > +      const char* end_of_region = mData + aBytes;
> > +      // Watch out for overflow in pointer calculation, which wraps.
> > +      return (mData <= end_of_region) && (end_of_region <= segment.End());
> 
> Maybe we should just do this in CheckedInt<uintptr_t> arithmetic?  It took
> me a little while to realize that the first check was the overflow check. 
> Or make the IterImpl store the offset in the current segment, and then we
> could reason about things in integer arithmetic and wouldn't have pointer
> arithmetic gotchas (Chromium has switched to integer checks in their Pickle
> implementation rather than pointers for this reason, I think.)

I added an mDataEnd pointer to iterators which simplified a lot of stuff. Hopefully it addresses your concerns here.

> @@ +352,5 @@
> > +
> > +  Segment* p = &mSegments[startIter.mSegment + 1];
> > +  Segment* end = p + fullSegmentCount;
> > +  for (Segment* iter = p; iter < end; iter++) {
> > +    this->free_(iter->mData);
> 
> This isn't valid if the list is non-owning, is it?

FlattenBytes is only valid on owning BufferLists. I added an assert.
Comment on attachment 8753099 [details] [diff] [review]
ReadBytes elimination

Review of attachment 8753099 [details] [diff] [review]:
-----------------------------------------------------------------

I think this makes sense.  I'm a little concerned about how FlattenBytes is going to play out, though, see below.

::: dom/ipc/StructuredCloneData.cpp
@@ +98,5 @@
>    WriteParam(aMsg, DataLength());
>  
>    if (DataLength()) {
>      // Structured clone data must be 64-bit aligned.
> +    aMsg->WriteBytes(Data(), DataLength());

What happened to the 64-bit alignment constraint here?  Is this just a case of the structured clone data needing to be 64-bit aligned when it's actual data, but when we stuff it into an IPC buffer for transfer, we don't care about its alignment?  If so, please remove the comment above this.

::: ipc/chromium/src/base/pickle.cc
@@ +442,3 @@
>      return false;
> +  }
> +  memcpy(data, outp, length);

I think this function makes so much more sense than the hodgepodge of memcpy and reinterpret_cast we had scattered about before.

::: ipc/glue/IPCMessageUtils.h
@@ +304,4 @@
>      }
> +    aResult->SetLength(length);
> +
> +    return aMsg->ReadBytesInto(aIter, aResult->BeginWriting(), length);

Does this null-terminate correctly?  I guess SetLength() above should null-terminate, so we're just reading into a buffer, and nsACString will manage the termination for us.

@@ +731,5 @@
> +    const char** buffer =
> +      const_cast<const char**>(reinterpret_cast<char**>(&aResult->data));
> +    // Structured clone data must be 64-bit aligned.
> +    if (!aMsg->FlattenBytes(aIter, buffer, aResult->dataLength,
> +                            sizeof(uint64_t))) {

Just to make sure I understand what's going on here: this currently returns a pointer into the Message's buffer, but at some future point may have to return something allocated?  That sounds like a bit of a mess?
Attachment #8753099 - Flags: review?(nfroyd) → review+
Comment on attachment 8753101 [details] [diff] [review]
BufferList patch

Review of attachment 8753101 [details] [diff] [review]:
-----------------------------------------------------------------

I was rebasing my patch onto BufferList and noticed some issues.

::: mfbt/BufferList.h
@@ +237,5 @@
> +  IterImpl Iter() const { return IterImpl(*this); }
> +
> +  // Copies aSize bytes from aData into the BufferList. The storage for these
> +  // bytes may be split across multiple buffers. Size() is increased by aSize.
> +  inline void WriteBytes(const char* aData, size_t aSize);

This is fallible if the AllocPolicy is fallible, so it should return bool to indicate that. Probably a good idea to add an infallible version like in other containers.

@@ +268,5 @@
> +
> +  void* AllocateSegment(size_t aSize, size_t aCapacity)
> +  {
> +    char* data = this->template pod_malloc<char>(aCapacity);
> +    MOZ_RELEASE_ASSERT(mSegments.append(Segment(data, aSize, aCapacity)));

malloc and Vector::append is fallible if AllocPolicy is fallible. I think here and later the MOZ_RELEASE_ASSERT should return false instead.

@@ +274,5 @@
> +    return data;
> +  }
> +
> +  bool mOwning;
> +  Vector<Segment> mSegments;

Use same AllocPolicy for Vector.

@@ +285,5 @@
> +BufferList<AllocPolicy>::WriteBytes(const char* aData, size_t aSize)
> +{
> +  MOZ_RELEASE_ASSERT(mStandardCapacity);
> +
> +  Segment& lastSegment = mSegments.back();

It will crash here if mSegments is empty.

@@ +297,5 @@
> +  size_t remaining = aSize - toCopy;
> +  while (remaining) {
> +    toCopy = std::min(remaining, mStandardCapacity);
> +
> +    void* data = AllocateSegment(toCopy, mStandardCapacity);

This can be fallible.
(In reply to Nathan Froyd [:froydnj] from comment #31)
> Comment on attachment 8751109 [details] [diff] [review]
> Remove unused TrimWriteData
> 
> Review of attachment 8751109 [details] [diff] [review]:
> -----------------------------------------------------------------
> 
> ::: ipc/chromium/src/base/pickle.h
> @@ +175,5 @@
> >    // Returns NULL on failure.
> >    //
> >    // The returned pointer will only be valid until the next write operation
> >    // on this Pickle.
> >    char* BeginWriteData(int length);
> 
> Can you remove BeginWriteData as well?  A followup is fine.  It looks like
> it's only used in tests, and very dodgily at that.

This gets removed later in the series.

(In reply to Nathan Froyd [:froydnj] from comment #33)
> ::: ipc/ipdl/ipdl/lower.py
> @@ +51,5 @@
> >  ## Helper code
> >  ##
> >  
> > +def hashfunc(value):
> > +    h = hash(value) % 2**32
> 
> Is this mod really necessary?  |hash| is just defined to "return an integer"
> and AFAICT from the Python manual, "integer" typically means "32-bit
> integer" (aka "plain integer").

hash(2**33) == 2**33

Not sure that means we need this code, but it seems prudent.

> @@ +5405,5 @@
> > +            read = ExprCall(ExprSelect(msgexpr, '->', 'ReadSentinel'),
> > +                                  args=[ iterexpr, ExprLiteral.Int(hashfunc(sentinelKey)) ])
> > +            ifsentinel = StmtIf(ExprNot(read))
> > +            ifsentinel.addifstmts(errfn('Error deserializing \'' + paramtype + '\''))
> > +            block.addstmt(ifsentinel)
> 
> WDYT about trying to get more information in this message (which I think can
> get passed into crash reports)?

I think the stack will tell us what we need to know. I can't think of any other information to include. I guess an annotation would allow us to aggregate reports better, but I'm not sure that's too important.

(In reply to Nathan Froyd [:froydnj] from comment #35)
> ::: dom/ipc/StructuredCloneData.cpp
> @@ +98,5 @@
> >    WriteParam(aMsg, DataLength());
> >  
> >    if (DataLength()) {
> >      // Structured clone data must be 64-bit aligned.
> > +    aMsg->WriteBytes(Data(), DataLength());
> 
> What happened to the 64-bit alignment constraint here?  Is this just a case
> of the structured clone data needing to be 64-bit aligned when it's actual
> data, but when we stuff it into an IPC buffer for transfer, we don't care
> about its alignment?  If so, please remove the comment above this.

Yes, we only care that the final buffer is 64-bit aligned. Since we're copying anyway, it doesn't matter whether the bytes in the message are aligned. Note that this is different for SerializedStructuredCloneBuffer, where we do SC directly out of the message.

> ::: ipc/glue/IPCMessageUtils.h
> @@ +304,4 @@
> >      }
> > +    aResult->SetLength(length);
> > +
> > +    return aMsg->ReadBytesInto(aIter, aResult->BeginWriting(), length);
> 
> Does this null-terminate correctly?  I guess SetLength() above should
> null-terminate, so we're just reading into a buffer, and nsACString will
> manage the termination for us.

It looks like we null-terminate in SetCapacity, which is called from SetLength:
http://searchfox.org/mozilla-central/rev/5cbce51a75755bd073d2e1604b2f1ecb70573e91/xpcom/string/nsTSubstring.cpp#694

> @@ +731,5 @@
> > +    const char** buffer =
> > +      const_cast<const char**>(reinterpret_cast<char**>(&aResult->data));
> > +    // Structured clone data must be 64-bit aligned.
> > +    if (!aMsg->FlattenBytes(aIter, buffer, aResult->dataLength,
> > +                            sizeof(uint64_t))) {
> 
> Just to make sure I understand what's going on here: this currently returns
> a pointer into the Message's buffer, but at some future point may have to
> return something allocated?  That sounds like a bit of a mess?

Yes. FlattenBytes isn't very nice. In a later patch in the series, it makes the entire message buffer contiguous if it wasn't before (essentially negating the benefits of this patch). It only exists so that this code can work until bug 1264642 lands.
Made the sentinel checking apply to non-release builds only.
Attachment #8753095 - Attachment is obsolete: true
Attachment #8755649 - Flags: review?(nfroyd)
Attached patch BufferList patch, v2 (obsolete) — Splinter Review
This fixes the issues with fallible allocation that Kan-Ru pointed out.
Attachment #8753101 - Attachment is obsolete: true
Attachment #8753101 - Flags: review?(nfroyd)
Attachment #8755650 - Flags: review?(nfroyd)
Comment on attachment 8755649 [details] [diff] [review]
Sentinel checking, v2

Review of attachment 8755649 [details] [diff] [review]:
-----------------------------------------------------------------

::: ipc/chromium/src/base/pickle.cc
@@ +19,5 @@
>  #include "nsDebug.h"
>  
> +#if !defined(RELEASE_BUILD) || defined(DEBUG)
> +#define SENTINEL_CHECKING
> +#endif

This is more elegant than a mass of ifdeffery being generated by the IPDL compiler.
Attachment #8755649 - Flags: review?(nfroyd) → review+
Comment on attachment 8755650 [details] [diff] [review]
BufferList patch, v2

Review of attachment 8755650 [details] [diff] [review]:
-----------------------------------------------------------------

Apologies for the delay in reviewing this.  I think my biggest concern, API-wise, is how Borrow is supposed to be used, especially since the pickle data patch doesn't appear to use Borrow?  Should we delete Borrow()--and ownership handling--entirely, or was that an oversight in the pickle data patch?

::: mfbt/BufferList.h
@@ +68,5 @@
> +  // offset invariant is not violated.
> +  static const size_t kSegmentAlignment = 8;
> +
> +  // Allocate a BufferList. If aOwnership == OWNING, then the BufferList will
> +  // free all its buffers when it is destroyed. An initial buffer of size

Is it possible that we could dispense with aOwnership, and the only way to created a non-owning BufferList would be through Borrow()?  (Not sure if the borrowed list should have a different type or not...)

@@ +253,5 @@
> +  // This method requires aIter and aSize to be 8-byte aligned.
> +  inline bool FlattenBytes(IterImpl& aIter, const char** aOutData, size_t aSize);
> +
> +  template<typename BorrowingAllocPolicy>
> +  inline BufferList<BorrowingAllocPolicy> Borrow(IterImpl& aIter, size_t aSize, bool* aSuccess,

This method needs some documentation.  Please remove the |inline| keyword.

@@ +261,5 @@
> +  explicit BufferList(AllocPolicy aAP)
> +   : AllocPolicy(aAP),
> +     mOwning(false),
> +     mSize(0),
> +     mStandardCapacity(0)

This bit seems not so good for borrowed buffer lists; mStandardCapacity being zero is going to cause problems if we try to append a segment, for instance.  This suggests that we should either:

1. Have a separate type for borrowed lists that only permits read operations.
2. MOZ_RELEASE_ASSERT on borrowed lists when we're trying to modify the list.

(These options assume that we remove the option to construct non-owning lists outside of borrowing.)

@@ +420,5 @@
> +    aIter.Advance(*this, toAdvance);
> +    size -= toAdvance;
> +  }
> +
> +  result.mSize = aSize;

We are assigning |result.mSize| here and above the while loop...is that an oversight, or did you mean to assign something else here?

::: mfbt/tests/TestBufferList.cpp
@@ +16,5 @@
> +  template <typename T>
> +  T* pod_malloc(size_t aNumElems)
> +  {
> +    if (aNumElems & mozilla::tl::MulOverflowMask<sizeof(T)>::value) {
> +      MOZ_CRASH("TestSegmentedVector.cpp: overflow");

Too much cut-and-paste in this method, here and below.

@@ +39,5 @@
> +// MOZ_MUST_USE. But we're using an infallible alloc policy, and so
> +// don't really need to check the result. Casting to |void| works with clang
> +// but not GCC, so we instead use this dummy variable which works with both
> +// compilers.
> +//static int gDummy;

This is an excellent comment.  But the variable appears to be unused, so can we delete the variable instead?

@@ +44,5 @@
> +
> +// This tests basic segmented vector construction and iteration.
> +void TestBasics()
> +{
> +  BufferList bl(BufferList::Ownership::OWNING, 16, 24, 32);

I think it would help readability of this test if 16, 24, and 32 were defined as |const size_t| variables and then those variables were used in appropriate places below.
Attachment #8755650 - Flags: review?(nfroyd) → feedback+
Comment on attachment 8753104 [details] [diff] [review]
Store pickle data in a vector of buffers

Review of attachment 8753104 [details] [diff] [review]:
-----------------------------------------------------------------

I think all of this makes sense.

::: ipc/chromium/src/base/pickle.cc
@@ +172,2 @@
>  
>    iter->CopyFrom(result);

As I read through this patch, I'm realizing that it's really odd to have the "inefficient" cases call |ReadBytesInto| but the efficient cases call |CopyFrom|.  The |Into| vs. |From| distinction seems confusing; WDYT about renaming |CopyFrom| to |CopyInto|?

@@ +366,5 @@
>    if (!ReadLength(iter, &len))
>      return false;
> +
> +  auto chars = mozilla::MakeUnique<char[]>(len);
> +  if (!ReadBytesInto(iter, chars.get(), len)) {

I think it's possible we could make this and the wstring case more efficient, but do we even use these functions anywhere?

::: ipc/chromium/src/chrome/common/ipc_channel_posix.cc
@@ +622,5 @@
> +      size_t size = iter.RemainingInSegment();
> +
> +      // Don't add more than kMaxIOVecSize to the iovec so that we avoid
> +      // OS-dependent limits.
> +      if (iov_count < kMaxIOVecSize) {

The case where our iterator has more than kMaxIOVecSize segments just works, since we're inside a loop here, correct?

::: ipc/chromium/src/chrome/common/ipc_channel_posix.h
@@ +83,5 @@
>  
>    // Indicates whether we're currently blocked waiting for a write to complete.
>    bool is_blocked_on_write_;
>  
> +  mozilla::Maybe<Pickle::BufferList::IterImpl> partial_write_iter_;

This could use a comment similar to the one for message_send_bytes_written_.  It might also be good to explain how the buffer being iterated over doesn't disappear while this is still live?

@@ +124,5 @@
>    // Channel::kReadBufferSize bytes. We add kControlBufferSlopBytes bytes
>    // for the control header.
>    char input_cmsg_buf_[Channel::kReadBufferSize + kControlBufferSlopBytes];
>  
> +  mozilla::Maybe<Message> incoming_message_;

Keep the comment?
Attachment #8753104 - Flags: review?(nfroyd) → review+
Comment on attachment 8754514 [details] [diff] [review]
Windows fixes

Review of attachment 8754514 [details] [diff] [review]:
-----------------------------------------------------------------

::: ipc/chromium/src/chrome/common/ipc_channel_win.h
@@ +87,4 @@
>    // We read from the pipe into this buffer
>    char input_buf_[Channel::kReadBufferSize];
> +  size_t input_buf_offset_;
> +  mozilla::Maybe<Message> incoming_message_;

Same comments for incoming_message_ and partial_write_iter_ from the posix patch apply here as well.
Attachment #8754514 - Flags: review?(nfroyd) → review+
Attachment #8755650 - Attachment is obsolete: true
Attachment #8756121 - Flags: review?(nfroyd)
(In reply to Nathan Froyd [:froydnj] from comment #41)
> Comment on attachment 8755650 [details] [diff] [review]
> BufferList patch, v2
> 
> Review of attachment 8755650 [details] [diff] [review]:
> -----------------------------------------------------------------
> 
> Apologies for the delay in reviewing this.  I think my biggest concern,
> API-wise, is how Borrow is supposed to be used, especially since the pickle
> data patch doesn't appear to use Borrow?  Should we delete Borrow()--and
> ownership handling--entirely, or was that an oversight in the pickle data
> patch?

Borrow is only for use in bug 1264642. I wasn't going to put it in this bug, but then I thought it would help to show how the ownership stuff is expected to be used.

> ::: mfbt/BufferList.h
> @@ +68,5 @@
> > +  // offset invariant is not violated.
> > +  static const size_t kSegmentAlignment = 8;
> > +
> > +  // Allocate a BufferList. If aOwnership == OWNING, then the BufferList will
> > +  // free all its buffers when it is destroyed. An initial buffer of size
> 
> Is it possible that we could dispense with aOwnership, and the only way to
> created a non-owning BufferList would be through Borrow()?  (Not sure if the
> borrowed list should have a different type or not...)

Yes, the ownership param to the constructor can be removed. I don't think it makes sense to use a separate type for borrowing though. Most of the complexity in the class is for iterating, and the borrowed type would also need that. The only public methods that can't be used on a borrowed BufferList are WriteBytes and FlattenBytes--and FlattenBytes is going to be removed in bug 1264642.

In addition, structured cloning does need to use WriteBytes (or something like it). It's just that it won't be used on a borrowed BufferList. That is, it's a dynamic rather than static invariant. So I think an assertion makes the most sense.
> I think it's possible we could make this and the wstring case more efficient, but do we even
> use these functions anywhere?

I think we use this in some Windows-specific code when creating a new channel:
http://searchfox.org/mozilla-central/rev/c416322cd4079ab9ef6b30203fedae11cf39fdc5/ipc/glue/Transport_win.h#22

It's pretty uncommon, though, so it didn't seem worth optimizing.
Comment on attachment 8756121 [details] [diff] [review]
BufferList patch, v3

Review of attachment 8756121 [details] [diff] [review]:
-----------------------------------------------------------------

Looks great, thank you!
Attachment #8756121 - Flags: review?(nfroyd) → review+
Bill, I want to rebase my patches on top of this bug, but your current patches do not cleanly apply to trunk. Could you share a branch that I can pull from, or point to the changeset you are basing them on?
Flags: needinfo?(wmccloskey)
See Also: → 1274075
See Also: 1274075
Here's a try push.
https://treeherder.mozilla.org/#/jobs?repo=try&revision=f340f638b5b2
I'll be landing this tomorrow assuming it's green.
Flags: needinfo?(wmccloskey)
Depends on: 1276590
Comment on attachment 8753104 [details] [diff] [review]
Store pickle data in a vector of buffers

I would like to backport all these patches to Aurora. We think they will reduce the rate of OOM crashes on e10s. Talos shows a small perf regression on Linux x64 that needs to be investigated, but I think the benefit here outweighs the cost even if we accept the perf regression.

Approval Request Comment
[Feature/regressing bug #]: none
[User impact if declined]: more OOM crashes
[Describe test coverage new/current, TreeHerder]: on m-c for several days
[Risks and why]: Some risk due to code churn, but it promises to reduce OOM crashes, which are one of our biggest problems in e10s.
[String/UUID change made/needed]: none
Attachment #8753104 - Flags: approval-mozilla-aurora?
Bill, have we been able to see the effect of this patch on Nightly? I am a bit concerned about taking a big patch just before the aurora => beta merge.
Flags: needinfo?(wmccloskey)
It is a little hard to see the effects of this patch on Nightly. We have very few OOM | large crashes there (I'd guess due to having many more 64-bit users), and the only IPC-related signature I can see in the top 300 over the last couple of weeks was last reported a few days before this patch landed.
I think we probably can see an effect on Nightly. If I search for "OOM | large" going back to May 10, the #1 and #4 signatures are IPC-related and they no longer happen (i.e., all crashes are with build IDs before my patch landed):

https://crash-stats.mozilla.com/search/?product=Firefox&version=49.0a1&signature=~OOM%20%7C%20large&date=%3E%3D2016-05-10&_facets=signature&_columns=date&_columns=signature&_columns=product&_columns=version&_columns=build_id&_columns=platform#facet-signature

Andrew is right that the effect on Nightly will be much less than on beta due to differences in the populations. But Nightly did improve, I think.
Flags: needinfo?(wmccloskey)
Looks like we need to update this to a beta uplift request.
Flags: needinfo?(wmccloskey)
Comment on attachment 8753104 [details] [diff] [review]
Store pickle data in a vector of buffers

see comment 52.
Flags: needinfo?(wmccloskey)
Attachment #8753104 - Flags: approval-mozilla-aurora? → approval-mozilla-beta?
Comment on attachment 8753104 [details] [diff] [review]
Store pickle data in a vector of buffers

OK, thanks for the proof. Taking it then.
Should be in 48 beta 2.
Attachment #8753104 - Flags: approval-mozilla-beta? → approval-mozilla-beta+
This has problems applying to beta, e.g.:
merging ipc/glue/ProtocolUtils.cpp
merging ipc/glue/ProtocolUtils.h
merging ipc/ipdl/ipdl/lower.py
merging ipc/ipdl/test/cxx/TestActorPunning.cpp
merging layout/base/nsLayoutUtils.cpp
merging netwerk/ipc/NeckoMessageUtils.h
merging netwerk/protocol/http/PHttpChannelParams.h
merging widget/nsGUIEventIPC.h
warning: conflicts while merging gfx/ipc/GfxMessageUtils.h! (edit, then use 'hg resolve --mark')
warning: conflicts while merging ipc/chromium/src/base/pickle.h! (edit, then use 'hg resolve --mark')
warning: conflicts while merging ipc/glue/ProtocolUtils.h! (edit, then use 'hg resolve --mark')
abort: unresolved conflicts, can't continue
(use 'hg resolve' and 'hg graft --continue')
Flags: needinfo?(wmccloskey)
I talked this over with Brad and we decided not to backport. It's a big patch and the rebasing is a lot of work. Seems risky.
Flags: needinfo?(wmccloskey)
Comment on attachment 8753104 [details] [diff] [review]
Store pickle data in a vector of buffers

updating the flags accordingly
Attachment #8753104 - Flags: approval-mozilla-beta+ → approval-mozilla-beta-
Depends on: 1296642
Depends on: 1051567
Depends on: 1326422
Depends on: 1348591