Bug 510691: bound maximum successful allocation to avoid ptrdiff_t overflow
Status: Open (NEW)
Opened 15 years ago · Updated 2 years ago
Categories: Core :: Memory Allocator, defect
Reporter: luke; Unassigned
(Background) The C++ standard allows pointer subtraction to overflow: if p and q are pointers, (p - q) might not fit in ptrdiff_t when p and q are far enough apart. This makes for a very unexpected error (e.g., bug 510319, even though it occurred on OS X, which doesn't use jemalloc). Technically, the Standard says we are only supposed to subtract pointers that point into the same underlying segment of memory, so one way to ensure that this doesn't happen in the wild is to bound the largest allocation size malloc will return. If ptrdiff_t is a 32-bit signed integer, that means failing on allocation byte sizes >= 2^31, and mutatis mutandis for 64-bit systems.
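A minimal sketch of the proposed bound (the wrapper name is hypothetical, not jemalloc's actual code): refuse any request large enough that subtracting two pointers into the returned block could overflow ptrdiff_t.

```c
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical bounded allocator: PTRDIFF_MAX is 2^31 - 1 when
 * ptrdiff_t is 32 bits and 2^63 - 1 when it is 64 bits, so this
 * fails exactly the sizes the bug proposes rejecting. */
static void *bounded_malloc(size_t size) {
    if (size > (size_t)PTRDIFF_MAX)
        return NULL;  /* fail rather than permit p - q to overflow */
    return malloc(size);
}
```

With this check in one place, any two pointers into a single successful allocation are close enough that their difference fits in ptrdiff_t.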
Comment 1 • 15 years ago
Or maybe we want to use PRUptrdiff instead of ptrdiff_t?
Comment 2 (Reporter) • 15 years ago
Well, unfortunately, ptrdiff_t is defined by the language as the result of subtracting two pointers, so we don't get a say in the matter. Even if we wrote "(PRUptrdiff)(a - b)", it would be too late: the overflow has already occurred by the time of the cast.
Comment 3 (Reporter) • 15 years ago
One more thing on the subject: bounding the size of the maximum alloc fixes the problem in one place, whereas careful pointer-subtraction overflow checking requires vigilance everywhere.
Comment 4 • 15 years ago
What bound is proposed? INT_MAX?
Comment 5 • 15 years ago
er, MAX_INT?
Comment 6 • 15 years ago
ptrdiff_t aside, this may help with general integer overflows. A typical pattern that escapes review is r = malloc(C * length), where C is usually a small constant. As for the choice of what counts as "big", IMO it should be less than 2^31 bytes (2 GB).
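The malloc(C * length) pattern above can be hardened with an overflow check on the multiplication itself; a sketch with a hypothetical helper name:

```c
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical helper for the malloc(C * length) pattern: verify that
 * count * elem_size cannot wrap around size_t before allocating. */
static void *malloc_array(size_t count, size_t elem_size) {
    if (elem_size != 0 && count > SIZE_MAX / elem_size)
        return NULL;  /* the product would overflow size_t */
    return malloc(count * elem_size);
}
```

This is the same check calloc is required to perform on its two arguments.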
Comment 7 (Reporter) • 15 years ago
On some 64-bit platforms (MSVC, I think) int stays 32 bits while only pointers and long long are 64 bits, so INT_MAX is overly conservative as a bound. With C++ you could use std::numeric_limits<ptrdiff_t>::max(), but IIRC jemalloc is C, so perhaps there is an equivalent symbolic constant for C?
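For what it's worth, C99's <stdint.h> does provide such a constant: PTRDIFF_MAX. A one-line sketch (the variable name here is illustrative):

```c
#include <stddef.h>
#include <stdint.h>

/* PTRDIFF_MAX (C99, <stdint.h>) is the C counterpart of C++'s
 * std::numeric_limits<ptrdiff_t>::max(): 2^31 - 1 when ptrdiff_t
 * is 32 bits wide, 2^63 - 1 when it is 64 bits wide. */
static const size_t max_alloc_bound = (size_t)PTRDIFF_MAX;
```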
Comment 8 • 15 years ago
Capping allocations at 1 << 31 is not overly conservative: we already do it in NS_Alloc, http://mxr.mozilla.org/mozilla-central/source/xpcom/base/nsMemoryImpl.cpp#312. If jemalloc can't depend on PR_INT32_MAX, then just add an equivalent define...
Comment 9 • 15 years ago
But we shouldn't be doing this in jemalloc, which only gives us defense in certain circumstances: it sounds like we should instead be doing this in cjones's new infallible allocator.
Comment 10 (Reporter) • 15 years ago
Will all malloc calls be routed through the new infallible allocator (including JS)?
Comment 11 • 15 years ago
> Will all malloc calls be routed through the new infallible allocator
Besides malloc/realloc/alloca, |new []| should be restricted too.
Updated • 2 years ago
Severity: normal → S3