From https://blogs.msdn.microsoft.com/vcblog/2016/05/04/new-code-optimizer/ :

> Historically, Visual C++ did not take advantage of the fact that the C and C++ standards consider the result of overflowing signed operations undefined. Other compilers are very aggressive in this regard, which motivated the decision to implement some patterns which take advantage of undefined integer overflow behavior. We implemented the ones we thought were safe and didn't impose any unnecessary security risks in generated code. A new undocumented compiler flag has been added to disable these optimizations, in case an application that is not standard-conformant fails: -d2UndefIntOverflow-. Due to security concerns, we have seen cases where these patterns should not be optimized, even though following the C and C++ standards allows us to by making the potential addition overflow undefined:
>
>     a + Constant  > a -> true   // Constant > 0
>     a + Constant <= a -> false  // Constant > 0
>
> These two tests (and the similar ones with subtraction) are frequently used to check for overflow in places such as file readers and memory allocators. While the use is non-conformant with the standard and a well-known issue, enabling these transformations could potentially break the security of those applications.

(It used to be -d2SSAOptimizerUndefinedIntOverflow, but the new version is -d2UndefIntOverflow-)
Bug 922603 is an example of a bug that this optimization would turn into an exploitable issue.
Try: https://treeherder.mozilla.org/#/jobs?repo=try&revision=594f79555b676bcce6b09b9580c5e7689ef1fa98
Talos: https://treeherder.mozilla.org/perf.html#/compare?originalProject=mozilla-central&newProject=try&newRevision=594f79555b676bcce6b09b9580c5e7689ef1fa98&framework=1&showOnlyImportant=0&selectedTimeRange=172800
:decoder, does it make sense that we would be killing performance on WebGL and "Desktop Firefox UI animation speed and smoothness" with this? If the very large regressions are accurate, I'd guess we'd need to land it behind --enable-hardening. I'd like to find someone who could tell us whether we should be suspicious of these test results and whether we should try different/more tests - not sure who can do that (jmaher is out and he's my usual go-to).
(In reply to Tom Ritter [:tjr] from comment #4)
> :decoder, does it make sense that we would be killing performance on WebGL
> and "Desktop Firefox UI animation speed and smoothness" with this?

It somewhat makes sense, because most of the (signed) integer overflows I've ever seen (e.g. during our CI) are in layout or gfx. I don't know exactly how the code gets so much faster by assuming these don't happen, because I don't think they are explicitly checked for. It would totally make sense if something like -fwrapv were used instead, because that really does make all of these computations slower, but I guess that's not the case with this flag? Or does MSVC default to calculating the result the way -fwrapv does, and simply stops doing so without this flag? That theory would at least explain the huge performance difference.
2 failures in 901 pushes (0.002 failures/push) were associated with this bug in the last 7 days.

Repository breakdown:
* autoland: 2

Platform breakdown:
* macosx64-stylo: 2

For more details, see:
https://brasstacks.mozilla.com/orangefactor/?display=Bug&bugid=1377550&startday=2017-08-07&endday=2017-08-13&tree=all
I spoke with the author of this compiler flag. He said:

> There are only a few small optimizations done right now in the SSA Optimizer that rely on undefined int overflow.
> The "a + Constant > a = true" style of optimizations that security devs are usually worried about are not done unless it's certain that "a + Constant" would not overflow - this is based on the Bit Estimator described in my blog post.
>
> The patterns handled right now are:
>
>     (a * C1) / C2  -> a * (C1 / C2), if C1 is a multiple of C2
>     -a / C         -> a / -C, if -C doesn't overflow (not MIN_INT)
>     a + 1 > b      -> a >= b
>     (a * 2^C) >> C -> a
>
> The -d2UndefIntOverflow- flag disables all those patterns. There is still the loop optimizer that sometimes takes advantage of signed induction variables, but it has done that for 20 years and basically every compiler does it.
>
> Until now there were no bug reports involving these optimizations from within Microsoft or from external customers.
> There are plans to handle some more similar patterns for MUL/DIV instructions in the future, but nothing that would obviously break the security of the programs.