Major parts of the web depend on shallow comparison of objects in hot code loops; virtual DOM implementations are a modern example. We should test-drive a native Object.shallowEqual API in collaboration with the ecosystem, iterate, and eventually get it on the standards track.

Existing implementations:
- https://github.com/dashed/shallowequal (npm: 168,733 downloads in the last month)
- https://github.com/hughsk/shallow-equals (npm: 10,693 downloads in the last month)
- https://github.com/facebook/fbjs/blob/master/src/core/shallowEqual.js (used in React)
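For reference, the linked userland libraries all implement roughly the same idea. A minimal sketch (illustrative only, not a spec): compare own enumerable properties by identity.

```javascript
// Minimal userland sketch of a shallow-equality check, comparable to the
// linked npm packages. Names and exact semantics here are assumptions, not
// part of any proposal text.
function shallowEqual(a, b) {
  // Identical references (or identical primitives) are trivially equal.
  if (Object.is(a, b)) return true;
  // Anything that is not a non-null object can only be equal by identity.
  if (typeof a !== 'object' || a === null ||
      typeof b !== 'object' || b === null) {
    return false;
  }
  const keysA = Object.keys(a);
  const keysB = Object.keys(b);
  if (keysA.length !== keysB.length) return false;
  // Every own enumerable key of `a` must exist on `b` with an identical value.
  return keysA.every(
    key => Object.prototype.hasOwnProperty.call(b, key) &&
           Object.is(a[key], b[key])
  );
}
```

Note that this compares property values one level deep only: two distinct nested objects with equal contents are considered unequal.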
platform-rel: --- → ?
(In reply to Harald Kirschner :digitarald from comment #0) > We should test-drive a native Object.shallowEqual API in collaboration with > the ecosystem, iterate and eventually get it on standards track. Can we work on the standardization part first, to at least see what people think? If others don't like it we want to know it upfront, so we don't spend a lot of time on a feature and then find out we're the only engine stuck with it because Firefox/add-ons/etc depend on it...
What jandem said. Implementing then standardizing is exactly backward from how we usually do things, at least as far as first passes go -- first there's a draft spec, then we implement, then both sides iterate to a final state.

As far as the actual API goes, what power does this function provide over something pages and scripts can simply provide for themselves, with whatever bells and whistles they desire? It's typically better to focus on things that scripts can't do themselves, because different users will have different desires for how an API like this should look and work.

I'd expect we would end up self-hosting an algorithm here, so it's not clear to me how we'd beat out a pure JS implementation a script might provide -- and indeed, we might even be *slower* once you consider all the edge cases that totally-specified algorithms must handle, that a user-provided script can perhaps just skate past and get away with.
This is a case where I expect that the implementation details will need to drive the semantics. This is only useful if it can be implemented as a fast operation. Ideally a single `memcmp`, but those semantics might not be viable. Having implementation feedback and performance numbers early on will be critical for the standards process. Note that expanding this to function closures that share a single context would also be incredibly useful, but might see more opposition.
> find out we're the only engine stuck with it because Firefox/add-ons/etc depend on it... The trial would work with a whitelist of sites that agree to feature-detect and work without the API if we decide not to move forward with it.
The use case in React, and similar algorithms in other libraries, is to rely on shallow compare as a memoization heuristic - not as a critical semantic. Specifically in tree structures. I created a small gist in an earlier discussion. Feel free to ignore the function closure part for now. https://gist.github.com/sebmarkbage/1b474f3e2d8bc99df1210b93822ffe8f Often the memoization will be invalidated because the input has changed. Therefore doing many slow comparisons can make things unnecessarily slow when you're really just recomputing most things anyway. However, if you never use memoization anywhere then recomputing the entire result is even more expensive.
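The memoization pattern described above can be sketched as follows. This is a hypothetical illustration of the heuristic, not React's actual code (the linked gist has Sebastian's version): recompute only when the new input is not shallow-equal to the last-seen input.

```javascript
// Shallow comparison of own enumerable properties by identity (a simplified
// sketch; a production version would also guard against mismatched keys
// holding `undefined`).
function shallowEqual(a, b) {
  if (Object.is(a, b)) return true;
  if (typeof a !== 'object' || a === null ||
      typeof b !== 'object' || b === null) {
    return false;
  }
  const keysA = Object.keys(a);
  const keysB = Object.keys(b);
  return keysA.length === keysB.length &&
         keysA.every(key => Object.is(a[key], b[key]));
}

// Hypothetical memoizer using shallow equality as the invalidation heuristic:
// reuse the previous result when the input looks unchanged one level deep.
function memoizeShallow(compute) {
  let lastInput;
  let lastResult;
  let hasResult = false;
  return function (input) {
    if (hasResult && shallowEqual(lastInput, input)) {
      return lastResult; // cache hit: skip the expensive recomputation
    }
    lastInput = input;
    lastResult = compute(input);
    hasResult = true;
    return lastResult;
  };
}
```

The tradeoff the comment describes falls out directly: when inputs change on most calls, every call pays for both the comparison and the recomputation, so a slow comparison hurts; when inputs are often unchanged, the comparison is what makes skipping recomputation possible at all.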
Whiteboard: [platform-rel-ReactJS] → [platform-rel-Frameworks][platform-rel-Facebook][platform-rel-ReactJS]
Hello :h4writer. :digitarald mentioned that you may be able to provide some insight here?
ni? Naveed on how this fits with the new WebVM team structure.
Flags: needinfo?(efaustbmo) → needinfo?(nihsanullah)
The concerns raised in comment 1 and 3 still stand. The normal procedure is to standardize a feature before we officially implement it. It is still possible to hack something together (forked / behind a pref / ...), but in that case I assume we won't be able to drive it ourselves. We are currently working on ES6 (performance) / bug fixes / ES7 / WebAssembly / ..., which also have high priority. I think the intention was to talk about all this at the all-hands to get a better idea of how we can help and how important this is. It might be good to reach out to :shu about this.
I agree with the team that some attempt at standardization/feedback should probably lead any implementation. Shu's and Jason's opinion would be useful here.
I wanted to make another case for running this as a joint experiment between FB and us: even with a simple implementation, we can use Hasal to test the performance impact on Facebook. After putting the API behind a pref and riding it up to Beta for testing, we would only need 2 weeks to collect enough performance data. I reckon it would not even have to reach release.
Recapping the discussion that happened at the July 2016 TC39 meeting re: shallowEquals below:

There were many legitimate security concerns about the intended memcmp-backed shallowEquals. Such an implementation exposes details that are hard to work around:
- String implementation internal to a VM (e.g., flat vs. rope, interning)
- GC implementation details (e.g., a moving GC changing the result of shallowEquals)
- Private state
- Captured variables in closures
- WeakMap implementation

The consensus reached was for the various VMs to implement an actual shallow equality check (that is, something like checking for identity of own properties) and work on optimizing that, instead of opening up the can of worms that is memcmp-guided shallowEquals. I agreed with the consensus, and I believe Sebastian did as well. I am still in favor of the more principled approach suggested by the committee.
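One possible reading of the committee-suggested semantics, sketched below as an assumption rather than spec text: two objects are shallow-equal when they have the same own property keys (including symbols) and identical values for each key. Unlike a memcmp, this never observes string representation, interning, or GC state, since string values compare by content regardless of internal layout.

```javascript
// Sketch of "identity of own properties" shallow equality, per the committee
// discussion above. The name and exact key-handling are assumptions.
function ownPropertyShallowEqual(a, b) {
  if (Object.is(a, b)) return true;
  if (typeof a !== 'object' || a === null ||
      typeof b !== 'object' || b === null) {
    return false;
  }
  // Reflect.ownKeys covers both string and symbol own keys.
  const keysA = Reflect.ownKeys(a);
  const keysB = Reflect.ownKeys(b);
  if (keysA.length !== keysB.length) return false;
  return keysA.every(key =>
    Object.prototype.hasOwnProperty.call(b, key) &&
    Object.is(a[key], b[key]));
}

// Equal string values compare equal even if a VM stores them differently
// internally (flat vs. rope, interned vs. not) -- nothing here can tell.
```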
platform-rel: + → -
Priority: -- → P3