Closed Bug 1254392 Opened 4 years ago Closed 3 years ago

Refresh device DB after some time interval

Categories

(DevTools :: Responsive Design Mode, defect, P1)


Tracking

(firefox48 affected, firefox49 fixed, firefox50 verified)

VERIFIED FIXED
Firefox 49
Iteration:
49.3 - Jun 6

People

(Reporter: jryans, Assigned: bigben)

References

Details

(Whiteboard: [multiviewport] [mvp-rdm])

Attachments

(1 file, 3 obsolete files)

Bug 1241714 adds a device selector.

Currently the device DB is only ever loaded once and never refreshed.  We need to refresh it after some time has elapsed.
Flags: qe-verify+
Priority: -- → P2
Whiteboard: [multiviewport][triage] → [multiviewport] [mvp-rdm]
Priority: P2 → P3
QA Contact: mihai.boldan
Whiteboard: [multiviewport] [mvp-rdm] → [multiviewport] [reserve-rdm]
ni? to add more detail here.
Flags: needinfo?(jryans)
The responsive.html tool has a module it uses to manage the device list[1].  This in turn calls to a devices module shared by all tools[2].  This shared devices module then calls a general `getJSON` utility to load a JSON file with the devices from a CDN and cache it into a pref[3].

`getJSON`'s current behavior defaults to checking the pref it cached data into.  If that pref exists, it returns that and does not check the CDN.  However, it does have a `bypassCache` argument you can pass to always check with the CDN.

Originally, I was going to suggest changing `shared/devices.js` to save a timestamp of the last time it checked for new data, so we could set some policy, like only checking every N days.  However, it would be nicer if we could just always make a request and let HTTP caching rules handle this for us.

Unfortunately, Firefox by default sends Cache-Control: no-cache with XHR requests.  If we can change `getJSON` to allow HTTP caching and then always request from higher levels, I think that would be ideal.

I'll keep the ni? for now to check on more details of caching.

[1]: https://dxr.mozilla.org/mozilla-central/source/devtools/client/responsive.html/devices.js
[2]: https://dxr.mozilla.org/mozilla-central/source/devtools/client/shared/devices.js
[3]: https://dxr.mozilla.org/mozilla-central/source/devtools/client/shared/getjson.js
Ah, I was tricked by the DevTools when testing this. :) (We default to disabling HTTP cache when the toolbox is open.  To do a proper test of caching from the toolbox, you need to go to the toolbox options / gear icon and uncheck "Disable HTTP cache".)

So just making the request should follow HTTP cache rules like we want, which is great!

I think the best plan then would be to make a more general change to `getJSON`.  It should always make a network request.  Every time the request succeeds, continue storing the response in the pref as an extra backup in case it falls out of the HTTP cache and we are offline.

If the network request fails, try to read a value out of the pref first and use that if possible, otherwise we give up.

With these changes, the `bypassCache` parameter can be removed since we're always checking the network / HTTP cache now.

Also, it would be nice to test the RDM UI in the case that this fails, since I'm not sure if we handle that well right now.
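The flow described above could be sketched roughly like this. This is a minimal model of the control flow only, not the real implementation: `fetchRemote`, `cache`, and `key` are hypothetical stand-ins for the XHR request and the pref used by the actual module.

```javascript
// Sketch of the proposed getJSON behavior: network first, cached copy as a
// fallback. In the real module, fetchRemote would be an XHR (subject to HTTP
// cache rules) and cache would be backed by a pref.
async function getJSON(fetchRemote, cache, key) {
  try {
    // Always make the network request.
    const json = await fetchRemote();
    // Store the response as an extra backup in case we are offline later.
    cache[key] = json;
    return json;
  } catch (e) {
    // Network failed: fall back to the cached copy, if we have one.
    if (key in cache) {
      return cache[key];
    }
    // No cached copy either, so give up and propagate the failure.
    throw e;
  }
}
```

With this shape, the `bypassCache` argument disappears entirely, since every call goes through the same network-then-fallback path.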
Flags: needinfo?(jryans)
As discussed last week Ryan, I would like to take this bug.

I'm working on a patch to improve the behavior of getJSON, but I noticed WebIDE uses this function as well [1]. Are we sure that this update won't have negative side effects?

By the way, the device list is quite long, and writing it to the preferences seems expensive in terms of performance. ("Warning: attempting to write 18663 bytes to preference devtools.devices.url_cache. This is bad for general performance and memory usage. Such an amount of data should rather be written to an external file. This preference will not be sent to any content processes"). Perhaps we should store the response as rarely as possible.

[1]: https://dxr.mozilla.org/mozilla-central/source/devtools/client/webide/content/webide.js#21
Flags: needinfo?(jryans)
Assignee: nobody → bchabod
Status: NEW → ASSIGNED
(In reply to Benoit Chabod [:bigben] from comment #4)
> I'm working on a patch to improve the behavior of getJSON, but I noticed
> WebIDE uses this function as well [1]. Are we sure that this update won't
> have negative side effects?

It should be fine for WebIDE as well.  For WebIDE's usages, I believe it always passes true for `bypassCache` today, so your work shouldn't affect WebIDE.

> By the way, the device list is quite long, and writing it to the preferences
> seems expensive in terms of performance. ("Warning: attempting to write
> 18663 bytes to preference devtools.devices.url_cache. This is bad for
> general performance and memory usage. Such an amount of data should rather
> be written to an external file. This preference will not be sent to any
> content processes"). Perhaps we should store the response the least often
> possible.

Ah right, thanks for reminding me of this.  We should change `getJSON` to stop storing this data in a pref, since, as the warning mentions, prefs are not intended for large data like this.  Let's change to the async-storage module[1], which uses IndexedDB internally to store its data.  Then we won't need to worry about the size.

As a cleanup step for people who already have the pref set, let's clear the existing pref if it exists when calling `getJSON`.

[1]: https://dxr.mozilla.org/mozilla-central/source/devtools/shared/async-storage.js
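The migration step later settled on in review (carry the pref value over, then clear the pref) could look roughly like this. The mock `prefs` and `storage` objects stand in for Services.prefs and the async-storage module, and `migrateCachePref` is a hypothetical helper name, not the actual code.

```javascript
// Sketch of the one-time pref-to-asyncStorage migration. prefs models
// Services.prefs (has/get/delete) and storage models asyncStorage (setItem
// returning a promise).
async function migrateCachePref(prefs, storage, prefName) {
  const cacheKey = prefName + "_cache";
  if (!prefs.has(cacheKey)) {
    return false; // Nothing to migrate.
  }
  // Carry the cached value over so existing users keep their offline copy...
  const cached = prefs.get(cacheKey);
  await storage.setItem(cacheKey, JSON.parse(cached));
  // ...then clear the pref, since large values don't belong there.
  prefs.delete(cacheKey);
  return true;
}
```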
Flags: needinfo?(jryans)
Here's a first patch!
The main changes are in devtools/client/shared/getjson.js.
Now it always makes a network request, and the bypassCache parameter has been removed here and at all `getJSON` call sites.
The full device list is stored in IDB with asyncStorage module instead of preferences.
That local storage version is updated every time the network request succeeds without HTTP cache.

I've improved the logs in case of failure, but modules using this function should handle any potential promise rejection themselves.
I also removed the file from .eslintignore as it now complies with the linter rules.
Attachment #8756235 - Flags: review?(jryans)
Comment on attachment 8756235 [details] [diff] [review]
1st patch - change getJSON behaviour and move caches to local storage

Review of attachment 8756235 [details] [diff] [review]:
-----------------------------------------------------------------

Your patch doesn't seem to apply for me; I think there were some ESLint changes to the same file in another patch.  Could you rebase it and request review again?

Have you tried running any tests yet with this change?  It's possible tests could have issues, since they aren't allowed to make requests out to the public internet during test runs.  However, looking at the RDM and WebIDE users of this module, it looks like their tests replace the URL to use with a local file, so perhaps everything will work fine.

There may be impact to tests, so we'll want to make sure to run this on try before landing.  Do you have access to the try server yet[1]?  If not, you should request it and have one of us vouch for you.

[1]: https://wiki.mozilla.org/ReleaseEngineering/TryServer#Getting_access_to_the_Try_Server

::: devtools/client/shared/getjson.js
@@ +17,4 @@
>    let deferred = promise.defer();
>    let xhr = new XMLHttpRequest();
>  
> +  // Clear unused cache pref if it exists

Expand on this comment a bit to say we used to store cached data to pref, but now we are using asyncStorage instead, so this is a migration step for existing users.

@@ +18,5 @@
>    let xhr = new XMLHttpRequest();
>  
> +  // Clear unused cache pref if it exists
> +  if (Services.prefs.prefHasUserValue(prefName + "_cache")) {
> +    Services.prefs.clearUserPref(prefName + "_cache");

To be a bit nicer for migration purposes, we could take the cached value from the pref, store it in asyncStorage, and then delete it from the pref.
Attachment #8756235 - Flags: review?(jryans) → feedback+
Comment on attachment 8756235 [details] [diff] [review]
1st patch - change getJSON behaviour and move caches to local storage

Review of attachment 8756235 [details] [diff] [review]:
-----------------------------------------------------------------

::: devtools/client/shared/getjson.js
@@ +35,4 @@
>      try {
> +      let json = JSON.parse(xhr.responseText);
> +      // Update storage cache only when JSON was modified
> +      if (xhr.status == 200) {

I am not entirely sure I follow the status check here, can you explain your thinking?  You're trying to avoid saving the same data to the cache?

I am not sure we need the check.  In particular, if we are going to pass the response through, it seems like it should go in the cache, so maybe we can simplify and drop the check.  But maybe there's a case I am missing!
You're right, there were recent ESLint changes and Bug 1270994 already fixed getjson.js!
Here's a rebased version of the patch, that you should be able to apply on latest master.

I used this rebase opportunity to update the comments in the code, both for the migration step you asked about and for the main function's usage.

(In reply to J. Ryan Stinnett [:jryans] (use ni?) from comment #7)
> To be a bit nicer for migration purposes, we could take the cached value
> from the pref, store it in asyncStorage, and then delete it from the pref.

However, I'm not sure this is a good idea.
What happens if we decide to update the JSON data structure?
Someone who updates their Firefox but has an old pref cache will end up with stale data in asyncStorage, which might crash the corresponding module.

(In reply to J. Ryan Stinnett [:jryans] (use ni?) from comment #8)
> ::: devtools/client/shared/getjson.js
> @@ +35,4 @@
> >      try {
> > +      let json = JSON.parse(xhr.responseText);
> > +      // Update storage cache only when JSON was modified
> > +      if (xhr.status == 200) {
> 
> I am not entirely sure I follow the status check here, can you explain your
> thinking?  You're trying to avoid saving the same data to the cache?

I chose to check that the HTTP code is 200 because we don't need to write the cached data to asyncStorage every time. For example, if the HTTP code is 304 (Not Modified: the cached copy needed revalidation, but the JSON hasn't changed), there is no point in storing the JSON again. Do you think the check is useless?

By the way, I already have access to the try server, and here's the test benchmark for that patch:
https://treeherder.mozilla.org/#/jobs?repo=try&revision=d0d2ea9c1f38
As you can see, besides the intermittent failures, some WebIDE tests don't pass anymore.
I will try to fix this :/
Attachment #8757284 - Attachment description: Change getJSON behaviour and move caches to local storage → 2nd patch - change getJSON behaviour and move caches to local storage
Attachment #8757284 - Flags: review?(jryans)
Attachment #8757284 - Flags: feedback+
Attachment #8756235 - Attachment is obsolete: true
Comment on attachment 8757284 [details] [diff] [review]
2nd patch - change getJSON behaviour and move caches to local storage

Review of attachment 8757284 [details] [diff] [review]:
-----------------------------------------------------------------

While testing this locally, I noticed the storage inspector, which I wanted to use to check IDB, doesn't work on chrome:// pages (filed bug 1276339).

I don't have a quick answer about the WebIDE tests.  Are you able to reproduce the failures locally?  I can see them here.  It could be that the changes to getJSON have altered the timing of these tests.  :janx worked on these tests, so he should be able to help.

::: devtools/client/shared/getjson.js
@@ +29,5 @@
>  
> +  // We used to store cached data in preferences, but now we use asyncStorage
> +  // Migration step: clear now useless cache pref (if it still exists)
> +  if (Services.prefs.prefHasUserValue(prefName + "_cache")) {
> +    Services.prefs.clearUserPref(prefName + "_cache");

> (In reply to J. Ryan Stinnett [:jryans] (use ni?) from comment #7)
> > To be a bit nicer for migration purposes, we could take the cached value
> > from the pref, store it in asyncStorage, and then delete it from the pref.
> 
> However, I'm not sure this is a good idea.
> What happens is we decide to update the JSON data structure?
> Someone who updates his Firefox but has an old pref cache will end up with
> something wrong in asyncStorage, that might crash the corresponding module.

You're right that we have to watch out for that, but it's independent of where we cache the data locally.  Whether we use a pref or asyncStorage, we need to watch out for changes to the schema in general.

So, it should be safe to assume we're already watching for data changes and the data can be migrated to asyncStorage here.

@@ +46,4 @@
>      try {
> +      let json = JSON.parse(xhr.responseText);
> +      // Update storage cache only when JSON was modified
> +      if (xhr.status == 200) {

> (In reply to J. Ryan Stinnett [:jryans] (use ni?) from comment #8)
> > ::: devtools/client/shared/getjson.js
> > @@ +35,4 @@
> > >      try {
> > > +      let json = JSON.parse(xhr.responseText);
> > > +      // Update storage cache only when JSON was modified
> > > +      if (xhr.status == 200) {
> > 
> > I am not entirely sure I follow the status check here, can you explain your
> > thinking?  You're trying to avoid saving the same data to the cache?
> 
> I chose to check that the HTTP code is 200 because we don't need to write
> the cached data in asyncStorage every time. For example, if the HTTP code is
> 304 (Not Modified = HTTP cache is too old, but the JSON hasn't changed),
> there is no point in storing the JSON again. Do you think it is useless?

Okay, I guess it just reads a bit oddly, since we _always_ pass the response value through to the promise, but then only _sometimes_ store it.

I think we should instead do one of the following:

A. Drop the check so we always store the response value
B. If it's not a 200, ignore the response entirely and read from the cache instead.

It just seems confusing to have a case where we resolve the network value, but then don't update storage.  I know we're saying they should be the same value, but it feels like it adds mental complexity.

I guess I would say just drop the check (A), since there's not really any harm to storing in asyncStorage every time.  It happens in the background too, since we're not waiting on it to complete.

@@ +47,5 @@
> +      let json = JSON.parse(xhr.responseText);
> +      // Update storage cache only when JSON was modified
> +      if (xhr.status == 200) {
> +        asyncStorage.setItem(prefName + "_cache", json).catch(function (e) {
> +          // Could not update storage cache, but we can continue

Let's at least log an error here in case it happens.  I believe `catch(e => console.error(e))` should work.
Attachment #8757284 - Flags: review?(jryans)
Okay, I've managed to fix the WebIDE tests issue.
The problem was in webide/test/test_simulators.html, which called asynchronous functions in webide/content/simulator.js with a potential race condition.
There is a test that checks if the value of a DOM element has been updated [1], but this update might happen too late and the test didn't wait for it.
Since our changes make getJSON take a bit longer to execute, the DOM element was always updated too late. I've made the test script wait for a "change" event on this element, and now the tests pass.

Ryan, I've also followed your advice about moving the cache from the preferences if it still exists there and storing the network response every time.
Please give me your opinion on this new version :)

[1]: https://dxr.mozilla.org/mozilla-central/source/devtools/client/webide/test/test_simulators.html#165
Attachment #8757940 - Flags: review?(jryans)
Attachment #8757940 - Flags: feedback+
Attachment #8757284 - Attachment is obsolete: true
I've been thinking about this, and on second thought, maybe changing the code so that tests pass isn't satisfying. I've only added this:

    // Trigger any listener waiting for this information
    let change = document.createEvent("HTMLEvents");
    change.initEvent("change", true, true);
    this._form.dispatchEvent(change);

So it probably doesn't change the simulator's performance, but it would be better if we didn't have to change any code except in the test itself. I think there are a few other options:

A. Adding an arbitrary wait, but this slows down the tests and isn't bulletproof
B. Adding a polling system at a regular interval, that will check if the form name has been updated every X ms. We can set a timeout and the test fails if it hasn't been updated after Y ms.

What do you think?
(In reply to Benoit Chabod [:bigben] from comment #10)
> By the way, I already have access to the try server, and here's the test
> benchmark for that patch:
> https://treeherder.mozilla.org/#/jobs?repo=try&revision=d0d2ea9c1f38
> As you can see, besides the intermittent failures, some WebIDE tests don't
> pass anymore.
> I will try to fix this :/

As an aside for future try runs, I would suggest something like "-u mochitests,xpcshell" for DevTools code.  Reftests are generally not affected by DevTools, but xpcshell is.

See Hacking page[1] for more notes.  Will review and reply to comments in a bit.

[1]: https://wiki.mozilla.org/DevTools/Hacking#DevTools_Automated_Tests
(In reply to Benoit Chabod [:bigben] from comment #13)
> I've been thinking about this, and on second thought, maybe changing the
> code so that tests pass isn't satisfying. I've only added this:
> 
>     // Trigger any listener waiting for this information
>     let change = document.createEvent("HTMLEvents");
>     change.initEvent("change", true, true);
>     this._form.dispatchEvent(change);
> 
> So it probably doesn't change the simulator performance, but it would be
> better if we didn't have to change any code, except in the test itself. I
> think there a few other options:
> 
> A. Adding an arbitrary wait, but this slows down the tests and isn't
> bulletproof
> B. Adding a polling system at a regular interval, that will check if the
> form name has been updated every X ms. We can set a timeout and the test
> fails if it hasn't been updated after Y ms.
> 
> What do you think?

In most cases, adding some kind of event (which event is best depends on the specific situation) tends to be the best way to handle races like this, since it clearly communicates when it's safe to proceed.  Waiting on a timeout is bad, especially on try, where the machines can be really slow, leading to intermittents.  Polling can work, but it's pretty unsatisfying, I'd say.  So, an event is usually the right choice.
Comment on attachment 8757940 [details] [diff] [review]
3rd patch - change getJSON behaviour and move caches to local storage

Review of attachment 8757940 [details] [diff] [review]:
-----------------------------------------------------------------

Great, I think this version looks good overall!  Just a few small things to tweak.

After fixing those and assuming try looks good (make sure to add the try run as a comment here in the bug), you should be ready to land (set "checkin-needed").

::: devtools/client/webide/content/simulator.js
@@ +114,5 @@
>  
>        // Update the form fields.
>        this._form.name.value = simulator.name;
> +
> +      // Trigger any listener waiting for this information

Let's move this to the end of the `then()` block, so we can be sure all fields and elements are up to date.

::: devtools/client/webide/test/test_simulators.html
@@ +148,5 @@
>  
>            let doc = simulatorPanel.contentWindow.document;
>            let form = doc.querySelector("#simulator-editor");
> +
> +          let deferred = new Promise((resolve, reject) => {

Let's use a more specific name for the deferred, like `formReady` or something.
Attachment #8757940 - Flags: review?(jryans) → review+
Thanks for the review Ryan!
I've addressed your two remarks, and the patch should be ready for landing now.
Here's the corresponding try run on Treeherder:

https://treeherder.mozilla.org/#/jobs?repo=try&revision=b503311ebbe8
Attachment #8758709 - Flags: review+
Attachment #8758709 - Flags: feedback+
Attachment #8757940 - Attachment is obsolete: true
Keywords: checkin-needed
Pushed by cbook@mozilla.com:
https://hg.mozilla.org/integration/fx-team/rev/e88afebe9226
Change getJSON behaviour and move caches to local storage, r=jryans
Keywords: checkin-needed
https://hg.mozilla.org/mozilla-central/rev/e88afebe9226
Status: ASSIGNED → RESOLVED
Closed: 3 years ago
Resolution: --- → FIXED
Target Milestone: --- → Firefox 49
Iteration: --- → 49.3 - Jun 6
Priority: P3 → P1
Whiteboard: [multiviewport] [reserve-rdm] → [multiviewport] [mvp-rdm]
I managed to reproduce this issue on Firefox 49.0a1 (2016-05-04) and on Windows 10 x64, using the STR received from Ryan through IRC.

On Ubuntu and Windows, the list is updated after enabling RDM, or after disabling and re-enabling it.
On Mac OS, the list is not refreshed using these STR. Is there something else I should modify in order to test this issue on Mac OS?
Here is the error thrown in console on Mac: not well-formed devices.json:1:1.

Also, the list is not refreshed while the RDM is enabled. I think that this is expected. Ryan, can you please confirm this? 
The tests were performed on Firefox 50.0a1 (2016-06-06) and on Windows 10 x64, Ubuntu 16.04 x64 and on Mac OS X 10.10.5.
QA Whiteboard: [qe-rdm]
Flags: needinfo?(jryans)
After discussing with Benoit Chabod on IRC, I found that there were some problems with the format of the JSON file on my side.
After modifying the file format, the list of devices is correctly refreshed on Mac OS.
Since the device list is correctly refreshed, I am marking the issue Verified Fixed.
Status: RESOLVED → VERIFIED
Flags: qe-verify+
Flags: needinfo?(jryans)
Product: Firefox → DevTools