Closed Bug 945449 Opened 11 years ago Closed 11 years ago

FxAccountsClient should support auth server's backoff protocol

Categories

Component: Firefox :: Firefox Accounts
Type: defect
Hardware: All
OS: Gonk (Firefox OS)
Priority: Not set
Severity: normal

Tracking

Status: RESOLVED FIXED
Target Milestone: Firefox 30
Tracking Status:
firefox29 + fixed
firefox30 --- fixed

People

(Reporter: spenrose, Assigned: spenrose)

References

Details

(Whiteboard: [qa+])

Attachments

(2 files, 5 obsolete files)

Per https://github.com/mozilla/fxa-auth-server/pull/323, the FxA auth server may return an HTTP header indicating that the client should wait before re-sending any requests. The client should follow it.
Blocks: 935245
Whiteboard: [qa+]
Blocks: 943521
The protocol is specified here:

  https://github.com/mozilla/fxa-auth-server/blob/master/docs/api.md#backoff-protocol

I suspect the best place to sniff the errno field is in:

  services/fxaccounts/FxAccountsManager.jsm

but that may require some refactoring. Also, given that the expected Retry-After values are very human-noticeable, we will need to give FxOS or Firefox enough information to make its own policy and UX decisions at the gaia/RP and sync/chrome levels, respectively. Thoughts from the Sync team? Fernando?
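For illustration, a backoff error as the client sees it might look roughly like the following (a sketch: the status code and message match the 429 test fixture added later in this bug; the errno value is an assumption based on the spec linked above):

let backoffError = {
  code: 429,                  // HTTP status of the backoff response
  errno: 114,                 // assumed errno for "client has sent too many requests"
  error: "Too Many Requests",
  message: "Client has sent too many requests",
  retryAfter: 30              // seconds, mirrored from the Retry-After header
};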
Flags: needinfo?(ferjmoreno)
This is an interesting addition :)

(In reply to Sam Penrose from comment #3)
> The protocol is specified here:
> 
>  
> https://github.com/mozilla/fxa-auth-server/blob/master/docs/api.md#backoff-
> protocol
> 
> I suspect the best place to sniff the errno field is in:
> 
>   services/fxaccounts/FxAccountsManager.jsm
> 
> but that may require some refactoring. Also, given that the expected
> Retry-After values are very human-noticeable, we will need to give FxOS or
> Firefox enough information to make its own policy and UX decisions at the
> gaia/RP and sync/chrome levels, respectively. Thoughts from the Sync team?
> Fernando?

FxAccountsManager already handles this error.

Check https://mxr.mozilla.org/mozilla-central/source/services/fxaccounts/FxAccountsCommon.js#61

and https://mxr.mozilla.org/mozilla-central/source/services/fxaccounts/FxAccountsManager.jsm#58

We propagate the JSON object included in the error response to the content side, so it seems that Gaia already has the required information.

I can't find any UX specification that
Flags: needinfo?(ferjmoreno)
(Sorry, I hit the send button before I was supposed to :\ ).

... I can't find any UX specification that contains any guidance about what we are supposed to do in case a server request is triggered during periods of heavy load, but I would assume that:

1. We want to re-trigger the failed request once the 'retryAfter' timeout expires.
2. We want to notify the user about this.
3. We want to allow the user to cancel the retry.

For 1. IMHO we should probably queue the failed requests in FxAccountsClient.jsm

For 2. we already have the required bits in the platform, so we just need to appropriately handle the error in the Gaia side.

For 3. we will need a way to let FxA manager apps (FTU and Settings in B2G) cancel a pending request. So we'll need to add a new, let's say, 'cancelPendingRequest(aId)' method to the existing B2G glue API (IAC based API + FxAccountsMgmtService.jsm API) that should allow apps to remove a request from the requests queue.
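As a purely hypothetical sketch of the queue-and-cancel idea in 1.-3. above (none of these names exist in the tree, and the bug ultimately landed a much simpler approach):

let pendingRequests = new Map();   // id -> { timer, reject }
let nextRequestId = 0;

// Queue a request to be retried after `retryMs` and hand back an id that a
// manager app could use to cancel it.
function queueRequest(retryMs, doRequest) {
  let id = nextRequestId++;
  let promise = new Promise((resolve, reject) => {
    let timer = setTimeout(() => {
      pendingRequests.delete(id);
      doRequest().then(resolve, reject);   // retry once the backoff expires
    }, retryMs);
    pendingRequests.set(id, { timer, reject });
  });
  return { id, promise };
}

// The hypothetical 'cancelPendingRequest(aId)' glue method.
function cancelPendingRequest(aId) {
  let entry = pendingRequests.get(aId);
  if (entry) {
    clearTimeout(entry.timer);
    entry.reject(new Error("Request cancelled by user"));
    pendingRequests.delete(aId);
  }
}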
Fernando, thanks, that's very helpful. Adam and John, we'll need an error screen or two.
Flags: needinfo?(arogers)
Assignee: nobody → jparsons
Target Milestone: --- → mozilla29
Blocks: 905997
Talked to jgruen on IRC just now.  He said he should be able to put something together tomorrow.
Target Milestone: mozilla29 → ---
Nothing more I can add here beyond "Yes, we should have UX for this."
Flags: needinfo?(arogers) → needinfo?(jgruen)
Summary: FxAccountsClient should support server's backoff protocol → FxAccountsClient should support auth server's backoff protocol
Blocks: 955951
Component: Identity → FxA
Attached patch 945449-scheduler.patch (obsolete) — Splinter Review
So I missed that Jed had taken this, and tried to sketch out the approach that Fernando had suggested. This first patch creates a very simple, second-resolution delay tracking object. It does not (contrary to the patch file name) schedule future calls.
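Roughly, such an object could be as small as this (a sketch of the idea, not the obsolete patch itself):

// Second-resolution delay tracker: remembers when a backoff window ends
// and answers whether requests are currently allowed again.
function NetworkDelay() {
  this._okAfter = 0;   // epoch seconds after which requests may resume
}

NetworkDelay.prototype = {
  start: function(retryAfterSeconds) {
    this._okAfter = Math.ceil(Date.now() / 1000) + retryAfterSeconds;
  },
  get active() {
    return Math.floor(Date.now() / 1000) < this._okAfter;
  },
  get remainingSeconds() {
    return Math.max(0, this._okAfter - Math.floor(Date.now() / 1000));
  },
};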
Assignee: jparsons → spenrose
Attached patch 945449-client-backoff.patch (obsolete) — Splinter Review
This patch modifies hawk.js to include a Retry-After value if the server sends one, and modifies FxAccountsClient to notice it and store it. I need to add tests for both changes, but I wanted to get feedback on the approach first. Jed, my apologies if you were already working on this, I am happy to defer to your WIP.
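The shape of the hawk.js side of the change is roughly this (a sketch; the exact response and error objects shown here are assumptions, the real diff is in the attachment):

// When turning a non-2xx response into a rejection, copy a Retry-After
// header (seconds) onto the error object so FxAccountsClient can see it.
function errorFromResponse(response, body) {
  let error = new Error(body.message || "Request failed");
  error.code = response.status;
  let retryAfter = response.headers && response.headers["retry-after"];
  if (retryAfter) {
    error.retryAfter = parseInt(retryAfter, 10);
  }
  return error;
}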
Attachment #8376863 - Flags: feedback?(jparsons)
Attachment #8376863 - Flags: feedback?(ferjmoreno)
I am inclined to make the Gaia work a separate bug. Would it make sense to handle the retry scheduling there?
Comment on attachment 8376863 [details] [diff] [review]
945449-client-backoff.patch

Review of attachment 8376863 [details] [diff] [review]:
-----------------------------------------------------------------

::: services/fxaccounts/FxAccountsClient.jsm
@@ +13,5 @@
>  Cu.import("resource://services-common/hawk.js");
>  Cu.import("resource://services-crypto/utils.js");
>  Cu.import("resource://gre/modules/FxAccountsCommon.js");
>  Cu.import("resource://gre/modules/Credentials.jsm");
> +Cu.import("resource://gre/modules/Delay.jsm");

Did you forget to add Delay.jsm to the patch?
Attachment #8376863 - Flags: feedback?(ferjmoreno)
Sorry, patch 8376863 depends on the previous, 8376862. Perhaps I should have made them a single patch.
Blocks: 973635
Oh, I see, sorry, I didn't notice the other patch :). I'll set myself again as feedback? and will look into the code later.
Attachment #8376863 - Flags: feedback?(ferjmoreno)
Comment on attachment 8376863 [details] [diff] [review]
945449-client-backoff.patch

Review of attachment 8376863 [details] [diff] [review]:
-----------------------------------------------------------------

Thanks for this patch Sam.

My main concern here is that we are not queueing HAWK requests done via FxAccountsClient._request(); we are just rejecting request promises and hoping that the caller handles the retry work itself. We have multiple callers, so IMHO we need to implement the retry mechanism in the client itself and don't fulfill/reject the request promise until we get the "real" response for the server request or the consumer decides to stop waiting for the retry.
Attachment #8376863 - Flags: feedback?(ferjmoreno) → feedback-
Hhhmm, "We have multiple callers, so IMHO we need to implement the retry mechanism in the client itself and don't fulfill/reject the request promise" I considered the multiple callers a reason to move the queue into the callers, since each caller has to track waiting/cancel state anyway, why add another waiting/cancel state machine in Gecko? Shouldn't the UX display "Server busy, retry in <13> seconds <cancel>", and if it should, is there any more work to do for Gecko? Also, if the user presses cancel in one UX and Gecko is managing the queue, won't it have to do extra work to propagate the cancel back to the second UX? But I don't feel strongly.
(In reply to Sam Penrose from comment #16)
> Hhhmm, "We have multiple callers, so IMHO we need to implement the retry
> mechanism in the client itself and don't fulfill/reject the request promise"
> I considered the multiple callers a reason to move the queue into the
> callers, since each caller has to track waiting/cancel state anyway, why add
> another waiting/cancel state machine in Gecko?

Callers don't need to track waiting state; they only need to track the fulfillment of the promise associated with the client request that they made. Allowing callers to handle the waiting/retry/cancel work can be quite hard and will certainly require a lot of duplicated code.

> Shouldn't the UX display
> "Server busy, retry in <13> seconds <cancel>", and if it should, is there
> any more work to do for Gecko?

Yes, we need to show this message and allow the user to cancel the requests if needed.

As soon as we get the first backoff response from the server, we need to queue the request (including the promise) and notify the UI side to show the appropriate UX (probably via nsIObserverService). Subsequent requests will also be queued. In case of cancellation, we need to cancel the whole queue, as some requests might be dependent on others.

> Also, if the user presses cancel in one UX
> and Gecko is managing the queue, won't it have to do extra work to propagate
> the cancel back to the second UX? But I don't feel strongly.

No, the cancellation should be handled by the FxAccountsClient request queue only and it will reject all the promises associated with each queued request.

Let me know if this doesn't make sense or if you need me to write some code for this, please.
Maybe it is easier to see this with an example. For a signIn request:

(I'm taking a B2G example but this is extensible to other products).

1. Omitting the Gaia specifics, we get a 'signIn' request from the content side at [1] and call the appropriate FxAccountsManager method [2].

2. Let's say that everything goes well and we are allowed to call FxAccountsClient [3].

3. We do the 'account/login' request [4]

4. And we get a backoff response at [5]

5. Instead of rejecting the promise [6], we need to save the request and the promise in a task queue that you'll need to write. We will notify about this probably via nsIObserverService [7]. In the B2G case, I would handle this notification in FxAccountsManager, like in [8], and would use FxAccountsUIGlue to launch the retry UI (you'll need to add a new method to [9] and [10]). Other products can handle this observer notification themselves unless they use FxAccountsManager.

6. When the timeout for the queued request expires, we retry it and notify about the retry attempt to update the UI (again via nsIObserverService probably).

7. If the server replies with something different from the backoff response, we send a notification to remove the retry UI and fulfill or reject the promise associated with the signIn request that we saved in step 5. (A condensed sketch of these steps follows the reference list below.)

[1] https://mxr.mozilla.org/mozilla-central/source/b2g/components/FxAccountsMgmtService.jsm#130
[2] https://mxr.mozilla.org/mozilla-central/source/services/fxaccounts/FxAccountsManager.jsm#211
[3] https://mxr.mozilla.org/mozilla-central/source/services/fxaccounts/FxAccountsManager.jsm#130
[4] https://mxr.mozilla.org/mozilla-central/source/services/fxaccounts/FxAccountsClient.jsm#104
[5] https://mxr.mozilla.org/mozilla-central/source/services/fxaccounts/FxAccountsClient.jsm#317
[6] https://mxr.mozilla.org/mozilla-central/source/services/fxaccounts/FxAccountsClient.jsm#319
[7] https://developer.mozilla.org/en-US/docs/XPCOM_Interface_Reference/nsIObserverService
[8] https://mxr.mozilla.org/mozilla-central/source/services/fxaccounts/FxAccountsManager.jsm#30
[9] https://mxr.mozilla.org/mozilla-central/source/services/fxaccounts/interfaces/nsIFxAccountsUIGlue.idl
[10] https://mxr.mozilla.org/mozilla-central/source/b2g/components/FxAccountsUIGlue.js
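A condensed, hypothetical sketch of steps 4-7 above (the observer topic, taskQueue and sendRequest names are made up for illustration; Services.obs is the usual nsIObserverService helper from Services.jsm):

let taskQueue = [];
const BACKOFF_TOPIC = "fxaccounts:backoff";   // assumed topic name

// Step 5: queue the request and its promise callbacks instead of rejecting,
// and tell the UI side so it can show "retrying in N seconds [Cancel]".
function handleBackoffResponse(error, request, resolve, reject) {
  taskQueue.push({ request, resolve, reject });
  Services.obs.notifyObservers(null, BACKOFF_TOPIC,
                               JSON.stringify({ retryAfter: error.retryAfter }));
  // Step 6: retry once the backoff period expires (Retry-After is in seconds).
  setTimeout(retryQueuedRequests, error.retryAfter * 1000);
}

// Step 7: on a "real" response, fulfill or reject the original promise.
function retryQueuedRequests() {
  for (let task of taskQueue.splice(0)) {
    sendRequest(task.request).then(task.resolve, task.reject);
  }
}

// Cancellation (the hypothetical cancelPendingRequest path) would simply
// reject everything in taskQueue and empty it.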
As a side note, we will need to ensure the right thing happens if the user happens to change during this interval - this is a similar issue my patch in bug 972070 is trying to solve (ie, avoid things getting confused when a long-lived promise fires, but after the user has changed.)

In general though, I'm torn here:

* At face value, it is certainly much simpler to queue the promises and resolve them later internally to the FxAccounts code.  In theory, clients can remain blissfully unaware this happened - although in practice I doubt it's that simple.

* Sync on desktop itself would probably be best served by just rejecting the promise and letting the existing sync mechanisms be used to do the retry.  But this alone isn't enough justification.

* I'm somewhat wary of very-long-lived promises.  If we end up being told to back-off for a number of hours, we may well have *many* pending promises in the queue.  We'd still need some UX-related code that prevents the callers from queueing up the same request multiple times - eg, the UI would need to change so the user doesn't continuously attempt a login during this backoff period.  So the front-end is already going to need to be aware of being in the backoff state and take some special action (eg, disable the ability to re-login and/or explain why they aren't being logged in).  Some clients may prefer to just abort in this case (eg, the user doesn't care and wants to go do something else) - but their promise will remain in the queue.

Off the top of my head, I wonder if there is some compromise we can make - eg, reject the promise with an error object that itself has *another* promise - one that gets resolved when the backoff period has expired.  This way the front-end still has to enter some state that reflects we were backed off and can decide how it wants to proceed, with the simplest option being to simply retry once the "backoff period expired" promise fires?  IIUC, this wouldn't need a new promise per client, just one single shared promise once we enter this backoff state.
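A sketch of that compromise (hypothetical shape, nothing here was agreed or landed):

// Reject with an error that carries one shared "backoff expired" promise,
// so each caller can decide whether and when to retry.
let backoffExpiredPromise = null;

function rejectWithBackoff(error) {
  if (!backoffExpiredPromise) {
    backoffExpiredPromise = new Promise(resolve => {
      setTimeout(() => {
        backoffExpiredPromise = null;
        resolve();
      }, error.retryAfter * 1000);   // Retry-After is in seconds
    });
  }
  error.backoffExpired = backoffExpiredPromise;
  return Promise.reject(error);
}

// The simplest possible caller: retry once the shared promise fires.
//   client.signIn(email, password).catch(err => {
//     if (err.backoffExpired) {
//       return err.backoffExpired.then(() => client.signIn(email, password));
//     }
//     throw err;
//   });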
Fernando, thank you very much for the detailed explanation of the code path. I defer to your sense of what's best. My uninformed sense was along the lines of Mark's comments, but I will happily follow whichever path the two of you settle on. That said, it feels to me like time to ask whether exposing this feature to the user layer is worth it. We could use a stupid delay tracker like the one in my patch above in the Gecko layer (thereby observing the contract with the server API), and until it expired simply return a "Service not available" error to Gaia, and let the UX handle it as it will. Adam, I'd like to get your opinion on the value of being able to show the user a dialog: "The service is too busy, retrying in XX seconds [Cancel]". It seems that the cost may be quite high relative to the time we have left for 1.4. The alternative is simply to show a dumb "service unavailable" dialog, with or without a [Retry] button that won't work before the backoff period expires.
Flags: needinfo?(arogers)
(In reply to Mark Hammond [:markh] from comment #20)
> As a side note, we will need to ensure the right thing happens if the user
> happens to change during this interval - this is a similar issue my patch in
> bug 972070 is trying to solve (ie, avoid things getting confused when a
> long-lived promise fires, but after the user has changed.)
> 

Indeed. That's why handling the cancellation in one single place looks easier to me.

> In general though, I'm torn here:
> 
> * At face value, it is certainly much simpler to queue the promises and
> resolve them later internally to the FxAccounts code.  In theory, clients
> can remain blissfully unaware this happened - although in practice I doubt
> it's that simple.
> 
> * Sync on desktop itself would probably be best served by just rejecting the
> promise and letting the existing sync mechanisms be used to do the retry. 
> But this alone isn't enough justification.
> 

Fair enough. I am mostly ignorant about the Sync code :(

> * I'm somewhat wary of very-long-lived promises.  If we end up being told to
> back-off for a number of hours, we may well have *many* pending promises in
> the queue.  We'd still need some UX-related code that prevents the callers
> from queueing up the same request multiple times - eg, the UI would need to
> change so the user doesn't continuously attempt a login during this backoff
> period.  So the front-end is already going to need to be aware of being in
> the backoff state and take some special action (eg, disable the ability to
> re-login and/or explain why they aren't being logged in).  Some clients may
> prefer to just abort in this case (eg, the user doesn't care and wants to go
> do something else) - but their promise will remain in the queue.

It seems to me that having a request queue with more than one backed-off request will hopefully be a very rare edge case. And in any case it can be controlled with an appropriate UI as you suggest. Also we can mitigate this kind of very-long-lived promises issues by directly rejecting the promises if the Retry-After value is bigger than 30 sec., for example. I don't think it is worth queueing requests with a Retry-After bigger than that as the user will hardly wait longer than that.

I agree about the required UX changes. And this is exactly what I tried to propose in comment 17 and comment 18. We need to notify the UI about the backoff situation so a proper UI can be shown. This notification doesn't need to follow the path of the promise associated with the UI request. We can notify the UI via other mechanisms (nsIObserverService for instance). For the B2G-specific case I guess that we will show a system dialog with the countdown for the retry and the cancel button, so there won't be any option to trigger any other request from the FxA UI. We can also just disable all the UI options that have associated client requests in B2G and other products. If the user decides to cancel the retry, the UI will notify the platform about this, and the promises in the queue will be rejected and removed from the queue.

> 
> Off the top of my head, I wonder if there is some compromise we can make -
> eg, reject the promise with an error object that itself has *another*
> promise - one that gets resolved when the backoff period has expired.

I am afraid that this approach still requires handling the retry mechanism in all the different places where we do client requests.

> This
> way the front-end still has to enter some state that reflects we were backed
> off and can decide how it wants to proceed, with the simplest option being
> to simply retry once the "backoff period expired" promise fires?  IIUC, this
> wouldn't need a new promise per client, just one single shared promise once
> we enter this backoff state.

We don't need to reject the promise to allow the front-end side to enter a state that reflects the backed-off situation.
(In reply to Sam Penrose from comment #21)
> That said, it feels to me like time to ask whether
> exposing this feature to the user layer is worth it. We could use a stupid
> delay tracker like the one in my patch above in the Gecko layer (thereby
> observing the contract with the server API), and until it expired simply
> return a "Service not available" error to Gaia, and let the UX handle it as
> it will. Adam, I'd like to get your opinion on the value of being able to
> show the user a dialog: "The service is too busy, retrying in XX seconds
> [Cancel]". It seems that the cost may be quite high relative to the time we
> have left for 1.4. The alternative is simply to show a dumb "service
> unavailable" dialog, with or without a [Retry] button that won't work before
> the backoff period expires.

The question here is probably: do we want manual or automatic retries?

FWIW I don't think that adding a [Retry] button is easier than showing the dialog for the automatic retry and the cancel button.
(In reply to Fernando Jiménez Moreno [:ferjm] from comment #22)
> Also we can mitigate this
> kind of very-long-lived promises issues by directly rejecting the promises
> if the Retry-After value is bigger than 30 sec., for example.

I think it might help if we (or I :) knew what "typical" values we could expect from Services here. 30 seconds seems somewhat arbitrary - if longer could be reasonably expected, clients who don't simply abort will still need some basic retry strategy (like Sync already does).  Once they have that, then they might as well handle the entire retry flow - they already need UX, so a promise rejection and a simple timer doesn't sound like *that* much additional complexity (and that's effectively all we'd be giving them for free).  Sync's use-case *definitely* can't just abort so will always need to keep a retry beyond whatever default timeout limit is chosen.

So ISTM:
* If it transpires that the UX for products evolves such that an arbitrary timeout limit doesn't meet requirements, so retry logic gets added anyway, we've significantly increased complexity for no practical gain.
* A simple retry scheme really isn't *that* hard - and of the 2 existing clients, one has already evolved enough to have one.

I think what would change my mind is if a typical Retry-After value was only a few seconds - enough that a *really* slow network would have similar latency.  But then I'd argue we don't really need a notification to surface that state - it's treated exactly like the really slow network case.
> I think it might help if we (or I :) knew what "typical" values we could expect from Services here.

Frankly, we don't really know either :-)

For sync storage servers, where everything is supposed to be running silently in the background, we wouldn't hesitate to send a multi-hour backoff signal if we got into trouble server-side.  For FxA which is much more user-facing, I've no sense of what a reasonable value for this will be in practice.

> if a typical Retry-After value was only a few seconds

That's unlikely.  30s is the current default for database-related errors in the server code, and I'd be surprised if we bothered to send anything less than that.  If we've got a real server-side crisis on our hands we could easily send a higher value.

The question of how to surface this state to the user is an interesting one, and I don't know what the best answer is.  IIRC in the current sync system, an explicit "Sync Now" from the user will override any server-requested backoff and attempt the sync right away.  A "server is busy, retry?" option might be fine.

Also note the difference between a 503-Retry-After and a 429-Retry-After response.  The former is politely asking you not to retry due to server load, but you can retry if you like and it might well work.  The latter is telling you that you *must* wait due to bad client behaviour, and a retry is pointless.
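In client terms, that distinction might be handled roughly like this (a sketch; the "let an explicit user action through" policy is an assumption borrowed from the Sync behaviour described above, not anything agreed in this bug):

// 429 + Retry-After: the client *must* wait, retrying is pointless.
// 503 + Retry-After: a polite request to back off; a retry may still work.
function shouldHonorBackoff(error, userInitiated) {
  if (error.code === 429) {
    return true;
  }
  if (error.code === 503) {
    return !userInitiated;   // e.g. an explicit "Sync Now" may override
  }
  return false;
}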
(In reply to Fernando Jiménez Moreno [:ferjm] from comment #23)
> (In reply to Sam Penrose from comment #21)
> > That said, it feels to me like time to ask whether
> > exposing this feature to the user layer is worth it. We could use a stupid
> > delay tracker like the one in my patch above in the Gecko layer (thereby
> > observing the contract with the server API), and until it expired simply
> > return a "Service not available" error to Gaia, and let the UX handle it as
> > it will. Adam, I'd like to get your opinion on the value of being able to
> > show the user a dialog: "The service is too busy, retrying in XX seconds
> > [Cancel]". It seems that the cost may be quite high relative to the time we
> > have left for 1.4. The alternative is simply to show a dumb "service
> > unavailable" dialog, with or without a [Retry] button that won't work before
> > the backoff period expires.
> 
> The question here is probably: do we want manual or automatic retries?
> 
> FWIW I don't think that adding a [Retry] button is easier than showing the
> dialog for the automatic retry and the cancel button.

That's a good point. We have four weeks left for 1.4. If the no-queue, no [Retry button] path (just a simple "Service unavailable, please try later" dialog) is acceptable to Adam, and we think it will save considerable engineering time, I am inclined to recommend it. Does anyone NOT think it will save us a week or so of development and review time?
At this point "Service unavailable, please try again later" is OK by me. 

We can worry about streamlining the experience later, but it's not MVP.
Flags: needinfo?(arogers)
Spoke with :ckarlof and he is OK with this approach.
To reiterate, I believe this will be our approach to handling Retry-After headers on device:

  - return an error to Gaia, which it can handle with existing "service not available" code
  - store the delay the server has requested
  - when the next request comes in:
    + if the delay has expired, clear it and send the request
    + otherwise, return the same error

Any objections? Any volunteers for f? on the two patches above?
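In code, the approach above is roughly the following (a sketch; sendToServer stands in for the real HAWK request, and the patch that eventually landed clears the flag with a timer instead - see the review comments below):

let backoffError = null;
let backoffUntil = 0;

function request(path, method) {
  if (backoffError) {
    if (Date.now() < backoffUntil) {
      // Delay not yet expired: return the same error without hitting the network.
      return Promise.reject(backoffError);
    }
    backoffError = null;   // delay expired: clear it and send the request
  }
  return sendToServer(path, method).catch(error => {
    if (error.retryAfter) {
      // Store the delay the server requested (Retry-After is in seconds).
      backoffError = error;
      backoffUntil = Date.now() + error.retryAfter * 1000;
    }
    throw error;
  });
}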
If your application receives this error, your application would probably only do one of two things:

1) If the call was in response to a user action, display a "try again later" or "service unavailable" error to the user
2) If the call was part of a background task, reschedule the task to run again after one would expect FxA to be available again (per the value in retry-after). For example, if you have a background sync task that runs every 10 minutes and you get this error with retry-after=5 mins, just skip this iteration and see what happens the next time. (A sketch of this follows at the end of this comment.)

Queuing up individual requests for retries, especially below the application layer, sounds complicated and hard to get right.

Sam, I think your approach would be fine for now, but it is still a little fancy for my taste. Lacking any application handling of this error, it sounds like a good compromise.
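For case 2) above, that boils down to something like this (a sketch; doSync and the 10-minute interval are illustrative):

let backoffUntil = 0;

// A periodic background task that simply skips iterations while a
// server-requested backoff is still in effect.
function maybeSync() {
  if (Date.now() < backoffUntil) {
    return;   // skip this iteration; try again next time
  }
  doSync().catch(error => {
    if (error.retryAfter) {
      backoffUntil = Date.now() + error.retryAfter * 1000;   // seconds -> ms
    }
  });
}

setInterval(maybeSync, 10 * 60 * 1000);   // e.g. every 10 minutes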
Attached patch 945449-client-backoff.patch (obsolete) — Splinter Review
I believe the first patch and this revised version of the second patch fulfill the Gecko behavior we've agreed on.
Attachment #8376863 - Attachment is obsolete: true
Attachment #8379303 - Flags: feedback?(jparsons)
Comment on attachment 8379303 [details] [diff] [review]
945449-client-backoff.patch

Review of attachment 8379303 [details] [diff] [review]:
-----------------------------------------------------------------

::: services/fxaccounts/FxAccountsClient.jsm
@@ +340,5 @@
> +        log.error("request error: " + errorObj);
> +        if (error.retryAfter) {
> +          log.debug('Received backoff response; caching error.');
> +          this.backoffError = error;
> +          this.delay = new FxAccountsNetworkDelay();

Can't we just use a timer instead of using FxAccountsNetworkDelay?

Something like:

if (error.retryAfter) {
  this.backoff = true;
  // Retry-After is in seconds; setTimeout (from Timer.jsm) takes milliseconds.
  setTimeout(() => {
    this.backoff = false;
  }, error.retryAfter * 1000);
}
I followed your helpful suggestion and simplified the code to use a timer. Please let me know how this looks.
Attachment #8376862 - Attachment is obsolete: true
Attachment #8379303 - Attachment is obsolete: true
Attachment #8379303 - Flags: feedback?(jparsons)
Attachment #8380231 - Flags: feedback?(ferjmoreno)
Comment on attachment 8380231 [details] [diff] [review]
945449-simple_backoff_timer.patch

Review of attachment 8380231 [details] [diff] [review]:
-----------------------------------------------------------------

Thanks Sam! Looks good!

::: services/fxaccounts/FxAccountsClient.jsm
@@ +335,5 @@
> +          log.debug('Received backoff response; caching error as flag.');
> +          this.backoffError = error;
> +          // Schedule clearing of cached-error-as-flag.
> +          CommonUtils.namedTimer(
> +            this.clearBackoff,

s/clearBackoff/_clearBackoff

::: services/fxaccounts/tests/xpcshell/test_client.js
@@ +79,5 @@
>  
>    yield deferredStop(server);
>  });
>  
> +add_task(function test_some_http_errors() {

Create a new task for this instead of having one big task for all errors, please.

@@ +128,5 @@
> +    do_check_eq(e.message, "Client has sent too many requests");
> +  }
> +  // Once timer fires, client nulls error out and HTTP calls work again.
> +  client._clearBackoff();
> +  yield client._request("/duringDelayIShouldNotBeCalled", method);

Can you also check the result of this call, please?
Attachment #8380231 - Flags: feedback?(ferjmoreno) → feedback+
I broke out the test and checked the final result per your feedback. I also added a couple more checks in the test. Thanks as always!
Attachment #8380231 - Attachment is obsolete: true
Attachment #8381711 - Flags: review?(ferjmoreno)
Flags: needinfo?(jgruen)
Comment on attachment 8381711 [details] [diff] [review]
945449-simple_backoff_timer.patch

Review of attachment 8381711 [details] [diff] [review]:
-----------------------------------------------------------------

Looks good! r=me

::: services/fxaccounts/tests/xpcshell/test_client.js
@@ +112,5 @@
> +      response.bodyOutputStream.write(message, message.length);
> +    },
> +    "/duringDelayIShouldNotBeCalled": function(request, response) {
> +      response.setStatusLine(request.httpVersion, 200, "OK");
> +      jsonMessage = '{"working": "yes"}'

missing ;

@@ +126,5 @@
> +    yield client._request("/retryDelay", method);
> +  } catch (e) {
> +    do_check_eq(429, e.code);
> +    do_check_eq(30, e.retryAfter);
> +    do_check_neq(typeof(client.fxaBackoffTimer), 'undefined');

nit: use double quotes in the whole patch, please.
Attachment #8381711 - Flags: review?(ferjmoreno) → review+
r=ferjm. Nits addressed. Thanks Fernando!
Attachment #8381711 - Attachment is obsolete: true
Attachment #8382354 - Flags: review+
Keywords: checkin-needed
https://hg.mozilla.org/integration/fx-team/rev/4cd10b3c37d6

Please make sure your final patch has a proper commit message when requesting checkin :)
Flags: in-testsuite+
Keywords: checkin-needed
Whiteboard: [qa+] → [qa+][fixed-in-fx-team]
https://hg.mozilla.org/mozilla-central/rev/4cd10b3c37d6
Status: NEW → RESOLVED
Closed: 11 years ago
Resolution: --- → FIXED
Whiteboard: [qa+][fixed-in-fx-team] → [qa+]
Target Milestone: --- → mozilla30
We need this uplifted to Aurora 29, right?
(In reply to :Gavin Sharp (email gavin@gavinsharp.com) from comment #40)
> We need this uplifted to Aurora 29, right?

yes.
Comment on attachment 8382354 [details] [diff] [review]
945449-simple_backoff_timer.patch

[Approval Request Comment]
Bug caused by (feature/regressing bug #): FxA sync
User impact if declined: If the backend is under load, Sync will make the problem worse and report errors
Testing completed (on m-c, etc.): Landed on m-c
Risk to taking this patch (and alternatives if risky): Low
String or IDL/UUID changes made by this patch: None

Note that the patch from bug 977931 must also come along.
Attachment #8382354 - Flags: approval-mozilla-aurora?
Attachment #8382354 - Flags: approval-mozilla-aurora? → approval-mozilla-aurora+
This and bug 972070 don't apply on Aurora. Are we missing another patch that should be uplifted first?
Tim, that's certainly possible. I haven't done any uplift bisection-triage efforts. I'll see what I can figure out.
The issues will be bug 963835 and bug 938635.  It looks like bug 963835 should be uplifted (and Jed is requesting that as I type), but bug 938635 probably isn't strictly necessary as it is FxOS-only, IIUC.
This patch is updated against Aurora (which was a little painful - conflicts and 1 file rename!)
Product: Core → Firefox
Target Milestone: mozilla30 → Firefox 30