Closed Bug 706247 Opened 14 years ago Closed 14 years ago

Easy-to-configure backoff for server

Categories

(Web Apps Graveyard :: AppsInTheCloud, defect, P2)

Tracking

(Not tracked)

RESOLVED INVALID

People

(Reporter: ianbicking, Assigned: tarek)

References

Details

(Whiteboard: server)

Sometimes we might have to ask all clients to slow down talking to the server. We should have a simple configuration to make that happen.
Blocks: 700492
Priority: -- → P2
Whiteboard: server
I see two use cases:

1. Reduce the load.
2. Completely stop the load.

Proposal:

1. Python level: a backoff option that contains the number of seconds we set in Retry-After. If set, all requests get a 503 with a Retry-After header.
2. nginx level: a max requests per minute per client, via http://wiki.nginx.org/HttpLimitReqModule
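The Python-level option could be sketched as a small WSGI middleware. This is a hypothetical illustration, not the actual appsync code; the class and parameter names are made up:

```python
# Hypothetical sketch of the "python level" option: when a backoff value
# (in seconds) is configured, every request is answered with a 503 plus
# a Retry-After header; otherwise requests pass through to the app.
class BackoffMiddleware:
    def __init__(self, app, backoff=None):
        self.app = app          # wrapped WSGI application
        self.backoff = backoff  # seconds to wait, or None to disable

    def __call__(self, environ, start_response):
        if self.backoff is not None:
            start_response('503 Service Unavailable',
                           [('Retry-After', str(self.backoff)),
                            ('Content-Type', 'text/plain')])
            return [b'Server busy, retry later\n']
        return self.app(environ, start_response)
```

With `backoff=None` the middleware is a no-op, so the option can ship enabled-but-unset in the default config.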
The nginx level seems like it would be more about stopping a rogue client from sending too many requests? Retry-After should be sufficient for responsible clients.
We could also send an X-Sync-Poll-Time header on GETs.
For completely stopping the load: a hardcoded 503 page in nginx to shut down the cluster? Something to check with Ops.
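The two nginx-side ideas (per-client rate limiting and a hardcoded 503 to shut the cluster down) could look roughly like this. The zone name, rate, and upstream name are illustrative values, not the production config:

```nginx
# Hypothetical sketch of the nginx-level throttling.
http {
    # Per-client rate limit (HttpLimitReqModule): at most 60 req/min per IP.
    limit_req_zone $binary_remote_addr zone=appsync:10m rate=60r/m;

    server {
        location / {
            limit_req zone=appsync burst=10;
            proxy_pass http://appsync_backend;  # assumed upstream name
        }

        # "Shut down the cluster": swap in a hardcoded 503 for everything.
        # location / { return 503; }
    }
}
```

Requests above the limit get a 503 from nginx before they ever reach the Python backend, which is what makes this suitable against rogue clients that ignore Retry-After.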
Assignee: nobody → tarek
What we have in Sync, to avoid having to restart the server, is a back-off flag stored in memcache. Since we already want to use memcache for GET optimization, I think it's a good idea to store the flag there too.
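The Sync-style approach could be sketched like this. The key name and helper functions are hypothetical, and any memcache-like client exposing `get`/`set`/`delete` would work:

```python
# Hypothetical sketch: a back-off flag kept in memcache so Ops can toggle
# throttling without restarting the server. The server checks the flag on
# each request; Ops set, read, or clear it out of band.
BACKOFF_KEY = 'appsync:backoff'  # assumed key name

def get_backoff(cache):
    """Return the back-off delay in seconds, or None if not set."""
    value = cache.get(BACKOFF_KEY)
    return int(value) if value is not None else None

def set_backoff(cache, seconds):
    """Ask all clients to back off for the given number of seconds."""
    cache.set(BACKOFF_KEY, str(seconds))

def clear_backoff(cache):
    """Remove the flag; the server resumes normal operation."""
    cache.delete(BACKOFF_KEY)
```

Because the flag lives in memcache rather than in the server config, flipping it takes effect on the next request with no process restart.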
Pushed at https://github.com/mozilla/appsync/commit/db5a9c111c8fcc5d3c6e1522cde7c086ee22b306 Ian, I'll let you post-review. Note that we can now also use this to cache the last-modified values, at least for the SQL backend.
The server is now configured with Memcached enabled. There's a backoff script Ops can use to set/get/del the backoff value in memcached. See the end of the README file: https://github.com/mozilla/appsync/blob/master/README.rst
Status: NEW → RESOLVED
Closed: 14 years ago
Resolution: --- → FIXED
The old app sync codebase is no longer going to be supported. All resolved fixed bugs are being marked as invalid, as they no longer apply to the new apps in the cloud service.
Resolution: FIXED → INVALID
Product: Web Apps → Web Apps Graveyard