Closed Bug 769759 Opened 12 years ago Closed 11 years ago

Allow clients to make intelligent decisions regarding server limits

Categories

(Cloud Services Graveyard :: Server: Sync, defect)

Type: defect
Priority: Not set
Severity: normal

Tracking

(Not tracked)

VERIFIED WONTFIX

People

(Reporter: gps, Unassigned)

Details

(Whiteboard: [qa?])

Currently, storage service clients can issue HTTP requests that exceed a server's configured operating parameters, e.g. they can POST a set of BSOs that are too large. The spec says the client is supposed to address the issue and continue (e.g. by reducing the size of the request). It would be really nice if clients could make intelligent decisions about what to do based on the configured server limits. Ideas:

* 413 responses send back a header saying the max entity body size
* New HTTP endpoint exposing server configuration (sketched below)

There may be security implications here, e.g. if we say what the max payload size is, somebody could issue requests up to that size knowing they will trigger DB access, achieving the most efficient DoS. Of course, said limit can be easily determined experimentally.
Whiteboard: [qa?]
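For what the second idea might look like from the client side, here is a minimal sketch. The "/info/configuration" path, the field names, and the fallback behaviour are all assumptions for illustration, not anything the current spec defines:

    # Illustrative sketch only: the endpoint path and field names are
    # hypothetical placeholders, not part of the current storage spec.
    import requests

    CONFIG_PATH = "/info/configuration"   # hypothetical

    def fetch_server_limits(base_url):
        """Return the server's advertised operating limits as a dict,
        or an empty dict if the server does not expose them."""
        resp = requests.get(base_url.rstrip("/") + CONFIG_PATH, timeout=30)
        if resp.status_code == 404:
            return {}                     # older server: fall back to spec defaults / probing
        resp.raise_for_status()
        return resp.json()                # e.g. {"max_post_bytes": 2097152, "max_post_records": 100}

With something like this, a client could size its POST bodies against an advertised max_post_bytes value up front instead of discovering the limit through 413 responses.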
I agree with the need for clients to be able to introspect these limits, somehow. Having the server expose a JSON blob with its various limits is one option. Hardcoding (at least some lower bounds on) the limits into the spec may be another. CC'ing :dchan for opinions on the security implications.
I don't believe that exposing a max upload limit to the client reduces security. As mentioned by :gps, this value can be determined experimentally. A benefit of the current system is that we may catch an attacker who is trying to determine the max upload size and preemptively restrict / throttle their IP; however, a determined attacker would be using a botnet. I'm indifferent as to whether there is a separate endpoint for client configuration or whether it is rolled into the storage server. The storage endpoint would still have to check the incoming BSO size.
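However the limits end up being advertised, the storage endpoint still has to enforce them on every incoming write. A rough sketch of that check; the limit value, the response body, and the "X-Max-Payload-Bytes" header are illustrative assumptions, not spec'd behaviour:

    MAX_BSO_PAYLOAD_BYTES = 256 * 1024    # illustrative; the deployed value is configurable

    def check_bso_payload(payload):
        """Validate an incoming BSO payload against the configured maximum.

        Returns (status, headers, body); a real handler would raise the web
        framework's own HTTP error instead of returning a tuple.
        """
        size = len(payload.encode("utf-8"))
        if size > MAX_BSO_PAYLOAD_BYTES:
            # First idea from the opening comment: have the 413 response
            # tell the client what the limit actually is.
            headers = {"X-Max-Payload-Bytes": str(MAX_BSO_PAYLOAD_BYTES)}
            return 413, headers, {"error": "payload too large", "size": size}
        return 200, {}, None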
I think it probably belongs in the somewhat moribund discovery service as additional metadata. Will try to resurrect that.
The spec currently hard-codes limits on the number of items in the "ids" parameter and in the body of a post. We also hard-code the maximum BSO payload size to 256K. What if we just add a further note that any requests with a body over XYZ KB will be rejected? (Or, slightly less restrictive, spec that servers must support requests with bodies less than XYZ KB in size?)
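If the spec ends up hard-coding these limits (or the server advertises them), a client can size its POST batches against them up front rather than reacting to rejections. A sketch in which both caps are parameters, since their final values (the "XYZ KB" above, the per-request record limit) are exactly what is under discussion:

    def batch_bsos(serialized_bsos, max_body_bytes, max_records):
        """Group already-serialized BSO records into POST batches that stay
        under both a body-size cap and a record-count cap (values assumed,
        not normative)."""
        batch, batch_bytes = [], 2            # 2 bytes for the surrounding "[]"
        for record in serialized_bsos:
            record_bytes = len(record) + 1    # +1 for the separating comma
            if batch and (batch_bytes + record_bytes > max_body_bytes
                          or len(batch) >= max_records):
                yield batch
                batch, batch_bytes = [], 2
            batch.append(record)
            batch_bytes += record_bytes
        if batch:
            yield batch

Note that a single over-sized record still has to be rejected (or trimmed) separately; batching only helps keep the aggregate body size under the limit.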
By definition, ids * BSO max provides a theoretical max. If we want a lower total body max, we should just document that it's a configurable option and what the moz value for it is.
I don't think we can do anything productive with this post-sync-1.5.
Status: NEW → RESOLVED
Closed: 11 years ago
Resolution: --- → WONTFIX
ok
Status: RESOLVED → VERIFIED
Product: Cloud Services → Cloud Services Graveyard