Python servers: run multiple API versions in parallel

RESOLVED FIXED

(Reporter: philikon, Unassigned)


We're contemplating speccing out a 1.2 iteration of the Sync API. If that ever went into production, we'd have to run 1.0/1.1 (which are the same for hysterical raisins) and 1.2 in parallel on the Sync nodes. Older clients would access 1.0/1.1, newer ones would go through 1.2, accessing the same Sync data.

AFAIUI the PHP server would handle this by simply having two different code bases deployed to the server, one in the '1.0' dir (with a symlink to '1.1'), one in the '1.2' directory.

Do we have a plan yet for how to do this with the Python server? If so, yay RESOLVED WORKSFORME. If not, we probably want to start coming up with one now, before any work on new API versions commences.
Operationally, we can in theory run two different gunicorn instances on the webservers, one serving 1.0/1.1, one serving 1.2; and then use logic either in nginx or in Zeus to direct traffic to the appropriate pool.
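For concreteness, the two-pool plan above could look roughly like the following nginx fragment. This is only a sketch: the upstream names and ports are invented for illustration, not actual Sync node configuration.

```nginx
# Hypothetical fragment: two gunicorn pools, one per major API version,
# selected by URL prefix. Names and ports are illustrative only.
upstream sync_v1 { server 127.0.0.1:8001; }
upstream sync_v2 { server 127.0.0.1:8002; }

server {
    listen 80;

    # 1.0 and 1.1 are the same code base, so both prefixes hit the same pool.
    location ~ ^/1\.[01]/ { proxy_pass http://sync_v1; }
    location ^~ /1.2/     { proxy_pass http://sync_v2; }
}
```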
That sounds like an acceptable plan. I just wanted to make sure we have a bug on the issue. Whether it gets resolved in code or in config files -- I don't really care :)
It's certainly one plan. It would be more efficient to only run one instance of the code and have the python do the right thing. I can't comment on whether that's an option or not.
(In reply to comment #3)
> It's certainly one plan. It would be more efficient to only run one instance of
> the code and have the python do the right thing. I can't comment on whether
> that's an option or not.

One instance of the code seems doomed to a whole lot of if-then statements.

It's possible if we do a substantial overhaul that we might do a single controller that checks which api it is, then basically splits off into a subcontroller for the rest. But doing something at the zeus level seems more likely.
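The single-controller idea described above could be sketched as a small WSGI dispatcher that peels off the version prefix and hands the request to a per-version sub-application. This is a sketch only; the handler names are hypothetical and this is not the actual server-storage code.

```python
# Sketch of a "single controller" that checks the API version in the URL
# and delegates to a per-version sub-app. Handler names are hypothetical.

def make_version_router(apps):
    """Return a WSGI app that dispatches on the leading /<version>/ segment.

    `apps` maps version strings (e.g. "1.1") to WSGI sub-applications.
    """
    def router(environ, start_response):
        path = environ.get("PATH_INFO", "/")
        segments = path.lstrip("/").split("/", 1)
        version = segments[0]
        app = apps.get(version)
        if app is None:
            start_response("404 Not Found", [("Content-Type", "text/plain")])
            return [b"Unsupported API version\n"]
        # Shift the version prefix into SCRIPT_NAME so sub-apps see
        # version-relative paths, per standard WSGI routing practice.
        environ["SCRIPT_NAME"] = environ.get("SCRIPT_NAME", "") + "/" + version
        environ["PATH_INFO"] = "/" + (segments[1] if len(segments) > 1 else "")
        return app(environ, start_response)
    return router
```

Mapping both "1.0" and "1.1" keys to the same sub-app would reproduce the PHP symlink trick in code; the downside, as noted, is that the dispatch logic now lives in the app instead of in Zeus/nginx config.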
(In reply to comment #4)
> (In reply to comment #3)
> > It's certainly one plan. It would be more efficient to only run one instance of
> > the code and have the python do the right thing. I can't comment on whether
> > that's an option or not.
> 
> One instance of the code seems doomed to a whole lot of if-then statements.
> 
> It's possible if we do a substantial overhaul that we might do a single
> controller that checks which api it is, then basically splits off into a
> subcontroller for the rest. But doing something at the zeus level seems more
> likely.

This is an alias problem. Aliases at the Zeus or nginx level seem less painful and less ugly than at the code level: our apps are versioned, and we don't want to release some kind of multi-versioned app that handles API routing itself. I have done it for 1.0/1.1 because 1.1 is backward compatible, but the next version, if backward incompatible, will be a new thing.
I like the idea of multiple gunicorns running (one for each major supported API version) and nginx having rules to reverse-proxy based on the URL.
Semi-related: I think we could run several nginx instances per box, because we'll probably end up with an under-used CPU for many apps on the web heads.
Fixed as a config-related issue
Status: NEW → RESOLVED
Last Resolved: 8 years ago
Resolution: --- → FIXED