Closed Bug 523955 Opened 15 years ago Closed 15 years ago

Add memcache config for authstage.mozilla.com & mozilla.com

Categories

(Infrastructure & Operations Graveyard :: WebOps: Other, task)

Platform: All
OS: Other
Type: task
Priority: Not set
Severity: major

Tracking

(Not tracked)

RESOLVED WONTFIX

People

(Reporter: rdoherty, Assigned: dmoore)

Details

Can we get memcache configuration added to includes/config.inc.php on authstage.mozilla.com & mozilla.com?

This is preparation for pushing a patch for bug 500849 (improving YSlow scores by concatenating CSS & JS). The concatenation script uses memcache to avoid reading from disk on every request.
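
To illustrate the pattern (sketch only; the cache key and file list below are made up for illustration, not the actual script's values), the idea is to check memcache first and only touch the disk on a miss:

<?php
// Sketch of the cache-before-disk pattern described above. The cache key and
// file paths are hypothetical placeholders.
$memcache = new Memcache();
$memcache->addServer('localhost', 11211);

$cacheKey = 'mozcom:minified-css';
$bundle   = $memcache->get($cacheKey);

if ($bundle === false) {
    // Cache miss: read and concatenate the source files from disk once.
    $files  = array('css/screen.css', 'css/print.css'); // hypothetical paths
    $bundle = '';
    foreach ($files as $file) {
        $bundle .= file_get_contents($file) . "\n";
    }
    // Store the result so subsequent requests skip the disk reads.
    $memcache->set($cacheKey, $bundle, 0, 3600); // flags = 0, expire in 1 hour
}

header('Content-Type: text/css');
echo $bundle;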

A memcache config example can be viewed at the bottom of includes/config.inc.php-dist on trunk and authstage.
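
For anyone who doesn't want to dig through the -dist file, the block is basically just a server pool definition along these lines (the variable name and entries here are hypothetical; the real names are in includes/config.inc.php-dist):

<?php
// Hypothetical shape of a memcache config block; see includes/config.inc.php-dist
// for the actual variable names used by the site.
$config['memcache_servers'] = array(
    array('host' => '127.0.0.1', 'port' => 11211, 'weight' => 1),
);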

Let me know if there are any questions.
Assignee: server-ops → dmoore
Ryan,

I've updated the config for http://www.authstage.mozilla.com

www.mozilla.com is a different matter, however, as the memcache config is more complicated. We're discussing this internally, and it's not yet deployed.
(In reply to comment #1)
> www.mozilla.com is a different matter, however, as the memcache config is more
> complicated. We're discussing this internally, and it's not yet deployed.

Thanks, let us know when it's ready and we'll push.
How long do you think it'll take?  Can we get an ETA?
mozilla.com is geographically distributed, so we're going to need to decide how to properly integrate a memcache configuration for multiple locations. Staging is simple, as memcached runs locally on the webserver itself. We do not run memcache on the production webservers, though... we have dedicated memcache servers.
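
Roughly, the config difference looks like this (hostnames below are invented placeholders, not our actual servers):

<?php
// Staging: memcached runs on the webserver itself, so the pool is just localhost.
$staging_memcache = array(
    array('host' => '127.0.0.1', 'port' => 11211),
);

// Production: a pool of dedicated memcache servers, which would have to be
// defined separately for each datacenter serving www.mozilla.com.
$production_memcache = array(
    array('host' => 'memcache1.sjc.example.internal', 'port' => 11211),
    array('host' => 'memcache2.sjc.example.internal', 'port' => 11211),
);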
It's not proxied?  So we have actual mozilla.com webheads elsewhere?

If that's the case you could spin up small memcache partitions locally just for this.  I don't think we need much memory for this, honestly.  And if memcache is a problem we can engineer around it and use local files instead.  Would spinning up a 128M memcache partition be difficult to do in our other datacenters?
For reference... there are nodes for www.mozilla.com in Amsterdam and Beijing.  Those two locations do not have any memcache servers.  We'll need to deploy memcache in those two locations first (which could be months - depends on hardware delivery, etc).

The other viable option would be to stop serving www.mozilla.com from those locations and just serve it from San Jose, but that seems like a step backwards.

If there's enough application backend to need memcache, the site is really straying from what the static cluster was intended for, and it really needs to be moved to another cluster that has support for web applications.
I mid-aired with morgamic on that, and submitted anyway.

We *could* change it to proxy from those locations instead of actually serving it from there (similar to how AMO works).  That'd be easy to do I suspect.
The idea is just having a small bucket to place the pre-compiled output for the minified JS and CSS.  We're not having load issues with mozilla.com at all, but this is an easy way to reduce overall traffic that we think is worthwhile.
For Amsterdam and Beijing I think we can modify the minification script to simply fall back to a file cache if memcache does not exist.  Doesn't seem like it's really worth it to set up memcache just for this.
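
Something along these lines would do it (sketch only; the key, path, and host below are placeholders):

<?php
// Sketch of the fallback: prefer memcache, fall back to a file cache in /tmp
// when no memcache server is reachable.
$cacheKey  = 'mozcom-minified-bundle';
$cacheFile = '/tmp/' . $cacheKey;
$bundle    = false;

if (class_exists('Memcache')) {
    $memcache = new Memcache();
    // connect() returns false (warning suppressed here) if no server is listening.
    if (@$memcache->connect('127.0.0.1', 11211)) {
        $bundle = $memcache->get($cacheKey);
    }
}

if ($bundle === false && is_readable($cacheFile)) {
    // No memcache available (the Amsterdam/Beijing case): use the file cache.
    $bundle = file_get_contents($cacheFile);
}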
To echo justdave, the dependency on memcache does imply we should move this to a cluster designated for dynamic webapps. As an upside, this would make it easier to integrate into the global load balancing / proxy system currently in place.
Never mind -- the minify code already falls back to /tmp if memcache doesn't exist.  Like I said in comment #8, redoing the static cluster to support memcache is not a good use of time.
Status: NEW → RESOLVED
Closed: 15 years ago
Resolution: --- → WONTFIX
FYI, we -only- have origin servers in San Jose and Beijing.  Amsterdam was rebuilt as proxy/cache only.
Component: Server Operations: Web Operations → WebOps: Other
Product: mozilla.org → Infrastructure & Operations
Product: Infrastructure & Operations → Infrastructure & Operations Graveyard