Bug 425287 (Open): Opened 12 years ago, Updated 11 years ago

Add memcache support to AUS2

Categories

(AUS Graveyard :: Systems, defect)


Tracking

(Not tracked)

People

(Reporter: morgamic, Assigned: morgamic)

Details

Attachments

(1 file, 1 obsolete file)

In case we need a fix for this in the app, I wrote a patch that uses memcached to store XML output for AUS, keyed on rawPath (the URL params).

Patch to come shortly -- it passes acceptance tests and would relieve NFS load if we need it.
Acceptance tests (reload a few times to verify the cache is working):
http://khan-vm.mozilla.org:8080/VerifyAus?test

This wraps the XML output so that if a cached copy was previously set and hasn't expired, it avoids all file reads.
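
For reference, here's a minimal sketch of the caching idea -- not the actual patch. It assumes the pecl memcache extension, and $rawPath, buildXml(), and the config constants are hypothetical stand-ins:

  <?php
  define('MEMCACHE_HOST', 'localhost');
  define('MEMCACHE_PORT', 11211);
  define('MEMCACHE_EXPIRE', 1800);  // seconds before a cached response expires

  $mc = new Memcache();
  $mc->connect(MEMCACHE_HOST, MEMCACHE_PORT);

  // Key the cached XML on the raw request path (URL params).
  $key = 'aus2:' . md5($rawPath);
  $xml = $mc->get($key);

  if ($xml === false) {
      // Cache miss or expired entry: fall back to the normal (NFS-backed)
      // XML generation, then store the result for subsequent requests.
      $xml = buildXml($rawPath);
      $mc->set($key, $xml, 0, MEMCACHE_EXPIRE);
  }

  header('Content-Type: text/xml');
  echo $xml;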

I also added a command-line flush script to clear memcache if we need to.  I recommend we use separate memcache servers for AUS if we do deploy this.
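
A flush script along these lines would do it (a hedged sketch, not the attached one; the host, port, and filename are assumptions), runnable with "php -q flush.php":

  <?php
  // flush.php -- clear every cached entry; safe to invoke from cron or
  // by hand during a release push.
  $mc = new Memcache();
  $mc->connect('localhost', 11211);
  echo $mc->flush() ? "memcache flushed\n" : "flush failed\n";
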
Attachment #311918 - Flags: review?(oremj)
Same as before, just added status.php.
Attachment #311918 - Attachment is obsolete: true
Attachment #311923 - Flags: review?(oremj)
Attachment #311918 - Flags: review?(oremj)
Hey, so test plan for this:
* set up memcache patch on aus2 staging
* create new aus_data.csv file from current repo
** like this: http://morgamic.pastebin.mozilla.org/382513
** but updated from ausstage-01 to be current and include ~5k-10k tests
* use http://morgamic.pastebin.mozilla.org/382512 to parse input .csv and compare tests

Run:
1) the tests, to verify there are no differences between AUS2 prod and staging
2) the same tests again, using "php -q status.php" to verify the delta in # of get_hits in memcache
3) assert that the delta == total # of tests in the input .csv

Post results here.  On top of the acceptance tests already run, this guarantees that we're not changing AUS behavior, just caching current behavior to relieve the NFS.
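
For step 2, a status.php-style stats dump might look like this (a hedged sketch only; the real script is in attachment 311923, and the host/port are assumptions):

  <?php
  $mc = new Memcache();
  $mc->connect('localhost', 11211);
  $stats = $mc->getStats();
  // After the first run warms the cache, get_hits should grow by exactly
  // the number of tests replayed from the input .csv.
  printf("get_hits: %d\nget_misses: %d\ncurr_items: %d\n",
         $stats['get_hits'], $stats['get_misses'], $stats['curr_items']);
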
Comment on attachment 311923 [details] [diff] [review]
v2, added status.php to query stats without flushing from CLI

Looks good to me.
Attachment #311923 - Flags: review?(oremj) → review+
What's the timeout/flush policy here?
The timeout is configurable as a number of seconds in the config.  There is no invalidation: after a new push, a cached no-update XML would be served until the cache times out, which could delay new releases, but that's avoidable by making the cache timeout shorter than the bouncer mirror delay.

I did add a flush script callable via cron, so it should be easy to run if we need to un-cache items.

The service hardly ever changes except for releases and nightlies -- I figure a 15-30 minute cache expiration would be doable.  Thoughts?
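
As a sketch of that policy (the names and numbers are assumptions, not the patch's actual config), the expiry just needs to stay under the bouncer mirror delay:

  <?php
  // Assumed mirror lag; the real value would come from release engineering.
  define('BOUNCER_MIRROR_DELAY', 3600);
  // 30 minutes, clamped below the mirror delay so a cached no-update
  // response expires before the mirrors are ready to serve the release.
  define('MEMCACHE_EXPIRE', min(1800, BOUNCER_MIRROR_DELAY - 300));
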
Does this mean that after we push updates live (on any channel), QA will have to wait 15-30 mins to see the updates?
No, there is a flush script that could be run during releases to clear memcache and avoid a delay.