Closed Bug 748164 Opened 12 years ago Closed 12 years ago

sync: storage -> 1.10-2, core -> 2.6.5-1

Categories

(Cloud Services :: Operations: Deployment Requests - DEPRECATED, task)

Platform: x86
OS: macOS
Type: task
Priority: Not set
Severity: normal

Tracking

(Not tracked)

VERIFIED FIXED

People

(Reporter: telliott, Assigned: Atoll)

References

Details

(Whiteboard: [qa+])

This backports a few fixes in support of the Android client without touching a bunch of the major changes that have been made to dbpooling or user objects. Those are not scheduled for deployment (and likely never will be).


Build Commands:

make build CHANNEL=prod RPM_CHANNEL=prod PYPI=http://pypi.build.mtv1.svc.mozilla.com/simple PYPIEXTRAS=http://pypi.build.mtv1.svc.mozilla.com/extras PYPISTRICT=1 SERVER_CORE=rpm-2.6-5 SERVER_STORAGE=rpm-1.10-2

make build_rpms CHANNEL=prod RPM_CHANNEL=prod PYPI=http://pypi.build.mtv1.svc.mozilla.com/simple PYPIEXTRAS=http://pypi.build.mtv1.svc.mozilla.com/extras PYPISTRICT=1 SERVER_CORE=rpm-2.6-5 SERVER_STORAGE=rpm-1.10-2

Bug fixes:

Bug 745059 - fixes a bug where memcache wasn't correctly incrementing storage size
Bug 693893 - handles the max-timestamp for empty collections correctly
Bug 691315 - PUTs correctly overwrite items with expired TTLs
Bug 740170 - fixes an issue where the server would provide two different timestamps for a PUT request (see the sketch below)
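
As a quick illustration, here is a minimal sketch of how the Bug 740170 behaviour could be spot-checked against a Sync 1.1 node. The endpoint, account name, password, and collection/item names below are placeholders, not real values:

    import json
    import requests

    ENDPOINT = "https://stage-sync.example.com"   # placeholder node URL
    USER, PASSWORD = "testuser", "testpass"       # placeholder test account

    url = "%s/1.1/%s/storage/testcol/item1" % (ENDPOINT, USER)
    body = json.dumps({"id": "item1", "payload": "hello"})
    resp = requests.put(url, data=body, auth=(USER, PASSWORD))
    resp.raise_for_status()

    # The PUT response body is the new modified time; X-Weave-Timestamp is the
    # server timestamp for the request. After the fix they should match.
    header_ts = float(resp.headers["X-Weave-Timestamp"])
    body_ts = float(resp.text)
    assert header_ts == body_ts, (header_ts, body_ts)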


QA Plan

QA has been verifying these bugs as part of the normal codebase; these are just the backports for a release branch. The QA tests described in the individual bugs can be reused to verify the correctness of this release.

It's also been some time since we've done a storage push, and we are using a new build script, so some general QA of the whole process to make sure things are still working as expected would be a good idea.
There has been a noted incompatibility with a couple of release packages, notably py26-simplejson. Watch out for that; it can be worked around, and I think Ryan is looking to tweak the build so it works cleanly.
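
For the general QA pass, a rough smoke-test sketch along these lines may help. It assumes a throwaway test account on the stage node; the endpoint, credentials, and collection names are placeholders:

    import json
    import requests

    ENDPOINT = "https://stage-sync.example.com"   # placeholder node URL
    USER, PASSWORD = "testuser", "testpass"       # placeholder test account
    AUTH = (USER, PASSWORD)
    BASE = "%s/1.1/%s" % (ENDPOINT, USER)

    # Quota usage (reported in KB) before and after a write. Bug 745059 was
    # about the storage size not being incremented in memcache, so usage
    # should grow after a ~100KB PUT.
    before = requests.get(BASE + "/info/quota", auth=AUTH).json()[0]

    item = json.dumps({"id": "smoke1", "payload": "x" * 100 * 1024})
    requests.put(BASE + "/storage/smokecol/smoke1", data=item, auth=AUTH).raise_for_status()

    after = requests.get(BASE + "/info/quota", auth=AUTH).json()[0]
    assert after > before, (before, after)

    # info/collections should now list the collection we just wrote to,
    # with a sane last-modified timestamp (Bug 693893 touched this path).
    collections = requests.get(BASE + "/info/collections", auth=AUTH).json()
    assert "smokecol" in collections
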
Depends on: 747998
Whiteboard: [qa+]
The workaround is: rpm -e python-simplejson --nodeps; yum localinstall python26-simplejson
Assignee: nobody → rsoderberg
Perhaps we should wait for the fix to the hot-off-the-presses Bug 749924 before we roll this out...?
This change set covers a lot of time, and passes tests, so I'm inclined to ship it and give the changes a day to settle.

Any code fix for bug 749924 will only help users that end up calling DELETE /storage, so we'll still need to ship a fix for the rest.

From what we can see while fixing this for existing users, new users don't appear to be affected, so the temporary fix for the existing cases should hold until the patch is tested, load-tested, and so on.
Deployed 1.10-2, and so far it's passing loadtest, with DB traffic confirmed going to all of the syncdb hosts rather than just one.

QA, please test as desired.
Whiteboard: [qa+] → [qa+][needs qa]
Noting here some tickets that I believe are one way or another attached to this deployment:
Bug 747998 - RHEL6 build issue: "python26-simplejson" incompatible with "python-simplejson"
Bug 740695 - [meta] final bugfix deployment for sync1.1
  with Bug 691315 - IntegrityError: Duplicate entry ___ for key PRIMARY during INSERT INTO wbo (see the sketch below)
Bug 740170 - PUT requests can return different timestamps in header and body
Bug 693893 - Key error u'meta' on PUTs with X-If-Unmodified-Since when the memcache 'meta' key is not yet set
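
Here is a minimal sketch of how the Bug 691315 case could be re-checked on stage. It assumes a Sync 1.1 test account; the endpoint, credentials, and collection/item names are placeholders:

    import json
    import time
    import requests

    ENDPOINT = "https://stage-sync.example.com"   # placeholder node URL
    USER, PASSWORD = "testuser", "testpass"       # placeholder test account
    url = "%s/1.1/%s/storage/ttlcol/ttl-item" % (ENDPOINT, USER)
    auth = (USER, PASSWORD)

    # Write an item that expires almost immediately.
    bso = {"id": "ttl-item", "payload": "v1", "ttl": 1}
    requests.put(url, data=json.dumps(bso), auth=auth).raise_for_status()

    time.sleep(2)   # let the TTL lapse

    # Re-PUT the same id. Before the fix this could surface as a duplicate-key
    # IntegrityError (an HTTP 500) instead of overwriting the expired row.
    bso = {"id": "ttl-item", "payload": "v2", "ttl": 60}
    resp = requests.put(url, data=json.dumps(bso), auth=auth)
    assert resp.status_code == 200, resp.status_code
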
Status: NEW → ASSIGNED
QA Contact: operations-deploy-requests → jbonacci
In theory this deployment should fix Bug 687108.  I will try to find a way to reproduce it reliably, so that we can confirm whether it has actually been fixed.
Depends on: 751399
For completeness, noting that I fat-fingered the server-core version when I was giving these build commands to Toby.  It should be rpm-2.6.5-1, not rpm-2.6-5.
storage 1.10-2, core 2.6.5-1 is up and running without errors.
Whiteboard: [qa+][needs qa] → [qa+][needs qa][needs loadtest]
Summary: Deploy server-storage 1.10-2 → sync: storage -> 1.10-2, core -> 2.6.5-1
QA has completed testing in Stage for this deployment.
Covered the usual non-AP testing of new and old sync accounts with various sets of collections across multiple devices (including mobile).
The only remaining tasks may be some specific tests requiring intervention from Ops.
Deployed to SCL2, PHX1.
Status: ASSIGNED → RESOLVED
Closed: 12 years ago
Resolution: --- → FIXED
Whiteboard: [qa+][needs qa][needs loadtest] → [qa+]
Very thoroughly verified by QA with extensive testing of storage for SCL2 and PHX1 - new and existing accounts.
Status: RESOLVED → VERIFIED
I have built rpms with the following command and attempted to deploy them to sync1.web.mtv1.dev for testing:

    make build_rpms CHANNEL=prod RPM_CHANNEL=prod PYPI=http://pypi.build.mtv1.svc.mozilla.com/simple PYPIEXTRAS=http://pypi.build.mtv1.svc.mozilla.com/extras PYPISTRICT=1 SERVER_CORE=rpm-2.9-1 SERVER_STORAGE=rpm-1.12-1

However, I get a hard failure inside the memcache client library when running the tests:

    $ nosetests -xs syncstorage.................................python26: libmemcached/io.cc:356: memcached_return_t memcached_io_read(memcached_server_st*, void*, size_t, ssize_t*): Assertion `0' failed.
    Aborted

I guess this is due to sync1.dev being a CentOS machine while the rpms were built on r6-build.  Is there a dev machine matching r6-build on which I can sanity-check the RPMs before we try to push them out to stage?
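
To isolate whether the abort comes from the memcached client build rather than the syncstorage code, something like the following sanity check could be run on the dev host. This assumes pylibmc (since the assertion is inside libmemcached); the memcached address is a placeholder:

    # If this also aborts on the dev host, the problem is the client library /
    # OS combination rather than the RPMs under test.
    import pylibmc

    client = pylibmc.Client(["localhost:11211"])  # placeholder memcached host
    client.set("sanity-check", "ok")
    assert client.get("sanity-check") == "ok"
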
Erm, sorry for the noise, that comment was supposed to go into Bug 758482...