Refactor MemcachedStorage backend for locking-based API

Status: RESOLVED FIXED
Component: Cloud Services :: Server: Sync
People: (Reporter: rfkelly, Assigned: rfkelly)
Whiteboard: [qa?]
Attachments: (1 attachment, 2 obsolete attachments)

Description (Assignee):
Created attachment 638332 [details]
python file for refactored memcached backend

Attached is my in-progress refactoring of the MemcachedStorage backend to be more generic and to implement the new locking API.  It's a serious change, so I'm attaching the code as a whole Python file rather than a messy patch.

Currently memcached-level locks are not implemented, the locking APIs just call straight through to the underlying backend implementation.  Adding a simple exclusive lock will be easy, but trying to do a shared-readers/exclusive-writers lock in memcached is a lot trickier.

I've coalesced the memcache keys into just the following:

    <userid>:metadata:  the timestamps for all collections, and the current usage size.
    <userid>:c:<collection>:  data for a collection that is stored only in memcache.

The reason for bundling all metadata into a single key is twofold: it's very often written all at once so this cuts down on the number of roundtrips, and it makes it easier to keep all the data consistent.  I'm not wedded to this approach but I think it's worth exploring.
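A minimal sketch of that read-modify-write cycle on the bundled metadata blob, using an in-memory stand-in for a memcache client's gets/cas calls (the FakeCache class and the field names are purely illustrative):

```python
class FakeCache:
    """In-memory stand-in for a memcache client's gets/cas (illustration only)."""
    def __init__(self):
        self._data = {}  # key -> (value, casid)

    def gets(self, key):
        return self._data.get(key, (None, 0))

    def cas(self, key, value, casid):
        _, current = self._data.get(key, (None, 0))
        if current != casid:
            return False  # someone else wrote in between; caller must retry
        self._data[key] = (value, casid + 1)
        return True

cache = FakeCache()
key = "42:metadata"

# Write the whole metadata bundle in a single roundtrip.
assert cache.cas(key, {"modified": 0, "size": 0, "collections": {}}, 0)

# Read-modify-write: bump one collection timestamp, then cas the bundle back,
# so a concurrent writer cannot be silently overwritten.
meta, casid = cache.gets(key)
meta = dict(meta, collections=dict(meta["collections"], tabs=1234.56))
assert cache.cas(key, meta, casid)
```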

This patch also removes the hard-coded logic for keeping "meta/global" and "tabs" purely in memcache, replacing it with more generic logic.  You can specify a cached_collections setting and have those collections stored purely in memcache.  So something similar to the old behaviour can be had by doing:

    MemcachedStorage(sqlstorage, cached_collections=["meta", "tabs"])
Attachment #638332 - Flags: feedback?(telliott)
Whiteboard: [qa?]
Comment 1:

A couple of preliminary notes here (still working through the whole thing):

1) You need to distinguish between cached_collections and doubled_collections. Meta writes to memcache but also to the db: it's faster to serve it out of memcache, but losing it entirely would be extra ungood. That's different from the tab handling, where the data lives only in memcache. Perhaps we should be revisiting this.
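The two classes of collection suggested here could be captured as two separate settings; a sketch (the config class, field names, and `write_targets` helper are illustrative, following the naming in the comment above, not the actual constructor):

```python
from dataclasses import dataclass, field

@dataclass
class CacheConfig:
    # Live only in memcache; loss on eviction is acceptable (e.g. tabs).
    cached_collections: list = field(default_factory=list)
    # Written to both memcache and the db; served from cache for speed,
    # but recoverable from the db after a cache flush (e.g. meta).
    doubled_collections: list = field(default_factory=list)

cfg = CacheConfig(cached_collections=["tabs"], doubled_collections=["meta"])

def write_targets(cfg, collection):
    """Where a write to `collection` must go under this scheme."""
    if collection in cfg.cached_collections:
        return ["memcache"]
    if collection in cfg.doubled_collections:
        return ["memcache", "db"]
    return ["db"]

print(write_targets(cfg, "tabs"))     # -> ['memcache']
print(write_targets(cfg, "meta"))     # -> ['memcache', 'db']
print(write_targets(cfg, "history"))  # -> ['db']
```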

2) I think we'll definitely want to do our locking in memcache and not defer it to the storage layer. That could be a config option, or maybe we just assume that it'll be that way if we're using the hybrid approach.
Comment 2 (Assignee):
Created attachment 639239 [details]
zipfile with refactored memcached backend

I've done some substantial reworking of this based on our conversation the other day.  It now supports two classes of cached collection as you suggest above, as well as a dead-simple memcache-based lock.
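A dead-simple exclusive lock in memcache typically rides on add(), which succeeds only if the key is absent. Here is a sketch with an in-memory stand-in (a real client would also pass a TTL to add() so a crashed holder can't wedge the lock forever, which is part of why this approach is fragile):

```python
def _key(*names):
    return ":".join(map(str, names))

class FakeCache:
    """In-memory stand-in for a memcache client's add/delete (illustration
    only).  Real memcached add() also takes an expiry time; that safety
    valve is omitted here."""
    def __init__(self):
        self._data = {}

    def add(self, key, value):
        if key in self._data:   # memcached add: fail if the key already exists
            return False
        self._data[key] = value
        return True

    def delete(self, key):
        self._data.pop(key, None)

def lock_for_write(cache, userid):
    """Try to take the per-user lock; returns False if someone holds it."""
    return cache.add(_key(userid, "lock"), "locked")

def unlock(cache, userid):
    cache.delete(_key(userid, "lock"))

cache = FakeCache()
assert lock_for_write(cache, 42)      # first writer wins
assert not lock_for_write(cache, 42)  # second writer is refused
unlock(cache, 42)
assert lock_for_write(cache, 42)      # free again after release
```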

I'm not entirely convinced about the locking as it seems a little fragile.  But neither do I want to involve something like zookeeper at this stage :-)
Assignee: nobody → rfkelly
Attachment #638332 - Attachment is obsolete: true
Attachment #638332 - Flags: feedback?(telliott)
Attachment #639239 - Flags: feedback?(telliott)
Comment 3:

Comment on attachment 639239 [details]
zipfile with refactored memcached backend

Really nice work. Impressive.

Only a couple of notes.

lock_for_read and lock_for_write are identical. Is that a TODO to differentiate them, or is it just an assumption that if you're using memcache storage, they're the same thing?

Also, get_cached_data calls storage.lock_for_read. Does this imply our mysql installation will also need to do locking at a lower level than memcache, or is the expectation that we'll be able to config it off? I guess I'm confused about what the scenario is where we need a second level of locking.
Attachment #639239 - Flags: feedback?(telliott) → feedback+
Comment 4 (Assignee):
(In reply to Toby Elliott [:telliott] from comment #3)
> lock_for_read and lock_for_write are identical. TODO to differentiate them,
> or is it just an assumption that if you're using memcache storage, it's the
> same thing?

The writelock has to be a superset of the readlock, and I haven't come up with a more nuanced scheme that seems worthwhile in practice.  We could, for example, try to implement a shared-readers lock by writing the list of current readers into a JSON structure, but it's a lot of fiddling for an unknown payoff at this stage.
 
> Also, get_cached_data calls storage.lock_for_read. Does this imply our mysql
> installation will also need to do locking at a lower level than memcache, or
> is the expectation that we'll be able to config it off. I guess I'm confused
> what the scenario is where we need a second level of locking.

Since it makes several calls to the underlying store, it needs the readlock to ensure it is getting consistent data.  But it should be taking it on the memcached storage itself rather than calling through to the underlying store.  Hmmm...this may require some thread-local state to make the memcached lock re-entrant.
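One way to use thread-local state to make the lock re-entrant, as a sketch (the class name and acquire/release callbacks are hypothetical, not the actual implementation):

```python
import threading

class ReentrantCacheLock:
    """Count nested acquisitions per thread, so a thread already holding the
    memcached lock can take it again without deadlocking (sketch only)."""
    def __init__(self, acquire, release):
        self._acquire = acquire
        self._release = release
        self._local = threading.local()

    def __enter__(self):
        depth = getattr(self._local, "depth", 0)
        if depth == 0:
            self._acquire()          # only the outermost entry really locks
        self._local.depth = depth + 1
        return self

    def __exit__(self, *exc):
        self._local.depth -= 1
        if self._local.depth == 0:
            self._release()          # only the outermost exit really unlocks

events = []
lock = ReentrantCacheLock(lambda: events.append("acquire"),
                          lambda: events.append("release"))
with lock:
    with lock:   # nested use does not re-acquire the underlying lock
        pass
print(events)    # -> ['acquire', 'release']
```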

As discussed on IRC, I'll also add a flag to switch between memcached-level and database-level locking so that we can experiment between the two.
Comment 5 (Assignee):
Created attachment 641671 [details]
python file for refactored memcached backend

OK, I have added a "cache_lock" option to switch between locking-in-memcache and locking-in-the-database, and changed the get_cached_data method to call lock_for_read() at the right level of abstraction.  It's now ready for final review.

(It's also back to a single python file, since the memcached helper was moved into mozsvc)
Attachment #639239 - Attachment is obsolete: true
Attachment #641671 - Flags: review?(telliott)
Comment 6:

Comment on attachment 641671 [details]
python file for refactored memcached backend


> A key prefix can also be defined to aviod clobbering unrelated data in a

typo

> It has the following structure:

Not something worth worrying about right now, but it's worth noting that the names add about 40 bytes of overhead to each record and, unlike bsos, that actually does represent a notable proportion of the overall record size.


> def _key(*names):
>     return ":".join(map(str, names))

Move this into the memcache class itself? We have this function recurring in a bunch of projects.


>     # are just simple mutext keys in memcache, one per colletion.  If you

couple typos

>             data = {
>                 "size": size,
>                 "last_size_recalc": last_size_recalc,
>                 "modified": modified,
>                 "collections": stamps,
>             }

Sort of related to the above, I think you're eventually going to want to treat this as its own object so that you can abstract away the names and structure.



>             self.cache.cas(key, data, casid)

Doesn't hurt to be extra careful here, but if you're holding the write lock, doesn't it make it impossible for someone else to beat you to it?
Attachment #641671 - Flags: review?(telliott) → review+
Comment 7 (Assignee):
> >             self.cache.cas(key, data, casid)
> 
> Doesn't hurt to be extra careful here, but if you're holding the write lock,
> doesn't it make it impossible for someone else to beat you to it?

It's not clear which of the three instances of this you're referring to.  Two occur in _get_metadata(), which may be called during read operations.  The third occurs in _update_total_size(), which *should* only be called while holding the write lock, but that's not validated in the code.

I think any slight overhead of using cas() vs set() is worth it for an extra level of assurance against corrupting the data in memcache.
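As a toy illustration of the assurance cas() buys over set(): two writers that read the same version cannot both land, so the slower one is rejected rather than silently clobbering the other's update (FakeCache is an in-memory stand-in, not the real client):

```python
class FakeCache:
    """In-memory stand-in for gets/set/cas (illustration only)."""
    def __init__(self):
        self._data = {}  # key -> (value, casid)

    def gets(self, key):
        return self._data.get(key, (None, 0))

    def set(self, key, value):
        _, casid = self._data.get(key, (None, 0))
        self._data[key] = (value, casid + 1)   # unconditional overwrite

    def cas(self, key, value, casid):
        _, current = self._data.get(key, (None, 0))
        if current != casid:
            return False                       # version moved on: reject
        self._data[key] = (value, casid + 1)
        return True

cache = FakeCache()
cache.set("42:metadata", {"size": 100})

# Two writers read the same version...
meta_a, casid_a = cache.gets("42:metadata")
meta_b, casid_b = cache.gets("42:metadata")

# ...writer A lands first, so writer B's cas is rejected instead of
# silently overwriting A's update (a plain set() would have clobbered it).
assert cache.cas("42:metadata", dict(meta_a, size=150), casid_a)
assert not cache.cas("42:metadata", dict(meta_b, size=90), casid_b)
```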
Comment 8 (Assignee):
https://github.com/mozilla-services/server-syncstorage/commit/a437a5d712955bc22ebb13c7d8b632aa7a632168
Status: NEW → RESOLVED
Last Resolved: 6 years ago
Resolution: --- → FIXED