Closed
Bug 599018
Opened 14 years ago
Closed 14 years ago
Add ability to read certain constants out of memcache
Categories
(Cloud Services Graveyard :: Server: Sync, defect)
Tracking
(Not tracked)
RESOLVED
FIXED
People
(Reporter: telliott, Assigned: telliott)
References
Details
(Keywords: push-needed, Whiteboard: [qa-])
Attachments
(1 file)
2.09 KB, patch — tarek: review+
We'll mostly just need this for server-downing and backing off, but the sync server should be able to read an arbitrary set of constants out of memcache. Need to make sure this is compatible with systems not using memcache. That means having a constant in default_constants that defines a set of constants to be read out of memcache. (This constant would be off by default, therefore ignored)
Comment 1 (Assignee) • 14 years ago
Attachment #478890 - Flags: review?(tarek)
Comment 2 (Assignee) • 14 years ago
sorry, that path should be constants:$node
Comment 3 • 14 years ago
(In reply to comment #0)
> We'll mostly just need this for server-downing and backing off, but the sync
> server should be able to read an arbitrary set of constants out of memcache.
>
> Need to make sure this is compatible with systems not using memcache. That
> means having a constant in default_constants that defines a set of constants
> to be read out of memcache. (This constant would be off by default, therefore
> ignored)

Sounds good. For the Python version I was thinking of a [constants] section in the config file, with a "provider" variable to define where the constants are loaded from:

- memcache: loads the values from the file, then updates them using the mapping from memcached, if any.
- file: loaded from the file (the default if the "provider" variable is not defined).

Example:

[constants]
provider = memcache
var1 = value1
var2 = value2
...

The constants in memcached would be a mapping stored under the same cache key, "constants:host".
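A minimal sketch of the scheme described above: read the [constants] section from the config file, and when provider = memcache, overlay whatever mapping is found in the cache. The function name, the fake cache, and the "constants:host" lookup callable are illustrative, not the actual sync-server code.

```python
# Sketch only: load_constants and the memcache_lookup callable are
# hypothetical names standing in for the real server code and a real
# memcached client (e.g. client.get("constants:host")).
from configparser import ConfigParser

def load_constants(config_text, memcache_lookup=None):
    """Load constants from the config file; if provider == memcache,
    overlay any values found in the cache mapping."""
    parser = ConfigParser()
    parser.read_string(config_text)
    constants = dict(parser["constants"])
    provider = constants.pop("provider", "file")
    if provider == "memcache" and memcache_lookup is not None:
        cached = memcache_lookup("constants:host") or {}
        constants.update(cached)
    return constants

CONFIG = """\
[constants]
provider = memcache
var1 = value1
var2 = value2
"""

# A fake cache standing in for memcached: var2 has been overridden.
fake_cache = {"constants:host": {"var2": "overridden"}}
constants = load_constants(CONFIG, fake_cache.get)
print(constants)  # {'var1': 'value1', 'var2': 'overridden'}
```

With no memcache lookup supplied, the same call degrades to the "file" provider and returns only the file values, which matches the compatibility requirement from comment 0.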
Comment 4 • 14 years ago
Ok, that works fine. I tried to disable my local php/memcached server with success.

I have also realized that the PHP memcache lib uses a custom serializer to store arrays in memcache, which makes it incompatible with Python. Since we will do a progressive upgrade of the nodes when switching to Python, we should make the memcached data interoperable. I could write a custom deserializer to transform the PHP array into a Python mapping, but a cleaner solution would be to use JSON, or maybe a custom interoperable serialization format for mappings.

== JSON ==

It seems built in with certain versions of PHP:
http://stackoverflow.com/questions/1816128/change-serialization-functions-in-php-for-memcached

If we do this we'll need to compare the speed to make sure we don't slow down the lookup too much. And maybe try Proto-buf.

== Custom format ==

A "name:value,name2:value2,..." string.
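The two interoperable options mentioned in this comment can be sketched side by side. This is illustrative only; the helper names for the custom format are made up, and the custom format assumes names and values contain no ":" or ",".

```python
# Sketch of the two options discussed: JSON vs. a custom string format.
import json

mapping = {"name": "value", "name2": "value2"}

# Option 1: JSON. Both PHP (json_encode/json_decode) and Python's json
# module can read and write this, so the cached data stays portable
# across the mixed PHP/Python node fleet.
blob = json.dumps(mapping)
assert json.loads(blob) == mapping

# Option 2: a custom "name:value,name2:value2" string, trivially
# parseable on either side. dump_custom/load_custom are hypothetical
# helper names for this sketch.
def dump_custom(m):
    return ",".join("%s:%s" % (k, v) for k, v in m.items())

def load_custom(s):
    return dict(pair.split(":", 1) for pair in s.split(","))

assert load_custom(dump_custom(mapping)) == mapping
```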
Updated • 14 years ago
Attachment #478890 - Flags: review?(tarek) → review+
Comment 5 (Assignee) • 14 years ago
While I think this would be OK for a transition, I think it hurts us in the long run, and we should try to optimize for that rather than worry about the transition.

I expect the Python transition to be fairly dramatic, and we can announce an actual downtime to do this. Then we can kill the memcache and start from scratch, bringing up one node at a time to prevent the DBs from being overwhelmed. That will result in loss of tab data, so we should talk about the messaging there.
Comment 6 • 14 years ago
Maybe we should start a migration doc then, to lay out the broad lines of the scenario. I thought we wanted to move transparently, one node at a time, to Python to limit the risks if anything goes wrong. That's one of the reasons I am currently making the Python server compatible with the existing infrastructure/data. The cache was the last bit.
Comment 7 (Assignee) • 14 years ago
Hmm, good point. We would need an interim format if we wanted to go that way. Handling serialization/deserialization ourselves isn't where we want to go in the long run. Need to think about the best way to handle this.
Comment 8 • 14 years ago
JSON seems fine to me. We just need to make sure that the overhead is minimal by benchmarking the serialization on both sides. I doubt it will be that slow given the size of the stuff we store in memcached.
Comment 9 (Assignee) • 14 years ago
I think we may need to talk to ops about the migration process here - it's much deeper than just the constants - tabs and collection timestamps are also stored in incompatible formats.
Comment 10 • 14 years ago
Turns out JSON does add some overhead :/ Serializing 200 tabs with 500-char payloads using JSON instead of the binary serializer adds an overhead of 5 ms, which is quite a lot on requests that can last 20 ms. I'll file a bug to discuss the transition.
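The benchmark described here can be roughly reproduced: 200 tabs with 500-character payloads, JSON versus Python's binary serializer (pickle is used below as the stand-in for "binary serializer"; the exact serializer and the 2010 timings are not in this sketch's control, so absolute numbers will differ).

```python
# Rough re-creation of the benchmark from this comment. Pickle stands
# in for "the binary serializer"; timings will differ from the 5 ms
# overhead reported above, which was measured on 2010-era hardware.
import json
import pickle
import timeit

# 200 tabs, each with a 500-char payload.
tabs = {"tab%d" % i: "x" * 500 for i in range(200)}

N = 100  # repetitions, to get a measurable duration
json_time = timeit.timeit(lambda: json.dumps(tabs), number=N)
pickle_time = timeit.timeit(lambda: pickle.dumps(tabs), number=N)

print("json:   %.2f ms per dump" % (json_time / N * 1000))
print("pickle: %.2f ms per dump" % (pickle_time / N * 1000))
```

Whatever the numbers come out to, the relevant comparison is the per-request delta against a request budget of roughly 20 ms, as noted above.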
Comment 11 • 14 years ago
Transition: Bug 600482
Comment 12 (Assignee) • 14 years ago
Pushed to hg: http://hg.mozilla.org/services/sync-server/rev/fccc1a73755a
Updated • 14 years ago
Whiteboard: [qa-]
Updated • 1 year ago
Product: Cloud Services → Cloud Services Graveyard