Closed
Bug 829356
Opened 13 years ago
Closed 13 years ago
Yahoo BigTent needs key/value or shared memory store for OpenID associations
Categories
(Cloud Services :: Operations: Miscellaneous, task)
Tracking
(Not tracked)
RESOLVED
WONTFIX
People
(Reporter: ozten, Assigned: gene)
References
Details
Please advise on preferred components (memcache, redis, mysql, or other).
Requirement: Store and retrieve a key/value pair.
Example Key: ceANeS0GU0ezTDNxyOSsw2mrGQlExjkXD64d7NX94Gs9.4AcSAIP85g2JKlqL3mlgdQ.291xFv1FbgkyBXyyKGauaySH9QzXL3swm1_iaP_bP8zUSFZ4oQ5cEnX.lWq3sLQ.uls-
Example Value:
{"provider":{"endpoint":"https://open.login.yahooapis.com/openid/op/auth","version":"http://specs.openid.net/auth/2.0"},"type":"sha256","secret":"H2IxE2sICOJl1TU/XdsJ6/LXNp8g8j234jW3ytQ9320="}
Most common access pattern: These values will be written once, read once, and deleted N seconds later.
N BigTent servers will need to access the same component, which creates a shared memory space.
Based on these requirements, I'd go with Memcache, but will defer to your guidance.
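The write-once / read-once / expire-after-N-seconds pattern could be sketched like this, with a plain in-memory Map standing in for whichever shared store is chosen (all names here are illustrative, not actual BigTent code):

```javascript
// In-memory stand-in for the shared key/value store.
const store = new Map();

// Write once: store the OpenID association under its handle, with a TTL.
function setAssociation(key, value, ttlSeconds) {
  store.set(key, { value, expiresAt: Date.now() + ttlSeconds * 1000 });
}

// Read once: fetch the association and delete it immediately, so each
// association is consumed by exactly one verification.
function takeAssociation(key) {
  const entry = store.get(key);
  store.delete(key);
  if (!entry || Date.now() > entry.expiresAt) return null;
  return entry.value;
}
```

A real deployment would replace the Map with the shared component (memcached, redis, etc.) so all N BigTent servers see the same data.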
Background: I completely missed this requirement during BigTent development. BigTent wasn't running in a load-balanced environment until it went to stage. Really sorry about that.
Comment 1 • Reporter (ozten) • 13 years ago
Any high level guidance here?
I know IT runs a pool of memcached servers for many of the webapps that webdev has built.
Is there any reason these can't use the couchbase cluster for this?
Updated • 13 years ago
Assignee: nobody → gene
Comment 3 • Assignee (gene) • 13 years ago
Austin, I'll start with memcache and see how it goes. I'm looking into how to get it to play nice with rsbac.
Status: NEW → ASSIGNED
Comment 4 • Assignee (gene) • 13 years ago
Notes:
sudo yum install memcached
Encountered this bug:
http://bugs.centos.org/view.php?id=5466
Running Transaction
Installing : libevent-1.4.13-1.el6.x86_64 1/2
Non-fatal POSTIN scriptlet failure in rpm package libevent-1.4.13-1.el6.x86_64
/sbin/ldconfig: /etc/ld.so.conf.d/kernel-2.6.32-220.el6.x86_64.conf:6: hwcap index 0 already defined as nosegneg
warning: %post(libevent-1.4.13-1.el6.x86_64) scriptlet failed, exit status 1
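For reference, memcached on CentOS 6 is typically configured via /etc/sysconfig/memcached; a minimal example (all values illustrative, not the final production settings):

```
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 127.0.0.1"
```

In production the listen address and connection limits would be set to match the BigTent topology worked out below.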
Working on getting rsbac opened up to allow this:
<6>0000193140|rsbac_auth_p_capset_member(): adding AUTH capability for uid 400 to process 8385 (memcached) to transaction 0!
<6>0000193141|rsbac_auth_p_capset_member(): adding AUTH capability for uid 400 to FILE Device 252:01 Inode 282550 Path /usr/bin/memcached to transaction 0!
<6>0000193142|check_comp_rc(): learning mode: pid 8387 (touch), owner 0, rc_role 10050, FILE rc_type 603, right MODIFY_ACCESS_DATA added to transaction 0!
Comment 5 • Reporter (ozten) • 13 years ago
lloyd had a very cool idea - use node-client-sessions to store these associations.
Unfortunately, I'm having a hard time finding a way to do this in a concurrency-safe manner without forking node-openid, passport-openid, and passport-yahoo. Even if we forked, it would be a gnarly patch, as the code currently doesn't have the request and response in scope in the interesting codepaths.
Discussion:
https://groups.google.com/forum/?fromgroups=#!topic/passportjs/iwHIR2KcNjI
Comment 6 • Assignee (gene) • 13 years ago
I'll close this for now. When you've got a solution that works in your dev environment, feel free to re-open and we can work on deploying it in stage.
Status: ASSIGNED → RESOLVED
Closed: 13 years ago
Resolution: --- → INCOMPLETE
Comment 7 • Assignee (gene) • 13 years ago
Austin indicated in IRC that he has a working installation based on memcached.
Austin, can you share your memcached installation method/configuration settings so I can write up the puppet manifest?
We're currently waiting on Bug 830945.
Status: RESOLVED → REOPENED
Resolution: INCOMPLETE → ---
Comment 8 • Reporter (ozten) • 13 years ago
Here is an example:
https://github.com/mozilla/browserid-bigtent/blob/train-2013.01.17/server/config/local.json-dist#L3
Also, OPS_NOTES.md has a new memcached section:
https://github.com/mozilla/browserid-bigtent/blob/train-2013.01.17/docs/OPS_NOTES.md#memcached-config
Comment 9 • Assignee (gene) • 13 years ago
Cool, thanks for the details.
Question: in production, a user could potentially hit a bigtent server in one datacenter for the first request and a bigtent server in a different datacenter for the second request. Does this imply that all bigtent servers running memcached must allow the memcached instances to talk to each other across datacenters?
Flags: needinfo?(ozten.bugs)
Comment 10 • Reporter (ozten) • 13 years ago
Yes, a single shared memcached server must be used by all bigtent nodes.
There is very little communication between the bigtent servers and memcached: during discovery a single SET command is sent, and during verification a single GET. This is very low overhead compared to other possible solutions.
Yes, I imagine we'll need a secured connection on 11211 or whatever port we use for memcached between the various DCs.
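On the wire, the one-SET/one-GET exchange maps directly onto the memcached text protocol. A sketch of the command framing (the key, TTL, and helper names are illustrative, not from the BigTent code):

```javascript
// Build the memcached text-protocol "set" command used during discovery.
// Line format: set <key> <flags> <exptime> <bytes>\r\n<data>\r\n
function setCommand(key, value, ttlSeconds) {
  const body = JSON.stringify(value);
  return `set ${key} 0 ${ttlSeconds} ${Buffer.byteLength(body)}\r\n${body}\r\n`;
}

// Build the matching "get" command used during verification.
function getCommand(key) {
  return `get ${key}\r\n`;
}
```

The exptime field gives us the "deleted N seconds later" behavior for free, without any explicit cleanup job.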
A couple notes:
For geographically targeted DCs, if you can guarantee that a user's browser will always hit the same DC, then you can do memcached deployments per-DC. I wouldn't recommend it, but I want to mention it.
Similarly, this solution is designed to avoid the need for sticky sessions (pinning a user to a specific bigtent node), but that is another way to solve this. It is a much worse idea than geo-targeted DCs.
Flags: needinfo?(ozten.bugs)
Comment 11 • Reporter (ozten) • 13 years ago
Here is an etherpad where we can work out some of these details:
https://etherpad.mozilla.org/bigtent-memcache-nitty-gritty
Comment 12 • Assignee (gene) • 13 years ago
Looks like Lloyd's solution obviates the need for this. Re-open if I'm mistaken.
Status: REOPENED → RESOLVED
Closed: 13 years ago → 13 years ago
Resolution: --- → WONTFIX