We want to allow users to publish content to a user-specified sub-domain. Our tools will need to publish content differently than they currently do, since everything will need to be namespaced to the user/sub-domain. If I'm logged into Webmaker as `humph` and I save something called `kittens`, I expect to be able to get at it like so (floopy.org is obviously not our real domain): humph.floopy.org/kittens. Within S3, this might mean that we've saved to a global bucket, and the object key is: humph/kittens. See also: bug 863356
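A minimal sketch of that mapping, assuming a single shared bucket (function names and the floopy.org placeholder domain are illustrative, not part of any real API):

```python
def s3_key(username: str, slug: str) -> str:
    """Map a user and a project slug to an object key in the shared bucket."""
    return f"{username}/{slug}"


def public_url(username: str, slug: str, domain: str = "floopy.org") -> str:
    """The URL a user would hit; the sub-domain carries the namespace."""
    return f"https://{username}.{domain}/{slug}"
```

So saving `kittens` as `humph` yields the key `humph/kittens` and the URL `https://humph.floopy.org/kittens`.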
Severity: normal → enhancement
Status: NEW → ASSIGNED
OS: Mac OS X → All
Hardware: x86 → All
I need to give some thought to this, but serving out of buckets is problematic. S3 has a few limitations that I see us heading into:

1) CNAMEs need to be named specifically for the bucket. Thus, if I want www.mydomain.com served via a CNAME from S3, my bucket must be named www.mydomain.com. Any other naming will not work. Since we do not know in advance the names we will assign, and we do not have an unlimited number of buckets to spin up on demand (account limit of 100), we need to think about this and test it out. We can definitely have apps push the data up to buckets based on cname/username, but the app will have to be smart.
2) Multiple nodes will need access to that same bucket. I'll play with s3fs, but like most filesystems, I would bet that only one node can safely interact with a bucket at a time. One ghetto-ish workaround is a sync job. Consider:

- We have 1 bucket, called usercontentbucket.
- We have 4 autoscale nodes running.
- User1 comes in and gets routed to autoscale-node-1. He uploads some bits to sit at user1.webmakerdomain.com/newstuff.html. Now, user1 loads up his 2nd computer and goes to user1.webmakerdomain.com/newstuff.html, but this 2nd computer hits autoscale-node-2. How do we get the same bits uploaded to autoscale-node-1 to show up on autoscale-node-2?

We could store the content locally on autoscale-node-1 and have a sync job running that pushes any new/changed content up to S3 while simultaneously pulling down any new content. But that scales poorly and has consistency-lag issues. If we can do it like we do with Popcorn, and have the store on S3 itself, that would be awesome.
Strawman proposal:

* Application uploads to an S3 bucket with path "/username/slug", e.g. "/jbuck/my-cool-project".
* DNS set up to point *.example.org at an ELB with EC2 nodes running nginx. nginx will reverse proxy between *.example.org <-> the S3 bucket, i.e. proxy_pass http://username.example.org/slug to s3://example.org/username/slug.

My only question here is what would performance be like between this and plain S3? Could we somehow use nginx as a caching reverse proxy?
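A rough sketch of what that proxy rule could look like (untested; the bucket name, domain, and region endpoint are placeholders, and a real config would also need caching and error handling):

```nginx
server {
    listen 80;
    # Capture the username from the wildcard sub-domain.
    server_name ~^(?<username>[a-z0-9-]+)\.example\.org$;

    location / {
        # user1.example.org/slug -> bucket path /user1/slug on the
        # S3 static-website endpoint (endpoint name is an assumption).
        proxy_pass http://example.org.s3-website-us-east-1.amazonaws.com/$username$request_uri;
    }
}
```

For the caching question, nginx's proxy_cache directives could be layered onto this same location block, which would answer the "caching reverse proxy" idea without changing the URL scheme.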
I have created the maker.ly-staging S3 bucket, set it to send logs to mofo-techops with the prefix logs/maker.ly-staging/, and set up the web site hosting option. The URL: maker.ly-staging.s3-website-us-east-1.amazonaws.com
We're pursuing jbuck's awesome idea. I'm writing manifests for an nginx node with the same proxying jbuck has set up for his demo, and I'll sit that behind load balancers.
Guys, remember I have purchased mywebmaker.org and an SSL cert for it, so just throwing that out there.
haha, whoops! Yeah, this is working now: https://jon.mywebmaker.org/35
Status: ASSIGNED → RESOLVED
Last Resolved: 6 years ago
Resolution: --- → FIXED