Closed
Bug 981801
Opened 11 years ago
Closed 10 years ago
Set up an Apps section of our CDN for distributing templates and libraries (code.cdn.mozilla.net?)
Categories
(Infrastructure & Operations Graveyard :: WebOps: Other, task)
Infrastructure & Operations Graveyard
WebOps: Other
Tracking
(Not tracked)
RESOLVED
FIXED
People
(Reporter: potch, Assigned: cturra)
References
Details
(Whiteboard: [kanban:https://kanbanize.com/ctrl_board/4/458] )
Attachments
(1 file)
121.65 KB, image/png
The ecosystem team produces artifacts we would like to distribute to developers. Some of these, like app templates and library code, should be available for copies of Firefox and third-party services to pull in. Instead of having to retrofit an existing server such as Marketplace to distribute these static files in a highly available way, we should have a subdomain of cdn.mozilla.net for this purpose.
Updated•11 years ago
Assignee: nobody → server-ops
Component: App Center → Server Operations
Product: Developer Ecosystem → mozilla.org
QA Contact: shyam
Version: unspecified → other
Comment 1•11 years ago
We're receptive to whatever solution works for IT, but something like the Video CDN (WebDAV access) would work for us.
For added splenditude, we'd also like to be able to automate this. WebDAV with an otherwise privilege-less user account would certainly fit that bill.
Updated•11 years ago
Assignee: server-ops → server-ops-webops
Component: Server Operations → WebOps: Other
Product: mozilla.org → Infrastructure & Operations
QA Contact: shyam → nmaul
Reporter
Comment 2•11 years ago
From IRC:
> jakem: potch: wenzel can you drop a note in the bug that we've discussed this and are happy with S3 as the origin and a CDN in front of it? that's easy to set up :)
> potch: you got it!
Comment 3•11 years ago
Some context: S3 is a good solution, and there are plenty of libraries available to interact with it. We do need an ACL that we can control somehow; then we can give access to the people who need it to release their tools, layer the CDN on top, and everyone is happy.
Assignee
Comment 4•11 years ago
to get rolling here, i'd like to suggest we start by creating the amazon s3 bucket for this.
we discussed this a bit in our last triage, and going down the road with s3 will mean that user access requests need to come through our group (webops), since iam access is required for user management and that's not something we can grant outside of our group.
additionally, is end-to-end encryption a requirement here? if so, we won't be able to front this with a cdn, since amazon doesn't support ssl on custom-named s3 buckets (all s3 buckets receive a certificate with a common name of *.s3.amazonaws.com).
Flags: needinfo?(thepotch)
Comment 5•11 years ago
Talked to Paul about this, and because it integrates with the Client, we expect SSL to be a requirement.
Why can't it be fronted with a CDN? Can't we add a valid Mozilla cert to the CDN and have that be backed transparently by *uglyname*.amazonaws? Do origin and CDN have to share the very same cert?
Flags: needinfo?(thepotch)
Updated•11 years ago
Assignee: server-ops-webops → cturra
Comment 6•11 years ago
Chris, can we move this forward next week? If it would help to walk me through the options/complexities so we can work through the tradeoffs, we'd be happy to hop on a call.
Assignee
Comment 7•11 years ago
:wenzel - can you please provide me with a list of folks who will need access to this s3 bucket? i should be able to set this up using the https origin service for the CDN, with s3.amazonaws.com as the origin.
please, in future, can you refrain from assigning ownership of bugs in our group? leaving bugs in server-ops-webops is how we (webops) find them when we have free cycles. by assigning the bug to me directly, you're adding it to my queue, which i work through in priority order alongside my other tasks. generally, my queue is long ;)
Comment 8•11 years ago
As you make technical choices for how this is implemented, please take a look at these user stories, which assume that at some point we want to encourage the community to contribute as well.
983252 - basic bare bones templates (current most basic implementation plan)
983258 - third party template submission workflow
983263 - template accessible/installable directly on devices
Comment 9•11 years ago
Thanks for the reminder, Axel. I took a look at the bugs. This code CDN is meant for Mozilla-released code only. I think third-party templates can be hosted anywhere; there's no need to co-mingle the two use cases.
Comment 10•11 years ago
:cturra -
Let's give access to:
- potch
- tofumatt
- soledad penades
- rory petty
- fwenzel
We might have to tweak it after, but that should do for now.
Also, sorry for assigning the bug; I figured you meant to do that, but I should not have jumped to conclusions.
Assignee
Comment 11•11 years ago
:wenzel - i have the basic setup for this complete and will send you your aws s3 credentials via a gpg-encrypted email. i have configured this bucket to provide "static website hosting"; however, because we have to use the non-shortened amazon s3 url for https support, you'll need to specify resources directly.
for example, if you go to the base of the bucket, you'll see a listing: https://code-origin-cdn-mozilla-net.s3.amazonaws.com/. the same is true if you browse to it from the CDN i have set up: https://code.cdn.mozilla.net/.
this said, if you specify the resource you want, everything functions as expected:
https://code-origin-cdn-mozilla-net.s3.amazonaws.com/images/firefox_dark_widescreen-wide.jpg
https://code.cdn.mozilla.net/images/firefox_dark_widescreen-wide.jpg
the group policy i have set up for you (and the others, once testing is complete) gives you all access to this bucket.
stay tuned for your credentials via email.
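for the curious, the group policy is essentially just a bucket-scoped allow. a rough equivalent via the aws cli would look something like this (an illustrative sketch only - the group and policy names here are made up, not the real ones):

$ cat > code-cdn-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::code-origin-cdn-mozilla-net",
        "arn:aws:s3:::code-origin-cdn-mozilla-net/*"
      ]
    }
  ]
}
EOF
$ aws iam put-group-policy \
    --group-name code-cdn-users \
    --policy-name code-cdn-bucket-access \
    --policy-document file://code-cdn-policy.json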
Assignee
Comment 12•11 years ago
:wenzel - have you had time to review this setup and confirm everything is functioning as expected? if so, please let me know and i will get credentials out to the remaining users noted earlier in this bug.
Flags: needinfo?(fwenzel)
Comment 13•11 years ago
Thanks for the ping. I haven't forgotten, just been too busy :-/ I'll try and look at it this afternoon, will get back to you ASAP!
Flags: needinfo?(fwenzel)
Updated•11 years ago
Flags: needinfo?(fwenzel)
Comment 14•11 years ago
I tried it out (using CyberDuck as a client, fwiw) and it worked just peachy.
Please proceed with giving the other users in comment 10 access. Thank you!
(Note: we might want to automate adding files at some point, but baby steps.)
Also, thanks for setting this up. You rock.
Flags: needinfo?(fwenzel)
Assignee
Comment 15•11 years ago
i wasn't able to track down gpg keys for the other users, so i have placed a file (s3user.txt) in each of the following users' home directories on people.mozilla.org:
- potch
- tofumatt
- soledad penades
- rory petty
i agree, you'll want to get a system s3 user when you have an application ready to roll. i am going to mark this bug as r/fixed, but when you're ready to start testing this with a dedicated user, can you please open another bug and reference this one?
Status: NEW → RESOLVED
Closed: 11 years ago
Resolution: --- → FIXED
Comment 16•11 years ago
Thanks Chris, will do!
Comment 17•10 years ago
Hey Chris
I finally got time to try out my credentials, and I can't log in. I tried with Cyberduck and I get "Login failed. Listing directory failed. Please contact your web hosting service provider for assistance." when I use the s3 credentials data you put in my p.m.o. directory.
Not sure if you revoked them after a period of inactivity or what could be the reason for this... Could any of you friendly webops have a look?
Status: RESOLVED → REOPENED
Flags: needinfo?(server-ops-webops)
Resolution: FIXED → ---
Assignee
Comment 18•10 years ago
(In reply to Soledad Penades [:sole] [:spenades] from comment #17)
>
> I finally got time to try out my credentials, and I can't log in. I tried
> with Cyberduck and I get "Login failed. Listing directory failed. Please
> contact your web hosting service provider for assistance." when I use the
> s3 credentials data you put in my p.m.o. directory.
i just tested your credentials (from people) with Cyberduck myself and was able to log in successfully. be sure that you're opening a connection to S3 (Amazon Simple Storage Service) and that you use your Access Key ID as the username and your Secret Access Key as the password.
Status: REOPENED → RESOLVED
Closed: 11 years ago → 10 years ago
Flags: needinfo?(server-ops-webops)
Resolution: --- → FIXED
Comment 19•10 years ago
I am doing exactly that and it keeps not working, giving me the same "listing directory failed" error. I also deleted the bookmark and recreated it again, several times, just to make sure I wasn't copying extraneous characters from the txt file.
Is there any specific detail that I might be overlooking?
I will attach a screenshot with my config. Although it's not shown in the screenshot, I do of course enter the secret access key when asked.
Status: RESOLVED → REOPENED
Flags: needinfo?(server-ops-webops)
Resolution: FIXED → ---
Comment 20•10 years ago
(attachment: screenshot of the Cyberduck configuration; see Attachments)
Updated•10 years ago
Whiteboard: [kanban:https://kanbanize.com/ctrl_board/4/370]
Assignee
Comment 21•10 years ago
hmm.. it looks like the group policy i had set up for you and a couple other folks on this bug has been unlinked in iam (amazon identity and access management). i have updated that now - can you please test again for me?
Flags: needinfo?(server-ops-webops) → needinfo?(sole)
Assignee
Comment 22•10 years ago
marking as r/fixed once again, but please :sole, feel free to reopen if this is *still* an issue.
Status: REOPENED → RESOLVED
Closed: 10 years ago → 10 years ago
Resolution: --- → FIXED
Comment 23•10 years ago
OK, so I can now connect and authenticate. Yay =)
But when I upload a file, I can't access it in the browser UNLESS I manually change the file's permissions to "Everyone = READ".
I uploaded a gradient.png file to the images folder, right-clicked it in Cyberduck, then chose Copy URL and the HTTPS URL, which gave me this:
https://code-origin-cdn-mozilla-net.s3.amazonaws.com/images/gradient.png
Opening that URL on the browser without changing the permissions, I got this error message:
<Error>
  <Code>AccessDenied</Code>
  <Message>Access Denied</Message>
  <RequestId>10ADF555A6292BF3</RequestId>
  <HostId>tB6FnUZHMHTGZP9J2hrfal0fyd9zO6Ec80bIglXItapVd7lkPMMiG+woJlWPZ52XGDZn727tfN0=</HostId>
</Error>
Once I edited the permissions to say "Everyone = READ", I can open the file in the browser without issues.
This is my first time using S3, so I'm not sure if I'm doing this right or even if it's the normal behaviour for this bucket. Could you please confirm, Chris? Then we can totally mark this as r/fixed forever.
Status: RESOLVED → REOPENED
Flags: needinfo?(sole)
Resolution: FIXED → ---
Whiteboard: [kanban:https://kanbanize.com/ctrl_board/4/370] → [kanban:https://kanbanize.com/ctrl_board/4/] [kanban:https://kanbanize.com/ctrl_board/4/370]
Whiteboard: [kanban:https://kanbanize.com/ctrl_board/4/] [kanban:https://kanbanize.com/ctrl_board/4/370]
Comment 24•10 years ago
So we have this issue where I've uploaded an updated version of the files, but some of them are still showing the old version. We've waited a day, and while the ZIPs instantly delivered the updated versions, the JPGs are still showing the old one.
I've checked the response headers and they say the Last-Modified is the 18th of June, so it is weird.
Updated•10 years ago
Flags: needinfo?(server-ops-webops)
Assignee
Comment 25•10 years ago
(In reply to Soledad Penades [:sole] [:spenades] from comment #23)
> Once I edited the permissions to say "Everyone = READ", I can open the file
> in the browser without issues.
>
> This is my first time using S3 so I'm not sure if I'm doing this right or
> even if it's the normal behaviour for this bucket. Could you please confirm,
> Chris? Then we can totally mark this as r/fixed forever.
that is the correct behaviour. you should be able to make an entire directory "public", which would recursively apply to the files within it.
documentation about working with s3 buckets can be found at:
http://docs.aws.amazon.com/AmazonS3/latest/UG/BucketOperations.html
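if you'd rather script the permissions than click through Cyberduck, the aws cli can do it too (a sketch, assuming the cli is configured with your credentials; the key is the one from your example):

$ # make an existing object world-readable
$ aws s3api put-object-acl \
    --bucket code-origin-cdn-mozilla-net \
    --key images/gradient.png \
    --acl public-read

$ # or set the public acl at upload time
$ aws s3 cp gradient.png s3://code-origin-cdn-mozilla-net/images/gradient.png \
    --acl public-read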
(In reply to Soledad Penades [:sole] [:spenades] from comment #24)
>
> I've checked the response headers and they say the Last-Modified is the
> 18th of June, so it is weird.
there is a CDN in front of this file storage (as requested). you can access the files directly at the s3 origin (bypassing the CDN) using:
https://code-origin-cdn-mozilla-net.s3-us-west-2.amazonaws.com/
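to check whether the edge is serving something stale, you can compare the headers from the CDN and from the origin directly (a sketch, using your gradient.png as an example; if the Last-Modified or Etag values differ, the CDN copy is the stale one):

$ curl -I https://code.cdn.mozilla.net/images/gradient.png
$ curl -I https://code-origin-cdn-mozilla-net.s3-us-west-2.amazonaws.com/images/gradient.png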
Status: REOPENED → RESOLVED
Closed: 10 years ago → 10 years ago
Flags: needinfo?(server-ops-webops)
Resolution: --- → FIXED
Comment 26•10 years ago
I know what a CDN is.
I just want to know how to deliver updated files to people, and also why some are being updated while others are not. ZIPs are updated; JPGs are megacached. How can I expire them when I upload them? I don't have access to any control panel for setting these options, or do I?
Thanks!
Status: RESOLVED → REOPENED
Flags: needinfo?(server-ops-webops)
Resolution: FIXED → ---
Assignee
Comment 27•10 years ago
(In reply to Soledad Penades [:sole] [:spenades] from comment #26)
>
> I just want to know how to deliver updated files to people, and also why
> some are being updated while others are not. ZIPs are updated; JPGs are
> megacached. How can I expire them when I upload them? I don't have access to
> any control panel for setting these options, or do I?
there are a couple of options here: (1) version your file - an example would be to name it foo.jpg?v=1* - or (2) submit a ticket for us to purge the CDN for your file(s).
option 1 is a common way to deal with static files that need to be updated from time to time. an example of another mozilla project doing this is MDN - if you look at the source on developer.mozilla.org, you will see that they append ?build=XXXXXXXX to the .css files stored on the CDN.
*you may need to URL encode the query parameter. for example, i created the following versioned file...
images/firefox_dark_widescreen-wide?v=1.jpg
which is available at:
https://code-origin-cdn-mozilla-net.s3.amazonaws.com/images/firefox_dark_widescreen-wide%3Fv%3D1.jpg
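alternatively, baking the version into the key itself avoids the url-encoding wrinkle entirely (a sketch - the file names are illustrative, and the aws cli is assumed):

$ aws s3 cp foo.jpg s3://code-origin-cdn-mozilla-net/images/foo-v2.jpg --acl public-read
$ curl -I https://code.cdn.mozilla.net/images/foo-v2.jpg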
Flags: needinfo?(server-ops-webops)
Comment 28•10 years ago
Hey Chris, thanks for answering my questions!
Still, just to totally clarify:
1) why is this behaviour different depending on the type of file? can't we just have images behave the same way as the zips do?
2) is submitting a ticket the only way to invalidate the cache? It seems overkill, and I don't want to distract you when you could be doing better things with your time. And we'd also get instant feedback instead of waiting for an entire week, which is not really optimal for such a small "bug".
Flags: needinfo?(server-ops-webops)
Reporter
Comment 29•10 years ago
Chris,
For some of these files, we will want the proper CORS headers to be served (fonts, etc). Will configuring the headers in S3 be sufficient, or will I also need to modify something on the CDN side?
Assignee
Comment 30•10 years ago
(In reply to Soledad Penades [:sole] [:spenades] from comment #28)
> Still, just to totally clarify:
>
> 1) why is this behaviour different depending on the type of file? can't we
> just have images behave the same way as the zips do?
good question - they shouldn't behave any differently. there are no cache-specific settings configured on the s3 bucket or edgecast for this. you can set a Cache-Control header, though, which should make the caching behaviour explicit.
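for example, setting the header at upload time should make the cache lifetime explicit from origin through the edge (a sketch, assuming the aws cli; the key and max-age value are illustrative):

$ aws s3 cp template.zip s3://code-origin-cdn-mozilla-net/templates/template.zip \
    --acl public-read \
    --cache-control "public, max-age=300"
$ curl -I https://code.cdn.mozilla.net/templates/template.zip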
> 2) is submitting a ticket the only way to invalidate the cache? It seems
> overkill, and I don't want to distract you when you could be doing better
> things with your time. And we'd also get instant feedback instead of waiting
> for an entire week, which is not really optimal for such a small "bug".
sadly, the CDN purge requests would need to go through edgecast, and at this point the only ones with access to do that are the webops team. the only time i have seen a need for this in other places is when a bad firefox binary is released - that's a rare occurrence.
(In reply to Potch [:potch] from comment #29)
> For some of these files, we will want the proper CORS headers to be served
> (fonts, etc). Will configuring the headers in S3 be sufficient, or will I
> also need to modify something on the CDN side?
i just tested this and it looks like amazon prepends the `x-amz-meta-` string to custom http header names. below are the headers from my test. more on object metadata with s3 can be found at => http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMetadata.html#object-metadata
$ curl -I https://code.cdn.mozilla.net/images/firefox_dark_widescreen-wide%3Fv%3D2.jpg
HTTP/1.1 200 OK
Accept-Ranges: bytes
Content-Type: image/jpeg
Date: Wed, 09 Jul 2014 21:20:49 GMT
Etag: "fc964348e0b1563ce135bd89b2b101c9"
Last-Modified: Wed, 09 Jul 2014 21:20:15 GMT
Server: AmazonS3
x-amz-id-2: vwocgDisx+UySti5imjmOUnsmqY94X7SkZ2cKNhSCdaMhBC3UoWOuo4koh6HXur2CdsxNMSdHuA=
x-amz-meta-access-control-allow-origin: https://code.cdn.mozilla.net
x-amz-request-id: C4C5D4D79BCCD909
Content-Length: 281440
Flags: needinfo?(server-ops-webops)
Reporter
Comment 31•10 years ago
> (In reply to Potch [:potch] from comment #29)
> > For some of these files, we will want the proper CORS headers to be served
> > (fonts, etc). Will configuring the headers in S3 be sufficient, or will I
> > also need to modify something on the CDN side?
>
> i just tested this and it looks like amazon prepends the `x-amz-meta-` string to
> custom http header names. below are the headers from my test. more on object
> metadata with s3 can be found at =>
> http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMetadata.html#object-metadata
>
> $ curl -I https://code.cdn.mozilla.net/images/firefox_dark_widescreen-wide%3Fv%3D2.jpg
> HTTP/1.1 200 OK
> Accept-Ranges: bytes
> Content-Type: image/jpeg
> Date: Wed, 09 Jul 2014 21:20:49 GMT
> Etag: "fc964348e0b1563ce135bd89b2b101c9"
> Last-Modified: Wed, 09 Jul 2014 21:20:15 GMT
> Server: AmazonS3
> x-amz-id-2: vwocgDisx+UySti5imjmOUnsmqY94X7SkZ2cKNhSCdaMhBC3UoWOuo4koh6HXur2CdsxNMSdHuA=
> x-amz-meta-access-control-allow-origin: https://code.cdn.mozilla.net
> x-amz-request-id: C4C5D4D79BCCD909
> Content-Length: 281440
Looks like it requires the creation of a 'cors' subresource in the admin panel: http://docs.aws.amazon.com/AmazonS3/latest/dev/cors.html
All the bucket's CORS settings would need is this:
<CORSConfiguration>
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
  </CORSRule>
</CORSConfiguration>
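If it's easier to apply from the command line than the admin panel, the aws cli's s3api takes the same rules as JSON (a sketch, assuming the cli is configured with credentials for the bucket):

$ cat > cors.json <<'EOF'
{
  "CORSRules": [
    {
      "AllowedOrigins": ["*"],
      "AllowedMethods": ["GET"]
    }
  ]
}
EOF
$ aws s3api put-bucket-cors \
    --bucket code-origin-cdn-mozilla-net \
    --cors-configuration file://cors.json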
Assignee
Comment 32•10 years ago
oh nice, thanks for that :potch. as requested, i have applied this CORS configuration to the code-cdn-origin s3 bucket.
Reporter
Comment 33•10 years ago
(In reply to Chris Turra [:cturra] from comment #32)
> oh nice, thanks for that :potch. as requested, i have applied this CORE
> configuration to the code-cdn-origin s3 bucket.
For some reason, http://code.cdn.mozilla.net/fonts/woff/FiraSans-Italic.woff doesn't have the proper CORS headers.
Assignee
Comment 34•10 years ago
i'm not a CORS ninja, so i'd like to ping for further clarification here. below is a verbose curl checking the CORS headers on the .woff resource you've mentioned. which of the response headers are not functioning as you expect?
$ curl -H "Origin: https://code.cdn.mozilla.net" -H "Access-Control-Request-Method: GET" -X OPTIONS --verbose https://code.cdn.mozilla.net/fonts/woff/FiraSans-Italic.woff
* Adding handle: conn: 0x7ff821003a00
* Adding handle: send: 0
* Adding handle: recv: 0
* Curl_addHandleToPipeline: length: 1
* - Conn 0 (0x7ff821003a00) send_pipe: 1, recv_pipe: 0
* About to connect() to code.cdn.mozilla.net port 443 (#0)
* Trying 93.184.215.191...
* Connected to code.cdn.mozilla.net (93.184.215.191) port 443 (#0)
* TLS 1.2 connection using TLS_ECDHE_RSA_WITH_RC4_128_SHA
* Server certificate: *.cdn.mozilla.net
* Server certificate: DigiCert High Assurance CA-3
* Server certificate: DigiCert High Assurance EV Root CA
* Server certificate: Entrust.net Secure Server Certification Authority
> OPTIONS /fonts/woff/FiraSans-Italic.woff HTTP/1.1
> User-Agent: curl/7.30.0
> Host: code.cdn.mozilla.net
> Accept: */*
> Origin: https://code.cdn.mozilla.net
> Access-Control-Request-Method: GET
>
< HTTP/1.1 200 OK
< Access-Control-Allow-Credentials: true
< Access-Control-Allow-Methods: GET
< Access-Control-Allow-Origin: https://code.cdn.mozilla.net
< Date: Wed, 23 Jul 2014 20:20:24 GMT
* Server AmazonS3 is not blacklisted
< Server: AmazonS3
< Vary: Origin, Access-Control-Request-Headers, Access-Control-Request-Method
< x-amz-id-2: yHeaFdX5fvOGGaIdfWLjVgLX5snZKjcfBSwKBsCKqjzh2cDPaXITr553QJZxF/TEVpPCr5/PboA=
< x-amz-request-id: FD938F73CA908479
< Content-Length: 0
<
* Connection #0 to host code.cdn.mozilla.net left intact
Flags: needinfo?(thepotch)
Reporter
Comment 35•10 years ago
(In reply to Chris Turra [:cturra] from comment #34)
> i'm not a CORS ninja, so i'd like to ping for further clarification here.
> below is a verbose curl checking the CORS headers on the .woff resource
> you've mentioned. which of the response headers are not functioning as you
> expect?
>
I would like Access-Control-Allow-Origin: * for all resources. We are not opposed to CORS for any of the files on this CDN; they are meant for maximal distribution.
Flags: needinfo?(thepotch)
Assignee
Comment 36•10 years ago
updated the CORS config as requested...
$ curl -I -H "Origin: https://code.cdn.mozilla.net" -H "Access-Control-Request-Method: GET" -X OPTIONS https://code.cdn.mozilla.net/fonts/woff/FiraSans-Italic.woff
HTTP/1.1 200 OK
Access-Control-Allow-Methods: GET
Access-Control-Allow-Origin: *
Date: Fri, 25 Jul 2014 19:28:54 GMT
Server: AmazonS3
Vary: Origin, Access-Control-Request-Headers, Access-Control-Request-Method
x-amz-id-2: oedT4s3Gu1E3GoM2qGp2qSUZRcO6/G9+RsSmRBqM8PpkquXcn6art3t9ocSjJ0/pQ7yfhAOei4k=
x-amz-request-id: 1CE6C16CACCB8A5E
Content-Length: 0
Assignee
Comment 37•10 years ago
:potch - how are things looking now? can we mark this bug as resolved?
Flags: needinfo?(thepotch)
Assignee
Comment 38•10 years ago
marking this as r/fixed now that it looks like we've addressed the CORS header requests.
Status: REOPENED → RESOLVED
Closed: 10 years ago → 10 years ago
Resolution: --- → FIXED
Reporter
Updated•10 years ago
Flags: needinfo?(thepotch)
Comment 39•10 years ago
For history's sake - https://mana.mozilla.org/wiki/display/SYSADMIN/Granting+access+to+code.cdn.mozilla.net has information on how to grant access when people request it.
Updated•6 years ago
Product: Infrastructure & Operations → Infrastructure & Operations Graveyard