Add ability to specify redirects
Categories
(Developer Infrastructure :: Source Documentation, enhancement)
Tracking
(firefox70 fixed)
People
(Reporter: ahal, Assigned: championshuttler)
References
(Blocks 2 open bugs)
Details
Attachments
(1 file, 2 obsolete files)
Many places link to firefox-source-docs externally, which makes it really difficult to refactor our existing docs without breaking all of these old links. I would love to have an in-tree file into which developers could check in redirects. I see two ways to implement this:
A) Package this file alongside the generated docs and have the webserver read it and set up redirects directly on the server.
B) Read this file during doc generation and create dummy pages containing <meta http-equiv="refresh" content="0; url=https://firefox-source-docs/newlocation">.
I think A) is better because it is cleaner and won't cause back-button loops. I'm not entirely sure how these docs are hosted or even who owns them, though, so I'm not sure whether this will be trivial or difficult to implement.
Option B) has the benefit of being entirely implementable from in-tree.
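For reference, a dummy page for option B could look like the output of this sketch. The helper name and page layout are my own; only the meta refresh tag is essential.

```python
def make_redirect_stub(new_url: str) -> str:
    """Build a minimal HTML stub page that redirects to new_url.

    Hypothetical helper illustrating option B: doc generation would
    write one of these files at each old location.
    """
    return (
        "<!DOCTYPE html>\n"
        "<html>\n"
        "  <head>\n"
        f'    <meta http-equiv="refresh" content="0; url={new_url}">\n'
        f'    <link rel="canonical" href="{new_url}">\n'
        "  </head>\n"
        f'  <body><a href="{new_url}">This page has moved.</a></body>\n'
        "</html>\n"
    )
```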
Comment 1 • 5 years ago (Reporter)
From bug 1526796 comment 7:
A further idea might be to replace such files with a redirect object (it's possible to set a Location header on an S3 object) to the root directory of that branch, so that users don't see mysterious 404 pages.
This sounds like exactly what we need and means that option A is also entirely implementable from the tree! We should definitely use this over the meta refresh method.
Now with some clarity on the way forward here, there are two major parts to this bug:
- Figure out how to define redirects in-tree.
We want developers to be able to add these on their own. In the simple case this can be a single file that lives under tools/docs. Or it could be a moz.build variable which can be defined anywhere. I'm leaning towards the latter because it means the redirect configs will live next to the other doc configs (SPHINX_TREES and SPHINX_PYTHON_PACKAGE_DIRS).
- Read this configuration during doc uploading and set the redirects.
The code to upload documentation lives here and runs in the DocUp
task. We'll need to figure out the proper boto3 call to set these headers.
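As a strawman for the first part, the in-tree file could be as simple as one old-path/new-path pair per line. This parser is purely illustrative; the format is an assumption, not something the bug has settled on.

```python
def parse_redirects(text: str) -> dict:
    """Parse a hypothetical in-tree redirects file into an {old: new} map.

    Assumed format (a strawman only): one whitespace-separated
    "old-path new-path" pair per line; blank lines and '#' comments
    are ignored.
    """
    redirects = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        old, new = line.split()
        redirects[old] = new
    return redirects
```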
Comment 2 • 5 years ago (Assignee)
Comment 3 • 5 years ago
Comment on attachment 9073355 [details]
Bug 1527363 - Add ability to specify redirects. r=ahal
Revision D35598 was moved to bug 1526796. Setting attachment 9073355 [details] to obsolete.
Comment 4 • 5 years ago (Assignee)
Comment 5 • 5 years ago (Assignee)
Comment 6 • 5 years ago (Reporter)
We've been trying to use the put_object API with the WebsiteRedirectLocation parameter, but it doesn't seem to work (maybe we were calling it wrong).
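For context, the attempted call shape looks roughly like this. The bucket and key names are illustrative; WebsiteRedirectLocation is a real put_object parameter, but it is only honoured when the bucket is accessed via its website endpoint, which comment 14 later suggests was the actual problem.

```python
def redirect_object_kwargs(bucket: str, key: str, target: str) -> dict:
    """Arguments for a put_object call that sets a per-object redirect.

    Illustrative only: the redirect set this way is honoured on the
    s3-website endpoint, not on the plain REST endpoint.
    """
    return {
        "Bucket": bucket,
        "Key": key,
        "WebsiteRedirectLocation": target,
    }

# The actual call would require AWS credentials, e.g.:
# import boto3
# boto3.client("s3").put_object(
#     Body=b"",
#     **redirect_object_kwargs("gecko-docs.mozilla.org-l1",
#                              "old/location/index.html",
#                              "/new/location/index.html"))
```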
However, looking through the boto3 docs, it looks like setting RoutingRules directly in the website configuration is a better approach anyway because:
- It means we can specify prefix redirects (e.g. when a folder changes name)
- Redirects happen directly on the server, which is presumably a bit faster
- Fewer API calls, and therefore cheaper
We can accomplish this with the BucketWebsite.put API call:
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3.html#S3.BucketWebsite.put
This would also be really easy to implement, as the in-tree JSON file could directly mirror the format of the RoutingRules key. But I have some reservations around security here. We would need to grant the DocUp user the S3:PutBucketWebsite access policy. Dustin, does this seem reasonable to you, or would this be a bad idea? I can't think of any problems myself.
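A sketch of what that configuration could look like, assuming the standard S3 website-configuration shapes; the prefix values and bucket name in the usage comment are illustrative.

```python
def routing_rule(old_prefix: str, new_prefix: str) -> dict:
    """One S3 website RoutingRule mapping an old key prefix to a new one."""
    return {
        "Condition": {"KeyPrefixEquals": old_prefix},
        "Redirect": {
            "ReplaceKeyPrefixWith": new_prefix,
            "HttpRedirectCode": "301",  # permanent redirect
        },
    }


def website_config(rules: list) -> dict:
    """A full WebsiteConfiguration document for BucketWebsite.put."""
    return {
        "IndexDocument": {"Suffix": "index.html"},
        "RoutingRules": rules,
    }

# Applying it would need credentials plus S3:PutBucketWebsite, e.g.:
# import boto3
# boto3.resource("s3").BucketWebsite("gecko-docs.mozilla.org-l1").put(
#     WebsiteConfiguration=website_config(
#         [routing_rule("testing/marionette/marionette/",
#                       "testing/marionette/")]))
```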
Comment 7 • 5 years ago
It seems reasonable to me, assuming that policy can be limited to only the relevant bucket. /cc AJ for a double-check.
Reviewing: this is a bucket in what is now the Taskcluster AWS account and will eventually be the Firefox CI Taskcluster AWS account. We have AWS credentials stored in the secrets service, one set of creds per SCM level (well, I think we ignore level 2). Each grants access via IAM policy to a single per-level bucket, limiting the actions available to those credentials. The proposed change would add S3:PutBucketWebsite for the referenced bucket to that policy.
Comment 8 • 5 years ago (Reporter)
(Aside: we could probably delete the -l2 bucket; I can't think of any situation where it would be useful.)
Comment 9 • 5 years ago (Reporter)
Also, if this ends up being OK, please also grant S3:GetBucketWebsite so we can inspect the configuration and make sure things are being set up properly. Just to recap, that's both S3:GetBucketWebsite and S3:PutBucketWebsite on both the gecko-docs.mozilla.org and gecko-docs.mozilla.org-l1 buckets.
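Once granted, inspecting the configuration could be as simple as the sketch below. The helper is hypothetical and takes the client as a parameter so the logic can be exercised without credentials.

```python
def current_routing_rules(s3_client, bucket: str) -> list:
    """Fetch the RoutingRules currently set in a bucket's website config.

    Needs the S3:GetBucketWebsite permission requested above; returns
    [] when no rules are configured.
    """
    config = s3_client.get_bucket_website(Bucket=bucket)
    return config.get("RoutingRules", [])

# Usage (requires AWS credentials):
# import boto3
# print(current_routing_rules(boto3.client("s3"),
#                             "gecko-docs.mozilla.org-l1"))
```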
Comment 10 • 5 years ago
(In reply to Dustin J. Mitchell [:dustin] (he/him) from comment #7)
> It seems reasonable to me, assuming that policy can be limited to only the relevant bucket. /cc AJ for a double-check.
> Reviewing: this is a bucket in what is now the Taskcluster AWS account and will eventually be the Firefox CI Taskcluster AWS account. We have AWS credentials stored in the secrets service, one set of creds per SCM level (well, I think we ignore level 2). Each grants access via IAM policy to a single per-level bucket, limiting the actions available to those credentials. The proposed change would add S3:PutBucketWebsite for the referenced bucket to that policy.
That sounds fine to me. What is the current policy for these IAM policies? Also, you can definitely limit to the relevant bucket; let me know if you'd like assistance or review with that.
Comment 11 • 5 years ago
Thanks! Policies are now:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "S3:GetBucketWebsite",
        "S3:PutBucketWebsite"
      ],
      "Resource": [
        "arn:aws:s3:::gecko-docs.mozilla.org"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:PutObjectAcl",
        "s3:DeleteObject"
      ],
      "Resource": [
        "arn:aws:s3:::gecko-docs.mozilla.org/*"
      ]
    }
  ]
}

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketWebsite",
        "s3:PutBucketWebsite"
      ],
      "Resource": [
        "arn:aws:s3:::gecko-docs.mozilla.org-l1"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:PutObjectAcl",
        "s3:DeleteObject"
      ],
      "Resource": [
        "arn:aws:s3:::gecko-docs.mozilla.org-l1/*"
      ]
    }
  ]
}
(hopefully that answers the "current policy" question too).
Comment 12 • 5 years ago
lgtm!
Comment 13 • 5 years ago (Reporter)
Thanks guys!
Shivam, let's do some testing. E.g., does calling BucketWebsite.put() clobber the existing configuration or update it? The former would actually make things easier for us. See if you can get a patch that dumps the config before and after calling put with the redirects. Don't worry about the in-tree file format yet (you can just hardcode something for now); we can iron that out later.
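The dump-before/after experiment described above could be sketched like this. The helper is hypothetical and takes the client as a parameter so the flow can be exercised against a stub, which should settle whether put clobbers or merges.

```python
def put_and_diff(s3_client, bucket: str, new_config: dict) -> tuple:
    """Dump the website config before and after put_bucket_website.

    Returns (before, after) so the two configurations can be compared
    to see whether the put clobbered or merged the existing settings.
    """
    before = s3_client.get_bucket_website(Bucket=bucket)
    s3_client.put_bucket_website(Bucket=bucket,
                                 WebsiteConfiguration=new_config)
    after = s3_client.get_bucket_website(Bucket=bucket)
    return before, after
```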
Comment 14 • 5 years ago (Reporter)
Just noting for posterity that the normal s3 endpoint will not honour redirects. We needed to use s3-website instead. So, e.g.:
http://gecko-docs.mozilla.org-l1.s3-website.us-west-2.amazonaws.com/
This will also automatically navigate to index.html as well as return a proper 404 (instead of AccessDenied). That's likely also why the put_object method wasn't working for us.
Comment 15 • 5 years ago
Pushed by ahalberstadt@mozilla.com: https://hg.mozilla.org/integration/autoland/rev/c060399a0c81 Add ability to specify redirects. r=ahal
Comment 16 • 5 years ago

bugherder
Comment 17 • 5 years ago (Reporter)
So this is working with the main docs, but it's not redirecting to the same host name. For example, opening:
https://firefox-source-docs.mozilla.org/testing/marionette/marionette/index.html
Redirects to:
http://gecko-docs.mozilla.org.s3-website-us-west-2.amazonaws.com/testing/marionette/index.html
This is the same document so it's correct. But:
- The change in host is surprising / confusing
- The AWS endpoint doesn't have SSL enabled
I'll file a follow-up bug to investigate this further.