Closed Bug 1185261 (opened 9 years ago, closed 9 years ago)

Serve Mercurial bundles via CloudFront

Category: Developer Services :: Mercurial: bundleclone
Type: defect
Priority: Not set
Severity: normal
Tracking: Not tracked
Status: RESOLVED FIXED
Reporter: gps
Assignee: Unassigned
References: Blocks 1 open bug
Attachments: 3 files

Currently, bundleclone is listing separate URLs for different S3 servers/regions.

There was some IRC conversation the other day about using Amazon CloudFront (a CDN) for serving the bundles. I think this makes sense: if bundles are served via CloudFront, clone requests will hit a server in the same geography and you should get a faster clone no matter where you are in the world.

As far as pricing goes, CloudFront appears to have costs similar to S3's. Per-request pricing is slightly higher, but I don't think our volume is such that it will be a problem. In fact, data transfer appears to be slightly cheaper over CloudFront.

Now, what isn't obvious to me is whether transfers from CloudFront to AWS are free (data transfer from S3 to the same region is free). If they aren't, we're potentially looking at $100/day in data transfer costs. However, we could work around this by advertising the CloudFront URLs first and having our automation in AWS continue to prefer specific EC2 regions via the client-side settings to bundleclone. Or, we could teach the server to recognize when a client is an AWS IP and hand out an appropriate set of URLs.
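
For the server-side option, a minimal sketch (modern Python, not actual bundleclone server code) could consult the EC2 address ranges Amazon publishes in ip-ranges.json; the s3_urls/cdn_urls parameters here are placeholders, not the real manifest plumbing:

import ipaddress
import json
from urllib.request import urlopen

AWS_RANGES_URL = 'https://ip-ranges.amazonaws.com/ip-ranges.json'

def load_ec2_networks():
    # Amazon publishes its address space as JSON, tagged by service and region.
    with urlopen(AWS_RANGES_URL) as resp:
        data = json.load(resp)
    return [ipaddress.ip_network(p['ip_prefix'])
            for p in data['prefixes'] if p['service'] == 'EC2']

def bundle_urls_for(client_ip, ec2_networks, s3_urls, cdn_urls):
    # EC2 clients keep the S3 URLs first so intra-region transfer stays free;
    # everyone else gets pointed at the CDN first.
    addr = ipaddress.ip_address(client_ip)
    if any(addr in net for net in ec2_networks):
        return s3_urls + cdn_urls
    return cdn_urls + s3_urls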

Another question is how content is "primed" to the CDN. Presumably we'll continue to upload bundles to S3 and have CloudFront serve from S3. If we upload a 1 GB bundle to S3 and advertise it immediately, will the first clients that request that bundle see a significant lag as CloudFront fetches the bundle from S3? If so, do we need to build a CDN "priming" system into the bundle generation process? (I've never used CloudFront before - perhaps they have something like this built in.)

Anyway, I don't think this is too much work and it should be a huge win for bundle distribution. Unless there are some obvious reasons why this would be a bad idea, we should do this.
I created a CloudFront distribution hooked up to the existing S3 bucket. It has mostly default settings. I guess I have to wait a while for it to be created, since the hostname isn't resolving yet.
`hg clone https://hg.mozilla.org/users/gszorc_mozilla.com/mozilla-central` with bundleclone enabled will download a bundle from the CDN. This is a one-off repository with a manually configured bundleclone manifest. It will probably stop working in ~1 week, after the S3 objects have expired.
Here are some URLs for testing:

CDN: https://d3hk7f2pw2ppzm.cloudfront.net/mozilla-central/d317a93e5161438a0e464169bc72311f6d30c1f1.gzip.hg
US West Coast: https://s3-us-west-2.amazonaws.com/moz-hg-bundles-us-west-2/mozilla-central/d317a93e5161438a0e464169bc72311f6d30c1f1.gzip.hg
US East Coast (unless you are closer to west coast): https://s3-external-1.amazonaws.com/moz-hg-bundles-us-east-1/mozilla-central/d317a93e5161438a0e464169bc72311f6d30c1f1.gzip.hg

glandium, Yoric: Could you measure download speeds from Japan and France, respectively, of the CDN and the closest US location from the above list and report results? You may want to try the CDN a few times, as the first pull may be slow, since the object may not have been primed to the server in your region yet.

(This invitation is extended to anyone else who wants to perform a drive-by.)
Flags: needinfo?(mh+mozilla)
Flags: needinfo?(dteller)
Hi, screenshots from Barcelona (Spain). First minutes over wi-fi at home.

https://drive.google.com/folderview?id=0Bz6IvRvsWACIfnlyVENuNWxGNWstaDhNMDdsdzJXMjJrMERtQ2l6VkkxVXNwRGRGUGFRR0k&usp=sharing

Just for fun. Best regards,

Santiago
I did some drive-by testing, as follows:

- Tried download from the CDN, left it running for ~20 minutes. Seemed to hold pretty steady around 650-800 KB/s.
- Tried download from US-west, left it running for ~3 minutes. Seemed to fluctuate a bit, 250-550 KB/s.
- Tried download from US-east, left it running for ~1 minute. Seemed to also fluctuate, 450-700 KB/s.
- Tried download from the CDN again, left it running for 30 seconds. Seemed to fluctuate, 1500-2000 KB/s.

BitTorrent usually tops out around 2.0MB/s, so it's possible that you could get more. CDN looks good to me!
You can tack ?torrent on to the end of the URLs to get torrents. e.g. https://s3-external-1.amazonaws.com/moz-hg-bundles-us-east-1/mozilla-central/d317a93e5161438a0e464169bc72311f6d30c1f1.gzip.hg?torrent

I don't think this works on the CDN because I don't have query string forwarding enabled to the origin servers. And, the swarm is likely very small and I reckon it doesn't provide much of an advantage over direct S3/CDN download. Although, maybe Amazon puts all their regions in the swarm so it behaves like a CDN. I dunno. Please measure!
I have a server in France so I can answer for both Japan and France.

My server in France has a 100Mbps link, so it won't go above that.
In France, all the URLs mostly saturate that link:
- CDN showed a lot of fluctuation and ended in 2:32 (average 7.17MBps, i.e. 1090 MB in 152 s). A second attempt finished in 2:03 (8.86MBps) and a third in 1:41 (10.79MBps).
- USW2 was more steady, ended in 2:07 (average 8.58MBps)
- USE1 was steady and ended in 1:40 (average 10.9MBps)

My connection in Japan is a 1Gbps link; however, I rarely get more than 10MBps on a single connection.
- CDN was the fastest, ending in 1:38 (average 11.12MBps)
- USW2 had a lot of fluctuation and ended in 2:45 (average 6.6MBps)
- USE1 had more fluctuation and ended in 5:07 (average 3.55MBps)
Err, I was over suboptimal wifi for the Japan measurements. Over the wire, that looks like:
- CDN 0:42 (25.95MBps)
- USW2 1:15 (14.53MBps)
- USE1 2:11 (8.32MBps)

Torrent is not seeded by amazon. There's only one IP in the peer list, and it's delivering me at 75KBps.
Flags: needinfo?(mh+mozilla)
Testing from San Francisco, all 3 servers cap out around 21.5MB/s, which is probably the upper limit of my home internet. However, the CDN completes faster because it hits peak throughput quicker. This is TCP slow start + network latency in action. There's probably a CDN edge in the Bay Area, and the round trips needed to ramp up throughput almost certainly take less time than the equivalent round trips to Oregon for USW2.

I also noticed some sporadic dips in throughput. But I don't think the CDN had any more of them than the S3 servers did.

      Wall   Time to Peak
CDN:   0:56     0:05
USW2:  0:60     0:07
USE1:  1:09     0:15
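
A toy model of why the lower-latency endpoint ramps faster (this ignores packet loss, receive-window autotuning, and everything else real networks do, so the absolute numbers come out far smaller than the times-to-peak above; only the relative effect is the point, and the 5 ms / 30 ms RTTs are assumed values):

import math

MSS = 1460.0            # bytes per TCP segment
INIT_CWND = 10 * MSS    # a common initial congestion window (10 segments)

def slow_start_ramp(rtt_s, target_bytes_per_s):
    # cwnd roughly doubles each round trip until it covers the
    # bandwidth-delay product needed to sustain the target rate.
    bdp = target_bytes_per_s * rtt_s
    rounds = max(0, math.ceil(math.log(bdp / INIT_CWND, 2)))
    return rounds * rtt_s

for label, rtt in (('nearby CDN edge', 0.005), ('SF to Oregon', 0.030)):
    print('%-16s ~%.2fs to reach 21.5 MB/s' % (label, slow_start_ramp(rtt, 21.5e6)))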
Using home wi-fi:
CDN (test 1): 1.538MB/s => Total wall time 12:05
CDN (test 2): 1.577MB/s => Total wall time 11:47
USW2: 0.051MB/s => Could not complete.
USE1: 1.383MB/s => Total wall time 13:06
Flags: needinfo?(dteller)
With buildduty blessing, I one-off changed build/tools to serve bundles from CloudFront. This change will be overwritten in 8-14 hours when new bundles are generated. I'm doing this mainly as an experiment to see what all is reported in the CloudFront logs and how the billing plays out.
digi, bobm: I'd like to hook this CloudFront instance up to a Mercurial controlled domain (since URLs will be seen by people). Technically, this should be as simple as the creation of a CNAME record. Is something directly off of mozilla.org acceptable, or is something else more appropriate? I know we have a mozaws.net domain...

We will also need a x509 certificate for whatever hostname we choose. I'm comfortable putting the hostname somewhere where we can leverage a wildcard cert. I don't think we need any special certificate foo: just a guarantee that whoever is operating the domain / wildcard cert won't let it expire :)

What are the options?
Flags: needinfo?(bobm)
Flags: needinfo?(bhourigan)
(In reply to Gregory Szorc [:gps] from comment #12)
> digi, bobm: I'd like to hook this CloudFront instance up to a Mercurial
> controlled domain (since URLs will be seen by people). Technically, this
> should be as simple as the creation of a CNAME record. Is something directly
> off of mozilla.org acceptable, or is something else more appropriate? I know
> we have a mozaws.net domain...

Does the CloudFront endpoint belong to an IT AWS account? Getting DNS entries in mozilla.org that point off site (i.e. to infra we don't directly control) requires a blessing from our opsec team. Other than that it's very straightforward.

> 
> We will also need a x509 certificate for whatever hostname we choose. I'm
> comfortable putting the hostname somewhere where we can leverage a wildcard
> cert. I don't think we need any special certificate foo: just a guarantee
> that whoever is operating the domain / wildcard cert won't let it expire :)

NP - but we generally frown on wildcard certs for m.o. We can add a monitor to Nagios to give us a heads up when the certificate is due for renewal so you aren't left in the dark in a year or two. :)

> 
> What are the options?
Flags: needinfo?(bhourigan)
The CloudFront endpoint currently belongs to the Developer Services AWS (moz-devservices) account. If anything, I could see us moving that CloudFront endpoint to a RelEng AWS account. (Pretty sure they are all part of the same hierarchy in AWS.)
I wasn't seeing enough traffic to CloudFront from just build/tools, so I added build/talos to CloudFront as well. (It was pretty obvious that CloudFront didn't break anything.)
Travis confirmed with an Amazon rep that CloudFront transfers to EC2 are billed at the default CloudFront data transfer rate [and aren't free, unlike intra-region S3 transfers].

Currently, CloudFront is likely cost-prohibitive for transfers to EC2, given the amount of data we transfer. However, the bundleclone extension supports content negotiation and RelEng is already preferring the local EC2 region in their configs. So if we list CloudFront as the primary URL and retain the S3 URLs, RelEng's machines should prioritize the S3 URLs and incur no extra cost.
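
As a rough sketch of that negotiation (the manifest lines reuse URLs already posted in this bug and the ec2region attribute mentioned later in this bug; the parsing and sorting code is illustrative, not the extension's actual implementation):

MANIFEST = """\
https://d3hk7f2pw2ppzm.cloudfront.net/mozilla-central/d317a93e5161438a0e464169bc72311f6d30c1f1.gzip.hg
https://s3-us-west-2.amazonaws.com/moz-hg-bundles-us-west-2/mozilla-central/d317a93e5161438a0e464169bc72311f6d30c1f1.gzip.hg ec2region=us-west-2
https://s3-external-1.amazonaws.com/moz-hg-bundles-us-east-1/mozilla-central/d317a93e5161438a0e464169bc72311f6d30c1f1.gzip.hg ec2region=us-east-1
"""

def parse_manifest(text):
    # Each manifest line is a URL followed by optional key=value attributes.
    entries = []
    for line in text.splitlines():
        fields = line.split()
        entries.append((fields[0], dict(f.split('=', 1) for f in fields[1:])))
    return entries

def order_by_preference(entries, prefers):
    # prefers is an ordered list like [('ec2region', 'us-west-2')]. Matching
    # entries float to the front; sorted() is stable, so everything else
    # keeps the server-advertised order.
    def rank(entry):
        attrs = entry[1]
        for i, (key, value) in enumerate(prefers):
            if attrs.get(key) == value:
                return i
        return len(prefers)
    return sorted(entries, key=rank)

# An EC2 builder configured to prefer us-west-2 ends up with the free
# intra-region S3 URL first even though the CDN is advertised first:
print(order_by_preference(parse_manifest(MANIFEST), [('ec2region', 'us-west-2')])[0][0])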

So, I think we're clear to move ahead here. We just need an SSL cert and to update the bundle manifests to advertise CloudFront first.

I'll file a new bug to track the SSL certificate.
Flags: needinfo?(bobm)
Depends on: 1186526
https://hg.cdn.mozilla.net/ is now a thing. Now we just need to start advertising it.

Also, the CDN is currently configured on shared IPs and thus requires SNI for the TLS certificate exchange. This may pose a problem because Python < 2.7.9 is not capable of doing SNI. PyOpenSSL may facilitate SNI, but Mercurial doesn't use PyOpenSSL. Therefore, any machine running an older Python may not be able to fetch from the CDN.

I *think* all of automation has its configuration pinning to an appropriate EC2 region, so they'll get the S3 URLs even if the CDN is advertised first. But, I'm not sure. There is definitely a non-0 chance of things on older Python breaking.

You can use dedicated IPs for the CDN. But that costs money. If it is cheap, I think I'll just enable it. Let me poke around.
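
(For reference, a client can check at runtime whether its Python can do SNI at all; ssl.HAS_SNI exists on 2.7.9+/3.2+ and is simply absent on older 2.7.x. A minimal check:)

import ssl

def supports_sni():
    # True on Python 2.7.9+/3.2+; safe on older versions where the
    # attribute does not exist.
    return getattr(ssl, 'HAS_SNI', False)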
From https://aws.amazon.com/cloudfront/pricing/:

You pay $600 per month for each custom SSL certificate associated with one or more CloudFront distributions using the Dedicated IP version of custom SSL certificate support. This monthly fee is pro-rated by the hour. For example, if you had your custom SSL certificate associated with at least one CloudFront distribution for just 24 hours (i.e. 1 day) in the month of June, your total charge for using the custom SSL certificate feature in June will be (1 day / 30 days) * $600 = $20.

So, $7,200/year. To put things in perspective, that likely doubles our AWS bill for hosting Mercurial data in AWS.

We have 2 easy options:

1) Deploy w/ SNI and risk breaking clients on old Pythons, including automation, incurring a tree outage
2) Pay for dedicated IP and have no risks

Requires more work:

3) Modify bundleclone to advertise Python version to server and make server not advertise CDN to clients that don't support SNI

Unrealistic:

4) Wait for the world to upgrade to Python 2.7.9+

jgriffin: $7,200 / year commitment is a bit much for me to decide on my own. Please weigh in.
Flags: needinfo?(jgriffin)
(In reply to Gregory Szorc [:gps] from comment #18)
> Unrealistic:
> 
> 4) Wait for the world to upgrade to Python 2.7.9+

FWIW, python 2.7.9 is not a silver bullet either. I've had problems with Mozilla hosts that use SNI extensively (as in, too many names, and python complains the requested one is not part of the list).
Python 2.7.10 (and presumably 2.7.9) will talk to https://hg.cdn.mozilla.net/ just fine with SNI. Older versions will show something like:

$ /usr/bin/python /usr/local/bin/hg clone https://hg.mozilla.org/users/gszorc_mozilla.com/mozilla-central
downloading bundle https://hg.cdn.mozilla.net/mozilla-central/b12a261ee32e04d96e2e2594b3ba06979770495b.gzip.hg
abort: error fetching bundle: [Errno 1] _ssl.c:507: error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure
(consider contacting the server operator if this error persists)

I thought of another idea. The bundle manifest on the server could advertise whether a URL requires SNI. Then, the bundleclone extension can see what version of Python is running and filter out URLs requiring SNI. This would require that everyone deploy an updated bundleclone extension, but that should be relatively easy to manage. The hard part is a little down the line, when bundle cloning is built into Mercurial core and we can't work extension magic to pave over gotchas. Who knows, maybe I'll have success getting upstream to take the bundleclone extension close to as-is, client-side content negotiation and all.
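
A minimal sketch of that filtering, assuming manifest entries are already parsed into (url, attrs) pairs and using the requiresni attribute that gets added later in this bug (this is not the actual extension code):

import ssl

def filter_sni_urls(entries):
    # entries: list of (url, attrs) pairs from the bundle manifest.
    if getattr(ssl, 'HAS_SNI', False):
        return entries  # SNI works on this Python; nothing to drop.
    usable = [(url, attrs) for url, attrs in entries
              if attrs.get('requiresni') != 'true']
    if not usable:
        raise RuntimeError('no advertised bundle URL is usable without SNI')
    return usable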
scripts: advertise CDN first in bundle manifests (bug 1185261); r?fubar

We now have a CloudFront CDN for serving Mercurial bundles at
https://hg.cdn.mozilla.net/. For many scenarios, the CDN is the
preferred delivery mechanism because, well, it is a CDN and CDNs are
awesome. We thus advertise CDNs first so clients not expressing
a preference get the CDN by default.

An issue with the CDN as it is currently configured is it requires SNI.
We should exercise caution when deploying this change because Python
< 2.7.9 does not support SNI and connections to the CDN will fail.

Another issue is that CDN data transfers always cost money. Contrast
with S3 transfers, which are free within an EC2 region. However, this
should not be a problem within Mozilla's automation because it currently
has client-side preferences that prefer the same EC2 region and thus
clients will bypass the CDN URLs in favor of S3 ones.
Attachment #8643942 - Flags: review?(klibby)
We obviously can't deploy the just-submitted patch until we have a decision on SNI. But we can certainly get it code reviewed!
Comment on attachment 8643942 [details]
MozReview Request: scripts: advertise CDN in bundle manifests (bug 1185261); r?fubar

scripts: advertise CDN first in bundle manifests (bug 1185261); r?fubar

We now have a CloudFront CDN for serving Mercurial bundles at
https://hg.cdn.mozilla.net/. For many scenarios, the CDN is the
preferred delivery mechanism because, well, it is a CDN and CDNs are
awesome. We thus advertise CDNs first so clients not expressing
a preference get the CDN by default.

An issue with the CDN as it is currently configured is it requires SNI.
We should exercise caution when deploying this change because Python
< 2.7.9 does not support SNI and connections to the CDN will fail.

Another issue is that CDN data transfers always cost money. Contrast
with S3 transfers, which are free within an EC2 region. However, this
should not be a problem within Mozilla's automation because it currently
has client-side preferences that prefer the same EC2 region and thus
clients will bypass the CDN URLs in favor of S3 ones.
bundleclone: filter URLs that require SNI (bug 1185261); r?smacleod

Python < 2.7.9 does not support SNI. Amazon CloudFront uses SNI by
default unless you pay $7,200 per year to have your CDN on dedicated IPs.

This commit adds client-side support for filtering URLs based on whether
they require SNI. Servers can advertise whether a URL requires SNI.
Clients running on old Python versions that don't support SNI will
filter out these entries and fall back to a non-SNI URL, if available,
or abort otherwise.

This will enable Mozilla to advertise the SNI-requiring CloudFront CDN
URLs as the primary URLs in the bundle manifests while still allowing
fallback to the existing S3, non-SNI URLs for older Python versions.
For Python 2.7.9+, there is no change in behavior.
Attachment #8643979 - Flags: review?(smacleod)
Until proved otherwise, I'm going to assume that Python >= 2.7.9 can speak to https://hg.cdn.mozilla.net/ just fine. After all, Python does advertise support for SNI in these versions.

So, with the patch I just submitted, we'll be able to advertise the CDN as the primary URL while still supporting clients on older Python versions. We *may* have to deploy this updated extension to the release automation to avoid breaking things if any automation is not preferring the ec2region attribute in their hgrc (since I'm pretty sure we're not running 2.7.9+ everywhere). But first things first: let's get the filtering reviewed and landed.
Comment on attachment 8643979 [details]
MozReview Request: bundleclone: filter URLs that require SNI (bug 1185261); r?smacleod

bundleclone: filter URLs that require SNI (bug 1185261); r?smacleod

Python < 2.7.9 does not support SNI. Amazon CloudFront uses SNI by
default unless you pay $7,200 per year to have your CDN on dedicated IPs.

This commit adds client-side support for filtering URLs based on whether
they require SNI. Servers can advertise whether a URL requires SNI.
Clients running on old Python versions that don't support SNI will
filter out these entries and fall back to a non-SNI URL, if available,
or abort otherwise.

This will enable Mozilla to advertise the SNI-requiring CloudFront CDN
URLs as the primary URLs in the bundle manifests while still allowing
fallback to the existing S3, non-SNI URLs for older Python versions.
For Python 2.7.9+, there is no change in behavior.
Comment on attachment 8643942 [details]
MozReview Request: scripts: advertise CDN in bundle manifests (bug 1185261); r?fubar

scripts: advertise CDN in bundle manifests (bug 1185261); r?fubar

We now have a CloudFront CDN for serving Mercurial bundles at
https://hg.cdn.mozilla.net/. Let's advertise the CDN URLs in the bundle
manifest.

Because of SNI concerns, we don't yet advertise the CDN URL as the
primary URL. Once we're satisfied that things won't break, we can bump
the CDN to the primary URL.

We introduce the ``cdn`` and ``requiresni`` attributes on the CDN
manifest entries to indicate the URL belongs to a CDN and our
CloudFront URL requires SNI. The use case for ``cdn`` isn't as
strong as ``requiresni`` (which is required for clients on old
Pythons to filter out URLs they won't be able to talk to), but it
can be useful for clients that wish to prefer connecting to a CDN
over other servers.
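
To make the new attributes concrete, a generated manifest entry might look like the following (illustrative only; the attribute names are the ones introduced in this bug, but the formatting helper is not the real generation script):

def manifest_line(url, **attrs):
    # One manifest entry: the URL followed by space-separated key=value pairs.
    return ' '.join([url] + ['%s=%s' % (k, v) for k, v in sorted(attrs.items())])

print(manifest_line(
    'https://hg.cdn.mozilla.net/mozilla-central/b12a261ee32e04d96e2e2594b3ba06979770495b.gzip.hg',
    cdn='true', requiresni='true'))
# prints: https://hg.cdn.mozilla.net/mozilla-central/b12a261....gzip.hg cdn=true requiresni=true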
Attachment #8643942 - Attachment description: MozReview Request: scripts: advertise CDN first in bundle manifests (bug 1185261); r?fubar → MozReview Request: scripts: advertise CDN in bundle manifests (bug 1185261); r?fubar
(In reply to Mike Hommey [:glandium] from comment #19)
> (In reply to Gregory Szorc [:gps] from comment #18)
> > Unrealistic:
> > 
> > 4) Wait for the world to upgrade to Python 2.7.9+
> 
> FWIW, python 2.7.9 is not a silver bullet either. I've had problems with
> Mozilla hosts that use SNI extensively (as in, too many names, and python
> complains the requested one is not part of the list).

From commentary that boiled over into Twitter (https://twitter.com/ChristianHeimes/status/629078605109329924):

  IMO comment 19 confuses SNI with SAN. X509v3 SubjectAltName and TLS extension SNI are completely unrelated.

Also from Twitter (https://twitter.com/ChristianHeimes/status/629076012643520513):

  Python 2.7.9 supports SNI just fine. After all it's just one simple OpenSSL call to enable it.

Client-side filtering of SNI URLs on < 2.7.9 keeps sounding like it will work out.
Clearing needinfo, since it sounds like we're going with client-side filtering.
Flags: needinfo?(jgriffin)
Comment on attachment 8643942 [details]
MozReview Request: scripts: advertise CDN in bundle manifests (bug 1185261); r?fubar

https://reviewboard.mozilla.org/r/15181/#review13733

Ship It!
Attachment #8643942 - Flags: review?(klibby) → review+
Comment on attachment 8643979 [details]
MozReview Request: bundleclone: filter URLs that require SNI (bug 1185261); r?smacleod

https://reviewboard.mozilla.org/r/15191/#review14649

Ship It!
Attachment #8643979 - Flags: review?(smacleod) → review+
url:        https://hg.mozilla.org/hgcustom/version-control-tools/rev/3a23ee3de50a5df6c4080c52986bc8fc3b9b4709
changeset:  3a23ee3de50a5df6c4080c52986bc8fc3b9b4709
user:       Gregory Szorc <gps@mozilla.com>
date:       Tue Aug 18 12:57:43 2015 -0700
description:
bundleclone: filter URLs that require SNI (bug 1185261); r=smacleod

Python < 2.7.9 does not support SNI. Amazon CloudFront uses SNI by
default unless you pay $7,200 per year to have your CDN on dedicated IPs.

This commit adds client-side support for filtering URLs based on whether
they require SNI. Servers can advertise whether a URL requires SNI.
Clients running on old Python versions that don't support SNI will
filter out these entries and fall back to a non-SNI URL, if available,
or abort otherwise.

This will enable Mozilla to advertise the SNI-requiring CloudFront CDN
URLs as the primary URLs in the bundle manifests while still allowing
fallback to the existing S3, non-SNI URLs for older Python versions.
For Python 2.7.9+, there is no change in behavior.

url:        https://hg.mozilla.org/hgcustom/version-control-tools/rev/349247293001b2fd127011654f60ee4508927e8d
changeset:  349247293001b2fd127011654f60ee4508927e8d
user:       Gregory Szorc <gps@mozilla.com>
date:       Tue Aug 18 12:58:06 2015 -0700
description:
scripts: advertise CDN in bundle manifests (bug 1185261); r=fubar

We now have a CloudFront CDN for serving Mercurial bundles at
https://hg.cdn.mozilla.net/. Let's advertise the CDN URLs in the bundle
manifest.

Because of SNI concerns, we don't yet advertise the CDN URL as the
primary URL. Once we're satisfied that things won't break, we can bump
the CDN to the primary URL.

We introduce the ``cdn`` and ``requiresni`` attributes on the CDN
manifest entries to indicate the URL belongs to a CDN and our
CloudFront URL requires SNI. The use case for ``cdn`` isn't as
strong as ``requiresni`` (which is required for clients on old
Pythons to filter out URLs they won't be able to talk to), but it
can be useful for clients that wish to prefer connecting to a CDN
over other servers.
This is deployed. I manually generated bundles for some repos, including mozilla-central to test this. I can confirm that manifests are advertising CDN URLs and CDN download works. But, you still need client-side preferences to prefer the CDN over S3.

Next steps are to get automation using the latest bundleclone extension. Then, we can start advertising CDN URLs as the primary entry in the manifests and close out this bug.
Depends on: 1196067
Presumably the bundleclone extension updates to properly support SNI have percolated out to automation machines by now. With RyanVM's sheriff blessing, I hacked up the bundleclone manifest for mozilla-central to advertise the CDN as the primary URL. If automation doesn't complain, we'll proceed to make this the default ordering for all repos.

As it stands, my hack will get reverted by the cron that runs in ~10-12 hours.
scripts: advertise CDN URLs first (bug 1185261); r?RyanVM

This should be a rubber stamp review. Now that the updated bundleclone
extension with intelligent SNI handling should be deployed to
automation, we should be able to advertise CDN URLs as the primary
entries. If this assumption is wrong, we may see SSL-related job
failures in automation during clone operations. The recourse will be to
revert this changeset and update bundleclone manifests to the previous
behavior of listing S3 URLs first.
Attachment #8655581 - Flags: review?(ryanvm)
According to bhearsum, Linux builders are regenerated nearly every day and Puppet runs on OS X multiple times in a day. The updated bundleclone extension was merged to the Puppet production branch on Aug 20. So it should be live everywhere by now.
Comment on attachment 8655581 [details]
MozReview Request: scripts: advertise CDN URLs first (bug 1185261); r?RyanVM

https://reviewboard.mozilla.org/r/17941/#review16053

And rubberstamp it I will!
Attachment #8655581 - Flags: review?(ryanvm) → review+
\o/

Deploying now. I'll kick off a manual regeneration of the bundles/manifests shortly. It will take a few hours for the manifests to regenerate since it will be generating new bundles for many repos. I prefer doing it this way over manually hacking up the manifest files because it should remove the element of human error.

If you see any errors in automation regarding https://hg.cdn.mozilla.net/ or other SSL weirdness, please let me or someone in #vcs know immediately.
Status: NEW → RESOLVED
Closed: 9 years ago
Resolution: --- → FIXED
Blocks: 1200792
At this time, mozilla-central, fx-team, inbound, and b2g-inbound are all advertising CDN URLs as primary. Next up is gaia-central followed by release, aurora, beta, esr38, tools, mozharness, talos, comm-central, and a bunch of project branches and b2g repos.

The AWS web console isn't showing any extra CloudFront activity. This is expected, as I'm pretty sure all of automation sets the hgrc config to prefer a specific EC2 region.
All manifests have been updated to advertise the CDN as the primary URLs.
Blocks: 1512305