Closed Bug 1228581 Opened 10 years ago Closed 10 years ago

archive.m.o got really slow in the last few days

Categories

(Cloud Services :: Operations: Miscellaneous, task)

Type: task
Priority: Not set
Severity: normal

Tracking

(Not tracked)

RESOLVED WORKSFORME

People

(Reporter: kats, Unassigned)

References

Details

Downloading builds from archive.m.o seems to have slowed to a crawl over the last couple of days. I downloaded an OS X inbound build yesterday [1], on the order of 100 MB, and it took 50 minutes. Right now I'm downloading a Fennec build [2]; it's been going for 15 minutes and is only at 50%, though usually it takes under a minute. I know I ran mozregression earlier this week and it was downloading builds fast, so I think this slowdown happened in the last couple of days. My speedtest.net results and general browsing are as fast as usual, so I'm fairly certain the problem isn't on my end, at least.

[1] http://archive.mozilla.org/pub/firefox/tinderbox-builds/mozilla-inbound-macosx64/1448530453/
[2] http://archive.mozilla.org/pub/mobile/try-builds/kgupta@mozilla.com-94cabe1b630c9a78c9520bf2baca6f16704bf86b/try-android-api-11/
Hm. I just ssh'd into people.m.o and wget'd the fennec build I was trying to download - that was super fast, and then I was able to download it from people.m.o also super fast. So maybe it's just the route from my machine to archive.m.o that is busted. Moving to NetOps. Here's my traceroute if it helps:

kats@kgupta-air tmp$ traceroute archive.mozilla.org
traceroute to d34chcsvb7ug62.cloudfront.net (52.84.26.231), 64 hops max, 52 byte packets
 1  dd-wrt (192.168.1.1)  1.884 ms  1.169 ms  1.100 ms
 2  10.67.128.1 (10.67.128.1)  19.137 ms  11.979 ms  7.733 ms
 3  21-14-226-24.rev.cgocable.net (24.226.14.21)  16.884 ms  10.690 ms  13.200 ms
 4  21-6-226-24.rev.cgocable.net (24.226.6.21)  12.446 ms  13.291 ms  15.733 ms
 5  toro-b1-link.telia.net (213.248.92.105)  11.754 ms  13.248 ms  12.846 ms
 6  nyk-bb1-link.telia.net (213.155.137.68)  87.275 ms  21.668 ms
    nyk-bb2-link.telia.net (62.115.141.194)  120.931 ms
 7  nyk-b5-link.telia.net (213.155.130.247)  22.448 ms
    nyk-b5-link.telia.net (80.91.254.14)  25.361 ms
    nyk-b5-link.telia.net (213.155.135.19)  24.279 ms
 8  80.239.132.214 (80.239.132.214)  32.552 ms  25.667 ms  23.605 ms
 9  54.240.229.82 (54.240.229.82)  24.189 ms  *  *
10  72.21.222.31 (72.21.222.31)  26.725 ms  *  *
11  * * *
12  * * *
13  * * *
14  * * *
15  * * *
16  * * *
17  * * *
18  * * *
19  * * *
20  * * *
21  * * *
^C
Assignee: nobody → network-operations
Component: Other → NetOps
Product: Release Engineering → Infrastructure & Operations
QA Contact: mshal → jbarnell
Adding Cloud Services people, who look after the archive.m.o system. See also bug 1170832, although that doesn't appear to be what's happening in the traceroute. kats, does the problem still reproduce? If so, could you do a wget -S (or equivalent) to capture the X-Amz-Cf-Id header? That would give Cloud Services the option of opening a ticket with Amazon. It would also help NetOps to know whether this is affecting you at home or at the TOR office.
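Something along these lines should capture it during a slow download (the URL below is just a placeholder for whichever build is crawling; the grep assumes the usual CloudFront response header name):

wget -S -O /dev/null http://archive.mozilla.org/pub/<path-to-slow-build> 2>&1 | grep -i 'X-Amz-Cf-Id'

If you don't want to wait for the whole file, curl can fetch only the response headers:

curl -sI http://archive.mozilla.org/pub/<path-to-slow-build> | grep -i 'X-Amz-Cf-Id'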
Moving the bug to the Cloud Services queue, as everything is hosted on AWS.
Assignee: network-operations → nobody
Component: NetOps → Operations
Product: Infrastructure & Operations → Cloud Services
QA Contact: jbarnell
Testing again with the same download links I posted in comment 0, it seems to be speedy now. I'll reopen this with the info from comment 2 if I see it again. Thanks!
Status: NEW → RESOLVED
Closed: 10 years ago
Resolution: --- → WORKSFORME
(Also, for the record, this was affecting me at home, not the TOR office.)
The problem is occurring again; see bug 1266713.
See Also: → 1266713