Upload Bandwidth Inadequate for Streaming from SFO Office

Status

Status: RESOLVED FIXED
Product: Infrastructure & Operations
Component: NetOps
Opened: 5 years ago
Last modified: 4 years ago

People

(Reporter: richard, Unassigned)

Tracking

(Blocks: 3 bugs)

Details

(Reporter)

Description

5 years ago
Available upload bandwidth as tested from the Air Mozilla encoders in SFO Commons, SFO319, and SFO324 is in the range of 2.5 to 4.3 Mbits/sec during business hours.  (Somewhat higher after hours).

This is inadequate for streaming Air Mozilla.   Telestream recommends a minimum of 6 Mbits/sec for the streaming profile we currently use, and 10 Mbits/sec for the profile we would like to use in the near future.

Having only 2.5 to 4.3 Mbits/sec upload speeds would also account for video quality issues we have seen on the Vidyo endpoints in those same rooms.

Some program configurations will require simultaneous streaming from both the encoder and the Vidyo endpoint.
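For a quick sanity check of what the encoders can actually push upstream, a short iperf run from one of the encoder hosts works well. This is only a sketch; the server hostname below is a placeholder, not a real host, and any reachable iperf server will do:

        # Rough upload check from an encoder host; iperf-server.example.com is a placeholder.
        # -t 60 runs for one minute, -i 5 reports every 5 seconds, -f m reports in Mbits/sec.
        iperf -c iperf-server.example.com -t 60 -i 5 -f m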

Comment 1

5 years ago
Is this something that will be needed at all our offices?  10M is not a trivial amount of bandwidth.

What is the upload destination?

> Some program configurations will require simultaneous streaming from both the 
> encoder and the Vidyo endpoint.

So you mean you will need more than 10M?

Why did we select this method/provider with this configuration without first consulting with Netops on whether or not we could even support it?  Bandwidth is not cheap.

Comment 2

5 years ago
Lowering severity to normal because 1) it will take a few weeks to provision more bandwidth once contracts are signed, and 2) this way it won't page.
Severity: major → normal
(Reporter)

Comment 3

5 years ago
The upload destination for most Air Mozilla streams is currently:

        rtmp://rtp1.sjc1.bitgravity.com  (West Coast North America)

There are also BitGravity ingest points in Chicago, Ashburn (Virginia), London, and Paris that may be more appropriate for offices not on the West Coast.

We may be forced to switch away from BitGravity to Edgecast, in which case the ingest point is:

        rtmp://fso.lax.1237.edgecastcdn.net

Vidyo streams (I believe) all go to the Vidyo portal at v.mozilla.com.
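As a side note (not part of the original comment), before a stream goes live it is worth confirming that the chosen ingest point is reachable from the encoder; RTMP normally runs on TCP port 1935. A minimal check:

        # Check that the BitGravity ingest host answers on the standard RTMP port (TCP 1935).
        nc -vz -w 5 rtp1.sjc1.bitgravity.com 1935
        # And inspect the path/latency from the office to the ingest point.
        traceroute rtp1.sjc1.bitgravity.com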
(Reporter)

Comment 4

5 years ago
If this is going to be expensive and/or take some time, then we should probably discuss it with Zandr & mrz.

In the meantime I'll reconfigure the SFO streamers to eliminate the HD streams and stream only the SD and mobile bitrates.
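A rough bitrate budget makes the trade-off concrete. The per-rendition bitrates below are illustrative assumptions, not the actual encoder settings:

        # Illustrative bitrates only -- the real encoder profiles may differ.
        # HD ~2500 kbit/s + SD ~1000 kbit/s + mobile ~500 kbit/s is ~4 Mbit/s of RTMP upload,
        # which already brushes the top of the measured 2.5-4.3 Mbit/s range.
        echo "with HD: $((2500 + 1000 + 500)) kbit/s; without HD: $((1000 + 500)) kbit/s"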

Comment 5

5 years ago
10M is a lot of headroom.  I'll route you out via a different provider which is pretty much unused, but it's non-terrestrial last mile, so it may come with some additional latency.

If this is just going to be for SFO, we can accommodate this better once we convert one of the egress links from internet to a point-to-point, but that is also something that will take weeks once we get it out of Legal.  If this is going to be more than SFO, then we will seriously need to evaluate this solution, because there is a lot of cost associated with maintaining this headroom.
(Reporter)

Comment 6

5 years ago
I thought the reason we put the A/V machines on the VoIP VLAN was so we could allocate bandwidth with QoS rules.  What am I missing?

Comment 7

5 years ago
Yes.  CoS gives us bandwidth control, not more bandwidth.  The only path where we have that control is the 10M MPLS path that all offices connect to (except Taiwan).  That path is shared with VoIP, and at the time of provisioning there was never an expectation of more than 5M of video traffic, based on the metrics we had for Vidyo and on the expectation that video usage would be inter-office and therefore go over MPLS.  10M of headroom was never communicated, and there are costs associated with that for external connections, since the 30M we've allocated for office internet is for normal usage, not for streaming video.

The project apparently grew along with the bandwidth needs, and it really sounds like we need to get our teams together and get an understanding of what it is you're doing and how we can make it work.
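One thing that can be checked from the encoder side is whether the RTMP traffic is actually leaving the box with a DSCP marking that a CoS policy could act on. A rough sketch, assuming the capture interface is eth0 and that streaming traffic would be marked AF41 (DSCP 34); both are assumptions, not known settings:

        # Show only RTMP packets (TCP 1935) carrying DSCP AF41 (34 << 2 == 0x88).
        # Interface name and DSCP value are assumptions.
        tcpdump -i eth0 -n -v 'tcp port 1935 and (ip[1] & 0xfc) = 0x88'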
Comment 8

5 years ago
There's something broken here. Bandwidth graphs for sfo1 show a peak upstream bandwidth of 1.5Mb/s, and I can get a speedtest of about 3.5Mb/s. This isn't behaving like a 30Mb/s link, so there must be another bottleneck somewhere.

Comment 9

5 years ago
I can't reproduce this from the admin host in sfo1.  How are you doing your test?  Wired or wireless?

[root@admin1a.private.sfo1 ~]# time wget "http://cow.org/100M"

2012-07-13 22:55:38 (4.06 MB/s) - “100M” saved [104857600/104857600]

real    0m24.745s
user    0m0.386s
sys     0m3.109s

[root@admin1a.private.sfo1 ~]# time scp 100M ravi@happy.cow.org:100M.sfo1

100M                                                                                                 100%  100MB   2.3MB/s   00:44    

real    0m50.493s
user    0m5.556s
sys     0m0.641s
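One caveat with the scp number above: scp throughput is often bounded by cipher/CPU overhead rather than the link itself, so a plain HTTP upload gives a cleaner upstream figure. A sketch, with the upload URL as a placeholder:

        # Upload the same 100M file over plain HTTP; upload.example.com is a placeholder.
        # %{speed_upload} is reported by curl in bytes/sec.
        curl -s -o /dev/null -w 'upload: %{speed_upload} bytes/sec\n' \
             -T 100M http://upload.example.com/100M.sfo1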

Comment 10

5 years ago
[root@admin1a.private.sfo1 ~]# iperf -c happy.cow.org -P 1 -i 5 -f M -t 60 -T 1
------------------------------------------------------------
Client connecting to happy.cow.org, TCP port 5001
TCP window size: 0.02 MByte (default)
------------------------------------------------------------
[  3] local 10.251.75.6 port 43253 connected with 66.94.69.41 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 5.0 sec  32.5 MBytes  6.50 MBytes/sec
[  3]  5.0-10.0 sec  52.9 MBytes  10.6 MBytes/sec
[  3] 10.0-15.0 sec  56.1 MBytes  11.2 MBytes/sec
[  3] 15.0-20.0 sec  56.0 MBytes  11.2 MBytes/sec
[  3] 20.0-25.0 sec  54.9 MBytes  11.0 MBytes/sec
[  3] 25.0-30.0 sec  56.0 MBytes  11.2 MBytes/sec
[  3] 30.0-35.0 sec  54.8 MBytes  10.9 MBytes/sec
[  3] 35.0-40.0 sec  56.0 MBytes  11.2 MBytes/sec
[  3] 40.0-45.0 sec  56.0 MBytes  11.2 MBytes/sec
[  3] 45.0-50.0 sec  54.8 MBytes  10.9 MBytes/sec
[  3] 50.0-55.0 sec  56.0 MBytes  11.2 MBytes/sec
[  3] 55.0-60.0 sec  56.1 MBytes  11.2 MBytes/sec
[  3]  0.0-60.1 sec   642 MBytes  10.7 MBytes/sec
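(The iperf figures above are in MBytes/sec because of -f M; multiply by 8 to compare against the Mbit/s numbers used elsewhere in this bug.)

        # 10.7 MBytes/sec from the summary line above, expressed in Mbit/s.
        echo '10.7 * 8' | bc      # 85.6 Mbit/s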

Comment 11

5 years ago
I ran it for 5 minutes in case there was some throttling going on.  This works out to roughly 88 Mbit/s (11 MBytes/sec), which is double what I thought the circuit was.

[root@admin1a.private.sfo1 ~]# iperf -c happy.cow.org -P 1 -i 5 -f M -t 300 -T 1
------------------------------------------------------------
Client connecting to happy.cow.org, TCP port 5001
TCP window size: 0.02 MByte (default)
------------------------------------------------------------
[  3] local 10.251.75.6 port 43256 connected with 66.94.69.41 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 5.0 sec  33.8 MBytes  6.75 MBytes/sec
[  3]  5.0-10.0 sec  56.9 MBytes  11.4 MBytes/sec
[  3] 10.0-15.0 sec  54.9 MBytes  11.0 MBytes/sec
[  3] 15.0-20.0 sec  56.1 MBytes  11.2 MBytes/sec
[  3] 20.0-25.0 sec  56.1 MBytes  11.2 MBytes/sec
[  3] 25.0-30.0 sec  53.9 MBytes  10.8 MBytes/sec
[  3] 30.0-35.0 sec  56.0 MBytes  11.2 MBytes/sec
[  3] 35.0-40.0 sec  55.4 MBytes  11.1 MBytes/sec
[  3] 40.0-45.0 sec  56.1 MBytes  11.2 MBytes/sec
[  3] 45.0-50.0 sec  56.1 MBytes  11.2 MBytes/sec
[  3] 50.0-55.0 sec  54.2 MBytes  10.8 MBytes/sec
[  3] 55.0-60.0 sec  56.8 MBytes  11.3 MBytes/sec
[  3] 60.0-65.0 sec  50.0 MBytes  10.0 MBytes/sec
[  3] 65.0-70.0 sec  56.1 MBytes  11.2 MBytes/sec
[  3] 70.0-75.0 sec  56.1 MBytes  11.2 MBytes/sec
[  3] 75.0-80.0 sec  56.2 MBytes  11.2 MBytes/sec
[  3] 80.0-85.0 sec  56.0 MBytes  11.2 MBytes/sec
[  3] 85.0-90.0 sec  56.0 MBytes  11.2 MBytes/sec
[  3] 90.0-95.0 sec  55.0 MBytes  11.0 MBytes/sec
[  3] 95.0-100.0 sec  56.1 MBytes  11.2 MBytes/sec
[  3] 100.0-105.0 sec  55.1 MBytes  11.0 MBytes/sec
[  3] 105.0-110.0 sec  56.0 MBytes  11.2 MBytes/sec
[  3] 110.0-115.0 sec  56.0 MBytes  11.2 MBytes/sec
[  3] 115.0-120.0 sec  56.1 MBytes  11.2 MBytes/sec
[  3] 120.0-125.0 sec  54.8 MBytes  10.9 MBytes/sec
[  3] 125.0-130.0 sec  55.9 MBytes  11.2 MBytes/sec
[  3] 130.0-135.0 sec  56.1 MBytes  11.2 MBytes/sec
[  3] 135.0-140.0 sec  56.0 MBytes  11.2 MBytes/sec
[  3] 140.0-145.0 sec  54.8 MBytes  10.9 MBytes/sec
[  3] 145.0-150.0 sec  54.0 MBytes  10.8 MBytes/sec
[  3] 150.0-155.0 sec  56.1 MBytes  11.2 MBytes/sec
[  3] 155.0-160.0 sec  56.0 MBytes  11.2 MBytes/sec
[  3] 160.0-165.0 sec  55.1 MBytes  11.0 MBytes/sec
[  3] 165.0-170.0 sec  56.0 MBytes  11.2 MBytes/sec
[  3] 170.0-175.0 sec  55.4 MBytes  11.1 MBytes/sec
[  3] 175.0-180.0 sec  56.1 MBytes  11.2 MBytes/sec
[  3] 180.0-185.0 sec  56.0 MBytes  11.2 MBytes/sec
[  3] 185.0-190.0 sec  55.1 MBytes  11.0 MBytes/sec
[  3] 190.0-195.0 sec  56.0 MBytes  11.2 MBytes/sec
[  3] 195.0-200.0 sec  56.1 MBytes  11.2 MBytes/sec
[  3] 200.0-205.0 sec  54.6 MBytes  10.9 MBytes/sec
[  3] 205.0-210.0 sec  56.2 MBytes  11.2 MBytes/sec
[  3] 210.0-215.0 sec  56.0 MBytes  11.2 MBytes/sec
[  3] 215.0-220.0 sec  56.1 MBytes  11.2 MBytes/sec
[  3] 220.0-225.0 sec  55.1 MBytes  11.0 MBytes/sec
[  3] 225.0-230.0 sec  54.8 MBytes  10.9 MBytes/sec
[  3] 230.0-235.0 sec  57.4 MBytes  11.5 MBytes/sec
[  3] 235.0-240.0 sec  54.9 MBytes  11.0 MBytes/sec
[  3] 240.0-245.0 sec  55.0 MBytes  11.0 MBytes/sec
[  3] 245.0-250.0 sec  56.1 MBytes  11.2 MBytes/sec
[  3] 250.0-255.0 sec  56.0 MBytes  11.2 MBytes/sec
[  3] 255.0-260.0 sec  56.1 MBytes  11.2 MBytes/sec
[  3] 260.0-265.0 sec  54.6 MBytes  10.9 MBytes/sec
[  3] 265.0-270.0 sec  56.2 MBytes  11.2 MBytes/sec
[  3] 270.0-275.0 sec  54.6 MBytes  10.9 MBytes/sec
[  3] 275.0-280.0 sec  56.0 MBytes  11.2 MBytes/sec
[  3] 280.0-285.0 sec  56.1 MBytes  11.2 MBytes/sec
[  3] 285.0-290.0 sec  53.5 MBytes  10.7 MBytes/sec
[  3] 290.0-295.0 sec  56.1 MBytes  11.2 MBytes/sec
[  3] 295.0-300.0 sec  56.0 MBytes  11.2 MBytes/sec
[  3]  0.0-300.0 sec  3312 MBytes  11.0 MBytes/sec
(Reporter)

Comment 12

5 years ago
Not at all sure that tests run near midnight reflect the environment in which we actually stream.

I just now ran the Speedtest.net test from encoder1.sfocommons and got 93.59 Mbit/s down and 74.68 Mbit/s up.

I confirmed that by streaming the blue ball machine animation, first to Edgecast (2 live streams) and then to BitGravity (3 live streams).  In both cases the Flash Queue indicator remained totally empty.

Neither of these results is at all similar to what I was getting on weekdays last week.

I'll repeat the test tomorrow when there are people in the office, and post the results here.
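For repeating the load test without tying up real production content, a synthetic test pattern can be pushed to the same ingest with ffmpeg. This is only a sketch: the bitrate and the application/stream-key path ("live/test") are assumptions, not the real publishing point:

        # Stream a 720p test pattern at ~2.5 Mbit/s to the BitGravity ingest.
        # The "live/test" application/stream name is a placeholder.
        ffmpeg -re -f lavfi -i testsrc=size=1280x720:rate=30 \
               -c:v libx264 -b:v 2500k -maxrate 2500k -bufsize 5000k -g 60 \
               -f flv rtmp://rtp1.sjc1.bitgravity.com/live/test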

Comment 13

5 years ago
After talking with Zandr today we think we're done here.
Status: NEW → RESOLVED
Last Resolved: 5 years ago
Resolution: --- → FIXED
Product: mozilla.org → Infrastructure & Operations