Open
Bug 1252769
Opened 8 years ago
Updated 2 years ago
Avoid buffer underruns when reading from net sockets
Categories
(Core :: WebRTC: Networking, defect, P4)
Core
WebRTC: Networking
Tracking
NEW
Tracking | Status
---|---
firefox47 | affected
backlog | webrtc/webaudio+
People
(Reporter: drno, Unassigned)
References
Details
As reported in bug 1217677 and bug 1251821, it can happen that we are too slow in reading and processing packets from a network socket under high traffic. With people trying HD and 4K video, and possibly multiple such streams over a single bundled transport, it becomes even more likely that the default OS receive buffer sizes will be too small.

To avoid the OS dropping packets from the receive buffer, we probably have three options:

1) Instead of reading a single packet, read multiple packets (as many as are available) from the socket once poll() tells us that data is available.
2) Have the signaling layer do some bandwidth estimation and notify the network layer when receive buffers should be increased.
3) Have the network layer itself collect statistics on the number of packets and bytes received per time slot, and dynamically request changes to the OS receive buffer size when these values cross certain thresholds.

And there are probably even more options...
Reporter
Updated•8 years ago
backlog: --- → webrtc/webaudio+
Rank: 38
Comment 1•8 years ago
(In reply to Nils Ohlmeier [:drno] from comment #0)

I can say that option 1 has served me well in the past, and is simple.
Comment 2•7 years ago
Mass change P3->P4 to align with new Mozilla triage process.
Priority: P3 → P4
Updated•2 years ago
|
Severity: normal → S3