One hundred years ago, Mack Sennett of the Keystone Film Company made a film short called The Bangville Police. It featured a bunch of inept policemen who became known mostly for running around in circles: the Keystone Cops. Our current strategy for setting up the audio in our video conference rooms appears to be a 21st-century remake.

We collect user complaints about a particular room that consist mostly of "the microphone is too loud" and "can't hear well", but it's not clear from those comments whether users really mean the microphone or the speakers, or whether the problem is far-end audio or in-room sound reinforcement.

Our procedure for correcting these problems is to go into the "broken" room and adjust levels until we get acceptable results in a Vidyo call to another (uncalibrated) room. That room is usually in the same building as the "broken" room, and is most often chosen because it happens to be unoccupied at the time of the test. The end result is a room configured to work acceptably with exactly one other room, and because that room is down the hall it is highly unlikely to ever be part of an actual user-initiated video conference with the room under test.

We need to develop metrics for measuring room performance and a calibration procedure that sets audio levels to a common standard. For starters I suggest we try the following:

1 - Determine a metric for setting a standard audio-reinforcement level in a room. This should probably be specified as a fixed level above the measured noise floor in the room (see the measurement sketch at the end of this note).

2 - Establish a standard connection point as the far end for all room configuration tests. This could be a specific conference room, but availability constraints argue for an HD-50 or similar device connected to measurement instrumentation in the AV lab in SFO1. We should be able to set up a machine there to measure audio test signals and to make and play recordings of speech, which will give us more repeatable test results (a sketch of an automated level check also follows below). We can then connect this endpoint to actual user conferences to see how well our static test results correlate with user evaluations of connection quality. We can probably configure it to be remotely controlled from the room under test.

If we don't change the way we "fix" rooms, we'll keep chasing ourselves in circles trying to reconfigure every room to work with every other room in an endless game of changing gain and noise-cancelling levels. Like the Keystone Cops, all we'll accomplish is being exhausted at the end of the chase.

Please add comments here on how we might develop a more rigorous testing and calibration methodology.
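As a concrete starting point for item 1, here is a minimal sketch of the noise-floor measurement, assuming Python with the numpy and sounddevice packages. Note that levels here are dBFS, relative to the capture chain, not absolute SPL (a calibrated microphone or SPL meter would be needed for that), and the +25 dB offset and 10-second window are placeholder values, not agreed standards.

    # Minimal sketch: record the idle room, compute its noise floor, and
    # derive a target reinforcement level a fixed offset above it.
    # Levels are dBFS (relative to the capture chain), not absolute SPL;
    # the +25 dB offset and 10 s window are placeholder values.
    import numpy as np
    import sounddevice as sd

    SAMPLE_RATE = 48000            # Hz
    MEASURE_SECONDS = 10           # average over a quiet interval
    REINFORCEMENT_OFFSET_DB = 25   # hypothetical offset above the floor

    def rms_dbfs(samples: np.ndarray) -> float:
        """RMS level of a float signal in dB relative to full scale."""
        rms = np.sqrt(np.mean(np.square(samples)))
        return 20.0 * np.log10(max(rms, 1e-10))

    def measure_noise_floor() -> float:
        """Record the idle room and return its noise floor in dBFS."""
        frames = int(MEASURE_SECONDS * SAMPLE_RATE)
        recording = sd.rec(frames, samplerate=SAMPLE_RATE,
                           channels=1, dtype="float32")
        sd.wait()
        return rms_dbfs(recording[:, 0])

    if __name__ == "__main__":
        floor_dbfs = measure_noise_floor()
        print(f"Noise floor:  {floor_dbfs:6.1f} dBFS")
        print(f"Target level: {floor_dbfs + REINFORCEMENT_OFFSET_DB:6.1f} dBFS")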
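And for item 2, a rough sketch of the kind of automated level check the SFO1 endpoint could run, assuming Python with the sounddevice and soundfile packages. The clip name, target level, and tolerance are all hypothetical, and the sketch assumes the endpoint's audio routing delivers the call's return path to the default capture device; the actual routing through the Vidyo call would be part of the endpoint setup.

    # Rough sketch: play a standard speech clip into the call from the
    # reference endpoint while recording the return path, then check the
    # received level against a target window. Clip name, target, and
    # tolerance are hypothetical placeholders.
    import numpy as np
    import sounddevice as sd
    import soundfile as sf

    REFERENCE_CLIP = "harvard_sentences_48k.wav"   # placeholder recording
    EXPECTED_DBFS = -26.0                          # placeholder target
    TOLERANCE_DB = 3.0

    def rms_dbfs(samples: np.ndarray) -> float:
        rms = np.sqrt(np.mean(np.square(samples)))
        return 20.0 * np.log10(max(rms, 1e-10))

    def run_level_check() -> None:
        clip, rate = sf.read(REFERENCE_CLIP, dtype="float32")
        # Play the clip and capture the return path simultaneously.
        captured = sd.playrec(clip, samplerate=rate, channels=1)
        sd.wait()
        level = rms_dbfs(captured[:, 0])
        verdict = "PASS" if abs(level - EXPECTED_DBFS) <= TOLERANCE_DB else "FAIL"
        print(f"Received level: {level:6.1f} dBFS ({verdict}, "
              f"target {EXPECTED_DBFS} +/- {TOLERANCE_DB} dB)")

    if __name__ == "__main__":
        run_level_check()

The same loop could log a result per room per test run, which would give us the data to see whether rooms drift over time and whether our static measurements track the user complaints.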
Closing for now