Bug 1580480 Comment 25 Edit History

Note: The actual edited comment in the bug view page will always show the original commenter’s name and original timestamp.

Testing this, I see a few things that don't seem quite right to me:

1. The adjustment occurs in only one of the cases where chunking can occur. It seems like it should happen in FetchTryChunking(), where the calls to FetchMessage() occur in chunks, so that all cases are covered. However, its current location does cover the most common chunking case.

2. Chunk size is adjusted up by 8192 bytes when the time between chunk fetches is less than 2 seconds. However, it only changes on every other fetch, so you get the chunk fetch size sequence ```X, X, X+8192, X+8192, X+2*8192, X+2*8192, ...``` It seems like it should just increase by 8192 on every fetch instead of every other one. (X is the configured chunk size, default 65535.)

3. As long as the time between fetches stays under 2 seconds, the chunk size keeps increasing. You have to download several large messages before the chunk size grows large enough that chunking is effectively disabled. Issue 2 also makes this take longer.

4. At one time there was a limit on the maximum chunk size, but now there is not. The chunk size is stored as a uint32_t, so it could grow to 4 GB without rolling over, which would be a much larger email than any system supports. So I guess this is OK.

5. When the time between chunk fetches is between 2 and 4 seconds, no adjustment up or down occurs. When the time goes above 4 seconds, the chunk size is decreased by 8192 bytes. To test this you have to slow down your network, both outgoing and especially incoming; I had to slow incoming down to about 8000 bps to push the time above 4 seconds. Seems to work OK. The minimum chunk size seems to be ~~2~~ **(actually, 8192)**, so it can't go to zero or underflow.

6. The adjusted chunk size and threshold (always 1.5 times the chunk size) are saved as new prefs when the connection ends or tb shuts down, so they are reused on new connections and messages and when tb restarts. But if you want to go back to the default chunk size settings using the config editor, the default or user settings get overwritten by the adjusted values. You **may** have to disable chunking, restart tb, set the values back to their default or custom values, and then re-enable chunking for the user settings to take effect.
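To make items 2 and 5 concrete, here is a small standalone model of the adjustment logic as I understand it from testing (my own sketch, not the actual patch code; `adjustChunkSize`, `simulateFastFetchSizes`, `kStep`, and `kMinChunk` are all made-up names):

```cpp
#include <cstdint>
#include <vector>

constexpr uint32_t kStep = 8192;      // per-adjustment increment/decrement
constexpr uint32_t kMinChunk = 8192;  // observed floor; can't reach 0 or underflow

// One adjustment step (items 2 and 5): grow when the previous chunk arrived
// in under 2 s, shrink when it took over 4 s, no change in the 2-4 s band.
uint32_t adjustChunkSize(uint32_t chunk, double fetchSeconds) {
  if (fetchSeconds < 2.0) return chunk + kStep;
  if (fetchSeconds > 4.0)
    return chunk >= kMinChunk + kStep ? chunk - kStep : kMinChunk;
  return chunk;
}

// Model of the every-other-fetch behavior from item 2 on a fast link:
// the adjustment is applied only after every second fetch, so each size
// repeats twice: X, X, X+8192, X+8192, X+2*8192, X+2*8192, ...
std::vector<uint32_t> simulateFastFetchSizes(uint32_t configured, int fetches) {
  std::vector<uint32_t> sizes;
  uint32_t chunk = configured;
  for (int i = 0; i < fetches; ++i) {
    sizes.push_back(chunk);
    if (i % 2 == 1) chunk = adjustChunkSize(chunk, /*fetchSeconds=*/1.0);
  }
  return sizes;
}
```

With the default configured size of 65535 this produces 65535, 65535, 73727, 73727, 81919, 81919; applying the adjustment on every iteration instead would give the per-fetch growth suggested in item 2.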

So there may be a few problems. It is slow to converge on the optimum larger chunk size, due to the linear increase and due to the problem described in item 2. I am curious what would happen if you just doubled the chunk size when the time is short and halved it when the time is long?
Also, resetting back to default or custom values seems kind of difficult. But if this really works you probably wouldn't need to do that. Also, maybe a way to disable auto-chunk adjustment and just let the user set their own fixed chunking parameters is needed? Of course, disabling chunking, or possibly making that the default, is still a possibility since I don't see chunking as that helpful **unless the user's connection is really slow, messages are stored only on the server, and the messages accessed are often large enough to need chunking**.
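For comparison, the doubling/halving idea above would look something like this (purely a hypothetical sketch, not anything in the patch; `adjustChunkSizeMult` and `kMinChunk` are made-up names). It would reach a given size in logarithmically many fetches instead of linearly many:

```cpp
#include <algorithm>
#include <cstdint>

constexpr uint32_t kMinChunk = 8192;  // same floor as the additive scheme

// Multiplicative variant: double on a fast fetch, halve on a slow one,
// leave the size unchanged in the 2-4 s dead band.
uint32_t adjustChunkSizeMult(uint32_t chunk, double fetchSeconds) {
  if (fetchSeconds < 2.0) return chunk * 2;                       // fast: double
  if (fetchSeconds > 4.0) return std::max(chunk / 2, kMinChunk);  // slow: halve
  return chunk;
}
```

Starting from 65536, four fast fetches already reach 1 MB, whereas the +8192 scheme would need over a hundred adjustments to get there.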
