Closed Bug 811646 Opened 12 years ago Closed 12 years ago

Add better handling for HTTP downloading for the stub installer

Categories

(Firefox :: Installer, defect)

Platform: x86_64 Windows 7
Type: defect
Priority: Not set
Severity: normal

Tracking

Status: VERIFIED FIXED
Target Milestone: Firefox 19
Tracking Status
firefox18 + fixed
firefox19 + fixed

People

(Reporter: bbondy, Assigned: bbondy)

References

Details

(Whiteboard: [stub+])

Attachments

(3 files, 4 obsolete files)

Currently the buffer we use in the NSIS plugin InetBgDL is too small (512), so it leads to slower download speeds.
Also, we only do a single big request/response. Some proxies may buffer a resource completely.
The problem with that is we use the default WinINet timeout of 1 minute,
so the stub will hang while downloading.

Fiddler is one example that produces this behavior (unless you change the setting):
it buffers the response until the whole response is there and then releases it,
which leads to a timeout loop. A good proxy streams the content as it receives it,
but I'm sure not all proxies are good.

This may (or may not) be hurting stub installer conversions, so it is worth fixing.
We should split the requests into multiple parts and use a larger buffer size.

When a timeout happens for this reason, the user sees an indeterminate progress bar forever.
It keeps looping between these states:
1. HTTP GET with response of 302 for http://download.mozilla.org/?product=firefox-nightly-latest&os=win&lang=en-US
2. No response for: http://ftp.mozilla.org/pub/mozilla.org/firefox/nightly/latest-mozilla-central/firefox-19.0a1.en-US.win32.installer.exe
3. Wait a minute or so
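
For reference, one chunk of such a byte-range download with WinINet looks roughly like the sketch below. This is a minimal illustration of the approach, not code from the patch: DownloadChunk and buf are hypothetical names, while the Range header and the WinINet calls themselves are standard.

    // Sketch: fetch bytes [offset, offset + length - 1] of `path` over an
    // open InternetConnect handle. A server with range support answers 206
    // Partial Content, so no single response has to carry the whole file.
    #include <windows.h>
    #include <wininet.h>

    static bool DownloadChunk(HINTERNET hInetCon, LPCWSTR path,
                              DWORD offset, DWORD length,
                              BYTE *buf /* at least `length` bytes */)
    {
      HINTERNET hReq = HttpOpenRequestW(hInetCon, L"GET", path, NULL, NULL,
                                        NULL, INTERNET_FLAG_RELOAD, 0);
      if (!hReq)
        return false;

      WCHAR range[64];
      wsprintfW(range, L"Range: bytes=%u-%u", offset, offset + length - 1);
      if (!HttpSendRequestW(hReq, range, (DWORD)-1L, NULL, 0))
      {
        InternetCloseHandle(hReq);
        return false;
      }

      DWORD read = 0, total = 0;
      while (total < length &&
             InternetReadFile(hReq, buf + total, length - total, &read) &&
             read > 0)
      {
        total += read; // the caller flushes buf to disk and updates progress
      }
      InternetCloseHandle(hReq);
      return total == length;
    }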
Attached patch Patch v0.9 - WIP (obsolete) — Splinter Review
This is a working version that uses byte-range requests, but it needs some cleanup, which I'll do in the morning.
(In reply to Brian R. Bondy [:bbondy] from comment #0)
> Currently the buffer we use in the NSIS plugin InetBgDL is too small (512),
> so it leads to slower download speeds.
Actually, 4096
-#if PLUGIN_DEBUG
-    const UINT cbBufXF = 512;
-    BYTE bufXF[cbBufXF];
-#else
-    const UINT cbBufXF = 4096;
-    BYTE*bufXF = (BYTE*) pURL->text;
-#endif

> Also, we only do a single big request/response. Some proxies may buffer a
> resource completely.
> The problem with that is we use the default WinINet timeout of 1 minute,
> so the stub will hang while downloading.
Should the ability to set INTERNET_OPTION_CONNECT_TIMEOUT and INTERNET_OPTION_RECEIVE_TIMEOUT via the NSIS script be added?

diff --git a/other-licenses/nsis/Contrib/InetBgDL/InetBgDL.cpp b/other-licenses/nsis/Contrib/InetBgDL/InetBgDL.cpp
--- a/other-licenses/nsis/Contrib/InetBgDL/InetBgDL.cpp
+++ b/other-licenses/nsis/Contrib/InetBgDL/InetBgDL.cpp
@@ -25,16 +25,19 @@ volatile UINT g_FilesTotal = 0;
 volatile UINT g_FilesCompleted = 0;
 volatile UINT g_Status = STATUS_INITIAL;
 volatile FILESIZE_T g_cbCurrXF;
 volatile FILESIZE_T g_cbCurrTot = FILESIZE_UNKNOWN;
 CRITICAL_SECTION g_CritLock;
 UINT g_N_CCH;
 PTSTR g_N_Vars;
 
+int g_ConnectTimeout = 0;
+int g_ReceiveTimeout = 0;
+
 #define NSISPI_INITGLOBALS(N_CCH, N_Vars) do { \
   g_N_CCH = N_CCH; \
   g_N_Vars = N_Vars; \
   } while(0)
 
 #define ONELOCKTORULETHEMALL
 #ifdef ONELOCKTORULETHEMALL
 #define TaskLock_AcquireExclusive() EnterCriticalSection(&g_CritLock)
@@ -204,16 +207,26 @@ diegle:
     if (InternetQueryOption(hInetSes, INTERNET_OPTION_CONNECTED_STATE, &longOpt, &cbio))
     {
       if (INTERNET_STATE_DISCONNECTED_BY_USER&longOpt)
       {
         INTERNET_CONNECTED_INFO ci = {INTERNET_STATE_CONNECTED, 0};
         InternetSetOption(hInetSes, INTERNET_OPTION_CONNECTED_STATE, &ci, sizeof(ci));
       }
     }
+
+    if(g_ConnectTimeout > 0)
+    {
+      InternetSetOption(hInetSes, INTERNET_OPTION_CONNECT_TIMEOUT, &g_ConnectTimeout, sizeof(g_ConnectTimeout));
+    }
+
+    if(g_ReceiveTimeout > 0)
+    {
+      InternetSetOption(hInetSes, INTERNET_OPTION_RECEIVE_TIMEOUT, &g_ReceiveTimeout, sizeof(g_ReceiveTimeout));
+    }
   }
 
   DWORD ec = ERROR_SUCCESS;
   hLocalFile = CreateFile(pFile->text,GENERIC_WRITE,FILE_SHARE_READ|FILE_SHARE_DELETE,NULL,CREATE_ALWAYS,0,NULL);
   if (INVALID_HANDLE_VALUE == hLocalFile) {
     goto diegle;
   }
 
@@ -319,28 +332,40 @@ NSISPIEXPORTFUNC Get(HWND hwndNSIS, UINT
   pX->RegisterPluginCallback(g_hInst, NSISPluginCallback);
   for (;;)
   {
     NSIS::stack_t*pURL = StackPopItem(ppST);
     if (!pURL) {
       break;
     }
 
-    if (0==lstrcmp(pURL->text,_T("/END")))
+    if (lstrcmpi(pURL->text, _T("/connecttimeout")) == 0)
+    {
+      NSIS::stack_t*pConnectTimeout = StackPopItem(ppST);
+      g_ConnectTimeout = _tcstol(pConnectTimeout->text, NULL, 10) * 1000;
+      continue;
+    }
+    else if (lstrcmpi(pURL->text, _T("/receivetimeout")) == 0)
+    {
+      NSIS::stack_t*pReceiveTimeout = StackPopItem(ppST);
+      g_ReceiveTimeout = _tcstol(pReceiveTimeout->text, NULL, 10) * 1000;
+      continue;
+    }
+    else if (lstrcmpi(pURL->text, _T("/reset")) == 0)
+    {
+      StackFreeItem(pURL);
+      Reset();
+      continue;
+    }
+    else if (lstrcmpi(pURL->text, _T("/end")) == 0)
     {
 freeurlandexit:
       StackFreeItem(pURL);
       break;
     }
-    if (0==lstrcmp(pURL->text,_T("/RESET")))
-    {
-      StackFreeItem(pURL);
-      Reset();
-      continue;
-    }
 
     NSIS::stack_t*pFile = StackPopItem(ppST);
     if (!pFile) {
       goto freeurlandexit;
     }
 
     TaskLock_AcquireExclusive();
 

> 
> Fiddler is one example that produces this behavior (unless you change the
> setting): it buffers the response until the whole response is there and
> then releases it, which leads to a timeout loop. A good proxy streams the
> content as it receives it, but I'm sure not all proxies are good.
> 
> This may (or may not) be hurting stub installer conversions, so it is
> worth fixing.
> We should split the requests into multiple parts and use a larger buffer
> size.
Agreed

> 
> When a timeout happens for this reason, the user sees an indeterminate
> progress bar forever.
> It keeps looping between these states:
> 1. HTTP GET with response of 302 for
> http://download.mozilla.org/?product=firefox-nightly-latest&os=win&lang=en-US
> 2. No response for:
> http://ftp.mozilla.org/pub/mozilla.org/firefox/nightly/latest-mozilla-
> central/firefox-19.0a1.en-US.win32.installer.exe
> 3. Wait a minute or so
Thanks for catching this.
Sorry, that patch was not ready for review; I have a new one I've been working on. I'll post it in about 15 minutes, once I get it to compile with VC6.
Attached patch Patch v1 - Download in parts (obsolete) — Splinter Review
This patch has a lot more fixes and work in it. I'll hold off on compiling the VC6 DLL since there will likely be some review comments.
Attachment #681384 - Attachment is obsolete: true
Attachment #681553 - Flags: review?(robert.bugzilla)
I've tested this with and without Fiddler (which reproduces the problem above) and with and without the server supporting Range requests. In all cases the downloads now succeed instead of hanging with Fiddler.
I'll incorporate your fix from Comment 2 into my newer patch.  I also have some minor fixes that were needed to get it working on VC6.
Comment on attachment 681553 [details] [diff] [review]
Patch v1 - Download in parts

Removing request per comment #6

>+BasicParseURL(LPCWSTR url, wchar_t (&protocol)[A], wchar_t (&port)[B],
>+         wchar_t (&server)[C], wchar_t (&path)[D]) {
nit: alignment

>+  ZeroMemory(protocol, A * sizeof(wchar_t));
>+  ZeroMemory(port, B * sizeof(wchar_t));
>+  ZeroMemory(server, C * sizeof(wchar_t));
>+  ZeroMemory(path, D * sizeof(wchar_t));
>+
>+  const WCHAR *p = url;
>+  // Skip past the protocol
>+  int pos = 0;
>+  while (*p != L'\0' && *p != L'/' && *p != L':') {
>+    if (pos + 1 >= A)  
>+      return false;
>+    protocol[pos++] = *p++;
>+  }
>+  // Skip past the ://
>+  p += 3;
>+
>+  wcsncpy(port, L"80", B);
>+  if (!wcsicmp(protocol, L"https")) {
>+    wcsncpy(port, L"443", B);
>+  }
Seems cleaner for this to return an INTERNET_PORT and use INTERNET_INVALID_PORT_NUMBER, which "uses the default port for the service specified by dwService."
http://msdn.microsoft.com/en-us/library/windows/desktop/aa384363%28v=vs.85%29.aspx
Attachment #681553 - Flags: review?(robert.bugzilla)
Attached patch Patch v2 - InetBgDL (obsolete) — Splinter Review
- Added in options for connect and receive timeouts
- Other minor code changes
Attachment #681553 - Attachment is obsolete: true
Attachment #681638 - Flags: review?(robert.bugzilla)
Attachment #681639 - Flags: review?(robert.bugzilla)
Attached patch Patch v2 - VC6 DLL (obsolete) — Splinter Review
Attachment #681640 - Flags: review?(robert.bugzilla)
Attachment #681639 - Flags: review?(robert.bugzilla) → review+
Comment on attachment 681638 [details] [diff] [review]
Patch v2 - InetBgDL

>diff --git a/other-licenses/nsis/Contrib/InetBgDL/InetBgDL.cpp b/other-licenses/nsis/Contrib/InetBgDL/InetBgDL.cpp
>--- a/other-licenses/nsis/Contrib/InetBgDL/InetBgDL.cpp
>+++ b/other-licenses/nsis/Contrib/InetBgDL/InetBgDL.cpp
>...
>@@ -107,46 +111,131 @@ UINT_PTR __cdecl NSISPluginCallback(UINT
>   {
>   case NSPIM_UNLOAD:
>     Reset();
>     break;
>   }
>   return NULL;
> }
> 
>+
>+/* ParseURL is a quickly thrown together simple way to parse a URL into parts.
>+ * It is only designed to support simple URLs and doesn't support things like
>+ * authorization information in the URL.
>+ * The format of the URL is assumed to be:
>+ * <protocol>://<server>:<port>/<path>
>+ *
>+ * @param url      The input URL to parse
>+ * @param protocol Out variable which will store the protocol of the passed url
>+ * @param server Out variable which will store the server of the passed url
>+ * @param port Out variable which will store the port of the passed url
>+ * @param path Out variable which will store the path of the passed url
>+ * @return true if successful
>+*/
>+template<size_t A, size_t B, size_t C> static bool
>+BasicParseURL(LPCWSTR url, wchar_t (*protocol)[A], INTERNET_PORT *port,
>+              wchar_t (*server)[B], wchar_t (*path)[C])
>+{
>+  ZeroMemory(*protocol, A * sizeof(wchar_t));
>+  ZeroMemory(*server, B * sizeof(wchar_t));
>+  ZeroMemory(*path, C * sizeof(wchar_t));
>+
>+  const WCHAR *p = url;
>+  // Skip past the protocol
>+  int pos = 0;
>+  while (*p != L'\0' && *p != L'/' && *p != L':')
>+  {
>+    if (pos + 1 >= A)
>+      return false;
>+    (*protocol)[pos++] = *p++;
>+  }
>+  // Skip past the ://
>+  p += 3;
>+
>+  *port = 80;
>+  if (!wcsicmp(*protocol, L"https"))
>+  {
>+    *port = 443;
>+  }
Shouldn't this just work with INTERNET_INVALID_PORT_NUMBER? If not and the doc is incorrect, then I would think INTERNET_DEFAULT_HTTP_PORT or INTERNET_DEFAULT_HTTPS_PORT should be used.
http://msdn.microsoft.com/en-us/library/windows/desktop/aa384363%28v=vs.85%29.aspx

>@@ -188,146 +281,304 @@ diegle:
>...
>   TRACE2(_T("TaskThreadProc completed %s, ec=%u\n"), pURL->text, ec);
>   InternetCloseHandle(hInetFile);
>-  if (ERROR_SUCCESS == ec) {
>-    if (INVALID_HANDLE_VALUE != hLocalFile) {
>+  if (ERROR_SUCCESS == ec)
>+  {
>+    if (INVALID_HANDLE_VALUE != hLocalFile)
>+    {
>       CloseHandle(hLocalFile);
>       hLocalFile = INVALID_HANDLE_VALUE;
>     }
>     StackFreeItem(pURL);
>     StackFreeItem(pFile);
>     ++completedFile;
>-  }
>-  else
>-  {
>+  } else {
nit: you are mixing styles... elsewhere you use
}
else
{

>@@ -386,8 +639,16 @@ EXTERN_C BOOL WINAPI _DllMainCRTStartup(
>   }
>   return TRUE;
> }
> 
> BOOL WINAPI DllMain(HINSTANCE hInst, ULONG Reason, LPVOID pCtx)
> {
>   return _DllMainCRTStartup(hInst, Reason, pCtx);
> }
>+
>+// For some reason VC6++ doesn't like wcsicmp and swprintf.
>+// If you use them, you get a linking error about _main
>+// as an unresolved external.
>+int main(int argc, char**argv)
>+{
>+  return 0;
>+}
Strange... have you found others using this workaround?

I went through this pretty quickly and would like another chance to look at an updated patch.
Attachment #681638 - Flags: review?(robert.bugzilla)
Attachment #681640 - Flags: review?(robert.bugzilla)
> Shouldn't this just work with INTERNET_INVALID_PORT_NUMBER? If not and the doc
>  is incorrect, then I would think INTERNET_DEFAULT_HTTP_PORT or
> INTERNET_DEFAULT_HTTPS_PORT should be used.
> http://msdn.microsoft.com/en-us/library/windows/desktop/aa384363%28v=vs.85%29.aspx

Using the invalid port here would always lead to port 80 even when the passed-in URL is https, so I'll go with the constants. Yup, I agree using those constants is better than hardcoding values.
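
For the record, the port selection ends up roughly like this (a condensed sketch; protocol, server, and hInetSes are the names from the quoted patch):

    // Sketch: pick the WinINet default-port constant from the parsed scheme
    // instead of hardcoding 80/443.
    INTERNET_PORT port = INTERNET_DEFAULT_HTTP_PORT;   // 80
    if (!wcsicmp(protocol, L"https"))
      port = INTERNET_DEFAULT_HTTPS_PORT;              // 443
    // Note: HTTPS also needs INTERNET_FLAG_SECURE on the request handle;
    // the port constant alone does not turn on TLS.
    HINTERNET hInetCon = InternetConnectW(hInetSes, server, port, NULL, NULL,
                                          INTERNET_SERVICE_HTTP, 0, 0);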


> } else {

Doh, I searched for ") {" to make that consistent, but of course it didn't catch else clauses.

> Strange... have you found others using this workaround?

Nope, it's really odd: if I comment out all of the new code except the wcsicmp call, I get the linking error; if I additionally comment out the wcsicmp, it works. VS2010 is fine without that hack. I could use different code instead, but this seems like the easiest fix. The problem also happens with swprintf. I did test the DLL and it does work. Also, the entry point is set correctly to DllMain in the linker options, so who knows; I think it's a linker bug.


> I went through this pretty quickly and would like another chance to look 
> at an updated patch

I noticed that overall this is slightly slower, but not significantly. If I had only made the buffer-size changes it would be a lot faster than originally, but sending the extra requests slows it down a bit. I think it's worth not having a single big request, though. We might want to tweak the value of cbRangeReadBufXF (currently 2 MiB). Increasing it will have a direct effect on speed, and we can probably do that safely since we increased the timeouts. It'll make the progress bar choppier when you have a bad proxy, though.
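
To make the trade-off concrete, the download loop has roughly this shape (a sketch only; DownloadChunk is the hypothetical helper sketched after comment 0 above, rangeBuf is an illustrative buffer name, and cbRangeReadBufXF, cbThisFile, and g_cbCurrXF come from the patch):

    // Sketch: fewer, larger range requests mean fewer round-trips (faster),
    // but each request is also the unit of progress and of timeout recovery.
    const DWORD cbRangeReadBufXF = 2 * 1024 * 1024;  // 2 MiB in this patch
    FILESIZE_T received = 0;
    while (received < cbThisFile)
    {
      DWORD chunk = (DWORD)((cbThisFile - received < cbRangeReadBufXF)
                            ? (cbThisFile - received) : cbRangeReadBufXF);
      if (!DownloadChunk(hInetCon, path, (DWORD)received, chunk, rangeBuf))
        break;  // error/retry handling elided
      // write rangeBuf to hLocalFile, then update g_cbCurrXF for the UI
      received += chunk;
    }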
Fixes as per previous comment.
Attachment #681638 - Attachment is obsolete: true
Attachment #681640 - Attachment is obsolete: true
Attachment #681658 - Flags: review?(robert.bugzilla)
Comment on attachment 681658 [details] [diff] [review]
Patch v3 - InetBgDL

>+  // Up the default internal buffer size from 4096 to internalReadBufferSize.
>+  DWORD internalReadBufferSize = cbRangeReadBufXF;
>+  if (!InternetSetOption(hInetCon, INTERNET_OPTION_READ_BUFFER_SIZE,
>+                         &internalReadBufferSize, sizeof(DWORD)))
>+  {
>+    // Maybe it's too big, try half of the optimal value.  If that fails just
>+    // use the default.
>+    internalReadBufferSize /= 2;
>+    InternetSetOption(hInetCon, INTERNET_OPTION_READ_BUFFER_SIZE,
>+                      &internalReadBufferSize, sizeof(DWORD));
>+  }
>+
>+  // Up the default timeout of 30 seconds to 2 minutes instead.
>+  // This is in case a proxy in between caches the results, it could in theory
>+  // take 2 minutes to get the first 2MiB chunk in that case.
This comment is not correct... it is only changed when specified by the script. I'm fine with not adding a comment since the code is easily understood without a comment here. Perhaps add a comment to the script regarding the proxy issue?

>+  if (g_ReceiveTimeout)
>+  {
>+    InternetSetOption(hInetCon, INTERNET_OPTION_DATA_RECEIVE_TIMEOUT,
>+                      &g_ReceiveTimeout, sizeof(DWORD));
>+  }
>+
>+  // Obtain the file size using a HEAD request.
>+  // Some proxy servers will hold up an entire download for GET requests, so
>+  // HEAD is important here.
>+  HINTERNET hInetFile = HttpOpenRequest(hInetCon, L"HEAD",
>+                                        path, NULL, NULL, NULL, 0, 0);
>+  if (!hInetFile)
>+  {
>+    goto diegle;
>+  }
>+
>+  if (!HttpSendRequest(hInetFile, NULL, 0, NULL, 0))
>+  {
>+    goto diegle;
>+  }
>+
>+  // Get the file length via the Content-Length header
>   FILESIZE_T cbThisFile;
>   DWORD cbio = sizeof(cbThisFile);
>-  if (!HttpQueryInfo(hInetFile, HTTP_QUERY_CONTENT_LENGTH|HTTP_QUERY_FLAG_NUMBER, &cbThisFile, &cbio, NULL))
>+  if (!HttpQueryInfo(hInetFile,
>+                     HTTP_QUERY_CONTENT_LENGTH | HTTP_QUERY_FLAG_NUMBER,
>+                     &cbThisFile, &cbio, NULL))
>   {
>     cbThisFile = FILESIZE_UNKNOWN;
>   }
It would be nice to get the final url for redirects as well.
Attachment #681658 - Flags: review?(robert.bugzilla) → review+
(In reply to Robert Strong [:rstrong] (do not email) from comment #14)
>...
> It would be nice to get the final url for redirects as well.
meant to also say in a different bug
Heh, OK, good, because the HTTP request for downloading the installer currently leads to a redirect, so the comment would have applied here as well :)

The comment was just left over because I had added a hardcoded 2 min this morning, after the patch that you saw but before I saw your review comments. I'll fix that.

What do you think about this?

> I noticed that overall this is slightly slower, but not significantly.
> If I had only made the buffer-size changes it would be a lot faster than
> originally, but sending the extra requests slows it down a bit. I think
> it's worth not having a single big request, though. We might want to
> tweak the value of cbRangeReadBufXF (currently 2 MiB). Increasing it will
> have a direct effect on speed, and we can probably do that safely since
> we increased the timeouts. It'll make the progress bar choppier when you
> have a bad proxy, though.

Trade off is:
- Larger range requests is faster
- Larger range requests leads to choppier downloading if you have a bad proxy in between

What is the maximum size you think would be good? Maybe 4 MiB? 4 or 8 would probably be enough to leave the speed unchanged; 2 is slightly slower than before, maybe by 5 or 10 seconds.
I've been mulling that over. I think the speed trade-off of going with 2 is OK during the download, since this allows recovering from the last chunk.
> > It would be nice to get the final url for redirects as well.
> meant to also say in a different bug

Oh, I misunderstood this; I thought you wrote the comment in the wrong bug. So you want the redirect destination returned to NSIS, or? I understand now that you want a task done in another bug, but I need clarification on what that is.
We are going to need the final URL used for the download.
Ah, you mean for subsequent Range HTTP requests, right? Ya, that would probably speed things up quite a bit too. I'll do that here to avoid having to uplift another Aurora patch. I'll do the work in a new patch so you only have to review that.
That and it will also be used for the next version of the ping.
I don't think the ping request gets redirected; it gets a 200 back right away.
The final URL is to be included in the ping so we can detect problematic mirrors.
OK, thanks. So to clarify: I'll do that in another bug, but in this bug I'll add support for not having to go through a redirect with each 2 MiB chunk.
In the first HTTP HEAD request I set a flag that indicates not to auto-handle redirects; that's the only way to get the final destination URL.
Then I use that final destination URL for all subsequent range requests.

I tested the InetBgDL code with and without a proxy and with and without passing the new flag for range requests.

I bumped the chunk from 2MB to 4MB as well.
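
Roughly, the redirect-resolution step looks like the sketch below (simplified to a single hop, not the patch verbatim; INTERNET_FLAG_NO_AUTO_REDIRECT and HTTP_QUERY_LOCATION are the real WinINet names):

    // Sketch: resolve the 302 once with a HEAD request, then reuse the final
    // URL for every subsequent range request.
    HINTERNET hHead = HttpOpenRequestW(hInetCon, L"HEAD", path, NULL, NULL,
                                       NULL, INTERNET_FLAG_NO_AUTO_REDIRECT, 0);
    if (hHead && HttpSendRequestW(hHead, NULL, 0, NULL, 0))
    {
      DWORD status = 0, cb = sizeof(status);
      if (HttpQueryInfoW(hHead, HTTP_QUERY_STATUS_CODE | HTTP_QUERY_FLAG_NUMBER,
                         &status, &cb, NULL) &&
          status >= 300 && status < 400)
      {
        WCHAR finalURL[2048];
        DWORD cbURL = sizeof(finalURL);
        // The Location header holds the redirect target; a robust version
        // loops until a non-3xx status or a redirect cap is reached.
        if (HttpQueryInfoW(hHead, HTTP_QUERY_LOCATION, finalURL, &cbURL, NULL))
        {
          // re-parse finalURL, reconnect, and issue all range GETs against it
        }
      }
    }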
Attachment #682612 - Flags: review?(robert.bugzilla)
Comment on attachment 682612 [details] [diff] [review]
Patch v1 - Better redirect handling

>   // If the server doesn't have range request support or doesn't have a
>   // Content-Length header, then get everything all at once.
>-  if (!useRangeRequest)
>+  // If the user didn't want a range request, then we already issued the request
>+  if (g_WantRangeRequest && !shouldUseRangeRequest)
>   {
>+    // If the user wanted a range request, then we already sent the request
nit: If the consumer...

It would be helpful to expand on this comment to at least include when the request was already sent.

Looks good!
Attachment #682612 - Flags: review?(robert.bugzilla) → review+
http://hg.mozilla.org/integration/mozilla-inbound/rev/1ed5a733887f
http://hg.mozilla.org/integration/mozilla-inbound/rev/4bcc96719688
http://hg.mozilla.org/integration/mozilla-inbound/rev/d9c68e8e78be
http://hg.mozilla.org/integration/mozilla-inbound/rev/a2d0f3ba09eb

Juan, once this lands on m-c, I'd like it to be tested before requesting that it go to Aurora. Previously, having a program like Fiddler open would lead to an infinite downloading loop; now the download will work, but it will be choppier.
Keywords: verifyme
The landed patches addressed the latest review nits btw. Thanks for the reviews!
QA Contact: jsmith
> We're reaching out to you because there are unresolved bugs tracked for Firefox 
> 18/19 assigned to your team. We *strongly* urge all teams to prioritize tracking 
> bugs immediately, so that we're not calling you over the holidays :)

This is waiting on QA to verify that it's working OK before we can merge it to 18.
Here's what I'm thinking of doing to verify this. Let me know if I've got it wrong.

Operating Systems

- Windows 7
- Windows XP

Tests for Better HTTP Handling

- Test that I can install Firefox to its default install directory as an admin with the stub installer
- Test that I can install Firefox to its default install directory as an admin with the stub installer with Fiddler running
- Test that I fail with an appropriate error if I install Firefox with the stub installer with no internet connection
- Test that I can install Firefox with an antivirus running with default preferences on the antivirus with the stub installer
Also, each install test needs to check that pings are still sent. I already verified the download-cancel ping flow in a different bug, so I don't need to recheck that.

I'll also add an additional test that will do a stub install test with wireshark running.
While running the install test with Fiddler, I'm seeing:

#	Result	Protocol	Host	URL	Body	Caching	Content-Type	Process	Comments	Custom	
1	302	HTTP	download.mozilla.org	/?product=firefox-nightly-latest&os=win&lang=en-US	0	max-age=15  	text/html; charset=UTF-8	firefox-20.0a1.en-us.win32.installer-stub:5652			
2	200	HTTP	ftp.mozilla.org	/pub/mozilla.org/firefox/nightly/latest-mozilla-central/firefox-19.0a1.en-US.win32.installer.exe	0	max-age=3600  Expires: Tue, 27 Nov 2012 23:25:03 GMT	application/octet-stream	firefox-20.0a1.en-us.win32.installer-stub:5652			
3	206	HTTP	ftp.mozilla.org	/pub/mozilla.org/firefox/nightly/latest-mozilla-central/firefox-19.0a1.en-US.win32.installer.exe	4,194,305	max-age=3600  Expires: Tue, 27 Nov 2012 23:41:00 GMT	application/octet-stream	firefox-20.0a1.en-us.win32.installer-stub:5652			
4	206	HTTP	ftp.mozilla.org	/pub/mozilla.org/firefox/nightly/latest-mozilla-central/firefox-19.0a1.en-US.win32.installer.exe	4,194,305	max-age=3600  Expires: Tue, 27 Nov 2012 23:34:53 GMT	application/octet-stream	firefox-20.0a1.en-us.win32.installer-stub:5652			
5	206	HTTP	ftp.mozilla.org	/pub/mozilla.org/firefox/nightly/latest-mozilla-central/firefox-19.0a1.en-US.win32.installer.exe	4,194,305	max-age=3600  Expires: Tue, 27 Nov 2012 23:36:45 GMT	application/octet-stream	firefox-20.0a1.en-us.win32.installer-stub:5652			
6	206	HTTP	ftp.mozilla.org	/pub/mozilla.org/firefox/nightly/latest-mozilla-central/firefox-19.0a1.en-US.win32.installer.exe	4,194,305	max-age=3600  Expires: Tue, 27 Nov 2012 23:36:47 GMT	application/octet-stream	firefox-20.0a1.en-us.win32.installer-stub:5652			
7	206	HTTP	ftp.mozilla.org	/pub/mozilla.org/firefox/nightly/latest-mozilla-central/firefox-19.0a1.en-US.win32.installer.exe	3,723,264	max-age=3600  Expires: Tue, 27 Nov 2012 23:36:49 GMT	application/octet-stream	firefox-20.0a1.en-us.win32.installer-stub:5652			
8	200	HTTP	download-stats.mozilla.org	/stub/v3/nightly/en-US/0/2/1/20020/1/1/	0		text/html; charset=UTF-8	firefox-20.0a1.en-us.win32.installer-stub:5652			

What's weird in the above logs is the multiple ftp URLs being seen. I don't think this is a stub issue (possibly a CDN issue?), but why is this happening?
Results:

Tests for Better HTTP Handling - Windows 7

[X] Test that I can install Firefox to its default install directory as an admin with the stub installer
[X] Test that I can install Firefox to its default install directory as an admin with the stub installer with Fiddler running and a ping is sent
[X] Test that I fail with an appropriate error if I install Firefox with the stub installer with no internet connection
[X] Test that I can install Firefox with an antivirus running with default preferences on the antivirus with the stub installer
[X] Test that I can install Firefox to its default install directory as an admin with the stub installer with Wireshark running and a ping is sent

Tests for Better HTTP Handling - Windows XP

[X] Test that I can install Firefox to its default install directory as an admin with the stub installer
[X] Test that I can install Firefox to its default install directory as an admin with the stub installer with Fiddler running and a ping is sent
[X] Test that I fail with an appropriate error if I install Firefox with the stub installer with no internet connection
[X] Test that I can install Firefox with an antivirus running with default preferences on the antivirus with the stub installer
[X] Test that I can install Firefox to its default install directory as an admin with the stub installer with Wireshark running and a ping is sent

Issues:

* [NEW] Multiple ftp URLs seen fired in Fiddler when installing the stub
* [NEW] Download interrupted prompt appears without a network connection with Fiddler running; clicking OK to continue closes the stub, which doesn't make sense
* [KNOWN BUG] Smashed UI occasionally seen when installing the stub with Fiddler
* [KNOWN BUG] No timeout seen when trying to download stub bits without a network connection

Safe to Uplift: Yes

Brian, any opinions on the issues hit? Generally, I think you are okay to uplift safely, although I'm curious what those two NEW issues are.
Flags: needinfo?(netzen)
The multiple requests are expected and on purpose: it now does one request per chunk instead of one request total. The download-interrupted prompt with Fiddler sounds strange, but if it only happens with Fiddler, that sounds OK.
Flags: needinfo?(netzen)
This one was known before this patch? I.e. can you also reproduce this on Aurora stub?
* [KNOWN BUG] No timeout seen when trying to download stub bits without a network connection
(In reply to Brian R. Bondy [:bbondy] from comment #37)
> This one was known before this patch? I.e. can you also reproduce this on
> Aurora stub?
> * [KNOWN BUG] No timeout seen when trying to download stub bits without a
> network connection

Yup, it's always been known; see bug 797998.
Filed bug 815938 about the download-interrupted prompt with Fiddler. Marking as verified. Good to uplift.
Status: RESOLVED → VERIFIED
Keywords: verifyme
Comment on attachment 682612 [details] [diff] [review]
Patch v1 - Better redirect handling

[Triage Comment]
Approving for uplift to Beta 18, given positive QA verification.
Attachment #682612 - Flags: approval-mozilla-beta+
Attachment #681639 - Flags: approval-mozilla-beta+
Attachment #681658 - Flags: approval-mozilla-beta+
Consolidated the m-c pushes into one and pushed to beta here:
http://hg.mozilla.org/releases/mozilla-beta/rev/7e3f8479f43e

Thanks for the detailed feedback/testing, jsmith, and for the mozilla-beta approval, akeybl.
Keywords: verifyme