Exception in asynchronous callback not visible in web or error console

VERIFIED FIXED

Status

defect
P1
normal
VERIFIED FIXED
9 years ago
Last year

People

(Reporter: catlee, Assigned: msucan)

Tracking

Trunk
Dependency tree / graph

Firefox Tracking Flags

(blocking2.0 betaN+)

Details

(Whiteboard: [patchclean:0118][softblocker])

Attachments

(1 attachment, 7 obsolete attachments)

Reporter

Description

9 years ago
I have code that runs after receiving data from an async jquery call.  If that code tries to run a function that doesn't exist, I don't get an error in the web console, nor in the error console.  I see just 'hi'.

<script type="text/javascript" src="http://code.jquery.com/jquery-1.4.4.min.js"></script>
<script type="text/javascript">
    $(document).ready(function() {
            $.get("http://google.ca", function(data)
            {
                console.info('hi');
                foo(1);
                console.info('bye');
            });
    });
</script>
Any chance to translate what that jQuery code does? What is $.get()?
Ah, I think I found the documentation, but it is still not clear when and by
whom the callback is called.
From looking at the jQuery source [1], get is the same as jQuery.ajax.  It ends up creating an XHR, and from onreadystatechange we end up calling:

>	handleSuccess: function( s, xhr, status, data ) {
>		// If a local callback was specified, fire it and pass it the data
>		if ( s.success ) {
>			s.success.call( s.context, data, status, xhr );
>		}
>
>		// Fire the global callback
>		if ( s.global ) {
>			jQuery.triggerGlobal( s, "ajaxSuccess", [xhr, s] );
>		}
>	},

[1] http://code.jquery.com/jquery-1.4.4.js
Turns out this is due to the HUDService's incorrect error reporting facilities.  I can't find the correct component for that, however.  I think this should block, or at least get approval, because it's a broken part of a new feature.
blocking2.0: --- → ?
Component: DOM → Developer Tools
Product: Core → Firefox
QA Contact: general → developer.tools

Comment 6

9 years ago
This is quite likely fixed by some patches that are in the process of review (or just landed, possibly). I'm assigning this to Mihai who has been working on ensuring that things get logged properly.
Assignee: nobody → mihai.sucan
Blocks: devtools4

Updated

9 years ago
Priority: -- → P1
Assignee

Comment 7

9 years ago
This looks to me like a different issue, one that hasn't been yet tackled by any other patch.

Josh: can you explain your patch? As I understand it, our network logging code seems to intervene in the way network stream failures are reported. What you did is add a generic Cr.NS_ERROR_FAILURE as the network request cancel reason. This suggests to me that (1) our Web Console code doesn't play nice with stream errors, and (2) the patch doesn't always pass the real stream error to the cancel() method.
blocking2.0: ? → betaN+
(In reply to comment #0)
> I have code that runs after receiving data from an async jquery call.  If that
> code tries to run a function that doesn't exist, I don't get an error in the
> web console, nor in the error console.

Is this a regression compared to 3.6?
Gavin: this isn't a regression, since it turns out that the error console sees the error just fine if the web console isn't open, the reason for which is made clear by the patch.

Mihai: I admit to not knowing precisely how JS exceptions translate to result codes.  If you tell me that they'll magically convert to the correct value, I'll trust you.  However, we need to ensure that we report the exceptions to the error console or they'll just get swallowed as they currently do.
(In reply to comment #9)
> Gavin: this isn't a regression, since it turns out that the error console sees
> the error just fine if the web console isn't open

That isn't what catlee says he's seeing (in the text I quoted). The two consoles are entirely independent, so whether or not one or the other is open shouldn't change behavior. Are you sure you're seeing the same thing he is? What testcase are you using to test?
(In reply to comment #10)
> (In reply to comment #9)
> > Gavin: this isn't a regression, since it turns out that the error console sees
> > the error just fine if the web console isn't open
> 
> That isn't what catlee says he's seeing (in the text I quoted). The two
> consoles are entirely independent, so whether or not one or the other is open
> shouldn't change behavior. Are you sure you're seeing the same thing he is?
> What testcase are you using to test?

I did some investigative work and discovered that the web console's network listeners swallow exceptions in the stream listener callbacks, causing the observed behaviour change.  The original bug report was not fully correct.
The original report being "not correct" seems like an odd conclusion to reach based on what you've said. Your issue sounds different than the one catlee reported. Exceptions for web content don't get thrown in the chrome JS that you're patching, so I don't think you're talking about the same exceptions.
Assignee

Comment 13

9 years ago
This bug is about eating JS exceptions in event handlers coming from network requests, like onreadystatechange for XHR.

The issue is well known in the Firebug world; they hit the same problem and solved it, and I am currently investigating their fix.

Relevant bugs: 492534 and 515051. I am now spelunking through the firebug 1.6 source code to see their solution. I have an idea already of how to do it. I hope to have the patch ready some time tomorrow.
Granted, I was on a train without internet access yesterday when investigating this, so I'll explain the process I used.  I put catlee's test case into a local HTML file which I opened via file://, saved jQuery locally, and had the page load that.  Given this setup, the error console will show the correct error when the web console is closed, but no error when the web console is open (which also shows no error).

onreadystatechange is the callback function being triggered here, and in the stacks I was viewing, it was being triggered from nsXMLHttpRequest::OnStopRequest -> nsXMLHttpRequest::Error -> nsXMLHttpRequest::ChangeState.  Now, OnStopRequest passes through the Web Console, since it observes all traffic.  Since onreadystatechange was throwing, the web console's OnStopRequest was swallowing it.
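The swallowing behaviour described above can be modelled in plain JavaScript. This is only a sketch with hypothetical names (`pageListener`, `reportError`, and so on), not the actual Firefox code: a console listener that forwards to the downstream handler inside a bare try/catch eats any exception the page's handler throws, so it never reaches the error reporter.

```javascript
// Plain-JS model of the bug (hypothetical names, not Firefox source).
const reported = [];

function reportError(e) {            // stand-in for the error console
  reported.push(String(e.message));
}

const pageListener = {
  onStopRequest() {
    undefinedFunction();             // ReferenceError, like foo(1) in comment 0
  },
};

// The web console's listener sits in front and forwards the call.
// Wrapping the forward in a bare try/catch swallows the page's exception.
const swallowingListener = {
  onStopRequest() {
    try {
      pageListener.onStopRequest();
    } catch (e) {
      // silently dropped -- the error console never sees it
    }
  },
};

swallowingListener.onStopRequest();
console.log(reported.length);        // 0: the ReferenceError was eaten

// A correct wrapper re-reports instead of suppressing:
const reportingListener = {
  onStopRequest() {
    try {
      pageListener.onStopRequest();
    } catch (e) {
      reportError(e);                // surface it to the error console
    }
  },
};

reportingListener.onStopRequest();
console.log(reported.length);        // 1
```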

Further corroborating this point, when the page became cached, I began to see the error pop up in the web console.  This occurred because it no longer went through the global http OnStopRequest listener, and took a path directly through nsHttpChannel instead.
(In reply to comment #12)
> The original report being "not correct" seems like an odd conclusion to reach
> based on what you've said. Your issue sounds different than the one catlee
> reported. Exceptions for web content don't get thrown in the chrome JS that
> you're patching, so I don't think you're talking about the same exceptions.

Furthermore, I was watching over catlee's shoulder when he was first poking around this bug, and we never tried looking at the error console on its own without the web console open.
Oh I see, the network listener itself is interfering (and that's only installed once you open the web console). Still odd that swallowing an exception thrown in chrome affects the reporting of the content exception, but I guess I shouldn't put anything past xpconnect :) Sorry for the sidetrack!
Assignee

Comment 17

9 years ago
Comment on attachment 498930 [details] [diff] [review]
Ensure HUDService reports all stream listener errors properly.

This patch breaks the Web Console. Cu.reportError(ex) is not allowed (the call throws a bad exception), and the aRequest.cancel(Cr.NS_ERROR_FAILURE) is ineffective. Also, Cr is undefined - the HUDService code does not have the Cr = Components.results constant defined.

Marking this patch as obsolete. Will attach a patch in a few moments.
Attachment #498930 - Attachment is obsolete: true
Assignee

Comment 18

9 years ago
Posted patch proposed patch (obsolete) — Splinter Review
The proposed patch fixes the reported issue and includes a mochitest.

For an explanation of the problem please see bug 515051 comment #10. For an explanation of the solution please see bug 515051 comment #44. Both comments represent states of discussion at that time - for a more complete understanding ... read the rest. :)

Obviously, Firebug's 1.6 source code was a really useful resource as well. Our network logging code already borrows a lot of the same approaches used in Firebug's source.

This patch makes important changes to the way we monitor and read the request response. Instead of inserting our JS code into the network listeners chain, we fork the chain with an nsIStreamListenerTee, and we pipe the data through an nsIPipe buffer. All of this allows JS exceptions in event handlers from the web page to be reported.
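The tee idea can be sketched in plain JavaScript (a model with hypothetical names, not the XPCOM code): rather than wrapping the original listener's calls in try/catch, a tee copies each chunk into a side buffer and forwards the call untouched, so an exception thrown downstream propagates along the normal reporting path while the console still has the data.

```javascript
// Plain-JS model of the stream-listener-tee approach (hypothetical names).
function makeTee(originalListener, sideBuffer) {
  return {
    onDataAvailable(chunk) {
      sideBuffer.push(chunk);                  // copy for the console's pipe
      originalListener.onDataAvailable(chunk); // forward untouched: no try/catch,
                                               // so downstream throws propagate
    },
  };
}

const buffer = [];
const throwingPage = {
  onDataAvailable() {
    throw new Error("page handler failed");    // like foo(1) in the testcase
  },
};

const tee = makeTee(throwingPage, buffer);

let seen = null;
try {
  tee.onDataAvailable("<html>...</html>");
} catch (e) {
  seen = e.message;                            // the exception escaped the tee
}

console.log(seen);            // "page handler failed"
console.log(buffer.length);   // 1: the console still captured the data
```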
Attachment #499500 - Flags: feedback?(rcampbell)
Assignee

Updated

9 years ago
Status: NEW → ASSIGNED
OS: Linux → All
Hardware: x86_64 → All
Whiteboard: [patchclean:1223]
Comment on attachment 499500 [details] [diff] [review]
proposed patch

only have a minor quibble with the test code:

In your test file, performTest() is really at the end so it's kind of misnamed. I was looking for it to run some sort of test when all it's doing is checking. It's more like checkResults() or checkOutput().

Otherwise, I think this is a very nice looking patch that should fix us up. If I'd gotten to this bug sooner, I probably could have pointed you to the referenced bugs, but it looks like you found them anyway. Good work!
Attachment #499500 - Flags: feedback?(rcampbell) → feedback+
Assignee

Comment 20

9 years ago
Posted patch updated patch (obsolete) — Splinter Review
Updated the patch, renamed the test function as requested. Thanks Rob for the feedback+! Asking for review from Shawn.
Attachment #499500 - Attachment is obsolete: true
Attachment #499659 - Flags: review?(sdwilsh)
Assignee

Updated

9 years ago
Whiteboard: [patchclean:1223] → [patchclean:1224]
Comment on attachment 499659 [details] [diff] [review]
updated patch

>diff --git a/toolkit/components/console/hudservice/HUDService.jsm b/toolkit/components/console/hudservice/HUDService.jsm

>   onDataAvailable: function RL_onDataAvailable(aRequest, aContext, aInputStream,
>                                                 aOffset, aCount)

>-    let data = NetUtil.readInputStreamToString(aInputStream, aCount);
>+    let bis = Cc["@mozilla.org/binaryinputstream;1"].
>+              createInstance(Ci.nsIBinaryInputStream);
>+    bis.setInputStream(aInputStream);
>+
>+    let data = bis.readBytes(aCount);

Mentioned this on IRC, but just noting here for reference - I think this change is unnecessary/unwanted.
Comment on attachment 499659 [details] [diff] [review]
updated patch

To make sure I understand what we are doing here...
This patch makes us register as an async listener, which means we do not override the original listener, correct?

>+++ b/toolkit/components/console/hudservice/HUDService.jsm
>   /**
>+   * Set the async listener for the given nsIAsyncInputStream.
>+   *
>+   * @param nsIAsyncInputStream aStream
>+   * @param nsIInputStreamCallback aListener
>+   * @returns void
>+   */
More description please

>+  setAsyncListener: function RL_setAsyncListener(aStream, aListener)
>+  {
>+    try {
>+      // Throws NS_BASE_STREAM_CLOSED if the stream is closed normally or at
>+      // end-of-file.
>+      aStream.available();
>+    }
>+    catch (ex) {
>+      return;
>+    }
I don't understand why you need to call available here.  The documentation for asyncWait seems to indicate it's unnecessary, at least:
https://mxr.mozilla.org/mozilla-central/source/xpcom/io/nsIAsyncInputStream.idl#86

>+    try {
>+      // Asynchronously wait for the stream to be readable or closed.
>+      aStream.asyncWait(aListener, 0, 0, null);
You want to pass the main thread as the fourth argument here.

>   onDataAvailable: function RL_onDataAvailable(aRequest, aContext, aInputStream,
>                                                 aOffset, aCount)
>   {
>     this.setResponseHeader(aRequest);
> 
>-    let data = NetUtil.readInputStreamToString(aInputStream, aCount);
>+    let bis = Cc["@mozilla.org/binaryinputstream;1"].
>+              createInstance(Ci.nsIBinaryInputStream);
>+    bis.setInputStream(aInputStream);
>+
>+    let data = bis.readBytes(aCount);
Pretty sure you don't want this change seeing as how readInputStreamToString was implemented specifically for this use.

>   onStartRequest: function RL_onStartRequest(aRequest, aContext)
>   {
>+    this.request = aRequest;
>+    this.setAsyncListener(this.sink.inputStream, this);
A comment about why we use setAsyncListener would be nice here

>   onStopRequest: function RL_onStopRequest(aRequest, aContext, aStatusCode)
>   {
>+    this.setAsyncListener(this.sink.inputStream, null);
ditto

>+  /**
>+   * The nsIInputStreamCallback.
>+   *
>+   * @param nsIAsyncInputStream aStream
>+   * @returns void
>+   */
More verbosity please

>   QueryInterface: XPCOMUtils.generateQI([
>-    Ci.nsIStreamListener,
>-    Ci.nsISupports
>+    Ci.nsIStreamListener, Ci.nsIInputStreamCallback,
>+    Ci.nsIRequestObserver, Ci.nsISupports
Went from one interface per line to multiple per line.  Would prefer it if you kept with one per line.

>+            let tee = Cc["@mozilla.org/network/stream-listener-tee;1"].
>+                      createInstance(Ci.nsIStreamListenerTee);
>+            tee = tee.QueryInterface(Ci.nsIStreamListenerTee);
createInstance will already do the QI for you

>+            // The response will be written into the outputStream of this pipe.
This comment appears to state the obvious.  I'm fine with that if you also explain why we need the pipe in the first place.

>+            // Both ends of the pipe must be blocking.
>+            let sink = Cc["@mozilla.org/pipe;1"].createInstance(Ci.nsIPipe);
>+            sink.init(false, false, 0x20000, 0x4000, null);
Magic numbers!  Please pull these out into constants!

>+++ b/toolkit/components/console/hudservice/tests/browser/browser_webconsole_bug_618078_network_exceptions.js
>@@ -0,0 +1,81 @@
>+/* vim:set ts=2 sw=2 sts=2 et: */
>+/* ***** BEGIN LICENSE BLOCK *****
>+ * Any copyright is dedicated to the Public Domain.
>+ * http://creativecommons.org/publicdomain/zero/1.0/
>+ *
>+ * Contributor(s):
>+ *  Mihai Șucan <mihai.sucan@gmail.com>
>+ *
>+ * ***** END LICENSE BLOCK ***** */
obligatory "this isn't the boilerplate specified on https://www.mozilla.org/MPL/boilerplate-1.1/"
Attachment #499659 - Flags: review?(sdwilsh) → review-
Assignee

Comment 23

9 years ago
(In reply to comment #22)
> Comment on attachment 499659 [details] [diff] [review]
> updated patch
> 
> To make sure I understand what we are doing here...
> This patch makes us register as an async listener, which means we do not
> override the original listener, correct?

We override the original listener with an nsIStreamListenerTee which gives the output stream both to us and to the original listener.


> >+    try {
> >+      // Asynchronously wait for the stream to be readable or closed.
> >+      aStream.asyncWait(aListener, 0, 0, null);
> You want to pass the main thread as the fourth argument here.

Main thread? How do I do that?

> >   onDataAvailable: function RL_onDataAvailable(aRequest, aContext, aInputStream,
> >                                                 aOffset, aCount)
> >   {
> >     this.setResponseHeader(aRequest);
> > 
> >-    let data = NetUtil.readInputStreamToString(aInputStream, aCount);
> >+    let bis = Cc["@mozilla.org/binaryinputstream;1"].
> >+              createInstance(Ci.nsIBinaryInputStream);
> >+    bis.setInputStream(aInputStream);
> >+
> >+    let data = bis.readBytes(aCount);
> Pretty sure you don't want this change seeing as how readInputStreamToString
> was implemented specifically for this use.

Ah, ok. I wasn't sure if this change was needed or not.

Thanks for your review! I will update the patch accordingly.
(In reply to comment #23)
> Main thread? How do I do that?
nsIThreadManager has a mainThread property
Assignee

Comment 25

9 years ago
Posted patch patch update 2 (obsolete) — Splinter Review
Updated the patch as requested.

I had to make a few changes because adding that fourth argument to asyncWait() - Services.tm.mainThread - broke the existing code due to the async behavior. With these changes the code now runs fine. I will attach an interdiff to make it clearer what changed.

I hope I have addressed your comments. Thanks!
Attachment #499659 - Attachment is obsolete: true
Attachment #500088 - Flags: review?(sdwilsh)
Assignee

Comment 26

9 years ago
Posted patch patch update 2 interdiff (obsolete) — Splinter Review
Interdiff showing the latest changes.
Assignee

Updated

9 years ago
Whiteboard: [patchclean:1224] → [patchclean:1228]
Comment on attachment 500089 [details] [diff] [review]
patch update 2 interdiff

>+++ b/toolkit/components/console/hudservice/HUDService.jsm
>   setAsyncListener: function RL_setAsyncListener(aStream, aListener)
>   {
>     try {
>       // Asynchronously wait for the stream to be readable or closed.
>+      aStream.asyncWait(aListener, 0, 0, Services.tm.mainThread);
>     }
>     catch (ex) { }
Fairly certain you do not need this try catch.  Why do you have it?

>   /**
>+   * The nsIInputStreamCallback for when the request input stream is ready -
>+   * either it has more data or it is closed.
>    *
>    * @param nsIAsyncInputStream aStream
nit: add comments about aStream

>+    let available = -1;
>     try {
>-      let available = aStream.available();
>-      this.onDataAvailable(this.request, null, aStream, 0, available);
>+      available = aStream.available();
>     }
>     catch (ex) { }
Do we actually need the try-catch now?  We should always have bytes available when we use asyncWait now, right?

>+    if (available != -1) {
>+      try {
>+        if (available != 0) {
>+          this.onDataAvailable(this.request, null, aStream, 0, available);
>+        }
>+      }
>+      catch (ex) { }
Why do we need a try-catch here?  In general, it's good to add a comment to the code about why you throw away errors like this, so when someone else comes along they don't have to figure out why it was done.

>   QueryInterface: XPCOMUtils.generateQI([
>+    Ci.nsIStreamListener,
>+    Ci.nsIInputStreamCallback,
>+    Ci.nsIRequestObserver,
>+    Ci.nsISupports
nit: add trailing comma

>+++ b/toolkit/components/console/hudservice/tests/browser/browser_webconsole_bug_618078_network_exceptions.js
> /* vim:set ts=2 sw=2 sts=2 et: */
>+/*
>  * Any copyright is dedicated to the Public Domain.
>  * http://creativecommons.org/publicdomain/zero/1.0/
>  *
>  * Contributor(s):
>  *  Mihai Șucan <mihai.sucan@gmail.com>
>+ */
There isn't meant to be a contributors section in the public domain header (per https://www.mozilla.org/MPL/boilerplate-1.1/pd-c).
Comment on attachment 500088 [details] [diff] [review]
patch update 2

per comment 27
Attachment #500088 - Flags: review?(sdwilsh) → review-
Assignee

Comment 29

9 years ago
(In reply to comment #27)
> Comment on attachment 500089 [details] [diff] [review]
> patch update 2 interdiff
> 
> >+++ b/toolkit/components/console/hudservice/HUDService.jsm
> >   setAsyncListener: function RL_setAsyncListener(aStream, aListener)
> >   {
> >     try {
> >       // Asynchronously wait for the stream to be readable or closed.
> >+      aStream.asyncWait(aListener, 0, 0, Services.tm.mainThread);
> >     }
> >     catch (ex) { }
> Fairly certain you do not need this try catch.  Why do you have it?

Good point. I can remove that.


> >+    let available = -1;
> >     try {
> >-      let available = aStream.available();
> >-      this.onDataAvailable(this.request, null, aStream, 0, available);
> >+      available = aStream.available();
> >     }
> >     catch (ex) { }
> Do we actually need the try-catch now?  We should always have bytes available
> when we use asyncWait now, right?

It may throw when the stream is closed, and I can't be sure when onInputStreamReady is fired (before or after onStopRequest). This was a problem that affected the code when I added Services.tm.mainThread. It's all async, and the code needs to work in both cases.

Can I leave it? I believe it's not safe to remove the try-catch in this case.

> >+    if (available != -1) {
> >+      try {
> >+        if (available != 0) {
> >+          this.onDataAvailable(this.request, null, aStream, 0, available);
> >+        }
> >+      }
> >+      catch (ex) { }
> Why do we need a try-catch here?  In general, it's always good to add a comment
> about why you throw away errors like this to the code so when someone else
> comes along, they don't have to try and figure out why this was done.

Good point. Will remove this try-catch here as well.

Thanks again for your review! Will update the patch accordingly.
Assignee

Comment 30

9 years ago
Posted patch patch update 3 (obsolete) — Splinter Review
Updated the patch as requested. I removed two of the try-catches you asked about. I did not remove the try-catch around aStream.available() due to concerns explained in the previous comment.
Attachment #500088 - Attachment is obsolete: true
Attachment #500089 - Attachment is obsolete: true
Attachment #500194 - Flags: review?(sdwilsh)
Assignee

Updated

9 years ago
Whiteboard: [patchclean:1228] → [patchclean:1229]
(In reply to comment #29)
> It may throw when the stream is closed, and I can't be sure when
> onInputStreamReady is fired (before or after onStopRequest). This was a problem
> that affected the code when I added Services.tm.mainThread. It's all async, and
> the code needs to work in both cases.
> 
> Can I leave it? I believe it's not safe to remove the try-catch in this case.
Oops.  Because I was only looking at the interdiff, I missed the bigger picture.  available will throw if the input stream has already been closed, which is the issue you were hitting.  Before, when we were not passing the main thread into asyncWait, we were probably getting called synchronously (aka, we got lucky usually).  On the plus side, our code will always take the same path now :)

However, you should not need to call onInputStreamReady in onStopRequest (we'll still be called by asyncWait regardless).  Calling onStreamClose in onStopRequest seems a bit premature too.  Since everything is asynchronous, we could possibly not have gotten all the data from the input stream yet.  onInputStreamReady queues itself up to be notified again (with the call to setAsyncListener) when it has bytes available, or it calls onStreamClose when the stream was closed.

I'm cc'ing a necko peer here (bz) just to make sure what I'm saying is correct.
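The "same path" point from comment 31 generalizes to any callback API: a callback that sometimes runs inline and sometimes runs later creates two code paths and intermittent bugs. A minimal sketch (hypothetical names; the real fix is passing Services.tm.mainThread to asyncWait so the callback is always queued, never run inline):

```javascript
// Model of why forcing async dispatch gives one consistent code path.
const queue = [];                 // stand-in for the main-thread event queue

function asyncWaitSometimesSync(cb, cached) {
  if (cached) cb();               // inline call: a second, surprising path
  else queue.push(cb);
}

function asyncWaitAlwaysQueued(cb) {
  queue.push(cb);                 // always deferred: one path
}

let order = [];
asyncWaitSometimesSync(() => order.push("callback"), true);
order.push("after-call");
// cached case ran inline: order is ["callback", "after-call"]

order = [];
asyncWaitAlwaysQueued(() => order.push("callback"));
order.push("after-call");
while (queue.length) queue.shift()();  // drain the "main thread"
console.log(order);               // ["after-call", "callback"] every time
```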
Assignee

Comment 32

9 years ago
(In reply to comment #31)
> (In reply to comment #29)
> > It may throw when the stream is closed, and I can't be sure when
> > onInputStreamReady is fired (before or after onStopRequest). This was a problem
> > that affected the code when I added Services.tm.mainThread. It's all async, and
> > the code needs to work in both cases.
> > 
> > Can I leave it? I believe it's not safe to remove the try-catch in this case.
> Oops.  Because I was only looking at the interdiff, I missed the bigger
> picture.  available will throw if the input stream has already been closed,
> which is the issue you were hitting.  Before, when we were not passing the main
> thread into asyncWait, we were probably getting called synchronously (aka, we
> got lucky usually).  On the plus side, our code will always take the same path
> now :)

Sure, the change to use Services.tm.mainThread looks to be fine. The code is now more reliable.


> However, you should not need to call onInputStreamReady in onStopRequest (we'll
> still be called by asyncWait regardless).  Calling onStreamClose in
> onStopRequest seems a bit premature too.  Since everything is asynchronous, we
> could possibly not have gotten all the data from the input stream yet. 
> onInputStreamReady ques itself up to be notified again (with the call to
> setAsyncListener) when it has bytes available, or it calls onStreamClose when
> the stream was closed.

That's what I also wanted to do, and actually did. The problem I ran into is that asyncWait calls onInputStreamReady for the last time with a huge delay of a few seconds, so the call there to onStreamClose executes "too late" for the user. Tests take a lot of time to run as well.

The only solution I was able to come up with is to call onInputStreamReady() in onStopRequest and onStreamClose. Thus, I assume that when onStopRequest is called the stream is also ready. This seems to hold true - no problems with code execution.

Any better ideas?
(In reply to comment #32)
> That's what I also wanted to do, and actually did. Problem I ran into is that
> asyncWait will call onInputStreamReady for the last time with a huge delay of a
> few seconds, so the call there to onStreamClose executes "too late" for the
> user. Tests take a lot of time to run as well.
I think we should file a bug on this then; it doesn't seem right that it takes seconds to get a callback for this.  We should try to fix platform bugs instead of working around them first.  We can work around them when it is clear a fix will not come in a timely manner.
Assignee

Comment 34

9 years ago
If any of you want to test what I mean: just remove the calls to onInputStreamReady() and onStreamClose() from onStopRequest(). You'll see that it takes a few seconds before the last onInputStreamReady() is called - the last call that also invokes onStreamClose.

It will take a few seconds before you see the response body in the network panel. (You'll have to make sure you shift-click reload, so you don't get the cached response, and reopen the network panel a few times for the request you're interested in.)

I'd actually want others to test that. Shall I post an alternate patch with the changes and some debug logging?
So what part of the patch do you want me to look at, and what's the question, exactly?
(In reply to comment #35)
> So what part of the patch do you want me to look at, and what's the question,
> exactly?
Mostly I want to make sure that I'm not talking crazy in comment 31.  Additionally, any advice you have about comment 33 would be appreciated.
I just feel like I'm missing some context for comment 31...  So we have a stream listener, right?  But it also has some other input stream?  How, if at all, is this other input stream connected with the notifications the stream listener is supposed to get?

Taking seconds to get data after asyncWait seems odd, but could depend on where the data is coming from, what else is hogging the event loop, etc...  Again, I don't seem to have enough context here to comment intelligently.
Assignee

Comment 38

9 years ago
(In reply to comment #37)
> I just feel like I'm missing some context for comment 31...  So we have a
> stream listener, right?  But it also has some other input stream?  How, if at
> all, is this other input stream connected with the notifications the stream
> listener is supposed to get?
> 
> Taking seconds to get data after asyncWait seems odd, but could depend on where
> the data is coming from, what else is hogging the event loop, etc...  Again, I
> don't seem to have enough context here to comment intelligently.

For better context I can only point you to some of the existing comments describing the bug, the situation, and the solution. Please see comment 0, which explains that exceptions in the onreadystatechange event handler of the XHR go missing when the Web Console is open. Also please see comment 18, which links to two other comments from a different bug. Finally, comments 31 to 34 are relevant to the currently attached patch.

Thanks!
(In reply to comment #37)
> I just feel like I'm missing some context for comment 31...  So we have a
> stream listener, right?  But it also has some other input stream?  How, if at
> all, is this other input stream connected with the notifications the stream
> listener is supposed to get?
The stream listener is listening to an input stream on an nsIPipe, which is created here:
https://bugzilla.mozilla.org/attachment.cgi?id=500194&action=diff#a/toolkit/components/console/hudservice/HUDService.jsm_sec6
OK.  Let me rephrase my question.  What the hell is the code being modified actually trying to do?
Assignee

Comment 41

9 years ago
(In reply to comment #40)
> OK.  Let me rephrase my question.  What the hell is the code being modified
> actually trying to do?

The code tries to log all network requests happening in the tab associated with the open Web Console. For that purpose we use an http activity observer (via the activity distributor). Network request and response bodies are also logged if the user enables the associated option.
OK.  So the pipe is our attempt to log the response body?  Thanks.  I guess now I get to go figure out how you're using it, exactly, so I can try to answer the questions above...
bz - I'll be around on irc today if you want to talk about this.  I don't want to review this until I make sure all the assumptions about the APIs are correct here.
Sorry, I was out all day on Thursday and offline most of today working on other things...  This is at the top of my list to look at on Monday morning; I just need to sort out what the code is doing in detail.  :(
Whiteboard: [patchclean:1229] → [patchclean:1229][softblocker]
OK, general questions/comments on the patch, now that I understand what's going on:

1)  Why are the 1mb and 128kb constants decimal, not hex (or even just write
    them out as 1024*1024 and 128*1024)?  Would be more readable!
2)  Why 128kb buffers?  Necko usually uses smaller buffers than that...  I'm not
    sure if the necko-default buffer size is exposed to JS, though.  Worth
    checking.
3)  The business about the pipe being blocking should document that this is a
    requirement the stream listener tee imposes.
4)  Given the finite blocking pipe, it could fill up (12MB of data).  And then
    you lose, no?  "Lose" in the "browser UI hangs" sense.  Can you possibly
    read the pipe off the main thread?  Or make it infinite?  Or something?
5)  Technically your onDataAvailable call to yourself is bogus because 0 is the
    wrong thing to pass for aOffset.  But since you don't use aOffset it might
    not matter much.... just don't start using it!

Now as far as comment 31:

> However, you should not need to call onInputStreamReady in onStopRequest
> (we'll still be called by asyncWait regardless).

I'm not sure about that as things stand.  But you _should_ probably close the output end of the pipe, and _that_ would make sure your callback will be called.

If you don't do that but remove the onInputStreamReady call, you may well end up getting your callback when the stream listener tee is GCed or something silly like that.  Which could explain your several-second lag.  Does closing the output end make that lag go away?
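The close-the-output-end suggestion can be sketched with a toy pipe model (plain JS, hypothetical names, not nsIPipe): closing the writer end immediately fires a pending reader callback instead of leaving it waiting until the stream is torn down much later.

```javascript
// Toy pipe model: closing the output end immediately notifies a waiting
// reader callback, instead of leaving it pending.
function makePipe() {
  const chunks = [];
  let closed = false;
  let waiter = null;

  const notify = () => {
    if (waiter && (chunks.length || closed)) {
      const cb = waiter;
      waiter = null;
      cb();                                    // deliver the pending callback
    }
  };

  return {
    input: {
      asyncWait(cb) { waiter = cb; notify(); }, // fire now if data or closed
      read() { return chunks.shift(); },
      get closed() { return closed && chunks.length === 0; },
    },
    output: {
      write(c) { chunks.push(c); notify(); },
      close() { closed = true; notify(); },     // wakes the waiter right away
    },
  };
}

const pipe = makePipe();
const events = [];

function onInputStreamReady() {
  if (pipe.input.closed) { events.push("closed"); return; }
  events.push("data:" + pipe.input.read());
  pipe.input.asyncWait(onInputStreamReady);     // re-arm, like the patch does
}

pipe.input.asyncWait(onInputStreamReady);
pipe.output.write("body");
pipe.output.close();                            // triggers the final callback
console.log(events);  // ["data:body", "closed"]
```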
A thought.  Instead of messing around with the blocking pipe issue, it might be faster and easier to just write a C++ component similar to the existing converter input/output streams which implements nsIOutputStream, is initialized with a charset, and converts the incoming bytes into Unicode from that charset into an nsString member.  And you can read the nsString whenever you want.  In your case, you'd just read it in OnStopRequest.
Comment on attachment 500194 [details] [diff] [review]
patch update 3

Canceling this request.  Please address bz's comments, and then I'll take a look :)
Attachment #500194 - Flags: review?(sdwilsh)
Assignee

Comment 48

9 years ago
Thanks Boris for your comments! :)


(In reply to comment #45)
> OK, general questions/comments on the patch, now that I understand what's going
> on:
> 
> 1)  Why are the 1mb and 128kb constants decimal, not hex (or even just write
>     them out as 1024*1024 and 128*1024)?  Would be more readable!

Sure, I can do that.

> 2)  Why 128kb buffers?  Necko usually uses smaller buffers than that...  I'm
> not
>     sure if the necko-default buffer size is exposed to JS, though.  Worth
>     checking.

Good point. It looks like I can use network.buffer.cache.count (which is 24) and network.buffer.cache.size (which is 32768).
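For reference, a quick back-of-the-envelope sketch of the capacity those prefs imply (the values are hard-coded assumptions taken from this comment, not read live from the preferences service):

```javascript
// Capacity implied by the Necko buffer prefs mentioned above. Values are
// assumptions copied from this comment, not queried from Firefox.
const SEGMENT_SIZE = 32768; // network.buffer.cache.size
const SEGMENT_COUNT = 24;   // network.buffer.cache.count

const capacityBytes = SEGMENT_SIZE * SEGMENT_COUNT;
console.log(capacityBytes, capacityBytes / 1024 + " KiB"); // 786432, 768 KiB
```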

> 3)  The business about the pipe being blocking should document that this is a
>     requirement the stream listener tee imposes.

Will do.


> 4)  Given the finite blocking pipe, it could fill up (12MB of data).  And then
>     you lose, no?  "Lose" in the "browser UI hangs" sense.  Can you possibly
>     read the pipe off the main thread?  Or make it infinite?  Or something?

Hm? I wouldn't have expected it to hang the UI when it fills up. I was under the impression (based on docs) that it would dump whatever is over 12MB when it fills up (starting with the oldest data received).

I can make it "infinite" - by setting it to a really big size, IIANM. How much would be fine? 1GB? I expect it won't use 1GB of RAM - because the pipe is cleared once it's read (as data arrives).


> 5)  Technically your onDataAvailable call to yourself is bogus because 0 is the
>     wrong thing to pass for aOffset.  But since you don't use aOffset it might
>     not matter much.... just don't start using it!

Hehe, good point. I think I should note that in a comment.


> Now as far as comment 31:
> 
> > However, you should not need to call onInputStreamReady in onStopRequest
> > (we'll still be called by asyncWait regardless).
> 
> I'm not sure about that as things stand.  But you _should_ probably close the
> output end of the pipe, and _that_ would make sure your callback will be
> called.
> 
> If you don't do that but remove the onInputStreamReady call, you may well end
> up getting your callback when the stream listener tee is GCed or something
> silly like that.  Which could explain your several-second lag.  Does closing
> the output end make that lag go away?

That seems to make sense to me, but I did not know I had to close the output end of the pipe. Thanks for the suggestion. Will try this and see how things work out.
> I was under the impression (based on docs) that it will dump whatever is over
> 12MB

Which docs were these?  They're wrong.  When a pipe with a blocking input end fills up, trying to write to the input end will .... block.  As in, block the calling thread and wait on a monitor to be notified that something read from the pipe.  Obviously that read will NOT be able to happen on the same thread that the write happened on.

> I can make it "infinite" - by setting it to a really big size

Or by doing what the nsIPipe idl says and passing PR_UINT32_MAX for the segment count.

> I expect it won't use 1GB of RAM

It sure could, in theory.  In practice it's unlikely.  The fact that you're only doing this for http helps with that a bit.

> but I did not know I had to close the output end of the pipe

Well, _something_ has to tell the pipe there's no more data coming... ;)
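Putting bz's suggestion into a hypothetical chrome-JS sketch (the function is shown for illustration and never called here; `Cc`/`Ci` and the `init()` signature follow nsIPipe.idl, but the boolean flag choices are assumptions, not the landed patch):

```javascript
// Hypothetical sketch of creating the pipe per the advice above: pass
// PR_UINT32_MAX as the segment count so the pipe is effectively infinite.
// This is chrome-privileged code; makeResponsePipe is defined but not run.
const PR_UINT32_MAX = 0xffffffff;

function makeResponsePipe(segmentSize) {
  let pipe = Cc["@mozilla.org/pipe;1"].createInstance(Ci.nsIPipe);
  // Flag choices are assumptions: non-blocking input so asyncWait() can be
  // used on the read end, blocking output as the stream listener tee needs.
  pipe.init(true, false, segmentSize, PR_UINT32_MAX, null);
  return pipe;
}
```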
Assignee

Comment 50

9 years ago
(In reply to comment #49)
> > I was under the impression (based on docs) that it will dump whatever is over
> > 12MB
> 
> Which docs were these?  They're wrong.  When a pipe with a blocking input end
> fills up, trying to write to the input end will .... block.  As in, block the
> calling thread and wait on a monitor to be notified that something read from
> the pipe.  Obviously that read will NOT be able to happen on the same thread
> that the write happened on.

Having re-read the docs (MXR and MDN) now ... it seems I did not fully understand the possible outcomes of blocking streams for nsIPipe. Thanks for the clarification.

> > but I did not know I had to close the output end of the pipe
> 
> Well, _something_ has to tell the pipe there's no more data coming... ;)

Yeah, it works. Thanks! Will attach the updated patch.
Assignee

Comment 51

9 years ago
Posted patch patch update 4 (obsolete) — Splinter Review
Made the changes Boris suggested - mainly using an infinite-space pipe and calling close() on the pipe's output stream, so that onInputStreamReady is called at the end of the request without any delay.

All tests pass.

Please note that I had to do a small update for browser_webconsole_netlogging.js because testXhrGet() and testXhrPost() had their callbacks executed too early - before lastRequest was updated.
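The close-on-stop half of that change might look something like this (a hedged sketch — property names like `this.sink` are illustrative, not copied from the attachment):

```javascript
// Hypothetical sketch of the onStopRequest change described above: closing
// the pipe's output end signals EOF, so the pending asyncWait() consumer's
// onInputStreamReady fires with the remaining data right away, instead of
// waiting on GC. Property names (this.sink) are illustrative assumptions.
const ResponseListenerSketch = {
  sink: null, // the nsIOutputStream end of the pipe, set when the tee is built

  onStopRequest: function RL_onStopRequest(aRequest, aContext, aStatusCode) {
    // Tell the pipe no more data is coming; this is what makes the final
    // onInputStreamReady callback arrive without the several-second lag.
    this.sink.close();
  },
};
```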
Attachment #500194 - Attachment is obsolete: true
Attachment #504549 - Flags: review?(sdwilsh)
Assignee

Updated

9 years ago
Whiteboard: [patchclean:1229][softblocker] → [patchclean:0117][softblocker]
Comment on attachment 504549 [details] [diff] [review]
patch update 4

>+++ b/toolkit/components/console/hudservice/HUDService.jsm
>   /**
>+   * Set the async listener for the given nsIAsyncInputStream. This allows us to
>+   * wait asynchronously for any data coming from the stream.
>+   *
>+   * @param nsIAsyncInputStream aStream
>+   * @param nsIInputStreamCallback aListener
Need a description for these

>+  /**
>    * See documention at
>    * https://developer.mozilla.org/en/XPCOM_Interface_Reference/nsIStreamListener
>    *
>    * Grabs a copy of the original data and passes it on to the original listener.
>    *
>    * @param nsIRequest aRequest
>    * @param nsISupports aContext
>    * @param nsIInputStream aInputStream
>    * @param unsigned long aOffset
>    * @param unsigned long aCount
We are touching this method, so I'd like to see these to get a description too

>@@ -366,28 +365,16 @@ ResponseListener.prototype =
>    * HUDService.lastFinishedRequestCallback is called if there is one.
>    *
>    * @param nsIRequest aRequest
>    * @param nsISupports aContext
>    * @param nsresult aStatusCode
ditto

Pretty sure this addresses everything I and bz have brought up.  r=sdwilsh with comments addressed.
Attachment #504549 - Flags: review?(sdwilsh) → review+
Assignee

Comment 53

9 years ago
Attachment #504549 - Attachment is obsolete: true
Assignee

Comment 54

9 years ago
Thanks Shawn for the review+!

Patch is now ready to land.
Whiteboard: [patchclean:0117][softblocker] → [patchclean:0118][softblocker][checkin]
Whiteboard: [patchclean:0118][softblocker][checkin] → [patchclean:0118][softblocker]
Comment on attachment 504789 [details] [diff] [review]
[checked-in] patch update 5

http://hg.mozilla.org/mozilla-central/rev/37e85354ebaf
Attachment #504789 - Attachment description: patch update 5 → [checked-in] patch update 5
Status: ASSIGNED → RESOLVED
Closed: 9 years ago
Resolution: --- → FIXED
Verified fixed on Mozilla/5.0 (Windows NT 6.1; WOW64; rv:2.0b10) Gecko/20100101 Firefox/4.0b10 ID:20110121161358
Status: RESOLVED → VERIFIED

Updated

Last year
Product: Firefox → DevTools