Open Bug 754861 Opened 12 years ago Updated 3 months ago

console.log doesn't show objects at the time of logging if console is closed

Categories

(DevTools :: Console, defect, P2)

Tracking

(Not tracked)

People

(Reporter: BenWa, Unassigned)

References

(Depends on 1 open bug, Blocks 1 open bug)

Details

(Whiteboard: [console-papercuts][polish-backlog])

When logging an object and then expanding it, the object is expanded using its current values rather than a snapshot taken at the time it was logged. This is a pain point during development because the results are extremely misleading and unexpected.

The Mozilla ToJam team ran into this problem and lost time while debugging, misled by this behavior.
Currently the bug you are seeing happens because we store a reference to the object you evaluate and inspect it live. So any changes you make to object A after evaluation are visible when you try to inspect the result, since we store only a reference.

This behavior is going to change when part 3 of bug 673148 lands. From there on, a copy of the object you evaluate is kept in memory, but only the first level. For example, when you inspect object A you will get A as it was when you evaluated it. However, if A holds a property that references object B ... when you expand object B you get the current values of B, not a copy/snapshot of object B as it was when you evaluated object A.
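To make that distinction concrete, here is a small hypothetical illustration of the one-level copy described above (the variable names are made up for this example; this is not actual DevTools code):

    var b = { count: 1 };
    var a = { label: "first", nested: b };
    a;                   // evaluate |a| in the Web Console input; its first level is copied

    a.label = "second";  // NOT reflected when you expand the earlier result (first level was copied)
    b.count = 99;        // IS reflected when you expand |a.nested|, since only a live reference to |b| is kept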

We do not go deeper to copy the rest of the objects because that would slow things down.

Please let us know if you have further concerns about this. Thanks for your bug report!
Depends on: async-webconsole
If that's the behavior we go for, then it should be explicit in the UI that the nested objects are taken at a different time. Without any UI hints this behavior is still very misleading.
Indeed, this stuff needs to be consolidated.
So does this mean that when I do

   console.log(x);

the browser keeps a _strong_ reference to x?
 - if yes, then how's that not a huge leak when doing many console.log calls with objects?
 - if no, then how does that work with the GC?

For this reason alone I don't see how this approach can work reasonably well.
I also want to say that console.log being a _debugging_ helper, I would expect predictability and reliability to be valued higher than performance, as far as it's concerned.
(In reply to Benoit Jacob [:bjacob] from comment #4)
> So does this mean that when I do
> 
>    console.log(x);

We are talking here about several different things.

When you do console.log(x) you get x.toString() displayed and no object reference is kept around at the Web Console level, and you cannot yet inspect |x|. This is a feature we are planning to implement.

The Console API implementation has a backend, a storage that caches the console API calls. That keeps strong references to the objects you point to.

If you type |x| in the Web Console input and you press Enter, the code you typed is evaluated. The result, if it is an inspectable object, is presented as such. The user can click to inspect the result object (which is |x|). In this case, the Web Console itself keeps a strong reference to the result object until output is pruned or cleared otherwise.

> the browser keeps a _strong_ reference to x?

Yes, a strong reference is kept to |x| in the current implementation when you evaluate |x| in the Web Console input. The Web Console itself keeps no reference when you do console.log(x), but as noted above the Console API storage backend keeps strong references to the objects you pass.

>  - if yes, then how's that not a huge leak when doing many console.log calls
> with objects?

In the case of console.log() calls with objects the Console API storage backend clears references to objects when the owning inner window is destroyed. So the lifetime of the strong references to logged objects is the same as the lifetime of the inner window for which the window.console API methods exist.
I was also bitten by this and wasted a good amount of time debugging. The current behavior in FF<23 is extremely misleading. Take, for example, this simple case:

  var a = [1, 2];
  console.log(a); // <-- shows [1, 99]
  a[1] = 99;
  console.log(a); // <-- shows [1, 99]

Chrome gets this right (if you have the console open at the time you run the script).
This is actually a bit more complicated. If you execute all the code as one block, you get the behavior Brendan observes. If you execute it over multiple lines, you get the desired behavior. This is definitely an issue.
I have retested this bug and I confirm it continues to be valid. However, I believe this is the expected and desired behavior, unless I am missing something.

I tested Chrome's Web Inspector and Firebug: they do the same in varying degrees.

Web console details:

1. when you call console.log(foo) a strong reference is stored to foo and any other arguments you may pass to the log method.
2. whenever the inner window that owns the console object is destroyed all messages that were cached are removed. This is how we avoid memory leaks.
3. the webconsole actor (in the debugger server) keeps Debugger.Objects for every |foo| object (any object arguments passed to any console log method). This happens only when the webconsole is open.
4. the webconsole client releases all of the Debugger.Objects when output is cleared in the webconsole UI.
5. whenever you inspect an object in the webconsole UI a request is sent to the server to retrieve properties and their values - this means you always get the most recent state of the object displayed.

Web Inspector results:
1. try to do |console.log(foo)|, then |foo.bar = 'test'|, then expand |foo| in the output. You will see that the output will show the most recent object state, not the state of the object as it was when you invoked the |console.log| method.
2. if you expand the |foo| object before you make the state change, then the UI output is cached. After a few minutes of playing I saw that it clears the cache - so if you close/reopen you can get the current state of |foo| in old output messages.
3. make sure you have an object with more than ~5 properties - so you get an expander (see the snippet below).
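For reference, the steps above correspond to something like the following snippet (the property names are arbitrary; the object just needs enough properties to get an expander):

    var foo = { a: 1, b: 2, c: 3, d: 4, e: 5, f: 6 };
    console.log(foo);
    foo.bar = 'test';   // expanding the logged |foo| afterwards also shows |bar|,
                        // i.e. the most recent state, not the state at log time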

Firebug does something similar, but UI/UX is different.

To fix this we would have to clone each JavaScript object whenever a console API method is invoked. This might be a performance problem. Do we want anything like this?

(In reply to Brendan Dahl from comment #7)
> I also was bitten by this and wasted a good amount of time debugging.  The
> current behavior in FF<23 is extremely misleading.  Take for example the
> simple example of:
> 
>   var a = [1, 2];
>   console.log(a); // <-- shows [1, 99]
>   a[1] = 99;
>   console.log(a); // <-- shows [1, 99]
> 
> Chrome gets this right(if you have console open at the time you run the
> script).

What Chrome does better is that it shows the first N properties/values. Please try with more values, so you get an object expander. You will see similar behavior.

We should probably morph this bug into an "improve output for objects" bug.
OS: Mac OS X → All
Hardware: x86 → All
(In reply to Mihai Sucan [:msucan] from comment #9)
> What Chrome does better is that it shows the first N properties/values.
> Please try with more values, so you get an object expander. You will see
> similar behavior.
> 
> We should probably morph this bug into an "improve output for objects" bug.

This is definitely the key. Having a one-level-deep preview of properties (even a limited subset of properties) would go a long way toward mitigating the problem.
(In reply to Brandon Benvie [:bbenvie] from comment #10)
> (In reply to Mihai Sucan [:msucan] from comment #9)
> > What Chrome does better is that it shows the first N properties/values.
> > Please try with more values, so you get an object expander. You will see
> > similar behavior.
> > 
> > We should probably morph this bug into an "improve output for objects" bug.
> 
> This is definitely the key. Having a one level depth preview of properties
> (even a limited subset of properties) would go a long way toward mitigating
> the problem.

In bug 753332 we will be adding a "summary" request or property to an object actor to do exactly this.
Depends on: 843004
Priority: -- → P3
I would like to add my vote to get this bug fixed; it's extremely misleading when debugging things like arrays, and Firefox behaves very differently from Chrome here. As a developer, trusting your debug tools is one of the most important things, and this undermines that trust.
So I've been told that improving console.log is a performance issue. What I would really like instead is to be able to set a flag on the console (console.deeplog = 1) and to be told in the UI, when I'm inspecting an object, that 'objects may not represent their state at the time of logging, please use console.deeplog = 1'.
If you turn it into an about:flags-style setting, you risk making developers' primary browsers slow.
Apparently Chrome doesn't show deep console.log output when the devtools are closed, but when they are open it does a deep copy. Would this be a solution? It's surprising how much it upsets me not to be able to console.log things and trust the result. But then again I might be one of the last printf debuggers :)
This can be a real pain when trying to debug something or to understand someone else's code by adding logging.
Could we tell the developer and let them decide whether they want the performance hit or not? Or at least document this behavior somewhere?
It's in debug mode, so a performance hit is expected. We should allow devs to set how deep they want to dump. Documenting it is definitely a good idea.
The consensus in the team meeting today was to try storing 1-level deep snapshots and measure the perf hit to see if it's acceptable. That would help with the most common cases, like comment 7.

If nobody else gets here first, I'll try to find some time to work on it next week.
(In reply to Panos Astithas [:past] from comment #21)
> The consensus in the team meeting today was to try storing 1-level deep
> snapshots and measure the perf hit to see if it's acceptable. That would
> help with the most common cases, like comment 7.
No idea how much that costs, but you could also do it lazily: watch the object with Object.prototype.watch (or Object.observe when that exists) when it is logged, and only do the 1-level-deep copy if/when the object changes. This way there is no copy cost in cases where the object doesn't change (only the watching cost, which is probably negligible), which is probably a very common case.
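For illustration, a 1-level-deep snapshot of the kind discussed in comment 21 could look roughly like this (a userland sketch only, not the actual console code; the lazy copy-on-write variant is not shown):

    // Copy own enumerable properties one level deep; nested objects
    // stay as live references, as described above.
    function snapshotOneLevel(obj) {
      if (obj === null || typeof obj !== "object") {
        return obj;
      }
      var copy = Array.isArray(obj) ? [] : {};
      Object.keys(obj).forEach(function (key) {
        copy[key] = obj[key];
      });
      return copy;
    }

    var a = [1, 2];
    var snap = snapshotOneLevel(a);
    a[1] = 99;
    // snap is still [1, 2], which is what the first log in comment 7 should show.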
If anyone was not convinced that console.log not showing objects at the time of logging would cause lots of confusion, the number of duplicate bugs recorded here should have convinced them by now!

console.log is a debugging helper. It should have simple semantics.
Note that we also have confusing examples on MDN, that don't work as expected if you copy and paste them to the console.
Another note about the performance hit:
If you simply have the browser console open, your browsing is already taking a massive perf hit. So please don't worry about adding an option to snapshot objects at the time of logging, plus an option for how many levels deep.
Priority: P3 → P2
Whiteboard: [console-papercuts]
It's time we fix this. I'm going to have a shot at this. Note that this could take me some time, since I'm not familiar with the webconsole code.
Assignee: nobody → ejpbruel
Bug 1107699 fixed the case described in comment 7, if the logging takes place while the console is open. It removed the async recording of the console API call in the backend, so by the time the console actor gets a chance to create a Debugger.Object of the argument, it sees the correct one.

I think that if we want the same behavior while the console is closed, we will have to move the Debugger API code from the console actor to the console storage service. That will probably make console API calls more expensive even when the console is off, which, depending on the size of the perf hit, could be a no-go.
To log the state of objects at the time of logging, even when the console is closed, we need a way to clone objects up to N levels deep, where N can be set as a preference. We can then set the default value of N to whatever makes sense from a performance point of view.
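As a rough userland sketch of what "clone up to N levels deep" could mean (this just generalizes the 1-level snapshot above; it deliberately ignores getters, proxies, and non-enumerable properties, and is not the SpiderMonkey implementation):

    // Clone own enumerable properties up to |depth| levels; beyond that,
    // fall back to keeping the live reference, as the console does today.
    function cloneToDepth(value, depth) {
      if (value === null || typeof value !== "object" || depth <= 0) {
        return value;
      }
      var clone = Array.isArray(value) ? [] : {};
      Object.keys(value).forEach(function (key) {
        clone[key] = cloneToDepth(value[key], depth - 1);
      });
      return clone;
    }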

I looked at the structured cloning algorithm, but I'm not sure if that is the right API to use here. My understanding is that the structured cloning algorithm is mainly useful to serialize/deserialize objects across workers. It seems wasteful to do that when we simply want to create a clone of an object on the same thread. In addition, the structured cloning algorithm doesn't have a way to set a maximum cloning depth, as far as I can tell.

Cc'ing Luke, since he might be able to tell what is the preferred way to clone an object in SpiderMonkey up to N levels.
Flags: needinfo?(luke)
Forwarding to Jason, who has worked a lot more with structured cloning.
Flags: needinfo?(luke) → needinfo?(jorendorff)
Interesting aspects of structured cloning:

*   Feel free to add an optional depth limit in our implementation (js/src/vm/StructuredCloning.{h,cpp}). Easy to get started: just add an `int depth;` to JSStructuredCloneWriter and maintain it, or else define `int depth() const { return objs.length(); }`. The hard part is deciding what to do when the depth limit is reached.

*   Structured cloning boldly calls getters and proxy handlers, so console.log(obj) can re-enter JS, call arbitrary code, cause state to change, take arbitrary time to run, etc. Not sure if this is desirable or not! (See the snippet after this list.)

*   You'll be stuck with the choices the HTML spec made regarding what structured cloning copies and what it doesn't. For example I think it silently ignores expando properties on arrays. It also refuses to copy functions. Maybe that's OK.

*   In particular, you don't get to keep object identity, so if I console.log(element) and then click on the output in the console, you can't highlight that element in the web page.  ...However if you keep both the original object *and* the clone, it would mostly fix that problem, plus the console can show you what has changed since the object was logged, a nice feature...
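To illustrate the getter re-entrancy point above, here is a small example using the structuredClone() global available in current builds (the internal engine API is not shown; the property name is made up):

    var obj = {
      get now() {
        console.log("getter ran during cloning"); // arbitrary JS runs mid-clone
        return Date.now();
      }
    };
    var copy = structuredClone(obj);  // logs "getter ran during cloning"
    // |copy.now| is a plain data property holding whatever the getter returned.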

Right now I can't get console.log() to work at all in the console, in Nightly. Not sure what's wrong. console.error() works, but console.log() output doesn't appear in the console, even though I have all the menu options checked under "Logging".


> It seems wasteful [...]

To this I can only say: try it and see if it's fast/small enough?

If not, you could presumably split both JSStructuredCloneWriter and JSStructuredCloneReader into two parts:

    JSStructuredCloneWriter ===becomes===> ObjectGraphWalker + CloneBufferWriter
    JSStructuredCloneReader ===becomes===> CloneBufferReader + CloneBuilder

where CloneBufferWriter and CloneBuilder support the same interface. Then, for single-thread use, you could just glue an ObjectGraphWalker directly to a CloneBuilder, skipping the serialization step.

But having done all that you may find that serialization/deserialization is fast, and the bulk of the time is spent in ObjectGraphWalker and CloneBuilder. :-\
Flags: needinfo?(jorendorff)
(In reply to Benoit Jacob [:bjacob] from comment #4)
> So does this mean that when I do
> 
>    console.log(x);
> 
> the browser keeps a _strong_ reference to x?
>  - if yes, then how's that not a huge leak when doing many console.log calls
> with objects?

Old comment I know, but:

Note that if we instead make a full copy of x and keep it around, that is *at least* as big a leak, and possibly much worse.
I am not sure what the state of this bug is (and sorry, but it _is_ a bug; I understand where it comes from, but it can be solved, see Chrome), but I wanted to report that it is still as irritating as ever:

http://jsfiddle.net/8qeykjk2/
open the console and click in the main window field - the state of the logged object is exactly the opposite of what it really was at the time of logging. First the object is logged with an id, but it does NOT have an id set in reality; then it is logged without one, though it DOES have an id set. :-(
(In reply to Balint from comment #34)
> I am not sure what the state of things is about this bug (and sorry but it
> _is_ a bug, I understand where it comes from but this can be solved, see
> Chrome) but I wanted to report that it still is as irritating as ever:
> 
> http://jsfiddle.net/8qeykjk2/
> open the console and click in the main window field - the state of the
> object logged is just the exact opposite of what it really is at time of
> logging. First the object is logged with id, but it does NOT have an id set
> in reality, then it is logged without, though it DOES have an id set. :-(

What version of Firefox are you using?

In today's nightly build of Firefox I see this:

https://dl.dropboxusercontent.com/u/1212936/lice/fx-log-state.gif

This is exactly the same as Chrome's behaviour:

https://dl.dropboxusercontent.com/u/1212936/lice/chrome-log-state.gif

As Panos mentions in comment 29, this covers the use case where the console is already open, and unless I am mistaken this is your use case as well. It may just be that you are using an older build of Firefox (anything older than 38, which is the current Developer Edition build).

I think the current status of this bug is that we are trying to figure out how to handle console.* calls made before the console window is opened - FWIW, IMO Chrome behaves oddly here as well.
Dear Jeff, thanks for your comment - you are right, I was using an older version (36.x) of FF. This explains indeed the problem and now things work well with the latest version!
Whiteboard: [console-papercuts] → [console-papercuts][devedition-40]
(In reply to Balint from comment #36)
> Dear Jeff, thanks for your comment - you are right, I was using an older
> version (36.x) of FF. This explains indeed the problem and now things work
> well with the latest version!

Awesome - glad we fixed it for you!
Whiteboard: [console-papercuts][devedition-40] → [console-papercuts]
Whiteboard: [console-papercuts] → [console-papercuts][devedition-40]
I removed the [devedition-40] whiteboard because I think our main issue was resolved with bug 673148 - do you still think this bug is critical for the current cycle?
Flags: needinfo?(jryans)
(In reply to Jeff Griffiths (:canuckistani) from comment #38)
> I removed the [devedition-40] whiteboard because I think our main issue was
> resolved with bug 673148 - do you still think this bug is critical for the
> current cycle?

Ah, I did not notice you had already set it once and removed it...  I'll update the summary to reflect that it's only about the console being closed, as the current summary makes it sound even worse.

I think the remaining issue is still pretty unfortunate, for example if you log something quickly at page startup before opening the console, then you still get confusing behavior.  If you don't know about this bug's existence it's unlikely you would guess that running again with the console open makes it "work".

It still feels worth at least discussing the remaining work for the current cycle.  We still have to balance it against all the other polish work.
Flags: needinfo?(jryans)
Summary: console.log object logging doesn't show the object at the time of logging → console.log doesn't show objects at the time of logging if console is closed
(In reply to J. Ryan Stinnett [:jryans] from comment #39)
> (In reply to Jeff Griffiths (:canuckistani) from comment #38)
...
> I think the remaining issue is still pretty unfortunate, for example if you
> log something quickly at page startup before opening the console, then you
> still get confusing behavior.  If you don't know about this bug's existence
> it's unlikely you would guess that running again with the console open makes
> it "work".

Okay, agreed.
Not actively working on this at the moment. If anyone thinks this should still be a priority, send me an e-mail and I'll put it back on my to-do list.
Assignee: ejpbruel → nobody
Whiteboard: [console-papercuts][devedition-40] → [console-papercuts][polish-backlog]
Just wasted a lot of time trying to debug my JS. Would be nice if someone actually fixed this issue.

Right now, I'm using 
> console.log(JSON.parse(JSON.stringify(obj)));
as a workaround.
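Worth noting that the JSON round-trip drops functions, undefined, and symbol-keyed properties, and throws on cyclic references; in current builds, structuredClone() can serve as a similar snapshot-at-log-time workaround (again, just a user-side workaround, and it throws on functions and DOM nodes):

    console.log(structuredClone(obj));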
Product: Firefox → DevTools
Severity: normal → S3

The severity field for this bug is relatively low, S3. However, the bug has 7 duplicates.
:nchevobbe, could you consider increasing the bug severity?

For more information, please visit auto_nag documentation.

Flags: needinfo?(nchevobbe)
Flags: needinfo?(nchevobbe)
Duplicate of this bug: 1744016
Duplicate of this bug: 1513978
See Also: → 1519618
Duplicate of this bug: 1505304

Bug 1519618 would be a first step. We could at least acknowledge in the UI that:

  • the object may not be the right one, when coming from the console cache
  • the object is the live one when expanding properties

Chrome also seems to behave differently for console API calls that happen before DevTools are opened versus while they are opened.
It also surfaces a similar issue when logging DOM Elements.

In the following example:
data:text/html,<a class="foo">link</a><script>const o = {foo:true}; console.log(o, document.querySelector("a")); o.bar=true;document.querySelector("a").className = "foo bar";</script>
We should only see the foo property/class name on both objects, and we should especially not see the bar ones.

When logging before the DevTools are opened:
Chrome: Object <a class="foo bar">link</a>
Firefox: Object { foo: true, bar: true } <a class="foo bar">
Here Firefox is wrong in both cases, while Chrome is only wrong for DOM Elements.
Chrome special-cases the logging of objects that happens without DevTools opened and disables their preview.

When logging after the DevTools are opened:
Chrome: {foo: true} <a class="foo bar">link</a>
Firefox: Object { foo: true } <a class="foo">
Here Firefox is entirely correct, but Chrome is still wrong for DOM Elements. It looks like DOM Elements are always treated as live previews and are never cached at the time of the console API call.

Depends on: 1876084