Closed Bug 1420945 - Opened 7 years ago, Closed 7 years ago

Fix the initial height calculation of the Library panel when shown in the overflow menu

Categories: Firefox :: Bookmarks & History, enhancement, P1

Status: RESOLVED FIXED
Target Milestone: Firefox 59
Tracking Status: firefox59 --- fixed

People: (Reporter: Paolo, Assigned: Paolo)

References: (Blocks 1 open bug)

Attachments: (1 file)

When the Library button is moved into the overflow menu, it becomes apparent that the initial height calculation does not include the "Recent Highlights" section.
There are two possible approaches here. One is to wait to display the view until the Recent Highlights items have been fetched, but only when the Library subview is opened in the overflow menu. I prepared a patch that takes the other approach.
Comment on attachment 8932476 [details]
Bug 1420945 - Keep the existing items in the Recent Highlights section while opening the Library panel.

https://reviewboard.mozilla.org/r/203534/#review209026

This regresses usability, because now it's possible for the view to open, for you to hover over a particular highlight, and for that highlight to change under you right before you click it. The current implementation doesn't have this problem because it clears the list synchronously before populating it with new content.

Can we instead fix the 'empty' appearance to have the height of the 6 items with some reasonable CSS, or, perhaps, by creating 6 disabled items with no icon and " " as a title, whose content we fill in afterwards?
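For illustration only, a minimal sketch of that placeholder idea; the helper name, the `subviewbutton` class, and the fixed count of 6 are assumptions rather than code from any patch:

```js
// Sketch: reserve vertical space with disabled placeholder items before the
// real highlights are fetched, then fill in their labels and icons later.
function addHighlightPlaceholders(container, count = 6) {
  for (let i = 0; i < count; i++) {
    let button = document.createElement("toolbarbutton");
    button.setAttribute("label", " ");       // blank title keeps the row height
    button.setAttribute("disabled", "true"); // not clickable until filled in
    button.classList.add("subviewbutton");
    container.appendChild(button);
  }
}
```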

::: browser/components/customizableui/content/panelUI.js:517
(Diff revision 1)
> -      this._loadingRecentHighlights = false;
>        return;
>      }
>  
> +    this._loadingRecentHighlights = true;
> +    try {

Why put all of this in a try...finally? There's only 1 early return. If we object to duplicating the 1-line content of the finally block, I would prefer factoring out the building of the actual list nodes, once we have the list information, to a separate method, and then just do:

```js
if (highlights.length) {
  this._buildHighlights();
}
this._loadingRecentHighlights = false;
```

or similar.
Attachment #8932476 - Flags: review?(gijskruitbosch+bugs)
(In reply to :Gijs from comment #4)
> This regresses usability, because now it's possible for the view to open,
> for you to hover over a particular highlight, and for that highlight to
> change under you right before you click it. The current implementation
> doesn't have this problem because it clears the list synchronously before
> populating it with new content.
> 
> Can we instead fix the 'empty' appearance to have the height of the 6 items
> with some reasonable CSS, or, perhaps, by creating 6 disabled items with no
> icon and " " as a title, whose content we fill in afterwards?

We may reserve space by creating empty items in equal number to the enabled items that are present before the view is opened. Another approach is the one in comment 1, which would equally require us to add some code to check the position, but would potentially slow down the subview opening in the overflow panel. Which one would you prefer?

> ::: browser/components/customizableui/content/panelUI.js:517
> Why put all of this in a try...finally?

...exception handling? :-)
Flags: needinfo?(gijskruitbosch+bugs)
(In reply to :Paolo Amadini from comment #5)
> (In reply to :Gijs from comment #4)
> > This regresses usability, because now it's possible for the view to open,
> > for you to hover over a particular highlight, and for that highlight to
> > change under you right before you click it. The current implementation
> > doesn't have this problem because it clears the list synchronously before
> > populating it with new content.
> > 
> > Can we instead fix the 'empty' appearance to have the height of the 6 items
> > with some reasonable CSS, or, perhaps, by creating 6 disabled items with no
> > icon and " " as a title, whose content we fill in afterwards?
> 
> We may reserve space by creating empty items in equal number to the enabled
> items that are present before the view is opened. Another approach is the
> one in comment 1, which would equally require us to add some code to check
> the position, but would potentially slow down the subview opening in the
> overflow panel. Which one would you prefer?

I didn't really understand that comment. Why only in the overflow panel? Don't we have the same issue when opening the library panel directly?

> > ::: browser/components/customizableui/content/panelUI.js:517
> > Why put all of this in a try...finally?
> 
> ...exception handling? :-)

There's no catch, so "handling" isn't really what's going on? Also, what do you expect to throw, in this block? There wasn't a try block before - should there have been?
Flags: needinfo?(gijskruitbosch+bugs)
(In reply to :Gijs from comment #6)
> Why only in the overflow panel?
> Don't we have the same issue when opening the library panel directly?

Since we don't have an animation in that case, and the height caching works differently, this isn't particularly noticeable there. However, I guess it may be more noticeable on slower machines. If we fixed this for the overflow panel only, it would allow us to proceed with bug 1392340; on the other hand, it wouldn't improve or regress the standalone view.

> > > ::: browser/components/customizableui/content/panelUI.js:517
> > > Why put all of this in a try...finally?
> > 
> > ...exception handling? :-)
> 
> There's no catch, so "handling" isn't really what's going on?

This does handle the exception, and it does it by ensuring that the local state is reset, then implicitly re-throws the original exception to the caller, which has the chance to report it or do further exception handling.

> Also, what do
> you expect to throw, in this block? There wasn't a try block before - should
> there have been?

The "await" statement throws if a rejected Promise is passed to it, and we shouldn't assume that getHighlights always succeeds. If we don't handle the exception, then in case of errors we'll stop updating this section of the view until the browser is restarted. So yes, this exception handling pattern should have been used from the start.
(In reply to :Paolo Amadini from comment #7)
> (In reply to :Gijs from comment #6)
> > Also, what do
> > you expect to throw, in this block? There wasn't a try block before - should
> > there have been?
> 
> The "await" statement throws if a rejected Promise is passed to it, and we
> shouldn't assume that getHighlights always succeeds. If we don't handle the
> exception, then in case of errors we'll stop updating this section of the
> view until the browser is restarted. So yes, this exception handling pattern
> should have been used from the start.

IMO, if we think that `getHighlights` can reject, we should use a .catch() instead, or try...catch that specific bit, not the entire 30-odd lines of code. As written, your patch changes behaviour such that the old highlights would remain in place rather than be removed if `getHighlights` did indeed reject, which also seems wrong.
(In reply to :Gijs from comment #8)
> As written, your patch changes behaviour such that the old highlights
> would remain in place rather than be removed if `getHighlights` did indeed
> reject, which also seems wrong.

That's right, handling this by removing the highlights would be better.
Which approach should we follow given comment 5 and the first part of comment 7? I also forgot to mention that the waiting approach would fix the issue the first time the view is displayed in the overflow panel.
Flags: needinfo?(gijskruitbosch+bugs)
(In reply to :Gijs from comment #8)
> not the entire 30-odd lines of code.

Apart from comment 9, I don't see the issue you're trying to solve by _not_ guarding the entire async function, including the part that adds the elements, if this is actually what you're saying...

In my opinion a broader try block is better than guarding a subset of the function based on assumptions on the rest of it (it can't throw, it can't yield to the event loop).
(In reply to :Paolo Amadini from comment #10)
> Which approach should we follow given comment 5 and the first part of
> comment 7? I also forgot to mention that the waiting approach would fix the
> issue the first time the view is displayed in the overflow panel.

Wouldn't both approaches do that?

I think I slightly prefer having dummy elements over potentially slowing down the view opening, which will negatively affect perceived perf. Based on how we e.g. invest in trying to show an empty window as quickly as possible on startup (over waiting for the window to have been populated), I expect the perceived performance of showing the items ASAP after/while sliding in the view will work better than waiting to slide in the view until we have them.

Really, ideally this code would have some kind of backoff timer and cache to avoid having to fetch the highlights information repeatedly - that or the implementation of getHighlights should do that so it returns 'quickly' if it was called recently. However, we don't need to fix that here.
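Not something this bug needs to implement; just a rough sketch of the caching idea, with hypothetical names and an arbitrary timeout:

```js
// Reuse the last result if it was fetched within the last CACHE_MS, so
// reopening the panel repeatedly doesn't hit the database every time.
const CACHE_MS = 15000;
let cachedHighlights = null;
let cachedAt = 0;

async function getHighlightsCached(fetchHighlights) {
  let now = Date.now();
  if (cachedHighlights && now - cachedAt < CACHE_MS) {
    return cachedHighlights;
  }
  cachedHighlights = await fetchHighlights();
  cachedAt = now;
  return cachedHighlights;
}
```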


(In reply to :Paolo Amadini from comment #11)
> (In reply to :Gijs from comment #8)
> > not the entire 30-odd lines of code.
> 
> Apart from comment 9, I don't see the issue you're trying to solve by _not_
> guarding the entire async function, including the part that adds the
> elements, if this is actually what you're saying...

Overly broad try...catch blocks are annoying because they hide the causes of exceptions if not done right, they make it hard to figure out why we actually have a try block at all (ie which bit of the code can actually throw?), they reduce confidence in the block of code as a whole, they have a performance cost, and in this case they would hamper annotate/blame for all of those lines even though, AFAICT, they're not actually necessary besides handling the `getHighlights` call rejecting.

As a general rule, I think the only cases I would normally expect exceptions, and thus try...catch blocks, in our JS code are:

- xpcom APIs that return non-NS_OK (which === throwing in JS)
- promise rejections (which IMO are better handled by being explicit with a .catch() or second arg to .then()).

FWIW, I actually think the `getHighlights` API should be documenting if/when that promise rejects - it's not really obvious from a quick look at the implementation (though it'd probably still be worth handling even if it doesn't reject/throw right now).

> In my opinion a broader try block is better than guarding a subset of the
> function based on assumptions on the rest of it (it can't throw, it can't
> yield to the event loop).

I don't think they're assumptions. They're verifiably true. Besides the AS call, this block of code doesn't call out to anything except native methods and `clearLibraryRecentHighlights`, which shouldn't ever throw either. If we later added more code that *would* (sometimes/potentially) throw, then that code would want its own try...catch block.

If "but maybe this will throw, if not now then maybe later" was a valid reason to have a try...catch block, we would have them everywhere, right? :-)
Flags: needinfo?(gijskruitbosch+bugs)
(In reply to :Gijs from comment #12)
> (In reply to :Paolo Amadini from comment #10)
> > Which approach should we follow given comment 5 and the first part of
> > comment 7? I also forgot to mention that the waiting approach would fix the
> > issue the first time the view is displayed in the overflow panel.
> 
> Wouldn't both approaches do that?

No, if we don't wait in all cases, then the first time we open the view we can't know that we have items to show before the getHighlights call returns, so we can't reserve the space.

> I think I slightly prefer having dummy elements over potentially slowing
> down the view opening, which will negatively affect perceived perf.

I implemented this approach. You can emulate a slow retrieval by adding this after the getHighlights call in the latest patch:

  await new Promise(r => setTimeout(r, 1000));

After trying it, I think this looks quite right.

> Overly broad try...catch blocks are annoying because they hide the causes of
> exceptions if not done right, they make it hard to figure out why we
> actually have a try block at all (ie which bit of the code can actually
> throw?), they reduce confidence in the block of code as a whole, they have a
> performance cost, and in this case they would hamper annotate/blame for all
> of those lines even though, AFAICT, they're not actually necessary besides
> handling the `getHighlights` call rejecting.

I think we're on the same page here, but we're talking about two different cases.

The try-finally I originally wrote ensures that a resource is always freed when the block of code that needs it exits. The resource here is similar to a mutex. There is a new try-catch that handles the specific case of getHighlights failing, which wasn't handled before, and that obviously adheres to the guidelines you mention.

The try-finally should conceptually wrap all the operations. For example, if we modified the code to fetch the "image" favicons asynchronously, then releasing the mutex before that code finished would be incorrect.

Note that I can move the part of the function that creates the elements to a different function, and that would preserve blame even if we use the try-finally properly, if this is a concern of yours.
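A sketch of that factoring, with hypothetical names; the try...finally still guards the whole operation while the element-creation code moves, unchanged, into its own method:

```js
const RecentHighlights = {
  _loading: false,

  async update(fetchHighlights) {
    this._loading = true;
    try {
      let highlights = await fetchHighlights();
      if (highlights.length) {
        this._buildItems(highlights);
      }
    } finally {
      this._loading = false;
    }
  },

  _buildItems(highlights) {
    // The existing element-creation code would move here verbatim,
    // preserving annotate/blame for those lines.
  },
};
```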
Comment on attachment 8932476 [details]
Bug 1420945 - Keep the existing items in the Recent Highlights section while opening the Library panel.

https://reviewboard.mozilla.org/r/203534/#review209556

::: browser/components/customizableui/content/panelUI.js:31
(Diff revision 2)
>      return {
>        mainView: "appMenu-mainView",
>        multiView: "appMenu-multiView",
>        helpView: "PanelUI-helpView",
>        libraryView: "appMenu-libraryView",
> +      libraryRecentHighlights: "appMenu-library-recentHighlights",

Ick. This reminds me that this stuff should really live elsewhere. Can you file a follow-up bug to move this somewhere more appropriate? PanelUI used to basically only deal with the hamburger menu and opening individual subview things, not with the internals of those subviews.

::: browser/components/customizableui/content/panelUI.js:521
(Diff revision 2)
>      }
>  
> -    let highlights = await NewTabUtils.activityStreamLinks.getHighlights({
> -      // As per bug 1402023, hard-coded limit, until Activity Stream develops a
> -      // richer list.
> +    this.makeLibraryRecentHighlightsInvisible();
> +
> +    this._loadingRecentHighlights = true;
> +    try {

I actually don't think we're on the same page here.

(In reply to :Paolo Amadini from comment #14)
> The try-finally should conceptually wrap all the operations. For example, if
> we modified the code to fetch the "image" favicons asynchronously, then
> releasing the mutex before that code finished would be incorrect.


The key word here is "if". Right now, the code doesn't do that, so there's no point wrapping all of it in a try...finally. If we started to do more async stuff, then the commit that did that would want to create/extend a try...finally, or duplicate the "mutex" clearing.

As it is, the only early exit points are the try catch and the early return if there are no highlights. They can be coalesced, and then you might as well just clear the mutex immediately, also because the remainder of the code is entirely synchronous anyway so re-entrancy is not possible.

I understand that it's sad that we don't have C++'s AutoFoo stuff here (which lives for the duration of the method), but I don't think substituting try...finally blocks is a reasonable solution, because of the aforementioned objections and just because it's ugly to indent long stretches of code like that.

If you feel strongly that we absolutely *must* do this, then please factor the insides of the try...finally into its own function (which, in effect, means we're factoring the mutex-y thing into its own function, but hey, potato/pot-ah-toe).

::: browser/components/customizableui/content/panelUI.js:530
(Diff revision 2)
> -    // If there's nothing to display, or the panel is already hidden, get out.
> -    let multiView = viewNode.panelMultiView;
> -    if (!highlights.length || (multiView && multiView.getAttribute("panelopen") != "true")) {
> -      this._loadingRecentHighlights = false;
> +      } catch (ex) {
> +        // Hide the section if we can't retrieve the items from the database.
> +        this.clearLibraryRecentHighlights();
> +        throw ex;
> +      }

Out of interest, why use a try...catch rather than just .catch() ? AFAICT there isn't anything that throws synchronously here (and doing that from an async method would be bad form anyway, certainly if undocumented).

::: browser/components/customizableui/content/panelUI.js:531
(Diff revision 2)
> -    let multiView = viewNode.panelMultiView;
> -    if (!highlights.length || (multiView && multiView.getAttribute("panelopen") != "true")) {
> -      this._loadingRecentHighlights = false;
> +        // Hide the section if we can't retrieve the items from the database.
> +        this.clearLibraryRecentHighlights();
> +        throw ex;

I would just set `highlights` to [] and `Cu.reportError()` the error, and then fall through to the other `clearLibraryRecentHighlights()` call and early return, rather than duplicating code and effectively creating more exit-points from the code.

If for some reason you really don't want to do that, what you're doing here is functionally equivalent to:

```js
try {
  ...
} finally {
  this.clearLibraryRecentHighlights();
}
```

which is also shorter and, IMO, clearer.
Attachment #8932476 - Flags: review?(gijskruitbosch+bugs)
(In reply to :Gijs from comment #15)
> (In reply to :Paolo Amadini from comment #14)
> > The try-finally should conceptually wrap all the operations. For example, if
> > we modified the code to fetch the "image" favicons asynchronously, then
> > releasing the mutex before that code finished would be incorrect.
> 
> The key word here is "if". Right now, the code doesn't do that, so there's
> no point wrapping all of it in a try...finally. If we started to do more
> async stuff, then the commit that did that would want to create/extend a
> try...finally, or duplicate the "mutex" clearing.
> 
> As it is, the only early exit points are the try catch and the early return
> if there are no highlights. They can be coalesced, and then you might as
> well just clear the mutex immediately, also because the remainder of the
> code is entirely synchronous anyway so re-entrancy is not possible.

These are the assumptions I mentioned in comment 11, that is, some code can't throw and can't yield to the event loop.

Maybe what we disagree upon in this case is the balance between:
1) Likelihood that someone could change the code without noticing we made assumptions about it
2) Using JavaScript exception handling structures
3) Keeping the current code literally and preserving the "annotate" results when you don't ignore whitespace

Even if you consider point (1) weak, I believe it is stronger than both (2) and (3) here. It can happen more easily with bulk changes, like making an API asynchronous across all the tree, compared to cases like this one in which I'm paying attention to this function specifically.

With regards to (2), I don't think the arguments you made in comment 11 apply here:

> Overly broad try...catch blocks

Just to clarify, what is "overly broad"? If it only wraps a single function call but that function is 100 lines, is this broad or narrow? Some of the points you mentioned below are open to either interpretation.

> are annoying because they hide the causes of exceptions if not done right

That's why they have to be used correctly, like any other language structure we use. Normally, they don't hide the cause of exceptions...

> they make it hard to figure out why we actually have a try block at all (ie which bit of the code can actually throw?)

I don't follow. Exception handling allows developers to have one less concern about figuring out whether some code can fail or not, and it allows them to clean up resources and still report these errors properly.

In the specific case of this try-finally, wrapping it around the whole function is conceptually correct, and therefore makes it _easier_ to figure out why we have it. Compare:


// Protect from concurrent modifications of the section
try {
  // Query results
  // Modify the section
} finally {
  // Stop protecting from concurrent modifications of the section
}


// Protect from concurrent modifications of the section
try {
  // Query results
} finally {
  // Stop protecting from concurrent modifications of the section
}
// Modify the section


> they reduce confidence in the block of code as a whole

I don't follow here either.

> they have a performance cost

I don't think there is a measurable performance impact in the vast majority of cases when exceptions are not thrown. The performance cost often mentioned here is about functions that raise exceptions as part of their normal code flow.

> and in this case they would hamper annotate/blame for all of those lines

Which is point (3), and in my book it's the least important of all the other aspects.

> I understand that it's sad that we don't have C++'s AutoFoo stuff here
> (which lives for the duration of the method), but I don't think substituting
> try...finally blocks is a reasonable solution, because of the aforementioned
> objections and just because it's ugly to indent long stretches of code like
> that.

I don't think it's ugly to indent long stretches of code, but I'm not against using the easy workaround that we both suggested of moving some bits to another function.

> ::: browser/components/customizableui/content/panelUI.js:530
> Out of interest, why use a try...catch rather than just .catch() ? AFAICT
> there isn't anything that throws synchronously here (and doing that from an
> async method would be bad form anyway, certainly if undocumented).

Uh, throwing from an async method is absolutely fine, has anyone else suggested it's bad form? For the caller, the exception becomes a rejection if not caught.

Conversely, there is in fact a "prefer-await-to-then" rule in the ESLint Promise module. Assuming we catch the error, the same code using "then" would look like:

await (getHighlights(...).then(result => highlights = result)
                         .catch(Cu.reportError));

Compared to:

try {
  highlights = await getHighlights(...);
} catch (ex) {
  Cu.reportError(ex);
}

I slightly prefer the latter, also because of how easy it is to edit it later.

> ::: browser/components/customizableui/content/panelUI.js:531
> I would just set `highlights` to [] and `Cu.reportError()` the error

Sounds good, in fact I had also considered doing this.
I'll post a new patch later today.
(In reply to :Paolo Amadini from comment #16)
> (In reply to :Gijs from comment #15)
> > (In reply to :Paolo Amadini from comment #14)
> > > The try-finally should conceptually wrap all the operations. For example, if
> > > we modified the code to fetch the "image" favicons asynchronously, then
> > > releasing the mutex before that code finished would be incorrect.
> > 
> > The key word here is "if". Right now, the code doesn't do that, so there's
> > no point wrapping all of it in a try...finally. If we started to do more
> > async stuff, then the commit that did that would want to create/extend a
> > try...finally, or duplicate the "mutex" clearing.
> > 
> > As it is, the only early exit points are the try catch and the early return
> > if there are no highlights. They can be coalesced, and then you might as
> > well just clear the mutex immediately, also because the remainder of the
> > code is entirely synchronous anyway so re-entrancy is not possible.
> 
> These are the assumptions I mentioned in comment 11, that is, some code
> can't throw and can't yield to the event loop.
> 
> Maybe what we disagree upon in this case is the balance between:
> 1) Likelihood that someone could change the code without noticing we made
> assumptions about it
> 2) Using JavaScript exception handling structures
> 3) Keeping the current code literally and preserving the "annotate" results
> when you don't ignore whitespace
> 
> Even if you consider point (1) weak, I believe it is stronger than both (2)
> and (3) here. It can happen more easily with bulk changes, like making an
> API asynchronous across all the tree,

The DOM APIs aren't going to change, and I don't see us using anything else here besides DOM APIs.

> compared to cases like this one in
> which I'm paying attention to this function specifically.

A try... block suggests that the code inside the try block can throw/return at any point. It's a control flow block. Fewer control flow blocks (and/or smaller control flow blocks) make a function easier to understand, and more/longer blocks do the opposite. This is especially the case if the block is so large it isn't easy to scan anymore, like is the case here. I have to read all the code, line by line, to figure out which bits can/can't throw, if I'm modifying the method (because maybe I need to add some other 'reset' code to the finally block, depending on where we throw - or maybe not, but now I have to carefully examine each line to see if/when/how it could throw).


> With regards to (2), I don't think the arguments you made in comment 11
> apply here:
> 
> > Overly broad try...catch blocks
> 
> Just to clarify, what is "overly broad"? If it only wraps a single function
> call but that function is 100 lines, is this broad or narrow? Some of the
> points you mentioned below are open to either interpretation.

I think either is bad, but wrapping 100 lines directly is in some ways worse than wrapping a function call where the inner function is 100 lines. But more generally if you have a function of 100 lines where every line can throw, I have questions... which I guess is why we're having this conversation.

> > are annoying because they hide the causes of exceptions if not done right
> 
> That's why they have to be used correctly, like any other language structure
> we use. Normally, they don't hide the cause of exceptions...

Here's an example of what I mean: https://reviewboard.mozilla.org/r/195630/diff/14/ .

There are so many try blocks, each of which catches any exceptions individually and just Cu.reportErrors them.

Basically, I think we should only use try...catch if it's the only reasonable solution (like if we *know* the called code throws exceptions (in some cases)! :-) ). Because only a small portion of the code here can throw, I don't think we should be using it for the rest of the code.

> > they make it hard to figure out why we actually have a try block at all (ie which bit of the code can actually throw?)
> 
> I don't follow. Exception handling allows developers to have one less
> concern about figuring out whether some code can fail or not, and it allows
> them to clean up resources and still report these errors properly.

"properly" is often not the same for every caller of a method. Refer to the linked example above, where there are a bunch of public methods that all have their own try...catch stuff, so when some of those methods call some of the other methods, they don't get exceptions because the inner method has swallowed them. It follows the "if I just try...catch this I don't need to worry about whether it throws" logic - but with no regards for desired behaviour if the code unexpectedly does throw errors in the face of multiple callsites.

> In the specific case of this try-finally, wrapping it around the whole
> function is conceptually correct, and therefore makes it _easier_ to figure
> out why we have it. Compare:
> 
> 
> // Protect from concurrent modifications of the section
> try {
>   // Query results
>   // Modify the section
> } finally {
>   // Stop protecting from concurrent modifications of the section
> }
> 
> 
> // Protect from concurrent modifications of the section
> try {
>   // Query results
> } finally {
>   // Stop protecting from concurrent modifications of the section
> }
> // Modify the section

I don't really follow what this example is supposed to demonstrate. Do you think the bottom example in your quoted comment is worse? Because equally, I think this would be a valid way to describe what's happening:

// protect against re-entrancy
try {
  // do async stuff
} finally {
  // stop protecting against re-entrancy
}
// do other non-async, non-returning stuff.

Which seems fine/better to me.

> > they reduce confidence in the block of code as a whole
> 
> I don't follow here either.

If I have 1 line in the body of a try... block, I must assume that that 1 line can throw.

If I have 100 lines in the body of a try... block, I must assume that any or all of those lines can throw, until I prove the opposite.

> > they have a performance cost
> 
> I don't think there is a measurable performance impact in the vast majority
> of cases when exceptions are not thrown. The performance cost often
> mentioned here is about functions that raise exceptions as part of their
> normal code flow.

My understanding is we drop out of some of the optimized JIT flows when we do this, which also impacts non-throwing bits, but I agree this is less important for most code we write.

> > I understand that it's sad that we don't have C++'s AutoFoo stuff here
> > (which lives for the duration of the method), but I don't think substituting
> > try...finally blocks is a reasonable solution, because of the aforementioned
> > objections and just because it's ugly to indent long stretches of code like
> > that.
> 
> I don't think it's ugly to indent long stretches of code, but I'm not
> against using the easy workaround that we both suggested of moving some bits
> to another function.
> 
> > ::: browser/components/customizableui/content/panelUI.js:530
> > Out of interest, why use a try...catch rather than just .catch() ? AFAICT
> > there isn't anything that throws synchronously here (and doing that from an
> > async method would be bad form anyway, certainly if undocumented).
> 
> Uh, throwing from an async method is absolutely fine, has anyone else
> suggested it's bad form? For the caller, the exception becomes a rejection
> if not caught.

Hm, do you know if behaviour here changed? I definitely recall thinking it was an issue - perhaps because the uncaught exception still gets reported as an exception, or perhaps because this used to be the case with Task.jsm (maybe?). Thanks for pointing this out.

> Conversely, there is in fact a "prefer-await-to-then" rule in the ESLint
> Promise module.

What do you mean by the "ESLint Promise module"? Do we use this rule? I find 0 hits in m-c for that string...

> Assuming we catch the error, the same code using "then"
> would look like:
> 
> await (getHighlights(...).then(result => highlights = result)
>                          .catch(Cu.reportError));
> 
> Compared to:
> 
> try {
>   highlights = await getHighlights(...);
> } catch (ex) {
>   Cu.reportError(ex);
> }

I meant:

let highlights = await getHighlights().catch(e => {
  Cu.reportError(e);
  return [];
});

which seems like it would work? await foo().catch(.../* default value */ ) is a pattern I thought we use elsewhere...
(In reply to :Gijs from comment #18)
> The DOM APIs aren't going to change, and I don't see us using anything else
> here besides DOM APIs.

I was thinking of something like:

button.setAttribute("image", await generateSmallIcon(highlight.favicon));

But this is just a theoretical example to show that it's not impossible for this code to become asynchronous. More realistically, the asynchronous operation would be absorbed in getHighlights.

> Fewer control flow blocks (and/or
> smaller control flow blocks) make a function easier to understand, and
> more/longer blocks do the opposite. This is especially the case if the block
> is so large it isn't easy to scan anymore, like is the case here.

Sounds alright.

> I have to
> read all the code, line by line, to figure out which bits can/can't throw,
> if I'm modifying the method

This doesn't follow from the above, at least in the case of this try-finally. On the contrary, you can stop worrying about which bits can/can't throw because the cleanup is done regardless.

> (because maybe I need to add some other 'reset'
> code to the finally block, depending on where we throw - or maybe not, but
> now I have to carefully examine each line to see if/when/how it could throw).

If you set up a new resource that needs cleaning up, like a file, you may have to add a new try-finally block dedicated to it.

> But more generally if you have a function of 100 lines where every line can
> throw, I have questions... which I guess is why we're having this
> conversation.

Yes, in my approach, by assuming most lines can throw, you only have to worry about cleaning up the important resources. It doesn't mean that _every_ failure scenario must be handled, though. Errors would very likely leave things in an inconsistent state; the question is how broken that state is. Using try-finally around resources is a common pattern to limit the damage.
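A generic sketch of that resource pattern, with hypothetical APIs rather than anything from this patch:

```js
// Each resource gets its own try...finally, so whichever line throws, both
// the outer flag and the inner file handle are released before the error
// propagates to the caller.
let updating = false;

async function refreshFromFile(openFile, applyData) {
  updating = true;               // outer "mutex"-like flag
  try {
    let file = await openFile();
    try {
      applyData(await file.read());
    } finally {
      await file.close();        // inner resource released regardless of errors
    }
  } finally {
    updating = false;            // outer flag reset regardless of errors
  }
}
```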

> > That's why they have to be used correctly, like any other language structure
> > we use. Normally, they don't hide the cause of exceptions...
> 
> Here's an example of what I mean:
> https://reviewboard.mozilla.org/r/195630/diff/14/ .
> 
> There are so many try blocks, each of which catches any exceptions
> individually and just Cu.reportErrors them.

I agree that this is an example of unnecessary use of exception handling structures.

> Basically, I think we should only use try...catch if it's the only
> reasonable solution (like if we *know* the called code throws exceptions (in
> some cases)! :-) ). Because only a small portion of the code here can throw,
> I don't think we should be using it for the rest of the code.

Right, this is a different approach than the one I'm following here.

> I don't really follow what this example is supposed to demonstrate. Do you
> think the bottom example in your quoted comment is worse? Because equally, I
> think this would be a valid way to describe what's happening:
> 
> // protect against re-entrancy
> try {
>   // do async stuff
> } finally {
>   // stop protecting against re-entrancy
> }
> // do other non-async, non-returning stuff.

This assumes the latter code is non-async, though.

> > Uh, throwing from an async method is absolutely fine, has anyone else
> > suggested it's bad form? For the caller, the exception becomes a rejection
> > if not caught.
> 
> Hm, do you know if behaviour here changed? I definitely recall thinking it
> was an issue - perhaps because the uncaught exception still gets reported as
> an exception, or perhaps because this used to be the case with Task.jsm
> (maybe?). Thanks for pointing this out.

I don't think this has changed over Task.jsm.

> > Conversely, there is in fact a "prefer-await-to-then" rule in the ESLint
> > Promise module.
> 
> What do you mean by the "ESLint Promise module"? Do we use this rule? I find
> 0 hits in m-c for that string...

We don't use eslint-plugin-promise yet, but as far as I can tell, Mark considered borrowing some rules from it.

> I meant:
> 
> let highlights = await getHighlights().catch(e => {
>   Cu.reportError(e);
>   return [];
> });
> 
> which seems like it would work? await foo().catch(.../* default value */ )
> is a pattern I thought we use elsewhere...

Ah, thanks, this looks more compact and I think will work.
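For context, a rough sketch of how the pieces agreed on above might combine; the names are hypothetical and the actual patch may differ:

```js
// A rejection is reported and treated as an empty result, which then falls
// through to clearing the section and returning early.
// Cu.reportError is available in Firefox chrome code.
async function updateRecentHighlights(fetchHighlights, clearSection, buildItems) {
  let highlights = await fetchHighlights().catch(ex => {
    Cu.reportError(ex);
    return [];
  });
  clearSection();
  if (!highlights.length) {
    return;
  }
  buildItems(highlights);
}
```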
Comment on attachment 8932476 [details]
Bug 1420945 - Keep the existing items in the Recent Highlights section while opening the Library panel.

https://reviewboard.mozilla.org/r/203534/#review210116

Thanks, and apologies this was such a protracted affair.
Attachment #8932476 - Flags: review?(gijskruitbosch+bugs) → review+
Thanks to you, and no worries; this week I was working on this in the background anyway.

https://treeherder.mozilla.org/#/jobs?repo=try&revision=88020c449222669f2b5bda094e282dbef79ae1ec
Blocks: 1422299
Pushed by paolo.mozmail@amadzone.org:
https://hg.mozilla.org/integration/mozilla-inbound/rev/f249f5238ef9
Keep the existing items in the Recent Highlights section while opening the Library panel. r=Gijs
https://hg.mozilla.org/mozilla-central/rev/f249f5238ef9
Status: ASSIGNED → RESOLVED
Closed: 7 years ago
Resolution: --- → FIXED
Target Milestone: --- → Firefox 59