Closed Bug 1072102 Opened 10 years ago Closed 9 years ago

implement the load and check methods on FontFaceSet

Categories

(Core :: DOM: CSS Object Model, defect)

Type: defect
Priority: Not set
Severity: normal

Tracking

Status: RESOLVED FIXED
Target Milestone: mozilla41
Tracking Status: firefox41 --- fixed

People

(Reporter: heycam, Assigned: heycam)

References

Details

(Keywords: dev-doc-complete)

Attachments

(2 files, 5 obsolete files)

The initial landing of the CSS Font Loading API in bug 1028497 doesn't support the load method, which loads the fonts required to render a given text string.  It doesn't support the check method either.
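For illustration, a minimal usage sketch of the two methods as specced ("MyFont" and url(myfont.woff) are placeholders):

  var face = new FontFace("MyFont", "url(myfont.woff)");
  document.fonts.add(face);

  // check() synchronously reports whether the text can be rendered with
  // the currently loaded faces; it never triggers a load.
  document.fonts.check("16px MyFont", "Hello");  // false: nothing loaded

  // load() fetches whatever faces the text needs and resolves with the
  // array of matching FontFace objects.
  document.fonts.load("16px MyFont", "Hello").then(function(faces) {
    // assuming the placeholder URL actually loaded
    document.fonts.check("16px MyFont");  // now true
  });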
Summary: implement FontFaceSet.load → implement the load and check methods on FontFaceSet
Attached patch WIP v0.1 (obsolete) — Splinter Review
This implements check and load.

The spec could do with some clarification.  The font matching algorithm in css-fonts-3 takes a character as input, but the "find the matching font faces" algorithm in css-font-loading doesn't explicitly pass each character from the text string in.  It seems to be saying that all FontFace objects with matching unicode-range descriptors should be returned, but I think we should just return those needed to match the specified characters.

Also, the spec should perhaps take into account the actual characters available in the font face if it has finished downloading.  The patch here does so; if check is called and it matches a FontFace that has finished downloading, and that FontFace doesn't have a cmap entry for the character, then it won't be returned.  I think this is closer to what authors would want check to do -- return true if the given character can be rendered with the currently loaded set of fonts, without fallback.
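A sketch of the distinction, assuming a face whose unicode-range is limited to Latin-1 (the name and URL are placeholders):

  var latin = new FontFace("MyFont", "url(latin.woff)",
                           { unicodeRange: "U+0-FF" });
  document.fonts.add(latin);

  // U+00E9 falls inside the declared range, so this face is a candidate
  // for the text; whether check() should also consult the downloaded
  // file's cmap is the question raised above.
  document.fonts.check("16px MyFont", "\u00E9");

  // U+3042 is outside the range, so this face isn't needed to render
  // the text at all.
  document.fonts.check("16px MyFont", "\u3042");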
Assignee: nobody → cam
Status: NEW → ASSIGNED
Blocks: 1149381
Attached patch WIP v0.2 (obsolete) — Splinter Review
This matches the spec better.
Attachment #8585135 - Attachment is obsolete: true
I posted to the list about whether we should take into account the actual glyphs available in a font, and a couple of other issues: https://lists.w3.org/Archives/Public/www-style/2015May/0002.html
Component: CSS Parsing and Computation → DOM: CSS Object Model
Depends on: 1056479
Ah, just realised I'm not handling this case properly:

  document.fonts.add(new FontFace("test", "url(x)"));
  document.fonts.check("16px test");

This should return false (since the FontFace isn't loaded), but it's returning true.  That's because I'm creating a font group and trying to match its gfxFontEntrys up to the gfxUserFontEntrys in all the FontFaces.  The first problem is that the FontFace doesn't actually have a gfxUserFontEntry created yet, but even if we force its creation first, it's not inserted in the gfxUserFontSet, so the gfxFontGroup doesn't know about it at all.

Not sure what the best way to do matching against non-loaded FontFaces is now.  I think it might be to add a Matches method to FontFace that compares its family name to one passed in (we'll need to iterate over each of the family names we parsed out of the font shorthand), then calls a version of Matches on gfxUserFontEntry that compares all of the style bits (so, something like the existing Matches but without comparing the src list or unicode-range).

Although on the positive side, this alleviates all the problems with not having a device context, since it avoids nsFontMetrics.
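For reference, the expected end state in script ("url(x)" as above; the load branch assumes it were a real font):

  var face = new FontFace("test", "url(x)");
  document.fonts.add(face);
  document.fonts.check("16px test");  // false: the face isn't loaded

  face.load().then(function() {
    document.fonts.check("16px test");  // true once status is "loaded"
  });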
Attached patch patch (obsolete) — Splinter Review
Attachment #8601284 - Attachment is obsolete: true
Attachment #8606712 - Flags: review?(jdaggett)
Comment on attachment 8606712 [details] [diff] [review]
patch

> +void
> +FontFaceSet::FindMatchingFontFaces(const nsAString& aFont,
> +                                   const nsAString& aText,
> +                                   nsTArray<FontFace*>& aFontFaces,
> +                                   ErrorResult& aRv)
> +{
> +  nsRefPtr<FontFamilyListRefCnt> familyList;
> +  uint32_t weight;
> +  int32_t stretch;
> +  uint32_t italicStyle;
> +  ParseFontShorthandForMatching(aFont, familyList, weight, stretch, italicStyle,
> +                                aRv);
> +  if (aRv.Failed()) {
> +    return;
> +  }
> +
> +  // Convert the UTF-16 string to an array of its Unicode scalar values, then
> +  // sort the array so that we can process each unique character just once.
> +  nsTArray<uint32_t> chars;
> +  UniqueCharacters(aText, chars);
> +
> +  nsTArray<FontFaceRecord>* arrays[2];
> +  arrays[0] = &mNonRuleFaces;
> +  arrays[1] = &mRuleFaces;
> +
> +  for (nsTArray<FontFaceRecord>* array : arrays) {
> +    for (FontFaceRecord& record : *array) {
> +      FontFace* f = record.mFontFace;
> +      nsAutoString familyName;
> +      f->GetFamilyName(familyName);
> +      if (!familyList->Contains(familyName)) {
> +        continue;
> +      }
> +      // We have a FontFace whose family name matches.  Force the
> +      // creation of a gfxUserFontEntry, if we don't have one
> +      // already, so that we can match the style descriptors and
> +      // check its unicode-range.
> +      gfxUserFontEntry* ufe = f->CreateUserFontEntry();
> +      if (!ufe) {
> +        // Should only happen in OOM or similar unusual situations.
> +        continue;
> +      }
> +      if (ufe->Matches(weight, stretch, italicStyle) &&
> +          HasAnyCharacterInUnicodeRange(ufe, chars)) {
> +        aFontFaces.AppendElement(f);
> +      }
> +    }
> +  }
> +}

This isn't really correct and it's unnecessary :)

The code here should be looking up faces using the same style matching code used within gfxFontGroup::FindPlatformFont. From the family name in the familyList, call LookupFamily on the userfont set to get the gfxFamily. Then FindAllFontsForStyle to get the set of faces that are closest to the specified style. This will give you gfxUserFontEntry*'s, so you can check the unicode-range with those. Mapping back to the FontFace will take a bit of code ballet.

The tests should probably include a test for styles that don't match, i.e. loading the bold face with "600 100% MyFont".
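One reading of that test, sketched in script (names are placeholders): per css-fonts-3 style matching, a weight-600 request with only a weight-700 face available should still select that face as the closest match rather than matching nothing.

  var bold = new FontFace("MyFont", "url(bold.woff)", { weight: "700" });
  document.fonts.add(bold);

  // No 600 face exists; style matching should fall back to the closest
  // available weight (700).
  document.fonts.load("600 100% MyFont", "a");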
(In reply to John Daggett (:jtd) from comment #8)
> The code here should be looking up faces using the same style matching code
> used within gfxFontGroup::FindPlatformFont. From the family name in the
> familyList, call LookupFamily on the userfont set to get the gfxFamily. Then
> FindAllFontsForStyle to get the set of faces that are closest to the
> specified style. This will give you gfxUserFontEntry*'s, so you can check
> the unicode-range with those.

Turns out this isn't as straightforward as I thought, since the gfxFamily will only contain the font entries which have been added to the gfxUserFontSet, which does not include those for FontFaces which are in the FontFaceSet but which haven't yet been loaded.  Maybe I should create a gfxFamily myself, add all of the gfxUserFontEntrys for the FontFaces in the FontFaceSet (regardless of whether they have been loaded), then call FindAllFontsForStyle on it?
Flags: needinfo?(jdaggett)
Attached patch patch v2 (obsolete) — Splinter Review
https://treeherder.mozilla.org/#/jobs?repo=try&revision=6cfda05c9135
Attachment #8606712 - Attachment is obsolete: true
Attachment #8606712 - Flags: review?(jdaggett)
Attachment #8626611 - Flags: review?(jdaggett)
(In reply to Cameron McCormack (:heycam) from comment #9)

> Turns out this isn't as straightforward as I thought, since the gfxFamily
> will only contain the font entries which have been added to the
> gfxUserFontSet, which does not include those for FontFaces which are in the
> FontFaceSet but which haven't yet been loaded.  Maybe I should create a
> gfxFamily myself, add all of the gfxUserFontEntrys for the FontFaces in the
> FontFaceSet (regardless of whether they have been loaded), then call
> FindAllFontsForStyle on it?

That means our existing code is wrong then, since lazy loading will only work if the gfxUserFontEntry is in the gfxFamily.
Flags: needinfo?(jdaggett)
They will be in the gfxFamily if the FontFace has been added to the FontFaceSet.  load() also looks at FontFaces that haven't been added to the FontFaceSet, so it's necessarily different from the lazy loading case.
(In reply to Cameron McCormack (:heycam) from comment #12)
> They will be in the gfxFamily if the FontFace has been added to the
> FontFaceSet.  load() also looks at FontFaces that haven't been added to the
> FontFaceSet, so it's necessarily different from the lazy loading case.

Um, huh? FontFaceSet.load() should only be loading FontFaces in the FontFaceSet :)
Er, indeed. :-)

I found the problem: I wasn't calling FlushUserFontSet in Check/Load so we weren't creating the user font entries soon enough.
Attached patch patch v3 (obsolete) — Splinter Review
Attachment #8626611 - Attachment is obsolete: true
Attachment #8626611 - Flags: review?(jdaggett)
Attachment #8626681 - Flags: review?(jdaggett)
Comment on attachment 8626681 [details] [diff] [review]
patch v3


> +gfxUserFontEntry::Matches(uint32_t aWeight,
> +                          int32_t aStretch,
> +                          uint32_t aItalicStyle)

Move this logic back since it's not needed as a separate helper function.

> +  // Resolve relative font weights against the initial of font-weight
> +  // (normal, which is equivalent to 400).
> +  if (weight == NS_STYLE_FONT_WEIGHT_BOLDER) {
> +    weight = 700;
> +  } else if (weight == NS_STYLE_FONT_WEIGHT_LIGHTER) {
> +    weight = 100;
> +  }

Instead of using 700 and 100, use NS_FONT_WEIGHT_BOLD and add a new 
NS_FONT_WEIGHT_THIN to gfxFontConstants.h.
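Observable from the caller's side like so ("MyFont" is a placeholder):

  // Relative weights in the font argument resolve against the initial
  // font-weight (normal, i.e. 400), so these behave like 700 and 100.
  document.fonts.check("bolder 16px MyFont");
  document.fonts.check("lighter 16px MyFont");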

> +static bool
> +HasAnyCharacterInUnicodeRange(gfxUserFontEntry* aEntry,
> +                              const nsTArray<uint32_t>& aChars)
> +{
> +  for (uint32_t c : aChars) {
> +    if (aEntry->CharacterInUnicodeRange(c)) {
> +      return true;
> +    }
> +  }
> +  return false;
> +}
> +
> +static void
> +Explode(const nsAString& aInput, nsTArray<uint32_t>& aOutput)
> +{
> +  aOutput.Clear();
> +
> +  const char16_t* p = aInput.Data();
> +  const char16_t* end = p + aInput.Length();
> +
> +  while (p < end) {
> +    uint32_t c = UTF16CharEnumerator::NextChar(&p, end);
> +    aOutput.AppendElement(c);
> +  }
> +}
> +
> +static void
> +UniqueCharacters(const nsAString& aText, nsTArray<uint32_t>& aChars)
> +{
> +  Explode(aText, aChars);
> +  aChars.Sort();
> +  auto it = std::unique(aChars.begin(), aChars.end());
> +  aChars.TruncateLength(it - aChars.begin());
> +}

This seems like real overkill. The 99% case here is that there's a
single userfont entry in the list of fonts and the string is relatively
small, such that creating an array of unique characters won't represent
a huge perf saving. So I think it would be much simpler to just have
HasAnyCharacterInUnicodeRange walk the string using
UTF16CharEnumerator::NextChar and skip the complicated character array
creation. 
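A minimal script restatement of that suggestion, with hasAnyCharacterInUnicodeRange and characterInUnicodeRange standing in for the Gecko methods:

  function hasAnyCharacterInUnicodeRange(entry, text) {
    // Walk the string by code point and bail out on the first character
    // covered by the entry's unicode-range; no sorted, deduplicated
    // character array is needed.
    for (var ch of text) {  // for-of iterates code points, not units
      if (entry.characterInUnicodeRange(ch.codePointAt(0))) {
        return true;
      }
    }
    return false;
  }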

> +    // Take note of all the FontFace objects that have a matching family
> +    // name, so that we restrict our aFontFaces result array to those
> +    // FontFaces.  Otherwise, since a user font entry can be shared across
> +    // multiple FontFaces (when it references the same src), we might
> +    // incorrectly include FontFaces with non-matching family names.
> +    for (nsTArray<FontFaceRecord>* array : arrays) {
> +      for (FontFaceRecord& record : *array) {
> +        FontFace* f = record.mFontFace;
> +        nsAutoString fontFaceFamilyName;
> +        f->GetFamilyName(fontFaceFamilyName);
> +        ToLowerCase(fontFaceFamilyName);
> +        if (fontFaceFamilyName.Equals(familyName)) {
> +          facesWithMatchingFamilyName.PutEntry(f);
> +        }
> +      }
> +    }

I'm baffled by this. I don't understand the scenario that you seem to be
sniffing for here.
(In reply to John Daggett (:jtd) from comment #16)
> I'm baffled by this. I don't understand the scenario that you seem to be
> sniffing for here.

I was thinking that two FontFaces with different families could share the same gfxUserFontEntry, but I was wrong about that.  I will get rid of this.
bz, could you review the .webidl change and the use of Sequences/Promises in FontFaceSet::Load?
Attachment #8626681 - Attachment is obsolete: true
Attachment #8626681 - Flags: review?(jdaggett)
Attachment #8626773 - Flags: review?(jdaggett)
Attachment #8626773 - Flags: review?(bzbarsky)
Attached patch Part 2: Tests. — Splinter Review
Attachment #8626775 - Flags: review?(jdaggett)
Comment on attachment 8626773 [details] [diff] [review]
Part 1: Implement FontFaceSet load and check. (v4)

Review of attachment 8626773 [details] [diff] [review]:
-----------------------------------------------------------------

> +class nsFontMetrics;

Trim this; the nsFontMetrics forward declaration is no longer needed.
Attachment #8626773 - Flags: review?(jdaggett) → review+
Comment on attachment 8626773 [details] [diff] [review]
Part 1: Implement FontFaceSet load and check. (v4)

>+  [Throws] Promise<sequence<FontFace>> load(DOMString font, optional DOMString text = " ");

You should probably mark this [NewObject] and then you can remove the [Throws].

>+Reject(nsIGlobalObject* aGlobalObject, ErrorResult& aRv)

Why do you need this?  If you have a promise-returning webidl method with an ErrorResult& arg, you can just return null after throwing on the ErrorResult& and the bindings will synthesize a rejected promise for you.

>+    JS::Rooted<JS::Value> promiseJSVal(aCx);

You shouldn't need this either.  There's a Promise::All overload that takes nsTArray<nsRefPtr<Promise>> as its second argument.  So just build your array of promises and then call Promise::All with it.

r=me on the IDL and Load bits with those issues fixed.  I wish there were a way to avoid having to create a GlobalObject here. :(
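From script, the synthesized rejection looks like this (the argument is deliberately unparsable):

  // An invalid font argument surfaces as a rejection of the returned
  // promise, not as a synchronous exception.
  document.fonts.load("not a parseable shorthand").catch(function(err) {
    err.name;  // "SyntaxError"
  });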
Attachment #8626773 - Flags: review?(bzbarsky) → review+
Comment on attachment 8626775 [details] [diff] [review]
Part 2: Tests.

r+ with the addition of font-list tests that use multiple fonts, mixing system fonts that exist with generics.
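A sketch of such a test (family names are placeholders); the assumption here is that families matching no FontFace in the set are treated as available system or generic fonts:

  // Only "MyWebFont" comes from the FontFaceSet, so only its load state
  // should affect the result; Courier and monospace are resolved as
  // system/generic fonts.
  document.fonts.check("16px MyWebFont, Courier, monospace");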
Attachment #8626775 - Flags: review?(jdaggett) → review+
(In reply to Boris Zbarsky [:bz] from comment #21)
> Comment on attachment 8626773 [details] [diff] [review]
> Part 1: Implement FontFaceSet load and check. (v4)
> 
> >+  [Throws] Promise<sequence<FontFace>> load(DOMString font, optional DOMString text = " ");
> 
> You should probably mark this [NewObject] and then you can remove the
> [Throws].
>
> >+Reject(nsIGlobalObject* aGlobalObject, ErrorResult& aRv)
> 
> Why do you need this?  If you have a promise-returning webidl method with an
> ErrorResult& arg, you can just return null after throwing on the
> ErrorResult& and the bindings will synthesize a rejected promise for you.

I didn't know that.  That makes things simpler.

> >+    JS::Rooted<JS::Value> promiseJSVal(aCx);
> 
> You shouldn't need this either.  There's a Promise::All overload that takes
> nsTArray<nsRefPtr<Promise>> as its second argument.  So just build your
> array of promises and then call Promise::All with it.

Oh, great, that's new since I wrote the patch.
Keywords: dev-doc-needed
https://hg.mozilla.org/mozilla-central/rev/c0af03c6f877
https://hg.mozilla.org/mozilla-central/rev/b6379bec3c17
Status: ASSIGNED → RESOLVED
Closed: 9 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla41
Depends on: 1226400