Closed Bug 566906 Opened 14 years ago Closed 7 years ago

Add L20n bindings for XUL

Category: Core :: XUL (defect, severity: normal)

Status: RESOLVED INCOMPLETE

People: Reporter: zbraniecki, Assigned: zbraniecki

Attachments: 1 file, 14 obsolete files (27.69 KB, patch)
Here I'm going to post WIPs for the L20n XUL bindings in order to get feedback and reviews.
Assignee: nobody → gandalf
Attached patch WIP v1 (obsolete) — Splinter Review
Comment on attachment 446252 [details] [diff] [review]
WIP v1

WIP 1: this patch adds the l20n processing-instruction (PI) logic and attribute expansion.
Attachment #446252 - Attachment is patch: true
Attachment #446252 - Attachment mime type: application/octet-stream → text/plain
Attachment #446252 - Flags: feedback?(l10n)
Comment on attachment 446252 [details] [diff] [review]
WIP v1

This goes in the right direction, but I see a few things still to be done here:

Firstly, http://beaufour.dk/jst-review/ shows a bunch of issues, with only one false positive.

The include of nsIChromeRegistry.h does not seem to be needed.

I'm not sure about "timed" and about NS_TIMELINE_MARK. You should try to find out whether this should be converted to mozilla/FunctionTimer.h instead.
Attachment #446252 - Flags: feedback?(l10n) → feedback+
Gandalf, what is the status of this bug?  Seems like Pike has given some direction.  Can you move forward or are you blocked somehow?
Attached patch patch v1 (obsolete) — Splinter Review
content sink patch updated to trunk and cleaned up.

The remaining warnings from the jst review tool are:
 - debug prints; I'll remove them once we agree that there are no major issues left to solve
 - one "operators should be located at the end of line", which I believe is a bug in the review tool
 - one "line is too long", which I'm not sure what to do about unless I make the source uglier.
Attachment #446252 - Attachment is obsolete: true
Attachment #451162 - Flags: review?
Attachment #451162 - Flags: feedback?(l10n)
Attached patch browser dtd->l20n patch (obsolete) — Splinter Review
this patch turns most of browser-menubar.inc into l20n calls.
Attachment #451162 - Flags: feedback?(l10n)
Comment on attachment 451162 [details] [diff] [review]
patch v1

Here's a few things I see:

- I bet you're leaking the old attr list; that calls for a
delete[] attrs;
after you copy the values over.

- Do we need to fix internal properties here? I bet we don't want to copy those over to the element?

- Do we need to remove duplicate attributes?
a) sure, that will come with an updated patch
b) should I recognize them by "."?
c) what duplicated attributes?
Update:
a) fixed
b) internals will go through __foo
c) duplicated as in <elem value="foo" l10n_id="id1"> combined with this.id1 = {'value': 'foo2'};
Status: NEW → ASSIGNED
Attached patch browser dtd->l20n (obsolete) — Splinter Review
updated Firefox dtd->l20n patch to mozilla-central.
Attachment #451163 - Attachment is obsolete: true
Attached patch browser dtd->l20n WIP (obsolete) — Splinter Review
updated XUL dtd->l20n patch to tip.
Attachment #453241 - Attachment is obsolete: true
Blocks: 595812
Attachment #451162 - Flags: review?
Attached patch patch v3 (obsolete) — Splinter Review
This is the patch against beta4.

I'd like to get Pike's feedback first, and then review from someone from the platform team.

I envision the code from createContext, appendJSCode and L20n_JS eventually living in a separate place in the source tree (intl/l20n ?), but I'm not sure that's the right way to go.
Attachment #451162 - Attachment is obsolete: true
Attachment #454643 - Attachment is obsolete: true
Attachment #474688 - Flags: feedback?(l10n)
Comment on attachment 474688 [details] [diff] [review]
patch v3

I can't really do a full review here, but there are a few things that come to mind:

- more code comments ;-)
- not sure how much of the stuff in the header file is required

On to the beef:

- the l10n-data piece should go into another js object with the core l20n code being the prototype. You probably want to use that object only for l20n expansions if you actually need it, for perf reasons. This will factor your code differently, AFAICT.
- also, l10n-data should really be just JSON; I guess you'll need to pick up the 3-API-call stuff from jsapi.h for that.
Attachment #474688 - Flags: feedback?(l10n) → feedback-
Thanks, Pike. I'll work on the patch today.
Attached patch patch v4 (obsolete) — Splinter Review
- the beef is there ;)
 - we now use JSON for l10n-data
 - we now make l10n-data the prototype of the global object
 - the whole thing with referencing other variables is tricky, watch out.

I'll move the struct to the nsXULDocument in patch v5 and add some comments before calling it done.
Attachment #474688 - Attachment is obsolete: true
Attached patch patch v5 (obsolete) — Splinter Review
patch v5:
 - Pike's feedback applied
 - temporarily we do not call the L20n_JS destructor, since we're not sure what we should do there - up to the reviewer.

I believe the patch is ready for review from the content team.
Attachment #484582 - Attachment is obsolete: true
Blocks: 611095
Blocks: 611096
Attached patch patch v6 (obsolete) — Splinter Review
- added comments
 - minor cleanups
Attachment #489334 - Attachment is obsolete: true
Attachment #491003 - Flags: feedback?(l10n)
Attached patch patch v7 (obsolete) — Splinter Review
Minor cleanups as I went through XBL and intl/l20n. This should be final for review.
Attachment #491003 - Attachment is obsolete: true
Attachment #491003 - Flags: feedback?(l10n)
Attachment #491356 - Flags: feedback?(l10n)
Blocks: 614536
No longer blocks: 614536
Depends on: 614536
Attached patch patch, v8 (obsolete) — Splinter Review
This patch updates the code to the latest j20n spec. It's final (or very close to final) in the way it interacts with j20n, but rather initial in how the exact C++ code is written.

Next step is to sync with Mounir and hand the code over to him :)
Attachment #491356 - Attachment is obsolete: true
Attachment #548649 - Flags: feedback?(mounir)
Attachment #491356 - Flags: feedback?(l10n)
Comment on attachment 548649 [details] [diff] [review]
patch, v8

(This is not a proper review. I don't know the JS API well enough.)

>+  NS_IMETHOD GetL20nJSObject(L20n_JS** obj) = 0;

Is this method meant to be called from JS? (If not, I'd just return L20n_JS* directly.) Argument names should start with a lower-case 'a': aObj.

>+static JSClass global_class = {

I don't know about the JS API conventions, but I'd expect this to be named sGlobalClass to conform to the Gecko C++ conventions.

>+// Ensure that the context has been initialized for the given
>+// l20n_js object

It would be nicer to use JavaDoc-style comments
/**
 * API explanation here
 */
for documenting methods and fields than to use // comments. I'm not sure if we actually have any tools that extract /** ... */ comments, but it still seems to be the favored convention.

>+nsresult ensureContext(nsIWeakReference* mDocument, L20n_JS** obj)

The return value should be on the previous line. Passing nsIWeakReference* as an argument is weird. Why are you passing nsIWeakReference* instead of e.g. nsIDocument*? The convention is to start method and function names with a capital letter in .cpp files. The argument names should be aDocument and aObj.

Why is this a loose function instead of being a static method on the nsXULDocument class? Or, rather, why isn't this a non-static method on nsXULDocument so that you could use |this| instead of a document argument?

Since this function isn't exposed to XPConnect, you could have L20n_JS* as the return type, return null in out of memory cases and get rid of the out param pattern.
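To make the suggested shape concrete, a minimal sketch (the member name mL20nJS and the Init() helper are invented for the example, not taken from the patch):

L20n_JS*
nsXULDocument::EnsureL20nContext()
{
    // Lazily create the per-document L20n state; return null on failure.
    if (!mL20nJS) {
        L20n_JS* l20n = new L20n_JS();
        if (NS_FAILED(l20n->Init(this))) {   // Init() is hypothetical here
            delete l20n;
            return nsnull;
        }
        mL20nJS = l20n;
    }
    return mL20nJS;
}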

>+        if (!JS_InitStandardClasses(l20n_js->cx, l20n_js->global))
>+            return NS_ERROR_OUT_OF_MEMORY;

Is this operation still performed with a fallible allocator?

>+// Add data to the l20n_js context, compile it and execute
>+JSObject* appendJSCode(L20n_JS* l20n_js, nsCString data)

The stylistic considerations mentioned for the previous function apply here, too. Furthermore, the type of the second argument should be const nsACString&. (Or if .get() doesn't work in that case, const nsCString&.)

>+    nsIPrincipal *mPrincipal;
>+    JSPrincipals *gJSPrincipals = nsnull;

I suppose it doesn't really matter, but it seems to me that Foo* bar; is favored a little over Foo *bar.

Anyway, local variables shouldn't start with m, a, s or g. Since mPrincipal isn't a member, it should be just principal. Since gJSPrincipals isn't global, it should be just jsPrincipals.

>+    // use nullprincipal for l20n_js since we don't need any
>+    // privileges.
>+    nsresult rv = CallCreateInstance("@mozilla.org/nullprincipal;1",
>+                                    &mPrincipal);

It seems to me that you end up leaking the null principal. Please use
nsCOMPtr<nsIPrincipal> principal = do_CreateInstance(NS_NULLPRINCIPAL_CONTRACTID);

>+    rv = mPrincipal->GetJSPrincipals(l20n_js->cx, &gJSPrincipals);

This rv is never read. It appears that the status exists only for legacy reasons, so you could as well not assign the status to anything.

>+// Add JSON l10n-data to l20n_js object and make it a prototype
>+// for the global object.
>+// This way developers can provide variables and localizers can
>+// override them in their j20n.
>+JSObject* appendJSONCode(L20n_JS* l20n_js, const nsAString& data)
>+{
>+    jsval json = JSVAL_NULL;
>+    JSAutoRequest ar(l20n_js->cx);
>+    JS_ParseJSON(l20n_js->cx, data.BeginReading(), (uint32)data.Length(), &json);
>+    JS_ValueToObject(l20n_js->cx, json, &l20n_js->l10ndata);
>+    
>+    // make JSON l10ndata a prototype of a global object
>+    JS_SetPrototype(l20n_js->cx, l20n_js->global, l20n_js->l10ndata);
>+    return NS_OK;

You are returning NS_OK from a function whose return type is JSObject*!
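Presumably the intent was to end the function with something like this (a guess at the fix, not the actual patch):

// return the parsed l10n-data object (or null on failure) instead of an nsresult
return l20n_js->l10ndata;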

>+    // localize is a PI that should provide uri to
>+    // an L20n resource which will be added to the
>+    // document's l20n context.
>+    if (target.Equals(NS_LITERAL_STRING("localize"))) {
>+        nsCOMPtr<nsIURI> uri;
>+        
>+        // take the argument of the PI as an URI
>+        rv = NS_NewURI(getter_AddRefs(uri), data);

Any reason not to pass the URI of the current document as the base URI?
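For example, something along these lines (docBaseURI is a stand-in for however the sink reaches the document's URI, not a variable in the patch):

// Sketch: resolve the PI argument relative to the document's URI.
rv = NS_NewURI(getter_AddRefs(uri), data, nsnull, docBaseURI);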

>+        NS_ENSURE_SUCCESS(rv, rv);
>+        nsCAutoString u;
>+        uri->GetSpec(u);
>+        nsIIOService* ioService = nsContentUtils::GetIOService();
>+        NS_ENSURE_SUCCESS(rv, rv);

You haven't actually assigned to rv after the previous NS_ENSURE_SUCCESS.

>+        while (1) {
>+            rv = inputStream->Read(buf, sizeof(buf), &amt);

This looks bad if the stream doesn't come from a fast local source. Please have some mechanism in place to make sure this code never runs unless we are dealing with a URL pointing to a local localization bundle.
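One possible guard (hypothetical, not from the patch) would be to check the scheme before doing the synchronous read:

// Only read synchronously when the resource is a local chrome:/resource: URL;
// anything remote would need an async path instead.
PRBool isLocal = PR_FALSE;
uri->SchemeIs("chrome", &isLocal);
if (!isLocal) {
    uri->SchemeIs("resource", &isLocal);
}
NS_ENSURE_TRUE(isLocal, NS_ERROR_UNEXPECTED);
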
>-      rv = NormalizeAttributeString(aAttributes[i * 2], attrs[i].mName);
>+      nsString str;
>+      rv = NormalizeAttributeString(aAttributes[i * 2],
>+                                    aElement->mAttributes[i].mName);

What's the purpose of changing the argument to NormalizeAttributeString?

You don't need to declare variables at the top of a block.

str should probably be nsAutoString instead of nsString unless you know nsString to be faster in this particular case.

>       NS_ENSURE_SUCCESS(rv, rv);
>+      attrs[i].mName.GetQualifiedName(str);
>+      if (str.Equals(NS_LITERAL_STRING("l10n_id"))) {

You shouldn't be making decisions based on the qualified name. You should be making decisions by checking the attribute namespace and the local name of the attribute (as an nsIAtom* so that it's just a pointer compare).

Is there an established XML style of using the underscore in attribute names? If we're going to extend these conventions to HTML, I suggest removing the underscore, since there are no underscores in existing HTML attribute names. (By convention, in HTML it would be l10nid, which isn't pretty. Are there going to be other l10n-foo attributes? If yes, then it's OK to hyphenate: l10n-foo, l10n-bar, etc. If there's going to be only one attribute, can this be just l10n?) Is there a reason for having a different set of localization ids as opposed to using id="foo" for localization also?
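For illustration, the namespace-plus-atom comparison might look roughly like this (a sketch only; nsGkAtoms::l10n_id is hypothetical and the atom would have to be added):

// Compare namespace + local-name atom instead of the qualified string;
// atom comparison is just a pointer compare.
const nsAttrName& name = attrs[i].mName;
if (name.NamespaceID() == kNameSpaceID_None &&
    name.LocalName() == nsGkAtoms::l10n_id) {
    // this attribute carries the localization id
}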

>+    //delete l20n_js;
>+

Why is this commented out?


>+struct L20n_JS {

Why is this a struct as opposed to being a class?

>+    L20n_JS() : cx(nsnull), global(nsnull), l10ndata(nsnull){}

You could declare a zeroing operator new to avoid nulling all members explicitly. You can get a zeroing operator new by saying NS_DECL_AND_IMPL_ZEROING_OPERATOR_NEW in the class declaration.

>+    JSContext *cx;
>+
>+    // it should contain what's in j20n file 
>+    JSObject *global;
>+
>+    // it should contain what's in JSON l10n-data
>+    JSObject *l10ndata;

Please name these mCx (or mContext), mGlobal and mL10nData.
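Putting the naming and zeroing-new suggestions together, the struct might end up looking roughly like this (a sketch, not the reviewed code):

struct L20n_JS {
    NS_DECL_AND_IMPL_ZEROING_OPERATOR_NEW

    // JS context used for evaluating the j20n resources.
    JSContext* mCx;

    // Global object; should contain what's in the j20n file.
    JSObject* mGlobal;

    // Should contain what's in the JSON l10n-data.
    JSObject* mL10nData;
};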
Comment on attachment 548649 [details] [diff] [review]
patch, v8

Some random comments:

>--- a/content/xul/document/src/nsXULContentSink.cpp
>+++ b/content/xul/document/src/nsXULContentSink.cpp
>+    if (target.Equals(NS_LITERAL_STRING("localize"))) {

target.EqualsLiteral("localize")

>+    } else if (target.Equals(NS_LITERAL_STRING("l10n-data"))) {

target.EqualsLiteral("l10n-data")

>+        

Remove trailing whitespace.

>+        if (res == JS_TRUE) {

if (res) {

And I think this is conventionally called "ok".

>+                extAttrs = new nsXULPrototypeAttribute[aAttrLen +
>+                                                       prop_array.length()];
>+                if (!extAttrs)
>+                  return NS_ERROR_OUT_OF_MEMORY;

Remove this check.

>--- a/content/xul/document/src/nsXULDocument.h
>+++ b/content/xul/document/src/nsXULDocument.h
>+struct L20n_JS {
>+};

I wonder if this needs to be refcounted.
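If it does, a hedged sketch of the plain (non-threadsafe) option, assuming nothing fancier than standard XPCOM-style refcounting is needed:

struct L20n_JS {
    NS_INLINE_DECL_REFCOUNTING(L20n_JS)

    // members as discussed above
    JSContext* mCx;
    JSObject* mGlobal;
    JSObject* mL10nData;
};

Holders would then keep it in an nsRefPtr<L20n_JS> instead of a raw pointer.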
(In reply to comment #20)
> Is there an established XML style of using the underscore in attribute
> names? 

Oops. I meant *XUL* style--not XML style.

> If we're going to extend these conventions to HTML, I suggest
> removing the underscore, since there are no underscores in existing HTML
> attribute names. (By convention, in HTML it would be l10nid, which isn't
> pretty. Are there going to be other l10n-foo attributes? If yes, then it's
> OK to hyphenate: l10n-foo, l10n-bar, etc. If there's going to be only one
> attribute, can this be just l10n?) Is there reason for having a different
> set of localization ids as opposed to using id="foo" for localization also? 

timeless suggested term='foo' on IRC.
Attached patch patch, v9 (obsolete) — Splinter Review
Updated patch - mostly comments and feedback. Important changes are:

 - l10n_id -> l10n-id
 - j20n entity.attrs -> entity._attrs
Attachment #548649 - Attachment is obsolete: true
Attachment #548649 - Flags: feedback?(mounir)
Thanks for the review Henri!

(In reply to Henri Sivonen (:hsivonen) (Not reading bugmail until 2011-08-29) from comment #20)
> Why is this a loose function instead of being a static method on the
> nsXULDocument class? Or, rather, why isn't this a non-static method on
> nsXULDocument so that you could use |this| instead of a document argument?

Because we may want to introduce a separate l20n component with a Context class and an L20n service class, and have a full-blown L20n API for C++ code, like we already have for:

 - js: https://github.com/zbraniecki/l20n/blob/master/js/l20n.js
 - python: (partial) https://github.com/zbraniecki/l20n/blob/master/python/l20n/__init__.py

the basic API works like this:

ctx = l20n.getContext(path)
ctx.addResource(path)

ctx.get('entity')

ctx.getAttribute('entity', 'attr')

but I don't know how, or whether, this is what we should do for C++ code.

> >+        if (!JS_InitStandardClasses(l20n_js->cx, l20n_js->global))
> >+            return NS_ERROR_OUT_OF_MEMORY;
> 
> Is this operation still performed with a fallible allocator?

I don't think I understand :(

> This rv is never read. It appears that the status exists only for legacy
> reasons, so you could as well not assign the status to anything.

I don't know what conventions we use for verifying if the command executed successfully. 
 
> Is there an established XML style of using the underscore in attribute
> names? If we're going to extend these conventions to HTML, I suggest
> removing the underscore, since there are no underscores in existing HTML
> attribute names. (By convention, in HTML it would be l10nid, which isn't
> pretty. Are there going to be other l10n-foo attributes? If yes, then it's
> OK to hyphenate: l10n-foo, l10n-bar, etc. If there's going to be only one
> attribute, can this be just l10n?) Is there reason for having a different
> set of localization ids as opposed to using id="foo" for localization also? 
> 

I don't think we want to go for reusing "id", because then we'd have to check every node with an ID against the table of entity ids.

I like l10n-id (we may want to add l10n-data later), so I changed to that.

> >+    //delete l20n_js;
> >+
> 
> Why is this commented out?

Because it crashes with some errors that I fail to understand on Fx shutdown. It seems to me that the object is already torn down earlier, so by the time this line executes it's no longer there, and that causes a crash.
 
 
> >+struct L20n_JS {
> 
> Why is this a struct as opposed to being a class?

Same as above. It's the first attempt and I tried to minimize the impact on the code base since I don't know where it should live structurally.

The JS code has been reviewed by mrbkap and jorendorf, and I do plan to ask them to take another look before we're done here. But I believe the major decisions right now are more about where to put this code and how to expose it so that it's available for XBL as well (right now I just have a patch that duplicates this code in XBL) - maybe elevate it to XMLContentSink?
In the future we will want to add this to the HTML5 parser, so should we put it in a separate component/module?

The way it's structured right now was the most naive, easiest way to inject the right algorithms in place and make them work. Now I need help with how to wrap it into something that is reusable and fits the Gecko architecture.
(In reply to Zbigniew Braniecki [:gandalf] from comment #24)
> (In reply to Henri Sivonen (:hsivonen) (Not reading bugmail until
> 2011-08-29) from comment #20)
> > Why is this a loose function instead of being a static method on the
> > nsXULDocument class? Or, rather, why isn't this a non-static method on
> > nsXULDocument so that you could use |this| instead of a document argument?
> 
> Because we may want to introduce a separate l20n component with Context
> class and L20n service class and have a full blown L20n API for C++ code,
> like we do have for:
> 
>  - js: https://github.com/zbraniecki/l20n/blob/master/js/l20n.js
>  - python: (partial)
> https://github.com/zbraniecki/l20n/blob/master/python/l20n/__init__.py
> 
> the basic API works like this:
> 
> ctx = l20n.getContext(path)
> ctx.addResource(path)
> 
> ctx.get('entity')
> 
> ctx.getAttribute('entity', 'attr')
> 
> but I don't know how and if this is what we should do for C++ code.

Looks like there should be a C++ class named L20nContext in L20nContext.h/cpp somewhere, and the class should have a static GetContext method that returns a context instance.
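For the record, a rough sketch of what such a header could look like (all names and signatures here are assumptions extrapolated from the JS/Python APIs above, not an agreed design):

// L20nContext.h (hypothetical)
class L20nContext
{
public:
    NS_INLINE_DECL_REFCOUNTING(L20nContext)

    // Returns a (possibly cached) context; mirrors l20n.getContext(path).
    static already_AddRefed<L20nContext> GetContext(nsIURI* aResource);

    // Mirrors ctx.addResource(path).
    nsresult AddResource(nsIURI* aResource);

    // Mirrors ctx.get('entity').
    nsresult Get(const nsACString& aEntityId, nsAString& aValue);

    // Mirrors ctx.getAttribute('entity', 'attr').
    nsresult GetAttribute(const nsACString& aEntityId,
                          const nsACString& aAttrName,
                          nsAString& aValue);

private:
    ~L20nContext();
};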

> > >+        if (!JS_InitStandardClasses(l20n_js->cx, l20n_js->global))
> > >+            return NS_ERROR_OUT_OF_MEMORY;
> > 
> > Is this operation still performed with a fallible allocator?
> 
> I don't think I understand :(

After bug 611123, most allocations in Gecko are done with an allocator that never returns null. *If* JS_InitStandardClasses is backed by the allocator that never returns null, it's not necessary to null-check the result anymore. I don't know if JS_InitStandardClasses is backed by the allocator that never returns null.

> > This rv is never read. It appears that the status exists only for legacy
> > reasons, so you could as well not assign the status to anything.
> 
> I don't know what conventions we use for verifying if the command executed
> successfully. 

We use
NS_ENSURE_SUCCESS(rv, rv);
but it seems to me that mPrincipal->GetJSPrincipals will always succeed now that we have infallible allocation.

> I like l10n-id (we may want to add l10n-data later), so I changed to that.

Cool.

> > >+    //delete l20n_js;
> > >+
> > 
> > Why is this commented out?
> 
> Because it crashes with some errors that I fail to understand on Fx
> shutdown. It seems to me that the object is dereferenced earlier and when
> this line is executed it's already not there and it causes a crash.

OK. Obviously this can't land before this has been figured out. :-)

> > >+struct L20n_JS {
> > 
> > Why is this a struct as opposed to being a class?
> 
> Same as above. It's the first attempt and I tried to minimize the impact on
> the code base since I don't know where it should live structurally.

intl/l20n/ seems plausible. bsmedberg suggested the existing intl/strres/.

> maybe elevate to XMLContentSink?
> In the future we will want to add this to HTML5 parser, so should we put it
> in a separate component/module?

Unfortunately, I don't remember the conclusions from our discussion in December. However, I do remember the issues: If we want to evolve this into an API for Web apps, it's necessary to consider how the IO model fits with the Web platform and how the string substitution works with the DOM.

Currently, the code in the XUL content sink does blocking IO on the main thread. That's unacceptable when the localization data is loaded over HTTP, so we need a different way of dealing with IO if L20n is exposed to Web apps.

Having the DOM output of the parser depend on anything but the XML or HTML source is highly radical, so it might be more appropriate to do the string substitution in a shadow DOM or in the frame tree as opposed to doing it in the DOM. IIRC, sicking had an opinion about this in December but I don't remember what it was. CCing sicking to find out.

If we don't let L20n modify the DOM, then the substitution doesn't belong in the HTML parser or in the XML content sink. The IO interaction of the localization data with parsing would still belong there.
(In reply to Henri Sivonen (:hsivonen) from comment #25)
> (In reply to Zbigniew Braniecki [:gandalf] from comment #24)
> > (In reply to Henri Sivonen (:hsivonen) (Not reading bugmail until
> > > >+        if (!JS_InitStandardClasses(l20n_js->cx, l20n_js->global))
> > > >+            return NS_ERROR_OUT_OF_MEMORY;
> > > 
> > > Is this operation still performed with a fallible allocator?
> > 
> > I don't think I understand :(
> 
> After bug 611123, most allocations in Gecko are done with an allocator that
> never returns null. *If* JS_InitStandardClasses is backed by the allocator
> that never returns null, it's not necessary to null-check the result
> anymore. I don't know if JS_InitStandardClasses is backed by the allocator
> that never returns null.

js/ doesn't use infallible allocators. Note that JSAPI functions return a generic success/failure boolean, and I'm not sure all false-returns are OOM.
I have had a long and fruitful conversation with Henri on IRC.

1) JS vs CPP

We currently have the XUL/XBL bindings (which will replace DTDs) written in C++ and the JS bindings (which will replace .properties) written in JS.

If I recall correctly, @sicking and @jst mentioned that we want that in order to avoid switching contexts between JS and C++ all the time to get strings.

As Henri suggested that we move most of the logic away from the ContentSink itself and wrap it in a C++ API, we will end up with something like a C++ API and a JS API that are very similar and will both live in an ./intl/l20n (or similar) component.
Does it make sense to have both?

We do need to address all the use cases of .properties and .DTD, and we will want to offer this to extension authors.

Should we use a .jsm for the JS module? What about the C++ component?



2) Is ./intl/l20n the right place?

Currently, as part of bug 595816, I wrote a scriptable interface @mozilla.org/intl/l20n;1 that lives in ./intl/l20n, and now we want to write API classes for the C++ bindings (the patch in this bug), which may land in the same place. Is that the right place? How should we structure it?

3) how do we expose the services?

Should we use separate class ids or contract ids to separate C++ bindings from JS bindings?

4) Should the strings go into DOM or shadow DOM?

It seems that we have two options here. We either:
a) load the resources while blocking the parser, put the strings into the DOM, and be done. Henri suggests that this will impair our chances to push L20n as a standard.
b) load the resources without blocking the parser, block layout, and push the strings into a shadow DOM.

The implications are pretty severe. It'll require relearning, since suddenly the DOM tree will not contain UI strings and we'll need some new way to access them, but it will allow us to run arbitrary JS when getting a string (which is what L20n wants to do) and it'll nicely separate markup from content.
I'm worried about how it'll impact the way we operate on DOM trees and how third parties manipulate the DOM. It's a bit like CSS vs. computed CSS, but imho with more profound implications.

5) Synchronous vs Asynchronous IO resource loading

Should we go for synchronous resource loading or asynchronous? 

6) If we block the parser, Henri suggests using <link> for HTML5; if blocking layout, then <script>. Should we replace the PI in XUL/XBL with <link> or <script> to match HTML5?

7) If we go for <link> or <script> for resource loading, what do we do with l10n-data? (l10n-data is currently a PI with a JSON blob as its content)

8) How can we make dynamically inserted nodes localizable, and add the ability to relocalize?

Nodes inserted with createElement etc. should trigger localization if they contain l10n_id. We also want to add a DOM method .retranslate() that will trigger node retranslation.

9) If we decide to go for a shadow DOM, can we do some smart trickery to show the shadow DOM node (if content exists) instead of the original one, to mitigate the transition issue and learning curve from normal XUL/HTML5 to this internationalized XUL/HTML5?

10) Can we make L20n strings be DOM fragments instead of plain strings? Henri said that storing DOM fragments in the shadow tree would be significantly more complex than simple strings, but sooner or later we will need to respond to people's need to inject a block of HTML (for example, when a string is a paragraph with formatting tags or a list).

I'm cc'ing some people who may be able to answer these questions. I need help from all of you with this :)
(In reply to Zbigniew Braniecki [:gandalf] from comment #27)
> As Henri suggested that we move most of the logic away from the ContentSink
> itself and wrap it into an API in C++ we will end up with sth like C++ API
> and JS API both very similar and will both leave in ./intl/l20n (or similar)
> component.
> Does it make sense to have both?

I asked bz and he said that calling from JS into an XPCOM component that is implemented in JS does not result in a JS-JS call but in a JS-XPConnect-JS call. If we have a C++ implementation anyway, JS-XPConnect-JS doesn't seem like a win compared to JS-XPConnect-C++.

So I think it doesn't make sense to have both JS and C++ implementations if both are exposed through XPCOM/XPConnect. (I don't know enough about browser chrome to say if it would be possible to expose a non-XPCOM JS service to JS.)

On IRC, sicking mentioned that APIs designed for C++ might sometimes be awkward for JS, so depending on the API, having different APIs for C++ and JS might be an option if the JS API would be nicer to use that way.

> The implications are pretty severe. It'll require relearning since suddenly
> DOM tree will not contain UI strings, and we'll need some new way to access
> the strings but will allow us to do arbitrary JS on getting the string
> (which is what L20n wants to do) and it'll nicely separate markup from
> content.

But OTOH, dynamically added stuff can be localized by the same automation mechanism in the shadow tree case.

> 5) Synchronous vs Asynchronous IO resource loading
> 
> Should we go for synchronous resource loading or asynchronous? 
> 
> 6) if we block parser, Henri suggests using <link> for HTML5, if blocking
> layout, then <script>.

The other way round. :-) <script> if blocking the parser (to put the localized strings in the DOM) and <link> if blocking layout only (to wait for the shadow DOM to be ready to be fed into the frame constructor without flash of unlocalized content).

> 7) If we go for <link> or <script> for resource loading, what do we do with
> l10n-data? (l10n-data is currently a PI with JSON blob as content)

<script type="application/l20n">blob here</script> maybe?

> 8) how can we get ability to get dynamically inserted nodes to be
> localizable and add ability to relocalize.
> 
> Nodes inserted with createElement etc. should trigger localization if they
> contain l10n_id. 

This points heavily in the shadow DOM direction.

> 9) If we decide to go for shadow DOM, can we do some smart trickery to show
> the shadow DOM node if content exists instead of the original one to
> mitigate the transition issue and learning curve from normal XUL/HTML5 to
> this internationalized XUL/HTML5?

Do you mean showing to an API? (As opposed to showing in layout.)

> 10) Can we make L20n strings to be DOM fragments instead of plain strings?
> Henri said that storing DOM fragments in the shadow tree will be
> significantly more complex than simple strings

Well, at least they'd put the strain on different types of nodes and operations.

11) How should L20n shadow content interact with XBL shadow content?
We can/should use a JSM for the JS side to avoid any XPCOM or xpconnect overhead.

The shadow idea sounds wonderful but entirely impractical. It would allow us to alter the UI language on the fly. But unless there is already some magic in the works I'm not aware of, I really don't think we ought to gate a new l10n system on a magical DOM feature.
One minor comment: L20N might be a good name for the project, but I'm not sure it's optimal to use that name in the code authors need to write. Not sure what would be better, though.
Attached patch WIP 1 (obsolete) — Splinter Review
patch that reuses intl/l20n (bug 704500)
Attachment #552558 - Attachment is obsolete: true
Attachment #576341 - Flags: feedback?(hsivonen)
Attachment #576341 - Flags: feedback?(hsivonen) → feedback+
Comment on attachment 576341 [details] [diff] [review]
WIP 1

>--- a/content/xul/content/src/Makefile.in
>+++ b/content/xul/content/src/Makefile.in

>+	-I$(topsrcdir)/intl/l20n \

>--- a/content/xul/document/src/Makefile.in
>+++ b/content/xul/document/src/Makefile.in

>+		  -I$(topsrcdir)/intl/l20n \

Any reason not to EXPORT what you need to share?
Attached patch WIP 2 (obsolete) — Splinter Review
Updated patch.

New stuff:
 - support for L10nData
 - support for per-call l10n-args
 - separated L10nContext (generic) from L20nXMLContext (for XML documents)
 - do not create L20nContext until nsIDocument starts using it

I did not clean up the code yet, but I'm close to being done with the class features. I'll clean up the code and ask for the next round of feedback now.
Attachment #576341 - Attachment is obsolete: true
Attached patch WIP 2Splinter Review
Oops, I didn't add L20nXMLContext in the previous submission.
Attachment #579699 - Attachment is obsolete: true
Seven years later, we're making another attempt to refactor our l10n layer.

The new tracking bug is bug 1365426 and I'll mark the previous effort as "INCOMPLETE".
Status: ASSIGNED → RESOLVED
Closed: 7 years ago
Resolution: --- → INCOMPLETE