Closed
Bug 135425
Opened 22 years ago
Closed 22 years ago
empty elements (eg: <em/>) not closed in XHTML documents
Categories
(Core :: DOM: HTML Parser, defect)
Core
DOM: HTML Parser
Tracking
RESOLVED
INVALID
People
(Reporter: mike, Assigned: harishd)
References
Details
(Keywords: regression, testcase, xhtml)
Attachments
(2 files)
Given an XHTML document served as text/html, empty elements which normally contain character data (eg: <em/>) are not getting closed. If the same document is served as text/xml, it works fine. Putting a space before the slash (<em />) does not help. Note that this behaviour is also apparent for an identical HTML (i.e., non-XHTML) document, but that's probably fine as (AFAIK) HTML does not support minimised empty elements. This is a regression: it works fine in 0.9.9, but is broken in builds from at least 2002-04-03 onwards. Two test cases to come.
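The attachments themselves are not reproduced in this report; a minimal document along the lines described (a hypothetical reconstruction, not the actual attachment) might look like:

```html
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
  <head><title>Minimised empty element test</title></head>
  <body>
    <p>before <em/> after</p>
    <!-- Under XML rules, <em/> is a complete (empty) element, so the word
         "after" is not emphasised. Under tag-soup rules the trailing slash
         is ignored, <em> is treated as an unclosed start tag, and
         everything after it renders emphasised. -->
  </body>
</html>
```

Served as text/xml the document is parsed by XML rules and renders as expected; served as text/html the minimised `<em/>` is treated as an open `<em>` tag, which is the behaviour this bug describes.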
Reporter
Updated•22 years ago
Reporter
Comment 1•22 years ago
Minimal test case, served as text/html.
Reporter
Comment 2•22 years ago
This testcase is identical to the first, but is served as text/xml and so does not exhibit the same behaviour.
Comment 3•22 years ago
XHTML served as text/html is parsed by SGML (well, tag-soup, anyway) rules rather than XML rules, a practice endorsed by the HTML WG. This is probably INVALID.
Reporter
Comment 4•22 years ago
Ahh, right, Section C.3 of the XHTML 1.0 spec suggests avoiding this sort of minimised element, so I guess this could be marked INVALID. My $0.02: it would be nice to fix the regression so Moz does not appear to be one of the legacy browsers that Appendix C is trying to cover for. I also think it would make transitioning to XHTML easier for web developers; this bug certainly broke the UI of two XML-based web applications I'm working on.
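For reference, the Appendix C guideline mentioned above amounts to: use the minimised `<foo />` form only for elements whose content model is EMPTY, and write an explicit end tag for an empty instance of any other element. A sketch (illustrative, not taken from the attachments):

```html
<!-- Per XHTML 1.0 Appendix C compatibility guidelines: -->
<p>some <em></em> text</p>  <!-- empty instance of a non-EMPTY element:
                                 explicit end tag, not <em /> -->
<br />                      <!-- EMPTY content model: minimised form,
                                 with a space before the slash -->
```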
Comment 5•22 years ago
Resolving INVALID. To cut a long story short, remember that if you're serving this kind of markup as text/html, it will be accepted by older user-agents and probably throw a monkey wrench into their parsing. If you don't care about these user-agents, you should just switch to text/xml or application/xml to deliver your content :) We uphold this (and the W3C Markup Working Group agrees) so that people aren't tempted to try to throw XML constructs at these old user agents. (Incidentally, if you've been having problems with IE displaying "raw" XML when you give it XHTML with an XML content-type, I understand an "identity" XSLT stylesheet--that is, one that invokes XSLT processing but doesn't actually transform any of the document--applied to such documents will make IE work fine with them.)
Status: UNCONFIRMED → RESOLVED
Closed: 22 years ago
Resolution: --- → INVALID
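The "identity" stylesheet mentioned in the comment above is the standard XSLT identity transform. A minimal version might look like the following (the file name in the processing instruction is a placeholder, not something from this bug):

```xml
<?xml version="1.0"?>
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- Copy every node and attribute through unchanged; the point is only
       to trigger XSLT processing in the browser, not to transform. -->
  <xsl:template match="@*|node()">
    <xsl:copy>
      <xsl:apply-templates select="@*|node()"/>
    </xsl:copy>
  </xsl:template>
</xsl:stylesheet>
```

The XHTML document would then reference it with something like `<?xml-stylesheet type="text/xsl" href="identity.xsl"?>` before the root element.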
Comment 6•22 years ago
*** Bug 146084 has been marked as a duplicate of this bug. ***
Comment 7•20 years ago
*** Bug 266447 has been marked as a duplicate of this bug. ***
Comment 8•19 years ago
*** Bug 286497 has been marked as a duplicate of this bug. ***
*** Bug 293692 has been marked as a duplicate of this bug. ***
Comment 10•18 years ago
*** Bug 327637 has been marked as a duplicate of this bug. ***
Comment 11•18 years ago
I've added <meta http-equiv="Content-Type" CONTENT="text/xml; charset=UTF-8" /> to the test case from Bug 327637, but this does not solve the problem. (I tried other charsets as well.) The file is being loaded from the local filesystem (Firefox 1.5.0.1, Win2K), so there is no other way to configure the content type and no conflicting setting in HTTP headers. I suppose there could also be a bug in the parsing of HTTP-EQUIV that affects this case.