Closed Bug 81202 Opened 23 years ago Closed 15 years ago

view|page info should show HTTP headers

Categories

(SeaMonkey :: Page Info, enhancement, P3)

enhancement

Tracking

(Not tracked)

RESOLVED WONTFIX

People

(Reporter: Marko.Macek, Assigned: db48x)

References

Details

(Keywords: helpwanted)

Attachments

(3 files)

summary says it all... (it would be nice to have it for all protocols).
-->view page info problem
Assignee: neeti → blakeross
Component: Networking → XP Apps: GUI Features
QA Contact: tever → sairuh
Daniel, this sounds like more enhancements. Where will this stop? I guess it's
more future work :(
CC'ing Daniel and myself.
Yay! More work! Really though, this'll probably never be done. Jesse Ruderman's
got me beat here anyway, see http://www.squarefree.com/bookmarklets. 
Blocks: 52730
I didn't know about:

http://webtools.mozilla.org/web-sniffer/

It shows the server headers fine... unless the server does some kind of browser
sniffing and alters the headers.

However, this bug is also about showing the request... starting from GET ...
HTTP/1.1
No longer blocks: 52730
-> Daniel to fix or wontfix.
Assignee: blakeross → db48x
WONTFIX, but that doesn't stop us from accepting contributions, etc.
Status: NEW → RESOLVED
Closed: 23 years ago
Resolution: --- → WONTFIX
*** Bug 85815 has been marked as a duplicate of this bug. ***
Any chance that at least viewing the Server: header could be implemented, since
that's probably the most interesting one? The links browser (no, not lynx) shows
this in its equivalent of page info; quite nice...
Keywords: helpwanted
Summary: view|page info should show HTTP request/response headers → [RFE] view|page info should show HTTP headers
*** Bug 96792 has been marked as a duplicate of this bug. ***
reopen, -> future, since a patch will be taken. (Assign to nobody@mozilla.org if
you want, but don't resolve it - that just blocks this from searches)
Status: RESOLVED → REOPENED
Resolution: WONTFIX → ---
Target Milestone: --- → Future
*** Bug 101036 has been marked as a duplicate of this bug. ***
mass moving open bugs pertaining to page info to pmac@netscape.com as qa contact.

to find all bugspam pertaining to this, set your search string to
"BigBlueDestinyIsHere".
QA Contact: sairuh → pmac
Blocks: 76099
No longer blocks: 82059
Component: XP Apps: GUI Features → Page Info
*** Bug 122240 has been marked as a duplicate of this bug. ***
The new page info has a lot more stuff... one of the main things still missing
is 'Server:'. Anything else that's useful enough to specify? Maybe a button to
dump all of the headers in a text/plain popup window?
just some thoughts for interesting 'properties' to list in 'page info':
*for images, the 'properties' window could list the size of the image.
*and indeed the HTTP headers sent and received would be nice.
*cookies sent and received for this page
*size in bytes for the HTML (== content-length) and in total (==including all media)
*time between header sent, first byte received, last byte received
*** Bug 131211 has been marked as a duplicate of this bug. ***
*** Bug 132248 has been marked as a duplicate of this bug. ***
I should be able to get to all of these by the time 1.1 is released.
Status: REOPENED → ASSIGNED
Target Milestone: Future → mozilla1.1beta
Priority: -- → P3
Blocks: 87408
Blocks: 112041
Depends on: 140108
No longer blocks: 79518
I like Page Info as it is right now in Moz1.0RC2, but of course I always want
more. Wouldn't it be a good idea just to show the raw HTTP header as it is
received by Mozilla, like View Source but for the header? That way you don't
need to change the UI for every HTTP field someone wants to see. This would be
great for debugging web apps.
Just want to mention that as a workaround, one can find header information in
the cache info, like this:
about:cache-entry?client=HTTP&sb=1&key=http://bugzilla.mozilla.org/
So a keyword bookmark like about:cache-entry?client=HTTP&sb=1&key=%s is handy;
then, when viewing a page, just put the keyword in front of the URL in the URL
field and hit enter to see the cached header information for that page.
I made a bookmarklet with:

javascript:location.href="about:cache-entry?client=HTTP&sb=1&key=" + location.href

Unfortunately, security restrictions are triggered when trying to use this. Copy
to clipboard is broken in the JS Console, so I can't paste the exact error, but
it's something about remote content linking to local content.
*** Bug 157307 has been marked as a duplicate of this bug. ***
*** Bug 157364 has been marked as a duplicate of this bug. ***
Attached patch proof of concept — — Splinter Review
This shows that it should not be too difficult. This specific patch adds a
"dump headers" button which uses dump() to first display all the metadata that
the cache has for this URL, and then the server's response (which therefore gets
written twice).

So whoever implements this can either show all the metadata or only the headers.
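
For anyone who wants to poke at the cached metadata from chrome JS directly, a
rough sketch along these lines might work (this is not the attached patch; the
"response-head" metadata key and the cache-session flags are assumptions about
the cache of that era):

  // Open the HTTP cache entry for a URL and dump the stored response head.
  const Cc = Components.classes, Ci = Components.interfaces;
  var cacheService = Cc["@mozilla.org/network/cache-service;1"]
                       .getService(Ci.nsICacheService);
  var session = cacheService.createSession("HTTP", Ci.nsICache.STORE_ANYWHERE, true);
  // blocking open for simplicity; a real implementation would open asynchronously
  var entry = session.openCacheEntry(url, Ci.nsICache.ACCESS_READ, true);
  dump(entry.getMetaDataElement("response-head") + "\n");  // status line + headers
  entry.close();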
So that's where it stores it. I had wondered about that. Has it always been
there, or is this relatively new? Does it store the request headers as well?

I can probably get to this sometime soon. Working on one other bug first though.
It's been there for quite a while; about:cache has used it since at least April of last year.
As for the request headers, I do not know if/how/where they are stored.
Attached image Ethereal showing HTTP headers —
A button for this would be ideal. At the least, the UI for this should mimic
Ethereal's "Follow TCP Stream" function, shown in the screenshot: request
headers in one color, response in another (if source coloring is off, both
should be black on white). The data really isn't needed, though if the view
source could be appended, that might be neat. (I'm not requesting the rest of
the UI... the EBCDIC/hex/printing stuff.)
Request headers are impossible to get at, to my knowledge (well, maybe if you hack necko).
taking

I will implement this as a new "Server" Tab which contains a textbox displaying
the headers; thereby fixing bug 140108 as well.
Assignee: db48x → cbiesinger
Status: ASSIGNED → NEW
db48x, could you review this patch?
Comment on attachment 94951 [details] [diff] [review]
patch

I like it, r=db48x
Attachment #94951 - Flags: superreview+
Comment on attachment 94951 [details] [diff] [review]
patch

oops, wrong button
Attachment #94951 - Flags: superreview+ → review+
Comment on attachment 94951 [details] [diff] [review]
patch

I mean, thanks for giving me a super-review, but last I checked you were not on
the super-reviewer list :)
Comment on attachment 94951 [details] [diff] [review]
patch

hmm. not sure I'm a big fan of going through the cache to get the headers
(though maybe that's the only way?)

let's cc darin to see if he has any advice here. I'm really looking forward to
seeing http headers!
cc'ing darin to get the HTTP headers from an existing document - I assume the
channel has been closed, but maybe the headers are still hanging around somewhere..

yup, so long as you have a reference to the http channel, you can always QI to
nsIHttpChannel to inspect the request or response headers.  response headers are
available once OnStartRequest has fired.
darin - well, this is likely after the document has finished loading, so we need
to get to the channel for a document that was already loaded - just in case the
previous channel hangs out somewhere. If the channel isn't available, does this
access to the cache look right to you - can you think of any better way to get
to the actual entry for the current document that's loaded? (i.e. for example I
know that session history maintains certain load-specific data.. )
if after a page load you lose your reference to the channel and the underlying
cache entry (see nsICachingChannel::cacheToken), then there is no guaranteed way
to get back to the same cache entry.  remember the cache can evict entries at
any time, on any thread, for any reason... but it cannot evict entries that are
in use (or referenced in the XPCOM sense).

so, the right solution is to somehow capture a reference to the nsIChannel used
to retrieve each frame for which you might want to provide page-info.  perhaps
you could implement nsIWebProgressListener??
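
A minimal sketch of that suggestion, assuming the listener is registered via
nsIWebProgress.addProgressListener (the gDocumentChannel global and the listener
name are made up for illustration, not taken from any attached patch):

  // Keep a reference to the document's HTTP channel so page info can ask it
  // for request/response headers later.
  var gDocumentChannel = null;

  var headerCaptureListener = {
    onStateChange: function(webProgress, request, stateFlags, status) {
      const Ci = Components.interfaces;
      if ((stateFlags & Ci.nsIWebProgressListener.STATE_START) &&
          (stateFlags & Ci.nsIWebProgressListener.STATE_IS_DOCUMENT)) {
        try {
          gDocumentChannel = request.QueryInterface(Ci.nsIHttpChannel);
        } catch (e) {
          gDocumentChannel = null;  // not an HTTP load
        }
      }
    },
    onProgressChange: function() {},
    onLocationChange: function() {},
    onStatusChange: function() {},
    onSecurityChange: function() {}
  };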
so, i looked at the current patch... it'll break in a number of interesting ways.

1) suppose the cache entry doesn't exist; then, as i described earlier, you
won't get any results.

2) suppose the cache entry is in use by another channel; then, you likewise
won't get any results, though in this case the cache entry does exist.  the
cache does not allow anyone to read an entry that is open for writing.

3) suppose the channel decides to use something other than the URL string as a
cache key.  then your code will break.  documents that result from a form
submission do not use the URL as the cache key, for example.
1 and 2 are known, and we're pretty much resigned to our fate there, unless 
there's some way to get the httpchannel that I don't know of (quite possible)

As for 3, pages like that use everything up to the URL-encoded form data as the
key, right? What about POST forms?
3- no, HTTP POST requests use a special key format that really should remain an
implementation detail, but here's an example:

  id=3d594f5a&uri=http://bugzilla.mozilla.org/process_bug.cgi

the "id" prefix is a unique integer value that is incremented each time a form
POST is issued.  the value is initialized at browser startup to the time of day.

as you can see, knowledge of this cache key does not belong in any other part of
the code.

surely there must be some way to hook into docshell to observe the channels used
to load each frame.  then you could grab a reference to each, query the
information needed to construct page-info, and then release the channels. 
perhaps someone who works on docshell would be able to tell you how to properly
hook in to observe the channels.  perhaps all you have to do is implement
nsIWebProgressListener... but i'm not sure if that would really be sufficient
for your needs.
I think session history is worth looking at - the whole reason that the unique
cache entry keys were designed was so that when you hit "back" you got the page
that had loaded, and not some other random cache entry.. i.e. if I post to
http://bugzilla.mozilla.org/enter_bug.cgi in two windows, I want to later be
able to hit back in each window and go back to the post in each one..

Anyway, there is definitely a way to get at least the key from session history
for the current window.
session history only owns the opaque channel-defined secondary cache key.  it's
an nsISupports pointer, which doesn't mean anything to anyone but the channel
used to fetch the same document again.

session history uses the cache key like this:

  channel = NewChannel(saved_uri)
  cachingChannel = channel->QI(nsICachingChannel)
  cachingChannel->SetCacheKey(saved_cache_key)
  channel->AsyncOpen(listener,...)

and then, inside the listener implementation...

listener::OnStartRequest(request, ...)
{
  httpChannel = request->QI(nsIHttpChannel)
  // now you can view the channel's response headers, etc.
}

notice, that this approach assumes that you are actually loading the URL. 
page-info doesn't want to load the URL, so this solution doesn't work.  we'd
need to add some kind of interface on nsICachingChannel that asynchronously
opens the cache entry and notifies you when it is opened.  currently, however,
it doesn't try to open the cache entry until AsyncOpen is called.
so I now have a solution which will probably work for the main document but not
frames:

in nsBrowserStatusHandler somewhere, QI the request to nsIHttpChannel and use
visitRequestHeaders there. store them in a global variable, which page info
accesses through window.arguments or window.opener. 
(also get the contentLength from the nsIChannel if available so that the cache
need not be used)


now.... this will not work for any frames. there's no status handler for them. I
think.
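
A rough sketch of that idea (the captureRequestHeaders hook and the
gPageRequestHeaders global are illustrative names, not from any attached patch):

  // Copy the request headers off the channel into a global that pageInfo.js
  // could later read via window.opener; this only covers loads that go through
  // the status handler, i.e. the main document, not frames.
  var gPageRequestHeaders = [];

  function captureRequestHeaders(request) {
    try {
      var httpChannel = request.QueryInterface(Components.interfaces.nsIHttpChannel);
      gPageRequestHeaders = [];
      httpChannel.visitRequestHeaders({
        // nsIHttpHeaderVisitor
        visitHeader: function(name, value) {
          gPageRequestHeaders.push(name + ": " + value);
        }
      });
    } catch (e) {
      // not an HTTP channel; nothing to capture
    }
  }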
yeah, i don't think you want to be hacking this into the status handler.
For the time being, I'd settle for just about anything.
I don't currently stand a chance of compiling Mozilla from source, but can I
just hand-patch the files in the latest patch and expect something meaningful to
happen (like seeing a "Server" tab in my Page Info window)?  It looks like it's
all stuff that would be read at runtime, but maybe I'm just being naive about
how Mozilla is really built.
Note: I did try what I just described, but didn't see a "Server" tab.  I did
unjar the files in question, and dropped them in what I believe to be the right
directories.
I'm running this on NT.  Any hints?  I just want a Mozilla that'll do this, even
under the limited scenarios that this patch provides.
Thanks for any help.
Yeah, all that stuff is indeed read at runtime.
As for the Server tab, are you sure that you applied the patch to the
pageInfo.xul file? That's the important one. (If you apply that patch, you must
also apply the one to the .dtd file, and the one for the .js is also a good
addition :) The .css is not as important.)
once you've edited the files, you either have to edit installed-chrome.txt so
that mozilla looks for the uncompressed version, or you have to recompress the
files back into the jar.
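For example, the relevant installed-chrome.txt line would change roughly like
this (the jar name and paths here are only illustrative; they depend on where
you unpacked the jar):

  content,install,url,jar:resource:/chrome/comm.jar!/content/navigator/
    becomes
  content,install,url,resource:/chrome/content/navigator/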
So far nothing I've done has had even the slightest effect.
I uncompressed all the jars and changed all the lines in installed-chrome.txt to
refer just to the trailing directory (replaced "jar:resource:.*\.jar!:/" with
"resource:/").  Simple changes to the files don't seem to have any effect (I
changed some of the labels in pageInfo.dtd, but no visible change when I
restarted).  Also tried prefixing all the resource items with "/chrome"... still
no joy.
This probably isn't the right forum for this...  Can someone point me to the
best place (short of the source) to go to understand how this all fits together?
exit mozilla before you edit installed-chrome.txt, then restart it. next, turn
off the xul cache in the preferences (Debug->Networking->Disable XUL Cache)

then just as a test, load up a file from the chrome in the browser window (.js
and .dtd files work best for this, so put chrome://navigator/content/pageInfo.js
in the urlbar.) now go change that file you just loaded in some small way. add a
comment to the first line or something. save it, then go back to the mozilla
window and reload the file. if you see your changes everything is good to go,
otherwise, you've forgotten something somewhere.
please take the tech support to a newsgroup or e-mail :) most of us get enough
bugmail as it is!
I have learned that UI patches are not wanted in this project. reassigning to
default owner.
Assignee: cbiesinger → db48x
Summary: [RFE] view|page info should show HTTP headers → view|page info should show HTTP headers
*** Bug 187355 has been marked as a duplicate of this bug. ***
livehttpheaders at mozdev does this.  Why not contact them and see about
integrating it into Mozilla by default?
*** Bug 189438 has been marked as a duplicate of this bug. ***
*** Bug 196370 has been marked as a duplicate of this bug. ***
It looks like this is in now?  Noticed in recent nightly builds.
philipp: it's not in mozilla builds.... maybe you installed the livehttpheaders
extension... (http://livehttpheaders.mozdev.org)
Oops!  You're right.  I didn't realize it would also add that tab into the page
info window.
It's great.  I'd love to have it integrated, but I can certainly live with
reinstalling it when I need it.
Since an extension exists for this, and the plan for Firebird is to have a
simple browser with lots of extensions, does anybody think this bug is relevant
anymore?

Me =) I still can't imagine spending a day installing tons of FB extensions just
to get Mozilla's capabilities in Firebird. Furthermore, Page Info is IMHO
included in Firebird by default (correct me if I'm wrong; the last version I saw
was FB 0.4), and the patch looks pretty small.

cbiesinger: according to comment #53, the Mozilla project is changing - how
about a second try at this bug?
Target Milestone: mozilla1.1beta → ---
Not having to fire up Ethereal every time I want to inspect HTTP headers would
be a huge boon for software developers.
This is a bug in the Browser product, not in Firebird, so yes, I do see it as
still relevant.
I won't be using the patch currently attached to this bug, because the approach
livehttpheaders takes is much better: it doesn't depend on the page being cached
in order to get the headers. On the other hand, livehttpheaders uses a tree to
show the results, and I think I'd rather use an HTML table or something, styled
to look the same as the trees currently in Page Info. It doesn't need everything
a tree provides.
Yes, a simple two-column table to represent the name/value HTTP header pairs
makes good sense.
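
A rough sketch of how such a table could be filled from the channel's response
headers (populateHeaderTable and the tableBody parameter are made-up names, not
taken from any attached patch):

  // Append one two-cell row per response header to an html:tbody in pageInfo.xul.
  const XHTML_NS = "http://www.w3.org/1999/xhtml";

  function populateHeaderTable(httpChannel, tableBody) {
    httpChannel.visitResponseHeaders({
      visitHeader: function(name, value) {
        var row = document.createElementNS(XHTML_NS, "tr");
        var nameCell = document.createElementNS(XHTML_NS, "td");
        var valueCell = document.createElementNS(XHTML_NS, "td");
        nameCell.appendChild(document.createTextNode(name));
        valueCell.appendChild(document.createTextNode(value));
        row.appendChild(nameCell);
        row.appendChild(valueCell);
        tableBody.appendChild(row);
      }
    });
  }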
Bug 241656 requests showing the Content-Location header in page info for 1.7,
because 1.7 adds support for the Content-Location header (as a base for relative
URLs), which mysteriously breaks sites that provide a wrong location (e.g., an
internal one), and many sites do (bug 238654 estimates maybe even 10%). So
showing Content-Location could help users diagnose problems and avoid more bug
reports blaming mozilla1.7.
Product: Browser → Seamonkey
This is implemented in Firefox. How difficult would it be to copy the XUL from
there? (Or is there more involved?)
Ok, so some extensions are actually quite neat. 'Live HTTP headers' does what
this bug requests and more. Perhaps we could ask the author if he would like to
donate some code for page info?

Comment #68 was incorrect, by the way.
(In reply to comment #20)
> So a keyword bookmark like about:cache-entry?client=HTTP&sb=1&key=%s is handy,
It is very handy; however, this method cannot be used with a trunk build.

I opened bug 300188 ("cannot use about:cache with Bookmark Keywords")
*** Bug 330924 has been marked as a duplicate of this bug. ***
If it's so simple, why don't the developers add this feature to FF and SeaMonkey?
QA Contact: pmac
This is extension fodder e.g. <http://xsidebar.mozdev.org/modified.html#livehttpheaders>
Closing as WONTFIX.
Status: NEW → RESOLVED
Closed: 23 years ago → 15 years ago
Resolution: --- → WONTFIX
Just FYI, LiveHTTPHeaders doesn't provide this functionality anymore. In 0.14
the Headers tab was removed from Page Info altogether, because it hadn't worked
since (I think) the FF3 days. You may want to fix the description of
LiveHTTPHeaders on your site accordingly.

So there is no extension for SeaMonkey that shows the HTTP headers of the
current page in Page Info.

I vote for REOPENing this bug.
The Headers tab is displayed and works for me in my 0.14.