1. Open above URL
2. Empty page :-(

It's likely something specific to this add-on, since it doesn't happen with the few other add-ons that I tried.
(In reply to comment #0)
> 1. Open above URL
> 2. Empty page :-(
>
> It's likely something specific to this add-on, since it doesn't happen with
> the few other add-ons that I tried.

For me, both "compare with public version" and "view contents" lead to an empty page. Compare with public version uses a lot of the same code as view contents, so this could be a duplicate of bug 450045.

Basil: Do you recall any of the other add-ons that also caused this to happen? If both links produce blank pages, then I can wager that this is a dup.
Assignee: nobody → cdolivei.bugzilla
Status: NEW → ASSIGNED
Created attachment 342677 [details] [diff] [review] minor mistake

So, an update. While bug 450045 is closely related, it's not a dup. The reason XSL Results leaves a blank page is that it was hitting a memory limit. Bug 450045 involves more than just a memory limit; it also involves an execution limit (or so it seems). I attached a patch in bug 450045 that fixed this in view contents, but not in compare with public version (doh!). It is a minor bug, which I am correcting in this patch.
Attachment #342677 - Flags: review?(fwenzel)
Comment on attachment 342677 [details] [diff] [review] minor mistake yup, that should work.
Attachment #342677 - Flags: review?(fwenzel) → review+
Created attachment 353939 [details] [diff] [review] wip

This is a work in progress, but it builds on attachment 342677. This patch unfortunately does not solve all my problems, mainly large binary files that aren't libraries/xpt files. A real-world example is "Suomen kielen oikoluku 0.11.0" with its .pro, .sym_l, and .lex_l files.

So I guess the only sure way to fix this is to increase PHP's memory limit. If we want to avoid that, then I think this patch is the next best thing. But if anyone has any better ideas, I'd like to hear them.
Attachment #342677 - Attachment is obsolete: true
I haven't completely forgotten about this. Another experiment I want to try is to use a whitelist of file extensions rather than a blacklist.
Personally, I'd be happy if an md5/whatever checksum just confirmed whether two files were the same or different, in the case of them being too large. Not sure if this is applicable to how the system works, or if I'm ignorantly making an obvious statement, but I thought I'd make the suggestion.
(In reply to comment #7)
> Not sure if this is applicable to how the system works, or if I'm ignorantly
> making an obvious statement, but I thought I'd make the suggestion.

For binary files, that is definitely a good suggestion.
(In reply to comment #7)
> Personally, I'd be happy if an md5/whatever checksum just confirmed whether
> two files were the same or different, in the case of them being too large.

This is what we currently try to do anyway. But instead of extracting to disk and doing md5 comparisons on files, we extract all files as strings and do sha1 comparisons. When we approach add-ons with large binary files, we start hitting memory issues.
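The memory issue described above goes away if large archive members are hashed in bounded-size chunks instead of being extracted as whole strings first. Here is a short Python sketch of that idea (illustrative only — the actual AMO code is PHP, and the names `stream_sha1` and `files_differ` are hypothetical, not from the patch):

```python
import hashlib
import zipfile


def stream_sha1(zf: zipfile.ZipFile, name: str, chunk_size: int = 64 * 1024) -> str:
    """Hash one archive member in fixed-size chunks, so memory use
    stays bounded no matter how large the member is."""
    h = hashlib.sha1()
    with zf.open(name) as fh:
        while True:
            chunk = fh.read(chunk_size)
            if not chunk:
                break
            h.update(chunk)
    return h.hexdigest()


def files_differ(old_xpi: str, new_xpi: str, name: str) -> bool:
    """Compare one member across two XPIs by digest only, never
    holding either file's full contents in memory."""
    with zipfile.ZipFile(old_xpi) as a, zipfile.ZipFile(new_xpi) as b:
        return stream_sha1(a, name) != stream_sha1(b, name)
```

Since only digests are compared, two large binaries that are identical can be skipped in the diff view without ever materializing their contents.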
Created attachment 372125 [details] [diff] [review] more wip

This is a patch that I think works very well. We basically extract files we know are text files, and extract all unknown files individually (a whitelist of files, rather than the blacklist I tried before). This seems to be rather successful on:

https://preview.addons.mozilla.org/en-US/editors/review/56985
https://preview.addons.mozilla.org/en-US/editors/review/56007

at least on my local box. I'm not asking for review yet because this may not be the final patch.
Attachment #353939 - Attachment is obsolete: true
Cesar, if you have time, do you want to update this patch to work with the latest version? Then we could put it into AMO 5.4.
Priority: -- → P3
Woah, I'm still assigned to this? I think I put this on hold until bug 474234 was done. Which it now is!
Created attachment 419777 [details] [diff] [review] final patch

I think I listed some add-ons where this patch works in comment 10, though only the second link works now. This patch works by first extracting only known text files (i.e. js, xul, rdf, ...) into memory and computing their sha1 hashes, then going through all unknown files and extracting them individually. Doing this seems to avoid hitting PHP's memory limit. The reason we don't go through every file individually is that it is very slow.
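The two-phase approach described above can be sketched roughly as follows (Python for illustration only — the real patch is PHP in the AMO editors component, and the `TEXT_EXTENSIONS` whitelist here is a hypothetical stand-in, not the list from the patch):

```python
import hashlib
import zipfile

# Hypothetical whitelist of extensions known to be small text files.
TEXT_EXTENSIONS = {".js", ".xul", ".rdf", ".css", ".dtd", ".properties", ".xml", ".html"}


def hash_members(xpi_path: str) -> dict:
    """Return {member name: sha1 hex digest} for every file in the archive.

    Phase 1: known text files are read whole, which is fast.
    Phase 2: anything else (potentially a large binary) is hashed in
    chunks, so only a bounded buffer is ever held in memory at once."""
    digests = {}
    with zipfile.ZipFile(xpi_path) as zf:
        for name in zf.namelist():
            if name.endswith("/"):
                continue  # skip directory entries
            ext = "." + name.rsplit(".", 1)[-1].lower() if "." in name else ""
            if ext in TEXT_EXTENSIONS:
                # Small text file: read in one pass.
                digests[name] = hashlib.sha1(zf.read(name)).hexdigest()
            else:
                # Unknown (possibly large binary) file: hash in chunks.
                h = hashlib.sha1()
                with zf.open(name) as fh:
                    for chunk in iter(lambda: fh.read(65536), b""):
                        h.update(chunk)
                digests[name] = h.hexdigest()
    return digests
```

The design trade-off matches the comment above: chunked hashing of every file would be safest for memory but is slow, so the bulk read is kept for the common case of small text files.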
Comment on attachment 419777 [details] [diff] [review] final patch

Works well for me, with a few side notes:
- I can't attest to it using less memory than the old version, but I follow your reasoning and trust your judgment.
- "computeArchieveHash" has an extra e in "Archive".
- +// vim:tabstop=4:shiftwidth=4:expandtab: -- you probably want to remove that and put those settings into your vimrc? While that line won't hurt, it probably also doesn't help a lot in only one file out of many.

PS: I am confident that, since our constant battle not to hit the memory limit will go away with Zamboni (= AMO in Python), we can simplify this component a bit again when we move it over. Yay!
Attachment #419777 - Flags: review?(fwenzel) → review+
Thanks, Fred. I took out the vim reference, since you're right about consistency (but I love my 8-character tabs! so it isn't going in my vimrc). Thanks for catching the typo. It's all in r58938.
Status: ASSIGNED → RESOLVED
Last Resolved: 9 years ago
Resolution: --- → FIXED
Assignee: a.sacred.line+bugzilla → fwenzel
computeArchiveHash was, for some reason, a global function defined inside another function, which threw errors in the syslog about it being redefined multiple times. Weird. I moved it out as a private function in the same controller: r59034.
Verified FIXED on https://preview.addons.mozilla.org/en-US/firefox/files/diff/39828/, as best I understand it.
Status: RESOLVED → VERIFIED
Product: addons.mozilla.org → addons.mozilla.org Graveyard