Closed Bug 534079 Opened 15 years ago Closed 14 years ago

Sub chunks are incorrectly applied in some cases

Categories

(Toolkit :: Safe Browsing, defect)

Type: defect
Priority: Not set
Severity: critical

Tracking

Status: RESOLVED FIXED
Target Milestone: Firefox 3.7a2
Tracking Status
blocking2.0 --- beta1+
blocking1.9.2 --- needed
status1.9.2 --- .2-fixed
blocking1.9.1 --- needed
status1.9.1 --- .9-fixed

People

(Reporter: bryner+phishing, Assigned: ddahl)

Details

(Keywords: verified1.9.0.19, verified1.9.1, verified1.9.2, Whiteboard: [3.6.x])

Attachments

(2 files, 4 obsolete files)

We think we've discovered a fairly serious problem in Firefox's handling of sub chunks for the phishing and malware lists.

If a sub chunk contains consecutive entries that have the same host key, but reference different add chunks, then it appears that the second entry is not subbed from the database.  I think the problem specifically has to do with this code in nsUrlClassifierDBServiceWorker::SubChunk():

    if (i == 0 || lastKey != thisEntry.mKey) {
      existingEntries.Clear();
      lastKey = thisEntry.mKey;

      if (haveAdds) {
        rv = mMainStore.ReadAddEntries(thisEntry.mKey, thisEntry.mTableId,
                                       thisEntry.mAddChunkId, existingEntries);
        NS_ENSURE_SUCCESS(rv, rv);
      }
    }

If I'm following this correctly, the code is intended to avoid reloading the add entries from the database if the host key is the same as it was for the previous entry.  However, the add chunk number (thisEntry.mAddChunkId) could differ between the two, and as the code behaves currently, I think the result is that it doesn't load the correct entries into existingEntries and so doesn't sub the expression from the database.

This is particularly bad in the case where the user has a full hash cached for the subbed entry.  In this case, Firefox will continue to block the entry until the add chunk is deleted (which could take several weeks), since it's allowed to reuse the gethash result as long as the list has been updated recently.  We've received reports from some users who are continuing to be blocked after we have removed sites from our list, and this bug is likely the cause (see also bug 522033).

David, would you be able to take a look at this?  We'd really like to get a fix out as soon as possible, i.e. in the next 3.5 and 3.0 security releases.

We'll also need to come up with some strategy to fix this problem for users who already have bad data cached.  Perhaps the easiest approach would be to have the clients clear their database when they update, or at least clear their sub list so that they'll be able to re-apply any sub entries they missed the first time around.  Alternately, we could try to delete the affected add chunks from the server, but I think this would be somewhat complicated, and would force all of our clients to redownload the data.
Flags: blocking1.9.0.16?
Flags: blocking-firefox3.6?
I should be able to take a look in the next week or two.
Thanks for the report, bryner. I've adjusted the flags for the next security release (3.0.16 and 3.5.6 are about to ship, and I don't think we should hold them for this). Although I've marked this as not blocking 3.6, I'd take a tested and reviewed patch if it makes it in; if not, we'll get it into the first 3.6 security update.
blocking2.0: --- → alpha1
Flags: blocking1.9.0.17?
Flags: blocking1.9.0.16?
Flags: blocking1.9.0.16-
Flags: blocking-firefox3.6?
Flags: blocking-firefox3.6-
Whiteboard: [3.6.x]
blocking1.9.1: ? → .7+
Flags: wanted1.9.0.x+
Flags: blocking1.9.0.17?
Flags: blocking1.9.0.17+
I know folks have probably been out for the holidays; just wondering whether you've had a chance to take a look at this yet?
Just back from holidays and no, not yet. Soon.
blocking1.9.1: .8+ → needed
Some initial notes:

if (i == 0 || lastKey != thisEntry.mKey)

seems like a pretty naive check, then...

So we end up with entries that never have .Clear() called on them, i.e. existingEntries.Clear() is skipped?

Clear() doesn't appear to affect the database at all:

void
nsUrlClassifierEntry::Clear()
{
  mId = -1;
  mHavePartial = PR_FALSE;
  mHaveComplete = PR_FALSE;
}

Maybe later in the execution that information is used to delete chunks from the database?
Actually, it looks like the code inside if (i == 0 || lastKey != thisEntry.mKey) {

has nothing to do with what is queued up for subtraction:

Later on, the method checks whether we have the corresponding add chunk, and only if we don't is the sub entry written to mPendingSubStore:


    if (!haveAdds) {
      // Save this entry in the pending subtraction store.
      rv = mPendingSubStore.WriteEntry(thisEntry);
      NS_ENSURE_SUCCESS(rv, rv);
    }

I assume mPendingSubStore is then processed later. Back to gdb for more info.
Hi David,

My understanding of the SubChunk() function is that it works like this:

- For each entry in the sub chunk, read the add entries from the database for the add chunk and host key that it refers to (into existingEntries).

- Search through those add entries for one matching the sub entry.  If the entry to be subbed is found, delete it from the database (mMainStore.DeleteEntry).

- If the add chunk number is not present in the database, the sub is saved in mPendingSubStore so that it can be applied when the corresponding add chunk is received.

As an optimization, if the host key for the current sub entry is the same as for the previous entry, the add entries are reused (that is, it avoids clearing and reloading existingEntries from the database).  However, this fails to take into account the case where the host key is the same as the previous entry but the add chunk numbers they refer to are different.  This means that the sub entry will not match anything in existingEntries, so the add entry will remain in the database and may continue to block page views.

As a first step, it would be great if there was a test to exercise this case -- maybe one could be added to test_addsub.js?
Ok, here is a test that I wrote; I thought it would pass if this bug manifested itself. No dice: the sub is always subbed out.

function testSubsDifferentChunksSameHostId() {
  var subUrls1 = [ "1:foo.com/a" ];
  var subUrls2 = [ "1:foo.com/b" ];
  var subUrls3 = [ "2:foo.com/c" ];

  var addUrls = [ "foo.com/a", "foo.com/b" ];
  var addUrls2 = [ "foo.com/c" ];

  var addUpdate = buildPhishingUpdate(
    [{ "chunkNum" : 1,
       "urls" : addUrls }]);
  var addUpdate2 = buildPhishingUpdate(
    [{ "chunkNum" : 2,
       "urls" : addUrls2 }]);

  var subUpdate1 = buildPhishingUpdate(
    [{ "chunkNum" : 1,
       "chunkType" : "s",
       "urls": subUrls1 }]);
  var subUpdate2 = buildPhishingUpdate(
    [{ "chunkNum" : 2,
       "chunkType" : "s",
       "urls" : subUrls2 }]);
  var subUpdate3 = buildPhishingUpdate(
    [{ "chunkNum" : 3,
       "chunkType" : "s",
       "urls" : subUrls3 }]);


  var assertions = {
    "tableData" : "test-phish-simple;a:1:s:1-2",
    "urlsExist" : [ "foo.com/c" ],
    "urlsDontExist" : [ "foo.com/a", "foo.com/b" ],
    "subsDontExist" : [ "foo.com/a", "foo.com/b", "foo.com/c" ]
  };

  doTest([subUpdate1, subUpdate2, addUpdate], assertions);
}


"urlsExist" : [ "foo.com/c" ], <-- always fails, as in it is subbed each time.
Not going to make the FF3.0.18 release; we'll get it in when there's something to get in.
Flags: blocking1.9.0.18+ → blocking1.9.0.19?
To trigger the bug, the expressions need to appear consecutively within the same sub chunk.  So, in this test, you could combine subUrls3 into subUrls2, and just have 2 sub chunks.  I'd expect that foo.com/c will still exist in the database if you do that.

I'm also not sure I understand your tableData assertions -- shouldn't the client have all of the add chunks and all of the sub chunks when the test finishes?
(In reply to comment #10)
> To trigger the bug, the expressions need to appear consecutively within the
> same sub chunk.  So, in this test, you could combine subUrls3 into subUrls2,
> and just have 2 sub chunks.  I'd expect that foo.com/c will still exist in the
> database if you do that.
> 
Ok, I tried that but the test still fails...

> I'm also not sure I understand your tableData assertions -- shouldn't the
> client have all of the add chunks and all of the sub chunks when the test
> finishes?

I do not know. I thought the sub chunks would be deleted. Funnily enough, it's not at all clear from the test suite how url-classifier actually works. I have been commenting those assertions out, and the test fails anyway.

Here is the output:

TEST-UNEXPECTED-FAIL | /home/ddahl/code/moz/mozilla-central/obj-i686-pc-linux-gnu-debug/_tests/xpcshell/test_url-classifier/unit/head_urlclassifier.js | test-phish-simple ==  - See following stack:
JS frame :: /home/ddahl/code/moz/mozilla-central/mozilla/testing/xpcshell/head.js :: do_throw :: line 202
JS frame :: /home/ddahl/code/moz/mozilla-central/mozilla/testing/xpcshell/head.js :: do_check_eq :: line 232
JS frame :: /home/ddahl/code/moz/mozilla-central/obj-i686-pc-linux-gnu-debug/_tests/xpcshell/test_url-classifier/unit/head_urlclassifier.js :: anonymous :: line 191

Line 191 is inside "gAssertions.checkUrls":

checkUrls: function(urls, expected, cb)
{
  // work with a copy of the list.
  urls = urls.slice(0);
  var doLookup = function() {
    if (urls.length > 0) {
      var fragment = urls.shift();
      dumpn("*** frag: " + fragment);
      dbservice.lookup("http://" + fragment,
                       function(arg) {
                         do_check_eq(expected, arg);
                         doLookup();
                       }, true);
    } else {
      cb();
    }
  }
  doLookup();
},
Thanks to some help from Brian, I have a test that demonstrates the bug:

function testSubsDifferentChunksSameHostId() {
  var subUrls1 = [ "1:foo.com/a" ];
  var subUrls2 = [ "1:foo.com/b", "2:foo.com/c" ];

  var addUrls = [ "foo.com/a", "foo.com/b" ];
  var addUrls2 = [ "foo.com/c" ];

  var subUpdate1 = buildPhishingUpdate(
    [{ "chunkNum" : 1,
       "chunkType" : "s",
       "urls": subUrls1 }]);
  var subUpdate2 = buildPhishingUpdate(
    [{ "chunkNum" : 2,
       "chunkType" : "s",
       "urls" : subUrls2 }]);

  var addUpdate = buildPhishingUpdate(
    [{ "chunkNum" : 1,
       "urls" : addUrls }]);
  var addUpdate2 = buildPhishingUpdate(
    [{ "chunkNum" : 2,
       "urls" : addUrls2 }]);


  var assertions = {
    "tableData" : "test-phish-simple;a:1-2:s:1-2",
    "urlsExist" : [ "foo.com/c" ],
  };

  doTest([addUpdate, addUpdate2, subUpdate1, subUpdate2], assertions);
}

"urlsExist" assertion should not pass.
Based on email conversation with ddahl last week, bumping this from an alpha1 blocker to a beta1 blocker.  It's hard to justify blocking an alpha on something that we're shipping in a major release.

That's not to say it isn't worth fixing sooner rather than later.
blocking2.0: alpha1 → beta1
I switched the test so that it fails by using the "urlsDontExist" assertion instead, but that failure seems hardcoded:

urlsDontExist: function(urls, cb)
{
  this.checkUrls(urls, '', cb); <-- this always fails since it is hardcoded to compare the second arg ('')  to "test-phish-simple"
},

I'm not sure whether this means the url exists whenever the lookup returns "test-phish-simple", regardless of which url was actually checked? It seems like a roundabout way to confirm the url is present.
I think this should work ok.  checkUrls calls UrlClassifierDBService::Lookup on each url, which returns a comma-separated list of tables that the url matched.  So if the url is not in the database at all, the expected return value is an empty string.  If you're seeing this return "test-phish-simple" instead, that would indicate that the url _is_ still in the database.
Looks like my obj dir was busted; I can make the test pass now if I remove the if statement:

if (i == 0 || lastKey != thisEntry.mKey)

Which then does a mMainStore.ReadAddEntries() unconditionally.

I'm wondering whether another test is in order to determine just how much less efficient this method would be without the optimization.
That's certainly the simplest fix, so it sounds good to me if the performance impact isn't too bad.  A slightly more complicated fix would be to keep track of the last-seen add chunk number, and check that in addition to lastKey. I pulled some stats from the current data to get an idea of how many extra calls to ReadAddEntries would happen if you just remove the optimization.

Out of 819,404 expressions in currently-live sub chunks (for the phishing and malware lists), 23,623 had the same host key and add chunk number as the previous expression in the chunk.  So, there would be an extra call to ReadAddEntries for only about 2.9% of the total sub expressions, which seems fairly reasonable.

Of course, keep in mind this is just the current state of the list, and could change over time.
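For reference, a minimal sketch of that more involved fix, keeping the optimization but also tracking the last-seen add chunk id, might look roughly like this (lastAddChunkId is an illustrative name, not something in the current source, and this is not the patch that was ultimately written):

    if (i == 0 || lastKey != thisEntry.mKey ||
        lastAddChunkId != thisEntry.mAddChunkId) {
      // Reload the add entries whenever either the host key or the add
      // chunk id differs from the previous sub entry.
      existingEntries.Clear();
      lastKey = thisEntry.mKey;
      lastAddChunkId = thisEntry.mAddChunkId;

      if (haveAdds) {
        rv = mMainStore.ReadAddEntries(thisEntry.mKey, thisEntry.mTableId,
                                       thisEntry.mAddChunkId, existingEntries);
        NS_ENSURE_SUCCESS(rv, rv);
      }
    }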
Ok, I'm going to attach this simple patch to see what Tony thinks.
Yes, let's get the simple fix in ASAP. Please file a follow-up for implementing Brian's suggestion in comment #17.
Attached patch v 0.1 Patch (obsolete) — Splinter Review
Please ignore my extra dumpn() calls in the tests, i will remove them later. What do you think about this patch removing the optimization?
Attachment #425835 - Flags: review?
Attachment #425835 - Flags: review? → review?(tony)
Comment on attachment 425835 [details] [diff] [review]
v 0.1 Patch

Tony can review as well; I just had a few quick comments.

>--- a/toolkit/components/url-classifier/src/nsUrlClassifierDBService.cpp
>+++ b/toolkit/components/url-classifier/src/nsUrlClassifierDBService.cpp
>@@ -2646,25 +2646,23 @@ nsUrlClassifierDBServiceWorker::SubChunk
>     nsUrlClassifierEntry& thisEntry = entries[i];
> 
>     HandlePendingLookups();
> 
>     // Check if we have the add chunk associated with the sub.
>     PRBool haveAdds = (mCachedAddChunks.BinaryIndexOf(thisEntry.mAddChunkId) !=
>                        mCachedAddChunks.NoIndex);
> 
>-    if (i == 0 || lastKey != thisEntry.mKey) {
>-      existingEntries.Clear();
>-      lastKey = thisEntry.mKey;
>-
>-      if (haveAdds) {
>-        rv = mMainStore.ReadAddEntries(thisEntry.mKey, thisEntry.mTableId,
>-                                       thisEntry.mAddChunkId, existingEntries);
>-        NS_ENSURE_SUCCESS(rv, rv);
>-      }
>+    existingEntries.Clear();

You should be able to move the definition of existingEntries inside the for loop now, since it's going to be repopulated every time.  Then there will also be no need to call .Clear().

>+    lastKey = thisEntry.mKey;

You should be able to get rid of lastKey now.

Can you also bump the IMPLEMENTATION_VERSION define on line 134?  That way the database will be removed on upgrade and users will no longer have sites stuck in their SafeBrowsing list.
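For illustration, with both suggestions applied the start of the loop body might look roughly like this; a sketch based on the snippet quoted above, not the landed patch, and the container type for existingEntries is assumed:

    nsUrlClassifierEntry& thisEntry = entries[i];

    HandlePendingLookups();

    // Check if we have the add chunk associated with the sub.
    PRBool haveAdds = (mCachedAddChunks.BinaryIndexOf(thisEntry.mAddChunkId) !=
                       mCachedAddChunks.NoIndex);

    // existingEntries is declared inside the loop and repopulated for every
    // sub entry, so neither Clear() nor lastKey tracking is needed.
    nsTArray<nsUrlClassifierEntry> existingEntries;
    if (haveAdds) {
      rv = mMainStore.ReadAddEntries(thisEntry.mKey, thisEntry.mTableId,
                                     thisEntry.mAddChunkId, existingEntries);
      NS_ENSURE_SUCCESS(rv, rv);
    }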
Attached patch v 0.2 Simple Patch - Trunk (obsolete) — Splinter Review
Attachment #425835 - Attachment is obsolete: true
Attachment #425860 - Flags: review?
Attachment #425835 - Flags: review?(tony)
Attachment #425860 - Flags: review? → review?(bryner+phishing)
Comment on attachment 425860 [details] [diff] [review]
v 0.2 Simple Patch - Trunk

>diff --git a/toolkit/components/url-classifier/tests/unit/test_addsub.js b/toolkit/components/url-classifier/tests/unit/test_addsub.js
>--- a/toolkit/components/url-classifier/tests/unit/test_addsub.js
>+++ b/toolkit/components/url-classifier/tests/unit/test_addsub.js
>@@ -282,16 +282,48 @@ function testSubsDifferentChunks() {
>     "urlsExist" : [ "foo.com/c" ],
>     "urlsDontExist" : [ "foo.com/a", "foo.com/b" ],
>     "subsDontExist" : [ "foo.com/a", "foo.com/b" ]
>   };
> 
>   doTest([subUpdate1, subUpdate2, addUpdate], assertions);
> }
> 
>+// for bug 534079
>+function testSubsDifferentChunksSameHostId() {
>+  var subUrls1 = [ "1:foo.com/a" ];
>+  var subUrls2 = [ "1:foo.com/b", "2:foo.com/c" ];

Note that to reproduce the bug, it shouldn't be necessary to have 2 sub chunks, or to sub all 3 expressions -- the interesting part is just that subUrls2 is processed correctly.  But either way is fine with me.
Attachment #425860 - Flags: review?(bryner+phishing) → review+
Good to know, thanks for guiding me on this. Now I know who to ask for all my url-classifier questions.
Comment on attachment 425860 [details] [diff] [review]
v 0.2 Simple Patch - Trunk

Just updating the patch description to mark it as the trunk patch; working on the branch patches now.
Attachment #425860 - Attachment description: v 0.2 Simple Patch → v 0.2 Simple Patch - Trunk
This patch applied cleanly and all tests pass.
Attachment #425905 - Flags: review+
Just getting all of the branch patches together. Running the full suite of url-classifier tests gives me this error, with a single test failure:

http://pastebin.mozilla.org/701957

pldhash: for the table at address 0x8b53758, the given entrySize of 52 probably favors chaining over double hashing.
uncaught exception: [Exception... "Component returned failure code: 0x80570018 (NS_ERROR_XPC_BAD_IID) [nsIJSCID.getService]"  nsresult: "0x80570018 (NS_ERROR_XPC_BAD_IID)"  location: "JS frame :: /home/ddahl/code/moz/mozilla-central/obj-i686-pc-linux-gnu-debug/_tests/xpcshell/test_url-classifier/unit/test_hashcompleter.js :: <TOP_LEVEL> :: line 45"  data: no]
Line 45, which is:

 var listManager = Cc["@mozilla.org/url-classifier/listmanager;1"].
                  getService(Ci.nsIUrlClassifierListManager);

Not sure which XPCOM bits blew up here.
Sorry for the noise (comment 28); this is related to some tests that were added to my repo outside of my queues. Please ignore.
Attachment #425860 - Attachment is obsolete: true
Attachment #426051 - Flags: review?(tony)
Comment on attachment 425905 [details] [diff] [review]
v 0.2 Simple Patch - mozilla-1.9.1 branch

Marking obsolete; all branch patches are ready, but I'm only going to ask for checkin on trunk so it can bake.
Attachment #425905 - Attachment is obsolete: true
Comment on attachment 426051 [details] [diff] [review]
v 0.2 Simple Patch for Trunk, 1.9.1, 1.9.2

LGTM
Attachment #426051 - Flags: review?(tony) → review+
Keywords: checkin-needed
If we end up shipping a 3.0.19 I'd take this patch for it, but it wouldn't cause us to spin a 3.0.19 on its own.

Can we get the patch nominated for 1.9.1 and 1.9.2 landing, though?
blocking1.9.2: --- → needed
Flags: blocking1.9.0.19? → blocking1.9.0.19-
Yep, I have patches for both; just waiting 24 hours for trunk baking. I will attach those patches this afternoon.
Comment on attachment 426051 [details] [diff] [review]
v 0.2 Simple Patch for Trunk, 1.9.1, 1.9.2

The attached patch applies cleanly and builds on both branches as well as trunk.
Attachment #426051 - Attachment description: v 0.2 Simple Patch for Trunk → v 0.2 Simple Patch for Trunk, 1.9.1, 1.9.2
Any updates on landing this on the relevant branches?
I'll have to ping someone to check it in.
(In reply to comment #38)
> I'll have to ping someone to check it in.

No, you need to request approval (and be granted such) first.

Also, resolved on trunk --> RESOLVED FIXED.
Status: NEW → RESOLVED
Closed: 14 years ago
Keywords: checkin-needed
Resolution: --- → FIXED
Target Milestone: --- → Firefox 3.7a2
(In reply to comment #39)
> (In reply to comment #38)
> > I'll have to ping someone to check it in.
> 
> No, you need to request approval (and be granted such) first.
> 
I guess I do not understand the flags on this bug, then:

"dveditz:  	 wanted1.9.0.x"

etc...
(In reply to comment #40)
> I guess I do not understand the flags on this bug then
> 
> "dveditz:       wanted1.9.0.x"

That just means that the branch drivers want this fix on the 1.9.0 branch. It doesn't give you permission to actually land a patch on that branch, though. You need to request the appropriate branch approval flags on the attachment itself.

See this thread for some more information. It's a bit old, but everything in it still applies:
http://groups.google.com/group/mozilla.dev.planning/browse_thread/thread/9b31b7523bd8cc48/5a289dbac727fead
Comment on attachment 426051 [details] [diff] [review]
v 0.2 Simple Patch for Trunk, 1.9.1, 1.9.2

Asking for approval on branches.
Attachment #426051 - Flags: approval1.9.2.2?
Attachment #426051 - Flags: approval1.9.0.19?
Attachment #426051 - Flags: approval1.9.0.19? → approval1.9.1.9?
Comment on attachment 426051 [details] [diff] [review]
v 0.2 Simple Patch for Trunk, 1.9.1, 1.9.2

a=beltzner, assuming this applies cleanly on both branches

Thanks for including tests.
Attachment #426051 - Flags: approval1.9.2.2?
Attachment #426051 - Flags: approval1.9.2.2+
Attachment #426051 - Flags: approval1.9.1.9?
Attachment #426051 - Flags: approval1.9.1.9+
Keywords: checkin-needed
Can we also get this applied to the 3.0 branch? I understand you don't necessarily want to put out an update to 3.0 just for this, but assuming something else comes up that causes a patch to be released (e.g. pwn2own), it would be great if this could piggyback on that.
This still needs to land on the 1.9.1 branch, correct?  Can someone please check it in there?
Comment on attachment 426051 [details] [diff] [review]
v 0.2 Simple Patch for Trunk, 1.9.1, 1.9.2

Requesting approval for the Firefox 3.0.19 release as well -- see Ian's comment #44.
Attachment #426051 - Flags: approval1.9.0.19?
Comment on attachment 426051 [details] [diff] [review]
v 0.2 Simple Patch for Trunk, 1.9.1, 1.9.2

I assume that this patch needs to be applied to the Lorentz branch as well? (Unless there will be a merge from 1.9.2.)
Comment on attachment 426051 [details] [diff] [review]
v 0.2 Simple Patch for Trunk, 1.9.1, 1.9.2

a1.9.0.19=beltzner, assuming that this applies correctly (be sure to test that first!)

David: no need to land on Lorentz, they'll sync from 1.9.2 tip.
Attachment #426051 - Flags: approval1.9.0.19? → approval1.9.0.19+
We determined that this does not apply cleanly to the 1.9.0 branch.  The problem is that the branch got a different version of the patch for bug 459281, which kept the implementation version at 5.  Bumping the version to 6 on the 1.9.0 branch seems like a bad idea since this would be different from the version 6 that was on the newer branches.  We wouldn't want to bump it to version 7 on 1.9.0 for the same reason.

I see a few options:
 - bump the 1.9.0 implementation version to some large number that will not be reached on the trunk.
 - bump the 1.9.0 version to 8 and the 1.9.1/1.9.2/trunk version to 9.
 - apply the full patch for bug 459281 to the 1.9.0 branch to bring it in line with the newer branches, and sync the implementation versions at 7.

Since we need to clear the database anyway for this change, and that was the main objection to the trunk version of 459281, it might be easiest to just sync the branch with that change.  However, if something less risky is desired, we could do a one-off implementation version.

Thoughts?
Just for reference, the scaled-back version of bug 459281 that was applied previously for 1.9.0 is on bug 461891.
Keywords: checkin-needed
I'm not sure how many more updates will be created for 1.9.0.x, so the first option seems easy and fast. Does anyone else have an opinion?
That sounds fine too... it looks like the version is a 32-bit integer, so there is plenty of space.  Maybe a convention like (190 << 16) + 1 would make sense?  

It would probably be good to ping whoever owns the sql code to make sure this won't get us into trouble.  I'm not sure who that is currently.
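For concreteness, that branch-only convention might look like the following; purely illustrative, since this scheme is not what was ultimately chosen:

    // Hypothetical 1.9.0-branch-only numbering: the high bits encode the
    // branch, so the resulting value (12451841) can never collide with the
    // small sequential versions used on trunk.
    #define IMPLEMENTATION_VERSION ((190 << 16) + 1)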
What's wrong with version 8, and just bumping it up one more step (to 9) next time on trunk or whatever branch?  Seems like simplicity trumps all here.
Talked about this offline with sdwilsh.  The main concern with sequential numbering is that we would inadvertently reuse a version number when we shouldn't, especially if there are some version numbers only in use on branches.  That said, this could also come up if we pick some large number to use on the branch.

Since it would be good to have some history for the different revision numbers anyway, Shawn's suggestion was to set up a wiki page that lists the different versions along with where they are in use, and the changes between them.  A code comment would tell people to check (and update) the wiki when bumping the implementation version.  This seems like it would solve the problem and maybe make things easier in the future as well.

David, would you want to take a crack at putting this together?
Yeah, this is exactly what the JS guys do for their internal API versions.
Attached patch v 0.1 Patch for 1.9.0 branch (obsolete) — Splinter Review
Yay! CVS. Built fine; all tests pass.
Attachment #429021 - Flags: review?(bryner+phishing)
Also added a wiki page on the versioning issue: https://wiki.mozilla.org/Urlclassifier-notes
Verified for 1.9.1 and 1.9.2 via checked in tests.
Comment on attachment 429021 [details] [diff] [review]
v 0.1 Patch for 1.9.0 branch

Looks good in general; just one comment.

>--- toolkit/components/url-classifier/src/nsUrlClassifierDBService.cpp	14 Nov 2008 23:30:10 -0000	1.82
>+++ toolkit/components/url-classifier/src/nsUrlClassifierDBService.cpp	26 Feb 2010 00:40:37 -0000
>@@ -18,16 +18,17 @@
> // The implementation version is updated during development when we
> // want to change schema, or to recover from updating bugs.  When an
> // implementation version change is detected, the database is scrapped
> // and we start over.
>-#define IMPLEMENTATION_VERSION 5
>+#define IMPLEMENTATION_VERSION 8

Please add a comment just above this that points to the wiki page.  Something along the lines of, "<url> lists the implementation versions that have been used.  When updating the implementation version, please make sure to update this page as well."

This comment should also be added on the trunk and the 1.9.1 / 1.9.2 branches.
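For reference, the combined comment plus define on the 1.9.0 branch might then read roughly like this (wording illustrative, assembled from the comment quoted above and the wiki URL added in an earlier comment):

    // The implementation version is updated during development when we
    // want to change schema, or to recover from updating bugs.  When an
    // implementation version change is detected, the database is scrapped
    // and we start over.
    //
    // https://wiki.mozilla.org/Urlclassifier-notes lists the implementation
    // versions that have been used.  When updating the implementation
    // version, please make sure to update that page as well.
    #define IMPLEMENTATION_VERSION 8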
Updated with a comment about versioning and the wiki page. I knew I forgot to do something.
Attachment #429021 - Attachment is obsolete: true
Attachment #429134 - Flags: review+
Attachment #429021 - Flags: review?(bryner+phishing)
Whiteboard: [3.6.x] → [3.6.x] [checkin-1.9.0] [checkin-needed]
Whiteboard: [3.6.x] [checkin-1.9.0] [checkin-needed] → [3.6.x]
Attachment #429134 - Flags: approval1.9.0.19?
Comment on attachment 429134 [details] [diff] [review]
v 0.2 Patch for 1.9.0 branch

Approved for 1.9.0.19, a=dveditz for release-drivers
Attachment #429134 - Flags: approval1.9.0.19? → approval1.9.0.19+
Any update on getting this checked in for 1.9.0.19?
Whiteboard: [3.6.x] → [3.6.x] [checkin-1.9.0] [checkin-needed]
I just re-added [checkin-needed]; someone removed it.
Keywords: checkin-needed
Whiteboard: [3.6.x] [checkin-1.9.0] [checkin-needed] → [3.6.x] [checkin-1.9.0]
mozilla/toolkit/components/url-classifier/src/nsUrlClassifierDBService.cpp 	1.83
mozilla/toolkit/components/url-classifier/tests/unit/head_urlclassifier.js 	1.15
mozilla/toolkit/components/url-classifier/tests/unit/test_addsub.js 	1.9
Whiteboard: [3.6.x] [checkin-1.9.0] → [3.6.x]
Verified for 1.9.0.19 via checked in tests.
This bug is not really fixed yet, since the comment below has not been added to the trunk and the other branches.

> Please add a comment just above this that points to the wiki page.  Something
> along the lines of, "<url> lists the implementation versions that have been
> used.  When updating the implementation version, please make sure to update
> this page as well."
> 
> This comment should also be added on the trunk and the 1.9.1 / 1.9.2 branches.
I have been "heads down" on another project and will fix this comment issue early in Q2.
Product: Firefox → Toolkit