Closed
Bug 1283007
Opened 9 years ago
Closed 9 years ago
Implement variable length PrefixSet class for Safe Browsing v4
Categories
(Toolkit :: Safe Browsing, defect, P2)
RESOLVED
DUPLICATE
of bug 1305801
People
(Reporter: dimi, Assigned: dimi)
References
Details
(Whiteboard: #sbv4-m1)
Attachments
(3 files, 7 obsolete files)
Currently Safe Browsing only supports a 4-byte prefix set (implemented in nsUrlClassifierPrefixSet.cpp).
In V4, update responses may contain variable-length prefixes, so we need to support them.
Comment 1•9 years ago
As discussed in London, it's probably worthwhile to just stow non-4-byte Prefixes in a sorted array as they will be a minority of the data.
Comment 2•9 years ago
Comment 3•9 years ago
FWIW it might be worthwhile to check if anything really needs the IDL/JS API access to this data-structure. It used to be required for testing, but we can write C++ unit tests now, so maybe that part can go.
Comment 4•9 years ago
Comment 5•9 years ago
(In reply to Gian-Carlo Pascutto [:gcp] from comment #3)
> FWIW it might be worthwhile to check if anything really needs the IDL/JS API
> access to this data-structure. It used to be required for testing, but we
> can write C++ unit tests now, so maybe that part can go.
OK, I'll see if I can use a C++ unit test :)
Thanks for the suggestion!
Comment 6•9 years ago
(In reply to Dimi Lee[:dimi][:dlee] from comment #4)
> Created attachment 8767064 [details] [diff] [review]
> WIP testcase
This test case will be rewritten in C++ if it is the only consumer of the nsIUrlClassifierVLPrefixSet IDL.
Updated•9 years ago
Whiteboard: #sbv4-v1
Updated•9 years ago
Whiteboard: #sbv4-v1 → #sbv4-m1
Updated•9 years ago
No longer blocks: safebrowsingv4
Updated•9 years ago
Attachment #8767028 -
Attachment is obsolete: true
Updated•9 years ago
Attachment #8767064 -
Attachment is obsolete: true
Comment 7•9 years ago
Hi gcp, francois,
As discussed at the London Work Week about variable-length prefixes, the conclusion was that for 4-byte prefixes we would use the original approach because it is already optimized.
But after starting to work on V4, I am not sure we should still do that.
In the V4 spec, prefixes sent from the server are already lexicographically sorted. The order is important because when doing a partial update, the server sends removal "indices" indicating which prefixes should be removed.
If we store 4-byte prefixes using the original approach, then each time an update occurs we will need to:
1. Sort the entire 4-byte prefix array in lexicographical order.
2. Remove and merge prefixes according to the update response.
3. Sort the 4-byte prefixes again using the old approach (convert to integers and sort).
4. Store the prefixes to disk.
Steps 1 and 3 are extra effort compared to just storing what the server sends us.
And the array to be sorted is the entire array we have, so even if the update response doesn't change much, we still need to sort the whole array.
Since the server already passes sorted prefixes, I think we can just store them to disk.
Do you have any suggestions? Thanks!
Flags: needinfo?(gpascutto)
Flags: needinfo?(francois)
Comment 8•9 years ago
(In reply to Dimi Lee[:dimi][:dlee] from comment #7)
> If we store 4-bytes prefixes using origin approach, then each time an update
> occurs, we will need to:
> 1. Sort entire 4-bytes prefixes array with lexicographical order
> 2. Removal and Merge prefixes according to update response
> 3. Sort 4-bytes prefixes again but using old approach(Convert to integer and
> sort).
> 4. Store prefixes to disk.
PrefixSet itself is sorted, and whatever extra array contains the non-4-byte prefixes will be sorted too if you're going to be doing lookups into it. So you can materialize a joint array into lexicographical order (which will not take any extra sorting, just a merge step) and operate on that. Or you can scan through both PrefixSet and the extra array simultaneously and you don't even need to materialize the joint array then.
Even if this weren't to work, sorting the array takes on the order of milliseconds, IIRC, so it's not worth bothering much about if we're only going to be doing it at most every half hour or so.
> Since server already pass sorted prefixes, I think we can just store it to
> disk.
The issue is whether you would be able to do fast lookups into it with minimal memory overhead. That is not so obvious to me, because the server uses Rice coding, IIRC.
Storage size used to be a strong consideration too (for Firefox on Android), but SB v4 allows us to limit the size of the database so I guess that is not as pressing any more now.
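gcp's merge step (materializing the joint lexicographic order without any re-sorting) can be sketched in a few lines. This is a standalone illustration with hypothetical names, not the actual PrefixSet API; `ToBytes` stands in for converting the stored big-endian-derived integers back into raw prefix bytes:

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <vector>

// Convert a 4-byte prefix value (stored as a big-endian-derived integer,
// so integer order equals lexicographic byte order) back into 4 raw bytes.
static std::string ToBytes(uint32_t v) {
  std::string s(4, '\0');
  s[0] = static_cast<char>((v >> 24) & 0xFF);
  s[1] = static_cast<char>((v >> 16) & 0xFF);
  s[2] = static_cast<char>((v >> 8) & 0xFF);
  s[3] = static_cast<char>(v & 0xFF);
  return s;
}

// Materialize the joint lexicographic order with a single merge pass:
// no sorting is needed because both inputs are already sorted.
std::vector<std::string> MergeSorted(const std::vector<uint32_t>& fixed,
                                     const std::vector<std::string>& variable) {
  std::vector<std::string> out;
  out.reserve(fixed.size() + variable.size());
  size_t i = 0, j = 0;
  while (i < fixed.size() && j < variable.size()) {
    std::string f = ToBytes(fixed[i]);
    if (f <= variable[j]) { out.push_back(f); ++i; }
    else { out.push_back(variable[j]); ++j; }
  }
  for (; i < fixed.size(); ++i) out.push_back(ToBytes(fixed[i]));
  for (; j < variable.size(); ++j) out.push_back(variable[j]);
  return out;
}
```

The same two-pointer scan also works without materializing `out`, which is the second option gcp mentions.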
Flags: needinfo?(gpascutto)
Comment 9•9 years ago
Hi gcp,
Thanks for the quick reply! Ni again just to confirm the sorted-prefix-set question.
(In reply to Gian-Carlo Pascutto [:gcp] from comment #8)
> (In reply to Dimi Lee[:dimi][:dlee] from comment #7)
> > 1. Sort entire 4-bytes prefixes array with lexicographical order
> > 2. Removal and Merge prefixes according to update response
> > 3. Sort 4-bytes prefixes again but using old approach(Convert to integer and
> > sort).
> > 4. Store prefixes to disk.
>
> PrefixSet itself is sorted, and whatever extra array contains the non-4-byte
> prefixes will be sorted too if you're going to be doing lookups into it. So
> you can materialize a joint array into lexicographical order (which will not
> take any extra sorting, just a merge step) and operate on that. Or you can
> scan through both PrefixSet and the extra array simultaneously and you don't
> even need to materialize the joint array then.
>
Yes, PrefixSet is sorted; IIRC, it is sorted after converting to integers. But converting a 4-byte prefix to an integer depends on endianness: if the platform is little-endian, the integer-sorted prefixes cannot simply be merged with a lexicographically sorted prefix set. For example, if the server tells us to remove the third prefix, even if we only have the 4-byte prefix set, we still need to sort it again in lexicographical order to find the third prefix.
Does this make sense?
> Even if this weren't to work, sorting the array takes on the order of
> milliseconds, IIRC, so it's not worth bothering much about if we're only
> going to be doing it at most every half hour or so.
>
I will check. The last time I tested the V4 server update response, there were about one million prefixes; sorting the array did take some time, but I will double-check later.
Flags: needinfo?(gpascutto)
Comment 10•9 years ago
For v4 I'd just convert the values to big-endian before giving them to PrefixSet then?
Flags: needinfo?(gpascutto)
Comment 11•9 years ago
(In reply to Gian-Carlo Pascutto [:gcp] from comment #10)
> For v4 I'd just convert the values to big-endian before giving them to
> PrefixSet then?
Yes, I think this will work. I will take this approach :)
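The approach can be illustrated with a small standalone sketch. `ReadBigEndianUint32` below is a hypothetical stand-in for mozilla's `BigEndian::readUint32`, shown only to demonstrate why converting via big-endian preserves lexicographic order:

```cpp
#include <cassert>
#include <cstdint>

// Read a 4-byte prefix as a big-endian integer: the first (most
// significant) byte dominates the comparison, so integer order on the
// results equals lexicographic order on the raw bytes, regardless of
// the platform's native endianness.
uint32_t ReadBigEndianUint32(const char* p) {
  return (static_cast<uint32_t>(static_cast<unsigned char>(p[0])) << 24) |
         (static_cast<uint32_t>(static_cast<unsigned char>(p[1])) << 16) |
         (static_cast<uint32_t>(static_cast<unsigned char>(p[2])) << 8) |
          static_cast<uint32_t>(static_cast<unsigned char>(p[3]));
}
```

A direct pointer cast would instead read the bytes in native order, which on little-endian machines reverses the comparison; that is exactly the problem comment 9 raised.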
Comment 12•9 years ago
(In reply to Gian-Carlo Pascutto [:gcp] from comment #8)
> Storage size used to be a strong consideration too (for Firefox on Android),
> but SB v4 allows us to limit the size of the database so I guess that is not
> as pressing any more now.
And also we shouldn't optimize for Android because we should instead switch to the new Android platform API.
Flags: needinfo?(francois)
Comment 13•9 years ago
(In reply to François Marier [:francois] from comment #12)
> And also we shouldn't optimize for Android because we should instead switch
> to the new Android platform API.
Well, yes, but the problem with that argument is that rolling out V4 support would then be blocked on rewriting the relevant parts of Firefox for Android.
Comment 14•9 years ago
(In reply to Gian-Carlo Pascutto [:gcp] from comment #13)
> Well yes but the problem of that argument is that it would block v4 support
> rolling out on rewriting the relevant parts of Firefox for Android.
Or we can ship V4 on Desktop and keep Android on V2 for a little longer.
Comment 15•9 years ago
Comment 16•9 years ago
Hi Henry,
I am still refining the test case; I will upload it for feedback once it is finished.
Attachment #8778820 -
Attachment is obsolete: true
Attachment #8779612 -
Flags: feedback?(hchang)
Comment 17•9 years ago
Attachment #8779612 -
Attachment is obsolete: true
Attachment #8779612 -
Flags: feedback?(hchang)
Attachment #8779687 -
Flags: feedback?(hchang)
Comment 18•9 years ago
Attachment #8779688 -
Flags: feedback?(hchang)
Comment 19•9 years ago
Obsoleting the previous one because I attached the wrong file ><
Attachment #8779687 -
Attachment is obsolete: true
Attachment #8779687 -
Flags: feedback?(hchang)
Attachment #8779957 -
Flags: feedback?(hchang)
Comment 20•9 years ago
Comment on attachment 8779957 [details] [diff] [review]
P1. Implement variable length PrefixSet class for Safe Browsing v4
Review of attachment 8779957 [details] [diff] [review]:
-----------------------------------------------------------------
Overall it looks pretty complete to me, except I haven't figured out the endianness while accessing nsUrlClassifierPrefixSet. But it's still a good implementation! Will have an offline discussion with you. Thanks!
::: toolkit/components/url-classifier/nsUrlClassifierVLPrefixSet.cpp
@@ +13,5 @@
> +#include "mozilla/EndianUtils.h"
> +#include "mozilla/Logging.h"
> +#include "mozilla/unused.h"
> +#include <algorithm>
> +
It seems some of the headers are not required in this file.
@@ +132,5 @@
> +}
> +
> +// If there are multiple different length prefixes with common prefixes, for example,
> +// Prefixes: { 12345AA, 12345BB,12345AX }, FullHashes : 12345AA..... , we do not gurantee
> +// which one we will return in this case.
According to the implementation, the order is
1) 4-byte prefixes
2) 5-byte prefixes
...
up to 32-byte prefixes
Am I understanding right?
@@ +177,5 @@
> +
> + mPrefixSet->IsEmpty(aEmpty);
> + if (*aEmpty) {
> + *aEmpty = mVLPrefixSet.IsEmpty();
> + }
I'd prefer
mPrefixSet->IsEmpty(aEmpty);
*aEmpty = *aEmpty && mVLPrefixSet.IsEmpty();
but the original is also good to me :)
::: toolkit/components/url-classifier/nsUrlClassifierVLPrefixSet.h
@@ +13,5 @@
> +#include "mozilla/Mutex.h"
> +
> +class nsUrlClassifierPrefixSet;
> +
> +using namespace mozilla::safebrowsing;
I'd suggest not exposing namespace in the header file.
@@ +16,5 @@
> +
> +using namespace mozilla::safebrowsing;
> +
> +class nsUrlClassifierVLPrefixSet final
> + : public nsISupports
Since I don't know yet how this class will be used, I cannot tell whether it needs to inherit nsISupports. Will it be refcounted?
Attachment #8779957 -
Flags: feedback?(hchang) → feedback+
Comment 21•9 years ago
Comment on attachment 8779688 [details] [diff] [review]
P2. Testcase
Review of attachment 8779688 [details] [diff] [review]:
-----------------------------------------------------------------
Very thorough testing :)
My only concern is the nondeterminism of nsUrlClassifierVLPrefixSet::Match(). Could we implement it to return the longest (or shortest) prefix length that matches the full hash? We can keep this in mind until the nondeterminism becomes a problem :) Thanks!
::: toolkit/components/url-classifier/tests/gtest/TestUrlClassifierVLPrefixSet.cpp
@@ +64,5 @@
> + nsCString* set = map.Get(iter.Key());
> + nsCString* get = iter.Data();
> +
> + ASSERT_TRUE(set->Length() == get->Length());
> + ASSERT_TRUE(0 == memcmp(set->BeginReading(), get->BeginReading(), set->Length()));
Couldn't it just be
ASSERT_TRUE(set->Equals(*get));
?
@@ +92,5 @@
> + }
> + }
> + ASSERT_TRUE(find);
> + } else {
> + ASSERT_TRUE(true);
It seems not needed.
@@ +115,5 @@
> + uint32_t expected = 0;
> + for (uint32_t j = 0; j < array.Length(); j++) {
> + const nsACString& str = array[j];
> + if (0 == memcmp(buf, str.BeginReading(), str.Length())) {
> + expected = expected != 0 && expected < str.Length() ? expected : str.Length();
Could it just be
expected = std::max(expected, str.Length());
Attachment #8779688 -
Flags: feedback?(hchang) → feedback+
Updated•9 years ago
Priority: -- → P2
Comment 22•9 years ago
Thanks for your suggestions! I will fix them in the next patch.
(In reply to Henry Chang [:henry][:hchang] from comment #20)
>
> @@ +132,5 @@
> > +}
> > +
> > +// If there are multiple different length prefixes with common prefixes, for example,
> > +// Prefixes: { 12345AA, 12345BB,12345AX }, FullHashes : 12345AA..... , we do not gurantee
> > +// which one we will return in this case.
>
> According to the implementation, the order is
>
> 01) 4-byte prefix
> 02) 1-byte prefix
> ...
> 32) 32-byte prefix
>
> Am I understanding right?
>
In the current patch, it first searches the 4-byte prefix set; if there is a match, it returns a match length of 4. If there is no match in the 4-byte prefix set, we then iterate through the 5-byte ~ 32-byte hashtables to find a matching prefix. AFAIK the hashtable iterator does not guarantee order (in this case 5, 6, 7, ..., 32), and we return immediately on a match, which is why I added the comment that this function doesn't guarantee the matched prefix size is the longest or the shortest.
I'll find you to discuss this offline so we can work out a better approach :)
> @@ +16,5 @@
> > +
> > +using namespace mozilla::safebrowsing;
> > +
> > +class nsUrlClassifierVLPrefixSet final
> > + : public nsISupports
>
> Since I haven't know how this class would be used, I cannot tell if it needs
> to inherit nsISupports. Would it be ref counted?
Yes, it will be added to LookupCache.h like mPrefixSet is now:
https://dxr.mozilla.org/mozilla-central/source/toolkit/components/url-classifier/LookupCache.h#146
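The lookup order described above can be sketched minimally as follows. The containers here are illustrative (simple standard-library types, not the real nsUrlClassifierVLPrefixSet internals); the point is only to show why the matched length is unspecified when prefixes of different lengths share a common start:

```cpp
#include <cassert>
#include <map>
#include <set>
#include <string>

// Check the optimized 4-byte set first, then scan the variable-length
// buckets (keyed by prefix length, 5..32) and return on the first match.
// Because bucket iteration order is not semantically meaningful in the
// real hashtable, the returned length is not guaranteed to be the
// longest or shortest matching prefix.
uint32_t MatchLength(const std::set<std::string>& fourByteSet,
                     const std::map<uint32_t, std::set<std::string>>& vlBuckets,
                     const std::string& fullHash) {
  if (fourByteSet.count(fullHash.substr(0, 4))) {
    return 4;
  }
  for (const auto& bucket : vlBuckets) {
    if (bucket.second.count(fullHash.substr(0, bucket.first))) {
      return bucket.first;  // first match wins
    }
  }
  return 0;  // no match
}
```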
Comment 23•9 years ago
(In reply to Henry Chang [:henry][:hchang] from comment #20)
>
> It overall looks pretty complete to me except I haven't figured out the
> endianness while accessing nsUrlClassifierPrefixSet. But it's still a good
> implementation! Will have a offline discussion with you. Thanks!
>
Hi Henry,
Below is my understanding of the endianness question in this patch. Would you mind taking a look to see whether it is correct? Thanks!
Sorted lexicographic-order char arrays:  [0,1,0,0] < [1,0,0,0] < [1,1,0,0]
Converted to integers as big-endian:     2^17 < 2^25 < 2^25 + 2^17
  (via BigEndian::readUint32; these values are stored to disk)

(A) On a big-endian platform:
    Integers read from disk:                 2^17 < 2^25 < 2^25 + 2^17
    Actual char arrays in memory:            [0,1,0,0], [1,0,0,0], [1,1,0,0]
    After SwapToBigEndian:                   2^17, 2^25, 2^25 + 2^17
    Char arrays (lexicographically sorted):  [0,1,0,0], [1,0,0,0], [1,1,0,0]

(B) On a little-endian platform:
    Integers read from disk:                 2^17 < 2^25 < 2^25 + 2^17
    Actual char arrays in memory:            [0,0,1,0], [0,0,0,1], [0,0,1,1]
    After SwapToBigEndian:                   2^9, 2^1, 2^9 + 2^1
    Char arrays (lexicographically sorted):  [0,1,0,0], [1,0,0,0], [1,1,0,0]
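The round trip can also be checked with a portable standalone sketch. The helpers below are hypothetical stand-ins for mozilla's EndianUtils (`SwapToBigEndian` mimics `NativeEndian::swapToBigEndian`); the point is only that writing prefixes as big-endian integers and swapping back on load recovers the original lexicographic byte order on either kind of platform:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <string>

// Stand-in for NativeEndian::swapToBigEndian: byte-swap on little-endian
// hosts, identity on big-endian hosts.
uint32_t SwapToBigEndian(uint32_t v) {
  const uint16_t probe = 1;
  const bool littleEndian = *reinterpret_cast<const uint8_t*>(&probe) == 1;
  if (!littleEndian) return v;
  return (v << 24) | ((v & 0xFF00u) << 8) | ((v >> 8) & 0xFF00u) | (v >> 24);
}

// One prefix's round trip: raw bytes -> big-endian integer (the on-disk
// form) -> swapToBigEndian -> raw bytes laid out MSB-first in memory.
std::string RoundTrip(const std::string& prefix4) {
  uint32_t be = (static_cast<uint32_t>(static_cast<unsigned char>(prefix4[0])) << 24) |
                (static_cast<uint32_t>(static_cast<unsigned char>(prefix4[1])) << 16) |
                (static_cast<uint32_t>(static_cast<unsigned char>(prefix4[2])) << 8) |
                 static_cast<uint32_t>(static_cast<unsigned char>(prefix4[3]));
  uint32_t inMemory = SwapToBigEndian(be);
  std::string out(4, '\0');
  std::memcpy(&out[0], &inMemory, 4);
  return out;  // identical to prefix4 on both endiannesses
}
```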
Flags: needinfo?(hchang)
Comment 24•9 years ago
(In reply to Dimi Lee[:dimi][:dlee] from comment #23)
> Below is what i understand about the endian question in this patch
> Would you mind take a look and see if you think this is correct ? thanks!
> [...]
Thanks for your explanation; it helped me figure the whole thing out!
Since PrefixSet requires ints ordered in native order, and the given ints are ordered as big-endian, we only need to swap the endianness when the native endianness is not big-endian. That's exactly what BigEndian::swapToBigEndian does!
Thanks!
Flags: needinfo?(hchang)
Comment hidden (mozreview-request)
Comment hidden (mozreview-request)
Updated•9 years ago
Attachment #8779688 -
Attachment is obsolete: true
Updated•9 years ago
Attachment #8779957 -
Attachment is obsolete: true
Comment 27•9 years ago
Comment on attachment 8781440 [details]
Bug 1283007 - P3. Testcase for variable length prefixSet.
https://reviewboard.mozilla.org/r/71878/#review70004
I like this test a lot. It's very thorough!
One thing that would be good to add is creating PrefixSets of length < 4 and length > 32. They should both fail.
::: toolkit/components/url-classifier/tests/gtest/TestUrlClassifierVLPrefixSet.cpp:78
(Diff revision 1)
> + ASSERT_TRUE(set->Equals(*get));
> + }
> +}
> +
> +// This test loop through all the prefixes and convert each prefix to
> +// fullhash by appending 'F'. Each converted fullhash should at least
What do you mean by "appending 'F'". It seems like the `CreateFullHash()` function appends random characters.
::: toolkit/components/url-classifier/tests/gtest/TestUrlClassifierVLPrefixSet.cpp:83
(Diff revision 1)
> +// fullhash by appending 'F'. Each converted fullhash should at least
> +// matches its origin length in the prefixSet.
> +static void DoExpectedLookup(nsUrlClassifierVLPrefixSet* pset,
> + _PrefixArray& array)
> +{
> + uint32_t matches = 0;
nit: `matchLength` would be a clearer name here
::: toolkit/components/url-classifier/tests/gtest/TestUrlClassifierVLPrefixSet.cpp:95
(Diff revision 1)
> + ASSERT_TRUE(false);
> + } else if (matches != prefix.Length()) {
> + // Return match size is not the same as prefix size
> + // it could match other prefixes, find out if this prefix exist.
> + const char* begin = fullhash->BeginReading();
> + bool find = false;
grammar nit: `found` would be better as a name
::: toolkit/components/url-classifier/tests/gtest/TestUrlClassifierVLPrefixSet.cpp:96
(Diff revision 1)
> + } else if (matches != prefix.Length()) {
> + // Return match size is not the same as prefix size
> + // it could match other prefixes, find out if this prefix exist.
> + const char* begin = fullhash->BeginReading();
> + bool find = false;
> + for (uint32_t j = 0; j < array.Length(); j++) {
You're defining `j` as the iterator here, but you don't appear to be using it in the loop. You're using only the other loop's iterator (`i`). Typo?
::: toolkit/components/url-classifier/tests/gtest/TestUrlClassifierVLPrefixSet.cpp:101
(Diff revision 1)
> + for (uint32_t j = 0; j < array.Length(); j++) {
> + if (array[i].Length() != matches) {
> + continue;
> + }
> +
> + if (0 == memcmp(begin, array[i].BeginReading(), matches)) {
I find the `begin` variable here a little hard to understand. Maybe just use `fullhash->BeginReading()` directly to make it clear we're comparing the fullhash and the j^th prefix in the array.
::: toolkit/components/url-classifier/tests/gtest/TestUrlClassifierVLPrefixSet.cpp:133
(Diff revision 1)
> + if (0 == memcmp(buf, str.BeginReading(), str.Length())) {
> + expected.AppendElement(str.Length());
> + }
> + }
> +
> + uint32_t matches = 0;
Again, `matchLength` would be clearer.
::: toolkit/components/url-classifier/tests/gtest/TestUrlClassifierVLPrefixSet.cpp:140
(Diff revision 1)
> +
> + ASSERT_TRUE(expected.IsEmpty() ? matches == 0 : expected.Contains(matches));
> + }
> +}
> +
> +static void SetupPrefixesMap(const _PrefixArray& array,
grammar nit: `SetupPrefixMap`
::: toolkit/components/url-classifier/tests/gtest/TestUrlClassifierVLPrefixSet.cpp:145
(Diff revision 1)
> +static void SetupPrefixesMap(const _PrefixArray& array,
> + PrefixesStringMap& map)
> +{
> + map.Clear();
> +
> + nsClassHashtable<nsUint32HashKey, _PrefixArray> table;
For clarity, I would suggest the following comment:
// Buckets are keyed by prefix length and contain an array of all prefixes of that length.
::: toolkit/components/url-classifier/tests/gtest/TestUrlClassifierVLPrefixSet.cpp:157
(Diff revision 1)
> + }
> +
> + prefixes->AppendElement(array[i]);
> + }
> +
> + for (auto iter = table.Iter(); !iter.Done(); iter.Next()) {
Here's a suggested comment:
// The resulting map entries will be a concatenation of all prefix data for the prefixes of a given size.
::: toolkit/components/url-classifier/tests/gtest/TestUrlClassifierVLPrefixSet.cpp:219
(Diff revision 1)
> +{
> + RefPtr<nsUrlClassifierVLPrefixSet> pset = new nsUrlClassifierVLPrefixSet;
> + pset->Init(NS_LITERAL_CSTRING("test"));
> +
> + PrefixesStringMap map;
> + _PrefixArray array = { _Prefix("bravo"), _Prefix("charlie"), _Prefix("delta"),
Maybe this should have longer prefixes too? The longest one is only 8 bytes.
You could have `_Prefix("novembernovembernovembernovember")` for a 32-byte prefix.
::: toolkit/components/url-classifier/tests/gtest/TestUrlClassifierVLPrefixSet.cpp:258
(Diff revision 1)
> +{
> + RefPtr<nsUrlClassifierVLPrefixSet> pset = new nsUrlClassifierVLPrefixSet;
> + pset->Init(NS_LITERAL_CSTRING("test"));
> +
> + PrefixesStringMap map;
> + _PrefixArray array = { _Prefix("Venus"), _Prefix("Apollo"), _Prefix("Mars"),
Again, it might be worth having longer prefixes too.
Attachment #8781440 -
Flags: review?(francois) → review-
Comment 28•9 years ago
Comment on attachment 8781439 [details]
Bug 1283007 - P1. Implement variable length PrefixSet class for Safe Browsing v4.
https://reviewboard.mozilla.org/r/71876/#review70024
::: toolkit/components/url-classifier/Entries.h:309
(Diff revision 1)
> return aStream->Write(reinterpret_cast<char*>(aArray.Elements()),
> aArray.Length() * sizeof(T),
> &written);
> }
>
> +typedef nsClassHashtable<nsUint32HashKey, nsCString> PrefixesStringMap;
grammar nit: `PrefixStringMap` would be better
::: toolkit/components/url-classifier/nsUrlClassifierVLPrefixSet.h:21
(Diff revision 1)
> +
> +class nsUrlClassifierVLPrefixSet final
> + : public nsISupports
> +{
> +public:
> + typedef mozilla::safebrowsing::PrefixesStringMap PrefixesStringMap;
nit: the typedef seems a little unnecessary
::: toolkit/components/url-classifier/nsUrlClassifierVLPrefixSet.h:50
(Diff revision 1)
> + uint32_t CalculatePreallocateSize();
> + nsresult WritePrefixes(nsIOutputStream* out);
> + nsresult LoadPrefixes(nsIInputStream* in);
> +
> +#if DEBUG
> + void Dump(PrefixesStringMap& aPrefixMap);
Not defined anywhere. You should either define it in the cpp or take it out.
::: toolkit/components/url-classifier/nsUrlClassifierVLPrefixSet.cpp:40
(Diff revision 1)
> +}
> +
> +NS_IMETHODIMP
> +nsUrlClassifierVLPrefixSet::Init(const nsACString& aName)
> +{
> + // TODO : This init function is for Memory Reporter
We should implement this before landing.
::: toolkit/components/url-classifier/nsUrlClassifierVLPrefixSet.cpp:48
(Diff revision 1)
> +
> +nsUrlClassifierVLPrefixSet::~nsUrlClassifierVLPrefixSet()
> +{
> +}
> +
> +// TODO : is it possilbe to make it const
We should check this and remove the comment before landing.
Comment 29•9 years ago
Comment on attachment 8781439 [details]
Bug 1283007 - P1. Implement variable length PrefixSet class for Safe Browsing v4.
https://reviewboard.mozilla.org/r/71876/#review70032
I noticed a few small things, but before I take a thorough look at the logic, I was wondering whether we should merge PrefixSet and VLPrefixSet.
Since we'll never remove the 4-byte support from the codebase (we need it in V4 too), I think it makes sense to simply evolve the existing code to support both 4-bytes and 5-32 bytes in the same class.
A small advantage of that approach is that your gtest would then directly test the old V2 class too.
Attachment #8781439 -
Flags: review?(francois)
Comment 30•9 years ago
(In reply to François Marier [:francois] from comment #29)
>
> Since we'll never remove the 4-byte support from the codebase (we need it in
> V4 too), I think it makes sense to simply evolve the existing code to
> support both 4-bytes and 5-32 bytes in the same class.
>
> A small advantage of that approach is that your gtest would then directly
> test the old V2 class too.
Hi Francois,
I have some concerns about merging VLPrefixSet and PrefixSet.
One problem is that the interfaces are quite different: Set/GetPrefixes of VLPrefixSet takes lexicographically sorted strings as parameters, while PrefixSet is integer-array based. The interface for checking whether a full hash matches is also different.
I think keeping them separate lets users of the prefix set know exactly which interface should be used; it is less confusing than having a single mPrefixSet where you are only allowed to call a subset of the interfaces depending on your protocol version.
Also, since the implementations are quite different, I think the code is somewhat clearer with the classes kept separate rather than merged into a single class.
What do you think?
Flags: needinfo?(francois)
Comment 31•9 years ago
Comment on attachment 8781440 [details]
Bug 1283007 - P3. Testcase for variable length prefixSet.
https://reviewboard.mozilla.org/r/71878/#review70004
It will hit an assert in nsUrlClassifierVLPrefixSet; I am not sure how to verify it in a gtest unless I remove the assert and return an error instead.
> What do you mean by "appending 'F'". It seems like the `CreateFullHash()` function appends random characters.
Ah sorry, I forgot to update the comment after changing the implementation.
> You're defining `j` as the iterator here, but you don't appear to be using it in the loop. You're using only the other loop's iterator (`i`). Typo?
typo, thanks for finding this!
Comment 32•9 years ago
(In reply to François Marier [:francois] from comment #29)
>
> A small advantage of that approach is that your gtest would then directly
> test the old V2 class too.
Sorry, one thing I forgot to address: we already have a very thorough test for the V2 class [1].
That is the test I referenced while working on this one :)
[1] https://dxr.mozilla.org/mozilla-central/source/toolkit/components/url-classifier/tests/unit/test_prefixset.js
Comment hidden (mozreview-request)
Comment hidden (mozreview-request)
Comment 35•9 years ago
Comment on attachment 8781439 [details]
Bug 1283007 - P1. Implement variable length PrefixSet class for Safe Browsing v4.
Cancel review until we have a conclusion about comment 28, comment 29
Attachment #8781439 -
Flags: review?(francois)
Comment 36•9 years ago
Comment on attachment 8781440 [details]
Bug 1283007 - P3. Testcase for variable length prefixSet.
https://reviewboard.mozilla.org/r/71878/#review70492
Thanks for addressing all of my review comments!
Can we easily add tests for prefix of length < 4 and length > 32 to ensure that they fail?
Attachment #8781440 -
Flags: review?(francois)
Updated•9 years ago
Flags: needinfo?(francois) → needinfo?(dlee)
Comment 37•9 years ago
(In reply to François Marier [:francois] from comment #36)
> Comment on attachment 8781440 [details]
> Bug 1283007 - P2. Testcase for variable length prefixSet.
>
> https://reviewboard.mozilla.org/r/71878/#review70492
>
> Thanks for addressing all of my review comments!
>
> Can we easily add tests for prefix of length < 4 and length > 32 to ensure
> that they fail?
Yes, in the previous version I added MOZ_ASSERT for these cases, so I didn't add a gtest for them.
But I think it would be more reasonable to just return a failure instead of asserting.
I will add the test cases in the next patch.
Flags: needinfo?(dlee)
Comment hidden (mozreview-request)
Comment hidden (mozreview-request)
Updated•9 years ago
Attachment #8781439 -
Flags: review?(francois)
Comment 40•9 years ago
(In reply to Dimi Lee[:dimi][:dlee] from comment #30)
> I think keep them separate make the user of prefixSet knows exactly which
> interface should be used, it will be less confusing since you won't get one
> mPrefixSet and you are only allowed to call part of interfaces according to
> your protocol version.
>
> Also since the implementation is quite different, so for me i think the code
> is somewhat more clear by separating them instead of merging them into
> single class.
>
> How do you think ?
After chatting about this, we decided to keep them separate to avoid adding a perf overhead to the 4-byte prefix handling code.
Comment 41•9 years ago
Comment on attachment 8781440 [details]
Bug 1283007 - P3. Testcase for variable length prefixSet.
https://reviewboard.mozilla.org/r/71878/#review71878
Attachment #8781440 -
Flags: review?(francois) → review+
Comment 42•9 years ago
Comment on attachment 8781439 [details]
Bug 1283007 - P1. Implement variable length PrefixSet class for Safe Browsing v4.
https://reviewboard.mozilla.org/r/71876/#review75660
That looks quite good Dimi. Couldn't find anything wrong in the logic!
My comments are around making a few things easier to read, defending against OOM crashes, adding Telemetry and also three small coding-style related things:
- The new class doesn't need the legacy `ns` prefix and can probably also lose the `UrlClassifier` prefix since it's already in the directory name. Then it could simply be called `VariableLengthPrefixSet`.
- The guard at the top of the CPP file should not have a trailing underscore as per https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Coding_Style#CC_practices
- Is it possible to put the new class in the `mozilla::safebrowsing` namespace?
::: toolkit/components/url-classifier/nsUrlClassifierVLPrefixSet.h:58
(Diff revision 3)
> + // of the operations) and the main thread (which does memory reporting).
> + // It should be held for all operations between Init() and destruction that
> + // touch this class's data members.
> + mozilla::Mutex mLock;
> +
> + RefPtr<nsUrlClassifierPrefixSet> mPrefixSet;
Nit: maybe we could call this `mFixedPrefixSet` (fixed-length) to make it easier to spot mistakes in the code?
::: toolkit/components/url-classifier/nsUrlClassifierVLPrefixSet.cpp:18
(Diff revision 3)
> +#include <algorithm>
> +
> +// This class will process prefix size between 4~32. But for 4 bytes prefixes,
> +// they will be passed to nsUrlClassifierPrefixSet because of better optimization.
> +
> +#define PREFIX_SIZE_4BYTES 4
Nit: how about `PREFIX_SIZE_FIXED` instead? Having "4bytes" in the name seems a little redundant.
::: toolkit/components/url-classifier/nsUrlClassifierVLPrefixSet.cpp:85
(Diff revision 3)
> +
> + // Prefixes are lexicographically-sorted, so the interger array
> + // passed to nsUrlClassifierPrefixSet should also follow the same order.
> + // To make sure of that, we convert char array to integer with Big-Endian
> + // instead of casting to integer directly.
> + nsTArray<uint32_t> array;
Maybe this should be a fallible array and return an out of memory error if `SetCapacity` fails?
::: toolkit/components/url-classifier/nsUrlClassifierVLPrefixSet.cpp:146
(Diff revision 3)
> +
> + return NS_OK;
> +}
> +
> +// If there are multiple different length prefixes with common prefixes, for example,
> +// Prefixes: { 12345AA, 12345BB,12345AX }, FullHashes : 12345AA..... , we do not gurantee
If I understand correctly, you mean that if many prefixes match to different full hashes, we don't guarantee which full hash we return.
I would expand the example to make it clearer:
```
If there are multiple prefixes mapping to the same full hash which start with the same characters but have different lengths, we do not guarantee which of the prefix length we will return.
For example, if we have these prefixes:
12345AA -> 12345AABBBBBBBBB
12345A -> 12345AABBBBBBBBB
12345 -> 12345AABBBBBBBBB
then calling Matches("12345AABBBBBBBBB", length) will return a length of either 6, 7 or 8.
```
but maybe I've misunderstood what you meant.
::: toolkit/components/url-classifier/nsUrlClassifierVLPrefixSet.cpp:199
(Diff revision 3)
> +
> +NS_IMETHODIMP
> +nsUrlClassifierVLPrefixSet::LoadFromFile(nsIFile* aFile)
> +{
> + MutexAutoLock lock(mLock);
> +
Maybe we should add telemetry like in the fixed-length prefix set so that we can compare the performance of both: `URLCLASSIFIER_VLPS_FILELOAD_TIME`
::: toolkit/components/url-classifier/nsUrlClassifierVLPrefixSet.cpp:248
(Diff revision 3)
> +
> + uint32_t fileSize = 0;
> + // Preallocate the file storage
> + {
> + nsCOMPtr<nsIFileOutputStream> fos(do_QueryInterface(localOutFile));
> +
Again, we could create a new `URLCLASSIFIER_PS_FALLOCATE_TIME` telemetry probe here.
::: toolkit/components/url-classifier/nsUrlClassifierVLPrefixSet.cpp:290
(Diff revision 3)
> + uint32_t count;
> + rv = in->Read(reinterpret_cast<char*>(&count), sizeof(uint32_t), &read);
> + NS_ENSURE_SUCCESS(rv, rv);
> + NS_ENSURE_TRUE(read == sizeof(uint32_t), NS_ERROR_FAILURE);
> +
> + if (count == 0) {
I think this check is unnecessary since if `count` is equal to 0, it will fail the check at the start of the `for` loop and then we'll get down to the `return NS_OK`.
::: toolkit/components/url-classifier/nsUrlClassifierVLPrefixSet.cpp:306
(Diff revision 3)
> + rv = in->Read(reinterpret_cast<char*>(&stringLength), sizeof(uint32_t), &read);
> + NS_ENSURE_SUCCESS(rv, rv);
> + NS_ENSURE_TRUE(read == sizeof(uint32_t), NS_ERROR_FAILURE);
> +
> + nsCString* vlPrefixes = new nsCString();
> + vlPrefixes->SetLength(stringLength);
Is there a way we can check for out-of-memory errors here and return an error?
::: toolkit/components/url-classifier/nsUrlClassifierVLPrefixSet.cpp:354
(Diff revision 3)
> +
> + // Store PrefixSize, Length of Prefix String and then Prefix String
> + for (auto iter = mVLPrefixSet.ConstIter(); !iter.Done(); iter.Next()) {
> + const nsCString& vlPrefixes = *iter.Data();
> +
> + uint8_t key = iter.Key();
`prefixSize` would be a better name here since it would match what the comment says and what the `LoadPrefixes()` function calls it too.
::: toolkit/components/url-classifier/nsUrlClassifierVLPrefixSet.cpp:360
(Diff revision 3)
> + writelen = sizeof(uint8_t);
> + rv = out->Write(reinterpret_cast<char*>(&key), writelen, &written);
> + NS_ENSURE_SUCCESS(rv, rv);
> + NS_ENSURE_TRUE(written == writelen, NS_ERROR_FAILURE);
> +
> + uint32_t len = vlPrefixes.Length();
Similarly, maybe we should use `stringLength` here to match what's in `LoadPrefixes()`.
::: toolkit/components/url-classifier/nsUrlClassifierVLPrefixSet.cpp:381
(Diff revision 3)
> + const nsACString& aPrefixes,
> + uint32_t aPrefixSize)
> +{
> + const char* fullhash = aFullHash.BeginReading();
> + const char* prefixes = aPrefixes.BeginReading();
> + int32_t num = aPrefixes.Length() / aPrefixSize;
Reusing `num` here makes the algorithm a little confusing.
How about this instead?
int32_t begin = 0;
int32_t end = aPrefixes.Length() / aPrefixSize;
::: toolkit/components/url-classifier/nsUrlClassifierVLPrefixSet.cpp:384
(Diff revision 3)
> + const char* fullhash = aFullHash.BeginReading();
> + const char* prefixes = aPrefixes.BeginReading();
> + int32_t num = aPrefixes.Length() / aPrefixSize;
> + int32_t begin = 0, end = num;
> +
> + while(end >= begin) {
nit: space between `while` and `(`
::: toolkit/components/url-classifier/nsUrlClassifierVLPrefixSet.cpp:385
(Diff revision 3)
> + const char* prefixes = aPrefixes.BeginReading();
> + int32_t num = aPrefixes.Length() / aPrefixSize;
> + int32_t begin = 0, end = num;
> +
> + while(end >= begin) {
> + num = (begin + end) >> 1;
nit: `mid` might be a better name here, emphasizing that we're always checking the item in the middle.
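With the reviewer's suggested `begin`/`end`/`mid` naming, a binary search over a flat string of sorted, fixed-size prefixes looks roughly like this. This is a standalone sketch using `std::string` rather than the patch's `nsACString` types:

```cpp
#include <cstdint>
#include <cstring>
#include <string>

// Search `prefixes` (concatenated, sorted, fixed-size entries) for the
// first `prefixSize` bytes of `fullHash`. The half-open index math is
// the textbook form; `mid` is always the item probed in the middle.
bool BinarySearchPrefix(const std::string& fullHash,
                        const std::string& prefixes,
                        size_t prefixSize) {
  int32_t begin = 0;
  int32_t end = static_cast<int32_t>(prefixes.length() / prefixSize) - 1;
  while (begin <= end) {
    int32_t mid = (begin + end) / 2;
    int cmp = std::memcmp(fullHash.data(),
                          prefixes.data() + mid * prefixSize,
                          prefixSize);
    if (cmp == 0) {
      return true;
    }
    if (cmp < 0) {
      end = mid - 1;
    } else {
      begin = mid + 1;
    }
  }
  return false;
}
```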
::: toolkit/components/url-classifier/nsUrlClassifierVLPrefixSet.cpp:410
(Diff revision 3)
> +
> + size_t amount = SizeOfIncludingThis(UrlClassifierMallocSizeOf);
> +
> + return aHandleReport->Callback(
> + EmptyCString(), mMemoryReportPath, KIND_HEAP, UNITS_BYTES, amount,
> + NS_LITERAL_CSTRING("Memory used by the vlprefix set for a URL classifier."),
Nit: expand `vlprefix set` to `variable-length prefix set`
Attachment #8781439 -
Flags: review?(francois) → review-
Comment 43•9 years ago
|
||
| mozreview-review | ||
Comment on attachment 8781439 [details]
Bug 1283007 - P1. Implement variable length PrefixSet class for Safe Browsing v4.
https://reviewboard.mozilla.org/r/71876/#review75668
| Assignee | ||
Comment 44•9 years ago
|
||
Thanks for your review and suggestions!
I'll fix them in the next patch.

(In reply to François Marier [:francois] from comment #42)
> If I understand correctly, you mean that if many prefixes match to different
> full hashes, we don't guarantee which full hash we return.
>
> I would expand the example to make it clearer:
>
> ```
> If there are multiple prefixes mapping to the same full hash which start
> with the same characters but have different lengths, we do not guarantee
> which of the prefix length we will return.
>
> For example, if we have these prefixes:
>
> 12345AA -> 12345AABBBBBBBBB
> 12345A -> 12345AABBBBBBBBB
> 12345 -> 12345AABBBBBBBBB
>
> then calling Matches("12345AABBBBBBBBB", length) will return a length of
> either 6, 7 or 8.
> ```
Yes, that's exactly what I mean.
This is also addressed in chromium code:
https://chromium.googlesource.com/chromium/src.git/+/master/components/safe_browsing_db/v4_store.cc#574
| Comment hidden (mozreview-request) |
| Comment hidden (mozreview-request) |
| Comment hidden (mozreview-request) |
| Assignee | ||
Updated•9 years ago
|
Attachment #8789178 -
Flags: review?(francois) → review+
| Assignee | ||
Updated•9 years ago
|
Attachment #8781440 -
Flags: review+ → review?(francois)
| Comment hidden (mozreview-request) |
| Comment hidden (mozreview-request) |
| Assignee | ||
Updated•9 years ago
|
Attachment #8789178 -
Flags: review?(francois) → review+
| Assignee | ||
Comment 50•9 years ago
|
||
Hi Francois,
I just added another patch for telemetry because it will require another reviewer.
But somehow it seems to make MozReview a little bit strange...
The testcase has already been reviewed, so you can just review P1 and P3.
Sorry about making this confusing.
| Comment hidden (mozreview-request) |
| Comment hidden (mozreview-request) |
| Comment hidden (mozreview-request) |
| Assignee | ||
Updated•9 years ago
|
Attachment #8781440 -
Flags: review?(francois) → review+
Comment 54•9 years ago
|
||
| mozreview-review | ||
Comment on attachment 8789178 [details]
Bug 1283007 - P2. Add telemetry for variable-length prefix set.
https://reviewboard.mozilla.org/r/77482/#review76058
You can add `alert_emails` just like in https://reviewboard.mozilla.org/r/74412/diff/1#index_header and then there's no need to add anything to the whitelist.
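For reference, a Histograms.json probe entry carrying `alert_emails` has roughly this shape. Every value below is a placeholder for illustration, not the actual probe definition that landed:

```json
"URLCLASSIFIER_VLPS_FILELOAD_TIME": {
  "alert_emails": ["probe-owner@example.com"],
  "expires_in_version": "never",
  "kind": "exponential",
  "high": 1000,
  "n_buckets": 10,
  "description": "Time spent loading the variable-length prefix set from the store file (ms)"
}
```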
Attachment #8789178 -
Flags: review?(francois) → review-
Comment 55•9 years ago
|
||
| mozreview-review | ||
Comment on attachment 8781439 [details]
Bug 1283007 - P1. Implement variable length PrefixSet class for Safe Browsing v4.
https://reviewboard.mozilla.org/r/71876/#review76062
::: toolkit/components/url-classifier/VariableLengthPrefixSet.cpp:78
(Diff revision 5)
> + mVLPrefixSet.Clear();
> +
> + // 4-bytes prefixes are handled by nsUrlClassifierPrefixSet.
> + nsCString* prefixes = aPrefixMap.Get(PREFIX_SIZE_FIXED);
> + if (prefixes) {
> + MOZ_ASSERT(prefixes->Length() % PREFIX_SIZE_FIXED == 0);
Should we use a run-time check instead of an assertion here or do we check this elsewhere already?
::: toolkit/components/url-classifier/VariableLengthPrefixSet.cpp:92
(Diff revision 5)
> + }
> +
> + const char* begin = prefixes->BeginReading();
> + const char* end = prefixes->EndReading();
> +
> + while(begin != end) {
nit: missing space between `while` and `(`
::: toolkit/components/url-classifier/VariableLengthPrefixSet.cpp:146
(Diff revision 5)
> + }
> +
> + return NS_OK;
> +}
> +
> +// If there are multiple different length prefixes with common prefixes, for example,
Let's replace this comment with the one that Chrome uses since it's clearer:
// It should never be the case that more than one hash prefixes match a given
// full hash. However, if that happens, this method returns any one of them.
// It does not guarantee which one of those will be returned.
Attachment #8781439 -
Flags: review?(francois) → review-
Comment 56•9 years ago
|
||
| mozreview-review | ||
Comment on attachment 8781439 [details]
Bug 1283007 - P1. Implement variable length PrefixSet class for Safe Browsing v4.
https://reviewboard.mozilla.org/r/71876/#review76064
This looks good to me. I'll let you update the comment and then decide what to do with the assertion v. run-time check. r+
Attachment #8781439 -
Flags: review- → review+
| Assignee | ||
Comment 57•9 years ago
|
||
(In reply to François Marier [:francois] from comment #56)
>
> This looks good to me. I'll let you update the comment and then decide what
> to do with the assertion v. run-time check. r+
Thanks for review!
We already check that when we create table updates, but the data may still get corrupted during a partial update, so I will change it to a run-time check.
| Comment hidden (mozreview-request) |
| Comment hidden (mozreview-request) |
| Comment hidden (mozreview-request) |
Comment 61•9 years ago
|
||
| mozreview-review | ||
Comment on attachment 8789178 [details]
Bug 1283007 - P2. Add telemetry for variable-length prefix set.
https://reviewboard.mozilla.org/r/77482/#review76106
r+ and datareview+
Attachment #8789178 -
Flags: review?(francois) → review+
Updated•9 years ago
|
Attachment #8789178 -
Flags: review?(benjamin)
Comment 62•9 years ago
|
||
In this case, you don't need a data review from Benjamin since I'm one of the Firefox Data Collection peers: https://wiki.mozilla.org/Firefox/Data_Collection
Comment 63•9 years ago
|
||
| mozreview-review | ||
Comment on attachment 8781439 [details]
Bug 1283007 - P1. Implement variable length PrefixSet class for Safe Browsing v4.
https://reviewboard.mozilla.org/r/71876/#review76356
::: toolkit/components/url-classifier/VariableLengthPrefixSet.cpp:97
(Diff revision 6)
> + while (begin != end) {
> + array.AppendElement(BigEndian::readUint32(begin), fallible);
> + begin += sizeof(uint32_t);
> + }
> +
> + nsresult rv = mFixedPrefixSet->SetPrefixes(array.Elements(), array.Length());
Having to allocate the extra buffer here is sad. Maybe it's worthwhile to consider adding a flag to SetPrefixes that says whether there needs to be a BigEndian conversion, and seeing if the data can be passed directly then.
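One way to picture that suggestion is to convert the buffer in place instead of copying into a second array. This is a hypothetical standalone sketch of the conversion step, not the actual nsUrlClassifierPrefixSet API:

```cpp
#include <cstdint>
#include <cstring>

// Reinterpret the raw big-endian prefix bytes as uint32_t storage and
// byte-swap each element into host order in place, so the same buffer
// could then be handed to a SetPrefixes-style call without a copy.
void BigEndianToHostInPlace(uint32_t* data, size_t count) {
  for (size_t i = 0; i < count; ++i) {
    uint8_t b[4];
    std::memcpy(b, &data[i], sizeof(b));
    data[i] = (uint32_t(b[0]) << 24) | (uint32_t(b[1]) << 16) |
              (uint32_t(b[2]) << 8) | uint32_t(b[3]);
  }
}
```

The loop reads bytes in memory order, so it yields the big-endian interpretation regardless of the host's endianness.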
::: toolkit/components/url-classifier/VariableLengthPrefixSet.cpp:326
(Diff revision 6)
> +
> + // Store how many prefix string.
> + fileSize += sizeof(uint32_t);
> +
> + for (auto iter = mVLPrefixSet.ConstIter(); !iter.Done(); iter.Next()) {
> + // Store prefix size, prefix sting length, and prefix string.
typo
Comment 64•9 years ago
|
||
| mozreview-review | ||
Comment on attachment 8781439 [details]
Bug 1283007 - P1. Implement variable length PrefixSet class for Safe Browsing v4.
https://reviewboard.mozilla.org/r/71876/#review76360
Attachment #8781439 -
Flags: review?(gpascutto) → review+
Comment 65•9 years ago
|
||
| mozreview-review | ||
Comment on attachment 8781440 [details]
Bug 1283007 - P3. Testcase for variable length prefixSet.
https://reviewboard.mozilla.org/r/71878/#review76364
::: toolkit/components/url-classifier/tests/gtest/TestVariableLengthPrefixSet.cpp:67
(Diff revision 7)
> + PrefixStringMap out;
> + pset->GetPrefixes(out);
> +
> + for (auto iter = out.Iter(); !iter.Done(); iter.Next()) {
> + nsCString* set = map.Get(iter.Key());
> + nsCString* get = iter.Data();
This is really confusing to read. Can you expand what "set" and "get" are?
::: toolkit/components/url-classifier/tests/gtest/TestVariableLengthPrefixSet.cpp:73
(Diff revision 7)
> +
> + ASSERT_TRUE(set->Equals(*get));
> + }
> +}
> +
> +// This test loop through all the prefixes and convert each prefix to
...test loops...
::: toolkit/components/url-classifier/tests/gtest/TestVariableLengthPrefixSet.cpp:73
(Diff revision 7)
> +
> + ASSERT_TRUE(set->Equals(*get));
> + }
> +}
> +
> +// This test loop through all the prefixes and convert each prefix to
...test loops...and converts...
::: toolkit/components/url-classifier/tests/gtest/TestVariableLengthPrefixSet.cpp:73
(Diff revision 7)
> +
> + ASSERT_TRUE(set->Equals(*get));
> + }
> +}
> +
> +// This test loop through all the prefixes and convert each prefix to
...test loops...and converts...
::: toolkit/components/url-classifier/tests/gtest/TestVariableLengthPrefixSet.cpp:73
(Diff revision 7)
> +
> + ASSERT_TRUE(set->Equals(*get));
> + }
> +}
> +
> +// This test loop through all the prefixes and convert each prefix to
...test loops...and converts...characters...match...original (origin is something else in the context of a browser!)
::: toolkit/components/url-classifier/tests/gtest/TestVariableLengthPrefixSet.cpp:82
(Diff revision 7)
> + _PrefixArray& array)
> +{
> + uint32_t matchLength = 0;
> + for (uint32_t i = 0; i < array.Length(); i++) {
> + const nsCString& prefix = array[i];
> + nsAutoPtr<nsCString> fullhash(CreateFullHash(prefix));
AutoPtr is deprecated. Use UniquePtr.
::: toolkit/components/url-classifier/tests/gtest/TestVariableLengthPrefixSet.cpp:87
(Diff revision 7)
> + nsAutoPtr<nsCString> fullhash(CreateFullHash(prefix));
> +
> + // Find match for prefix-generated full hash
> + pset->Matches(*fullhash, &matchLength);
> + if (matchLength == 0) {
> + ASSERT_TRUE(false);
Yeah uh, NOPE.
Just put MOZ_ASSERT(matchLength != 0) (or some variation thereof) in front of the if.
::: toolkit/components/url-classifier/tests/gtest/TestVariableLengthPrefixSet.cpp:341
(Diff revision 7)
> + }
> +
> + // Should not match any of the first prefix set
> + uint32_t matchLength = 0;
> + for (uint32_t i = 0; i < array1.Length(); i++) {
> + nsAutoPtr<nsACString> fullhash(CreateFullHash(array1[i]));
UniquePtr
::: toolkit/components/url-classifier/tests/gtest/TestVariableLengthPrefixSet.cpp:426
(Diff revision 7)
> + {
> + _PrefixArray array = { _Prefix("123") };
> +
> + SetupPrefixMap(array, map);
> + nsresult rv = pset->SetPrefixes(map);
> + ASSERT_TRUE(rv != NS_OK);
NS_FAILED(rv)
Attachment #8781440 -
Flags: review?(gpascutto) → review+
| Comment hidden (mozreview-request) |
| Comment hidden (mozreview-request) |
| Comment hidden (mozreview-request) |
| Assignee | ||
Updated•9 years ago
|
Status: ASSIGNED → RESOLVED
Closed: 9 years ago
Resolution: --- → DUPLICATE