Open Bug 1950427 Opened 9 months ago Updated 2 months ago

[Search Consolidation] Test to assert engine details are correct

Categories

(Firefox for Android :: Search, task)

All
Android
task


People

(Reporter: skhan, Unassigned, NeedInfo)

References

(Blocks 1 open bug)

Details

(Whiteboard: [fxdroid][group3][search-consolidation])

For a few engines, such as Baidu, Bing, DDG, Ebay, Ecosia, Google, Qwant, Rakuten, Wikipedia, and yahooJP, assert that the setup and run functionality returns success.
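A rough sketch of how such a check could be structured (hypothetical Kotlin/JUnit; SearchEngineHarness and its methods are invented for illustration, not real android-components APIs):

    import org.junit.Assert.assertTrue
    import org.junit.Test

    // Minimal stand-in for whatever the real harness ends up being; invented
    // purely so this sketch compiles on its own.
    class SearchEngineHarness(private val name: String) {
        fun setup(): Boolean = true // placeholder: load the engine config for `name`
        fun run(): Boolean = true   // placeholder: perform a test search with the engine
    }

    class EngineDetailsTest {
        private val engines = listOf(
            "Baidu", "Bing", "DDG", "Ebay", "Ecosia",
            "Google", "Qwant", "Rakuten", "Wikipedia", "yahooJP",
        )

        @Test
        fun `setup and run succeed for each engine`() {
            for (engine in engines) {
                val harness = SearchEngineHarness(engine)
                assertTrue("setup failed for $engine", harness.setup())
                assertTrue("run failed for $engine", harness.run())
            }
        }
    }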

Assignee: nobody → azinovyev
Status: NEW → ASSIGNED

I've noticed that we have a skip-if = [ "os == 'android'", ] in the xpcshell.toml which skips all of the tests in the searchconfigs directory. I was wondering whether this is intentional, and if so, what the right steps would be to run these tests. Thank you!
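For reference, the shape of that entry (whether it sits under [DEFAULT] or on individual tests in the real searchconfigs/xpcshell.toml is an assumption on my part; the skip-if line itself is the one quoted above):

    # Assumed layout of the relevant part of searchconfigs/xpcshell.toml
    [DEFAULT]
    skip-if = [
      "os == 'android'",
    ]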

Flags: needinfo?(standard8)

(In reply to Andrey Zinovyev from comment #1)

I've noticed that we have a skip-if = [ "os == 'android'", ] in the xpcshell.toml which skips all of the tests in the searchconfigs directory. I was wondering whether this is intentional, and if so, what the right steps would be to run these tests. Thank you!

It is intentional as Android doesn't use the toolkit search service code. Android has its own code for generating search engine information and building URLs and that's in the Kotlin code in android-components.

It would be nice if we could re-use some of the test code directly, as that would make it easier to maintain; however, I think we don't have a suitable test harness that could run against both the Kotlin code and the Gecko code. So we might end up having to near-duplicate these tests.
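To illustrate what that Kotlin side does at a high level: take an engine's URL template plus its parameters and produce a concrete search URL. A loose sketch, with invented names (EngineSpec and buildSearchUrl are not the real android-components types):

    import java.net.URLEncoder

    // Invented types for illustration only.
    data class EngineSpec(
        val template: String,    // e.g. "https://example.com/search?q={searchTerms}&client={code}"
        val partnerCode: String, // hypothetical partner/attribution code
    )

    fun buildSearchUrl(spec: EngineSpec, searchTerms: String): String =
        spec.template
            .replace("{searchTerms}", URLEncoder.encode(searchTerms, "UTF-8"))
            .replace("{code}", spec.partnerCode)

    fun main() {
        val spec = EngineSpec(
            template = "https://example.com/search?q={searchTerms}&client={code}",
            partnerCode = "example-partner-code",
        )
        println(buildSearchUrl(spec, "kotlin search"))
        // prints: https://example.com/search?q=kotlin+search&client=example-partner-code
    }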

Flags: needinfo?(standard8)

Spoke with skhan offline on how we should approach writing these tests:

I was going through the desktop tests more deeply and found that they seem to be making real API calls, but unfortunately that's not what Android should be doing. In that case, maybe it's not worth working on these tests? I feel like mocking them defeats the whole purpose of verifying whether the results are correct. Since the tests are aimed at verifying that specific configurations return specific search engines, mocking won't really help IMO.

Since we don't have a good story on how to test this, I'd recommend closing the bug as invalid/wontfix and re-open it if something changes in the future.

We implemented the desktop tests to ensure that our main engines are set up correctly, especially with respect to the parameters they use. Before we had those tests, we'd had various cases where updates to the configurations shipped with issues that weren't caught, due to the complexity of testing (50+ locales and 50+ regions in various combinations).

The configuration dumps are shipped with the product and available when testing, hence we test using those directly. We load the dump and use almost the same APIs that the front-end will use, because that guarantees we're as correct as possible (the engines aren't initialised in exactly the same way, but it is close enough).
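As a sketch of that same pattern in Kotlin, purely for discussion (the file name and JSON keys here are assumptions, not the real dump format):

    import org.json.JSONObject
    import java.io.File

    // Illustrative only: load a bundled configuration dump, then run assertions
    // against it through (nearly) the same code path production uses.
    fun loadConfigDump(path: String): JSONObject =
        JSONObject(File(path).readText())

    fun main() {
        val config = loadConfigDump("search-config.json") // hypothetical bundled dump
        val engines = config.getJSONArray("data")         // assumed top-level key
        require(engines.length() > 0) { "configuration dump contained no engines" }
        println("Loaded ${engines.length()} engine records")
    }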

I would have expected that it would be possible for Android to do something similar. However, thinking about it more in terms of how we develop the search configuration and what the tests are for, I'm starting to think that having a near-duplicate set of tests on Android (and then possibly a third set on iOS in future) would make development even harder, as a single configuration change could require updating multiple sets of tests instead of one. Due to the way the selector works, it would potentially be possible to verify the mobile parts of the configuration via the desktop tests.

Therefore, I'm pondering whether it might be better to ensure that Android has a set of tests for verifying that what is returned from the search engine selector will be correctly handled at the appropriate level, e.g. checking that URLs are correctly formed with the expected parameters and codes from the data given. It is possible Android already has most or all of this in its existing tests, but we should verify that everything is covered.
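Concretely, a URL-formation check along those lines might look like this (sketch only; the helpers mirror the invented ones above and the expected values are made up):

    import org.junit.Assert.assertTrue
    import org.junit.Test
    import java.net.URLEncoder

    class UrlFormationTest {
        // Invented helpers, mirroring the earlier sketch; not real APIs.
        data class EngineSpec(val template: String, val partnerCode: String)

        private fun buildSearchUrl(spec: EngineSpec, terms: String): String =
            spec.template
                .replace("{searchTerms}", URLEncoder.encode(terms, "UTF-8"))
                .replace("{code}", spec.partnerCode)

        @Test
        fun `search URL carries the expected parameters`() {
            val spec = EngineSpec(
                template = "https://example.com/search?q={searchTerms}&client={code}",
                partnerCode = "example-partner-code",
            )
            val url = buildSearchUrl(spec, "firefox android")
            // The space is form-encoded as '+', and the partner code must survive intact.
            assertTrue(url.contains("q=firefox+android"))
            assertTrue(url.contains("client=example-partner-code"))
        }
    }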

I think this would be a viable approach, but I need to think a little more about whether there would be any subtleties, and also chat to the desktop team about it. Hopefully I'll be able to confirm this by the end of next week; in the meantime, any thoughts are welcome here as well.

Assignee: azinovyev → nobody
Status: ASSIGNED → NEW

Hey Mark, I agree with your approach; we have most of the tests covered for the specific search engines. Is there anything else needed on our part?

Flags: needinfo?(standard8)
See Also: → 1990781

Sorry, I forgot to follow up here. I had a discussion a couple of weeks ago with the search team, and we agreed that, given the structure, we could extend the desktop tests to cover what we need for mobile. This makes development easier, since we'll only have one platform to handle when the tests need updating. I've just filed bug 1990781 for that.

Please could you link me to the file(s) where the tests are? I'd like to have a quick look over to make sure we're covering what we need to and haven't missed anything. If there are gaps, we can either cover those here, or file follow-up bugs.

Flags: needinfo?(standard8) → needinfo?(skhan)

UI tests:

Other related tests:

I'll update if I find other related files.

Flags: needinfo?(skhan)
Flags: needinfo?(standard8)