Closed Bug 1083731 Opened 10 years ago Closed 8 years ago

Add UI test for bug 1074379


(Firefox OS Graveyard :: Gaia::UI Tests, defect)

Gonk (Firefox OS)





(Reporter: gerard-majax, Assigned: jlorenzo)




(1 file)

46 bytes, text/x-github-pull-request
Flags: review+, review-, review+
In bug 1074379 attachment 8504694 [details], jlorenzo provided a UI test that helps reproduce the issue. We should land it to make sure this does not regress.
Blocks: 1083729
This UI test is not really clean. I removed a wait to make the error pop up more quickly, and I put a sleep in redial() that we should replace with a wait.

I'll create a PR with the thing cleaned up.
Isn't there already test_dialer_receive_call_with_locked_screen? It seems very similar to that patch.
Actually, the only thing this test does is place 30 phone calls. In bug 1074379, with an Orange SIM card, the 24th call always fails because of a bfcache issue. We need to check whether it also fails on an AT&T SIM card before including this test.
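For context, the shape of such a looped test is roughly the following. This is a self-contained sketch, not the actual gaiatest code; `place_call` is a hypothetical stand-in for driving the Dialer app:

```python
NUM_CALLS = 30  # bug 1074379 reproduced around the 24th call on an Orange SIM

def place_call(n):
    """Hypothetical stand-in for the real Dialer page-object interaction.

    The real test would dial, wait for the call to connect, then hang up.
    Here it always succeeds so the sketch is runnable on its own.
    """
    return True

def test_repeated_calls():
    for n in range(1, NUM_CALLS + 1):
        # Asserting inside the loop reports exactly which call failed,
        # rather than producing an intermittent-looking single failure.
        assert place_call(n), 'call %d of %d failed' % (n, NUM_CALLS)
    return NUM_CALLS

calls_placed = test_repeated_calls()
```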
Zac, is there a way for Jenkins to execute this test without including it in master?
Flags: needinfo?(zcampbell)
We won't merge a test with a loop like that in it, and we also can't determine the type of SIM card.

Does this really need to be in CI? It sounds like an edge case. 

Why don't you just use the --repeat option without --restart in an ad-hoc job?
Flags: needinfo?(zcampbell)
Is it possible to use self.testvars['carrier']['network'] once we know the issue reproduces on AT&T's network?

Yeah, I agree, this does sound like an edge case. Everybody on the comms team saw that bug before we were able to get the STR, though. This was also a bad regression that took a lot of effort to determine the actual STR (see bug 1074379 comment 35 and bug 1081714).

I am not sure that using the --repeat option is semantically correct. We want to check that we're able to place at least 30 phone calls. With the repeat option, if we saw a failure, we might treat it as an intermittent problem. By putting everything in one test, we would know that we're not able to handle more than 30 calls. (By the way, I need to change the test's function name to reflect that.)

What do you think Zac?
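If we go the testvars route mentioned above, a guard like the following could skip the test on carriers where the bug is not known to reproduce. This is a hedged sketch: the testvars key comes from the earlier comment, but the affected-network list is an assumption (only Orange is confirmed so far):

```python
# Networks where bug 1074379 is known (Orange) or suspected (AT&T) to occur.
AFFECTED_NETWORKS = ('Orange', 'AT&T')

def should_run_on(testvars, affected=AFFECTED_NETWORKS):
    """Return True when the SIM's network is one where the bug reproduces.

    `testvars` mirrors gaiatest's self.testvars structure, i.e. a dict
    loaded from the --testvars JSON file.
    """
    network = testvars.get('carrier', {}).get('network')
    return network in affected

# In the real test, a non-matching network would trigger a skip:
#     if not should_run_on(self.testvars):
#         raise SkipTest('bug 1074379 does not reproduce on this carrier')
```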
OK, you can put it in the repo but don't add it into the manifest file as we don't want it picked up in a regular functional test run.

I think this will be a good candidate for being run on the external device array where we will hopefully have much finer control over the device and its capabilities.
Attached file Github PR
Geo, can you check that this test catches bug 1074379 on an AT&T and a T-Mobile SIM card? It should fail around the 25th phone call.

You can use this build to check:

Attachment #8507871 - Flags: feedback?(gmealer)
This is still on my radar. I need to get the SIMs together to test it.
QA Whiteboard: [fxosqa-auto-backlog+]
Attachment #8507871 - Flags: feedback?(gmealer)
Comment on attachment 8507871 [details] [review]
Github PR

Do you have any news about the American SIM cards?
Attachment #8507871 - Flags: feedback?(gmealer)
(In reply to Johan Lorenzo [:jlorenzo] (QA) from comment #9)
> Comment on attachment 8507871 [details] [review]
> Github PR
> Do you have any news about the American SIM cards?

Yeah, I have materials in place, but the other SIM has been in use on performance runs. It'll be free for the first part of this week, and I'll stage it then.
I tested with an AT&T SIM card from the lab. The issue is present there.

It doesn't have a voicemail number set on it though. So I updated the test to make phone calls to the remote phone number instead.
Comment on attachment 8507871 [details] [review]
Github PR

I reproduced the issue after 24 calls with a T-Mobile SIM card on that build:

I also tested on yesterday's master; all 40 phone calls passed.

I think the test is ready to be reviewed then.
Attachment #8507871 - Flags: review?(gmealer)
Attachment #8507871 - Flags: review?(florin.strugariu)
Attachment #8507871 - Flags: review?(drs.bugzilla)
Attachment #8507871 - Flags: feedback?(gmealer)
Comment on attachment 8507871 [details] [review]
Github PR

Missing manifest entry. Please add the test to the manifest file.

Also, I don't think this test should be run in the daily builds.
Attachment #8507871 - Flags: review?(florin.strugariu) → review-
As discussed on IRC, the content of the manifest might change depending on how we want to execute this test.

The current consensus is to not put this test in the daily functional suite. I think we should run this test at least once a week. In fact, it might discover some memory leaks on the call screen.

One option is to not add anything to the manifest (so it would be excluded from the daily run). A second option is to add it to the manifest but disable it, with a comment explaining why. A third option could be to put it in the endurance suite. :rwood, do you think we can add it to that test suite? If so, I'll make the changes.
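The second option could look like the fragment below, using manifestparser's `disabled` key; the test file name here is illustrative, since the final name is still being decided:

```ini
; Kept out of the daily functional run on purpose; see bug 1083731.
; Re-enable manually (or in a weekly job) to check for bug 1074379 regressions.
[test_dialer_many_calls.py]
disabled = only run on demand, reproduces bug 1074379 after ~24 calls
```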
Flags: needinfo?(rwood)
Hi Johan, I would advise against adding new endurance tests at this point, as they aren't really being maintained at the moment. If it were decided to keep the endurance suite, the tests would need to be overhauled and updated to bring real value. They are not a priority right now (and might just be rolled into the MTBF test suite eventually). Thanks.
Flags: needinfo?(rwood)
Comment on attachment 8507871 [details] [review]
Github PR

Looks good. I'm assuming that comment 13 isn't relevant here, since we don't want this to run in the daily test suite, but rather to have it triggered manually.

(In reply to Johan Lorenzo [:jlorenzo] (QA) from comment #14)
> A second option is to add in the manifest but
> disable it and add a comment explaining the reason why.

This seems like the best option to me.
Attachment #8507871 - Flags: review?(drs.bugzilla) → review+
I would agree with comment 16 that this is the best way to go, since we want the test accounted for easily. I think all tests should be in a manifest somewhere.

My issue with this is that I don't think it's going to get run at all unless we create a formal test run around it. And tests that don't get run are essentially dead code. I don't like keeping them around, because they bitrot easily but still have to be maintained as dependencies.

I'm not against creating lower-frequency test runs, as long as they can be automatically scheduled, but that seems like something we haven't broached yet. 

I'm about to r=me the code pending a manifest entry, because I think it's fine in terms of implementation, but I really think run scheduling is something to work out before you merge.
Comment on attachment 8507871 [details] [review]
Github PR

r=me with a manifest entry somewhere so it's tracked, based strictly on the code. But please see comment 17 first, because I have concerns around execution.
Attachment #8507871 - Flags: review?(gmealer) → review+
(In reply to Geo Mealer [:geo] from comment #17)
> I'm not against creating lower-frequency test runs, as long as they can be
> automatically scheduled, but that seems like something we haven't broached
> yet. 
Let's discuss this creation in the next meeting.

Meanwhile, like Rob suggested, this might be a good candidate for MTBF. Do you want to take this test, Paul?
Flags: needinfo?(pyang)
I would say no, since it's not a user behaviour like our other test cases.
We'd expect a redial test case to look something like:

- launch phone app
- check if call log exists
- dial or redial
- don't kill active apps (to approximate real user behaviour)

With MTBF's random test-case picking, this case would be triggered about 6 times/hour.

If we need to increase the execution rate, 2 to 3 calls/redials per case would be appropriate, IMO.
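The user-like flow above, with the suggested 2-3 redials per case, could be sketched as follows (stubbed page objects, not the real Gaia ones; the phone number is made up):

```python
class PhoneApp(object):
    """Stub standing in for the real Gaia Phone app page object."""

    def __init__(self):
        self.call_log = ['555-0100']  # pretend a previous call exists
        self.running = False

    def launch(self):
        self.running = True
        return self

    def redial(self):
        # Redial the most recent log entry; the app stays running,
        # matching the user-like behaviour MTBF cases aim for.
        assert self.call_log, 'no call log entry to redial'
        number = self.call_log[-1]
        self.call_log.append(number)
        return number

def test_mtbf_redial(calls_per_case=3):
    phone = PhoneApp().launch()
    assert phone.call_log, 'call log should exist before redialling'
    for _ in range(calls_per_case):
        assert phone.redial() == '555-0100'
    assert phone.running  # deliberately not killed at the end
    return phone

phone = test_mtbf_redial()
```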
Flags: needinfo?(pyang)
Gaia UI tests are not run or maintained anymore.
Closed: 8 years ago
Resolution: --- → WONTFIX