Closed Bug 410730 Opened 18 years ago Closed 8 years ago

update verification should test current build as well as previous

Categories

(Testing :: General, defect)

Type: defect
Priority: Not set
Severity: normal

Tracking

(Not tracked)

RESOLVED WONTFIX

People

(Reporter: rhelmer, Unassigned)

Details

The update verification scripts only test that the previous release can be updated to the current release. We should test that the current release is able to apply an update without error. One simple way to do this would be to apply the current complete MAR over the current build.
Myk points out that there's no way to prove that the update was actually applied, if there is no difference between the unpacked build and the complete MAR. A way to test this scenario:
1) unpack the current release to current/ and current-modified/
2) modify every file in current-modified/ (e.g. append a byte)
3) apply the complete update to current-modified/
4) diff -r current/ current-modified/
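The steps above can be sketched as a small script. This is a minimal illustration only: the `apply_update` function is a stand-in (a real run would invoke the updater binary with the release's complete MAR), and all paths and file contents are assumptions for the sketch.

```shell
#!/bin/sh
# Sketch of the diff-based verification procedure from comment #1.
# NOTE: apply_update is a stand-in; a real test would run the updater
# binary here with the current release's complete MAR.
set -e

WORK=/tmp/update-verify-sketch
rm -rf "$WORK"
mkdir -p "$WORK/current"

# 1) Stand-in for "unpack current release": create a couple of files.
echo "binary-ish contents" > "$WORK/current/firefox-bin"
echo "pref data"           > "$WORK/current/defaults.js"
cp -r "$WORK/current" "$WORK/current-modified"

# 2) Modify every file in current-modified/ (append a byte), so that a
#    no-op "update" would be caught by the final diff.
for f in "$WORK/current-modified"/*; do
    printf 'X' >> "$f"
done

# 3) Apply the complete update. Stand-in: overwrite every file with the
#    pristine copy, which is the effect a correct complete MAR has.
apply_update() {
    cp -f "$WORK/current"/* "$WORK/current-modified"/
}
apply_update

# 4) If the update really replaced every file, the trees are identical.
if diff -r "$WORK/current" "$WORK/current-modified" >/dev/null; then
    echo "update verified: trees identical"
else
    echo "update FAILED: trees differ"
    exit 1
fi
```

Because step 2 touches every file first, an updater that silently skipped a file (or did nothing at all) would leave the appended byte behind and fail the diff, which addresses the "how do you know it worked?" concern.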
A while ago, dolske stumbled upon a "mac updater goes into an infinite loop" bug (see bug #373908). Back then he pointed out: "if we break updater / software update, we won't know until it's too late (after we ship)!" I thought we logged a bug on this, but I can't find it, so I'm glad it is finally logged (thanks rhelmer!).

While we can automate the updater side, we still might need to test the software update UI side by hand, so I'm adding the in-litmus flag to track that.

To allow QA to manually test this sort of thing, is there any way to set up an AUS channel that, instead of serving an empty snippet (if you are running the latest build), serves a snippet with the latest complete MAR (and no partial)?

We still have the problem noted by myk in comment #1 (how do you know it worked?).
Flags: in-litmus?
(In reply to comment #2)
> in order to allow QA to manually test this sort of thing, is there any way to
> have an aus channel set up instead of serving an empty snippet (if you are
> running the latest build), it serves up a snipped with the latest complete mar
> (and no partial)?
>
> We still have the problem noted by myk in comment #1 (how do you know it
> worked?)

I think we should have a small in-tree server to do this kind of testing. I'm working on a front-end for configuring AUS, and I could make it do this kind of thing: basically it will be able to output AUS2 snippets and static update.xml files, but making it a dynamic webserver would be good too (no bug filed yet).

We could write a functional test that follows the procedure in comment #1, and have it run straight from the tree without hitting AUS (just make the server URL customizable, so we can still use it for production functional testing).

I believe the above should be separate from unit tests, although perhaps unit tests will want to use an in-tree web server (though they should be testing things at a much lower level, exercising each function/method directly with mock objects rather than real data).

I think the procedure in comment #1 would prove the "did it work" case; any thoughts on that?
Rob, what do you mean by "in-tree server"? One reason I suggested a separate channel that served a real MAR is that it exercises (as much as possible) the whole production system end-to-end.
(In reply to comment #4)
> rob, what do you mean by "in tree server"?

Sorry, just a fancy term for "a little AUS server in my default update-packaging checkout", like httpd.js is for other tests.

> One reason I suggested a separate channel that served a real mar is that it
> exercises (as much as possible) the whole production system end-to-end.

For production functional testing, sure. I think we can do it all locally much more quickly, though, and without the person running the test needing access to (and/or having to maintain) their own real AUS server.
(In reply to comment #4)
> rob, what do you mean by "in tree server"?
>
> One reason I suggested a separate channel that served a real mar is that it
> exercises (as much as possible) the whole production system end-to-end.

For QA manual testing purposes? Hmm, the problem is that (for releases, anyway) I don't think the AUS server really knows which release is the latest, so we'd need to preconfigure this case in patcher and basically inject dummy data into AUS. We could probably set it up to do this automatically, though, and it would be really easy for QA: just flip a channel name.

This is a bit different from the kind of testing I was worried about in comment #0 (automated functional testing), but it's a good point; I missed this before.
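For the "just flip a channel name" step, the update channel a build queries is read from `defaults/pref/channel-prefs.js` inside the application directory. A minimal sketch of pointing a build at a test channel; the `releasetest` channel name and the application path here are illustrative assumptions, not a documented procedure:

```shell
# Sketch: switch an installed build onto a QA test update channel by
# rewriting its channel pref. APPDIR and the channel name are
# illustrative assumptions for this example.
APPDIR=/tmp/appdir-sketch
mkdir -p "$APPDIR/defaults/pref"

cat > "$APPDIR/defaults/pref/channel-prefs.js" <<'EOF'
// Query a test channel that always serves a complete MAR
// (see the channel idea discussed above), instead of the
// regular release channel.
pref("app.update.channel", "releasetest");
EOF

grep app.update.channel "$APPDIR/defaults/pref/channel-prefs.js"
```

On the next update check, the build would then ask AUS for updates on that channel, so QA could exercise the real server path without touching the production release channel.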
Everything left in Core:Testing is going to Testing:General. Filter on CleanOutCoreTesting to ignore.
Component: Testing → General
Flags: in-litmus?
Product: Core → Testing
QA Contact: testing → general
Version: unspecified → Trunk
Mass closing bugs with no activity in 2+ years. If this bug is important to you, please re-open.
Status: NEW → RESOLVED
Closed: 8 years ago
Resolution: --- → WONTFIX