Closed Bug 1397394 Opened 7 years ago Closed 7 years ago

test_log_view_artifact_builder.py should test against db state rather than golden files

Categories

(Tree Management :: Treeherder, enhancement, P5)

Tracking

(Not tracked)

RESOLVED WONTFIX

People

(Reporter: camd, Unassigned, Mentored)

References

Details

(Keywords: good-first-bug, Whiteboard: [lang=py])

The current tests compare the output of the BuildbotLogViewArtifactBuilder with a pre-saved JSON file that used to be saved as an artifact. We now create records in TextLogStep and TextLogError that are used by the Log Viewer. We should change these tests to verify the resultant entries in those models rather than comparing with the "golden" files. This would make the tests less brittle.
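To sketch the rough direction (purely illustrative; the do_test helper, the test_job fixture, and the exact model field names below are assumptions rather than verified code):

    # A minimal sketch of a DB-state assertion, assuming the parser has already
    # stored its results for the sample log via an existing do_test() helper.
    from treeherder.model.models import TextLogError, TextLogStep

    def test_sample_log_stored_in_db(test_job):          # "test_job" is a hypothetical fixture
        do_test("mozilla-central-sample-crashtest-1")     # hypothetical sample log name

        steps = TextLogStep.objects.filter(job=test_job)
        assert steps.count() == 1
        step = steps.first()
        assert step.name == "Run tests"                   # expected values would come from the old "golden" JSON

        errors = TextLogError.objects.filter(step=step)
        assert sorted(errors.values_list("line_number", flat=True)) == [217, 382]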
Priority: -- → P5
See Also: → 1063691
Mentor: cdawson
Keywords: good-first-bug
Whiteboard: [lang=py]
Hello, I'm a new contributor and would like to work on this bug as my first PR
Component: Treeherder → Treeherder: Log Parsing & Classification
Hi Sharno-- Sure, please feel free to take a crack at it. The main change will be to update each test to inspect the values created in the database, e.g. by calling TextLogStep.objects.all() and checking each result. I recommend comparing those values with the existing "golden" (expected result) file for each test and verifying that they match up as one would expect. Please need-info me if you have any more questions. Thanks for taking this on! :)
A chat on IRC that might be good to remember:

17:07 sharno  camd: Should I compare the JSON example files to the DB, or use the DB as the example to compare against what I get from the artifacts?
17:10 camd    sharno: The tests in this change will get more complicated than they look now. Instead of comparing one result to a saved file, you'll need to check the DB records for each model. So...
17:11 camd    query TextLogStep and TextLogError and ensure they have the right entries.
17:11 camd    to know what the right entries are, you can look at the "expected result" file for each of the source test files
17:12 camd    After you've written your tests, we will be able to discard those expected result files.
17:12 camd    but you can use them for now to see what records we SHOULD be expecting.
17:12 camd    does that make sense?
17:12 sharno  I got it
17:14 sharno  So after the removal of these JSON files, the actual test would be comparing the DB entries to the actual artifacts. Did I get that right?
17:17 camd    sharno: not exactly comparing to the generated artifacts. The artifacts are used by the system to create the DB entries.
17:17 camd    So the DB entries are what you're checking, and you'll need to write the asserts in the code for each test to verify that it has what we want.
17:18 camd    so, for example, you'd fetch all the TextLogError entries
17:18 camd    and then check that there is, for instance, one for line 217 that says "X" and one for line 382 that says "Y"
17:18 camd    like that
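A transitional sketch of that idea: keep the old expected-result JSON around just long enough to know what the DB rows should contain, then drop it once the asserts are written. The JSON layout ("step_data" -> "steps" -> "errors") and the step.errors reverse relation used below are assumptions, not verified against the code base:

    # Transitional sketch: read the old "golden" file to know what SHOULD now be in the DB.
    import json

    from treeherder.model.models import TextLogStep

    def assert_db_matches_golden(job, golden_path):
        with open(golden_path) as f:
            expected_steps = json.load(f)["step_data"]["steps"]

        steps = list(TextLogStep.objects.filter(job=job).order_by("started_line_number"))
        assert len(steps) == len(expected_steps)
        for step, expected in zip(steps, expected_steps):
            assert step.name == expected["name"]
            # step.errors assumes a related_name on TextLogError's FK; adjust as needed.
            stored_lines = set(step.errors.values_list("line_number", flat=True))
            assert stored_lines == {e["linenumber"] for e in expected["errors"]}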
Can you help me understand this bug more clearly? This is my good-first-bug. Please help; I want to contribute.
(In reply to adityabitw108 from comment #5)
> Can you help me understand this bug more clearly? This is my good-first-bug.
> Please help; I want to contribute.
Flags: needinfo?(cdawson)
(In reply to adityabitw108 from comment #5)
> Can you help me understand this bug more clearly? This is my good-first-bug.
> Please help; I want to contribute.

Hi there. I think that Sharno in the comments above is going to tackle this bug, but I've added some comments with more information which may help you understand it more clearly.
Flags: needinfo?(cdawson)
Is sharno still working on it?
Flags: needinfo?(sharnoby3)
(In reply to David Teller [:Yoric] (please use "needinfo") from comment #8)
> Is sharno still working on it?

Sorry, I forgot to put an update on this. I couldn't figure it out and then I got busy, so I stopped working on it.
Flags: needinfo?(sharnoby3)
Can I jump in on this? What is the current state? This will be my first contribution to open source.
Where is the link to the repo and files concerned?
Got it I will take a crack at it
What is the database to connect to?
Got it!
TextLogStep and TextLogError have no values stored though
(<QuerySet []>, 0) .(<QuerySet []>, 0) .s(<QuerySet []>, 0) .ssssssssssssssss(<QuerySet []>, 0) .(<QuerySet []>, 0) .(<QuerySet []>, 0) .(<QuerySet []>, 0) .(<QuerySet []>, 0) .(<QuerySet []>, 0)
To get values into those tables, you will need to run the ``celery`` tasks for data import. You can find instructions for doing that here: https://treeherder.readthedocs.io/installation.html#running-the-ingestion-tasks There are several tasks in there, but the log parser and its associated tasks are the ones that will populate those tables. Thanks for looking into this bug! :)
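As a quick sanity check that those tasks populated the models, something like the following could be run in the Django shell (e.g. ./manage.py shell inside the Vagrant box). This is only a sketch; the line and line_number field names are assumptions:

    # Rough check after running the ingestion/log-parsing tasks.
    from treeherder.model.models import TextLogError, TextLogStep

    print("steps stored:", TextLogStep.objects.count())
    print("errors stored:", TextLogError.objects.count())

    # Eyeball a few of the most recently stored error lines:
    for err in TextLogError.objects.order_by("-id")[:5]:
        print(err.line_number, err.line)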
Is this bug unassigned? I would like to take it and will need someone to guide me through (new to open source). I know Python and have worked on the Flask framework.
(In reply to us241077 from comment #18)
> Is this bug unassigned? I would like to take it and will need someone to
> guide me through (new to open source). I know Python and have worked on
> the Flask framework.

Thanks for your interest. Your first step is to get the development environment set up. Please follow the instructions here: https://treeherder.readthedocs.io/installation.html
Hi. I am new to open source and would like to work on this bug. I am reading the Treeherder docs mentioned in comment #19.
Hi, I would like to work on this issue.

(In reply to Cameron Dawson [:camd] from comment #0)
> The current tests compare the output of the BuildbotLogViewArtifactBuilder
> with a pre-saved JSON file that used to be saved as an artifact. We now
> create records in TextLogStep and TextLogError that are used by the Log
> Viewer.
>
> We should change these tests to verify the resultant entries in those models
> rather than comparing with the "golden" files. This would make the tests
> less brittle.
I am not able to locate test_log_view_artifact_builder.py in mozilla-unified/. Where else can I look?
I would like to take this bug if its unassigned
Go ahead! :-)
This is my first bug. While setting up the Treeherder instance, when I ran "vagrant up --provision" this happened:

Bringing machine 'default' up with 'virtualbox' provider...
==> default: Box 'bento/ubuntu-16.04' could not be found. Attempting to find and install...
    default: Box Provider: virtualbox
    default: Box Version: >= 201802.02.0

Am I doing something wrong, or was this supposed to happen?
That's expected. Vagrant is saying there isn't a cached version of that image on your local machine (which makes sense, given it's a first install), and so is downloading it for you. The lines immediately after the part quoted will say "Downloading..." or similar.
The box 'bento/ubuntu-16.04' could not be found or could not be accessed in the remote catalog. If this is a private box on HashiCorp's Vagrant Cloud, please verify you're logged in via `vagrant login`. Also, please double-check the name. The expanded URL and error message are shown below:

URL: ["https://vagrantcloud.com/bento/ubuntu-16.04"]
Error:
Ah yes, that's the actual error message. Googling it found: https://github.com/hashicorp/vagrant/issues/7969 From reading that, it looks like this is caused by using a very old Vagrant version (1.8.7, which was released in November 2016). I would update to a newer version of Vagrant (the latest is 2.1.1). If that doesn't work, file a new bug and we can debug further over there :-)
I was on 2.1.1. Now I am trying older versions. I first tried 1.8.5, but it's not supported; Vagrant needs to be >= 1.9.0. Now I am trying 1.9.0.
I downgraded my Vagrant to 2.0.0 and my VirtualBox to 5.1, and now it's downloading:

Bringing machine 'default' up with 'virtualbox' provider...
==> default: Box 'bento/ubuntu-16.04' could not be found. Attempting to find and install...
    default: Box Provider: virtualbox
    default: Box Version: >= 201802.02.0
==> default: Loading metadata for box 'bento/ubuntu-16.04'
    default: URL: https://vagrantcloud.com/bento/ubuntu-16.04
==> default: Adding box 'bento/ubuntu-16.04' (v201803.24.0) for provider: virtualbox
    default: Downloading: https://vagrantcloud.com/bento/boxes/ubuntu-16.04/versions/201803.24.0/providers/virtualbox.box
I have started the local Treeherder instance and the ingestion task, but I don't know what to do after that. From what I understood, I should write assert statements in the do_test function in test_log_artifact_builder.py that compare the stored TextLogStep values (from models.py) with the expected results. Is that right?
Hi! I think this bug might not be a great first bug, since multiple attempts over the last 8 months haven't worked out. I'd take the good-first-bug annotation off, but it's a P5, so it's not something we would be working on ourselves anyway. As such, I think it's best to just close this out. If there are any other marked good first bugs you'd like to work on, let us know :-)
Status: NEW → RESOLVED
Closed: 7 years ago
Resolution: --- → WONTFIX
Component: Treeherder: Log Parsing & Classification → TreeHerder