Open Bug 1428705 Opened 2 years ago Updated 12 days ago

[manifestparser] Add support for Python 3

Categories

(Testing :: Mozbase, enhancement, P3)

Version 3
enhancement

Tracking

(Not tracked)

ASSIGNED

People

(Reporter: whimboo, Assigned: sandeepb, Mentored)

References

(Blocks 3 open bugs)

Details

(Keywords: good-first-bug)

Attachments

(3 files)

Without dropping support for legacy Python, we need to add support for Python 3 to manifestparser.
Priority: -- → P3
To work on this bug you will need to install and configure Mercurial, which will enable you to download the Firefox source code. It will also be used to commit your changes locally and prepare a patch for review. See the installation guide at https://mozilla-version-control-tools.readthedocs.io/en/latest/hgmozilla/installing.html for help getting Mercurial on your system. Once Mercurial is installed, there are some extensions we recommend installing; details of these can be found here: https://mozilla-version-control-tools.readthedocs.io/en/latest/hgmozilla/extensions.html

To clone the Firefox source code we recommend using the unified repository. Details of this and how to create a bookmark for your work can be found here: https://mozilla-version-control-tools.readthedocs.io/en/latest/hgmozilla/unifiedrepo.html

As this bug relates to Python 3, you will need to have this installed on your system. Our continuous integration is currently using Python 3.5, so for best results we recommend using the same version locally. There are a number of ways to install Python, and they vary depending on your environment. We suggest reading over the guides at http://docs.python-guide.org/en/latest/starting/installation/#python-3-installation-guides to find the best method for you. Note that we also need to maintain support for Python 2, so you'll also need to have Python 2.7 installed.

Whilst we're moving towards adding support for Python 3, we've disabled any tests that fail against this version. This means that in order to work on this bug you will need to enable the tests. This can be done by removing "skip-if = python == 3" from the manifest file at `testing/mozbase/manifestparser/tests/manifest.ini`.
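For reference, the skipped entries in that manifest look something like this (the test file name below is a placeholder; check the actual manifest for the real entries). It's the `skip-if` line that needs to be deleted:

```
[test_expression_parser.py]
skip-if = python == 3
```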

To run the tests against Python 3, execute the following command:

```
mach python-test --python=3.5 testing/mozbase/manifestparser
```

Work through the test failures, and update the tests and source code for the package to simultaneously support Python 2.7 and Python 3.5. You can always check that your changes have not inadvertently broken support for Python 2 by using `--python=2.7` on the command line.
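For example, the equivalent Python 2 run is:

```
mach python-test --python=2.7 testing/mozbase/manifestparser
```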

If you're new to Python 3, the guide at https://docs.python.org/3/howto/pyporting.html may help you to get started with understanding the differences. To assist with supporting both Python 2 and 3, we have vendored the "six" package. You can read more about this and how to use it at https://pythonhosted.org/six/
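As a rough illustration of the kind of change six enables (this is a generic sketch, not code from manifestparser), the same loop can run unmodified on both Python versions:

```python
import six

options = {"os": "linux", "bits": 64}

# six.iteritems() maps to dict.iteritems() on Python 2 and dict.items() on
# Python 3, so the loop avoids building an intermediate list on Python 2.
for key, value in six.iteritems(options):
    # six.string_types is (str, unicode) on Python 2 and (str,) on Python 3.
    if isinstance(value, six.string_types):
        print("%s=%s" % (key, value))
```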

Once the package supports Python 3 and the tests pass, we will also need to prepare for a new release. Bump the version number in `testing/mozbase/manifestparser/setup.py` to "2.0.0", and update the classifiers so they reflect the versions of Python that we now support.
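The relevant part of setup.py might end up looking roughly like this (a sketch only; the surrounding metadata and exact variable names should be taken from the existing file):

```python
# Sketch of the version bump and classifiers; not the complete setup.py.
from setuptools import setup

PACKAGE_VERSION = '2.0.0'

setup(
    name='manifestparser',
    version=PACKAGE_VERSION,
    classifiers=[
        'Programming Language :: Python :: 2.7',
        'Programming Language :: Python :: 3.5',
    ],
)
```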

Finally, to prepare a universal distribution, create a `testing/mozbase/manifestparser/setup.cfg` file with the following contents:

```
[bdist_wheel]
universal=1
```

When your patch(es) are ready, you will need to push them for review. We use MozReview for this, and instructions for how to prepare your machine and how to structure your commit messages can be found in our guide: https://mozilla-version-control-tools.readthedocs.io/en/latest/mozreview-user.html

There may be some review issues to address, but once your contribution has been approved and subsequently landed, we will release the new version of the package!
Mentor: dave.hunt
Keywords: good-first-bug
I am new to Mozilla and I have recently started contributing to it. I have the entire Firefox environment set up. I would like to work on this bug with some mentoring. Please do let me know if I can start working on this!
Flags: needinfo?(dave.hunt)
Hi Sandeep. Thank you for your interest! 

Regarding this bug, no-one else has mentioned wanting to work on it yet, so feel free to get started. If you have problems, please let us know.

Also, please be aware that Dave is currently out. In the meantime I will try to get you the best possible answers if questions come up.
Flags: needinfo?(dave.hunt)
(In reply to Henrik Skupin (:whimboo) from comment #3)
> Hi Sandeep. Thank you for your interest! 
> 
> Regarding this bug, no-one else has mentioned wanting to work on it yet, so
> feel free to get started. If you have problems, please let us know.
> 
> Also, please be aware that Dave is currently out. In the meantime I will try
> to get you the best possible answers if questions come up.

Thank you Henrik. I will start working on this.
Hi Henrik,

I have made the changes for manifestparser to support both Python 2 & 3. All tests pass in Python 2.7, but with Python 3.5 I am getting 3 failed tests, which are attached as a file. I was able to understand the reasons for the errors but am unable to resolve them. Could you please help me resolve these errors?

As for the failed tests:

1. The "TypeError: 'NoneType' object is not callable" in filters.py is due to non-implementation of __hash__() in python 3 but i am unclear where and how to implement that in current code.

2. The "ValueError: min() arg is an empty sequence" in test_chunking.py might be due to `lengths` having no elements. But in that case, shouldn't max() also throw that error?

3. The " AssertionError: Lists differ: [] != ['test_3']" in test_manifestparser.py says that parser.get('name',x='level_3') is giving empty ([]) list. Does this have anything to do with my changes in manifestparser.py line 185 ? I changed that line to the following 
> data = dict(six.viewitems(self._ancestor_defaults) | six.viewitems(data))
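For context, the difference that six.viewitems() papers over looks roughly like this (a generic illustration, not the actual manifestparser code):

```python
import six

defaults = {'x': 'parent', 'y': 1}
data = {'x': 'child'}

# Python 2: dict.items() returns lists, so the old idiom
# dict(defaults.items() + data.items()) works and the right-hand values win.
# Python 3: dict.items() returns views and '+' raises TypeError, but a set
# union of views works; six.viewitems() gives a view on both versions.
merged = dict(six.viewitems(defaults) | six.viewitems(data))

# Caveat: the union is a set, so when a key appears in both dicts the value
# that survives depends on set iteration order, which is not guaranteed.
print(merged)
```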
Flags: needinfo?(hskupin)
I haven't done that much with Python 3 yet. Given that Dave is on PTO I will forward your questions to Andrew, who worked closely with Dave previously on getting the basics for Python 3 into the tree. I'm fairly sure that he can help you with those questions.
Flags: needinfo?(hskupin) → needinfo?(ahal)
Hi Sandeep, I don't know the answers to any of these questions off the top of my head. Could you upload what you have so far so I can try it out myself?

If the solution ends up being too hard we could skip these particular tests on python 3 and land what you have so far (then file a follow-up to fix up the rest).
Flags: needinfo?(ahal)
Flags: needinfo?(kumarsandeep2357)
Assignee: nobody → kumarsandeep2357
Status: NEW → ASSIGNED
For your first issue, calling super(InstanceFilter, self).__eq__(other) instead of manually doing the __hash__() comparison might work.
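The Python 3 behaviour behind the original error can be shown in isolation (a generic sketch, not the actual filters.py classes):

```python
class Broken(object):
    def __eq__(self, other):
        # Python 3 sets __hash__ to None on a class that defines __eq__
        # without __hash__, so this explicit call raises
        # "TypeError: 'NoneType' object is not callable"; Python 2 keeps the
        # default identity-based __hash__ and the comparison works.
        return self.__hash__() == other.__hash__()


class Fixed(object):
    def __eq__(self, other):
        # Avoiding the explicit __hash__() call (or re-declaring __hash__)
        # works on both versions.
        return isinstance(other, type(self))

    __hash__ = object.__hash__


print(Fixed() == Fixed())  # True on Python 2 and Python 3
```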
(In reply to Andrew Halberstadt [:ahal] from comment #8)
> For your first issue, calling super(InstanceFilter, self).__eq__(other)
> instead of manually doing the __hash__() comparison might work.

Andrew, this worked. Thanks.

I am trying to push the changeset to MozReview but it is saying "MozReview is no longer accepting new review requests". Please let me know how to proceed.
Flags: needinfo?(kumarsandeep2357)
Yes, we've switched over to phabricator now:
http://moz-conduit.readthedocs.io/en/latest/phabricator-user.html

For now you could just upload a diff of your patch to this bug (though you'll want to get phabricator set up anyway if you plan to be asking for review somewhat frequently).
Andrew, I attached the latest diff file. Could you please check it and provide feedback?
Attachment #8999273 - Flags: feedback?(ahal)
Comment on attachment 8999273 [details] [diff] [review]
bug-1428705.patch

Thanks!

The min error is due to changes in 'map' between python 2 and 3:
http://python-future.org/compatible_idioms.html#map (the option 1 solution is likely good enough here)
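In generic terms (not the actual test_chunking.py code), the difference looks like this:

```python
runtimes = map(int, ['3', '14', '81'])
# Python 2: `runtimes` is a list and can be consumed repeatedly.
# Python 3: `runtimes` is a lazy iterator; max() below consumes it, so the
# following min() sees an empty sequence and raises
# "ValueError: min() arg is an empty sequence".
longest = max(runtimes)
# shortest = min(runtimes)  # ValueError on Python 3

# "Option 1" from the page above: wrap the map() in list() to get the
# Python 2 behaviour on both versions.
runtimes = list(map(int, ['3', '14', '81']))
print(max(runtimes), min(runtimes))  # 81 3
```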

After fixing that I still see two other errors, though I can't seem to find any obvious reason for them to fail. The error in test_manifestparser.py seems to be intermittent, which is.. strange.

If you don't want to dig into these any further I'd suggest we skip these two remaining errors (via pytest.skip instead of the manifest.ini so we only skip those specific test functions). That way we can land what you have so far and prevent it from bit-rotting.
Attachment #8999273 - Flags: feedback?(ahal) → feedback+
(In reply to Andrew Halberstadt [:ahal] from comment #12)
> Comment on attachment 8999273 [details] [diff] [review]
> bug-1428705.patch
> 
> Thanks!
> 
> The min error is due to changes in 'map' between python 2 and 3:
> http://python-future.org/compatible_idioms.html#map (the option 1 solution
> is likely good enough here)
> 
> After fixing that I still see two other errors, though I can't seem to find
> any obvious reason for them to fail. The error in test_manifestparser.py
> seems to be intermittent, which is.. strange.
> 
> If you don't want to dig into these any further I'd suggest we skip these
> two remaining errors (via pytest.skip instead of the manifest.ini so we only
> skip those specific test functions). That way we can land what you have so
> far and prevent it from bit-rotting.

Thanks Andrew. I fixed the min error. 

I want to dig into the remaining errors so that we can complete the porting of manifestparser. 

Regarding the error in test_manifestparser.py, I suspect that my changes in manifestparser.py might be affecting the data we get. Could you please check that and let me know?
Flags: needinfo?(ahal)
MozReview-Commit-ID: IGrh4OArTOu
I don't believe the failure has anything to do with your changes in that file. As I mentioned, it's intermittent, so something much more tricky and subtle is likely happening.

I took another look at the failures in both test_chunking.py and test_manifestparser.py and I can't figure them out.
Flags: needinfo?(ahal)
(In reply to Andrew Halberstadt [:ahal] from comment #15)
> I don't believe the failure has anything to do with your changes in that
> file. As I mentioned, it's intermittent, so something much more tricky and
> subtle is likely happening.
> 
> I took another look at the failures in both test_chunking.py and
> test_manifestparser.py and I can't figure them out.

Andrew, I made the suggested changes to the differential revision.

Regarding the intermittent failures, I guess you are right; the mentioned errors are not showing up on every test run.
Attachment #8999273 - Flags: review?(dave.hunt)
Dave, I have made the final changes except for the setup.cfg file. Could you please check those on Phabricator and suggest how to proceed further?
Flags: needinfo?(dave.hunt)
Sorry for not clearing the needinfo. I've commented on the patch here https://phabricator.services.mozilla.com/D3287#120102
Flags: needinfo?(dave.hunt)

Hey :sandeepb, are you still interested in working on this patch?

Flags: needinfo?(kumarsandeep2357)

Hi :davehunt, I am still interested in working on this patch. I've been a little busy with work.

Going by our last discussion, I have made one of the suggested changes, as mentioned here: https://phabricator.services.mozilla.com/D3287#166110

I will work on the mentioned reraise issue.

Regarding the intermittent failures, I am still unable to find the reason for them to appear occasionally. Could you provide some more pointers to debug this?

Flags: needinfo?(kumarsandeep2357) → needinfo?(dave.hunt)

(In reply to Dave Hunt [:davehunt] [he/him] ⌚️UTC from comment #19)

> Hey :sandeepb, are you still interested in working on this patch?

Forgot to clear the needinfo.

Flags: needinfo?(dave.hunt)

Thanks :sandeepb, let us know when you have an update. Regarding the intermittent failures, if I recall correctly these were quite easy for me to reproduce locally just by running the tests multiple times. Are you able to replicate these failures?

(In reply to Dave Hunt [:davehunt] [he/him] ⌚️UTC from comment #22)

> Thanks :sandeepb, let us know when you have an update. Regarding the intermittent failures, if I recall correctly these were quite easy for me to reproduce locally just by running the tests multiple times. Are you able to replicate these failures?

Hi :davehunt, I tried debugging test_chunking.py for the intermittent failures. One problem I identified is with the chunk_by_round_robin method, which is returning the chunks list based on the runtime of the last chunk.

Specifically, in our case the 2nd test has 4 directories, which generate only 3 tests because the directory bar has a value of 0. These 3 tests have runtimes of (3, 14, 81). In chunk_by_round_robin, we are creating a manifest set (an unordered collection). If the chunk with runtime 3 is processed last, we get only 3 chunks (scenario 1). In other cases, we get 14 or 81 chunks, of which only 3 contain tests and the rest are empty lists (scenario 2).

While creating chunks normally, we create an empty chunk for bar as well. Because of this, the runtime_delta of the local chunks is 81-0 every time. But when round robin returns only 3 chunks (scenario 1), the runtime_delta of those chunks is 81-3=78, which is less than our normal runtime_delta, and thus the test fails.

To rectify this, I want to use the max(runtime) of the chunks rather than the runtime of the last test in the chunk_by_round_robin method. Could you please let me know whether that is an appropriate fix or not?

Flags: needinfo?(dave.hunt)

(In reply to Dave Hunt [:davehunt] [he/him] ⌚️UTC from comment #22)

> Thanks :sandeepb, let us know when you have an update. Regarding the intermittent failures, if I recall correctly these were quite easy for me to reproduce locally just by running the tests multiple times. Are you able to replicate these failures?

Hi :davehunt, please check my earlier comment and suggest a possible solution.

Regarding the second intermittent failure, the one occurring in test_manifestparser.py, I found the reason for it to happen, but I can't pinpoint why it happens only intermittently.
In this case, we are creating a ManifestParser object in the test_parent_defaults method in test_manifestparser.py. The parser data is different when the tests are failing. I couldn't identify why the parser object is different on different runs.

(In reply to Sandeep Bypina [:sandeepb] from comment #23)

> (In reply to Dave Hunt [:davehunt] [he/him] ⌚️UTC from comment #22)
> > Thanks :sandeepb, let us know when you have an update. Regarding the intermittent failures, if I recall correctly these were quite easy for me to reproduce locally just by running the tests multiple times. Are you able to replicate these failures?
> 
> Hi :davehunt, I tried debugging test_chunking.py for the intermittent failures. One problem I identified is with the chunk_by_round_robin method, which is returning the chunks list based on the runtime of the last chunk.
> 
> Specifically, in our case the 2nd test has 4 directories, which generate only 3 tests because the directory bar has a value of 0. These 3 tests have runtimes of (3, 14, 81). In chunk_by_round_robin, we are creating a manifest set (an unordered collection). If the chunk with runtime 3 is processed last, we get only 3 chunks (scenario 1). In other cases, we get 14 or 81 chunks, of which only 3 contain tests and the rest are empty lists (scenario 2).
> 
> While creating chunks normally, we create an empty chunk for bar as well. Because of this, the runtime_delta of the local chunks is 81-0 every time. But when round robin returns only 3 chunks (scenario 1), the runtime_delta of those chunks is 81-3=78, which is less than our normal runtime_delta, and thus the test fails.
> 
> To rectify this, I want to use the max(runtime) of the chunks rather than the runtime of the last test in the chunk_by_round_robin method. Could you please let me know whether that is an appropriate fix or not?

I'm not familiar with this code; perhaps :ahal can help here?

Flags: needinfo?(dave.hunt) → needinfo?(ahal)

Hi Sandeep, I think the fix sounds good, but I'm having a bit of trouble following the problem. Specifically, what is different between Python 2/3 that would cause this intermittent failure?

Could you push your fix to phabricator and flag me for review? I think it would help to see what you are talking about in the code. Thanks!

Flags: needinfo?(ahal) → needinfo?(kumarsandeep2357)

(In reply to Andrew Halberstadt [:ahal] from comment #26)

> Hi Sandeep, I think the fix sounds good, but I'm having a bit of trouble following the problem. Specifically, what is different between Python 2/3 that would cause this intermittent failure?
> 
> Could you push your fix to phabricator and flag me for review? I think it would help to see what you are talking about in the code. Thanks!

Hi Andrew, I have pushed the fix to phabricator and flagged you. Please take a look at the code and suggest any changes needed.
Thanks.

Flags: needinfo?(kumarsandeep2357) → needinfo?(ahal)

Nice investigation! That makes sense, I can see by looking at the old code why this might fail intermittently.

So it's probably blind luck that the Python 2 set implementation was iterating over the items in the set consistently. The Python 3 implementation must be more likely to iterate in a different order.
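That difference is easy to demonstrate in isolation (the names below are placeholders): Python 3 randomises string hashing per process (controlled by PYTHONHASHSEED), so set iteration order can change between runs, whereas Python 2 hashes strings deterministically by default.

```python
# Run this a few times under each interpreter: Python 2 prints the same order
# on every run; Python 3 typically prints a different order per run, which is
# how a hidden ordering assumption turns into an intermittent failure.
manifests = {'foo.ini', 'bar.ini', 'fleem.ini', 'subsuite.ini'}
print(list(manifests))
```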

I set :davehunt as a blocking reviewer to make sure he's happy with the other changes before proceeding. Thanks very much for figuring this out :)

Flags: needinfo?(ahal)
Attachment #9000041 - Attachment description: Bug 1428705 - [manifestparser] Add support for Python 3 → Bug 1428705 - [manifestparser] Add support for Python 3. r=ahal
Blocks: 1520458