Closed Bug 221191 Opened 21 years ago Closed 21 years ago

account deactivation

Categories: mozilla.org :: Repository Account Requests, task
Hardware: x86
OS: Windows XP
Type: task
Priority: Not set
Severity: normal

Tracking: (Not tracked)

Status: RESOLVED INVALID

People: Reporter: bernd_mozilla; Assigned: marcia

Details

It's time to terminate my CVS write account (bmlk@gmx.de). I share neither the vision of the roadmap nor the vision of the module owner where I usually work.

Let me start with the roadmap. I work in layout-table and have fixed minor bugs that sometimes have high visibility, since table-based layouts currently dominate the web. I believe that at least part of what I have done is "accumulation of code to patch around core problems." I believed we should do so in order to always have a shippable product. I am not able to successfully code large re-architecture projects in Gecko. My feeling, however, was that even alongside these large changes one should kill the small stupid bugs that annoy users. This worked perfectly as long as Chris Karnaze was the table module owner. The roadmap sets a different goal, one I can't contribute to given my limited coding abilities and time.

Contrary to the roadmap, I believe that the institution of review and superreview is the best way for the project to evolve. I see them as the main educational path by which the more experienced coders get the next generation familiar with the code base. For me the reduced engineering input due to the Netscape closure is a big threat, and Mozilla should expand its number of contributors, as I fear the number of under-owned modules will increase.

"Code review, like testing, is an auditing procedure that cannot make excellent
code from mediocre input."

This statement scares away people who rate their coding capabilities the way I do. I don't want to serve as an example of mediocre input. The patch history at
http://bugzilla.mozilla.org/show_bug.cgi?id=173277 shows that for larger projects I am not the right person. I thought that I should keep the knowledge alive until a real new owner appears. There was hope that John Keiser would be the one, but it did not happen. Dbaron argued that it is better to document the knowledge; that is what I have done recently.

I believe in regression testing, and that it should be a checkin requirement before code goes into the layout directory; further, that possible regressions should be fixed before checkin. When Chris Karnaze and Marc Attinasi worked in layout this was common sense. Table layout had a "no r without rtest" policy. I cannot see that the current module owner (dbaron) shares the same vision. I don't have the ability to fix the introduced regressions, and I don't want to end up as a lamer by permanently asking him or by making noise in the bugs. I found it especially frustrating that, when I tried to motivate new contributors to run those tests, David explained to them that they are not necessary.

The second point is that I don't want to be liable for things that I cannot change (http://bugzilla.mozilla.org/show_bug.cgi?id=221140#c3). So my decision to give the CVS rights back will make the owner responsibilities clearer and free me from the need to defend table-layout as in bug 198506.

I feel that with the current attitude toward regression testing, conflicts like the one in bug 221140 will not vanish but will escalate.

I am not paid for this; this is my hobby, so it should also be fun. Arguing with dbaron is anything but fun. I think the last time I was really happy with Mozilla was when I coded for bug 3166.

I hope that I will find the lost fun again when I can attach some patches and when I know how to solve these bugs, but for that I don't need a CVS write account. Or maybe some other part of the project, far away from layout, will be good. But then, in order to avoid the cross-tree hacking that the roadmap is so afraid of (which I believe is a myth), I will apply for a new CVS write account once I know that code.

Bernd
Bernd, I'm sorry for getting mad at you yesterday.  I was angry because I felt like you were continuing to criticize me for not running the regression tests after I had already explained to you on IRC that the regression tests were useless for the change in question: they would have shown false positives for practically every use of 'overflow: hidden', since the structure of the frame tree was changed, and I don't think they would have shown any change at all for the things my checkin actually broke, since those changes affected only what happens at paint time.

I do think the current table code has problems, but I certainly don't blame you
for them.  It was just an inappropriate reaction to criticism that I felt was
misplaced.
Bernd, now it's my turn to apologize for those roadmap words being harsh enough
that you thought they might apply to you, or to anyone clueful who attempts a
patch.  They do not.  Perhaps I should remove them, now that Netscape is gone. 
They were probably mostly about people hired by Netscape who had gained CVS
access long ago just by virtue of being hired, not because they had merited it
as you have by contributing good patches.

The abstract point that you can't "review quality in", easily or at all, still
stands -- but if it's hurtful, I now see no good in making it in the roadmap.

Anyway, I hope you'll keep contributing to the project.  We intend to increase
our automated regression testing, not decrease it.  And the roadmap will be
updated shortly, so stay tuned for that.

/be
The last clash is only one instance showing our different visions of regression testing; there is another source of clashes (MEW) that would be tolerable on its own. But both together are too much for me and my pursuit of happiness.

I earn the money for my hobby by testing software and hardware and verifying that they adhere to the promised spec. The basic rule is: everything that is not tested does not work. I have seen some happy cases where it did not turn out that bad. There are no small changes; one needs to retest the whole thing. Every time a problem occurs at a late testing stage, I ask myself what went wrong before and where the hole in the test procedure is that needs to be closed.

In layout, the situation today seems to me somewhat different. Management does not apply pressure for things to be heavily tested before checkin. David, just ask yourself: when was the last time you asked somebody to run the regression tests before checkin? We know that they are somewhat flawed, but did you push for that to change? Just have a look at your last week:
http://bonsai.mozilla.org/cvsquery.cgi?treeid=default&module=all&branch=HEAD&branchtype=match&dir=&file=&filetype=match&who=dbaron&whotype=regexp&sortby=Date&hours=2&date=week&mindate=&maxdate=&cvsroot=%2Fcvsroot
How many of these checkins are regression fixes? Ask yourself why these problems did not pop up in the tests beforehand. Will there be tests so that these regressions can never happen again? I know that these tests are time consuming and will slow down development, but for me it's pretty frustrating to check in a patch on Sept 13th that took me weeks to implement and to see that on Sept 17th it's already broken by my sr.

The intention to increase automated testing presupposes that manual testing is accepted first. For me the history shows something different: on April 21, bug 201624 horks the viewer under Windows and bryner tells me that he does not care. Layoutdebug is nice, but the advantage of the viewer being XUL-free is gone. On April 1 (http://bugzilla.mozilla.org/show_bug.cgi?id=158920#c27) the regression tests under Windows are broken. Not to mention that the printing regression tests don't work under Linux and get broken every other time by the printing people under Windows.

This tells me that there is no working quality management in place ahead of the layout rewrite, nor do I see any fear of regressions. This will predictably produce a bunch of regressions. I will yell at dbaron that he broke table stuff somewhere. I don't want that, and I don't want responsibility for it. This is simply a notice regarding my responsibilities in layout-table. This bug will bring my CVS rights into accordance with the role I see for myself in layout. Possibly I will move into some other component where things go more the way I envision.
There's a big difference between regression testing and testing that we adhere
to a spec.  Regression testing is only testing that our behavior doesn't change.
 The table code in general is, relative to other code in the tree, bad at
adhering to specs because the core principle is to avoid changing anything
rather than to change towards adhering to the specifications.
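
(To make the distinction concrete, here is a minimal sketch in Python; the layout function and the widths below are invented for illustration, not real Gecko code. A regression test asks only whether today's output matches yesterday's, while a conformance test asks whether it matches what the spec mandates.)

# Illustration only: layout_table and the expected values are invented.
def layout_table(available_width):
    """Stand-in for the engine: computed width of a fixed-layout table."""
    return min(available_width, 600)

def regression_test():
    # Passes if behavior matches whatever the previous build produced,
    # whether or not that behavior was ever correct.
    previous_build_result = 600   # recorded from the last accepted run
    return layout_table(800) == previous_build_result

def conformance_test():
    # Passes only if behavior matches what the spec mandates,
    # independent of what any earlier build did.
    spec_mandated_result = 800    # what the (hypothetical) spec example requires
    return layout_table(800) == spec_mandated_result

print(regression_test())   # True: nothing changed since last time
print(conformance_test())  # False: unchanged does not mean correct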

Is there anything for which you need a xul-free viewer for which you can't use
MfcEmbed or WinEmbed (or whatever the Windows equivalent of TestGtkEmbed is)?

Our layout engine is currently practically unmaintainable -- the interactions
between different parts are so complex that it's extremely difficult to do
anything.  If we want a layout engine that continues to be relevant in a few
years, we need to do new things with it.  Implementing new standards will
require major change, and given the number of people we have, we have no chance
of accomplishing that change if we work at the glacial pace required by our
current regression test structure, which produces huge numbers of false
positives for many things that involve even minor architectural change.
David, your comment sums up pretty well where we disagree. I am afraid that if the table layout gets seriously horked, there will be no "in a few years". It's simply a cultural disagreement; from all my experience I know that there should always be a shippable product, and that is probably why I matched so well with Chris. I don't want to stop you, as I know that you are a code ninja. I simply don't want to participate in something that contradicts my feelings and experience. And small fixes like bug 220536 and bug 220653 are still fun for me.

Today I looked at RDF bugs, which look like the quiet place that I am looking for.
Bernd, are you sure you disagree with David on the particular issues?  If our
current regression tests are too full of noise (false positives, etc.), then we
shouldn't be spending too much time running them after every change.  Developing
accurate regression tests and automating them (using people at first, scaling
across large numbers of volunteers) seems like a better plan, although that will
take time to bring to a point where we can depend on it.

You're right that the roadmap says now is the time to fix Gecko architecture
bugs that have been patched around for too long, and that doing so risks more
regressions.  If that's a general change that you don't want to work under, I
completely understand your desire to hack elsewhere.  But why not keep your CVS
account active?  If you don't use it, it'll be deactivated automatically after
six months anyway.

/be
David, I have some difficulty seeing what else the testcases I checked in test for other than conformance to the spec. The last quirk I had any relationship with was attinasi's <img><img> MEW hack. The table testcases that have been checked in over the last 2 years are driven by making the table code standards compliant. What you remember might have been true in a very distant past.

Your neglect of regression tests, however, is a management problem. Believe it or not, you are the layout manager now. Quality control is purely a management issue. Any serious big product today undergoes automated testing. The absence of tools that verify the integrity of the product and stop regressions is a clear indication of management problems. As a manager you have to take care of it and live it.

You have just left Harvard, and as a young, bright programmer you will probably pursue a career toward senior program manager. So you should see Mozilla as a great opportunity to improve your management skills. Mozilla is a tough environment: usually in a company people will not openly resist you but will either leave the group or stop working, while here in Mozilla people will either resist or simply leave. If you can motivate your contributors here, you will have learned a lot and will make it anywhere. You can of course also learn by example how management does not work; MNG comes to mind. What will certainly demotivate people is a boss who does not take enough care about what his group has worked on and breaks things that previously worked. This is the internal side effect of regressions. Another very efficient method of demotivation is to make general statements that devalue the work of your contributors. What I think is probably pretty clear, but I can tell you for sure that Marc and Chris suffered from your statements before as well; just ask them.

I don't share your negative assessment of table layout. From what I have seen (I have not seen MacIE), Mozilla's table layout is the most standards-compliant rendering. With the coming fix for bug 4510, the background rendering will be the most compliant as well, since the proposal that fantasai wrote was accepted first and will now be implemented. Once that is finally fixed, the dynamic column/row changes will be turned on.

I hope you will someday understand what I mean and will use this opportunity now to learn management, before real people with their economic and social futures depend on your decisions.
Over the next days/weeks I will check in the stuff that I already have in my tree, regardless of whether it will be broken a few days later.

I guess I will need my account for this; I don't want to ask poor Boris for help. And, as an addict, I asked myself why we still have
http://lxr.mozilla.org/seamonkey/source/xpinstall/packager/packages-static-win#233
when these files were removed from CVS a long time ago. :-)
Status: NEW → RESOLVED
Closed: 21 years ago
Resolution: --- → INVALID
When I referred to standards compliance problems I wasn't referring to the table
layout algorithms themselves.  I was referring to all the other things that we
support that don't work correctly on tables, such as 'overflow'.  They don't
work correctly on tables because the tests for them that we have been using test
them on blocks, and perhaps inlines, but not on tables, yet our architecture
requires that they be implemented separately for tables.  It is a design flaw
that things like this need to be implemented in multiple places to work on
multiple formatting objects.

I think testing is important, but there has to be a balance.  Additional testing
that has a very low chance of catching regressions isn't useful.  I have made
mistakes, but I think my general testing strategy is generally correct.  I try to run testcases specific to what I am changing, to make sure that the code changes I am making lead to the changes in behavior that I expect.  I only run
the full regression tests if I'm unsure what behavior will change from the patch
or if I want to see how common the cases that are being changed are within the
sample of the web that we have in the regression tests.  If the regression tests
were easier to run or caught a higher percentage of regressions, I'd run them
more, since the benefit/cost ratio would be higher.  Given the work I'm hoping
to do in the near future, I may well try to improve the tests myself.

In a project as complicated as Mozilla's layout engine, any testing strategy
will allow some regressions to fall through.  Lack of regressions would be a
sign that we were spending *way* too much time on testing relative to other
things.  When I cause regressions, I try to fix them.  (When they're pointed out
in an offensive way, by insisting that the presence of regressions implies that
my development procedures are bad, I reserve the right to get annoyed.)
Mozilla's a little different from other projects. We have thousands of users
downloading and using the latest builds every night and to a large extent, they
do our regression testing for us*. Thus we have less need for a very
comprehensive, fully automated battery of tests.

* Well, we used to. I'm not sure if that's still true now that our quality is
high enough that people aren't always searching for the latest bug fixes :-).

Having said that, it would be great to have more systematic and automated regression testing. I would especially like to have a Tinderbox-like machine that continuously pulls, builds and runs the tests. A battery of automated tests that one could run quickly and precisely before checkin would be very helpful too. But that's a lot of work to put together and, as usual, if there's no volunteer it won't get done.
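
(For what it's worth, such a machine is conceptually just a loop around pull, build and test. A minimal sketch in Python follows; the checkout path, the client.mk invocation and the run_regression_tests.sh driver are placeholders rather than the real tooling.)

#!/usr/bin/env python
# Illustrative sketch only: TREE, the build command and TEST_SCRIPT are placeholders.
import subprocess
import time

TREE = "/builds/mozilla"                    # local checkout to keep up to date
TEST_SCRIPT = "./run_regression_tests.sh"   # hypothetical test driver
POLL_INTERVAL = 30 * 60                     # re-check the tree every 30 minutes

def run(cmd):
    """Run a command inside the tree; True on success."""
    return subprocess.run(cmd, cwd=TREE).returncode == 0

def cycle():
    """One pull/build/test cycle; report which stage failed, if any."""
    if not run(["cvs", "update", "-dP"]):
        return "pull failed"
    if not run(["make", "-f", "client.mk", "build"]):
        return "build failed"
    if not run([TEST_SCRIPT]):
        return "regression tests failed"
    return "ok"

if __name__ == "__main__":
    while True:
        print(time.strftime("%Y-%m-%d %H:%M"), cycle())
        time.sleep(POLL_INTERVAL)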

I'm optimistic about both our current code base and our current process. Layout is ugly internally, but it does what it does as well as or better than anything else out there. It's hard to maintain, but we are making progress in fixing bugs and even making it prettier, so I think "unmaintainable" is too strong a criticism. I like our current process of mostly incremental changes without a whole lot of process overhead for the developers. I am nervous about "big bang" changes, but I think we can handle a few in each release cycle, if they land early in the release cycle and are no bigger than necessary to make progress without intolerable overhead. I just wish I had more time to do a few myself!