Changes in the management of Mozmill tests

As you have probably already read, there have been some changes to our team. With the transition into the Automation and Tools Engineering team, we can now focus even more on test harnesses, the APIs, and the test creation process, while educating others on how to write automated tests. Given that new responsibility, I want to announce a change in how we will manage Mozmill tests from now on. All of this is aimed at getting tests automated and fixed at a faster pace.

  1. We have to do reviews faster. Right now a lot of time is wasted because the reviewer of a test is in a different timezone and cannot react immediately to requests. That means patches put up for review will not get reviewed before the evening in Europe, and revising the patch has to wait until the next day. Normally we have at least two iterations on a patch, so it takes two or three days until a patch is ready for check-in. I would like to reduce this time to one or at most two days by adding at least the two of us (Dave Hunt and me) as reviewers. This applies to new and broken tests. Please split up review requests between us.
  2. We have to ensure that each test covers the appropriate manual test on Litmus. I'm not sure how close we are to this right now because I haven't done those reviews since they moved over to Anthony Hughes. If we have been doing well in that area, let's keep up the work; otherwise we have to improve. To support this, we will split the process into a review part and a feedback part. While the reviewer takes care of the actual code implementation, the feedback person checks the test for completeness. Only when both requests have been granted can the test be checked in. While Dave and I will handle reviews, Anthony knows the most about current Firefox development and should act as the feedback person for now. Both parts can happen independently, and one request should not be canceled if the other gets denied.
  3. We have to fix broken tests, not just skip them. In the past few months a couple of tests have failed and been marked as skipped. This has happened often enough that we have now ended up with a lot of skipped tests. That doesn't really help us, and we have to get better. So before we create and work on new tests, we have to make sure to really fix the broken ones. We should use a round-robin method among all of us (Automation Development and Desktop Automation) so that none of the branches carries tests skipped due to failures; see the manifest sketch after this list for how such skips are typically recorded. I think that should be doable.
  4. We have to start the education. Learning from the experiences of contributors while giving reviews will train Dave and me to see where the problems are located. That means we can make sure to get more and better documentation up for those areas. If we didn't work on reviews, we would simply miss those parts. As a result, everyone who wants to contribute will benefit and become a writer of automated tests in a shorter time.
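To illustrate the skipped-test problem from point 3, here is a hypothetical sketch of manifestparser-style manifest.ini entries; the file names and bug number are made up. A failing test is often parked with a disabled entry like this rather than being fixed:

```ini
# manifest.ini - hypothetical sketch of manifestparser-style entries
[testToolbar/testBackForwardButtons.js]

# a broken test parked instead of fixed; the reason points at the bug
[testAwesomeBar/testAccessLocationBar.js]
disabled = Bug 123456 - test fails intermittently
```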

I’m sure that with all those changes we can drastically improve the code quality and coverage of automated functional tests with Mozmill.
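For contributors who are new to this, here is a minimal sketch of what a Mozmill functional test module looks like; the test name and URL are illustrative only:

```js
// Minimal Mozmill functional test (illustrative name and URL).
// setupModule runs before each test function in this module and
// stores the browser controller for the tests to use.
var setupModule = function (aModule) {
  aModule.controller = mozmill.getBrowserController();
};

// Open a page in the current browser window and wait for it to load.
var testOpenPage = function () {
  controller.open("http://www.mozilla.org/");
  controller.waitForPageLoad();
};
```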
