Mozmill status and goals for 2009

As some of you already know, Mozilla QA runs its own set of automated functional tests, which are separate from the tests in the automated test suite. Our main goal is to shorten the duration of manual functional testing so those tests can be run more often. Smoketests, BFTs (basic functional tests), and FFTs (full functional tests) are available on Litmus and are partly run by QA during release testing, or at any time by contributors. Since all of these tests take a lot of time to execute manually, we are working on automating most of them.

There is a question I get asked very often by developers: "Why do we need Mozmill tests when we already have a suite of automated tests available?" The clear answer is that these tests simulate user actions on UI elements the same way as if a user were sitting in front of the computer. That means, for example, that clicks on hidden or disabled elements shouldn't trigger the execution of the underlying command. That's the difference from Mochitests, which always trigger the command when the synthesizeMouse function is used to click on an element. Another really helpful feature is the capability to run restart tests of any kind. That's not possible with the existing test harnesses, which makes Mozmill tests unique.
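To make that difference concrete, here is a small self-contained sketch. The element objects and both click functions are hypothetical stand-ins, not the real Mozmill or Mochitest APIs; they only model the behavior described above:

```javascript
// Hypothetical stand-ins for UI elements; not real Mozmill/Mochitest code.

// Mochitest-style: synthesizeMouse dispatches the event unconditionally,
// so the underlying command always runs.
function mochitestStyleClick(element) {
  element.command();
  return true;
}

// Mozmill-style: behaves like a real user, so a hidden or disabled
// element never executes its command.
function mozmillStyleClick(element) {
  if (element.hidden || element.disabled) {
    return false;  // click is swallowed, command not executed
  }
  element.command();
  return true;
}

// Example: a disabled toolbar button.
var commandRuns = 0;
var button = {
  hidden: false,
  disabled: true,
  command: function () { commandRuns++; }
};

mochitestStyleClick(button);             // command fires despite "disabled"
console.log(commandRuns);                // 1
console.log(mozmillStyleClick(button));  // false
console.log(commandRuns);                // still 1
```

The point of the sketch: a user-level harness has to honor the element's state before acting, while an event-synthesis harness does not.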

Seeing the importance of these tests, we want to have a full suite of BFT tests for Mozmill by the end of Q2 2010. Automating all 196 doable tests (excluding the ones which require OS-level interaction) would allow us to run 82% of the tests automatically. At the moment 65 of those tests are already finished and can be run with Mozmill against builds from the 1.9.1 and 1.9.2 branches. For detailed information about the current state and ongoing work, please check the Google spreadsheet.

To get more tests automated, the QA execution team has set the following goals for Q4 2009:

  • Firefox 3.6 will be released this quarter. To enhance our testing we want to automate all of the tests in the four BFT subgroups prioritized as P1: Awesomebar, Add-ons Manager, Download Manager, and Tabbed Browsing. This means writing the next 40 tests. A list of all available subgroups and their prioritization can be found in the Feature Ownership document.
  • For those of us working on release testing, software update tests have to be performed for the betatest, beta, releasetest, and release channels. Given the amount of manual work that has to be done here, automation will help a lot. I will finalize my software update tests so they can be run by everyone.
  • When running Mozmill tests you get the results reported in the terminal. Even though there is an integrated capability to send those reports to a server, we don't have a web frontend to display the results yet. We want to use Brasstacks to visualize Mozmill results, similar to the Fennec test results. This work will be a joint effort with Testdev.
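As a rough sketch of what such a server-side result report could contain, the snippet below assembles a summary payload from individual test results. All field names (`buildReport`, `application`, `branch`, and so on) are hypothetical illustrations, not the actual Mozmill report format:

```javascript
// Hypothetical report payload; the real Mozmill report format may differ.
function buildReport(testRun) {
  return {
    application: testRun.application,  // e.g. "Firefox"
    branch: testRun.branch,            // e.g. "1.9.2"
    platform: testRun.platform,        // e.g. "Linux"
    tests: testRun.results.length,
    failures: testRun.results.filter(function (r) { return !r.passed; }).length,
    results: testRun.results
  };
}

var report = buildReport({
  application: "Firefox",
  branch: "1.9.2",
  platform: "Linux",
  results: [
    { name: "testBackButton", passed: true },
    { name: "testAwesomebar", passed: false }
  ]
});
console.log(report.tests, report.failures);  // 2 1
```

A payload in this spirit is what a frontend like Brasstacks could then aggregate and display per branch and platform.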

If anyone is interested in helping us write or maintain Mozmill tests, you can read more about it in the test creation tutorial, or simply join us on IRC and get in touch with whimboo or aakashd. You can also send me an email or comment on this blog post. Thanks!

Mozilla is seeking QA Execution Engineers

Are you interested in working for a global and open company? The Mozilla Corporation, vendor of the free and open source web browser Firefox, is seeking a full-time QA Execution Engineer. If you want to take the opportunity to work with engaged and talented people and to stay in touch with our worldwide and powerful community, go ahead and send in your resume…

Mozilla Corporation is seeking a QA Engineer who will be responsible for test execution for the Firefox and Thunderbird products. You will develop feature test cases and run regression testing during releases. You will also coordinate testing with the open source community and continually evangelize Firefox testing to the world. Browser development is accelerating; we're looking for engineers who can keep us ahead of the curve.

Responsibilities:

  • Create test plans and test cases from functional specifications
  • Develop and execute test automation scripts
  • Execute black box tests on a monthly release cycle
  • Maintain test documentation and test cases
  • Coordinate projects within the open source community
  • Confirm bugs, create steps to reproduce, identify regression ranges, and clarify bug reports and feature requests
  • Monitor user feedback

Skills Required:

  • Strong knowledge of internet and browser technologies
  • Strong understanding of test methodologies and test case development
  • Strong problem solving and resolution skills
  • Programming experience in C, C++, JavaScript, Python, Perl, and XML
  • Knowledge of Windows / Mac / Linux environments
  • Experience with software testing lifecycle process, release management and bug life cycle
  • Strong verbal and written communication skills
  • Flexibility in dynamic software environments
  • BS or MS in Computer Science or related field

Skills Desired:

  • Knowledge of web automation tools like Selenium, Watir, Silk, Quicktest Pro, and WinRunner
  • Working knowledge of web security
  • Experience with browser extensions and plugins
  • Experience with virtual machines, and multiple build environments
  • Experience with testing open source products and open source community participation

See also the full job description on

Photo challenge – please vote for us

Some weeks ago a friend of mine pointed me to an upcoming photo challenge about image manipulation. It was initiated by c't, the biggest German computer magazine. My first thought was to enter the challenge, but I realized that I didn't have any great images on my box and no time to work on it. Surprisingly, though, a friend was visiting me during those days and we got some great shots in Saxon Switzerland.

Given those pictures, Silvia created a fake image in parallel without knowing about the challenge. Once I saw the result, I pushed her to sign up for the challenge. After some discussion and further processing steps on the image, we decided to enter just for fun. About three weeks ago the first results were posted to the online gallery and a public vote was started. The 139 best pictures can be voted on by everyone, and our faked image is on that list too.

Here is a former version, not the final one, from my Flickr account:

Surprisingly, a former colleague and friend of mine informed me today that we have even made it onto Woot – only 13 images are posted there. So isn't that a sign that we have a good chance?

Please help us and vote for our image if you like it. We appreciate your support.

“Mozmill meets L10n” slides available

Over the last weekend, Mozilla Camp Europe 2009 took place in Prague. About 150 people from l10n, QA, dev, and advocacy were invited to join this conference, which Mozilla Europe organizes each year.

Given my project to automate manual Litmus tests with Mozmill, I had prepared some slides with a special focus on l10n. Sadly, I wasn't able to join the conference because of sickness. I have to say a big thanks to my colleague Marcia Knous and to Merike Sell, one of our main contributors for Sunbird tests, who both held the session. As I was told via IRC at the end of the session, the talk was a great success and a lot of questions were asked.

Due to the number of sessions, not everyone was able to join the Mozmill session. So, also for all the people who weren't able to come, I have now uploaded my slides. Please check the embedded Slideshare content below:

Because I haven’t got any feedback from localizers so far I’m anxious to hear what you think about the usefulness of Mozmill and testing with localized builds. Given by the current number we have over 70 official locales available which are not tested by automated tests and require manual testing from localizers and contributors on a regular basis. With all the 250 BFT and another 750 FFT tests enabled in Litmus manual testing is a time taking action. Running all the tests with Mozmill will take much lesser time, could be run more often, and could cover all platforms which will result in a higher quality of Firefox and helps us to minimize any new regressions for our huge user base.

Please consider the following questions; I'm interested in your answers:

  1. How often does your l10n team run Litmus tests against your locale, whether those are the BFT/FFT or the localizer test-run?
  2. Would you like to see many of those tests automated, and are you interested in running them on your local machine for each major and stability release?
  3. Are you interested in helping QA write Mozmill tests so we have most of them available as soon as possible?
  4. Do you have further ideas for how Mozmill could be used in the l10n area, in addition to the points I made in my slides?

Thanks in advance for your feedback!