On Friday, Mozmill 1.5.4 was finally released, including a lot of bug fixes and a couple of new features. This release took a bit longer than usual to get out, but the A-team has also been working hard on the Mozmill 2 code, where we expect a preview release soon.
Thanks to everyone who helped us get the following changes out:
- Add support for Linux3 kernels (Bug 664564)
- Disable any preference for caching XUL elements (Bug 661588)
- The private property .documentLoaded has been renamed to .mozmillDocumentLoaded and a public API has been added to avoid namespace conflicts on window objects (Bug 661408)
- Don’t install distributed extensions from the application folder by default (Bug 661008)
- Add support for iframes to the waitForPageLoad() method (Bug 659000)
- Expose full stack frames again, now that Error properties are no longer enumerable (Bug 650646)
- Add Mozmill version to the report document (Bug 636746)
If you haven’t upgraded yet, please do so by running:
pip install --upgrade mozmill
This command will also upgrade all of Mozmill’s dependencies. For questions, just drop by our #mozmill channel on IRC.
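If you want to confirm which version ended up installed after the upgrade, pip can list it for you. A minimal sketch (the exact output format depends on your pip release):

```shell
# List the installed Mozmill package and its version,
# as published on PyPI (output format may vary by pip release)
pip freeze | grep -i mozmill
```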
As part of the discussions during our QA Automation Services work week at the end of July, we decided to be more open to the public. In the past, almost all conversations regarding tools, tasks, and status updates happened on our internal mailing list, which is not accessible to our contributors. That’s counter-productive, because we have dozens of smart and awesome community members out there who want to help and learn automated-testing techniques, and we should make it easy for them to participate.
As a result, we raised bug 676224 two weeks ago, requesting the creation of a public-facing mailing list and newsgroup. The creation was a bit delayed but finally happened last Friday. So if you want to participate in automation, feel free to subscribe to one of the following communication channels:
We are looking forward to discussing automation topics with you in a more collaborative fashion than before.
For a long time now, since January 29th, 2010, to be exact, the Mozilla QA team has been running daily Mozmill tests to verify the functionality of Firefox and its update system across all supported platforms and release branches. With those tests we have been able to find a couple of regressions, which were then fixed before the next release of Firefox.
Given those results, we have decided to extend our daily test-runs to cover not only the functional and update tests, but also endurance tests, add-on tests, and the new remote tests.
With the endurance tests in particular, Dave Hunt has shown that we can collect a lot of useful memory-related information and easily find regressions or bad interactions with add-ons. So it makes a lot of sense to additionally perform those tests on Nightly and Aurora builds of Firefox. The test results can be found on our dashboard.
We now also provide test results for add-on authors who have written Mozmill tests for their add-ons, so regressions in the add-on or in Firefox itself can be identified as early as possible. Currently only two add-ons have Mozmill tests and are being tested. If you are an add-on author, get in contact with us and we can work together to get your add-on tested.
Remote tests, which we haven’t talked about yet, are tests that connect testing of the Firefox UI with remote websites. They are fairly new, having been added only recently, and currently only cover the ‘Get Add-ons’ pane of the new Add-ons Manager. They were necessary to create because Selenium cannot automate the installation of add-ons. For the latest test results, see our dashboard.
We are now looking forward to the regressions we can detect with the newly added test-runs. And we can identify those not only through our daily test-runs, but also from reports sent by users of the Mozmill Crowd add-on. So please help us test Firefox by sending in your test reports. Thanks!