Release Process concerns (QA) and suggestions
Parameswaran Sivatharman
para.siva at canonical.com
Wed Aug 29 08:27:11 UTC 2012
On 28/08/12 17:38, Gema Gomez wrote:
> Hi all,
>
> during the last release meeting[1] I brought my concern about how
> quality is not taken into account as much as I think it should be by the
> release process and was asked to produce an email with suggestions for
> improvement.
>
> My main concern with the release process is that decisions on respins
> are made without *explicitly* considering the amount of testing that
> needs to be done after the respin in order to release a good image. We
> are used to just having one run to verify that the system can install,
> and then we are good to go. In my opinion, this is no longer enough,
> especially if we want to increase the quality of our releases going
> forward.
>
> The release process should be at the heart of good QA practices and
> needs to ensure beyond any doubt that testing has happened to a
> reasonable standard. The release engineer shouldn't be the only one
> verifying the software he has just fixed, because by definition he is
> going to be biased in the verification; third-party testers, people
> other than the developers or release engineers, are always recommended
> to validate a product. We should also avoid conflicts of interest: the
> release engineer's priority 1 is getting the release out of the door,
> while the QA engineer's priority 1 is making that release of the
> highest quality, and those two views often conflict. In my opinion the
> release manager should take both pieces of information into account
> and make an informed decision.
>
> The test cases we have at the moment are very minimal; better coverage
> is achieved by having people go through somewhat open-ended test cases
> and hence end up testing more than is written in the test case itself.
> This is not ideal from a QA perspective, but it is a good compromise
> while a more comprehensive set of test cases doesn't exist yet. It
> means that when we run through the mandatory test cases in a hurry on
> a Thursday before releasing, we are not getting as much coverage as
> when the whole community is running open-ended test cases at home on a
> varied set of hardware. Needless to say, we are working hard to
> improve this situation by using more automated testing and allowing
> people to do more specialized testing, but we are not there yet.
>
> I am not saying "let's not respin on release/milestone week"; I am
> saying let's respin responsibly, and only if there is enough time to
> run enough testing to verify that the quality of the images is
> reasonable. It is better to release with known errors than with
> unknown ones: quality is not about releasing with 0 errors, but about
> knowing what errors are there in what is being released.
>
> The release process[2] is not detailed enough, in my opinion. We have
> no guidelines as to what is a good reason to respin and what is not,
> and this leads to different engineers making different decisions when
> presented with the same problem (although the decision is normally to
> respin, even for corner cases such as [4], which only fails on
> upgrades without network connectivity). The Release Validation
> Process[3] is ancient and hasn't been updated in the past two years;
> I'll take an action to work on that.
>
> I was asked to suggest possible improvements to the current process.
> Here they are:
>
> - Let's document what constitutes a respin and what doesn't, so that
> whenever we see a bug we all know if that is going to trigger a respin
> or not, let's create guidelines for it.
This is a good idea imho. Especially from a tester's perspective: if I
know early enough what the important deliverables for a particular
release are, we can devise a test plan for that release and try to spot
issues very early that could otherwise force a respin at the last
minute. For this I think it's ideal to have a release-specific test
planning session early enough to decide 'what' to test and in what
order of criticality, with the QA team deciding 'how' to test. Making
the test plan release-specific also helps the ISO tracker grow and
helps corner cases get spotted early in the process.
> - Let's improve the static analysis of images so that we don't have the
> image size problem again, we are adding a job for this to Jenkins this week.
> - Let's require more than just one run of the test cases to validate
> an image. What is reasonable in terms of ensuring good HW coverage?
> I'd like to see at least 3 x 100% run rate with 100% pass rate on the
> current test cases, from people other than the release engineer.
> - Whenever a respin is going to happen, the release team gives the QA
> team (whoever is leading the testing on that particular milestone),
> and any other interested group, the opportunity to disagree.
> - Or maybe we should simply have a three-person group that makes the
> calls on respins during release/milestone week, with all the
> interested parties represented.
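On the image-size point: the kind of static check that Jenkins job
could run against a freshly built image might look roughly like the
sketch below. The 703 MiB CD limit and the demo file are illustrative
assumptions on my part, not the actual job.

```python
# Rough sketch of a static image-size gate, as a Jenkins job might run it.
# The 703 MiB limit (traditional CD-R capacity) is an assumed threshold.
import os
import tempfile

CD_LIMIT = 703 * 1024 * 1024  # ~703 MiB in bytes

def check_image_size(path, limit=CD_LIMIT):
    """Return True if the image at `path` fits within `limit` bytes."""
    size = os.path.getsize(path)
    print(f"{path}: {size} bytes ({size / limit:.1%} of limit)")
    return size <= limit

# Demo with a throwaway 1 KiB file standing in for a built ISO:
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"\0" * 1024)
print(check_image_size(f.name))  # a 1 KiB placeholder easily fits
```

In a Jenkins build step, a False result would translate to a non-zero
exit code so the build is marked failed before any manual testing
starts.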
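To make the "3 x 100% run rate with 100% pass rate" bar concrete, here
is a rough sketch of how it could be checked. All names and the data
shape are made up; real results would come from the ISO tracker.

```python
# Hypothetical check for the proposed validation bar: at least three
# complete, fully passing runs from distinct testers, none of whom is
# the release engineer. The run record format is an assumption.
def image_ready(runs, release_engineer, min_runs=3):
    """Each run is a dict like
    {"tester": ..., "executed": n, "passed": n, "total": n}."""
    qualifying = [
        r for r in runs
        if r["tester"] != release_engineer      # independent verification
        and r["executed"] == r["total"]          # 100% run rate
        and r["passed"] == r["total"]            # 100% pass rate
    ]
    # Distinct testers, as a rough proxy for hardware coverage.
    return len({r["tester"] for r in qualifying}) >= min_runs

runs = [
    {"tester": "alice", "executed": 20, "passed": 20, "total": 20},
    {"tester": "bob",   "executed": 20, "passed": 20, "total": 20},
    {"tester": "carol", "executed": 20, "passed": 19, "total": 20},
]
print(image_ready(runs, release_engineer="dave"))  # False: one run failed
```

The point of encoding it this way is that the release call stops being
a judgment made under time pressure and becomes a condition anyone can
check against the tracker.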
>
> Here are my thoughts as promised, thanks for reading. Many more things
> could probably be done, if you can think of any, please say so!
>
> Cheers,
> Gema
>
>
> [1]
> http://ubottu.com/meetingology/logs/ubuntu-meeting/2012/ubuntu-meeting.2012-08-24-15.00.log.html
> [2] https://wiki.ubuntu.com/ReleaseProcess
> [3] https://wiki.ubuntu.com/ReleaseValidationProcess
> [4] https://bugs.launchpad.net/ubuntu/+source/fontconfig/+bug/1039828
>