Whole tree up to date before committing

Óscar Fuentes ofv at wanadoo.es
Thu Oct 22 23:38:18 BST 2009


Nicholas Allen <nick.allen at onlinehome.de> writes:

>> So let's suppose that the merge has no conflicts, the build works and
>> the test suite succeeds. What's next? If the PQM automatically merges
>> and commits to master, the developer is not forced to review the final
>> state of the master branch, so it is no different from subversion's
>> implicit merge + buildbot.

> It is different because in Subversion it lands on trunk *before* the
> tests have passed. With a gatekeeper (e.g. Launchpad) it only lands *after*
> they have passed, and therefore the trunk can be considered more stable.
> This can be a problem in subversion because a problem may not be
> detected by the buildbot until many revisions after it was incorporated
> into trunk.

>> If my understanding is right, there is one PQM and all patches must go
>> through it. This creates a bottleneck of its own. For the project I'm
>> thinking of, even a 16-way machine would be unable to keep pace with the
>> patches at peak hours, and that would check just one platform.
>>   
> It creates no more of a bottleneck than having to run each revision
> through the buildbots (which is what you claim you currently do, if I
> understood correctly). So if this is the case, the only change would be
> that trunk guaranteed buildbot success for every revision, whereas in
> your current subversion setup it does not. That has to be better, doesn't it?

With subversion, the developer commits after a local test, which catches
most problems. Bugs may arise when his change conflicts with other
changes in different files; this is not caught by the local pre-commit
test, although it is detected by the buildbots at some future point. This
happens from time to time, but it is not common. IMHO code reviews, not
buildbots, are the most appropriate tools for avoiding problems with this
model, but I digress.

The model you propose is more of a bottleneck because changes must be
checked sequentially by a single PQM (unless you implement something
like "speculative testing", where a machine assumes that patch N
succeeds, starts testing patch N+1, and commits it only after the machine
that was testing patch N finishes successfully). A PQM that tests one
patch at a time is not fast enough for that project, even if it comprises
several machines, each testing one platform in parallel.
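
To make the idea concrete, here is a rough sketch in Python of what I
mean by "speculative testing". run_tests and commit_to_master are made-up
placeholders, not any real PQM interface:

    from concurrent.futures import ThreadPoolExecutor

    def run_tests(patch):
        # Placeholder: apply `patch` on top of the patches queued before it,
        # build the tree and run the test suite; return True on success.
        return True

    def commit_to_master(patch):
        print("committed", patch)

    def speculative_pqm(queue, workers=4):
        with ThreadPoolExecutor(max_workers=workers) as pool:
            # Test every queued patch optimistically, each one assuming
            # that its predecessors will pass.
            futures = [pool.submit(run_tests, p) for p in queue]
            for patch, future in zip(queue, futures):
                if future.result():      # wait for results in queue order
                    commit_to_master(patch)
                else:
                    # A failure invalidates the speculative results built
                    # on top of this patch; a real PQM would have to
                    # requeue and retest them.
                    print("rejected", patch, "- later results discarded")
                    break

    speculative_pqm(["patch-1", "patch-2", "patch-3"])

The catch is that the optimism only pays off while patches rarely fail:
one rejection throws away all the work done on top of it.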

With subversion's model it is trivial to have one machine testing revision
N, another testing revision N+1, and so on, so it is just a matter of
adding more machines as the project grows. When a problem is detected,
removing it is not as simple as rejecting a patch, but since this rarely
happens, it is not a big issue.
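
For comparison, the subversion-style buildbot farm is embarrassingly
parallel. Something along these lines, with run_tests again a placeholder
standing in for a checkout/build/test cycle:

    from concurrent.futures import ThreadPoolExecutor

    def run_tests(revision):
        # Placeholder: check out `revision` from trunk, build, run the suite.
        return revision % 7 != 0     # pretend an occasional failure

    def buildbot_farm(revisions, machines=8):
        # Each already-committed revision is tested independently, so adding
        # machines adds throughput; failures are reported after the fact
        # instead of blocking the commit.
        with ThreadPoolExecutor(max_workers=machines) as farm:
            for rev, ok in zip(revisions, farm.map(run_tests, revisions)):
                if not ok:
                    print("r%d broke trunk; fix or revert it" % rev)

    buildbot_farm(range(4200, 4220))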

I repeat that the bzr policy is the right one, but it cannot be applied to
some projects, so subversion's model is a reasonable trade-off.

-- 
Óscar



