bzr too slow

Denys Duchier duchier at ps.uni-sb.de
Wed Jan 11 13:58:04 GMT 2006


John Arbash Meinel <john at arbash-meinel.com> writes:

> Denys Duchier wrote:
>
>> The case of multiple concurrent read transactions also has problems: each
>> transaction may update different parts of the hashcache.  Ideally when writing
>> back the hashcache, we should only merge into the on-disk hashcache the entries
>> that we have actually updated during the transaction.  In this manner,
>> "up-to-date-ness" of the on-disk hashcache would be monotonic.
>> [...]
> The hash-cache is only concerned with the state of the working
> directory, which is not directly linked to the state of the branch.
> You write the hash-cache when you have a read lock. Which means that two
> bzr instances can grab a read lock, and they both will try to update the
> hash-cache at the same time. And that would happen whether you use your
> 'run at cleanup' code, or if we use 'do we have the lock' code, or if we
> use 'write when the lock goes away'. None of them fix the problem that
> you think you are fixing.

If you look at the quoted paragraph again, you'll see that I understand the
problem.  A "cleanup action" is part of my solution, but such an action should
not blindly overwrite the on-disk hashcache; it should merge in only the entries
that we actually updated during our transaction.  In this manner, the
up-to-date-ness of the on-disk hashcache stays monotonic.  Of course, we need
mutual exclusion at this point (another lock, for updating cached data).
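As a sketch of what I mean (this is not bzr's actual hashcache code; the JSON
on-disk format and the helper names here are made up, and the mutual-exclusion
lock around the read-merge-write step is elided):

```python
# Hypothetical sketch of merge-on-writeback for a hashcache.
import json
import os
import tempfile


def load_cache(path):
    """Read the on-disk hashcache; return an empty dict if it is absent."""
    try:
        with open(path) as f:
            return json.load(f)
    except FileNotFoundError:
        return {}


def merge_writeback(path, updated_entries):
    """Merge only the entries this transaction touched into the on-disk
    cache, instead of overwriting it wholesale.  Entries refreshed by a
    concurrent reader are preserved, so the cache's up-to-date-ness only
    ever increases.  A separate lock (not shown) is assumed to serialize
    this read-merge-write step."""
    cache = load_cache(path)
    cache.update(updated_entries)
    # Write atomically via a temp file and rename, so a crash mid-write
    # never leaves a truncated cache behind.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "w") as f:
        json.dump(cache, f)
    os.replace(tmp, path)
```

With this, two read transactions that each refresh different entries both land
their updates, whereas a blind overwrite would discard whichever one wrote
first.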

> I know I brought up the idea of code which gets run if commit() is
> successful, and code that always gets run. Robert has some valid concern
> over that sort of code. Primarily that Transaction isn't really where
> that should be happening.

So far, I think it is, because a transaction is a unit of coherence.

Cheers,

--Denys

More information about the bazaar mailing list