performance analysis of commit
Martin Pool
mbp at sourcefrog.net
Thu May 10 07:38:18 BST 2007
Here is a description of the minimum work that commit must do. We
want to make sure that our design doesn't cost too much more than this
minimum. I am trying to do this without making too many assumptions
about the underlying storage, but am assuming that the UI and basic
architecture (working tree, branch, repository) stay about the same.
[After discussion I will put this into doc/developer. Please do let
me know if you think this is useful, or if you like it pick a case and
do one yourself.]
The basic purpose of commit is to:
1 - create and store a new revision based on the contents of the working tree
2 - make this the new basis revision for the working tree
We can do a selected commit of only some files or subtrees.
The best performance we could hope for is:
- stat each versioned selected working file once
- read from the workingtree and write into the repository any new file texts
- in general, do work proportional to the size of the shape (e.g. the
inventory) of the old and new selected trees, and to the total size of
the modified files
In more detail:
1.0 - Store new file texts: if a versioned file contains a new text
there is no avoiding storing it. To determine which ones have changed
we must go over the workingtree and at least stat each file. If the
file has been modified since it was last hashed, it must be read in.
Ideally we would read it only once, and either notice that it has not
changed, or store it at that point.
On the other hand we want new code to be able to handle files that are
larger than will fit in memory. We may then need to read each file up
to two times: once to determine if there is a new text and calculate
its hash, and again to store it.
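As a rough sketch of that stat-and-hash check (in Python; the cache
layout, fingerprint fields and function names here are invented for
illustration, not bzrlib's real hashcache API):

    import hashlib
    import os

    def fingerprint(st):
        # Which stat fields to trust is an assumption of this sketch; a
        # change in any of them forces a re-read of the file.
        return (st.st_size, st.st_mtime, st.st_ctime, st.st_ino)

    def new_text_or_none(path, cache):
        # Return (sha1, text) if the file holds a new text, else
        # (sha1, None). cache maps path -> (sha1, fingerprint) from a
        # previous run.
        st = os.stat(path)
        cached = cache.get(path)
        if cached is not None and cached[1] == fingerprint(st):
            return cached[0], None     # stat unchanged: trust the cached hash
        with open(path, 'rb') as f:
            text = f.read()            # small-file case: one read serves both
        sha1 = hashlib.sha1(text).hexdigest()
        if cached is not None and cached[0] == sha1:
            return sha1, None          # only the stat data moved; same content
        return sha1, text              # a genuinely new text that must be stored

For files too large to hold in memory the final read would instead be
streamed, which is where the second pass comes from.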
1.1 - Store a tree-shape description (i.e. an inventory or similar).
This describes the non-file objects, and provides a reference from the
Revision to the texts within it.
1.2 - Generate and store a new revision object.
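To make 1.1 and 1.2 concrete, the data being stored is roughly this;
the class and field names are simplified stand-ins, not bzrlib's
actual Inventory and Revision API:

    class InventoryEntry(object):
        # One row of the tree shape: a file or directory and its identity.
        def __init__(self, file_id, name, kind, parent_id, text_sha1=None):
            self.file_id = file_id        # stable identity across renames
            self.name = name
            self.kind = kind              # 'file' or 'directory'
            self.parent_id = parent_id    # file_id of the containing directory
            self.text_sha1 = text_sha1    # reference from shape to stored text

    class Revision(object):
        # The revision object of 1.2, tying message and parents to a shape.
        def __init__(self, revision_id, parent_ids, inventory_sha1,
                     committer, timestamp, message):
            self.revision_id = revision_id
            self.parent_ids = parent_ids          # more than one after a merge
            self.inventory_sha1 = inventory_sha1  # points at the stored shape
            self.committer = committer
            self.timestamp = timestamp
            self.message = message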
1.3 - Do delta-compression on the stored objects. (git notably does
not do this at commit time, deferring this entirely until later.)
This requires finding the appropriate basis for each modified file: in
the current scheme we get the file id and last-revision from the
dirstate, look into the knit for that text, extract that text in
full, generate a delta, and then store that into the knit. Most delta
operations are O(n^2) to O(n^3) in the size of the modified files.
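For illustration, a line-based delta in that style, using difflib
purely as a stand-in for the knit delta code (SequenceMatcher is
itself quadratic in the worst case, consistent with the costs above):

    import difflib

    def compute_delta(basis_lines, new_lines):
        # Delta as (start, end, replacement_lines) edits against the basis.
        sm = difflib.SequenceMatcher(None, basis_lines, new_lines)
        return [(i1, i2, new_lines[j1:j2])
                for tag, i1, i2, j1, j2 in sm.get_opcodes() if tag != 'equal']

    def apply_delta(basis_lines, delta):
        # Rebuild the new text from the basis plus the delta.
        out, pos = [], 0
        for start, end, replacement in delta:
            out.extend(basis_lines[pos:start])   # copy the unchanged region
            out.extend(replacement)              # splice in the new lines
            pos = end                            # skip the replaced basis region
        out.extend(basis_lines[pos:])
        return out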
1.4 - Cache annotation information for the changes: at the moment this
is done as part of the delta storage. There are some flaws in that
approach, such as that it is not updated when ghosts are filled, and
the annotation can't be re-run with new diff parameters.
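What gets cached is, in essence, each line paired with the revision
that introduced it; a sketch of deriving that from the basis
annotation (difflib again as a stand-in, not bzrlib's annotation
code - the diff parameters baked in here are exactly what can't be
re-run later):

    import difflib

    def annotate(basis_annotated, new_lines, new_rev):
        # basis_annotated: list of (revision_id, line) pairs for the basis
        # text; returns the same structure for the new text.
        basis_lines = [line for _, line in basis_annotated]
        sm = difflib.SequenceMatcher(None, basis_lines, new_lines)
        result = []
        for tag, i1, i2, j1, j2 in sm.get_opcodes():
            if tag == 'equal':
                result.extend(basis_annotated[i1:i2])  # keep the old origins
            else:
                result.extend((new_rev, line) for line in new_lines[j1:j2])
        return result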
2.1 - Make the new revision the basis for the tree, and clear the list
of parents. Strictly this is all that's logically necessary, unless
the working tree format requires more work.
The dirstate format does require more work, because it caches the
parent tree data for each file within the working tree data. In
practice this means that every commit rewrites the entire dirstate
file - we could try to avoid rewriting the whole file, but this may be
difficult because variable-length data (the last-changed revision id)
is inserted into many rows.
The current dirstate design then seems to mean that any commit of a
single file imposes a cost proportional to the size of the current
workingtree. Maybe there are other benefits that outweigh this.
Alternatively if it was fast enough for operations to always look at
the original storage of the parent trees we could do without the
cache.
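A toy demonstration of why the variable-length rows force a full
rewrite - the field layout is invented for the example, not the real
dirstate format:

    def serialize_rows(rows):
        # rows: list of (path, file_id, last_changed_revid) tuples
        return b''.join(b'\0'.join(f.encode('utf-8') for f in row) + b'\n'
                        for row in rows)

    rows = [('a.txt', 'a-id', 'rev-1'),
            ('b.txt', 'b-id', 'rev-1')]
    before = serialize_rows(rows)
    rows[0] = ('a.txt', 'a-id', 'rev-2-longer-id')  # commit touched one file
    after = serialize_rows(rows)
    # b.txt's row now starts at a different offset, so the file cannot be
    # patched in place and must be rewritten from the changed row onward.
    assert before.index(b'b.txt') != after.index(b'b.txt')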
2.2 - Record the observed file hashes into the workingtree control
files. For the files that we just committed, we have the information
to store a valid hash cache entry: we know their stat information and
the sha1 of the file contents. This is not strictly necessary to the
speed of commit, but it will be useful later in avoiding reading those
files, and the only cost of doing it now is writing it out.
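Writing those entries out is then pure output; a minimal sketch, with
a one-line-per-entry format invented for the example:

    def write_hashcache(cache_path, entries):
        # entries: iterable of (path, sha1, size, mtime) observed while
        # committing; path last so the other fields parse unambiguously
        with open(cache_path, 'w') as f:
            for path, sha1, size, mtime in entries:
                f.write('%s %d %d %s\n' % (sha1, size, int(mtime), path))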
In fact there are some user interface niceties that complicate this:
3 - Before starting the commit proper, we prompt for a commit message,
and in that commit message editor we show a list of the files that
will be committed: essentially the output of bzr status. This is
basically the same as the list of changes we detect while storing the
commit, but because the user will sometimes change the tree after
opening the commit editor and expect the final state to be committed, I
think we do have to look for changes twice. Since it takes the user a
while to enter a message, this is not a big problem as long as both the
status summary and the commit are individually fast.
4 - As the commit proceeds (or after?) we show another status-like
summary. Just printing the names of modified files as they're stored
would be easy. Recording deleted and renamed files or directories is
more work: this can only be done by reference to the primary parent
tree and requires it be read in. Worse, reporting renames requires
searching by id across the entire parent tree. Possibly full
reporting should be a default-off verbose option because it does
require more work beyond the commit itself.
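With both shapes keyed by file id (a simplifying assumption), rename
reporting looks like this walk over the whole parent shape, which is
where the extra cost comes from:

    def find_renames(parent_shape, new_shape):
        # both arguments: dict of file_id -> path; searching by id means
        # touching every entry of the parent tree, as noted above
        renames = []
        for file_id, old_path in parent_shape.items():
            new_path = new_shape.get(file_id)
            if new_path is not None and new_path != old_path:
                renames.append((file_id, old_path, new_path))
        return renames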
5 - Bazaar currently allows for missing files to be automatically
marked as removed at the time of commit. Leaving aside the ui
consequences, this means that we have to update the working inventory
to mark these files as removed. Since, as discussed above, we always
have to rewrite the dirstate on commit, this is not substantial, though
we should make sure we do this in one pass, not two. I have
previously proposed to make this behaviour a non-default option.
We may need to run hooks or generate signatures during commit, but
they don't seem to have substantial performance consequences.
If one wanted to optimize solely for the speed of commit I think
hash-addressed file-per-text storage like in git (or bzr 0.1) is very
good. Remarkably, it does not need to read the inventory for the
previous revision. For each versioned file, we just need to get its
hash, either by reading the file or validating its stat data. If that
hash is not already in the repository, the file is just copied in and
compressed. As directories are traversed, they're turned into texts
and stored as well, and then finally the revision is too. This does
depend on later doing some delta compression of these texts.
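A minimal sketch of that scheme - the objects/ab/cdef... layout
mirrors git's loose objects, and the zlib compression is an
illustrative choice:

    import hashlib
    import os
    import zlib

    def store_text(repo_dir, text):
        # Store one text, addressed by its sha1; returns the hash.
        sha1 = hashlib.sha1(text).hexdigest()
        path = os.path.join(repo_dir, 'objects', sha1[:2], sha1[2:])
        if os.path.exists(path):
            return sha1                   # same hash: text is already stored
        os.makedirs(os.path.dirname(path), exist_ok=True)
        with open(path, 'wb') as f:
            f.write(zlib.compress(text))  # copy it in and compress, no more
        return sha1

Note that no basis lookup and no inventory read is needed to decide
whether to store.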
Variations on this are possible. Rather than writing a single file
into the repository for each text, we could fold them into a single
collation or pack file. That would create a smaller number of files
in the repository, but looking up a single text would require looking
into their indexes rather than just asking the filesystem.
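That lookup would be along these lines, with an index format invented
for the example:

    def read_text(pack_path, index, sha1):
        # index: dict of sha1 -> (offset, length) into the pack file; the
        # index lookup replaces asking the filesystem for a per-text file
        offset, length = index[sha1]
        with open(pack_path, 'rb') as f:
            f.seek(offset)
            return f.read(length)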
Rather than using hashes we can key texts by file-id/rev-id pairs, as
at present, which has several consequences, both pro and con.
--
Martin