[Bug 720853] Re: bzr crashed with RuntimeError in normpath(): maximum recursion depth exceeded while calling a Python object
John A Meinel
john at arbash-meinel.com
Mon Nov 14 09:55:11 UTC 2011
I think the original fix was approximately ok, we just need to find a
better place to put the check.
It would be nice if we could understand why, in bug #889872, the writer
was succeeding in writing content, but not emitting enough output to
make it look like anything was written.
1) Could line be an empty string? No, it comes from _flatten_node, so at a minimum there are some \x00 bytes in there, even if the key and the references were all empty.
2) I can imagine that it could be exactly 2 bytes, though. Specifically, bytes_out_len is just what zlib.compressobj.compress() has decided it needs to write so far, and it usually buffers quite a bit (a few KB).
When I look at it:
>>> import zlib
>>> o = zlib.compressobj()
>>> o.compress('abcdefghijklmnop')
'x\x9c'
Those are the same bytes you get if you feed it a different string. So I'm guessing that is the 2-byte "this is a zlib compressed structure" header.
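To confirm that guess (shown here in Python 3 syntax, where zlib wants bytes): the first two bytes emitted are identical regardless of the input, because they are the zlib stream header, not compressed data.

```python
import zlib

# The first bytes compress() emits are the stream header, not the data:
# for the default compression level that header is b'x\x9c'. Two
# different inputs therefore produce the same initial two bytes, with
# everything else held in zlib's internal buffer.
out1 = zlib.compressobj().compress(b'abcdefghijklmnop')
out2 = zlib.compressobj().compress(b'a completely different string')
print(out1, out2)  # both b'x\x9c'
```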
3) During normal commits, etc., we use a "soft" packing scheme, but during 'bzr pack' we do a "hard" packing scheme. The specific differences are:
# Tuple of (num_repack_attempts, num_zsync_attempts)
# num_zsync_attempts only has meaning if num_repack_attempts is 0.
_repack_opts_for_speed = (0, 8)
_repack_opts_for_size = (20, 0)
a) That means during a normal commit, we:
   - accept bytes if they will fit without compression;
   - once we fill the buffer, call zobj.flush(Z_SYNC_FLUSH) and re-estimate how many more bytes we can fit;
   - but only use Z_SYNC_FLUSH 8 times; after that, just consider the page FULL.
b) However, during 'pack' we are much more aggressive:
   - Use a similar strategy: add content that we know will fit, then Z_SYNC_FLUSH to get a good estimate of how much compression we expect to get.
   - Once the buffer is 'almost full', take all the data so far and write it to a new zlib compressobj, then try to add just the new content and Z_SYNC_FLUSH it. That way, instead of having N Z_SYNC_FLUSH points consuming space in our stream, we only have the one at the end.
   - At this point, the new content may not fit. So we fall back: write the existing data to a compressed stream, mark it finalized, and return that we could not add the new content.
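The Z_SYNC_FLUSH-based size estimate can be sketched roughly as follows. This is a minimal sketch with hypothetical names and an illustrative page size, not bzrlib's actual ChunkWriter code: flushing with Z_SYNC_FLUSH forces every buffered byte out, so the output length is a true measure of the compressed size so far.

```python
import zlib

PAGE_SIZE = 4096  # illustrative page size, not bzrlib's actual constant

def compressed_size_so_far(data):
    """Compress data and Z_SYNC_FLUSH, so every byte zlib has buffered
    is forced into the output, giving a true size estimate."""
    z = zlib.compressobj()
    out = z.compress(data)
    out += z.flush(zlib.Z_SYNC_FLUSH)
    return len(out)

def fits_in_page(data_so_far, new_bytes):
    """Would the page still fit if we appended new_bytes?"""
    return compressed_size_so_far(data_so_far + new_bytes) <= PAGE_SIZE
```

Highly compressible content (long runs of repeated keys) fits easily; random content of the same length does not, which is exactly why the estimate has to be made on the compressed size rather than the raw byte count.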
So actually 3b) sounds like during 'bzr pack' we'll pretty much always
get bytes_out_len == 2. Because we write content to the PAGE until we're
absolutely sure we can't fit any more, and then we create a new zlib
object to make it as small as possible. However, because we aren't
calling Z_SYNC_FLUSH and the zlib buffer is probably bigger than the
number of bytes we've written, the only bytes that have been output are
the zlib header, and all the other bytes are sitting in the zlib buffer
waiting for .finish() to be called and for us to flush() the compressor.
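That matches what you see if you feed a fresh compressobj a small amount of data and never flush it (Python 3 syntax, with a made-up payload): everything stays in zlib's buffer and only the 2-byte header comes out.

```python
import zlib

z = zlib.compressobj()
# Write well under zlib's internal buffer size and never flush.
emitted = z.compress(b'\x00key\x00value\n' * 100)
print(len(emitted))  # 2: just the zlib header; the data is still buffered

# Only a flush (or Z_SYNC_FLUSH) forces the buffered data out:
emitted += z.flush()
print(len(emitted) > 2)  # True
```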
So what would I say is better...
The code we are changing is in _add_key. It has an earlier check, "rows[-1].writer is None". If that is True, then we are creating a new PAGE and the content starts out empty. So just do this:
=== modified file 'bzrlib/btree_index.py'
--- bzrlib/btree_index.py	2011-08-19 22:34:02 +0000
+++ bzrlib/btree_index.py	2011-11-14 09:54:20 +0000
@@ -294,7 +294,9 @@
             flag when writing out. This is used by the _spill_mem_keys_to_disk
             functionality.
         """
+        new_leaf = False
         if rows[-1].writer is None:
+            new_leaf = True
             # opening a new leaf chunk;
             for pos, internal_row in enumerate(rows[:-1]):
                 # flesh out any internal nodes that are needed to
@@ -320,6 +322,10 @@
                 optimize_for_size=self._optimize_for_size)
             rows[-1].writer.write(_LEAF_FLAG)
         if rows[-1].writer.write(line):
+            if new_leaf:
+                # We just created this leaf, and now the line doesn't fit.
+                # Clearly it will never fit, so punt.
+                raise errors.BadIndexKey(...)
             # this key did not fit in the node:
             rows[-1].finish_node()
             key_line = string_key + "\n"
--
You received this bug notification because you are a member of Ubuntu
Foundations Bugs, which is subscribed to bzr in Ubuntu.
https://bugs.launchpad.net/bugs/720853
Title:
bzr crashed with RuntimeError in normpath(): maximum recursion depth
exceeded while calling a Python object
Status in Bazaar Version Control System:
In Progress
Status in bzr-search:
Confirmed
Status in “bzr” package in Ubuntu:
Confirmed
Bug description:
Binary package hint: bzr
bzr index always fails on lp:ubiquity.
ProblemType: Crash
DistroRelease: Ubuntu 11.04
Package: bzr 2.3.0-2
ProcVersionSignature: Ubuntu 2.6.38-3.30-generic 2.6.38-rc4
Uname: Linux 2.6.38-3-generic x86_64
Architecture: amd64
BzrDebugFlags: set()
BzrVersion: 2.3.0
CommandLine: ['/usr/bin/bzr', 'index']
CrashDb: bzr
Date: Thu Feb 17 11:12:31 2011
ExecutablePath: /usr/bin/bzr
FileSystemEncoding: UTF-8
InstallationMedia: Ubuntu 10.04 "Lucid Lynx" - Alpha amd64 (20100109)
InterpreterPath: /usr/bin/python2.7
Locale: en_GB.UTF-8
PackageArchitecture: all
Platform: Linux-2.6.38-3-generic-x86_64-with-Ubuntu-11.04-natty
ProcCmdline: /usr/bin/python /usr/bin/bzr index
PythonVersion: 2.7.1
SourcePackage: bzr
Title: bzr crashed with RuntimeError in normpath(): maximum recursion depth exceeded while calling a Python object
UserEncoding: UTF-8
UserGroups: adm admin cdrom dialout libvirtd lpadmin plugdev sambashare
Traceback (most recent call last):
File "/usr/lib/pymodules/python2.7/bzrlib/commands.py", line 923, in exception_to_return_code
return the_callable(*args, **kwargs)
File "/usr/lib/pymodules/python2.7/bzrlib/commands.py", line 1123, in run_bzr
ret = run(*run_argv)
File "/usr/lib/pymodules/python2.7/bzrlib/commands.py", line 691, in run_argv_aliases
return self.run(**all_cmd_args)
File "/usr/lib/pymodules/python2.7/bzrlib/commands.py", line 710, in run
return self._operation.run_simple(*args, **kwargs)
File "/usr/lib/pymodules/python2.7/bzrlib/cleanup.py", line 135, in run_simple
self.cleanups, self.func, *args, **kwargs)
File "/usr/lib/pymodules/python2.7/bzrlib/cleanup.py", line 165, in _do_with_cleanups
result = func(*args, **kwargs)
File "/usr/lib/python2.7/dist-packages/bzrlib/plugins/search/commands.py", line 41, in run
_mod_index.index_url(trans.base)
File "/usr/lib/python2.7/dist-packages/bzrlib/plugins/search/index.py", line 169, in index_url
index.index_revisions(branch, revs_to_index)
File "/usr/lib/python2.7/dist-packages/bzrlib/plugins/search/index.py", line 341, in index_revisions
outer_bar)
File "/usr/lib/python2.7/dist-packages/bzrlib/plugins/search/index.py", line 389, in _index_revisions
self._add_index(builder)
File "/usr/lib/python2.7/dist-packages/bzrlib/plugins/search/index.py", line 402, in _add_index
self._upload_transport)
File "/usr/lib/python2.7/dist-packages/bzrlib/plugins/search/index.py", line 1128, in upload_index
term_indices[len(term)].add_node(term, term_value, ())
File "/usr/lib/pymodules/python2.7/bzrlib/btree_index.py", line 174, in add_node
self._spill_mem_keys_to_disk()
File "/usr/lib/pymodules/python2.7/bzrlib/btree_index.py", line 191, in _spill_mem_keys_to_disk
backing_pos) = self._spill_mem_keys_and_combine()
File "/usr/lib/pymodules/python2.7/bzrlib/btree_index.py", line 225, in _spill_mem_keys_and_combine
allow_optimize=False)
File "/usr/lib/pymodules/python2.7/bzrlib/btree_index.py", line 385, in _write_nodes
self._add_key(string_key, line, rows, allow_optimize=allow_optimize)
File "/usr/lib/pymodules/python2.7/bzrlib/btree_index.py", line 351, in _add_key
self._add_key(string_key, line, rows, allow_optimize=allow_optimize)
.....
File "/usr/lib/pymodules/python2.7/bzrlib/btree_index.py", line 351, in _add_key
self._add_key(string_key, line, rows, allow_optimize=allow_optimize)
File "/usr/lib/pymodules/python2.7/bzrlib/btree_index.py", line 351, in _add_key
self._add_key(string_key, line, rows, allow_optimize=allow_optimize)
File "/usr/lib/pymodules/python2.7/bzrlib/btree_index.py", line 328, in _add_key
row.finish_node()
File "/usr/lib/pymodules/python2.7/bzrlib/btree_index.py", line 98, in finish_node
_BuilderRow.finish_node(self)
File "/usr/lib/pymodules/python2.7/bzrlib/btree_index.py", line 76, in finish_node
spool = tempfile.TemporaryFile(prefix='bzr-index-row-')
File "/usr/lib/python2.7/tempfile.py", line 478, in TemporaryFile
(fd, name) = _mkstemp_inner(dir, prefix, suffix, flags)
File "/usr/lib/python2.7/tempfile.py", line 230, in _mkstemp_inner
return (fd, _os.path.abspath(file))
File "/usr/lib/python2.7/posixpath.py", line 349, in abspath
return normpath(path)
File "/usr/lib/python2.7/posixpath.py", line 315, in normpath
slash, dot = (u'/', u'.') if isinstance(path, unicode) else ('/', '.')
RuntimeError: maximum recursion depth exceeded while calling a Python object
To manage notifications about this bug go to:
https://bugs.launchpad.net/bzr/+bug/720853/+subscriptions