Rev 3888: (jam) Add a FIFOCache class, to allow max-size with less overhead, in file:///home/pqm/archives/thelove/bzr/%2Btrunk/
Canonical.com Patch Queue Manager
pqm at pqm.ubuntu.com
Wed Dec 10 01:19:37 GMT 2008
At file:///home/pqm/archives/thelove/bzr/%2Btrunk/
------------------------------------------------------------
revno: 3888
revision-id: pqm at pqm.ubuntu.com-20081210011933-axdrxiq306imj2ty
parent: pqm at pqm.ubuntu.com-20081210004116-qqips4wfg800p4ku
parent: john at arbash-meinel.com-20081210000733-5h1j6enwe37ymnuj
committer: Canonical.com Patch Queue Manager <pqm at pqm.ubuntu.com>
branch nick: +trunk
timestamp: Wed 2008-12-10 01:19:33 +0000
message:
(jam) Add a FIFOCache class, to allow max-size with less overhead,
though lower hit rate.
added:
bzrlib/fifo_cache.py fifo_cache.py-20081209212307-31ffjwvteyvmydnf-1
bzrlib/tests/test_fifo_cache.py test_fifo_cache.py-20081209212307-31ffjwvteyvmydnf-2
modified:
bzrlib/tests/__init__.py selftest.py-20050531073622-8d0e3c8845c97a64
------------------------------------------------------------
revno: 3885.1.8
revision-id: john at arbash-meinel.com-20081210000733-5h1j6enwe37ymnuj
parent: john at arbash-meinel.com-20081209222640-u3iece2ixcd0q7lj
committer: John Arbash Meinel <john at arbash-meinel.com>
branch nick: fifo_cache
timestamp: Tue 2008-12-09 18:07:33 -0600
message:
Handle the fact that Python 2.4 doesn't have collections.deque.remove
modified:
bzrlib/fifo_cache.py fifo_cache.py-20081209212307-31ffjwvteyvmydnf-1
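The compatibility trick this revision describes — use deque.remove where it exists, otherwise rebuild the deque without the key — can be illustrated with a small standalone sketch. The helper name `remove_from_queue` is illustrative, not part of the patch:

```python
from collections import deque

def remove_from_queue(queue, key):
    """Remove one occurrence of key from queue, in place.

    Uses deque.remove where available (Python >= 2.5); otherwise
    rebuilds the queue without the key, mirroring the fallback
    approach taken in the patch.
    """
    remove = getattr(queue, 'remove', None)
    if remove is not None:
        remove(key)
    else:
        # Fallback: rebuild without the key (keeps relative order).
        remaining = deque(k for k in queue if k != key)
        queue.clear()
        queue.extend(remaining)

q = deque([1, 2, 3, 4])
remove_from_queue(q, 3)
print(list(q))  # → [1, 2, 4]
```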
------------------------------------------------------------
revno: 3885.1.7
revision-id: john at arbash-meinel.com-20081209222640-u3iece2ixcd0q7lj
parent: john at arbash-meinel.com-20081209220341-9d70gll0szipelao
committer: John Arbash Meinel <john at arbash-meinel.com>
branch nick: fifo_cache
timestamp: Tue 2008-12-09 16:26:40 -0600
message:
Add a FIFOSizeCache which is constrained based on the size of the values.
modified:
bzrlib/fifo_cache.py fifo_cache.py-20081209212307-31ffjwvteyvmydnf-1
bzrlib/tests/test_fifo_cache.py test_fifo_cache.py-20081209212307-31ffjwvteyvmydnf-2
------------------------------------------------------------
revno: 3885.1.6
revision-id: john at arbash-meinel.com-20081209220341-9d70gll0szipelao
parent: john at arbash-meinel.com-20081209220003-05o6ymacid6io8mn
committer: John Arbash Meinel <john at arbash-meinel.com>
branch nick: fifo_cache
timestamp: Tue 2008-12-09 16:03:41 -0600
message:
Test that del x[foo] also triggers a cleanup.
modified:
bzrlib/tests/test_fifo_cache.py test_fifo_cache.py-20081209212307-31ffjwvteyvmydnf-2
------------------------------------------------------------
revno: 3885.1.5
revision-id: john at arbash-meinel.com-20081209220003-05o6ymacid6io8mn
parent: john at arbash-meinel.com-20081209215736-6rvlqtzobqbwqn2m
committer: John Arbash Meinel <john at arbash-meinel.com>
branch nick: fifo_cache
timestamp: Tue 2008-12-09 16:00:03 -0600
message:
Add tests that *adding* an entry also triggers the cleanup code.
modified:
bzrlib/tests/test_fifo_cache.py test_fifo_cache.py-20081209212307-31ffjwvteyvmydnf-2
------------------------------------------------------------
revno: 3885.1.4
revision-id: john at arbash-meinel.com-20081209215736-6rvlqtzobqbwqn2m
parent: john at arbash-meinel.com-20081209215038-uevgqhtrnlgnkecj
committer: John Arbash Meinel <john at arbash-meinel.com>
branch nick: fifo_cache
timestamp: Tue 2008-12-09 15:57:36 -0600
message:
Implement setdefault.
modified:
bzrlib/fifo_cache.py fifo_cache.py-20081209212307-31ffjwvteyvmydnf-1
bzrlib/tests/test_fifo_cache.py test_fifo_cache.py-20081209212307-31ffjwvteyvmydnf-2
------------------------------------------------------------
revno: 3885.1.3
revision-id: john at arbash-meinel.com-20081209215038-uevgqhtrnlgnkecj
parent: john at arbash-meinel.com-20081209213917-c5ko2gvw831txpt8
committer: John Arbash Meinel <john at arbash-meinel.com>
branch nick: fifo_cache
timestamp: Tue 2008-12-09 15:50:38 -0600
message:
Implement update
modified:
bzrlib/fifo_cache.py fifo_cache.py-20081209212307-31ffjwvteyvmydnf-1
bzrlib/tests/test_fifo_cache.py test_fifo_cache.py-20081209212307-31ffjwvteyvmydnf-2
------------------------------------------------------------
revno: 3885.1.2
revision-id: john at arbash-meinel.com-20081209213917-c5ko2gvw831txpt8
parent: john at arbash-meinel.com-20081209213549-yc1mqv3l5gun9c63
committer: John Arbash Meinel <john at arbash-meinel.com>
branch nick: fifo_cache
timestamp: Tue 2008-12-09 15:39:17 -0600
message:
Add a test which fails because we don't call cleanup funcs during deconstruction.
modified:
bzrlib/tests/test_fifo_cache.py test_fifo_cache.py-20081209212307-31ffjwvteyvmydnf-2
------------------------------------------------------------
revno: 3885.1.1
revision-id: john at arbash-meinel.com-20081209213549-yc1mqv3l5gun9c63
parent: pqm at pqm.ubuntu.com-20081209163533-fj6hx9l65sretbai
committer: John Arbash Meinel <john at arbash-meinel.com>
branch nick: fifo_cache
timestamp: Tue 2008-12-09 15:35:49 -0600
message:
Start working on a FIFOCache.
This is designed to have less runtime cost than the LRUCache,
mostly because we don't have to do any recording work on *access*,
only on update or delete.
As such, we subclass dict directly, because experiments show that
it performed the best. Unfortunately, even though we don't have
a custom __getitem__ implementation, it is still approx 2x slower
than using a raw dict.
added:
bzrlib/fifo_cache.py fifo_cache.py-20081209212307-31ffjwvteyvmydnf-1
bzrlib/tests/test_fifo_cache.py test_fifo_cache.py-20081209212307-31ffjwvteyvmydnf-2
modified:
bzrlib/tests/__init__.py selftest.py-20050531073622-8d0e3c8845c97a64
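The design described in the message above — a dict subclass plus a deque recording insertion order, with bookkeeping only on insert, never on access — can be sketched standalone. The class and variable names here are illustrative, not the bzrlib API, and the sketch omits the cleanup-function machinery:

```python
from collections import deque

class MiniFIFOCache(dict):
    """Minimal FIFO cache sketch: a dict plus a deque of insertion order.

    Nothing is recorded on *access*, so reads are as cheap as a plain
    dict lookup; bookkeeping happens only when entries are added.
    """

    def __init__(self, max_cache=100):
        dict.__init__(self)
        self._max_cache = max_cache
        self._queue = deque()  # keys, oldest first

    def __setitem__(self, key, value):
        if key in self:
            # Re-adding a key bumps it to the back of the queue.
            self._queue.remove(key)
        self._queue.append(key)
        dict.__setitem__(self, key, value)
        while len(self) > self._max_cache:
            dict.__delitem__(self, self._queue.popleft())

cache = MiniFIFOCache(max_cache=3)
for k in 'abcd':
    cache[k] = k.upper()
print(sorted(cache))  # → ['b', 'c', 'd']  ('a' was oldest, evicted)
```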
=== added file 'bzrlib/fifo_cache.py'
--- a/bzrlib/fifo_cache.py 1970-01-01 00:00:00 +0000
+++ b/bzrlib/fifo_cache.py 2008-12-10 00:07:33 +0000
@@ -0,0 +1,228 @@
+# Copyright (C) 2008 Canonical Ltd
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+"""A simple first-in-first-out (FIFO) cache."""
+
+from collections import deque
+
+
+class FIFOCache(dict):
+ """A class which manages a cache of entries, removing old ones."""
+
+ def __init__(self, max_cache=100, after_cleanup_count=None):
+ dict.__init__(self)
+ self._max_cache = max_cache
+ if after_cleanup_count is None:
+ self._after_cleanup_count = self._max_cache * 8 / 10
+ else:
+ self._after_cleanup_count = min(after_cleanup_count,
+ self._max_cache)
+ self._cleanup = {} # map to cleanup functions when items are removed
+ self._queue = deque() # Track when things are accessed
+
+ def __setitem__(self, key, value):
+ """Add a value to the cache; there will be no cleanup function."""
+ self.add(key, value, cleanup=None)
+
+ def __delitem__(self, key):
+ # Remove the key from an arbitrary location in the queue
+ remove = getattr(self._queue, 'remove', None)
+ # Python 2.5 has deque.remove, but Python 2.4 does not
+ if remove is not None:
+ remove(key)
+ else:
+ # TODO: It would probably be faster to pop()/popleft() until we get to the
+ # key, and then insert those back into the queue. We know
+ # the key should only be present in one position, and we
+ # wouldn't need to rebuild the whole queue.
+ self._queue = deque([k for k in self._queue if k != key])
+ self._remove(key)
+
+ def add(self, key, value, cleanup=None):
+ """Add a new value to the cache.
+
+ Also, if the entry is ever removed from the queue, call cleanup,
+ passing it the key and value being removed.
+
+ :param key: The key to store it under
+ :param value: The object to store
+ :param cleanup: None or a function taking (key, value) to indicate
+ 'value' should be cleaned up
+ """
+ if key in self:
+ # Remove the earlier reference to this key; adding it again bumps
+ # it to the end of the queue
+ del self[key]
+ self._queue.append(key)
+ dict.__setitem__(self, key, value)
+ if cleanup is not None:
+ self._cleanup[key] = cleanup
+ if len(self) > self._max_cache:
+ self.cleanup()
+
+ def cleanup(self):
+ """Clear the cache until it shrinks to the requested size.
+
+ This does not completely wipe the cache, just makes sure it is under
+ the after_cleanup_count.
+ """
+ # Make sure the cache is shrunk to the correct size
+ while len(self) > self._after_cleanup_count:
+ self._remove_oldest()
+ if len(self._queue) != len(self):
+ raise AssertionError('The length of the queue should always equal'
+ ' the length of the dict. %s != %s'
+ % (len(self._queue), len(self)))
+
+ def clear(self):
+ """Clear out all of the cache."""
+ # Clean up in FIFO order
+ while self:
+ self._remove_oldest()
+
+ def _remove(self, key):
+ """Remove an entry, making sure to call any cleanup function."""
+ cleanup = self._cleanup.pop(key, None)
+ # We override self.pop() because it doesn't play well with cleanup
+ # functions.
+ val = dict.pop(self, key)
+ if cleanup is not None:
+ cleanup(key, val)
+ return val
+
+ def _remove_oldest(self):
+ """Remove the oldest entry."""
+ key = self._queue.popleft()
+ self._remove(key)
+
+ # raise NotImplementedError on dict functions that would mutate the cache
+ # which have not been properly implemented yet.
+ def copy(self):
+ raise NotImplementedError(self.copy)
+
+ def pop(self, key, default=None):
+ # If there is a cleanup() function, then it is unclear what pop()
+ # should do. Specifically, we would have to call the cleanup on the
+ # value before we return it, which should cause whatever resources were
+ # allocated to be removed, which makes the return value fairly useless.
+ # So instead, we just don't implement it.
+ raise NotImplementedError(self.pop)
+
+ def popitem(self):
+ # See pop()
+ raise NotImplementedError(self.popitem)
+
+ def setdefault(self, key, defaultval=None):
+ """Similar to dict.setdefault()."""
+ if key in self:
+ return self[key]
+ self[key] = defaultval
+ return defaultval
+
+ def update(self, *args, **kwargs):
+ """Similar to dict.update()"""
+ if len(args) == 1:
+ arg = args[0]
+ if isinstance(arg, dict):
+ for key, val in arg.iteritems():
+ self.add(key, val)
+ else:
+ for key, val in args[0]:
+ self.add(key, val)
+ elif len(args) > 1:
+ raise TypeError('update expected at most 1 argument, got %d'
+ % len(args))
+ if kwargs:
+ for key, val in kwargs.iteritems():
+ self.add(key, val)
+
+
+class FIFOSizeCache(FIFOCache):
+ """A FIFOCache that removes things based on the size of the values.
+
+ This differs in that it doesn't care how many actual items there are;
+ it restricts the cache to be cleaned based on the size of the data.
+ """
+
+ def __init__(self, max_size=1024*1024, after_cleanup_size=None,
+ compute_size=None):
+ """Create a new FIFOSizeCache.
+
+ :param max_size: The max number of bytes to store before we start
+ clearing out entries.
+ :param after_cleanup_size: After cleaning up, shrink everything to this
+ size (defaults to 80% of max_size).
+ :param compute_size: A function to compute the size of a value. If
+ not supplied we default to 'len'.
+ """
+ # The max_cache count is arbitrary; we won't really be using it anyway.
+ FIFOCache.__init__(self, max_cache=max_size)
+ self._max_size = max_size
+ if after_cleanup_size is None:
+ self._after_cleanup_size = self._max_size * 8 / 10
+ else:
+ self._after_cleanup_size = min(after_cleanup_size, self._max_size)
+
+ self._value_size = 0
+ self._compute_size = compute_size
+ if compute_size is None:
+ self._compute_size = len
+
+ def add(self, key, value, cleanup=None):
+ """Add a new value to the cache.
+
+ Also, if the entry is ever removed from the queue, call cleanup,
+ passing it the key and value being removed.
+
+ :param key: The key to store it under
+ :param value: The object to store. If this value by itself is >=
+ after_cleanup_size, then we will not store it at all.
+ :param cleanup: None or a function taking (key, value) to indicate
+ 'value' should be cleaned up.
+ """
+ # Even if the new value won't be stored, we need to remove the old
+ # value
+ if key in self:
+ # Remove the earlier reference to this key; adding it again bumps
+ # it to the end of the queue
+ del self[key]
+ value_len = self._compute_size(value)
+ if value_len >= self._after_cleanup_size:
+ return
+ self._queue.append(key)
+ dict.__setitem__(self, key, value)
+ if cleanup is not None:
+ self._cleanup[key] = cleanup
+ self._value_size += value_len
+ if self._value_size > self._max_size:
+ # Time to cleanup
+ self.cleanup()
+
+ def cleanup(self):
+ """Clear the cache until it shrinks to the requested size.
+
+ This does not completely wipe the cache, just makes sure it is under
+ the after_cleanup_size.
+ """
+ # Make sure the cache is shrunk to the correct size
+ while self._value_size > self._after_cleanup_size:
+ self._remove_oldest()
+
+ def _remove(self, key):
+ """Remove an entry, making sure to maintain the invariants."""
+ val = FIFOCache._remove(self, key)
+ self._value_size -= self._compute_size(val)
+ return val
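The size-constrained variant above can be sketched standalone: evict oldest entries until the total size fits, and refuse values too large to ever fit. This simplifies the patch's two thresholds (max_size and after_cleanup_size) down to a single max_size, and the names are illustrative rather than bzrlib's:

```python
from collections import deque

class MiniSizeCache(dict):
    """FIFO cache bounded by total value size rather than entry count."""

    def __init__(self, max_size=100, compute_size=len):
        dict.__init__(self)
        self._max_size = max_size
        self._compute_size = compute_size
        self._value_size = 0  # running total of stored value sizes
        self._queue = deque()

    def __setitem__(self, key, value):
        if key in self:
            # Drop the old entry even if the new value won't be stored.
            old = dict.pop(self, key)
            self._value_size -= self._compute_size(old)
            self._queue.remove(key)
        size = self._compute_size(value)
        if size >= self._max_size:
            return  # too large to ever fit; don't cache at all
        self._queue.append(key)
        dict.__setitem__(self, key, value)
        self._value_size += size
        while self._value_size > self._max_size:
            oldest = self._queue.popleft()
            self._value_size -= self._compute_size(dict.pop(self, oldest))

c = MiniSizeCache(max_size=10)
c['a'] = 'xxxx'   # 4 bytes
c['b'] = 'yyyy'   # 4 bytes
c['c'] = 'zzzz'   # 4 bytes -> total 12 > 10, so 'a' is evicted
print(sorted(c))  # → ['b', 'c']
```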
=== modified file 'bzrlib/tests/__init__.py'
--- a/bzrlib/tests/__init__.py 2008-11-25 03:05:14 +0000
+++ b/bzrlib/tests/__init__.py 2008-12-09 21:35:49 +0000
@@ -2804,6 +2804,7 @@
'bzrlib.tests.test_errors',
'bzrlib.tests.test_extract',
'bzrlib.tests.test_fetch',
+ 'bzrlib.tests.test_fifo_cache',
'bzrlib.tests.test_ftp_transport',
'bzrlib.tests.test_foreign',
'bzrlib.tests.test_generate_docs',
=== added file 'bzrlib/tests/test_fifo_cache.py'
--- a/bzrlib/tests/test_fifo_cache.py 1970-01-01 00:00:00 +0000
+++ b/bzrlib/tests/test_fifo_cache.py 2008-12-09 22:26:40 +0000
@@ -0,0 +1,250 @@
+# Copyright (C) 2008 Canonical Ltd
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+"""Tests for the fifo_cache module."""
+
+from bzrlib import (
+ fifo_cache,
+ tests,
+ )
+
+
+class TestFIFOCache(tests.TestCase):
+ """Test that FIFO cache properly keeps track of entries."""
+
+ def test_add_is_present(self):
+ c = fifo_cache.FIFOCache()
+ c[1] = 2
+ self.assertTrue(1 in c)
+ self.assertEqual(1, len(c))
+ self.assertEqual(2, c[1])
+ self.assertEqual(2, c.get(1))
+ self.assertEqual(2, c.get(1, None))
+ self.assertEqual([1], c.keys())
+ self.assertEqual([1], list(c.iterkeys()))
+ self.assertEqual([(1, 2)], c.items())
+ self.assertEqual([(1, 2)], list(c.iteritems()))
+ self.assertEqual([2], c.values())
+ self.assertEqual([2], list(c.itervalues()))
+ self.assertEqual({1: 2}, c)
+
+ def test_missing(self):
+ c = fifo_cache.FIFOCache()
+ self.assertRaises(KeyError, c.__getitem__, 1)
+ self.assertFalse(1 in c)
+ self.assertEqual(0, len(c))
+ self.assertEqual(None, c.get(1))
+ self.assertEqual(None, c.get(1, None))
+ self.assertEqual([], c.keys())
+ self.assertEqual([], list(c.iterkeys()))
+ self.assertEqual([], c.items())
+ self.assertEqual([], list(c.iteritems()))
+ self.assertEqual([], c.values())
+ self.assertEqual([], list(c.itervalues()))
+ self.assertEqual({}, c)
+
+ def test_add_maintains_fifo(self):
+ c = fifo_cache.FIFOCache(4, 4)
+ c[1] = 2
+ c[2] = 3
+ c[3] = 4
+ c[4] = 5
+ self.assertEqual([1, 2, 3, 4], sorted(c.keys()))
+ c[5] = 6
+ # This should pop out the oldest entry
+ self.assertEqual([2, 3, 4, 5], sorted(c.keys()))
+ # Replacing an item doesn't change the stored keys
+ c[2] = 7
+ self.assertEqual([2, 3, 4, 5], sorted(c.keys()))
+ # But it does change the position in the FIFO
+ c[6] = 7
+ self.assertEqual([2, 4, 5, 6], sorted(c.keys()))
+ self.assertEqual([4, 5, 2, 6], list(c._queue))
+
+ def test_default_after_cleanup_count(self):
+ c = fifo_cache.FIFOCache(5)
+ self.assertEqual(4, c._after_cleanup_count)
+ c[1] = 2
+ c[2] = 3
+ c[3] = 4
+ c[4] = 5
+ c[5] = 6
+ # So far, everything fits
+ self.assertEqual([1, 2, 3, 4, 5], sorted(c.keys()))
+ c[6] = 7
+ # But adding one more should shrink down to after_cleanup_count
+ self.assertEqual([3, 4, 5, 6], sorted(c.keys()))
+
+ def test_clear(self):
+ c = fifo_cache.FIFOCache(5)
+ c[1] = 2
+ c[2] = 3
+ c[3] = 4
+ c[4] = 5
+ c[5] = 6
+ c.cleanup()
+ self.assertEqual([2, 3, 4, 5], sorted(c.keys()))
+ c.clear()
+ self.assertEqual([], c.keys())
+ self.assertEqual([], list(c._queue))
+ self.assertEqual({}, c)
+
+ def test_copy_not_implemented(self):
+ c = fifo_cache.FIFOCache()
+ self.assertRaises(NotImplementedError, c.copy)
+
+ def test_pop_not_implemented(self):
+ c = fifo_cache.FIFOCache()
+ self.assertRaises(NotImplementedError, c.pop, 'key')
+
+ def test_popitem_not_implemented(self):
+ c = fifo_cache.FIFOCache()
+ self.assertRaises(NotImplementedError, c.popitem)
+
+ def test_setdefault(self):
+ c = fifo_cache.FIFOCache(5, 4)
+ c['one'] = 1
+ c['two'] = 2
+ c['three'] = 3
+ myobj = object()
+ self.assertIs(myobj, c.setdefault('four', myobj))
+ self.assertEqual({'one': 1, 'two': 2, 'three': 3, 'four': myobj}, c)
+ self.assertEqual(3, c.setdefault('three', myobj))
+ c.setdefault('five', myobj)
+ c.setdefault('six', myobj)
+ self.assertEqual({'three': 3, 'four': myobj, 'five': myobj,
+ 'six': myobj}, c)
+
+ def test_update(self):
+ c = fifo_cache.FIFOCache(5, 4)
+ # We allow an iterable
+ c.update([(1, 2), (3, 4)])
+ self.assertEqual({1: 2, 3: 4}, c)
+ # Or kwarg form
+ c.update(foo=3, bar=4)
+ self.assertEqual({1: 2, 3: 4, 'foo': 3, 'bar': 4}, c)
+ # Even a dict (This triggers a cleanup)
+ c.update({'baz': 'biz', 'bing': 'bang'})
+ self.assertEqual({'foo': 3, 'bar': 4, 'baz': 'biz', 'bing': 'bang'}, c)
+ # We only allow 1 iterable, just like dict
+ self.assertRaises(TypeError, c.update, [(1, 2)], [(3, 4)])
+ # But you can mix and match. kwargs take precedence over iterable
+ c.update([('a', 'b'), ('d', 'e')], a='c', q='r')
+ self.assertEqual({'baz': 'biz', 'bing': 'bang',
+ 'a': 'c', 'd': 'e', 'q': 'r'}, c)
+
+ def test_cleanup_funcs(self):
+ log = []
+ def logging_cleanup(key, value):
+ log.append((key, value))
+ c = fifo_cache.FIFOCache(5, 4)
+ c.add(1, 2, cleanup=logging_cleanup)
+ c.add(2, 3, cleanup=logging_cleanup)
+ c.add(3, 4, cleanup=logging_cleanup)
+ c.add(4, 5, cleanup=None) # no cleanup for 4
+ c[5] = 6 # no cleanup for 5
+ self.assertEqual([], log)
+ # Adding another key should cleanup 1 & 2
+ c.add(6, 7, cleanup=logging_cleanup)
+ self.assertEqual([(1, 2), (2, 3)], log)
+ del log[:]
+ # replacing 3 should trigger a cleanup
+ c.add(3, 8, cleanup=logging_cleanup)
+ self.assertEqual([(3, 4)], log)
+ del log[:]
+ c[3] = 9
+ self.assertEqual([(3, 8)], log)
+ del log[:]
+ # Clearing everything should call all remaining cleanups
+ c.clear()
+ self.assertEqual([(6, 7)], log)
+ del log[:]
+ c.add(8, 9, cleanup=logging_cleanup)
+ # __delitem__ should also trigger a cleanup
+ del c[8]
+ self.assertEqual([(8, 9)], log)
+
+ def test_cleanup_at_deconstruct(self):
+ log = []
+ def logging_cleanup(key, value):
+ log.append((key, value))
+ c = fifo_cache.FIFOCache()
+ c.add(1, 2, cleanup=logging_cleanup)
+ del c
+ # TODO: We currently don't support calling the cleanup() funcs during
+ # __del__. We might want to consider implementing this.
+ self.expectFailure("we don't call cleanups during __del__",
+ self.assertEqual, [(1, 2)], log)
+ self.assertEqual([(1, 2)], log)
+
+
+class TestFIFOSizeCache(tests.TestCase):
+
+ def test_add_is_present(self):
+ c = fifo_cache.FIFOSizeCache()
+ c[1] = '2'
+ self.assertTrue(1 in c)
+ self.assertEqual(1, len(c))
+ self.assertEqual('2', c[1])
+ self.assertEqual('2', c.get(1))
+ self.assertEqual('2', c.get(1, None))
+ self.assertEqual([1], c.keys())
+ self.assertEqual([1], list(c.iterkeys()))
+ self.assertEqual([(1, '2')], c.items())
+ self.assertEqual([(1, '2')], list(c.iteritems()))
+ self.assertEqual(['2'], c.values())
+ self.assertEqual(['2'], list(c.itervalues()))
+ self.assertEqual({1: '2'}, c)
+
+ def test_missing(self):
+ c = fifo_cache.FIFOSizeCache()
+ self.assertRaises(KeyError, c.__getitem__, 1)
+ self.assertFalse(1 in c)
+ self.assertEqual(0, len(c))
+ self.assertEqual(None, c.get(1))
+ self.assertEqual(None, c.get(1, None))
+ self.assertEqual([], c.keys())
+ self.assertEqual([], list(c.iterkeys()))
+ self.assertEqual([], c.items())
+ self.assertEqual([], list(c.iteritems()))
+ self.assertEqual([], c.values())
+ self.assertEqual([], list(c.itervalues()))
+ self.assertEqual({}, c)
+
+ def test_add_maintains_fifo(self):
+ c = fifo_cache.FIFOSizeCache(10, 8)
+ c[1] = 'ab'
+ c[2] = 'cde'
+ c[3] = 'fghi'
+ self.assertEqual({1: 'ab', 2: 'cde', 3: 'fghi'}, c)
+ c[4] = 'jkl' # Collapse
+ self.assertEqual({3: 'fghi', 4: 'jkl'}, c)
+ # Replacing an item will bump it to the end of the queue
+ c[3] = 'mnop'
+ self.assertEqual({3: 'mnop', 4: 'jkl'}, c)
+ c[5] = 'qrst'
+ self.assertEqual({3: 'mnop', 5: 'qrst'}, c)
+
+ def test_adding_large_value(self):
+ c = fifo_cache.FIFOSizeCache(10, 8)
+ c[1] = 'abcdefgh' # Adding a too-large value won't get cached at all
+ self.assertEqual({}, c)
+ c[1] = 'abcdefg'
+ self.assertEqual({1: 'abcdefg'}, c)
+ # Replacing with a too-large value will remove the entry
+ c[1] = 'abcdefgh'
+ self.assertEqual({}, c)
+ self.assertEqual(0, c._value_size)
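The cleanup-callback behavior exercised by test_cleanup_funcs can be demonstrated with a small standalone sketch; as before, the names are illustrative and this is not the bzrlib implementation:

```python
from collections import deque

class MiniCleanupCache(dict):
    """FIFO cache that calls per-entry cleanup functions on removal."""

    def __init__(self, max_cache=3):
        dict.__init__(self)
        self._max_cache = max_cache
        self._cleanup = {}  # key -> cleanup function
        self._queue = deque()

    def add(self, key, value, cleanup=None):
        if key in self:
            # Replacing an entry triggers its cleanup first.
            self._queue.remove(key)
            self._remove(key)
        self._queue.append(key)
        dict.__setitem__(self, key, value)
        if cleanup is not None:
            self._cleanup[key] = cleanup
        while len(self) > self._max_cache:
            self._remove(self._queue.popleft())

    def _remove(self, key):
        cleanup = self._cleanup.pop(key, None)
        value = dict.pop(self, key)
        if cleanup is not None:
            cleanup(key, value)

log = []
c = MiniCleanupCache(max_cache=2)
c.add(1, 'one', cleanup=lambda k, v: log.append((k, v)))
c.add(2, 'two', cleanup=lambda k, v: log.append((k, v)))
c.add(3, 'three')   # evicts key 1, running its cleanup
print(log)  # → [(1, 'one')]
```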