[Lucid, Karmic, Hardy] Fix regression caused by CVE-2010-2943 fix
Tim Gardner
tim.gardner at canonical.com
Wed Feb 23 22:24:24 UTC 2011
On 02/23/2011 09:22 AM, Stefan Bader wrote:
> SRU Justification:
>
> Impact: The patches backported to fix CVE-2010-2943 caused a regression
> for xfsdump.
>
> Fix: Backporting one more patch from upstream fixes the issue.
>
> Testcase:
> 1. create an xfs filesystem with data (>~100MB). There is a compressed
> archive provided in comment #14.
> 2. run "xfsdump -p10 -Ltest -Mdump -f outfile <mount>"
> Note: Hardy and earlier seem to require the path to the device,
> while later releases can handle the mount point.
> The xfsdump command aborts with SGI_FS_BULKSTAT errno = 22
>
> Note: the same changes were also backported to Dapper, but I have not
> yet been able to verify the regression there. Instead, I got the feeling
> that xfs may be more broken in that release (I already get driver
> crashes when transferring larger amounts of data to the xfs file system
> during preparation). So I am tempted to leave the Dapper code as is,
> even more so because it is so different from the Hardy code that a
> backport is not simple.
>
> I will follow up with patches (hopefully) responding to this initial mail.
>
> -Stefan
>
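The quoted testcase can be sketched as a small script. DEV and MNT are placeholder names, and the run wrapper is only there so the commands can be printed (DRY_RUN=1, the default here) rather than executed, since mkfs.xfs and mount need root and a scratch device:

```shell
# Sketch of the reproduction steps from the testcase above.
# DEV/MNT are placeholders; set DRY_RUN=0 and run as root to execute.
DEV=${DEV:-/dev/sdb1}
MNT=${MNT:-/mnt/xfs}
run() { if [ "${DRY_RUN:-1}" = 1 ]; then echo "$@"; else "$@"; fi; }

run mkfs.xfs -f "$DEV"      # 1. create an xfs filesystem
run mount "$DEV" "$MNT"     #    ... then populate it with >~100MB of data
# 2. run the dump; later releases accept the mount point:
run xfsdump -p10 -Ltest -Mdump -f outfile "$MNT"
# Hardy and earlier seem to require the device node instead:
# run xfsdump -p10 -Ltest -Mdump -f outfile "$DEV"
```

With the unpatched kernel the xfsdump step is where the SGI_FS_BULKSTAT errno = 22 abort shows up.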
I'm inclined to leave Dapper alone as well. Was it even stable enough
for production use in 2.6.15?
The commit log mentions using iget "which should be fast enough for
normal use with the radix-tree based inode cache introduced a while
ago". When was the radix-tree stuff introduced? Will this patch
introduce a ginormous performance regression?
What about either reverting the original CVE patch, or finding a smaller
fix for it?
rtg
--
Tim Gardner tim.gardner at canonical.com