NFS performance (can't use "async")

Wipe_Out wipe_out at users.sourceforge.net
Thu Jul 19 17:42:15 UTC 2012


On 19 July 2012 12:27, Steve Flynn <anothermindbomb at gmail.com> wrote:

>
> No surprise there. For reference, we export NFS mounts like so:
>
> /some_filesystem -public,sec=sys,rw,sec=dh:krb5:krb5i:krb5p,rw
>
> and we mount with
>
> /some_filesystem:
>         dev             = "/client_data"
>         vfs             = nfs
>         nodename        = one_of_our_aix_servers
>         mount           = true
>         options         = bg,hard,intr,sec=sys:dh:krb5:krb5i:krb5p
>         account         = false
>
>
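On the Linux side the rough equivalent (untested, and the network range
and mount point below are just placeholders) would be an /etc/exports
line such as

/some_filesystem    192.168.0.0/24(rw,sync,no_subtree_check)

and a client /etc/fstab entry along the lines of

storage_server:/some_filesystem  /client_data  nfs  bg,hard,intr,vers=3  0 0

The "sync" there is what this whole thread is about - with "async" the
numbers look much better, but the server acknowledges writes before they
reach the disk.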
> (I'm no expert so take this with the usual grain of salt)
>
> I'd start with an eyeball of the output from "ifconfig -a" on both
> systems. Make sure they both look sensible and that one of them isn't
> configured to route all traffic through some flaky old F5 firewall in
> the basement, by accident.
>
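For that eyeball check on the Linux box I'd probably just use the
iproute2 tools, something along these lines ("eth0" is only an example
interface name):

ip addr show          # addresses and MTU per interface
ip -s link show eth0  # TX/RX byte and error counters
ip route show         # confirm traffic takes the expected path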
> Providing both installations look the same, have the same MTU,
> sensible error levels for Tx and Rx after a test session and so forth
> it's time to start poking around in /proc to compare tuning values.
> This is where I can offer little, as I have no Linux or FreeBSD
> systems to hand - just AIX which isn't going to help you much.
>
> This /might/ help: http://nfs.sourceforge.net/nfs-howto/ar01s05.html
>
>
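Before digging through /proc it is probably worth checking what the
Linux client actually negotiated with the server, e.g.

nfsstat -m              # each NFS mount with the options in effect
grep nfs /proc/mounts   # shows rsize/wsize, proto, vers and so on

which at least shows whether the two clients are using comparable
transfer sizes on the wire.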
> Out of interest, what are you using to generate the load on the NFS
> filesystem and what are you using to measure the number of FSYNCS? Are
> you sure that the two are equivalent under both installations (e.g.
> bonnie on Linux is running with a 64K chunk of data to write to the
> NFS filesystem whereas whatever you're using on FreeBSD is running
> with a 2K chunk)... I'm struggling to come up with a plausible reason
> for such a massive difference in sync rates between two installations
> on the same kit, talking to the same filesystem.
>
>
Thanks for your thoughts and input, Steve. I might just stick with FreeBSD
for the storage server and Ubuntu/Debian for the KVM hosts.

I still have to investigate the option of using iSCSI+LVM for the VM drives
instead of file-based virtual disks over NFS, so I don't want to lose too
much more time on the Linux NFS issue.
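The rough shape of that on a KVM host would be something like the
following (untested and from memory - the portal address, target name
and device are placeholders):

iscsiadm -m discovery -t sendtargets -p 192.168.1.10
iscsiadm -m node -T iqn.2012-07.local.storage:vmstore -p 192.168.1.10 --login
pvcreate /dev/sdX
vgcreate vm_storage /dev/sdX

with each VM then getting a logical volume instead of a qcow2/raw file
on the NFS share.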

I am testing the storage from a Proxmox VE server, which includes a utility
called pveperf that measures FSYNCs per second. Below is the relevant
source code for the test:

# modules this snippet relies on
use Time::HiRes qw(gettimeofday tv_interval);
use File::Sync;

sub test_fsync {
    my $basedir = shift;

    drop_cache();    # helper defined elsewhere in pveperf

    my $dir = "$basedir/ptest.$$";

    eval {
        mkdir $dir;

        # write a ~4KB record and fsync it, cycling over 300 files
        my $data = ('A' x 4000) . "\n";
        my $starttime = [gettimeofday];
        my $count;
        my $elapsed = 0;

        for ($count = 1;; $count++) {
            my $m = $count % 300;
            my $filename = "$dir/tf_$m.dat";
            open(TMP, ">$filename") || die "open failed";
            print TMP $data;
            File::Sync::fsync(\*TMP);
            close(TMP);
            $elapsed = tv_interval($starttime);
            last if $elapsed > 3;    # run for roughly 3 seconds
        }

        my $sps = $count / $elapsed;    # fsyncs per second
        printf "FSYNCS/SECOND:     %.2f\n", $sps;
    };
    my $err = $@;
    system("rm -rf $dir");
    die $err if $err;
}
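For reference, pveperf just takes the path to test as its argument, so
the NFS-backed storage is exercised with something like (the mount
point here is only an example):

pveperf /mnt/pve/nfs_storage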