Optimising NFS for system files

usleepless at gmail.com
Wed Dec 31 02:10:17 UTC 2008


On 12/30/08, Michel Talon <talon at lpthe.jussieu.fr> wrote:
>
> Bernard Dugas wrote:
>
> > So you don't think that if all the files are already in RAM on the
> > server, I will save the drive access time?
> >
> > Or do you think the NFS network access is so much slower that the
> > disk access time is just marginal?
> >
> > Do you think I should use something more efficient than NFS?
>
>
> The VM system in principle does a good job of keeping frequently
> accessed files in memory, so you should not have to do anything
> special; moreover, I don't think there exists anything convenient
> for forcing particular files to stay in memory (and this would be
> detrimental to the global throughput of the server).
>
> As to NFS speed, you should experiment with NFS over TCP and run a
> larger number of nfsd processes on the server (see nfs_server_flags
> in rc.conf), for example -n 6 or -n 8. Maybe also experiment with
> the read and write sizes (the rsize/wsize mount options).
>
> Anyway, I don't think you can expect the same throughput
> via NFS (say 10 MB/s, or more on Gig ethernet) as on a local disk
> (40 MB/s or more).


I disagree. I have recently installed a NAS by slapping FreeNAS on a
relatively old server (P4 2.8GHz) and experimented with lots of things
because I was disappointed with the throughput. Spoiler: first try
30MB/s, last try 82MB/s.

Server hardware:
 - Intel server, P4 3GHz, 1GB memory
 - onboard Intel 1Gb fxp NIC
 - 2 x Barracuda 750GB disks
 - HP ProCurve 3500zl (?) switch
 - OS: FreeBSD 6.2 (FreeNAS)

Linux workstation hardware:
 - 2 x dual-core CPUs, 2GB memory
 - onboard Intel 1Gb NIC
 - 3 x 250GB disks
 - OS: Ubuntu 8.10

Windows workstation hardware:
 - same
 - OS: Windows Server 2003

First installation
 - FreeNAS; ignorant as I was, I chose JBOD as the disk configuration.
JBOD (Just a Bunch Of Disks) simply concatenates all the drives (2 in
this case) into one big volume.
 - Tested throughput (CIFS/Samba): got about 40MB/s on my Linux box;
on the Windows box, about 33MB/s.
 - The above measurements were achieved only after jumbo-frame and
send/receive-buffer optimisations (they won about 10%; see the sketch
below).
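
As a minimal sketch of that tuning, assuming an em(4) gigabit NIC on
the FreeBSD server and eth0 on the Linux client (the interface names
and buffer values are my assumptions, not the exact settings used
here):

    # FreeBSD server: enable jumbo frames, enlarge the TCP buffers
    ifconfig em0 mtu 9000
    sysctl kern.ipc.maxsockbuf=2097152
    sysctl net.inet.tcp.sendspace=262144
    sysctl net.inet.tcp.recvspace=262144

    # Linux client: same idea
    ifconfig eth0 mtu 9000
    sysctl -w net.core.rmem_max=262144
    sysctl -w net.core.wmem_max=262144

Note that jumbo frames only help if every device in the path (both
NICs and the switch) supports them.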

I was heavily disappointed with these results: I had previously
installed a couple of NAS systems which could easily reach 80MB/s, or
140MB/s with two NICs trunked.

To make a long story short: with Gigabit networking it is not the
network that is the bottleneck, it is local disk access. So you need
to use lots of disks, which means that instead of JBOD you need to
configure RAID0, RAID1, etc. on the file server to maximize disk
throughput. That's why those NAS systems performed so well: they had
4 disks each.
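
For example, here is a minimal sketch of creating a RAID0 stripe by
hand on FreeBSD with gstripe(8); the disk names ad4 and ad6 are my
assumptions, and FreeNAS does the equivalent through its web GUI:

    # create a two-disk stripe and put a UFS filesystem on it
    kldload geom_stripe
    gstripe label -v st0 /dev/ad4 /dev/ad6
    newfs -U /dev/stripe/st0
    mkdir -p /mnt/data
    mount /dev/stripe/st0 /mnt/data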

Second installation
 - FreeNAS, RAID0
 - Tested throughput (to the local RAID0):
     - FTP: 82MB/s
     - NFS: 75MB/s
     - CIFS/Samba: 42MB/s

Confused by the CIFS performance, I configured jumbo frames, large
send/receive buffers for CIFS/Samba, the FreeNAS tuning options,
polling, etc. To no avail; there seems to be another limit to
CIFS/Samba performance (FreeNAS ships with an optimized smb.conf, by
the way; a sketch of that kind of tuning follows).
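
To give an idea of what such Samba tuning looks like, here is a
sketch of typical smb.conf settings; these particular values are my
assumptions, not the ones FreeNAS actually ships:

    [global]
        # enlarge socket buffers and avoid Nagle delays
        socket options = TCP_NODELAY SO_SNDBUF=65536 SO_RCVBUF=65536
        # let the kernel copy file data straight to the socket
        use sendfile = yes
        read raw = yes
        write raw = yes
        max xmit = 65535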

Test issues (things that will get you confused):
 - If you expect to be able to copy a file at Gigabit speeds, you need
to be able to write to your local disk just as fast. So to reliably
test SAN/NAS performance at Gigabit speeds you need RAID at the server
and at the client. Or write to /dev/null.
 - If you repeatedly test with the same file, it will get cached in
the memory of the NAS, so you won't be testing disk->network->disk
throughput anymore: you are testing NAS-memory->network->disk. I was
testing with Ubuntu ISOs, but with 1GB of memory, ISOs get cached as
well.
 - If you repeatedly test with the same file, and you have enough
local memory, and you test with NFS or CIFS/Samba, the file will get
cached locally as well. This results in transfer speeds to /dev/null
exceeding 100MB/s (Gigabit speeds); I have observed transfer speeds
to /dev/null of 400MB/s! (See the sketch after this list.)
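
A sketch of how to run such a test and defeat the client-side cache
between runs; the mount point and ISO name are my assumptions:

    # read from the NFS mount and throw the data away, so local disk
    # speed does not limit the measurement; dd reports the throughput
    dd if=/mnt/nas/ubuntu-8.10-desktop-i386.iso of=/dev/null bs=1M

    # on a Linux client, flush the page cache before the next run so
    # the file really comes over the wire again (kernel 2.6.16+)
    sync
    echo 3 > /proc/sys/vm/drop_caches

Defeating the cache on the NAS side is harder; the simplest trick is
to test with a set of files larger than the server's RAM.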

The funny thing is that I started this DIY NAS with FreeNAS because we
had a cheap commercial NAS with 4 disks (RAID5). We had performance
trouble at 100Mbit, repeated authentication trouble (integration with
MS Active Directory), and when we upgraded our network to Gigabit, it
still only performed at 11MB/s!

We now have a NAS that performs faster than a local disk. We plan to
use it to run development virtual machines on.

With Gigabit ethernet the network isn't the problem anymore: it's the
disks. You need as many as you can get your hands on.

About your question about memory management: it is not needed, and
you don't want it. Tune your NICs, filesystems, memory, NFS options
and disks instead; a sketch of the NFS side follows.
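
As a minimal sketch of the NFS tuning Michel suggested, assuming a
FreeBSD server and a Linux client (the server name, export path and
exact values are my assumptions):

    # /etc/rc.conf on the FreeBSD server: run more nfsd threads,
    # serving both UDP and TCP
    nfs_server_enable="YES"
    nfs_server_flags="-u -t -n 8"

    # on the Linux client: mount over TCP with larger read/write sizes
    mount -t nfs -o tcp,rsize=32768,wsize=32768 freenas:/mnt/data /mnt/nas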

regards,

usleep

