RAM and zfs and multiple disks
krad
kraduk@gmail.com
Tue Oct 15 08:25:34 UTC 2013
One thing worth noting about dedup is that it's not an all-or-nothing option, as you can tune it per file system. Also, you only need the humongous amount of RAM when you do writes, which of course is quite a common thing to do. However, there are scenarios where a dataset will dedup quite well but isn't written to often (OS repo, archive, etc). You can potentially enable dedup on these and take the write performance hit. You can sometimes get away with it on databases as well, where the dataset is large and highly dedupable but doesn't change much. What I have done in the past is enable dedup for the initial data load, and then disable it before deploying to production. In this case you get the benefit of the space saving, but not the hit when you write, as newly written data isn't deduped. This is a special case though, so it won't suit most people, as over time, depending on write patterns, the dataset could grow much bigger.
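That load-then-disable workflow is just a pair of property changes; a minimal sketch (the pool and dataset names here are hypothetical):

```shell
# Dedup is a per-filesystem property, not pool-wide, so enable it
# on just the dataset about to receive the bulk load.
zfs set dedup=on tank/images

# ... perform the initial data load here ...

# Disable it before production use. Blocks already written stay
# deduped; only data written from now on skips the DDT.
zfs set dedup=off tank/images

# See how much space the loaded data is actually saving.
zpool get dedupratio tank
```

The saving sticks because dedup is applied at write time: turning the property off never re-inflates existing blocks.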
As previously mentioned, one of the cheapest ways to make it viable is to add a big chunk of SSD for L2ARC. Before doing this, though, check whether it's even worth considering for your data by running zdb -S <pool>. It will give you the expected dedup ratio,
e.g. the OS pool from my main server:
root@radical:/home/krad# zdb -S rpool
Simulated DDT histogram:

bucket              allocated                       referenced
______   ______________________________   ______________________________
refcnt   blocks   LSIZE   PSIZE   DSIZE   blocks   LSIZE   PSIZE   DSIZE
------   ------   -----   -----   -----   ------   -----   -----   -----
     1     460K   31.4G   26.1G   26.1G     460K   31.4G   26.1G   26.1G
     2    33.8K    822M    580M    580M    69.8K   1.67G   1.17G   1.17G
     4    2.34K   50.1M   31.3M   31.3M    10.8K    254M    156M    156M
     8      417   12.8M   5.26M   5.26M    4.29K    134M   55.5M   55.5M
    16      131   1.52M    555K    555K    2.70K   29.4M   9.54M   9.54M
    32       45   62.5K     51K     51K    1.73K   2.48M   2.02M   2.02M
    64        7   3.50K   3.50K   3.50K      602    301K    301K    301K
   128        5    130K    130K    130K      991   17.3M   17.3M   17.3M
 Total     497K   32.3G   26.7G   26.7G     551K   33.5G   27.5G   27.5G

dedup = 1.03, compress = 1.22, copies = 1.00, dedup * compress / copies = 1.25
and a VM pool:
root@radical:/home/krad# zdb -S vm1
Simulated DDT histogram:

bucket              allocated                       referenced
______   ______________________________   ______________________________
refcnt   blocks   LSIZE   PSIZE   DSIZE   blocks   LSIZE   PSIZE   DSIZE
------   ------   -----   -----   -----   ------   -----   -----   -----
     1    1.28M    164G    102G    102G    1.28M    164G    102G    102G
     2    17.8K   2.23G   1.21G   1.21G    40.9K   5.11G   2.75G   2.75G
     4    1.25K    160M   85.7M   85.7M    5.70K    730M    374M    374M
     8      176     22M   6.33M   6.33M    1.72K    220M   63.6M   63.6M
    16       24      3M   1019K   1019K      540   67.5M   22.8M   22.8M
    32        4    512K   71.5K   71.5K      163   20.4M   2.49M   2.49M
    64        5    640K   22.5K   22.5K      383   47.9M   1.68M   1.68M
   128        2    256K     12K     12K      342   42.8M   1.92M   1.92M
   256        2    256K      9K      9K      715   89.4M   3.14M   3.14M
   512        1    128K   4.50K   4.50K      570   71.2M   2.50M   2.50M
 Total    1.30M    167G    103G    103G    1.33M    171G    105G    105G

dedup = 1.02, compress = 1.62, copies = 1.00, dedup * compress / copies = 1.65
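As a sanity check on that summary line, the final figure is just the product of the three ratios zdb reports; for the vm1 pool:

```shell
# Recompute zdb's combined space-saving factor for vm1:
# dedup * compress / copies, using the ratios from the output above.
awk 'BEGIN { printf "%.2f\n", 1.02 * 1.62 / 1.00 }'
# prints 1.65
```

So on this pool nearly all of the 1.65x saving comes from compression; dedup alone would barely pay for its DDT.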
Having said all that, as everyone else has said, I wouldn't have thought dedup would be much use on a desktop, and with 8 GB of RAM it will run fine. Limiting the ARC, as people have suggested, shouldn't really be necessary: that knob is mainly there to curb memory fragmentation and for a few special application cases. Generally it just works and you can leave it alone.
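If you do ever need to cap it, it's a one-line boot-time tunable on FreeBSD; a sketch (the 4 GB value is just an example, size it to your workload):

```shell
# /boot/loader.conf -- cap the ZFS ARC (example value only).
# Takes effect at the next boot.
vfs.zfs.arc_max="4G"
```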
On 11 October 2013 16:58, Zaphod Beeblebrox <zbeeble@gmail.com> wrote:
> At home I have a large ZFS server serving up 18x 2T disks (raidZ1 x2). The
> server machine is core2-duo with 8G RAM and an 'em' GigE added (because the
> motherboard GigE was crap). I use SATA port expanders to multiply the 6
> motherboard ports. The machine has a UFS boot and there is a bit of a
> "fight" when something uses memory and/or UFS to do something.
>
> I would have had ZFS on my desktop, but instead I use a diskless NFS setup
> from the ZFS server. Originally, when I setup these systems, the NFS
> diskless desktop was avoiding the conflict between the nvidia driver being
> 32 bit only and ZFS liking more memory. This is long moot, but the setup
> remains. The only remaining task here is to use NFSv4 diskless. I'm not
> sure how easy/hard that is. The root mount seems to be NFSv3.
>
> As for ZFS on desktops, all the PC-BSD setups I've done for the family are
> ZFS. They typically have either 2 or 4 gig of RAM. I haven't really tried
> a low memory ZFS (say 512 meg). They seem to run acceptably well, but they
> aren't really challenging anything --- running a FreeBSD/gnome or
> FreeBSD/KDE desktop on a machine with 2 or 4 gig of RAM is really not a
> challenge.