ZFS / zpool size
christer.solskogen at gmail.com
Tue Jan 17 17:18:26 UTC 2012
On Tue, Jan 17, 2012 at 5:18 PM, Tom Evans <tevans.uk at googlemail.com> wrote:
> On Tue, Jan 17, 2012 at 4:00 PM, Christer Solskogen
> <christer.solskogen at gmail.com> wrote:
>> An overhead of almost 300GB? That seems a bit too much, don't you think?
>> The pool consists of one vdev with two 1.5TB disks and one 3TB in raidz1.
> Confused about your disks - can you show the output of zpool status.
$ zpool status
  scan: scrub repaired 0 in 9h11m with 0 errors on Tue Jan 17 18:11:26 2012

        NAME          STATE     READ WRITE CKSUM
        data          ONLINE       0     0     0
          raidz1-0    ONLINE       0     0     0
            ada1      ONLINE       0     0     0
            ada2      ONLINE       0     0     0
            ada3      ONLINE       0     0     0
          gpt/slog    ONLINE       0     0     0
          da0         ONLINE       0     0     0
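For reference, a pool laid out like this could be built with something along
these lines; just a sketch, assuming gpt/slog is meant as a separate intent
log and da0 as a cache device (the trimmed output above doesn't show the
logs/cache headings):

# zpool create data raidz1 ada1 ada2 ada3
# zpool add data log gpt/slog
# zpool add data cache da0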
$ dmesg | grep ada
ada0 at ahcich0 bus 0 scbus0 target 0 lun 0
ada0: <Crucial CT32GBFAB0 MER1.01k> ATA-6 SATA 2.x device
ada0: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 512bytes)
ada0: Command Queueing enabled
ada0: 31472MB (64454656 512 byte sectors: 16H 63S/T 16383C)
ada0: Previously was known as ad4
ada1 at ahcich1 bus 0 scbus1 target 0 lun 0
ada1: <WDC WD15EARS-00MVWB0 51.0AB51> ATA-8 SATA 2.x device
ada1: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
ada1: Command Queueing enabled
ada1: 1430799MB (2930277168 512 byte sectors: 16H 63S/T 16383C)
ada1: Previously was known as ad6
ada2 at ahcich2 bus 0 scbus2 target 0 lun 0
ada2: <ST3000DM001-9YN166 CC98> ATA-8 SATA 3.x device
ada2: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
ada2: Command Queueing enabled
ada2: 2861588MB (5860533168 512 byte sectors: 16H 63S/T 16383C)
ada2: Previously was known as ad8
ada3 at ahcich3 bus 0 scbus3 target 0 lun 0
ada3: <WDC WD15EARS-00MVWB0 51.0AB51> ATA-8 SATA 2.x device
ada3: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
ada3: Command Queueing enabled
ada3: 1430799MB (2930277168 512 byte sectors: 16H 63S/T 16383C)
ada3: Previously was known as ad10
> If you have a raidz of N disks with a minimum size of Y GB, you can
> expect ``zpool list'' to show a size of N*Y and ``zfs list'' to show a
> size of roughly (N-1)*Y.
Ah, that explains it.
$ zpool list
NAME   SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
data  4.06T  3.33T   748G    82%  1.00x  ONLINE  -
So what zpool list/iostat shows is the raw amount of disk space handed to ZFS, parity included.
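A rough sanity check with the numbers from dmesg above (each raidz1 member
only counts for as much as the smallest disk, the 1.5TB WD):

$ echo "2930277168 * 512 * 3" | bc   # ~4.5e12 bytes, about 4.09 TiB raw -- close to the 4.06T above
$ echo "2930277168 * 512 * 2" | bc   # ~3.0e12 bytes, about 2.7 TiB left for data after one disk of parity

The small difference from 4.06T is presumably partitioning/label overhead.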
> So, on my box with 2 x 6 x 1.5 TB drives in raidz, I see a zpool size
> of 16.3 TB, and a zfs size of 13.3 TB.
Yeap. I can see clearly now, thanks!
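The same arithmetic roughly checks out for the 2 x 6 x 1.5 TB box, once the
drives' decimal terabytes are converted into the binary TiB that zpool and
zfs report:

$ echo "12 * 1500000000000 / 1024^4" | bc   # ~16 TiB raw, matching the 16.3T zpool size
$ echo "10 * 1500000000000 / 1024^4" | bc   # ~13 TiB usable, roughly the 13.3T zfs size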