zdb and zpool status inconsistency question...
Chris Watson
bsdunix44 at gmail.com
Tue Sep 28 02:38:16 UTC 2010
Apologies if this is common knowledge, but I am confused about the
output of zdb and zpool status. Running:
priyanka# zdb data
    version=14
    name='data'
    state=0
    txg=23
    pool_guid=7697236283104447800
    hostid=1421614680
    hostname='priyanka.open-systems.net'
    vdev_tree
        type='root'
        id=0
        guid=7697236283104447800
        children[0]
                type='mirror'
                id=0
                guid=13989036133163076272
                metaslab_array=26
                metaslab_shift=33
                ashift=9
                asize=1000199946240
                is_log=0
                children[0]
                        type='disk'
                        id=0
                        guid=15173803910329500054
                        path='/dev/ada2'
                        whole_disk=0
                children[1]
                        type='disk'
                        id=1
                        guid=17277025077506889808
                        path='/dev/ada3'
                        whole_disk=0
        children[1]
                type='mirror'
                id=1
                guid=5773672864445772603
                metaslab_array=23
                metaslab_shift=33
                ashift=9
                asize=1000199946240
                is_log=0
                children[0]
                        type='disk'
                        id=0
                        guid=2441189965306101196
                        path='/dev/ada4'
                        whole_disk=0
                children[1]
                        type='disk'
                        id=1
                        guid=6210476332908709518
                        path='/dev/ada5'
                        whole_disk=0
Uberblock
        magic = 0000000000bab10c
        version = 14
        txg = 11387
        guid_sum = 13222208345635842403
        timestamp = 1285637267 UTC = Mon Sep 27 20:27:47 2010
Dataset mos [META], ID 0, cr_txg 4, 1.06M, 44 objects
Dataset data/Aperture [ZPL], ID 31, cr_txg 37, 47.9G, 17596 objects
Dataset data [ZPL], ID 16, cr_txg 1, 19.0K, 5 objects
[...]
                     capacity   operations   bandwidth   ---- errors ----
description      used avail   read write   read write   read write cksum
data            47.9G 1.77T    292     0  32.6M     0      0     0     2
  mirror        23.9G  904G    146     0  16.3M     0      0     0     6
    /dev/ada2                  141     0  16.5M     0      0     0     6
    /dev/ada3                  141     0  16.5M     0      0     0     6
  mirror        23.9G  904G    146     0  16.3M     0      0     0     2
    /dev/ada4                  141     0  16.5M     0      0     0     2
    /dev/ada5                  141     0  16.5M     0      0     0     2
priyanka#
produces cksum error counts of 2, 6, 6, 6, 2, 2, 2 respectively
(pool, first mirror, ada2, ada3, second mirror, ada4, ada5).
While a:
priyanka# zpool status -v data
  pool: data
 state: ONLINE
 scrub: scrub completed after 0h6m with 0 errors on Mon Sep 27 20:22:10 2010
config:

        NAME        STATE     READ WRITE CKSUM
        data        ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            ada2    ONLINE       0     0     0
            ada3    ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            ada4    ONLINE       0     0     0
            ada5    ONLINE       0     0     0

errors: No known data errors
priyanka#
So the two things I don't understand are the following:
1) Why does zdb report cksum errors while zpool status does not?
2) Assuming zdb is correct, shouldn't the cksum count from zdb for the
pool "data" be 8 instead of 2, since the first mirror has 6 and the
second mirror has 2 cksum errors?
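For reference, here is the arithmetic behind my expectation in question 2, assuming (and this is just my assumption, nothing I've seen documented) that a pool's cksum count is the sum of its top-level vdevs' counts:

```shell
# My assumption: pool-level cksum = sum of top-level vdev cksum counts.
mirror0_cksum=6   # first mirror's cksum column in the zdb output above
mirror1_cksum=2   # second mirror's cksum column in the zdb output above
echo $((mirror0_cksum + mirror1_cksum))   # 8, yet zdb prints 2 on the pool line
```

So either the counts don't roll up the way I assumed, or they mean something different per row.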
The zdb man page is pretty sparse, and I know it's not meant to be run
by the average Joe. I'm just trying to learn ZFS as thoroughly as I
can, so while I have a test system I am trying many configs and
options to learn how it works and why, and the above inconsistency
confused me. Again, apologies if this is covered elsewhere.
Thanks for any schooling about the above!
Chris