Next ZFSv28 patchset ready for testing.
Andrei Kolu
antik at bsd.ee
Thu Dec 16 07:33:22 UTC 2010
2010/12/15 Andrei Kolu <antik at bsd.ee>:
> 2010/12/14 Pawel Jakub Dawidek <pjd at freebsd.org>
>>
>> On Mon, Dec 13, 2010 at 10:45:56PM +0100, Pawel Jakub Dawidek wrote:
>> > Hi.
>> >
>> > The new patchset is ready for testing:
>> >
>> > http://people.freebsd.org/~pjd/patches/zfs_20101212.patch.bz2
>>
>> You can also download the whole source tree already patched from here:
>>
>> http://people.freebsd.org/~pjd/zfs_20101212.tbz
>>
>
> # uname -a
> FreeBSD freebsd9.raidon.eu 9.0-CURRENT FreeBSD 9.0-CURRENT #0: Tue Dec
> 14 14:37:01 EET 2010
> root at freebsd9.raidon.eu:/usr/obj/usr/src/sys/GENERIC amd64
>
> Create files filled with zeroes:
> # mkfile 512m disk1 disk2 disk3 disk4
> # zpool create andmed raidz /home/antik/disk{1,2,3,4}
> # zpool status andmed
> pool: andmed
> state: ONLINE
> scan: none requested
> config:
>
>         NAME                   STATE     READ WRITE CKSUM
>         andmed                 ONLINE       0     0     0
>           raidz1-0             ONLINE       0     0     0
>             /home/antik/disk1  ONLINE       0     0     0
>             /home/antik/disk2  ONLINE       0     0     0
>             /home/antik/disk3  ONLINE       0     0     0
>             /home/antik/disk4  ONLINE       0     0     0
>
> errors: No known data errors
>
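(As an aside, mkfile is a Solaris-ism and, as far as I know, not in the FreeBSD base system — I have the sysutils port installed. A portable stand-in for the same step, using the same file names as above; note truncate makes sparse files instantly, whereas mkfile writes real zero blocks, which makes no difference for this demo:)

```shell
# Portable stand-in for `mkfile 512m diskN`; truncate creates sparse
# files (instant), unlike mkfile, which writes real zeroes to disk.
for i in 1 2 3 4; do
    truncate -s 512M "disk$i"
done
ls -l disk1        # reported size: 536870912 bytes
```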
> Now let's try to scrub:
> # zpool scrub andmed
>
> Fatal trap 12: page fault while in kernel mode
> cpuid = 1; apic id = 01
> fault virtual address = 0x1fb8007b
> fault code = supervisor read data, page not present
> instruction pointer = 0x20:0xffffffff812967d2
> stack pointer = 0x20:0xffffff80ee605548
> frame pointer = 0x28:0xffffff80ee605730
> code segment = base 0x0, limit 0xfffff, type 0x1b
> = DPL 0, pres 1, long 1, def32 0, gran 1
> processor eflags = interrupt enabled, resume, IOPL = 0
> current process = 2081 (initial thread)
> [ thread pid 2081 tid 100121 ]
> Stopped at vdev_file_open+0x92: testb $0x20,0x7b(%rax)
>
>
> Similar problem on FreeBSD 8.1:
> http://www.freebsd.org/cgi/query-pr.cgi?pr=153126
>
A workaround on FreeBSD is to attach the backing files as md(4) devices and build the pool on those instead of on the files directly:
# mdconfig -f disk1
md0
# mdconfig -f disk2
md1
# mdconfig -f disk3
md2
# mdconfig -f disk4
md3
# zpool create andmed raidz md{0,1,2,3}
# zpool scrub andmed
# zpool status
pool: andmed
state: ONLINE
scan: scrub repaired 0 in 0h0m with 0 errors on Wed Dec 15 15:57:34 2010
config:
        NAME        STATE     READ WRITE CKSUM
        andmed      ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            md0     ONLINE       0     0     0
            md1     ONLINE       0     0     0
            md2     ONLINE       0     0     0
            md3     ONLINE       0     0     0
errors: No known data errors
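For what it's worth, raidz1 across four 512 MB backing files should leave roughly three files' worth of usable space, since about one disk's worth goes to parity. A quick back-of-the-envelope check with the numbers from this example:

```shell
# raidz1 reserves ~1 disk of parity, so usable ~ (N - 1) * size; N=4 here
awk 'BEGIN { n = 4; mb = 512; printf "~%d MB usable\n", (n - 1) * mb }'
# prints: ~1536 MB usable
```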
----------------------------------------------------------------------------------------
Deduplication is nice :)
# zfs set dedup=on peegel
# zpool set dedupditto=100 peegel
# zpool get all peegel | grep dedup
peegel dedupditto 100 local
peegel dedupratio 1.45x -
# zdb -DD peegel
DDT-sha256-zap-duplicate: 535 entries, size 284 on disk, 153 in core
DDT-sha256-zap-unique: 446 entries, size 316 on disk, 183 in core
DDT histogram (aggregated over all DDTs):
bucket              allocated                       referenced
______   ______________________________   ______________________________
refcnt   blocks   LSIZE   PSIZE   DSIZE   blocks   LSIZE   PSIZE   DSIZE
------   ------   -----   -----   -----   ------   -----   -----   -----
     1      446   7.27M   3.92M   3.92M      446   7.27M   3.92M   3.92M
     2      534   5.91M   3.25M   3.25M    1.04K   11.8M   6.50M   6.50M
     4        1     512     512     512        4      2K      2K      2K
 Total      981   13.2M   7.17M   7.17M    1.48K   19.1M   10.4M   10.4M
dedup = 1.45, compress = 1.83, copies = 1.00, dedup * compress / copies = 2.66
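The summary line can be sanity-checked against the Total row of the histogram: dedup is referenced size over allocated size (DSIZE, I believe), and the combined ratio is just dedup * compress / copies. Small rounding differences against zdb's own figures are expected, since zdb works from unrounded values:

```shell
# Recompute the ratios from the Total row above (DSIZE values, in MB)
awk 'BEGIN {
    dedup = 10.4 / 7.17                      # referenced / allocated
    printf "dedup ~ %.2f\n", dedup           # zdb reports 1.45
    printf "combined ~ %.2f\n", dedup * 1.83 / 1.00
}'
```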
----------------------------------------------------------------------------------------
Now I am trying to add cache devices to the pool:
# zpool add andmed cache da0 da1
This command just hangs: the system stays responsive, but the pool is
not accessible. How long should adding the cache devices take? I see no
I/O operations going on, and no error messages either.
----------------------------------------------------------------------------------------
PS: All this file-backed disk experimentation is purely for
entertainment and, of course, to demonstrate the technology.
Andrei