ZFS on RPi (was: zfs on BeagleBone)
Peter Jeremy
peter at rulingia.com
Fri Mar 27 08:43:54 UTC 2015
On 2015-Mar-23 03:41:47 -0400, Brett Wynkoop <freebsd-arm at wynn.com> wrote:
>On Sat, 21 Mar 2015 17:53:46 +1100
>Peter Jeremy <peter at rulingia.com> wrote:
>
>
>> panic: vm_fault: fault on nofault entry, addr: dd2f1000
>> KDB: stack backtrace:
>> Uptime: 11m46s
>> Physical memory: 473 MB
>> Dumping 36 MB:sdhci_bcm0: DMA in use
>>
>> The tuning I did was:
>> vfs.zfs.arc_max="24M"
>> vfs.zfs.vdev.cache.size="5M"
>
>Sorry for the delay. Other things have kept me from being as attentive
>to the arm list as I might like.
That's OK. Thanks for the response. I was in the middle of a buildworld,
so the following results are from a fresh world at head r280279.
>I strongly suggest setting vm.kmem_size to your real memory and doing the
>same with vm.kmem_size_max. I came up with this after doing loads of
>reading for zfs on memory-restricted systems.
I had deliberately not set vm.kmem_size or vm.kmem_size_max because
the defaults seemed reasonable:
hw.physmem: 495562752
hw.usermem: 469278720
hw.realmem: 536866816
vm.kmem_size: 161853440
vm.kmem_size_min: 12582912
vm.kmem_size_max: 422366413
vm.kmem_map_size: 13819904
vm.kmem_map_free: 148033536
I tried tuning vm.kmem_size{,_max} to hw.physmem and it still crashed.
vm.kmem_size: 495562752
vm.kmem_size_min: 12582912
vm.kmem_size_max: 495562752
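(For completeness, that tuning was expressed as loader tunables, roughly
the following in /boot/loader.conf; the kmem values match hw.physmem
above:)

# kmem sizing pinned at hw.physmem (495562752 bytes), per the suggestion
vm.kmem_size="495562752"
vm.kmem_size_max="495562752"
# (the vfs.zfs.arc_max / vfs.zfs.vdev.cache.size tunables quoted earlier
#  would go in the same file)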
"vmstat 1" starting roughly the same time as I created the pool:
 procs    memory                page              disks      faults          cpu
 r b w     avm    fre   flt  re  pi  po    fr  sr mm0 md0    in   sy    cs us sy id
 1 0 0    273M   405M     0   0   0   0     0   5   0   0  2860  144   403  0  2 98
 1 0 0    274M   405M    27   0  13   0     2   5  11   0  3341  280  1282  1  5 95
 0 1 0    285M   401M   361   0  11   0   195  12  19   0  9592  501  7453  1 95  4
 0 1 0    285M   401M     0   0   0   0     0   6  12   0  2625  134   527  0  3 97
 0 1 0    285M   401M     0   0   0   0     0   6  21   0  2661  139   643  0  2 98
 0 1 0    285M   401M     0   0   0   0     0   6  15   0  2600  130   558  0  2 98
 0 1 0    285M   401M     0   0   0   0     0   6  13   0  2557  130   536  0  2 98
 0 1 0    285M   401M     0   0   0   0     0   6  41   0  3309  129   855  0  3 97
 0 1 0    285M   401M     0   0   0   0     0   6 193   0  3984  129  2568  1  9 91
 0 1 0    285M   401M     0   0   0   0     0   6 324   0  4211  140  4018  0  8 92
 0 1 0    285M   401M     1   0   0   0     0   6 324   0  4160  128  4014  0 13 87
 0 1 0    285M   401M     0   0   0   0     0   6 323   0  4224  127  3999  0 12 88
 0 1 0    285M   401M     0   0   0   0     0   6 324   0  4214  128  4029  0 13 87
 0 1 0    285M   401M     0   0   0   0     0   6 325   0  4127  131  4026  0  5 95
 0 1 0    285M   401M     0   0   0   0     0   6 324   0  4107  140  3977  0 13 87
 0 1 0    285M   401M     0   0   0   0     0   6 324   0  4179  130  4027  1  9 90
 0 1 0    285M   401M     0   0   0   0     0   6 324   0  4179  130  4035  0 12 88
[panic at this point]
FWIW, the command I used was:
zpool create -O atime=off -O compression=lz4 tank mmcsd0s2d
The slice is ~12GB:
  6821865  24256512     4  freebsd-zfs  (12G)
Looking at the disk, it appears the ZFS labels were written, though
"zpool import" can't see the pool. All 4 labels look like:
--------------------------------------------
LABEL 0
--------------------------------------------
    version: 5000
    name: 'tank'
    state: 0
    txg: 0
    pool_guid: 15675417041144722368
    hostid: 3523104732
    hostname: 'rpi1.rulingia.com'
    top_guid: 17877609725061934307
    guid: 17877609725061934307
    vdev_children: 1
    vdev_tree:
        type: 'disk'
        id: 0
        guid: 17877609725061934307
        path: '/dev/mmcsd0s2d'
        phys_path: '/dev/mmcsd0s2d'
        whole_disk: 1
        metaslab_array: 0
        metaslab_shift: 0
        ashift: 9
        asize: 12414615552
        is_log: 0
        create_txg: 4
    features_for_read:
    create_txg: 4
--
Peter Jeremy