Re[2]: ZFS out of swap space

armonia armonia at inbox.ru
Tue Mar 31 07:37:33 UTC 2015


After following Michelle Sullivan's advice I restored everything; the system now boots.

From a 10.1 LiveUSB I ran zpool import; it succeeded only on about the tenth attempt (the same thing happened on the 11 branch). Just in case, I then removed the bad dataset, /var/db/mysql/billing.

After that I exported the pool and rebooted from the hard disk. Thank you all for your help.
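For anyone hitting the same problem, the recovery path used here can be sketched roughly as follows. This is a minimal sketch: the pool name zroot comes from the zpool output quoted below, while the /backup/zroot-rescue destination is a hypothetical scratch location.

```shell
# Import the pool read-only under an alternate root so nothing on it
# can be modified while data is copied off (per Xin Li's suggestion).
zpool import -o readonly=on -R /mnt zroot

# Copy the data to a scratch disk (rsync shown here; zfs send/recv
# to another pool would also work).
rsync -a /mnt/ /backup/zroot-rescue/

# Cleanly detach the pool before rebooting back into the installed system.
zpool export zroot
```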

>Can you try if setting the sysctl variable vfs.zfs.free_max_blocks to
>a limited number, like, 100,000, would make the pool import properly?
>You may have run out of memory because of too much data being freed
>in one transaction group?

The 9.3 and 10.1 branches do not have this sysctl, but that was most likely the cause.

>(I would recommend doing this after importing the pool read-only and
>copy your data off, though).


Monday, 30 March 2015, 11:19 -07:00, from Xin Li <delphij at delphij.net>:
>-----BEGIN PGP SIGNED MESSAGE-----
>Hash: SHA512
>
>On 03/30/15 01:32, armonia wrote:
>> Yes, my mistake was probably that I have included data
>> deduplication to see how it works, but not turned it off at the
>> right time. In this case, the machine memory of 4 GB ....
>> 
>> async_destroy - too enabled.
>> 
>> That is the conclusion I have deduplication disabled.
>
>Hrm, usually async_destroy should be enough to protect against this
>situation.
>
>Can you try if setting the sysctl variable vfs.zfs.free_max_blocks to
>a limited number, like, 100,000, would make the pool import properly?
>You may have run out of memory because of too much data being freed
>in one transaction group?
>
>(I would recommend doing this after importing the pool read-only and
>copy your data off, though).
>
>> How to import a pool of read-only?
>
>zpool import -o readonly poolname.
>
>> Thank you for your response.
>> 
>> zpool get all zroot
>> NAME   PROPERTY                       VALUE                SOURCE
>> zroot  size                           230G                 -
>> zroot  capacity                       24%                  -
>> zroot  altroot                        -                    default
>> zroot  health                         ONLINE               -
>> zroot  guid                           1229884058434432944  default
>> zroot  version                        -                    default
>> zroot  bootfs                         zroot                local
>> zroot  delegation                     on                   default
>> zroot  autoreplace                    on                   local
>> zroot  cachefile                      -                    default
>> zroot  failmode                       wait                 default
>> zroot  listsnapshots                  on                   local
>> zroot  autoexpand                     off                  default
>> zroot  dedupditto                     0                    default
>> zroot  dedupratio                     1.02x                -
>> zroot  free                           174G                 -
>> zroot  allocated                      56.1G                -
>> zroot  readonly                       off                  -
>> zroot  comment                        ZFS                  local
>> zroot  expandsize                     0                    -
>> zroot  freeing                        0                    default
>> zroot  feature@async_destroy          enabled              local
>> zroot  feature@empty_bpobj            active               local
>> zroot  feature@lz4_compress           active               local
>> zroot  feature@multi_vdev_crash_dump  enabled              local
>> zroot  feature@spacemap_histogram     active               local
>> zroot  feature@enabled_txg            active               local
>> zroot  feature@hole_birth             active               local
>> zroot  feature@extensible_dataset     enabled              local
>> zroot  feature@bookmarks              enabled              local
>> zroot  feature@filesystem_limits      enabled              local
>> 
>> 
>> ???????????, 30 ????? 2015, 0:36 -07:00 ?? Xin Li
>> < delphij at delphij.net >:
>> 
>> 
>> 
>> On 3/27/15 05:26, armonia wrote:
>>> After importing I press ctrl + t and here's the conclusion:
>> 
>>> load: 0.20 cmd: zpool 725 [tx->tx_sync_done_cv] 32.50r 0.00y
>>> 5.59s 0% 6432k
>> 
>> Have you ever enabled e.g. dedup on certain dataset(s) holding a
>> lot of files, while the pool didn't have the 'async destroy'
>> feature enabled? In that case the fastest way to recover, if this
>> takes too long, would probably be to import the pool read-only and
>> copy the data to another pool.
>> 
>> On -CURRENT you can use dtrace -qn 'zfs-dbgmsg{printf("%s\n",
>> stringof(arg0))}' to see more verbose information; it is not
>> always helpful but will give you a better idea of what is going on
>> under the hood.
>> 
>> Cheers,
>> _______________________________________________
>> freebsd-stable at freebsd.org mailing list
>> http://lists.freebsd.org/mailman/listinfo/freebsd-stable
>> To unsubscribe, send any mail to
>> "freebsd-stable-unsubscribe at freebsd.org"
>> 
>> 
>> 
>> --
>
>- -- 
>Xin LI <delphij at delphij.net>  https://www.delphij.net/
>FreeBSD - The Power to Serve!           Live free or die
>-----BEGIN PGP SIGNATURE-----
>Version: GnuPG v2.1.2 (FreeBSD)
>
>iQIcBAEBCgAGBQJVGZPKAAoJEJW2GBstM+nsPIAP/2aX5MItnX/LiLII+xKp/Hnx
>9TZWUdpEqwOpIWovjiF7N+Vp9Uz8RCCHl5yzMbd5p/cvaP6h7oQZiJYzBDLVRx61
>Rk3Uz7/SyycWPXlD6lhNYPZ9QrptgO6hhX5y4YHxOlibhe7NLCmZYNxBqNsqR0HW
>FoseCRP2+ima4Qu5P4dVKDCnKwMdifP7qvrbOZcyYWIVThVBH14Rp7w9zfiiAN6v
>AYSY9JLMYQGILfexORo/LG+kYI3gT2CIhYVNpfCsQLo5GNOucAZNYM5oO4aCt/BQ
>2DIzhp58F1z7JYUwZVJ0p7GSjuZ2peWqYYyGMqFkBU0cydskGj+wGwu154sx6Vyg
>xAgzqH/jG95DqkC6yDRoy/bvJ0zam2z3N9jR+XRqgVsuwYbEG7dQp6TBByN5PWp+
>UaRexsvknjNJA6Otqei5qQ5fcXfhaalTD+/3XB3eqExJa6sbONZ6qJdLeiDYe+3V
>wNRnuDQwatLCkLhQoFbXIdXQJ16Da4evmMrHd+YsKrytx2F/wMoNZru7Ilv6X+5L
>LhuGg26Kh2ohZQGvn4cWCus63wRWweEjpTpD4Ng2Ok+qIEgquC9kcveV1TSwxWi1
>ZusD5hYqJXO2rA6iB2MyQqZi6t4fBK00CG7SkAegrNaKnH2e245s7qwsg6huKKUA
>yMt5wuj4GXbRg9yjWs0j
>=m5xz
>-----END PGP SIGNATURE-----


-- 


More information about the freebsd-stable mailing list