Very serious problem with ZFS & 12.0
freebsd at pki2.com
Wed Aug 28 23:08:48 UTC 2019
On Thu, 2019-08-29 at 00:45 +0200, Albert Shih wrote:
> After updating 4 servers from 11.2 to 12.0 without any problem, I
> waited a few weeks to see if everything worked well, and it did. I just upgraded my
I am running 12.0 on a Supermicro with two 6-core E5-2620 processors and
192G of RAM, with two RAIDz2 pools *but not* ZFS root, which is on
hardware RAID1. One volume has 250G of NVMe cache and the other 480G of
SSD cache. Both volumes have SLOG. The only problem I've experienced is
at boot: the loader forgets the boot volume during the boot process. I
fixed that problem in loader.conf with:
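The actual loader.conf line appears to have been lost in the archive. A
hypothetical example of what such a fix can look like in
/boot/loader.conf (the device path is an assumption, not from the
original post):

```sh
# Hypothetical fix: tell the loader explicitly where root lives, so it
# cannot "forget" the boot volume. The actual device path depends on the
# RAID controller and partitioning (e.g. mfid0p2 for an LSI/Avago volume).
vfs.root.mountfrom="ufs:/dev/mfid0p2"
```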
I have another Supermicro with 128G of RAM and AMD processors (16x2),
but it runs 11.3 because I'm too lazy to shut everything down just to
upgrade the NAS. The NAS has two ZFS volumes and also uses hardware
RAID1 for root. The NAS is also my backup server, receiving dump and zfs
send from other systems, including ZFS systems.
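That kind of backup arrangement can be sketched roughly as follows; the
host name, pool names, and dataset names here are all hypothetical, not
from the original post:

```sh
#!/bin/sh
# Hypothetical sketch: snapshot a dataset on a remote ZFS host and
# stream it into a backup pool on the NAS with zfs send/receive.
SNAP="tank/home@$(date +%Y%m%d)"
ssh source-host zfs snapshot "$SNAP"
ssh source-host zfs send "$SNAP" | zfs receive -u backup/source-host/home
```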
The slowdowns that I experience under 12.0 come when I have a lot of
network activity and am playing three videos (it's my primary
workstation). The disk arrays stay responsive. The 12.0 system sees
heavy use, but not very heavy use: several rsync/etc. operations under
cron (including rsyncing a Debian archive) and duty as a network
gateway/router.
On different hardware and under FreeBSD 9.0, I had a lot of problems
with crap disks that I did not have under 8.x, and so I downgraded. I
didn't have a problem under 10.0.
I believe in keeping my BIOS/firmware updated on an annual cycle, but
firmware can be tricky. It turns out that a lot of bugs get fixed that
never make their way into the release notes, and new ones get
introduced.
> During the upgrade I also upgraded all the firmware for the hardware.
> And now I have a very serious issue with my server.
> Configuration:
> Dell PowerEdge R740Xd with H730P, 192 GB RAM, 2 SAS mechanical disks
> for the system,
> 2 SSDs (in a zfs pool) for the mail index (cyrus), and 28 mechanical
> disks (in a second zfs pool) for the mailboxes.
> The problem:
> After running a few days, the zfs pool with the 2 SSDs is not
> responding.
> The system is working perfectly.
> The second zpool (mechanical disks) is working perfectly.
> I get zero logs, zero messages on the console or in dmesg.
> The arc_size is correct; it's around 70-75%.
> The moment the zfs pool becomes unresponsive is random, not related to
> any activity (human or cron).
> The only options I pass to the kernel related to ZFS are
> vfs.zfs.min_auto_ashift=12 and
> vfs.zfs.prefetch_disable=1. Without the second one the system stops
> responding (under 11.2) when the server sends (through zfs send) the
> data to another server.
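For reference, both of those knobs are tunables that would normally be
set in /boot/loader.conf; this fragment just restates the values given
above:

```sh
# ZFS tunables from the post, as set in /boot/loader.conf:
vfs.zfs.min_auto_ashift=12    # force 4K-aligned allocation on new vdevs
vfs.zfs.prefetch_disable=1    # disable ZFS file-level prefetch
```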
> After the first problem I did a zfs upgrade, thinking maybe that was
> the problem, so I'm not sure I can downgrade to 11.2 (and 11.2 is
> EOL).
> In your opinion:
> 1/ What should I do to try to find the problem?
> 2/ Do you think that's a hardware/firmware problem or a FreeBSD
> problem? The point is the second zpool is working perfectly, so I'm
> thinking it could be some firmware/hardware/compatibility problem.
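When a pool hangs with nothing in the logs, a few standard FreeBSD
commands can help narrow down whether the controller, the disks, or ZFS
itself is stalling; these are general suggestions, not steps from the
original thread:

```sh
zpool status -v        # per-vdev state and any read/write/cksum errors
zpool iostat -v 5      # per-vdev throughput; a stalled vdev shows zeros
gstat -p               # live per-disk busy%/latency from GEOM
camcontrol devlist     # check whether the controller still sees the SSDs
```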
> Albert SHIH
> DIO bâtiment 15
> Observatoire de Paris
> Heure local/Local time:
> Thu 29 Aug 2019 12:26:55 AM CEST
> freebsd-questions at freebsd.org mailing list