From: Daniel Kalchev <daniel@digsys.bg>
Subject: Re: Upsizing a ZFS filesystem - shift question
Date: Fri, 28 May 2021 17:50:13 +0300
To: Alan Somers
Cc: joe mcguckin, freebsd-fs

The ashift property is per-vdev, so you can have vdevs with different
ashift values in the same pool.

Another good option, if the system is running a recent FreeBSD whose ZFS
has the device_removal feature and you are not using raidz: add the new
vdev, then remove the old vdevs one by one. Eventually you will end up
with only the new vdevs. This keeps the data online the whole time, but
performance will of course be best if you recreate the zpool; you can
use zfs send/receive to move the data to the new pool.

Daniel
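P.S. A rough sketch of the device_removal route, assuming a pool named
tank built from mirrors, with the new 16T disks showing up as da4 and
da5 (device names are made up, so adjust before running anything):

    zpool add -o ashift=12 tank mirror da4 da5   # new vdev with a 4Kn-friendly ashift
    zpool remove tank mirror-0                   # evacuate an old top-level vdev
    zpool status tank                            # reports the evacuation progress

And the send/receive alternative, again only a sketch, assuming the
freshly created pool is called newtank:

    zfs snapshot -r tank@migrate                 # recursive snapshot of everything
    zfs send -R tank@migrate | zfs receive -F newtank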
> On 27 May 2021, at 21:36, Alan Somers wrote:
> 
> ashift affects how the data is laid out on disk, so you can't change it
> after the drives have been replaced. And you don't want to use ashift=9
> for the new drives; their performance will suck. Your best bet is to
> create a new zpool, then send | receive the data from the old pool.
> -Alan
> 
> On Thu, May 27, 2021 at 12:33 PM joe mcguckin wrote:
> 
>> I have an existing 32TB filesystem (2 x 8T vdevs). I want to upgrade
>> each of the drives to 16TB. I pulled one drive and tried a 'zfs
>> replace' command, but zfs replied with a message that the new drive's
>> optimal ashift doesn't match the vdev. It suggested retrying the
>> 'replace' command with 'zfs replace -o ashift=N'.
>> 
>> OK, a little investigation shows the existing filesystem has an ashift
>> of 9. These are older 2T drives with 512-byte sectors. The newer
>> drives have 4Kn sectors.
>> 
>> Is ashift settable on a drive-by-drive basis? Can I temporarily set
>> ashift on a drive until all the drives in a vdev have been replaced?
>> Once all drives have been replaced, how do I set ashift=12 for all the
>> drives in the vdev?
>> 
>> After replacing all the drives, will the additional space magically
>> appear, or is there an additional command or series of steps?
>> 
>> Thanks,
>> 
>> Joe
>> 
>> Joe McGuckin
>> ViaNet Communications
>> 
>> joe@via.net
>> 650-207-0372 cell
>> 650-213-1302 office
>> 650-969-2124 fax
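On Joe's last question above: the extra capacity does not appear by
itself unless the pool's autoexpand property is on (it is off by
default). A minimal sketch, again assuming a pool named tank and a
replaced disk da0 (hypothetical names):

    zpool set autoexpand=on tank   # grow automatically once a whole vdev is bigger
    zpool online -e tank da0       # or expand a single device by hand
    zdb -C tank | grep ashift      # shows the ashift actually in use per vdev

Either way, the space only becomes available after every disk in the
vdev has been replaced, and the vdev's ashift itself never changes
after creation.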