Re: mirror vdevs with different sizes
- In reply to: Martin Simmons : "Re: mirror vdevs with different sizes"
Date: Fri, 25 Mar 2022 17:34:09 UTC
Yes, exactly. There's nothing mysterious about large vdevs in ZFS;
it's just that a greater fraction of the OP's pool's data will be
stored on the new disks, but their performance likely won't be much
better than that of the old disks.
-Alan
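
[Editor's note: ZFS biases new writes toward the top-level vdevs with the most free space, which is why the empty 16 TB mirrors absorb a disproportionate share of incoming data. Below is a simplified sketch of that proportional split; the numbers are assumptions taken from the OP's description (twelve 8 TB mirrors at ~60% full plus twelve new 16 TB mirrors), and the real metaslab allocator weighs more than raw free space.]

```python
# Simplified model of ZFS write allocation: new writes land on each
# top-level vdev roughly in proportion to its free space, so emptier
# (here, larger) vdevs receive more of the incoming data.

def write_fractions(free_per_vdev):
    """Fraction of incoming writes each vdev receives, by free space."""
    total_free = sum(free_per_vdev)
    return [f / total_free for f in free_per_vdev]

old = [8 * 0.4] * 12   # twelve 8 TB mirrors, ~40% free -> 3.2 TB free each
new = [16.0] * 12      # twelve 16 TB mirrors, empty
fracs = write_fractions(old + new)
# Each new 16 TB vdev takes about 16/230.4 ~ 6.9% of new writes,
# versus about 1.4% for each old 8 TB vdev.
```

Treat these fractions as a first-order estimate only; `zpool iostat -v` shows the actual per-vdev write distribution on a live pool.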
On Fri, Mar 25, 2022 at 11:05 AM Martin Simmons <martin@lispworks.com> wrote:
>
> Is "the new disks will have a lower ratio of IOPS/TB" another way of saying
> "more of the data will be stored on the new disks, so they will be accessed
> more frequently"? Or is this something about larger vdevs in general?
>
> __Martin
>
>
> >>>>> On Fri, 25 Mar 2022 10:09:39 -0600, Alan Somers said:
> >
> > There's nothing wrong with doing that. The performance won't be
> > perfectly balanced, because the new disks will have a lower ratio of
> > IOPS/TB. But that's fine. Go ahead.
> > -Alan
> >
> > On Fri, Mar 25, 2022 at 9:17 AM John Doherty <bsdlists@jld3.net> wrote:
> > >
> > > Hello, I have an existing zpool with 12 mirrors of 8 TB disks. It is
> > > currently about 60% full and we expect to fill the remaining space
> > > fairly quickly.
> > >
> > > I would like to expand it, preferably using 12 mirrors of 16 TB disks.
> > > Any reason I shouldn't do this?
> > >
> > > Using plain files created with truncate(1) like these:
> > >
> > > [root@ibex] # ls -lh /vd/vd*
> > > -rw-r--r-- 1 root wheel 8.0G Mar 25 08:49 /vd/vd0
> > > -rw-r--r-- 1 root wheel 8.0G Mar 25 08:49 /vd/vd1
> > > -rw-r--r-- 1 root wheel 16G Mar 25 08:49 /vd/vd2
> > > -rw-r--r-- 1 root wheel 16G Mar 25 08:49 /vd/vd3
> > >
> > > I can first do this:
> > >
> > > [root@ibex] # zpool create ztest mirror /vd/vd{0,1}
> > > [root@ibex] # zpool list ztest
> > > NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
> > > ztest  7.50G   384K  7.50G        -         -     0%     0%  1.00x  ONLINE  -
> > >
> > > And then do this:
> > >
> > > [root@ibex] # zpool add ztest mirror /vd/vd{2,3}
> > > [root@ibex] # zpool list ztest
> > > NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
> > > ztest   23G   528K  23.0G        -         -     0%     0%  1.00x  ONLINE  -
> > >
> > > And FWIW, everything works as expected. But I've never constructed a
> > > real zpool with vdevs of different sizes and I don't know whether there
> > > might be any expected problems.
> > >
> > > I could just create a new zpool with new disks, but most of the existing
> > > data and most of the expected new data is in just two file systems and
> > > for simplicity's sake from the perspective of those users, it would be
> > > nicer to just make the existing file systems larger than to give them
> > > access to a new, different one.
> > >
> > > Any comments, suggestions, warnings, etc. much appreciated. Thanks.
> > >
> >