From nobody Fri Mar 25 17:34:09 2022
List-Id: Filesystems
List-Archive: https://lists.freebsd.org/archives/freebsd-fs
Subject: Re: mirror vdevs with different sizes
From: Alan Somers <asomers@gmail.com>
Date: Fri, 25 Mar 2022 11:34:09 -0600
To: Martin Simmons
Cc: John Doherty, freebsd-fs
References: <95932839-F6F8-4DCB-AA7F-46040CFA1DE1@jld3.net> <202203251705.22PH56du029811@higson.cam.lispworks.com>
In-Reply-To: <202203251705.22PH56du029811@higson.cam.lispworks.com>

Yes, exactly.  There's nothing mysterious about large vdevs in ZFS; it's
just that a greater fraction of the OP's pool's data will be stored on
the new disks, and their performance isn't likely to be much better than
the old disks'.
-Alan

On Fri, Mar 25, 2022 at 11:05 AM Martin Simmons wrote:
>
> Is "the new disks will have a lower ratio of IOPS/TB" another way of saying
> "more of the data will be stored on the new disks, so they will be accessed
> more frequently"?  Or is this something about larger vdevs in general?
>
> __Martin
>
>
> >>>>> On Fri, 25 Mar 2022 10:09:39 -0600, Alan Somers said:
> >
> > There's nothing wrong with doing that.  The performance won't be
> > perfectly balanced, because the new disks will have a lower ratio of
> > IOPS/TB.  But that's fine.  Go ahead.
> > -Alan
> >
> > On Fri, Mar 25, 2022 at 9:17 AM John Doherty wrote:
> > >
> > > Hello, I have an existing zpool with 12 mirrors of 8 TB disks.  It is
> > > currently about 60% full and we expect to fill the remaining space
> > > fairly quickly.
> > >
> > > I would like to expand it, preferably using 12 mirrors of 16 TB disks.
> > > Any reason I shouldn't do this?
> > >
> > > Using plain files created with truncate(1) like these:
> > >
> > > [root@ibex] # ls -lh /vd/vd*
> > > -rw-r--r--  1 root  wheel  8.0G Mar 25 08:49 /vd/vd0
> > > -rw-r--r--  1 root  wheel  8.0G Mar 25 08:49 /vd/vd1
> > > -rw-r--r--  1 root  wheel   16G Mar 25 08:49 /vd/vd2
> > > -rw-r--r--  1 root  wheel   16G Mar 25 08:49 /vd/vd3
> > >
> > > I can first do this:
> > >
> > > [root@ibex] # zpool create ztest mirror /vd/vd{0,1}
> > > [root@ibex] # zpool list ztest
> > > NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
> > > ztest  7.50G   384K  7.50G        -         -    0%   0%  1.00x  ONLINE  -
> > >
> > > And then do this:
> > >
> > > [root@ibex] # zpool add ztest mirror /vd/vd{2,3}
> > > [root@ibex] # zpool list ztest
> > > NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
> > > ztest   23G   528K  23.0G        -         -    0%   0%  1.00x  ONLINE  -
> > >
> > > And FWIW, everything works as expected.  But I've never constructed a
> > > real zpool with vdevs of different sizes and I don't know whether there
> > > might be any expected problems.
> > >
> > > I could just create a new zpool with new disks, but most of the existing
> > > data and most of the expected new data is in just two file systems, and
> > > for simplicity's sake from the perspective of those users, it would be
> > > nicer to just make the existing file systems larger than to give them
> > > access to a new, different one.
> > >
> > > Any comments, suggestions, warnings, etc. much appreciated.  Thanks.
> > >
> >
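[Editor's note: a back-of-the-envelope sketch of the IOPS/TB point from the
thread.  ZFS's allocator tends to favor vdevs with more free space, so this
models new writes as landing in simple proportion to per-vdev free space.
That proportionality is an approximation of the allocator's bias, not its
actual metaslab algorithm, and the numbers use the OP's pool: 12 mirrors of
8 TB at 60% full plus 12 new mirrors of 16 TB.]

```python
def write_shares(vdev_free):
    """Approximate fraction of new writes each vdev receives,
    assuming writes are distributed in proportion to free space
    (a rough model of ZFS's allocator bias, not its real algorithm)."""
    total = sum(vdev_free)
    return [f / total for f in vdev_free]

# Free space in TB: 12 old 8 TB mirrors at 60% full (3.2 TB free each),
# then 12 new 16 TB mirrors (entirely free).
free = [8 * 0.4] * 12 + [16.0] * 12

shares = write_shares(free)
old_share = sum(shares[:12])
new_share = sum(shares[12:])
print(f"old mirrors get ~{old_share:.0%} of new writes")
print(f"new mirrors get ~{new_share:.0%} of new writes")
```

Under this model the new mirrors absorb roughly five sixths of incoming
writes, which is why most reads of recently written data will also hit the
new disks even though they offer about the same IOPS per spindle as the old
ones.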