From: "Michael Jung" <mikej@paymentallianceintl.com>
To: freebsd-fs
Subject: DRAID - Expansion and other issues
Date: Wed, 6 Apr 2022 13:22:01 +0000
List-Archive: https://lists.freebsd.org/archives/freebsd-fs

Hi!

 

I started playing with draid some months ago and I have a problem I cannot
figure out.

 

I started out with a single draid2:2d:10c:0s and life was good.  Ran some
tests, added a special device and life was better. Then I expanded the pool
by adding another draid2:2d:10c:0s.  Now the problem: the pool says it has
~18TB worth of space, but the file system only shows ~9TB. It did not
auto-expand.

 

I'm currently on main-n253875-8e72f458c6d, but I initially spun this up
just after DRAID was brought into the tree, or at least when I became aware
of it. I have tried individually onlining devices in the pool with "-e",
exporting/importing the pool, etc., and basically every suggestion my
google-fu would lead me to.
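Concretely, the attempts looked along these lines (a sketch; gpt/d11 stands
in for each leaf vdev in turn, and the exact order varied between tries):

```shell
# Expansion attempts (sketch; repeat the online for every member device):
zpool set autoexpand=on tank
zpool online -e tank gpt/d11      # ask ZFS to grow onto any new space
zpool export tank                 # force a re-read of vdev sizes
zpool import tank
zfs list tank                     # available space still looks like ~9TB
```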

 

I have no useful data on this pool and I can destroy and re-create it, but
I would like to save some time and get opinions or facts on:

 

Can a draid pool be expanded after a special device has been added?  It
would seem so, but the filesystem does not reflect it.

 

What is the correct way to wire down the following?  I have had problems
after not doing this since the 2.x days and would really like to wire down
my draid devices now that I have other pools.

 

I have pool construction all scripted but of course I want = physical disk da<x> to always be

consistent. Then I easily test any construct someone comes = up with.
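For reference, the scripted construction is essentially this shape (a
sketch, not the actual script; the GPT label names here are illustrative,
since the real pool mixes gpt/* labels and raw daXp1 names):

```shell
# Sketch of the pool build: one draid2 vdev (2 data disks per redundancy
# group, 10 children, 0 spares), later grown with a second identical vdev.
zpool create tank draid2:2d:10c:0s \
    gpt/d10 gpt/d11 gpt/d12 gpt/d13 gpt/d14 \
    gpt/d15 gpt/d16 gpt/d17 gpt/d18 gpt/d19
zpool add tank draid2:2d:10c:0s \
    gpt/d20 gpt/d21 gpt/d22 gpt/d23 gpt/d24 \
    gpt/d25 gpt/d26 gpt/d27 gpt/d28 gpt/d29
```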

 

root@draid:/home/mikej # camcontrol devlist -b

scbus0 on ata0 bus 0

scbus1 on ata1 bus 0

scbus2 on mpt0 bus 0

scbus3 on mps0 bus 0   <--

scbus4 on camsim0 bus 0

scbus-1 on xpt0 bus 0

root@draid:/home/mikej #

 

 

<dmesg>

mps0: <Avago Technologies (LSI) SAS2008> port 0x5000-0x50ff mem 0xfd4fc000-0xfd4fffff,0xfd480000-0xfd4bffff irq 19 at device 0.0 on pci4
mps0: Firmware: 20.00.04.00, Driver: 21.02.00.00-fbsd
mps0: IOCCapabilities: 1285c<ScsiTaskFull,DiagTrace,SnapBuf,EEDP,TransRetry,EventReplay,HostDisc>

 

<camcontrol>

<SEAGATE ST91000640SS AS09>        at scbus3 target 63 lun 0 (da42,pass35)
<SEAGATE ST91000640SS AS09>        at scbus3 target 67 lun 0 (da46,pass39)
<SEAGATE ST91000640SS AS08>        at scbus3 target 68 lun 0 (da47,pass40)

 

EXAMPLE: Is this correct?

 

hint.scbus.3.at="mps0"

hint.da.42.at="scbus3"
hint.da.42.target="63"
hint.da.42.unit="0"

hint.da.46.at="scbus3"
hint.da.46.target="67"
hint.da.46.unit="0"

hint.da.47.at="scbus3"
hint.da.47.target="68"
hint.da.47.unit="0"

 

I will try again, but this did not seem to work for me.  This is my home
lab, but it's now painful to spin down this host as it's an iSCSI target
for several machines for an ESXi project I'm working on, so I would like to
put the wired-down SCSI devices question to rest.
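When I do retry the wiring, the plan for verifying it after reboot is
roughly this (a sketch; the hint names assume the example hints above were
loaded):

```shell
# Check that the loader hints were picked up and the unit numbers stuck:
kenv | grep '^hint\.da\.'           # hints as the kernel environment sees them
camcontrol devlist | grep scbus3    # da42/da46/da47 should keep their numbers
```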

 

And yes, I have scheduled VM backups for my project to another data store
that is not this box ;-)

 

It's also odd that I did not get all GPT labels in the zpool. All disks
have GPT labels.

 

I have added the following to loader.conf for the next reboot:

 

kern.geom.label.disk_ident.enable="0"
kern.geom.label.gptid.enable="1"
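To see which members actually carry usable GPT labels, this is the kind of
check I mean (a sketch; da39 is one of the members that shows up unlabeled
in the pool):

```shell
gpart show -l da39      # prints the GPT partition label column, if any
glabel status           # lists the gpt/* providers GEOM currently exports
```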

 

This is an old Dell MD-1000 shelf I had laying around with a bunch of 1TB
drives which I thought would be perfect for playing around with draid.

 

mikej@draid:~ $ zpool get all tank | grep expand
tank  autoexpand  on     local
tank  expandsize  -      -
mikej@draid:~ $ zpool list -v tank
NAME                    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
tank                   18.5T  8.02T  10.4T        -         -     0%    43%  1.00x    ONLINE  -
  draid2:2d:10c:0s-0   9.03T  4.06T  4.97T        -         -     0%  44.9%      -    ONLINE
    gpt/da0p1              -      -      -        -         -      -      -      -    ONLINE
    gpt/d10p1              -      -      -        -         -      -      -      -    ONLINE
    da39p1                 -      -      -        -         -      -      -      -    ONLINE
    da40p1                 -      -      -        -         -      -      -      -    ONLINE
    da41p1                 -      -      -        -         -      -      -      -    ONLINE
    da43p1                 -      -      -        -         -      -      -      -    ONLINE
    da44p1                 -      -      -        -         -      -      -      -    ONLINE
    da42p1                 -      -      -        -         -      -      -      -    ONLINE
    da45p1                 -      -      -        -         -      -      -      -    ONLINE
    gpt/d19p1              -      -      -        -         -      -      -      -    ONLINE
  draid2:2d:10c:0s-1   9.03T  3.96T  5.07T        -         -     0%  43.8%      -    ONLINE
    da46p1                 -      -      -        -         -      -      -      -    ONLINE
    gpt/d11p1              -      -      -        -         -      -      -      -    ONLINE
    gpt/d12p1              -      -      -        -         -      -      -      -    ONLINE
    gpt/d13p1              -      -      -        -         -      -      -      -    ONLINE
    gpt/d14p1              -      -      -        -         -      -      -      -    ONLINE
    gpt/d15p1              -      -      -        -         -      -      -      -    ONLINE
    gpt/d16p1              -      -      -        -         -      -      -      -    ONLINE
    gpt/d17p1              -      -      -        -         -      -      -      -    ONLINE
    gpt/d18p1              -      -      -        -         -      -      -      -    ONLINE
    da18p1                 -      -      -        -         -      -      -      -    ONLINE
special                    -      -      -        -         -      -      -      -    -
  mirror-3              398G  2.86G   395G        -         -     0%  0.71%      -    ONLINE
    gpt/special0           -      -      -        -         -      -      -      -    ONLINE
    gpt/special1           -      -      -        -         -      -      -      -    ONLINE
logs                       -      -      -        -         -      -      -      -    -
  mirror-2             15.5G   256K  15.5G        -         -     0%  0.00%      -    ONLINE
    gpt/slog0              -      -      -        -         -      -      -      -    ONLINE
    gpt/slog1              -      -      -        -         -      -      -      -    ONLINE

 

mikej@draid:~ $ zfs get mountpoint tank
NAME  PROPERTY    VALUE   SOURCE
tank  mountpoint  /tank   default

 

 

mikej@draid:~ $ df /tank
Filesystem   1K-blocks       Used      Avail Capacity  Mounted on
tank        9558892004 4301310224 5257581780    45%    /tank   <-- ~9TB not 18TB
mikej@draid:~ $
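One thing worth double-checking before concluding the expansion failed:
`zpool list` reports raw capacity including parity, while `df`/`zfs list`
report usable space after redundancy. For draid2:2d each redundancy group
is 2 data + 2 parity strips, so usable space should be roughly half of raw.
A quick sanity check of that arithmetic (pure shell, assuming the 50%
efficiency holds exactly):

```shell
# draid2:2d -> 2 data + 2 parity strips per group: usable = raw * 2/(2+2).
raw_tib="18.5"                    # raw SIZE reported by `zpool list`
usable=$(awk -v raw="$raw_tib" 'BEGIN { printf "%.2f", raw * 2 / (2 + 2) }')
echo "expected usable: ~${usable} TiB"   # ~9.25 TiB, close to what df shows
```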

 

mikej@draid:~ $ zpool list
NAME          SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
ccache       9.50G  9.10G   406M        -         -    89%    95%  1.00x    ONLINE  -
raid-5400-1  6.28T  1.19T  5.09T        -         -     6%    19%  1.00x    ONLINE  -
tank         18.5T  8.02T  10.4T        -         -     0%    43%  1.00x    ONLINE  -   <--
zfsroot       103G  32.6G  70.4G        -         -    25%    31%  1.00x    ONLINE  -
mikej@draid:~ $

 

Thanks in advance for any comments or suggestions.

 

--mikej

 

 

 

CONFIDENTIALITY NOTE: This message is intended only for the use
of the individual or entity to whom it is addressed and may
contain information that is privileged, confidential, and
exempt from disclosure under applicable law. If the reader
of this message is not the intended recipient, you are hereby
notified that any dissemination, distribution or copying
of this communication is strictly prohibited. If you have
received this transmission in error, please notify us by
telephone at (502) 212-4000 or notify us at: PAI, Dept. 99,
2101 High Wickham Place, Suite 101, Louisville, KY 40245




