I/O error on the pool created using g_multipath device

Sowmya L sowmya at cloudbyte.co
Wed Jul 10 04:41:21 UTC 2013


Hi,
I am seeing read/write errors on the pool when the active-path cable is
pulled from the JBOD while I/O is running on the pool.

*FreeBSD version*: 9.0

*patches taken from stable/9*:

                MFC r234415 <http://svnweb.freebsd.org/base?view=revision&revision=234415>

                MFC r227464 <http://svnweb.freebsd.org/base?view=revision&revision=227464>,
                r227471 <http://svnweb.freebsd.org/base?view=revision&revision=227471>
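
For context, revisions like these are usually cherry-picked into the source
tree with svn and a kernel rebuild; the sketch below shows the mechanism, but
the tree location, branch and build options are my assumptions, not the exact
steps used here:

    # Sketch: merge the selected revisions into a checked-out source tree
    # (adjust ^/head vs. ^/stable/9 to wherever the revisions live)
    cd /usr/src
    svn merge -c 227464,227471,234415 ^/head .
    make buildkernel KERNCONF=GENERIC
    make installkernel KERNCONF=GENERIC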

*g_multipath configuration:*

Geom name: newdisk2
Type: AUTOMATIC
Mode: Active/Passive
UUID: 1ea053ef-e4a5-11e2-9887-00e0ed158a78
State: OPTIMAL
Providers:
1. Name: multipath/newdisk2   Mediasize: 2000398933504 (1.8T)   Sectorsize: 512   Mode: r0w0e0   State: OPTIMAL
Consumers:
1. Name: da0   Mediasize: 2000398934016 (1.8T)   Sectorsize: 512   Mode: r1w1e1   State: ACTIVE
2. Name: da6   Mediasize: 2000398934016 (1.8T)   Sectorsize: 512   Mode: r1w1e1   State: PASSIVE

Geom name: newdisk1
Type: AUTOMATIC
Mode: Active/Passive
UUID: 166e0467-e4a5-11e2-9887-00e0ed158a78
State: OPTIMAL
Providers:
1. Name: multipath/newdisk1   Mediasize: 299999999488 (279G)   Sectorsize: 512   Mode: r0w0e0   State: OPTIMAL
Consumers:
1. Name: da2   Mediasize: 300000000000 (279G)   Sectorsize: 512   Mode: r1w1e1   State: ACTIVE
2. Name: da4   Mediasize: 300000000000 (279G)   Sectorsize: 512   Mode: r1w1e1   State: PASSIVE

Geom name: newdisk3
Type: AUTOMATIC
Mode: Active/Active
UUID: 78a76feb-e84f-11e2-9b69-00e0ed158a78
State: OPTIMAL
Providers:
1. Name: multipath/newdisk3   Mediasize: 299999999488 (279G)   Sectorsize: 512   Mode: r1w1e1   State: OPTIMAL
Consumers:
1. Name: da7   Mediasize: 300000000000 (279G)   Sectorsize: 512   Mode: r2w2e2   State: ACTIVE
2. Name: da8   Mediasize: 300000000000 (279G)   Sectorsize: 512   Mode: r2w2e2   State: ACTIVE

Geom name: newdisk
Type: AUTOMATIC
Mode: Active/Passive
UUID: 0a42c877-e4a5-11e2-9887-00e0ed158a78
State: OPTIMAL
Providers:
1. Name: multipath/newdisk   Mediasize: 2000398933504 (1.8T)   Sectorsize: 512   Mode: r0w0e0   State: OPTIMAL
Consumers:
1. Name: da1   Mediasize: 2000398934016 (1.8T)   Sectorsize: 512   Mode: r1w1e1   State: ACTIVE
2. Name: da3   Mediasize: 2000398934016 (1.8T)   Sectorsize: 512   Mode: r1w1e1   State: PASSIVE

Geom name: newdisk4
Type: AUTOMATIC
Mode: Active/Active
UUID: 7b9f9e79-e84f-11e2-9b69-00e0ed158a78
State: OPTIMAL
Providers:
1. Name: multipath/newdisk4   Mediasize: 299999999488 (279G)   Sectorsize: 512   Mode: r1w1e1   State: OPTIMAL
Consumers:
1. Name: da5   Mediasize: 300000000000 (279G)   Sectorsize: 512   Mode: r2w2e2   State: ACTIVE
2. Name: da9   Mediasize: 300000000000 (279G)   Sectorsize: 512   Mode: r2w2e2   State: ACTIVE
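
For reference, multipath devices with this layout would normally be labelled
roughly as follows; this is a sketch based on the configuration above (the -A
flag for Active/Active mode assumes the rewritten gmultipath from the MFC'd
revisions), not the exact commands that were run:

    # Active/Passive: the second path stays passive until the first fails
    gmultipath label newdisk2 /dev/da0 /dev/da6
    # Active/Active: I/O is spread across both paths
    gmultipath label -A newdisk3 /dev/da7 /dev/da8
    gmultipath label -A newdisk4 /dev/da5 /dev/da9
    # Verify the resulting geoms
    gmultipath status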


*pool configuration:*

  pool: mypool
 state: ONLINE
  scan: none requested
config:

        NAME                    STATE     READ WRITE CKSUM
        mypool                  ONLINE       0     0     0
          mirror-0              ONLINE       0     0     0
            multipath/newdisk3  ONLINE       0     0     0
            multipath/newdisk4  ONLINE       0     0     0

errors: No known data errors
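
The errors show up with a test along these lines while the active cable is
pulled from the JBOD; this is only a sketch of the procedure, and the mount
point, file name and dd parameters are assumptions (the administrative fail
command also assumes the rewritten gmultipath from the MFC'd revisions):

    # Keep sustained I/O running on the pool
    dd if=/dev/zero of=/mypool/testfile bs=1m count=100000 &
    # Pull the cable of the active path (or fail it administratively
    # with "gmultipath fail newdisk3 da7"), then watch the state
    gmultipath status
    zpool status -v mypool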

      Are there any dependencies for the patches that were taken from the stable/9 code?

-- 

Thanks & Regards,
Sowmya L

