ZFS and ISCSI question

Kaya Saman kayasaman at gmail.com
Mon Jul 31 12:10:07 UTC 2017

On 07/31/2017 11:05 AM, Eugene M. Zheganin wrote:
> Hi.
> On 30.07.2017 17:19, Kaya Saman wrote:
>> I understand that iscsi works at the "block device" level but how 
>> would one go about using ZFS on the initiator?
>> The standard ZFS commands can be run:
>> zpool followed by zfs FS-set on the Initiator machine
>> however, it doesn't seem right to first create a ZFS pool on the 
>> Target system then create another one on the same pool on the Initiator.
>> Would zpool import/export work or does something else need to be done 
>> to get the Initiator to create a ZFS data set? 
> Zvol is a block device indeed, but, even though it is an entity from a 
> parent zfs pool, it doesn't contain any filesystem, including zfs. Thus 
> the kernel won't see anything on it. So you have to create a zpool on it 
> first with 'zpool create'.
> Eugene.
> _______________________________________________
> freebsd-fs at freebsd.org mailing list
> https://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe at freebsd.org"

Hmm.... basically what I am trying to achieve is to be able to create a 
zpool and ZFS file system on the Initiator system.

Of course on the Target one could run:

zpool create pool_1 <device_list>
zfs create -V <size> pool_1/zvol

and on the Initiator:

zpool create pool <zvol_device>
zfs create pool/fs-set
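For completeness, to actually export that zvol over iSCSI, the Target side 
would also need a LUN entry in /etc/ctl.conf pointing at the zvol's device 
node, roughly like this (the target and portal-group names are just 
placeholders matching my test config):

```
target iqn.2012-06.com.example:target0 {
        auth-group no-authentication
        portal-group pg0

        lun 0 {
                # zvol-backed LUN: the path is the /dev/zvol node;
                # no size line is needed, the zvol's own size is used
                path /dev/zvol/pool_1/zvol
        }
}
```

After editing the file, "service ctld reload" should make ctld re-read the 
configuration.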

but would that be recommended, given that one would effectively be layering 
two zpools on the same set of underlying disks?

As an alternative I have tried something like this:


portal-group pg0 {
     discovery-auth-group no-authentication
     listen [::]
}

target iqn.2012-06.com.example:target0 {
     auth-group no-authentication
     portal-group pg0

     lun 0 {
#        path /dev/zvol/iscsi-tst/tank
#        size 900M
         path /data/disk1
         size 200M
     }

     lun 1 {
             path /data/disk2
             size 200M
     }

#    lun 2 {
#            path /data/disk3
#            size 500M
#    }

#    lun 3 {
#            path /data/disk4
#            size 500M
#    }
}

target iqn.2012-06.com.example:target1 {
     auth-group no-authentication
     portal-group pg0

     lun 2 {
             path /data/disk3
             size 500M
     }

     lun 3 {
             path /data/disk4
             size 500M
     }
}
Then on Initiator:

# iscsictl -L
Target name                          Target portal    State
iqn.2012-06.com.example:target0      <IP>    Connected: da24 da25
iqn.2012-06.com.example:target1      <IP>    Connected: da26 da27
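(For reference, those sessions were set up with iscsictl. A persistent way 
to do it, assuming a placeholder portal address of 192.0.2.10, is one 
/etc/iscsi.conf entry per target:)

```
t0 {
        TargetAddress   = 192.0.2.10
        TargetName      = iqn.2012-06.com.example:target0
}
t1 {
        TargetAddress   = 192.0.2.10
        TargetName      = iqn.2012-06.com.example:target1
}
```

and then "iscsictl -An t0" / "iscsictl -An t1", or a one-off attach with 
"iscsictl -A -p 192.0.2.10 -t iqn.2012-06.com.example:target0".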

so then the zpool becomes:

# zpool list iscsi-tst
NAME        SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
iscsi-tst   672M   936K   671M         -     0%     0%  1.00x  ONLINE  -

# zpool status iscsi-tst
  pool: iscsi-tst
 state: ONLINE
status: One or more devices are configured to use a non-native block size.
        Expect reduced performance.
action: Replace affected devices with devices that support the
        configured block size, or migrate data to a properly configured
        pool.
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        iscsi-tst   ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            da24    ONLINE       0     0     0  block size: 8192B configured, 16384B native
            da25    ONLINE       0     0     0  block size: 8192B configured, 16384B native
          mirror-1  ONLINE       0     0     0
            da26    ONLINE       0     0     0  block size: 8192B configured, 16384B native
            da27    ONLINE       0     0     0  block size: 8192B configured, 16384B native

errors: No known data errors
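(A side note on that block-size warning: the pool was created with an 8192B 
allocation size while the LUNs report 16384B native sectors. A small sketch 
of the arithmetic, plus the FreeBSD knob I believe applies here, 
vfs.zfs.min_auto_ashift -- untested on this exact setup:)

```shell
# ashift is log2 of the pool's allocation block size; the warning fires
# when it is below the device's native sector size. For 16384B sectors
# the matching ashift is 14:
native=16384
ashift=0
n=$native
while [ "$n" -gt 1 ]; do
        n=$((n / 2))
        ashift=$((ashift + 1))
done
echo "ashift=$ashift"   # prints: ashift=14

# Silencing the warning would mean re-creating the pool with the larger
# ashift, e.g. (destructive -- placeholder commands, not run here):
#   sysctl vfs.zfs.min_auto_ashift=14
#   zpool destroy iscsi-tst
#   zpool create iscsi-tst mirror da24 da25 mirror da26 da27
```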

Then the zfs datasets:

# zfs list iscsi-tst
NAME        USED  AVAIL  REFER  MOUNTPOINT
iscsi-tst   816K   639M   192K  /iscsi-tst

# zfs list iscsi-tst/tank
NAME             USED  AVAIL  REFER  MOUNTPOINT
iscsi-tst/tank   192K   639M   192K  /iscsi-tst/tank

So, for the best redundancy ("hot swap" etc...), what would be the best 
solution? Or is there an iSCSI "Best Practice" for not getting totally 
burned if something goes wrong with an iSCSI-attached drive? <-- taking 
backups of data excluded of course :-)


