[Bug 215503] net/glusterfs: GlusterFS client does not refresh the content of files

bugzilla-noreply at freebsd.org
Fri Dec 23 00:30:30 UTC 2016


https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=215503

            Bug ID: 215503
           Summary: net/glusterfs: GlusterFS client does not refresh the
                    content of files
           Product: Ports & Packages
           Version: Latest
          Hardware: Any
                OS: Any
            Status: New
          Severity: Affects Some People
          Priority: ---
         Component: Individual Port(s)
          Assignee: freebsd-ports-bugs at FreeBSD.org
          Reporter: craig001 at lerwick.hopto.org

New issue reported to me via email regarding glusterfs not refreshing files.
Opening this PR to track and fix the issue.

Krzysztof Kosarzycki (Chris) reported:


I have a problem with the glusterfs client on FreeBSD 10.3.
The client does not refresh the content of files, although the
reported file size is always correct. From the server's point of view
everything is fine (bricks are synchronizing, logs are clean, etc.).
Incidentally, a Linux gluster client operating on a FreeBSD brick
behaves correctly.


The scenario was simple:
I created 4 bricks (3 on FreeBSD, 1 on Ubuntu Linux).
Each brick is an additional 500 GB disk (the test was performed in a
VMware 5.5 environment).
The bricks on FreeBSD are on the ZFS file system; the brick on Ubuntu
is on XFS.
All bricks are mounted at /brick1, /brick2, etc.
Command to create the gluster volume:
# gluster volume create test replica 2 z-zt1:/brick1 z-zt2:/brick2 \
    z-zt3:/brick3 z-zt4:/brick4
# gluster volume start test
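
Note: with "replica 2", gluster pairs the bricks in the order given, so
z-zt1:/brick1 mirrors z-zt2:/brick2 and z-zt3:/brick3 mirrors
z-zt4:/brick4 (a 2x2 distributed-replicated volume). The layout can be
confirmed with:

# gluster volume info test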

root at z-zt1:~ # gluster volume status test
Status of volume: test
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick z-zt1:/brick1                         N/A       N/A        N       N/A 
Brick z-zt2:/brick2                         49152     0          Y       9326
Brick z-zt3:/brick3                         49152     0          Y       9315
Brick z-zt4:/brick4                         49152     0          Y       1317
NFS Server on localhost                     2049      0          Y       798
  (NFS server on localhost enabled as a temporary workaround; see below)
Self-heal Daemon on localhost               N/A       N/A        Y       797 
NFS Server on z-zt4                         N/A       N/A        N       N/A 
Self-heal Daemon on z-zt4                   N/A       N/A        Y       1344
NFS Server on z-mail                        N/A       N/A        N       N/A 
Self-heal Daemon on z-mail                  N/A       N/A        Y       3925
NFS Server on z-zt5                         N/A       N/A        N       N/A 
Self-heal Daemon on z-zt5                   N/A       N/A        Y       1363
NFS Server on z-zt3                         N/A       N/A        N       N/A 
Self-heal Daemon on z-zt3                   N/A       N/A        Y       9321
NFS Server on z-zt2                         N/A       N/A        N       N/A 
Self-heal Daemon on z-zt2                   N/A       N/A        Y       9332

Task Status of Volume test
------------------------------------------------------------------------------
Task                 : Rebalance          
ID                   : 3fb64829-7626-4681-a8ca-272567c95ae6
Status               : completed          

root at z-zt1:~ # gluster peer status
Number of Peers: 5

Hostname: z-zt4
Uuid: 719494e9-d584-4016-b918-aa19b8f1957a
State: Peer in Cluster (Connected)

Hostname: z-zt2
Uuid: 0cc9a9f2-0a90-4a8c-bebd-5d2260fbb2e0
State: Peer in Cluster (Connected)

Hostname: z-zt5
Uuid: cfc52d78-6cd3-4e9e-8db6-ce9e67535a51
State: Peer in Cluster (Connected)

Hostname: z-mail
Uuid: a5a40b84-0bca-4fbd-bec8-73594251677e
State: Peer in Cluster (Connected)

Hostname: z-zt3
Uuid: ef1c0986-cd15-4e04-b6f4-8ab1e911a806
State: Peer in Cluster (Connected)
root at z-zt1:~ #

Peers z-zt5 and z-mail are candidates for expanding the test volume
with the next 2 bricks (see the sketch below).
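
For reference, expanding would look something like the following; the
/brick5 and /brick6 paths are placeholders, not taken from the setup
above:

# gluster volume add-brick test replica 2 z-zt5:/brick5 z-mail:/brick6
# gluster volume rebalance test start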

Command to mount the gluster volume:
# mount_glusterfs z-zt1:test /root/test

root at z-zt1:~ # mount
zroot/ROOT/default on / (zfs, local, noatime, nfsv4acls)
devfs on /dev (devfs, local, multilabel)
brick1 on /brick1 (zfs, local, nfsv4acls)
zroot/tmp on /tmp (zfs, local, noatime, nosuid, nfsv4acls)
zroot/usr/home on /usr/home (zfs, local, noatime, nfsv4acls)
zroot/usr/ports on /usr/ports (zfs, local, noatime, nosuid, nfsv4acls)
zroot/usr/src on /usr/src (zfs, local, noatime, nfsv4acls)
zroot/var/audit on /var/audit (zfs, local, noatime, noexec, nosuid, nfsv4acls)
zroot/var/crash on /var/crash (zfs, local, noatime, noexec, nosuid, nfsv4acls)
zroot/var/log on /var/log (zfs, local, noatime, noexec, nosuid, nfsv4acls)
zroot/var/mail on /var/mail (zfs, local, nfsv4acls)
zroot/var/tmp on /var/tmp (zfs, local, noatime, nosuid, nfsv4acls)
zroot on /zroot (zfs, local, noatime, nfsv4acls)
/dev/fuse on /root/test (fusefs, local, synchronous)
root at z-zt1:~ #
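
One thing worth ruling out is FUSE attribute caching. The upstream
glusterfs(8) FUSE client accepts --attribute-timeout and
--entry-timeout options; setting both to 0 should force fresh lookups
instead of serving cached attributes. A sketch of such a mount,
invoking the glusterfs binary directly instead of the mount_glusterfs
wrapper (whether the FreeBSD fuse module honors these timeouts is
untested here):

# glusterfs --volfile-server=z-zt1 --volfile-id=test \
    --attribute-timeout=0 --entry-timeout=0 /root/test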

I tested creating a file and checking it on the other nodes: every
node registers the new file correctly. But when I edit the file on
another node and close it, the node that originally created the file
registers the new file size, but not the new content. When I unmount
the gluster volume and mount it again, everything is fine: new size
and new content.
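
A minimal reproduction of the stale read, assuming the volume is
FUSE-mounted at /root/test on both z-zt1 and z-zt2 (hostnames as
above):

z-zt1# echo "version 1" > /root/test/file.txt
z-zt2# cat /root/test/file.txt            <- shows "version 1"
z-zt2# echo "version 2, longer" > /root/test/file.txt
z-zt1# stat -f %z /root/test/file.txt     <- new size is visible
z-zt1# cat /root/test/file.txt            <- still "version 1" until remount
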
The stale content does not occur when I use the glusterfs client on
Ubuntu and check the changes from another Ubuntu client connected to a
FreeBSD host. The problem occurs on all three FreeBSD hosts, and no
file-system-related parameters are set on the glusterfs client. I
discovered that gluster has an embedded NFS server and enabled this
functionality on the z-zt1 host. When I mount the gluster volume over
NFS, all the problems are gone. Maybe I made some stupid mistake, but
I cannot find where.
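
For reference, the NFS workaround mount: gluster's embedded NFS server
speaks NFSv3 over TCP (port 2049 in the volume status above) and
exports each volume under its name, so from a FreeBSD client something
like this should work (the /test export path is inferred from the
volume name, not copied from the report):

# mount_nfs -o nfsv3,tcp z-zt1:/test /root/test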


