[Bug 266879] Gluster mount not handled as expected

From: <bugzilla-noreply_at_freebsd.org>
Date: Fri, 07 Oct 2022 05:56:18 UTC
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=266879

            Bug ID: 266879
           Summary: Gluster mount not handled as expected
           Product: Ports & Packages
           Version: Latest
          Hardware: amd64
                OS: Any
            Status: New
          Severity: Affects Some People
          Priority: ---
         Component: Individual Port(s)
          Assignee: ports-bugs@FreeBSD.org
          Reporter: david@aitch2o.com
                CC: daniel@morante.net
             Flags: maintainer-feedback?(daniel@morante.net)

I have two identical app servers

root@au-syd01-qa-app01:~ # uname -a
FreeBSD au-syd01-qa-app01.net.local 13.1-RELEASE-p2 FreeBSD 13.1-RELEASE-p2
GENERIC amd64
root@au-syd01-qa-app01:~ # mount
/dev/ufs/rootfs on / (ufs, local, soft-updates, journaled soft-updates)
devfs on /dev (devfs)
/dev/fuse on /attachments (fusefs)
/dev/fuse on /codebase (fusefs)
root@au-syd01-qa-app01:~ # cat /etc/fstab
# Device        Mountpoint      FStype  Options Dump    Pass#
/dev/ufs/rootfs /               ufs     rw      1       1
au-syd01-qa-brick01.net.local:/attachments    /attachments    fusefs  rw,acl,transport=tcp,_netdev,backup-volfile-servers=au-syd01-qa-brick02.net.local:au-syd01-qa-brick03.net.local,mountprog=/usr/local/sbin/mount_glusterfs,late  0       0
au-syd01-qa-brick01.net.local:/codebase       /codebase       fusefs  rw,acl,transport=tcp,_netdev,backup-volfile-servers=au-syd01-qa-brick02.net.local:au-syd01-qa-brick03.net.local,mountprog=/usr/local/sbin/mount_glusterfs,late  0       0
root@au-syd01-qa-app01:~ # pkg info glusterfs
glusterfs-8.4_2
Name           : glusterfs
Version        : 8.4_2
Installed on   : Fri Sep 16 18:39:31 2022 AEST
Origin         : net/glusterfs
Architecture   : FreeBSD:13:amd64
Prefix         : /usr/local
Categories     : net
Licenses       : LGPL3+ or GPLv2
Maintainer     : daniel@morante.net
WWW            : https://www.gluster.org
Comment        : GlusterFS distributed file system
Options        :
        DOCS           : on
Shared Libs required:
        libxml2.so.2
        libuuid.so.1
        liburcu-common.so.8
        liburcu-cds.so.8
        liburcu-bp.so.8
        libreadline.so.8
        libintl.so.8
        libcurl.so.4
        libargp.so.0
Shared Libs provided:
        libglusterfs.so.0
        libglusterd.so.0
        libgfxdr.so.0
        libgfrpc.so.0
        libgfchangelog.so.0
        libgfapi.so.0
Annotations    :
        FreeBSD_version: 1301000
        cpe            : cpe:2.3:a:gluster:glusterfs:8.4:::::freebsd13:x64:2
        repo_type      : binary
        repository     : FreeBSD
Flat size      : 13.8MiB
Description    :
GlusterFS is an open source, distributed file system capable of
scaling to several petabytes and handling thousands of
clients. GlusterFS clusters together storage building blocks over
Infiniband RDMA or TCP/IP interconnect, aggregating disk and memory
resources and managing data in a single global namespace.  GlusterFS
is based on a stackable user space design and can deliver exceptional
performance for diverse workloads.

WWW: https://www.gluster.org
root@au-syd01-qa-app01:~ # 
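
For reference, the /codebase fstab entry above should expand to roughly the
following manual mount command (a sketch based only on the fstab options, with
the boot-ordering options left out; this is not output from the affected
system):

# manual equivalent of the /codebase fstab entry (sketch);
# mountprog= makes mount(8) call mount_glusterfs from net/glusterfs
mount -t fusefs \
    -o mountprog=/usr/local/sbin/mount_glusterfs \
    -o rw,acl,transport=tcp,backup-volfile-servers=au-syd01-qa-brick02.net.local:au-syd01-qa-brick03.net.local \
    au-syd01-qa-brick01.net.local:/codebase /codebase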


From app01:
if I touch /codebase/test, the test file is created on the gluster mount and
both app01 and app02 can see it
if I delete /codebase/test, the test file is deleted on the gluster mount and
neither app01 nor app02 can list it
if I echo app1 > /codebase/test, the file is created with the content "app1"
and both app01 and app02 see that content when I cat the file
if I echo app2 > /codebase/test on app02, then only app02 has the content
"app2"; app01 still shows "app1"

It seems file create, rename and remove work as expected, but when the contents
of a file change, the change is not replicated (the other client keeps seeing
the old contents).
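
A condensed reproduction of the above as plain shell commands (hostnames and
paths exactly as in this report; which server runs each step is noted in the
comments):

# on app01
touch /codebase/test          # file appears on both app01 and app02
echo app1 > /codebase/test    # both app01 and app02 cat "app1"

# on app02
echo app2 > /codebase/test    # app02 now cats "app2"

# back on app01
cat /codebase/test            # still prints "app1" instead of "app2"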

The bricks are on Rocky Linux. I initially thought this could be an issue with
Gluster itself, but if I mount /codebase on Linux clients the problem described
above cannot be reproduced, so it only shows up with the FreeBSD client.


Brick info 


[root@au-syd01-qa-brick01 ~]# gluster volume info codebase 

Volume Name: codebase
Type: Replicate
Volume ID: c9d939fe-a29b-4b04-987a-81658e7b68a2
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: au-syd01-qa-brick01.net.local:/brick2/brick
Brick2: au-syd01-qa-brick02.net.local:/brick2/brick
Brick3: au-syd01-qa-brick03.net.local:/brick2/brick
Options Reconfigured:
cluster.consistent-metadata: on
performance.readdir-ahead: off
performance.strict-o-direct: off
performance.quick-read: off
performance.open-behind: off
performance.write-behind: off
cluster.granular-entry-heal: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
storage.owner-gid: 1200
client.event-threads: 8
cluster.lookup-optimize: off
cluster.readdir-optimize: off
features.cache-invalidation: off
performance.io-thread-count: 16
performance.parallel-readdir: off
performance.stat-prefetch: off
server.event-threads: 8
performance.cache-size: 32MB
performance.cache-max-file-size: 2MB
performance.io-cache: off
performance.read-ahead: off
network.inode-lru-limit: 500000
performance.nl-cache-positive-entry: off
performance.cache-samba-metadata: off
performance.cache-invalidation: on
performance.qr-cache-timeout: 0
features.cache-invalidation-timeout: 0
storage.owner-uid: 80
performance.md-cache-timeout: 0



I have deleted the volume and tested with a vanilla brick config and I see the
same issue. Looking at https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=215519
this is an identical issue; it looks like that bug was re-introduced.
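
For completeness, the vanilla test volume was along these lines (a sketch
reconstructed from the volume info above, replica 3 with no tuning options set;
not a copy of the exact commands used):

# recreate the volume with default options only (brick paths as in
# 'gluster volume info codebase' above)
gluster volume create codebase replica 3 \
    au-syd01-qa-brick01.net.local:/brick2/brick \
    au-syd01-qa-brick02.net.local:/brick2/brick \
    au-syd01-qa-brick03.net.local:/brick2/brick
gluster volume start codebase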

-- 
You are receiving this mail because:
You are the assignee for the bug.