Bug 1361449 - Direct io to sharded files fails when on zfs backend
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: posix
Version: 3.8.1
Hardware: All
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: Krutika Dhananjay
QA Contact:
URL:
Whiteboard:
Depends On: 1360785 1361300
Blocks:
 
Reported: 2016-07-29 06:20 UTC by Krutika Dhananjay
Modified: 2016-08-12 09:48 UTC (History)
CC List: 5 users

Fixed In Version: glusterfs-3.8.2
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1361300
Environment:
Last Closed: 2016-08-12 09:48:11 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:



Description Krutika Dhananjay 2016-07-29 06:20:04 UTC
+++ This bug was initially created as a clone of Bug #1361300 +++

+++ This bug was initially created as a clone of Bug #1360785 +++

Beginning with 3.7.12 and 3.7.13, direct I/O to sharded files fails when bricks are backed by ZFS.

How reproducible: Always


Steps to Reproduce:
1. ZFS-backed bricks, default settings except xattr=sa
2. GlusterFS 3.7.12+ with sharding enabled
3. dd if=/dev/zero of=/rhev/data-center/mnt/glusterSD/192.168.71.11\:_glustershard/81e19cd3-ae45-449c-b716-ec3e4ad4c2f0/images/test oflag=direct count=100 bs=1M

Actual results: dd: error writing ‘/rhev/data-center/mnt/glusterSD/192.168.71.11:_glustershard/81e19cd3-ae45-449c-b716-ec3e4ad4c2f0/images/test’: Operation not permitted

The file 'test' is created with its size defined by the shard size; the shard files created under .shard are 0 bytes.


Expected results: 
100+0 records in
100+0 records out
104857600 bytes etc.....


Additional info:
Using Proxmox, users have been able to work around this by changing disk caching from none to writethrough/writeback. Not sure this would help with oVirt, as the Python script that checks storage with dd and oflag=direct also fails.

Attaching the client and brick logs from the test.

--- Additional comment from David on 2016-07-27 09:36:52 EDT ---

On the oVirt mailing list I was asked to test these settings:

i. Set network.remote-dio to off
        # gluster volume set <VOL> network.remote-dio off

ii. Set performance.strict-o-direct to on
        # gluster volume set <VOL> performance.strict-o-direct on

results:

dd if=/dev/zero of=/rhev/data-center/mnt/glusterSD/192.168.71.10\:_glustershard/5b8a4477-4d87-43a1-aa52-b664b1bd9e08/images/test oflag=direct count=100 bs=1M
dd: error writing ‘/rhev/data-center/mnt/glusterSD/192.168.71.10:_glustershard/5b8a4477-4d87-43a1-aa52-b664b1bd9e08/images/test’: Invalid argument
dd: closing output file ‘/rhev/data-center/mnt/glusterSD/192.168.71.10:_glustershard/5b8a4477-4d87-43a1-aa52-b664b1bd9e08/images/test’: Invalid argument


[2016-07-25 18:20:19.393121] E [MSGID: 113039] [posix.c:2939:posix_open] 0-glustershard-posix: open on /gluster2/brick1/1/.glusterfs/02/f4/02f4783b-2799-46d9-b787-53e4ccd9a052, flags: 16385 [Invalid argument]
[2016-07-25 18:20:19.393204] E [MSGID: 115070] [server-rpc-fops.c:1568:server_open_cbk] 0-glustershard-server: 120: OPEN /5b8a4477-4d87-43a1-aa52-b664b1bd9e08/images/test (02f4783b-2799-46d9-b787-53e4ccd9a052) ==> (Invalid argument) [Invalid argument]


and /var/log/glusterfs/rhev-data-center-mnt-glusterSD-192.168.71.10\:_glustershard.log
[2016-07-25 18:20:19.393275] E [MSGID: 114031] [client-rpc-fops.c:466:client3_3_open_cbk] 0-glustershard-client-0: remote operation failed. Path: /5b8a4477-4d87-43a1-aa52-b664b1bd9e08/images/test (02f4783b-2799-46d9-b787-53e4ccd9a052) [Invalid argument]
[2016-07-25 18:20:19.393270] E [MSGID: 114031] [client-rpc-fops.c:466:client3_3_open_cbk] 0-glustershard-client-1: remote operation failed. Path: /5b8a4477-4d87-43a1-aa52-b664b1bd9e08/images/test (02f4783b-2799-46d9-b787-53e4ccd9a052) [Invalid argument]
[2016-07-25 18:20:19.393317] E [MSGID: 114031] [client-rpc-fops.c:466:client3_3_open_cbk] 0-glustershard-client-2: remote operation failed. Path: /5b8a4477-4d87-43a1-aa52-b664b1bd9e08/images/test (02f4783b-2799-46d9-b787-53e4ccd9a052) [Invalid argument]
[2016-07-25 18:20:19.393357] W [fuse-bridge.c:2311:fuse_writev_cbk] 0-glusterfs-fuse: 117: WRITE => -1 gfid=02f4783b-2799-46d9-b787-53e4ccd9a052 fd=0x7f5fec0ba08c (Invalid argument)
[2016-07-25 18:20:19.393389] W [fuse-bridge.c:2311:fuse_writev_cbk] 0-glusterfs-fuse: 118: WRITE => -1 gfid=02f4783b-2799-46d9-b787-53e4ccd9a052 fd=0x7f5fec0ba08c (Invalid argument)
[2016-07-25 18:20:19.393611] W [fuse-bridge.c:2311:fuse_writev_cbk] 0-glusterfs-fuse: 119: WRITE => -1 gfid=02f4783b-2799-46d9-b787-53e4ccd9a052 fd=0x7f5fec0ba08c (Invalid argument)
[2016-07-25 18:20:19.393708] W [fuse-bridge.c:2311:fuse_writev_cbk] 0-glusterfs-fuse: 120: WRITE => -1 gfid=02f4783b-2799-46d9-b787-53e4ccd9a052 fd=0x7f5fec0ba08c (Invalid argument)
[2016-07-25 18:20:19.393771] W [fuse-bridge.c:2311:fuse_writev_cbk] 0-glusterfs-fuse: 121: WRITE => -1 gfid=02f4783b-2799-46d9-b787-53e4ccd9a052 fd=0x7f5fec0ba08c (Invalid argument)
[2016-07-25 18:20:19.393840] W [fuse-bridge.c:2311:fuse_writev_cbk] 0-glusterfs-fuse: 122: WRITE => -1 gfid=02f4783b-2799-46d9-b787-53e4ccd9a052 fd=0x7f5fec0ba08c (Invalid argument)
[2016-07-25 18:20:19.393914] W [fuse-bridge.c:2311:fuse_writev_cbk] 0-glusterfs-fuse: 123: WRITE => -1 gfid=02f4783b-2799-46d9-b787-53e4ccd9a052 fd=0x7f5fec0ba08c (Invalid argument)
[2016-07-25 18:20:19.393982] W [fuse-bridge.c:2311:fuse_writev_cbk] 0-glusterfs-fuse: 124: WRITE => -1 gfid=02f4783b-2799-46d9-b787-53e4ccd9a052 fd=0x7f5fec0ba08c (Invalid argument)
[2016-07-25 18:20:19.394045] W [fuse-bridge.c:709:fuse_truncate_cbk] 0-glusterfs-fuse: 125: FTRUNCATE() ERR => -1 (Invalid argument)
[2016-07-25 18:20:19.394338] W [fuse-bridge.c:1290:fuse_err_cbk] 0-glusterfs-fuse: 126: FLUSH() ERR => -1 (Invalid argument)

--- Additional comment from David on 2016-07-27 10:54:22 EDT ---

I have also heard from others with this issue that the problem exists in 3.8.x as well. I have not tested it myself, as my environment is still on 3.7.x.

--- Additional comment from David on 2016-07-27 11:44:09 EDT ---

These are the full settings I usually apply and run with:


features.shard-block-size: 64MB
features.shard: on
performance.readdir-ahead: on
storage.owner-uid: 36
storage.owner-gid: 36
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: on
cluster.eager-lock: enable
network.remote-dio: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
server.allow-insecure: on
cluster.self-heal-window-size: 1024
cluster.background-self-heal-count: 16
performance.strict-write-ordering: off
nfs.disable: on
nfs.addr-namelookup: off
nfs.enable-ino32: off

--- Additional comment from Krutika Dhananjay on 2016-07-28 13:33:25 EDT ---

Hi,

Open() on these affected files seems to be returning ENOENT. However, as per the find command output you gave on the ovirt-users ML, both the file and its gfid handle exist on the backend, so the failure could not have been due to ENOENT. I looked at the posix code again, and there is evidence to suggest that the actual error code (the real reason for open() failing) is getting masked by the subsequent open() under the .unlink directory:

30         if (fd->inode->ia_type == IA_IFREG) {
29                 _fd = open (real_path, fd->flags);
28                 if (_fd == -1) {
27                         POSIX_GET_FILE_UNLINK_PATH (priv->base_path,
26                                                     fd->inode->gfid,
25                                                     unlink_path);
24                         _fd = open (unlink_path, fd->flags);
23                 }
22                 if (_fd == -1) {
21                         op_errno = errno;
20                         gf_msg (this->name, GF_LOG_ERROR, op_errno,
19                                 P_MSG_READ_FAILED,
18                                 "Failed to get anonymous "
17                                 "real_path: %s _fd = %d", real_path, _fd);
16                         GF_FREE (pfd);
15                         pfd = NULL;
14                         goto out;
13                 }
12         }

In your case, on line 29, the open on .glusterfs/de/b6/deb61291-5176-4b81-8315-3f1cf8e3534d failed for a reason other than ENOENT (it can't be ENOENT because we already saw, via find, that the file exists). Then line 27 is executed. If the file exists in its real path, it must be absent from the .unlink directory (the gfid handle can't be present in both places). So it is the open() on line 24 that is failing with ENOENT, not the open() on line 29.

I'll be sending a patch to fix this problem.

Meanwhile, in order to understand why the open on line 29 failed, could you attach all of your bricks to strace, run the test again, wait for it to fail, and then attach both the strace output files and the resultant glusterfs client and brick logs here?

# strace -ff -p <pid-of-the-brick> -o <path-where-you-want-to-capture-the-output>

--- Additional comment from Vijay Bellur on 2016-07-28 13:43:34 EDT ---

REVIEW: http://review.gluster.org/15039 (storage/posix: Look for file in .unlink IFF open on real-path fails with ENOENT) posted (#1) for review on master by Krutika Dhananjay (kdhananj@redhat.com)

--- Additional comment from Vijay Bellur on 2016-07-28 23:16:48 EDT ---

REVIEW: http://review.gluster.org/15039 (storage/posix: Look for file in "unlink" dir IFF open on real-path fails with ENOENT) posted (#2) for review on master by Krutika Dhananjay (kdhananj@redhat.com)

Comment 1 Vijay Bellur 2016-07-29 06:27:39 UTC
REVIEW: http://review.gluster.org/15042 (storage/posix: Look for file in "unlink" dir IFF open on real-path fails with ENOENT) posted (#1) for review on release-3.8 by Krutika Dhananjay (kdhananj@redhat.com)

Comment 2 Vijay Bellur 2016-07-30 10:59:03 UTC
COMMIT: http://review.gluster.org/15042 committed in release-3.8 by Pranith Kumar Karampuri (pkarampu@redhat.com) 
------
commit b653bcbf652e05659189e2f9dbb9767dcd969d55
Author: Krutika Dhananjay <kdhananj@redhat.com>
Date:   Thu Jul 28 22:37:38 2016 +0530

    storage/posix: Look for file in "unlink" dir IFF open on real-path fails with ENOENT
    
            Backport of: http://review.gluster.org/#/c/15039/
    
    PROBLEM:
    In some of our users' setups, open() on the anon fd failed for
    a reason other than ENOENT. But this error code is getting masked
    by a subsequent open() under posix's hidden "unlink" directory, which
    will fail with ENOENT because the gfid handle still exists under .glusterfs.
    And the log message following the two open()s ends up logging ENOENT,
    causing much confusion.
    
    FIX:
    Look for the presence of the file under "unlink" ONLY if the open()
    on the real_path failed with ENOENT.
    
    Change-Id: Id83782fb3995d578881f7a586c83c3e0baea2ae8
    BUG: 1361449
    Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
    Reviewed-on: http://review.gluster.org/15042
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>

Comment 3 Niels de Vos 2016-08-12 09:48:11 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.2, please open a new bug report.

glusterfs-3.8.2 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://www.gluster.org/pipermail/announce/2016-August/000058.html
[2] https://www.gluster.org/pipermail/gluster-users/

