- Issued: 2018-07-19
- Updated: 2018-07-19
RHBA-2018:2222 - Bug Fix Advisory
Synopsis
glusterfs bug fix update
Type/Severity
Bug Fix Advisory
Topic
Updated glusterfs packages that fix several bugs are now available for Red
Hat Gluster Storage 3.3 Update 1 on Red Hat Enterprise Linux 7.
Description
Red Hat Gluster Storage is a software-only, scale-out storage solution that
provides flexible and affordable unstructured data storage. It unifies data
storage and infrastructure, increases performance, and improves
availability and manageability to meet enterprise-level storage challenges.
This advisory fixes the following bugs:
- Previously, glusterd did not correctly handle blank real paths when checking, during volume creation, whether a brick path was already part of another volume. As a result, volume create requests failed with the error 'Brick may be containing or be contained by an existing brick'. This update fixes the path comparison logic to handle blank paths correctly, so subsequent volume create requests no longer fail. (BZ#1599803)
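The corrected comparison can be illustrated with a minimal sketch (this is not glusterd's actual code; the function name and normalization details are illustrative assumptions):

```python
import os.path

def is_brick_path_conflict(new_brick, existing_brick):
    """Return True if one brick path contains, or is contained by, the other.

    Illustrative sketch only: a blank resolved path (e.g. from an
    unresolvable mount) must not be treated as a parent of every path,
    so blank paths are rejected up front.
    """
    new_real = os.path.realpath(new_brick) if new_brick else ""
    old_real = os.path.realpath(existing_brick) if existing_brick else ""
    if not new_real or not old_real:
        return False  # blank paths cannot conflict with anything
    # Normalize with a trailing separator so "/data/brick1" does not
    # appear to contain "/data/brick10".
    new_norm = new_real.rstrip("/") + "/"
    old_norm = old_real.rstrip("/") + "/"
    return new_norm.startswith(old_norm) or old_norm.startswith(new_norm)
```

Without the blank-path guard, an empty string would be a prefix of every normalized path, making every new brick appear to conflict, which matches the failure mode described above.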
- Previously, glusterd did not check whether the daemons it started were fully initialized before sending them requests. As a result, if glusterd forwarded an index heal request from the CLI to the self-heal daemon before the daemon had fully initialized its graph, the self-heal daemon crashed. With this update, the self-heal daemon ignores requests received from glusterd before its graph is initialized; if index heal is launched via the gluster CLI during that window, the CLI reports the command as failed instead of crashing the daemon. (BZ#1595752)
- Previously, eager-lock was disabled for block hosting volumes because conflicting writes were handled incorrectly when eager-lock was enabled. As a result, the performance of Gluster-backed block devices was poor. This update fixes eager-lock handling for conflicting writes, so enabling eager-lock now improves the performance of Gluster-backed block devices. To observe this improvement, the Gluster administrator needs to enable eager-lock on existing block hosting volumes; the option is enabled by default for all new volumes. (BZ#1583733)
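On an existing block hosting volume, the option can be enabled manually along the following lines (a sketch; "blockvol" is a placeholder volume name, and this assumes a running trusted storage pool):

```shell
# Enable eager-lock on an existing block hosting volume.
gluster volume set blockvol cluster.eager-lock on

# Confirm that the option is now in effect.
gluster volume get blockvol cluster.eager-lock
```

New volumes created after applying this update do not need this step, since the option is enabled by default.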
- Previously, due to a bug in glusterd, executing the `gluster volume set <volname> client-io-threads on` command on a replicate volume returned success without actually enabling the option in the client graph. This update fixes that bug: the command now returns success and ensures that the translator is actually loaded in the client graph. (BZ#1598416)
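After applying the update, the setting can be applied and checked from the CLI, for example (a sketch; "repvol" is a placeholder volume name):

```shell
# Enable client-side I/O threads on a replicate volume.
gluster volume set repvol client-io-threads on

# Verify that the option is reported as enabled.
gluster volume get repvol client-io-threads
```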
- Previously, when an application issued fsync on a sharded file, the shards associated with the file were not synced to disk, potentially causing data loss on a plain distribute sharded volume. This update fixes the shard translator so that all modified shards are synced to disk whenever the application issues an fsync. (BZ#1583462)
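From the application's point of view, the guarantee is the standard fsync contract: after fsync returns, all of the file's data is durable, regardless of how the volume shards it internally. A minimal illustration of the application-side call (the function name and path are illustrative):

```python
import os

def write_durably(path, data):
    """Write data and fsync it, so the underlying filesystem (including a
    sharded Gluster volume after this fix) flushes all backing storage."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        os.write(fd, data)
        os.fsync(fd)  # with the fix, this also syncs every modified shard
    finally:
        os.close(fd)

write_durably("/tmp/demo.bin", b"payload")
```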
All users of Red Hat Gluster Storage are advised to upgrade to these updated packages, which resolve these issues.
Solution
Before applying this update, make sure all previously released errata
relevant to your system have been applied.
For details on how to apply this update, refer to:
Affected Products
- Red Hat Enterprise Linux Server 7 x86_64
- Red Hat Virtualization 4 for RHEL 7 x86_64
- Red Hat Gluster Storage Server for On-premise 3 for RHEL 7 x86_64
Fixes
- BZ - 1583462 - Sharding sends all application sent fsyncs to the main shard file
- BZ - 1583464 - Running sysbench on vm disk from plain distribute gluster volume causes disk corruption
- BZ - 1583733 - Poor write performance on gluster-block
- BZ - 1585046 - [RHHI]Fuse mount crashed with only one VM running with its image on that volume
- BZ - 1590774 - [GSS] gsyncd worker crashed in syncdutils with "OSError: [Errno 22] Invalid argument"
- BZ - 1594656 - Block PVC fails to mount on Jenkins pod
- BZ - 1594682 - eager-lock in 3.3.1 always does blocking lock even when non-blocking locks are successful
- BZ - 1595752 - [GSS] Core dump getting created inside gluster pods
- BZ - 1596076 - Introduce database group profile (to be only applied for CNS)
- BZ - 1597509 - introduce cluster.daemon-log-level option
- BZ - 1597648 - "gluster vol heal <volname> info" is locked forever
- BZ - 1598353 - Make port_registered flag of brickinfo to true in the brick attach callback.
- BZ - 1598416 - client-io-threads option not working for replicated volumes
- BZ - 1599803 - [GSS] Error while creating new volume in CNS "Brick may be containing or be contained by an existing brick"
CVEs
(none)
References
(none)
Red Hat Enterprise Linux Server 7
SRPM | SHA-256
---|---
glusterfs-3.8.4-54.15.el7.src.rpm | SHA-256: eb7059a9ef7c73334d0b1b5f891c7b7444e301d20d0569408ae65123e8bdd48e |
x86_64 | |
glusterfs-3.8.4-54.15.el7.x86_64.rpm | SHA-256: 99aed87ef93c8ef3199e56c36e395d44c0e4af2de60bcebe2948ffaa7b2faf32 |
glusterfs-api-3.8.4-54.15.el7.x86_64.rpm | SHA-256: 33217ec7d8ae0f126476f54bca8703b5ad23e238734f5ee1956aa43ac50e46b4 |
glusterfs-api-devel-3.8.4-54.15.el7.x86_64.rpm | SHA-256: b51eeb67e4e9e5581ff81b2f9c228f61cef9392b06fe1e6edb3bc18637f15074 |
glusterfs-cli-3.8.4-54.15.el7.x86_64.rpm | SHA-256: 2f20b97fee1826bbc916b805e9b3e8d51c3f33e3eb62ae34ea8b08960496827a |
glusterfs-client-xlators-3.8.4-54.15.el7.x86_64.rpm | SHA-256: 90bd94aa8bb3966638caac3ee22d34cb2fb2887f626a97e99ba47427500504e6 |
glusterfs-debuginfo-3.8.4-54.15.el7.x86_64.rpm | SHA-256: 1d81027b15d014303034f2884244c9d0a12971ef650ef8567c2e633b8871ec01 |
glusterfs-devel-3.8.4-54.15.el7.x86_64.rpm | SHA-256: 314f17d4084bbf4351b294848f04b6416474508efeda8777862bae668aa19605 |
glusterfs-fuse-3.8.4-54.15.el7.x86_64.rpm | SHA-256: 0312d420704f5400586cc6580a78ceea46d600df6f37e73b17f15185ef3e4e35 |
glusterfs-libs-3.8.4-54.15.el7.x86_64.rpm | SHA-256: ec64d8676bcabfbefedb3163579edd08cb0ad16b022bed33578df18efb034a3b |
glusterfs-rdma-3.8.4-54.15.el7.x86_64.rpm | SHA-256: 65df5185879b5371243a9ab8b42b2b79ce057511aed2140bf8a11a7510fe2bf9 |
python-gluster-3.8.4-54.15.el7.noarch.rpm | SHA-256: 4a41e7b84e83463e6fee0c018fdd125bb8f284e075ddc0aaaed92dac10bb85ad |
Red Hat Virtualization 4 for RHEL 7
SRPM | SHA-256
---|---
glusterfs-3.8.4-54.15.el7.src.rpm | SHA-256: eb7059a9ef7c73334d0b1b5f891c7b7444e301d20d0569408ae65123e8bdd48e |
x86_64 | |
glusterfs-3.8.4-54.15.el7.x86_64.rpm | SHA-256: 99aed87ef93c8ef3199e56c36e395d44c0e4af2de60bcebe2948ffaa7b2faf32 |
glusterfs-api-3.8.4-54.15.el7.x86_64.rpm | SHA-256: 33217ec7d8ae0f126476f54bca8703b5ad23e238734f5ee1956aa43ac50e46b4 |
glusterfs-api-devel-3.8.4-54.15.el7.x86_64.rpm | SHA-256: b51eeb67e4e9e5581ff81b2f9c228f61cef9392b06fe1e6edb3bc18637f15074 |
glusterfs-cli-3.8.4-54.15.el7.x86_64.rpm | SHA-256: 2f20b97fee1826bbc916b805e9b3e8d51c3f33e3eb62ae34ea8b08960496827a |
glusterfs-client-xlators-3.8.4-54.15.el7.x86_64.rpm | SHA-256: 90bd94aa8bb3966638caac3ee22d34cb2fb2887f626a97e99ba47427500504e6 |
glusterfs-debuginfo-3.8.4-54.15.el7.x86_64.rpm | SHA-256: 1d81027b15d014303034f2884244c9d0a12971ef650ef8567c2e633b8871ec01 |
glusterfs-devel-3.8.4-54.15.el7.x86_64.rpm | SHA-256: 314f17d4084bbf4351b294848f04b6416474508efeda8777862bae668aa19605 |
glusterfs-fuse-3.8.4-54.15.el7.x86_64.rpm | SHA-256: 0312d420704f5400586cc6580a78ceea46d600df6f37e73b17f15185ef3e4e35 |
glusterfs-libs-3.8.4-54.15.el7.x86_64.rpm | SHA-256: ec64d8676bcabfbefedb3163579edd08cb0ad16b022bed33578df18efb034a3b |
glusterfs-rdma-3.8.4-54.15.el7.x86_64.rpm | SHA-256: 65df5185879b5371243a9ab8b42b2b79ce057511aed2140bf8a11a7510fe2bf9 |
python-gluster-3.8.4-54.15.el7.noarch.rpm | SHA-256: 4a41e7b84e83463e6fee0c018fdd125bb8f284e075ddc0aaaed92dac10bb85ad |
Red Hat Gluster Storage Server for On-premise 3 for RHEL 7
SRPM | SHA-256
---|---
glusterfs-3.8.4-54.15.el7rhgs.src.rpm | SHA-256: f1f4ba0b2e0f6ee7f435dd290b1f70ef7d53a5bc18b19ceb0748760c4a38599d |
x86_64 | |
glusterfs-3.8.4-54.15.el7rhgs.x86_64.rpm | SHA-256: be22ad8411b5dc65f31b508ac4832a08e9e07a642a3c7a6b69d923a85e1edd16 |
glusterfs-api-3.8.4-54.15.el7rhgs.x86_64.rpm | SHA-256: 9b097f08a9d189737ccf53354756b4f9d64d0ea58d83584d72e4322b54742cba |
glusterfs-api-devel-3.8.4-54.15.el7rhgs.x86_64.rpm | SHA-256: eb46bc3a3fb5bbf1c9adb290d8ea60b17871f86161d85bbebfb72a0ac04c8d7e |
glusterfs-cli-3.8.4-54.15.el7rhgs.x86_64.rpm | SHA-256: 70ed92aa8a0d49a967bbbeccf1610076f4ef2b9c70db9aac1f86b63019d161cc |
glusterfs-client-xlators-3.8.4-54.15.el7rhgs.x86_64.rpm | SHA-256: e3937c8587d5efeeac26794962e40205262fb79e6bba08384b27d78398d6a600 |
glusterfs-debuginfo-3.8.4-54.15.el7rhgs.x86_64.rpm | SHA-256: a832dec9c3939dd2313ccae4875eb66f11f4662484bd2cb3be814cf5cf0be3fb |
glusterfs-devel-3.8.4-54.15.el7rhgs.x86_64.rpm | SHA-256: df821a08c69f5ab378cfd155a845dfef87df4ce5918f5a687bc2571909b69d02 |
glusterfs-events-3.8.4-54.15.el7rhgs.x86_64.rpm | SHA-256: 1cf0dfbf6da1513abff696d63d5f91fb2fc6688201a122dc1538c1db58fa5633 |
glusterfs-fuse-3.8.4-54.15.el7rhgs.x86_64.rpm | SHA-256: 2e59246b1c29bc38c367e6fdc89147360d974684ac104a405c3ebc2741c1fa50 |
glusterfs-ganesha-3.8.4-54.15.el7rhgs.x86_64.rpm | SHA-256: 0a7872089be8c08bffb54f71d18fdbffb182d47870b49cc0c3e22ae1884b5153 |
glusterfs-geo-replication-3.8.4-54.15.el7rhgs.x86_64.rpm | SHA-256: 7b5f282a93084e9dba4f710ea587597bc778254d2497c2131693f049c96d3f40 |
glusterfs-libs-3.8.4-54.15.el7rhgs.x86_64.rpm | SHA-256: 5a560dc32691df4a01ec83cd3585bc37b9471f2ea67fa2632539048e32293050 |
glusterfs-rdma-3.8.4-54.15.el7rhgs.x86_64.rpm | SHA-256: 02524e602dc99ecba845572c133ba22fb04a1918190e4ee88779a0556b17cbe5 |
glusterfs-resource-agents-3.8.4-54.15.el7rhgs.noarch.rpm | SHA-256: 4013fbe5e9d1f9524280b86428f060a5046dc3cd464b8ff64ce4f14ee2f5c01f |
glusterfs-server-3.8.4-54.15.el7rhgs.x86_64.rpm | SHA-256: ec51b5142e99ffd0c9d70f7ab79e3ba7db122368a9a60c86c9f35621b3382ad0 |
python-gluster-3.8.4-54.15.el7rhgs.noarch.rpm | SHA-256: fd5d691acf54496770228290d2b62d179a7499b428674ac326326cbec58070ba |
Red Hat Virtualization Host 4 for RHEL 7
x86_64 | SHA-256
---|---
glusterfs-debuginfo-3.8.4-54.15.el7.x86_64.rpm | SHA-256: 1d81027b15d014303034f2884244c9d0a12971ef650ef8567c2e633b8871ec01 |
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/.