Issued: 2016-03-01
Updated: 2016-03-01
RHBA-2016:0193 - Bug Fix Advisory
Synopsis
Red Hat Gluster Storage 3.1 update 2
Type/Severity
Bug Fix Advisory
Topic
Red Hat Gluster Storage 3.1 Update 2, which fixes several bugs and adds
various enhancements, is now available for Red Hat Enterprise Linux 6.
Description
Red Hat Gluster Storage is a software-only, scale-out storage solution that
provides flexible and affordable unstructured data storage. It unifies data
storage and infrastructure, increases performance, and improves
availability and manageability to meet enterprise-level storage challenges.
Red Hat Gluster Storage's Unified File and Object Storage is built on
OpenStack's Object Storage (swift).
This update also fixes numerous bugs and adds various enhancements. Space
precludes documenting all of these changes in this advisory. Users are
directed to the Red Hat Gluster Storage 3.1 Technical Notes, linked to in
the References section, for information on the most significant of these
changes.
This advisory introduces the following new features:
- Writable Snapshots
Red Hat Gluster Storage snapshots can now be cloned and made writable by creating a new volume based on an existing snapshot. Clones are space efficient, as the cloned volume and original snapshot share the same logical volume back end, only consuming additional space as the clone diverges from the snapshot. For more information, see the Red Hat Gluster Storage 3.1 Administration Guide: https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Administration_Guide/.
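As an illustrative sequence (volume and snapshot names are placeholders), a snapshot can be cloned into a writable volume from the gluster CLI:

```
# Create and activate a snapshot of an existing volume
gluster snapshot create snap1 myvol
gluster snapshot activate snap1

# Clone the snapshot into a new, independently writable volume
gluster snapshot clone clone-vol snap1

# The clone is a regular volume and must be started before use
gluster volume start clone-vol
```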
- RESTful Volume Management with Heketi [Technology Preview]
Heketi provides a RESTful management interface for managing Red Hat Gluster Storage volume lifecycles. This interface allows cloud services like OpenStack Manila, Kubernetes, and OpenShift to dynamically provision Red Hat Gluster Storage volumes. For details about this technology preview, see the Red Hat Gluster Storage 3.1 Administration Guide: https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Administration_Guide/
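A minimal provisioning sketch with heketi-cli follows; the server URL, host names, identifiers, and device name are placeholders, and exact flags may vary between Heketi releases:

```
# Point heketi-cli at the Heketi REST endpoint (placeholder URL)
export HEKETI_CLI_SERVER=http://heketi.example.com:8080

# Register a cluster, a node, and a device to provision from
heketi-cli cluster create
heketi-cli node add --zone=1 --cluster=<cluster-id> \
    --management-host-name=node1.example.com \
    --storage-host-name=192.0.2.10
heketi-cli device add --name=/dev/sdb --node=<node-id>

# Dynamically provision a 100 GB Red Hat Gluster Storage volume
heketi-cli volume create --size=100
```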
- Red Hat Gluster Storage for Containers
With the Red Hat Gluster Storage 3.1 update 2 release, a Red Hat Gluster Storage environment can be set up in a container. Containers use a shared operating system and are much more efficient than hypervisors in terms of system resources. Containers rest on top of a single Linux instance and allow applications to use the same Linux kernel as the host they run on, which improves overall efficiency and considerably reduces space consumption. For more information, see the Red Hat Gluster Storage 3.1 Administration Guide: https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Administration_Guide/.
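As a rough sketch only (the image name and brick path are placeholders, not the documented procedure), such a container might be started with host networking and access to local storage:

```
# Placeholder image name; see the Administration Guide for the
# supported container setup procedure
docker run -d --privileged --net=host \
    -v /bricks:/bricks \
    <rhgs-server-image>
```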
- BitRot scrubber status
The BitRot scrubber command (gluster volume bitrot VOLNAME scrub status)
can now display scrub progress and list identified corrupted files,
allowing administrators to locate and repair corrupted files more easily.
See the Red Hat Gluster Storage 3.1 Administration Guide for details:
https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Administration_Guide/.
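For example, with a placeholder volume name:

```
# Enable BitRot detection on a volume, then query scrub progress
# and the list of identified corrupted files
gluster volume bitrot VOLNAME enable
gluster volume bitrot VOLNAME scrub status
```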
- Samba Asynchronous I/O
With this release, asynchronous I/O from Samba to Red Hat Gluster Storage is supported. The aio read size option is now enabled and set to 4096 by default. This increases the throughput when the client is multithreaded or there are multiple programs accessing the same share. If you have Linux clients using SMB 2.0 or higher, Red Hat recommends disabling asynchronous I/O (setting aio read size to 0).
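As a sketch, this maps to the aio read size option in the share's smb.conf section (the share name is a placeholder):

```
[gluster-share]
    # Default in this release: read asynchronously when the request
    # is larger than 4096 bytes
    aio read size = 4096

    # Recommended for Linux clients using SMB 2.0 or higher:
    # disable asynchronous I/O
    # aio read size = 0
```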
All users of Red Hat Gluster Storage are advised to apply this update.
Solution
Before applying this update, make sure all previously released errata
relevant to your system have been applied.
For details on how to apply this update, refer to:
https://access.redhat.com/articles/11258
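On a registered system, the update can typically be applied with yum, for example:

```
# Apply all available updates, including the packages in this advisory
yum update
```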
Affected Products
- Red Hat Enterprise Linux Server 6 x86_64
- Red Hat Gluster Storage Server for On-premise 3 for RHEL 6 x86_64
- Red Hat Gluster Storage Nagios Server 3 for RHEL 6 x86_64
Fixes
- BZ - 1018170 - quota: numbers of warning messages in nfs.log for a single file itself
- BZ - 1060676 - [add-brick]: I/O on NFS fails when bricks are added to a distribute-replicate volume
- BZ - 1139193 - git operations fail when add-brick operation is done
- BZ - 1177592 - quota: rename of "dir" fails in case of quota space availability is around 1GB
- BZ - 1199033 - [epoll] Typo in the gluster volume set help message for server.event-threads and client.event-threads
- BZ - 1224064 - [Backup]: Glusterfind session entry persists even after volume is deleted
- BZ - 1224880 - [Backup]: Unable to delete session entry from glusterfind list
- BZ - 1228079 - [Backup]: Crash observed when keyboard interrupt is encountered in the middle of any glusterfind command
- BZ - 1228643 - I/O failure on attaching tier
- BZ - 1230540 - Quota list is not working on tiered volume.
- BZ - 1231144 - Data Tiering: Self heal daemon stops showing up in "vol status" once attach tier is done
- BZ - 1236020 - Data Tiering: Change the error message when a detach-tier status is issued on a non-tier volume
- BZ - 1236052 - Data Tiering:Throw a warning when user issues a detach-tier commit command
- BZ - 1236153 - setting enable-shared-storage without mentioning the domain doesn't enable shared storage
- BZ - 1236503 - Disabling enable-shared-storage deletes the volume with the name - "gluster_shared_storage"
- BZ - 1237022 - Probing a new RHGS node, which is part of another cluster, should throw proper error message in logs and CLI
- BZ - 1237059 - DHT-rebalance: rebalance status shows failed when replica pair bricks are brought down in distrep volume while re-name of files going on
- BZ - 1238561 - FSAL_GLUSTER : nfs4_getfacl do not display DENY entries
- BZ - 1238634 - Though scrubber settings changed on one volume log shows all volumes scrubber information
- BZ - 1240502 - nfs-ganesha: remove the entry of the deleted node
- BZ - 1240918 - Quota: After rename operation , gluster v quota <volname> list-objects command give incorrect no. of files in output
- BZ - 1241436 - nfs-ganesha: refresh-config stdout output includes dbus messages "method return sender=:1.61 -> dest=:1.65 reply_serial=2"
- BZ - 1242022 - rdma : pending - porting log messages to a new framework
- BZ - 1242148 - With NFSv4 ACLs enabled, rename of a file/dir to an existing file/dir fails
- BZ - 1243534 - Error messages observed in cli.log
- BZ - 1243797 - quota/marker: dir count in inode quota is not atomic
- BZ - 1244792 - nfs-ganesha: nfs-ganesha debuginfo package has missing debug symbols
- BZ - 1247515 - [upgrade] Error messages seen in glusterd logs, while upgrading from RHGS 2.1.6 to RHGS 3.1
- BZ - 1247947 - [upgrade] After in-service software upgrade from RHGS 2.1 to RHGS 3.1, bumping up op-version failed
- BZ - 1248895 - [upgrade] After in-service software upgrade from RHGS 2.1.6 to RHGS 3.1, probing a new RHGS 3.1 node is moving the peer to rejected state
- BZ - 1251471 - FSAL_GLUSTER: Code clean up in acl implementation
- BZ - 1251477 - FSAL_GLUSTER : Removal of previous acl implementation
- BZ - 1257209 - With quota enabled, when files are created and deleted from mountpoint, error messages are seen in brick logs
- BZ - 1257343 - vol heal info fails when transport.socket.bind-address is set in glusterd
- BZ - 1257957 - nfs-ganesha: nfs-ganesha process gets killed while executing UNLOCK with a cthon test on vers=3
- BZ - 1258341 - Disperse volume: single file creation is generating many log messages
- BZ - 1260530 - Provide more meaningful errors on peer probe and peer detach
- BZ - 1261765 - NFS Ganesha export lost during IO on EC volume
- BZ - 1262191 - nfs-ganesha: having acls and quota enabled for volume and nfs-ganesha coredump while creating data
- BZ - 1262680 - IO hung on v4 ganesha mount
- BZ - 1264804 - ECVOL: glustershd log grows quickly and fills up the root volume
- BZ - 1265200 - quota: set quota version for files/directories
- BZ - 1267488 - [upgrade] Volume status doesn't show proper information when nodes are upgraded from 2.1.6 to 3.1.1
- BZ - 1269203 - regression : RHGS 3.0 introduced a maximum value length in the info files
- BZ - 1269557 - FUSE clients in a container environment hang and do not recover post losing connections to all bricks
- BZ - 1270321 - Add heketi package to product RH Gluster Storage 3
- BZ - 1271178 - rm -rf on /run/gluster/vol/<directory name>/ is not showing quota output header for other quota limit applied directories
- BZ - 1271184 - quota : display the size equivalent to the soft limit percentage in gluster v quota <volname> list* command
- BZ - 1271648 - tier/cli: number of bricks remains the same in v info --xml
- BZ - 1271659 - gluster v status --xml for a replicated hot tier volume
- BZ - 1271725 - Data Tiering: Disallow attach tier on a volume where any rebalance process is in progress to avoid deadlock(like remove brick commit pending etc)
- BZ - 1271727 - Tiering/glusterd: volume status failed after detach tier start
- BZ - 1271733 - Tier/shd: Tracker bug for tier and shd compatibility
- BZ - 1271750 - glusterd: disable ping timer b/w glusterd and make epoll thread count default 1
- BZ - 1271999 - After upgrading to RHGS 3.1.2 build, the other peer was shown as disconnected
- BZ - 1272335 - [Heketi] Not all /etc/fstab entries are cleaned up after volume delete
- BZ - 1272341 - Data Tiering:Promotions fail when brick of EC (disperse) cold layer are down
- BZ - 1272403 - [Tier]: man page of gluster should be updated to list tier commands
- BZ - 1272407 - Data Tiering:error "[2015-10-14 18:15:09.270483] E [MSGID: 122037] [ec-common.c:1502:ec_update_size_version_done] 0-tiervolume-disperse-1: Failed to update version and size [Input/output error]"
- BZ - 1272408 - Data Tiering:[2015-10-15 02:54:52.259879] E [MSGID: 109039] [dht-common.c:2833:dht_vgetxattr_cbk] 0-tiervolume-cold-dht: vgetxattr: Subvolume tiervolume-disperse-1 returned -1 [No such file or directory]
- BZ - 1272452 - Data Tiering:heat counters not getting reset and also internal ops seem to be heating the files
- BZ - 1273347 - [Tier]: glusterfs crashed --volfile-id rebalance/tiervolume
- BZ - 1273348 - [Tier]: lookup from client takes too long {~7m for 18k files}
- BZ - 1273385 - tiering + nfs-ganesha: tiering has a segfault
- BZ - 1273706 - build: package release in NVR should only be integral
- BZ - 1273711 - Disperse volume: df -h on a nfs mount throws Invalid argument error
- BZ - 1273728 - Crash while bringing down the bricks and self heal
- BZ - 1273850 - Replica pairs in a volume shouldn't be from the same node
- BZ - 1273868 - Heketi doesn't allow deleting nodes with drives missing/inaccessible
- BZ - 1275155 - [Tier]: Typo in the output while setting the wrong value of low/hi watermark
- BZ - 1275158 - Data Tiering:Getting lookup failed on files in hot tier, when volume is restarted
- BZ - 1275515 - Reduce 'CTR disabled' brick log message from ERROR to INFO/DEBUG
- BZ - 1275521 - Wrong value of snap-max-hard-limit observed in 'gluster volume info'.
- BZ - 1275525 - snap-max-hard-limit for snapshots always shows as 256 in info file.
- BZ - 1275633 - Clone creation should not be successful when the node participating in volume goes down.
- BZ - 1275751 - Data Tiering:File create terminates with "Input/output error" as split brain is observed
- BZ - 1275912 - AFR self-heal-daemon option is still set on volume though tier is detached
- BZ - 1275925 - [New] - Message displayed after attach tier is misleading
- BZ - 1275971 - [RFE] Geo-replication support for Volumes running in docker containers
- BZ - 1275998 - Data Tiering: "ls" count taking link files and promote/demote files into consideration both on fuse and nfs mount
- BZ - 1276051 - Data Tiering:inconsistent linkfile creation when lookups issued on cold tier files
- BZ - 1276227 - Data Tiering: delete command rm -rf not deleting the (hashed) linkto files which are under migration; possible split-brain observed and possible disk wastage
- BZ - 1276245 - [Tier]: Stopping and Starting tier volume triggers fixing layout which fails on local host
- BZ - 1276248 - [Tier]: restarting volume reports "insert/update failure" in cold brick logs
- BZ - 1276273 - [Tier]: start tier daemon using rebal tier start doesn't start tierd if it has failed on any single node
- BZ - 1276330 - [heketi-cli] Incorrect error message when storage-host-name is missing while adding a node
- BZ - 1276334 - Data Tiering: tiering daemon crashes when trying to heat the file
- BZ - 1276340 - [heketi-cli] Inconsistency with the requirement for zone value
- BZ - 1276348 - nfs-ganesha: ACL issue after adding an ace for a user the file permissions gets modified
- BZ - 1276542 - RHGS-3.1.2 op-version need to be corrected
- BZ - 1276587 - [GlusterD]: After updating one of rhgs 2.1.6 node to 3.1.2 in two node cluster, volume status is failing
- BZ - 1276678 - CTR should be enabled on attach tier, disabled otherwise.
- BZ - 1277028 - SMB: share entry from smb.conf is not removed after setting user.cifs and user.smb to disable.
- BZ - 1277043 - Upgrading to 3.7.5-5 has changed volume to distributed disperse
- BZ - 1277088 - Data Tiering:Rename of cold file to a hot file causing split brain and showing two copies of files in mount point
- BZ - 1277126 - [New] - Files in a tiered volume gets promoted when bitd signs them
- BZ - 1277316 - Data Tiering: fix lookup-unhashed for tiered volumes.
- BZ - 1277359 - Data Tiering:Filenames with spaces are not getting migrated at all
- BZ - 1277368 - Bit rot version and signature for the files on a tiered volume are missing after few promotions and demotions of the files.
- BZ - 1277659 - lookup and set xattr fails when bit rot is enabled on a tiered volume.
- BZ - 1277886 - FSAL_GLUSTER : if only DENY entry is set for a user/group, then it lost all its default permission
- BZ - 1277944 - "Transport endpoint not connected" in heal info though hot tier bricks are up
- BZ - 1278254 - [Snapshot]: Clone creation fails on tiered volume with pre-validation failed message
- BZ - 1278270 - [Tier]: "failed to reset target size back to 0" errors in tier logs while performing rename ops
- BZ - 1278279 - EC: File healing promotes it to hot tier
- BZ - 1278346 - Data Tiering:Regression:NFS crashed due to dht readdirp after attach tier
- BZ - 1278384 - 'ls' on client mount lists varying number of files while promotion/demotion
- BZ - 1278389 - Data Tiering: Tiering deamon is seeing each part of a file in a Disperse cold volume as a different file
- BZ - 1278390 - Data Tiering:Regression:Detach tier commit is passing when detach tier is in progress
- BZ - 1278399 - I/O failure on attaching tier on nfs client
- BZ - 1278408 - [Tier]: Volume start failed after tier attach to newly created stopped volume.
- BZ - 1278419 - Data Tiering:Data Loss:File migrations(flushing of data) to cold tier fails on detach tier with quota limits reached
- BZ - 1278723 - Tier : Move common functions into tier.rc
- BZ - 1278754 - Data Tiering:Metadata changes to a file should not heat/promote the file
- BZ - 1278798 - Few snapshot creation fails with pre-validation failed message on tiered volume.
- BZ - 1279314 - [Tier]: After volume restart, unable to stop the started detach tier
- BZ - 1279350 - [Tier]: Space is missed b/w the words in the detach tier stop error message
- BZ - 1279830 - File creation in nested folders fails when add-brick operation is done on a volume with exclusive file lock.
- BZ - 1280410 - ec-readdir.t is failing consistently
- BZ - 1281304 - sometimes files are not getting demoted from hot tier to cold tier
- BZ - 1281946 - Large system file distribution is broken
- BZ - 1282701 - build: compile error on RHEL5
- BZ - 1282729 - Creation of files on hot tier volume taking very long time
- BZ - 1283035 - [GlusterD]: Incorrect peer status showing if volume restart done before entire cluster update.
- BZ - 1283050 - self-heal won't work in disperse volumes when they are attached as tiers
- BZ - 1283057 - nfs-ganesha+tiering: fs-sanity is taking more than 24 hours to complete on nfs vers=3
- BZ - 1283410 - cache mode must be the default mode for tiered volumes
- BZ - 1283505 - when scrubber is scheduled scrubd moves the files to hot tier
- BZ - 1283563 - libgfapi to support set_volfile-server-transport type "unix"
- BZ - 1283566 - quota/marker: backward compatibility with quota xattr versioning
- BZ - 1283608 - nfs-ganesha: Upcall sent on null gfid
- BZ - 1283940 - Data Tiering: new set of gluster v tier commands not working as expected
- BZ - 1283961 - Data Tiering:Change the default tiering values to optimize tiering settings
- BZ - 1284387 - Without detach tier commit, status changes back to tier migration
- BZ - 1284834 - tiering: Seeing error messages E "/usr/lib64/glusterfs/3.7.5/xlator/features/changetimerecorder.so(ctr_lookup+0x54f) [0x7f6c435c116f] ) 0-ctr: invalid argument: loc->name [Invalid argument] after attach tier
- BZ - 1285166 - Snapshot creation after attach-tier causes glusterd crash
- BZ - 1285226 - Masking the wrong values in Bitrot status command
- BZ - 1285238 - Corrupted objects list does not get cleared even after all the files in the volume are deleted and count increases as old + new count
- BZ - 1285281 - CTDB: yum update fails on RHEL6 for ctdb package with dependency on procps-ng and systemd-units
- BZ - 1285295 - [geo-rep]: Recommended Shared volume use on geo-replication is broken in latest build
- BZ - 1285306 - Unresolved dependencies on ctdb-4.2.4-7.el6rhs.x86_64
- BZ - 1285651 - [Tier]: Error: attempt to set internal xattr: trusted.ec.* [Operation not permitted]
- BZ - 1285678 - RHGS312 RHEL7.2 based ISO is not working; throws error like ImportError: no library named udev
- BZ - 1285783 - fops-during-migration.t fails if hot and cold tiers are dist-rep
- BZ - 1285797 - tiering: T files getting created , even after disk quota exceeds
- BZ - 1285958 - [GlusterD]: NFS service not running after layered installation of RHGS on RHEL7.x
- BZ - 1285998 - Possible memory leak in the tiered daemon
- BZ - 1286058 - Brick crashes because of race in bit-rot init
- BZ - 1286218 - Data Tiering:Watermark:File continuously trying to demote itself but failing " [dht-rebalance.c:608:__dht_rebalance_create_dst_file] 0-wmrk-tier-dht: chown failed for //AP.BH.avi on wmrk-cold-dht (No such file or directory)"
- BZ - 1286346 - Data Tiering:Don't allow or reset the frequency threshold values to zero when record counter features.record-counter is turned off
- BZ - 1286604 - glusterfsd to support volfile-server-transport type "unix"
- BZ - 1286605 - vol quota enable fails when transport.socket.bind-address is set in glusterd
- BZ - 1286637 - [geo-rep+tiering]: symlinks are not getting synced to slave on tiered master setup
- BZ - 1286654 - Data Tiering:Read heat not getting calculated and read operations not heating the file with counter enabled
- BZ - 1286927 - Tier: ec xattrs are set on a newly created file present in the non-ec hot tier
- BZ - 1287447 - remove watermark ie cluster.tier-mode from vol info after a detach tier is completed successfully
- BZ - 1287532 - After detach-tier start writes still go to hot tier
- BZ - 1287980 - [Quota]: Peer status is in "Rejected" state with Quota enabled volume
- BZ - 1287997 - tiering: quota list command is not working after attach or detach
- BZ - 1288490 - Good files are not promoted in a tiered volume when bitrot is enabled
- BZ - 1288509 - rm -rf is taking very long time
- BZ - 1288921 - Use after free bug in notify_kernel_loop in fuse-bridge code
- BZ - 1288988 - Getting errors while launching the selfheal
- BZ - 1289017 - Failed to show rebalance status if the volume name has 'tier' substring
- BZ - 1289071 - [Tier]: Failed to open "demotequeryfile-master-tier-dht" errors logged on the node having only cold bricks
- BZ - 1289092 - update redhat-release-server to the latest one available in rhel7.2
- BZ - 1289228 - [Tiering] + [DHT] - Detach tier fails to migrate the files when there are corrupted objects in hot tier.
- BZ - 1289423 - Regular files are listed as 'T' files on nfs mount
- BZ - 1289437 - [Tier]: rm -rf * from client during demotion causes a stale link file to remain in system with attributes as ?????
- BZ - 1289483 - FSAL_GLUSTER : Rename throws error in mount when acl is enabled
- BZ - 1289893 - Excessive "dict is NULL" logging
- BZ - 1289975 - Access to files fails with I/O error through uss for tiered volume
- BZ - 1290401 - File is not demoted after self heal (split-brain)
- BZ - 1291052 - [tiering]: read/write freq-threshold allows negative values
- BZ - 1291152 - [tiering]: cluster.tier-max-files option in tiering is not honored
- BZ - 1291195 - [georep+tiering]: Geo-replication sync is broken if cold tier is EC
- BZ - 1291560 - Renames/deletes failed with "No such file or directory" when few of the bricks from the hot tier went offline
- BZ - 1291566 - first file created after hot tier full fails to create, but later ends up as a stale erroneous file (file with ???????????)
- BZ - 1291969 - [Tiering]: When files are heated continuously, promotions are too aggressive that it promotes files way beyond high water mark
- BZ - 1292205 - When volume creation fails, gluster volume and brick lvs are not deleted
- BZ - 1292605 - (RHEL6) hook script for CTDB should not change Samba config
- BZ - 1292705 - gluster cli crashed while performing 'gluster vol bitrot <vol_name> scrub status'
- BZ - 1292773 - (RHEL6) S30Samba scripts do not work on systemd systems
- BZ - 1293228 - Disperse: Disperse volume (cold vol) crashes while writing files on tier volume
- BZ - 1293237 - [Tier]: "Bad file descriptor" on removal of symlink only on tiered volume
- BZ - 1293286 - heal info output shouldn't print number of entries processed when brick is unreachable.
- BZ - 1293380 - [tiering]: Tiering isn't started after attaching hot tier and hence no promotion/demotion
- BZ - 1293903 - [Tier]: can not delete symlinks from client using rm
- BZ - 1294073 - [tiering]: Incorrect display of 'gluster v tier help'
- BZ - 1294478 - quota: limit xattr not healed for a sub-directory on a newly added bricks
- BZ - 1294487 - glusterfsd crash while bouncing the bricks
- BZ - 1294594 - [Tier]: Killing glusterfs tier process doesn't reflect as failed/faulty in tier status
- BZ - 1294774 - Quota Aux mount crashed
- BZ - 1294816 - Unable to modify quota hard limit on tier volume after disk limit got exceeded
- BZ - 1295299 - glusterfs fuse mount session hangs indefinitely when a file create progressively fills up the hot tier completely
- BZ - 1295736 - "Operation not supported" error logs seen continuously in brick logs
- BZ - 1296048 - Attach tier + nfs : Creates fail with invalid argument errors
- BZ - 1296134 - Rebalance crashed after detach tier.
- BZ - 1297004 - [write-behind] : Write/Append to a full volume causes fuse client to crash
- BZ - 1297300 - Stale stat information for corrupted objects (replicated volume)
- BZ - 1299724 - Excessive logging in mount when bricks of the replica are down
- BZ - 1299799 - Snapshot creation fails on a tiered volume
- BZ - 1300246 - [Tiering]: Values of watermarks, min free disk etc will be miscalculated with quota set on root directory of gluster volume
- BZ - 1300682 - [georep+tiering]: Hardlink sync is broken if master volume is tiered
- BZ - 1302901 - SMB: SMB crashes with AIO enabled on reads + vers=3.0
- BZ - 1303894 - promotions not happening when space is created on previously full hot tier
- BZ - 1304684 - [quota]: Incorrect disk usage shown on a tiered volume
- BZ - 1305172 - [Tier]: Ends up with multiple entries of the same file on client after renaming a file which had hardlinks
CVEs
(none)
Red Hat Enterprise Linux Server 6
SRPM | SHA-256 |
---|---|
glusterfs-3.7.5-19.el6.src.rpm | SHA-256: 600852365dad54f82eaf4751594e73e68d3066229051d6dec7820cce7fd22b75 |
x86_64 | |
glusterfs-3.7.5-19.el6.x86_64.rpm | SHA-256: efcc1f20c20dd89a6a498576a1138a5eed819c542e8dc2ebda3606ea9b63dea3 |
glusterfs-api-3.7.5-19.el6.x86_64.rpm | SHA-256: f39563d5451b8104622ce65d19dbabc41f331586584cfe5fac596d621e168ecc |
glusterfs-api-devel-3.7.5-19.el6.x86_64.rpm | SHA-256: 68f9bc435d039274b01d8fc0dbbc5c2e15d9e60cfc952094f25787175f215840 |
glusterfs-cli-3.7.5-19.el6.x86_64.rpm | SHA-256: 66460c8ec7c584971b6041994d188d244c9720c7f9d9dfa8c88c6e3d26302e0b |
glusterfs-client-xlators-3.7.5-19.el6.x86_64.rpm | SHA-256: 20a2ffbd8112d11781955579017fdf1c1bb1af4e591da3be513ced774ae16d46 |
glusterfs-debuginfo-3.7.5-19.el6.x86_64.rpm | SHA-256: 63e47f087783531d1c4fbf117202485922a05f443a353baed28111fbe7578c05 |
glusterfs-devel-3.7.5-19.el6.x86_64.rpm | SHA-256: 172d2074794e9590da2e6a3dcf9d0f677f638eff0ad39bf109b6bb57bc3b0dd5 |
glusterfs-fuse-3.7.5-19.el6.x86_64.rpm | SHA-256: 507bc9736482624f89fdf23480339c889440c662b85dfefe15f7884f6f541be4 |
glusterfs-libs-3.7.5-19.el6.x86_64.rpm | SHA-256: 3828a367cf1149c211f8f2bb5389652ed98be92e0bd4332a91429f7c38be23af |
glusterfs-rdma-3.7.5-19.el6.x86_64.rpm | SHA-256: df131056870e7fed7b9f782992fd279fed4bf6acf5d45b1c3698ee88de7fe9a3 |
python-gluster-3.7.5-19.el6.noarch.rpm | SHA-256: 7523d69466823c3e992734dcf3e1aacab7b666e809a9bbdf57a7e7671bbe05f4 |
Red Hat Gluster Storage Server for On-premise 3 for RHEL 6
SRPM | SHA-256 |
---|---|
gluster-nagios-common-0.2.3-1.el6rhs.src.rpm | SHA-256: 55bf077f217325748557569e8fdc490ea63ce38e248e54e30ed242319e9617a4 |
glusterfs-3.7.5-19.el6rhs.src.rpm | SHA-256: 45883467434c886913c1bb5560d1f9fdfa1576462c3dd56d8b262c788d4737e6 |
heketi-1.0.2-1.el6rhs.src.rpm | SHA-256: 4b8ead9a0405327658ec286e17193fa7e885b9b262036d863c8a012497505366 |
nfs-ganesha-2.2.0-12.el6rhs.src.rpm | SHA-256: 0a008c9530514892346ee36f4aff2f044a48ec98a771a4b41173717ada688ca2 |
nrpe-2.15-4.2.el6rhs.src.rpm | SHA-256: d343d3619f8b8e2aab315599a1f3a43c75af1b615513976019a64e76575eaacf |
nsca-2.9.1-4.2.el6rhs.src.rpm | SHA-256: 14707c4a255ffa981dbed8c06549a3e261479eb487426aa157ff6311b413c7ad |
redhat-storage-server-3.1.2.0-1.el6rhs.src.rpm | SHA-256: 9ebecb5ab92edb2a9ff5c04d47a1ffc1f2ab13961e295d1385a6aae1aa39669d |
vdsm-4.16.30-1.3.el6rhs.src.rpm | SHA-256: 05625bcfa75967b9c5509143b3fc044c07e950a03252ff9a92e1b367d43fa3ac |
x86_64 | |
gluster-nagios-common-0.2.3-1.el6rhs.noarch.rpm | SHA-256: 16dd78cdaca6ff5c7bcb9e22b9ffea3e51a73c11c748d21661a3cd7b3d48de55 |
glusterfs-3.7.5-19.el6rhs.x86_64.rpm | SHA-256: a9c64a7089f68d948dc0b7fbdea03edeee5c697cd6f5a3cf11b060d0ee9555c2 |
glusterfs-api-3.7.5-19.el6rhs.x86_64.rpm | SHA-256: 94cb9bb3e4e28ad588c34de7c429b7e2f21335d76f2adbfca00d244b6ac8347a |
glusterfs-api-devel-3.7.5-19.el6rhs.x86_64.rpm | SHA-256: 470a420c2b0ea4be3dc0310b736ca06c4f8526428d19432c30d42b190d8411e9 |
glusterfs-cli-3.7.5-19.el6rhs.x86_64.rpm | SHA-256: 1ab4504600f5f07527ea32ccb3414675e1ef9d69a8776c94f252bcf6f652a129 |
glusterfs-client-xlators-3.7.5-19.el6rhs.x86_64.rpm | SHA-256: ed71ba18711f1c29c3c0085801fec4054927c233cd14f222585381942fe5a06e |
glusterfs-debuginfo-3.7.5-19.el6rhs.x86_64.rpm | SHA-256: bea374497e60c866fd7dd2f04fb699818ac406aa8e9a93ba2496dde320581a13 |
glusterfs-devel-3.7.5-19.el6rhs.x86_64.rpm | SHA-256: 4403158f5f32d2d859f21e7cbde34569a53c1736a50917a11bcc6f3db717b740 |
glusterfs-fuse-3.7.5-19.el6rhs.x86_64.rpm | SHA-256: fd166c206a830714f0b261c639c31c7cd4e7f56dfe86e2f15629f7f194e20dd1 |
glusterfs-ganesha-3.7.5-19.el6rhs.x86_64.rpm | SHA-256: 76d43bde9d0ef98bc143f2c061d2d041ad15413a142267bafce42b03d087a84e |
glusterfs-geo-replication-3.7.5-19.el6rhs.x86_64.rpm | SHA-256: 6570fe2e10eeb277f33f1e4b513c0c5c2ebb4f8f612e23747dbf496ce4fa4b0d |
glusterfs-libs-3.7.5-19.el6rhs.x86_64.rpm | SHA-256: 5800d1e67829403f1efcb807e13b58a44f0d7b7dc5c86bb600fa3efe692120ce |
glusterfs-rdma-3.7.5-19.el6rhs.x86_64.rpm | SHA-256: dc5724ba494c1b9e550c2ded3cd6c04178eea2ca013b62b723a6c7410fc3a3f4 |
glusterfs-server-3.7.5-19.el6rhs.x86_64.rpm | SHA-256: 7e51828f7ca9189b66b15de2aa6cb891ba5883a64f39290bb8307a280773f227 |
heketi-1.0.2-1.el6rhs.x86_64.rpm | SHA-256: df508175d3e302cb983ff99f0f9383300fe3bc3400010f430560fbc1e5b244f4 |
heketi-devel-1.0.2-1.el6rhs.noarch.rpm | SHA-256: 37887606b89528ac0004773c14ec68865c07bee97af4fa7b68f7254fd406b0e5 |
heketi-unit-test-devel-1.0.2-1.el6rhs.x86_64.rpm | SHA-256: 7466b5b589e8efa4b638deb4edb8a53dfc1ed12995d680258a03fad9f39829b2 |
nfs-ganesha-2.2.0-12.el6rhs.x86_64.rpm | SHA-256: 5159f592fc288a11bb51f4f5bfdc98eba6d1933495474b70eaa4505045df1ae6 |
nfs-ganesha-debuginfo-2.2.0-12.el6rhs.x86_64.rpm | SHA-256: a7b28f9284254a2781f58f84dba2c8aaa544bb14c3d467778b0c1b79b1eb300f |
nfs-ganesha-gluster-2.2.0-12.el6rhs.x86_64.rpm | SHA-256: 2d1a59a5fcd31425e2633f27f98c6d3d280eb39a2f32d0cf29bd9ac033c576cf |
nrpe-2.15-4.2.el6rhs.x86_64.rpm | SHA-256: 9b9c4b867ba0170f37eb619c2fb37d2eee4969618124fe272d58c5aab8ee5dc0 |
nrpe-debuginfo-2.15-4.2.el6rhs.x86_64.rpm | SHA-256: c95d38620039e6b41994ea51a51a5cd23404836b3399dd123066127ba3dbf2a1 |
nsca-client-2.9.1-4.2.el6rhs.x86_64.rpm | SHA-256: 18078268f97f143f7f14af6ef286914098132ded59f910342c37922edd260381 |
nsca-debuginfo-2.9.1-4.2.el6rhs.x86_64.rpm | SHA-256: f53dbe930d26b15602f2e94def3561b0b7a41f5ffa74c133e670495c3decb627 |
python-gluster-3.7.5-19.el6rhs.noarch.rpm | SHA-256: 683e344e35a35fc9a4c62c929f739e711f5a9d7047ebfe4b34f7a716befbd46a |
redhat-storage-server-3.1.2.0-1.el6rhs.noarch.rpm | SHA-256: 4750202eadacf29fe768352a87fb15f2b28bd595928635b0996f2f8b1e44f668 |
vdsm-4.16.30-1.3.el6rhs.x86_64.rpm | SHA-256: e585e32ab99da6ef3d72172adfc31af99f25974848be481cc0aa1e2839995486 |
vdsm-cli-4.16.30-1.3.el6rhs.noarch.rpm | SHA-256: 66c7623ba0ce769a7e7dcf193a00c9fcd2b6f5e2e7a60e670d34497ad3105ab1 |
vdsm-debug-plugin-4.16.30-1.3.el6rhs.noarch.rpm | SHA-256: 22d89d89e71808bd6232be276294caacedba705b54019198fe4de72fb00078fa |
vdsm-debuginfo-4.16.30-1.3.el6rhs.x86_64.rpm | SHA-256: 2d1726835135b38637efd2d75c5c7fed66022b5a1e560152e5528994404233c0 |
vdsm-gluster-4.16.30-1.3.el6rhs.noarch.rpm | SHA-256: 4ed07ca857f048cad66782fc6dc1a4a388d20db548cb3d357413e8326567d612 |
vdsm-hook-ethtool-options-4.16.30-1.3.el6rhs.noarch.rpm | SHA-256: e5d802dce1f7da395bc8aab4bc6c20165ba01c8f52f2178c8ea3c9a696e2a227 |
vdsm-hook-faqemu-4.16.30-1.3.el6rhs.noarch.rpm | SHA-256: 8978827311984d3eddc943e7d84a56a116569c7e42a214f5d4295316e54e5726 |
vdsm-hook-openstacknet-4.16.30-1.3.el6rhs.noarch.rpm | SHA-256: 35986951b9f9ef93896833c7defa9ec6aab135a7ef3724b1b2a4171333b0eb68 |
vdsm-hook-qemucmdline-4.16.30-1.3.el6rhs.noarch.rpm | SHA-256: 27e2e023851b42fbe7dab7d8322b32b94319f38ed7f48be9efd5fdf8031c2c3f |
vdsm-jsonrpc-4.16.30-1.3.el6rhs.noarch.rpm | SHA-256: 8a36dfcf0bf95e097d5b317e09eee160a81997f6f0830e7563ebdcdd6adb3147 |
vdsm-python-4.16.30-1.3.el6rhs.noarch.rpm | SHA-256: 96e47fd05943245267261fcde11685b9d868713862420ae19750231095bfd75b |
vdsm-python-zombiereaper-4.16.30-1.3.el6rhs.noarch.rpm | SHA-256: cf61007f67ae7ec798d92970702f587de2bf93aecdd0e87a3cc23557f9505832 |
vdsm-reg-4.16.30-1.3.el6rhs.noarch.rpm | SHA-256: d619f3a54f433e34fe7d2f9211384b9397e53f72c4a0d52259860621f85c9ec3 |
vdsm-tests-4.16.30-1.3.el6rhs.noarch.rpm | SHA-256: e13543b8351ff0fea6b7fa2117c21ad911056b73fa58035568b2e8efc8448439 |
vdsm-xmlrpc-4.16.30-1.3.el6rhs.noarch.rpm | SHA-256: f0108a6e81e6c548c95cb3285df1f8289c0af871e0c94613de6d89d15a0506ce |
vdsm-yajsonrpc-4.16.30-1.3.el6rhs.noarch.rpm | SHA-256: 372940dc9bf4978d6eecb94ba504a32654a44c211c8d4b580dbef1d15fb82afa |
Red Hat Gluster Storage Nagios Server 3 for RHEL 6
SRPM | SHA-256 |
---|---|
gluster-nagios-common-0.2.3-1.el6rhs.src.rpm | SHA-256: 55bf077f217325748557569e8fdc490ea63ce38e248e54e30ed242319e9617a4 |
nagios-server-addons-0.2.3-1.el6rhs.src.rpm | SHA-256: 3ee7e3a9d56beb8d09812ed3bb40d9a066e7f7276842b60122d5132920be7d7c |
nrpe-2.15-4.2.el6rhs.src.rpm | SHA-256: d343d3619f8b8e2aab315599a1f3a43c75af1b615513976019a64e76575eaacf |
nsca-2.9.1-4.2.el6rhs.src.rpm | SHA-256: 14707c4a255ffa981dbed8c06549a3e261479eb487426aa157ff6311b413c7ad |
x86_64 | |
gluster-nagios-common-0.2.3-1.el6rhs.noarch.rpm | SHA-256: 16dd78cdaca6ff5c7bcb9e22b9ffea3e51a73c11c748d21661a3cd7b3d48de55 |
nagios-plugins-nrpe-2.15-4.2.el6rhs.x86_64.rpm | SHA-256: 401f949f18b74520625d0d86cf17809b79e6b177c6d3761cd9cc4874805cb4c5 |
nagios-server-addons-0.2.3-1.el6rhs.noarch.rpm | SHA-256: 06949eefcc61127c85ed6379b1a789584c1d8649e34d9f5bf54310cb963769b0 |
nrpe-debuginfo-2.15-4.2.el6rhs.x86_64.rpm | SHA-256: c95d38620039e6b41994ea51a51a5cd23404836b3399dd123066127ba3dbf2a1 |
nsca-2.9.1-4.2.el6rhs.x86_64.rpm | SHA-256: b2d8ca29d9ffee947104243db3bf1d9e71ad37590ab8982824dcdb8de99a8b75 |
nsca-debuginfo-2.9.1-4.2.el6rhs.x86_64.rpm | SHA-256: f53dbe930d26b15602f2e94def3561b0b7a41f5ffa74c133e670495c3decb627 |
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/.