- Issued: 2017-10-11
- Updated: 2017-10-11
RHEA-2017:2881 - Product Enhancement Advisory
Synopsis
cns-deploy-tool bug fix and enhancement update
Type/Severity
Product Enhancement Advisory
Topic
Updated cns-deploy-tool packages that fix one bug and add multiple enhancements are now available for Container-Native Storage 3.6 and Container-Ready Storage deployments.
Description
The cns-deploy utility deploys Container-Native Storage in an OpenShift environment.
This update adds the following enhancements:
- The cns-deploy package has been rebased to the upstream gluster-kubernetes v1.2.0, which provides a number of bug fixes and enhancements over the previous version. (BZ#1456761)
- With this update, the cns-deploy utility can deploy heketi to run inside OpenShift and work with Red Hat Gluster Storage in a Container-Ready Storage environment. (BZ#1463989)
- With this update, cns-deploy can also deploy the gluster-block provisioner pod and the gluster-s3 service pod. (BZ#1480124)
In addition, this update fixes the following bug:
- Previously, heketi-cli could not write the copy-job file to the root of the container file system due to insufficient permissions. With this update, the file is written to /tmp instead, allowing the cns-deploy run to complete successfully. (BZ#1484685)
Solution
Before applying this update, make sure all previously released errata relevant to your system have been applied.
For details on how to apply this update, refer to:
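As a general sketch of how such an advisory is typically applied on a RHEL 7 system (standard yum workflow; the advisory ID and package name are taken from this document, but verify against your subscription channels before running):

```shell
# Apply only the packages from this advisory
# (requires the yum security plugin, available by default on RHEL 7):
yum update --advisory RHEA-2017:2881

# Alternatively, update the affected package directly:
yum update cns-deploy

# Confirm the installed build matches the advisory's package list:
rpm -q cns-deploy
```

The `rpm -q` output should report cns-deploy-5.0.0-54.el7rhgs after a successful update.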
Affected Products
- Red Hat Gluster Storage Server for On-premise 3 for RHEL 7 x86_64
Fixes
- BZ - 1456761 - cns-deploy rebase on 1.2v
- BZ - 1459810 - heketi-storage-copy-job fails during cns-deploy as it tries to pull a wrong image "heketi/heketi:dev"
- BZ - 1463613 - cns-brick-multiplexing: brick process fails to restart after gluster pod failure
- BZ - 1463989 - [RFE] Enable heketi (with inherent HA) running inside OCP to work with RHGS in CRS mode
- BZ - 1465378 - template file for rhgs-s3-server
- BZ - 1468110 - Include Gluster Block Provisioner deployment artifacts in cns-deploy package.
- BZ - 1470142 - cns-deploy doesn't have latest image tags updated in its templates
- BZ - 1470521 - gluster-block logs should be persisted in CNS
- BZ - 1478046 - /etc/sysconfig/gluster-block file, which defines 'GB_GLFS_LRU_COUNT' value, should be persistent in RHGS image
- BZ - 1480124 - [RFE] Support Block Provisioner and S3 template deployment via cns-deploy tool
- BZ - 1480332 - Gluster Bricks are not coming up after pod restart when bmux is ON
- BZ - 1482046 - cns-deploy retrieves wrong minor version if hostname includes "oc"
- BZ - 1483038 - cns-deployment fails at deploy-heketi phase
- BZ - 1483621 - Failed to deploy CNS with cns-deploy
- BZ - 1483852 - cns-deployment fails during deploy-heketi, deploy-heketi url isn't reachable
- BZ - 1484217 - cns-deploy fails, failing to load the topology file
- BZ - 1484685 - cns-deployment fails while trying to setup heketidb
- BZ - 1487987 - cns-deployment fails
- BZ - 1488122 - [cns-deploy]: gluster-block setup fails
- BZ - 1489860 - s3 and block provisioner templates refer to upstream images
- BZ - 1490700 - cns-deploy failed: error while deploying deploy-heketi-pod
- BZ - 1491219 - Default 'gluster-s3-storageclass.yaml' fails to create
- BZ - 1495838 - glusterblock-provisioner image name seems to have changed recently and hence needs to be updated in the glusterblock-provisioner.yaml template
- BZ - 1497125 - sample files for static provisioning of glusterfs volumes are missing
CVEs
(none)
References
(none)
Red Hat Gluster Storage Server for On-premise 3 for RHEL 7
Package | SHA-256
---|---
cns-deploy-5.0.0-54.el7rhgs.src.rpm (SRPM) | bd59c03741428bf9bee1f14640fcd6539dad6a0f0bf2e8aaff991854ab560df2
cns-deploy-5.0.0-54.el7rhgs.x86_64.rpm (x86_64) | e3004844f159e879dec2ca8e25f83ddd97d29ec5db688934f017ddc173922bb1
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/.