Issued: 2013-12-18
Updated: 2013-12-18
RHBA-2013:1832 - Bug Fix Advisory
Synopsis
rhev 3.2.5 - vdsm bug fix update
Type/Severity
Bug Fix Advisory
Topic
Updated vdsm packages that fix several bugs are now available.
Description
VDSM is a management module that serves as a Red Hat Enterprise Virtualization
Manager agent on Red Hat Enterprise Virtualization Hypervisor or Red Hat
Enterprise Linux hosts.
This update fixes the following bugs:
- The VDSM client tool could not handle Unicode characters outside the ASCII
range, so queries collecting data about virtual machines could fail. These
characters are now properly encoded, enabling VDSM to collect virtual machine
data. (BZ#1016702)
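The essence of this fix is encoding text to UTF-8 before it crosses an ASCII-only boundary. A minimal sketch, assuming a hypothetical helper (`encode_value` is illustrative, not VDSM's actual code):

```python
# Sketch of the encoding fix (hypothetical helper, not VDSM internals):
# encode Unicode VM names to UTF-8 bytes so ASCII-only transports and
# implicit conversions no longer raise UnicodeEncodeError.

def encode_value(value, encoding="utf-8"):
    """Return UTF-8 encoded bytes for text input; pass bytes through."""
    if isinstance(value, str):          # unicode text
        return value.encode(encoding)
    return value                        # already-encoded bytes unchanged

vm_name = "vm-\u00fcmlaut"              # non-ASCII VM name
encoded = encode_value(vm_name)
assert encoded.decode("utf-8") == vm_name
```

Decoding with the same codec on the receiving side round-trips the name losslessly.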
- LvmCache did not invalidate stale filters, so after adding a new FC or iSCSI
LUN to a volume group, hosts could not access the storage domains and became
non-operational. Now, all filters are validated after a new device is added and
before the storage domain is extended, so hosts can access storage domains which
have been extended. (BZ#1025467)
- VDSM initializes a process pool of 10 processes for each storage domain,
keyed by the storage domain UUID. The UUID was parsed incorrectly from the
storage domain path, and the image UUID (imgUUID) was used instead, so new
process pools were created continually rather than being reused. The storage
pool path is now parsed correctly and the right UUID is used as the pool's key,
limiting the initialized process pools to one per storage domain as required.
(BZ#1026335)
- VDSM uses LVM locking_type 1, which skips all operations on clustered volume
groups and returns an exit code that triggered errors in the module, so storage
domains could not be created when clustered volume groups were present. Adding
the --ignoreskippedcluster flag to LVM commands prevents the exit code error
when clustered volume groups are present in the attached LUNs. (BZ#1029967)
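The change amounts to appending the flag to every LVM invocation. A sketch of such a command builder (the builder function is hypothetical; --ignoreskippedcluster is a real LVM option):

```python
# Illustrative command builder: append --ignoreskippedcluster so that
# volume groups skipped due to clustering do not turn the LVM exit code
# into an error. Not VDSM's actual builder.

def build_lvm_cmd(subcmd, args, ignore_skipped_cluster=True):
    cmd = ["lvm", subcmd]
    if ignore_skipped_cluster:
        cmd.append("--ignoreskippedcluster")
    cmd.extend(args)
    return cmd

cmd = build_lvm_cmd("vgs", ["--noheadings", "-o", "vg_name"])
assert "--ignoreskippedcluster" in cmd
```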
- QEMU sometimes returned a value for the highest allocated extent of a volume
that was greater than the capacity of the qcow2 volume. In such a case, VDSM
attempted to extend the volume on every run of _highWrite, because it did not
verify that the highest allocated extent was within the capacity of the volume
before proceeding. Both _highWrite and _onAbnormalStop now share the same
volume-extension logic. (BZ#1032106)
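The shared check can be sketched as follows (function and parameter names are illustrative, not VDSM's internals): an allocation beyond capacity is rejected instead of triggering an endless extension loop.

```python
# Sketch of the shared extension check: extend only when the highest
# allocated extent reported by QEMU is within the volume's capacity
# and free space has dropped below the threshold.

def should_extend(alloc, capacity, threshold):
    """Return True when the volume should be extended."""
    if alloc > capacity:
        # QEMU occasionally reports an allocation beyond the qcow2
        # capacity; extending in that case would repeat on every run.
        return False
    return capacity - alloc < threshold

assert should_extend(900, 1000, 200) is True    # close to capacity
assert should_extend(100, 1000, 200) is False   # plenty of room
assert should_extend(1200, 1000, 200) is False  # bogus QEMU report
```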
- When a hypervisor was rebooted, all logical volumes that were part of an FC
storage domain were automatically activated. This caused problems, as logical
volumes should be activated only at the engine's request and deactivated
immediately when no longer needed. The automatically activated logical volumes
did not pick up changes made by the SPM on the storage, which could lead to
data corruption when a virtual machine wrote to a logical volume with stale
metadata. The fix checks all VDSM logical volumes during LVM bootstrap and
deactivates them if possible. Special logical volumes are refreshed, since they
are accessed early, when connecting to the storage pool, before LVM bootstrap
is done. Open logical volumes are skipped because they use correct metadata
once opened. (BZ#1033123)
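The bootstrap pass described above can be sketched as a simple dispatch (the set of special LV names and the function are illustrative, not VDSM's actual code):

```python
# Sketch of the LVM bootstrap pass over VDSM logical volumes:
# skip open LVs, refresh special LVs, deactivate the rest.

SPECIAL_LVS = {"metadata", "ids", "leases", "inbox", "outbox", "master"}

def bootstrap_lv(name, is_open, refresh, deactivate):
    if is_open:
        return "skipped"        # open LVs already use correct metadata
    if name in SPECIAL_LVS:
        refresh(name)           # accessed before bootstrap completes
        return "refreshed"
    deactivate(name)            # reactivated only on engine request
    return "deactivated"

noop = lambda name: None
assert bootstrap_lv("ids", False, noop, noop) == "refreshed"
assert bootstrap_lv("image-lv", False, noop, noop) == "deactivated"
assert bootstrap_lv("image-lv", True, noop, noop) == "skipped"
```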
- After attempting to cancel multiple live migrations, some virtual machines
were killed. To fix this, when the migration is cancelled, libvirt raises an
error to prevent the operation from proceeding, which also avoids calling the
destination VDSM to create the virtual machine instance. (BZ#1033153)
All users managing Red Hat Enterprise Virtualization hosts using Red Hat
Enterprise Virtualization Manager are advised to install these updated
packages, which fix these issues.
These updated packages will be provided to users of Red Hat Enterprise
Virtualization Hypervisor in the next rhev-hypervisor6 errata package.
Solution
Before applying this update, make sure all previously released errata
relevant to your system have been applied.
This update is available via the Red Hat Network. Details on how to
use the Red Hat Network to apply this update are available at
https://access.redhat.com/site/articles/11258
Affected Products
- Red Hat Virtualization 3.2 x86_64
- Red Hat Virtualization 3 for RHEL 6 x86_64
Fixes
- BZ - 1025467 - SD is partially accessible after extending.
- BZ - 1032106 - _highWrite should not extend a drive if the highest allocated extent is outside the capacity of the volume.
- BZ - 1033123 - LVM logical volumes on FC SDs are activated automatically after hypervisor reboot
- BZ - 1033153 - DestroyVDSCommand called after CancelMigrateVDSCommand failure when attempting to cancel multiple live migrations at a time
CVEs
(none)
References
(none)
Red Hat Virtualization 3.2
SRPM:
vdsm-4.10.2-28.0.el6ev.src.rpm | SHA-256: 61733836c920e006f880c921f997c8e8a9e0f83f654f87a0312a8f65b5840f85
x86_64:
vdsm-debuginfo-4.10.2-28.0.el6ev.x86_64.rpm | SHA-256: 7fa16052be08751a23b933a89e0e20f5f02341b4a1bd405eb15fa62daa8900b5
Red Hat Virtualization 3 for RHEL 6
SRPM:
vdsm-4.10.2-28.0.el6ev.src.rpm | SHA-256: 61733836c920e006f880c921f997c8e8a9e0f83f654f87a0312a8f65b5840f85
x86_64:
vdsm-4.10.2-28.0.el6ev.x86_64.rpm | SHA-256: cc211036501c2c0ec1b8d44220ce8b4ec6803167dfd01b68cf1648eaf7ad2e56
vdsm-cli-4.10.2-28.0.el6ev.noarch.rpm | SHA-256: 8317818399ee13a0159b80ff447ec4625e25b319842d59a3e4bed1153b235364
vdsm-debuginfo-4.10.2-28.0.el6ev.x86_64.rpm | SHA-256: 7fa16052be08751a23b933a89e0e20f5f02341b4a1bd405eb15fa62daa8900b5
vdsm-hook-vhostmd-4.10.2-28.0.el6ev.noarch.rpm | SHA-256: 2978c5843d560e680f897dc01f4329ea5c554a1eb135fadca6646f7ecf0384fa
vdsm-python-4.10.2-28.0.el6ev.x86_64.rpm | SHA-256: 8be8864997b429bef95545e711f8489701a06f672bb652f2d505158612b056fb
vdsm-reg-4.10.2-28.0.el6ev.noarch.rpm | SHA-256: 1e4926db6a2fe4cf29edff222b9bab00c6e39bfd52786c84a279fad41b8dfd2f
vdsm-xmlrpc-4.10.2-28.0.el6ev.noarch.rpm | SHA-256: 4d75dc10f7940b845aaa618c9fdab40685be2cd9d2869c0f0b6ece2ebd313dc9
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/.