===== Storage =====
**Note: this was the original configuration. Storage is now provided by an [[services:longrunningcluster#lrc_iscsi_volume_for_the_rhev_cluster|iscsi service]] on the long-running cluster**

<del>Two new storage chassis are being used as the storage nodes. [[hardware:infrastructure#ssdstore_01_02_frontsepiacephcom|ssdstore{01,02}.front.sepia.ceph.com]] are populated with 8x 1.5TB NVMe drives in a software RAID6 configuration.

A third host, [[hardware:senta|senta01]], is configured as the arbiter node for the Gluster volume. 2x 240GB SSD drives are in a software RAID1 and mounted at ''/gluster''.
</del>
----
==== Gluster ====
<del>All VMs (except the Hosted Engine, which is on the ''hosted-engine'' volume) are backed by a sharded Gluster volume, ''ssdstorage''. A sharded volume was chosen to decrease the time needed for the volume to heal after a storage failure, reducing VM downtime in the event of a storage node failure.

If there is a storage node failure, RHEV will use the remaining Gluster node, and Gluster will automatically heal as part of the recovery process. A VM may be paused if its disk image changed while one of the storage nodes was down. Run ''gluster volume heal ssdstorage info'' to see heal status.

A single software RAID6 was decided upon as the most redundant and reliable storage configuration. See the graph below comparing the old storage as well as tests of various RAID5 and RAID6 configurations.</del>
{{ :services:screenshot_at_2017-07-05_16-12-30.png |}}
I used to have a summary of steps here, but it's safer to just follow the [[https://access.redhat.com/documentation/en-us/red_hat_virtualization/|Red Hat docs]].
==== VM has paused due to no storage space error ====
We started seeing this issue on VMs like teuthology; it appears to be a known bug. Following the solution below, I updated ''/etc/vdsm/vdsm.conf.d/99-local.conf'' and restarted vdsm with ''systemctl restart vdsmd'':

https://access.redhat.com/solutions/130843
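As a rough sketch of what that change looks like (the option names and values here are assumptions based on the usual thin-provisioning watermark settings in vdsm's ''[irs]'' section, not copied from the Red Hat article; confirm them against the linked solution for your RHV version before applying):

```shell
# Hypothetical vdsm.conf override -- check the linked Red Hat solution
# for the exact option names and recommended values.
cat <<'EOF' > /etc/vdsm/vdsm.conf.d/99-local.conf
[irs]
# Extend thin-provisioned disks earlier and in larger chunks so busy
# VMs are less likely to hit "no storage space" pauses.
volume_utilization_percent = 25
volume_utilization_chunk_mb = 2048
EOF

# Restart vdsm on each hypervisor for the change to take effect.
systemctl restart vdsmd
```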
==== Growing a VM's virtual disk ====
- Log into the [[https://mgr01.front.sepia.ceph.com|Web UI]]