services:rhev

Page revision: 2024/08/23 00:41 by dmick (current). Previous revision: 2023/02/08 12:40 by akraitman.

===== Storage =====
**Note: this was the original configuration. Storage is now provided by an [[services:longrunningcluster#lrc_iscsi_volume_for_the_rhev_cluster|iSCSI service]] on the long-running cluster.**

<del>Two new storage chassis are being used as the storage nodes. [[hardware:infrastructure#ssdstore_01_02_frontsepiacephcom|ssdstore{01,02}.front.sepia.ceph.com]] are populated with 8x 1.5TB NVMe drives in a software RAID6 configuration.

A third host, [[hardware:senta|senta01]], is configured as the arbiter node for the Gluster volume. 2x 240GB SSD drives are in a software RAID1 and mounted at ''/gluster''.
</del>
----
  
==== Gluster ====
<del>All VMs (except the Hosted Engine, which is on the ''hosted-engine'' volume) are backed by a sharded Gluster volume, ''ssdstorage''. A sharded volume was chosen to decrease the time needed for the volume to heal after a storage failure. This should reduce VM downtime in the event of a storage node failure.

If there is a storage node failure, RHEV will use the remaining Gluster node and Gluster will automatically heal as part of the recovery process. It's possible a VM will be paused if its disk image changed while one of the storage nodes was down. Run ''gluster volume heal ssdstorage info'' to see heal status.

A single software RAID6 was decided upon as the most redundant and reliable storage configuration. See the graph below comparing the old storage as well as tests of various RAID5 and RAID6 configurations.</del>
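The heal check mentioned above would be run from any node in the Gluster trusted pool; ''ssdstorage'' is the volume name from the original (now retired) configuration. A minimal sketch:

```shell
# List files/entries still pending self-heal on the ssdstorage volume
# (historical: the RHEV cluster no longer uses this Gluster backend)
gluster volume heal ssdstorage info

# Per-brick counts of entries awaiting heal; all zeros mean recovery is complete
gluster volume heal ssdstorage statistics heal-count
```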
  
{{ :services:screenshot_at_2017-07-05_16-12-30.png |}}