===== Summary =====
We have a RHEV instance running on [[hardware:infrastructure#hv_0103|hv{01..04}]] as the main hypervisor nodes. They're listed as **Hosts** in the RHEV Manager.

Currently the installed version on the RHEV Hosts is 4.3.5-1.

The [[http://mgr01.front.sepia.ceph.com|RHEV Manager]] is a [[https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/html/self-hosted_engine_guide/|Self-Hosted VM]] inside the cluster. The username for logging in is ''admin'' and the password is our standard root password.
The Hypervisors (hv{01..04}) and Storage nodes (ssdstore{01..02}) have entries in ''/etc/hosts'' in case of DNS failure.
  
Note: it is important that the version of the glusterfs packages on the hypervisors does not exceed the version on the storage nodes (i.e., the client version must be older than or equal to the server version).
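The client/server version constraint above can be verified with a simple version comparison. The sketch below is illustrative: the ''version_le'' helper and the sample version strings are hypothetical, and on the real nodes you would feed it the output of ''rpm -q --qf '%{VERSION}' glusterfs'' from a hypervisor and a storage node.

<code bash>
#!/bin/bash
# Illustrative check: the hypervisor (client) glusterfs version must not
# exceed the storage-node (server) glusterfs version.
# version_le is a hypothetical helper, not part of the lab tooling.
version_le() {
  # True if $1 sorts at or before $2 in version order (GNU sort -V).
  [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

# On the real hosts these would come from, e.g.:
#   client=$(ssh hv01 rpm -q --qf '%{VERSION}' glusterfs)
#   server=$(ssh ssdstore01 rpm -q --qf '%{VERSION}' glusterfs)
client=6.0   # placeholder version
server=6.5   # placeholder version

if version_le "$client" "$server"; then
  echo "OK: client $client <= server $server"
else
  echo "WARNING: client $client is newer than server $server"
fi
</code>

''sort -V'' (GNU coreutils version sort) handles multi-digit components correctly (e.g. 6.10 sorts after 6.5), which a plain string comparison would get wrong.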
----
  
This is https://bugzilla.redhat.com/show_bug.cgi?id=1361518.
  
As long as the unsynced entries are GFIDs only and they only appear under the arbiter (senta01) server, you can paste **just** the GFIDs into a ''/tmp/gfids'' file and run the following script:
  
<code>
#!/bin/bash
set -ex

VOLNAME=ssdstorage
# Pull the unsynced GFIDs (UUID format: 8-4-4-4-12 hex) out of the heal info.
for id in $(gluster volume heal $VOLNAME info | egrep -o '[0-9a-f]{8}-([0-9a-f]{4}-){3}[0-9a-f]{12}'); do
  file=$(find /gluster/arbiter/.glusterfs -name $id -not -path '/gluster/arbiter/.glusterfs/indices/*' -type f)
  # Only clear the AFR changelog xattrs if both replica counters are zeroed.
  if [ $(getfattr -d -m . -e hex $file | grep trusted.afr.$VOLNAME | grep "0x000000" | wc -l) == 2 ]; then
    echo "deleting xattr for gfid $id"
    for i in $(getfattr -d -m . -e hex $file | grep trusted.afr.$VOLNAME | cut -f1 -d'='); do
      setfattr -x $i $file
    done
  else
services/rhev.txt · Last modified: 2023/02/08 12:40 by akraitman