===== Summary =====
We have a RHEV instance running on [[hardware:infrastructure#hv_0103|hv{01..04}]] as the main hypervisor nodes.  They're listed as **Hosts** in the RHEV Manager.

The RHEV Hosts are currently running version 4.3.5-1.
  
The [[http://mgr01.front.sepia.ceph.com|RHEV Manager]] is a [[https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/html/self-hosted_engine_guide/|Self-Hosted VM]] inside the cluster.  The username for logging in is ''admin'' and the password is our standard root password.
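
If you need scripted access, the Manager also exposes the oVirt REST API (a minimal sketch; ''admin@internal'' and the ''/ovirt-engine/api'' path are the oVirt 4.x defaults, not something confirmed from our configuration):

<code>
# Quick API sanity check against the Manager; replace PASSWORD with the real one
curl -k -u 'admin@internal:PASSWORD' https://mgr01.front.sepia.ceph.com/ovirt-engine/api
</code>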
The Hypervisors (hv{01..04}) and Storage nodes (ssdstore{01..02}) have entries in ''/etc/hosts'' in case of DNS failure.
  
Note: the glusterfs package version on the hypervisors must not exceed the version on the storage nodes (i.e., the client must be older than or equal to the server).
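
A quick way to compare versions across the fleet (a sketch; it assumes the hosts are RPM-based and reachable by their short hostnames over SSH):

<code>
# Compare glusterfs versions on the clients (hypervisors) and servers (storage nodes)
for h in hv{01..04} ssdstore{01..02}; do
    ssh "$h" 'echo "$(hostname): $(rpm -q glusterfs)"'
done
</code>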
----
  
  * ''dgalloway-ubuntu-vm'' - Installs a basic Ubuntu installation using the entire disk and ''ext4'' filesystem.  I couldn't get ''xfs'' working.
  * ''dgalloway-rhel-vm'' - I don't remember if this one works but you can try.

=== A note about installing RHEL/CentOS ===
You need to specify the URL of the installation repo as a kernel parameter.  In the Cobbler PXE menu, when you hit ''[Tab]'', add ''%%ksdevice=link inst.repo=http://172.21.0.11/cobbler/ks_mirror/CentOS-X.X-x86_64%%'', replacing X.X with the appropriate version.

Otherwise you'll end up with an error like ''dracut initqueue timeout'' and the installer dies.
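
For example, the completed boot line might look like the following (illustrative only; the kernel and initrd names, and the CentOS version, come from the Cobbler profile you selected):

<code>
# illustrative boot line; 7.9 is an example version, use the one for your profile
vmlinuz initrd=initrd.img ksdevice=link inst.repo=http://172.21.0.11/cobbler/ks_mirror/CentOS-7.9-x86_64
</code>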
  
==== ovirt-guest-agent ====
==== Emergency RHEV Web UI Access w/o VPN ====
In the event the OpenVPN gateway VM is inaccessible/locked up/whatever, you can open an SSH tunnel (''ssh -D 9999 $YOURUSER@8.43.84.133'') and set your browser's proxy settings to SOCKS5 localhost:9999 to get at the RHEV web UI.  That public IP is on store01 and is a leftover artifact from when store01 ran OpenVPN.
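
To sanity-check the tunnel without touching browser settings, you can point curl at the Manager through the same SOCKS proxy (assumes the tunnel above is already up):

<code>
# Fetch the Manager landing page through the SSH SOCKS tunnel
curl -k --socks5-hostname localhost:9999 https://mgr01.front.sepia.ceph.com/
</code>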

==== GFIDs listed in ''gluster volume heal ssdstorage info'' forever ====
This is https://bugzilla.redhat.com/show_bug.cgi?id=1361518.

As long as the unsynced entries are GFIDs only and they only appear under the arbiter (senta01) server, you can paste **just** the GFIDs into a ''/tmp/gfids'' file and run the following script:

<code>
#!/bin/bash
# Clear stale AFR changelog xattrs for GFIDs stuck in heal info output
set -ex

VOLNAME=ssdstorage
# Pull the full GFIDs (UUIDs, 8-4-4-4-12 hex) out of the heal info output
for id in $(gluster volume heal $VOLNAME info | grep -Eo '[0-9a-f]{8}-([0-9a-f]{4}-){3}[0-9a-f]{12}'); do
  # Find the backing file for this GFID on the arbiter brick
  file=$(find /gluster/arbiter/.glusterfs -name "$id" -not -path '/gluster/arbiter/.glusterfs/indices/*' -type f)
  # Only remove the xattrs if both AFR changelog counters are all zeroes
  if [ "$(getfattr -d -m . -e hex "$file" | grep "trusted.afr.$VOLNAME" | grep -c '0x000000')" == 2 ]; then
    echo "deleting xattr for gfid $id"
    for i in $(getfattr -d -m . -e hex "$file" | grep "trusted.afr.$VOLNAME" | cut -f1 -d'='); do
      setfattr -x "$i" "$file"
    done
  else
    echo "not deleting xattr for gfid $id"
  fi
done
</code>
  
----
I used to have a summary of steps here but it's safer to just follow the [[https://access.redhat.com/documentation/en-us/red_hat_virtualization/|Red Hat docs]].
  

==== VM has paused due to no storage space error ====
We started seeing this issue on VMs like teuthology, and it looks like it's a known bug.  I updated ''/etc/vdsm/vdsm.conf.d/99-local.conf'' and restarted vdsm (''systemctl restart vdsmd'') as described here:

https://access.redhat.com/solutions/130843
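
For reference, the override looks something like this (a sketch only; the exact values in our ''99-local.conf'' aren't recorded here, but ''volume_utilization_percent'' and ''volume_utilization_chunk_mb'' are the standard vdsm ''[irs]'' settings for extending thin-provisioned disks earlier and in larger chunks):

<code>
# Illustrative /etc/vdsm/vdsm.conf.d/99-local.conf override -- values are examples
cat <<'EOF' > /etc/vdsm/vdsm.conf.d/99-local.conf
[irs]
volume_utilization_percent = 25
volume_utilization_chunk_mb = 2048
EOF
systemctl restart vdsmd
</code>
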
==== Growing a VM's virtual disk ====
  - Log into the [[https://mgr01.front.sepia.ceph.com|Web UI]]
==== Onlining Hot-Plugged CPU/RAM ====
https://askubuntu.com/a/764621

<code>
#!/bin/bash
# Based on script by William Lam - http://engineering.ucsb.edu/~duonglt/vmware/

# Bring CPUs online
for CPU_DIR in /sys/devices/system/cpu/cpu[0-9]*
do
    CPU=${CPU_DIR##*/}
    echo "Found cpu: '${CPU_DIR}' ..."
    CPU_STATE_FILE="${CPU_DIR}/online"
    if [ -f "${CPU_STATE_FILE}" ]; then
        if grep -qx 1 "${CPU_STATE_FILE}"; then
            echo -e "\t${CPU} already online"
        else
            echo -e "\t${CPU} is new cpu, onlining cpu ..."
            echo 1 > "${CPU_STATE_FILE}"
        fi
    else
        echo -e "\t${CPU} already configured prior to hot-add"
    fi
done

# Bring all new memory online
for RAM_STATE_FILE in /sys/devices/system/memory/memory[0-9]*/state
do
    echo "Found ram: ${RAM_STATE_FILE} ..."
    if grep -qx offline "${RAM_STATE_FILE}"; then
        echo "Bringing online"
        echo online > "${RAM_STATE_FILE}"
    else
        echo "Already online"
    fi
done
</code>