====== VPSHOSTs ======

VPSHOSTs are [[hardware:mira|Miras]] we took out of the testpool to use as hypervisors for testing VMs.
  
See the [[https://github.com/ceph/ceph-sepia-secrets/blob/master/ansible/inventory/sepia|vps_hosts]] section of the ansible inventory to see which miras are used as VPSHOSTs.  Or query the lock database with ''teuthology-lock %%--%%brief -a | grep VPS''.

VPSHOSTs are managed using the [[https://github.com/ceph/ceph-cm-ansible/tree/master/roles/vmhost|vmhost]] ansible role in ceph-cm-ansible.

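To re-apply the role to all VPSHOSTs at once, something like the following should work (a sketch; the playbook name and host group are taken from the other commands on this page):

<code>
ansible-playbook vmhost.yml --limit vps_hosts
</code>
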
===== Virtual Machines =====
Each VPSHOST is home to <del>8</del> 4 virtual machines.  Each VM is assigned its own JBOD disk along with 4GB RAM and 1 vCPU, **except** the first VM on each VPSHOST, which runs off the root drive.

In June 2016, we marked down all the even-numbered VPSes and changed teuthology to create the VMs with 4GB RAM, since 2GB per VM wasn't meeting our needs.  See http://tracker.ceph.com/issues/15052

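For illustration, the disk-to-VM assignment on a host looks roughly like this (hostnames and device letters are taken from the setup example below and will differ per host):

<code>
# vpm049 -> root drive (no dedicated JBOD disk; /srv/libvirtpool/vpm049 lives on /)
# vpm051 -> /dev/sdb mounted at /srv/libvirtpool/vpm051
# vpm053 -> /dev/sdc mounted at /srv/libvirtpool/vpm053
# vpm055 -> /dev/sdd mounted at /srv/libvirtpool/vpm055
</code>
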
===== Common Tasks =====
==== Updating users so new users can downburst VMs ====
<code>
ansible-playbook common.yml --limit vps_hosts
</code>

==== Setting up a VPSHOST from scratch ====
**NOTE:** This has been adapted to apply to hosts with 4 VPSes and their disks.

If setting up a host as a new or replacement VPSHOST, be sure to update your libvirt config.  See http://docs.ceph.com/teuthology/docs/downburst_vms.html#vps-hosts.

If you install the machine using a ''-stock'' cobbler profile, you'll need to run the common role as well.

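For example (a sketch, following the same ''--limit'' pattern as the vmhost.yml runs below; ''mira###'' is a placeholder for the new host):

<code>
ansible-playbook common.yml --limit="mira###.front.sepia.ceph.com"
</code>
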
<code>
apt-get install xfsprogs vim

# Determine the first and last VPS number by reading the VPSHOST's description in the lock db
# In this example, the VPSes that live on the VPSHOST are vpm049, vpm051, vpm053, and vpm055

for sys in vpm{049,051,053,055}; do mkdir -p /srv/libvirtpool/$sys; done
for disk in sd{b..d}; do mkfs -t xfs -f /dev/$disk; done

## Not strictly needed if the fstab entries below are added and `mount -a` succeeds
# $num should be second VPM
num=51; for disk in sd{b..d}; do mount /dev/$disk /srv/libvirtpool/vpm0$num; let num=num+2; done
# OR if VPM$num is >= 100,
num=101; for disk in sd{b..d}; do mount /dev/$disk /srv/libvirtpool/vpm$num; let num=num+2; done

# $num should be second VPM
num=51; for disk in sd{b..d}; do echo -e "UUID=$(blkid -s UUID -o value /dev/$disk)\t/srv/libvirtpool/vpm0$num\txfs\tdefaults,noatime,nodiratime,nobarrier,inode64,logbufs=8,logbsize=256k,largeio\t0\t0"; let num=num+2; done >> /etc/fstab
# OR if VPM$num is >= 100,
num=101; for disk in sd{b..d}; do echo -e "UUID=$(blkid -s UUID -o value /dev/$disk)\t/srv/libvirtpool/vpm$num\txfs\tdefaults,noatime,nodiratime,nobarrier,inode64,logbufs=8,logbsize=256k,largeio\t0\t0"; let num=num+2; done >> /etc/fstab
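
# Each appended fstab entry should end up looking something like this (UUID is illustrative):
# UUID=00000000-0000-0000-0000-000000000000  /srv/libvirtpool/vpm051  xfs  defaults,noatime,nodiratime,nobarrier,inode64,logbufs=8,logbsize=256k,largeio  0  0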

# Verify fstab, then
mount -a

# On your workstation,
ansible-playbook vmhost.yml --limit="mira###.front.sepia.ceph.com"

# Run this again to fix libvirtpool permissions
ansible-playbook vmhost.yml --limit="mira###.front.sepia.ceph.com"

# Make sure the first VPM is down
tl --update --status down vpm049

# Lock the first VPM on the host to download disk images
tl --lock --os-type ubuntu --os-version 14.04 ubuntu@vpm049
tl --unlock ubuntu@vpm049
tl --lock --os-type ubuntu --os-version 16.04 ubuntu@vpm049
tl --unlock ubuntu@vpm049
tl --lock --os-type centos --os-version 7.3 ubuntu@vpm049
tl --unlock ubuntu@vpm049

# Copy the disk images to the other libvirtpools
for dir in $(ls /srv/libvirtpool/ | tail -n 3); do cp /srv/libvirtpool/$(ls /srv/libvirtpool/ | head -n 1)/{ubuntu*,centos*} /srv/libvirtpool/$dir/; done

for pool in $(ls /srv/libvirtpool/); do virsh pool-refresh $pool; done

# Lock then unlock all the VPSes to verify everything looks good
for sys in vpm{049,051,053,055}; do tl --lock ubuntu@$sys; done
for sys in vpm{049,051,053,055}; do tl --unlock ubuntu@$sys; done
</code>
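
Optionally, sanity-check the storage pools and mounts before handing the VPSes back (a quick sketch; run on the VPSHOST):

<code>
# All of the host's libvirt storage pools should be listed and active
virsh pool-list --all

# Each JBOD-backed pool should be on its own XFS filesystem
df -h /srv/libvirtpool/*
</code>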

==== Replace bad VPSHOST disk ====
<code>
# Mark the VM down
teuthology-lock --update --status down vpm###

# On the VPSHOST,
umount $bad_disk
# Comment out the bad disk in /etc/fstab

# Physically replace the disk

# Create a new filesystem on the new disk
mkfs -t xfs /dev/$new_disk

# Mount the new disk to the VM's libvirtpool mount point
mount /dev/$new_disk /srv/libvirtpool/vpm###

# Obtain the new UUID
lsblk -o name,uuid,mountpoint

# Replace the old UUID in /etc/fstab

# Mark the VM back up
teuthology-lock --update --status up vpm###
</code>
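
The new filesystem starts out empty, so the base disk images will likely need to be copied back and the libvirt pool refreshed, mirroring the setup steps above (a sketch; the source directory and VM name are illustrative):

<code>
# Re-copy the base images from one of the other libvirtpools
cp /srv/libvirtpool/vpm049/{ubuntu*,centos*} /srv/libvirtpool/vpm###/

# Refresh the libvirt storage pool so it picks up the restored images
virsh pool-refresh vpm###
</code>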