Developer playground machines are used for developing Ceph.
All machines have CephFS mounts for accessing the teuthology, scratch, and postfile file systems. Using these machines to view teuthology logs is encouraged, as they are significantly faster than the teuthology VM, which is often memory/CPU starved.
Name | Notes |
---|---|
senta01.front.sepia.ceph.com | General purpose (currently unavailable) |
senta02.front.sepia.ceph.com | General purpose |
senta03.front.sepia.ceph.com | General purpose |
vossi01.front.sepia.ceph.com | General purpose |
vossi04.front.sepia.ceph.com | CephFS Team |
vossi06.front.sepia.ceph.com | RADOS Team |
folio02.front.sepia.ceph.com | General purpose |
folio13.front.sepia.ceph.com | General purpose |
Developer playgrounds should be able to build the main branch. It is okay to run the top-level ./install-deps.sh script from the ceph source tree to update dependencies, but do not run that script from an older release of Ceph as it may break other developers' work. If you need to build an older release, lock a throwaway node like a smithi and build there. Or, use a container to do the build/testing!
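For reference, a typical main-branch build on a playground node looks roughly like the following (paths and the parallelism level are illustrative; pick a lower -j if the machine is busy):

```
cd ~/ceph                  # your clone of ceph.git
git checkout main
./install-deps.sh          # fine on main; see the caveat above for older releases
./do_cmake.sh              # creates and configures the build/ directory
cd build
ninja -j$(nproc)
```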
Using the developer machines to look at teuthology QA artifacts is encouraged. Avoid opening large (1GB+) debug logs directly in a text editor, as this can be RAM intensive and disruptive to other users. Instead, prefer less, or use tail -c xM | $EDITOR - to look at just a portion of the log in an editor.
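For example, to inspect just the tail end of a large job log (the run/job path below is a placeholder):

```
# page through the log without loading it all into memory
less /teuthology/<run>/<job>/teuthology.log

# or load only the last 64 MB into an editor
tail -c 64M /teuthology/<run>/<job>/teuthology.log | vim -
```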
Many of the developer playground nodes have extra disks for testing Ceph. It is okay to use these directly for vstart clusters, but it is usually more flexible to build LVM volumes on top of these devices so that others may use them too (see the LVM setup below).
Please use this MOTD for these playground machines:
```
*******************************************************************************
Welcome! This machine is a Ceph Developer Playground for shared use.

Please see the following wiki document for guidelines and a list of
available machines.

    https://wiki.sepia.ceph.com/doku.php?id=devplayground

Create support tickets at: https://tracker.ceph.com/projects/lab

Thanks!
*******************************************************************************
```
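To apply it, one simple approach is to write the banner to the standard /etc/motd (a minimal sketch; some distributions also assemble the MOTD from /etc/motd.d via pam_motd):

```
# on the playground machine, with the banner above saved as motd.txt
sudo install -m 644 motd.txt /etc/motd
```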
The following script can be run to set up the CephFS mounts on a new developer playground machine:
```
#!/bin/bash

HOST="$1"

function run {
    printf '%s\n' "$*" >&2
    "$@"
}

function mssh {
    run ssh "$HOST" -- "$*"
}

mssh sudo mkdir -p -m 755 /etc/ceph
ssh reesi001.front.sepia.ceph.com 'env CEPH_KEYRING=/etc/ceph/client.sepian.keyring ceph --id sepian config generate-minimal-conf' | mssh sudo tee /etc/ceph/ceph.conf
mssh sudo chmod 644 /etc/ceph/ceph.conf
ssh reesi001.front.sepia.ceph.com 'cat /etc/ceph/client.sepian.keyring' | mssh sudo tee /etc/ceph/client.sepian.keyring
mssh sudo chmod 644 /etc/ceph/client.sepian.keyring
ssh reesi001.front.sepia.ceph.com 'ceph-authtool /etc/ceph/client.sepian.keyring -n client.sepian -p' | mssh sudo tee /etc/ceph/client.sepian.secret
mssh sudo chmod 600 /etc/ceph/client.sepian.secret

function genmount {
    local secret=$(mssh sudo cat /etc/ceph/client.sepian.secret)
    # create mountpoint
    mssh sudo mkdir -p -- "$2"
    # make the mountpoint directory (shadowed) unwriteable to prevent accidental modification
    mssh sudo chmod 000 -- "$2"
    # set it immutable to enforce that even for root
    mssh sudo chattr +i -- "$2"
    printf '172.21.2.201,172.21.2.202,172.21.2.203:%s\t%s\tceph\tname=sepian,secret=%s,mds_namespace=%s,_netdev\t0\t2\n' "$1" "$2" "$secret" "$3" | mssh sudo tee -a /etc/fstab
}

genmount /teuthology-archive /teuthology teuthology
genmount / /scratch scratch
genmount / /postfile postfile

mssh sudo systemctl daemon-reload
mssh sudo mount /teuthology
mssh sudo mount /scratch
mssh sudo mount /postfile
mssh sudo ln -s /teuthology /a
```
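The script takes the new machine's hostname as its only argument and drives everything over ssh, so it can be run from any host with ssh access to both the new machine and reesi001. For example (the script file name is just a placeholder):

```
./setup-devplayground-mounts.sh vossi01.front.sepia.ceph.com
```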
Configure the dev playground node to schedule jobs:
```
sudo tee /etc/teuthology.yaml <<EOF
default_machine_type: smithi
queue_host: teuthology.front.sepia.ceph.com
queue_port: 11300
active_machine_types:
- smithi
EOF
```
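With that in place, runs can be scheduled directly from the playground node, assuming a teuthology checkout/virtualenv is on your PATH. The suite, branch, and priority below are only illustrative:

```
teuthology-suite --machine-type smithi --suite fs:volumes --ceph main --priority 100 --dry-run
```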
Note: killing a run (generally) still needs to be done on the teuthology VM, because teuthology-kill must kill the test processes running there.
When setting up a fresh Developer Playground machine, configure an LVM VolumeGroup for use by users. Volumes can then be provisioned for build directories, OSD block devices, or anything else needed.
Note: no redundancy (i.e. RAID) is configured below. If a disk is lost, all volumes will be affected.
```
sudo pvcreate /dev/$DISK
```
Do this for every disk. This is an ad-hoc process because all nodes are different. Also, some disks may have been used in the past, so they will need to be wiped first:
```
sudo wipefs -a /dev/$DISK
```
Once all disks are added as physical volumes, it's then possible to add them to a VolumeGroup:
```
sudo vgcreate DevPlayground $DISKS
```
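Putting the steps together, a typical sequence on a node with spare disks sdb, sdc, and sdd (hypothetical device names) would be:

```
for DISK in sdb sdc sdd; do
    sudo wipefs -a /dev/$DISK    # only needed if the disk was used before
    sudo pvcreate /dev/$DISK
done
sudo vgcreate DevPlayground /dev/sdb /dev/sdc /dev/sdd
sudo vgs DevPlayground           # sanity check: PV count and total size
```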
Finally make a volume for yourself:
```
sudo lvcreate -L 256G DevPlayground -n $(whoami)-build
sudo mkfs.xfs /dev/DevPlayground/$(whoami)-build
mkdir $HOME/build
chmod 000 $HOME/build
sudo chattr +i $HOME/build
echo "/dev/DevPlayground/$(whoami)-build $HOME/build xfs defaults 1 1" | sudo tee -a /etc/fstab
sudo systemctl daemon-reload
sudo mount $HOME/build
```
and some OSD block devices:
```
for i in `seq 0 8`; do sudo lvcreate -L 16G DevPlayground -n $(whoami)-osd.$i ; done
```
Make those OSDs owned by you:
```
printf 'ENV{DM_VG_NAME}=="DevPlayground", ENV{DM_LV_NAME}=="%s-*", OWNER="%s", GROUP="users"\n' $(whoami) $(whoami) | sudo tee -a /etc/udev/rules.d/99-lvmowner.rules
sudo udevadm control --reload-rules
sudo udevadm trigger
```
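After reloading the rules, confirm the devices are now owned by you (-L dereferences the /dev/DevPlayground symlinks to the underlying device-mapper nodes):

```
ls -lL /dev/DevPlayground/$(whoami)-osd.*
```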
Then you can use those devices with vstart.sh:
```
wipefs -a /dev/DevPlayground/$(whoami)-osd.*
shred -v -n 0 -z -s 16M /dev/DevPlayground/$(whoami)-osd.*
env OSD=8 ~/ceph/src/vstart.sh \
    --bluestore-devs $(echo /dev/DevPlayground/$(whoami)-osd.* | tr ' ' ',')
```
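When you are finished, the vstart cluster can be torn down with the companion stop script from the same source tree, run from the same directory you started it in:

```
~/ceph/src/stop.sh
```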
Feel free to make any other volumes that you require.