====== Developer Playgrounds ======
Developer playground machines are used for developing Ceph.
All machines have [[services:cephfs]] mounts for accessing the teuthology, scratch, and postfile file systems. Using these machines to view teuthology logs is encouraged; they are significantly faster than the [[services:teuthology|teuthology VM]], which is often memory/CPU starved.
===== Machines =====
^ Name ^ Notes ^
| [[hardware:senta|senta01.front.sepia.ceph.com]] | General purpose (currently unavailable) |
| [[hardware:senta|senta02.front.sepia.ceph.com]] | General purpose |
| [[hardware:senta|senta03.front.sepia.ceph.com]] | General purpose |
| [[hardware:vossi|vossi01.front.sepia.ceph.com]] | General purpose |
| [[hardware:vossi|vossi04.front.sepia.ceph.com]] | CephFS Team |
| [[hardware:vossi|vossi06.front.sepia.ceph.com]] | RADOS Team |
| [[hardware:folio|folio02.front.sepia.ceph.com]] | General purpose |
| [[hardware:folio|folio13.front.sepia.ceph.com]] | General purpose |
==== Playing Nice ====
Developer playgrounds should be able to build the ''main'' branch. It is okay to run the top-level ''./install-deps.sh'' script from the Ceph source tree to update dependencies, but do not run it from an older release of Ceph, as it may break other developers' work. If you need to build an older release, lock a throwaway node like [[hardware:smithi]] and build there. Or, use a container to do the build/testing!
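For reference, a minimal sketch of a typical ''main'' build (paths and cmake options are illustrative):
<code bash>
cd ~/ceph                  # your clone of ceph.git, checked out on main
./install-deps.sh          # fine on main; do not run from older releases
./do_cmake.sh -DCMAKE_BUILD_TYPE=RelWithDebInfo
cd build
ninja -j$(nproc)           # consider a lower -j on a busy shared machine
</code>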
Using the developer machines to look at teuthology QA artifacts is encouraged. Try to avoid opening large (1GB+) debug logs directly in a text editor, as this is RAM-intensive and can be disruptive to others. Instead, prefer ''less'' or use ''tail -c xM | $EDITOR -'' to look at a portion of the log in a text editor.
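For example (the run/job path shown is hypothetical):
<code bash>
# Page through a large log without loading it all into RAM:
less /teuthology/<run-name>/<job-id>/teuthology.log

# Or hand only the last 100M of the log to an editor:
tail -c 100M /teuthology/<run-name>/<job-id>/teuthology.log | vim -
</code>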
Many of the developer playground nodes have extra disks for testing Ceph. It's okay to use these for ''vstart'' clusters, but it may be more flexible to build LVM volumes on top of these devices so others may use them too (see //Configuring LVM volumes using spare disks// below).
==== MOTD ====
Please use this MOTD for these playground machines:
<code>
*******************************************************************************

Welcome!

This machine is a Ceph Developer Playground for shared use. Please
see the following wiki document for guidelines and a list of
available machines.

https://wiki.sepia.ceph.com/doku.php?id=devplayground

Create support tickets at:

https://tracker.ceph.com/projects/lab

Thanks!

*******************************************************************************
</code>
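To install it (assuming the standard ''/etc/motd'' location):
<code bash>
sudoedit /etc/motd   # paste the banner above
</code>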
==== Configuring CephFS Mounts ====
The following Ansible playbook can be run to set up CephFS mounts on a new developer playground machine:
<code yaml>
---
- name: Configure Ceph Client and Mounts
  hosts: all
  become: true
  vars:
    admin_node: "reesi003.front.sepia.ceph.com"
    ceph_conf_path: "/etc/ceph/ceph.conf"
    keyring_path: "/etc/ceph/client.sepian.keyring"
    secret_path: "/etc/ceph/client.sepian.secret"
    mounts:
      - { path: "/teuthology", fstype: "ceph", src: "/teuthology-archive", mds_namespace: "teuthology", opts: "_netdev,ro" }
      - { path: "/scratch", fstype: "ceph", src: "/", mds_namespace: "scratch", opts: "_netdev" }
      - { path: "/postfile", fstype: "ceph", src: "/", mds_namespace: "postfile", opts: "_netdev,ro" }

  tasks:
    - name: "1. Gather Ceph configuration using raw commands"
      delegate_to: "{{ admin_node }}"
      block:
        - name: "▶️ Generate minimal ceph.conf (raw)"
          ansible.builtin.raw: >
            env CEPH_KEYRING=/etc/ceph/client.sepian.keyring ceph --id sepian config generate-minimal-conf
          register: ceph_conf_content
          changed_when: false

        - name: "▶️ Fetch Ceph keyring (raw)"
          ansible.builtin.raw: >
            cat {{ keyring_path }}
          register: keyring_content
          changed_when: false

        - name: "▶️ Generate client secret (raw)"
          ansible.builtin.raw: >
            ceph-authtool {{ keyring_path }} -n client.sepian -p
          register: secret_content
          changed_when: false

        - name: "▶️ Get Ceph monitor list (raw)"
          ansible.builtin.raw: >
            env CEPH_KEYRING=/etc/ceph/client.sepian.keyring ceph --id sepian mon dump --format json 2>/dev/null | jq -r '[.mons[] | .public_addrs.addrvec[] | select(.type=="v1").addr] | join(",")'
          register: mon_hosts
          changed_when: false

    - name: "2. Configure Ceph client files"
      block:
        - name: "▶️ Ensure /etc/ceph directory exists"
          ansible.builtin.file:
            path: "/etc/ceph"
            state: directory
            mode: '0755'

        - name: "▶️ Deploy ceph.conf"
          ansible.builtin.copy:
            content: "{{ ceph_conf_content.stdout }}"
            dest: "{{ ceph_conf_path }}"
            mode: '0644'

        - name: "▶️ Deploy client keyring"
          ansible.builtin.copy:
            content: "{{ keyring_content.stdout }}"
            dest: "{{ keyring_path }}"
            mode: '0644'

        - name: "▶️ Deploy client secret file (for other tools)"
          ansible.builtin.copy:
            content: "{{ secret_content.stdout }}"
            dest: "{{ secret_path }}"
            mode: '0600'

    - name: "3. Set up CephFS mounts"
      block:
        - name: "▶️ Unmount filesystems if they currently exist"
          ansible.posix.mount:
            path: "{{ item.path }}"
            state: unmounted
          loop: "{{ mounts }}"

        - name: "▶️ Create mount point directories"
          ansible.builtin.file:
            path: "{{ item.path }}"
            state: directory
            mode: '000'
          loop: "{{ mounts }}"

        - name: "▶️ Set immutable attribute on mount points"
          ansible.builtin.file:
            path: "{{ item.path }}"
            attr: +i
          register: immutable_file
          changed_when: "'i' not in immutable_file.diff.before.attributes"
          loop: "{{ mounts }}"

        - name: "▶️ Configure CephFS mounts in /etc/fstab"
          ansible.posix.mount:
            path: "{{ item.path }}"
            src: "{{ mon_hosts.stdout | trim }}:{{ item.src }}"
            fstype: "{{ item.fstype }}"
            opts: "name=sepian,secret={{ secret_content.stdout | trim }},mds_namespace={{ item.mds_namespace }},{{ item.opts }}"
            state: mounted
            dump: 2
            passno: 2
          loop: "{{ mounts }}"
          notify: Reload Systemd

        - name: "▶️ Create symlink for /a -> /teuthology"
          ansible.builtin.file:
            src: "/teuthology"
            dest: "/a"
            state: link
            force: true

    - name: "Force handlers to run before mounting"
      ansible.builtin.meta: flush_handlers

  handlers:
    - name: Reload Systemd
      listen: Reload Systemd
      ansible.builtin.systemd:
        daemon_reload: true
</code>
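The playbook uses the ''ansible.posix'' collection, so install that first if needed, then run it against the new host (the playbook filename here is hypothetical):
<code bash>
ansible-galaxy collection install ansible.posix
ansible-playbook -i newhost.front.sepia.ceph.com, cephfs-mounts.yml
</code>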
==== Teuthology scheduling ====
Configure the dev playground node to schedule jobs:
<code bash>
sudo tee /etc/teuthology.yaml <<EOF
...
EOF
</code>
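Once ''/etc/teuthology.yaml'' is in place, suites can be scheduled directly from the playground node; for example (the suite, branch, and priority values are illustrative):
<code bash>
teuthology-suite --suite fs:volumes --ceph main --machine-type smithi --priority 100
</code>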
Note: killing a run (generally) still needs to be done on the [[services:teuthology|teuthology VM]], because ''teuthology-kill'' must kill the test processes running there.
==== Configuring LVM volumes using spare disks ====
When setting up a fresh Developer Playground machine, configure an LVM VolumeGroup for use by all users. Volumes can then be provisioned for a build directory, OSD block devices, or anything else needed.
Note: no redundancy (i.e. RAID) is configured below. If a disk is lost, all volumes will be affected.
<code bash>
sudo pvcreate /dev/$DISK
</code>
Do this for every disk. This is an ad-hoc process because all nodes are different. Also, some disks may have been used in the past, so they will need to be wiped first:
<code bash>
sudo wipefs -a /dev/$DISK
</code>
Once all disks are added as physical volumes, add them to a VolumeGroup:
<code bash>
sudo vgcreate DevPlayground $DISKS
</code>
Finally, make a volume for yourself:
<code bash>
sudo lvcreate -L 256G DevPlayground -n $(whoami)-build
sudo mkfs.xfs /dev/DevPlayground/$(whoami)-build
mkdir $HOME/build
chmod 000 $HOME/build
sudo chattr +i $HOME/build
echo "/dev/DevPlayground/$(whoami)-build $HOME/build xfs defaults 1 1" | sudo tee -a /etc/fstab
sudo systemctl daemon-reload
sudo mount $HOME/build
</code>
and some OSD block devices:
<code bash>
for i in $(seq 0 8); do sudo lvcreate -L 16G DevPlayground -n $(whoami)-osd.$i ; done
</code>
Make those OSDs owned by you:
<code bash>
printf 'ENV{DM_VG_NAME}=="DevPlayground", ENV{DM_LV_NAME}=="%s-*", OWNER="%s", GROUP="users"\n' $(whoami) $(whoami) | sudo tee -a /etc/udev/rules.d/99-lvmowner.rules
sudo udevadm control --reload-rules
sudo udevadm trigger
</code>
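The ''/dev/DevPlayground/*'' entries are symlinks, so verify ownership on the underlying device-mapper nodes, e.g.:
<code bash>
stat -c '%U %G %n' $(readlink -f /dev/DevPlayground/$(whoami)-osd.*)
</code>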
Then you can use those devices with ''vstart.sh'':
<code bash>
wipefs -a /dev/DevPlayground/$(whoami)-osd.*
shred -v -n 0 -z -s 16M /dev/DevPlayground/$(whoami)-osd.*
env OSD=8 ~/ceph/src/vstart.sh \
    --bluestore-devs $(echo /dev/DevPlayground/$(whoami)-osd.* | tr ' ' ',')
</code>
Feel free to make any other volumes that you require.