====== Developer Playgrounds ======
Developer playground machines are used for developing Ceph.

All machines have [[services:cephfs]] mounts for accessing the teuthology, scratch, and postfile file systems. It is encouraged to use these machines to view teuthology logs as they will be significantly faster than the [[services:teuthology|teuthology VM]], which is often memory/CPU starved.
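
Once logged in, the shared file systems show up as ordinary mounts (the paths below match the mounts configured by the playbook later on this page):

<code>
df -h /teuthology /scratch
ls /a    # /a is a symlink to /teuthology
</code>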
===== Machines =====
^ Name ^ Notes ^
| [[hardware:senta|senta01.front.sepia.ceph.com]] | General purpose (currently unavailable) |
| [[hardware:senta|senta02.front.sepia.ceph.com]] | General purpose |
| [[hardware:senta|senta03.front.sepia.ceph.com]] | General purpose |
| [[hardware:vossi|vossi01.front.sepia.ceph.com]] | General purpose |
| [[hardware:vossi|vossi04.front.sepia.ceph.com]] | CephFS Team |
| [[hardware:vossi|vossi06.front.sepia.ceph.com]] | RADOS Team |
| [[hardware:folio|folio02.front.sepia.ceph.com]] | General purpose |
| [[hardware:folio|folio13.front.sepia.ceph.com]] | General purpose |
| + | |||
| + | ==== Playing Nice ==== | ||
| + | |||
| + | Developer playgrounds should be able to build the ''main'' branch. It is okay to use ''./install-deps.sh'' top-level script from the ceph source tree to update dependencies. Do not run that script from an older release of Ceph as it may break other developer's work. If you need to build an older release, lock a throwaway node like [[hardware:smithi]] and build there. Or, use a container to do the build/testing! | ||
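
For example, a rough sketch of building an older release inside a container instead of on the shared host (image, paths, and build commands are purely illustrative):

<code>
# Run install-deps.sh inside a throwaway container so it cannot touch the host's packages
podman run --rm -it -v $HOME/ceph:/ceph quay.io/centos/centos:stream9 bash
# inside the container:
cd /ceph
./install-deps.sh
./do_cmake.sh && cd build && ninja   # or make, depending on the release
</code>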
| + | |||
| + | Using the developer machines to look at teuthology QA artifacts is encouraged. Try to avoid using a text editor to look at large (1GB+) debug logs as this can be RAM intensive/disruptive. Instead, prefer ''less'' or use ''tail -c xM | $EDITOR -'' to look at portions of the log in a text editor. | ||
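
For example (the run/job path here is just a placeholder):

<code>
# page through a large log without pulling it all into RAM
less /teuthology/$RUN_NAME/$JOB_ID/teuthology.log

# or hand only the last 64M of it to an editor
tail -c 64M /teuthology/$RUN_NAME/$JOB_ID/teuthology.log | vim -
</code>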
| + | |||
| + | Many of the developer playground nodes have extra disks for testing Ceph. It's okay to use these for ''vstart'' clusters but it may be more flexible to build LVM volumes on top of these devices so others may use them too. | ||
| + | |||
| + | ==== MOTD ==== | ||
| + | |||
| + | Please use this MOTD for these playground machines: | ||
| + | |||
<code>
*******************************************************************************

Welcome!

This machine is a Ceph Developer Playground for shared use. Please
see the following wiki document for guidelines and a list of
available machines.

https://wiki.sepia.ceph.com/doku.php?id=devplayground

Create support tickets at:

https://tracker.ceph.com/projects/lab

Thanks!

*******************************************************************************
</code>
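
How the MOTD gets installed varies by distribution; on nodes that read a static ''/etc/motd'' (an assumption, not something this page mandates) it is enough to paste the banner above into that file:

<code>
sudoedit /etc/motd   # paste the banner above and save
</code>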
| + | |||
| + | |||
| + | ==== Configuring CephFS Mounts ==== | ||
| + | |||
| + | The following ansible playbook can be run to setup CephFS mounts on a new developer playground machines: | ||
| + | |||
<code>
---
- name: Configure Ceph Client and Mounts
  hosts: all
  become: true
  vars:
    admin_node: "doli01.front.sepia.ceph.com"
    ceph_conf_path: "/etc/ceph/ceph.conf"
    keyring_path: "/etc/ceph/client.sepian.keyring"
    client_keyring_path: "/etc/ceph/keyring"
    secret_path: "/etc/ceph/client.sepian.secret"
    mounts:
      - { path: "/teuthology", fstype: "ceph", src: "/teuthology-archive", mds_namespace: "teuthology", opts: "_netdev,ro" }
      - { path: "/scratch", fstype: "ceph", src: "/", mds_namespace: "scratch", opts: "_netdev" }

  tasks:
    - name: "1. Gather Ceph configuration using raw commands"
      delegate_to: "{{ admin_node }}"
      block:
        - name: "▶️ Get LRC fsid"
          ansible.builtin.raw: >
            env CEPH_KEYRING={{ keyring_path }} ceph --id sepian fsid
          register: ceph_fsid
          changed_when: false

        - name: "▶️ Generate minimal ceph.conf (raw)"
          ansible.builtin.raw: >
            env CEPH_KEYRING={{ keyring_path }} ceph --id sepian config generate-minimal-conf
          register: ceph_conf_content
          changed_when: false

        - name: "▶️ Fetch Ceph keyring (raw)"
          ansible.builtin.raw: >
            cat {{ keyring_path }}
          register: keyring_content
          changed_when: false

        - name: "▶️ Generate client secret (raw)"
          ansible.builtin.raw: >
            ceph-authtool {{ keyring_path }} -n client.sepian -p
          register: secret_content
          changed_when: false

        - name: "▶️ Get Ceph monitor list (raw)"
          ansible.builtin.raw: >
            env CEPH_KEYRING={{ keyring_path }} ceph --id sepian mon dump --format json 2>/dev/null | jq -r '[.mons[] | .public_addrs.addrvec[] | select(.type=="v1").addr] | join(",")'
          register: mon_hosts
          changed_when: false

    - name: "2. Configure Ceph client files"
      block:
        - name: "▶️ Ensure /etc/ceph directory exists"
          ansible.builtin.file:
            path: "/etc/ceph"
            state: directory
            mode: '0755'

        - name: "▶️ Deploy ceph.conf"
          ansible.builtin.copy:
            content: "{{ ceph_conf_content.stdout }}"
            dest: "{{ ceph_conf_path }}"
            mode: '0644'

        - name: "▶️ Create temporary file for keyring import"
          ansible.builtin.tempfile:
            state: file
            suffix: .keyring
          register: tmp_keyring

        - name: "▶️ Write keyring content to temporary file"
          ansible.builtin.copy:
            content: "{{ keyring_content.stdout }}"
            dest: "{{ tmp_keyring.path }}"
            mode: '0600'

        - name: "▶️ Deploy client keyring"
          ansible.builtin.raw: >
            ceph-authtool {{ client_keyring_path }} --create-keyring --import-keyring {{ tmp_keyring.path }}

        - name: "▶️ Clean up temporary keyring file"
          ansible.builtin.file:
            path: "{{ tmp_keyring.path }}"
            state: absent

        - name: "▶️ Deploy client secret file (for other tools)"
          ansible.builtin.copy:
            content: "{{ secret_content.stdout }}"
            dest: "{{ secret_path }}"
            mode: '0600'

    - name: "3. Set up CephFS mounts"
      block:
        - name: "▶️ Install ceph-common on Ubuntu/Debian"
          ansible.builtin.apt:
            name: ceph-common
            state: present
            update_cache: yes
          when: ansible_facts['os_family'] == "Debian"

        - name: "▶️ Install Ceph Squid repo on RHEL derivatives"
          ansible.builtin.dnf:
            name: centos-release-ceph-squid.noarch
            state: present
          when: ansible_facts['os_family'] == "RedHat"

        - name: "▶️ Install ceph-common on RHEL derivatives"
          ansible.builtin.dnf:
            name: ceph-common
            state: present
          when: ansible_facts['os_family'] == "RedHat"

        - name: "▶️ Unmount filesystems if they currently exist"
          ansible.posix.mount:
            path: "{{ item.path }}"
            state: unmounted
          loop: "{{ mounts }}"

        - name: "▶️ Create mount point directories"
          ansible.builtin.file:
            path: "{{ item.path }}"
            state: directory
            mode: '000'
          loop: "{{ mounts }}"

        - name: "▶️ Set immutable attribute on mount points"
          ansible.builtin.file:
            path: "{{ item.path }}"
            attr: +i
          loop: "{{ mounts }}"

        - name: "▶️ Configure CephFS mounts in /etc/fstab"
          ansible.posix.mount:
            path: "{{ item.path }}"
            src: "sepian@{{ ceph_fsid.stdout | trim }}.{{ item.mds_namespace }}={{ item.src }}"
            fstype: "{{ item.fstype }}"
            opts: "{{ item.opts }}"
            state: mounted
            dump: 0
            passno: 0
          loop: "{{ mounts }}"
          notify: Reload Systemd

        - name: "▶️ Create symlink for /a -> /teuthology"
          ansible.builtin.file:
            src: "/teuthology"
            dest: "/a"
            state: link
            force: true

    - name: "Force handlers to run before mounting"
      ansible.builtin.meta: flush_handlers

  handlers:
    - name: Reload Systemd
      listen: Reload Systemd
      ansible.builtin.systemd:
        daemon_reload: true
</code>
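
A possible way to run it, assuming the playbook is saved as ''cephfs-mounts.yml'' (the file name and inventory entry are illustrative; you need SSH access to both the new node and the admin node):

<code>
ansible-playbook -i 'newnode.front.sepia.ceph.com,' cephfs-mounts.yml
</code>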
==== Teuthology scheduling ====

Configure the dev playground node to schedule jobs:

<code>
sudo tee /etc/teuthology.yaml <<EOF
default_machine_type: smithi
queue_host: teuthology.front.sepia.ceph.com
queue_port: 11300
active_machine_types:
- smithi
EOF
</code>
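
With that in place, runs can be scheduled directly from the playground node, for example (suite, branch, and priority are only examples; check ''teuthology-suite --help'' for the full set of options):

<code>
teuthology-suite -m smithi -s fs:volumes -c main -p 100 --dry-run
</code>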
| + | |||
| + | Note: killing a run is (generally) still necessary on [[services:teuthology|teuthology VM]]. This is because teuthology-kill requires killing the test processes running there. | ||
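
For reference, the kill is run from the teuthology VM with something of roughly this shape (the run name is a placeholder; see ''teuthology-kill --help''):

<code>
teuthology-kill -r $RUN_NAME
</code>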
==== Configuring LVM volumes using spare disks ====

When setting up a fresh Developer Playground machine, configure an LVM VolumeGroup for use by users. Volumes can be provisioned for a build directory, OSD block devices, or anything else needed.

Note: no redundancy (e.g. RAID) is configured below. If a disk is lost, all volumes will be affected.
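
A quick way to see which disks are still unused before creating physical volumes (device names differ from node to node):

<code>
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
sudo pvs    # disks already in LVM
</code>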
| + | |||
| + | <code> | ||
| + | sudo pvcreate /dev/$DISK | ||
| + | </code> | ||
| + | |||
| + | Do this for every disk. This is an ad-hoc process because all nodes are different. Also, some disks may have been used in the past so they will need wiped first: | ||
| + | |||
| + | <code> | ||
| + | sudo wipefs -a /dev/$DISK | ||
| + | </code> | ||
| + | |||
| + | Once all disks are added as physical volumes, it's then possible to add to a VolumeGroup: | ||
| + | |||
| + | <code> | ||
| + | sudo vgcreate DevPlayground $DISKS | ||
| + | </code> | ||
| + | |||
| + | Finally make a volume for yourself: | ||
| + | |||
| + | <code> | ||
| + | sudo lvcreate -L 256G DevPlayground -n $(whoami)-build | ||
| + | sudo mkfs.xfs /dev/DevPlayground/$(whoami)-build | ||
| + | mkdir $HOME/build | ||
| + | chmod 000 $HOME/build | ||
| + | sudo chattr +i $HOME/build | ||
| + | echo "/dev/DevPlayground/$(whoami)-build $HOME/build xfs defaults 1 1" | sudo tee -a /etc/fstab | ||
| + | sudo systemctl daemon-reload | ||
| + | sudo mount $HOME/build | ||
| + | </code> | ||
| + | |||
| + | and some OSD block devices: | ||
| + | |||
| + | <code> | ||
| + | for i in `seq 0 8`; do sudo lvcreate -L 16G DevPlayground -n $(whoami)-osd.$i ; done | ||
| + | </code> | ||
| + | |||
| + | Make those OSDs owned by you: | ||
| + | |||
| + | <code> | ||
| + | printf 'ENV{DM_VG_NAME}=="DevPlayground" ENV{DM_LV_NAME}=="%s-*" OWNER="%s" GROUP="users"\n' $(whoami) $(whoami) | sudo tee -a /etc/udev/rules.d/99-lvmowner.rules | ||
| + | sudo udevadm control --reload-rules | ||
| + | sudo udevadm trigger | ||
| + | </code> | ||
| + | |||
| + | Then you can use those devices with vstart.sh: | ||
| + | <code> | ||
| + | wipefs -a /dev/DevPlayground/$(whoami)-osd.* | ||
| + | shred -v -n 0 -z -s 16M /dev/DevPlayground/$(whoami)-osd.* | ||
| + | env OSD=8 ~/ceph/src/vstart.sh \ | ||
| + | --bluestore-devs $(echo /dev/DevPlayground/$(whoami)-osd.* | tr ' ' ',') | ||
| + | </code> | ||
| + | Feel free to make any other volumes that you require. | ||
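
If you no longer need a volume, it can be handed back to the VolumeGroup (double-check the volume name, and remove any matching ''/etc/fstab'' entry first):

<code>
sudo umount $HOME/build
sudo lvremove /dev/DevPlayground/$(whoami)-build
</code>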