There are a few CephFS file systems available in the sepia lab. These reside on the Long Running Cluster.
Access to this Ceph cluster is available from any machine on the sepia VPN. You can and should mount these file systems on your laptop or development machine. This reduces load on shared machines, like teuthology, and usually provides faster access, depending on your client and network. However, for everyday use, you may prefer to access CephFS from a machine co-located with the other lab infrastructure, such as a Developer Playground machine, where OSD latency/bandwidth will be optimal.
In the scripts and commands below, we ssh to the reesi001.front.sepia.ceph.com machine. The ssh key you shared when you were granted sepia VPN access should allow you to log in to this machine.
On your development machine or laptop, get access to CephFS with the LRC ceph.conf and client.sepian credential:

    sudo mkdir -p -m 755 /etc/ceph
    ssh reesi001.front.sepia.ceph.com 'env CEPH_KEYRING=/etc/ceph/client.sepian.keyring ceph --id sepian config generate-minimal-conf' | sudo tee /etc/ceph/ceph.conf
    sudo chmod 644 /etc/ceph/ceph.conf
    ssh reesi001.front.sepia.ceph.com 'cat /etc/ceph/client.sepian.keyring' | sudo tee /etc/ceph/client.sepian.keyring
    sudo chmod 644 /etc/ceph/client.sepian.keyring
    ssh reesi001.front.sepia.ceph.com 'ceph-authtool /etc/ceph/client.sepian.keyring -n client.sepian -p' | sudo tee /etc/ceph/client.sepian.secret
    sudo chmod 600 /etc/ceph/client.sepian.secret
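As an aside on what the last ssh command does: ceph-authtool -p simply prints the bare secret (the "key" value) from the keyring, which is the format the kernel mount helper expects. A minimal sketch of the same extraction with awk, run against a sample keyring whose key (AQBexampleSecretOnly==) is made up for illustration:

```shell
# Illustrative only: a keyring is an INI-style file; "ceph-authtool -p" prints
# the value of its "key" line. Sample keyring with a made-up secret:
cat > /tmp/sample.keyring <<'EOF'
[client.sepian]
    key = AQBexampleSecretOnly==
EOF
# extract the secret the same way ceph-authtool -p would
awk -F' = ' '$1 ~ /key/ {print $2}' /tmp/sample.keyring
# prints AQBexampleSecretOnly==
```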
The client.sepian credential is suitable for all Ceph developers to access appropriate LRC resources. In particular, it gives access to the teuthology, scratch, and postfile CephFS file systems.
Generate mounts for your /etc/fstab using the script below. Copy it locally, mark it executable, and run:

    #!/bin/bash

    function genmount {
        local secret=$(sudo cat /etc/ceph/client.sepian.secret)
        # create mountpoint
        sudo mkdir -p -- "$2"
        # make the mountpoint directory (shadowed) unwriteable to prevent accidental modification
        sudo chmod 000 -- "$2"
        # set it immutable to enforce that even for root
        sudo chattr +i -- "$2"
        printf '172.21.2.201,172.21.2.202,172.21.2.203:%s\t%s\tceph\tname=sepian,secret=%s,mds_namespace=%s,_netdev\t0\t2\n' "$1" "$2" "$secret" "$3"
    }

    genmount /teuthology-archive /teuthology teuthology | sudo tee -a /etc/fstab
    genmount / /scratch scratch | sudo tee -a /etc/fstab
    genmount / /postfile postfile | sudo tee -a /etc/fstab
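If you want to preview the entry genmount will append before touching /etc/fstab, the printf can be run standalone with a placeholder secret. A sketch that needs no sudo (PLACEHOLDER below stands in for the real key from /etc/ceph/client.sepian.secret):

```shell
# Sudo-free preview of the fstab line genmount emits; PLACEHOLDER stands in
# for the real secret read from /etc/ceph/client.sepian.secret.
preview() {
    printf '172.21.2.201,172.21.2.202,172.21.2.203:%s\t%s\tceph\tname=sepian,secret=%s,mds_namespace=%s,_netdev\t0\t2\n' \
        "$1" "$2" PLACEHOLDER "$3"
}
preview /teuthology-archive /teuthology teuthology
```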
The fstab changes will cause these file systems to be mounted at boot. After adding the entries for the first time, mount them manually:
    sudo mount /teuthology
    sudo mount /scratch
    sudo mount /postfile
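To confirm the mounts took effect, you can filter the mount table for ceph entries. A sketch shown against a sample file so it runs anywhere; on a real host, read /proc/mounts instead (or use findmnt -t ceph):

```shell
# Filter a mount table for ceph entries and print their mountpoints.
# Shown against a sample; on a real host, read /proc/mounts.
cat > /tmp/mounts.sample <<'EOF'
172.21.2.201:/teuthology-archive /teuthology ceph rw,relatime 0 0
tmpfs /run tmpfs rw,nosuid 0 0
172.21.2.201:/ /scratch ceph rw,relatime 0 0
EOF
awk '$3 == "ceph" {print $2}' /tmp/mounts.sample
# prints /teuthology and /scratch
```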
The majority of CephFS use is directed at the "teuthology" file system, which hosts QA artifacts for analysis. Each test run has a directory in /teuthology-archive. The /etc/fstab file (generated above) has this directory mounted locally at /teuthology:

    $ ls /teuthology/ | head -n 2
    abhi-2019-12-04_08:55:20-rgw-wip-abhi-testing-2019-12-03-1859-distro-basic-smithi
    abhi-2019-12-04_17:41:25-rgw-wip-abhi-testing-2019-12-04-1433-distro-basic-smithi
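Run directories follow a <user>-<date>_<time>-<suite>-... naming scheme, so ordinary shell tools suffice to filter runs by scheduler or date. A sketch using a stand-in directory under /tmp with the two run names from the listing above (substitute /teuthology on a real mount):

```shell
# Stand-in for /teuthology under /tmp; the directory names are the real run
# names shown in the listing above.
demo=/tmp/teuthology-demo
mkdir -p "$demo/abhi-2019-12-04_08:55:20-rgw-wip-abhi-testing-2019-12-03-1859-distro-basic-smithi"
mkdir -p "$demo/abhi-2019-12-04_17:41:25-rgw-wip-abhi-testing-2019-12-04-1433-distro-basic-smithi"
# list runs scheduled by "abhi" on 2019-12-04
ls "$demo" | grep '^abhi-2019-12-04'
```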
It's also common for test artifact paths shared among developers to include a /a/ prefix, such as:

    /a/teuthology-2023-06-10_14\:23\:08-upgrade\:pacific-x-reef-distro-default-smithi/7301152/teuthology.log
You can generate this helper link using:
    sudo ln -s /teuthology /a
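A sketch of how the resulting symlink resolves, using stand-in paths under /tmp so it needs neither the real mount nor root:

```shell
# Stand-in demonstration of the /a -> /teuthology symlink using /tmp paths.
mkdir -p /tmp/a-demo/teuthology
ln -sfn /tmp/a-demo/teuthology /tmp/a-demo/a
readlink -f /tmp/a-demo/a   # prints /tmp/a-demo/teuthology
```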
The scratch file system is general purpose "scratch" space. Do what you want with it, but consider all data in it eligible for deletion at any time. You're encouraged to create a personal top-level directory.
The postfile file system is where the ceph-post-file utility dumps its results. Users are encouraged to use this utility to share artifacts with Ceph developers.
There is a “home” file system which hosts the home directories of users of teuthology and potentially other development nodes. Its access is restricted to administrators.