====== CephFS ======

There are a few CephFS file systems available in the sepia lab. These reside on the [[services:longrunningcluster|Long Running Cluster]]. Access to this Ceph cluster is available for any machine on the [[:vpnaccess|sepia VPN]]. You can and **should** mount these file systems on your laptop or development machine. This reduces load on some shared machines, like [[services:teuthology|teuthology]], and usually provides faster access, depending on your client and network. However, for everyday use you may prefer to access CephFS from a machine co-located with the other lab infrastructure, such as a [[:devplayground|Developer Playground]] machine, where OSD latency/bandwidth will be optimal.

In the scripts/commands below, we will ssh to the ''doli01.front.sepia.ceph.com'' machine. When [[:vpnaccess#requesting_access|given access to the sepia VPN]], the ssh key you shared should allow you to access this machine.
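
Before proceeding, you can confirm that your ssh key grants access with a quick sanity check (this assumes your local username matches your sepia one; otherwise pass ''-l <user>''):

<code bash>
# should print the remote hostname without prompting for a password
ssh doli01.front.sepia.ceph.com hostname
</code>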

==== Authorization ====

On your development machine or laptop, get access to CephFS with the LRC ''ceph.conf'' and ''client.sepian'' credential:

<code bash>
# create the ceph configuration directory
sudo mkdir -p -m 755 /etc/ceph
# generate a minimal ceph.conf for the LRC and install it locally
ssh doli01.front.sepia.ceph.com 'env CEPH_KEYRING=/etc/ceph/client.sepian.keyring ceph --id sepian config generate-minimal-conf' | sudo tee /etc/ceph/ceph.conf
sudo chmod 644 /etc/ceph/ceph.conf
# install the sepian keyring and also append it to the default keyring path
ssh doli01.front.sepia.ceph.com 'cat /etc/ceph/client.sepian.keyring' | sudo tee /etc/ceph/client.sepian.keyring | sudo tee -a /etc/ceph/keyring
sudo chmod 644 /etc/ceph/client.sepian.keyring
# extract the bare key into a secret file (usable with mount -o secretfile=...)
ssh doli01.front.sepia.ceph.com 'ceph-authtool /etc/ceph/client.sepian.keyring -n client.sepian -p' | sudo tee /etc/ceph/client.sepian.secret
sudo chmod 600 /etc/ceph/client.sepian.secret
</code>

The ''client.sepian'' credential is suitable for all Ceph developers to access appropriate LRC resources. In particular, it gives access to the ''teuthology'', ''scratch'', and ''postfile'' CephFS file systems.
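
To sanity-check the credential from your machine, you can list the LRC's file systems (a quick check, assuming the files above are in place; the ''ceph'' CLI ships with ''ceph-common''):

<code bash>
# lists the CephFS file systems; the sepian key is found via /etc/ceph/keyring
ceph --id sepian fs ls
</code>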

==== Mounting all Sepia CephFS file systems ====

**Note**: you need the ''mount.ceph'' program installed to use this mount syntax. It is usually part of the ''ceph-common'' package.

Generate mount entries for your ''/etc/fstab'' using the script below. Copy it locally, mark it executable, and run it:
<code bash>
#!/bin/bash

set -ex

function genmount {
    local path="$1"
    local mntpnt="$2"
    local fsname="$3"

    # create the mountpoint
    sudo mkdir -p -- "$mntpnt" || true
    # make the (shadowed) mountpoint directory unwritable to prevent accidental modification
    sudo chmod 000 -- "$mntpnt" || true
    # set it immutable to enforce that even for root
    sudo chattr +i -- "$mntpnt" || true
    # emit an fstab entry; mount.ceph finds the client.sepian key via the keyrings installed above
    printf 'sepian@b6f4aaad-d45d-11f0-b949-905a08286547.%s=%s\t%s\tceph\t_netdev\t0\t0\n' "$fsname" "$path" "$mntpnt"
}

genmount /teuthology-archive /teuthology teuthology | sudo tee -a /etc/fstab
genmount / /scratch scratch | sudo tee -a /etc/fstab
genmount / /postfile postfile | sudo tee -a /etc/fstab
</code>

Each generated entry uses the ''mount.ceph'' device syntax ''<user>@<fsid>.<fsname>=<path>''. The fstab changes will cause these file systems to mount on boot. After adding the entries for the first time, you need to mount them manually:

<code bash>
sudo mount /teuthology
sudo mount /scratch
sudo mount /postfile
</code>
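
To confirm everything mounted, one possible check:

<code bash>
# show all mounted ceph file systems with their sources and options
findmnt -t ceph
</code>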

==== The teuthology FS ====

The majority of CephFS use is directed at the "teuthology" file system, which hosts QA artifacts for analysis. Each test run has a directory in ''/teuthology-archive''. The ''/etc/fstab'' file (generated above) has this directory mounted locally at ''/teuthology'':

<code bash>
ls /teuthology/ | head -n 2
abhi-2019-12-04_08:55:20-rgw-wip-abhi-testing-2019-12-03-1859-distro-basic-smithi
abhi-2019-12-04_17:41:25-rgw-wip-abhi-testing-2019-12-04-1433-distro-basic-smithi
</code>

It's also common for test artifact paths shared among developers to include a ''/a/'' prefix, such as:

''/a/teuthology-2023-06-10_14\:23\:08-upgrade\:pacific-x-reef-distro-default-smithi/7301152/teuthology.log''

You can create this convenience symlink with:

<code bash>
sudo ln -s /teuthology /a
</code>

==== The scratch FS ====

This is a general-purpose file system for "scratch" space. Do what you want with it, but consider all data in it eligible for deletion at any time. You're encouraged to create a personal top-level directory.
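
For example (using your local ''$USER'' as the directory name, which is only a convention, not a lab requirement):

<code bash>
# claim a personal top-level directory in the scratch file system
mkdir -p /scratch/"$USER"
</code>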

==== The postfile FS ====

The [[https://docs.ceph.com/en/latest/man/8/ceph-post-file/|ceph-post-file]] utility dumps results in this file system. Users are encouraged to use this utility to share artifacts with Ceph developers.
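
A sketch of typical usage (the description and file name here are hypothetical):

<code bash>
# upload a file for Ceph developers to inspect; prints an id you can share
ceph-post-file -d 'mds crash during fs scrub' ./ceph-mds.log
</code>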

==== The home FS ====

There is a "home" file system which hosts the home directories of users of teuthology and potentially other development nodes. Access to it is restricted to administrators.