LONG_RUNNING_CLUSTER

Summary

A small subset of mira systems and all of the reesi and ivan systems are used in a permanent Ceph cluster.

It is managed using cephadm.

Here's a rundown of what this cluster stores

Cluster dashboard

https://reesi006.front.sepia.ceph.com:8443/
https://reesi005.front.sepia.ceph.com:8443/
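
If you're not sure which reesi host is currently serving the dashboard, the active mgr will tell you. A quick check from any node with the admin keyring (standard ceph command, nothing LRC-specific):

ceph mgr services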

Topology

  services:
    mon: 5 daemons, quorum reesi003,reesi002,reesi001,ivan02,ivan01 (age 5h)
    mgr: reesi005.xxyjcw(active, since 2w), standbys: reesi006.erytot, reesi004.tplfrt
    mds: 3/3 daemons up, 5 standby
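
The snippet above is the services section of ceph -s. To see the full picture of what cephadm is managing and where each daemon landed, these standard commands should do it:

ceph -s
ceph orch ls
ceph orch ps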

Retired hosts

mira{019,021,049,070,087,099,116,120} had all daemons removed and their OSDs evacuated, and were reclaimed as testnodes in February 2020. The apama hosts were retired entirely as well.

ceph.conf

This file (along with the admin keyring) can be saved on your workstation so that your workstation can act as an admin node.

# minimal ceph.conf for 28f7427e-5558-4ffd-ae1a-51ec3042759a
[global]
        fsid = 28f7427e-5558-4ffd-ae1a-51ec3042759a
        mon_host = [v2:172.21.2.201:3300/0,v1:172.21.2.201:6789/0] [v2:172.21.2.202:3300/0,v1:172.21.2.202:6789/0] [v2:172.21.2.203:3300/0,v1:172.21.2.203:6789/0] [v2:172.21.2.204:3300/0,v1:172.21.2.204:6789/0] [v2:172.21.2.205:3300/0,v1:172.21.2.205:6789/0]
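
A sketch of how to set that up, assuming you pull the conf and admin keyring from one of the mon hosts and use the default /etc/ceph paths on your workstation:

# on a mon host (e.g. reesi001)
sudo cephadm shell -- ceph config generate-minimal-conf
sudo cephadm shell -- ceph auth get client.admin

# save the output on your workstation as /etc/ceph/ceph.conf and
# /etc/ceph/ceph.client.admin.keyring, then verify admin access
ceph -s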

Upgrading the Cluster

The LRC is a testbed we use to test a release candidate before announcing it.

For example:

ceph orch upgrade start quay.ceph.io/ceph-ci/ceph:da36d2c9a106ed5231aa923e6c04a2485c89ef4b

watch "ceph -s; ceph orch upgrade status; ceph versions"

MONs run out of disk space

I sadly got too small of disks for the reesi when we purchased them, so they occasionally run out of space in /var/log/ceph before logrotate gets a chance to run (even though it runs 4x a day). The process below will get you back up and running again but will wipe out all logs.

ansible -m shell -a "sudo /bin/sh -c 'rm -vf /var/log/ceph/*/ceph*.gz'" reesi*
ansible -m shell -a "sudo /bin/sh -c 'logrotate -f /etc/logrotate.d/ceph-*'" reesi*
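
To see how full the log filesystem actually is before or after cleanup, the same ansible pattern works (a hypothetical one-liner, not part of the original runbook):

ansible -m shell -a "df -h /var/log/ceph" reesi*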

One-liners

Most of the stuff above is no longer valuable since Ceph has evolved over time. Here are some one-liners that were useful at the time I posted them.

Restart mon service

systemctl restart ceph-28f7427e-5558-4ffd-ae1a-51ec3042759a@mon.$(hostname -s).service
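
With cephadm you can also let the orchestrator do the restart from any admin node. Something like this should be equivalent (substitute the mon's short hostname):

ceph orch daemon restart mon.reesi003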

Watch logs for a mon

podman logs -f $(podman ps | grep "\-mon" | awk '{ print $1 }')
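
cephadm can look the container up for you as well. Assuming the standard cephadm daemon naming, this should tail the same logs when run on the mon host:

sudo cephadm logs --name mon.$(hostname -s) -- -f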

LRC iscsi volume

In Nov 2022 we started seeing data corruption on our main Gluster volume, where we keep all our critical VMs, so we connected an iSCSI volume from the LRC. These are the steps to connect an iSCSI volume to a RHEV cluster, following this doc: https://docs.google.com/document/d/1GYwv5y4T5vy-1oeAzw-zoLgQs0I3y5v_xD1wXscAA7M/edit

1. Create an rbd pool

ceph osd pool create <poolname>
ceph osd pool application enable <poolname> rbd
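
To confirm the pool exists and is tagged for rbd (standard commands, same <poolname> placeholder as above):

ceph osd pool ls detail | grep <poolname>
ceph osd pool application get <poolname>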

2. Deploy iscsi on at least four hosts - create a yaml file like the one below, then apply it with ceph orch (see the sketch after the spec)

service_type: iscsi
service_id: iscsi
placement:
  hosts:
    - reesi002
    - reesi003
    - reesi004
    - reesi005
spec:
  pool: lrc
  api_secure: false
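
A sketch of applying the spec and checking that the daemons came up, assuming it was saved as iscsi.yaml:

ceph orch apply -i iscsi.yaml
ceph orch ls iscsi            # confirm the service was scheduled
ceph orch ps | grep iscsi     # confirm daemons are running on the four hosts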

3. Connect to the iscsi container on one of the deployed hosts. To find the exact container id, run "podman ps" and look for a container whose name ends with the word "tcmu".

podman exec -it <iscsi container id> /bin/bash

for example:

podman exec -it ceph-28f7427e-5558-4ffd-ae1a-51ec3042759a-iscsi-iscsi-reesi005-luegfv-tcmu /bin/bash

4. Enter gwcli

gwcli

5. Go to iscsi-targets

cd iscsi-targets/

6. Go to the storage iqn

cd iqn.2003-01.com.redhat.iscsi-gw:lrc-iscsi1/

7. Create all four gateways as specified in the yaml in step 2, for example (see the sketch below for the remaining hosts):

create reesi002.front.sepia.ceph.com 172.21.2.202
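
The other three would look like the lines below. Note that the IPs here are an assumption based on the mon_host addresses in the ceph.conf above (reesi00N appears to map to 172.21.2.20N), so double-check each host's front network address before running them:

create reesi003.front.sepia.ceph.com 172.21.2.203
create reesi004.front.sepia.ceph.com 172.21.2.204
create reesi005.front.sepia.ceph.com 172.21.2.205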

8. Go to disks

cd /disks

9. Create an RBD image named "vol1" in the "lrc" pool

create pool=lrc image=vol1 size=20T
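
To verify from outside gwcli that the image was created (standard rbd commands):

rbd -p lrc ls
rbd info lrc/vol1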