ivan{01..07}

Summary

The Ceph Foundation purchased 7 more servers to join the longrunningcluster (the lab's long-running Ceph cluster). The three primary goals were:

  1. Faster networking between hosts
  2. Large NVMe devices as OSDs
  3. 12TB HDDs (largest up until now was 4TB)

Purchasing details

Hardware Specs

Component | Count   | Manufacturer | Model                                     | Capacity | Notes
Chassis   | 2U      | Supermicro   | SSG-6028R-E1CR12H                         | N/A      |
Mainboard | N/A     | Supermicro   | X10DRH-iT                                 | N/A      |
CPU       | 1       | Intel        | Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz | N/A      | ARK
RAM       | 4 DIMMs | Samsung      | M393A2G40EB1-CRC                          | 16GB     | 64GB total
SSD       | 2       | Intel        | SSDSC2BB150G7 (S3520)                     | 150GB    | Software RAID1 for OS
HDD       | 11      | Seagate      | ST4000NM0025                              | 4TB      | SAS 7200RPM for OSDs
HDD       | 1       | HGST         | HUH721212AL5200                           | 12TB     | SAS 7200RPM; added 1AUG2019 at Brett's request
NVMe      | 1       | Micron       | MTFDHBG800MCG-1AN1ZABYY                   | 800GB    | Carved into logical volumes on two partitions: 400GB as an OSD, the other 400GB divided by 12 for HDD OSD journals
NIC       | 2 ports | Intel        | X540-AT2                                  | 10Gb     | RJ45 (not used)
NIC       | 2 ports | Intel        | 82599ES                                   | 10Gb     | 1 port cabled per system on front VLAN
BMC       | 1       | Supermicro   | N/A                                       | N/A      | Reachable at $host.ipmi.sepia.ceph.com (see example below)
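
The BMCs respond to standard IPMI commands. Below is a minimal sketch using ipmitool; the username and password shown are placeholders, not the lab's actual credentials.

# Check power state via ivan01's BMC (substitute real IPMI credentials)
ipmitool -I lanplus -H ivan01.ipmi.sepia.ceph.com -U ADMIN -P '<password>' chassis status

# Open a serial-over-LAN console (escape sequence is "~.")
ipmitool -I lanplus -H ivan01.ipmi.sepia.ceph.com -U ADMIN -P '<password>' sol activate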

OSD/Block Device Information

The ivan machines have 9x 12TB HDDs, 2x 1.5TB NVMe devices, and 1x 350GB NVMe device.

The 12TB drives were added so we can say we're testing on drives larger than 8TB.

The smaller NVMe device is split into eleven equal logical volumes, one for each OSD's journal (see the lsblk output below).

root@ivan04:~# lsblk
NAME                 MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                    8:0    0 894.3G  0 disk  
`-sda1                 8:1    0 894.3G  0 part  
  `-md0                9:0    0 894.1G  0 raid1 /
sdb                    8:16   0 894.3G  0 disk  
`-sdb1                 8:17   0 894.3G  0 part  
  `-md0                9:0    0 894.1G  0 raid1 /
sdc                    8:32   0  10.9T  0 disk  
sdd                    8:48   0  10.9T  0 disk  
sde                    8:64   0  10.9T  0 disk  
sdf                    8:80   0  10.9T  0 disk  
sdg                    8:96   0  10.9T  0 disk  
sdh                    8:112  0  10.9T  0 disk  
sdi                    8:128  0  10.9T  0 disk  
sdj                    8:144  0  10.9T  0 disk  
sdk                    8:160  0  10.9T  0 disk  
sr0                   11:0    1   841M  0 rom   
nvme0n1              259:0    0 349.3G  0 disk  
`-nvme0n1p1          259:3    0 349.3G  0 part  
  |-journals-sdc     253:0    0    31G  0 lvm   
  |-journals-sdd     253:1    0    31G  0 lvm   
  |-journals-sde     253:2    0    31G  0 lvm   
  |-journals-sdf     253:3    0    31G  0 lvm   
  |-journals-sdg     253:4    0    31G  0 lvm   
  |-journals-sdh     253:5    0    31G  0 lvm   
  |-journals-sdi     253:6    0    31G  0 lvm   
  |-journals-sdj     253:7    0    31G  0 lvm   
  |-journals-sdk     253:8    0    31G  0 lvm   
  |-journals-nvme1n1 253:9    0    31G  0 lvm   
  `-journals-nvme2n1 253:10   0    31G  0 lvm   
nvme1n1              259:1    0   1.5T  0 disk  
nvme2n1              259:2    0   1.5T  0 disk
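
To double-check the journal volume group on a host, the standard LVM reporting commands can be used. A small sketch (the output fields chosen here are just one reasonable selection):

# List the "journals" VG and its logical volumes with their backing devices
sudo vgs journals
sudo lvs -o lv_name,vg_name,lv_size,devices journals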

How to partition/re-partition the NVMe device

Here's the bash history that was used to set up a reesi machine's NVMe card; adjust the host pattern and device names if reusing it on the ivan hosts (see the sketch after the commands).

ansible -a "sudo parted -s /dev/nvme0n1 mktable gpt" reesi*
ansible -a "sudo parted /dev/nvme0n1 unit '%' mkpart foo 0 50" reesi*
ansible -a "sudo parted /dev/nvme0n1 unit '%' mkpart foo 51 100" reesi*
ansible -a "sudo pvcreate /dev/nvme0n1p1" reesi*
ansible -a "sudo vgcreate journals /dev/nvme0n1p1" reesi*
for disk in sd{a..l}; do ansible -a "sudo lvcreate -L 31G -n $disk journals" reesi*; done
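
The layout on the ivan hosts differs from what the reesi commands above produce: the lsblk output shows a single partition on nvme0n1 carrying one 31G journal LV per HDD (sdc..sdk) plus one per NVMe OSD (nvme1n1 and nvme2n1). The exact history used on ivan wasn't captured here, so the following is only a sketch of how the same commands could be adapted to match that layout.

ansible -a "sudo parted -s /dev/nvme0n1 mktable gpt" ivan*
ansible -a "sudo parted /dev/nvme0n1 unit '%' mkpart journals 0 100" ivan*
ansible -a "sudo pvcreate /dev/nvme0n1p1" ivan*
ansible -a "sudo vgcreate journals /dev/nvme0n1p1" ivan*
for disk in sd{c..k} nvme1n1 nvme2n1; do ansible -a "sudo lvcreate -L 31G -n $disk journals" ivan*; done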

Checking NVMe Card SMART Data

nvme smart-log /dev/nvme0n1
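
To run the same check on every ivan host at once, the ad-hoc ansible pattern used above works here too (assuming nvme-cli is installed on the hosts):

ansible -a "sudo nvme smart-log /dev/nvme0n1" ivan*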

Updating BIOS

TBD
