
Infrastructure Hardware

This page covers all other hardware that doesn't fit under the umbrella of testnode hardware.

Spare Hardware

Present as of: — David Galloway 2022/01/18 19:29

Hardware | Quantity
Intel Optane P5800X 400GB PCIe cards | 10
STARTECH 2PT 10G FIBER NIC-OPEN SFP+ | 5

gw1/gw2

Summary

These two hosts are 1U custom-built Supermicro systems.

gw1 was repurposed as store03 for use as an arbiter node for the Gluster storage backing RHEV VMs.

gw2 is not powered on or accessible.

Hardware Specs

Component | Count | Manufacturer | Model | Capacity | Notes
Chassis | N/A | Supermicro | | N/A |
Mainboard | N/A | Supermicro | PDSMi+ | N/A |
CPU | 1 | Intel | Xeon X3220 @ 2.40GHz | N/A | ARK
RAM | 4 DIMMs | ? | ? | 1GB DDR2 667MHz | 4GB total
HDD | 2 | HGST | HDS721010CLA330 | 1TB | Software RAID1
SSD | | | | |
NIC | 2 | | | | On-board

store{01,02}.front.sepia.ceph.com

Summary

These serve as the main Gluster storage nodes for the RHEV cluster. They're running CentOS 7.

store01 was formerly gw.sepia.ceph.com and still hosts a number of services.

store02 was formerly mira123.

Hardware Information

Mira-type chassis with 16GB RAM and 1x Intel(R) Xeon(R) CPU X3430 @ 2.40GHz

Storage

  • Populated with 7x 4TB Hitachi HDD
  • 20TB Areca RAID5
  • One bay in each system is unused, so the stripe size and RAID level could be changed if desired (see the sketch below)
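
If the stripe size or RAID level were ever changed, a minimal first step is checking the current Areca layout. This assumes the same cli64 utility shown for pulpito below is installed here; the rsf/disk subcommands are standard Areca CLI but should be verified against the controller's firmware:

cli64 rsf info    # RAID set summary, including which bays are members
cli64 vsf info    # volume sets (level, capacity, state)
cli64 disk info   # per-bay disk inventory; the unused bay shows up here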

Networking

  • store01 has a 1Gb bridge on the ipmi network from when it was the OpenVPN gateway.
  • store01 has a 10Gb uplink with address 8.43.84.133, mainly as a backup path into the lab. This is leftover from when the host was the OpenVPN gateway.
  • Both nodes have an add-on dual-port 10Gb SFP+ card with both ports cabled and configured as bond0 (bond type 6, balance-alb); see the sketch below.
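
To confirm the bond mode and member ports on either node (standard Linux interfaces, nothing lab-specific):

cat /proc/net/bonding/bond0    # reports "adaptive load balancing" for bond type 6
ip link show master bond0      # lists the SFP+ ports enslaved to the bond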

store03.front.sepia.ceph.com

Summary

Its sole purpose is to serve as an arbiter node for the Gluster volume backing RHEV VMs.

store03 was formerly gw1.
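
For context, an arbiter brick stores only file metadata and acts as a quorum tiebreaker, which is why a modest host like this is sufficient. A hedged sketch of how such a volume is created (the volume name and brick paths here are illustrative, not the lab's actual layout):

# Replica 3 with the third brick as a metadata-only arbiter (GlusterFS 3.8+)
gluster volume create rhev-vols replica 3 arbiter 1 \
    store01:/bricks/rhev store02:/bricks/rhev store03:/bricks/rhev-arbiter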

Hardware Information

store03 is the former gw1; see the gw1/gw2 hardware specs above.

ssdstore{01,02}.front.sepia.ceph.com

Summary

These serve as the newer NVMe Gluster storage nodes for the RHEV cluster. They're running RHEL 7. Purchased/donated by OSAS.

Hardware Information

Component | Count | Manufacturer | Model | Capacity | Notes
Chassis | N/A | Supermicro | SYS-2028U-TN24R4T+ | N/A |
Mainboard | N/A | Supermicro | X10DRU-i+ | N/A |
CPU | 2 | Intel | Xeon X3220 @ 2.40GHz | N/A | ARK
RAM | 6 DIMMs | Samsung | M393A1G40EB1-CRC | 8GB DDR4 2400MHz | 48GB total
SSD | 2 | Intel | S3520 | 480GB | Software RAID1 for OS
SSD | 8 | Intel | DC P3600 | 1.6TB | Software RAID6 for Gluster brick
NIC | 1 | Intel | 82599 (AOC-STGN-I2S) | 2 ports 10Gbps SFP+ | Bonded in balance-alb mode
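
A minimal sketch for verifying both software arrays; the md device names are assumptions, and /proc/mdstat shows the real ones:

cat /proc/mdstat          # lists all md arrays and their sync state
mdadm --detail /dev/md0   # e.g. the RAID1 OS array
mdadm --detail /dev/md1   # e.g. the RAID6 Gluster brick array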

teuthology.front.sepia.ceph.com

Hosted the previous teuthology service for the Sepia lab. Running Ubuntu 14.04 (Trusty) with a custom kernel so the LRC (long-running cluster) can be mounted. Now only up for legacy purposes; it can be moved to another rack and/or repurposed.
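
A hedged sketch of what a kernel-client CephFS mount of the LRC would look like; the monitor address, credentials, and mountpoint are illustrative, and the custom kernel is presumably what provides a CephFS client new enough for this:

mount -t ceph mon01.front.sepia.ceph.com:6789:/ /mnt/lrc \
    -o name=admin,secretfile=/etc/ceph/admin.secret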

Mira-type chassis with 32GB RAM and 2x Intel(R) Xeon(R) X3440 @ 2.53GHz

Storage

  • Populated with 8x 4TB Hitachi HDD
  • 80GB Areca RAID6 hosts /
  • 24TB Areca RAID6 hosts /home

Networking

  • 1x cabled but unused 1Gbps link (eth0)
  • 1x 10Gbps uplink for all traffic (eth3)

pulpito.front.sepia.ceph.com

Hosts the pulpito and paddles services. Running Ubuntu 20.04 (Focal).

Mira-type chassis with 16GB RAM and Intel(R) Xeon(R) X3440 @ 2.53GHz

Storage

root@pulpito:~# cli64 vsf info
  # Name             Raid Name       Level   Capacity Ch/Id/Lun  State         
===============================================================================
  1 ARC-1222-VOL#000 Raid Set # 000  Raid6    400.0GB 00/00/00   Normal
  2 ARC-1222-VOL#001 Raid Set # 000  Raid1+0   60.0GB 00/00/01   Normal
  3 ARC-1222-VOL#002 Raid Set # 000  Raid6   5510.0GB 00/00/02   Normal
===============================================================================

root@pulpito:~# lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0 372.5G  0 disk 
├─sda1   8:1    0 356.5G  0 part /
├─sda2   8:2    0     1K  0 part 
└─sda5   8:5    0    16G  0 part [SWAP]
sdb      8:16   0  55.9G  0 disk /var/lib/postgresql
sdc      8:32   0     5T  0 disk 

Networking

  • 1x cabled 1Gbps link (eth0)

gitbuilder.ceph.com

Summary

AKA apt-mirror or gitbuilder-archive.

OOB access is available via:

ipmitool -I lanplus -U root -P XXXXX -H 172.21.39.25 sol activate

Services

Storage

  • 8x 4TB HGST drives
  • Volume set 1 is an 80GB RAID6 for the OS
  • Volume set 2 is a 24TB RAID6 for /home, where all the data for the services listed above lives

Networking

  • 1x 10Gb uplink at address 8.43.84.130
  • Temporary VPN tunnel for drop.ceph.com access to the long-running cluster
    • (Must be restarted with service openvpn restart anytime the VPN service is restarted) 1)

hv{01..03}

Summary

The main RHEV highly-available hypervisor nodes. Running RHEL 7.

Infra admins have standard SSH access as their own user.

Hardware Specs

Component | Count | Manufacturer | Model | Capacity | Notes
Chassis | N/A | Supermicro | SYS-1028U-TRT+ | N/A |
Mainboard | N/A | Supermicro | X10DRU-i+ | N/A |
CPU | 2 | Intel | E5-2660 v3 @ 2.60GHz | N/A | ARK
RAM | 8 DIMMs | Hynix Semiconductor | HMA42GR7MFR4N-TF | 16GB | 128GB total
HDD | 0 | | | |
SSD | 4 | Intel | SSDSC2BB480G6 | 480GB | Software RAID5
NIC | 2 ports | Intel | X540-AT2 | 10Gbps | On-board RJ45. Only useful at 1Gb due to lack of 10GBASE-T switches
NIC | 2 ports | Intel | 82575EB | 1Gbps | Add-on NIC. Not used
NIC | 2 ports | Intel | 82599ES | 10Gbps SFP+ | Port 1 for LAN / Port 2 for WAN
NIC | 2 ports | Intel | 82599ES | 10Gbps SFP+ | Port 1 for rhevm network / Port 2 unused

Dual PSUs, redundantly cabled to two PDUs.

hv04

Summary

Just another RHEV highly-available hypervisor node. The model and specs differ slightly from hv{01..03} because the earlier models went EOL.

Hardware Specs

Component | Count | Manufacturer | Model | Capacity | Notes
Chassis | N/A | Supermicro | SYS-1029U-TRTP2 | N/A |
Mainboard | N/A | Supermicro | X11DPU | N/A |
CPU | 2 | Intel | Xeon Gold 5115 @ 2.40GHz | N/A | ARK
RAM | 8 DIMMs | Hynix Semiconductor | HMA82GR7AFR8N-VK | 16GB | 128GB total
HDD | 0 | | | |
SSD | 4 | Intel | SSDSC2KB480G7 | 480GB | Software RAID5
NIC | 2 ports | Intel | I350 | 1Gbps | On-board RJ45. Port 1 for WAN
NIC | 2 ports | Intel | X710 | 10Gbps SFP+ | Port 1 for front/ipmi / Port 2 for rhevm

cnv{01..03}.front.sepia.ceph.com

Summary

Purchased by Red Hat in 2021 to create a new OpenShift cluster with CNV (Container-Native Virtualization). This cluster is intended to replace RHEV.

Purchasing ticket: https://redhat.service-now.com/surl.do?n=PNT1028261
Racking ticket: https://redhat.service-now.com/surl.do?n=PNT1028262

Hardware Information

Component | Count | Manufacturer | Model | Capacity | Notes
Chassis | N/A | Supermicro | SYS-2028U-TN24R4T+ | N/A |
Mainboard | N/A | Supermicro | X10DRU-i+ | N/A |
CPU | 2 | Intel | Xeon X3220 @ 2.40GHz | N/A | ARK
RAM | 6 DIMMs | Samsung | M393A1G40EB1-CRC | 8GB DDR4 2400MHz | 48GB total
SSD | 2 | Intel | S3520 | 480GB | Software RAID1 for OS
SSD | 8 | Intel | DC P3600 | 1.6TB | Software RAID6 for Gluster brick
NIC | 1 | Intel | 82599 (AOC-STGN-I2S) | 2 ports 10Gbps SFP+ | Bonded in balance-alb mode

Installation Notes

1) drop.ceph.com moved to a separate VM in RHEV