mako{01..10}

Summary

Mark had increasingly been hearing from customers that they wanted details on how Ceph performs on non-Intel systems. Red Hat purchased 10 servers with AMD processors in Q1 2021.

These servers also each contain donated Samsung MZ-QLB3T80 3.84TB NVMe drives (Model: PM983): 8 in each of the first two nodes and 6 in each of the rest, for 64 total.

Purchasing Details

Hardware Specs

Component | Count            | Manufacturer | Model                            | Capacity | Notes
Chassis   | 1U               | Quanta       | D52B-1U                          | N/A      |
Mainboard | N/A              | Quanta       | S5B-MB (LBG-1G)                  | N/A      |
CPU       | 2                | Intel        | Xeon(R) Platinum 8276M @ 2.20GHz | 112 threads | ARK
RAM       | 12 DIMMs         | Micron       | 36ASF4G72PZ-2G6H1                | 32GB     | 384GB total
SSD       | 1                | Intel        | SSDSC2KB960G8                    | 1TB      | For OS
NVMe      | 2                | Intel        | SSDPE21K750GA                    | 1TB      | For OSD journals?
NVMe      | 8                | Intel        | SSDPE2KX080T8                    | 8TB      | For OSDs
NIC       | 2 (2 ports each) | Intel        | XXV710                           | 25Gb     | All 4 ports cabled and bonded
BMC       | 1                | Quanta       | N/A                              | N/A      | Reachable at $host.ipmi.sepia.ceph.com using the usual IPMI credentials
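The BMC naming convention above makes remote power control straightforward with ipmitool. A minimal sketch follows; the host name and the IPMI_USER/IPMI_PASS variables are illustrative assumptions, not lab policy:

```shell
# Build the BMC address for a node from its short hostname.
# "mako01" is a hypothetical example node.
host="mako01"
bmc="${host}.ipmi.sepia.ceph.com"

# The BMC is only reachable from inside the lab, so print the ipmitool
# command here instead of executing it.
echo "ipmitool -I lanplus -H ${bmc} -U \${IPMI_USER} -P \${IPMI_PASS} chassis power status"
```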

PXE/Reimaging

These nodes PXE boot using Legacy/BIOS mode and can be provisioned via Cobbler as normal.
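A typical reimage with the stock Cobbler CLI might look like the following sketch. The profile name is an assumption; the real profiles in the lab will differ:

```shell
# Hypothetical reimage flow. The commands are printed rather than executed,
# since they must run on the Cobbler server with access to the node's BMC.
host="mako01"                   # example node
profile="Ubuntu-20.04-x86_64"   # assumed profile name

for cmd in \
  "cobbler system edit --name=${host} --profile=${profile} --netboot-enabled=true" \
  "cobbler sync" \
  "ipmitool -I lanplus -H ${host}.ipmi.sepia.ceph.com -U \${IPMI_USER} -P \${IPMI_PASS} chassis bootdev pxe" \
  "ipmitool -I lanplus -H ${host}.ipmi.sepia.ceph.com -U \${IPMI_USER} -P \${IPMI_PASS} chassis power cycle"
do
  echo "$cmd"
done
```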

Network Config

These nodes are connected to the officinalis QFX5200 switch (s/n WH0218170419 [formerly WH3619030401]), which is uplinked to and managed by Red Hat IT. For an example of how to report an outage, see https://redhat.service-now.com/surl.do?n=INC1201508.

The 100Gb connection is currently the only uplink. The top-of-rack switch in that rack probably has spare capacity if we ever need a 1Gb uplink, which would free the 100Gb NIC for backend traffic.

hardware/mako.1622058863.txt.gz · Last modified: 2021/05/26 19:54 by djgalloway