hardware:mako

mako{01..10}

Summary

Mark had been increasingly hearing that customers wanted details on how Ceph performed on non-Intel systems. Red Hat purchased 10 servers with AMD processors in Q1 2021.

These servers also contain donated Samsung MZ-QLB3T80 3.84TB NVMe drives (Model: PM983): 8 in each of the first two nodes and 6 in each of the remaining eight, for 64 drives total.
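For quick ad-hoc checks, the mako{01..10} brace expansion above can drive a simple loop over all ten nodes. A minimal sketch, assuming passwordless SSH to the nodes, that they resolve under a .front.sepia.ceph.com suffix, and that the donated drives are the only devices whose model string contains "SAMSUNG" (all of these are assumptions, adjust as needed):

  # Count the donated Samsung NVMe drives on each node (expect 8 on mako01-02, 6 on the rest)
  for host in mako{01..10}.front.sepia.ceph.com; do
    printf '%s: ' "$host"
    ssh "$host" 'lsblk -dno NAME,MODEL | grep -ci samsung'
  done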

Purchasing Details

Hardware Specs

Component | Count   | Manufacturer | Model              | Capacity               | Notes
Chassis   | 1       | Dell         | PowerEdge R6515    | N/A                    | 1U form factor
Mainboard | N/A     | Dell         | 0R4CNN             | N/A                    |
CPU       | 1       | AMD          | EPYC 7742          | 64 cores / 128 threads |
RAM       | 8 DIMMs | Micron       | 18ASF2G72PDZ-3G2E1 | 16GB each              | 128GB total
SSD       | 2       | Micron       | MTFDDAV480TDS      | 480GB                  | Behind hardware RAID1 for the OS
NVMe      | 1       | Dell         | P4510              | 1TB                    | For OSD journals?
NIC       | 2 ports | Dell         | N/A                | 1Gb                    | On-board. Unused.
NIC       | 2 ports | Broadcom     | 57416 BaseT        | 1/10Gb                 | Oops. Won't be using this.
NIC       | 2 ports | Mellanox     | ConnectX-6         | 100Gb                  | 1 port used as uplink
BMC       | 1       | Quanta       | N/A                | N/A                    | Reachable at $host.ipmi.sepia.ceph.com using the usual IPMI credentials.
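The BMCs can be driven over the network with standard ipmitool. A minimal sketch, assuming ipmitool is installed locally; IPMI_USER/IPMI_PASS are placeholders for the usual lab credentials and mako01 is just an example host:

  # Query power state, then power-cycle a node through its BMC
  ipmitool -I lanplus -H mako01.ipmi.sepia.ceph.com -U "$IPMI_USER" -P "$IPMI_PASS" chassis power status
  ipmitool -I lanplus -H mako01.ipmi.sepia.ceph.com -U "$IPMI_USER" -P "$IPMI_PASS" chassis power cycle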

PXE/Reimaging

These nodes PXE boot in Legacy/BIOS mode and can be provisioned via Cobbler as normal.
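For reference, below is a rough sketch of a generic Cobbler + IPMI reimage flow. The Cobbler system name, hostnames, and credential variables are assumptions; adapt them to however the nodes are actually registered in Cobbler:

  # Enable netboot for the node in Cobbler and push the updated PXE config
  cobbler system edit --name=mako01.front.sepia.ceph.com --netboot-enabled=true
  cobbler sync

  # Request a one-time PXE boot from the BMC, then power-cycle the node
  ipmitool -I lanplus -H mako01.ipmi.sepia.ceph.com -U "$IPMI_USER" -P "$IPMI_PASS" chassis bootdev pxe
  ipmitool -I lanplus -H mako01.ipmi.sepia.ceph.com -U "$IPMI_USER" -P "$IPMI_PASS" chassis power cycle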

Network Config

These nodes are connected to the officinalis QFX5200 (s/n WH0218170419 [formerly WH3619030401]), which is uplinked to and managed by Red Hat IT. For an example of how to report an outage, see https://redhat.service-now.com/surl.do?n=INC1201508.

The 100Gb connection is the only uplink for now. The top-of-rack switch in that rack probably has spare capacity if we later want a 1Gb uplink so the 100Gb NIC can be reserved for backend traffic.
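To confirm that the Mellanox port came up at 100Gb after any cabling or switch changes, something like the following can be run on a node. The interface name is only an example; check the output of ip -br link for the actual ConnectX-6 port name first:

  # List interfaces, then check negotiated speed and link state of the 100Gb port
  ip -br link
  ethtool enp65s0f0np0 | grep -E 'Speed|Link detected'   # interface name is an example only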
