====== Infrastructure Hardware ======
This page covers all other hardware that doesn't fit under the umbrella of testnode hardware.

===== Spare Hardware =====
Present as of: --- //[[dgallowa@redhat.com|David Galloway]] 2022/01/18 19:29//

^ Hardware ^ Quantity ^
| Intel Optane P5800X 400GB PCIe cards | 10 |
| STARTECH 2PT 10G FIBER NIC-OPEN SFP+ | 5 |
===== gw1/gw2 =====

These two hosts are 1U custom-built Supermicro systems.

gw1 was repurposed as [[hardware:infrastructure#store03frontsepiacephcom|store03]] for use as an arbiter node for the Gluster storage backing RHEV VMs.

gw2 is not powered on or accessible.

----

  * store01 has a 10Gb uplink with address 8.43.84.133, mainly as a backup "in" to the lab. This is left over from when the host was the OpenVPN gateway.
  * Both nodes have an add-on dual-port 10Gb SFP+ card with both ports cabled and configured as bond0 (bond type 6, balance-alb)
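As a sketch, a bond type 6 (balance-alb) interface on a RHEL-style host is usually defined with ifcfg fragments like the following. The interface names and address here are placeholders, not the lab's actual values:

<code bash>
# /etc/sysconfig/network-scripts/ifcfg-bond0 -- sketch only; IP and names are assumptions
DEVICE=bond0
TYPE=Bond
BONDING_OPTS="mode=balance-alb miimon=100"   # mode 6 = balance-alb
BOOTPROTO=none
IPADDR=172.21.0.10
PREFIX=24
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-enp3s0f0 -- repeat for the second SFP+ port
DEVICE=enp3s0f0
TYPE=Ethernet
MASTER=bond0
SLAVE=yes
ONBOOT=yes
</code>

balance-alb needs no switch-side configuration, which is why it is a common choice over LACP for dual-homed storage nodes.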

----

===== store03.front.sepia.ceph.com =====
==== Summary ====
Its sole purpose is to serve as an arbiter node for the Gluster volume backing [[services:rhev]] VMs.

store03 was formerly gw1.
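For reference, an arbiter node holds a metadata-only brick in a ''replica 3 arbiter 1'' volume, letting two data nodes avoid split-brain without a third full copy. A hedged sketch; the volume name and brick paths are assumptions, not the lab's actual layout:

<code bash>
# Sketch only -- volume name and brick paths are assumptions.
# Two full data bricks plus a metadata-only arbiter brick on store03:
gluster volume create rhev-vms replica 3 arbiter 1 \
  ssdstore01:/bricks/rhev/brick \
  ssdstore02:/bricks/rhev/brick \
  store03:/bricks/rhev/arbiter
gluster volume start rhev-vms
</code>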

==== Hardware Information ====
[[hardware:infrastructure#hardware_specs|Hardware Specs]]

----

===== ssdstore{01,02}.front.sepia.ceph.com =====
==== Summary ====
These serve as the new NVMe Gluster storage nodes for the [[services:rhev]] cluster. They're running RHEL 7. Purchased/donated by OSAS.

==== Hardware Information ====
| ^ Count ^ Manufacturer ^ Model ^ Capacity ^ Notes ^
^ Chassis | N/A | Supermicro | SYS-2028U-TN24R4T+ | N/A | |
^ Mainboard | N/A | Supermicro | X10DRU-i+ | N/A | |
^ CPU | 2 | Intel | Xeon E5-2630 v4 @ 2.20GHz | N/A | [[https://ark.intel.com/products/92981/Intel-Xeon-Processor-E5-2630-v4-25M-Cache-2_20-GHz|ARK]] |
^ RAM | 6 DIMMs | Samsung | M393A1G40EB1-CRC | 8GB | DDR4 2400MHz -- 48GB total |
^ SSD | 2 | Intel | S3520 | 480GB | Software RAID1 for OS |
^ SSD | 8 | Intel | DC P3600 | 1.6TB | Software RAID6 for Gluster brick |
^ NIC | 1 | Intel | 82599 (AOC-STGN-I2S) | 2 ports | 10Gbps SFP+ bonded in balance-alb mode |
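The software RAID6 brick described above can be sketched with mdadm. Device names, filesystem, and mount point are assumptions for illustration:

<code bash>
# Sketch only -- device names and mount point are assumptions.
# Assemble the eight P3600 drives into a software RAID6 array for the Gluster brick:
mdadm --create /dev/md0 --level=6 --raid-devices=8 /dev/nvme{0..7}n1
mkfs.xfs /dev/md0
mount /dev/md0 /bricks/rhev
</code>

RAID6 tolerates two simultaneous drive failures at the cost of two drives' capacity, so eight 1.6TB drives yield roughly 9.6TB usable for the brick.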
----

===== pulpito.front.sepia.ceph.com =====
Hosts [[services:pulpito]] and [[services:paddles]]. Running Ubuntu Focal.

[[hardware:Mira]]-type chassis with 16GB RAM and Intel(R) Xeon(R) X3440 @ 2.53GHz
----

==== Summary ====
AKA [[services:apt-mirror]] or gitbuilder-archive.

OOB access is available via ''ipmitool -I lanplus -U root -P XXXXX -H 172.21.39.25 sol activate''

==== Services ====
  * [[services:backups]]
  * Serves packages built by [[services:gitbuilders]]
  * [[tasks:lab-extras|lab-extras]]

==== Storage ====

----
===== hv{01..03} =====
==== Summary ====
Main [[services:RHEV]] highly available hypervisor nodes. Running RHEL7.

Dual PSU redundantly cabled to two PDUs.

===== hv04 =====
==== Summary ====
Just another [[services:RHEV]] highly available hypervisor node. The model and specs differ slightly because the earlier models are EOL.

==== Hardware Specs ====
| ^ Count ^ Manufacturer ^ Model ^ Capacity ^ Notes ^
^ Chassis | N/A | Supermicro | SYS-1029U-TRTP2 | N/A | |
^ Mainboard | N/A | Supermicro | X11DPU | N/A | |
^ CPU | 2 | Intel | Xeon(R) Gold 5115 CPU @ 2.40GHz | N/A | [[https://ark.intel.com/products/120484/Intel-Xeon-Gold-5115-Processor-13_75M-Cache-2_40-GHz|ARK]] |
^ RAM | 8 DIMMs | Hynix Semiconductor | HMA82GR7AFR8N-VK | 16GB | 128GB total |
^ HDD | 0 | | | | |
^ SSD | 4 | Intel | SSDSC2KB480G7 | 480GB | Software RAID5 |
^ NIC | 2 ports | Intel | I350 | 1Gbps | On-board RJ45. Port 1 for WAN |
^ NIC | 2 ports | Intel | X710 | 10Gbps | SFP+. Port 1 for front/ipmi, Port 2 for rhevm |

===== cnv{01..03}.front.sepia.ceph.com =====
==== Summary ====
Purchased by Red Hat in 2021 to create a new OpenShift cluster with CNV. This cluster is intended to replace [[services:RHEV]].

Purchasing ticket: https://redhat.service-now.com/surl.do?n=PNT1028261 \\
Racking ticket: https://redhat.service-now.com/surl.do?n=PNT1028262

==== Hardware Information ====
| ^ Count ^ Manufacturer ^ Model ^ Capacity ^ Notes ^
^ Chassis | N/A | Supermicro | SYS-2028U-TN24R4T+ | N/A | |
^ Mainboard | N/A | Supermicro | X10DRU-i+ | N/A | |
^ CPU | 2 | Intel | Xeon E5-2630 v4 @ 2.20GHz | N/A | [[https://ark.intel.com/products/92981/Intel-Xeon-Processor-E5-2630-v4-25M-Cache-2_20-GHz|ARK]] |
^ RAM | 6 DIMMs | Samsung | M393A1G40EB1-CRC | 8GB | DDR4 2400MHz -- 48GB total |
^ SSD | 2 | Intel | S3520 | 480GB | Software RAID1 for OS |
^ SSD | 8 | Intel | DC P3600 | 1.6TB | Software RAID6 for Gluster brick |
^ NIC | 1 | Intel | 82599 (AOC-STGN-I2S) | 2 ports | 10Gbps SFP+ bonded in balance-alb mode |

==== Installation Notes ====
Followed https://access.redhat.com/documentation/en-us/openshift_container_platform/4.8/html-single/installing/index#deploying-installer-provisioned-clusters-on-bare-metal

----