
mira

Summary

We have around 100 mira hosts.

Some of them were previously used as vpshosts and as part of the longrunningcluster; however, they are nearly 10 years old and severely behind spec-wise. We're slowly phasing them out to make room for new hardware. The systems that remain are used as testnodes, and a small subset are still in the longrunningcluster.

Hardware Specs

Component | Count   | Manufacturer | Model                             | Capacity | Notes
Chassis   | N/A     | Supermicro   | 2U (model unknown)                | N/A      |
Mainboard | N/A     | Supermicro   | X8SIL                             | N/A      |
CPU       | 1       | Intel        | Xeon(R) CPU X3440 @ 2.53GHz       | N/A      | ARK
RAM       | 4 DIMMs | Kingston     | 9965434-017.A00LF                 | 4GB      | 16GB total. PC3-8500R DDR3-1066 REGISTERED ECC CL7 240 PIN
HDD       | 8x      | WD/HGST      | N/A                               | 1TB      | For vpshosts and testnodes
HDD       | Asst.   | WD/HGST      | N/A                               | 1TB/4TB  | LRC hosts have a mixture of 1TB and 4TB disks
NIC       | 2 ports | Intel        | 82574L Gigabit Network Connection | 1Gb      |
RAID      | 1       | Areca        | Mix of ARC-{1222,1880}            | 8 disks  | JBOD mode
BMC       | 1       | Supermicro   | N/A                               | N/A      | Reachable at $host.ipmi.sepia.ceph.com

E-Waste

As these machines age, they continue to MCE and lock up at higher rates. To make room for new LRC hosts, we've begun e-wasting miras.

Hostname       | Date E-Wasted | Ticket Number(s)
mira005        |               | PNT0146880
mira009        |               | PNT0146880
mira091        |               | PNT0146880
mira095        |               | PNT0146880
mira113        |               | PNT0146880
mira{030..039} |               | PNT0766680
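The mira{030..039} entry in the table is shell (bash) brace expansion shorthand for ten hosts; the same notation can drive batch operations. A minimal sketch (the loop just prints the hostnames):

```shell
# Brace expansion turns mira{030..039} into mira030 ... mira039,
# which is handy for acting on a whole batch of hosts at once.
for h in mira{030..039}; do
  echo "$h"
done
```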

Areca RAID Controllers

Flashing Firmware

UPDATE: This can now be done simply by running ansible-playbook firmware.yml --limit="miraXXX*" --tags="areca"

The latest firmware for ARC-1222 controllers can be obtained from here.

The latest firmware for ARC-1880 controllers can be obtained from here.

My process for flashing ARC-1222 firmware manually is below. This assumes you've downloaded and extracted the firmware zip. The same process can be used for other Areca controllers. Just use the proper firmware BIN files.

scp /home/dgalloway/BIOS/areca/ARC1212_1222/ARC1212* ubuntu@$host.front.sepia.ceph.com:/home/ubuntu/
ssh $host
sudo -i
# Glob directly rather than parsing ls output
for file in /home/ubuntu/ARC1212*.BIN; do cli64 sys updatefw path=$file; done
rm /home/ubuntu/ARC1212*.BIN
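The flash loop can also be wrapped in a small function with a dry-run mode, so you can preview exactly which BIN files will be applied before touching the controller. This is a sketch: flash_areca and its dry-run argument are my own additions, not part of the documented process; cli64 and the firmware path are the same as above.

```shell
# Illustrative wrapper around the manual flash loop. Pass a firmware
# directory and "yes"/"no" for dry-run; "yes" only prints the commands.
flash_areca() {
  fw_dir=${1:-/home/ubuntu}
  dry_run=${2:-yes}
  for fw in "$fw_dir"/ARC1212*.BIN; do
    [ -e "$fw" ] || continue   # glob matched nothing; skip the literal pattern
    if [ "$dry_run" = yes ]; then
      echo "would run: cli64 sys updatefw path=$fw"
    else
      cli64 sys updatefw path="$fw"
    fi
  done
}

# Preview first, then re-run with "no" to actually flash:
flash_areca /home/ubuntu yes
```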

Other Common Tasks

Erasing a RAID and setting controller to JBOD mode

cli64 set password=0000   # unlock the controller (default password)
cli64 vsf delete vol=1    # delete the volume set
cli64 rsf delete raid=1   # delete the RAID set
cli64 sys mode p=1        # set the controller to JBOD mode

Stop Beeper

Parameter: <p=<0(mute)|1(disabled)|2(enabled)>>

cli64 set password=0000
cli64 sys beeper p=0

Replacing failed/failing drives

This process is a bit annoying. Depending on which order the HDD backplane is connected to the RAID controller, the order of drive bays on these machines will be:

1 2 3 4
5 6 7 8

OR

5 6 7 8
1 2 3 4

To add to the annoyingness, it's not possible to light up the red/failed LED manually on the drive sleds. So when working with the labs team, it's easiest to have the admin be in front of the machine and either light up the failed drive or light up drive 1 and have them count to the drive bay.

To light up a drive, I typically just run dd if=/dev/sda of=/dev/null, which keeps the activity LED lit by continuously reading the disk (that example lights up drive 1).

If a drive just has failing sectors but is still readable, it's easiest to light up that drive (smart.sh will tell you which drive letter to use dd on). If the drive has completely failed, light up drive 1 (usually /dev/sda) and have the admin count up to it.

hardware/mira.txt · Last modified: 2020/03/11 20:04 by djgalloway