

mira

Summary

We have 122 mira hosts. Some serve as vpshosts hypervisors, some serve as OSD nodes in the long-running cluster (LRC), and the remaining nodes are used as bare-metal testnodes.

See the [long-running-cluster] and [vps_hosts] groups in the ceph-sepia-secrets Ansible inventory to see which systems are used for what.
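
To check which group a host belongs to, you can point ansible at the inventory and list hosts. The inventory path below is only an example; use wherever your ceph-sepia-secrets checkout lives.

ansible -i ~/ceph-sepia-secrets/ansible/inventory long-running-cluster --list-hosts
ansible -i ~/ceph-sepia-secrets/ansible/inventory vps_hosts --list-hosts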

Hardware Specs

Component  Count    Manufacturer  Model                              Capacity  Notes
Chassis    N/A      Supermicro    2U (model unknown)                 N/A
Mainboard  N/A      Supermicro    X8SIL                              N/A
CPU        1        Intel         Xeon(R) CPU X3440 @ 2.53GHz        N/A       ARK
RAM        4 DIMMs  Kingston      9965434-017.A00LF                  4GB       16GB total. PC3-8500R DDR3-1066 REGISTERED ECC CL7 240 PIN
HDD        8x       WD/HGST       N/A                                1TB       For VPSHOSTS and testnodes
HDD        Assorted WD/HGST       N/A                                1TB/4TB   LRC hosts have a mixture of 1TB and 4TB disks
NIC        2 ports  Intel         82574L Gigabit Network Connection  1Gb
RAID       1        Areca         Mix of ARC-{1222,1880}             8 disks   JBOD mode
BMC        1        Supermicro    N/A                                N/A       Reachable at $host.ipmi.sepia.ceph.com
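
Since every BMC answers at $host.ipmi.sepia.ceph.com, power state can be checked or cycled remotely with ipmitool. The username and password below are placeholders; use the lab's IPMI credentials.

ipmitool -I lanplus -H $host.ipmi.sepia.ceph.com -U <user> -P <password> chassis status
ipmitool -I lanplus -H $host.ipmi.sepia.ceph.com -U <user> -P <password> chassis power cycle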

Areca RAID Controllers

Flashing Firmware

The latest firmware for ARC-1222 controllers can be obtained from here.

The latest firmware for ARC-1880 controllers can be obtained from here.

My process for flashing ARC-1222 firmware manually is below. This assumes you've downloaded and extracted the firmware zip. The same process can be used for other Areca controllers. Just use the proper firmware BIN files.

# Copy the firmware BIN files to the target host
scp /home/dgalloway/BIOS/areca/ARC1212_1222/ARC1212* ubuntu@$host.front.sepia.ceph.com:/home/ubuntu/
ssh $host
sudo -i
# Flash each firmware image with the Areca CLI, then clean up
for file in /home/ubuntu/ARC1212*.BIN; do cli64 sys updatefw path=$file; done
rm /home/ubuntu/ARC1212*.BIN
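
The controller typically needs a reboot before the new firmware is active. As a sanity check afterwards, cli64 can report the firmware version the controller is running:

cli64 sys info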

Other Common Tasks

Erasing a RAID and setting controller to JBOD mode

# Authenticate to the controller (0000 is the Areca default password)
cli64 set password=0000
# Delete the volume set, then the RAID set it belonged to
cli64 vsf delete vol=1
cli64 rsf delete raid=1
# Switch the controller to JBOD mode
cli64 sys mode p=1
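
To confirm the controller is actually in JBOD mode, cli64 can list the physical drives; they should show up as individual JBOD/pass-through devices rather than members of a RAID set:

cli64 disk info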

Stop Beeper

Parameter: p=<0(mute)|1(disabled)|2(enabled)>

cli64 set password=0000
cli64 sys beeper p=0
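
Per the parameter list above, the beeper can be re-enabled later with p=2:

cli64 set password=0000
cli64 sys beeper p=2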

Replacing failed/failing drives

This process is a bit annoying. Depending on the order in which the HDD backplane is connected to the RAID controller, the drive bays on these machines are numbered either:

1 2 3 4
5 6 7 8

OR

5 6 7 8
1 2 3 4

To add to the annoyance, it's not possible to manually light up the red/failed LED on the drive sleds. So when working with the labs team, it's easiest to have the admin standing in front of the machine while you either light up the failed drive or light up drive 1 and have them count over to the failed drive's bay.

To light up a drive, I typically just run dd if=/dev/sda of=/dev/null if I want to light up drive 1; the sustained read keeps that drive's activity LED lit.

If a drive just has failing sectors but is still readable, it's easiest to light up that drive (smart.sh will tell you which drive letter to run dd against). If the drive has completely failed, light up drive 1 (usually /dev/sda) and have the admin count up to it.
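
For example, assuming smart.sh reported the failing drive as /dev/sdc:

# Keep the suspect drive's activity LED lit while the admin finds the bay; Ctrl-C when done
dd if=/dev/sdc of=/dev/null bs=1M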
