This shows you the differences between two versions of the page.
hardware:ivan [2022/06/08 15:13] djgalloway [Summary]
hardware:ivan [2022/06/08 15:17] (current) djgalloway
Line 26:
===== OSD/Block Device Information =====
- The ivan hosts have 9x 12TB HDD, 2x 1.5TB NVMe, and 1x 350GB NVMe.
- 
- The 12TB drives were added so we can say we're testing on drives larger than 8TB.
- 
- The smaller NVMe device is split into eleven equal logical volumes, one for each OSD's journal.
+ I used the Orchestrator to deploy OSDs on the ivan hosts (I did this one by one to avoid a mass data rebalance all to one rack).
- <code>
- root@ivan04:~# lsblk
- NAME                  MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
- sda                     8:0    0 894.3G  0 disk
- `-sda1                  8:1    0 894.3G  0 part
-   `-md0                 9:0    0 894.1G  0 raid1 /
- sdb                     8:16   0 894.3G  0 disk
- `-sdb1                  8:17   0 894.3G  0 part
-   `-md0                 9:0    0 894.1G  0 raid1 /
- sdc                     8:32   0  10.9T  0 disk
- sdd                     8:48   0  10.9T  0 disk
- sde                     8:64   0  10.9T  0 disk
- sdf                     8:80   0  10.9T  0 disk
- sdg                     8:96   0  10.9T  0 disk
- sdh                     8:112  0  10.9T  0 disk
- sdi                     8:128  0  10.9T  0 disk
- sdj                     8:144  0  10.9T  0 disk
- sdk                     8:160  0  10.9T  0 disk
- sr0                    11:0    1   841M  0 rom
- nvme0n1               259:0    0 349.3G  0 disk
- `-nvme0n1p1           259:3    0 349.3G  0 part
-   |-journals-sdc      253:0    0    31G  0 lvm
-   |-journals-sdd      253:1    0    31G  0 lvm
-   |-journals-sde      253:2    0    31G  0 lvm
-   |-journals-sdf      253:3    0    31G  0 lvm
-   |-journals-sdg      253:4    0    31G  0 lvm
-   |-journals-sdh      253:5    0    31G  0 lvm
-   |-journals-sdi      253:6    0    31G  0 lvm
-   |-journals-sdj      253:7    0    31G  0 lvm
-   |-journals-sdk      253:8    0    31G  0 lvm
-   |-journals-nvme1n1  253:9    0    31G  0 lvm
-   `-journals-nvme2n1  253:10   0    31G  0 lvm
- nvme1n1               259:1    0   1.5T  0 disk
- nvme2n1               259:2    0   1.5T  0 disk
- </code>
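A quick arithmetic check on that old journal layout (my own sketch, not part of either wiki revision): eleven 31 GiB logical volumes total 341 GiB, which fits on the 349.3 GiB ''nvme0n1p1'' partition with roughly 8 GiB to spare.

<code>
# Editor's sketch: confirm the eleven journal LVs fit on nvme0n1p1
lv_count=11      # sdc..sdk plus nvme1n1 and nvme2n1
lv_size_gib=31   # matches -L 31G in the lvcreate commands below
echo "$((lv_count * lv_size_gib)) GiB of 349 GiB used"   # prints: 341 GiB of 349 GiB used
</code>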
- 
- ==== How to partition/re-partition the NVMe device ====
- Here's my bash history that can be used to set up an ivan machine's 375GB NVMe card.
<code>
- ansible -a "sudo parted -s /dev/nvme0n1 mktable gpt" ivan
- ansible -a "sudo parted /dev/nvme0n1 unit '%' mkpart foo 0 100" ivan
- ansible -a "sudo pvcreate /dev/nvme0n1p1" ivan
- ansible -a "sudo vgcreate journals /dev/nvme0n1p1" ivan
- for disk in sd{c..k} nvme1n1 nvme2n1; do ansible -a "sudo lvcreate -L 31G -n $disk journals" ivan; done
+ root@reesi001:~# cat ivan_osd_spec.yml
+ service_type: osd
+ service_id: osd_using_paths
+ placement:
+   hosts:
+     - ivan01
+     - ivan02
+     - ivan03
+     - ivan04
+     - ivan05
+     - ivan06
+     - ivan07
+ spec:
+   data_devices:
+     paths:
+       - /dev/sdc
+       - /dev/sdd
+       - /dev/sde
+       - /dev/sdf
+       - /dev/sdg
+       - /dev/sdh
+       - /dev/sdi
+       - /dev/sdj
+       - /dev/sdk
+       - /dev/nvme1n1
+       - /dev/nvme2n1
+   db_devices:
+     paths:
+       - /dev/nvme0n1
</code>
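For context (my addition, assuming the cluster is managed by cephadm, which this service spec format implies): a spec file like the one above is normally handed to the orchestrator with ''ceph orch apply''. Something like:

<code>
# Sketch, assuming a cephadm-managed cluster and the spec file shown above.
# Preview which OSDs the orchestrator would create:
ceph orch apply -i ivan_osd_spec.yml --dry-run
# Apply the spec for real:
ceph orch apply -i ivan_osd_spec.yml
# Watch the OSD daemons come up:
ceph orch ps --daemon-type osd
</code>

Deploying one host at a time, as described above, could be done by listing a single host under ''placement'' and re-applying the spec as each host finishes.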
Line 93 / Line 74:
I added the hosts to the cluster using the ''back'' IPs. The cluster became very unhappy, complaining about slow ops. It turned out the ivan servers couldn't get **out** from their ''back'' interfaces, so the OSDs defaulted back to the 1Gb link.
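One way to confirm which interface each OSD actually bound to (my own diagnostic sketch, not from the original page) is to check the addresses recorded in the OSD metadata:

<code>
# Sketch: show the front/back addresses OSD 0 registered with.
# If back_addr sits on the 1Gb subnet, that OSD fell back to the slow link.
ceph osd metadata 0 | grep -E '"(front|back)_addr"'
</code>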
- I reached out to Red Hat IT to have the 25Gb network ports switched over to VLAN100. After that, I struggled to get eno1 (the 1Gb interface) to not come up on boot.
+ I reached out to Red Hat IT to have the 25Gb network ports switched over to VLAN100. After that, I struggled to get eno1 (the 1Gb interface) to **not** come up on boot since I didn't need it anymore.
Finally I figured out<code>