mdadm

wednesday, 27 august 2014

I have moved from RAID level 10, to 5, to 6 and now back to 10. The rebuild and check times for RAID 6, in particular, are excruciating and stress the drives. With proper backups (which you should always have, regardless of the RAID level), RAID 10 offered speed (in build, recovery and usage) over capacity. Pick your trade-offs.

create array

to create a new 8-disk RAID 10 array..

mdadm --create /dev/md0 --raid-devices=8 --level=raid10 --chunk=256 --assume-clean /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1 /dev/sdi1
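
note that --assume-clean skips the initial resync; a quick sanity check right after creation (device names as above)..

mdadm --detail /dev/md0  # confirm level, chunk size and that all eight devices are active
cat /proc/mdstat  # overall array state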

format the array with XFS (ext4 on 2.6.x kernels seemed to exhibit annoying drive thrashing while idle)..

mkfs.xfs -f -b size=4096 -d sunit=512,swidth=2048 /dev/md0
xfs_check /dev/md0
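
the sunit/swidth values follow from the chunk size and the RAID geometry; a worked check, assuming the default near-2 RAID10 layout (4 data-bearing drives)..

# sunit  = chunk size in 512-byte sectors = 256 KiB / 512 B           = 512
# swidth = sunit x data drives            = 512 x (8 drives / 2 copies) = 2048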

check

to scan array configuration..

mdadm --detail --scan
mdadm --detail --scan >> /etc/mdadm/mdadm.conf  # .. to save configuration
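
the saved entry is a single ARRAY line per array, roughly the following (UUID and name values here are placeholders)..

ARRAY /dev/md0 metadata=1.2 name=host:0 UUID=00000000:00000000:00000000:00000000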

To force a check of the RAID..

echo check > /sys/block/md0/md/sync_action
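
once the check completes, the mismatch count shows whether any stripes were inconsistent; a repair pass rewrites them..

cat /sys/block/md0/md/mismatch_cnt  # non-zero after a check means inconsistent stripes were found
echo repair > /sys/block/md0/md/sync_action  # rewrite the mismatched blocks
echo idle > /sys/block/md0/md/sync_action  # abort a running check/repair if needed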

To monitor mdadm progress..

watch -d cat /proc/mdstat
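
for unattended monitoring (as opposed to watching progress), mdadm can also run as a daemon and mail on failure/degraded events; a minimal sketch, assuming mail delivery to root works on the host..

mdadm --monitor --scan --daemonise --mail=root  # alternatively set MAILADDR in /etc/mdadm/mdadm.conf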

maintenance

to remove a disk from an array for replacement..

mdadm --fail /dev/md0 /dev/sda1
mdadm --remove /dev/md0 /dev/sda1

To add a new disk to the array..

mdadm --add /dev/md0 /dev/sda1
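
if the replacement disk is blank, a common approach is to copy the partition layout from a surviving member first (here /dev/sdb is assumed to be a healthy member and /dev/sda the new disk)..

sfdisk -d /dev/sdb | sfdisk /dev/sda  # MBR: duplicate the partition table onto the new disk
sgdisk -R /dev/sda /dev/sdb && sgdisk -G /dev/sda  # GPT equivalent: replicate, then randomise GUIDs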

delete array

to free up the disks in order to rebuild at another mdadm RAID level and restore from backups (migrating levels with realtime conversion is painfully slow and stresses the drives)..

umount /net
mdadm --stop /dev/md0
mdadm --remove /dev/md0
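
the old RAID superblocks should also be wiped from each member partition so the disks assemble cleanly in a new array; a sketch using the member names from above..

mdadm --zero-superblock /dev/sdb1  # repeat for /dev/sdc1 .. /dev/sdi1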

disk partitions

unless using identical RAID-certified hard disks, manually partition all drives to an identical partition size so that differing drive models and brands may be used within the RAID; even the same series within a brand may have differing sector counts across generations (e.g. create 931GB partitions on 1TB Samsung SpinPoint drives).
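
a sketch of such a partition with parted, using the 931GB SpinPoint example above (device name and size are illustrative)..

parted -s /dev/sdb mklabel gpt
parted -s /dev/sdb mkpart primary 1MiB 931GiB
parted -s /dev/sdb set 1 raid on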

western digital

consumer grade drives should probably not be used for RAID due to their disabling of TLER (Time Limited Error Recovery). Use the “WD Red” NAS drives for decent price/performance.
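
smartmontools can query and, on drives that support it, set the SCT error recovery timeout (which is what TLER controls); a sketch, assuming the drive exposes SCT ERC at all..

smartctl -l scterc /dev/sda  # show current read/write recovery timeouts
smartctl -l scterc,70,70 /dev/sda  # set both to 7.0 seconds (units of 100 ms)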

superblock failure

drives flagged as bad on powerup can result in the array appearing to have a bad superblock when mdadm cannot assemble the complete array. This can usually be rectified with the following..

umount /net
mdadm --stop /dev/md0
mdadm --assemble --force /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1 /dev/sdi1
xfs_check /dev/md0
mount -a
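
before forcing assembly, it can help to compare what each member's superblock actually recorded..

mdadm --examine /dev/sd[b-i]1  # compare the Events counters; members that are far behind are the suspect drives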
