A system with a plain md RAID 1 array on extra data drives will not boot: it drops to recovery mode with a missing (md) filesystem to mount and an fsck prompt. The array's two member drives show up as "linux_raid_member" under fdisk and blkid, yet "cat /proc/mdstat" shows no arrays. If I run, as root:
mdadm --auto-detect

then /proc/mdstat finally shows the array. To make that stick across reboots, however:
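For reference, a healthy post-auto-detect /proc/mdstat looks roughly like the sample below; the md number, device names and sizes are illustrative, not taken from the system above:

```shell
# Illustrative /proc/mdstat contents after a successful
# "mdadm --auto-detect" (values are examples only):
mdstat='Personalities : [raid1]
md0 : active raid1 sdd1[1] sdc1[0]
      976759936 blocks [2/2] [UU]
unused devices: <none>'
# Count active arrays; anything greater than 0 means assembly
# worked and the filesystem can now be mounted:
active=$(printf '%s\n' "$mdstat" | grep -c '^md[0-9]* : active')
echo "$active"
```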
- Make sure that /etc/mdadm.conf contains the array info; it can be appended with:
mdadm --examine --scan >> /etc/mdadm.conf
- There is an mdadm technical note about arrays created with the older, effectively deprecated metadata format (BZ-788022 is implicated): a "+0.90" needs to be added to /etc/mdadm.conf, but you need to get rid of the "+1.x -all" options!
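A minimal sketch of that edit, run here against a temporary stand-in file rather than the real /etc/mdadm.conf. It assumes the file carries the default "AUTO +imsm +1.x -all" policy line and inserts "+0.90" before "-all"; as noted above, dropping the "+1.x -all" options entirely is another way out. Check what your own file actually contains first:

```shell
# Edit the AUTO policy line on a stand-in copy of the config
# (assumption: the RHEL 6.3 default "AUTO +imsm +1.x -all" line).
conf=$(mktemp)
printf 'AUTO +imsm +1.x -all\n' > "$conf"   # stand-in for /etc/mdadm.conf
# Insert +0.90 so 0.90-metadata arrays are auto-assembled again:
sed -i 's/^AUTO +imsm +1\.x -all$/AUTO +imsm +1.x +0.90 -all/' "$conf"
cat "$conf"
```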
How do you tell *before* an upgrade that you will have this issue? Run, as root:
mdadm -E /dev/sdc1 | grep Vers

If the reported metadata version is 0.90, the array will be affected.
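For illustration, the version can also be pulled out of the -E output mechanically. The superblock excerpt below is a fabricated sample for a 0.90-metadata member, not output captured from the system above:

```shell
# Illustrative excerpt of "mdadm -E" output for a 0.90-metadata
# member (all values are examples):
sample='/dev/sdc1:
          Magic : a92b4efc
        Version : 0.90.00
  Creation Time : Sat Jan  1 00:00:00 2011'
# Extract the metadata version; anything starting with 0.90 is
# affected by the AUTO policy change:
ver=$(printf '%s\n' "$sample" | awk '/Version/ {print $3}')
echo "$ver"
```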
It is interesting to note that this same md array just worked in 6.0, 6.1 and 6.2 because:
In Red Hat Enterprise Linux 6.1 and 6.2, mdadm always assembled version 0.90 RAID arrays automatically due to a bug.