Just in case someone googles this up, here is my experience with moving from 2x150GB to 2x1TB drives in mdadm RAID1 + LVM on top of it. Assume we have two drives, small1 and small2, in an mdadm mirror (md0), and the new ones are big1 and big2. On top of the mirror sits LVM with volume group VG1 and logical volume LV1. First, ensure everything is OK with the current md:
cat /proc/mdstat
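For a healthy two-disk mirror the output looks roughly like this (device names and sizes here are illustrative, yours will differ):

Personalities : [raid1]
md0 : active raid1 small2[1] small1[0]
      146485696 blocks [2/2] [UU]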
Tell mdadm to fail one drive and remove it from the array:
mdadm /dev/md0 --set-faulty /dev/small1 && mdadm /dev/md0 --remove /dev/small1
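To double-check that the array is now degraded and running on one drive, inspect it (the failed slot should show up as removed):

mdadm --detail /dev/md0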
Replace the small1 drive with a big one (either hotswapping, or powering the system down). Create a new partition on the big HDD of type FD (Linux RAID autodetect). Make it the size you want your new RAID to be. I prefer cfdisk, but this may vary:
cfdisk /dev/big1
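If you'd rather do this non-interactively, something like the following parted sketch should work too (the msdos label and the 0%/100% bounds are my assumptions, adjust to taste):

parted -s /dev/big1 mklabel msdos           # use gpt instead for disks over 2TB
parted -s /dev/big1 mkpart primary 0% 100%  # one partition spanning the disk
parted -s /dev/big1 set 1 raid on           # mark it as Linux RAID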
Add the new disk (or, to be precise, your newly created partition, e.g. /dev/sda1):
mdadm /dev/md0 --add /dev/big1
Wait till the array is synced:
watch cat /proc/mdstat
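While it rebuilds, /proc/mdstat shows a progress bar, something like this (numbers are illustrative):

md0 : active raid1 big1[2] small2[1]
      146485696 blocks [2/1] [_U]
      [==>..................]  recovery = 12.3% (18030592/146485696) finish=23.4min speed=91440K/sec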
Repeat this with the other drive (small2 out, big2 in). In the end you'll get two big disks in the array. Grow the array to the maximum size allowed by the component devices, then wait till it's synced:
mdadm /dev/md0 --grow --size=max
watch cat /proc/mdstat
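If this is a plain LVM setup (the VG1/LV1 from the beginning), the rest is a pvresize/lvextend/resize2fs dance, roughly like this (I'm assuming an ext3/ext4 filesystem on LV1; use your filesystem's own grow tool otherwise):

pvresize /dev/md0                   # let LVM see the bigger md0
lvextend -l +100%FREE /dev/VG1/LV1  # hand all the new space to LV1
resize2fs /dev/VG1/LV1              # grow the filesystem (ext3/ext4 assumed)

If the array backs a XenServer Storage Repository (SR) instead, the steps below handle the LVM side.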
Find the physical volume (the device) on which the SR exists and identify the Volume Group (VG) corresponding to the SR. Issue the following command on the XenServer host:
# pvs
The output of this command should be similar to the one below:

  PV         VG                                                 Fmt  Attr PSize  PFree
  /dev/md0   VG_XenStorage-058e9a1d-9b7e-71bc-7a4c-5b78d6e30bcb lvm2 a-   80.00G 38.00G
  /dev/sde   VG_XenStorage-4684b6c6-be6d-6267-b7b5-834a1fd30f65 lvm2 a-   59.99G 45.99G
The volume groups (VG) are named VG_XenStorage-<SR UUID>. Using the SR UUID, identify the correct volume group and the corresponding Physical Volume (PV) in the output of the above command.
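If you don't have the SR UUID handy, xe sr-list will show it:

# xe sr-list params=uuid,name-label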
Resize the Physical Volume:
# pvresize /dev/md<x>
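A successful resize reports something like:

Physical volume "/dev/md0" changed
1 physical volume(s) resized / 0 physical volume(s) not resized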
Scan the Storage Repository:
# xe sr-scan uuid=<SR UUID>
The SR size now gets updated to the new size of the physical volume.
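You can verify the result by checking the SR's physical-size parameter (reported in bytes):

# xe sr-param-list uuid=<SR UUID> | grep physical-size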