We run a large number of Xen virtual servers spread across multiple physical nodes, and if you want to get the most out of Xen, I would suggest using LVM (Logical Volume Manager). It makes life a lot easier, especially if you do not run a RAID setup (we run RAID10 on all VM nodes), since you can spread a volume group over multiple hard drives. I'm not going to cover setting up LVM itself, as there are plenty of tutorials for that; instead I'll cover what I've found to be the best way to migrate an LVM volume.
First, we need to create a snapshot of the LVM volume, since we cannot safely image the live volume while the VM is writing to it. Run the following:
lvcreate -L20G -s -n storageLV_s /dev/vGroup/storageLV
The 20G part is the size of the snapshot LV. I would suggest looking up the size of the original LV and making the snapshot the same size, so it can never run out of space while you copy; you can find the size with:
lvdisplay /dev/vGroup/storageLV
There will be an "LV Size" field in the output; take that value and put it in the command. The -n switch sets the name; I usually use the same name as the LV with a trailing _s for "snapshot". The last argument is simply the real LV that we want to snapshot.
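If you do end up making the snapshot smaller than the origin, keep an eye on how full it gets while you copy, because a snapshot that fills up becomes invalid. A quick way to check, using the same names as above, is:
lvs vGroup
The Data% column shows how much of the snapshot's space has been used so far.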
Afterwards, we will use dd in a slightly unusual way. A single dd is either reading or writing at any given moment, which makes it crawl; to get around this, we read the snapshot with one dd and pipe it into a second dd that writes, so reads and writes overlap and throughput is limited only by the slower of the two drives. To speed it up a bit more, we use a block size of 64K.
dd if=/dev/vGroup/storageLV_s conv=noerror,sync bs=64k | dd of=/migrate/storageLV_s.dd bs=64k
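These copies can take a while with no output at all. If you want to check progress with GNU dd, you can send the dd processes a USR1 signal from another terminal, which makes them print their I/O statistics to the terminal running the pipe; something along these lines:
kill -USR1 $(pgrep -x dd)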
I won't cover the file transfer process in detail, as there are multiple ways to do it. If you want to use SCP, I would suggest using the lightest cipher available, as the encryption overhead really slows it down. Our nodes usually have httpd installed, so I simply changed the configuration to listen on a different port (for security) and changed the DocumentRoot to /migrate.
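With the image exposed over HTTP like that, pulling it down on the target node is just a plain download; a minimal sketch, assuming the source node is reachable at 10.0.0.1 and httpd was moved to port 8080 (both hypothetical):
wget -O /migrate/storageLV_s.dd http://10.0.0.1:8080/storageLV_s.dd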
Once you have the file on the target server, you'll need to re-create the LV there:
lvcreate -L20G -n storageLV vGroup
Keep the same size, use the same name (this time without the trailing _s, as it is not a snapshot), and give the volume group as the last argument.
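Before restoring, it's worth double-checking that the new LV is at least as large as the image you are about to write into it; one way, assuming the names used above:
blockdev --getsize64 /dev/vGroup/storageLV
ls -l /migrate/storageLV_s.dd
Both print sizes in bytes, so they are easy to compare.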
The last step is to actually restore the image using dd, again using our block-size and pipe tweak for better performance.
dd if=/migrate/storageLV_s.dd conv=noerror,sync bs=64k | dd of=/dev/vGroup/storageLV bs=64k
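Once the VM is up and confirmed working on the new node, don't forget to clean up on the source side; an active snapshot keeps consuming space and adds copy-on-write overhead to every write on the origin LV for as long as it exists. Using the names from above:
lvremove /dev/vGroup/storageLV_s
rm /migrate/storageLV_s.dd
lvremove will ask for confirmation before deleting the snapshot.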
I have migrated around 16 LVs with this method without any problems: 13 of them were 20G each, two were 40G, and one was 75G. Every step is fast, though I have to admit the slowest part is the file transfer. I would suggest using a gigabit crossover cable, or better yet a gigabit switch if you have one; if you don't, but you are right next to the server, consider using a spare USB 2.0 HDD, as it is much faster than 100 Mbps Ethernet (USB 2.0 is around 480 Mbps).
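If you are stuck on a 100 Mbps link, one optional tweak (not part of the steps above) is to compress the image on its way out of dd so there is less data to move, assuming gzip is installed and the CPU is not already the bottleneck:
dd if=/dev/vGroup/storageLV_s conv=noerror,sync bs=64k | gzip > /migrate/storageLV_s.dd.gz
Then on the target, decompress while restoring:
gunzip -c /migrate/storageLV_s.dd.gz | dd of=/dev/vGroup/storageLV bs=64k
Whether this helps depends entirely on how compressible the VM's data is.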