community.riocities.com

Re-partition live system md+lvm

Contents

  • Background and introduction
  • Adding an extra disk to md0 (optional)
    • Partitioning
    • Install grub to the new disk
    • Add the new disk to md0
  • md1 setup
    • Remove one disk from md0
    • Repartition sdb
    • Install grub to sdb
    • Create a new degraded mirror (md1)
  • Housekeeping (precaution)
  • Move the data
  • Remove md0 and move new spare partition into md1
  • Housekeeping (end)

Re-partitioning a live system to make space for a new (and larger) grub bootloader.

Background and introduction¶

Upgrading from Debian Squeeze to Wheezy failed on my md+lvm systems (same problem with an Ubuntu LTS upgrade from lucid 10.04 to precise 12.04).

This is the error message from grub-install during the system upgrade, when it fails to fit the image:

/usr/sbin/grub-setup: warn: Your core.img is unusually large.  It won't fit in the embedding area..
/usr/sbin/grub-setup: error: embedding is not possible, but this is required when the root device is on a RAID array or LVM volume.

The grub2 in Wheezy is large and does not fit on drives partitioned with an old version of fdisk. Example from my system:

Note: use fdisk -l <device> with the "Wheezy fdisk" and fdisk -lu <device> with the "Squeeze fdisk" (the -u flag makes the older fdisk print sector units)

Device     Boot      Start         End      Blocks   Id  System
/dev/sda1   *           63   312576704   156288321   fd  Linux raid autodetect

If partitioned with a newer fdisk, the start would have been sector 2048, and the Wheezy grub2 needs that extra space to fit its md-raid and lvm support.

So how to solve this? In short: create a new raid metadevice with disks partitioned so that the first partition starts at sector 2048, and move the existing lvm volume group to this metadevice.

=> This can be done by degrading the running raid array and repartitioning the disk that was removed from it.

However, I strongly recommend that you add new disks while doing this on a live system (an optional chapter below shows how this is done). The extra disks can be connected via SATA/eSATA. USB will not work, as you cannot mix USB and SATA disks in the same array. If you try you will get this:

bio too big device md0 (248 > 240)

And that is NOT GOOD!

NOTE NOTE NOTE - before starting any work you must make sure that you have free blocks in the volume group that uses md0 as a physical volume. This is because the new metadevice will be slightly smaller (in other words, you need some Free PE in the md0 LVM physical volume).
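You can check this via the "Free PE" line of pvdisplay. A small sketch (free_pe is my own helper, not an LVM command; it reads pvdisplay-style text on stdin so it can be tried offline):

```shell
# Print the number of free physical extents from pvdisplay-style output.
# free_pe is a hypothetical helper, not part of LVM.
free_pe() {
    awk '/Free PE/ { print $3 }'
}

# On the live system you would run something like:
#   pvdisplay /dev/md0 | free_pe    # must be greater than 0 before starting
```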

Timing - This can be done before the upgrade, or as in my case directly after "dist-upgrade", but before reboot.

Adding an extra disk to md0 (optional)¶

The new disk must be larger than the existing disks, so that you can partition it correctly from the start and at least fit grub on that disk.

Later, when you create md1, you can add an extra disk to it as well, so that md1 also has full redundancy during the data move.

Partitioning¶

Create a new primary partition starting at sector 2048 and ending at 'current-disk-end + (2048 - 63)'; in my case 312576704 + 1985 = 312578689.

Example in fdisk

    Command (m for help): n
    Partition type:
       p   primary (0 primary, 0 extended, 4 free)
       e   extended
    Select (default p): p
    Partition number (1-4, default 1): 
    Using default value 1
    First sector (2048-976773167, default 2048): 
    Using default value 2048
    Last sector, +sectors or +size{K,M,G} (2048-976773167, default 976773167): 312578689

    Command (m for help): t
    Selected partition 1
    Hex code (type L to list codes): fd
    Changed system type of partition 1 to fd (Linux raid autodetect)

    Disk /dev/sdc: 500.1 GB, 500107862016 bytes
    81 heads, 63 sectors/track, 191411 cylinders, total 976773168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0xc659cfc5

       Device Boot      Start         End      Blocks   Id  System
    /dev/sdc1   *        2048   312578689   156288321   fd  Linux raid autodetect
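The numbers in the fdisk example can be sanity-checked with shell arithmetic (the variable names here are my own): the new end sector is the old end shifted by the same offset as the start, and fdisk's Blocks column counts 1 KiB units.

```shell
old_start=63           # first-partition start with the old fdisk
new_start=2048         # first-partition start with a newer fdisk
old_end=312576704      # end sector of the existing partition (sda1)

# Shift the end by the same offset as the start, so the partition keeps its size.
new_end=$(( old_end + (new_start - old_start) ))
echo "new end sector: ${new_end}"     # 312578689, as used above

# fdisk's Blocks column is in 1 KiB blocks: (End - Start + 1) / 2
blocks=$(( (new_end - new_start + 1) / 2 ))
echo "blocks: ${blocks}"              # 156288321, matching /dev/sdc1
```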

Install grub to the new disk¶

We want at least one disk with the new grub on it, in case something happens:

# grub-install /dev/sdc
Installation finished. No error reported.

Add the new disk to md0¶

We first add the new disk as a spare

# mdadm --add /dev/md0 /dev/sdc1
mdadm: added /dev/sdc1

and then "grow" the raid1 to three disks

# mdadm --grow --raid-devices=3 /dev/md0
raid_disks for /dev/md0 set to 3

And then wait for the full sync (hours to days) - see /proc/mdstat
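A small helper for keeping an eye on the sync (resync_progress is my own sketch; it parses /proc/mdstat-style text from stdin, so it can be tested offline):

```shell
# Print the "recovery = N%" / "resync = N%" part of an mdstat progress line.
resync_progress() {
    awk 'match($0, /(resync|recovery) = *[0-9.]+%/) {
        print substr($0, RSTART, RLENGTH)
    }'
}

# On the live system:
#   resync_progress < /proc/mdstat
```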

-- end optional extra mirror device setup ---

md1 setup¶

Remove one disk from md0¶

Remove the disk that will be the first disk in the new md1

# mdadm --fail /dev/md0 /dev/sdb1
# mdadm --remove /dev/md0 /dev/sdb1

Repartition sdb¶

Re-partition /dev/sdb with a new empty partition table and then add a new primary partition on it starting at 2048.

Fdisk output should be like this:

       Device Boot      Start         End      Blocks   Id  System
    /dev/sdb1   *        2048   312581807   156289880   da  Non-FS data
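You can verify the new start sector without reading the fdisk table by eye. This check (starts_at_2048 is my own helper; it reads an fdisk -lu-style partition line on stdin) exits 0 when the partition starts at sector 2048:

```shell
# Exit 0 if the partition line read on stdin starts at sector 2048.
# Handles the optional "*" boot-flag column.
starts_at_2048() {
    awk '{ s = ($2 == "*") ? $3 : $2; exit (s == 2048 ? 0 : 1) }'
}

# On the live system:
#   fdisk -lu /dev/sdb | grep '^/dev/sdb1' | starts_at_2048 && echo OK
```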

Install grub to sdb¶

We now have a disk with a suitable partition table for grub installation, so let's install it:

# grub-install /dev/sdb
Installation finished. No error reported.

Create a new degraded mirror (md1)¶

  • Zero out the previous raid superblock (most probably not needed). Note: an error like "Unrecognised md component device" is to be expected:
# mdadm --zero-superblock /dev/sdb1
mdadm: Unrecognised md component device - /dev/sdb1

Now we can setup md1 in degraded mode

# mdadm --create /dev/md1 --level=raid1 -f -n 1 /dev/sdb1

Note: if doing this step on a pure Squeeze system (before the partial upgrade) you should specify the 1.2 metadata format (the default for mdadm in Wheezy):

# mdadm --create /dev/md1 --metadata=1.2 --level=raid1 -f -n 1 /dev/sdb1

A word of warning: you will need to complete the upgrade to Wheezy before rebooting, as a Squeeze system cannot boot from a metadata version 1.2 mirror.

Option: you can also add the --bitmap=internal flag to --create in order to set up a write-intent bitmap. Note: if doing this with mdadm before the user-space upgrade to Wheezy, the bitmap chunk size will be small. The default in Wheezy is 65536KB, and it can be selected with --bitmap-chunk= .

Housekeeping (precaution)¶

Some housekeeping in case of a system/power failure:

update md config file

We now need to update the system config with awareness of md1

# /usr/share/mdadm/mkconf > /etc/mdadm/mdadm.conf

update initrd files

Make sure the new mdadm.conf is present in the initrd files

# update-initramfs -u -k all

Move the data¶

Make md1 into a physical volume for LVM

# pvcreate /dev/md1

Move the data from md0 to md1

# vgextend vg_raid1 /dev/md1
# pvmove /dev/md0
# vgreduce vg_raid1 /dev/md0
# pvremove /dev/md0
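Between the pvmove and vgreduce steps it is worth confirming that pvmove really emptied md0. A sketch (md0_is_free is my own helper; it parses pvdisplay-style output from stdin):

```shell
# Exit 0 when the "Allocated PE" count in pvdisplay-style output is zero.
md0_is_free() {
    awk '/Allocated PE/ { found = 1; alloc = $3 }
         END { exit (found && alloc == 0 ? 0 : 1) }'
}

# On the live system, after pvmove but before vgreduce:
#   pvdisplay /dev/md0 | md0_is_free && echo "md0 is empty"
```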

Remove md0 and move new spare partition into md1¶

Now all the data is on md1, so md0 can be removed and its disk reused for md1:

# mdadm --stop /dev/md0 

Re-partition sda in the same way as sdb (first partition starting at sector 2048)

Install grub

# grub-install /dev/sda
Installation finished. No error reported.

Add sda1 to /dev/md1

# mdadm --zero-superblock /dev/sda1
# mdadm --add /dev/md1 /dev/sda1
# mdadm --grow /dev/md1 -n 2

Housekeeping (end)¶

Some housekeeping due to the changes in the metadevice setup:

update md config file

We now need to update the system config to reflect the removal of md0

# /usr/share/mdadm/mkconf > /etc/mdadm/mdadm.conf

update initrd files

Make sure the new mdadm.conf is present in the initrd files

# update-initramfs -u -k all
  • Wait for re-sync of md1 (see /proc/mdstat)

  • Complete the upgrade to Wheezy if not done before (mandatory if using 1.2 metadata)

  • Reboot (and hope for the best)...



Published

Aug 12, 2013

Last Updated

2014-02-26 23:14:56+01:00

Author

henrik

Category

HOWTOs

Tags

  • Debian
  • LVM
