
Linux volume exercise

This exercise will familiarize you with the various methods Linux uses to manage storage, including RAID and LVM.

Volume management from the command line is tricky, involved and complicated. Whilst it is possible to hide the complexity underneath a web interface or installer front end, being a little familiar with the underlying tools will let you repair all sorts of array and disk problems.

You will need to undertake this exercise on an Ubuntu virtual machine; an OpenVZ container will not do.

Making some volumes to play with

First we will make a number of empty files to use as blocks of storage. Log in to the VM and become root.

 cd /tmp
 mkdir junk
 cd junk
 for i in 1 2 3 4 5
 do
 echo creating disk$i
 dd bs=1M if=/dev/zero of=disk$i count=50
 done

You now have five 50-meg volumes to play with.
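
One way to double-check your handiwork is to list the files with human-readable sizes:

 ls -lh /tmp/junk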

Launch fdisk on the first volume.

 fdisk disk1

Press p to print the partition table. Do you see any partitions listed? How many cylinders on the device? What are the minimum and optimal I/O sizes on this device? How big is the device in bytes? How big is the device in megs?

If we wrote 50 1M blocks to each file, why doesn't fdisk show the device size as 50 megabytes?
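
As an aside, fdisk can print the same summary non-interactively; it may grumble about disk1 not being a block device, but the figures are the same:

 fdisk -l disk1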

Quit fdisk with q, then make a file system on the device.

 mkfs.ext4 disk1

Say yes to the question regarding disk1 not being a block device. How many blocks reserved for the super user? How many inodes on the device? How many files could you create on this file system?
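
If the figures scrolled past too quickly, tune2fs can read them back out of the superblock at any time:

 tune2fs -l disk1 | more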

Mount the device:

 mount -o loop disk1 /mnt

How much space is available for files? Why has it changed? Unmount the device:

 umount /mnt

Given that the -m option to mkfs.ext4 sets the percentage of blocks to reserve for the super user, give the command line to create a file system with no reserved blocks.
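
Once you've tried your answer, tune2fs will show whether the reserved block count really dropped to zero:

 tune2fs -l disk1 | grep -i reserved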

Now zero the device.

 dd if=/dev/zero bs=1M of=disk1 count=50

If we have multiple 50 meg devices we can stick them together to make one larger device. What is the problem with doing this?

LVM won't let us create a volume group on disk files, so we'll create a RAID array instead. You'll need to install a mail transport agent so mdadm can deliver failure notices.

 apt-get install exim4-daemon-light

Tell it you want local delivery only (option 4), accept the default listening configuration, and tell it to split the configuration into small files. Now install mdadm:

 apt-get install mdadm

You want to accept the defaults here as this is only a test install and you are not booting from an array.

You'll need to map loop devices to the files we created earlier with the following commands:

 for i in 1 2 3 4 5
 do
 losetup /dev/loop$i disk$i
 done

This will give you /dev/loop1, /dev/loop2, /dev/loop3 and so on, mapped to our five files.
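
You can verify the mappings at any time with:

 losetup -a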

Now create a test array with 3 devices:

 mdadm --create --level=5 -n3 /dev/md0 /dev/loop1 /dev/loop2 /dev/loop3

Now take a look at /proc/mdstat and see what you find:

 cat /proc/mdstat|more

How large is our RAID array? Is this what we expect to see? Why?
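
If the initial sync is still running, you can watch it finish (press Ctrl-C to exit watch):

 watch -n1 cat /proc/mdstat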

Now let us turn our RAID array into a physical volume for the LVM2 tools.

 pvcreate /dev/md0

Now run:

 pvs

Do you notice that it shows the physical volume we created and the one that holds the file system?

How much space do you see on the physical volume?
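
For a more detailed view of a single physical volume, try:

 pvdisplay /dev/md0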

Now create a new volume group:

 vgcreate junk /dev/md0

Now take a look at your volume groups:

 vgs

If you want to take a closer look, try:

 vgdisplay junk

How much space is available on junk? Why the drop in space?
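
Comparing sizes is easier when everything is reported in the same units; vgs accepts a --units flag for this:

 vgs --units m junk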

Now we need to create a logical volume in the volume group to store data.

 lvcreate -n crud -l 24 junk

-n crud names the logical volume, -l 24 says use 24 extents of 4 MB each, and the final argument junk is the volume group to create it in.
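
You can confirm the new logical volume exists and is the size you asked for with:

 lvs junk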

Now let us create a file system on the device:

 mkfs.ext4 -m0 /dev/junk/crud

Now mount the file system and check free space:

 mount /dev/junk/crud /mnt
 df -h /mnt

How much space is available?

Unmount the file system:

 umount /mnt

Now fail a device out of the raid array:

 mdadm --fail /dev/md0 /dev/loop3

Now take a look at /proc/mdstat. How many devices are working, and is the array running? For more info try:

 mdadm --detail /dev/md0|more

A failed device must be removed from the array before it can be added back again:

 mdadm --remove /dev/md0 /dev/loop3

Now take another look at /proc/mdstat. How many devices does it show?

Now we re-add the now-working /dev/loop3 back to the array:

 mdadm --add /dev/md0 /dev/loop3

Look at /proc/mdstat and assure yourself that it is working correctly.

We just bought two more disks and decided we'd like to add them to the existing array. This turns out to be quite an exercise, as we must first add the devices, then grow the array, extend the physical volume, enlarge the logical volume, and finally resize the file system.

Extending the array:

 mdadm --add /dev/md0 /dev/loop4
 mdadm --add /dev/md0 /dev/loop5

Now take a look at /proc/mdstat. Why hasn't our array size increased?

Now grow the array:

 mdadm --grow --raid-devices=5 --backup-file=/tmp/stuff /dev/md0

Take a look at /proc/mdstat, any idea why the reshape is taking so long?
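
As an aside, the md driver deliberately throttles rebuilds and reshapes so that normal I/O isn't starved. If you're impatient, you can raise the minimum rebuild speed (the value is in KB/s):

 sysctl -w dev.raid.speed_limit_min=50000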

Now look at the bottom few kernel messages once the reshape is done:

 dmesg|tail

What did the capacity change to?

Now extend the physical volume:

 pvresize /dev/md0
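
Run pvs again and satisfy yourself that the physical volume has grown:

 pvs /dev/md0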

Now you'll need to extend the logical volume. To find out how large you can make it, execute:

 vgdisplay junk

Note that our volume group now contains 49 physical extents.

Now extend the logical volume:

 lvextend /dev/junk/crud -l 49

Finally, resize your file system:

 resize2fs /dev/junk/crud
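
To confirm the resize took effect, mount the file system once more and check free space:

 mount /dev/junk/crud /mnt
 df -h /mnt
 umount /mnt
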
Now to clean up after this exercise, first reboot the VM to clear the losetup mappings, then log back in and become root.
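
As an aside, if you'd rather not reboot, you could instead tear the stack down by hand: deactivate the volume group, stop the array, and detach the loop devices, then skip straight to deleting the files. A sketch, using the names from earlier:

 vgchange -an junk
 mdadm --stop /dev/md0
 for i in 1 2 3 4 5
 do
 losetup -d /dev/loop$i
 done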

 cd /tmp
 rm -rf junk

Phew, all done!
