So, after we added our RAID1 disk md1 to our vg, its storage space is ready to be allocated to a logical volume. In LVM-speak, a 'logical volume' is the disk LVM exposes to the system, independent of how you intend to format it. We are in effect growing the size of the "disk" LVM exposes to the system - so far on the physical layer by installing disks, and on the LVM-physical layer by adding 'physical volumes' to our 'volume group' - so we now need to tell LVM to use the additional storage space to grow the exposed drive. To do this we run the lvextend tool, providing the size by which we wish to extend the volume. Use lvdisplay to get the path of the 'logical volume' we want to grow inside the 'volume group'.

sudo lvextend /dev/storagevg/onelv /dev/md1

We want to extend the logical volume by 100% of the newly added storage space, and the man page of lvextend tells us that "lvextend /dev/vg01/lvol01 /dev/sdk3" tries to extend the size of that logical volume by the amount of free space on physical volume /dev/sdk3, which "is equivalent to specifying '-l +100%PVS' on the command line". Just for the record, the command "sudo lvextend /dev/storagevg/onelv -l +100%PVS" gave me a "segmentation fault" error, hence the equivalent form above.

The last step in the process is to resize the file system residing on the logical volume /dev/storagevg/onelv so that it uses the additional space. A prior e2fsck might be needed to make sure everything in this file system is okay. In my case it is an Ext3 file system, so I am using the resize2fs command.

What looked like an easy and problem-free process turned out to have a few surprises in store. Horror. Even more disappointing, the drive /dev/md1 I had just created was now a strange /dev/md_d1 with a slew of other devices named /dev/md_d1/d1p1 etc. A bit of forum post reading later, I had found out that my process did not add my new RAID1 /dev/md1 to my /etc/mdadm/mdadm.conf file, which I fixed by adding:

ARRAY /dev/md1 level=raid1 num-devices=2 UUID=3eaf73fc:0559f59a:e7cc9877:xxxxx

This is effectively the output of "sudo mdadm --detail --scan"; just issue this command and add the output to your mdadm.conf file. After that, another reboot added /dev/md0 and /dev/md1 properly to my system. I think there is a wizard on every system which creates this file; it should come up when you do "sudo dpkg-reconfigure mdadm". Use the command "cat /proc/mdstat" to see your RAID arrays.

The other thing was my fstab entry for the file system which I layered on top of my logical volume /dev/storagevg/onelv. (The commented-out line in my fstab is the newer UUID-based format which newer Debians/Ubuntus should use.) I reverted to the old format of giving the /dev path in fstab, and it seems to work.
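For reference, here is a minimal sketch of the whole sequence described above, assuming the names used in this post (volume group storagevg, logical volume onelv, new RAID1 array /dev/md1) and an Ext3 file system resized offline; the unmount/remount steps and the mount point handling are my assumptions, so adapt them to your setup.

```sh
# Grow the logical volume by all free space contributed by /dev/md1
# (the explicit-PV form; "-l +100%PVS" segfaulted for me)
sudo lvextend /dev/storagevg/onelv /dev/md1

# Check the file system before resizing it offline
sudo umount /dev/storagevg/onelv          # assumes the fs is not in use
sudo e2fsck -f /dev/storagevg/onelv

# Grow the Ext3 file system to fill the enlarged logical volume
sudo resize2fs /dev/storagevg/onelv
sudo mount /dev/storagevg/onelv           # remounts via the fstab entry

# Record the array in mdadm's config so it survives reboots
# (the path is /etc/mdadm/mdadm.conf on Debian/Ubuntu)
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf

# Verify the assembled arrays
cat /proc/mdstat
```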