Greetings
I messed up my volume group vg00 and I need help to get it back.
At the moment I'm running from a live CD. However, I managed to mount /boot and / (root),
and chroot into the installed system.

mint@mint ~ $ sudo mount -t proc none /mnt/opensuse/proc
mint@mint ~ $ sudo mount /dev/sdb2 /mnt/opensuse
mint@mint ~ $ sudo mount /dev/sdb1 /mnt/opensuse/boot
mint@mint ~ $ sudo mount --rbind /sys /mnt/opensuse/sys
mint@mint ~ $ sudo mount --rbind /dev /mnt/opensuse/dev
mint@mint ~ $ sudo chroot /mnt/opensuse /bin/bash

# cat /etc/fstab
# /etc/fstab: static file system information.
# <file system>                              <mount point>      <type>   <options>        <dump> <pass>
/dev/disk/by-id/ata-ST380215A_5QZ0A66D-part3 swap               swap     defaults         0 0
/dev/disk/by-id/ata-ST380215A_5QZ0A66D-part2 /                  ext4     acl,user_xattr   1 1
/dev/disk/by-id/ata-ST380215A_5QZ0A66D-part1 /boot              ext4     acl,user_xattr   1 2
/dev/vg00/lv_home                            /home              ext4     acl,user_xattr   1 2
/dev/vg00/lv_felles                          /mnt/felles        ext4     defaults         1 2
/dev/vg00/lv_opt                             /opt               ext4     acl,user_xattr   1 2
proc                                         /proc              proc     defaults         0 0
sysfs                                        /sys               sysfs    noauto           0 0
debugfs                                      /sys/kernel/debug  debugfs  noauto           0 0
usbfs                                        /proc/bus/usb      usbfs    noauto           0 0
devpts                                       /dev/pts           devpts   mode=0620,gid=5  0 0
#UUID=23b07c85-4984-4b31-b849-889bcca7874c   /boot              ext4     defaults         1 2

# cat /proc/partitions
major minor  #blocks  name

   8        0   78150744 sda
   8        1      71680 sda1
   8        2   12587008 sda2
   8        3    2096128 sda3
   8       16  244198584 sdb
   8       32  488386584 sdc

Here’s some info on the disk.

# fdisk -l
Disk /dev/sdb: 80.0 GB, 80026361856 bytes
255 heads, 63 sectors/track, 9729 cylinders, total 156301488 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000c33e1
   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048      145407       71680   83  Linux
/dev/sdb2   *      145408    25319423    12587008   83  Linux
/dev/sdb3        25319424    29511679     2096128   82  Linux swap / Solaris

This is what I got about my missing volume group

# vgcfgrestore vg00
  Couldn't find device with uuid bf4yVy-k8Ue-V9PI-27V1-JumP-ETLU-J4eeuD.
  Cannot restore Volume Group vg00 with 1 PVs marked as missing.
  Restore failed.

# vgcfgrestore --debug -l vg00
  File:         /etc/lvm/archive/vg00_00009.vg
  Couldn't find device with uuid bf4yVy-k8Ue-V9PI-27V1-JumP-ETLU-J4eeuD.
  VG name:      vg00
  Description:  Created *before* executing '/sbin/vgchange -a y'
  Backup Time:  Sun Oct 3 15:13:16 2010

  File:         /etc/lvm/archive/vg00_00010.vg
  VG name:      vg00
  Description:  Created *before* executing '/sbin/vgscan --mknodes'
  Backup Time:  Sun Oct 3 15:30:30 2010

  File:         /etc/lvm/archive/vg00_00011.vg
  Couldn't find device with uuid jZ14c5-MvJg-p8gA-uHsH-MVeD-Uys4-vdnrMr.
  Couldn't find device with uuid NTBLZP-ygYG-4qYF-29W9-kUgM-CO1w-8uLXLB.
  VG name:      vg00
  Description:  Created *before* executing '/sbin/vgscan --mknodes'
  Backup Time:  Sun Oct 3 15:30:30 2010

  File:         /etc/lvm/archive/vg00_00012.vg
  VG name:      vg00
  Description:  Created *before* executing '/sbin/vgscan --mknodes'
  Backup Time:  Sun Oct 3 15:30:31 2010

  File:         /etc/lvm/archive/vg00_00013.vg
  VG name:      vg00
  Description:  Created *before* executing '/sbin/vgscan --mknodes'
  Backup Time:  Sun Oct 3 15:30:31 2010

  File:         /etc/lvm/archive/vg00_00014.vg
  VG name:      vg00
  Description:  Created *before* executing '/sbin/vgchange -a y'
  Backup Time:  Sun Oct 3 15:30:31 2010

  File:         /etc/lvm/archive/vg00_00015.vg
  VG name:      vg00
  Description:  Created *before* executing '/sbin/vgchange -a y'
  Backup Time:  Sun Oct 3 15:30:31 2010

  File:         /etc/lvm/archive/vg00_00016.vg
  VG name:      vg00
  Description:  Created *before* executing '/sbin/vgchange -a y'
  Backup Time:  Sun Oct 3 15:30:32 2010

  File:         /etc/lvm/archive/vg00_00017.vg
  VG name:      vg00
  Description:  Created *before* executing '/sbin/vgchange -a y'
  Backup Time:  Sun Oct 3 15:30:32 2010

  File:         /etc/lvm/archive/vg00_00018.vg
  VG name:      vg00
  Description:  Created *before* executing '/sbin/lvextend -l +1280 /dev/vg00/lv_opt'
  Backup Time:  Sun Sep 11 12:35:13 2011

  File:         /etc/lvm/backup/vg00
  VG name:      vg00
  Description:  Created *after* executing '/sbin/lvextend -l +1280 /dev/vg00/lv_opt'
  Backup Time:  Sun Sep 11 12:35:13 2011

# vgscan -v
    Wiping cache of LVM-capable devices
    Wiping internal VG cache
  Reading all physical volumes.  This may take a while...
    Finding all volume groups
  No volume groups found

# pvscan -v
    Wiping cache of LVM-capable devices
    Wiping internal VG cache
    Walking through all physical volumes
  No matching physical volumes found

# pvs -o +uuid
# lvs -o +devices

If I do a "pvcreate - … ", will I lose/overwrite/destroy the volume group config already in /etc/lvm?
What options should I use?
What's the next step?
Any suggestions on how I can get LVM to recognize my vg00?

Well, it's the one disk I have that should have been vg00.
My guess was that the LVs were on /dev/sdb2. (Have mercy, I might be a bit slow on LVM :question:)
I was thinking along the lines of "6.4. Recovering Physical Volume Metadata".

Can I recreate the PV on that disk? And then maybe run vgcfgrestore?

durque wrote:
> Well, it's the one disk I have that should have been vg00.
> My guess was that the LVs were on /dev/sdb2. (Have mercy, I might be a bit slow on LVM :question:)
> I was thinking along the lines of "6.4. Recovering Physical Volume Metadata".
>
> Can I recreate the PV on that disk? And then maybe run vgcfgrestore?

That is very unsatisfactory information :frowning:

You can of course do a lot, but IMHO it is not advisable to do anything without first having a clear picture of what the situation should be.

But back to what you posted.
The fdisk listing, strangely enough, only has info about sdb. Where is sda?
And we need to know for sure what /dev/disk/by-id/ata-ST380215A_5QZ0A66D from your fstab is, thus we need

 ls -l /dev/disk/by-id

And as you seem to know one of the UUIDs, please also add

ls -l /dev/disk/by-uuid
              

hcvv wrote:
> But back to what you posted.
> The fdisk listing, strangely enough, only has info about sdb. Where
> is sda?

Not only that, but the information from /proc/partitions contradicts that
from fdisk! :frowning: I don’t believe the information is cut-and-pasted from
the same session. In particular, as you say, there is fdisk output missing.

durque, please rerun those commands and show both the commands used and
the full output. Please also run and show the output from

pvscan
vgscan
lvscan

Well, you are both right.

About the fdisk listing: the other entries are empty, and to my knowledge they've been empty all the time, so I left them out to save post space … However, here's the full listing. These are very small disks.

# fdisk -l

Disk /dev/sda: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0005095f

   Device Boot      Start         End      Blocks   Id  System

Disk /dev/sdb: 80.0 GB, 80026361856 bytes
255 heads, 63 sectors/track, 9729 cylinders, total 156301488 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000c33e1

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048      145407       71680   83  Linux
/dev/sdb2   *      145408    25319423    12587008   83  Linux
/dev/sdb3        25319424    29511679     2096128   82  Linux swap / Solaris

Disk /dev/sdc: 250.1 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders, total 488397168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000d461c

   Device Boot      Start         End      Blocks   Id  System

I should have seen the contradiction in /proc/partitions. It's clearly not from this system. Here's the correct one.

# cat /proc/partitions
major minor  #blocks  name

   7        0     657024 loop0
   8        0  488386584 sda
   8       16   78150744 sdb
   8       17      71680 sdb1
   8       18   12587008 sdb2
   8       19    2096128 sdb3
   8       32  244198584 sdc

Now for the listing of devices (/dev)

# ls -l /dev/disk/by-id
total 0
lrwxrwxrwx 1 root root  9 Oct 20 21:18 ata-Optiarc_DVD_RW_AD-5170A -> ../../sr0
lrwxrwxrwx 1 root root  9 Oct 21 23:02 ata-ST3500418AS_5VMB99GC -> ../../sda
lrwxrwxrwx 1 root root  9 Oct 21 23:02 ata-ST380215A_5QZ0A66D -> ../../sdb
lrwxrwxrwx 1 root root 10 Oct 21 23:02 ata-ST380215A_5QZ0A66D-part1 -> ../../sdb1
lrwxrwxrwx 1 root root 10 Oct 21 23:02 ata-ST380215A_5QZ0A66D-part2 -> ../../sdb2
lrwxrwxrwx 1 root root 10 Oct 21 23:02 ata-ST380215A_5QZ0A66D-part3 -> ../../sdb3
lrwxrwxrwx 1 root root  9 Oct 21 23:02 ata-WDC_WD2500AAJB-00J3A0_WD-WCAV21275393 -> ../../sdc
lrwxrwxrwx 1 root root  9 Oct 21 23:02 scsi-SATA_ST3500418AS_5VMB99GC -> ../../sda
lrwxrwxrwx 1 root root  9 Oct 21 23:02 scsi-SATA_ST380215A_5QZ0A66D -> ../../sdb
lrwxrwxrwx 1 root root 10 Oct 21 23:02 scsi-SATA_ST380215A_5QZ0A66D-part1 -> ../../sdb1
lrwxrwxrwx 1 root root 10 Oct 21 23:02 scsi-SATA_ST380215A_5QZ0A66D-part2 -> ../../sdb2
lrwxrwxrwx 1 root root 10 Oct 21 23:02 scsi-SATA_ST380215A_5QZ0A66D-part3 -> ../../sdb3
lrwxrwxrwx 1 root root  9 Oct 21 23:02 scsi-SATA_WDC_WD2500AAJB-_WD-WCAV21275393 -> ../../sdc
lrwxrwxrwx 1 root root  9 Oct 21 23:02 wwn-0x5000c50024c45b08 -> ../../sda
lrwxrwxrwx 1 root root  9 Oct 21 23:02 wwn-0x50014ee1ac42a722 -> ../../sdc

# ls -l /dev/disk/by-uuid
total 0
lrwxrwxrwx 1 root root 10 Oct 21 23:02 23b07c85-4984-4b31-b849-889bcca7874c -> ../../sdb1
lrwxrwxrwx 1 root root 10 Oct 21 23:02 2713840c-ae7f-4d71-a825-3ef678d145ef -> ../../sdb3
lrwxrwxrwx 1 root root 10 Oct 21 23:02 4698b7db-b9f6-4ff4-a2b4-900ceec2c1f1 -> ../../sdb2

And finally the physical volume, volume group and LV info. Nothing's changed there since the first post.

# pvscan -v
    Wiping cache of LVM-capable devices
    Wiping internal VG cache
    Walking through all physical volumes
  No matching physical volumes found

# vgscan -v
    Wiping cache of LVM-capable devices
    Wiping internal VG cache
  Reading all physical volumes.  This may take a while...
    Finding all volume groups
  No volume groups found

# lvscan -v
    Finding all logical volumes
  No volume groups found

I also have some stuff in /etc/lvm, if that's of interest.

# ls -lR /etc/lvm
total 28
drwxr-xr-x 2 root root  4096 Jan 12  2012 archive
drwxr-xr-x 2 root root  4096 Jan 12  2012 backup
-rw------- 1 root root  1974 Nov  1 22:20 .cache
-rw-r--r-- 1 root root 10937 Jan 12  2012 lvm.conf
drwxr-xr-x 2 root root  4096 Jan 12  2012 metadata

./archive:
total 48
-rw------- 1 root root 3345 Oct 10  2010 old_vg_00000.vg
-rw------- 1 root root 3312 Oct 10  2010 old_vg_00001.vg
-rw------- 1 root root 1763 Oct  3  2010 vg00_00009.vg
-rw------- 1 root root 1766 Oct  3  2010 vg00_00010.vg
-rw------- 1 root root 3314 Oct  3  2010 vg00_00011.vg
-rw------- 1 root root 3314 Oct  3  2010 vg00_00012.vg
-rw------- 1 root root 1766 Oct  3  2010 vg00_00013.vg
-rw------- 1 root root 1763 Oct  3  2010 vg00_00014.vg
-rw------- 1 root root 3311 Oct  3  2010 vg00_00015.vg
-rw------- 1 root root 3311 Oct  3  2010 vg00_00016.vg
-rw------- 1 root root 1763 Oct  3  2010 vg00_00017.vg
-rw------- 1 root root 1785 Sep 11  2011 vg00_00018.vg

./backup:
total 8
-rw------- 1 root root 3347 Oct 10  2010 old_vg
-rw------- 1 root root 1784 Sep 11  2011 vg00

./metadata:
total 0

That's it, I think.
Any suggestions?

Please, next time do not leave anything out of, or change anything in, the computer output you post between CODE tags. The posting between CODE tags should be a copy/paste including both prompts (at the beginning and the end), and that should give us the assurance that you did not cheat there. Of course you can still cheat, but then most people will go do other things than helping you. And when it is unavoidable to leave things out (readable passwords, e.g., or excessively long listings), then at least tell us what you did.

And now for your fdisk listing.
sda seems to be 500 GB, sdb 80 GB and sdc 250 GB. How do you explain that you call the disks that weren't in the earlier listing "very small disks", while in fact they are a multiple of the size of sdb?

From the by-id listing we see that sdb is indeed providing you with the partitions for /boot, / and swap.

None of the by-uuid entries of the three disks is the same as the UUID not found by the LVM software.

There seems to be no partition table at all on the two other disks (or maybe an empty one). That is not impossible, because LVM does not really need to use partitions; it can use a whole disk.

My idea is that you should really have an idea of how your LVM setup was. But I get the strong feeling that you have no documentation about the setup at all. When you have the idea that besides the system disk there are only two very small and completely unimportant other disks in the system, then IMHO you have no idea at all about what is what and what should be what. A very bad start for any recovery.

We must at least know what the partitioning, if any, of those two disks was. And which PVs were created on them.

That does not answer my question at all. If those two disks are not involved in the LVM configuration, where are the ones that are? The small disk sdb is completely used, IMHO.

As long as you cannot even identify the hardware where those LVM Physical Volumes are (or have been), we have only vaporware.

hcvv wrote:
> That does not answer my question at all. If those two disks are not
> involved in the LVM configuration, where are the ones that are? The
> small disk sdb is completely used, IMHO.
> As long as you cannot even identify the hardware where those LVM
> Physical Volumes are (or have been), we have only vaporware.

Right. There's no evidence of any LVM instance on this system. The OP
appears to keep providing only partial information, and to provide data
from several different systems. What is in /etc/lvm/backup?
What are the full contents of the /etc/lvm/archive files? (the alleged
contents don’t look like the contents of my files)

I don’t see any way to help, I’m afraid.

Looking back at my first post, I asked a specific question regarding using pvcreate and restoring metadata. The question was repeated.

From the logs I have provided, there is evidence of a backup of the LVM configuration and metadata (/etc/lvm/backup/vg00). Now, from what I've learned from posts elsewhere, vgscan, pvscan, etc. will show you nothing if the metadata is messed up. Right?

The metadata backup can be used to restore the VG metadata using pvcreate and vgcfgrestore, and then the VG can finally be reactivated?

I guess the task is to get back the VG name and the original PV UUID, which is to be found in /etc/lvm/backup/vg00?
Now, to recreate the physical volume with the same UUID, one should use pvcreate?

How do I verify that I'm using the right UUID?
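
Something along these lines is what I have in mind, purely as a rough sketch; the "id" recorded for pv0 in the backup file should be the authoritative UUID to reuse, and the target device below is only a placeholder until it is certain which device actually held the PV:

 grep -A 3 'pv0 {' /etc/lvm/backup/vg00         # the "id = ..." line is the PV UUID
 pvcreate --uuid "<id-from-pv0>" --restorefile /etc/lvm/backup/vg00 /dev/sdXN
 vgcfgrestore -f /etc/lvm/backup/vg00 vg00      # restore the VG metadata from the backup
 vgchange -ay vg00                              # reactivate the volume group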

If you insist on destroying your data, "man pvcreate" provides enough information to recreate PVs with the correct UUID.

Your sdb is 80 GB, but the existing partitions occupy just about 15 GB. Maybe the rest was used for LVM, but you never answered the question of where your volumes were actually located.
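
For what it is worth, that estimate follows directly from the fdisk output earlier in the thread (512-byte sectors):

 echo $(( 29511679 * 512 ))     # last sector used by sdb1-sdb3: about 15.1 GB
 echo $(( 156301488 * 512 ))    # total sectors on sdb: about 80.0 GB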

Now, apart from your question about how to get pvcreate to create a PV with a known UUID, what do you think you will use as the main parameter of the pvcreate command: the PhysicalVolume:

Each PhysicalVolume can be a disk partition, whole disk, meta device, or loopback file

(from man pvcreate).
In other words, which disk or disk partition do you intend to offer to pvcreate?
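
Just to make the two possibilities concrete (the device names below are placeholders only, not commands to run on this system):

 pvcreate /dev/sdX      # a whole disk as PV: no partition table involved
 pvcreate /dev/sdXN     # a partition as PV: the partition (conventionally type 8e) must exist first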

Sorry, if it looks to you as if we are walking a different path from you, you are correct. As long as we do not even understand where that LVM of yours was (and is going to be) located, the chances that we really understand what to do are next to nil. And I am afraid that @dhj-novell thinks the same.

It is not that we are not willing to help you, but we really do not have enough information. And what we do have is contradictory.

For sure sdb is part of the VG. There were three LVs on it; all must have been in sdb2. For sure sda was never part of the VG. sdc might have been a PV in the VG. I don't think there were any partitions on it that were used for LVs, and the whole-disk approach was not used. Included, but never used?

To my understanding there was a single VG, vg00, with a single PV (sdb) and three LVs (sdb2).

Sidetrack: In a perfect world all home computers would probably be configured and documented using cfengine. :slight_smile:

durque:

For sure sdb is part of the VG. There were three LVs on it; all must have been in sdb2. For sure sda was never part of the VG. sdc might have been a PV in the VG. I don't think there were any partitions on it that were used for LVs, and the whole-disk approach was not used. Included, but never used?

To my understanding there was a single VG, vg00, with a single PV (sdb) and three LVs (sdb2).

Sidetrack: In a perfect world all home computers would probably be configured and documented using cfengine. :slight_smile:

This still makes no sense, as I will try to explain to you.

When we forget about the large disks sda and sdc, we have only the small disk sdb. But sdb is partitioned and has sdb1, sdb2 and sdb3, which are used for /boot, / and swap. Thus it is not possible to use sdb (which is the whole disk) for a PV. At most you could use something like sdb4 for a PV, but there is no sdb4 (or any other).

If it is/was true that you had a single PV on the whole of sdb (which is not true, as pointed out above), then you really can/could create a VG (vg00) on that single PV. And thus you could say vg00 is on sdb.

If you then create three LVs on that vg00, you could only say that those three LVs are on sdb, because there is no sdb2 at all in this assumed configuration.

Thus this one sentence of yours ("to my understanding …") brings not a single small piece of understanding to me.

BTW, it eludes me completely why one would create a VG with three LVs on one single PV where one could also simply create three partitions. This may be none of my business, but understanding the logic behind the "why" of this LVM configuration could help in understanding its implementation.

A second thing that comes to mind is that all the time spent on this very slow discussion (a week now without much progress) could have been spent by you on creating some partitions (or a new LVM config) and restoring from your backup, and you would have been up and running days ago.

And re-reading your first post, and knowing that /boot, / and swap are available, I also do not quite understand why you needed a live/rescue CD, because the system must be bootable.

For those interested.

Examining the /etc/lvm/backup/vg00 file showed that there was a fourth partition on /dev/sda.
The file also included the number of sectors used for the partition, and the unallocated space
reported by fdisk was almost a match in sectors. I guess the partition table got messed up somehow.
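
For anyone who wants to repeat that cross-check, the relevant numbers can be pulled straight out of the metadata backup (field names as in the LVM2 text-format backup; dev_size and pe_start are in 512-byte sectors):

 grep -E 'id =|device =|dev_size|pe_start|pe_count' /etc/lvm/backup/vg00
 # compare dev_size with the unallocated sectors that fdisk reports for the disk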

I created a primary LVM partition (ID 8e) using the remaining sectors of /dev/sda, rebooted,
and everything, that is the VG and the LVs, was back and mounted. Everything intact.