Plop Forum
General Category => HFS+ Rescue => Topic started by: rtype909 on April 10, 2018, 02:42:42 PM
-
hi,
I accidentally formatted a 2TB HFS+ volume (holding about 1.2TB of data), overwriting it with roughly 300MB of data. I have not written to the drive since.
I have recovered my files (raw), but I wish to see if I can recover the partition table and directory structure. Is this even possible?
Will the 'alternate volume header' have been re-written as well? Or is there a prospect of recovering the headers and catalogs with which to rebuild the original directory structure?
thank you
-
Hello,
I accidentally formatted a 2TB HFS+ volume (holding about 1.2TB of data), overwriting it with roughly 300MB of data. I have not written to the drive since.
Do you mean that you created a new 300MB partition over the old one and formatted it?
Best regards
Elmar
-
Hello Elmar,
I assume it was re-partitioned, because I inadvertently wrote a 300MB ISO image across the 2TB HDD in one step. I just wonder whether the HFS+ volume header still has the catalog intact, even partially, to allow some reconstruction?
thank you and kind regards
-
Was there only one partition on the drive?
Is it an external drive?
-
hi,
yes to external
yes to single partition
Low fragmentation too; I only ever dumped data onto it.
-
First, I would search for and extract the alternate volume header from the end of the old partition. It should still be intact, because only the first 300MB are affected.
https://www.plop.at/en/hfsprescue/find_vh.html
I fear the extents overflow file will have been destroyed.
Then run step 1 "-s1" with the disk device node (not the partition). This scans the whole disk.
Then run step 2.
Then calculate the old partition start with a reference file. See https://www.plop.at/en/hfsprescue/partition_offset.html
Then restore your files with the disk device node and the calculated offset.
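A rough sketch of that sequence, using /dev/rdisk2 as the disk device node (a sketch only; it assumes the --find-vh option and the -s1/-s2/-s3 step flags described on the linked pages, and the offset is a placeholder until the partition start has been calculated):
sudo ./hfsprescue --find-vh /dev/rdisk2 -b 4096                   # search for the volume headers; the linked page covers searching near the end of the disk
sudo ./hfsprescue -s1 /dev/rdisk2 -b 4096                         # step 1: scan the whole disk for file content
sudo ./hfsprescue -s2 /dev/rdisk2                                 # step 2: process the scan results
sudo ./hfsprescue -s3 /dev/rdisk2 -b 4096 -o <calculated offset>  # step 3: restore the files, with the old partition start as offset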
Best regards
Elmar
-
hello,
I shall report back soon on whether this works, once I have cloned the target drive.
Thank you for your advice.
-
hi,
I ran steps 1 and 2 but could get no further. Whilst I can run 'grep', I get nowhere with --find.
I then searched for the extents overflow file and got this:
sudo ./hfsprescue --find-eof /dev/rdisk2 -b 4096
hfsprescue 3.4 2018/02/16 by Elmar Hanlhofer https://www.plop.at
Start: 2018/04/30 18:29:51
*** Force block size: 4096
Signature: 0x5300, S (Unknown)
LastMountedVersion: ?, last mount was not done by Mac OS X.
FileCount: 1207961856
DirCount: 0
BlockSize: 4096
TotalBlocks: 0
AllocationFile StartBlock: 0
ExtentsOverflowFile StartBlock: 0
CatalogFile StartBlock: 0
Total size: 1863 GB
Searching block positions of the Extents Overflow File...
Done.
Does that mean the extents overflow file is destroyed?
-
Search for the extents overflow file, this time with the offset:
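(Presumably rerun along these lines; the exact command was not posted, and the -o value is the partition start found via the alternate volume header, see the next post:)
sudo ./hfsprescue --find-eof /dev/rdisk2 -b 4096 -o 302026752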
*** Using offset 302026752 (288 MB)
Signature: 0x2b48, H+
LastMountedVersion: fsck, last mount was not done by Mac OS X.
FileCount: 246949
DirCount: 21520
BlockSize: 4096
TotalBlocks: 488294668
AllocationFile StartBlock: 1
ExtentsOverflowFile StartBlock: 53816
CatalogFile StartBlock: 885304
Total size: 1863 GB
Using offset 302026752 calcuation base. Starting search from the offset (= 288 MB).
Searching block positions of the Extents Overflow File...
Done.
-
What about the alternate volume header?
-
hi
The alternate volume header was intact, and that is how I obtained the 'offset', as you advised here and in the supporting hfsprescue guide.
I obtained the following info when searching backwards:
Searching backwards for the Alternate Volume Header.
Scanned 40 MB.
============================================
A Volume Header has been found.
Volume Header start: 2000356985856 (Byte), 0x1d1be914c00, 3906947238 (LBA Sector), at 1907689 MB
Signature: 0x2b48, H+
LastMountedVersion: HFSJ, last mount by Mac OS X.
FileCount: 246949
DirCount: 21520
BlockSize: 4096
TotalBlocks: 488294668
AllocationFile StartBlock: 1
ExtentsOverflowFile StartBlock: 53816
CatalogFile StartBlock: 885304
Possible partition start: 302026752 (Byte), 0x12009000, 589896 (LBA Sector), at 288 MB
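For reference, that possible partition start follows from the standard HFS+ layout, in which the alternate volume header sits 1024 bytes before the end of the volume:
volume size     = TotalBlocks x BlockSize = 488294668 x 4096 = 2000054960128 bytes
partition start = volume header start + 1024 - volume size
                = 2000356985856 + 1024 - 2000054960128
                = 302026752 bytes (288 MB, LBA sector 589896)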
-
Extract the alternate volume header:
https://www.plop.at/en/hfsprescue/extract_vh.html
Then try to restore the files using the --vh-file parameter.
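For the extraction, presumably something along these lines, using the volume header position found above (the exact --extract-vh syntax is an assumption; the linked page has the details):
sudo ./hfsprescue --extract-vh /dev/rdisk2 2000356985856   # assumed syntax; should write the header to ./restored/VolumeHeader
The extracted header file is then passed to the restore commands via --vh-file, together with the -o partition offset.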
-
hello [thank you once more]
With that method I get this, if I have applied the steps correctly:
sudo ./hfsprescue --one-file /dev/rdisk2 198 --vh-file ./restored/VolumeHeader
hfsprescue 3.4 2018/02/16 by Elmar Hanlhofer https://www.plop.at
Start: 2018/05/09 20:56:21
*** Using external Volume Header file './restored/VolumeHeader'.
Signature: 0x2b48, H+
LastMountedVersion: HFSJ, last mount by Mac OS X.
FileCount: 246949
DirCount: 21520
BlockSize: 4096
TotalBlocks: 488294668
AllocationFile StartBlock: 1
ExtentsOverflowFile StartBlock: 53816
CatalogFile StartBlock: 885304
Total size: 1863 GB
Extracting the ExtentsOverflowFile to 'restored/ExtentsOverflowFile'.
Size: 14680064 bytes, 14.00 MB
Clump Size: 14680064 bytes
Total Blocks: 3584
Extent 0: Start 53816, Num 3584
Extent 1: Start 0, Num 0
Extent 2: Start 0, Num 0
Extent 3: Start 0, Num 0
Extent 4: Start 0, Num 0
Extent 5: Start 0, Num 0
Extent 6: Start 0, Num 0
Extent 7: Start 0, Num 0
File created.
Invalid ExtentsOverflowFile: Invalid header in 'restored/ExtentsOverflowFile'.
So I used 'force-eof', but the restored files are corrupted, from very small ones through to large ones.
-
You also have to use the -o parameter.
Use 'disk' instead of 'rdisk'.
sudo ./hfsprescue --one-file /dev/disk2 198 --vh-file ./restored/VolumeHeader -o 302026752
-
Any news?
-
hi,
The only news is that I have been away on holiday and have left this 'project' for now.
I did try what you suggested, but to no avail. On a quiet day soon, I am going to start from the beginning, try again, log everything, and come back to you.
Thank you for asking and please bear with me.
Kind regards