Invalid entry

Started by nicedog, August 18, 2020, 02:26:36 AM

nicedog

Hello, thank you for providing this powerful tool.

Long story short, I wish I could go back in time.

I put two 8TB HFS+ hard disks containing 16TB of valuable data into a Buffalo LinkStation, and it formatted them and wrote some files immediately, without asking.

By the time I realized what had happened, over 10 minutes had passed. I took them out, but neither disk was recognized by my Mac anymore. According to diskutil, the Buffalo had reformatted each of them into 5 or 6 XFS partitions.
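For anyone following along, the partition layout can be checked with something like this (the Linux device name is just an example; on the Mac I used diskutil):

# macOS: list all disks and partitions (this is where the 5-6 XFS partitions showed up)
diskutil list

# Linux: roughly the equivalent view of the same disk
lsblk -f /dev/sdb
sudo fdisk -l /dev/sdb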

I used TestDisk to do a quick search; it took over 30 hours, but it was able to find the previous partition and restore it, along with the volume header.
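The TestDisk part was basically just this (TestDisk is menu-driven from there; the device name is an example, and the exact menu path may differ by version):

sudo testdisk /dev/sdb
# then roughly: Create log -> select the disk -> partition table type ->
# Analyse -> Quick Search -> select the found HFS+ partition -> Write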

Then I tried hfsprescue. I'm pasting the output below; I hope it isn't too long.


[~/hfsprescue-3.5-rc1-precompiled/Linux]> sudo ./hfsprescue_x64 -b 8192 -s1 /dev/sdb
hfsprescue 3.5-rc1 2020/04/26 by Elmar Hanlhofer https://www.plop.at

Start: 2020/08/15 23:35:28

*** Force block size: 8192
Signature:                      0x5300, S (Unknown)
LastMountedVersion:             �, last mount was not done by Mac OS X.
FileCount:                      675284480
DirCount:                       0
BlockSize:                      8192
TotalBlocks:                    50331648
AllocationFile StartBlock:      0
ExtentsOverflowFile StartBlock: 0
CatalogFile StartBlock:         0
Total size:                     7452 GB

100.00% scanned (7452.04 GB). Found: 13698 directories, 13055 files.

End: 2020/08/16 23:48:56
Elapsed time: 24 hours, 13 minutes.
Done.


================================================================================
Next step: STEP 2, cleanup file database.
Next command: ./hfsprescue_x64 -s2


[~/hfsprescue-3.5-rc1-precompiled/Linux]> sudo ./hfsprescue_x64 -b 8192 -s2

hfsprescue 3.5-rc1 2020/04/26 by Elmar Hanlhofer https://www.plop.at

Start: 2020/08/17 07:25:42

Cleanup file database:
  Hashing file entries...
  13055 database entries.
  Allocated 1 MB RAM.
  Searching for duplicate database entries...
  Found 354 duplicate entries.
  Creating fresh database...


End: 2020/08/17 07:25:43
Elapsed time: 1.00 seconds.
Fresh database created with 1175 entries. 354 duplicate entries removed.

* Info: Entries with file date in the future have been removed! Usually, those
        are wrong file detections. If you want to check the file names then
        search for 'Error future date' in the log file. If you don't expect
        future file dates, then you can ignore this. The future date tolerance
        is 7 days. To set your own future date tolerance, run again -s2 and
        use '--future-days <days>'.

* Info: Entries with invalid file name encodings have been removed! Usually,
        those are wrong file detections. If you want to check the file names
        then search for 'Error encoding' in the log file. When you don't expect
        asian chars in your file names, then you can ignore this. To enable asian
        chars, run again -s2 and use '--utf8len 2'.

Log file: ./hfsprescue-data/s2.log

================================================================================
Next step: STEP 3, restore files.
Possible parameters for -s3: <device node|image file> [-b <block size>] [-o <offset in bytes>] [-c <file number>] [--alternative]

Next command: ./hfsprescue_x64 -s3 -b 8192 /dev/sdb
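(For completeness, re-running step 2 with the options hinted at above would look something like this; the 30-day value is just an example:)

sudo ./hfsprescue_x64 -b 8192 -s2 --future-days 30
sudo ./hfsprescue_x64 -b 8192 -s2 --utf8len 2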

[~/hfsprescue-3.5-rc1-precompiled/Linux]> sudo ./hfsprescue_x64 -b 8192 -s3 /dev/sdb
hfsprescue 3.5-rc1 2020/04/26 by Elmar Hanlhofer https://www.plop.at

Start: 2020/08/17 17:06:11

*** Force block size: 8192
Signature:                      0x5300, S (Unknown)
LastMountedVersion:             �, last mount was not done by Mac OS X.
FileCount:                      675284480
DirCount:                       0
BlockSize:                      8192
TotalBlocks:                    50331648
AllocationFile StartBlock:      0
ExtentsOverflowFile StartBlock: 0
CatalogFile StartBlock:         0
Total size:                     7452 GB

Extracting the ExtentsOverflowFile to 'restored/ExtentsOverflowFile'.

You are forcing the block size. I assume, the Volume Header is not correct.
Automatic extracting of the ExtentsOverflowFile has been disabled! This means
strong fragmented files will not be restored! When you want to use the
ExtentsOverflowFile, then you have to restore it manually with '--extract-eof'.
Use '--ignore-eof' to restore without ExtentsOverFlowFile.

================================================================================

At this point I really don't know what to do. I don't know what the ExtentsOverflowFile is, and when I tried --extract-eof, it gave me the same output without creating any restored files.
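From memory, the commands I tried were roughly these (I may have had the --extract-eof arguments wrong, so treat them as a sketch):

sudo ./hfsprescue_x64 --extract-eof /dev/sdb -b 8192
sudo ./hfsprescue_x64 -b 8192 -s3 /dev/sdb --ignore-eof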

When I tried --ignore-eof, every file shows "Invalid entry for..." after file #1063:

............
[1060/1175] File: thumb_20160905_122821.jpg (51921 bytes, 50.70 KB)
[1061/1175] File: thumb_20160905_122821_1024.jpg (255681 bytes, 249.69 KB)
[1062/1175] File: thumb_20160905_122825.jpg (44388 bytes, 43.35 KB)
[1063/1175] File: thumb_20160905_122825_1024.jpg (182262 bytes, 177.99 KB)
[1064/1175] File: Invalid entry for @
[1065/1175] File: Invalid entry for Ȁ
[1066/1175] File: Invalid entry for @
[1067/1175] File: Invalid entry for 
.............
[1175/1175] File: Invalid entry for ģä

End: 2020/08/17 17:13:10
Elapsed time: 10.00 seconds.
Done.

The result doesn't look good at all. In the "restored" folder there are only 1603 files (I believe those were created by the Buffalo). Am I screwed? I've spent over two weeks trying to rescue those 16TB of data, but still in vain.

Can you please help? I understand it's very difficult to troubleshoot a case like this, but I need whatever input I can get so I can keep going without giving up...


nicedog

Maybe it's because the drive is NTFS-formatted? Although I've been using HFS+ for a while, I believe at some point I formatted one of them to NTFS. I'll try a different tool first for now.

nicedog

Turns out it is an exFAT-formatted drive. Sorry for the confusion.
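(I checked the filesystem type with something like the following; the device name is just an example:)

lsblk -f
sudo blkid /dev/sdc1   # prints TYPE="exfat" for an exFAT partition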