
xfs_undelete's People

Contributors

axxapy, cab404, ianka, marcone, phcoder


xfs_undelete's Issues

`Please specify a block device or an XFS filesystem image.`

Hi,
I entered single-user mode with `telinit 1`, then ran the following command, but got this error:

[root@localhost ~]# xfs_undelete /dev/mapper/centos-root
Please specify a block device or an XFS filesystem image.

Please give me some advice.
Thanks!
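For context, this message appears when the argument fails the tool's block-device/image check. A quick way to see what a path actually points at (the temp image below is a stand-in created for illustration; paths under /dev/mapper are symlinks, so resolve them first):

```shell
# Create a stand-in image file just for this demonstration.
truncate -s 1M /tmp/xfs_demo.img
# Resolve symlinks the way you would for /dev/mapper/centos-root.
target=$(readlink -f /tmp/xfs_demo.img)
# A real device prints "block special file"; an image prints "regular file".
stat -c '%F' "$target"
```

If the output is neither of those two, the path is not something xfs_undelete can scan.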

Thanks and two suggestions

Thank you for the great tool. Two suggestions:
1. I changed the filename timestamp from ctime to mtime. Unlike on ext4 there is no directory name to go by, so the mtime is the only hint of what a file is. I would like an option to use either ctime or mtime for the file name, or to have the file's original mtime restored (the current code does not do this).
2. I added a size filter. Many recovered files that were not C sources but got named .c came out at more than 10 GB, which is very slow. An option for this would help; `$loffset*$::blocksize` may be enough for the check.
I changed the code myself, but I don't know Tcl; I just copy-pasted a lot of `if`s to do it, so I won't open a pull request. These are just suggestions.
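The size filter proposed in point 2 could be sketched like this (all numbers are made up; the real check would live in the Tcl code, using the last logical offset times the blocksize as the candidate's apparent size):

```shell
loffset=3000000                          # last logical block offset of the candidate
blocksize=4096                           # filesystem blocksize
maxbytes=$((10 * 1024 * 1024 * 1024))    # 10 GiB cap, illustrative
size=$((loffset * blocksize))            # apparent size of the candidate file
if [ "$size" -gt "$maxbytes" ]; then
  echo "skip: apparent size $size exceeds cap"
fi
```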

Restoring files with large inode numbers fails

Hello, author!
My test scenario is to delete all files in the /testdir directory. During testing, I found that this tool cannot recover files whose inode numbers jump, which often happens with multi-level directories; files with larger inode numbers cannot be recovered either, even when the inode number is specified with the -s parameter. For example, files with inode numbers 50-110 can be restored, while files with numbers 13581-13590 and 1069121-1069129 cannot. How can I solve this?

invalid command name "lmap"

./xfs_undelete

invalid command name "lmap"
while executing
"lmap t $times {
if {[catch {clock scan $t} t]} {
puts stderr "Unable to parse time range. Please put it as a range of time specs Tcl's [clock sc..."
(procedure "parseTimerange" line 13)
invoked from within
"parseTimerange [dict get $::parameters t]"
invoked from within
"set ctimes [parseTimerange [dict get $::parameters t]]"
(file "./xfs_undelete" line 453)

Centos 7

Package 1:tcl-8.5.13-8.el7.x86_64
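For context, `lmap` only exists from Tcl 8.6 onward, and CentOS 7's stock tcl package is 8.5, which matches this trace. A shell sketch of the version check (the version string is taken from the report above; on a live system you would get it via `echo 'puts [info patchlevel]' | tclsh`):

```shell
ver="8.5.13"   # version from the report; substitute your own
# sort -V orders version strings numerically; if 8.6 sorts first,
# the installed version is at least 8.6.
if [ "$(printf '%s\n' "$ver" 8.6 | sort -V | head -n1)" = "8.6" ]; then
  echo "Tcl OK"
else
  echo "Tcl too old: xfs_undelete needs Tcl >= 8.6 for lmap"
fi
```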

recovering a vdisk.img file

Hi, I also accidentally wiped a VM, including its image... so I ended up here now ;)

I tried following the instructions:

Starting recovery.
Recovered file -> xfs_undeleted/2023-03-03-08-24_1092962809.bin
Recovered file -> xfs_undeleted/2023-03-03-08-24_1098416061.bin
Recovered file -> xfs_undeleted/2023-03-03-08-24_1152304231.gzip
Recovered file -> xfs_undeleted/2023-03-03-08-24_1152304236.bin
Recovered file -> xfs_undeleted/2023-03-03-08-24_1152304237.txt
Recovered file -> xfs_undeleted/2023-03-03-08-24_1152304240.txt
Recovered file -> xfs_undeleted/2023-03-03-08-24_1152304241.pgp
Recovered file -> xfs_undeleted/2023-03-03-08-24_1152304243.txt
Recovered file -> xfs_undeleted/2023-03-03-08-24_1152304250.pgp
Recovered file -> xfs_undeleted/2023-03-03-09-03_2165197241.bin
Recovered file -> xfs_undeleted/2023-03-03-08-24_3221231872.txt
Done.
root@AlsServer:/tmp#

It does mostly work, but there is no 120 GB vdisk.img among the results, so I assume I am also too late and XFS has already reallocated the free space... even though there were definitely no writes afterwards.

Could it be because .img is not included in the MIME type list (the -l option)? The list is huge, but .img is not in it...

The disk is unmounted:

```
root@AlsServer:/tmp# mount | grep -i "nvme"
root@AlsServer:/tmp#
```

If you have any other advice ;) thanks in advance. I also tried the -r option, but then the results are even fewer.

Error renaming [...] to [...].pythonapplication/octet-stream

The program tries to rename to a filename with a slash:

error renaming "xfs_undeleted/2021-11-30-12-56_21774193369" to "xfs_undeleted/2021-11-30-12-56_21774193369.pythonapplication/octet-stream": no such file or directory
while executing
"file rename -force $of $rof"
(procedure "investigateInodeBlock" line 93)
invoked from within
"investigateInodeBlock $ag $iblock"
(procedure "traverseInodeTree" line 40)
invoked from within
"traverseInodeTree $ag $agi_branch"
(procedure "traverseInodeTree" line 26)
invoked from within
"traverseInodeTree $ag $agi_root"
("for" body line 10)
invoked from within
"for {set ag 0} {$ag<$agcount} {incr ag} {
## Read inode B+tree information sector of this allocation group.
seek $fd [expr {$blocksize*$agblocks*$ag..."
(file "/root/xfs_undelete/xfs_undelete" line 598)
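The root cause visible in the message is that the derived file extension contains a `/` (a MIME type leaked into the name), which the OS rejects as a path separator. A sanitizing step, sketched in shell with the value taken from the error:

```shell
# A filename component may not contain "/"; replace it before renaming.
ext="pythonapplication/octet-stream"
safe=$(printf '%s' "$ext" | tr '/' '-')
echo "$safe"   # pythonapplication-octet-stream
```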

[Q] if nothing found - nothing to recover?

This is more of a usage question.

Do I understand correctly something like

./xfs_undelete -i "" -o /mounted/flashdrive /dev/sdc1

would be a catch-all recovery; if that returns nothing (or not what was expected), then the data is lost?
I moved an empty dir over a data directory that needs to be recovered, but surprisingly only 2 binary files are found; I'm fairly certain no writes were done to the filesystem after the event.

recovery not working on ubuntu

Hi,
I tried to use the tool on an XFS filesystem on Ubuntu Bionic with this command:
./xfs_undelete -i "" /dev/mapper/lowspeed-das
Starting recovery.
Done.
but no files are recovered. The files have the extension .hdf5, which is not listed when I use the -l option.
If I use the strings command I can find some occurrences, for example:
strings -td /dev/mapper/lowspeed-das | grep hdf5
11817744 $/home/ldanciu/oqdata/calc_40597.hdf5
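As a side note, the byte offsets printed by `strings -td` can be converted to filesystem block numbers, which may help narrow the search area (the 4096-byte blocksize is an assumption here; check yours with `xfs_info`):

```shell
offset=11817744    # byte offset reported by strings -td
blocksize=4096     # assumed filesystem blocksize
echo $((offset / blocksize))   # 2885
```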

Unexpected error when trying to run this program

Hi,

I was trying to use xfs_undelete when an unexpected error occurred. Please help me analyze it, thanks!

The error message follows:

[root@localhost xfs_undelete-1.2]# ./xfs_undelete
can't find package cmdline
while executing
"package require cmdline"
(file "./xfs_undelete" line 10)

OS info:
[root@localhost xfs_undelete-1.2]# uname -a
Linux localhost 3.10.0-862.el7.x86_64 #1 SMP Wed Mar 21 18:14:51 EDT 2018 x86_64 x86_64 x86_64 GNU/Linux
[root@localhost xfs_undelete-1.2]# cat /etc/os-release
NAME="Red Hat Enterprise Linux Server"
VERSION="7.5 (Maipo)"
ID="rhel"
ID_LIKE="fedora"
VARIANT="Server"
VARIANT_ID="server"
VERSION_ID="7.5"
PRETTY_NAME="Red Hat Enterprise Linux Server 7.5 (Maipo)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:redhat:enterprise_linux:7.5:GA:server"
HOME_URL="https://www.redhat.com/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"

REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 7"
REDHAT_BUGZILLA_PRODUCT_VERSION=7.5
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="7.5"

[root@localhost xfs_undelete-1.2]# strings /lib64/libc.so.6 |grep ^GLIBC
GLIBC_2.2.5
GLIBC_2.2.6
GLIBC_2.3
GLIBC_2.3.2
GLIBC_2.3.3
GLIBC_2.3.4
GLIBC_2.4
GLIBC_2.5
GLIBC_2.6
GLIBC_2.7
GLIBC_2.8
GLIBC_2.9
GLIBC_2.10
GLIBC_2.11
GLIBC_2.12
GLIBC_2.13
GLIBC_2.14
GLIBC_2.15
GLIBC_2.16
GLIBC_2.17
GLIBC_PRIVATE
GLIBC_2.8
GLIBC_2.5
GLIBC_2.9
GLIBC_2.7
GLIBC_2.6
GLIBC_2.11
GLIBC_2.16
GLIBC_2.10
GLIBC_2.17
GLIBC_2.13
GLIBC_2.2.6

TCL Version:
tcl8.6.1

XFSPROGS Version:
xfsprogs-4.5.0-15.el7.x86_64

Doesn't appear to undelete PCM data.

I have a big raw data file which is about 290 GiB in size, but I get only one recovered file of just 356 KiB, which definitely contains unrelated data. It appears, even with the -i "" option, that the tool is ignoring the file because of its size.
Is there any way to lift the maximum file size limitation?

The file contains 8 channels of unsigned 16-bit data in little-endian order, none of which will have the three least significant bits set, and at least two sequential channels of which were all zeros. Is there any option to give a pattern like this to match, for recovery recognition purposes?
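The described pattern is easy to check by hand: 16-bit little-endian samples whose three least significant bits are clear are all multiples of 8. A sketch on made-up sample bytes (the tool has no such matcher; this only illustrates the test one would apply to candidate data):

```shell
# Write three little-endian uint16 samples: 8, 16, 24 (octal byte escapes).
printf '\010\000\020\000\030\000' > /tmp/pcm_demo.bin
# Decode as unsigned 16-bit little-endian and check the low three bits.
for v in $(od -An -tu2 --endian=little /tmp/pcm_demo.bin); do
  echo "$v % 8 = $((v % 8))"   # 0 for every sample that matches the pattern
done
```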

Duplicates

Hey, it's not an issue per se, but I was wondering: would it detect if it had recovered the same file before?

I carelessly deleted a decent amount of files, about 600 GB, so when I went to recover them I was faced with a problem: "how would I store all of it if it manages to find everything?"
I had to go find some of the external hard drives I had lying around; thankfully I had about 700 GB left on one of them.

My issue was that I wanted to start restoring the files as quickly as possible, so I launched the script and noticed that it was quickly filling up my drive. I switched to another, bigger drive, and after a couple of hours realized that it might not be enough.

I tried moving the recovered files off to another computer while it was still recovering, for fear of stopping it again, but it only ended up crippling the poor drive.

So basically I have just two questions and one request. Would it overwrite the files if I started recovering into the same folder, or would it detect the duplicates?
And what would happen if the recovery drive ran out of space?

request:
could it be possible to start recovering from inode X?
I still don't know how it all works, but basically, if I stop the recovery for some reason, could a function be implemented to "resume" it?

Also THANK YOU!! I got 500GB back!
Now all I have to do is rename the files; thankfully most of them were RAR files that kept the original name inside the archive, plus some movies with subtitles and text files that contained the original name.

reading around the net you only hear horror stories about rm -rf :)

I'm glad I decided to look further, because the first recovery tools I tried gave me corrupted files back (probably because they were made for ext3/4).
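On the duplicate question: the output names appear to be built from a timestamp and the inode number, so reruns would presumably produce the same names, though whether colliding files get overwritten is not confirmed here. Content-level duplicates can in any case be found after the fact with a checksum pass (directory and file names below are made up):

```shell
# Build a small demo directory with two identical files and one different.
mkdir -p /tmp/undel_demo
echo same  > /tmp/undel_demo/a.bin
echo same  > /tmp/undel_demo/b.bin
echo other > /tmp/undel_demo/c.bin
# Hash everything, sort by hash, and print only lines whose first 32
# characters (the md5 digest) repeat — i.e. the duplicates.
md5sum /tmp/undel_demo/*.bin | sort | uniq -w32 -D
```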

No inodes found

Hi,
just went through the other reports of "nothing found". I deleted a single JPEG file by accident. First I ran `time xfs_undelete -t -48hour /dev/disk/by-label/attik` overnight, thinking it would surely take quite a while; actually it only takes 90 seconds. It successfully remounted the partition read-only.
This is a
Linux base 5.8.0-1-amd64 #1 SMP Debian 5.8.7-1 (2020-09-05) x86_64 GNU/Linux machine, with tcl 8.6.9+1+b1 and tcllib 1.20+dfsg-1.

# file --version
file-5.38
magic file from /etc/magic:/usr/share/misc/magic
# file --brief --mime-type /etc/magic
text/plain

The mount is

/dev/sdd3 /mnt/attik xfs ro,noatime,attr2,inode64,logbufs=8,logbsize=32k,noquota 0 0

  Filesystem      Size  Used Avail Use% Mounted on
  /dev/sdd3       2.6T  2.6T  3.1G 100% /mnt/attik

I also tried without a time limit, with -i "", and directly specifying the device (/dev/sdd3).
I wanted to turn the verbosity higher, but I have no idea how to do that with tclsh.
The file is probably very irrelevant; I just wanted to use this as a test case. Glad to dig deeper though. Best regards ;)

fails if $LANG env var is undefined

./xfs_undelete -t 2020-08-30 -o /mnt/disks/flashdrive /dev/sdc1 
Starting recovery.
no such variable805599295 ( 98%)
    (read trace on "::env(LANG)")
    invoked from within
"set lang $::env(LANG)"
    (procedure "dd" line 5)
    invoked from within
"dd if=$::fs of=$of bs=$::blocksize skip=[dict get $extents 0 skip] seek=0 count=1 conv=notrunc status=none"
    (procedure "investigateInodeBlock" line 80)
    invoked from within
"investigateInodeBlock $ag $iblock"
    (procedure "traverseInodeTree" line 40)
    invoked from within
"traverseInodeTree $ag $agi_root"
    ("for" body line 10)
    invoked from within
"for {set ag 0} {$ag<$agcount} {incr ag} {
        ## Read inode B+tree information sector of this allocation group.
        seek $fd [expr {$blocksize*$agblocks*$ag..."
    (file "./xfs_undelete" line 577)

Running on unraid (a Slackware derivative)

tcl-8.6.10
tcllib-1.20

/dev/sdc is unmounted
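A likely workaround until the script tolerates a missing LANG: export LANG (any value will do) before invoking the tool, e.g. `LANG=C ./xfs_undelete ...`. A one-shot assignment on the command line is enough for the child process to see it:

```shell
# Simulate the failing environment, then show that a one-shot
# assignment makes LANG visible to the child process.
unset LANG
LANG=C sh -c 'echo "${LANG:-unset}"'   # prints: C
```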

How do I recover files?

This project looks promising, but I'm confused about how I should recover files. It generated tons of files with the inode number as the filename. How can I use these files to restore my data?

Please specify a block device or an XFS filesystem image.

I'm on unraid trying to recover recently deleted files.
I tried multiple commands and got the same error every time:

xfs_undelete -t 2021-12-13 -r '.mp4' -o /mnt/user/Recovery /mnt/user/Pictures
xfs_undelete -t 2021-12-13 -r '.mp4' -o /mnt/user/Recovery --no-remount-readonly /mnt/user/Pictures
xfs_undelete /mnt/user/Pictures
xfs_undelete /mnt/disk6

Any help pointing me in the right direction would be appreciated!
Thanks

/usr/bin/env: ‘tclsh’: No such file or directory ...

Basically I would like to undelete a folder in an XFS partition, but I am getting an error message:

"/usr/bin/env: ‘tclsh’: No such file or directory"

which I don't know how to troubleshoot.

$ uname -a
Linux debian 5.10.0-18-amd64 #1 SMP Debian 5.10.140-1 (2022-09-02) x86_64 GNU/Linux
$

$ ls -l
total 256
drwxr-xr-x 2 user user 131072 Mar 18 14:40 xfs_undelete-master
-rwxr-xr-x 1 user user 28268 Jul 16 01:55 xfs_undelete-master.zip

$ ls -l xfs_undelete-master.zip
-rwxr-xr-x 1 user user 28268 Jul 16 01:55 xfs_undelete-master.zip

$ file --brief xfs_undelete-master.zip
Zip archive data, at least v1.0 to extract

$ sha256sum --binary xfs_undelete-master.zip
db66ef9ca37120407f6a692fe0d30492dba525e2446ee1a87e26d3d978b7e875 *xfs_undelete-master.zip

$ cd xfs_undelete-master

$ ls -l
total 640
-rwxr-xr-x 1 user user 35149 Mar 18 14:40 LICENSE
-rwxr-xr-x 1 user user 13411 Mar 18 14:40 README.md
-rwxr-xr-x 1 user user 150 Mar 18 14:40 shell.nix
-rwxr-xr-x 1 user user 21851 Mar 18 14:40 xfs_undelete
-rwxr-xr-x 1 user user 9698 Mar 18 14:40 xfs_undelete.man

$ ls -l xfs_undelete
-rwxr-xr-x 1 user user 21851 Mar 18 14:40 xfs_undelete

$ file --brief xfs_undelete
Tcl script, UTF-8 Unicode text executable

$ sha256sum --binary xfs_undelete
063dad87b4f4ae505521735067405b07eb668e4fc7791624132420b665adb64a *xfs_undelete
$

$ sudo ./xfs_undelete --help
/usr/bin/env: ‘tclsh’: No such file or directory

$ ./xfs_undelete --help
/usr/bin/env: ‘tclsh’: No such file or directory

$ ./xfs_undelete
/usr/bin/env: ‘tclsh’: No such file or directory

$ sudo ./xfs_undelete
/usr/bin/env: ‘tclsh’: No such file or directory
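Diagnosis sketch: the script starts with `#!/usr/bin/env tclsh`, so `env` searches $PATH for `tclsh` before the script ever runs; this error means the Tcl interpreter itself is not on the system (on Debian, the package to install is likely just `tcl`):

```shell
# Check whether a tclsh interpreter is available on $PATH.
if command -v tclsh >/dev/null 2>&1; then
  echo "tclsh found"
else
  echo "tclsh missing"   # fix: install your distribution's tcl package
fi
```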

recovery not working

I have some directories and image files that were deleted by mistake. I tried your recovery script and it doesn't seem to work:
Starting recovery.
Done. 2 (100%))
I've searched all of them and found no recovered files.

-l list file types option broken

Tested at commit c01b5fa (v13.1-2-g4278cf7).

The "list filetypes" option -l documented in the man page does not work as advertised:

[user@host xfs_undelete]# xfs_undelete -l
bad option "-stride": must be -ascii, -command, -decreasing, -dictionary, -increasing, -index, -indices, -integer, -nocase, -real, or -unique
    while executing
"lsort -dictionary -stride 3 -index 0 $::filetypes"
    invoked from within
"if {[dict get $::parameters l]} {
	## Yes. Get file types understood by the file utility
	if {![catch {exec -ignorestderr -- file -l {*}$magicopts 2>/..."
    (file "/root/XFS_Data_Recovery/xfs_undelete_20240111/xfs_undelete/xfs_undelete" line 439)
[user@host xfs_undelete]#

Testing with git bisect reveals that this output has been produced since commit a867fdf "added -l option for listing understood file extensions". Perhaps this option never worked properly, or something is broken on my system.

Restored file is 4.0 EB big

Hi,

first thank you for the nice recovery tool!

I am trying to recover some files removed a few hours ago, and I get several 4.0 EB files that are difficult to deal with afterwards.
The block device where I start the recovery is a 59 TB hardware raid.
I have tried the -z option, both with txt and text/* but I still get the 4.0 EB file.
Am I doing something wrong or how can I get a smaller file?

Many thanks,
Richard

Using XFS v5, no files showing up

Using XFS v5
dmesg | grep XFS
[5536321.175489] XFS (sde1): Mounting V5 Filesystem

I deleted about 1 TB of backups a few days ago, about 50 files, and of course now I need some of them back.

/dev/sdd is the block device where the files were deleted from, and
/var/opt/mssql/restore is a new drive I have added to receive the recovered files.

./xfs_undelete -o /var/opt/mssql/restore /dev/sdd
/dev/sdd (/dev/sdd) is currently mounted read-write. Trying to remount read-only.
Remount successful.
Starting recovery.
Done.

It took about 1 second to run, which seemed too fast, and the result was no files.
I am using Ubuntu; I untarred the tar.gz, installed TCL and LIBTCL, and ran ./xfs_undelete.
I tried listing the file types and a few other commands. I did not install xfs_undelete as a package.

problem recovering a txt file

I created a txt file with three lines:
11
12
13

Then I deleted this file and used this tool to undelete it. It works fine, but the file content is a little different:
11

":" character causes a problem on Arch Linux

Hi,

Thanks for the brilliant tool.

I found an issue on my Arch Linux machine: dd complained about an invalid argument because the of= parameter contains ":". I believe the code is on line 83:

83  set of [file join [dict get $::parameters o] [format "%s_%s" [clock format $ctime -format "%Y-%m-%d-%H:%M"] $inode]]

After I changed "%Y-%m-%d-%H:%M" to "%Y-%m-%d-%H-%M", it worked normally.

I don't know if it is a bug, but it's good for you to know about it.

xfs_undelete -l does not show at least one file type supported by file -i

So I used xfs_undelete to try (unsuccessfully) to recover some old log files that got rotated out. I chose to focus exclusively on the gzip-compressed log files, as they would be easy for 'file' to recognize and less likely to be fragmented (due to their smaller size). I was only able to recover 3 very short (under 12 lines) log files. Most of the hits were gzip-compressed JavaScript from the last day or two of web browsing.

As part of the process, I temporarily mounted the xfs volume read-only and then ran 'file' on a representative sample log file. The output was:

gzip compressed data, last modified: Thu Jul 7 06:00:03 2022, from Unix, original size modulo 2^32 34984

Realizing my mistake, I looked at the man page for 'file' and ran "file -i /mnt/var/log/syslog.5.gz":

application/gzip;    charset=binary

Out of curiosity I tried running "xfs_undelete -l | grep gzip":

 #

Since I had gotten my target MIME type from 'file' directly, I tried it anyway:

xfs_undelete -t 2022-06-07 -r application/gzip /dev/md126

It successfully recovered dozens of files, as described above. Unfortunately not the ones I was looking for, but I don't think that is the fault of 'xfs_undelete'.

Edit: put a command prompt indicator in there so the empty line can be represented.
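For reference, `file` identifies gzip data by its magic bytes (0x1f 0x8b), which is why `file -i` reports application/gzip even when `-l` does not list it. Checking a sample by hand:

```shell
# Compress something and inspect its first two bytes in hex.
echo hello | gzip -c > /tmp/gzip_demo.gz
od -An -tx1 -N2 /tmp/gzip_demo.gz | tr -d ' '   # 1f8b
```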

Please specify a block device or an XFS filesystem image

Hi there, I am hoping to recover a bunch of files but when running the command, I get:

Please specify a block device or an XFS filesystem image.

I run the command as follows, from the directory the tool is installed in:

xfs_undelete -t -10hour -r '*.jpg' -o /dev/disk/by/id/ata-xxx --no-remount-readonly /dev/disk/by/id/ata-xxx

Infrastructure:

  • unraid server with 8 disks XFS formatted, one of which is the parity disk.
  • data was deleted on 6 disks, the 7th is the one I wish to restore to
  • since deletion the server has not been written to
  • array is mounted.

Would appreciate your steer, thanks for sharing such an awesome little program!

Test file not recovered, what am I doing wrong?

It should have recovered text.txt, but when I check xfs_undeleted/ it's empty.

[root@ip-10-0-10-221 xfs_undelete]# df |grep new
/dev/xvdf1       8376300 1269792   7106508  16% /newdrive

[root@ip-10-0-10-221 xfs_undelete]# echo "test" >/newdrive/text.txt
[root@ip-10-0-10-221 xfs_undelete]# cat /newdrive/text.txt
test

[root@ip-10-0-10-221 xfs_undelete]# rm -f /newdrive/text.txt
[root@ip-10-0-10-221 xfs_undelete]# umount /newdrive
[root@ip-10-0-10-221 xfs_undelete]# df |grep newdrive

[root@ip-10-0-10-221 xfs_undelete]# ./xfs_undelete /dev/xvdf1
Starting recovery.
Done.

[root@ip-10-0-10-221 xfs_undelete]# ls xfs_undeleted/
[root@ip-10-0-10-221 xfs_undelete]#

xfs_undeleted only contains txt files?

I used xfs_undelete to recover from my stupid rm -rf.

My machine is CentOS 8 with LVM.

I typed ./xfs_undelete /dev/mapper/cl-home

Then it printed:

Starting recovery.
Recovered file -> xfs_undeleted/2021-08-26-02-21_1474.txt
Recovered file -> xfs_undeleted/2021-08-26-02-21_150551.txt

It only output two txt files, which record some command history.

How do I get the removed files back?

In Suse Linux Enterprise Server 15. SP3 it gives error executing xfs_undelete

Last Sunday night (~21:30) I deleted a Linux virtual machine in Xen by mistake.
The host is a SUSE Linux Enterprise Server SP3 5.3.18-59.24 (13-Sep-2021) with Xen, and the /vm partition is in XFS format.
The virtual machine was in /vm/grpwise/ and the file was grpwise.qcow2, 321 GB.

I am trying to use your tool xfs_undelete, but I can't get it running.
On SLES 15 SP3 I've installed:

  • tcl 8.6.7-7.6.1
  • tk 8.6.7-3.6.3
  • graphviz 2.40.1-6.12.1; I also ran zypper install coreutils and it says it is installed (GNU Core Utils 8.29-2.12)
  • also, tcl-devel

I've copied shell.nix and xfs_undelete (the contents from your site) and made them executable with chmod 700.
I run ./xfs_undelete and I receive:
can't find package cmdline
while executing "package require cmdline"
(file "./xfs_undelete" line 10)

I read something you said about this error, but I don't know how to compile Tcl 8.6 to make the modifications you mentioned.
I am not an "expert" in Linux.
Can you give me support?
What would be the cost?
Thanks,

Urgent
Paulo Sousa / [email protected] / Deltabyte


install on Suse 13.2

Fairly old OS; the install fails to find xfs_undelete-master.

Any advice welcome.

thanks in advance, Brian

gyan:/tmp # zypper install xfs_undelete-master
Retrieving repository 'openSUSE:Factory' metadata ........................[done]
Building repository 'openSUSE:Factory' cache .............................[done]
Loading repository data...
Reading installed packages...
'xfs_undelete-master' not found in package names. Trying capabilities.
No provider of 'xfs_undelete-master' found.

Can't figure out if this tool is working.. please help.

Attempting to recover a folder that I (accidentally) removed using the terminal after tapping "return" a little too quickly. I came across your repo and thought I'd give it a try. Ideally it would be nice to narrow down the exact time (within a few minutes), since the rm didn't take that long.

Using the following command(s) didn't recover any files.

 ./xfs_undelete -i "" -t -7hours -o undelete_sdc /dev/sdc1
  1. Is this correct for relative time selection?

How do I know whether this is working? I tried deleting another file (the only activity this drive has had since the directory deletion), and that specific file on /dev/sdc1 was not recovered.

I've looked at the documentation; there might be a typo ("2hour"), and I don't know which timeframe gets selected, so I can't verify. Even a little print statement would help me verify the right filter is being applied. I don't know Tcl, or else I would try locally; maybe I will, if you have some specific suggestions I can use to figure out what is going on.

I have tried -t "-7hours..now" and -t "-7hour..now" as alternatives to the above; still, it didn't recover anything. I'm not sure, since nothing was printed.

  2. I have tried with and without the -i "" flag.

I am not really sure what this would do; either way it doesn't work. I just saw it on a thread where you recommended it.

  3. Do you have a format for a 5-minute window? I couldn't find any documentation on how to write time formats for clock scan.

I imagine there is a parameter string I could pass to -t for an exact 5-minute timeframe (with or without timezone), like "2024-07-18 13:00..2024-07-18 13:05". There is no error, but I still don't know whether this is working, since there are no debug statements.

And after all that, thank you for your patience! I might just be hosed "¯\_(ツ)_/¯", oh well.

Unable to parse time range. Please put it as a range of time specs Tcl's \[clock sc...

./xfs_undelete -t 2024-01-03 -r 'zip/*' /dev/mapper/centos-home
invalid command name "lmap"
    while executing
"lmap t $times {
    if {[catch {clock scan $t} t]} {
        puts stderr "Unable to parse time range. Please put it as a range of time specs Tcl's \[clock sc..."
    (procedure "parseTimerange" line 13)
    invoked from within
"parseTimerange [dict get $::parameters t]"
    invoked from within
"set ctimes [parseTimerange [dict get $::parameters t]]"
    (file "./xfs_undelete" line 460)

It always errors when executed.

Time stamp off by 26 years

I tried to use xfs_undelete to recover some files on my Unraid server after a user error. It did not find anything to recover when using a time range (the files were deleted today, so I wanted to recover anything deleted since yesterday):

# xfs_undelete -t 2024-07-18 -o /mnt/temp/sdd1/undeleted/ /dev/sdc1
Starting recovery.
Done.

However, when I specify no time range (or -t 1998-07-18), I get recovered files:

# xfs_undelete -o /mnt/temp/sdd1/undeleted/ /dev/sdc1
Starting recovery.
Recovered file -> /mnt/temp/sdd1/undeleted/1998-07-18-23-07_196.mp4
...
Recovered file -> /mnt/temp/sdd1/undeleted/1998-07-18-23-07_200.matroska
...
# date
Fri Jul 19 19:12:14 CEST 2024

The date is wrong (this server did not exist in 1998). Any idea what the problem is?
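The offset can be quantified with GNU date: the two dates are 9497 days apart, i.e. exactly 26 years to the day, which looks more like a systematic timestamp-decoding bug than clock drift (that diagnosis is a guess):

```shell
# Seconds-since-epoch for both dates, at midnight UTC.
d1=$(date -ud 2024-07-18 +%s)
d2=$(date -ud 1998-07-18 +%s)
echo "$(( (d1 - d2) / 86400 )) days"   # 9497 days = 26 years incl. leap days
```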

can't find package cmdline

I'm on Kubuntu 24, downloaded the zip, expanded it, ran ./xfs_undelete, and got:

can't find package cmdline
    while executing
"package require cmdline"
    (file "./xfs_undelete" line 10)

So I tried to install cmdline:

 sudo apt install cmdline
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
E: Unable to locate package cmdline

It seems like I need to install some dependencies; I'm not sure how to do that.
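For context, `cmdline` is a Tcl library package that ships inside tcllib, not an apt package of its own, which is why apt cannot locate it; the likely fix is `sudo apt install tcllib`. A quick check of whether tclsh can already load it:

```shell
# Probe for tclsh, then ask it to load the cmdline package.
if command -v tclsh >/dev/null 2>&1; then
  echo 'puts [expr {[catch {package require cmdline}] ? "cmdline missing" : "cmdline ok"}]' | tclsh
else
  echo "tclsh not installed"
fi
```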

missing value to go with key( 0%)

When I try to run xfs_undelete at the v13.0 release, it fails with a stack trace:

Commit v13.0 CommitID 43fec30

[root@host xfs_undelete]# /root/XFS_Data_Recovery/xfs_undelete/xfs_undelete/xfs_undelete -i "" -t 2024-01-09 /dev/mapper/rhel-data
Starting recovery.
missing value to go with key( 0%)
while executing
"dict get $extent count"
("uplevel" body line 2)
invoked from within
"uplevel 1 $body"
(procedure "lmap" line 4)
invoked from within
"lmap {loffset extent} $extents {
expr {$::blocksize*($loffset+[dict get $extent count])}
}"
(procedure "investigateInodeBlock" line 87)
invoked from within
"investigateInodeBlock $ag $iblock"
(procedure "traverseInodeTree" line 40)
invoked from within
"traverseInodeTree $ag $agi_branch"
(procedure "traverseInodeTree" line 26)
invoked from within
"traverseInodeTree $ag $agi_root"
("for" body line 10)
invoked from within
"for {set ag 0} {$ag<$agcount} {incr ag} {
## Read inode B+tree information sector of this allocation group.
seek $fd [expr {$blocksize*$agblocks*$ag..."
(file "/root/XFS_Data_Recovery/xfs_undelete/xfs_undelete/xfs_undelete" line 661)
[root@host xfs_undelete]#

I note that commit 43fec30 (v13.0) introduced the code at
"lmap {loffset extent} $extents {
expr {$::blocksize*($loffset+[dict get $extent count])}
}"
(procedure "investigateInodeBlock" line 87)

Testing the parent commit:

CommitID 6176de1

[root@host xfs_undelete]# /root/XFS_Data_Recovery/xfs_undelete/xfs_undelete/xfs_undelete -i "" -t 2024-01-09 /dev/mapper/rhel-data
Starting recovery.
Recovered file -> xfs_undeleted/2024-01-09-09-43_11252188.bin
Recovered file -> xfs_undeleted/2024-01-09-09-43_11252190.bin
Recovered file -> xfs_undeleted/2024-01-09-09-43_11252191.bin
Recovered file -> xfs_undeleted/2024-01-09-09-43_16800488.bin
Recovered file -> xfs_undeleted/2024-01-09-09-43_16800489.bin
Recovered file -> xfs_undeleted/2024-01-09-09-43_16800490.bin
Recovered file -> xfs_undeleted/2024-01-09-09-43_16800491.bin
Recovered file -> xfs_undeleted/2024-01-09-09-43_16800498.bin
Recovered file -> xfs_undeleted/2024-01-09-09-43_16800499.bin
Recovered file -> xfs_undeleted/2024-01-09-09-43_16800500.bin
Recovered file -> xfs_undeleted/2024-01-09-09-43_16800502.bin
Recovered file -> xfs_undeleted/2024-01-09-09-43_16800503.bin
Recovered file -> xfs_undeleted/2024-01-09-09-43_16800504.bin
Recovered file -> xfs_undeleted/2024-01-09-09-43_16800505.bin
Recovered file -> xfs_undeleted/2024-01-09-09-43_16800506.bin
Recovered file -> xfs_undeleted/2024-01-09-09-43_16800507.bin
...

This appears to be working as advertised.

So it seems that commit 43fec30 is broken.

child process exited abnormally06304

I was trying to use xfs_undelete to see if it could recover directories and files that were either deleted or overwritten; the drive might also have had its MBR overwritten.

I haven't been able to get past the following:

# ./xfs_undelete -o /scr2/recovered-data-l211 /dev/sdb1
--
child process exited abnormally06304  (  0.0%)
while executing
"exec -ignorestderr -- dd 2>/dev/null if=$fs of=$of bs=$blocksize skip=$skip seek=$loffset count=$count"
("for" body line 43)
invoked from within
"for {set block [dict get $::parameters s]} {$block<$dblocks} {incr block} {
## Log each visited block.
puts -nonewline stderr [format $m1format $blo..."
(file "./xfs_undelete" line 59)

From your code:

...
 41 foreach line [split $config \n] {
 42         lassign $line key dummy value
 43         if {$key in {blocksize inodesize agblocks agblklog dblocks}} {
 44                 set $key $value
 45         }
 46 }
...
 58 ## Run through whole filesystem.
 59 for {set block [dict get $::parameters s]} {$block<$dblocks} {incr block} {
...
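The failure mode this trace suggests: dd exits non-zero when asked to read past the device's last block, and `exec -ignorestderr` then throws. A defensive clamp on the copy length would look like this (all numbers are illustrative; whether this is the actual cause here is a guess — a GPT/partition-size mismatch could also make `dblocks` wrong):

```shell
dblocks=1000   # filesystem size in blocks (from the superblock)
block=998      # current scan position
count=5        # blocks we want dd to read
# Never read past the end of the filesystem.
if [ $((block + count)) -gt "$dblocks" ]; then
  count=$((dblocks - block))
fi
echo "$count"   # 2
```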

Some background:
I'm not sure what happened to the data, but the user says there were multiple directories with multiple files and now they're all gone. The suspect commands in the bash history are an `rm -rf blah` followed by an `fdisk /dev/THE-DRIVE`. There weren't any timestamps in the bash history, so those rm and fdisk commands could have been run months ago during provisioning.

System details:
OS: CentOS7
HDD: 8TB Western Digital Red
Partition Details:

# fdisk -l /dev/sdb
...
Disk /dev/sdb: 8001.6 GB, 8001563222016 bytes, 15628053168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: gpt
Disk identifier: E7F55CB0-5C0B-43E3-9978-438D59CAFEDB


#         Start          End    Size  Type            Name
 1         2048  15628052479    7.3T  Microsoft basic primary

When mounted:

# mount /dev/sdb1 /TEST-DATA-RECOVERY

# mount -l | grep sdb
/dev/sdb1 on /TEST-DATA-RECOVERY type xfs (rw,relatime,attr2,inode64,noquota) [/label1]

# ls -la /TEST-DATA-RECOVERY/
total 4
drwxrws---   2 root psgvb    6 Jul 25 14:11 .
dr-xr-xr-x. 26 root root  4096 Jul 26 16:14 ..

# df -lH | grep sdb
/dev/sdb1       8.0T   35M  8.0T   1% /TEST-DATA-RECOVERY

Any thoughts?

Thanks.

agcount value

Hello,

There is a variable called "agcount" in the xfs_undelete script. Its default value is 4, and it is not updated by reading the superblock. The value for my filesystem is 32. With the default of 4, I cannot get the lost file back, and the script stops too early. By the way, I got the value 32 by running "xfs_info". I hope this helps improve this nice work!
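Extracting agcount from xfs_info output, as the reporter did, can be sketched like this (the sample line below is an abbreviated stand-in for typical xfs_info output):

```shell
# Stand-in for: xfs_info /dev/sde1 | head -n1
line="meta-data=/dev/sde1 isize=512 agcount=32, agsize=655360 blks"
echo "$line" | grep -o 'agcount=[0-9]*' | cut -d= -f2   # 32
```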

Thanks
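For anyone hitting the same thing: the authoritative value lives in the superblock and can be printed with, e.g., xfs_db -r -c 'sb 0' -c 'p agcount' /dev/sdX, but it can also be pulled out of xfs_info output. A self-contained sketch of the latter (the sample line is inlined so the snippet runs anywhere; on a real system, pipe the output of xfs_info <mountpoint> in instead):

```shell
# Extract agcount from an xfs_info-style meta-data line.
line='meta-data=/dev/sde1  isize=512    agcount=32, agsize=1264128 blks'
agcount=$(printf '%s\n' "$line" | grep -o 'agcount=[0-9]*' | cut -d= -f2)
echo "$agcount"    # → 32
```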

no files recover generate in "-o dir" or "./xfs_undeleted"

Hello ianka

I installed the xfs_undelete tool correctly and I can run the command.
xfs_undelete is located in /data01/xfs_undelete-11.0. I copied a file with a few KB of content to /data02/file_to_delete/test.txt.
The mount point "/data02" is from device "/dev/sde1"

I run the following command:

cd /data01/xfs_undelete-11.0
rm -f /data02/file_to_delete/test.txt
./xfs_undelete /dev/sde1

It shows:

/dev/sde1 is currently mounted read-write. Trying to remount read-only.
Remount successful.
Starting recovery.
Done. 1 (100%)

I could not find the deleted file in "/data01/xfs_undelete-11.0/xfs_undeleted".

If I add the "-o /data01/recovery_test" option, I still could not find the deleted file in "/data01/recovery_test".

The OS is Centos7.6.1810 x86_64 , tcl is 8.6.10 and tcllib is 1-20 , xfs_undelete is 11.0
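One thing worth double-checking in reports like this: the tool's documentation warns that the output directory must not be on the filesystem being scanned, or the recovered data can overwrite the very blocks being read. Here /data01 and /data02 do look like separate filesystems, but df can confirm it. A sketch using GNU coreutils' df (the /tmp path is just a runnable stand-in for the real mount points):

```shell
# Print the backing device of a path; two paths reporting the same
# device mean the output directory sits on the scanned filesystem.
src_of() { df --output=source "$1" | tail -n 1; }
src_of /tmp    # stand-in; on the real box compare /data01 vs /data02
```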

Unclear program behavior

I installed tclsh and tcllib and copy-pasted your script; everything should be set up.

Fire up:

~ $ ./xfs_undelete.sh -t -1hour -r 'image/* /dev/sda3
> 

That's all I have. Just the prompt line. What is this supposed to mean?

FYI:

~ $ blkid
/dev/sda3: UUID="..." TYPE="xfs" PARTLABEL="home" PARTUUID="..."
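The trailing > is the shell's continuation prompt: the single quote opened before image/* is never closed, so the shell is still waiting for input and xfs_undelete never ran. Presumably the intended invocation was ./xfs_undelete.sh -t -1hour -r 'image/*' /dev/sda3. A quick demonstration of why the closing quote matters:

```shell
# A fully quoted glob reaches the program verbatim instead of being
# expanded by the shell (or, half-quoted, swallowing the rest of the
# command line into the quoted string).
pat='image/*'
printf '%s\n' "$pat"    # → image/*
```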

Help-me

I accidentally lost some files from my server in the .ibd format (MySQL database files).

And now I'm not able to use the database.

Can I hire your services to guide me through restoring these files?

I have xfs raid-5 array mounted as /srv/media

          I have xfs raid-5 array mounted as /srv/media

root@myserver:~# df -h
/dev/md3 19T 17T 1.9T 90% /srv/media

root@myserver:~# cat /proc/mdstat
md3 : active raid5 sdg5[6] sde5[2] sdf5[5] sdd5[9] sdb5[8] sdc5[7]
19506902080 blocks super 1.2 level 5, 64k chunk, algorithm 2 [6/6] [UUUUUU]

root@myserver:~# lsblk -o PATH,FSTYPE,MOUNTPOINT /dev/md3
PATH FSTYPE MOUNTPOINT
/dev/md3 xfs /srv/media

Accidentally deleted a folder using rm (intended to delete a soft link).

Mounted an external drive :

root@myserver:~# lsblk -o PATH,FSTYPE,MOUNTPOINT /dev/sda1
PATH FSTYPE MOUNTPOINT
/dev/sda1 ext4 /root/recovery

Cloned your script using git.
Made sure all prerequisites are met:
tcl >= 8.6
tcllib
GNU coreutils
file (having magic number files with MIME type support)

Run
root@myserver:~# ./xfs_undelete -t 2023-05-01 -o /root/recovery/ /dev/md3
Got:
Starting recovery.
Done.

But the process finished way too fast for 18 TB, and 0 files were recovered.

Originally posted by @lelik77 in #34 (comment)

Clock Scan Problems and environments

I had to specify the timezone because clock scan was failing with: time value too large/small to represent

Adding this to the top of the script fixed the issue:

set env(TZ) Europe/Kiev
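An equivalent fix that avoids editing the script is to set TZ in the environment for just the one invocation, e.g. TZ=Europe/Kiev ./xfs_undelete /dev/sda3 (device path hypothetical). Demonstrated here with date(1) so the snippet is runnable anywhere:

```shell
# A variable assignment prefixed to a command sets it only in that
# command's environment; the same prefix works for the xfs_undelete run.
TZ=UTC date +%Z    # → UTC
```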

Root privileges aren't really required for block devices

If the script is passed a block device, it tries to remount it readonly, and if it gets the error mount: only root can use "--options" option from the mount command, it reports Root privileges are required to run this command on devices.

However, it is possible to have access to a block device, for example due to being a member of the "disk" group, while not having the right to mount/remount that device, so this is unnecessarily restrictive.

This is probably not the case for most users, but it would be nice if it was supported. Perhaps to keep the script "friendly", if the open of the filesystem fails due to access being denied, the script could expand on the error to suggest that the user might need to run it as root.

This is trivial to work around by commenting out the exit.
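Whether the user can already read the device without root comes down to a plain permission test; no mount privileges are needed for a read-only scan. A sketch, with /dev/null as a runnable stand-in for a real device path like /dev/sdb1:

```shell
# -r tests read permission for the current user on any path,
# block devices included (e.g. granted via the "disk" group).
can_read() { [ -r "$1" ] && echo yes || echo no; }
can_read /dev/null    # → yes; on a real box: can_read /dev/sdb1
```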

help recovering a large qcow2 file

os: OL7.9

tcl:
tcllib-1.14-1.el7.noarch
tcl-8.5.13-8.el7.x86_64

app: latest release from github

I moved (mv) a file from this filesystem to a different system, then accidentally lost the copy on the destination. On the source, I see only 6 MB of a log file as filesystem activity after I mv-ed the file; nothing else has used the XFS filesystem.

$ sudo xfs_info /dev/mapper/ol-home
meta-data=/dev/mapper/ol-home    isize=256    agcount=4, agsize=1264128 blks
         =                       sectsz=4096  attr=2, projid32bit=1
         =                       crc=0        finobt=0, sparse=0, rmapbt=0
         =                       reflink=0
data     =                       bsize=4096   blocks=5056512, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
$ file -l | grep -i qcow
Strength =  70 : QEMU QCOW Image []
$ sudo ./xfs_undelete -l
bin application/octet-stream Binary data
txt text/plain               Plain Text
[user@srv xfs_undelete-14.0]$ sudo ./xfs_undelete  -t -3hour -i ""  /dev/mapper/ol-home
Starting recovery.
Recovered file -> xfs_undeleted/2024-06-03-16-39_249.bin
Recovered file -> xfs_undeleted/2024-06-03-16-39_252.bin
Recovered file -> xfs_undeleted/2024-06-03-16-39_40519878.txt
Recovered file -> xfs_undeleted/2024-06-03-17-59_41781571.txt
Recovered file -> xfs_undeleted/2024-06-03-16-25_60678479.txt
Recovered file -> xfs_undeleted/2024-06-03-18-02_60680223.txt
Recovered file -> xfs_undeleted/2024-06-03-18-02_64116153.txt
Done.
[user@srv xfs_undelete-14.0]$ ls -lahtrs xfs_undeleted/
total 364K
4.0K drwxrwxr-x. 3 root root 4.0K Jun  3 18:15 ..
 36K -rw-r--r--. 1 root root  44K Jun  3 18:24 2024-06-03-16-39_249.bin
 84K -rw-r--r--. 1 root root 208K Jun  3 18:24 2024-06-03-16-39_252.bin
220K -rw-r--r--. 1 root root 284K Jun  3 18:24 2024-06-03-16-39_40519878.txt
4.0K -rw-r--r--. 1 root root 3.5K Jun  3 18:24 2024-06-03-17-59_41781571.txt
4.0K -rw-r--r--. 1 root root   13 Jun  3 18:24 2024-06-03-16-25_60678479.txt
4.0K -rw-r--r--. 1 root root   13 Jun  3 18:24 2024-06-03-18-02_60680223.txt
4.0K drwxr-xr-x. 2 root root 4.0K Jun  3 18:24 .
4.0K -rw-r--r--. 1 root root   13 Jun  3 18:24 2024-06-03-18-02_64116153.txt

is there any chance to recover that 12GB qcow2 file?
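One quick check before digging further: whether any of the recovered blobs actually begins with the QCOW magic bytes "QFI\xfb" that file(1) keys on, e.g. file xfs_undeleted/* | grep -i qcow. A self-contained demonstration with a synthetic header (the /tmp path is just an example):

```shell
# QCOW2 images start with the magic "QFI\xfb" followed by a big-endian
# version number; file(1) identifies them from those first 8 bytes.
printf 'QFI\373\000\000\000\003' > /tmp/fake.qcow2
file -b /tmp/fake.qcow2    # reports a QEMU QCOW image
```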
