partclone's People

Contributors

boretom, cdeleuze, dannf, dimstar77, gelma, hdkigelcojp, joergmlpts, jowagner, junkhacker, jwilk, kenhys, mdbooth, mjoerg, moriyama, okkez, peterdavehello, pfrouleau, pushkk, rindeal, robert-scheck, samsonovanton, sebastian-roth, stevenshiau, thomas-tsai, tjjh89017, tmr5454, vasi, vnwildman, xiangzhai, zboszor


partclone's Issues

partclone.btrfs segfault on one half of a RAID1 mirror

OS: Debian Buster/Sid

My home server has a single small disk for boot/root/swap/var, and then a pair of disks in a RAID1 mirror for bulk data (media files and home directories).

The RAID setup is a Btrfs RAID1 mirror built out of 2 whole disks. The disks are GPT formatted with no partitions.

The Btrfs filesystem says it is running OK, but one disk is reporting SMART errors, so I want to back up the data and swap out the dodgy disk before it fails.

Note
It seems strange to me to clone a RAID stripe or mirror one piece (disk) at a time, but it's not obvious that there's a better way to do it, so that's what I have been trying to do.

Questions

  1. Running partclone.btrfs with either of the two RAID disks as the input device generates a warning that device 1 or device 2 is missing. I guess that's technically true, but is that OK, given that there is no way to clone the mirror as a whole (i.e. reading both disks at the same time)?
  2. In a 2-way RAID1 mirror, in theory a copy of EITHER single disk ought to be sufficient to recover the filesystem, because each device stores one redundant copy of the data. However, given the way Btrfs spreads data blocks around devices, if you had (say) a RAID1 mirror made out of 3 disks (this is valid in Btrfs), I assume you'd need clones of 2 out of the 3 to be able to get the data back?

Symptoms

  • Running partclone.btrfs on "device 1" in the mirror warns that "device 2" is missing, but appears to then run to completion.

  • Running partclone.btrfs on "device 2" dies with the following output:

warning, device 1 is missing 
checksum verify failed on 698067648512 found E4E3BDB6 wanted 00000000 
bytenr mismatch, want=638067648512, have=0
Segmentation fault 
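
For reference, the failing run was of this general shape (the device names here are illustrative, not necessarily the exact ones used):

    partclone.btrfs -c -s /dev/sdc -o sdc.img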

I found one similar issue (#64) describing a segfault in partclone.btrfs, but that was 2-3 years ago and didn't specifically relate to a RAID component, so I'm not sure whether this is the same bug.

please, make partclone.dd generate sparse special image files

It seems that partclone.dd outputs the same data whether or not it is in clone mode (-c).

It would be very useful to have the clone mode search for zeros and omit those from the image file, similarly to the way the unused blocks of a filesystem are omitted. This way, backups of empty partitions, or of unknown filesystems that are mostly still unused, would take up less space.
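
As a rough illustration of the requested zero detection (a sketch only, assuming 4 KiB blocks and a hypothetical block index $blk; a real implementation would mark the block as unused in the image bitmap rather than echo):

    # build a 4 KiB reference block of zeros once
    dd if=/dev/zero of=zero.blk bs=4096 count=1 2>/dev/null
    # a source block that matches it could be omitted from the image
    dd if=/dev/sdX1 bs=4096 skip=$blk count=1 2>/dev/null \
      | cmp -s - zero.blk && echo "block $blk is all zeros, skip it"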

Use of uninitialized value in src/xfsclone.c

$ ./configure --prefix=/tmp/x \
  --enable-xfs \
  --enable-extfs \
  --enable-exfat \
  --enable-fat \
  --enable-ntfs \
  --enable-hfsp \
  --enable-vmfs \
  CFLAGS="-g -std=gnu99 -Wall"
$ make 2> make.log

I found the warning below, but I couldn't work out how to fix it.

Please check the code around that area.

xfsclone.c: In function 'readbitmap':
xfsclone.c:350: warning: 'wblocks' may be used uninitialized in this function

Ability to override image file name display in progress GUI

When restoring a compressed image, the data is fed to partclone via stdin. On the progress GUI this shows up as "Starting to restore image (-) to ...".

It would be helpful if we could override this image name using the command line, so that we could specify a real image file name to show in the GUI instead of stdin.
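
A hypothetical invocation might look like this (the --label option does not exist; it is only a sketch of the requested interface):

    zcat sda1.img.gz | partclone.restore --label "sda1.img" -s - -o /dev/sda1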

Basic ntfs test fails due to 255kiB rather than 1MiB floppy.raw

Reproducible with Fedora 27 and ntfsprogs-2017.3.23-3.fc27:

FAIL: ntfs
==========
Basic ntfs test
==========================
create raw file floppy.raw
    dd if=/dev/zero of=floppy.raw bs=1024 count=256
256+0 records in
256+0 records out
262144 bytes (262 kB, 256 KiB) copied, 0.000423731 s, 619 MB/s
format floppy.raw as ntfs raw partition
    /usr/sbin/mkfs.ntfs -f -F floppy.raw
floppy.raw is not a block device.
mkntfs forced anyway.
The sector size was not specified for floppy.raw and it could not be obtained automatically.  It has been set to 512 bytes.
The partition start sector was not specified for floppy.raw and it could not be obtained automatically.  It has been set to 0.
The number of sectors per track was not specified for floppy.raw and it could not be obtained automatically.  It has been set to 0.
The number of heads was not specified for floppy.raw and it could not be obtained automatically.  It has been set to 0.
Device is too small (255kiB).  Minimum NTFS volume size is 1MiB.
FAIL ntfs.test (exit status: 1)
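
A likely fix, assuming nothing else in the test depends on the 256 KiB size, is to enlarge the test image to at least the 1 MiB NTFS minimum:

    dd if=/dev/zero of=floppy.raw bs=1024 count=2048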

Introducing libpartclone

Hi Thomas,

Thanks for your great work developing the partclone utilities!

We are developing an open-source alternative to Ghost, a Qt front end for partclone, so we are turning partclone into libpartclone:

  1. change exit calls to returns;
  2. change some of the implementation in the ControlFlow;
  3. make it plugin-based;
  4. provide a C/C++ wrapper with exportable symbols;
  5. review the code with the clang analyzer and sanitizers;
  6. keep it under heavy test cases.

Thanks again for your work!

Regards,
Leslie Zhai

please, make the special image format available for data from stdin

This is for the case when one wishes to back up hard-disk areas not assigned to any partition (partition-table data, or areas not yet used):

dd if=/dev/sda bs=512 skip=num count=num | gzip -c | ssh host -- storage_cmd

It would be faster (and similar to the backup of a partition) to replace gzip with a program converting the data (which may contain a lot of zeros) to the partclone special image format.

The following command is not possible according to the manpage, though it is close to the solution:

dd <args> | partclone.dd -c -s - -o - | ssh host -- <storage_cmd>

The reverse of it is already possible:

ssh <args> | partclone.dd -r -s - -o - | dd <args>

Btrfs support is broken

Btrfs support is broken (v0.2.88 and earlier). The backup seems to work, but the restore always fails.

I assume the metadata is not stored correctly.

After restoring, the mount command always fails. Tested under kernel 4.6.2 with btrfs-utils 4.4.

Maybe moving to the native btrfs send/receive mechanism would work better. Subvolumes should also be supported (at minimum, saving all subvolumes).
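
For comparison, the native mechanism referred to above operates on read-only snapshots, along these lines (the paths are examples):

    # create a read-only snapshot and serialize it to a stream
    btrfs subvolume snapshot -r /mnt/data /mnt/data/snap
    btrfs send /mnt/data/snap | gzip > data-snap.btrfs.gz
    # restore into another btrfs filesystem
    zcat data-snap.btrfs.gz | btrfs receive /mnt/restore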

Basic hfsplus test fails due to 256kiB rather than 512kiB floppy.raw

Reproducible with Fedora 27 and hfsplus-tools-540.1.linux3-14.fc27:

FAIL: hfsplus
=============
Basic hfsplus test
==========================
create raw file floppy.raw
    dd if=/dev/zero of=floppy.raw bs=1024 count=256
256+0 records in
256+0 records out
262144 bytes (262 kB, 256 KiB) copied, 0.000430515 s, 609 MB/s
format floppy.raw as hfsplus raw partition
    /usr/sbin/mkfs.hfsplus  floppy.raw
mkfs.hfsplus: floppy.raw: partition is too small (minimum is 512 KB)
FAIL hfsplus.test (exit status: 1)

partclone quiet option

Hi,

I am experiencing the following issue.

I use partclone as a backup solution from an LVM snapshot.
When I create an image using partclone like the following:

partclone.ext3 -c -s /dev/hda1 -o hda1.img
the remaining time for creating the .img file is about 3 minutes.

As soon as I use the quiet option,

partclone.ext3 -c -q -s /dev/hda1 -o hda1.img

the remaining time becomes over 2 hours.

When I try to pipe the whole thing through gzip,

partclone.ext4 -c -q -s /dev/SERVEUR/system_snap | gzip -5 > /home/system_rescue/system_snap.img.gz

it seems to be a never-ending process.

Be aware that without the "-q" option, partclone with gzip takes about 5 minutes.

Also, the quiet option makes the partclone logfile too big to be readable and freezes the session.

Am I doing something wrong? Is there a workaround?

Kind regards, athan
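
One hedged workaround, assuming the progress display is what -q suppresses and that it goes to stderr: keep the default (fast) mode and redirect stderr instead of passing -q:

    partclone.ext3 -c -s /dev/hda1 -o hda1.img 2>/dev/null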

Please add -L (--logfile) option to partclone.info

Currently, partclone.info always tries to open a log file at /var/log/partclone.log and, unlike the other partclone.* programs, there's no -L option to change the location of the logfile.

This has the unfortunate side effect that partclone.info can only be run by the root user. If the location of the logfile could be specified by a command-line parameter, then partclone.info could also be run by non-root users.
(I tested this by temporarily hacking src/info.c to write its logfile to /tmp/partclone.log, and then an ordinary user was able to run partclone.info without problems.)

partclone.btrfs : segfault when calculating bitmap

I have a 16 GB Btrfs boot partition on one of my systems, and I want to clone it. But each time I use partclone, I get a segmentation fault during the bitmap calculation. I can still clone it using partimage/dd mode, of course, but that is quite slow.
This bug occurs with all the latest Clonezilla versions, since August 2015 (not tested with earlier versions).
Any ideas or debugging hints?

Feature request: option to omit unneeded writes during --dev-to-dev

This is a request to add a new option for --dev-to-dev, similar to the -c option of e2image: it attempts to read each destination block and, if it is identical to the source block, skips writing it.
E.g., if the new option is "--sync":

partclone --dev-to-dev --sync -s /dev/sdX2 -o /dev/sdY2

This is useful when updating a cloned partition on a device where reads are much faster than writes and/or where writes are not desirable (e.g. flash devices, SSDs, etc.).
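
A rough userspace illustration of the compare-before-write idea (a sketch only, using 1 MiB chunks and example devices; a real implementation would live inside partclone's write path):

    bs=1048576 i=0
    while dd if=/dev/sdX2 of=src.chunk bs=$bs skip=$i count=1 2>/dev/null \
          && [ -s src.chunk ]; do
        dd if=/dev/sdY2 of=dst.chunk bs=$bs skip=$i count=1 2>/dev/null
        # write the chunk back only if it actually differs
        cmp -s src.chunk dst.chunk || \
            dd if=src.chunk of=/dev/sdY2 bs=$bs seek=$i conv=notrunc 2>/dev/null
        i=$((i + 1))
    done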

Thanks for considering this!

list bad files

Dear @Thomas-Tsai ,

after three months, partclone finished creating the backup of my drive, which is currently in a critically bad state.

Judging by the size of the logfile, which is 22 gigabytes, many errors occurred during the backup process.

Now I want to restore the image to a factory-new drive of the same physical size and list all the files affected by bad blocks.

Are there any options available to achieve this? I have found no answer in the wiki or man pages, so I want to ask here for your help and support.

best regards
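
A hedged sketch for ext2/3/4 filesystems (other filesystems need different tools, and the block size here is an assumption): collect the bad block numbers, then map blocks to inodes and inodes to path names with debugfs:

    badblocks -b 4096 -sv /dev/sdX1 > bad-blocks.txt
    # map filesystem block numbers to inode numbers
    debugfs -R "icheck $(tr '\n' ' ' < bad-blocks.txt)" /dev/sdX1
    # map the inode numbers reported above to path names
    debugfs -R "ncheck <inode numbers from icheck>" /dev/sdX1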

Restore/checking an image can fail because they expect an extra checksum

While working on another change, I got an error during partclone.chkimg. I'm still working on a patch, but I'm posting this in case someone else is trying to pinpoint the errors.

partclone tries to read a buffer with 4 bytes more than what is left to read during restore/checking. The creation of the image works fine, i.e. the images will be usable once this is fixed.

The fix is mostly in the "partial chunk" condition; we have to add "copied":
} else if (blocks_per_cs && blocks_read < buffer_capacity &&
((copied+blocks_read) % blocks_per_cs)) {

However, I get another failure on master even without that fix, but I think it may be related. partclone.chkimg reports an error when checking an ext4 image without any checksums, i.e.:
partclone.extfs -c -F -d -s floppy.raw -o floppy.img -a 0 -L clone.log
partclone.chkimg -s floppy.img -L chk.log
which gives: ERROR: source image too short

But maybe the problem is strictly related to ext4, because everything is fine with ext2 and ext3.

To make it easy to test, edit these two lines in mini_clone_restore_test:
cs_a=(1 0 1 1 1 1 1)
cs_k=(1 0 17 1 64 0 3097)
add a "1" at the beginning of each array.
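
After that edit, the arrays would read:

    cs_a=(1 1 0 1 1 1 1 1)
    cs_k=(1 1 0 17 1 64 0 3097)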

I would like it if someone could confirm that "./mini_clone_restore_test ext4" fails on their system, while it is fine with ext2 or ext3. I'm using e2fsprogs-dev 1.42.9-3ubuntu1 on Lubuntu.

CVE-2016-10721: Restore Heap Overflow

=======================================================================
 title    : Partclone Restore Heap Overflow
 product  : Partclone
 version  : 0.2.87
 homepage : http://partclone.org/
 found    : 2016-01-17
 by       : David Gnedt
=======================================================================

Vendor description:
-------------------

Partclone is a partition imaging tool. Partclone is shipped by various
Linux distributions and used by specialized disk cloning systems like
DRBL (http://drbl.org/), Clonezilla (http://clonezilla.org/),
Redo Backup (http://redobackup.org/), ...


Vulnerability overview/description:
-----------------------------------

partclone.restore is prone to a heap-based buffer overflow
vulnerability due to insufficient validation of the partclone image
header. An attacker may be able to execute arbitrary code in the
context of the user running the affected application.

<details stripped until public fix is available>


Proof of concept:
-----------------

<details stripped until public fix is available>


Vulnerable/tested versions:
---------------------------

The vulnerability is verified to exist in version 0.2.87 of Partclone,
which was the most recent version at the time of discovery.
Older versions are probably affected as well.

partclone.ntfs misses 4096 bytes

Probably a duplicate of #54 (Include Boot Sector).

When creating an NTFS image with partclone v0.2.83, the restored image is missing one physical sector of 4096 bytes. Checked with various NTFS partitions.

Physical Device:
Disk /dev/sdc: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0xf47a1902
Device     Boot   Start        End    Sectors   Size Id Type
/dev/sdc1  *       2048     206847     204800   100M  7 HPFS/NTFS/exFAT
/dev/sdc2        206848 1953521663 1953314816 931.4G  7 HPFS/NTFS/exFAT

Partclone Creation:
partclone.ntfs -c -o willi.img -s /dev/sdc2
Partclone Restore:
partclone.ntfs -r -s willi.img -o myimage.iso (-W, raw file, does not make any difference)
myimage.iso cannot be loop-mounted because of the missing 4096 bytes.

On the other hand, ntfsclone v2014.2.15 (libntfs-3g) works correctly:
ntfsclone -s -o willi.img /dev/sdc2
and the restore:
ntfsclone -r -o myimage.iso willi.img
myimage.iso can be loop-mounted and shows all the NTFS files.

Partclone 0.3.5a fails to build against RHEL/CentOS 6 on ppc64 (only)

Partclone 0.3.5a fails to build against RHEL/CentOS 6 on ppc64 (only), it works on other architectures, such as i686 or x86_64.

…
gcc -DHAVE_CONFIG_H -I. -I..  -DLOCALEDIR=\"/usr/share/locale\" -D_FILE_OFFSET_BITS=64  -DF2FS -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mminimal-toc -Wall -MT partclone_f2fs-main.o -MD -MP -MF .deps/partclone_f2fs-main.Tpo -c -o partclone_f2fs-main.o `test -f 'main.c' || echo './'`main.c
In file included from ./xfs/linux.h:32,
                 from xfs/xfs.h:37,
                 from xfs/libxfs.h:24,
                 from xfsclone.c:18:
/usr/include/asm/types.h:31: error: conflicting types for 'umode_t'
xfs/platform_defs.h:50: note: previous declaration of 'umode_t' was here
main.c: In function 'main':
main.c:845: warning: format '%08llX' expects type 'long long unsigned int', but argument 3 has type 'off_t'
make[2]: *** [partclone_xfs-xfsclone.o] Error 1

Even though the (re)definition of umode_t in xfs/platform_defs.h:50 is wrapped in HAVE_UMODE_T, no code ever tests for the type's existence and sets the define via ./configure accordingly. Setting CFLAGS=-DHAVE_UMODE_T myself for ppc64 builds on RHEL/CentOS 6 works around the issue.
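
A hedged sketch of a configure.ac check that would set HAVE_UMODE_T automatically (assuming asm/types.h is the header providing the type on the affected platform):

    AC_CHECK_TYPES([umode_t], [], [], [[#include <asm/types.h>]])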

prompt to ask if the error should be ignored

What I was proposing was that, if it wasn't on, then when an error occurs the user gets a prompt asking whether the error should be ignored, the run terminated, or this and all future errors ignored, rather than what happens at the moment, which is that everything stops.

idea from david.wilcox

configure.ac is not cross-compile friendly

I'm trying to get partclone to compile under Buildroot (a tool to generate embedded Linux systems using cross-compilation), but there are some issues with how configure.ac was written. For example, checking the version of libext2fs could be replaced with the following line:

PKG_CHECK_MODULES([LIBEXT2FS], [libext2fs >= 1.42])

More details here:
http://lists.busybox.net/pipermail/buildroot/2014-October/108460.html
http://lists.busybox.net/pipermail/buildroot/2014-October/109074.html

Xubuntu 14.04, make fails [partclone_xfs-xfsclone.o] Error 1

I'm trying to get partclone to build from source because the version provided in the normal Xubuntu repos for 14.04.1 wasn't working to mount a Clonezilla image I had created of my old / partition (ext4). So after tons of downloading all the extra patched libraries (from here: http://free.nchc.org.tw/drbl-core/pool/drbl/dev/ ) and installing them, I finally got

./configure --enable-ncursesw --enable-all

to work, BUT now when I try to run make, it fails, and I'm at a loss as to why because I've made sure I installed all the patched XFS libraries.

here's terminal from running the configure command: http://pastebin.com/qg5hKJYv

here's the terminal output when trying to run make: http://pastebin.com/LDNksk0Y

I have both xfslibs-dev_3.2.1+drbl1_amd64.deb and xfsprogs_3.2.1+drbl1_amd64.deb installed from the patched libraries.

Recovery files from partclone image without restore to a raw disk file? (like Acronis mount tib file)?

I have 100GB of files stored on a 3TB HDD (NTFS).
I backed it up using partclone, which produced a 100GB partclone.pcl image file.
Now I want to recover the files (I don't care about the MBR or other unnecessary stuff, I just need the files).

All the tutorials online ask me to restore it to an sda1.raw dd image (which would probably be 3TB?), since the partclone.pcl image cannot be mounted.
The problem is that I don't have 3TB of free disk space... (external USB disks are too slow, so not a good option).

I know Linux is good at piping and redirecting. So is there a way to extract the 100GB of files without generating a huge, useless 3TB image file?
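
One hedged approach, assuming the target filesystem supports sparse files and that partclone only writes the blocks in use when restoring to a file: restore into a sparse raw file, which should consume roughly 100GB of real space even though it appears to be 3TB, then loop-mount it:

    partclone.ntfs -r -s backup.pcl -o sda1.raw
    mount -o loop,ro sda1.raw /mnt
    du -h sda1.raw    # real usage, far below the apparent size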

BTW, is it possible to release partclone builds on Windows?

Partclone `0.2.91` > `0.3.6` ntfs regression

I'm trying to restore NTFS partitions imaged using partclone 0.2.91 with 0.3.6, but when ntfsresize runs afterwards, the following error comes up:

ntfsresize -f -f /dev/mmcblk1p3
ntfsresize v2016.2.22AR.2 (libntfs-3g)
ntfs_mst_post_read_fixup_warn: magic: 0x666f736f   size: 1024    usa_ofs:11892    use_count: 26967: Invalid argument
Record 0 has no FILE magic (0x666f736f)
Failed to load $MFT: Input/output error
ERROR(5): Opening '/dev/mmcblk1p3' as NTFS failed: Input/output error
NTFS is inconsistent. Run chkdsk /f on Windows then reboot it TWICE!

There is also an issue open in Clonezilla, but my suspicion is that it might be an issue with partclone: stevenshiau/clonezilla#31 (comment)

Re-open source after read error to avoid throttling by OS

ddrescue includes an option to reopen the input file after a read error (-O / --reopen-on-error). This behaviour can be very important, because the kernel often responds to read errors by drastically reducing the maximum read size of the file descriptor.

After encountering a bad sector on my system (running the latest Clonezilla boot image), every call to read() causes an IRQ and context switch for every 4KiB received from the device, which seriously kills throughput.

This can be avoided by reopening the input file after each read error. I've attached a patch to demonstrate. My clone finished over 4x faster this way.

(I claim no copyright or ownership over the code attached, in case anybody cares)

reopen_source.patch.txt

Description of the --rescue is not accurate

Hi Thomas, thanks for developing this useful open-source tool! Partclone has saved 85% of my data on a dying disk. However, I think the description of -R or --rescue may not be accurate enough.

The "-R, --rescue" option is described as "Continue after disk read errors." in the usage (http://www.partclone.org/usage/partclone.php). However, after reading the source code, I think -R or --rescue actually performs a bad-sector backup function: apart from skipping bad sectors, it tries its best to read them, in order to save more data.

Passing --disable-<fs> to configure enables <fs>

Fixed for me in 0.2.48 with the patch below. The root cause: with AC_ARG_ENABLE, the action-if-given branch runs for both --enable-<fs> and --disable-<fs>, so unconditionally setting enable_<fs>=yes turns --disable-<fs> into an enable; using ${enableval} picks up the actual yes/no value instead.

--- configure.ac.orig
+++ configure.ac
@@ -44,7 +44,7 @@
 ##ext2/3##
 AC_ARG_ENABLE(extfs,
    AS_HELP_STRING(--enable-extfs,enable ext2/3/4 file system), 
-   enable_extfs=yes,
+   enable_extfs=${enableval},
    enable_extfs=no
 )
 AM_CONDITIONAL(ENABLE_EXTFS, test "$enable_extfs" = yes)
@@ -100,7 +100,7 @@
 ##XFS##
 AC_ARG_ENABLE(xfs,
    AS_HELP_STRING(--enable-xfs,enable XFS file system), 
-   enable_xfs=yes,
+   enable_xfs=${enableval},
    enable_xfs=no
 )
 AM_CONDITIONAL(ENABLE_XFS, test "$enable_xfs" = yes)
@@ -125,7 +125,7 @@
 ##reiserfs##
 AC_ARG_ENABLE(reiserfs,
    AS_HELP_STRING(--enable-reiserfs,enable REISERFS 3.6/3.6 file system), 
-   enable_reiserfs=yes,
+   enable_reiserfs=${enableval},
    enable_reiserfs=no
 )
 AM_CONDITIONAL(ENABLE_REISERFS, test "$enable_reiserfs" = yes)
@@ -179,7 +179,7 @@
 ##reiser4##
 AC_ARG_ENABLE(reiser4,
    AS_HELP_STRING(--enable-reiser4,enable Reiser4 file system), 
-   enable_reiser4=yes,
+   enable_reiser4=${enableval},
    enable_reiser4=no
 )
 AM_CONDITIONAL(ENABLE_REISER4, test "$enable_reiser4" = yes)
@@ -231,7 +231,7 @@
 ##hfs plus##
 AC_ARG_ENABLE(hfsp,
    AS_HELP_STRING(--enable-hfsp,enable HFS plus file system), 
-   enable_hfsp=yes,
+   enable_hfsp=${enableval},
    enable_hfsp=no
 )
 AM_CONDITIONAL(ENABLE_HFSP, test "$enable_hfsp" = yes)
@@ -245,7 +245,7 @@
 ##fat##
 AC_ARG_ENABLE(fat,
    AS_HELP_STRING(--enable-fat,enable FAT file system), 
-   enable_fat=yes,
+   enable_fat=${enableval},
    enable_fat=no
 )
 AM_CONDITIONAL(ENABLE_FAT, test "$enable_fat" = yes)
@@ -259,7 +259,7 @@
 ##NTFS##
 AC_ARG_ENABLE(ntfs,
    AS_HELP_STRING(--enable-ntfs,enable NTFS file system), 
-   enable_ntfs=yes,
+   enable_ntfs=${enableval},
    enable_ntfs=no
 )

@@ -333,7 +333,7 @@
 ##UFS##
 AC_ARG_ENABLE(ufs,
    AS_HELP_STRING(--enable-ufs,enable UFS(1/2) file system), 
-   enable_ufs=yes,
+   enable_ufs=${enableval},
    enable_ufs=no
 )
 AM_CONDITIONAL(ENABLE_UFS, test "$enable_ufs" = yes)
@@ -353,7 +353,7 @@
 ##VMFS##
 AC_ARG_ENABLE(vmfs,
    AS_HELP_STRING(--enable-vmfs,enable vmfs file system), 
-   enable_vmfs=yes,
+   enable_vmfs=${enableval},
    enable_vmfs=no
 )
 AM_CONDITIONAL(ENABLE_VMFS, test "$enable_vmfs" = yes)
@@ -377,7 +377,7 @@
 ##JFS##
 AC_ARG_ENABLE(jfs,
    AS_HELP_STRING(--enable-jfs,enable jfs file system), 
-   enable_jfs=yes,
+   enable_jfs=${enableval},
    enable_jfs=no
 )
 AM_CONDITIONAL(ENABLE_JFS, test "$enable_jfs" = yes)
@@ -400,7 +400,7 @@
 ##btrfs##
 AC_ARG_ENABLE(btrfs,
    AS_HELP_STRING(--enable-btrfs,enable btrfs file system), 
-   enable_btrfs=yes,
+   enable_btrfs=${enableval},
    enable_btrfs=no
 )
 AM_CONDITIONAL(ENABLE_BTRFS, test "$enable_btrfs" = yes)
@@ -415,7 +415,7 @@
 ##libncursesw##
 AC_ARG_ENABLE(ncursesw,
    AS_HELP_STRING(--enable-ncursesw,enable TEXT User Interface), 
-   enable_ncursesw=yes,
+   enable_ncursesw=${enableval},
 )
 AM_CONDITIONAL(ENABLE_NCURSESW, test "$enable_ncursesw" = yes)

@@ -434,7 +434,7 @@
 ##static linking##
 AC_ARG_ENABLE(static,
    AS_HELP_STRING(--enable-static, enable static linking), 
-   enable_static=yes,
+   enable_static=${enableval},
 )
 AM_CONDITIONAL(ENABLE_STATIC, test "$enable_static" = yes)

@@ -445,7 +445,7 @@
 ##memory tracing##
 AC_ARG_ENABLE(mtrace,
    AS_HELP_STRING(--enable-mtrace, enable memory tracing), 
-   enable_memtrace=yes,
+   enable_memtrace=${enableval},
 )
 AM_CONDITIONAL(ENABLE_MEMTRACE, test "$enable_memtrace" = yes)

CVE-2016-10722: FAT Bitmap Heap Overflow

=======================================================================
 title    : Partclone FAT Bitmap Heap Overflow
 product  : Partclone
 version  : 0.2.87
 homepage : http://partclone.org/
 found    : 2016-01-03
 by       : David Gnedt
=======================================================================

Vendor description:
-------------------

Partclone is a partition imaging tool supporting the FAT filesystem.
Partclone is shipped by various Linux distributions and used by
specialized disk cloning systems like DRBL (http://drbl.org/),
Clonezilla (http://clonezilla.org/), Redo Backup
(http://redobackup.org/), ...


Vulnerability overview/description:
-----------------------------------

partclone.fat is prone to a heap-based buffer overflow vulnerability
due to insufficient validation of the FAT superblock. An attacker may
be able to execute arbitrary code in the context of the user running
the affected application.

<details stripped until public fix is available>


Proof of concept:
-----------------

<details stripped until public fix is available>


Vulnerable/tested versions:
---------------------------

The vulnerability is verified to exist in version 0.2.87 of Partclone,
which was the most recent version at the time of discovery.
Older versions are probably affected as well.

reiserfs.h?

Hi, I'm stuck while running ./configure. Here's what I'm getting (Fedora 16):

$ sudo ./configure --enable-ncursesw --enable-all
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for a thread-safe mkdir -p... /bin/mkdir -p
checking for gawk... gawk
checking whether make sets $(MAKE)... yes
checking whether NLS is requested... yes
checking for msgfmt... /usr/bin/msgfmt
checking for gmsgfmt... /usr/bin/msgfmt
checking for xgettext... /usr/bin/xgettext
checking for msgmerge... /usr/bin/msgmerge
checking for style of include used by make... GNU
checking for gcc... gcc
checking whether the C compiler works... yes
checking for C compiler default output file name... a.out
checking for suffix of executables...
checking whether we are cross compiling... no
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for gcc option to accept ISO C89... none needed
checking dependency style of gcc... gcc3
checking build system type... x86_64-unknown-linux-gnu
checking host system type... x86_64-unknown-linux-gnu
checking for ld used by GCC... /usr/bin/ld
checking if the linker (/usr/bin/ld) is GNU ld... yes
checking for shared library run path origin... done
checking for CFPreferencesCopyAppValue... no
checking for CFLocaleCopyCurrent... no
checking for GNU gettext in libc... yes
checking whether to use NLS... yes
checking where the gettext function comes from... libc
checking for gcc... (cached) gcc
checking whether we are using the GNU C compiler... (cached) yes
checking whether gcc accepts -g... (cached) yes
checking for gcc option to accept ISO C89... (cached) none needed
checking dependency style of gcc... (cached) gcc3
checking for rm... /bin/rm
checking whether ln -s works... yes
checking for special C compiler options needed for large files... no
checking for _FILE_OFFSET_BITS value needed for large files... no
checking for pkg-config... /usr/bin/pkg-config
checking pkg-config is at least version 0.9.0... yes
checking for UUID... yes
checking for pthread_create in -lpthread... yes
configure: checking for EXT2/3 Library and Header files ... ...
checking how to run the C preprocessor... gcc -E
checking for grep that handles long lines and -e... /bin/grep
checking for egrep... /bin/grep -E
checking for ANSI C header files... yes
checking for sys/types.h... yes
checking for sys/stat.h... yes
checking for stdlib.h... yes
checking for string.h... yes
checking for memory.h... yes
checking for strings.h... yes
checking for inttypes.h... yes
checking for stdint.h... yes
checking for unistd.h... yes
checking ext2fs/ext2fs.h usability... yes
checking ext2fs/ext2fs.h presence... yes
checking for ext2fs/ext2fs.h... yes
checking for ext2fs_initialize in -lext2fs... yes
checking version of libextfs... 1.41.14; suggest you upgrade the library!
configure: WARNING: Suggest upgrade your libextfs to 1.42 or newer version!
checking for aio_init in -lrt... yes
configure: checking for XFS Library and Header files ... ...
checking xfs/libxfs.h usability... yes
checking xfs/libxfs.h presence... yes
checking for xfs/libxfs.h... yes
configure: checking for Reiserfs Library and Header files ... ...
checking reiserfs/reiserfs.h usability... no
checking reiserfs/reiserfs.h presence... no
checking for reiserfs/reiserfs.h... no
configure: error: *** reiserfs header files (reiserfs/reiserfs.h) not found
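
The failing check is for the libreiserfs development headers; a hedged way around it (package names vary by distribution) is to install those headers, or to configure with only the filesystems actually needed instead of --enable-all, e.g.:

    ./configure --enable-ncursesw --enable-extfs --enable-ntfs --enable-fat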

3.12 build process

Between 3.11 and 3.12, what are the changes needed to build? For 3.11 I just ran ./configure, make, make install, but 3.12 doesn't include a configure file.
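
A hedged guess, since this is standard autotools practice: release tarballs ship a pre-generated configure script while the raw source tree does not, and it can be generated with autoreconf:

    autoreconf -fi
    ./configure
    make && make install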

Bitmap fails with offset messages

Hi. When I try to create an image of some Btrfs partitions, I get a million messages like the following:

btrfsclone.c: offset(134957510656) larger than device size(53686043136), skip it.
btrfsclone.c: offset(134960365568) larger than device size(53686043136), skip it.
btrfsclone.c: offset(134954635264) larger than device size(53686043136), skip it.
btrfsclone.c: offset(134958198784) larger than device size(53686043136), skip it.
btrfsclone.c: offset(134957887488) larger than device size(53686043136), skip it.

btrfs check runs but doesn't seem to make a difference.

Partclone.ntfs doesn't detect hibernated Windows 10 partition

By default, Windows 10 uses "hybrid sleep" when the user shuts down the machine. When partclone.ntfs images the Windows partition while it's in that state, it will happily create an image, but the resulting image will be corrupted. This means that restoring such an image will result in an unbootable system. The proper (and simple) fix would be for partclone.ntfs to detect the hibernated state (e.g. by detecting that the volume is dirty, like ntfs-3g does) and refuse to image it.
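
As a hedged illustration of such a pre-flight check, ntfs-3g's ntfsfix can report a dirty volume without modifying it (the device name is an example):

    # --no-action: report only, write nothing
    ntfsfix --no-action /dev/sda2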

Incorrect FSF address

I'm packaging Partclone for Fedora Linux, and I'm required by Fedora policy to notify you that many files in the Partclone distribution contain an out-of-date postal address for the Free Software Foundation.

Odd Slow Writing to New m.2 SSDs

I use FOG server at my company, and it uses partclone to restore images over the network from an NFS store.

We have been using FOG for a long time, with many machines, and our restores are usually at GbE speed.

We recently bought a bunch of new machines, and in the process of building an image and restoring to these new machines, I found that I was getting very poor performance in the deploy (restore) - on the order of 450MB/min as opposed to the 6.5GB/min we were seeing on the older machines.

After much debugging, documented on the FOG Project forums, I think I've narrowed it down to a partclone.restore interaction with the m.2 SSD in the new machines. I don't think it's a driver-level problem, because direct writing (like dd if=/dev/zero of=/mnt/m2ssd/test1.img bs=1G count=1 oflag=direct) gives 700+MB/sec.

partclone.restore reading from an image on an NFS mount over GbE and writing to /dev/null or to a fast USB3 external SSD show reasonable speed, 6+GB/minute.

partclone.restore reading from an image file on the USB3 SSD and writing to /dev/null shows roughly 12GB/min.

partclone.restore reading from an image file on the m.2 drive and writing to the USB3 SSD shows 14GB/min.

HOWEVER, partclone.restore from an image file on the USB3 SSD to the m.2 drive shows 10GB/min to start, 2GB/min at 5%, and between 400-500MB/min by 50%.

To exclude the FOG Project's minimal client kernel, I booted one of these machines in Ubuntu 18.04 and had similar results.

Other ways of writing to the m.2 drive seem fine: At the filesystem level (ext4), rsync or cp from NFS to the m.2 ssd gives near-GbE speeds. rsync or cp from the USB3 SSD to the m.2 SSD show near-USB3 speeds. At the block level, dd of /dev/zero to the device of the partition is an astounding 925MB/s.

I turned up partclone's debug level to -d2, and the only thing I see in the log that might have something to do with the slow writing is fragmented writes. I'm using the default buffer of 1MB, and io_all shows full-buffer reads followed by multiple writes.

io_all: read 1049600, 0 left.
io_all: write 1048576, 0 left.
io_all: read 1049600, 0 left.
io_all: write 98304, 0 left.
io_all: write 950272, 0 left.
io_all: read 1049600, 0 left.
io_all: write 679936, 0 left.
io_all: write 98304, 0 left.
io_all: write 192512, 0 left.
io_all: write 77824, 0 left.
io_all: read 1049600, 0 left.
io_all: write 655360, 0 left.
io_all: write 12288, 0 left.
io_all: write 8192, 0 left.
io_all: write 372736, 0 left.
io_all: read 1049600, 0 left.
io_all: write 466944, 0 left.
io_all: write 4096, 0 left.
io_all: write 4096, 0 left.
io_all: write 20480, 0 left.
io_all: write 36864, 0 left.
io_all: write 516096, 0 left.
io_all: read 1049600, 0 left.
io_all: write 1048576, 0 left.

I tried with a 4MB buffer, but that didn't make a difference.

I don't know enough about Linux system internals or partclone to understand what the difference might be between cp, rsync, raw block-level dd, and the way partclone writes.

Are there any kernel tunables which might help, or is this a deeper problem?

Restore to offset

It would be good to have an option to restore a partition to an offset within the output file.

E.g. if I have a copy of an MBR and a partclone-saved partition, to restore to an image file I have to:

cat ${mbr_file} > file.img
truncate -s ${size_of_disk} file.img
losetup -o ${offset_of_partition} -f --show file.img
cat sda1.ntfs-ptcl-img.gz.* | zcat | partclone.restore -o /dev/loop0
losetup -d /dev/loop0

If partclone.restore had an option to provide an offset, this sequence could be reduced to:

cat ${mbr_file} > file.img
cat sda1.ntfs-ptcl-img.gz.* | zcat | partclone.restore --offset ${offset_of_partition} -o file.img

Double free when cloning a live FS

Cloning a live FS aborts with a double free:

partclone.ext4 -F -s /dev/vda1 -o - --clone
Partclone v0.2.89 http://partclone.org
Starting to clone device (/dev/vda1) to image (-)
device (/dev/vda1) is mounted at /
error exit
*** Error in `partclone.ext4': double free or corruption (!prev): 0x000000000214a370 ***
Aborted

GDB backtrace:

*** Error in `/usr/local/sbin/partclone.ext4': double free or corruption (!prev): 0x000000000060f370 ***

Program received signal SIGABRT, Aborted.
0x00007ffff75ecc37 in raise () from /lib/x86_64-linux-gnu/libc.so.6
(gdb) bt
#0 0x00007ffff75ecc37 in raise () from /lib/x86_64-linux-gnu/libc.so.6
#1 0x00007ffff75f0028 in abort () from /lib/x86_64-linux-gnu/libc.so.6
#2 0x00007ffff76292a4 in ?? () from /lib/x86_64-linux-gnu/libc.so.6
#3 0x00007ffff763555e in ?? () from /lib/x86_64-linux-gnu/libc.so.6
#4 0x0000000000405523 in open_source (source=source@entry=0x7fffffffe847 "/dev/vda1",
    opt=opt@entry=0x60cb80) at partclone.c:883
#5 0x0000000000401c1e in main (argc=, argv=)
at main.c:222

This partclone is compiled from source in case it's relevant.

Different sizes are confusing me (bits / bytes). I need an explanation.

Hi
I decided to back up an existing ext4 partition with partclone.extfs.
I started the backup with:
partclone.extfs -c -s /dev/sda5 -o fedora.img
When it was done, partclone gave this output:

Partclone v0.2.58 http://partclone.org
Starting to clone device (/dev/sda5) to image (fedora.img)
Reading Super Block
we need memory: 383236 bytes
image head 4160, bitmap 374976, crc 4100 bytes
Calculating bitmap... Please wait... done!
File system:  EXTFS
Device size:   12.3 GB = 2999808 Blocks
Space in use:   1.6 GB = 385939 Blocks
Free Space:    10.7 GB = 2613869 Blocks
Block size:   4096 Byte
Total block 2999808
Syncing... OK!

The Device size of 12.3 GB and Space in use of 1.6 GB are different from the sizes reported by GNU df.
When I created the partition, the size was 12 GiB. To this day, I have been unable to find out why it's different (even accounting for block size).

df -h prints:

Filesystem      Size  Used  Avail  Use%
/dev/sda5        12G  1.3G   9.4G   13%

df -H (powers of 1000) prints:

Filesystem      Size  Used  Avail  Use%
/dev/sda5        13G  1.4G    11G   13%

df -B4096 prints:

Filesystem   4K-blocks      Used    Available  Use%
/dev/sda5      2952684    338815      2463879   13%

In fact, it seems that partclone adds about 300MB to the reported size, and if we ignore that, the sizes are realistic compared to df -h: 12GB for the device size and 1.3G for the backup.
I also noticed the generated image is actually bigger than the used space:
du -h fedora.img gives: 1.5G fedora.img
Any ideas?
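
For what it's worth, the arithmetic is consistent if partclone reports decimal units over all blocks: 2999808 blocks × 4096 bytes = 12,287,213,568 bytes, which is about 12.3 GB in powers of 1000 but only about 11.4 GiB in powers of 1024, while df counts 2952684 blocks (about 12.1 GB) because it excludes filesystem overhead such as reserved blocks; that plausibly accounts for most of the apparent 300MB difference.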

reiser4clone.c: REISER4 can't get status

reiser4clone.c: REISER4 can't get status

but with force (-F) it works fine:

Partclone v0.2.68 http://partclone.org
Starting to clone device (/dev/loop0) to image (img)
Reading Super Block
reiser4clone.c: REISER4 can't get status
Calculating bitmap... Please wait... reiser4clone.c: REISER4 can't get status
Elapsed: 00:00:01, Remaining: 00:00:00, Completed: 100.00%

Total Time: 00:00:01, 100.00% completed!
done!
File system: REISER4
Device size: 536.9 MB = 131072 Blocks
Space in use: 127.0 KB = 31 Blocks
Free Space: 536.7 MB = 131041 Blocks
Block size: 4096 Byte
Elapsed: 00:00:02, Remaining: 00:00:00, Completed: 100.00%, Rate: 3.81MB/min,
current block: 130945, total block: 131072, Complete: 100.00%

Total Time: 00:00:02, Ave. Rate: 3.8MB/min, 100.00% completed!
Syncing... OK!
Partclone successfully cloned the device (/dev/loop0) to the image (img)
Cloned successfully.

read image_hdr totalblock error (v0.2.89)

When trying to raw-restore a special image, I get the following error: "read image_hdr totalblock error".
After that I tried to restore it to a partition, and it gave the same error.

The same happened with another image of a different hard drive.

partclone.chkimg and partclone.info show the same error!

--ignore_crc not working properly in partclone 3.12

  • Restoring an image created with partclone 2.89 using partclone.restore 3.12 with the --ignore_crc flag enabled results in a corrupt restore to the device but does not report an error.

  • Restoring an image created with partclone 3.12 using partclone.restore 3.12 with the --ignore_crc flag enabled results in a CRC error while finishing the write to the device (before it usually says Syncing... OK).

  • An image created with partclone 3.12 using the -aX0 flag, restored with partclone.restore 3.12 with the --ignore_crc flag enabled, works fine.

extfsclone.c: bitmap free count err

partclone.ext4 -v
Partclone : v0.2.61 (cc68c30)

file -s /dev/md2
/dev/md2: Linux rev 1.0 ext4 filesystem data, UUID=2b6a44db-b497-4bbb-81d4-9c0e7f22e33e (needs journal recovery) (extents) (large files) (huge files)

partclone.ext4 --source /dev/md2 --clone --output /tmp/md2.image
Partclone v0.2.61 http://partclone.org
Starting to clone device (/dev/md2) to image (/tmp/md2.image)
Reading Super Block
Elapsed: 00:02:50, Remaining: 00:00:00, Completed: 99.76%extfsclone.c: bitmap free count err, free:359666275
Partclone fail, please check /var/log/partclone.log !

Could you help us?
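
A hedged observation: the file -s output above says "needs journal recovery", i.e. the filesystem is dirty (perhaps a snapshot or not cleanly unmounted), and stale on-disk bitmap counters would explain the free-count mismatch. Replaying the journal before cloning may clear it:

    e2fsck -p /dev/md2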
