
MultiPar's Introduction

MultiPar

v1.3.3.3 is public

  I fixed a few rare bugs in this version. While most users were not affected by those problems, anyone who encountered them should move to the new version. If a problem remains, I will fix it as best I can. I also updated some help documents about batch scripts, and mentioned the location of the help files in the ReadMe text.

  The new version supports a PC with up to 8 OpenCL devices. Thanks to Yi Gu for reporting a bug in this rare environment. I didn't expect a user to put so many OpenCL devices in one PC. It now detects a graphics board correctly.

  I improved the source-file splitting feature when creating PAR2 files. Thanks to AreteOne for the bug report and the suggested improvement. When a file extension was a number, it wasn't handled properly. If you saw strange behavior during file splitting before, it should be solved in this version.

  I fixed a bug in verifying external files. It might fail to find the last slice of a source file when the file data is redundant. Thanks to dle-fr for reporting the bug and testing many times. This fix may improve verification of damaged files, too. When source files are mostly random data, like compressed archives, there was no problem.

[ Changes from 1.3.3.2 to 1.3.3.3 ]

Installer update

  • Inno Setup was updated from v6.2.2 to v6.3.1.

PAR2 client update

  • Bug fix
    • Fixed a bug in GPU acceleration, when there are many OpenCL devices.
    • Failure of splitting source files with numerical extension was fixed.
    • Faulty prediction of the last block in a file with repeated data was fixed.

[ Hash value ]

MultiPar1333.zip
MD5: 01A201CA340C33053E6D7D2604D54019
SHA1: F7C30A7BDEB4152820C9CFF8D0E3DA719F69D7C6

MultiPar1333_setup.exe
MD5: 33F9E441F5C1B2C00040E9BAFA7CC1A9
SHA1: 6CEBED8CECC9AAC5E8070CD5E8D1EDF7BBBC523A
  To install under the "Program Files" or "Program Files (x86)" directory, you must select "Install for all users" in the first dialog.

  Old versions and source code packages are available at GitHub or OneDrive.

MultiPar's People

Contributors

guyi2000, jarupxx, yutaka-sawada


MultiPar's Issues

MultiPar saved me from silent data corruption

I was able to install Visual Studio 2019 and tried to resume MultiPar development slowly. But then I found a strange problem: most files I had copied from my backup HDD were broken. When I tested current MultiPar behavior on Windows 10 using a sample set of PAR2 files, both the sample source files and the PAR2 files were damaged. I suspected something was wrong with the hardware. These were not small bit errors, nor several bytes; regions of some KB were changed at various positions.

Then I checked other files with MultiPar. (I had made a PAR2 set for MP3 files earlier.) The corruption frequency was high, and I needed to re-copy from the backup HDD again. The original files on the backup drive are OK, but the copied files were damaged. I'm not sure of the cause of this problem. I checked the new PC's RAM and the SSD's SMART info. Because downloaded files seem to be OK (Windows Update and Visual Studio's download installer worked), the error happened only to files from the external HDD this time. I felt this might be a problem with the USB device.

When I replaced the broken files with complete files from the backup drive again, the error did not happen. The newly copied files are all OK this time. (I checked them with MultiPar.) I don't know what happened the first time, nor whether the problem was somehow solved or unknown corruption remains. At least I understand that backup is important, and also that backup alone isn't enough. Backed-up data may be damaged while transferring between devices. A checksum (or MultiPar) is useful to check their integrity.

Multiple verification on SSD

I implemented a method to verify multiple source files on an SSD. It checks multiple complete files at once. Incomplete files (damaged or missing files) are still checked in a single thread, because it's very hard to synchronize each result. Even though multiple verification seems to work on my PC, I may have missed something, forgotten a rare case, or introduced a bug.

Because I don't have an SSD, I need the help of users who do. I want to know whether this new method is fast or not. When source files are in RAM (disk cache), it's fast on my PC. But an SSD is slower than RAM, so how fast it will actually be is still unknown. If you have an SSD, a fast CPU (4 or more cores), and the courage to try a non-stable version, please test and post your results.

To compare verification times, you must use v1.3.1.4 as the old version. Because I changed the MD5 calculation code in v1.3.1.4, comparing with an older one (v1.3.1.3 or before) is misleading. If you use the MultiPar GUI for the test, you must change the "Re-use verification result" setting to "Not used". When "Re-use verification result" is enabled, it just reads the old verification result and you cannot see the real speed.

I put the sample (gui_sample_2021-03-03.zip) in the "MultiPar_sample" folder on OneDrive. Caution! As this is an early sample version, I don't know what will happen. Don't use this sample for daily usage. Be aware of the risk and use it only for testing purposes.

Downloads through GitHub 'Releases' section?

Hi,

Just curious whether it is possible to use GitHub's Releases feature for posting releases. It would allow users following the project (all 6 of us) to get notifications when a new build is released, instead of having to check manually.

Potential problems with the website instructions for adding a PAR2 record to the end of a ZIP file

Hi,

I like the idea of appending PAR2 data to the end of a ZIP for integrity verification, as per the instructions on @Yutaka-Sawada 's web page: http://hp.vector.co.jp/authors/VA021385/record.htm

However, the method described for ZIP files has some issues.

  1. If an archive comment has been added to the ZIP file, copying the last 22 bytes is not enough.
  2. The EOCD for ZIP64 (EOCD64) is different from classic ZIP files, and it has the appended-archive-comment issue as well.

For both cases, the safest option seems to be to search backwards from the end of the ZIP/ZIP64 file for the End of Central Directory signature (0x06054b50, or 0x06064b50 for ZIP64) and start the binary copy from the beginning of that signature. You should not have to scan many bytes to find it, unless the archive comment is very large for some reason.

For reference: https://en.wikipedia.org/wiki/ZIP_(file_format)
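The backward search described above can be sketched in Python. This is a minimal sketch, not production code: the tail window size, the extra margin for the ZIP64 records, and the preference for the earliest signature hit are my own assumptions.

```python
import io
import zipfile

EOCD_SIG = b"PK\x05\x06"    # classic End of Central Directory (0x06054b50)
EOCD64_SIG = b"PK\x06\x06"  # ZIP64 End of Central Directory (0x06064b50)

def find_record_start(data):
    """Search backwards for the EOCD/EOCD64 signature.

    The classic EOCD is 22 bytes plus an archive comment of at most
    65535 bytes, so scanning roughly the last 64 KB is enough; the
    extra margin covers the ZIP64 records before the classic EOCD."""
    start = max(0, len(data) - (22 + 65535 + 96))
    tail = data[start:]
    hits = [tail.rfind(sig) for sig in (EOCD64_SIG, EOCD_SIG)]
    hits = [start + h for h in hits if h != -1]
    # In a ZIP64 file the EOCD64 record precedes the classic EOCD,
    # so copying from the earliest hit captures the whole structure.
    return min(hits) if hits else -1

# A small archive with a comment shows why "last 22 bytes" is not enough.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("a.txt", "hello")
    zf.comment = b"an archive comment"
data = buf.getvalue()
offset = find_record_start(data)
assert data[offset:offset + 4] == EOCD_SIG
assert offset != len(data) - 22  # the comment pushes the EOCD forward
```

Note this is a heuristic: the signature bytes could in principle also occur inside a very unlucky archive comment, which is why robust tools additionally validate the record fields after locating the signature.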

Please consider updating the instructions on this web page for better ZIP file compatibility. Unfortunately it makes the process more complicated, but it makes the resulting ZIP+PAR2 file more compatible with zip/unzip tools.

Thanks.

Linux/macOS support

As I have not seen an issue for this, I will open one here.

If you could add Linux support, this would be amazing for many users, and it would add macOS support in one go!
It would not need a GUI; the command-line interface would be enough for Linux users.

We (SABnzbd) have donated before, and if it helps development of a Linux version, we will gladly donate a couple more times! 💯

Python bindings or other output

We use MultiPar verification+repair extensively in our application and love the performance! That's the reason we have already donated a couple of times, and we will keep donating when we can!

Some background information on our problem:
To display the progress of verification and repair, we parse MultiPar's output line by line. This way we can show the verification progress to the user as "Verifying file X/Total", and we can show the repair progress during repair.
Additionally, we try to keep track of any renames MultiPar performs, so we can keep our own bookkeeping of the files matching the reality on disk. For example, if a user were to Retry a whole job but MultiPar had renamed a file, we wouldn't have to download the original file again.
Lastly, we parse the result message to see what action to take (try to get more PAR2 files, or give up).

So I was wondering whether there is an easier way for us to get this information, for example Python bindings that can call the MultiPar functions and report progress.
However, I understand this is difficult.

We would already be greatly helped by a structured output file (JSON?) that contains this status information:

  • Number of files scanned
  • Verification result
  • Renames performed (including old and new name)
  • Joins performed (.001, etc files)
  • Repair result
  • Anything else of interest

1.3.1.4 seems incompatible with other versions

I just updated, and when I went to check a file set created with par2j 1.3.1, every one of the 338 files is marked damaged (completely unusable) and 1 file good.
An older version (1.3.0.6) marks all 339 files good, and every PAR file good for 8 blocks.

I've tried 4 or 5 different versions, and I swear I'm getting a slightly different result with each version.
If the verification method is changing, it needs to stay compatible with old versions.

Update: I just created new PARs for 5 GB of 20 MB files, and on verification every file and PAR was damaged but one.
So 1.3.1.4 is broken on Windows 10.

Can source data be read just once?

Hi,
the file sizes are getting bigger, and I often create PAR2 for a 30 GB file.

  • There are two stages, "Computing file hash" and "Creating recovery data", so I observed that the source data is read twice.
  • For small files this was not important, but with huge files it becomes significant.

I have tried to read the PAR2 format specification, but wasn't much the wiser for it.

The question: does the 2nd stage depend on the result of the 1st stage? If yes, can this be filled in/calculated later?

I mean reading the source data once and feeding it simultaneously into 2 separate functions:

  • One calculates the MD5 and the other creates the recovery data.
  • The "MD5 Hash of packet" is unknown until the complete source data has been read, so it can be replaced with a placeholder like "ffffffffffffffffffffffffffffffff".
  • When all data has been read and the complete MD5 calculated, the placeholder is replaced with the real MD5 checksum.

This would change the behavior from "read twice" to "read once and update the PAR archives at the end".
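The single-read idea above can be illustrated with a toy sketch. This is my own illustration, not MultiPar's actual code: the "recovery data" here is a stand-in XOR accumulator rather than real Reed-Solomon coding, and the point is only the single read pass plus the placeholder patch-up at the end.

```python
import hashlib
import io

PLACEHOLDER = b"\xff" * 16  # stands in for the unknown "MD5 Hash of packet"

def create_single_pass(src, dst):
    """Read the source once, feeding each chunk to both the MD5
    calculation and a (stand-in) recovery-data function, then patch
    the reserved placeholder once the real MD5 is known."""
    md5 = hashlib.md5()
    parity = 0
    dst.write(PLACEHOLDER)            # reserve space for the final hash
    for chunk in iter(lambda: src.read(65536), b""):
        md5.update(chunk)             # consumer 1: file hash
        for b in chunk:               # consumer 2: toy "recovery data"
            parity ^= b
    dst.write(bytes([parity]))
    dst.seek(0)
    dst.write(md5.digest())           # replace placeholder with real MD5

src = io.BytesIO(b"example source data")
dst = io.BytesIO()
create_single_pass(src, dst)
out = dst.getvalue()
assert out[:16] == hashlib.md5(b"example source data").digest()
```

Whether this works for real PAR2 output depends on the question asked above: the recovery computation must not itself depend on the finished file hash, only on the slice data.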

Thank you for this great tool

'File' and 'MD5' on MultiPar 1.3.1.2 beta are wrong

Hello, Yutaka Sawada san (I hope I do this the right way).

On your homepage, the MultiPar 1.3.1.2 beta 'File' is "MultiPar1311.zip", which should probably be "MultiPar1312.zip", and the same with "MultiPar1311_setup.zip", which should probably be "MultiPar1312_setup.zip".

2 features that can be very important; please vote for these

Hello there, I would like to see 2 options in MultiPar, please.

1 - A run-in-background option, for when I am verifying some files, or more than one .par2.

So if I have 2 RAR files and 2 PAR sets, one set for each RAR,

I need to run the PAR2 files in the background without windows popping up on my screen while I am doing other work. Just imagine dealing with 10k-plus sets and having to keep closing the windows while working, especially where there are missing RARs or blocks, or a set is not repairable; the auto option is not very useful this way.

2 - An option to remove the PAR files if there are no missing blocks or files and the file set is 100% complete.

As above, when dealing with tons of PAR2 sets, there is no reason to have to delete the PAR files by hand for every set of files once they are no longer needed.

Thank you, and if you accept donations, please let us know how. Thanks.

Par3 Support?

A very long time ago, in 2012, I used MultiPar to make parity files for some photography projects. At the time, PAR3 seemed to have a lot of benefits, so I used that format.

I didn't notice when it vanished from later releases, but today I went back to a project from 2012 and realised I couldn't open the parity files I'd made.

In the "About" and "Settings" dialogs it still mentions Par3, but I can't see any way to enable it in the release from September this year. Is there an additional download somewhere?

More used Cores/Threads?

Hi,

old setting: 1x Intel® Xeon® E-2274G @ 4.00 GHz, 4c/8t, on 1x SATA SSD 480 GB (non-RAID), 32 GB RAM
new setting: 2x Intel® Xeon® E5-2650 v4 @ 2.20 GHz, 24c/48t (2x12c/2x24t), on 2x SATA SSD 480 GB (hardware RAID1), 128 GB RAM

A much better server. But the performance is unfortunately much worse...

Test setting: 32.6 GB of recovery files
I used MultiPar v1.3.1.9 with the GUI on both (main settings, 10.7% redundancy).

I usually use par2j64.exe from a .bat file:
%MULTIPAR% c /rr10 /sm640000 /rd1 "%query%%name1%%name2%.par2" *.rar
The GUI just allowed me to test it faster...

  • old setting: 07:48 min (CPU load ~100%)

CPU thread : 6 / 8 (I use only 6t instead of 8t, because you can hardly use the computer during the process due to the CPU load)
CPU cache : 512 KB per set
CPU extra : x64 SSSE3 CLMUL AVX2
Memory usage : Auto (23676 MB available)

  • new setting: 10:54 min (CPU load ~40%)

CPU thread : 16 / 32
CPU cache : 1536 KB per set
CPU extra : x64 SSSE3 CLMUL AVX2
Memory usage : Auto (124336 MB available)

The old setting is 28.13% faster than the new setting...

After some research I found this thread: #21. It says that only 16t can be used. Right? My questions:

1.) Why is a maximum of 16t supported? Can you add support for more, please?
2.) Is RAID1 (also) part of the problem, making the performance so bad?
3.) Is it really possible that 6t @ 4.00 GHz with MultiPar are so much faster than 16t @ 2.20 GHz? Shouldn't the latter be faster?

Or do I have to look for solutions in a completely different place?

4.) Is there anything else I can do to improve the performance?

Thank you so much!

Uneditable Recovery Files path UI problem

There should be a way to directly edit the Recovery Files path.
Currently the path is not editable. If you press Browse, you still have to navigate to the folder with the mouse, which is tedious if the path is deep.

By the way, the same applies to the Base Directory path when you open a *.par2 file.

There are 2 possible solutions, either of which would solve this problem:

  1. Make the Recovery Files Path editable.

  2. After pressing the Browse button, the window that appears should contain an address bar.


[Request] Ability to view/share debug information for errors

I recently created a parity set for a large set of folders, and when I now try to open it I just get an error that says "Error: Cannot Parse Output (0x02)".

If I then click "Verify", I get "Error: Malloc (12)".

It would be useful to be able to create a crash dump of some kind, so that I could email it to you or post it here.

Thanks.

PAR3 create support

Greetings @Yutaka-Sawada. I know that MultiPar can read PAR3 files, but there are no programs that I know of that can actually create them. Are you planning to add PAR3 creation support to MultiPar in the foreseeable future? If not, what does supporting PAR3 even mean in this case?

recovery blocks

I notice that recovery blocks grow exponentially in size. This is not a good idea, as smaller recovery files are less likely to be lost than larger ones.

I use PAR2 to secure computer chess files, which amount to 1.12 TB for retrograde 6-piece perfect chess.

Disk errors can damage the set, so I use redundancy to secure it.

6-pieces
6-pieces.vol000+01
6-pieces.vol001+02
6-pieces.vol003+04
6-pieces.vol007+08
6-pieces.vol015+16
6-pieces.vol031+32

Some restraint in the exponential growth of recovery files would be a wiser course of action.
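The filenames listed above follow a power-of-two distribution (each file holds 2^i recovery blocks and starts at block 2^i - 1). Assuming that scheme, a short sketch reproduces the listing; the function name and the clamping of the final file are my own illustration, not MultiPar's code.

```python
def recovery_file_names(base, total_blocks):
    """Reproduce power-of-two recovery-file naming: each file holds
    twice as many blocks as the previous one, clamped at the end."""
    names, start, size = [], 0, 1
    while start < total_blocks:
        size = min(size, total_blocks - start)  # clamp the last file
        names.append(f"{base}.vol{start:03d}+{size:02d}")
        start += size
        size *= 2
    return names

print(recovery_file_names("6-pieces", 63))
```

With 63 recovery blocks this yields exactly the six files shown above, from vol000+01 to vol031+32, which illustrates the complaint: a single lost vol031+32 file costs half of all redundancy.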

Source files don't match version number of binary releases

I downloaded 1.3.0.7 as a release since it's the last non-beta, and thought I'd poke around the source code to verify the functionality. However, the readme.md states the version is 1.3.1.5, and the included source files seem to point to newer versions. For transparency, it would be nice to be able to compile the last supported non-beta from source, but I suspect things just got jumbled up in the transition to GitHub.

I'll poke around the OneDrive to see if the archives there match up better. It might make sense to provide a single release here for the files hosted on OneDrive, or to make a tagged archive commit so that others can find everything in one place. OneDrive/Dropbox links tend to go stale over time as projects move on, so it would be nice to keep everything on GitHub.

GUI create with batch doesn't start automatically

Hi,

I want to start the GUI from a batch file and set it up so that it starts creating directly.

But it doesn't matter whether I use
START "" %MULTIPAR% /create "%SOURCEDIR%"
or
%MULTIPAR% /create "%SOURCEDIR%"

In both cases the GUI opens but does not start automatically. Why?

I always have to press the Create button myself; unfortunately, I cannot automate this.

Is there a solution or do I have to use par2j64.exe?

I would like to use the GUI because it easily lets you:

  • set the yEnc block size and limit (I would use 5000), with the GUI also setting the block count
  • turn on recovery blocks and recovery files automatically (based on the source files)

Thanks for the tool!
Greetings

Future of MultiPar

First of all, it's exciting to see MultiPar on GitHub! Fingers crossed there will be source code and some lively contributions from the community.

Next, maybe it makes sense (especially in light of the recently closed MultiPar forums) to update README.md with the short list of things you need help with?

Thanks.

Add the option to turn off automatic verification? Remember the base directory?

Now when I open a *.par2 file, it automatically starts verification. I can pause the process but cannot stop it.
I store the *.par2 files in different locations than the data files, so I need to change the base directory before verification.
While verification is unfinished, I can't change the base directory.
I have 2 suggestions:

  1. Add an option to turn off automatic verification.
    Maybe under Options -> Client behaviour -> Verification and Repair options.
    When automatic verification is turned off, we could press a button to start verification manually.

  2. Let the *.par2 files remember the relative path of the data files, so that I can store the data and the *.par2 files in different folders (usually not far apart in the directory tree) and don't need to change the base directory every time I open the *.par2 files.

ERRORLEVEL of the PAR2 batch file

I create a file test.rar and make a PAR2 set at 2% redundancy from a batch file.
Everything is fine; the PAR2 and RAR are OK, and as I understand it, ERRORLEVEL = 0.
Now if I damage the PAR2 file, par2j v "test.rar.vol1.par2" still says ERRORLEVEL = 0 (OK).
But when I check the PAR2 with the Windows interface, it reports a damaged block.
Why does the batch (.BAT) file report ERRORLEVEL = 0?
This is a bug!
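For reference, this is roughly how a wrapper script would want to react to an exit code. The sketch below is hypothetical: the stand-in command merely exits with code 16 in place of a real par2j invocation, and the success/failure mapping is illustrative, not par2j's documented exit codes.

```python
import subprocess
import sys

def verify(cmd):
    """Run a verification command and map its exit code to a status.
    Exit code 0 is treated as success; any nonzero code as a problem.
    This mapping is an assumption, not par2j's documented behaviour."""
    result = subprocess.run(cmd)
    if result.returncode == 0:
        return "complete"
    return f"needs attention (exit code {result.returncode})"

# Stand-in for `par2j64.exe v "test.rar.vol1.par2"`: a command exiting 16.
status = verify([sys.executable, "-c", "import sys; sys.exit(16)"])
print(status)  # needs attention (exit code 16)
```

The bug report above amounts to saying this pattern cannot work if par2j returns 0 even when it found damaged blocks; callers have no way to branch on the result without parsing the text output.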

Documenting unusual behaviour when a block is repairable from another block due to a single-byte difference

Unlike the other problem I reported in #36, which I'm sure is a bug, this one is more complicated and I haven't tested it much. I also didn't encounter it in the real world myself, only while testing the other bug. I can imagine some real-world scenarios in which it might occur, but these seem very rare, so I'm not sure it's worth coming up with a fix. Still, I thought it would be useful to document what I found, along with some brief thoughts.

As I understand it, a single-byte error doesn't need recovery blocks in PAR2; it can be repaired with just the base PAR2 file. But it seems that if you have a corrupted block with a single-byte difference from an intact block, even at verification level 2, par2j often doesn't recognise that the corrupted block can be "repaired" from the other block. I used a file filled with 10h, but I imagine it's any case where one block differs from another by only a single byte. (1.3.0.6 or 1.3.1.8, 64-bit or 32-bit.)

Test 1

As with my other report, a simple test is to create a file filled with null bytes or some other byte. (As mentioned, I mostly used 10h.) But this time, modify one byte within the file, somewhere away from the first block, before creating the PAR2 file. Because of the other problem, you probably don't want to split the file into too many blocks.

After creation, corrupt the block that has the single-byte difference, changing at least 2 bytes so it's not repairable from itself. It should still be repairable from the null-byte blocks, since it's only a single byte different, but par2j doesn't recognise this and says you need another block.

However, if you corrupt the first block (and possibly the second; I'm not sure, since one time it seemed to work and another time it didn't) with at least 2 bytes so it's also not repairable, par2j now recognises that the file is repairable without any recovery blocks. If you leave the first block alone but instead corrupt a null block further from the beginning, it still thinks you need a recovery block.

Test 2

For another test, put data at the beginning of the file so the beginning is not filled with null blocks, but keep the block with the single-byte difference in the middle (of the null or duplicate region). Then create PAR2 files. Now, if you corrupt (by 2 bytes or more) any null block before the one-byte-difference block which you also corrupted, it recognises that the one-byte-difference block can be repaired. However, if you corrupt any block after the single-byte-difference block, it doesn't recognise that it's repairable from the other null blocks and says you need 1 recovery block.

I think it also does this if you corrupt the last non-null block (the end of the data you added). I found that even if you change many bytes (and I made sure it wasn't at a boundary), it still says only 1 recovery block is needed, suggesting it just needs to recover the newly damaged block. However, if you instead corrupt further back (away from the end), or even the first block, it says you need 2 (or however many) recovery blocks. So it doesn't seem to recognise that the single-byte-difference block is repairable from the null blocks.

Test 3

As a final test, I put data at the beginning of the file, then changed a single byte in the first null (well, actually 10h in my case) block rather than in the middle. Now it seems impossible to get PAR2 to recognise that the single-byte-difference block can be repaired from a null block. Whether you corrupt the data at the beginning, middle, or end, it says you need 2 recovery blocks (or however many). Corrupting a null block after the single-byte-difference block, unsurprisingly, doesn't help. (Remember, there's no null block between the data and the single-byte-difference block.)

Further comments:

I did try changing the verification level. I think levels 1 and 3 may not recognise the block as repairable at all, but this is probably expected. Levels 0 and 2 are the ones that can sometimes recognise the block as repairable and sometimes not. I didn't try fooling around with memory settings, the number of cores, the GPU, or anything like that. I also only used my A10-5800K, not the Core i5-3470.

From my tests, I would guess part of the reason for my results is that only a single pass is made through the file, with no advance knowledge of which blocks are corrupt. Testing every single block to see whether it can be used to repair every other block would be extremely inefficient and wasteful, since you'd be doing it even when zero blocks are corrupt or missing. So instead, only when there is corruption do you start looking for whether you can repair blocks, and depending on where the corruption is, you may or may not realise you can actually "repair" the single-byte-difference block from the null or duplicate block(s).

In my case, simply testing against the next duplicate or null block would work, but this won't always be the case. Notably, while I only tested null or duplicate blocks, if you had a block with data and another block with the same data but a 1-byte difference, and no other duplicate blocks, I suspect the problem could occur if the corrupt block comes after the intact block.

Possibly, to really fix this, you would need to make 2 passes: in the first you detect which blocks are corrupted, then in the next pass you try to repair those blocks from every other block. I'm not sure whether it's worth adding such an option, but if it is, it may be better to make it a new option; call it paranoid verification mode or something, since having to read the file twice is likely to slow things down on many modern systems if the data is on an HD.

A halfway option may be to limit this to duplicate (including null) non-corrupt blocks, since that seems to be the only case where it's likely to occur. While my example may be artificial, I could imagine an uncompressed file with a non-data region filled with 00 or FF or something else. It probably doesn't even have to be a single byte; it could be a pattern like 20802080 or 28C35EF0, provided it ends up aligned. I suspect that if there's a region with some "data" or header or whatever, it will differ by more than one byte, though in rare cases it could be a single byte. (I'm not sure about the chance of having a pattern with a single-byte variation among repetitions. I also imagine it's rare for disk images to differ by only a single non-repeating byte, so I didn't use that as an example.)

Does not work with NewsLeecher 8

Hi Yutaka Sawada,

You saw my post on:

https://www.newsleecher.com/forum/viewtopic.php?f=9&t=35949&p=137131#p137131

I would suggest you add an option in the repair section to not load subdirectory contents during auto-repair.

You already have this option for creating a par2 set, it could be on the same dialog:

https://postimg.cc/fSQDFb2G

There is not much chance of the NewsLeecher 8 author doing anything; I started asking him at least 4 years ago!

But for you it would be easy, and you listen to your users' requests regularly.

Thanks anyway for the best par program.

[Feature] hide output in the command line for healthy files

Thanks for your program. It would be nice to have a parameter for verification that skips all the logging output when everything is OK. It should only print errors and missing or renamed files, including the path. I know a similar option from https://github.com/rhash/RHash ; there it is called "--skip-ok": don't print OK messages for successfully verified files.

Background:
I'm using MultiPar to add additional recovery options to backups. I create and check multiple separate PAR2 files for separate (sub)folders. So in the end I have a lot of files like .\2000\2000.par2, .\2001\2001.par2, .\200X\200X.par2, ... that are normally OK. Verifying all of them creates a very long log, in which it is hard to see at a glance whether everything is OK. It would be much easier if only the errors were visible.

Just for information which command lines are used:
Create one par2 file:
"E:\MultiPar\par2j64.exe" c /ss262144 /sn32768 /sm4096 /rr1 /rf1 /lc32 /m0 /in "E:\Files\2000\2000" *.*"
Create multiple par2 files which belong to different files in different subfolders with "FORFILES":
FORFILES /P "." /M "*" /c "cmd /c echo @path/@file" (example for FORFILES)
FORFILES /P "." /M "*" /c "cmd /c \"\"E:\MultiPar\par2j64.exe\" c /ss262144 /sn32768 /sm4096 /rr1 /rf1 /lc32 /m0 /in @path/@file *.*\""

Check routine for one file:
"E:\MultiPar\par2j64.exe" v /vl2 /m0 "E:\Files\2000\2000.par2"
Check routine for multiple PAR files belonging to different files in different subfolders. It independently checks all par2 files in subfolders found by FORFILES; FINDSTR tries to reduce the logging output:
FORFILES /P "." /S /M "*.vol*.par2" /c "cmd /c \"\"E:\MultiPar\par2j64.exe\" v /vl2 /m0 @path\"" | findstr /C:"Missing :" /C:"Misnamed :" /C:"Damaged :" /C:"All Files Complete" /C:"Recovery File" /C:"File Description packet is missing" /C:"PAR File(s) Incomplete"

Error in Verification Result when Data File is given as External File

Recently, I started testing MultiPar (v1.3.1.8 beta and v1.3.0.7) with the intention of using it to create recovery data for my files, to protect them from bit rot/bad sectors. PAR2 files are created successfully. But during verification, some of my files (exactly 3 out of 1527) are shown as having missing blocks when they are given as external files to the PAR client, although they are intact by CRC. If the PAR2 files and source files are kept in the same folder, there is no error; they are shown as Complete. I need help.

I have attached the source files, PAR2 files, MultiPar GUI screenshots, and the MultiPar log.
Thank you...

Problem with verification or repair when there are a lot of duplicate or null blocks and file/s are split into a lot of slices

I have encountered a weird bug in MultiPar with verification or repair that happens when your file(s) have a lot of duplicate or null blocks and you create PAR2 files with a lot of blocks.

For a simple non real world test, create a file filled with null bytes, e.g:

fsutil file createnew <filename> 1000000000

(will likely require administrator privileges)

Then create a PAR2 file splitting the "data" into the maximum number of blocks (32766). Next, create a PAR2 file splitting the data file into 5000 blocks. You can verify both, and they'll both verify fine.

Then corrupt the data file slightly, i.e. change a few bytes somewhere to something besides 00. It can be right at the beginning/first block, although that may make it less clear what is going on. Possibly not the last block. I just use a hex editor. Try to verify, and you will find that with the 32766-block PAR, the verification breaks and won't continue after it finds the corrupt block. But with the 5000-block PAR, it will verify and say the file can be repaired/rejoined, as expected, since all blocks are the same.

Since MultiPar does recognise null blocks, you may also want to create a file filled with some other byte; I used 10h. You will find the same behaviour. (I used the 00 example because it's fairly simple to create.)

If you try to repair, you will have the same problem with 32766 blocks. An exception: if you use the file filled with null blocks, a second repair will succeed, because the first repair creates a temporary "blank" file, so it then just has to rename that. This can't happen if the file is filled with some other byte, so repair is impossible.

If you use a larger file, e.g. 11010101010 bytes (over 10x the previous size), you will find the same problem. (I'm sure smaller files do as well.) I also tried adding non-duplicate blocks to the file. You get the same problem even if the corruption is in the non-duplicate data (and you therefore need recovery blocks).

Additional details:

  • Most of my testing was with 1.3.1.8, but I also tried 1.3.0.7. Version 1.3.0.7 (really 1.3.0.6, since it's par2j/64 that has the problem) is worse than 1.3.1.8. par2j is better than par2j64, but both can have the problem. See my reply for details.

  • Most of my testing was with an AMD A10-5800K with 32GiB of RAM, but I also tried on an Intel i5-3470 with 24GiB RAM. Both computers were running Windows 10 x86-64.

  • I tried fiddling with the memory settings (e.g. 1/8 or 7/8), limiting the thread count down to a single thread, disabling and enabling the GPU, and changing verification levels. These made little or no difference to the problem. (I didn't try the SSD setting, partly because the problem occurs on 1.3.0.6 and partly because all my testing was on an HDD.)

  • The reason I found the bug weird: there is a middle point where the verification will sometimes work and sometimes fail. To be clear, I mean with the exact same files. Just repeat verification 10 times and you should find that sometimes it stops after finding the broken block, and sometimes it keeps going and confirms recovery is possible. (Or recovers, if you do it in recovery mode.) This happens even when I use -lc513 to limit par2j64 to one thread, unless this doesn't completely eliminate multi-threading? See this file for a sample of the output: Sample of output for GitHub.txt

Blurry UI on secondary screen

The UI is blurry on my secondary screen.

My setup:
Monitor 1 (primary): 27" 4K set to 200% zoom
Monitor 2 (secondary): 19" SXGA set to 100% zoom
Windows 10 20H2

The UI is fine on the primary monitor. The UI is also fine when using only the secondary monitor, with the primary disconnected.

It seems the UI is rendered at the zoom level of the primary monitor, but when displayed on another monitor it is somehow scaled down pixel-by-pixel or something, instead of rendering at the zoom level of that monitor.

Most other programs render perfectly crisp on my secondary monitor.

I have not yet tried any compatibility settings, because they should only be necessary for old software.

MP also always wants to start on the primary monitor. It tries to "remember" its position, but never starts on the secondary monitor, even if that's where it was last positioned. I don't know if this is related, but I thought I'd mention it, because it also has to do with multi-monitor setups.

Please note: in order to test the 4K zoom level, you do not need a 4K monitor. You can set any monitor to a higher zoom; even 150% should show the issue. Then leave your secondary monitor at 100%.

Option to Increase checksum performance

I've been using PAR and Multipar for a long time now. I consider it an essential tool to help me avoid data loss.
I appreciate all current performance options, however there is one that is missing!
Computer specs have been steadily increasing over the years, with SSDs that read and write multiple GB/s and systems with up to 128 CPU cores and TBs of RAM, yet I find that initial checksumming and verification seem to be more or less single-threaded. With PAR's chunk feature it should be possible to checksum more than one chunk at a time from a disk. An option to specify a multi-threaded IO count would greatly improve performance for me, as I have the PC specs mentioned.
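As a rough sketch of the idea (not MultiPar's actual code; the chunk size, worker count, and MD5 hash choice are illustrative, and par2j's real slice handling is more involved), checksumming fixed-size chunks can be parallelised like this:

```python
# Sketch: hash fixed-size chunks of a file in parallel instead of
# sequentially, as the feature request above suggests.
import hashlib
import os
from concurrent.futures import ThreadPoolExecutor

def hash_chunk(path, offset, length):
    # Each task opens its own handle, so parallel reads do not fight
    # over a single shared file position.
    with open(path, "rb") as f:
        f.seek(offset)
        return hashlib.md5(f.read(length)).hexdigest()

def hash_file_parallel(path, chunk_size=1 << 20, workers=8):
    """Return per-chunk digests, in file order."""
    size = os.path.getsize(path)
    offsets = range(0, size, chunk_size)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # map() preserves submission order, so digests come back in file order.
        return list(pool.map(lambda o: hash_chunk(path, o, chunk_size), offsets))
```

Whether this actually helps depends on the storage: it can hurt on a single spinning HDD (seek thrash) while helping a lot on SSDs, which is presumably why an explicit IO-count option is being requested rather than always-on parallelism.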

Demand of Compiled HTML Help ?

Because web browsers are common software nowadays, I want to change the format of the Help documents to normal HTML files instead of Compiled HTML Help (.chm). Then I don't need to install or use the obsolete HTML Help Workshop on my new PC. Because it doesn't support Unicode, I must change my PC's system language every time. I'm not even sure it works on Windows 10.

Reading the help in a browser may be mostly enough. But two features (Index and Search) will be lost with plain HTML files. Does anyone require those features for the Help documents?

Unless there is strong demand for Compiled HTML Help, I won't compile HTML files from the next help update. (I will keep the current CHM files until a future change.) I feel that most users never search for a word among the help files.

archive (split) and create recovery files

  • pick a file, right-click and select "archive and create recovery files"
  • in the 7-Zip "Add to Archive" dialog, enter a number that is smaller than your file in "Split to volumes, bytes", and click "Create SFX archive"
  • when 7-Zip is done, the MultiPar GUI shows up, but it only shows the first (exe) file to recover

MultiPar Portable won't run from folder with extended characters in name

Hi, @Yutaka-Sawada .

As I've mentioned before, I run MultiPar Portable from the .ZIP, extracted to a folder named MultiPar. For beta builds I created a folder named MultiPar βeta. However, MultiPar will not start, giving me the following error:

MP Folder name issue 01

If I change the name to MultiPar Beta, then MP works fine.

Tested with MP 1.3.1.5β, 1.3.1.6β and 1.3.1.7β - if the "β" character (U+03B2) is in the folder name, MP fails to start.

Weird bug with large collection of many files

I've encountered some weird bug when selecting a large base folder with many files.

See the image below:
Screenshot 2021-11-27 235912

These are the automatically selected options. Multiplying the block count by the block size gives a number larger than the total data size; the same goes for the recovery data size at 3.07%.

This does not occur with a single large file:
Screenshot 2021-11-28 001242

File to be renamed (in subfolder) gets removed if the overall repair fails

It says:

Restored : "Sample\whose.line.is.it.anyway.us.s17e02.1080p.web.h264-kogi.sample.mkv"

But, the "Sample" folder is empty.

If I run this repair with enough .par2 files, the rename does work and the file is there in the sub-folder.

Parchive 2.0 client version 1.3.1.3 by Yutaka Sawada
Base Directory	: "C:\Users\xxxx\Downloads\incomplete\Whose.Line.Is.It.Anyway.US.S17E02.1080p.WEB.h264-KOGi-2\"
Recovery File	: "C:\Users\xxxx\Downloads\incomplete\Whose.Line.Is.It.Anyway.US.S17E02.1080p.WEB.h264-KOGi-2\8ad87928735b6dd9e7cb6aadfe96da8f.par2"
CPU thread	: 6 / 8
CPU cache	: 512 KB per set
CPU extra	: x64 SSSE3 CLMUL
Memory usage	: Auto (5408 MB available)
PAR File list :
Size :  Filename
12180 : "8ad87928735b6dd9e7cb6aadfe96da8f.par2"
PAR File total size	: 12180
PAR File possible count	: 1
Recovery Set ID		: 481C669E9C135D087C4FACC3D497E23A
Input File Slice size	: 3840000
Input File total count	: 17
Recovery Set file count : 17
Creator : ParPar v0.3.2 [https://animetosho.org/app/parpar]
Input File list      :
Size  Slice :  Filename
60772521     16 : "Sample\whose.line.is.it.anyway.us.s17e02.1080p.web.h264-kogi.sample.mkv"
4736      1 : "whose.line.is.it.anyway.us.s17e02.1080p.web.h264-kogi.nfo"
100000000     27 : "whose.line.is.it.anyway.us.s17e02.1080p.web.h264-kogi.r00"
100000000     27 : "whose.line.is.it.anyway.us.s17e02.1080p.web.h264-kogi.r01"
100000000     27 : "whose.line.is.it.anyway.us.s17e02.1080p.web.h264-kogi.r02"
100000000     27 : "whose.line.is.it.anyway.us.s17e02.1080p.web.h264-kogi.r03"
100000000     27 : "whose.line.is.it.anyway.us.s17e02.1080p.web.h264-kogi.r04"
100000000     27 : "whose.line.is.it.anyway.us.s17e02.1080p.web.h264-kogi.r05"
100000000     27 : "whose.line.is.it.anyway.us.s17e02.1080p.web.h264-kogi.r06"
100000000     27 : "whose.line.is.it.anyway.us.s17e02.1080p.web.h264-kogi.r07"
100000000     27 : "whose.line.is.it.anyway.us.s17e02.1080p.web.h264-kogi.r08"
100000000     27 : "whose.line.is.it.anyway.us.s17e02.1080p.web.h264-kogi.r09"
100000000     27 : "whose.line.is.it.anyway.us.s17e02.1080p.web.h264-kogi.r10"
78557006     21 : "whose.line.is.it.anyway.us.s17e02.1080p.web.h264-kogi.r11"
100000000     27 : "whose.line.is.it.anyway.us.s17e02.1080p.web.h264-kogi.rar"
884      1 : "whose.line.is.it.anyway.us.s17e02.1080p.web.h264-kogi.sfv"
126030      1 : "whose.line.is.it.anyway.us.s17e02.1080p.web.h264-kogi_tvm.jpg"
Input File total size	: 1339461177
Input File Slice count	: 364
Loading PAR File       :
Packet Slice Status   :  Filename
36     0 Good     : "8ad87928735b6dd9e7cb6aadfe96da8f.par2"
Recovery Slice count	: 0
Recovery Slice found	: 0
Verifying Input File   :
Size Status   :  Filename
- Missing  : "Sample\whose.line.is.it.anyway.us.s17e02.1080p.web.h264-kogi.sample.mkv"
1
= Complete : "whose.line.is.it.anyway.us.s17e02.1080p.web.h264-kogi.nfo"
28
= Complete : "whose.line.is.it.anyway.us.s17e02.1080p.web.h264-kogi.r00"
55
= Complete : "whose.line.is.it.anyway.us.s17e02.1080p.web.h264-kogi.r01"
99232000        0 : "whose.line.is.it.anyway.us.s17e02.1080p.web.h264-kogi.r02"
82
= Complete : "whose.line.is.it.anyway.us.s17e02.1080p.web.h264-kogi.r03"
109
= Complete : "whose.line.is.it.anyway.us.s17e02.1080p.web.h264-kogi.r04"
136
= Complete : "whose.line.is.it.anyway.us.s17e02.1080p.web.h264-kogi.r05"
163
= Complete : "whose.line.is.it.anyway.us.s17e02.1080p.web.h264-kogi.r06"
190
= Complete : "whose.line.is.it.anyway.us.s17e02.1080p.web.h264-kogi.r07"
217
= Complete : "whose.line.is.it.anyway.us.s17e02.1080p.web.h264-kogi.r08"
244
= Complete : "whose.line.is.it.anyway.us.s17e02.1080p.web.h264-kogi.r09"
271
= Complete : "whose.line.is.it.anyway.us.s17e02.1080p.web.h264-kogi.r10"
292
= Complete : "whose.line.is.it.anyway.us.s17e02.1080p.web.h264-kogi.r11"
319
= Complete : "whose.line.is.it.anyway.us.s17e02.1080p.web.h264-kogi.rar"
320
= Complete : "whose.line.is.it.anyway.us.s17e02.1080p.web.h264-kogi.sfv"
321
= Complete : "whose.line.is.it.anyway.us.s17e02.1080p.web.h264-kogi_tvm.jpg"
Complete file count	: 15
Searching misnamed file: 2
Size Status   :  Filename
337
60772521 Found    : "Sample+whose.line.is.it.anyway.us.s17e02.1080p.web.h264-kogi.sample.mkv"
= Moved    : "Sample\whose.line.is.it.anyway.us.s17e02.1080p.web.h264-kogi.sample.mkv"
Misnamed file count	: 1
Damaged file count	: 1
Missing file count	: 0
Input File Slice found	: 337
Finding available slice:
Size Status   :  Filename
363
99232000 Damaged  : "whose.line.is.it.anyway.us.s17e02.1080p.web.h264-kogi.r02"
=       26 : "whose.line.is.it.anyway.us.s17e02.1080p.web.h264-kogi.r02"
Input File Slice found	: 363
Comparing lost slice	: 1 within 363
Null byte slice count	: 0
Reversible slice count	: 0
Duplicate slice count	: 0
Counting available slice:
Avail /  Slice :  Filename
26 /     27 : "whose.line.is.it.anyway.us.s17e02.1080p.web.h264-kogi.r02"
Input File Slice avail	: 363
Input File Slice lost	: 1
Ready to rename 1 file(s)
Need 1 more slice(s) to repair 1 file(s)
Correcting file : 1
Status   :  Filename
Restored : "Sample\whose.line.is.it.anyway.us.s17e02.1080p.web.h264-kogi.sample.mkv"
Restored file count	: 1
Failed to repair 1 file(s)

Blurry on secondary monitor

I have two monitors:

  • 27" 4K zoomed to 200%
  • 19" SXGA zoomed to 100%

The program looks fine on my 4K monitor. On the other monitor it's really blurry - not just because the resolution is lower; it is definitely blurrier than it needs to be, and definitely blurrier than before I got my 4K screen.

Tried with the beta version as well - same deal.

Please note: you don't need a 4K monitor to test this. You just need one monitor at a different zoom level than the other.

[Request] Can I include MultiPar binary into ngPost package?

Hi,
I didn't find where to contact you elsewhere so I'm doing it here...
I'm the dev of ngPost, and I'm planing for the next release to make it easier for the users to choose between the different tools to generate par2. Currently I'm only embedding par2cmdline which is in public domain.
I'll include a compiled version of ParPar as it's much faster and the owner agreed I could do it.
Some people on Windows use your software, so I should probably make it available as well. Would that be OK with you?
Do you provide both a 64bit and 32bit version?
May I ask in what language it is coded? Couldn't you build it for Linux and Mac?
Have you benchmarked it against ParPar? Is it much faster?

Feature request: Improve support for disconnecting USB/Network drives

Right now, MultiPar doesn't handle a disconnected USB drive well. It detects "device not ready", turns the progress bar red, then aborts everything.

Would it be hard to check every few seconds whether the device has become available again, and then resume the operation?

This would help with USB drives being removed, as well as Wi-Fi signal failure for network drives.
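A simple form of the requested behaviour might look like this (a sketch only, not MultiPar code; the polling interval and timeout are arbitrary):

```python
# Poll until a drive path is reachable again before resuming work,
# instead of aborting on the first "device not ready" error.
import os
import time

def wait_for_device(path, interval=5.0, timeout=300.0):
    """Return True once `path` is accessible again, False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.exists(path):
            return True
        time.sleep(interval)
    return False
```

A real implementation would also need to re-open file handles and resume from the last verified position, since handles on a removed drive become invalid.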

[Request] Use of Descript.ion files to rename obfuscated downloads

Hi

Would it be possible for MultiPar to examine the Descript.ion file with the aim of renaming obfuscated (scrambled) file names? I know it can rename misnamed files using the PAR2 file. I don't think this would be outside the main objective of MultiPar.

I use Newsleecher and NewsBIN Pro, which both have the ability to download Descript.ion files.
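For reference, Descript.ion is a simple text format: one line per file, the file name first (double-quoted when it contains spaces), followed by its description. A sketch of reading it into a map that a renamer could consult (the function name and the simplified parsing are my own; real files may use other quoting or encodings):

```python
# Parse a Descript.ion file into {current_name: description} pairs.
# Names containing spaces are assumed to be double-quoted, per the
# common convention; everything else is "name<space>description".

def parse_description_file(path):
    entries = {}
    with open(path, "r", encoding="utf-8", errors="replace") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            if line.startswith('"'):
                end = line.find('"', 1)
                name, desc = line[1:end], line[end + 1:].strip()
            else:
                name, _, desc = line.partition(" ")
            entries[name] = desc.strip()
    return entries
```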

2 suggestions to fix possible bugs

  1. When scanning files, MultiPar stops if there are damaged files and (at the same time) file names are not properly encoded, so that MultiPar wants to correct the file names. It corrects the names of undamaged files, but leaves damaged file names unaltered. As a result, MultiPar then cannot repair, because it believes some files are missing. If I manually rename the damaged files, MultiPar can scan the complete file set and repair the files.
    There must be a decision tree where MultiPar chooses to rename files. I think it should rename files, whether damaged or not, and then check whether repair is possible. I suspect it looks at a file's damage status first, and then never considers renaming it. It could be a matter of these two steps being in the wrong order.

  2. Under Options I have chosen not to repair automatically. I had a file with 13 recovery blocks and 13 damaged blocks, and I had to click to repair manually.
    It could be as simple as a line of code that should say: if (# recovery blocks >= # damaged blocks) then do the repair. Maybe the code says > rather than >=?
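The suspected off-by-one in point 2 can be illustrated in a few lines (hypothetical code to show the idea, not the actual MultiPar source):

```python
def can_repair(recovery_blocks, damaged_blocks):
    # Correct check: repair is possible when recovery covers all damage,
    # including the equal case (13 recovery blocks vs 13 damaged blocks).
    return recovery_blocks >= damaged_blocks

def can_repair_strict(recovery_blocks, damaged_blocks):
    # Suspected buggy check: a strict '>' wrongly rejects the equal case.
    return recovery_blocks > damaged_blocks
```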

I recently upgraded to 1.3.0.7, but I have seen problem 1 in earlier versions as well.

MultiPar GUI layout will be changed a little

I plan to increase the maximum slice size in my PAR2 clients. Though I thought 1 GB was enough for PAR2, some ParPar users seem to set a larger slice size. Because I use a 32-bit integer for the working buffer area, 1.3 GB is the theoretical limit in my code: 4 GB / 3 = 1.3 GB. If I omit searching for blocks in damaged files, it would be possible to support up to a 4 GB slice size; there is a 4 GB limit for file IO in the Win32 API. To support slice sizes over 4 GB, I would need to modify my code substantially.
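The limits described above, restated as plain arithmetic (the division by 3 is the author's stated reason; the exact byte values are my restatement):

```python
GiB = 1 << 30

addr_limit = 4 * GiB            # range of a 32-bit integer, as stated above
slice_limit = addr_limit // 3   # "4 GB / 3" -> about 1.33 GiB theoretical max
old_split_limit = 2 * GiB       # former split-size cap (signed 32-bit range)

print(slice_limit)              # 1431655765 bytes, roughly 1.3 GB
```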

For now, I changed par2j to support up to 1.3 GB in the next version. (I just changed the limit value from 1 GB to 1.3 GB, hehe.) As this doesn't cause overflow, there should be no problem. The maximum split size was also increased from 2 GB to 4 GB. (2 GB was the limit of a signed 32-bit integer.) Because I don't use the file-splitting feature myself, I didn't test it much. It may work, if nobody reports a problem.

Now, I found a problem while modifying the MultiPar GUI. The Windows common control edit box supports a maximum value of 2 GB. (It's a signed 32-bit integer.) Furthermore, the current edit boxes are narrow. I increased the width of two edit boxes (split size and slice size) for testing, and aligned the other boxes too. As a result, the widths of the edit boxes become different. (They are the same in QuickPar.) I feel this layout would be acceptable on Windows 7. If someone has a problem or trouble using other OSes, please report the incident.

I put the sample (gui_layout_2021-02-13.zip) in "MultiPar_sample" folder on OneDrive.

context menu issue

Trying to use this in the context menu isn't working for me. I can add it to the "Copy To" context sub-menu, but it is not being added to the regular context drop-down menu when right-clicking on files. Why is this? I mean for it to be integrated into the shell, but I don't see it in my shell context menu...

[Feature Request] Full scanning of Data Files residing on the Bad Sectors during Verification

Magnetic HDDs develop bad sectors gradually over time. If any file happens to reside on those bad sectors, it is technically corrupted. We can repair that corrupted file using PAR2 recovery files.

I did a test where I created 10% PAR2 recovery data from a fresh copy of a file. Then I made some copies of that file on one of my older HDDs where bad sectors were present. As it happened, two copies fell on those bad sectors. When I verified them against the previously created PAR2 recovery file, the par2j client (v1.3.1.9) couldn't scan the whole file. It stopped where it found the first bad sector and gave a read error (0x17, Data error (cyclic redundancy check)). As a result, it needed more slices than actually required to repair the whole file.

Now, when I recovered those partially corrupted files to a good HDD using DDRescue-GUI, it turned out that both files had lost only one slice each to the bad sectors, and they were entirely repairable.

So, if a bad sector is encountered near the beginning of the file, the file will be irreparable using 10-20% recovery, even though there is a sufficient number of good slices after that bad sector. If par2j supported scanning beyond the bad portion of a file, there would be no need to use a third-party utility like DDRescue-GUI.
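The requested behaviour amounts to read-past-errors scanning: on a read failure, skip ahead one sector, substitute zeroes, and keep going, so good slices after the bad spot can still be counted. A hedged sketch (sector size, chunk size, and names are illustrative; real bad-sector handling would retry and track the skipped ranges):

```python
# Yield (offset, data) chunks of a file; unreadable sectors come back
# as zero bytes instead of aborting the whole scan.
import os

SECTOR = 4096  # assumed physical sector size

def read_with_skips(path, chunk_size=1 << 16):
    size = os.path.getsize(path)
    with open(path, "rb", buffering=0) as f:
        offset = 0
        while offset < size:
            want = min(chunk_size, size - offset)
            try:
                f.seek(offset)
                data = f.read(want)
            except OSError:
                # Bad sector: emit a zero-filled sector and move past it.
                data = b"\x00" * min(SECTOR, size - offset)
            if not data:
                break  # unexpected EOF; avoid looping forever
            yield offset, data
            offset += len(data)
```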

I have attached the MultiPar Verification Logs (both when the file is on bad sectors and after recovering it from bad sectors) along with the DDRescue-GUI logs (open them with ddrescueview to view bad portions of files graphically).

By the way, there are several utilities, such as FileSaver, Dust Signs File Copier, Roadkil’s Unstoppable Copier, Bad Block Copy for Windows, SalvageFile and badcopy, that recover as much of a file as possible from bad sectors.

Thanks in advance...

GPU Acceleration via par2j64.exe??? Is it possible? How do I do it?

Hi, I'm used to using the MultiPar GUI with GPU acceleration enabled. I save a lot of time by enabling it (approximately 1 minute and 13 seconds saved for a 16 GB input file).

Question:

Is there a way to forcibly enable GPU acceleration from par2j64.exe?

My configuration:

Redundancy: 5%
Number of Recovery Files: 10

My script is:

(Using Javascript in Directory Opus File Manager):

par2j64 = '"C:\\Users\\Neil Moore\\AppData\\Local\\MultiPar\\par2j64.exe"';

cmdToRun = par2j64 + ' create /rr5 /rf10 "{sourcepath$}_par2\\{filepath$|nopath|noext}" {filepath$}';

cmd.RunCommand(cmdToRun)

Resulting Command that is passed to Command Prompt:

"C:\Users\Neil Moore\AppData\Local\MultiPar\par2j64.exe" create /rr5 /rf10 "Q:\__Plex\_Movies\Tenet (2020)\_par2\tenet_sample" "Q:\__Plex\_Movies\Tenet (2020)\tenet_sample.mkv"

Thank you for your hard work on this program!
