Unlike the other problem I reported in #36, which I'm sure is a bug, this one is more complicated and I haven't tested it much. I also didn't encounter it in the real world myself, only while testing the other bug. I can imagine some real-world scenarios where it may occur, but these seem very rare, so I'm not sure it's worth coming up with a fix or solution. Still, I thought it would be useful to document what I found along with some brief thoughts.
As I understand it, a single-byte error doesn't need recovery blocks in PAR2; it can be repaired with just the base PAR2 file. But it seems that if you have a corrupted block with a single-byte difference from an intact block, even at verification level 2, par2j often doesn't recognise that the corrupted block can be "repaired" from the other block. I used a file filled with 10h, but I imagine it applies to any case where one block differs from another by only a single byte. (Tested with 1.3.0.6 and 1.3.1.8, both 64-bit and 32-bit.)
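(For context: PAR2 stores a CRC32 and an MD5 for every input block, which is what makes single-byte repair possible at all. I don't know how par2j actually implements it, but a naive brute-force sketch in Python shows the idea; the function and names here are purely my illustration, not par2j's code:)

```python
import hashlib
import zlib

def repair_single_byte(block: bytes, want_crc: int, want_md5: bytes) -> bytes | None:
    """Brute force: try every position/value until the block matches the
    stored CRC32 and MD5. Recomputing both checksums per candidate is
    slow, but it shows why one damaged byte needs no recovery data."""
    buf = bytearray(block)
    for pos in range(len(buf)):
        original = buf[pos]
        for value in range(256):
            if value == original:
                continue
            buf[pos] = value
            if zlib.crc32(buf) == want_crc and hashlib.md5(buf).digest() == want_md5:
                return bytes(buf)
        buf[pos] = original
    return None
```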
Test 1
As with my other report, a simple test is to create a file filled with null bytes or some other byte. (As mentioned, I mostly used 10h.) But this time, modify one byte within the file somewhere away from the first block before creating the PAR2 file. Because of the other problem, you probably don't want to split the file into too many blocks.
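Here's a minimal sketch of that setup in Python (the file name, block size and offsets are arbitrary choices of mine, not anything par2j requires):

```python
BLOCK_SIZE = 4096   # the block size you'd give par2j when creating the PAR2 set
NUM_BLOCKS = 8      # keep the block count low because of the other problem (#36)

data = bytearray(b"\x10" * (BLOCK_SIZE * NUM_BLOCKS))
data[BLOCK_SIZE * 4 + 100] = 0x11   # the single-byte difference, away from the first block

with open("test1.bin", "wb") as f:
    f.write(data)
# now create the PAR2 set from test1.bin with par2j
```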
After creation, corrupt the block that has the single-byte difference, changing at least 2 bytes so it's not repairable from its own checksums. It should still be repairable from the null-byte blocks, since it's only a single-byte difference from them, but par2j doesn't recognise this and says you need another recovery block.
However, if you also corrupt the first block (and possibly the second; I'm not sure, since one time it seemed to work and another time it didn't) by at least 2 bytes so it's not repairable either, par2j now recognises the file is repairable without any recovery blocks. If you leave the first block intact but instead corrupt a null block further from the beginning, it doesn't: it still thinks you need a recovery block.
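A small helper like this reproduces both corruption patterns (the block indices refer to my layout above; "corrupting" just means flipping two bytes so the block can't be repaired from its own checksums):

```python
def corrupt_block(path: str, block_index: int, block_size: int = 4096) -> None:
    """Flip two bytes inside the given block, so it is no longer
    repairable on its own at verification level 2."""
    with open(path, "r+b") as f:
        f.seek(block_index * block_size + 10)
        old = f.read(2)
        f.seek(block_index * block_size + 10)
        f.write(bytes(b ^ 0xFF for b in old))

corrupt_block("test1.bin", 4)   # single-byte-difference block only: par2j asks for a recovery block
corrupt_block("test1.bin", 0)   # additionally corrupt the first block: now it reports repairable
```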
Test 2
For another test, put data at the beginning of the file so the beginning is not filled with null blocks, but keep the block with a single-byte difference in the middle of the null/duplicate region. Then create the PAR2 files and corrupt the single-byte-difference block as before. Now, if you also corrupt (by 2 bytes or more) any null block before the single-byte-difference block, par2j recognises that the single-byte-difference block can be repaired. However, if you corrupt any block after the single-byte-difference block instead, it doesn't recognise that it's repairable from the other null blocks and says you need 1 recovery block.
I think it also does this if you corrupt the last non-null block (the end of the data you added): even if you change many bytes (and I made sure it wasn't at a block boundary), it still says only 1 recovery block is needed, suggesting it only needs to recover the newly damaged block. However, if instead you corrupt a data block further back (away from the end), or even the first block, it says you need 2 (or however many) recovery blocks. So in that case it doesn't seem to recognise that the single-byte-difference block is repairable from the null blocks.
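The Test 2 layout looks like this, reusing `corrupt_block` from above (again, the sizes and offsets are arbitrary choices of mine):

```python
import os

BLOCK_SIZE = 4096
data = bytearray(os.urandom(BLOCK_SIZE * 3))   # real data in blocks 0-2
data += b"\x10" * (BLOCK_SIZE * 8)             # duplicate region in blocks 3-10
data[BLOCK_SIZE * 6 + 50] = 0x11               # single-byte difference mid-region (block 6)

with open("test2.bin", "wb") as f:
    f.write(data)

# after creating the PAR2 set:
# corrupt_block("test2.bin", 6)   # the single-byte-difference block itself
# corrupt_block("test2.bin", 4)   # a null block BEFORE it  -> block 6 recognised as repairable
# corrupt_block("test2.bin", 8)   # a null block AFTER it   -> "you need 1 recovery block"
```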
Test 3
As a final test, I put data at the beginning of the file, then changed a single byte in the first null (well, actually 10h in my case) block rather than one in the middle. Now it seems impossible to get par2j to recognise that the single-byte-difference block can be repaired from a null block. If you corrupt the data anywhere, beginning, middle or end, it says you need 2 recovery blocks (or however many). Corrupting a null block after the single-byte-difference block unsurprisingly doesn't help either. (Remember, there's no null block between the data and the single-byte-difference block.)
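And the Test 3 layout, where the single-byte difference sits in the first block of the duplicate region, directly after the data (reusing the names from the Test 2 sketch):

```python
data = bytearray(os.urandom(BLOCK_SIZE * 3))   # data in blocks 0-2
data += b"\x10" * (BLOCK_SIZE * 8)             # duplicate region in blocks 3-10
data[BLOCK_SIZE * 3 + 50] = 0x11               # single-byte difference in the FIRST null block (block 3)

with open("test3.bin", "wb") as f:
    f.write(data)
```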
Further comments:
I did try changing the verification level. I think levels 1 and 3 may not recognise the block is repairable at all, but this is probably expected. Levels 0 and 2 are the ones that sometimes recognise the block is repairable and sometimes don't. I didn't try fooling around with memory settings, number of cores, GPU or anything like that. I also only used my A10-5800K, not the Core i5-3470.
From my tests, I would guess part of the reason for my results is that only a single pass is made through the file, and during that pass you have no idea in advance which blocks are corrupt. Testing every single block to see whether it could be used to repair every other block would be extremely inefficient and wasteful, since you'd be doing it even when zero blocks are corrupt or missing. So instead, it's presumably only once corruption is found that par2j starts looking for whether it can repair blocks, and depending on where the corruption is, it may or may not realise it can actually "repair" the single-byte-difference block from the null or duplicate block(s).
In my case, simply testing against the next duplicate or null block would work, but that won't always be enough. Notably, while I only tested null or duplicate blocks, if you had a block with data and another block with the same data except for a 1-byte difference, and no other duplicate blocks, I suspect the problem could occur whenever the corrupt block comes after the intact one.
Possibly, to really fix this, you would need to make 2 passes: in the first pass you detect which blocks are corrupted, then in the next pass you try to repair those blocks from every other block. I'm not sure whether this is worth adding, but if it is, it may be better to make it a new option; call it paranoid verification mode or something. Having to read the file twice is likely to slow things down on a lot of modern systems if the data is on a HD.
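To illustrate what I mean, here's a rough sketch of such a two-pass scheme. This is only my idea of how it could work, not how par2j does anything today: `stored` stands in for the per-block CRC32/MD5 pairs from the PAR2 set, it reuses `repair_single_byte` from earlier, and for simplicity it buffers all blocks in memory rather than literally reading the file twice.

```python
import hashlib
import zlib

def paranoid_verify(path: str, block_size: int,
                    stored: list[tuple[int, bytes]]) -> dict[int, bytes]:
    """Pass 1: find blocks whose checksums don't match.
    Pass 2: try to rebuild each bad block from every intact block,
    either as an exact duplicate or as a duplicate with one byte changed.
    Returns {block index: replacement bytes} for the repairs found."""
    with open(path, "rb") as f:
        blocks = [f.read(block_size) for _ in range(len(stored))]

    # Pass 1: classify every block against its stored checksums.
    ok = {i for i, b in enumerate(blocks)
          if zlib.crc32(b) == stored[i][0]
          and hashlib.md5(b).digest() == stored[i][1]}
    bad = [i for i in range(len(stored)) if i not in ok]

    # Pass 2: every intact block is a candidate source for every bad block.
    repairs: dict[int, bytes] = {}
    for i in bad:
        want_crc, want_md5 = stored[i]
        for j in ok:
            src = blocks[j]
            if zlib.crc32(src) == want_crc and hashlib.md5(src).digest() == want_md5:
                repairs[i] = src            # exact duplicate elsewhere in the file
                break
            fixed = repair_single_byte(src, want_crc, want_md5)
            if fixed is not None:
                repairs[i] = fixed          # duplicate with a single-byte difference
                break
    return repairs
```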
One half-way option may be to limit the second pass to duplicate (including null) non-corrupt blocks, since that seems to be the only case where this is likely to occur. While my example may be artificial, I could imagine an uncompressed file with a non-data region filled with 00 or FF or something else. It probably doesn't even have to be a single repeated byte; it could be a pattern like 20802080 or 28C35EF0, provided it ends up aligned with the block size. I suspect if there's a region with some "data" or header or whatever, it will differ by more than one byte, but in rare cases you could have just a single byte. (I'm not sure about the chance of having a pattern with a single-byte variation among the repetitions. I also imagine it's rare for disk images to contain only a single non-repeating byte, so I didn't use that as an example.)