Comments (8)
It's working now! I can't tell why it wasn't before, though... Maybe the system was calling a cached version? But it shouldn't have been.
Anyway, I'm really glad. I've been trying to get pcompress working so I could test it since I first found out about it years ago. Thank you!
BTW, you might want to see this if you're planning on giving pcompress some more love:
Google's brunsli is 5x faster than packJPG and only about 1% weaker on ratio. Sometimes it's even stronger, and it's under active development.
Also, fast-lzma2 is 2x faster than mt-lzma and lends itself beautifully to the chunk-based approach of pcompress. Basically, it allows the same state to be used across chunks, producing a true solid stream no matter how many threads are in use. Bottom line: it will probably compress much better than the current approach while doing it twice as fast.
There is a lot more that can be done on pcompress, if you want to discuss it. Thanks again for your work!
from aur-pcompress-git.
Thanks for the great resources. I'm not really into C++ development, but I'll look into it.
As far as I can tell it should be enabled. Check: nm -C /usr/lib/libpcompress.so | grep jpg
0000000000095220 t packjpg_filter
00000000000a4290 t packjpg_filter_process
00000000001a12a8 b jpgfilename
00000000001a129c b jpgfilesize
000000000009a310 t jpg_parse_jfif(unsigned char, unsigned int, unsigned char*)
0000000000099750 t jpg_next_mcupos(int*, int*, int*, int*, int*, int*)
000000000009c230 t jpg_next_mcuposn(int*, int*, int*)
000000000009b3b0 t jpg_encode_crbits(abitwriter*, abytewriter*)
000000000009ca00 t jpg_encode_eobrun(abitwriter*, huffCodes*, int*) [clone .part.0]
000000000009c120 t jpg_next_huffcode(abitreader*, huffTree*)
000000000009b510 t jpg_setup_imginfo()
000000000009b430 t jpg_rebuild_header()
0000000000014100 t jpg_rebuild_header() [clone .cold]
000000000009da20 t jpg_decode_block_seq(abitreader*, huffTree*, huffTree*, short*)
000000000009c180 t jpg_decode_dc_prg_fs(abitreader*, huffTree*, short*)
000000000009b220 t jpg_encode_block_seq(abitwriter*, huffCodes*, huffCodes*, short*)
000000000009c990 t jpg_encode_dc_prg_fs(abitwriter*, huffCodes*, short*) [clone .isra.0]
By default it's built under the LGPLv3 license, which has packjpg enabled.
./archive/pc_arc_filter.c
#ifndef _MPLV2_LICENSE_
extern size_t packjpg_filter_process(uchar_t *in_buf, size_t len, uchar_t **out_buf);
ssize_t packjpg_filter(struct filter_info *fi, void *filter_private);
extern size_t packpnm_filter_process(uchar_t *in_buf, size_t len, uchar_t **out_buf);
ssize_t packpnm_filter(struct filter_info *fi, void *filter_private);
#endif
You're right, I have the same output as you. But it just doesn't work then:
$ pcompress -a -l14 -GLPxjC photos photos.pz
Scanning files.
Sorting ...
Scaling to 2 threads
Compression Statistics
======================
Total chunks : 3
Best compressed chunk : 2 MB(14.52%)
Worst compressed chunk : 19 MB(99.90%)
Avg compressed chunk : 13 MB(69.93%)
Adaptive mode stats:
BZIP2 chunk count: 0
LIBBSC chunk count: 0
PPMd chunk count: 0
LZMA chunk count: 1
LZ4 chunk count: 0
37623558 photos.tar.pcf //precomp -cn
44000941 photos.pz
44862622 photos
Yep, looks like a bug in pcompress.c:init_pc_context()
pctx.enable_packjpg is set only in the section responsible for automatic selection of extra compression filters when higher levels are used.
https://github.com/moinakg/pcompress/blob/c6e779c40041b7bb46259e9806fa92b20c7b78fb/pcompress.c#L3658-L3674
Using any of the -DPGjx flags sets pctx.advanced_opts=1, which skips this section, so pctx.enable_packjpg and friends are never set according to the command-line flags.
I've pushed a small patch to GitHub that should fix this; give it a try. (I've postponed the push to AUR until it's confirmed to resolve the issue. Debug flags are also set for you in the PKGBUILD, so just makepkg -CLfi it 😏)
To quickly see what's going on, start pcompress with the desired flags and attach perf to its pid with sudo perf top -d 3 -p $PID. You should see some pjg_* functions from the packjpg filter:
11.90% libpcompress.so.1 [.] Bt4_MatchFinder_GetMatches
8.94% libpcompress.so.1 [.] model_s::update_model
8.28% libpcompress.so.1 [.] aricoder::encode
6.66% libpcompress.so.1 [.] GetMatchesSpec1
5.06% libpcompress.so.1 [.] model_s::shift_context
4.81% libpcompress.so.1 [.] abitreader::read
3.88% libpcompress.so.1 [.] model_s::totalize_table
3.70% libpcompress.so.1 [.] pjg_encode_ac_high
3.53% libpcompress.so.1 [.] RangeEnc_EncodeBit
3.03% libpcompress.so.1 [.] model_b::shift_context
2.42% libpcompress.so.1 [.] pjg_aavrg_context
2.25% libpcompress.so.1 [.] aricoder::write_bit
2.07% libpcompress.so.1 [.] GetOptimum
It's not working for me:
35810318 photos.pz // -l14
44000941 photos.pz // -l14 -GLPxjC
It's also overriding the use of wavpack:
2021471 audio.pz // -l14
4720015 audio.pz // -l14 -GLPxjC
I'm using the newer binary, of course.
Do you want me to upload it or to run it with some other parameter?
I've tried precomp (btw, thanks, I hadn't known about it) and it looks fine to me 🤔
d /tmp/photos*
42M /tmp/photos_fix.pz // pcompress -j
42M /tmp/photos_orig.pz // pcompress -l14
42M /tmp/photos.pcf // precomp /tmp/photos.tar
57M /tmp/photos.tar