
Comments (3)

GoogleCodeExporter commented on August 30, 2024
Hi,

The latest zopfli on git includes a tool called zopflipng. You could get it
and use it to compress your PNGs instead of AdvanceCOMP. There is an issue
with it that can cause it to crash, so you will also need the patch from here:
https://code.google.com/p/zopfli/issues/detail?id=28#c1

I ran zopflipng on dragon_gloves.png with several different levels of 
optimization and saw reductions in file size for each.

zopflipng: 94.763% of the original size (40K)
zopflipng -m: 94.289% of the original size (40K)
zopflipng --iterations=500 --splitting=3 --filters=01234mepb --lossy_8bit --lossy_transparent: 56.232% of the original size (23K)

The last method took a *very* long time.  Attached is the file.

Original comment by [email protected] on 12 Nov 2013 at 2:34

Attachments:

from zopfli.

GoogleCodeExporter commented on August 30, 2024
You should be using advdef, not advpng.

Advpng gives worse results for you because of the way it uses
filters. Filters are part of the PNG spec and describe various methods for
predicting the color of the next pixel based on the known colors of previous 
pixels (the pixel immediately to the left, the pixel immediately above, and/or 
the pixel to the upper left). These filters can be defined once for the whole 
image (filters 0-4) or individually for every scanline (called filter 5, but 
really a combination of filters 0-4).
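The filter arithmetic described above can be sketched in a few lines. This is a simplified illustration based on the PNG specification, not code from any of the tools mentioned; `bpp` is the number of bytes per pixel.

```python
# Apply one of the five PNG scanline filters (types 0-4) to a raw scanline.
# Each filter predicts a byte from its neighbours and stores the difference
# modulo 256: 0 = None, 1 = Sub (left), 2 = Up, 3 = Average, 4 = Paeth.

def paeth(a, b, c):
    # Paeth predictor: choose the neighbour closest to the estimate a + b - c.
    p = a + b - c
    pa, pb, pc = abs(p - a), abs(p - b), abs(p - c)
    if pa <= pb and pa <= pc:
        return a
    return b if pb <= pc else c

def filter_scanline(ftype, line, prev, bpp):
    # `line` is the current raw scanline, `prev` the previous raw scanline
    # (None for the first row), `bpp` the number of bytes per pixel.
    out = bytearray()
    for i, x in enumerate(line):
        a = line[i - bpp] if i >= bpp else 0                       # left
        b = prev[i] if prev is not None else 0                     # above
        c = prev[i - bpp] if prev is not None and i >= bpp else 0  # upper-left
        pred = (0, a, b, (a + b) // 2, paeth(a, b, c))[ftype]
        out.append((x - pred) & 0xFF)
    return bytes(out)
```

For example, a smooth gradient scanline like 0, 1, 2, 3, ... becomes a long run of 1s under filter 1 (Sub), which DEFLATE compresses far better than the raw ramp.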

Because of advpng's intended use, the compression of screenshots from 
MAME-emulated games, it forces every line of the image to use filter 0 
(basically, no prediction). This is the best choice for MAME screenshots, 
because they typically will have 256 colors or fewer and will have large areas 
of flat color. The prediction filters usually only produce better results than 
no prediction in the cases of full-spectrum color and gradients -- photographs 
and modern 2D and 3D art. Your image uses almost 7000 colors, as well as 
gradients, so it does worse when forced to use no prediction.

In contrast to advpng's behavior, advdef will simply recompress any DEFLATE 
stream, without trying to modify the filters being used. Using your original 
image, I got the following results from advdef:

advdef -z4 dragon_gloves.png
       43653       41406  94% dragon_gloves.png

With greatly increased iterations (this takes a long time), even better results 
can be had:

advdef -z4 -i1024 dragon_gloves.png
       41406       41117  99% dragon_gloves.png
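The principle behind advdef (recompress the existing DEFLATE stream harder while leaving the filtered pixel data untouched) can be illustrated with plain zlib. Note that advdef and zopfli search far more exhaustively than zlib's level 9; this only shows the idea, not their implementation.

```python
# Illustration of advdef's principle: decompress an existing DEFLATE
# stream and recompress it with a stronger effort setting. The
# underlying (filtered) data stays byte-identical; only the encoding
# of the compressed stream changes.
import zlib

def recompress(stream):
    raw = zlib.decompress(stream)   # recover the original bytes
    return zlib.compress(raw, 9)    # re-encode at maximum zlib effort
```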

Now, if we want to go a bit crazy, we can use another tool called pngwolf to 
choose the scanline filters more intelligently. As there is a choice of five 
different filters for each scanline, the total number of combinations, even
for a small image like this, is too large to test exhaustively. So any PNG 
compressor running in Adaptive (filter 5) mode uses a heuristic to choose a set 
of filters. pngwolf uses a better set of heuristics, as well as a genetic 
algorithm to find a filter set that produces better overall compression.

Your original image uses the following filter set:
    000000000000000000000000000000000000000000000000000000000000000000000000
    000000000000000000000000000000000000000000000000000000000000000000034000
    320200200044111111111114434333343344444444444444444444444444444444444444
    4444444444444444444444444422244444444441

The best result I found with pngwolf was:
    000000000000000000000000000000000000000000000000000000000000000000000000
    000000000000000000000000000000000000000200000000000000000000000000000000
    000000000044111111111111411133323334144444444414444444414444114441444444
    4444444444444444444444444422244444414411

When this output was then compressed with advdef -z4 -i1024, I got a final size 
of 41061 bytes.

Now, beware of the previous poster's results. He/she used a technique called 
"dirty transparency" that tries to adjust the color contents of any pixels in 
the image that are set to be fully transparent by the alpha channel in order to 
improve the image compression. This assumes that any program that will use the 
image will be ignoring any fully-transparent pixels anyway. In your image, 
though, this ends up wiping out the whole lower right portion of the image, as 
the transparency info for that portion of the gloves is missing. If you were 
editing the image with the intent of adding that transparency info later, dirty 
transparency is not a good technique to use.
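For raw RGBA data, dirty transparency can be sketched in a few lines. This version simply zeroes the hidden colour channels; real tools such as truepng adjust the colour contents more cleverly to suit the prediction filters.

```python
# Sketch of "dirty transparency" on raw RGBA bytes: wherever alpha is 0,
# the colour channels are invisible to any renderer that honours the
# alpha channel, so overwrite them with zeros to create long identical
# runs that DEFLATE compresses well. The original colour data is lost.

def dirty_transparency(rgba):
    out = bytearray(rgba)
    for i in range(0, len(out), 4):
        if out[i + 3] == 0:                 # fully transparent pixel
            out[i:i + 3] = b"\x00\x00\x00"  # discard hidden colour data
    return bytes(out)
```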

However, if you don't need that portion of the image, using dirty transparency 
obviously reduces the file size a lot in this particular image. Using a 
different tool (truepng) to do the dirty transparency and following it up by 
pngwolf and advdef -z4 -i1024, I got the file size down to 24465 bytes.

I've attached copies of my optimized files. One is completely lossless, while 
the other uses dirty transparency. I hope this works for you.

Original comment by [email protected] on 1 Feb 2014 at 11:34

Attachments:


GoogleCodeExporter commented on August 30, 2024
I tried the current version of zopflipng on the input image (which is 43653 
bytes).

With default settings, it makes the image 6% smaller: 41344 bytes
With --lossy_transparent, it makes it 44% smaller: 24692 bytes
With --lossy_transparent --iterations=1000, it gives 24551 bytes.

Note that --lossy_transparent has the same meaning as "dirty transparency" 
mentioned above.
The result is slightly worse, probably due to having less ideal PNG filter
values than pngwolf gave above.

But zopfli and zopflipng are making the image smaller and all techniques 
implemented in it are working as intended here, so I think this bug can be 
closed. If AdvanceCOMP still makes it larger, please report it there. Thanks!

Original comment by [email protected] on 2 Jul 2014 at 2:27

  • Changed state: WontFix

