gammaunc / fastc
A fast texture compressor for various formats
License: Apache License 2.0
When FasTC compresses an image whose alpha channel contains distinct values to the BPTC format, a few blocks that should contain a uniform alpha of 255 end up with certain pixels having the wrong alpha value: 254 instead of 255. These blocks can be located far from the blocks whose alpha values are actually less than 255.
For example, bug appears after compressing this image:
https://dl.dropboxusercontent.com/u/3319885/source.png
The result of compression:
https://dl.dropboxusercontent.com/u/3319885/result-bptc.png
Part of damaged pixels marked at this area:
https://dl.dropboxusercontent.com/u/3319885/bug-area.png
Our renderer's dithered-transparency algorithm erroneously discards those pixels with alpha = 254, so completely solid objects end up with random single-pixel holes :(. The expected value for those pixels is 255, as in the source texture. Is there any chance to figure out what's going on, or maybe even fix it?
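For anyone trying to reproduce this, a quick way to confirm the bug is to scan the decompressed output for alpha values that drifted from 255. This is just a sketch; the interleaved RGBA8 buffer layout is an assumption about the decompressed data, not FasTC API.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Count pixels whose alpha is not exactly 255 in an interleaved RGBA8 buffer
// (assumed layout; adjust the stride if your decompressed output differs).
static int CountNonOpaquePixels(const std::vector<uint8_t> &rgba) {
  int count = 0;
  for (std::size_t i = 3; i < rgba.size(); i += 4) {
    if (rgba[i] != 255) ++count;
  }
  return count;
}
```

Running this over the source image should return 0 for the affected regions, while the BPTC round trip reportedly does not.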
Thanks for your attention.
I tried your decompressor on several ASTC files encoded using the ARM encoder, using the following command options: 3.56 -medium
You can see below the expected result and what I obtained:
The white squares are not important; they appear because you did not implement void-extent block decompression, which is not the point here.
The major problem is on the left side of the image, where some pink fragments appear. These fragments show up quite often, even on other images, and always on the left border. I did not have time to dig into it much, but my intuition is that it happens when you write pixels to the output buffer.
Another thing worth pointing out: the top of the image also differs from what was expected (there is an error block there as well). I looked around a bit and have the impression that your raw data is inconsistent. This second problem also appeared on several images.
If I have time to look around, I will try to find why these two issues appear and make a pull request.
This will avoid spurious CMake and other configuration bugs.
For example:
[pavel@Zombie-MBP ~/Projects/TexComp/build]$ CLTool/tc -q 0 -t 32 -j 32 ../test/kodim05.png
Compression time: 3666.615 ms
PSNR: 42.140
However, in PVRTexTool, we get:
| | Red | Green | Blue | All |
|---|---|---|---|---|
| PSNR: | 45.15 | 45.40 | 45.02 | 44.18 |
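Part of the gap between the two tools may simply be how each one computes PSNR. A common definition is 10 * log10(255^2 / MSE); whether MSE is averaged per channel or across all channels changes the reported number (how PVRTexTool averages is an assumption I haven't verified). A sketch:

```cpp
#include <cmath>

// Peak signal-to-noise ratio for 8-bit data: 10 * log10(MAX^2 / MSE).
// Per-channel vs. all-channel MSE averaging yields different values,
// which can explain tool-to-tool discrepancies.
static double PSNR(double mse) {
  return 10.0 * std::log10((255.0 * 255.0) / mse);
}
```

Comparing the two tools on identical per-channel MSE values would rule this out.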
Currently clunix.cpp and clwin32.cpp share a lot of code, and should be refactored.
Including third-party code in the project seems like a bad idea and creates many issues when another project uses GTest and FasTC together. I would advise replacing the bundled GTest source code with CMake's FetchContent.
There are a few things that both the BPTC compressor and the upcoming PVRTC compressor depend on from the "core" library. Namely, TexCompTypes.h needs to be available, but ideally we would not depend on things like the FasTC threading hooks around PThreads or Boost. Hence, we should split some base classes out of the core library and have the core library be used solely for front-end testing tools.
One of the biggest things to do to complete this task is to develop a lightweight streambuffer that is threadsafe so that we can log to files while still using multi-threaded code.
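A minimal sketch of what such a thread-safe sink could look like, assuming C++11 &lt;mutex&gt; is acceptable (the class name is hypothetical, not part of FasTC):

```cpp
#include <mutex>
#include <sstream>
#include <string>

// Serializes writes from multiple threads into one buffer; a real version
// would wrap a file stream instead of an in-memory ostringstream.
class ThreadSafeLog {
 public:
  void Write(const std::string &msg) {
    std::lock_guard<std::mutex> lock(mMutex);
    mBuf << msg << '\n';
  }
  std::string Contents() {
    std::lock_guard<std::mutex> lock(mMutex);
    return mBuf.str();
  }
 private:
  std::mutex mMutex;
  std::ostringstream mBuf;
};
```

A proper streambuffer subclass would let existing `std::ostream` logging code use it unchanged, but the locking strategy is the same.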
Recent refactoring for the BC7/BPTC compressor caused some errors. We can avoid this in the future by implementing a few unit tests:
Right now, the PVRTC decoding step requires linking against the IO library for debugging purposes. This should be removed, or at least placed behind a CMake variable...
The README is missing a section on licensing. Looking around the code, different parts appear to be under different licenses. A clear overview would be welcome.
It's the only code I found that can compress PVRTC textures after a long search, and it's easy to use. But the compressed texture looks worse than PVRTexTool's output, as shown in this image: http://tinypic.com/view.php?pic=14tvdkj&s=9#.WcAjfYSGOUk. The left is compressed with FasTC, the right with PVRTexTool; the difference is very obvious on the cheek. Is this a known issue? Any plans to solve it?
Hi, I'm getting lots of "WARNING: Block error very high at ..." messages and compression takes like 2 minutes when trying to compress a 1024x1024 PNG file.
Any suggestions?
f43e934 seems to introduce artifacts in the compressed output and, in general, decrease PSNR.
Compressing the below picture:
with:
tc -f DXT1 -d picture.ktx picture.png
and then decompressing it with:
decomp picture.ktx decomp.png
generates output with rectangular black artifacts:
Reverting this commit makes the artifacts disappear.
There are a bunch of problems right now. We assume that pixels are read in block-stream order, i.e. the pixels in memory have coordinates
(0, 0), (1, 0), (2, 0), (3, 0), (0, 1), (1, 1), (2, 1), ...
This needs to be explicitly documented, and clients should be allowed to alter/specify the pixel ordering as they see fit.
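The ordering described above can be made concrete with a small index-mapping helper (a sketch assuming 4x4 blocks and an image width that is a multiple of 4):

```cpp
#include <cstdint>
#include <utility>

// Map a linear block-stream index to (x, y) image coordinates: pixels are
// stored 4x4 block by 4x4 block, row-major within each block.
static std::pair<uint32_t, uint32_t> StreamToXY(uint32_t idx, uint32_t width) {
  const uint32_t blocksPerRow = width / 4;
  const uint32_t block = idx / 16;
  const uint32_t inBlock = idx % 16;
  const uint32_t bx = block % blocksPerRow;
  const uint32_t by = block / blocksPerRow;
  return std::make_pair(bx * 4 + inBlock % 4, by * 4 + inBlock / 4);
}
```

Letting clients supply their own mapping function like this one would decouple the compressors from any single pixel ordering.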
Also, we should split the high level image functions (like Compress()) out into their own separate file and create a BaseImage class that handles low level image operations.
This would be useful for exporting DXT1 / DXT5 to KTX using the CLTool.
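For reference, a sketch of what writing the fixed 64-byte KTX 1.1 header would involve. The GL enums for DXT1/DXT5 (GL_COMPRESSED_RGBA_S3TC_DXT1_EXT = 0x83F1, GL_COMPRESSED_RGBA_S3TC_DXT5_EXT = 0x83F3) come from the S3TC extension; the field layout follows the KTX 1 specification. This is not an existing FasTC API.

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Build the 64-byte KTX 1.1 header for a compressed 2D texture (sketch).
static std::vector<uint8_t> MakeKTXHeader(uint32_t glInternalFormat,
                                          uint32_t width, uint32_t height,
                                          uint32_t mipLevels) {
  static const uint8_t kMagic[12] = {
      0xAB, 'K', 'T', 'X', ' ', '1', '1', 0xBB, 0x0D, 0x0A, 0x1A, 0x0A};
  const uint32_t fields[13] = {
      0x04030201,        // endianness marker
      0,                 // glType (0 for compressed formats)
      1,                 // glTypeSize
      0,                 // glFormat (0 for compressed formats)
      glInternalFormat,  // e.g. 0x83F1 (DXT1) or 0x83F3 (DXT5)
      0x1908,            // glBaseInternalFormat = GL_RGBA
      width, height,
      0,                 // pixelDepth (0 for 2D)
      0,                 // numberOfArrayElements
      1,                 // numberOfFaces
      mipLevels,
      0};                // bytesOfKeyValueData
  std::vector<uint8_t> hdr(64);
  std::memcpy(&hdr[0], kMagic, 12);
  std::memcpy(&hdr[12], fields, sizeof(fields));
  return hdr;
}
```

Each mip level's data would then be preceded by its uint32 imageSize, per the spec.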
CompressorSIMD.cpp in BPTCEncoder doesn't compile; there are multiple errors.
I don't see a way to create an output file. tc -h outputs:
Usage: tc [OPTIONS] imagefile
-h|--help Print this help.
-v Verbose mode: prints out Entropy, Mean Local Entropy, and MSSIM
-f <fmt> Format to use. Either "BPTC", "ETC1", "DXT1", "DXT5", or "PVRTC". Default: BPTC
-l Save an output log.
-d <file> Specify decompressed output (default: basename-<fmt>.png)
-nd Suppress decompressed output
-q <quality> Set compression quality level. Default: 50
-n <num> Compress the image num times and give the average time and PSNR. Default: 1
-simd Use SIMD compression path
-t <num> Compress the image using <num> threads. Default: 1
-a Compress the image using synchronization via atomic operations. Default: Off
-j <num> Use <num> blocks for each work item in a worker queue threading model. Default: (Blocks / Threads)
There is no way to produce output?
It would be extremely helpful if you provided images (in the GitHub repo or external links) and the command-line parameters used for both tools when publishing comparison results between FasTC and PVRTexTool (see http://gamma.cs.unc.edu/FasTC/).
In my case, running the defaults this help provides, FasTC is 10-20 times slower for me and doesn't even produce any output.
The C++11 standard comes with fixed-width integers, which should be used if the compiler has C++11 support:
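For example, the hand-rolled typedefs could be backed by &lt;cstdint&gt; when C++11 is available (a sketch; the exact alias names in TexCompTypes.h are an assumption):

```cpp
#include <cstdint>

// Fixed-width aliases in the style of TexCompTypes.h, backed by the
// standard header instead of per-platform guesses.
typedef uint8_t  uint8;
typedef uint16_t uint16;
typedef uint32_t uint32;
typedef uint64_t uint64;
typedef int8_t   int8;
typedef int16_t  int16;
typedef int32_t  int32;
typedef int64_t  int64;
```

A feature-test in CMake (or `__cplusplus >= 201103L`) could select between this and the legacy definitions.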
There seems to be a problem with template specialization under MSVC:
14>D:\cygwin\home\Pavel\Projects\TexComp\src\Base\test\TestVector.cpp(140): error C2893: Failed to specialize function template 'MultSwitch<FasTC::VectorTraits<T>::kVectorType,FasTC::VectorTraits<TypeTwo>::kVectorType,TypeOne,TypeTwo>::ResultType FasTC::operator *(const TypeOne &,const TypeTwo &)'
This has been tested to work under clang 3.0-6, gcc 4.8.1, and icc 14.0.0.
The offending function template is:
https://github.com/Mokosha/FasTC/blob/master/Base/include/VectorBase.h#L191
Looking at the ASTC decompressor, I did not find where you handle void-extent blocks.
Looking at the specification, void-extent blocks do not have the same layout as "normal" blocks. Can you use your decompressor on normal maps? A lot of them contain void-extent blocks, and I am wondering whether you handle them some other way or not.
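For reference, detecting a void-extent block is cheap: per the ASTC specification, a block whose low 9 bits are 0x1FC is a void-extent block (bit 9 then selects HDR vs. LDR, and the constant color is stored in the top 64 bits as four 16-bit channels). A sketch, assuming a little-endian host:

```cpp
#include <cstdint>
#include <cstring>

// True if a 16-byte ASTC block is a void-extent block (low 9 bits == 0x1FC).
static bool IsVoidExtentBlock(const uint8_t block[16]) {
  uint16_t lo;
  std::memcpy(&lo, block, sizeof(lo));  // little-endian assumption
  return (lo & 0x1FF) == 0x1FC;
}
```

Decoding then reduces to splatting the constant color across the block footprint, skipping the normal weight-grid path entirely.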
Hi, the project includes some .h files I can't find, such as BC7Config.h, BC7Compressor.h, and avpcl.h.
There are a few things we can do to speed up the performance of the PVRTC compression routines.
When we move from one row to another
I would like to find out what the options are for PVR formats; it looks like only PVR version 1 4BPP RGBA is currently supported?
Are there plans to support RGB and 2BPP as well? That would give:
PVR1_2BPP_RGB
PVR1_2BPP_RGBA
PVR1_4BPP_RGB
PVR1_4BPP_RGBA
Btw, amazing work! This is the fastest encoder we've managed to find. A few more quality options would be great for DXT, but overall, the speed is just great!
I see an ASTCEncoder subdirectory, but I don't see command-line options for targeting ASTC. Am I correct in deducing that ASTC support is still work-in-progress?
Right now we're only using Boost for cross-platform threading. On OS X and Linux, pthreads are always available (and already implemented). It shouldn't be too difficult to implement threading properly using Win32 threads on Windows.
It would be pretty handy to add pixel sampling for compressed texture data, or at least expose block decompression to allow pixel sampling without having to decompress the whole image.
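A sketch of what per-pixel sampling on top of an exposed block decoder could look like; `DecompressBlockFn` is a hypothetical stand-in for a per-format block decoder, not an existing FasTC API:

```cpp
#include <cstdint>

// Hypothetical per-format block decoder: decodes one compressed block into
// 16 RGBA8 pixels (row-major within the 4x4 block).
typedef void (*DecompressBlockFn)(const uint8_t *block, uint8_t outRGBA[16 * 4]);

// Sample a single pixel by decoding only the block that contains it.
// blockBytes is the compressed block size (8 for DXT1, 16 for DXT5/BPTC).
static void SamplePixel(const uint8_t *data, uint32_t width,
                        uint32_t x, uint32_t y, uint32_t blockBytes,
                        DecompressBlockFn decode, uint8_t outRGBA[4]) {
  const uint32_t blocksPerRow = (width + 3) / 4;
  const uint32_t blockIdx = (y / 4) * blocksPerRow + (x / 4);
  uint8_t pixels[16 * 4];
  decode(data + blockIdx * blockBytes, pixels);
  const uint32_t local = (y % 4) * 4 + (x % 4);
  for (int c = 0; c < 4; ++c) {
    outRGBA[c] = pixels[local * 4 + c];
  }
}
```

A per-block cache on top of this would make repeated samples from the same block nearly free.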
Hi, while trying to convert a PVRTC I get the following error:
Assertion failed: (!"Implement me!"), function GenerateLowHighImages, file /Users/a12907/Hayabusa/FasTC/src/PVRTCEncoder/src/Compressor.cpp, line 553.
[1] 10010 abort CLTool/tc
Line 77 in 0f8cef6
Right now we only use stb_dxt, which doesn't support punchthrough alpha -- either change this upstream or add a small amount of support that handles transparent blocks before sending them to stb_dxt.
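The pre-classification step could look like this (a sketch; the alpha cutoff is an arbitrary assumption, and the BC1 punchthrough encoding itself would still need to be handled for the flagged blocks):

```cpp
#include <cstdint>

// True if any pixel in a 4x4 RGBA8 block falls below the alpha cutoff, so
// the block can be special-cased before being handed to stb_dxt.
static bool BlockNeedsPunchthrough(const uint8_t block[16 * 4],
                                   uint8_t cutoff) {
  for (int i = 0; i < 16; ++i) {
    if (block[i * 4 + 3] < cutoff) return true;
  }
  return false;
}
```

Flagged blocks would then be encoded in DXT1's 3-color + transparent mode (color0 <= color1) instead of going through stb_dxt's opaque path.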
An #include <algorithm> needs to be added to tc.cpp when compiling with VS2013. There's some info here: http://stackoverflow.com/questions/19439670/min-max-not-a-member-of-std-errors-when-building-opencv-2-4-6-on-windows-8
I don't think this will replace the CMake checks that make sure the compiler has the proper headers. I think we can do this by adding a flag to a CompressionJob.
The constant lookup tables seem to be less accurate (although a bit faster) than compressing the blocks directly. This is especially true for textures with alpha. (See #16)