Transparency to BC3_UNORM_SRGB problems


It seems that the tool has a bug when converting a transparent (PNG/TGA) input to BC3_UNORM_SRGB. I have a texture stored as a transparent PNG that I converted to DDS. While loading it into my code I noticed that mip 1 had problems (mip 0 is fine), so naturally I assumed the bug was in my code. But after digging for about two hours, and after running the older DXT5 format through the same code without the issue appearing, I no longer think it is my code. I can also reproduce the exact issue using the Intel Texture Works plugin (https://software.intel.com/en-us/articles/intel-texture-works-plugin).

(Unfortunately I cannot use Visual Studio: when I load the generated DDS and try to deselect the alpha channel, I get an HRESULT error that prevents me from viewing the image without alpha. The NVIDIA Photoshop plugin is now so outdated that it does not seem to support the newer BC formats, and the shipped DDSView application does not let me view mipmaps.)

I have attached a piece of the source PNG that allows you to reproduce the issue (the original is 4096x4096), along with the generated DDS and the problematic mip.

The steps are as follows:

1) Convert the PNG using this command line: "texconv -f BC3_UNORM_SRGB -ft DDS test.png"
2) Load the DDS either in custom code or in Photoshop using the Intel Texture Works plugin.
3) In Photoshop, load it with the "load all mips" option.
4) Select mip 1 and go to Layer->Layer Mask->From Transparency. (Note: mip 0 is fine, but the other mips are not; at some point, usually around mip 8-10, they are fine again.)
5) Click the chain icon between the newly created layer mask and the layer data.
6) Click the layer mask, select all, and paint it white (so that it has no transparency).
7) Notice the pixel corruption, which is consistent with what I see in my application. (It should be trivial to write an app that loads the DDS and displays mip 1; for convenience I have attached test mip1.bmp, which shows the resulting problem.)
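
For anyone who wants to pull mip 1 out of the file directly rather than go through Photoshop, the extraction can be sketched roughly like this (my own illustration, not part of my pipeline; the helper names are made up, and it assumes a standard DDS layout):

```python
import struct

def bc3_mip_size(width, height):
    """BC3 stores 16 bytes per 4x4 block; dimensions round up to a block."""
    blocks_w = max(1, (width + 3) // 4)
    blocks_h = max(1, (height + 3) // 4)
    return blocks_w * blocks_h * 16

def bc3_mip_offset(top_width, top_height, mip):
    """Byte offset of a mip level within the BC3 pixel data: the sum of
    the sizes of all preceding mips, halving dimensions each level."""
    offset, w, h = 0, top_width, top_height
    for _ in range(mip):
        offset += bc3_mip_size(w, h)
        w, h = max(1, w // 2), max(1, h // 2)
    return offset

def read_mip(path, mip):
    """Extract the raw BC3 data for one mip from a DDS file."""
    with open(path, "rb") as f:
        data = f.read()
    assert data[:4] == b"DDS "
    height, width = struct.unpack_from("<II", data, 12)  # dwHeight, dwWidth
    pixel_start = 4 + 124          # 'DDS ' magic + 124-byte DDS_HEADER
    if data[84:88] == b"DX10":     # _SRGB formats need the DX10 extension,
        pixel_start += 20          # which adds a 20-byte extended header
    start = pixel_start + bc3_mip_offset(width, height, mip)
    end = start + bc3_mip_size(max(1, width >> mip), max(1, height >> mip))
    return data[start:end]
```

For the 4096x4096 source above, mip 1 would start 16 MiB into the pixel data (1024x1024 blocks of 16 bytes for mip 0).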

The same problem occurs when using a 32-bit-per-pixel TGA (also attached).

My current workaround is to open the texture in Photoshop and use the Intel Texture Works tools to save out the BC3 DDS with sRGB, which produces the correct image. Unfortunately I have quite a few textures and would rather run them through my automated pipeline using texconv, so I hope this issue can be fixed.

Thank you in advance

EDIT: Interestingly enough, Intel Texture Works ships with source code as well, and it appears to build against DirectXTex version 132, whereas the latest code I have is DirectXTex version 134. So perhaps the bug was introduced since then, or perhaps they call the conversion with different options. (Note: I did try "-nogpu" and a couple of other things, but that made no difference.)

file attachments

Closed Sep 16, 2016 at 5:35 AM by walbourn


walbourn wrote Sep 15, 2016 at 7:02 AM

Answered on GitHub

walbourn wrote Sep 15, 2016 at 7:02 AM

** Closed by walbourn 09/15/2016 12:02AM

efolkertsma wrote Sep 15, 2016 at 4:45 PM

To my understanding this is NOT the same issue; the GitHub issue is specifically about the alpha channel. I am saying that if I write a shader that samples the texture and fully ignores the alpha channel (i.e. accesses only .xyz), I can still see corruption in the colors of the lower mipmaps.

If you load the DDS onto the GPU (glCompressedTexImage2D) it renders incorrectly. So by the logic described in the GitHub thread, the hardware also has the wrong decoder? Please write a small code sample to at least try to reproduce the issues I am reporting. I would have written a test application for you, but my employment contract unfortunately prohibits such things.
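
To make the "colors only" claim concrete without a full test application, here is a minimal sketch (for illustration only) of decoding just the color half of a 16-byte BC3 block. The first 8 bytes hold the alpha data and are never touched, so any difference visible in these values is in the color data itself:

```python
import struct

def decode_bc3_block_colors(block16):
    """Decode only the color half of a 16-byte BC3 block (bytes 0-7 are
    alpha and are ignored; bytes 8-15 are a BC1-style color block)."""
    c0, c1, indices = struct.unpack_from("<HHI", block16, 8)

    def rgb565(c):
        r, g, b = (c >> 11) & 0x1F, (c >> 5) & 0x3F, c & 0x1F
        # Expand 5/6/5 bits to 8 bits per channel
        return (r << 3 | r >> 2, g << 2 | g >> 4, b << 3 | b >> 2)

    p0, p1 = rgb565(c0), rgb565(c1)
    # BC3 color blocks always use 4-color mode: two interpolated points
    p2 = tuple((2 * a + b) // 3 for a, b in zip(p0, p1))
    p3 = tuple((a + 2 * b) // 3 for a, b in zip(p0, p1))
    palette = (p0, p1, p2, p3)

    # One 2-bit palette index per texel, 16 texels, row-major
    return [palette[(indices >> (2 * i)) & 0x3] for i in range(16)]
```

Running this over the same block in mip 1 of the two files (texconv output vs. Intel Texture Works output) should show the endpoints themselves differ, alpha aside.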

walbourn wrote Sep 16, 2016 at 5:34 AM

The test image you provide has a very odd alpha channel which doesn't appear to be transparency. If you use:
texconv -f BC3_UNORM_SRGB -ft DDS test.png -sepalpha
It looks correct to me.

walbourn wrote Sep 16, 2016 at 5:34 AM

Which was the conclusion of the GitHub thread too.

efolkertsma wrote Sep 16, 2016 at 6:17 AM

Thank you, this does seem to work...

It is not ideal, because without also inspecting the shader I cannot know whether the alpha channel is used for transparency or for something else (like a spec map in this case).

You already answered my follow up question in the GitHub thread:

"You can safely use -sepalpha for all your processing, it is just less efficient since it effectively resizes the image twice. If you don't know if the source image is using alpha for something other than transparency, that's probably the safest bet."
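
For other readers: texconv's actual resize path is not reproduced here, but the effect of coupling color to alpha during filtering can be sketched with a toy box filter (the function names are made up for illustration; the alpha-weighted version is only a rough model of transparency-aware filtering, not texconv's exact code):

```python
def box_filter_separate(texels):
    """Average each RGBA channel independently over a filter footprint
    (effectively what -sepalpha does during mip generation)."""
    n = len(texels)
    return tuple(sum(t[i] for t in texels) / n for i in range(4))

def box_filter_alpha_weighted(texels):
    """Weight each texel's color by its alpha before averaging, then
    divide the alpha back out. Transparency-aware filters work roughly
    like this, so low-alpha texels contribute little or no color."""
    n = len(texels)
    a = sum(t[3] for t in texels) / n
    if a == 0:
        return (0.0, 0.0, 0.0, 0.0)
    rgb = tuple(sum(t[i] * t[3] for t in texels) / n / a for i in range(3))
    return rgb + (a,)

# Two texels whose alpha is a spec map, not transparency:
texels = [(1, 0, 0, 1.0), (0, 1, 0, 0.0)]
print(box_filter_separate(texels))        # (0.5, 0.5, 0.0, 0.5)
print(box_filter_alpha_weighted(texels))  # (1.0, 0.0, 0.0, 0.5)
```

When alpha carries non-transparency data, the weighted filter discards the color of low-alpha texels entirely (the green above is lost), which matches the kind of color corruption I was seeing in the lower mips.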

Just out of curiosity, in what case would you ever NOT want to use -sepalpha?

Thanks again

walbourn wrote Sep 17, 2016 at 7:06 AM

The performance of -sepalpha is a little slower, but since it's a one-time conversion cost it's a fine choice for your scenario.