Texture Sampling
After Compiling Your Shader
In order to set up texture sampling, after you have compiled your shader, you need to get the locations of the samplers.
To get a uniform's location, the program doesn't need to be bound when you call glGetUniformLocation.
Let's assume you have samplers called MyDiffuseTexture, MyEnvironmentMap and MyGlossMap in your fragment shader (FS):
 uniform sampler2D MyDiffuseTexture;
 uniform samplerCube MyEnvironmentMap;
 uniform sampler2D MyGlossMap;
In your code, get the locations
 the_location0 = glGetUniformLocation(ProgramObject, "MyDiffuseTexture");
 the_location1 = glGetUniformLocation(ProgramObject, "MyEnvironmentMap");
 the_location2 = glGetUniformLocation(ProgramObject, "MyGlossMap");
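glGetUniformLocation returns -1 if the name doesn't correspond to an active uniform, for example when the sampler is misspelled or has been optimized away because it is never used. A quick check along these lines can catch that (variable names follow the example above):
 if (the_location0 == -1 || the_location1 == -1 || the_location2 == -1)
 {
     //At least one sampler was not found; check the spelling, and make sure
     //each sampler is actually used in the shader so it isn't optimized away.
 }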
By default, each of those uniforms is 0, which means every sampler references texture unit 0. Since the sampler2D and the samplerCube would then point at the same unit, using the program as is raises an error; you would need to call glGetError() to see it. The program is also considered invalid, which you can check with glValidateProgram:
 GLint isValid;
 glValidateProgram(ProgramObject);
 glGetProgramiv(ProgramObject, GL_VALIDATE_STATUS, &isValid);
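If GL_VALIDATE_STATUS comes back GL_FALSE, the program info log usually explains why; a minimal sketch of reading it (the buffer size here is an arbitrary choice):
 if (isValid == GL_FALSE)
 {
     char infoLog[1024];    //arbitrary buffer size
     GLsizei length;
     glGetProgramInfoLog(ProgramObject, sizeof(infoLog), &length, infoLog);
     //infoLog now holds a human-readable description of the problem
 }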
So, to set up those uniforms, bind the program (glUseProgram) and call glUniform1i for each sampler, since samplers are set as integers (the index of the texture unit to sample from):
 glUseProgram(ProgramObject);
 //Bind to tex unit 0
 glUniform1i(the_location0, 0);
 //Bind to tex unit 1
 glUniform1i(the_location1, 1);
 //Bind to tex unit 2
 glUniform1i(the_location2, 2);
To bind your textures, select each texture unit with glActiveTexture and bind with glBindTexture:
 glActiveTexture(GL_TEXTURE0);
 glBindTexture(GL_TEXTURE_2D, tex[0]);
 glActiveTexture(GL_TEXTURE1);
 glBindTexture(GL_TEXTURE_CUBE_MAP, tex[1]);
 glActiveTexture(GL_TEXTURE2);
 glBindTexture(GL_TEXTURE_2D, tex[2]);
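The tex array used above is assumed to hold texture object names created earlier; a minimal sketch of that setup might look like this (the image dimensions and data pointer are placeholders, and only the 2D diffuse texture is shown):
 GLuint tex[3];
 glGenTextures(3, tex);
 glBindTexture(GL_TEXTURE_2D, tex[0]);
 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
 glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
 //tex[1] (the cube map) and tex[2] (the gloss map) would be created similarly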
In The GLSL Shader
So what happens when you sample the texture?
When you sample it, you get a normalized value if the texture has a fixed-point format. By fixed point, we mean normalized integer formats, whose component types are GL_BYTE, GL_UNSIGNED_BYTE, GL_SHORT, GL_UNSIGNED_SHORT, GL_INT and GL_UNSIGNED_INT.
These would be formats like GL_RGBA8, GL_LUMINANCE8, GL_ALPHA8, GL_LUMINANCE16 and the many other formats listed in the GL spec.
 vec4 texel0 = texture2D(MyDiffuseTexture, TexCoord0);
 vec4 texel1 = textureCube(MyEnvironmentMap, TexCoord1);
 vec4 texel2 = texture2D(MyGlossMap, TexCoord2);
The values returned will be normalized to the range 0.0 to 1.0 for each RGBA component.
The GPU reads the integer data and converts it to floating point automatically.
If for some reason you don't want it to do the conversion, see http://www.opengl.org/wiki/index.php/GL_EXT_texture_integer
That extension is available on GeForce 8 and up.
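Putting the pieces together, a complete fragment shader built around the three samplers might look like the sketch below. The varying names are assumed to match the snippets above, and the way the diffuse, environment and gloss samples are combined is purely illustrative:
 uniform sampler2D MyDiffuseTexture;
 uniform samplerCube MyEnvironmentMap;
 uniform sampler2D MyGlossMap;
 varying vec2 TexCoord0;
 varying vec3 TexCoord1;
 varying vec2 TexCoord2;
 void main()
 {
     vec4 diffuse = texture2D(MyDiffuseTexture, TexCoord0);
     vec4 environment = textureCube(MyEnvironmentMap, TexCoord1);
     vec4 gloss = texture2D(MyGlossMap, TexCoord2);
     //Blend the environment reflection in according to the gloss value
     gl_FragColor = mix(diffuse, environment, gloss.r);
 }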
If the texture is a floating-point format such as GL_RGBA32F, then since each component is a 32-bit float, no conversion takes place.
If the texel has a value like -489.5, then that's what you get.
If you are using GL_RGBA16F, then the 16-bit floats get upconverted (cast) to 32-bit floats.
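As a side note, a floating-point texture such as the GL_RGBA32F one mentioned above is assumed to have been created along these lines (the texture name, dimensions and data pointer are placeholders; GL_RGBA32F comes from ARB_texture_float / GL 3.0):
 glBindTexture(GL_TEXTURE_2D, floatTex);    //floatTex: a previously generated texture name
 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
 //width, height and floatData are placeholders for your own image
 glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, width, height, 0, GL_RGBA, GL_FLOAT, floatData);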
If you don't want that 16-bit-to-32-bit conversion to happen, do the following:
 half4 texel0 = texture2D(My16BitFloatTexture, TexCoord0);
That works on NVIDIA hardware. You must make sure not to declare any version number in your shader, or else NVIDIA's drivers will consider half4 undefined.
half, half2, half3 and half4 are not defined by the GLSL standard in GL 2.1.
 #version 110
Don't put the above at the top of your shader if your shader doesn't follow the standard!