Normalized Integer
A Normalized Integer is an integer that is used to store a floating-point value. When a format uses such an integer, OpenGL will automatically convert it to/from floating-point values as needed. This allows normalized integers to be treated equivalently with floating-point values, acting as a form of compression.
For example, if a 2D Texture's Image Format uses normalized integers, it will still be treated as a floating-point texture. The sampler type the shader uses will be sampler2D, just like for a floating-point texture. If you attach this image to a framebuffer and write to it from the Fragment Shader, the output variables will be floating-point vectors, not integer ones.
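To make this concrete, here is a minimal C sketch (assuming a bound GL_TEXTURE_2D and placeholder width, height, and pixels variables) that allocates a texture with the unsigned, normalized GL_RGBA8 format; the integer storage is handled entirely behind the scenes:

    /* GL_RGBA8 is an unsigned, normalized format: each channel is stored as an
     * 8-bit integer, but it is read and written as a float in [0.0, 1.0]. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    /* In the shader, this texture uses a floating-point sampler:
     *   uniform sampler2D tex;          // not usampler2D
     *   vec4 color = texture(tex, uv);  // channels arrive in [0.0, 1.0]
     */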
The downside to normalized integers is that they can only represent floating-point values in the range [0.0, 1.0] or [-1.0, 1.0], depending on whether they are unsigned or signed. This is sufficient in many cases for colors, and it is also usable for some vertex inputs like texture coordinates and normals.
Use cases
Normalized integers are useful in many parts of OpenGL. The most common uses for them are:
- Colors, whether stored in Texture Image Formats or Vertex Formats. Non-HDR color data has a maximum intensity. As such, a normalized integer per channel is a reasonable representation of colors. The most typical format is 8 bits per channel and unsigned (as negative colors are not frequently useful).
- Texture coordinates. Most Texture Sampling functions accept normalized texture coordinates, with 0 representing the bottom/left/front of the image and 1 representing the top/right/rear. Indeed, Repeat Filtering relies on normalized coordinates, as this affects what happens when a texture coordinate is outside of the normalized range.
- Normals. These can commonly use signed, normalized integers. They can be stored in vertex arrays or textures. In vertex arrays, they often use the GL_INT_2_10_10_10_REV format.
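As an illustrative sketch of that last case (the attribute index, stride, and offset here are placeholders, not taken from any particular program), a packed, signed normalized normal attribute can be set up like this in C; passing GL_TRUE for the normalized parameter is what requests the integer-to-float conversion:

    /* Attribute 2 (hypothetical) holds normals packed as GL_INT_2_10_10_10_REV:
     * three signed, normalized 10-bit components plus a 2-bit component, in
     * 4 bytes per vertex. GL_TRUE tells OpenGL to convert them to floats in
     * [-1.0, 1.0] when the vertex shader reads the attribute. */
    glVertexAttribPointer(2, 4, GL_INT_2_10_10_10_REV, GL_TRUE, 0, (void *)0);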
Storage and bitdepths
Every normalized integer has some bitdepth. These are usually 8 or 16, but some normalized integers use unusual bitdepths like 2, 10, or even 32. Regardless of the bitdepth, the way they are converted is identical. Only the specific numbers change.
In all of the following equations, the bitdepth will be represented by B.
Unsigned
For unsigned, normalized integers, the conversion is fairly simple. For a given integer of bitdepth B, the maximum representable unsigned integer is $ MAX=2^{B}-1 $.
Unsigned, normalized integers map to the floating-point range [0.0, 1.0]. They do this by mapping the entire integer range [0, MAX] linearly onto [0.0, 1.0], using the following simple equation:
$ float={\tfrac {int}{MAX}} $
The conversion back to integers uses the inverse equation.
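The following C sketch spells out both directions for the common case of B = 8 (the function names are purely illustrative):

    /* Unsigned, normalized conversion for B = 8, so MAX = 2^8 - 1 = 255. */
    float unorm8_to_float(unsigned char i)
    {
        return i / 255.0f;                  /* maps [0, 255] onto [0.0, 1.0] */
    }

    unsigned char float_to_unorm8(float f)
    {
        /* Inverse mapping: clamp to [0.0, 1.0], then round to the nearest integer. */
        if (f < 0.0f) f = 0.0f;
        if (f > 1.0f) f = 1.0f;
        return (unsigned char)(f * 255.0f + 0.5f);
    }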
Signed
For signed, normalized integers, the conversion is slightly more complicated. Signed integers in OpenGL are represented as Two's complement numbers. Therefore, for a given integer of bitdepth B, the maximum representable signed integer is $ MAX=2^{B-1}-1 $, while the minimum signed integer is $ MIN=-2^{B-1} $. Notice that the absolute value of MIN is larger than MAX.
In all cases, signed, normalized integers map to the floating-point range [-1.0, 1.0]. How exactly this mapping happens is version-specific.
In OpenGL 4.2 and above, the conversion always happens by mapping the signed integer range [MIN + 1, MAX] to the float range [-1, 1]. Notice that the absolute value of MIN + 1 is equal to MAX, so the range distribution is equal. More importantly, it allows signed, normalized integers to store a floating-point 0 exactly. However, the mapping for the signed integer value of MIN itself is also stated to resolve exactly to -1.0, so you can't get more negative values. Therefore, the mapping function is:
$ float=max({\tfrac {int}{MAX}},-1.0) $
The max function returns the larger of its arguments. This ensures that when int is MIN (whose quotient would be slightly less than -1.0), the result is clamped to exactly -1.0.
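In C, the same rule for B = 8 looks like this (illustrative function name; here MAX = 127 and MIN = -128):

    /* Signed, normalized conversion as specified for OpenGL 4.2+, with B = 8. */
    float snorm8_to_float(signed char i)
    {
        float f = i / 127.0f;           /* [MIN + 1, MAX] maps onto [-1.0, 1.0] */
        return f < -1.0f ? -1.0f : f;   /* the max(int/MAX, -1.0) clamp; only MIN hits it */
    }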
Alternate mapping
In OpenGL versions less than 4.2, there are instances where an alternate conversion is used. These instances are:
- In Vertex Formats. This includes calls to glVertexAttribN* for signed integer types.
- In glTexParameteriv or glSamplerParameteriv, when storing the GL_TEXTURE_BORDER_COLOR.
- In Pixel Transfer operations, but only when the Image Format is a floating-point type (i.e., not itself signed normalized) and the pixel transfer format/type describes signed, normalized values. Note: never do this; it's not useful. If the image format is float, your pixel data ought to be float.
The alternate mapping directly maps the signed integer range [MIN, MAX] to [-1.0, 1.0]. The equation for this is simple:
$ float={\tfrac {2\cdot int+1}{2^{B}-1}} $
While this allows the full signed integer range to be expressed, it does not allow a signed integer to exactly express zero.
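For comparison, a C sketch of the alternate mapping for B = 8 (again with an illustrative function name) shows both properties: MIN reaches exactly -1.0, but 0 does not map to exactly 0.0:

    /* Alternate (pre-4.2) signed, normalized conversion for B = 8:
     * maps [MIN, MAX] = [-128, 127] directly onto [-1.0, 1.0]. */
    float snorm8_to_float_alt(signed char i)
    {
        /* i == -128 yields exactly -1.0, but i == 0 yields 1/255, not 0.0. */
        return (2.0f * i + 1.0f) / 255.0f;  /* (2*int + 1) / (2^B - 1) */
    }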