First, let's talk about the texture types available.<br>
<br>
GL has GL_TEXTURE_1D. You can ignore this and use GL_TEXTURE_2D instead.<br>
This has been part of GL since 1.0.<br>
Texture coordinates are normalized: for a dimension like 256, texcoords run from 0.0 to 1.0.<br>
Of course, if you go beyond that range, such as -1.0 to 5.0, the texture will repeat over your polygon (with the default GL_REPEAT wrap mode).<br>
<br>
GL_TEXTURE_2D has a width and a height, and the GPU usually stores it in memory in a format that is quick to access. For example, small blocks of the texture are stored in sequence so that cache memory works better.<br>
This has been part of GL since 1.0.<br>
Texture coordinates are normalized: for a 256x256 texture, texcoords run from 0.0 to 1.0 in each direction.<br>
Again, if you go beyond that range, such as -1.0 to 5.0, the texture will repeat over your polygon.<br>
<br>
GL_TEXTURE_3D has a width, a height and a depth, and again the GPU usually stores it in a format that is quick to access. Just like 2D, small blocks of the texture are stored in sequence so that cache memory works better, but other techniques exist as well.<br>
This has been part of GL since 1.2.<br>
Texture coordinates are normalized: for a 256x256x256 texture, texcoords run from 0.0 to 1.0 in each of the three directions, and going beyond that range repeats the texture just as in the 2D case.<br>
<br>
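As a minimal sketch, creating a 3D texture looks just like the 2D case (glGenTextures and glBindTexture are explained below), except that glTexImage3D takes a depth and there is a third wrap mode, GL_TEXTURE_WRAP_R. Here, texels is a hypothetical buffer holding 64x64x64 BGRA texels:<br>
 GLuint textureID;
 glGenTextures(1, &textureID);
 glBindTexture(GL_TEXTURE_3D, textureID);
 glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_S, GL_REPEAT);
 glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_T, GL_REPEAT);
 glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_R, GL_REPEAT);
 glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
 glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
 //Width, height and depth are all 64; texels is our hypothetical source buffer.
 glTexImage3D(GL_TEXTURE_3D, 0, GL_RGBA8, 64, 64, 64, 0, GL_BGRA, GL_UNSIGNED_BYTE, texels);
<br>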
GL_TEXTURE_CUBE_MAP has a width, a height and 6 square faces. It is kind of like 2D, except that there are 6 faces and texcoords work in a special way.<br>
This has been part of GL since 1.3.<br>
Texture coordinates behave in a special way: you use the s, t and r coordinates together, and they act as a direction vector pointing out from the center of the cube. The GL takes the coordinate with the largest magnitude to select which face is sampled, and the remaining two coordinates select the texel within that face.<br>
<br>
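For illustration, here is a sketch of creating a cube map. The 6 face targets, starting at GL_TEXTURE_CUBE_MAP_POSITIVE_X, are consecutive enum values, so you can loop over them; facePixels is a hypothetical array of 6 pointers to 256x256 BGRA images:<br>
 GLuint cubemapID;
 glGenTextures(1, &cubemapID);
 glBindTexture(GL_TEXTURE_CUBE_MAP, cubemapID);
 glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
 glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
 glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
 glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
 for (int i = 0; i < 6; i++)
 {
     //Upload one face per iteration: +X, -X, +Y, -Y, +Z, -Z
     glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_RGBA8, 256, 256, 0, GL_BGRA, GL_UNSIGNED_BYTE, facePixels[i]);
 }
<br>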
GL_TEXTURE_RECTANGLE_EXT, GL_TEXTURE_RECTANGLE_NV and GL_TEXTURE_RECTANGLE_ARB are supported as extensions, for making 2D textures with non-power-of-2 dimensions. Texcoords work in an unusual way: from 0 to width for S, and from 0 to height for T.<br>
There are certain limitations: anisotropy might not work, mipmaps are not allowed, and only certain wrap modes such as GL_CLAMP_TO_EDGE are supported (GL_REPEAT is not).<br>
On certain GPUs, the driver will pad your texture with black pixels in order to make it power-of-2, for better performance. You won't ever see those black pixels.<br>
<br>
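A minimal sketch, assuming the ARB version of the extension is present and pixels points to a hypothetical 640x480 BGRA image; note that the texcoords are unnormalized when drawing:<br>
 GLuint rectID;
 glGenTextures(1, &rectID);
 glBindTexture(GL_TEXTURE_RECTANGLE_ARB, rectID);
 glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
 glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
 glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
 glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_MIN_FILTER, GL_LINEAR); //no mipmaps allowed
 glTexImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, GL_RGBA8, 640, 480, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
 //When drawing, texcoords go from (0, 0) to (640, 480) instead of (0.0, 0.0) to (1.0, 1.0).
<br>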
With GL 2.0, GL_TEXTURE_RECTANGLE becomes obsolete. You can make GL_TEXTURE_2D textures with any dimensions, mipmap them, use any supported anisotropy, and use any texture wrap mode.<br>
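In other words, on GL 2.0 the ordinary 2D path accepts any size directly; for example, with pixels again being a hypothetical 640x480 BGRA image:<br>
 glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 640, 480, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
<br>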
<br>
=== How to create a texture ===
<br>
Create a single texture ID:<br>
 GLuint textureID;
 glGenTextures(1, &textureID);
<br>
You have to bind the texture before doing anything else to it.<br>
 glBindTexture(GL_TEXTURE_2D, textureID);
<br>
Now you should define your texture properties. Any call sequence will work, since GL is a state machine.<br>
 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, TextureWrapS);
 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, TextureWrapT);
 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, MagFilter);
 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, MinFilter);
<br>
Some people make the mistake of not calling glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, MinFilter); they then get the default state, GL_NEAREST_MIPMAP_LINEAR, without having defined the mipmaps, so <b>the texture is considered incomplete</b> and you just get a white texture.<br>
<br>
If you will use mipmapping, you can either define the mipmaps yourself by making one call to glTexImage2D per level, or let the GL generate them. Current GPUs can generate mipmaps automatically, typically with a box filter. Mipmapping is usually good: it improves quality and increases performance.<br>
<br>
 //Use this if GL 1.4 is supported
 glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
 //Since the above is considered deprecated in GL 3.0, it is recommended that you use glGenerateMipmap(GL_TEXTURE_2D),
 //or if GL_EXT_framebuffer_object is supported, glGenerateMipmapEXT(GL_TEXTURE_2D),
 //but call it after your call to glTexImage2D.
<br>
<font color="#ff0000">It has been reported that on some ATI drivers, glGenerateMipmapEXT(GL_TEXTURE_2D) has no effect unless you precede it with a call to glEnable(GL_TEXTURE_2D). To be clear: call glTexImage2D, then glEnable, then glGenerateMipmapEXT.</font><br>
We recommend that you use glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE) for standard textures, since this works on both ATI and nVidia (for GL 2.1 only; in GL 3.0 it is deprecated and you should use glGenerateMipmap).<br>
<br>
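To make the ordering concrete, here is a sketch of the GL 3.0 path; note that glGenerateMipmap comes after the base level has been uploaded:<br>
 glBindTexture(GL_TEXTURE_2D, textureID);
 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
 glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 256, 256, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
 glGenerateMipmap(GL_TEXTURE_2D); //generates all the levels down to 1x1
<br>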
If you need anisotropic filtering, call:<br>
 glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, Anisotropy);
<br>
The minimum value is 1.0 and the maximum is given by glGetFloatv(GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT, &MaxAnisotropy).<br>
Anisotropy is an extension. It makes your results look better, but it can drag down performance greatly, so use as little as possible.<br>
You need to check that GL_EXT_texture_filter_anisotropic is present.<br>
The spec is here: http://www.opengl.org/registry/specs/EXT/texture_filter_anisotropic.txt<br>
<br>
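A sketch of the whole sequence; the extension check shown here is the simple strstr idiom of that era:<br>
 if (strstr((const char *)glGetString(GL_EXTENSIONS), "GL_EXT_texture_filter_anisotropic"))
 {
     GLfloat MaxAnisotropy;
     glGetFloatv(GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT, &MaxAnisotropy);
     //Ask for 4x anisotropy, clamped to what the GPU supports
     GLfloat Anisotropy = (MaxAnisotropy < 4.0f) ? MaxAnisotropy : 4.0f;
     glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, Anisotropy);
 }
<br>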
Define the texture with:<br>
 glTexImage2D(GL_TEXTURE_2D, 0, internalFormat, width, height, border, format, type, ptexels);
<br>
You need to make sure that your width and height are supported by the GPU:<br>
 GLint Max2DTextureWidth, Max2DTextureHeight;
 glGetIntegerv(GL_MAX_TEXTURE_SIZE, &Max2DTextureWidth);
 Max2DTextureHeight = Max2DTextureWidth;
 GLint MaxTexture3DWidth, MaxTexture3DHeight, MaxTexture3DDepth;
 glGetIntegerv(GL_MAX_3D_TEXTURE_SIZE, &MaxTexture3DWidth);
 MaxTexture3DHeight = MaxTexture3DDepth = MaxTexture3DWidth;
 GLint MaxTextureCubemapWidth, MaxTextureCubemapHeight;
 glGetIntegerv(GL_MAX_CUBE_MAP_TEXTURE_SIZE, &MaxTextureCubemapWidth);
 MaxTextureCubemapHeight = MaxTextureCubemapWidth;
 GLint MaxTextureRECTWidth, MaxTextureRECTHeight;
 glGetIntegerv(GL_MAX_RECTANGLE_TEXTURE_SIZE_ARB, &MaxTextureRECTWidth);
 MaxTextureRECTHeight = MaxTextureRECTWidth;
 GLint MaxRenderbufferWidth, MaxRenderbufferHeight;
 glGetIntegerv(GL_MAX_RENDERBUFFER_SIZE_EXT, &MaxRenderbufferWidth);
 MaxRenderbufferHeight = MaxRenderbufferWidth;
<br>
Very old GPUs don't support border texels.<br>
Make sure the format and the internal format (example: GL_RGBA8) are supported by the GPU, otherwise the driver will convert the texture for you, which costs performance.<br>
There is no way to query which formats the GPU supports, but the IHVs (nVidia, AMD/ATI) publish documents on what is supported. For example, it is very common for GL_RGBA8 to be supported while GL_RGB8 is not.<br>
You should also call glGetError to make sure you don't get an error, such as running out of memory (GL_OUT_OF_MEMORY).<br>
<br>
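For example, right after glTexImage2D:<br>
 GLenum err = glGetError();
 if (err == GL_OUT_OF_MEMORY)
 {
     //The GL ran out of memory; the texture was not created
 }
 else if (err != GL_NO_ERROR)
 {
     //Some other problem, such as an unsupported parameter combination
 }
<br>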
The only thing left is calling glTexEnv, but this isn't part of the texture state; it is part of the texture environment, in other words the texture unit.<br>
<br>
To use the texture, bind it to a texture unit with glBindTexture, and don't forget to enable texturing with glEnable(GL_TEXTURE_2D) and disable it with glDisable(GL_TEXTURE_2D).
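For example, with the fixed pipeline, drawing one textured quad might look like this:<br>
 glEnable(GL_TEXTURE_2D);
 glBindTexture(GL_TEXTURE_2D, textureID);
 glBegin(GL_QUADS);
 glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f, -1.0f);
 glTexCoord2f(1.0f, 0.0f); glVertex2f( 1.0f, -1.0f);
 glTexCoord2f(1.0f, 1.0f); glVertex2f( 1.0f,  1.0f);
 glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f,  1.0f);
 glEnd();
 glDisable(GL_TEXTURE_2D);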
<br>
That's the basics of creating a texture and using it.<br>
<br>
=== Just allocate memory for a texture ===
If you want to just allocate memory for the texture without initializing the texels, give a NULL pointer to glTexImage1D/2D/3D. The GL specification doesn't say what values the texels will have.<br>
 GLuint textureID;
 glGenTextures(1, &textureID);
 glBindTexture(GL_TEXTURE_2D, textureID);
 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
 //GL_BGRA and GL_UNSIGNED_BYTE describe BGRA8 data.
 //The driver might store it as BGRA8, or it might flip red and blue to create RGBA8.
 //Most GPUs support the Microsoft standard of BGRA8.
 //The following call is guaranteed to be very fast on Windows with nVidia and ATI/AMD.
 glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 512, 512, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL);
=== Update some pixels ===
If you want to update some pixels and those pixels are in RAM, use glTexSubImage2D.<br>
Some people use glTexImage2D, which causes the previous texture storage to be deleted and reallocated, and that causes a slowdown.<br>
 //Never forget to bind!
 glBindTexture(GL_TEXTURE_2D, textureID);
 //GL_BGRA and GL_UNSIGNED_BYTE describe BGRA8 data.
 //Most GPUs support the Microsoft standard of BGRA8.
 //The following call is guaranteed to be very fast on Windows with nVidia and ATI/AMD.
 glTexSubImage2D(GL_TEXTURE_2D, level, xoffset, yoffset, width, height, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
=== Copy the frame buffer to the texture ===
Don't use glCopyTexImage2D. This function deletes the previous texture and reallocates it, therefore it is slow.<br>
Use glCopyTexSubImage2D instead, which just updates the texels.<br>
So: render the scene to the backbuffer, don't call SwapBuffers yet, bind the texture, and call glCopyTexSubImage2D.
 RenderScene();
 glBindTexture(GL_TEXTURE_2D, textureID);
 //Copy a 512x512 region from the lower-left corner of the backbuffer into mipmap level 0
 glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, 512, 512);
 SwapBuffers();
=== glGenTextures ===
Always use this to generate texture IDs.<br>
glGenTextures does not allocate any texture storage; it just reserves an ID, which doesn't consume much RAM.<br>
You must call glTexImage2D to really allocate texture memory.<br>
Furthermore, it is possible that the driver won't upload your texture to VRAM when you call glTexImage2D; it may wait until you first use the texture when rendering.<br>
glGenTextures generates unsigned int (32 bit) values, so you can create 2^32-1 texture IDs.<br>
Texture IDs returned start from 1 and go upwards. If you delete a texture with glDeleteTextures and then call glGenTextures, the driver may or may not return that same ID. This isn't something you need to worry about.
When you shut down your program, it is recommended to call
 glDeleteTextures(1, &textureID);
but you are not obligated to; the driver can delete the texture for you when the context is destroyed.
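Note that both functions also work in batches:<br>
 GLuint textureIDs[3];
 glGenTextures(3, textureIDs); //reserves 3 IDs at once
 //... create and use the textures ...
 glDeleteTextures(3, textureIDs);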
=== glBindTexture ===
Sometimes you might run into code that calls glBindTexture(GL_TEXTURE_2D, 0). It might look something like this:
 glEnable(GL_TEXTURE_2D); //Enable texturing
 glBindTexture(GL_TEXTURE_2D, textureID);
 DrawTheThing();
 glBindTexture(GL_TEXTURE_2D, 0);
glBindTexture(GL_TEXTURE_2D, 0) binds the default texture, which effectively disables texturing.<br>
The advantages of doing that are not clear. You are better off really disabling the texture unit with glDisable(GL_TEXTURE_2D).
Binding some other unused ID, such as 1,000,000, looks the same on screen as binding 0, but keep in mind that binding an unused ID creates a new texture object as a side effect, so this is not recommended.
Also, as has been explained on the other Wiki pages talking about shaders, calling glEnable(GL_TEXTURE_2D) or glDisable(GL_TEXTURE_2D) has no effect when shaders are used; these calls are for the fixed pipeline. They became deprecated once GL 3.0 was introduced, since you are supposed to do everything with shaders.
=== Texture Storage ===
Once you upload your texture to GL, you don't need to keep a copy on your side. You can delete your copy just after calling glTexImage2D or glTexSubImage2D, since GL keeps its own.
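For example, with LoadImageFile being a hypothetical loader that returns a malloc'ed BGRA buffer:<br>
 unsigned char *pixels = LoadImageFile("wall.tga", &width, &height);
 glBindTexture(GL_TEXTURE_2D, textureID);
 glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
 free(pixels); //GL has made its own copy by now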
=== Windows and other OSes ===
Once you upload your texture to GL, where is it stored exactly?<br>
The driver will most likely keep a copy in RAM. Keep in mind that the driver has its own memory manager.<br>
When you want to render something with the texture, the driver will then upload it to VRAM (if you have a system with dedicated video memory). It will likely upload 100% of the texture, with all the mipmaps.<br>
If there isn't enough VRAM, the driver will delete another texture or another VBO to make room. That is the driver's choice and there is nothing you can do about it.<br>
The driver will always keep a copy in RAM, even when a copy is made in VRAM. RAM is considered permanent storage, while VRAM is considered volatile: Windows can destroy a texture and take over a part of VRAM if it wants to. That's why drivers always keep a copy in RAM.<br>
It is possible this will change in the future, or perhaps it has already changed with Windows Vista.<br>
Functions such as <b>glPrioritizeTextures</b> and <b>glAreTexturesResident</b> are useless, as has been explained on another page:<br>
http://www.opengl.org/wiki/Common_Mistakes#glAreTexturesResident_and_Video_Memory<br>
<br>
Games should not use these functions and should not rely on them. The drivers for gaming video cards such as the nVidia Geforce 9800 and ATI/AMD Radeon HD don't care for these functions. The driver is always the boss, not the programmer.<br>
<br>
For other OSes (Linux, Mac, FreeBSD), no comment.<br>
For other video cards, such as the workstation nVidia Quadro FX, no comment.
[[Category:Textures]]