


Complete Texture API Introduction

Having got the basic texturing example working, you now need to become familiar with the rest of the texturing APIs and what they can do for you. In general, Aviatrix3D presents all of the normal texturing options available at the OpenGL level, bundled into a handful of classes. This tutorial walks you through the various classes that contribute to the textured look, rather than presenting a complete end-to-end application demo.

If you would like to see some code that uses a reasonable number of these options, go to the examples/texture directory and look at TextureTransformDemo.java.


Updating Textures

When textures are being animated or changed, there are several options available. The first is to completely replace the texture contents by replacing the entire TextureComponent. To do this, you use the same method that you used to set the texture up in the first place - setSources(). The big downside of this approach is that it replaces the entire texture on the graphics card. This can be a very expensive operation - particularly if only a small subset of the image is changing. However, if you need to change the size of the texture, this is the only way to go.

If you just want to replace a small section of the image, the high-performance way to do it is to use the updateSubImage() method on the appropriate TextureComponent-derived class. This method takes the starting coordinates and size of the sub-region to be updated, as well as a handle to the raw pixel data.

When developing applications on Aviatrix3D, we found that there are several different types of update you might want to perform on a texture, depending on the content. For some, such as video processing, you know that you are going to replace the entire contents - anything that was there will be completely overwritten. For others, you know that you are only updating a couple of independent sub-areas of the overall image. The way we treat these internally can be very different and has some significant performance impacts. So, with these different goals in mind, we provide a set of update strategy flags that let you tell the texture image what optimisations to make. In the video case, we can simply discard any other updates that are currently pending, but in the sub-region case we want to keep the amount of texture data sent to the video card to a minimum, so we make a number of calls to OpenGL, each with its own small area of pixels.

The texture update strategy can be set through the setUpdateStrategy() method on the Texture object. In any form of interactive scene, these texture updates may not need to occur for some time - for example, when the textured object is not currently visible. That means the low-level calls to OpenGL may not take place on the next frame, so we need to buffer all those sub-image calls in some way. Each of the strategies is designed to cater to this sort of situation.

There are three strategies that are defined in that class:

  • UPDATE_BUFFER_ALL All sub-image updates should be buffered until the next chance to send those updates to the card. This takes all the updates and keeps them around until the next chance to commit them to the drawn texture. This will take up the most memory, as all the pixel values must be buffered by the implementation. However, if you have a lot of independent areas that are being updated, this is the way to go.
  • UPDATE_BUFFER_LAST Only keep the last update around to send to the video card at the next chance. Any previous updates are discarded. This is best used when you are replacing the entire contents of the texture, such as video rendering. It also uses the least memory of all the strategies.
  • UPDATE_DISCARD_OVERWRITES If an update area completely overlaps an area of an existing pending update, discard the older update. Useful when you are updating the same set of sub-regions all the time, or when you have a collection of random changes and want to minimise both the amount of buffered data and the number of calls made to OpenGL. A halfway point between the other two strategies.
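As a sketch of how these pieces fit together, the following assumes the Texture2D and ByteTextureComponent2D classes; the exact updateSubImage() signature and pixel layout depend on the component subclass you use, so treat the argument order here as illustrative rather than definitive:

```java
// Sketch only: class and constant names follow this tutorial's text; check
// the javadoc for the exact updateSubImage() signature in your version.
Texture2D texture = new Texture2D();

// Several independent areas change each frame, so buffer every pending
// sub-image update until the next chance to commit them to the card.
texture.setUpdateStrategy(Texture.UPDATE_BUFFER_ALL);

// Replace a 16x16 block starting at (32, 32) with new RGB pixel data.
byte[] pixels = new byte[16 * 16 * 3];   // 3 bytes per pixel for RGB
component.updateSubImage(32, 32, 16, 16, 0, pixels);
```

For a full-frame video feed you would use UPDATE_BUFFER_LAST instead, since every new frame makes the previously buffered one redundant.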


Unfortunately, textures don't always look their best. A texture far away, reduced to a few pixels on screen, may look rather terrible compared to its original version - particularly if the full size includes some transparent pixels. That is where the filtering modes come in. These allow you to control how the original pixels of the image are sampled as the texture is viewed from varying distances.

AV3D provides three different sets of filtering parameters that you can adjust:

  • Minification: When a single pixel on screen maps to multiple pixels from the source texture, you need to work out how to average the values to be the final on-screen value.
  • Magnification: When a single pixel on the texture corresponds to multiple pixels on screen, you need to do some smoothing around the edge pixels to blend colours.
  • Anisotropic: An alternative filtering method based on non-linear scaling of the source texture pixels. The other methods always use square sampling, whereas this form uses non-square sampling, allowing for greater resolution in the coordinate axis that requires it. This is far more expensive to calculate, but typically produces the best-looking results.

Boundary Modes

Specifying boundary modes is a capability of the Texture classes. The base class Texture defines a number of constants for you to use when making the method calls. You have four options to work with that mirror the OpenGL capabilities: BM_CLAMP, BM_CLAMP_TO_EDGE, BM_CLAMP_TO_BOUNDARY and BM_WRAP. These follow the usual well-defined OpenGL definitions, so there is no need to go over them again here.

To set the boundary mode, you need to find the appropriate method call. Since textures can have a varying number of dimensions, you won't find all the methods on the base Texture object. There you will only find a single method for the S axis. That's because all texture types use this axis as the baseline, but not every type uses the other axes. At this level you have just setBoundaryModeS(), which takes an int as its only value - one of the BM_* values from above. By default, all boundary modes are set to BM_CLAMP. As an example, to change the T boundary mode of a 2D texture to wrap, use this code:
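A minimal sketch, assuming the Texture2D class adds a matching setBoundaryModeT() method for the extra axis it introduces:

```java
// Sketch: assumes Texture2D declares setBoundaryModeT() following the
// same pattern as setBoundaryModeS() on the base Texture class.
Texture2D texture = new Texture2D();
texture.setBoundaryModeT(Texture.BM_WRAP);
```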


Texture Functions

Texture functions control how the texture image is applied to the object and the underlying object colour and lighting model. Control over this ability is found in the TextureAttributes class. Depending on what you are trying to achieve, there are a number of options for you to play with. The primary method that you'll be making use of will be setTextureMode(), as this defines the basics of how the texture is applied, and is also the gateway to the more complex texture combiner modes. By default, the texture mode is set to REPLACE, so if you would like to change it to, say, MODULATE, you would need to use the following code.

TextureAttributes attr = new TextureAttributes();
attr.setTextureMode(TextureAttributes.MODE_MODULATE);

Making use of the texture combiner modes and methods is covered in the Multitexturing tutorial, so we won't cover them in detail here.

If your texturing needs to make use of an explicit blend colour, such as when using the MODE_BLEND texture mode or texture combiner options, then you can set the value using the setBlendColor() method. This takes four float values corresponding to the RGBA components. For example, to set a colour of blue, use the following:

attr.setBlendColor(0, 0, 1, 1);


Texture filtering can be used with the same set of capabilities as those provided by OpenGL. The same restrictions also apply - for example, attempting to use mipmap filter modes without providing mipmaps will disable texturing. The library is just a thin layer that passes on the requested modes; the real visual behaviour is defined by the OpenGL specification.

Three filter capabilities are defined - minification, magnification and anisotropic. Each has its own methods. Separate constant sets are defined for each filter because, although some options look common, the underlying OpenGL calls use different constants to describe each filter type. It also helps us internally to work out quickly whether you've passed us a dodgy value.

Each of the filter setting methods is found on the base class Texture. There you will find the methods setMinFilter() and setMagFilter() for the minification and magnification filter parameters respectively, and setAnisotropicFilterMode() and setAnisotropicFilterDegree() for the anisotropic settings. The anisotropic mode method takes a constant defining the mode. Currently there are only two modes available - effectively on and off. The degree method can then be used to set the value needed for the filtering (by default, zero).
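Putting these together, a hedged sketch of configuring all three filters might look like this. The method names come from the text above, but the specific constant names are assumptions - check the Texture javadoc for the values your version defines:

```java
// Sketch only: MINFILTER_*, MAGFILTER_* and ANISOTROPIC_* constant names
// are assumed; consult the Texture class javadoc for the real identifiers.
Texture2D texture = new Texture2D();
texture.setMinFilter(Texture.MINFILTER_BASE_LEVEL_LINEAR);     // assumed name
texture.setMagFilter(Texture.MAGFILTER_BASE_LEVEL_LINEAR);     // assumed name
texture.setAnisotropicFilterMode(Texture.ANISOTROPIC_MODE_SINGLE); // on/off mode
texture.setAnisotropicFilterDegree(4.0f);  // degree of anisotropy to apply
```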

Texture Coordinate Generation

Automatic texture coordinate generation is used in a lot of places, particularly for the various forms of environment mapping. All of the available options are wrapped up in the class TexCoordGeneration. The method options available are very simple - there is only one method! Packing everything into that one method can, no doubt, make it more confusing to use at first.

All settings are performed through the setParameter() method. This is roughly analogous to the OpenGL glTexGen*() calls, allowing complete control over the generated output. There are four arguments to setParameter(): the texture axis (S, T, R or Q), the type of generation mode to use, a parameter value describing the type of generation for the specified mode, and an optional array of values (typically plane equations for the various eye mapping modes). Working this interface effectively requires some knowledge of how the OpenGL texture coordinate generation modes work. Note that each axis can have a separate and completely different generation mode, should you choose. It may sound a little odd, but the flexibility is there if you want normal reflections for the S axis, object linear for the T axis and no generation at all for the R axis.

Parameter values are only needed if you are using one of the eye or object linear generation modes. For any other mode, we don't look at the value internally, so you are safe to pass null there. For a more detailed look at the use of this class in action, head to the Cubic Environment Mapping tutorial.
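As a hedged illustration of the four-argument shape described above, here is sphere-map generation on the S and T axes. The constant names used are placeholders for whatever your version of TexCoordGeneration actually defines, not verified identifiers:

```java
// Placeholder constant names - consult the TexCoordGeneration javadoc for
// the real identifiers. Argument order follows the tutorial text:
// axis, generation mode, mode parameter, optional plane equation values.
TexCoordGeneration texGen = new TexCoordGeneration();
texGen.setParameter(TexCoordGeneration.TEXTURE_S,     // S axis (placeholder)
                    TexCoordGeneration.MODE_GENERIC,  // mode (placeholder)
                    TexCoordGeneration.MAP_SPHERICAL, // sphere map (placeholder)
                    null);                            // no plane equation needed
texGen.setParameter(TexCoordGeneration.TEXTURE_T,
                    TexCoordGeneration.MODE_GENERIC,
                    TexCoordGeneration.MAP_SPHERICAL,
                    null);
```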

Texture Priorities

In large worlds with a lot of textures, there can be significant performance loss due to the underlying graphics APIs having to shuffle texture image data on and off the video card as each texture is rendered. State sorting by the graphics API can help somewhat, but when there are a lot of textures and more texture data than available texture memory, this thrashing can become a significant bottleneck. Some applications know that certain textures will always be required, and others not. A good example of this is grass and road textures in a game. It would benefit performance if you could indicate which textures you prefer to keep on the card at all times, and which ones you don't care about.

When you know you have some textures that you would prefer to keep on the video card over others, you want to make use of texture priorities. These are your way of indicating which textures are most important to keep on the video card, and which are OK to be tossed out between frames.

Priorities are indicated with a floating point value between zero and one. A value of zero means unimportant, and a value of one means most important. If you don't care about the texture or any priority, set a value of -1. Priorities are set using the setPriority() method on the Texture class.
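For example, in the grass-and-road scenario above (texture variable names here are illustrative):

```java
// Ground textures are visible in almost every frame - ask the driver to
// keep them resident on the card where possible.
grassTexture.setPriority(1.0f);

// A rarely seen detail texture - let the driver manage it however it likes.
detailTexture.setPriority(-1);
```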

Depth Textures

Aviatrix3D also allows you to specify textures that describe depth information. Depth textures are used in rendering shadows.


Current Limitations and Missing Capabilities

The following capabilities cannot be used or expressed through the various APIs available in Aviatrix3D. We're considering designs for them, but have nothing solid yet.

  • Texture Proxy
  • Compressed Textures
  • Automatically Generated Mipmaps