Changes in Aviatrix3D 2.0 Notes

The change from version 1.0 to 2.0 brought some significant changes to Aviatrix3D. These changes make the two versions incompatible, though for the most part upgrading to the new APIs should be relatively trivial.

The three most significant changes that an end user will see are the change to the new JSR 231 APIs for access to OpenGL (the javax.media.opengl package), the addition of layers and viewport control at the root of the scene graph, and a repackaging of the pipeline and output device classes. There are several other large internal structural changes; however, these are not visible to the end user.


Layers and Viewports

Although AV3D 1.0 made mention of layers and viewports, we didn't end up providing an implementation of them. The layer-related methods were empty and the classes did nothing. With the move to 2.0, we have made layers an integral part of the scene graph. In fact, it is now impossible to build a scene graph without at least one layer.

A layer represents a separate drawable scene that can be composited on top of other layers. A typical use of this is a HUD, such as in a game. Along with layers we also introduced the concept of viewports. A viewport allows you to draw to only a portion of the full window, yet treat it as a fully self-contained window with proper aspect ratio management etc. A typical use of viewports is in a 4-view CAD-style application.
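To make the 4-view idea concrete, here is a small self-contained sketch (our own illustrative helper, not part of the Aviatrix3D API) that computes the pixel rectangle and aspect ratio for each quadrant of such a window:

```java
// Hypothetical helper, not Aviatrix3D API: computes the pixel rectangles
// for a classic 4-view (top/front/side/perspective) split of a window.
public class QuadViewLayout {
    /** Returns {x, y, width, height} for quadrant 0..3 of the window. */
    public static int[] quadrant(int windowWidth, int windowHeight, int index) {
        int w = windowWidth / 2;
        int h = windowHeight / 2;
        int x = (index % 2) * w;       // columns: 0 = left, 1 = right
        int y = (index / 2) * h;       // rows: 0 = bottom, 1 = top (GL convention)
        return new int[] { x, y, w, h };
    }

    /** Aspect ratio each viewport must use so its geometry is not distorted. */
    public static double aspectRatio(int[] rect) {
        return (double) rect[2] / rect[3];
    }

    public static void main(String[] args) {
        int[] topRight = quadrant(1024, 768, 3);
        System.out.println(topRight[0] + "," + topRight[1] + "," +
                           topRight[2] + "," + topRight[3]);   // 512,384,512,384
        System.out.println(aspectRatio(topRight));             // 1.3333333333333333
    }
}
```

Each viewport then manages its own aspect ratio from its own rectangle, independent of the window's overall shape.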

Layers contain one or more viewports. A viewport may also contain its own layers, which can be useful if you have independent views that each need their own local user interface. Viewports on different primary layers do not need to match between layers: each primary layer is independent of the others, so you can mix and match these as your application requires. Of course, you can also share a scene graph between different viewports and layers, thus saving on memory usage.
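As a sketch of how the pieces fit together, a minimal single-layer setup looks something like the following. The class and method names (SimpleScene, SimpleViewport, SimpleLayer, setLayers) follow the typical 2.0 examples, but check them against your release; sceneRoot and displayManager are assumed to exist already.

```java
// Sketch only: assumes an existing sceneRoot (Group) and a display manager
// from your chosen pipeline setup. Names follow the typical 2.0 examples.
SimpleScene scene = new SimpleScene();
scene.setRenderedGeometry(sceneRoot);

SimpleViewport viewport = new SimpleViewport();
viewport.setDimensions(0, 0, 500, 500);   // x, y, width, height in pixels
viewport.setScene(scene);

SimpleLayer layer = new SimpleLayer();
layer.setViewport(viewport);

// Every scene graph now needs at least one layer at its root.
displayManager.setLayers(new Layer[] { layer }, 1);
```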

As a result of adding layers, projecting the mouse back into virtual world coordinates has become a lot more difficult. A 2D position on the screen no longer corresponds to a single virtual environment. To accommodate these changes, the methods on GraphicsOutputDevice (the old DrawableSurface) have changed to also require a layer and sub-layer identifier. This allows the code to match the mouse position to the coordinates of the viewport and, if needed, the layer within that viewport.
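The kind of mapping this implies can be sketched independently of the library. The helper below (hypothetical, not the GraphicsOutputDevice API) converts an AWT-style mouse position into normalized coordinates within one viewport rectangle, returning null when the point lies outside it:

```java
// Hypothetical helper, not library code: maps an AWT mouse position to
// normalized device coordinates within a single viewport.
public class MousePicker {
    /**
     * @param mx mouse x, origin at the top-left (AWT convention)
     * @param my mouse y, origin at the top-left (AWT convention)
     * @param vx viewport x, origin at the bottom-left (GL convention)
     * @param vy viewport y, origin at the bottom-left (GL convention)
     * @param vw viewport width in pixels
     * @param vh viewport height in pixels
     * @param windowHeight total window height in pixels
     * @return {ndcX, ndcY} in [-1, 1], or null if outside the viewport
     */
    public static double[] toViewportNDC(int mx, int my,
                                         int vx, int vy, int vw, int vh,
                                         int windowHeight) {
        int glY = windowHeight - my;              // flip to GL's bottom-left origin
        if(mx < vx || mx >= vx + vw || glY < vy || glY >= vy + vh)
            return null;                          // not over this viewport

        double ndcX = 2.0 * (mx - vx) / vw - 1.0;
        double ndcY = 2.0 * (glY - vy) / vh - 1.0;
        return new double[] { ndcX, ndcY };
    }
}
```

With layers, the same test would be repeated per viewport and per layer within it, which is why the device methods now need those extra identifiers.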

Supporting multipass rendering required changing the way the scene itself is represented at the top of the scene graph. The single Scene class has become abstract, with two derived classes added: SimpleScene and MultipassScene, representing the old single-pass rendering and multipass rendering respectively.

JSR 231 APIs

With the release of the formal JSR 231 APIs, we swapped over to using them. For the most part, you won't notice this as a programmer. It shows only in a few places, such as the package of the GLCapabilities object that needs to be passed to the surface constructor. If you are building scene graph objects based on the custom renderable interfaces, you'll also notice that we now pass in only the GL object and no longer the GLU object. With the new APIs, the GLU object can be constructed directly by the end user, rather than being fetched from a drawable object provided by the system.

Internally, we have made some large changes to how we interact with OpenGL. The new JSR APIs allow direct access to the context rather than only the indirect callback structure. Now we access the context, make it current, do the drawing and then release it, in the same way as a traditional C-based OpenGL application.
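The pattern looks something like the sketch below, using the javax.media.opengl classes. It assumes the drawable already exists and is only an outline of the per-frame flow, not our actual pipeline code:

```java
import javax.media.opengl.GL;
import javax.media.opengl.GLAutoDrawable;
import javax.media.opengl.GLContext;

// Sketch of the explicit context-handling style JSR 231 allows.
void renderFrame(GLAutoDrawable drawable) {
    GLContext context = drawable.getContext();
    if(context.makeCurrent() == GLContext.CONTEXT_NOT_CURRENT)
        return;                       // context unavailable this frame

    try {
        GL gl = context.getGL();
        gl.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT);
        // ... issue the rest of the drawing commands ...
    } finally {
        context.release();            // hand the context back each frame
    }
}
```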

We have also made some optimisations for single- versus multi-threaded rendering. In the single-threaded case we grab the context and make it current at the beginning of the application, and don't release it until the canvas goes away. This saves a fair bit of locking/unlocking overhead.

Package Restructuring

One of the areas that became increasingly cluttered as we developed the first version was the location of almost all the classes in the one package, org.j3d.aviatrix3d. All the abstract interfaces for the rendering pipeline, the communications structures and more were lumped together. This made it quite difficult to separate the classes intended for the end user from the internal implementation. With the new version we decided to reorganise so that there is a clear delineation between the public scene graph description and the composable rendering pipeline.

As part of this restructure, the existing org.j3d.aviatrix3d.rendering package was repurposed. Its previous contents were moved to org.j3d.aviatrix3d.management, and the package now contains all the interfaces that internally describe a scene graph object to the rendering pipeline. Unless you are implementing custom nodes, this package can be mostly ignored.

During the restructure we also commonised as much as possible. Many of the interfaces between the audio and visual pipelines were the same, so a collection of sub-interfaces was created, and extra packages were defined for the audio and visual device implementations. We also noticed a lot of common code at the bottom of the various cull and sorting stages, so common base classes were built there as well. You should notice almost none of this, other than needing to change package imports, but it makes our life much easier from a development and debugging perspective.

The picking system has now been completely abstracted. In the old code, if you wanted to create a custom node type and have it pickable, you needed to extend one of the core classes, such as Group. This is no longer a requirement - in fact, you could create a picking structure that is completely independent of the rendered scene graph. All of this code has been pushed into a separate package, org.j3d.aviatrix3d.picking. A small casualty is that we also had to move the base class BoundingVolume over, because the picking subsystem implementation requires it (though not its derived classes).

SWT Support

In this version we add support for the SWT toolkit that is the heart of the Eclipse project. We're including this as a first-class capability rather than an optionally compiled extra. Eclipse has a huge following, and we are developing the OpenGL capabilities for it ourselves (a port of the JSR-231 APIs to run under SWT).

SWT support is designed to let you use Aviatrix3D as a plugin directly within Eclipse. In this way you can add first-class 3D graphics capabilities without the need to code in OpenGL. Also, by changing just the surface class, your application can run in either SWT or AWT environments.

2D Rendering

A subset of the scene graph nodes has been defined that allows scenes to be described purely with 2D nodes. In a 2D system, lighting and other effects are ignored. Also, transparency handling is limited to separating out transparent and opaque objects; there is no depth blending.

In addition to a 2D scene graph, 2D geometry in the form of bitmaps and rasters is supported. These are drawn at a 2D position on screen that is derived from the 3D transformations above them. They render directly as 2D pixels, but they are also subject to OpenGL's raster limitations: once the raster position (typically the lower-left corner) goes offscreen, the entire raster object is clipped, even if it would normally still be partially visible.
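The clipping rule can be illustrated with a tiny self-contained check (illustrative only, not library code):

```java
// Illustrates OpenGL's raster-position rule: if the raster's anchor point
// (typically its lower-left corner) falls outside the viewport, the whole
// raster is dropped, even when part of it would still overlap the screen.
public class RasterClip {
    public static boolean isDrawn(int anchorX, int anchorY,
                                  int viewWidth, int viewHeight) {
        return anchorX >= 0 && anchorX < viewWidth &&
               anchorY >= 0 && anchorY < viewHeight;
    }

    public static void main(String[] args) {
        // Anchor on screen: drawn, even if the image extends past the edge.
        System.out.println(isDrawn(10, 10, 640, 480));   // true
        // Anchor just off the left edge: the whole raster disappears.
        System.out.println(isDrawn(-1, 10, 640, 480));   // false
    }
}
```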

Multipass Rendering

Along with the layers, there is also the ability to define a layer as being rendered using multipass techniques. This permits almost all the traditional multipass rendering tricks familiar from C-style OpenGL programming. Shadow volumes, motion blur and the other usual effects can be drawn using these capabilities.

Access to all four main buffer types is provided. The scene graph is still structured in the natural way you would expect: describe each layer and viewport as needed, in the proper order, including the multipass layers. Internally, the implementation takes care of all the buffer management and render-pass re-ordering that is required. Several examples showing the use of the different buffers and volume shadows are provided.