Hello World Example

This is the first demonstration of how to assemble a program that uses the Aviatrix3D 2.0 API. Quite a number of things will be covered in this tutorial - 3D isn't as simple as just printing out a line of text. If you would like to see the complete version of this code, it can be found in the examples/basic directory and is named BasicDemo.java.

Setting up the UI components

Starting an application always begins with assembling a basic GUI. For us, that means the usual Frames, main() and other methods, as well as creating the surface that is to contain our 3D content. Because of the flexible nature of the design, there is no single surface object that you just create - in fact, we give you several options. The stock ones that come with Aviatrix3D can be found in the package org.j3d.aviatrix3d.output.graphics.

For the purposes of this exercise we'll start with the simplest of all the surface classes, SimpleAWTSurface. When creating this class, you'll need to provide an instance of GLCapabilities. This is a JOGL class, found in the package javax.media.opengl, that tells the surface exactly what sort of rendering output is desired. In most cases you can just create an instance of GLCapabilities and pass it straight through, but for this example, let's make sure of a few things:

GLCapabilities caps = new GLCapabilities();
caps.setDoubleBuffered(true);
caps.setHardwareAccelerated(true);

SimpleAWTSurface surface = new SimpleAWTSurface(caps);

To place the surface into your GUI, you must perform one more step. We do a lot of internal setup of the native surfaces, so we don't let you pass in an arbitrary component to paint to. Because of this, the surfaces don't derive directly from the standard AWT classes; instead, you ask the surface for the component that it renders into.

Component comp = (Component)surface.getSurfaceObject();
add(comp, BorderLayout.CENTER);

Casting to Component is required because there are many different possible combinations of output devices that this surface could implement.

Assembling the rendering pipeline

Next on the list is to put together the pipeline that processes the data in the scene graph and turns it into something that the surface can handle. Unlike almost every other scene graph API, this is an explicit set of steps that you perform when setting up the application, and it allows you to control exactly which performance optimisations are applied. A number of different options are provided in the org.j3d.aviatrix3d.pipeline package, but for now we'll work with just the simplest.

If you think back to your 3D graphics theory courses, you'll remember that a typical rendering pipeline consists of application, cull and draw stages. Most modern rendering packages define an additional stage - sorting. Each of these stages is explicitly represented in Aviatrix3D. The scene graph represents the application stage and the surface that you just created represents the draw stage, which leaves cull and sort as the two remaining stages. That's what you are going to assemble here. The two example stages we will use do nothing beyond the bare minimum required; they are represented by the classes NullCullStage and NullSortStage, which can be found in the package org.j3d.aviatrix3d.pipeline.graphics. Both have default constructors, leading to the following code:

CullStage culler = new NullCullStage();
SortStage sorter = new NullSortStage();

One of the features of Aviatrix3D is the ability to have nested render-to-texture capabilities within the scene graph. If you are certain that you will not be using these features, then some significant performance gains can be had by disabling the runtime checks that the culling stage performs, using the following line:

culler.setOffscreenCheckEnabled(false);

With these parts in hand, you now need to assemble them into a complete pipeline - which is conveniently managed by the DefaultGraphicsPipeline class, also found in the org.j3d.aviatrix3d.pipeline.graphics package. Its convenience methods take care of the nitty-gritty details (you could implement your own custom pipeline if you desire).

DefaultGraphicsPipeline pipeline = new DefaultGraphicsPipeline();

pipeline.setCuller(culler);
pipeline.setSorter(sorter);
pipeline.setGraphicsOutputDevice(surface);

One last step remains: setting up the infrastructure for the application section of the pipeline. This is the job of the render manager, which is responsible for marshalling all of the updates, keeping nodes in sync and a lot of basic low-level state management. Render managers implement the RenderManager interface (found in the package org.j3d.aviatrix3d.management). Again, a number of options are available, but for the majority of applications a single-threaded handling system will be more than sufficient (and on single-processor machines, probably the highest performing too).

SingleThreadRenderManager sceneManager = new SingleThreadRenderManager();
sceneManager.addPipeline(pipeline);
sceneManager.setGraphicsOutputDevice(surface);

There are quite a number of useful methods on the render manager interface, so a pass over the javadoc would be wise to see everything that you can do. One feature that you'll almost always use (particularly in the simple demo apps) is capping the framerate. Aviatrix3D will run at maximum CPU utilisation if you let it, so capping the framerate is the best way to leave the CPU free to do other things. The call you want is setMinimumFrameInterval(), which takes a time value in milliseconds representing the minimum time between frames. If a frame takes longer than this to process, the next frame starts immediately; otherwise the manager sleeps for the remaining time to ensure at least the given interval between frame renders.

sceneManager.setMinimumFrameInterval(100);

Also, you will want to keep a reference to the render manager, as this is where most of your interaction happens as you change the scene contents around.
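
As a minimal illustration, you might hold the manager in a field and use it to pause and resume rendering. The method names here are just for the sketch; setEnabled() is the same call used in the window listener later in this tutorial.

private SingleThreadRenderManager sceneManager;

// Pause rendering, for example while the window is iconified
public void pauseRendering()
{
    sceneManager.setEnabled(false);
}

// Resume rendering once the window is usable again
public void resumeRendering()
{
    sceneManager.setEnabled(true);
}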

Assembling the scene graph

Time to put something visible together. In this example we'll just put in a single triangle with a transform over the top to move it about. The code is straightforward and follows the normal conventions one assumes with scene graphs. For reference, the matrix and vector classes come from the javax.vecmath package.

private void setupSceneGraph()
{
    // A viewpoint moved 1 unit along +Z; the triangle below sits at
    // z = -1, so it ends up in front of the camera
    Viewpoint vp = new Viewpoint();
    Vector3f trans = new Vector3f(0, 0, 1);

    Matrix4f mat = new Matrix4f();
    mat.setIdentity();
    mat.setTranslation(trans);

    TransformGroup tx = new TransformGroup();
    tx.addChild(vp);
    tx.setTransform(mat);

    Group scene_root = new Group();
    scene_root.addChild(tx);

    // Flat panel that has the viewable object as the demo
    float[] coord = { 0, 0, -1, 0.25f, 0, -1, 0, 0.25f, -1 };
    float[] normal = { 0, 0, 1, 0, 0, 1, 0, 0, 1 };
    float[] color = { 0, 0, 1, 0, 1, 0, 1, 0, 0 };

    TriangleArray geom = new TriangleArray();
    geom.setVertices(TriangleArray.COORDINATE_3, coord, 3);
    geom.setNormals(normal);
    geom.setColors(false, color);

    Shape3D shape = new Shape3D();
    shape.setGeometry(geom);

    // Reuse the vector to offset the triangle up and to the right
    trans.set(0.2f, 0.5f, 0);
    Matrix4f mat2 = new Matrix4f();
    mat2.setIdentity();
    mat2.setTranslation(trans);

    TransformGroup shape_transform = new TransformGroup();
    shape_transform.addChild(shape);
    shape_transform.setTransform(mat2);

    scene_root.addChild(shape_transform);

    ...
}

The TriangleArray class allows a lot of configurability. If you're looking up the documentation, most of the method calls are actually in the base class VertexGeometry. Part of the low-level flexibility provided by OpenGL is to allow geometry definition in 2, 3 or 4 dimensional coordinate systems, hence the first argument to the setVertices() call. Say you wanted to render 2D geometry: just change that first parameter to COORDINATE_2 and you have instant 2D geometry.
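
For example, here is a sketch of the same triangle defined with 2D coordinates - only the coordinate array and the first parameter change (the names coord_2d and flat_geom are just for illustration):

// Two values (x, y) per vertex instead of three
float[] coord_2d = { 0, 0,  0.25f, 0,  0, 0.25f };

TriangleArray flat_geom = new TriangleArray();
flat_geom.setVertices(TriangleArray.COORDINATE_2, coord_2d, 3);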

Per-vertex colours can be specified with either 3 or 4 components. Most of the time you only want to provide 3 components, as that gives far better rendering performance. The first parameter of the setColors() method defines whether alpha is provided (i.e. whether the colour array holds 4 or 3 components per vertex). Most of the time you will want to set this to false.
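
For example, here is a sketch of supplying the same three vertices with half-transparent colours (the rgba array name is just for illustration):

// Four values (r, g, b, a) per vertex; the first argument flags that
// the array includes an alpha component
float[] rgba = { 0, 0, 1, 0.5f,
                 0, 1, 0, 0.5f,
                 1, 0, 0, 0.5f };
geom.setColors(true, rgba);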

Lastly, you need to set up the transforms. Setting the transform uses a Matrix4f object. A little gotcha here is that the vecmath library always starts a new matrix instance as all zeros rather than the identity matrix. So, make sure you call setIdentity() first and then set your translations, rotations etc. Without this one call, your geometry will disappear into the void (the scale values will be 0, 0, 0 on the diagonal). So, if you set up a scene and the geometry doesn't render, check that you've set the matrix to identity first!
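
To make the gotcha concrete, here is a small sketch mirroring the setupSceneGraph() code above:

Matrix4f mat = new Matrix4f();   // starts as all zeros, not identity
// mat.setTranslation(trans);    // wrong: the scale diagonal is still 0, 0, 0

mat.setIdentity();               // scale diagonal becomes 1, 1, 1
mat.setTranslation(trans);       // now the translation can be set safely
tx.setTransform(mat);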

Making it all render

Scene graph constructed, rendering system initialised, now all you need to do is connect them together. The connection is provided by the Scene class. The role of the Scene class is to take a scene graph and manage the various global states. Its most important job is to inform the rendering system which nodes are to be treated as the active viewpoint, background, fog, etc. At the low level, the rendering APIs can only support a single version of each of these at a time. When you set up a fog, it affects all the geometry, and the same applies to the viewing location. Although you can have multiples of these in a scene, only one can be active at a time. The Scene object is what nominates to the renderer the instance that it should consider to be the active version.

There are two different types of scene - single pass and multipass. The single pass variety, represented by the SimpleScene class, is what the majority of applications use and what we mostly demonstrate in these tutorials. A single pass scene contains a scene graph that is traversed and rendered as-is. The multipass scene, represented by the MultipassScene class, provides the capability to take a single layer and perform multiple rendering passes over that layer, each tweaked slightly differently. A typical use of multipass rendering is creating mirrors or shadows. We won't cover multipass rendering in this tutorial, but you can read about the details in the Multipass Rendering Tutorial.

Although there are quite a number of options available, typically the only parts you'll use are for setting the active view point, background and the scene graph itself.

SimpleScene scene = new SimpleScene();
scene.setRenderedGeometry(scene_root);
scene.setActiveView(vp);

The last step is to connect the scene graph to the rendering system - telling the rendering system what you want to render. To do that, you need some superstructure around the scene. The top of the rendering structure describes compositing information - you can compose one or more layers of rendered scenes together to produce the final visual. These can be assembled in all sorts of combinations, but for this example all we need is a single layer. Within the layer, you need to describe the area of the surface that it should take up. The bounds are described in pixels - bottom, left, width and height - and the values are relative to the surface. In AV3D this always needs to be set explicitly; we do not automatically adjust it for window size changes (see the sketch after the code below).

SimpleViewport view = new SimpleViewport();
view.setDimensions(0, 0, 500, 500);
view.setScene(scene);

SimpleLayer layer = new SimpleLayer();
layer.setViewport(view);

Layer[] layers = { layer };
sceneManager.setLayers(layers, 1);
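
Since the viewport is not resized for you, you will typically track surface size changes yourself. Below is a hedged sketch using a standard AWT ComponentListener on the component obtained earlier; whether it is safe to call setDimensions() directly from this callback, rather than deferring the change to the update cycle, is an assumption you should verify for your application.

final SimpleViewport resize_view = view;

comp.addComponentListener(new ComponentAdapter()
{
    public void componentResized(ComponentEvent evt)
    {
        // Keep the viewport matched to the surface's current size
        Dimension size = evt.getComponent().getSize();
        resize_view.setDimensions(0, 0, size.width, size.height);
    }
});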

Dealing with JOGL Issues

Aviatrix3D is not immune to some of the fun bugs associated with JOGL. The most important one that you need to be aware of is when to start the rendering process. By default, the renderer does not run; you need to explicitly set it running. One of the primary reasons for this is dealing with JOGL Bug #54. In short, the bug means that you cannot start the JOGL renderer before the containing window has been set visible for the first time. If you try to start the renderer before this point, an exception will be thrown.

To work around this problem, we leave the rendering engine disabled and let you start it when the window first becomes visible. How do you know when that is? Simple - attach a WindowListener and start the renderer when you receive the windowOpened() callback. The code looks something like this:

class MyFrame extends Frame implements WindowListener
{
    private SingleThreadRenderManager sceneManager;

    MyFrame()
    {
        addWindowListener(this);

        setSize(600, 600);

        Component comp = (Component)surface.getSurfaceObject();
        add(comp, BorderLayout.CENTER);

        ....

        sceneManager = new SingleThreadRenderManager();

        ....

        setVisible(true);
    }

    ....

    public void windowOpened(WindowEvent evt)
    {
        sceneManager.setEnabled(true);
    }
}

And that's it. You now have the basics for setting up a static scene graph and application. Here's BasicDemo.java running: