From ActionScript 3 to C++ 2011

During the last Flash Online conference, I had the chance to share the latest work I’ve been involved in at Aerys with the rest of the Minko team. We’ve been working a lot on the next major version because we really want it to be a game changer for 3D on mobiles and the web.

You can read the original announcement for more details. But the big picture is that Minko is going to support WebGL. To introduce this new major feature we’ve created a first technical demonstration:

To do this, we are completely rewriting Minko using C++ 2011. This new version will include bindings for ActionScript 3 (and obviously JavaScript too). So if you’re an AS3 developer: do not panic! You’ll still be able to leverage your AS3 skills with Minko. Yet if you want to learn new tricks, now would be a good time, and C++ is a good choice.

To understand the process of working with C++ code targeting the Flash platform and HTML5/JavaScript, you can start by reading my slides:

To help AS3 developers migrate to C++, I’ve decided I’ll start gathering resources here on this very blog. If you are interested, you can start by:

If you have suggestions about what you need to know regarding C++, especially cross-compilation targeting the Flash platform or JavaScript, please let me know!

Stage3D Online Conference Slides

It was really awesome to be invited to talk about Minko today during the Stage3D online conference organized by Sergey Gonchar. He has done an excellent job in organizing this and I hope people enjoyed attending it as much as I enjoyed being a part of it.


You can watch the entire conference here.

As I promised, here are the slides from this presentation. They are pretty heavy because they embed some videos. Here is the outline of the presentation:

  • Community SDK
    • Scripting
    • Shaders and GPU programming
    • Scene editor
  • Professional SDK
    • Physics
    • Optimizations for the web and mobile devices

At the end of the presentation, I also demonstrated how Minko can load and display Crytek’s Sponza smoothly on the iPad and the Nexus 7 in just a few minutes of work, thanks to the editor and the optimizations granted by the MK format. You will soon hear more about this very demonstration, with a clean video demonstrating the app but also the publishing process. This is incredibly cool since Sponza is quite a big scene with more than 50 textures, including normal maps, alpha maps and specular maps, for a total of 200+MB (only 70MB when published to MK).

Don’t forget to have a look at all the online resources for Minko:

As stated in the presentation, Minko’s editor public beta should start next week. So stay tuned!

JIT Shaders For Better Performance

The subject is really vast and complex and I’ve been trying to write an article about this for quite some time now. Recently, I made a small patch to enhance this technique and I thought it was a good occasion to try to summarize how it works and the benefits of it. In order to talk about this new enhancement, I would like to draw the big picture first.

The Problem

That might look like a complicated post title… but it is more complex than complicated. Here is how it starts: rendering a 3D object requires executing a graphics rendering program – or “shader” – on the GPU. To make it simple, let’s just say this program will compute the final color of each pixel on the screen. Thus, the operations performed by this shader will vary according to how you want your object to look. For example, rendering with a solid flat color requires different operations than rendering with per-pixel lighting.

Any programming beginner will understand that such a program will test conditions – for example, whether to use lighting or not – and perform some operations according to the result of this test. Yes: that’s pretty much exactly what an “if” statement is. It might look like programming trivia to you. And it would be if this program were not meant to be executed on the GPU…

You see, the GPU does not like branching. Not one bit (literally)! For the sake of parallelization, the GPU expects the program to have a fixed number of operations. This is the only efficient way to ensure computations can be distributed over a large number of pipelines without having to care too much about their synchronization. Thus, the GPU knows nothing about branching, and each program has a fixed number of instructions that will always be executed in the same order.

Conclusion: shader programs cannot use “if” statements. And of course, loops are out of the game too since they are pretty much pimped-out “if” statements. Can you imagine what such logic would imply for your daily programming tasks? If you simply try to, you will quickly understand that instead of writing one program that can handle many different situations, you will have to write many different programs that each handle a single situation. And then manually choose which one should be launched according to your initial setup…

Workarounds…

Mutables

The simplest workaround is to find “some way” to make sure useless computations do not affect the actual rendering operations. For example, you can “virtually disable” lighting by setting all the lights’ diffuse/specular components to 0.
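
As a rough illustration (the light properties below are hypothetical, not an actual Minko API), the trick boils down to something like:

```actionscript
// Hypothetical sketch: the lighting code still runs on the GPU for every
// pixel, but zeroed components null its contribution to the final color.
light.diffuse  = 0.0;
light.specular = 0.0;
```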

As you can imagine, this is really a suboptimal option. Performance wise, it’s actually the worst possible idea: a lot of computations happen and most of them are likely to be useless in most cases.

If/else shader intrinsic instructions

After a few years, shaders evolved and featured more and more instructions. Those instructions are now usable through higher-level languages such as Cg or GLSL. Those languages feature “if” statements (and even loops too). How are they compiled into shader code that can run on a GPU? Do they overcome the challenges implied by parallelization?

No. They actually fit in in a very straightforward and simple way: as a shader program must feature a single fixed list of instructions, the two parts of an if/else statement will both be executed. The hardware will then decide which one should be muted according to the actual result of the test performed by the conditional instructions.

The bright side is that you can use this technique to have a single shader program that handles multiple scenarios. The dark side is that this shader is still very inefficient and might eventually exceed the instruction count limit of a single program. On some older hardware, the corresponding hardware instructions simply do not exist…

So even this “brand new” feature that will be introduced in Flash 11.7 and its “extended” profile is far from sufficient.

Pre-compilation

Some engines will use high-level shader programming languages (like Cg or GLSL) and a pre-compilation workflow to generate all the possible outcomes. Then, the right shader is loaded at runtime according to the rendering setup. This is the case of the Source Engine, created by Valve and used in famous games like Half-Life 2, Team Fortress 2 or Portal.

This solution is efficient performance wise: there is always a shader that will do exactly and strictly the required operations according to the rendering setup. Plus, it does not have to rely on the availability of specific hardware features. But pre-compilation implies a very heavy and inefficient assets workflow.

Minko’s Solution

We’ve seen the common workarounds, and each of them has very strong cons. The most robust implementation seems to be the pre-compilation option despite the obvious workflow issues. Especially when we’re talking web/mobile applications! But the past 10 years have seen the rise of a technique that could solve this problem: Just In Time (JIT) compilation. This technique is mostly used by virtual machines – such as the JVM (Java Virtual Machine), the AVM2 (ActionScript Virtual Machine) or V8 (Chrome’s JavaScript virtual machine). Its purpose is to compile the virtual machine bytecode into actual machine opcodes at runtime in order to get better performance.

How would the same principle apply to shaders? If you consider your application as the VM and your shader code as this VM execution language, then it all falls into place! Indeed, your 3D application could simply compile some higher level language shader code into actual machine shader code according to the available data. For example, some shader might compile differently according to whether lighting is enabled or not or even according to the number of lights.

With Minko, we tried to keep it as simple as possible. Therefore, we worked very hard to find a way to be able to write shaders using AS3. As the figure above explains, the AS3 shader code you write is not executed on the GPU (because that’s simply not possible). Instead, the application acts as a virtual machine, and as it gets executed at runtime, this AS3 shader code transparently generates what we call an Abstract Shader Graph (ASG). You can see it as an Abstract Syntax Tree for shaders (you can even ask Minko to output ASGs in the console as they get generated using a debug flag). This ASG is then optimized and compiled into actual shader code for the GPU.

For example: every time you call the add() method in your AS3 shader code, it will create a corresponding ASG node. This very node will be linked with the rest of the ASG as you use it in other operations until it is finally used as the result of the shader. This result node becomes the “entry point” of the ASG.
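
Here is a hedged sketch of what this looks like in practice (the method and property names below are illustrative assumptions, not Minko’s exact shader API):

```actionscript
override protected function getPixelColor() : SValue
{
    // "diffuseColor" creates an ASG node instead of computing a value
    var color : SValue = diffuseColor;

    // this condition is evaluated on the CPU when the ASG is built: it
    // decides which nodes end up in the graph, it is not a GPU branch
    if (lightingEnabled)
        color = add(color, computeLighting());

    return color;
}
```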

Here is what a very simple ASG that just handles a solid flat color rendering looks like:

Here is what a (complicated) ASG that handles multiple lights looks like:

Your AS3 shader code is executed at runtime on the CPU to generate this ASG, which will be compiled into actual shader code that will run on the GPU (in the case of Flash, it will actually output AGAL bytecode that will be translated into shader machine code by the Flash Player). As such, you can easily use “if” statements that will shape the ASG. You can even use loops, functions and OOP! You just have to make sure the shader is re-evaluated any time the output might be different (for example, when the condition tested in an “if” changes). But that’s for another time…

Using JIT shaders, Minko can efficiently and dynamically compile shaders shaped by the actual rendering settings occurring at runtime. Thus, it combines the high performance of a pre-compilation solution with all the flexibility of JIT compilation. In my next articles, I will explain how JIT shader compilation can be efficiently automated and how multi-pass rendering can also be made more efficient thanks to this approach.

If you have questions, hit the comments or post in the Minko category on Aerys Answers!

3D Matrices Update Optimization

4×4 matrices are the heart of any 3D engine as far as math is concerned. And in any engine, how those matrices are computed and made available through the API are two critical points regarding both performance and ease of development. Minko was quite generous regarding the second point, making it easy and simple to access and watch local-to-world (and world-to-local) matrices on any scene node. Yet, the update strategy of those matrices was… naïve, to say the least.

TL;DR

There is a new 3D transforms API available in the dev branch that provides a 25000% boost on scene nodes’ matrices update in the best cases, making it possible to display 25x more animated objects. You can read more about the changes on Answers.


New Minko Feature: ByteArray Streams

I’ve just pushed my work for the past few weeks to GitHub and it’s a major update. But in the best case, most of you should not have to change a single line of code. The two major changes are the activation of frustum culling – which now works perfectly well – and the use of ByteArray objects to store vertex/index streams data.

Why are we using ByteArray instead of Vector?

As you might know, Number is the equivalent of the “double” data type and, as such, Numbers are stored on 64 bits. As 32 bits is all a GPU can handle regarding vertex data, this is a big waste of RAM. Using ByteArray makes it possible to store floats as floats and avoid any memory waste. The same goes for indices stored in uint when they are actually shorts.
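
To illustrate the size argument with plain AS3 (the little-endian byte order is an assumption):

```actionscript
import flash.utils.ByteArray;
import flash.utils.Endian;

// 3 vertices (x, y, z) stored as 32-bit floats: 36 bytes, where the same
// data in a Vector.<Number> would occupy 72 bytes of 64-bit doubles
var vertices : ByteArray = new ByteArray();

vertices.endian = Endian.LITTLE_ENDIAN;
for each (var component : Number in [0, 0, 0, 1, 0, 0, 0, 1, 0])
    vertices.writeFloat(component);
```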

Another important optimization is the GPU upload. Using Number or uint requires the Flash Player to re-interpret every value before upload: each 64-bit Number has to be turned into a 32-bit float, each 32-bit uint has to be turned into a 16-bit short. This process is slow by itself, but it also prevents the Flash Player from simply memcopying the buffers into the GPU data. Thus, using ByteArray should really speed up the upload of the streams data to the GPU and make it as fast as possible. This difference should be even bigger on low-end and mobile devices.

Finally, it also makes it a lot faster to load external assets because it is now possible to memcopy chunks of binary files directly into vertex/index streams. It should also prove to be very, very useful for a few exclusive – and quite honestly truly incredible – features we will add in the next few months.

What does it change for you?

If you’ve never played around with the vertex/index streams’ raw data, it should not change a single thing in your code. For example, iterators such as VertexIterator and TriangleIterator will keep working just the way they did. A good example of this is the TerrainExample, which runs just fine without a single change.

If you are relying on VertexStream.lock() or IndexStream.lock(), you will find that those methods now return a ByteArray instead of a Vector. You should update your code accordingly. If you want to see a good example of ByteArray manipulations for streams, you can read the code of the Geometry.fillNormalsData() and Geometry.fillTangentsData() methods.

What’s next?

This and some recent additions should make it much easier to keep streams data in RAM without wasting too much memory, and to be able to restore it on device context loss. It’s not implemented yet, but it’s a good first step on this complicated memory management path.

Another possible feature would be to store streams data in compressed ByteArray objects. As LZMA compression is now available, it could save a lot of memory. The only price to pay would be having to uncompress the data before being able to read/write it.

Tutorial: Add pixel-perfect 3D mouse interactivity

In this tutorial we’re going to see how you can add pixel-perfect 3D mouse interactivity. I’ve already introduced a technique called “ray casting” in another article. But it works only with very basic static shapes. And sometimes, testing very complex shapes can be very painful performance wise. It’s even more expensive when you want it to be very precise.

In this article, we will see a technique called “pixel picking”. This technique uses hardware acceleration to provide pixel-perfect mouse interactivity. It works very well for both static and animated models. The concept is very simple: we render the scene with one color per mesh. Then, we just have to get the pixel under the mouse cursor to know what mesh is “interactive”. Of course, things are much more complicated in real life: this kind of stunt is pretty hard to pull off properly in a general-purpose rendering pipeline.

But Minko provides everything required out of the box! Even better, the minko-picking extension features a simple controller – the PickingController – that provides all the mouse signals we might need! This tutorial will explain how to setup the PickingController and listen for the mouse signals.


Pixel picking test application (sources)

Create and setup the PickingController

The first step is to instantiate a new PickingController:
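
A minimal sketch (the exact constructor signature is an assumption; the argument is the picking rate described below):

```actionscript
var picking : PickingController = new PickingController(30);
```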

The constructor takes only one argument: the “picking rate” of the controller. This value will determine how many times per second the controller will try to execute the picking pass and the relevant mouse signals. The lower the picking rate, the better the performance. A picking rate of 30 should be more than enough for 99% of applications. You can also set that value at any time using the PickingController.pickingRate property:
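
For example (assuming the property is a plain Number):

```actionscript
picking.pickingRate = stage.frameRate * 0.5;
```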

Setting the picking rate to half of the frame rate will work just fine for most applications and should be completely painless performance wise. By default, the picking rate is set to 15.

Set the mouse events source

The job of the PickingController is to listen for the mouse events on one (or more) specific dispatcher(s) and re-dispatch them as mouse signals. The difference between the original events and the signals executed by the PickingController is that the signals are aware of the 3D scene. To set up the dispatcher to listen to, you just have to call PickingController.bindDefaultInputs() and provide the IDispatcher object to listen to:
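
Using the stage as the event source is an assumption in this sketch:

```actionscript
picking.bindDefaultInputs(stage);
```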

Setup the PickingController on the 3D scene

In most cases, you don’t want the whole 3D scene to be mouse interactive. Sometimes it’s just a Mesh or a Group. The PickingController can be added to any Mesh/Group, so it’s easy to target precisely what is interactive and what is not. The basic use case is to add mouse interactivity on a single Mesh:
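
A sketch, assuming an addController() entry point on scene nodes:

```actionscript
mesh.addController(picking);
```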

But you might also want to listen for the mouse signals triggered by a whole sub-scene instead of a single mesh. For example, some skinned 3D assets have multiple meshes animated by a single skeleton. To do this, we can add the PickingController on a Group:
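
Assuming the same addController() entry point:

```actionscript
group.addController(picking);
```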

In the code snippet above, the PickingController will execute mouse signals for all the Mesh descendants of the target group. You don’t have to worry about the descendants of the groups targeted by a PickingController: it will listen for the Group.descendantsAdded and Group.descendantsRemoved signals to start/stop tracking any descendant Mesh added to this part of the scene.

Thus, if your whole 3D scene is interactive, you can add the PickingController directly on the Scene node:
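
Same pattern, on the Scene node itself:

```actionscript
scene.addController(picking);
```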

Listen for the mouse signals

To catch 3D mouse events, you just have to add callback(s) to any of the PickingController.mouse* signals. The available signals are:

  • mouseClick, mouseDown, mouseUp: executed when the left button is clicked, down or up
  • mouseRightClick, mouseRightDown, mouseRightUp: executed when the right button is clicked, down or up
  • mouseMiddleClick, mouseMiddleDown, mouseMiddleUp: executed when the middle button is clicked, down or up
  • mouseDoubleClick: executed when the user makes a double click
  • mouseMove: executed when the mouse moves
  • mouseWheel: executed when the mouse wheel turns
  • mouseRollOver, mouseRollOut: executed when the mouse rolls over/out of a mesh

The following code sample will catch the left and the right click signals:
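
A sketch; the exact callback signature is an assumption, based only on the mesh : Mesh argument described below:

```actionscript
picking.mouseClick.add(function(mesh : Mesh) : void
{
    trace("left click on:", mesh);
});
picking.mouseRightClick.add(function(mesh : Mesh) : void
{
    trace("right click on:", mesh);
});
```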

It would be too difficult to use the PickingController if the mouse signals were triggered only when an actual 3D object is under the cursor. For example, it would be pretty hard to select/unselect objects without listening to some actual 2D mouse events. The code would then quickly become very complicated, mixing both 2D mouse events and 3D mouse signals.

Therefore, the mouse signals are triggered whenever the corresponding mouse event is dispatched (and when the picking rate allows it of course). As a direct consequence, the mesh : Mesh argument is null when there is no actual interactive 3D object under the mouse cursor.

Conclusion

You can find the complete source code of the picking example demo in the minko-examples repository on GitHub. If you have questions/suggestions regarding this tutorial, you can ask them in the comments or on Aerys Answers, the official support forum for Minko.

Tutorial: your first mobile 3D application with Minko

As you already know I’m sure, you can build Android and iOS applications with the Flash platform. And Stage3D is also available on those devices! As a matter of fact, Stage3D was especially designed to work on mobiles. And so was Minko! We put a lot of effort into building a robust and fast engine that will work on most mobile devices. This tutorial will start where the “Your first Minko application” tutorial stopped and explain what needs to be done to get it working on mobile.

Create your mobile project

The first thing to do is – of course – create a mobile project. With Flash Builder it is very simple: you just have to go into File > New > ActionScript Mobile Project. If you need a little reminder of how to bootstrap your project/development environment, you can read the “Getting started with Minko” tutorial. The only difference compared to creating a desktop/web application is to uncheck “BlackBerry Tablet OS” in the Mobile Settings panel: Stage3D is not yet available on BlackBerry devices. There is an issue opened on the BlackBerry tracker if you want to vote for it!

Configure the application

Now that our project has been created, we just have to make sure it can use the Stage3D API. It implies two little changes in the app.xml file (this file is named after your main class; most of the time it’s Main-app.xml):

  1. renderMode has to be set to “direct”
  2. depthAndStencil has to be set to “true”

Here is a basic example of a properly set up app.xml file for AIR 3.2:
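
Only the renderMode and depthAndStencil settings come from this tutorial; the rest of this descriptor is a minimal skeleton with assumed id/filename values:

```xml
<?xml version="1.0" encoding="utf-8"?>
<application xmlns="http://ns.adobe.com/air/application/3.2">
    <id>com.example.MyFirstMobileApp</id>
    <versionNumber>1.0.0</versionNumber>
    <filename>Main</filename>
    <initialWindow>
        <content>Main.swf</content>
        <renderMode>direct</renderMode>
        <depthAndStencil>true</depthAndStencil>
    </initialWindow>
</application>
```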

Bootstrap the Main class

That’s the beauty of the Flash platform, Stage3D and Minko: the project bootstrap aside, the code of the application is exactly the same whether you are working on a desktop, web or mobile application! Therefore, you can bootstrap your Main class by following the “Your first Minko application” tutorial!

Basically, you just have to copy/paste the MinkoApplication sample class…

… and make your Main class extend it:
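
A minimal sketch of the resulting class:

```actionscript
package
{
    public class Main extends MinkoApplication
    {
    }
}
```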

Run your mobile application for the first time

If you use Flash Builder, it will display the Debug Configurations panel when you try to run/debug your mobile application for the first time. This panel does not have anything special regarding Stage3D or Minko, but it’s still a good thing to see the basics! There are two important fields on the panel:

  1. The “Target platform” field will specify what device you want to target for this debug session.
  2. The “Launch method” field will specify whether you want to run the application in the desktop device emulator or directly on the device. Of course, the “On device” method is better if you want to have a preview of the actual performance.

Display your first 3D object

Now that our project is set up and that we can launch it on the device or in the emulator, we will display our first 3D object. You just have to follow the “Display your first 3D object” tutorial for your mobile project. Here is what you’ll get if you choose to run it on the desktop emulating the iPhone 4 device:

You can also directly download the sources for this project!

If you have questions/suggestions regarding this tutorial, please post in the comments or on Aerys Answers, Minko’s official support forum.

Minko Weekly Roundup #1

Updates are committed every day. Demos are starting to pop up from third-party developers. And I clearly don’t have enough time to write an article about each of them! So I got the idea to write little summaries of what happened during the (past few) week(s). Here we go!

Demos

Smooth shadows

We’ve been working a lot to give the user more control over the shadow quality. One of the options now involves shadow smoothing. This feature is available on all lights but the PointLight for now:

Click to view the live shadow smoothing demo

This new feature and the corresponding examples should be available in the public repository next week.

Points/particles rendering

minko-examples has been updated with a points/particles rendering example. The code includes both the geometry and the shader required to draw massive amounts of particles. It also demonstrates how one can build simple animations directly on the GPU:

Click on the picture to launch the PointsExample app.

Yellow Submarine

A little demo done by Jérémie Sellam (@chloridrik), developer at the “Les Chinois” interactive agency in Paris, France. The demo mixes my terrain generation example, texture splatting, points rendering and a custom displacement shader to simulate an underwater trip in control of a yellow submarine:

The submarine model was imported and customized using Minko Studio. In a few minutes, Jérémie was able to import the original Collada asset, customize it with alpha blending and environment mapping and export an optimized compressed MK file.

Color Transition Shader

Another great work from Jérémie Sellam, who implemented a very nice transition effect using nothing more than the public beta of the ShaderLab:

If you cannot run this demo, there is a video of this nice color transition shader on Youtube.

Answers

Tutorials

Features

  • Support for multiple shadows in Minko Studio.
  • New geometry primitives: ConeGeometry and TorusGeometry
  • Normals flipping: you can now flip (= multiply by -1) the normals (and tangents) of a geometry by calling Geometry.flipNormals(). We will soon add an IndexStream.invertWinding() method to be able to fully turn any shape inside out without bugging the shaders that might rely on the normals/tangents.
  • Merging geometries: you can now merge two Geometry objects. Used along with Geometry.applyTransform(), it makes it very easy to merge any static objects.
  • Disposing local geometry data: you can now dispose the entire geometry data (IndexStream + all VertexStreams) with a single call to Geometry.disposeLocalData().
  • New Matrix4x4 methods: Matrix4x4.setColumn(), Matrix4x4.getColumn(), Matrix4x4.getRow() and Matrix4x4.setRow().

Fixes

Tutorial: Display your first 3D object with Minko

Now that we’ve seen how to bootstrap an empty Minko application, it’s time to learn how to display a simple 3D primitive.

Step 1: The Camera

In order to display anything 3D, we will need a camera. In Minko, cameras are represented by the Camera scene node class. The following code snippet creates a Camera object and adds it to the scene:
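
A sketch, assuming the viewport and scene objects from the previous tutorial and that the Camera constructor takes the aspect ratio:

```actionscript
var camera : Camera = new Camera(viewport.width / viewport.height);

scene.addChild(camera);
```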

By default, the camera is in (0, 0, 0) and looks toward the Z axis. We must remember this when we add our 3D object to the scene: we must make sure it’s far enough on the Z axis to be visible!

Step 2: The Cube

A Mesh is a 3D object that can be rendered on the screen. It is some kind of 3D equivalent of the Shape class used by Flash for 2D vector graphics. As such, it is made of two main components:

  1. a Geometry object containing the triangles that will be rendered on the screen
  2. a Material object defining how that very geometry should be rendered

Creating a Mesh involves passing those two objects to the Mesh constructor:
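
A sketch under one assumption: whether CubeGeometry is instantiated directly or fetched from a shared instance may differ in the actual framework:

```actionscript
var material : BasicMaterial = new BasicMaterial();

// RGBA: an opaque blue cube
material.diffuseColor = 0x0000ffff;

var cube : Mesh = new Mesh(new CubeGeometry(), material);
```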

There are many primitives available as pre-defined geometry classes in Minko: cube, sphere, cylinder, quad, torus… Those classes are in the aerys.minko.render.geometry.primitive package. You can easily swap the CubeGeometry with a SphereGeometry to create a sphere instead of a cube, for example.

The BasicMaterial is the material provided by default with Minko’s core framework. It’s a simple material that can render using a solid color or a texture. Here, we use it with a simple color. To do this, we simply set the BasicMaterial.diffuseColor property to the color we want to use, in RGBA format.

Remember: the camera is in (0, 0, 0) and – by default – so is our cube. Therefore, we have to slightly translate our cube on the Z axis to make sure it’s in the field of view of the camera:
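
A sketch; the exact translation method name on the transform is an assumption:

```actionscript
// push the cube 5 units away from the camera, along the Z axis
cube.transform.appendTranslation(0, 0, 5);
```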

We will introduce 3D transformations in detail in the next tutorial.

Conclusion

To make it simple, our main class will extend the MinkoApplication class detailed at the end of the previous tutorial. We will simply override its initializeScene() method to create our cube and our camera and add both of them to the scene:
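
Here is a hedged sketch of what it could look like, assuming MinkoApplication exposes scene and viewport members as in the previous tutorial:

```actionscript
public class MyFirstCubeApplication extends MinkoApplication
{
    override protected function initializeScene() : void
    {
        var camera : Camera = new Camera(viewport.width / viewport.height);
        var material : BasicMaterial = new BasicMaterial();

        material.diffuseColor = 0x0000ffff;

        var cube : Mesh = new Mesh(new CubeGeometry(), material);

        // move the cube into the camera's field of view
        cube.transform.appendTranslation(0, 0, 5);

        scene.addChild(camera);
        scene.addChild(cube);
    }
}
```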

And here is what you should get:

If you have questions or suggestions, you can post in the comments or on Aerys Answers!

Tutorial: Your first Minko application

In this tutorial we will see how to create your first scene with Minko. At the end of this tutorial, you will have nothing but a colored rectangle. Before you follow this tutorial, it is recommended to read the “Getting started with Minko 2” article in order to learn how to set up your programming environment.

Creating the Viewport

Instantiating a new Viewport object

The first step before rendering anything is to have a rendering area. In Minko, this rendering area is called the “viewport” and is represented by a Viewport object. The viewport can be seen as the middle-man between the classic 2D display list and the hardware accelerated 3D rendering. Indeed, the Viewport class extends the Sprite class, so it will behave like any other rendering element of the display list: it has an (x, y) position, a width, a height, etc…

Creating the viewport is really simple:
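
A minimal sketch, using the default values described below:

```actionscript
var viewport : Viewport = new Viewport();
```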

The Viewport constructor accepts the following arguments:

  • antiAliasing : uint, the anti-aliasing level to use when rendering in this viewport; this value can be 0, 2, 4 or 8 and the default value is 0
  • width : uint, the width of the viewport; the default value is 0 to make the viewport fit its parent width automatically
  • height : uint, the height of the viewport; the default value is 0 to make the viewport fit its parent height automatically

There are a few things to remember about a viewport though:

  • The viewport can only be behind or in front of all the other elements in the display list. This is because of a technical limitation of the Stage3D API. To make the viewport visible in front, you should set the Viewport.alwaysOnTop property to true.
  • If the viewport is set to resize itself automatically according to its parent’s size (i.e. the Viewport was built with width == height == 0), then you have to make sure its parent actually has a size different from 0.

The following code snippet will create a 640×480 viewport with 4x anti-aliasing and move it in (100, 200):
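
Following the constructor arguments listed above (antiAliasing, width, height):

```actionscript
var viewport : Viewport = new Viewport(4, 640, 480);

viewport.x = 100;
viewport.y = 200;
```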

Adding the viewport to the display list

Just like any DisplayObject, the Viewport must be added to the stage to be visible. As it behaves like any other DisplayObject, you can simply use the addChild() method to add it to the display list:
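
For example, adding it directly to the stage:

```actionscript
stage.addChild(viewport);
```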

The viewport can be added to any DisplayObjectContainer; just make sure its parent has a proper width and height if you are working with an automatically resized Viewport.

Rendering into the viewport

As you can see, even with the viewport added to the Stage, there is no visual change. That’s because the viewport is empty as long as we don’t use it to render a scene. Now that we have a rendering area, we should render something in it! For now, we will just create an empty scene and render it in this viewport:
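
A hedged sketch: the Scene.render(viewport) call and the ENTER_FRAME render loop are assumptions:

```actionscript
var viewport : Viewport = new Viewport();
var scene : Scene = new Scene();

stage.addChild(viewport);
stage.addEventListener(Event.ENTER_FRAME, function(event : Event) : void
{
    scene.render(viewport);
});
```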

This code snippet creates a new Viewport and a new Scene object. Then, it adds the Viewport to the Stage and renders the Scene in that very Viewport. The immediate consequence is that our viewport will now be filled with black. Our viewport is completely black because we just rendered an empty scene and the default background color of the Viewport is black.

Manipulating the Viewport

Setting the background color

You can change the background color of the viewport by setting the Viewport.backgroundColor property. This property holds the background color of the viewport in the RGBA format:
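
For example:

```actionscript
// white, in RGBA format (the alpha byte is currently ignored)
viewport.backgroundColor = 0xffffffff;
```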

The alpha component of the background color is not used for now and is here only for forward compatibility.

Resizing the viewport

You can resize the viewport by setting the Viewport.width and Viewport.height properties:
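
For example:

```actionscript
viewport.width = 800;
viewport.height = 600;
```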

Every time you set the Viewport.width or the Viewport.height property, the Viewport.resized signal is executed. Thus, in order to avoid executing unnecessary signals when you want to set both the width and the height of the viewport, it is recommended to use the Viewport.resize() method:
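
Assuming a resize(width, height) signature:

```actionscript
viewport.resize(800, 600);
```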

This way, the Viewport.resized signal will be executed only once at the end of the Viewport.resize() method when the viewport has been successfully resized.

Moving the viewport

You can move the viewport using the Viewport.x and Viewport.y properties. It will behave just like any other DisplayObject element: the final position of the viewport is affected by the transformation applied by its parents. Thus, if you add the Viewport in a Sprite and if you move that Sprite, the viewport will move as well.

Conclusion

The following code sample describes the basic structure of a main class used to create a new Minko application:
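
Here is a hedged sketch of such a class (the Minko package paths are assumptions and may differ; the render loop matches the pattern used earlier in this tutorial):

```actionscript
package
{
    import flash.display.Sprite;
    import flash.events.Event;

    import aerys.minko.render.Viewport;   // assumed package path
    import aerys.minko.scene.node.Scene;  // assumed package path

    public class MinkoApplication extends Sprite
    {
        protected var viewport : Viewport = new Viewport();
        protected var scene    : Scene    = new Scene();

        public function MinkoApplication()
        {
            // as the document class, we can rely on "stage" right away
            stage.addChild(viewport);
            initializeScene();
            addEventListener(Event.ENTER_FRAME, enterFrameHandler);
        }

        protected function initializeScene() : void
        {
            // to be overridden: create cameras, meshes, lights…
        }

        private function enterFrameHandler(event : Event) : void
        {
            scene.render(viewport);
        }
    }
}
```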

You can re-use this class as your main class every time you want to create a new 3D app!