From ActionScript 3 to C++ 2011

During the last Flash Online conference, I had the chance to share the latest work I’ve been involved in at Aerys with the rest of the Minko team. We’ve been working a lot on the next major version because we really want it to be a game changer for 3D on mobiles and the web.

You can read the original announcement for more details. But the big picture is that Minko is going to support WebGL. To introduce this new major feature we’ve created a first technical demonstration:

To do this, we are completely rewriting Minko using C++ 2011. This new version will include bindings for ActionScript 3 (and obviously JavaScript too). So if you’re an AS3 developer: do not panic! You’ll still be able to leverage your AS3 skills with Minko. Yet if you want to learn new tricks, now would be a good time, and C++ is a good choice.

To understand the process of working with C++ code targeting the Flash platform and HTML5/Javascript, you can start by reading my slides:

To help AS3 developers migrate to C++, I’ve decided to start gathering resources here on this very blog. If you are interested you can start by:

If you have suggestions about what you need to know regarding C++, especially cross-compilation targeting the Flash platform or JavaScript, please let me know!

Stage3D Online Conference Slides

It was really awesome to be invited to talk about Minko today during the Stage3D online conference organized by Sergey Gonchar. He has done an excellent job in organizing this and I hope people enjoyed attending it as much as I enjoyed being a part of it.


You can watch the entire conference here.

As I promised, here are the slides from this presentation. They are pretty heavy because they embed some videos. Here is the outline of the presentation:

  • Community SDK
    • Scripting
    • Shaders and GPU programming
    • Scene editor
  • Professional SDK
    • Physics
    • Optimizations for the web and mobile devices

At the end of the presentation, I also demonstrated how Minko can load and display Crytek’s Sponza smoothly on the iPad and the Nexus 7 in just a few minutes of work, thanks to the editor and the optimizations granted by the MK format. You will soon hear more about this very demonstration, with a clean video showing the app but also the publishing process. This is incredibly cool since Sponza is quite a big scene with more than 50 textures, including normal maps, alpha maps and specular maps, for a total of 200+MB (only 70MB when published to MK).

Don’t forget to have a look at all the online resources for Minko:

As stated in the presentation, Minko’s editor public beta should start next week. So stay tuned!

Anamorphic Lens Flare

Update: I’ve just pushed a new SWF with a much-improved effect. I’ve tweaked things like the number of vertical/horizontal blur passes – which are now up to 3/6 – but also the flares’ brightness, contrast and dirt texture. I think it looks way better now!

Tonight’s experiment was focused on post-processing. My goal was to implement a simple anamorphic lens flare post-processing effect using Minko. It was actually quite simple to do. Here is the result:


The 1st pass applies a luminance threshold filter:

Then I use a multipass Gaussian blur with 4 passes: 3 horizontal passes and 1 vertical pass. The trick is to apply those 5 passes (1 luminance threshold pass + 4 blur passes) on a texture which is a lot taller than it is wide (32×1024 in this case). This way, everything gets stretched when the flares are composited with the rest of the backbuffer.
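The passes above are AGAL shaders in practice; as a rough CPU-side illustration of what the first pass computes, here is a luminance threshold sketched in TypeScript (the Rec. 601 luma weights and function names are my own, not Minko’s):

```typescript
// CPU-side sketch of the luminance-threshold pass: every pixel whose
// perceived brightness falls below the threshold is zeroed out, so only
// the bright spots survive and get blurred into flares by later passes.
type RGB = [number, number, number];

function luminance([r, g, b]: RGB): number {
  // Standard Rec. 601 luma weights.
  return 0.299 * r + 0.587 * g + 0.114 * b;
}

function luminanceThreshold(pixels: RGB[], threshold: number): RGB[] {
  return pixels.map(p => (luminance(p) >= threshold ? p : ([0, 0, 0] as RGB)));
}
```

The same filter runs per-fragment on the GPU, of course; the point is only to show which pixels survive into the blur chain.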

JIT Shaders For Better Performance

The subject is really vast and complex and I’ve been trying to write an article about this for quite some time now. Recently, I made a small patch to enhance this technique and I thought it was a good occasion to try to summarize how it works and the benefits of it. In order to talk about this new enhancement, I would like to draw the big picture first.

The Problem

That might look like a complicated post title… but this is complex rather than complicated. Here is how it starts: rendering a 3D object requires executing a graphics rendering program – or “shader” – on the GPU. To make it simple, let’s just say this program will compute the final color of each pixel on the screen. Thus, the operations performed by this shader will vary according to how you want your object to look. For example, rendering with a solid flat color requires different operations than rendering with per-pixel lighting.

Any programming beginner will understand that such a program will test conditions – for example, whether to use lighting or not – and perform some operations according to the result of this test. Yes: that’s pretty much exactly what an “if” statement is. It might look like programming trivia to you. And it would be, if this program was not meant to be executed on the GPU…

You see, the GPU does not like branching. Not one bit (literally)! For the sake of parallelization, the GPU expects the program to have a fixed number of operations. This is the only efficient way to ensure computations can be distributed over a large number of pipelines without having to care too much about their synchronization. Thus, the GPU does not know branching and each program has a fixed number of instructions that will always be executed in the same order.

Conclusion: shader programs cannot use “if” statements. And of course, loops are out of the game too since they are pretty much pimped out “if” statements. Can you imagine what such logic would imply on your daily programming tasks? If you simply try to, you will quickly understand that instead of writing one program that can handle many different situations you will have to write many different programs that will handle a single situation. And then manually choose which one should be launched according to your initial setup…



The simplest workaround is to find “some way” to make sure useless computations do not affect the actual rendering operations. For example, you can “virtually disable” lighting by setting all lights’ diffuse/specular components to 0.

As you can imagine, this is really a suboptimal option. Performance-wise, it’s actually the worst possible idea: a lot of computations happen and most of them are likely to be useless in most cases.

If/else shader intrinsic instructions

After a few years, shaders evolved and featured more and more instructions. Those instructions are now usable through higher level languages such as CG or GLSL. Those languages feature “if” statements (and even loops too). How are they compiled into shader code that can run on a GPU? Do they overcome the challenges implied by parallelization?

No. They actually fit in in a very straightforward and simple way. As a shader program must feature a single fixed list of instructions, the two parts of an if/else statement will both be executed. The hardware will then decide which one should be muted according to the actual result of the test performed by the conditional instructions.

The bright side is that you can use this technique to have a single shader program that handles multiple scenarios. The dark side is that this shader is still very inefficient and might eventually exceed the instruction count limit for a single program. On some older hardware, the corresponding hardware instructions simply do not exist…
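To make the “muting” concrete, here is a TypeScript sketch of how an if/else collapses into branchless arithmetic: both results are always computed, and a 0/1 mask picks the winner (a hand-written illustration, not actual shader code):

```typescript
// Both branches are evaluated; a 0/1 mask derived from the condition
// decides which result is kept. This is what "if" boils down to on
// GPUs without real branching.
function step(edge: number, x: number): number {
  return x >= edge ? 1 : 0; // the only "test" is folded into a mask
}

function branchlessSelect(cond: number, ifTrue: number, ifFalse: number): number {
  const mask = step(0.5, cond);                // 1 when cond >= 0.5, else 0
  return ifTrue * mask + ifFalse * (1 - mask); // both sides were computed anyway
}
```

Notice that `ifTrue` and `ifFalse` both had to be fully evaluated before the selection happens, which is exactly why this stays inefficient.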

So even this “brand new” feature that will be introduced in Flash 11.7 and its “extended” profile is far from sufficient.


Some engines will use high level shader programming languages (like CG or GLSL) and a pre-compilation workflow to generate all the possible outcomes. Then, the right shader is loaded at runtime according to the rendering setup. This is the case of the Source Engine, created by Valve and used in famous games like Half-Life 2, Team Fortress 2 or Portal.

This solution is efficient performance-wise: there is always a shader that will do exactly and strictly the required operations according to the rendering setup. Plus, it does not have to rely on the availability of specific hardware features. But pre-compilation implies a very heavy and inefficient assets workflow.
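A sketch of what such a pre-compiled variant system boils down to, written in TypeScript with made-up names (the Source Engine’s real system is of course far more elaborate):

```typescript
// Every combination of rendering options maps to a pre-compiled shader
// blob. The key space grows combinatorially with the number of options,
// which is exactly the workflow problem described above.
interface RenderSettings {
  lighting: boolean;
  numLights: number;
  normalMapping: boolean;
}

function settingsKey(s: RenderSettings): string {
  return `lit=${s.lighting}|n=${s.numLights}|nm=${s.normalMapping}`;
}

const shaderCache = new Map<string, string>();
// Pretend these strings are compiled shader binaries produced offline.
shaderCache.set(settingsKey({ lighting: false, numLights: 0, normalMapping: false }), "flat.bin");
shaderCache.set(settingsKey({ lighting: true, numLights: 1, normalMapping: false }), "lit1.bin");

function pickShader(s: RenderSettings): string {
  const shader = shaderCache.get(settingsKey(s));
  if (!shader) throw new Error("no pre-compiled variant for these settings");
  return shader;
}
```

Every new option doubles (or worse) the number of binaries to build and ship, which is what makes the asset workflow so heavy.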

Minko’s Solution

We’ve seen the common workarounds and each of them has very strong cons. The most robust implementation seems to be the pre-compilation option despite the obvious workflow issues. Especially when we’re talking web/mobile applications! But the past 10 years have seen the rise of a technique that could solve this problem: Just In Time (JIT) compilation. This technique is mostly used by Virtual Machines – such as the JVM (Java Virtual Machine), the AVM2 (ActionScript Virtual Machine) or V8 (Chrome’s JavaScript virtual machine). Its purpose is to compile the virtual machine bytecode into actual machine opcodes at runtime in order to get better performance.

How would the same principle apply to shaders? If you consider your application as the VM and your shader code as this VM execution language, then it all falls into place! Indeed, your 3D application could simply compile some higher level language shader code into actual machine shader code according to the available data. For example, some shader might compile differently according to whether lighting is enabled or not or even according to the number of lights.

With Minko, we tried to keep it as simple as possible. Therefore, we worked very hard to find a way to write shaders using AS3. As the figure above explains, the AS3 shader code you write is not executed on the GPU (because that’s simply not possible). Instead, the application acts as a virtual machine: as it gets executed at runtime, this AS3 shader code transparently generates what we call an Abstract Shader Graph (ASG). You can see it as an Abstract Syntax Tree for shaders (you can even ask Minko to output ASGs in the console as they get generated, using a debug flag). This ASG is then optimized and compiled into actual shader code for the GPU.

For example: every time you call the add() method in your AS3 shader code, it will create a corresponding ASG node. This very node will be linked with the rest of the ASG as you use it in other operations until it is finally used as the result of the shader. This result node becomes the “entry point” of the ASG.
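Here is a deliberately tiny TypeScript stand-in for this idea: operations build graph nodes instead of computing values, and “compiling” walks the graph to emit a flat instruction list, the way the AGAL backend would (all names are illustrative, this is not Minko’s actual API):

```typescript
// Minimal stand-in for the Abstract Shader Graph: add()/mul() create
// nodes, and compile() does a post-order walk that flattens the graph
// into a fixed instruction list.
type Node =
  | { op: "const"; value: number }
  | { op: "add" | "mul"; left: Node; right: Node };

const constant = (value: number): Node => ({ op: "const", value });
const add = (left: Node, right: Node): Node => ({ op: "add", left, right });
const mul = (left: Node, right: Node): Node => ({ op: "mul", left, right });

function compile(root: Node, out: string[] = []): string[] {
  if (root.op === "const") {
    out.push(`load ${root.value}`);
  } else {
    compile(root.left, out);
    compile(root.right, out);
    out.push(root.op);
  }
  return out;
}

// Host-side "if": it shapes the graph at build time and never has to
// exist in the GPU program at all.
function makeColor(useLighting: boolean): Node {
  const base = constant(0.5);
  return useLighting ? mul(base, constant(0.8)) : base;
}
```

The `useLighting` test runs on the CPU while the graph is built, so the compiled instruction list contains no branch at all: each variant is its own minimal program.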

Here is what a very simple ASG that just handles a solid flat color rendering looks like:

Here is what a (complicated) ASG that handles multiple lights looks like:

Your AS3 shader code is executed at runtime on the CPU to generate this ASG, which will be compiled into actual shader code that will run on the GPU (in the case of Flash, it will actually output AGAL bytecode that will be translated into shader machine code by the Flash Player). As such, you can easily use “if” statements that will shape the ASG. You can even use loops, functions and OOP! You just have to make sure the shader is re-evaluated any time the output might be different (for example, when the condition tested in an “if” changes). But that’s for another time…

Using JIT shaders, Minko can dynamically compile efficient shaders shaped by the actual rendering settings occurring at runtime. Thus, it combines the high performance of a pre-compilation solution with all the flexibility of JIT compilation. In my next articles, I will explain how JIT shaders compilation can be efficiently automated and how multi-pass rendering can also be made more efficient thanks to this approach.

If you have questions, hit the comments or post in the Minko category on Aerys Answers!

3D Matrices Update Optimization

4×4 matrices are the heart of any 3D engine as far as math is concerned. And in any engine, how those matrices are computed and made available through the API are two critical points regarding both performance and ease of development. Minko was quite generous regarding the second point, making it easy and simple to access and watch local-to-world (and world-to-local) matrices on any scene node. Yet, the update strategy of those matrices was… naïve, to say the least.


There is a new 3D transforms API available in the dev branch that provides a 25000% boost on scene nodes’ matrices update in the best cases, making it possible to display 25x more animated objects. You can read more about the changes on Answers.


New Minko Feature: ByteArray Streams

I’ve just pushed my work for the past few weeks to GitHub and it’s a major update. But most of you should not have to change a single line of code. The two major changes are the activation of frustum culling – which now works perfectly well – and the use of ByteArray objects to store vertex/index streams data.

Why are we using ByteArray instead of Vector?

As you might know, Number is the equivalent of the “double” data type and, as such, Numbers are stored on 64 bits. As 32 bits is all a GPU can handle for vertex data, this is a big waste of RAM. Using ByteArray makes it possible to store floats as actual 32-bit floats and avoid any memory waste. The same goes for indices stored as uint when they are actually shorts.
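The memory difference is easy to verify; this TypeScript sketch uses typed arrays as stand-ins for a Vector.&lt;Number&gt; and a ByteArray of packed floats:

```typescript
// A Number (IEEE-754 double) takes 8 bytes; a GPU vertex float only
// needs 4. Packing vertex data as raw 32-bit floats halves memory use.
const vertexCount = 1000;
const components = 3; // x, y, z

const asDoubles = new Float64Array(vertexCount * components); // Vector.<Number>-like
const asFloats = new Float32Array(vertexCount * components);  // ByteArray-of-floats-like

const doubleBytes = asDoubles.byteLength; // 24000 bytes
const floatBytes = asFloats.byteLength;   // 12000 bytes
```

The same 2x factor applies to indices (32-bit uint versus 16-bit short), so a ByteArray-backed stream roughly halves the memory footprint of both streams.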

Another important optimization concerns the GPU upload. Using Number or uint requires the Flash Player to re-interpret every value before upload: each 64-bit Number has to be turned into a 32-bit float, and each 32-bit uint has to be turned into a 16-bit short. This process is slow by itself, but it also prevents the Flash Player from simply memcpy-ing the buffers into the GPU data. Thus, using ByteArray should really speed up the upload of the streams data to the GPU and make it as fast as possible. This difference should be even bigger on low-end and mobile devices.

Finally, it also makes it a lot faster to load external assets because it is now possible to memcpy chunks of binary files directly into vertex/index streams. It should also prove to be very useful for a few exclusive – and quite honestly truly incredible – features we will add in the next few months.

What does it change for you?

If you’ve never played around with the vertex/index streams raw data, it should not change a single thing in your code. For example, iterators such as VertexIterator and TriangleIterator will keep working just the way they did. A good example of this is the TerrainExample, which runs just fine without a single change.

If you are relying on VertexStream.lock() or IndexStream.lock(), you will find that those methods now return a ByteArray instead of a Vector. You should update your code accordingly. If you want to see a good example of ByteArray manipulations for streams, you can read the code of the Geometry.fillNormalsData() and Geometry.fillTangentsData() methods.

What’s next?

This and some recent additions should make it much easier to keep streams data in RAM without wasting too much memory, and to be able to restore it on device context loss. It’s not implemented yet, but it’s a good first step down this complicated memory management path.

Another possible feature would be to store streams data in compressed ByteArray objects. As LZMA compression is now available, it could save a lot of memory. The only price to pay would be having to uncompress the data before being able to read/write it.

Minko Weekly Roundup #1

Updates are committed every day. Demos are starting to pop up from third-party developers. And I clearly don’t have enough time to write an article about each of them! So I got the idea of writing little summaries of what happened during the past week(s). Here we go!


Smooth shadows

We’ve been working a lot to give the user more control over the shadow quality. One of the options now involves shadow smoothing. This feature is available on all lights but the PointLight for now:

Click to view the live shadow smoothing demo

This new feature and the corresponding examples should be available in the public repository next week.

Points/particles rendering

minko-examples has been updated with a points/particles rendering example. The code includes both the geometry and the shader required to draw massive amounts of particles. It also demonstrates how one can build simple animations directly on the GPU:

Click on the picture to launch the PointsExample app.

Yellow Submarine

A little demo done by Jérémie Sellam (@chloridrik), developer at the “Les Chinois” interactive agency in Paris, France. The demo mixes my terrain generation example, texture splatting, points rendering and a custom displacement shader to simulate an underwater trip in control of a yellow submarine:

The submarine model was imported and customized using Minko Studio. In a few minutes, Jérémie was able to import the original Collada asset, customize it with alpha blending and environment mapping, and export an optimized compressed MK file.

Color Transition Shader

Another great work from Jérémie Sellam, who implemented a very nice transition effect using nothing more than the public beta of the ShaderLab:

If you cannot run this demo, there is a video of this nice color transition shader on Youtube.




  • Support for multiple shadows in Minko Studio.
  • New geometry primitives: ConeGeometry and TorusGeometry
  • Normals flipping: you can now flip (= multiply by -1) the normals (and tangents) of a geometry by calling Geometry.flipNormals(). We will soon add an IndexStream.invertWinding() method to be able to fully turn any shape inside out without bugging the shaders that might rely on the normals/tangents.
  • Merging geometries: you can now merge two Geometry objects. Used along with Geometry.applyTransform(), it makes it very easy to merge any static objects.
  • Disposing local geometry data: you can now dispose the entire geometry data (IndexStream + all VertexStreams) with a single call to Geometry.disposeLocalData().
  • New Matrix4x4 methods: Matrix4x4.setColumn(), Matrix4x4.getColumn(), Matrix4x4.getRow() and Matrix4x4.setRow().
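As a sketch of what the new row/column accessors do, here is a TypeScript stand-in assuming a flat 16-element, row-major storage (the storage order is an assumption; this is not Minko’s actual implementation):

```typescript
// Minimal 4x4 matrix with row/column accessors over a flat array,
// mirroring the spirit of Matrix4x4.getRow()/setRow()/getColumn()/setColumn().
class Matrix4x4Sketch {
  constructor(public data: number[] = [
    1, 0, 0, 0,
    0, 1, 0, 0,
    0, 0, 1, 0,
    0, 0, 0, 1,
  ]) {}

  getRow(i: number): number[] {
    return this.data.slice(i * 4, i * 4 + 4);
  }

  setRow(i: number, row: number[]): void {
    for (let j = 0; j < 4; j++) this.data[i * 4 + j] = row[j];
  }

  getColumn(j: number): number[] {
    return [0, 1, 2, 3].map(i => this.data[i * 4 + j]);
  }

  setColumn(j: number, col: number[]): void {
    for (let i = 0; i < 4; i++) this.data[i * 4 + j] = col[i];
  }
}
```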


Tutorial: Display your first 3D object with Minko

Now that we’ve seen how to bootstrap an empty Minko application, it’s time to learn how to display a simple 3D primitive.

Step 1: The Camera

In order to display anything 3D, we will need a camera. In Minko, cameras are represented by the Camera scene node class. The following code snippet creates a Camera object and adds it to the scene:

By default, the camera is at (0, 0, 0) and looks down the Z axis. We must remember this when we add our 3D object to the scene: we must make sure it’s far enough along the Z axis to be visible!

Step 2: The Cube

A Mesh is a 3D object that can be rendered on the screen. It is some kind of 3D equivalent of the Shape class used by Flash for 2D vector graphics. As such, it is made of two main components:

  1. a Geometry object containing the triangles that will be rendered on the screen
  2. a Material object defining how that very geometry should be rendered

Creating a Mesh involves passing those two objects to the Mesh constructor:

There are many primitives available as pre-defined geometry classes in Minko: cube, sphere, cylinder, quad, torus… Those classes are in the aerys.minko.render.geometry.primitive package. You can easily swap the CubeGeometry with a SphereGeometry to create a sphere instead of a cube, for example.

The BasicMaterial is the material provided by default with Minko’s core framework. It’s a simple material that can render using a solid color or a texture. Here, we use it with a simple color. To do this, we simply set the BasicMaterial.diffuseColor property to the color we want, in RGBA format.
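For readers unfamiliar with the RGBA integer format, this TypeScript sketch shows how the four 8-bit channels pack into a single value (the helper is mine, not part of Minko):

```typescript
// Pack four 8-bit channels into one RGBA unsigned integer, the format
// expected by color properties such as BasicMaterial.diffuseColor.
function rgba(r: number, g: number, b: number, a: number): number {
  // >>> 0 keeps the result an unsigned 32-bit integer.
  return (((r & 0xff) << 24) | ((g & 0xff) << 16) | ((b & 0xff) << 8) | (a & 0xff)) >>> 0;
}

const opaqueRed = rgba(255, 0, 0, 255); // 0xff0000ff
```

So an opaque red is written 0xff0000ff: red first, then green, blue, and the alpha byte last.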

Remember: the camera is at (0, 0, 0) and – by default – so is our cube. Therefore, we have to slightly translate our cube along the Z axis to make sure it’s in the field of view of the camera:
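The idea can be sketched as a simple depth-range check in TypeScript (the near/far plane values and the +Z viewing convention are assumptions for illustration; the real test is done by the GPU’s frustum clipping):

```typescript
// With the camera at the origin looking down +Z, an object is only a
// candidate for rendering when its Z lies between the near and far
// clipping planes (values assumed for illustration).
const NEAR = 0.1;
const FAR = 1000;

function isInDepthRange(z: number): boolean {
  return z > NEAR && z < FAR;
}

const cubeZ = 5; // cube translated 5 units along +Z
```

A cube left at z = 0 sits on the camera itself and is clipped away, which is why the translation matters.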

We will introduce 3D transformations in detail in the next tutorial.


To make it simple, our main class will extend the MinkoApplication class detailed at the end of the previous tutorial. We will simply override its initializeScene() method to create our cube and our camera, and add both of them to the scene:

And here is what you should get:

If you have questions or suggestions, you can post in the comments or on Aerys Answers!

Tutorial: Your first Minko application

In this tutorial we will see how to create your first scene with Minko. At the end of this tutorial, you will have nothing but a colored rectangle. Before you follow this tutorial, it is recommended to read the “Getting started with Minko 2” article to learn how to set up your programming environment.

Creating the Viewport

Instantiating a new Viewport object

The first step before rendering anything is to have a rendering area. In Minko, this rendering area is called the “viewport” and is represented by a Viewport object. The viewport can be seen as the middle-man between the classic 2D display list and the hardware-accelerated 3D rendering. Indeed, the Viewport class extends the Sprite class, so it will behave like any other element of the display list: it has an (x, y) position, a width, a height, etc.

Creating the viewport is really simple:

The Viewport constructor accepts the following arguments:

  • antiAliasing : uint, the anti-aliasing level to use when rendering in this viewport; this value can be 0, 2, 4 or 8 and the default value is 0
  • width : uint, the width of the viewport; the default value is 0 to make the viewport fit its parent width automatically
  • height : uint, the height of the viewport; the default value is 0 to make the viewport fit its parent height automatically

There are a few things to remember about the viewport, though:

  • The viewport can only be behind or in front of all the other elements in the display list. This is due to a technical limitation of the Stage3D API. To make the viewport visible in front, you should set the Viewport.alwaysOnTop property to true.
  • If the viewport is set to resize itself automatically according to its parent’s size (i.e. the Viewport constructor was called with width == height == 0), then you have to make sure its parent actually has a size different from 0.

The following code snippet will create a 640×480 viewport with 4x anti-aliasing and move it to (100, 200):

Adding the viewport to the display list

Just like any DisplayObject, the Viewport must be added to the stage to be visible. As it behaves like any other DisplayObject, you can simply use the addChild() method to add it to the display list:

The viewport can be added to any DisplayObjectContainer, just make sure its parent has a proper width and height if you are working with an automatically resized Viewport.

Rendering into the viewport

As you can see, even with the viewport added to the Stage, there is no visual change. That’s because the viewport is empty as long as we don’t use it to render a scene. Now that we have a rendering area, we should render something in it! For now, we will just create an empty scene and render it in this viewport:

This code snippet creates a new Viewport and a new Scene object. Then, it adds the Viewport to the Stage and renders the Scene into that very Viewport. The immediate consequence is that our viewport will now be filled with black: we just rendered an empty scene, and the default background color of the Viewport is black.

Manipulating the Viewport

Setting the background color

You can change the background color of the viewport by setting the Viewport.backgroundColor property. This property holds the background color of the viewport in the RGBA format:

The alpha component of the background color is not used for now and is here only for forward compatibility.

Resizing the viewport

You can resize the viewport by setting the Viewport.width and Viewport.height properties:

Every time you set the Viewport.width or the Viewport.height property, the Viewport.resized signal is executed. Thus, in order to avoid executing unnecessary signals when you want to set both the width and the height of the viewport, it is recommended to use the Viewport.resize() method:

This way, the Viewport.resized signal will be executed only once at the end of the Viewport.resize() method when the viewport has been successfully resized.
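The design can be sketched in a few lines of TypeScript: each property setter fires the signal, while resize() batches both changes into a single notification (the class and signal here are simplified stand-ins, not Minko’s actual implementation):

```typescript
// Minimal signal: listeners are called whenever execute() runs.
class Signal {
  private listeners: Array<() => void> = [];
  add(listener: () => void): void { this.listeners.push(listener); }
  execute(): void { this.listeners.forEach(l => l()); }
}

// Sketch of the resize-batching behavior described above.
class ViewportSketch {
  readonly resized = new Signal();
  private _width = 0;
  private _height = 0;

  set width(value: number) { this._width = value; this.resized.execute(); }
  set height(value: number) { this._height = value; this.resized.execute(); }

  resize(width: number, height: number): void {
    this._width = width;
    this._height = height;
    this.resized.execute(); // a single notification for both changes
  }
}

let notifications = 0;
const viewport = new ViewportSketch();
viewport.resized.add(() => notifications++);

viewport.width = 640;      // fires the signal once
viewport.height = 480;     // fires it again
viewport.resize(800, 600); // fires it only once for both dimensions
```

Batching matters because resize listeners typically rebuild the backbuffer, which is far too expensive to do twice per resize.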

Moving the viewport

You can move the viewport using the Viewport.x and Viewport.y properties. It will behave just like any other DisplayObject element: the final position of the viewport is affected by the transformation applied by its parents. Thus, if you add the Viewport in a Sprite and if you move that Sprite, the viewport will move as well.


The following code sample describes the basic structure of a main class used to create a new Minko application:

You can re-use this class as your main class every time you want to create a new 3D app!

New Minko 2 Features: Normal Mapping And Parallax Mapping

One of Aerys’ engineers – Romain Giliotte – is the most active developer on Minko. He is the one behind the JIT shaders compiler, the Collada loader and the lighting engine. This last project received special attention in the past few days, with a lot of new features. Among them: normal mapping and parallax mapping.

The following sample shows the difference between (from left to right) classic lighting, normal mapping and parallax mapping:

The 3 objects are the exact same sphere mesh: they are just rendered with 3 different shaders. You can easily see that the sphere using parallax mapping (on the right) appears to have a lot more details and polygons. And yet it’s just the same sphere rendered with a special shader that will mimic the volume effect and details on the GPU.

Parallax mapping can be used to add details and volume to any mesh. This technique is used in many modern commercial games, such as Crysis 2 or Battlefield 3. It makes it possible to load and display far fewer polygons while keeping a high-polygon level of detail.

And of course, thanks to Minko and Flash 11/AIR 3, it works just as well on Android and iOS!

The only thing you need is a normal map and a heightmap. And those two assets are very easy to generate from any actual 3D asset. The technique we use is called “steep parallax mapping”. And thanks to Minko’s exclusive JIT AS3 shaders compiler, you can now use parallax mapping in any of your custom shaders! The code is available on GitHub:

One of the future optimizations includes storing the height in the w/alpha component of the normal map. This way, the memory usage will be the same as with normal mapping, but with much better rendering.
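A sketch of that packing in TypeScript (the [-1, 1] to [0, 255] encoding is the usual normal-map convention; the helper is illustrative, not Minko code):

```typescript
// Storing the height in the alpha channel of the normal map: one RGBA
// texel carries the (x, y, z) normal plus the height, so parallax
// mapping costs no extra texture memory compared to normal mapping.
function packNormalHeight(nx: number, ny: number, nz: number, height: number): number[] {
  // Map each [-1, 1] normal component to [0, 255]; height is in [0, 1].
  const encode = (v: number) => Math.round((v * 0.5 + 0.5) * 255);
  return [encode(nx), encode(ny), encode(nz), Math.round(height * 255)];
}

const texel = packNormalHeight(0, 0, 1, 0.5);
```

In the shader, a single texture fetch then yields both the perturbed normal and the height sample needed by the steep parallax march.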

If you have questions or suggestions, you can leave a comment or post on Aerys Answers.