Anamorphic Lens Flare

Update: I’ve just pushed a new SWF with a much improved effect. I’ve tweaked the number of vertical/horizontal blur passes – now up to 3/6 – as well as the flares’ brightness, contrast and dirt texture. I think it looks way better now!

Tonight’s experiment was focused on post-processing. My goal was to implement a simple anamorphic lens flare post-processing effect using Minko. It was actually quite simple to do. Here is the result:


The 1st pass applies a luminance threshold filter:

Then I use a multipass Gaussian blur with 4 passes: 3 horizontal passes and 1 vertical pass. The trick is to apply those 5 passes (1 luminance threshold pass + 4 blur passes) on a texture that is a lot taller than wide (32×1024 in this case). This way, everything gets stretched horizontally when the flares are composited with the rest of the backbuffer.
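To make the two building blocks concrete, here is a minimal CPU-side sketch in Python/NumPy: a luminance threshold followed by separable blur passes on a tall, narrow buffer. The names, radius and threshold are illustrative; this is not the actual Minko shader code.

```python
import numpy as np

def luminance_threshold(rgb, threshold=0.8):
    """Pass 1: keep only the pixels brighter than the threshold."""
    # Rec. 709 luminance weights.
    luma = 0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1] + 0.0722 * rgb[..., 2]
    return rgb * (luma >= threshold)[..., None]

def blur_1d(image, radius, axis):
    """One box-blur pass along a single axis (a cheap stand-in for a Gaussian pass)."""
    out = np.zeros_like(image)
    for offset in range(-radius, radius + 1):
        out += np.roll(image, offset, axis=axis)
    return out / (2 * radius + 1)

# The flare buffer is much taller than wide (32x1024), so it gets stretched
# horizontally when composited back over the backbuffer.
flare = np.random.rand(1024, 32, 3).astype(np.float32)  # rows x columns x rgb
flare = luminance_threshold(flare, 0.8)
for _ in range(3):                          # 3 horizontal blur passes
    flare = blur_1d(flare, radius=2, axis=1)
flare = blur_1d(flare, radius=2, axis=0)    # 1 vertical blur pass
```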

New Minko 2 Features: Normal Mapping And Parallax Mapping

One of Aerys’ engineers – Roman Giliotte – is the most active developer on Minko. He is the one behind the JIT shader compiler, the Collada loader and the lighting engine. This last project has received special attention in the past few days with a lot of new features. Among them: normal mapping and parallax mapping.

The following sample shows the difference between (from left to right) classic lighting, normal mapping and parallax mapping:

The 3 objects are the exact same sphere mesh: they are just rendered with 3 different shaders. You can easily see that the sphere using parallax mapping (on the right) appears to have a lot more detail and polygons. And yet it’s just the same sphere rendered with a special shader that mimics the extra volume and detail on the GPU.

Parallax mapping can be used to add detail and volume to any mesh. This technique is used in many modern commercial games such as Crysis 2 or Battlefield 3. It makes it possible to load and display far fewer polygons while keeping a high-polygon level of detail.

And of course, thanks to Minko and Flash 11/AIR 3, it works just as well on Android and iOS!

All you need is a normal map and a heightmap, and those two assets are very easy to generate from any actual 3D asset. The technique we use is called “steep parallax mapping”. And thanks to Minko’s exclusive JIT AS3 shader compiler, you can now use parallax mapping in any of your custom shaders! The code is available on GitHub:

One of the future optimizations is storing the height in the w/alpha component of the normal map. This way, the memory usage will be the same as with normal mapping, but with a much better rendering.
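To give an idea of how steep parallax mapping works, here is a minimal sketch of the ray-march in Python. It assumes the height is already packed in the alpha channel of the normal map as described above; the function names, step count and scale are illustrative, not the actual Minko shader.

```python
def steep_parallax_uv(sample_normal_height, uv, view_ts, scale=0.05, steps=16):
    """Return the displaced UV used to sample the diffuse/normal maps.

    sample_normal_height(uv) -> (nx, ny, nz, height), with the height packed
    in the alpha channel; view_ts is the normalized tangent-space view vector
    (pointing from the surface toward the eye). All names are illustrative.
    """
    layer_depth = 1.0 / steps
    # UV shift per layer, along the view direction projected onto the surface.
    delta_u = view_ts[0] / view_ts[2] * scale / steps
    delta_v = view_ts[1] / view_ts[2] * scale / steps

    current_uv = uv
    current_depth = 0.0
    depth_at_uv = 1.0 - sample_normal_height(current_uv)[3]
    # March along the ray until it dips below the heightfield.
    while current_depth < depth_at_uv:
        current_uv = (current_uv[0] - delta_u, current_uv[1] - delta_v)
        depth_at_uv = 1.0 - sample_normal_height(current_uv)[3]
        current_depth += layer_depth
    return current_uv
```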

If you have questions or suggestions, you can leave a comment or post on Aerys Answers.

Single Pass Cel Shading

Cel shading – aka “toon shading” – is an effect used to make 3D rendering look like a cartoon. It has been used in many flavours in games such as XIII or Zelda: The Wind Waker.

TL;DR

Click on the picture above to launch the demonstration.

The final rendering owes a lot to the original diffuse textures used for rendering. But this cartoon style is also achieved using two distinct features on the GPU:

  1. The light is discretized using a level function.
  2. A black outline is drawn on the edges.

This is usually done using two passes (or even more). One pass renders the scene in the backbuffer with the lighting modified by the level function. Another pass renders the normals only, and a final post-process pass runs an edge-detection algorithm (a Sobel filter) on them to add the black outline.
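For reference, the Sobel edge-detection pass of that multipass approach boils down to something like this sketch (Python/NumPy, illustrative names and threshold; this is the technique I wanted to avoid, not what the single-pass shader does):

```python
import numpy as np

def sobel_edges(gray, threshold=0.5):
    """Detect edges in a single-channel buffer (e.g. encoded normals or depth)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float32)
    ky = kx.T
    h, w = gray.shape
    edges = np.zeros_like(gray)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = gray[y - 1:y + 2, x - 1:x + 2]
            gx = np.sum(window * kx)
            gy = np.sum(window * ky)
            edges[y, x] = 1.0 if np.hypot(gx, gy) > threshold else 0.0
    return edges  # 1.0 where the black outline should be drawn
```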

Another technique uses two passes: one to render the object with the lighting, and a second one with front-face culling and a scale offset to render the outline with a solid black color.

But I thought it might be done more efficiently in one pass using a few tricks. The most difficult part here is how to get the black outline. Indeed, we are working on a per-vertex (vertex shader) or per-pixel (fragment shader) basis. Thus, it’s pretty hard to work on edges. It’s pretty much the same problem we encountered when working on wireframe, except here we want to outline the object and not the triangles. But this little difference actually makes it a lot easier for us.

Why? Because detecting the edges of the 3D shape is much easier.

The Outline

The trick is to get the angle between the eye-to-vertex vector and the vertex normal. If that angle is close to PI/2 (= 90°), then the vertex is on an edge. If the vertex is on an edge, we displace it a bit along its normal. The vertices displaced this way form an outline around the shape:

Our fragment shader is pretty simple here: _isEdge is supposed to contain 1 if the vertex is on an edge, 0 otherwise. Therefore, as we want our outline to be black, we simply use the “lessThan” operation: the outline value will be 0 if the vertex is on an edge, and 1 otherwise. We just have to multiply “outline” with whatever color we want to use.
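Here is a minimal sketch of both halves of the trick in Python, with small vector helpers; the threshold, offset and names are mine, not the actual Minko AS3 shader:

```python
import math

def sub(a, b):   return tuple(x - y for x, y in zip(a, b))
def add(a, b):   return tuple(x + y for x, y in zip(a, b))
def scale(v, s): return tuple(x * s for x in v)
def dot(a, b):   return sum(x * y for x, y in zip(a, b))
def normalize(v): return scale(v, 1.0 / math.sqrt(dot(v, v)))

def vertex_pass(position, normal, eye, offset=0.05, threshold=0.25):
    """Displace silhouette vertices along their normal and flag them as edges."""
    eye_to_vertex = normalize(sub(position, eye))
    # |cos(angle)| close to 0 means the angle is close to PI/2: silhouette vertex.
    facing = abs(dot(eye_to_vertex, normal))
    is_edge = 1.0 if facing < threshold else 0.0
    return add(position, scale(normal, offset * is_edge)), is_edge

def fragment_pass(is_edge, color):
    """The "lessThan" trick: outline is 0 on an edge, 1 otherwise."""
    outline = 1.0 if is_edge < 1.0 else 0.0
    return scale(color, outline)
```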

You’ll get the following result:

The Light Level Function

There are many ways to transform a continuous per-pixel lighting equation into another one that will use levels. The best way to make it possible for the artists to customize it is to use a light texture. The Lambert factor is then used as the UVs to sample that texture.

But here I wanted this effect to rely on no textures (except possibly a diffuse one). So I implemented a very simple level function using a Euclidean division:
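The level function itself boils down to something like this (a sketch with an arbitrary number of levels, not the original Minko shader code):

```python
import math

def light_levels(lambert, num_levels=4):
    """Quantize the continuous Lambert factor into flat bands (Euclidean division)."""
    return math.floor(lambert * num_levels) / num_levels

# The 0..1 Lambert factor collapses onto 4 discrete lighting levels:
print([light_levels(x / 10.0) for x in range(11)])
```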

Here is what you get:

The Final Shader

You can combine both effects by simply multiplying the Lambert factor by the outline value computed above.
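In other words, the final fragment color looks roughly like this (same illustrative names as above, still a sketch):

```python
import math

def cel_shaded_color(lambert, is_edge, diffuse_rgb, num_levels=4):
    """Final color: the quantized Lambert factor multiplied by the outline mask."""
    outline = 1.0 if is_edge < 1.0 else 0.0
    level = math.floor(lambert * num_levels) / num_levels
    return tuple(c * level * outline for c in diffuse_rgb)
```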

And voilà!

Minko released as open source


I’m glad to announce the very first public release of Minko, Aerys’ 3D framework targeting the Adobe Flash platform and the new Stage3D API. This release focuses on setting up the main concepts and APIs. You can download Minko on the Aerys Developers Hub. The code is released under the LGPL license, so everyone is free to start building awesome 3D applications for the web. Here are the key features:

  • Extensible Scene Graph API
  • ActionScript 3.0 GPU programming
  • Extensions system
  • Support for 3DS and Collada file formats
  • Dynamic lighting and dynamic shadows
  • Hardware accelerated animations
  • 3D physics

Yes… with Minko you can now create “shaders” and program the graphics hardware using ActionScript 3.0 code! Thanks to this awesome feature, any Flash developer can now program the GPU to create incredible hardware accelerated rendering effects. We have also released a bunch of tutorials, technical articles and open source demonstrations.

You can find the complete release blog post on the official Aerys blog.

New Minko demonstration: Citroen DS3

Update: The actual application is available here.

The above video demonstrates what Minko is capable of:



This simple car customizer application was created using Minko, a new 3D engine targeting the Adobe Flash Platform. The exhibited car is a Citroën DS3 with more than 150,000 polygons. It is rendered with dynamic lighting and reflection effects (use the arrow keys to move the light, see the “Controls” section for more details). The car is made of close to 400 different objects (wheels, buttons, lights, …).

This application uses:

  • a 3DS file parser
  • a high polygon car model, with close to 400 different objects
  • dynamic Phong lighting with specular effect
  • reflection effect with dynamic spherical environment mapping
  • lighting/reflection blending using multipass
  • multiple cameras in a single scene (inside/outside the car)
  • up to 300 000 polygons rendered per frame

The actual application will be released tomorrow during Adobe Lightning Talks.

Quake 3 HD with Flash “Molehill” and Minko

This video demonstrates what can be achieved using Minko, the next-gen 3D engine for Flash “Molehill” I’m working on. The environment is a Quake 3 map loaded at runtime and displayed using high definition textures. The demo shows some technical features such as:

  • Quake 3 BSP file format parser
  • Potentially Visible Set (PVS)
  • complex physics for spheres, boxes and convex volumes using BSP trees
  • light mapping using fragment shaders
  • materials substitution at runtime
  • dynamic materials using SWF and the Flash workflow
  • first person camera system

Quake 3 BSP file format

Quake 3 uses version 46 of the BSP file format. It’s fairly easy to find unofficial specifications for the file format itself. Yet, parsing a file and rendering the loaded geometry are two very different things. With Minko, it is now possible to do both without any knowledge of the specific details of the BSP format and algorithms. Still, those are interesting algorithms, so I wanted to explain how they work.

Binary Space Partitioning (BSP) is a very effective algorithm to subdivide space in two according to a “partition”. To store the subdivided spaces, we use a data structure called a BSP tree. A BSP tree is just a binary tree plus the partition that separates both children. This picture shows the BSP algorithm in 2 dimensions:

In that case, each node of the BSP tree will store the partition line. When working in 3D, the partition is a plane. We can then walk through the tree by simply comparing the location of a particular point to the partition (see the sketch below). A good use case is 3D z-sorting: we simply have to compare the location of the camera to the planes stored in the BSP nodes to know which child should be walked through (or rendered) first. But in Quake 3, the BSP tree is used for:

  • physics: the tree makes it possible to optimize collision detection by performing only the required computations and tests
  • rendering optimization: by locating the node where the camera is, we can know the Potentially Visible Set (PVS) of polygons that should be rendered and ignore the rest

Each “*.bsp” file stores the BSP tree itself, the partition planes, the geometry, the PVS data and even bounding boxes/spheres for each node to perform frustum culling. This data is very precious and is compiled directly into the file by tools like GTKRadiant.
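To illustrate the traversal described above, here is a minimal BSP walk in Python; the data layout and names are illustrative and do not follow the actual Quake 3 lump format:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Plane:
    normal: tuple     # (a, b, c)
    distance: float   # plane equation: a*x + b*y + c*z - distance = 0

@dataclass
class BspNode:
    plane: Optional[Plane] = None        # None for leaves
    front: Optional["BspNode"] = None
    back: Optional["BspNode"] = None
    leaf_id: int = -1                    # PVS cluster index for leaves

def find_leaf(node, point):
    """Walk the tree by comparing the point with each partition plane."""
    while node.plane is not None:
        a, b, c = node.plane.normal
        side = a * point[0] + b * point[1] + c * point[2] - node.plane.distance
        node = node.front if side >= 0 else node.back
    return node.leaf_id   # tells us which PVS cluster the camera is in
```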

Credits

HD textures were provided by the ioquake3 project.

Back from Adobe “Retour de MAX” 2010

I was at “Retour de MAX” (“Back from MAX”) 2010 – an Adobe France event – to present Molehill, the new 3D API for the Flash Platform, alongside Adobe’s web consultant David Deraedt. The goal was to present the technical details about Molehill and the pros and cons of the API. We also wanted to demonstrate how Minko, our 3D engine, adds all the features Flash developers might expect when dealing with 3D.

We also presented a world premiere demonstration of the new Flash 3D capabilities right in the browser (Internet Explorer 8 in this case). This experiment follows my work on rendering Quake 2 environments with Flash 10 using Minko. Of course, the new Molehill 3D API and the next version of Minko built on top of it make this kind of work a lot easier. For this demonstration, we chose to present a Quake 3 environment viewer using HD textures provided by the ioquake3 project. Here are some screenshots (more screenshots on the Aerys website):


Video and more details right after the jump…

Flash 11 drawing API will be hardware accelerated

I love catchy titles. I know nothing about the next major version of Flash and all of this is just speculation.

Anyway, as the whole HTML5 versus Flash battle is raging, Flash’s relatively poor performance is a fair target for criticism. Besides poor developers, the Flash Platform suffers from a very, very slow software renderer – or at least one much slower than the rest of the platform. It’s no secret: hardware acceleration is a key feature for the future of the Flash Player. It’s even hard to believe it is not available yet!

As you must already know, Flash 10.1 will support OpenGL ES 2 on mobile devices to compensate for the lack of CPU horsepower. While hardware HD video decoding will be available on the desktop too, the drawing API will only be accelerated on mobile devices.

Anyone who knows a bit about OpenGL ES knows it is a subset of OpenGL. Thus, if it works with OpenGL ES, it should work with OpenGL. With this in mind, a few questions:

  1. Why isn’t Flash 10.1’s drawing API hardware accelerated on any OpenGL-capable platform, including the desktop?
  2. How will it work?
  3. Will Pixel Bender be hardware accelerated?

1. Hardware Accelerated Desktop Flash Player

Let’s face it: there must be a hardware accelerated desktop Flash Player in the works. As I said previously, OpenGL supports all the features of OpenGL ES, so this is not far-fetched at all. Yet, it is neither released nor announced. Why?

My first guess is that OpenGL provides a lot of features that would make the desktop experience a lot smoother than just using what its little brother OpenGL ES offers. So at some point Adobe had to make a choice:

  • release a fully hardware accelerated Flash 10.1 on both mobile and desktop platforms, with the latter falling very far short of what desktop hardware is actually capable of handling
  • or release Flash 10.1 focusing on mobile devices and announce hardware acceleration for the desktop just after the final release… and I’m guessing it might be a key feature of Flash 11

When I was at the French Flash User Group (TTFX – les TonTons FleXeurs) a few months ago, I spoke with Lee Brimelow and Mike Chambers about the just-announced microphone raw-data access. I asked them why it was announced only for AIR 2.0 and not Flash 10.1. The answer was something about the “quality guys” making sure the feature fit both the roadmap and the logic of the upcoming updates. And this feature eventually made its way into the Flash Player! I think that’s what is happening with hardware acceleration on the desktop.

But if you don’t believe me, you don’t have to take my word for it! What about Adobe’s word? Cnuuja, one of the engineers working on the Flash Player, posted this very message on the Flash 10.1 forum:

Can OpenGL 3.3/4.0 improve Flash 10.x?

“Yes….  with a lot of work.   We have spent the last year writing new code which allows OpenGLES2 to render flash content on mobile devices.  Performance varies significantly from one gpu to the next, with some gpus being slower than the software renderer.   What Flash does is significantly different from the 3d triangles+shaders GPUs were designed to support. Its a lot of work to make OGL/D3D usable as our renderer, but we’re working on it 

-chris”

2. How will it work?

Just like in Flash 10.1 for mobile devices. But much faster thanks to OpenGL and Direct3D.

The internals of such a feature are very important: developers must know and understand how it works to make the best out of it. Adobe already talked about how the drawing API is accelerated in Flash 10.1 on mobile devices:

“When a GPU renders vector graphics, it breaks them up into meshes made of small triangles before drawing them, a process called tesselating. There is a small cost to doing this, which increases as the complexity of the shape increases. To minimize performance impact, avoid morph shapes, which must be retesselated on every frame.”

It’s straightforward and I think it’s actually the best (and only…) way to do it. Tessellation will create triangles by computing sets of vertices/indices (also called vertex and index buffers) and pushing them to the graphics hardware. The end of the quote suggests such data is cached and should not be recomputed if no redraw occurs.

One very important thing though: z-sorting. People might think hardware acceleration implies z-sorting, but it doesn’t. When you know how 3D hardware and APIs work, you know it will be very tricky to make it work with something as general purpose as Flash. If Adobe wants to use the z-buffer, they will have to break compatibility with the software renderer. And I don’t think this will happen anytime soon.

3. Hardware Accelerated Pixel Bender

Pixel Bender is already hardware accelerated pretty much everywhere except the Flash Platform. I’m not sure why. Still, it’s hardware accelerated in other products of the Creative Suite, so I guess an intermediate representation of Pixel Bender kernels compatible with the OpenGL/Direct3D shader languages does exist.

That said, it’s just a matter of making it work with the very general purpose Flash Player. Considering Flash 10.1 is using tessellation, my guess is that pixel shaders should follow quite easily.