Update: my implementation worked well on a “native” OpenGL configuration but suffered from a GLSL-to-HLSL bug in the ANGLE shader cross-compiler used by Firefox and Chrome on Windows. It should now be fixed.
Update 2: you can toggle each cascade frustum using “L” and the camera frustum using “C”.
arrow keys: rotate around the scene or zoom in/out
C: toggle the debug display of the camera frustum
L: toggle the debug display of each cascade frustum
A: toggle shadow mapping for the 1st light
Z: toggle shadow mapping for the 2nd light
Lighting is a very important part of rendering convincing real-time 3D scenes. Minko already provides a fairly comprehensive set of components (DirectionalLight, SpotLight, AmbientLight, PointLight) and shaders to implement the Phong reflection model. Yet, without projected shadows, the human eye can hardly grasp the actual layout and depth of the scene.
My goal was to work on directional light projected shadows as part of broader work to handle sunlight and a dynamic outdoor environment rendering setup (sun flares, sky, weather…).
Update: I’ve just pushed a new SWF with a much-enhanced effect. I’ve tweaked things like the number of vertical/horizontal blur passes – now up to 3/6 – but also the flares’ brightness, contrast and dirt texture. I think it looks way better now!
Tonight’s experiment was focused on post-processing. My goal was to implement a simple anamorphic lens flare post-processing effect using Minko. It was actually quite simple to do. Here is the result:
The 1st pass applies a luminance threshold filter:
Then I use a multipass Gaussian blur with 4 passes: 3 horizontal passes and 1 vertical pass. The trick is to apply those 5 passes (1 luminance threshold pass + 4 blur passes) on a texture that is a lot taller than wide (32×1024 in this case). This way, everything gets stretched when the flares are composited with the rest of the backbuffer.
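The per-pixel math behind those passes can be sketched like this. This is a minimal CPU-side illustration with hypothetical helper names, assuming Rec. 601 luma weights and a normalized separable blur kernel – the real effect runs as fragment shaders in Minko:

```typescript
type RGB = [number, number, number];

// Pass 1: keep only pixels brighter than a luminance threshold.
function luminanceThreshold(color: RGB, threshold: number): RGB {
  // Rec. 601 luma weights.
  const luma = 0.299 * color[0] + 0.587 * color[1] + 0.114 * color[2];
  return luma >= threshold ? color : [0, 0, 0];
}

// Passes 2-5: one dimension of a separable Gaussian blur, applied
// 3 times horizontally and once vertically on the narrow 32x1024 target.
function gaussianBlur1D(row: number[], kernel: number[]): number[] {
  const half = (kernel.length - 1) / 2;
  return row.map((_, i) =>
    kernel.reduce((acc, w, k) => {
      // Clamp sample positions to the row's edges.
      const j = Math.min(row.length - 1, Math.max(0, i + k - half));
      return acc + w * row[j];
    }, 0)
  );
}
```

Because the render target is 32×1024 and then sampled back at the backbuffer's aspect ratio, the blurred highlights come out as long horizontal streaks.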
Developer Tomas Vymazal has released a first video of “Rave”, his Artificial Intelligence (AI) framework. This AI framework is built with ActionScript and uses Minko to render the 3D graphics. Here is a video showing some of the features:
According to its author, Rave will soon be available for licensing (commercial use) and as open source (non-commercial use). Features:
A* multilayer pathfinding
fully customizable NPC control using JSON-defined FSMs (API functions of the AI can be called from JSON using reflection)
The above video demonstrates what Minko is capable of:
This simple car customizer application was created using Minko, a new 3D engine targeting the Adobe Flash Platform. The exhibited car is a Citroën DS3 with more than 150 000 polygons. It is rendered with dynamic lighting and reflection effects (use arrow keys to move the light, see the “Controls” section for more details). The car is made of close to 400 different objects (wheels, buttons, lights, …).
This application uses:
a 3DS file parser
a high polygon car model, with close to 400 different objects
dynamic Phong lighting with specular effect
reflection effect with dynamic spherical environment mapping
lighting/reflection blending using multipass
multiple cameras in a single scene (inside/outside the car)
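The spherical environment mapping mentioned above boils down to a classic texture-coordinate formula: reflect the view vector about the surface normal, then project the reflection vector onto the sphere map. Here is a sketch using the standard OpenGL-era sphere-map formula (Minko's actual shader may differ in details):

```typescript
type Vec3 = [number, number, number];

// Reflect view vector v about unit normal n: r = v - 2 (v.n) n
function reflect(v: Vec3, n: Vec3): Vec3 {
  const d = v[0] * n[0] + v[1] * n[1] + v[2] * n[2];
  return [v[0] - 2 * d * n[0], v[1] - 2 * d * n[1], v[2] - 2 * d * n[2]];
}

// Map a reflection vector to (u, v) coordinates in the spherical
// environment map: m = 2 * sqrt(rx^2 + ry^2 + (rz + 1)^2).
function sphereMapUV(r: Vec3): [number, number] {
  const m = 2 * Math.sqrt(r[0] * r[0] + r[1] * r[1] + (r[2] + 1) * (r[2] + 1));
  return [r[0] / m + 0.5, r[1] / m + 0.5];
}
```

Since the lookup only depends on the reflection vector, moving the camera makes the reflections slide over the bodywork, which is what sells the effect on the car's paint.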
This video demonstrates what can be achieved using Minko, the next-gen 3D engine for Flash “Molehill” I’m working on. The environment is a Quake 3 map loaded at runtime and displayed using high definition textures. The demo shows some technical features such as:
Quake 3 BSP file format parser
Potentially Visible Set (PVS)
complex physics for spheres, boxes and convex volumes using BSP trees
light mapping using fragment shaders
materials substitution at runtime
dynamic materials using SWF and the Flash workflow
first person camera system
Quake 3 BSP file format
Quake 3 uses version 46 of the BSP file format. It’s fairly easy to find unofficial specifications for the file format itself. Yet, parsing a file and rendering the loaded geometry are two very different things. With Minko, it is now possible to do both without any knowledge of the specific details of the BSP format and its algorithms. Still, those are interesting algorithms, so I wanted to explain how they work.
Binary Space Partitioning (BSP) is a very effective algorithm to subdivide space in two according to a “partition”. To store the subdivided spaces, we use a data structure called a BSP tree. A BSP tree is just a binary tree plus the partition that separates its two children. This picture shows the BSP algorithm in 2 dimensions:
In that case, each node of the BSP tree stores the partition line. When working in 3D, the partition is a plane. We can then walk through the tree by simply comparing the location of a particular point to the partition. A good use case is 3D z-sorting: we simply have to compare the location of the camera to the planes stored in the BSP nodes to know which child should be walked through (or rendered) first. But in Quake 3, the BSP tree is used for:
physics: the tree makes it possible to optimize collision detection by performing only the required computations and tests
rendering optimization: by locating the node the camera is in, we know the Potentially Visible Set (PVS) of polygons that should be rendered and can ignore the rest
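The z-sorting walk described above is short enough to sketch in full. This is an illustrative data structure, not Minko's API: at each internal node we test which side of the plane the camera is on, recurse into that side first, then into the other:

```typescript
type Vec3 = [number, number, number];

// A partition plane in Hessian form: dot(normal, p) = d
interface Plane { normal: Vec3; d: number; }

interface BspNode {
  plane?: Plane;       // internal node: splitting plane
  front?: BspNode;     // child on the positive side of the plane
  back?: BspNode;      // child on the negative side
  polygons?: number[]; // leaf: indices of polygons in this region
}

function distToPlane(p: Vec3, plane: Plane): number {
  const n = plane.normal;
  return n[0] * p[0] + n[1] * p[1] + n[2] * p[2] - plane.d;
}

// Visit leaves front-to-back relative to the camera: at each node,
// recurse first into the child on the camera's side of the plane.
function walkFrontToBack(
  node: BspNode,
  camera: Vec3,
  visit: (polys: number[]) => void
): void {
  if (!node.plane) {
    if (node.polygons) visit(node.polygons);
    return;
  }
  const [near, far] =
    distToPlane(camera, node.plane) >= 0
      ? [node.front, node.back]
      : [node.back, node.front];
  if (near) walkFrontToBack(near, camera, visit);
  if (far) walkFrontToBack(far, camera, visit);
}
```

Reversing the recursion order gives back-to-front ordering instead, which is what you want for painter's-algorithm transparency sorting.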
Each “*.bsp” file stores the BSP tree itself, the partition planes, the geometry, the PVS data and even bounding boxes/spheres for each node to perform frustum culling. This data is very precious and is compiled directly into the file by tools like GTKRadiant.
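Reading that data starts with the file header. A version 46 BSP file begins with the “IBSP” magic string and the version number, followed by a directory of 17 lumps, each described by an offset and a length into the file. Here is a sketch of parsing just that directory (the helper names are illustrative, not Minko's parser API):

```typescript
interface Lump { offset: number; length: number; }

// Quake 3 BSP files have 17 lumps: entities, planes, nodes,
// leaves, ..., up to the visibility (PVS) data.
const LUMP_COUNT = 17;

function parseBspHeader(buffer: ArrayBuffer): { version: number; lumps: Lump[] } {
  const view = new DataView(buffer);
  const magic = String.fromCharCode(
    view.getUint8(0), view.getUint8(1), view.getUint8(2), view.getUint8(3)
  );
  if (magic !== "IBSP") throw new Error("not a BSP file");
  const version = view.getInt32(4, true); // little-endian; 46 for Quake 3
  const lumps: Lump[] = [];
  for (let i = 0; i < LUMP_COUNT; i++) {
    const base = 8 + i * 8; // directory starts right after magic + version
    lumps.push({
      offset: view.getInt32(base, true),
      length: view.getInt32(base + 4, true),
    });
  }
  return { version, lumps };
}
```

Once you have the lump directory, each lump (planes, nodes, leaves, visdata…) is just a typed array of fixed-size records starting at its offset.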
HD textures were provided by the ioquake3 project.
I was at “Retour de MAX” (“Back from MAX”) 2010 – an Adobe France event – to present Molehill, the new 3D API for the Flash Platform, alongside Adobe’s web consultant David Deraedt. The goal was to present the technical details about Molehill and the pros and cons of the API. We also wanted to demonstrate how Minko, our 3D engine, adds all the features Flash developers might expect when dealing with 3D.
We also presented a worldwide premiere demonstration of the new Flash 3D capabilities right in the browser (Internet Explorer 8 in this case). This experiment follows my work on rendering Quake 2 environments with Flash 10 using Minko. Of course, the new Molehill 3D API and the next version of Minko built on top of it make this kind of work a lot easier. For this demonstration, we chose to present a Quake 3 environment viewer using HD textures provided by the ioquake3 project. Here are some screenshots (more screenshots on the Aerys website):
Drawing (a lot of) triangles is nice. But making them move is another problem… even harder: making them move like objects would move in real life! This is what a “physics engine” is about:
“A physics engine is computer software that provides an approximate simulation of certain simple physical systems, such as rigid body dynamics (including collision detection), soft body dynamics, and fluid dynamics, of use in the domains of computer graphics, video games and film. Their main uses are in video games (typically as middleware), in which case the simulations are in real-time. The term is sometimes used more generally to describe any software system for simulating physical phenomena, such as high-performance scientific simulation.”
Flash is no exception and has a few physics engine libraries available. Nothing like a “high-performance scientific simulation” though…
Still, jiglibflash is one of those libraries. It is free and open-source. It provides a 3D-capable physics engine and exposes a rather simple API to make it work with any 3D graphics engine. Which brings me to the good news: Minko now supports jiglibflash!
The Adobe 24H Challenge was last Friday and the applications the 14 teams created are already online. You can see all of them on the official website.
Our application is called “funanbulle”. The goal of the application is to allow families to create their own micro virtual world and gather. We wanted to show what such virtual worlds would look like. The idea was to enable people to share and chat in real time with a fun and engaging user experience.
The application is nothing more than a proof of concept. If we had had enough time, we would have added lots of features, such as:
Photos and videos sharing
Interactive objects to trigger applications (games, sharing applications, etc.)
In the end, we had just enough time to build a 3D chat. But I think it was a lot of fun and it looks really nice! Here is a quick video to show what funanbulle is about and how it works:
This video was made by Michael Chaize to show the 14 applications created during the contest.
… or at least that’s the plan! The next meeting of the Tonton Flexers – the closest thing to a French Flash user group – is taking place on March 23rd and I’ll be there to present my 3D library.
I would be more than happy to talk about the software, the way I built it and the technical choices that drove its development. I will also try to emphasize what makes this library different through a few demonstrations.
Depending on the agenda of one of my co-workers, we might also present a very cool piece of software I’ve never spoken about!
You can read more about the event here (in French).
The following video demonstrates a new “voice gesture” library targeting the Flash Platform. As you might have guessed, those “voice gestures” are pretty much like “mouse gestures” but they are activated by voice only. I guess it uses some kind of voice learning/recognition algorithm. I can’t stress enough how thrilled I am to see this kind of new and powerful software coming to Flash. It enables a whole new range of uses and applications…