Predictable garbage collection with Lua

In one of my previous posts I talked about how you can make the Lua garbage collector (GC) more predictable in its running time. This is a virtue that is highly valued in a GC used in games, where you don’t have the luxury of going over your frame time. In that post I described a solution that works fine most of the time, leaving little room for garbage collection spikes that will hurt the framerate. However, I ended that post with a promise to provide a better solution, and in this post I deliver.

The ideal situation would be to have the GC run for a specific amount of time. This way the game engine is able to assign exact CPU time to the GC based on the situation. For example, one strategy would be to give a constant amount of time to the GC per frame; let’s say 2ms every frame. Or the engine can be more clever and take other parameters into consideration, like the amount of time it took to do the actual frame: is there enough time left in this frame? If there is, spend some on GC; if not, hold it for the next frame, when things might not be as tight. Other parameters can be memory thresholds, memory warnings, etc.

All of the above depend on a GC that can be instructed to run for an exact amount of time. This kind of GC is what we call a realtime GC. And Lua does not have one. However, it turns out that we can get very close to realtime behavior with minor changes to the Lua GC.

The patch below modifies the behavior of the GC in the way we need it:

--- a/src/lgc.c
+++ b/src/lgc.c
@@ -609,15 +617,14 @@ static l_mem singlestep (lua_State *L) {
 
 void luaC_step (lua_State *L) {
   global_State *g = G(L);
-  l_mem lim = (GCSTEPSIZE/100) * g->gcstepmul;
-  if (lim == 0)
-    lim = (MAX_LUMEM-1)/2;  /* no limit */
   g->gcdept += g->totalbytes - g->GCthreshold;
+  double start = getTime();
+  double end = start + (double)g->gcstepmul / 1000.0;  
   do {
-    lim -= singlestep(L);
+    singlestep(L);
     if (g->gcstate == GCSpause)
       break;
-  } while (lim > 0);
+  } while (getTime() < end);
   if (g->gcstate != GCSpause) {
     if (g->gcdept < GCSTEPSIZE)
       g->GCthreshold = g->totalbytes + GCSTEPSIZE;  /* - lim/g->gcstepmul;*/

The only missing piece from the patch above is the getTime() function, which can be something like this:

#include <sys/time.h>

/* Returns the current wall-clock time in seconds, with microsecond resolution. */
double getTime() {
    struct timeval tp;
    gettimeofday(&tp, NULL);
    return tp.tv_sec + tp.tv_usec / 1000000.0;
}

I suspect, however, that everyone will want to plug in their own time function.

The patch modifies the code so that it stops based on a time limit, and not based on a calculated target amount of memory to be freed. The simplicity of the patch also comes from the fact that we “reuse” the STEPMUL parameter, which no longer carries the aggressiveness of the GC. We now use it to hold the exact duration we want the GC to run for, in milliseconds. So the usage will be this:

lua_gc(L, LUA_GCSETSTEPMUL, gcMilliSeconds);  /* with the patch: GC time budget in ms */
lua_gc(L, LUA_GCSTEP, 0);                     /* run the GC for that long */

The above code will run the GC for gcMilliSeconds ms. This way you will never blow your frame-time budget just because garbage collection took a little longer than expected to execute. Problem solved!
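
To make the adaptive-budget idea from the beginning of the post concrete, here is a minimal sketch of a frame loop driving the patched collector. Only the two lua_gc() calls come from the patch itself; the helper names and the 16.6ms/2ms numbers are assumptions for illustration:

extern "C" {
#include "lua.h"
}

void simulateAndRender();   // assumed: does the actual frame work
double getTime();           // the helper shown earlier

void runFrame(lua_State *L) {
    const double frameBudgetMs = 16.6;   // ~60 fps target
    const double maxGCMs = 2.0;          // never spend more than this on GC

    double start = getTime();
    simulateAndRender();
    double spentMs = (getTime() - start) * 1000.0;

    // Spend whatever is left of the frame on GC, capped at maxGCMs.
    double gcMs = frameBudgetMs - spentMs;
    if (gcMs > maxGCMs) gcMs = maxGCMs;

    if (gcMs >= 1.0) {   // the patched STEPMUL holds whole milliseconds
        lua_gc(L, LUA_GCSETSTEPMUL, (int)gcMs);
        lua_gc(L, LUA_GCSTEP, 0);
    }
}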

From Python to Lua

(This post was originally published at #AltDevBlogADay)

All game developers, sooner or later, learn to appreciate scripting languages. That magical thing that lets others do your job, scales the team better, strengthens the game code/engine separation, and brings sandboxing, faster prototyping of ideas, fault isolation, easy parametrization, etc. Every game has to be somehow data driven to be manageable, and stopping at simple configuration files with many different custom parsers, without going the extra mile of adding a full scripting language, is a bad design choice 90% of the time.

Today the developer can choose from a large variety of scripting languages, or even go crazy and implement one on his own. It happens that the most favored language among game developers is Lua. It’s easy to understand why Lua is the favorite, but other options are used as well: Python, for example, and the recently rising force of JavaScript.

Here I would like to share some of my experience of moving a game engine from Python to Lua.

Joined #AltDevBlogADay

Yesterday I was accepted to #AltDevBlogADay as a writer. For those not familiar with #AltDevBlogADay, it is a group of game developers (generally found on Twitter) who want to blog more regularly (according to the about page). However, what is not clearly stated in the aforementioned about page is that this is a true treasure of articles related to game development, written by people who love what they do. Lately I have been spending quite some time each day reading amazing posts by programmers, artists, etc, and I will finally be able to contribute something back.

My first post is scheduled for June 23, and I have in mind some topics to blog about. However, I am open to your suggestions. What would you like to hear more about from me?

Sylphis3D lighting, shadows, physics demonstration

This is some “memory lane” kind of post. As you probably already know, I have been working lately on an iOS port of Sylphis3D, and I have been going through some old videos of Sylphis3D. I must admit the feeling is overwhelming. All those nights struggling with algorithms, data structures, broken drivers, experimental scripting… The vibrant community of people surrounding the project, starving for more info on the progress. I really miss those days. I would like to share one of the oldest videos with you. The video below was “shot” in 2003 and is now of historical value! It features per-pixel normal-mapped lighting with realtime shadows from every light in the scene, coupled with realistic physics. Note that this was more than a year before DOOM 3 came out… Enjoy!

Something moving in the shadows…

I have been spending quite some time over the last months working on a port of the Sylphis3D game engine to the iPhone. I am now at a point where the thing works really nicely. I will not get into details about the changes I made to the engine for this purpose (I will keep that for another post), but I want to get the word out that it is final: WE ARE MAKING A GAME 😀

I can’t stress enough how excited I am about this. After much struggling we finally have an original concept, a good game design, and the team to make it real! The game is built on a wild idea I had some time ago, from which we created a beautiful game concept. With the help of the artistic nature of Vangelis Bobolas and the soundscapes of the out-of-this-world music composer Thanasis Lightbridge, we are on the right track!

I can’t uncover much at the moment, but I believe we have something totally original and fun cooking in the oven 😉

Optimizing script language performance with custom memory allocators

Last weekend I did some exploration of scripting language execution performance, specifically on the memory allocation side of things, and I would like to share my findings.

Script languages and memory usage

As you probably know, scripting languages (most of them at least, like Python, Lua, etc) have a tendency to make a huge number of small allocations on the heap. Almost everything is stored on the heap, and if you care about performance, you start to feel homesick for your beloved C stack! Anyway, nothing comes for free, and a scripting language has to take something from you in exchange for all the goodies it gives back. So the best you can do is make sure you have the best memory allocator for the job.

Doing too many small allocations and releases on the heap can create memory fragmentation, along with all the evil that comes with it. The common approach is to create a specialized memory allocator that serves small, fixed-size blocks of memory to the scripting language, taken from a bigger chunk of memory reserved from the system. This is common in all “realtime” and allocation-intensive applications like games, and something I have done many times to gain performance.
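
For illustration, here is a minimal sketch of such a small-block allocator, under the usual simplifying assumptions (one fixed block size, no thread safety, no pool growth, error handling omitted); the class and member names are my own:

#include <cstdlib>
#include <cstddef>

// Minimal fixed-size pool sketch: carves blocks out of one big chunk
// reserved up front and keeps freed blocks on an intrusive free list,
// so acquire/release are a couple of pointer operations each.
class CSmallBlockPool {
public:
    CSmallBlockPool(size_t blockSize, size_t blockCount) {
        // Each free block stores the "next" pointer inside itself.
        mBlockSize = blockSize < sizeof(void*) ? sizeof(void*) : blockSize;
        mFreeList = 0;
        mChunk = static_cast<char*>(std::malloc(mBlockSize * blockCount));
        for (size_t i = 0; i < blockCount; ++i)
            release(mChunk + i * mBlockSize);
    }
    ~CSmallBlockPool() { std::free(mChunk); }

    void *acquire() {
        if (!mFreeList) return 0;                 // pool exhausted
        void *block = mFreeList;
        mFreeList = *static_cast<void**>(block);  // pop the free list
        return block;
    }
    void release(void *block) {
        *static_cast<void**>(block) = mFreeList;  // push onto the free list
        mFreeList = block;
    }

private:
    size_t mBlockSize;
    char  *mChunk;
    void  *mFreeList;
};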

Can’t beat the standard malloc

What I discovered with my latest attempt is that it has become quite hard to beat the GNU implementation of malloc(), something that used to be easy in the past when you focused on a specialized case (e.g. small blocks of memory). Not that you can’t do better if you try hard, but at this point the malloc() implementation is already super-fast for 99.9% of applications on the desktop. Rest assured that you will not be able to do much better. However, that is not the case for embedded devices, which don’t share the same virtual memory benefits as desktop computers.

My hand-tuned specialized memory allocator for small blocks of memory (<= 256 bytes) was not able to be more than 1% faster than the native malloc() on OS X 10.6. However, on the iPhone the same allocator was twice as fast as the native malloc()! Since the target was the iPhone from the beginning, that seemed like a big win! However, when I set up a small benchmark in the scripting environment that allocated some game engine objects and released them again in various patterns, the results were disappointing. Using my specialized (and twice as fast) allocator resulted in an improvement of only about 5% in execution speed in a memory-intensive benchmark. In some tests it was even slower! That was odd and, most of all, not good!

Why I was failing

After some inspections and tests that made it less probable that I was doing something really stupid, I narrowed down the cause.

In most cases of using a scripting language you have some classes defined in C++ that you instantiate from the scripting language. Take for example a 3D vector class, “CVector3”, defined in C++. When you instantiate this in the scripting language you get two allocations: one in the scripting language, which allocates the “proxy” object, and one in the C++ environment. When giving a new allocator to the scripting language to do its allocations, you only “optimize” the first allocation. The one in C++ still goes through the system’s default allocator.

And since you optimized half of the allocations, you expect to get half the performance boost… well… wrong. It turns out that you can even be slower this way. The secret here is the CPU cache. By doing the above, you have two memory blocks that are usually accessed together but are far apart in memory. This can hurt performance really badly on a device with slow memory like the iPhone.

The solution

The solution was of course to use the same allocator on the C++ side, by overriding the “new” operator of the class. This makes the block of memory allocated on the script side end up close to the block allocated on the C++ side. This way, accessing the object only touches one part of memory, giving nice cache hits. Performance was up by 30%, which was nice and expected.
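
A minimal sketch of that, reusing the pool from above through an assumed shared instance (gScriptPool, which the scripting language is also assumed to allocate from, e.g. via lua_newstate’s allocator argument in Lua’s case):

#include <cstdlib>
#include <cstddef>

// Assumed: one shared pool instance that the scripting language
// allocates from as well.
CSmallBlockPool gScriptPool(32, 65536);   // 32-byte blocks, 64K of them

class CVector3 {
public:
    float x, y, z;

    // Route C++-side allocations through the same pool, so the script's
    // proxy object and the C++ object land close together in memory.
    void *operator new(size_t /*size*/) {
        void *p = gScriptPool.acquire();
        if (!p) std::abort();   // sketch only; a real pool would grow
        return p;
    }
    void operator delete(void *p) {
        gScriptPool.release(p);
    }
};

Now "new CVector3" on the C++ side and the corresponding proxy allocation on the script side come from the same chunk of memory.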

One other interesting thing I found from this is that, on the iPhone, if I just override the “new” operator of a class and make it allocate memory with plain malloc(), without using my allocator at all, the system is again faster!

This is probably due to the fact that “new” does not go through plain malloc() (I didn’t bother to check) while the scripting language environment does. So the allocated blocks end up in different arenas at different parts of memory, losing performance for the same reason as above!
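
For completeness, here is a sketch of that plain-malloc() variant; it skips the custom pool entirely and still keeps both allocations in the same arena:

#include <cstdlib>
#include <cstddef>
#include <new>

class CVector3 {
public:
    float x, y, z;

    // No custom pool at all: just force allocations through malloc(),
    // the same arena the scripting language's allocations come from.
    void *operator new(size_t size) {
        if (void *p = std::malloc(size))
            return p;
        throw std::bad_alloc();
    }
    void operator delete(void *p) { std::free(p); }
};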

So, keep your related allocations close together when crossing the language barrier!

Ray Tracing into a Sparse Voxel Octree

And just when you thought you were through with tracing things all over the place… John Carmack strikes back with a mortal blow: something about ray tracing into a sparse voxel octree!!

The article doesn’t really say much (nothing, actually) about the algorithm, and this is where the fun/fuss starts! I can’t wait to see all the amazing/crazy ideas that people from all over the world will come up with about what John is actually talking about. Theories upon theories will emerge… flames…

NVIDIA to Acquire AGEIA Technologies

According to this press release, NVIDIA will acquire AGEIA Technologies. Yeap! The well-known physics software and hardware vendor. To my mind this means that future NVIDIA-based accelerators will support physics acceleration, too. It basically means the death of the PhysX processor (the PPU), since the GPU can do the same job easily at no extra cost.

Actually, the PPU solution was never going to work. I find it quite hard to believe people would ever…

The wait is over… Sylphis3D is open source!

I just released the source to Sylphis3D! Check out the story at the Developer Network.

The wait is over! Sylphis3D is officially released under the GNU GPL v2 (with the classpath exception, for those that need closed-source solutions). The engine weighs in at around 45,000 lines of source code written in C++ and Python. The source code can be obtained from the download page of the [sourceforge.net project page](http://www.sf.net/projects/sylphis3d). Later on, the source will be added to the Subversion repository for easier access. The source code compiles under Microsoft Visual Studio .NET 2003; the makefiles and SConstruct files, for compiling with GCC, are out of date. However, the mapcompiler is up to date. The source should compile out of the box.

Tomorrow the Release

The time has come… tomorrow is the release day of Sylphis3D as an open source project. I’m very excited about this new beginning! This is going to be my biggest contribution to the open source community to date.

The source that is going to be released comes to ~45,000 lines of C++ and Python code (counted with SLOCCount), and the development cost was estimated at $1,500,000!!!

Oh.. well…. 🙂
