Thursday, December 11, 2014

How the voxel zebra got its stripes

Here is the story behind these two zebras:



The zebra at the left was handcrafted by an artist. It is a traditional polygon mesh where each triangle has UV coordinates. These coordinates are used to wrap a handpainted 2D texture over the triangle mesh.

This is how most 3D objects have been created since the beginning of time. It is a very powerful way to capture rich surfaces in models. It is very efficient, it aligns well with the hardware, and it allows you to have incredible detail and even animate the models.

Voxels can also have UV coordinates. This allows you to capture more detail at a much lower voxel resolution.
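To give a rough idea of what this means in data terms, here is a minimal sketch of a surface voxel that carries texturing attributes. The field layout and sizes are assumptions made for illustration, not Voxel Farm's actual format:

#include <cstdint>

// Sketch only: a surface voxel that stores a material plus quantized UV
// coordinates into the artist's original texture.
struct TexturedVoxel
{
    uint8_t  material; // index into a material/texture table
    uint16_t u, v;     // quantized UV into the hand-painted texture
};

// Recover floating point UVs when the voxel grid is turned back into
// triangles, so the rebuilt mesh can sample the same artist-made texture.
inline void voxelUV(const TexturedVoxel& vox, float& u, float& v)
{
    u = vox.u / 65535.0f;
    v = vox.v / 65535.0f;
}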

The zebra at the right had an interesting life. It went from the artist-made polygon mesh into a full voxel representation. Then it went back to triangles just before rendering. UV coordinates were preserved along this trip, but there is a lot of trickery involved. These are different meshes.

Both models use exactly the same texture the artist made. This is the important part. You could draw both in the same draw call.

The voxel version has fewer triangles. This is a 100x100x100 voxelization. To give you an idea of how small that is, here is the equivalent of that in 2D:
If you approached the zebra and looked at its head, the image at the left shows how big these voxels would be:


At the right you see our results. The same number of voxels can provide a lot more detail when UV coordinates are used.

I am happy with the results. To me this is as important as solving the physics problem. This will take the look of voxel scenes to a whole new level, while allowing you to harvest and destroy these carefully designed things.

This is still experimental and there are tricky issues ahead, like handling topology changes (holes closing) and dealing with aliasing. For now, I got to make a post with images of only zebras in it.



Monday, December 1, 2014

Looking inside voxel assets

You do not have to be a cat or a Floyd fan to enjoy a laser show.

Here is a laser-looking tool that allows you to explore the inside of voxelized assets. The challenge was to show the interior features of a model while keeping the context clear in the viewer's mind. The following video shows it in action:


I really like this new toy. I have already wasted many hours playing with it, checking whether any of the assets we have so far have any sort of defects inside, and getting a better understanding of how these models are built.

It also allows us to place pivot points inside our instances:



This is how we came up with it. We could not see anything inside!

Friday, November 28, 2014

Progressive LOD

We still have to apply this to entire scenes, but here you can see how it is possible to have progressive level of detail transitions with voxel data.

Below you can see the same mushroom from the earlier post going through several LODs:


Here is another example, this time using a more regular object so you can better follow the voxel splits:


Wednesday, November 26, 2014

The Missing Dimension

I believe when you combine voxels with procedural generation you get something that goes well beyond the sum of these two parts. You can be very successful at any of these two in isolation, but it is when you mix them that you open up a whole set of possibilities. I came to this realization only recently.

I was watching a TV series the other night. Actors were filmed against a green screen and the whole fantasy environment was computer generated. I noticed something about the ruins in this place. The damage was clearly done by an artist's hand. Look at the red arrows:


The way bricks are broken (left arrow) reminds me more of careful chisel work than anything else. The rubble (right arrow) is carefully arranged and placed around the floor. Also, we should be seeing smaller fragments of rock and dust.

While the artists were clearly talented, it seems they did not have the budget to create physically plausible damage by hand. The problem with the series environment was not that it was computer generated. It wasn't computer generated enough.

Consider physically-based rendering. It is used everywhere now, but there was a time when artists had to solve the illumination problem by hand. Computing photons is no different than computing rolling stones. You may call it procedural generation when it is about stones, and rendering when it is photons, but these are the same thing.

As we move forward, I see physically based generation becoming a thing. But there is a problem. Until now we have been too focused on rendering. Most virtual worlds (like game scenes) are described only as a surface. You cannot perform physically based generation in a world that is only a surface. We are missing the inner dimension.

Our world is 4D. This is not your usual "time is the fourth dimension" pickup line. The fourth dimension is the what, like when you ask what's inside a box. Rendering was focused mostly on where the what turns from air into solid, which is a 3D surface. While 3D is good enough for physically based rendering, we need 4D for a physically plausible world.

Is it bad that we are not 4D? In games this translates to static worlds, or scripted destruction at best. You may be holding the most powerful weapon in the universe, but it won't make a dent in the floor. It shows everywhere: as poor art, implausible placement of rocks, snow, debris and damage, and also as a lack of detail in much larger features like cities, castles and landscapes.

If you want worlds that can be changed by their inhabitants, or if you want to generate content by simulation, you need to know your world as a volumetric entity. Voxels are a very simple way to achieve this.

Going 4D with your content is a bit of a problem. Many of the assets you may already have will not work. Not every mesh defines a volume. Often, meshes have holes in them. They do not show because they are hidden by other parts of the object. These are not holes like the center of a doughnut. It is a cut in the mesh that makes it just a surface in 3D space, not a closed volume.
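As a side note, there is a simple way to detect this kind of open mesh: in a closed, watertight triangle mesh every edge is shared by exactly two triangles. Here is a minimal sketch of that test; the mesh representation is an assumption for illustration:

#include <cstdint>
#include <map>
#include <utility>
#include <vector>

// A triangle is three indices into some vertex array.
struct Triangle { uint32_t a, b, c; };

// Returns true if every edge is used by exactly two triangles, a necessary
// condition for the mesh to enclose a volume. Open meshes, like the capless
// mushroom stem below, fail this test.
bool isWatertight(const std::vector<Triangle>& tris)
{
    std::map<std::pair<uint32_t, uint32_t>, int> edgeUse;
    auto addEdge = [&](uint32_t i, uint32_t j)
    {
        if (i > j) std::swap(i, j); // store edges as (min, max)
        edgeUse[{i, j}]++;
    };
    for (const Triangle& t : tris)
    {
        addEdge(t.a, t.b);
        addEdge(t.b, t.c);
        addEdge(t.c, t.a);
    }
    for (const auto& e : edgeUse)
        if (e.second != 2)
            return false; // boundary edge (1 use) or non-manifold edge (>2)
    return true;
}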

Take a look at the following asset:

The stem of this mushroom is not volumetric. It is missing its cap. This does not show because the top of the mushroom is sunk into the stem and this hole is completely hidden from sight. If you tried to voxelize this stem, it would have unpredictable results. This hole is a singularity to the voxelization; it may produce all sorts of artifacts.

We have voxelization that can deal with this. If you voxelize the top and bottom together, the algorithm is robust enough to realize the hole is capped by other pieces. But we just got lucky in this case; the same does not apply to every open mesh.

Even if you get meshes that are closed and topologically correct, you are only describing a surface. What happens when you scratch the surface? If I cut the mushroom with a knife, it should reveal some sort of mushy, moist material. Where is this information coming from? Whoever is creating this asset has to put it there. The same applies to the bricks, rocks, plants, even living beings of your virtual world.

I think we have reached a turning point. Virtual worlds will remain static and very expensive to build unless we can make physically correct decisions about the objects in them. Whether to destroy them or to enhance them, we need to know what they are made of, what is inside.

Tuesday, November 25, 2014

Instance Voxelization

We are finishing the voxelization features in Voxel Studio. Here is how it looks:


At only 40x80x40 voxels it is a good reproduction of the Buddha. You can still see the smile and toes.

This computes 12 levels of detail, so when this object is distant we can resort to a much smaller representation. If you know what texture mipmaps are, you will see this is a very similar concept.
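To make the mipmap analogy concrete, here is a tiny sketch (illustration only, not Voxel Studio code) of how each successive LOD could halve the voxel resolution of the 40x80x40 Buddha until almost nothing is left:

#include <algorithm>
#include <cstdio>

int main()
{
    int x = 40, y = 80, z = 40; // full resolution of the voxelized Buddha
    for (int lod = 0; lod < 12; ++lod)
    {
        std::printf("LOD %2d: %3d x %3d x %3d voxels\n", lod, x, y, z);
        // Like texture mipmaps, each level halves the resolution,
        // never dropping below one voxel per axis.
        x = std::max(1, x / 2);
        y = std::max(1, y / 2);
        z = std::max(1, z / 2);
    }
    return 0;
}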

The LOD slider produces a very cool effect when you move it quickly: you can see the model progress quickly from high to low resolution.

And here is the Dragon at 80x80x50 voxels:


Saturday, November 15, 2014

Cantor City

The Cantor set (or how you obtain it) is probably one of the simplest fractals out there. It was one of the first examples I encountered and it helped me greatly in understanding whatever came after. Here is how I like to explain it:

1. We start with a horizontal segment.
2. We divide the segment into three equally sized segments.
3. We remove the segment in the middle.
4. Repeat from step (2) for the two remaining segments.



This produces a set of segments that puzzled mathematicians for a while. I am more interested in the fractal properties of the whole construct. It shows how a simple rule operating at multiple scales can generate complex structures. This set is too primitive to be used for realistic content, but it is the seed of many techniques you will require to achieve that.
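Before looking at the grammar version, here is the same rule written as a plain recursive function. This is a generic sketch in C++, not the grammar language, and it already includes the size cutoff discussed below:

#include <cstdio>

// The Cantor rule: split a segment in three, drop the middle third and
// recurse on the two remaining thirds. The minSize cutoff plays the same
// role as the empty "cantordust" definition mentioned below.
void cantor(double start, double length, double minSize)
{
    if (length < minSize)
        return; // small enough, stop subdividing

    std::printf("segment [%f, %f]\n", start, start + length);

    double third = length / 3.0;
    cantor(start, third, minSize);               // left third
    cantor(start + 2.0 * third, third, minSize); // right third
}

int main()
{
    cantor(0.0, 1.0, 0.05);
    return 0;
}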

This is how you would code the Cantor set using our grammars:

This is the real world so we need to make sure it stops. An easy way would be to add another definition of "cantordust" for when the size is small enough. This definition would do nothing, thus stopping any further subdivision:


Here you can see the output:


Let's make it more interesting. First let's add depth to it. Instead of segments we will use 3D boxes:


This already looks like we were growing little towers. We can make the towers taller if there is enough room:



Let's add a random element. We will not split into three equal spaces anymore, instead we will pick a random value for each iteration:


For a microsecond this image can fool you into thinking you have seen a city. I think that is pretty cool if you consider it is only a few lines of grammar code.

Of course this is not a replacement for a city or buildings. The Cantor grammars are probably the simplest you can have. You will need a lot more code to produce something that can pass as a city. But odds are it will be mostly variations of the Cantor dust.

Sunday, November 9, 2014

Life without a debugger

Some programmers love the more literary aspects of coding. They enjoy learning new languages and the new structures of thought they bring. You will often hear words like "terse" and "art" from them. Some programmers like learning different frameworks, while others die by the not-invented-here rule. Some see themselves more like civil engineers: they have little regard for the nuances of coding, frameworks are just bureaucracy to them, whatever they build must stand on its own merits.

I have been a member of all these camps in one moment or another, so really no judgment here. But what probably unites all coders is that if you write a program, you will likely have to debug it.

If you have not stepped through your code at least once, you probably do not know what it is doing. Yes you can unit test all you want, but even then you should also step through your test code.

Karma dictates that if you put a new language out there, you must also give people a way to trace and debug it. That is what we just did with our L-System and grammar language:




It turned out to be quite surprising. We are very used to tracing imperative languages like C++ and Java, but tracing the execution of an L-System is unlike anything I had tried before.

I understand the power of L-Systems and context-sensitive grammars better now. This system feels like it has the ability to foresee, to plan features ahead. You see it happening in the video: often empty boxes appear in many different spots, as if the program was testing different alternatives. That is in fact what is happening.

It looks amazing that end features like the tip of a tower may appear even before the base. In reality the program has already decided there will be a base, but its specific details are still a bit fuzzy, so they come up much later. But once they do appear, everything connects as it should; all predictions line up properly.

All that did not require planning from the programmer. In this case the code is not a series of instructions, it is more like a declaration of what the structure should be. The pieces fall into place; the debugger just shows how.



It reminds me of another type of code at work: DNA. Now that would be one really cool debugger.

Saturday, October 25, 2014

Unboxing Oculus DK2

It felt like Christmas yesterday when FedEx dropped our first Oculus development kit.

I had tried Oculus before at GDC this year. I was not particularly impressed, which was expected for an early prototype. I did get a very positive feeling about the potential of the VR medium.

It was the Couch Knights demo back then. While I was "inside" the demo I wondered why I would spend time in such a place. But it did take me to another place. This was a big deal for me, I do not remember any other device or medium getting close to that.

A few months later I had it in a box right in front of me. (I made sure it would be delivered to my home address instead of the company's so it could be just mine, at least for a few days.)


I was immediately impressed with the quality of the hardware. It was light and solid. You would get a distinct feeling this thing was properly built. The SDK was alright too.

It did not work at all in the first machine I tried. I blame Windows 8.1 and its new ability to use either integrated graphics or the standalone GPU. I see a lot of applications getting confused by that. As soon as I switched to a machine with just a GPU it began to work properly.

Then the sickness began. It was not a subtle discomfort, it made me so sick I could not function properly for the rest of the day. I am not astronaut material, but I have never been troubled by motion sickness in my life. I was aware the Oculus was making a lot of people sick, and was convinced I was not part of that population.

That experience was so bad it got me thinking. I felt poisoned. Poisons and our ability to survive them are masterpieces of evolution. So, in some sense, it is like I had evolved against VR.

What if no matter how much we improve displays, cut latency, etc. we will still be hitting biological triggers that tell your body something is wrong and it must puke its guts out?

I want to go back to working with the device. If the content is appealing, people won't mind the discomfort and will spend time building tolerance. A lot of people do sickening drugs like alcohol.

Wednesday, October 8, 2014

3ds max

We have a 3ds max plugin in the works. Here is a quick screenshot:


We want people to use Voxel Farm to create scenarios for their video projects. But this may also help with the content you create and experience in real time. For instance, any content from your game world, procedural or hand-made, could now appear rendered in very high detail. Since this does not need to run in real time anymore, we have time to crank up the voxel resolution. Exporting to max and creating a high quality rendering can be a very effective way to showcase your work.

3ds max is where this project started. Before writing a single line of code, I used 3ds max for a while to prototype geometry and texturing. It is good to be back.

Friday, October 3, 2014

Desktops, Tablets and Phones

One of my goals starting this project was to have relatively simple client applications exposing rich and complex worlds. While we later worked on generating as much as possible in the client-side, there will always be a case where you want access from power-challenged devices. Phones, tablets and even desktop web browsers do not necessarily have the power to generate everything you would like to have in your virtual world, but are still ideal mediums for people to experience it.

The good news is that generation can be offloaded to the cloud. Check out the following video. This is me at my home running several of these simple clients. The content they display is generated by servers running in Amazon's cloud:



Sunday, September 21, 2014

Your Euclideon Yearly Fix

Like every year or so, comet Euclideon comes close to us and we get to see what they have been working on. Here is a video they posted a couple of days ago:


As usual, the technology is brilliant and quite interesting. It is still similar to what Nvidia and Olick were able to show a long time ago, but nevertheless a significant feat of engineering. I am just having trouble reconciling the narrator's discourse with what we see on screen.

I was hoping they would tone down the hype at some point. They, however, named this video "The World's Most Realistic Graphics". I wonder which World they are talking about. On planet Earth, in 2014, pretty much any AAA game looks better and runs faster than this.

I'm not sure why they would go and compare themselves to engines like UE4 when everybody knows UE4 will produce far better looking graphics. Same for CryENGINE and even Unity. It is not enough to say it looks better; it has to look better. Jedi tricks do not work over YouTube.

The "conspiracy of fools" aspect is also interesting. The true sign of genius is to see all fools conspiring against you. Somewhere in this video the narrator points that many experts around the web were very critical of the tech. These are the fools. That they got so many fools to speak against them must surely mean they are geniuses.

Well we know it does not work like that. Sometimes the fools just go against other fools. Being criticized does not make you right. Now in this video they claim they have proven all the fools wrong. That has yet to happen. The burden of proof is on them.

I had some secret hopes for this tech to be applied to games, but the tech gap is growing every year. Let's see what happens next year, when the comet approaches us again.

Tuesday, September 16, 2014

Runtime Extensions

We recently developed an extension model for Voxel Studio and Voxel Farm in general. The idea is that you should be able to come up with entirely custom procedural content without having to recompile any of the tools, or even your final EXE if you like.

You can achieve this by wrapping your custom code inside an extension. During world design time, Voxel Studio is able to load your extension and allows you to input whatever configuration parameters you have chosen for it. Then, during runtime, the same extension code runs thus guaranteeing you get the same results you saw inside Voxel Studio.

Let's follow a quick example. Imagine we have developed a small scene in Voxel Studio using the default terrain component. At this point we have interacted only with the vanilla settings, so our scene looks like this:


Note this is only terrain, it does not contain other layers like rock and tree instancing, but it should be enough for this example.

Now let's say we want to add a massive sphere somewhere in the image. While we could go in edit mode and add a sphere using a voxel brush, this would set a lot of voxels. Since we know this will be a perfect sphere we can save a lot of data if we just store the center and radius and produce the voxels on the fly. Voxel Studio does not include a layer like that out of the box, but we can create it ourselves. Here is how:

Voxel Studio on Windows can load extension DLLs at runtime. You can develop the DLL in any form you like as long as the few required entry points are found. The first few are functions so Voxel Studio, and Voxel Farm in general, can ask the extension what parameters it wants to capture. And then there is one function that will return the voxel data for a given chunk of space.

So we create a new DLL project. Just by dumping the binary DLL in the extension folder, Voxel Studio should be able to find it and allow us to use it for a new voxel layer:


Here our extension has identified itself as "Mega Sphere". Clicking on it will add it to the list of voxel layers in the tree.

We then define four properties for the sphere: origin x, origin y, origin z and radius. Exposing property metadata is what allows Voxel Studio to create editors for the extension without really knowing what they are and how they will be used:


Now comes the real work. So far it was mostly about metadata; let's see how we get actual voxels out of the extension. It comes down to implementing a function that looks like this:

        bool getVoxels(
                char* object,
                VoxelFarm::CellId cell,
                double cellOrigin[3],
                double cellSize,
                double voxelSize,
                int blockSize,
                VoxelFarm::VoxelType* changeFlags,
                VoxelFarm::MaterialId* materials,
                VoxelFarm::Algebra::Vector* vectors,
                bool& empty)


I will not go into the implementation this time, but the overall idea is that this function is expected to fill the material, vector and flag 3D buffers with voxel data. The requested region of space is defined by cellOrigin and cellSize.
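As a rough illustration, a sphere layer could be implemented along these lines. This is a sketch under several assumptions: the linear ordering of the buffers, the material id and the hard-coded sphere parameters are all made up for clarity, and the vectors and change flags are left untouched:

// Sketch only. Assumes the Voxel Farm SDK headers are available and that
// the buffers are blockSize x blockSize x blockSize arrays.
bool getVoxels(
        char* object,
        VoxelFarm::CellId cell,
        double cellOrigin[3],
        double cellSize,
        double voxelSize,
        int blockSize,
        VoxelFarm::VoxelType* changeFlags,
        VoxelFarm::MaterialId* materials,
        VoxelFarm::Algebra::Vector* vectors,
        bool& empty)
{
    // These would come from the properties captured by the extension
    // (origin x, y, z and radius). Hard-coded here for brevity.
    const double cx = 1000.0, cy = 500.0, cz = 1000.0, radius = 200.0;
    const VoxelFarm::MaterialId sphereMaterial = 1; // assumed material id

    empty = true;
    for (int z = 0; z < blockSize; z++)
        for (int y = 0; y < blockSize; y++)
            for (int x = 0; x < blockSize; x++)
            {
                // World-space position at the center of this voxel.
                double wx = cellOrigin[0] + (x + 0.5) * voxelSize;
                double wy = cellOrigin[1] + (y + 0.5) * voxelSize;
                double wz = cellOrigin[2] + (z + 0.5) * voxelSize;

                double dx = wx - cx, dy = wy - cy, dz = wz - cz;
                if (dx * dx + dy * dy + dz * dz > radius * radius)
                    continue; // outside the sphere, leave this voxel alone

                // Assumed linear index; the real layout is defined by the SDK.
                int index = (z * blockSize + y) * blockSize + x;
                materials[index] = sphereMaterial;
                empty = false;
            }
    return true;
}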

Once we code this to output a sphere, we are finally able to refresh our render view and see the results:



Here you can see some spheres. The one in the last image has a 10 km radius. Naturally we could have developed a more interesting layer, for instance a meteorite impact zone or ore veins, but hopefully you get the idea.

One last thing: using native code for extensions always brings up the issue of security. We debated this for a while when designing the system. We finally chose DLLs just because they allow you to run native code without penalty. You can get really close to the metal. The security aspect can be managed by certification, and also by running the DLL in a lower OS integrity mode, thus restricting the kind of access it would have over the system. And of course you can always have a DLL extension you trust that acts as a wrapper for code you do not trust, but runs in Lua or some other form of application language where you are certain it can be contained.

Sunday, September 7, 2014

Voxel Studio Revisited

We are working to make Voxel Farm accessible for a bigger crowd. Among other things we are easing Unity integration and improving our tools. Here is a teaser of the new Voxel Studio application, showing some terrain building:


We removed a lot of the complexity from the old Voxel Studio. No need to set up or connect to a render Voxel Farm anymore for instance.

The tool allows you to work on the macro features of your world like terrain, lakes, caves, architecture, then to jump in and edit the voxels directly. We will be showing more of that in the future.



Friday, August 22, 2014

Eureka!

Tired of running and swimming in virtual worlds? I see people wanting to build their own vehicles in order to get around. When the time comes, their imagination should be free to create ships and airships in any form they like.

Voxels are a good help here. They are volumetric by nature, and we can always know what kind of material is there. Each material can then have its own specific weight. So wood is lighter than stone, and stone may be lighter than raw iron. These properties have a great deal of influence on how things interact with each other, how they rotate, and how their center of mass is computed.

Air is no different. It is meant to weigh something. So what happens if you have a material lighter than air? Check out the following video:



Here the balloons are just regular voxels. They could be any shape. The trick is they have a material that is much lighter than air. As soon as they become free-standing objects, they rise. This is because the air they displace weighs more than they do, and this force may be big enough to overcome gravity.
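The balance is easy to picture with a couple of numbers. Here is a small sketch with made-up densities and voxel counts; only the sign of the result matters, positive meaning the object rises:

#include <cstdio>

int main()
{
    // Made-up values; only their relative order matters for the effect.
    const double airDensity     = 1.2;  // kg/m^3
    const double balloonDensity = 0.2;  // a material lighter than air
    const double gravity        = 9.81; // m/s^2

    // Suppose the free-standing balloon is made of this many voxels.
    const double voxelVolume = 0.1 * 0.1 * 0.1; // m^3 per voxel
    const int    voxelCount  = 50000;

    double displacedAirMass = voxelCount * voxelVolume * airDensity;
    double objectMass       = voxelCount * voxelVolume * balloonDensity;

    // Positive: the displaced air outweighs the object and it rises.
    double netUpwardForce = (displacedAirMass - objectMass) * gravity;
    std::printf("net upward force: %.2f N\n", netUpwardForce);
    return 0;
}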

We will be looking at the same principle for water in a future iteration.

Tuesday, August 19, 2014

Dynamic Fluids

You may have caught some of this if you attended SOE Live or if you watched some of the audience videos from the panels.

We have been working for a while on a cellular automata system. The following video shows some early results:



As you can see, the automaton is producing something that resembles flowing water. I guess with different rules we could represent other phenomena, like smoke, clouds, slime and lava. Water in particular is a tough system because it likes to flow pretty fast.

This system is already able to schedule simulation so areas near the player (or players) get more simulation ticks. Simulation frequency is also affected by how much the system "wants" to change in a given area. For instance, a stable puddle of water will get far fewer ticks than water falling from a cliff.
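Here is a hedged sketch of that scheduling idea, with made-up constants: the tick rate falls with distance to the nearest player and rises with how much the region wants to change:

#include <algorithm>

// Decide how many simulation ticks per second a region of water deserves.
// The formula and constants are made up for illustration; the real system
// only needs the same two ingredients: player distance and local activity.
double ticksPerSecond(double distanceToNearestPlayer, double activity /* 0..1 */)
{
    const double maxRate = 30.0; // fully active water right next to a player
    const double minRate = 0.5;  // stable, distant puddles barely tick

    // Attention fades with distance; 32 m is an arbitrary falloff scale.
    double attention = 1.0 / (1.0 + distanceToNearestPlayer / 32.0);

    return std::max(minRate, maxRate * attention * activity);
}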

This makes the solution manageable from a computational point of view. This approach does create its own set of problems. For instance, if nobody is there to experience the water, it does not really change. As someone approaches, the simulation frequency will increase gradually. This is enough to deceive players, I hope; the player influence area is large enough to mask the fact that distant water does not update as frequently.

There are a few significant challenges ahead. We need to make sure this scales when we have hundreds of players. Also, not everything in a game world is made out of voxels; eventually we will need to make props block and modify the flow of water.

Thursday, August 14, 2014

Appetite for Destruction

We continue to work non-stop on the physics for Voxel Farm. It has to be that way: physics is a huge component of what we are trying to build. All the pretty things we can help you create must also come down.

In earlier videos you saw things breaking apart. Now we can do it at a much larger scale, and with all sorts of flying fragments and debris. We are taking a page from Michael Bay on this one. We want enough stuff blowing up to make you run for cover under your desk.

Still a long way to go, but I am very happy with our latest results. Have a look for yourself:


An interesting point for me in the video is the creation of the chair. It shows how physics can be an intuitive way of making objects for your world:


These objects you could rotate, position and scale at will. It would be very similar to a traditional mesh prop.

Monday, August 4, 2014

Mind of a Terrain Engineer

A few days ago I decided to write a new terrain system. I wanted to see if I could get better performance, and also additional features. I will post more about this experience, for now I just wanted to share the feeling I got working on terrains again. I put together this inspirational video for all the terrain makers out there:



Yes it is very silly, I could not resist. I realized I had taken enough screenshots along the process and probably a single video was the best format to share them.






Wednesday, July 23, 2014

Final Logo

A while ago I posted about our efforts to find a logo for Voxel Farm. Back then we got a lot of very useful feedback from you guys. We took all these comments and suggestions and worked on them for a while. During this time we used the red cow as an interim logo.

It seems we have reached a conclusion. The cow is out and we have a new shape to identify us:


I liked the cow, but it really had some issues. In our board meetings the vegan members were not cool with disintegrating cows. The others were getting hungry from the logo, it reminded them too much of a steak house.

I like the new shape because it can be a flower, a hurricane or a galaxy. If you are into sacred geometry you may also see more into it. I do not care much for sacred geometry, secular geometry already gives me enough trouble, but in this case it did help us find a nice shape.

As usual let me know what you think.

Friday, July 18, 2014

Video Update for July 2014

Here is a video covering some of the work we did in the past few weeks. This time you get to see the water voxels up close. There is more physics and a look at the new rendering engine.

  

 As usual let me know what you think.

Monday, June 30, 2014

Off-grid Copy

It seems June was Clipboard month. Just to close on that topic, there is one interesting thing we did with the clipboard I would like to introduce: off-grid copying. You may ask what that is, and more importantly, whether we really want it.

Imagine you want to copy a piece of a scene you have done. Normally you would create a selection box and copy its voxel contents into the clipboard. The selection box can be resized, but so far, the selection boxes have been necessarily aligned to the world axes.  What if you could rotate the selection box?


It does bring up an interesting possibility. Now you can get any slice of an existing object; you are no longer constrained to the horizontal or vertical.

The following video shows this in action:



In this case we were just rotating a selection box. The really nice bit is that it does not have to be a box. It can be any arbitrary volumetric shape.

This opens up a new set of tricks. You could for instance make a statue. Then select the statue using a regular on-grid selection box and copy it into the clipboard. Now comes the trick: you could use the clipboard contents (the statue) as the selection scope. This would copy the voxels as usual, but their outside shape will still conform to the original statue.
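A generic sketch of that trick (not the actual clipboard code): the clipboard contents act as a mask, and only the voxels where the mask is solid get copied. Grid layout and material encoding are assumptions for illustration:

#include <vector>

// A flat 3D grid of material ids, where 0 means empty. Layout and types
// are illustrative, not the engine's.
struct VoxelGrid
{
    int dimX, dimY, dimZ;
    std::vector<int> material;

    int index(int x, int y, int z) const { return (z * dimY + y) * dimX + x; }
};

// Copy from 'world' into 'clipboard', but only where 'mask' (for example a
// previously copied statue) has solid voxels. The world region and the mask
// are assumed to have the same dimensions.
void maskedCopy(const VoxelGrid& world, const VoxelGrid& mask, VoxelGrid& clipboard)
{
    clipboard = mask; // same dimensions as the mask
    for (int z = 0; z < mask.dimZ; z++)
        for (int y = 0; y < mask.dimY; y++)
            for (int x = 0; x < mask.dimX; x++)
            {
                int i = mask.index(x, y, z);
                clipboard.material[i] = (mask.material[i] != 0) ? world.material[i] : 0;
            }
}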

Another way to see it is that you can perform boolean operations using the clipboard. What I like about the approach is that it feels simple once you try it. You do not need to understand the "booleaness" of it all. Just like you can have circular or even free-shaped selections in Photoshop, you can have your selection take any form you need. Sweet!


Monday, June 2, 2014

Copy Paste Paste Paste

I did not see this one coming. I must thank players for showing me this: Old-school copy and paste helps a lot when you are building with voxels. Take a look at this screenshot of a Landmark build:


It may not be obvious at first glance, but a lot of what you see here is the result of copying and pasting.

It is true that non-voxel editing systems also have copying, but I believe this is a very different mechanic. A voxel clipboard is a new thing.

When you paste objects in a mesh-based system (like Maya, or a traditional game engine editor), you are cloning objects or creating instances of them. This is very cool and allows you to do many invaluable tricks; however, every time you paste, a new object goes into the scene. If you paste overlapping an existing object, they do not merge into one. In some sense, pasting in mesh systems does not help you build a larger thing. You build an assembly of things.

In mesh-based editors you get a sense that every paste counts. The scene complexity grows every time you press Ctrl+V. Yes, you could stack some boolean modifiers and have any new pasted meshes go into them, but this is getting complex now. My interest is in systems anyone at home can figure out.

Copy-Paste with voxels feels organic. It is more like the clipboard in a word processor. You get things you like and combine them in a new form. Then you copy the new form and use it as an element of something else. All this time you are working on only one thing, it remains simple in your mind. You are not leaving a long trail of objects behind you.

One thing is certain, we are taking the clipboard very seriously now. Here is a video showing some new cool tricks we are able to do:



There are still aliasing issues, just like with our line tool, so some configurations may not paste back into pristine conditions. I think this is alright as long as you remain aware of what the limits are. And of course, our plan is to continue to bring down these limits.


Thursday, May 22, 2014

New renderer

Our new deferred renderer is taking shape:



I like the water in particular. Still many features missing, like point lights you can move around. And I haven't given up yet on a realtime global illumination hack. I think this is mandatory today to make interior spaces interesting.

This seems like a lot of work. Does it make sense to build your own renderer? My interests are more around content generation and management.

Having a custom renderer has some advantages. It makes it very easy to get any project going. Once this engine is packaged into some form of SDK, it will help that anyone can make changes, compile them and run them without any third party libraries (just the OS and OpenGL).

And then, looking at the problems you get when implementing a renderer gives you some perspective. That tiny pixel on the horizon should grow and become a fully volumetric rock. Understanding how these distant features can be produced by the shaders helps tie them down to their close-range volumetric representations.

Monday, May 5, 2014

Grammar Time

Yes, grammars. The last update had a hint of this, but here you can see them in more detail. As you probably know, we are building a repertoire of architectural grammars. The following video shows a few of them:



This set is geared towards medieval, fantasy settings, but I suspect most of these grammars would hold for a different theme.

This system becomes very interesting once you consider that the ending building blocks could be replaced by elements that tie closely to your project's vision. At any time you could replace the arches, columns, ornaments, even bricks, with custom components you may have sculpted earlier. What the grammar gives you is the order and structure of these prefabs, but the final look and feel can be pretty much up to you.

As you can see, for the moment we are focusing on smaller grammars. You can think of these as smart brushes that will allow you to lay walls, floors, bridges, even towers in the locations you choose.

While grammars are able to express entire buildings (even cities), I believe we need to start small and allow you to place these smaller elements following your imagination. Also there is little point in generating an entire castle if the building elements are not interesting enough. So we are making sure we have a solid repertoire of grammars before we take on larger things. Even something as simple as a basic stair tool can save you a lot of time.


As usual, let me know what you think. I am particularly interested in how many of these you think we would need for a particular theme, how generic or specific they should be, and what kind of parameters you would like to have as inputs.


Friday, April 18, 2014

Video Update for April 2014

Wondering what happened in the last few months? Here is an update:


There are several things we did that were not covered in this update. You will notice a river in the background but there is no mention about water.


It is not that we are hydrophobic or that we want to tease you about this feature, we just want to spend more time improving the rendering.

I also go on in this update about how clean and sharp our new tools are. There is indeed a big difference in the new toolset, but there are still serious issues with aliasing when you bring detail beyond what the voxels can encode. For instance, the line tool can now do much better lines, but we still cannot do a one-voxel-thick line that goes at any angle. This is because fixing the aliasing in such a line would require sub-voxel resolution. So it is OK to expect cleaner lines, but they can still break due to aliasing.

Monday, March 24, 2014

Landmark Voxel Creations


Not sure if you knew about this, but Everquest Next Landmark entered Alpha about one month ago. During that month players were introduced for the first time to the voxel world and tools we have jointly developed with Sony Online Entertainment.

We still have a lot of work to do. The game just entered Beta. Still, I marvel at the incredible creations made by the players in such a short time and with such early versions of the tools.

Have a look:


I do not know about you, but it seems to me player-generated content does come close to what game studios can do. Hopefully very soon we will be able to completely blur that line.

Probably the biggest surprise was to see all the emergent techniques devised by the players. We knew our voxels were able to encode all sorts of funny things; however, the specifics of how they could be achieved were a purely player-driven development. Players even had to name these things, so they gave us "microvoxels", "antivoxels", "zero-volume voxels" and other similar things that actually make a big difference in how you can create in the game.

Someone once told me the best software you can write is one that won't have any users. You can relax and have a life. Users (or players in this case) are that reality check developers secretly fear so much. Now I realize this software cannot exist in isolation from the builder community. Thanks to our players we continue to learn and understand about all the emerging properties of the platform we have created.

Keep up the amazing work guys!

Tuesday, March 18, 2014

Parallel Concerns

Computers continue to increase in power, pretty much at the same rate as before. They double their performance every 18 to 24 months. This trend is known as Moore's law.  The leaders of the microprocessor industry swear they see no end to this law in the 21st century, that everything is fine. Some others say this will come to an end around 2020, but with some luck we should be able to find another exponential to ride.

Regardless of who you believe, you should be wary of the fine print. Moore's law is about transistor count; it says nothing about how fast they operate. Since the sixties, programmers became used to seeing algorithms improve their performance with every new hardware generation. You could count on Moore's law to make your software run twice as fast every 18 months. No changes were needed, not even to recompile your code. The same program would just run faster.

This way of thinking came to an end around 2005. Clock frequencies hit a physical limit. Since then you cannot compute any faster, you can only compute more. To achieve faster results the only option is to compute in parallel. 

Chip-makers try not to make a big deal out of this, but there is a world of difference for programmers. If they were car-makers, it would be like their cars had reached a physical limit of 60 miles per hour. When asked how you could do 120 miles per hour, they would suggest you take two cars.


Many programmers today ignore this monumental shift. It is often because they produce code for platforms that already deal with parallelization, like database engines or web servers. Or they work creating User Interfaces, where a single thread of code is already fast enough to outpace humans.

Procedural generation requires all the processing power you can get. Any algorithms you design will have to run in this new age of computing. You have no choice but to make them run in parallel.

Parallel computing is an old subject. For many years now programmers have been trying to find a silver bullet approach that will take a simple serial algorithm and re-shuffle it so it can run in parallel. None has succeeded. You simply cannot ignore parallelization and rely on wishful thinking. Your algorithms have to be designed with it in mind. What is worse, your algorithm design will depend on the hardware you choose because parallel means something different depending where you go.

This post is about my experience with different parallel platforms.

In my view, today there are three main hardware platforms worth considering. The first one is traditional CPUs, like the x86 series by Intel. Multicore CPUs are now common, and the number of cores may still grow exponentially. Ten years from now we could have hundreds of them. If you manage to break your problem into many different chunks, you can feed each chunk to an individual core and have them run in parallel. As the number of cores grows, your program will run faster.

Let's say you want to generate procedural terrain. You could break the terrain into regular chunks by using an Octree, then process many Octree cells in parallel by feeding them to the available cores.
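Here is a minimal sketch of that chunking approach using standard C++ threads; the cell and result types are placeholders, and a real system would use a proper thread pool rather than one task per cell:

#include <future>
#include <vector>

// Placeholders for one octree cell worth of terrain work and its output.
struct Cell { int id; };
struct CellResult { int id; /* generated voxels would go here */ };

CellResult generateTerrain(const Cell& cell)
{
    // Plain, old-fashioned serial generation code for one chunk lives here.
    return CellResult{ cell.id };
}

// Feed the cells to whatever cores are available, then sew the results
// back together in order.
std::vector<CellResult> generateAll(const std::vector<Cell>& cells)
{
    std::vector<std::future<CellResult>> jobs;
    for (const Cell& cell : cells)
        jobs.push_back(std::async(std::launch::async, generateTerrain, cell));

    std::vector<CellResult> results;
    for (auto& job : jobs)
        results.push_back(job.get());
    return results;
}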

The x86 platform has the nicest, most mature development tools. Also, since the number of concurrent threads is not very high, the parallelization effort is about breaking the work into large chunks and sewing the results back together. Most of the actual computation remains serial. This is a bonus: anyone can write stable serial code, but you will pay dearly for good parallel programmers. The more old-fashioned serial code you can write within your fully parallel solution, the better.

Being so oriented towards serial, generic code is also the weakness of traditional CPUs. They devote a great deal of transistors and energy to dealing with the unpredictability of generic algorithms. If your logic is very simple and linear, all this silicon goes to waste.

The second hardware platform is the GPU. These are the video cards powering games in PCs and consoles. Graphics rendering is highly parallel. The image you see on screen is a matrix of pixels, where each pixel's color can be computed mostly in isolation from the rest. Video cards have evolved around this principle. They allow hundreds of concurrent threads to run in parallel, each one devoted to producing one fragment of the final image. Compared to today's average CPUs, which allow only eight simultaneous threads, hundreds of threads may seem a bonanza.

The catch is all these GPU threads are required to run the same instruction at the same time. Imagine you have 100 dancers on stage. Each dancer must execute exactly the same steps. If just one dancer deviates from the routine, the other dancers stop and wait.


Perfect synchronization is desirable in ballet, not so much in general purpose computing. A single "if" statement in the code could be enough to create divergence. What often happens when an "if" is encountered, is that both branches are executed by all threads, then the results of the branch that was not supposed to run are discarded. It is like some sort of weird quantum reality where all alternate scenarios do happen, but only the outcome of one is picked afterwards. The cat is really both dead and alive in your GPU.

Loops having a variable number of iterations are a problem too. If 99 of the dancers spin twice and the one remaining dancer -for some inexplicable reason- decides to spin forty times, the 99 dancers won't be able to do anything else until the rogue dancer is done. The execution time is the same as if all the dancers had done forty loops. 

So programming mainstream GPUs is great as long as you can avoid loops and conditional statements. This sounds absurd, but with the right mindset it is very much possible. The speedup compared to a multithreaded CPU implementation may be significant.

There are some frameworks that allow general purpose programs to run on GPUs. CUDA is probably the best known. It is deceptively simple. You write a single function in a language almost identical to C. Each one of the many threads in the GPU will run this function at the same time, but each one will input and output data from a different location. Let's say you have two large arrays of the same size, A and B. You want to compute array C as the sum of these two arrays. To solve this using CUDA you would write a function that looks like this:

void Add(in float A[], in float B[], out float C[])
{
  int i = getThreadIndex();
  C[i] = A[i] + B[i];
}

This is pseudocode, the actual CUDA code would be different, but the idea is the same. This function processes only one element in the array. To have the entire array processed you need to ask CUDA to spawn a thread for each element. 

One big drawback of CUDA is that it is a proprietary framework. It is owned by Nvidia and so far it is limited to their hardware. This means you cannot run CUDA programs on AMD GPUs.

An alternative to CUDA is OpenCL. OpenCL was proposed by Apple, but it is an open standard like OpenGL. It is almost identical to CUDA, maybe a tad more verbose, but for a good reason: not only does it run on both Nvidia and AMD GPUs, it also runs on CPUs. This is great news for developers. You can write code that will use all computing resources available.

Even with these frameworks to aid you, GPU programming requires a whole different way of thinking. You need to address your problem in a way that can be digested by this swarm of rather stupid worker threads. You will need to come up with the right choreography for them, otherwise they will wander aimlessly, scratching their heads. And there is one big skeleton in the closet. It is easy to write programs that run on the GPU, but it is hard to make full use of it. Often the bottleneck is between the GPU and main memory. It takes time to feed data and read results. Adding two arrays in the naive form, as shown in the example before, would spend 99% of the time moving data and 1% doing the actual operation. As the arrays grow in size, the GPU performs poorly compared to a CPU implementation.

So which platform should you target, CPUs or GPUs?

Soon it may not be so clear-cut anymore. CPUs and GPUs are starting to converge. CPUs may soon include a higher number of less intelligent cores. They will not be very good at running unpredictable generic code, but will do great with linear tasks. On the other side, GPUs will get better at handling conditionals and loops. And new architectures are already improving the bandwidth issues. Still, it will take a few years for all this to happen and for this hardware to become mainstream.

If you were building a procedural supercomputer today, it would make sense to use a lot of GPUs. You would get more done for a smaller electricity bill. You could stick to Nvidia hardware and hire a few brainiacs to develop your algorithms in CUDA.

If you want to crowd-source your computing and have your logic run by your users, GPUs also make a lot of sense. But then using a general purpose computing framework like CUDA or OpenCL may not be a good idea. In my experience you cannot trust everyone to have the right stack of drivers you will require. In the absence of a general computing GPU framework you would need to use graphics rendering APIs like DirectX and OpenGL to perform general purpose computing. There is a lot of literature and research on GPGPU (general purpose computing on GPUs), but this path is not for the faint of heart. Things will get messy.

On the other hand, CPUs are very cheap and easy to program. It is probably better to get a lot of them running not-so-brilliant code, which after all is easier to produce and maintain. As often happens, hardware excellence does not prevail. You may achieve a lot more just because of how developer-friendly the platform is and how easy it will be to port your code.

This brings us to the third parallel platform, which is a network of computers (aka the cloud). Imagine you get hundreds of PCs talking to each other over a Local Area Network. Individually they could be rather mediocre systems. They will be highly unreliable: hard drives could stop working at any time, power supply units could get busted without warning. Still, as a collective they could be cheaper, faster and simpler to program than anything else.

Friday, March 14, 2014

GDC 2014

I will be visiting GDC this year with a couple of friends from Voxel Farm. If you would like to meet us drop me a line at miguel at voxelfarm.com

Saturday, March 8, 2014

Procedural Information

Information is the measure of what you do not know. 

You just looked at your watch and realized it is 3:00 PM, then someone comes into your office and tells you it is 3:00 PM. The amount of information the person gave you amounts to zero. You already knew that. That person did give you data, but data is not necessarily information.

Information is measured in bits, bytes, etc. If you ask someone, "Is it raining out there?", the answer will be one bit worth of information, no matter what the weather looks like.

You are now looking at a photo of a real lake on your computer screen:



Let's imagine it is the first time you see this photo. This is information to you, but how many bits of it? You could check the file size, it is already in bytes. It turns out it is a BMP file and it is 300 KBytes. Did you just receive 300 KBytes through your eyes? Somehow this seems suspicious to you. You know that if the file was compressed as a PNG the file size would be a lot less, probably around 90 KBytes, with no visual degradation. So what is going on, did you just see 300 or 90 KBytes? Nobody can tell you the right amount. Your eyes, brain and psyche are still mysterious objects to modern science. But whatever it is, it will be closer to 90 than 300. The PNG compression took out a lot of bits that were not really information. Compression algorithms reshuffle data in ways that make redundancy evident. Then they take it out. It is like having someone else stop that person before entering your office to announce it was 3:00 PM.

How is this related to procedural generation? Now imagine I have sent you this little EXE file. It is only 300 KBytes. When you run it, it turns out to be a game. You see terrain, trees, buildings. There are some creatures that want you dead. You learn to hate them back, you fight them everywhere you go. You find it amusing that even if you keep walking, this world appears to never end. You play for days, weeks. Eventually you realize the game's world is infinite, it has no limit. All this was contained in 300K, yet the information coming out of it appears to be infinite. How is this possible?

You are being tricked. You are not getting infinite information, it is all redundant. The information was the initial 300 KBytes. You have been listening to echoes believing someone was talking to you. This is a hallmark of procedural generation: a trick of mirrors that produces interesting effects, like a kaleidoscope. A successful procedural generator deceives you into thinking you are getting information when you are not. That is hard to achieve. In the same way we love information, we dislike redundancy. It wastes space and time, it does nothing for us. Our brains are very good at discovering it, and we adapt quickly to see through any new trick.

Now, does this mean software cannot create information? There is energy going into this system, can it be used for more than powering infinite echoes? This is one of the big questions out there. It is beyond software. Can anyone create information at all?

If you look at the lake picture again, you may ask yourself how it came to be. Not the picture, the actual lake. Is it there partly by chance, or because there was no other choice? Its exact shape, location and size, could they be the inevitable result of a chain of cause-effect events that started when the Universe began? If that is the case, the real lake is not information, it is an echo of a much smaller but powerful universal seed.

The real answer probably does not matter. Even if the lake was an echo of the Big Bang, 42 or some sort of universal seed, the emergent complexity is so high we cannot realize it. Our brains and senses cannot go that far. If you are ready to accept that, then yes, software can create information.

The key is simulation. Simulations are special because they acknowledge the existence of time, cause and effect. You pick a set of rules and a starting state, and you let things unfold from there. If humans are allowed to participate by changing the current state at any point in time, the end results could be very surprising.

The problem with simulation is that it is very expensive. If you keep the rules too simple or simulate for very little time, the results may not be realistic enough. If you choose the right amount of rules and simulate for the right amount of time, you may realize it would take too long to be practical.

When it comes to procedural generation you will find these two big families of techniques. One family is based on deception; it produces no information, but it is fast and cheap. The other family has great potential, but it is expensive and difficult. As a world builder you should play to their strengths and avoid their pitfalls. And, what is more interesting, learn how to mix them.

Wednesday, February 12, 2014

Vacansopapurosophobia

The terror of staring at a blank page. We have all felt it at one time or another. Even seasoned artists can feel anxiety when starting a project from scratch. If someone is watching over your shoulder, it can become unbearable. At that point you may just give up and save yourself all the stress.

Sandbox games put the player in a similar situation. You now have the ability to shape the world and create beautiful things, but again there is the game world as a blank canvas looking back at you, wondering if you are going to suck as a creator again.

If you share this sandbox world with others, you may have already been exposed to all sorts of wonderful creations. Maybe you have seen them on YouTube or Twitch. These people know how to build perfect columns, archways and vaults for their creations.

So you start placing a few blocks, but it gets you nowhere close to that grandiose design you had in mind. It is not that you lack the imagination. What you don't have is the technique to produce the elements you want.

We have designed a tool that can level the playing field.

For a long time now we have had architecture grammars. These grammars take a block of space and create something inside. I have shown many examples so far. Very often they were used to generate entire buildings. The trick is, you do not have to target full buildings for these grammars to be very useful.

You can realize your vision by combining simpler elements like roofs, balustrades, columns, arches, vaults, etc. And these elements, instead of being labor-intensive, could be just the output of smart architecture grammars.

The fact that these are full-fledged programs means they get to adapt to different configurations. For instance, look at the output of this architecture program:



The program was clever enough to reinforce the corners. Yes, you could do this by hand, but imagine how many corners like this one you will get in your massive castle. Also note how the program has introduced a random element to the stone placement, making the walls more interesting.

Here is a video showing a different example:


As you can see the program adds more columns as they are needed. No distortion in the columns occurs as you make the box wider or taller.
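The rule behind this kind of adaptation can be tiny. Here is a hedged sketch (plain C++, not the grammar language, with a made-up preferred spacing) of how a grammar might decide how many columns fit a wall without stretching any of them:

#include <algorithm>
#include <cstdio>

// Pick how many columns to place along a wall of the given width so the
// spacing stays close to a preferred value instead of stretching columns.
int columnCount(double wallWidth, double preferredSpacing = 3.0)
{
    int spans = std::max(1, (int)(wallWidth / preferredSpacing + 0.5));
    return spans + 1; // fence-post count: one more column than spans
}

int main()
{
    for (double width = 3.0; width <= 18.0; width += 3.0)
        std::printf("width %4.1f m -> %d columns\n", width, columnCount(width));
    return 0;
}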

At this point your are no longer dealing with the masonry. It is like you had hired one of the talented builders from YouTube to do the low level work for you. Knowing you have a repertoire of interesting prefabs you can combine in imaginative ways will surely help you get over your vacansopapurosophobia.