
Monday, October 17, 2016

Slaying the LOD monster

LOD, or Levels of Detail, is a technique for managing very large and dense scenes, such as those found in today's open world games. The general idea is that you keep multiple copies of the same content at different resolutions, then switch which copy to use depending on how much detail the scene requires.

LOD switches can be implemented as plain swaps, morphing between states as shown in this earlier post, or even as completely adaptive mesh representations, as shown here.
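To make the morphing flavor concrete, here is a minimal geomorphing sketch. It assumes each fine-LOD vertex is paired with the coarse position it collapses to, which is the classic setup; this is my own illustration, not the engine's actual code:

```cpp
#include <vector>
#include <cstddef>

struct Vec3 { float x, y, z; };

// Blend a fine-LOD mesh toward its coarse-LOD counterpart.
// Assumes the two vertex arrays are the same size, with each fine
// vertex paired to the coarse position it collapses to (the common
// geomorphing setup; the actual scheme in the engine may differ).
void geomorph(const std::vector<Vec3>& fine,
              const std::vector<Vec3>& coarse,
              float t,                      // 0 = full detail, 1 = coarse
              std::vector<Vec3>& out)
{
    out.resize(fine.size());
    for (std::size_t i = 0; i < fine.size(); ++i)
    {
        out[i].x = fine[i].x + (coarse[i].x - fine[i].x) * t;
        out[i].y = fine[i].y + (coarse[i].y - fine[i].y) * t;
        out[i].z = fine[i].z + (coarse[i].z - fine[i].z) * t;
    }
}
```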

Sometimes we get fixated on LOD swaps and drift into devising tricks to make them harder to see. I feel this is not the right angle from which to look at the problem. While LOD swapping techniques are very important, and progressive swaps are much nicer, the real goal is to have as much detail as possible in the first place. If you find yourself disguising LOD swaps, it is probably because your scenes still include levels of detail that are too coarse.

How much detail do you need? Just enough to make LOD transitions close to sub-pixel. Once LOD swaps are sub-pixel, it does not matter which swapping technique you use. It also means your rendering of the content is as detailed as the screen resolution allows.
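A quick way to reason about this: a geometric error of e world units viewed at distance d, with vertical field of view fov on a screen h pixels tall, covers roughly e * h / (2 * d * tan(fov / 2)) pixels. Keep that value below one and the swap cannot be seen. Here is a small sketch of that test; the names and the LOD layout are my own illustration:

```cpp
#include <cmath>

// Project a LOD's geometric error (in world units) to screen pixels.
// This is the standard screen-space-error metric; treat the names
// here as illustrative, not as the engine's actual API.
float projectedErrorPixels(float worldError, float distance,
                           float fovY, float screenHeight)
{
    return worldError * screenHeight / (2.0f * distance * std::tan(fovY * 0.5f));
}

// Pick the coarsest LOD whose use would still be sub-pixel.
// errors[i] is the geometric error of LOD i, growing with i.
int selectLod(const float* errors, int lodCount, float distance,
              float fovY, float screenHeight)
{
    int lod = 0;  // fall back to the finest copy when nothing qualifies
    for (int i = lodCount - 1; i >= 0; --i)
    {
        if (projectedErrorPixels(errors[i], distance, fovY, screenHeight) < 1.0f)
        {
            lod = i;  // this coarser copy is still indistinguishable
            break;
        }
    }
    return lod;
}
```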

This sounds like a Holy Grail. Is it feasible on the current hardware generation? We believe it is. Take a look at the following video:

As you can see in the video, buildings preserve their detail even when viewed from afar. LOD changes, while they do happen, are virtually impossible to detect. All this while keeping interactive frame rates.

It turns out the same techniques that prevent buildings in the distance from looking like melted crayons also give you smooth LOD transitions. I will be covering how all this works in my next post.

Thursday, September 8, 2016

Introducing Farm Cloud

We have included a preview of our collaborative editing system in the latest Voxel Farm 3 release.

The following short video shows what it takes to set up and share a project from scratch:

We are still working on this system, but hopefully you get the idea. Our focus is to make the setup as simple as possible so people can start collaborating without significant effort.

This is also intended for more than collaboration and source control. The same layer of distributed persistence can be used by any application to store and sync the changes its users make.
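To give a flavor of what such a layer involves, here is a hypothetical sketch of a per-cell change log with a simple last-writer-wins merge. None of these names come from the actual Farm Cloud API:

```cpp
#include <cstdint>
#include <map>
#include <tuple>
#include <vector>

// Hypothetical change-log layout for a distributed persistence layer.
struct CellKey
{
    int32_t x, y, z;  // which world cell the change touches
    bool operator<(const CellKey& o) const
    {
        return std::tie(x, y, z) < std::tie(o.x, o.y, o.z);
    }
};

struct CellChange
{
    uint64_t version;              // monotonically increasing per cell
    std::vector<uint8_t> voxels;   // serialized voxel data for the cell
};

// Merge a batch of remote changes into the local store, keeping the
// newest version of each cell (a simple last-writer-wins policy).
void merge(std::map<CellKey, CellChange>& local,
           const std::map<CellKey, CellChange>& remote)
{
    for (const auto& entry : remote)
    {
        auto it = local.find(entry.first);
        if (it == local.end() || it->second.version < entry.second.version)
            local[entry.first] = entry.second;
    }
}
```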

This is a very active area of development for us. I will be posting more about it in the future; meanwhile, as always, I look forward to any questions you may have about this system.

Monday, August 22, 2016

Pyramid Scene in UE4

Here you can see a video of the Pyramid scene from the previous post:

There are many improvements in this new engine version. Aside from the new features, like textured voxels and procedural materials, we invested a great deal of time in the quality of the experience. There is still some pop-in, which comes from a little bug in the meta-material layer. Aside from that, the LOD changes are quite stable and unnoticeable for the most part. The use of UV-mapped voxels also helps minimize the pop-in.

Another old problem area was the replanting of procedural flora (and other objects) every time there was an edit. The new instancing system provides a stable representation that transfers the existing planting onto the new geometry. Only edits that actually affect the planted instances result in a visual change.
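One way to picture this (an illustrative sketch, not our actual implementation): instances carry stable identities independent of the mesh, and after an edit only the ones whose footprint the new geometry no longer supports are dropped:

```cpp
#include <cstdint>
#include <vector>
#include <functional>

struct Vec3 { float x, y, z; };

struct Instance
{
    uint64_t id;     // stable id derived from cell + slot, not from geometry
    Vec3 position;
};

// Keep plantings stable across edits: unaffected instances survive
// verbatim; only those the edited surface can no longer support go away.
// surfaceStillSupports is a stand-in for whatever validity test the
// engine runs against the new geometry.
std::vector<Instance> replantAfterEdit(
    const std::vector<Instance>& existing,
    const std::function<bool(const Vec3&)>& surfaceStillSupports)
{
    std::vector<Instance> kept;
    for (const Instance& inst : existing)
        if (surfaceStillSupports(inst.position))
            kept.push_back(inst);
    return kept;
}
```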

The destruction and building shown here are controlled by Blueprints inside UE4. You could easily swap the bombs for a pickaxe or rockets.

Thursday, August 4, 2016

System Integration

Here is a scene that combines all the recent developments in Voxel Farm:

This integration is subtle. There are no giant turtle mountains here, but the same system we used for the giant turtle is applied here to produce the natural rock pillars connected by the bridges, as well as the platform where the pyramid rests:

These natural structures are defined by a low-resolution mesh that captures the basic shape. This is expanded in real time into detailed features like rocks, sand, and dirt using procedural materials that we call Meta-materials.
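Reduced to its simplest possible form, the expansion step looks something like the sketch below, where a cheap hash noise stands in for whatever noise basis the engine actually uses:

```cpp
#include <cmath>
#include <cstdint>

struct Vec3 { float x, y, z; };

// Cheap lattice hash noise in [-1, 1]; purely illustrative.
static float hashNoise(int32_t x, int32_t y, int32_t z)
{
    uint32_t h = static_cast<uint32_t>(x) * 374761393u
               + static_cast<uint32_t>(y) * 668265263u
               + static_cast<uint32_t>(z) * 2147483647u;
    h = (h ^ (h >> 13)) * 1274126177u;
    return static_cast<float>(h & 0xFFFF) / 65535.0f * 2.0f - 1.0f;
}

// Displace a point on the coarse base mesh along its normal by
// material-specific noise: the "expansion" a meta-material performs,
// boiled down to a single sample.
Vec3 expandSurfacePoint(const Vec3& p, const Vec3& normal,
                        float frequency, float amplitude)
{
    float n = hashNoise(static_cast<int32_t>(std::floor(p.x * frequency)),
                        static_cast<int32_t>(std::floor(p.y * frequency)),
                        static_cast<int32_t>(std::floor(p.z * frequency)));
    return { p.x + normal.x * n * amplitude,
             p.y + normal.y * n * amplitude,
             p.z + normal.z * n * amplitude };
}
```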

The large terrain around the pillars and the pyramid's base uses Voxel Farm's standard procedural terrain.

This scene also uses UV-mapped voxels, a different method of applying textures to voxels that allows much finer control. The entire pyramid is built with them:
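For illustration only (the real storage format is not published here), a UV-mapped voxel can be pictured as a material id plus explicit, quantized texture coordinates, which is what gives this method its finer control compared to deriving coordinates from the material alone:

```cpp
#include <cstdint>

// Illustrative layout: instead of deriving texture coordinates from
// the material id, each voxel stores explicit UVs into an atlas.
struct UvVoxel
{
    uint16_t material;   // which texture/material in the atlas
    uint8_t  u, v;       // quantized texture coordinates, 0..255
};

// Decode the quantized UVs into the [0, 1] range used by a shader.
inline void decodeUv(const UvVoxel& vx, float& u, float& v)
{
    u = vx.u / 255.0f;
    v = vx.v / 255.0f;
}
```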

There is also the new instancing system, which is responsible for all the trees, plants, and rocks you see scattered around. You cannot see it in screenshots alone, but this new system is able to preserve existing instances even in the event of user edits and destruction. This avoids the visible "replanting" of vegetation and rocks around edits.

And all of this is running in Unreal Engine 4. I would say the UE4 integration was the last of the systems that went into taking these shots.

Friday, July 15, 2016

Intelligent Terrain Synthesis

Don't you hate it when your favorite TV series puts out an episode that is just clips of stuff that happened in earlier episodes? This post has some of that, but hopefully it will give you a better idea of how we see procedural generation in the near future.

This video shows the new procedural terrain system to be released in Voxel Farm 3:

In case you want to find out more about what is happening under the hood, these previous posts may help:

Geometry is Destiny Part I and Part II
Introducing Pollock
Terrain Synthesis

The idea is simple. Instead of asking an artist to labor over hundreds of different assets, either by hand or by using complex generation tools like World Machine, we now have a synthetic entity that can do some of that work through a mix of AI and simulation. You do not have to be an expert, or initiated at all in the arts of procedural generation, to get a satisfactory outcome.

Why are AI and simulation important? After working for a while in procedural generation, it became clear to me that there was no workaround to the entropy problem, which I believe can be stated like this: viewers of procedurally generated content will perceive only the "seed" information, not the "expanded" data. Yes, you may have a noise function that can output terabytes of data, but all this data will be compressed by the human psyche down to the few bytes it takes to express the noise function itself. I posted in more detail about this problem here:

Uncanny Valley of Procedural Generation
Procedural Information
Evolution of Procedural

This does not mean all procedural generation is bad. It means it must produce information in order to be good. Good procedural generation is closer to simulation, automation, and AI. You cannot have information without work being done, and if work is to be done, better to leave it to the machine.

The video at the top shows our first attempt at having an AI that can do procedural generation. Stay tuned, because more is coming.
