Friday, September 23, 2011

How much data?

While we are on the subject of big data, I wanted to show how much detail these algorithms are able to generate. It may give you an idea of the amount of information that needs to be delivered to clients, one way or another.

Before anything else, a disclaimer: the following images (and video) are taken from a mesh preview explorer program I have built. The quality of the texturing is very poor; it is textured in realtime using shaders, and there are a lot of shortcuts, especially in the noise functions I use. The lighting is also flat: no shadows, no radiosity. But it is good enough to illustrate my point about these datasets.

Take a look at the following screenshot:

[Screenshot: rock outcrops in the mesh preview program]

Nothing fancy here, just some rock outcrops. In the next two screenshots you can see how many polygons are generated for this scene:

[Screenshots: wireframe views showing the polygon density of the scene]

This is how meshes are output from the dual-contouring stage. If you look carefully you will see there is some mesh optimization. The dual contouring is adaptive, meaning it can produce larger polygons where the detail is not there. The problem is that with this type of procedural function there is always detail. This is actually good: you want to keep all the little cracks and waves in the surface. It adds character to the models.
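
For those curious about what "adaptive" means in practice, the test is essentially a quadric error metric: a cluster of octree cells can be collapsed into a single vertex only if the merged vertex keeps the error under some tolerance. Below is a minimal sketch of that test, illustrative only and not the actual engine code; the packed QEF layout and the tolerance value are assumptions:

```cpp
// Minimal sketch of the adaptive simplification test in dual contouring.
// An octree cell may be collapsed into a single vertex only if the
// quadric error function (QEF) of the merged vertex stays under a
// tolerance. The layout and tolerance here are illustrative assumptions.
struct QEF {
    double ata[6] = {0};  // A^T A, symmetric 3x3, packed xx,xy,xz,yy,yz,zz
    double atb[3] = {0};  // A^T b
    double btb = 0;       // b^T b

    // Accumulate one Hermite sample: a plane through point p with normal n.
    void add(const double n[3], const double p[3]) {
        const double d = n[0]*p[0] + n[1]*p[1] + n[2]*p[2];
        ata[0] += n[0]*n[0]; ata[1] += n[0]*n[1]; ata[2] += n[0]*n[2];
        ata[3] += n[1]*n[1]; ata[4] += n[1]*n[2]; ata[5] += n[2]*n[2];
        atb[0] += n[0]*d; atb[1] += n[1]*d; atb[2] += n[2]*d;
        btb += d*d;
    }

    // Residual E(x) = x^T(A^T A)x - 2 x^T(A^T b) + b^T b for vertex x.
    double error(const double x[3]) const {
        const double ax[3] = {
            ata[0]*x[0] + ata[1]*x[1] + ata[2]*x[2],
            ata[1]*x[0] + ata[3]*x[1] + ata[4]*x[2],
            ata[2]*x[0] + ata[4]*x[1] + ata[5]*x[2]};
        return x[0]*ax[0] + x[1]*ax[1] + x[2]*ax[2]
             - 2.0*(x[0]*atb[0] + x[1]*atb[1] + x[2]*atb[2]) + btb;
    }
};

// Cells merge into one larger polygon patch only when the combined
// QEF error of their best vertex is small enough.
bool canCollapse(const QEF& combined, const double vertex[3],
                 double tolerance = 1e-4) {
    return combined.error(vertex) < tolerance;
}
```

With fractal noise there is almost always enough residual error near the surface to fail this test, which is why the wireframes above stay so dense.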

Here is another example of the same:

[Screenshots: a second scene and its wireframe]

These scenes are about 60x60x60 meters. Now extrapolate this amount of detail to a 12km by 12km island. Are you sweating yet?
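
To put rough numbers on that extrapolation: 12km is 200 of these 60m chunks per side, so the surface alone covers 40,000 chunks. Here is the back-of-the-envelope math; the polygons-per-chunk figure is a made-up placeholder, since the real count depends on the noise functions and the contouring tolerance:

```cpp
// Back-of-the-envelope extrapolation from one 60m chunk to the island.
#include <cstdio>

int main() {
    const double islandSide = 12000.0;   // meters
    const double chunkSide  = 60.0;      // meters
    // Assumed polygon count for one dense 60m chunk; purely illustrative.
    const long long polysPerChunk = 500000;

    const long long chunksPerSide = (long long)(islandSide / chunkSide); // 200
    const long long surfaceChunks = chunksPerSide * chunksPerSide;       // 40,000
    const long long totalPolys    = surfaceChunks * polysPerChunk;

    // Note this counts only a single layer of surface chunks.
    std::printf("%lld chunks, %lld polygons (%.0f billion)\n",
                surfaceChunks, totalPolys, totalPolys / 1e9);
    return 0;
}
```

And that still ignores the vertical axis: a volumetric world also has detail above and below that single surface layer.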

I leave you with a screen capture of the mesh preview program flying over a small portion of this island. It is the same 60m x 60m x 60m view moving around; detail outside this view is clipped. The meshes loading here are a bit more optimized, but they still retain a lot of detail. I apologize for the motion sickness this may cause. I did not spend any time smoothing camera movement over the terrain.

4 comments:

  1. Wow. As usual the beautiful terrain just blew me away. The escarpment at 5:40ish was a really nice bit.

    I find the terrain is very plausible -- apart from a few odd bits: floating rocks, a hard line between dark and light green at 4:40, the scattered white rocks which looked out of place.

    That escarpment made me think how prehistoric the landscape appeared. It would make a wonderful site for a bronze/iron age hill fort. Now I want a game where I lead a tribe in this sort of terrain and need to build a settlement, manage resources, go hunting, and of course defend against and attack other tribes!

    (if I go down the sim/strategy route for the game I'm developing those sorts of things will be possible, but I've been leaning more towards RPG lately...still a lot to do engine-wise before I have to make that decision)

  2. Really nice to see the underlying wireframes in that video. I can understand the issues you will be getting when it comes to dealing with exponential polycounts (unlike heightmap generation, you have another axis to multiply by!). Are you still running this all via CUDA? I've only really run some CUDA demos (my card didn't run them too well :().

    I've been working on my own procedural world engine, and I've tried a number of ways to optimise and deliver the data. Essentially I chunk the world, and for a while I was saving chunks once they had been built (both by saving just the noise cube and by saving the polys). I found that not only did it create some insane save files (admittedly I hadn't done much optimisation), but it was almost as quick to just rebuild the chunks from the original noise functions in real time. This obviously avoids any disk storage at all. But I am storing other items in chunked save files, ones that aren't related directly to the mesh itself.
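
    Roughly what I mean, as a minimal sketch (all the names and types here are made up, not actual engine code): because the noise is deterministic, a chunk is a pure function of its grid coordinates, so a cache miss just triggers a rebuild instead of a disk read.

    ```cpp
    // Sketch of the "rebuild instead of save" idea: chunks are keyed by
    // grid position and regenerated from the noise functions on demand,
    // so nothing mesh-related ever touches the disk.
    #include <cstddef>
    #include <cstdint>
    #include <functional>
    #include <tuple>
    #include <unordered_map>
    #include <vector>

    struct Mesh {
        std::vector<float> verts;
        std::vector<std::uint32_t> tris;
    };

    using ChunkKey = std::tuple<int, int, int>;

    struct KeyHash {
        std::size_t operator()(const ChunkKey& k) const {
            auto [x, y, z] = k;
            std::size_t h = std::hash<int>()(x);
            h = h * 31 + std::hash<int>()(y);
            h = h * 31 + std::hash<int>()(z);
            return h;
        }
    };

    class ChunkSource {
    public:
        const Mesh& get(const ChunkKey& key) {
            auto it = cache_.find(key);
            if (it == cache_.end())
                // Cache miss: rebuild from the noise functions in real time.
                it = cache_.emplace(key, buildFromNoise(key)).first;
            return it->second;
        }

    private:
        Mesh buildFromNoise(const ChunkKey& key) {
            Mesh m;
            // Stand-in for the real sampling + contouring pass; the point
            // is only that the result depends on nothing but the key.
            auto [x, y, z] = key;
            m.verts = { float(x), float(y), float(z) };
            return m;
        }

        std::unordered_map<ChunkKey, Mesh, KeyHash> cache_;
    };
    ```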

    I find that the limit seems to be one of scale vs detail. You can only generate a certain amount of detail (i.e. polys) and keep pace with player movement. It may be different for you if you are using CUDA, but I'm also trying to anticipate extra load from gameplay and any AI etc. that will end up in the engine later.

    Anyway, as usual amazing work! gj!

  3. @pdurdin: I agree there are some hits and misses with the terrain. The good news is that these terrain algorithms are generic; it is possible to achieve very different features by changing the number of layers and their parameters.

  4. @nullpointer: I check out your work and website regularly: http://www.nullpointer.co.uk/content/

    I think your Infinite Terrain demo hits the sweet spot of what can be achieved by realtime generation. As you say, there is a fine balance between scale, detail and how fast the viewer moves. If you target average hardware, it becomes very hard to create a satisfying experience, something you can turn into a game.

    If you throw in more complex generation like architecture and vegetation, it borders on the impossible.

    For this reason I chose not to generate on-the-fly. I precompute the world's geometry, then I send it to clients. I use OpenCL for the generation, which is similar to CUDA, so it is fairly cheap for me to create a little farm of rendering GPUs. But OpenCL programming is time-consuming. For regular budgets and projects it makes more sense to buy more CPUs and use traditional serial programming, which comes out cheaper.

    As hardware evolves, CUDA or OpenCL may become useful for on-the-fly generation. This comes at a price: the GPU is already busy rendering the world, so it would have to share cycles with the generation logic. I think a satisfying balance will be possible a few years from now.
