Wednesday, December 16, 2015

Going Meta

We have this new system we are currently testing. We call it Meta Materials. What is a meta material? It is just stuff.

Let's build an abstract snowman:

We could render this using a nice snow material for the snowballs. For traditional rendering this would be a normal map and some other maps describing surface properties, like roughness, specularity, etc.
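
For contrast with what follows, here is a minimal sketch of what such a conventional, single-scale material might look like as data. The struct and map names are purely illustrative, not the engine's actual format:

```cpp
#include <string>

// A conventional surface material: a fixed set of texture maps that only
// describe the surface, with no notion of scale or interior structure.
// All names here are illustrative, not the engine's actual format.
struct SurfaceMaterial {
    std::string albedoMap;     // base color
    std::string normalMap;     // per-pixel surface orientation
    std::string roughnessMap;  // micro-facet roughness
    std::string specularMap;   // specular reflectance
};

int main() {
    SurfaceMaterial snow{ "snow_albedo.png", "snow_normal.png",
                          "snow_roughness.png", "snow_specular.png" };
    // However close the camera gets, this material can only shade a single,
    // perfectly defined surface: there is no information below texel scale.
    (void)snow;
    return 0;
}
```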

What happens if the camera gets extremely close? In a real snowman you would see that there is no single clear surface. The packed snow is full of holes and imperfections. If we go close enough we may be able to see the ice crystals.

A meta material is information about stuff at multiple scales. It helps a lot when dealing with larger things than a snowman. A mountain for instance:


An artist hand-built this at the grand level. It would be too much work to manually sculpt every rock and little bump in the ground. Instead, the artist has classified what type of material goes in every spot. These are the meta materials. At a distance they may be rendered using just surface properties; up close, however, they can be quite large geometric features measuring 10 meters or more.
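
As a rough illustration of the idea (and only an illustration, not the actual engine interface), a meta material can be thought of as a material that also knows how to produce geometry below a certain scale. Every name and threshold in the sketch below is an assumption:

```cpp
#include <cmath>
#include <cstdio>

// Hypothetical sketch: a meta material carries both ordinary surface
// properties and a procedural rule for geometric detail. Below
// 'geometryScale' (meters per voxel) the rule is evaluated as real
// geometry; above it, only surface shading is used.
struct MetaMaterial {
    const char* name;
    float geometryScale;                         // switch point, in meters
    float (*detail)(float x, float y, float z);  // signed offset added to the base surface
};

// Toy detail function standing in for the real procedural rules:
// a bump field that would read as rocks and lumps up close.
static float rockyDetail(float x, float y, float z) {
    return 0.5f * std::sin(x * 3.1f) * std::cos(z * 2.7f) + 0.1f * std::sin(y * 9.0f);
}

int main() {
    const MetaMaterial rockyGround{ "rocky_ground", 10.0f, rockyDetail };

    const float voxelSizes[] = { 40.0f, 10.0f, 1.0f };  // meters per voxel at different LODs
    for (float voxelSize : voxelSizes) {
        bool asGeometry = voxelSize <= rockyGround.geometryScale;
        std::printf("voxel size %5.1f m -> %s\n", voxelSize,
                    asGeometry ? "evaluate procedural geometry"
                               : "shade with surface properties only");
    }
    return 0;
}
```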

We would like a system that seamlessly covers from mountain to snowflakes, but we are taking smaller steps:


Here you can see a massive feature measuring more than 2 kilometers.

As you approach it, it comes into nice detail. Here is something that looks like a cave entrance. It is on top of the large feature. Note the rocks on the ground; these come from the meta material:


I like this direction; it is a simple and robust way to define procedural objects of any shape and size. We intend to release this as an engine feature soon, and I am looking forward to seeing what people can create with a system like this. I will be posting more screenshots and videos about it.



17 comments:

  1. I really like this direction. I imagine one day being able to procedurally generate 3D surface textures with voxels. In most cases it wouldn't tax the system too much, as the user would be too far away from the objects to trigger that LOD, but if a user held a rock/branch/metal pipe/etc. there would be that additional level of detail that really drives the power of procedural generation home.

    ReplyDelete
  2. I haven't been following TOO closely, but is this a future way to guide/seed the general shape for terrain creation? Or is that handled with other functionality?

    ReplyDelete
    Replies
    1. General terrain shape creation happens by other means. It can be sculpted or simulated; both produce a very coarse coverage of space because they are expensive to perform. Meta materials fill in the gaps by producing detail in the 20 meter to one meter scale. (A toy sketch of this split follows after this reply.)

      Delete
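
A toy sketch of that split between coarse shape and meta-material detail; the functions and numbers are stand-ins invented for illustration, not the real pipeline:

```cpp
#include <cmath>
#include <cstdio>

// Stand-in for the coarse, sculpted or simulated terrain: a smooth
// heightfield with no features smaller than roughly 20 m.
static float coarseTerrainHeight(float x, float z) {
    return 100.0f * std::sin(x / 400.0f) + 60.0f * std::cos(z / 300.0f);
}

// Stand-in for a meta material's contribution: band-limited detail whose
// features live roughly in the 20 m down to 1 m range.
static float metaMaterialDetail(float x, float z) {
    float d = 0.0f;
    for (float wavelength = 20.0f; wavelength >= 1.0f; wavelength *= 0.5f)
        d += 0.2f * wavelength * std::sin(x / wavelength) * std::cos(z / wavelength);
    return d;
}

int main() {
    // The final surface is the coarse shape plus the detail of whatever
    // meta material the artist classified at this spot.
    const float x = 123.0f, z = 456.0f;
    const float height = coarseTerrainHeight(x, z) + metaMaterialDetail(x, z);
    std::printf("terrain height at (%.0f, %.0f): %.2f m\n", x, z, height);
    return 0;
}
```
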
    2. I was thinking: how much more intensive would it be to go down to, say, a 1 mm voxel density, but only have these higher-density grids available at very close distances?

      Delete
    3. I know in the past you have mentioned that your procedural algorithm will generate voxels based on position (and I think velocity), but would it be beneficial to also include where the camera is looking? I'm imagining that it would be efficient if the center of the screen was biased toward a higher voxel density than the edges. Also, does your procedural generation happen in a single pass, or does it build up to higher densities? I'd imagine that when the camera is moving, the 1 mm voxel densities I mentioned previously would simply not be generated, but when the camera stops and there is processing power available for generation, finer and finer details would be filled in.

      Delete
    4. You can have 1 mm voxel density and it would not be that intensive. Actually, it would not be any different. A clipmap makes complexity manageable in both directions, the immensely large and the very small.

      You could have human-sized scales at LOD 8. The scene would render from LOD 8 up until the horizon, which would be LOD 16 or so. But then you could shrink yourself 64 times and see a very different reality at LOD 2. A chair next to you would appear the size of a cathedral. While in this tiny realm, you could take a microscope and still see another two scales below. (A small numeric sketch of these LOD scales follows after this reply.)

      Delete
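
If we assume each LOD level doubles the voxel size (the base size at LOD 0 below is made up for illustration), the numbers in the reply work out roughly like this:

```cpp
#include <cmath>
#include <cstdio>

int main() {
    // Assumption: each LOD level doubles the voxel size, and LOD 8 is
    // roughly "human scale". The base size at LOD 0 is made up.
    const float lod0VoxelSize = 0.001f; // 1 mm at LOD 0 (assumed)

    const int lods[] = { 2, 8, 16 };
    for (int lod : lods) {
        const float voxelSize = lod0VoxelSize * std::pow(2.0f, static_cast<float>(lod));
        std::printf("LOD %2d -> voxel size %g m\n", lod, voxelSize);
    }

    // Shrinking yourself 64x means dropping log2(64) = 6 LOD levels:
    // from LOD 8 down to LOD 2, where a chair can look like a cathedral.
    return 0;
}
```
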
    5. That is what I was thinking. Is the trouble when you add motion into the mix? Or do you simply scale back the LOD distances based on velocity?

      Delete
    6. Motion is OK, but camera speed must also shrink if you do not want the camera to outpace generation. So if you become 10 times smaller, your top speed must also be 10 times smaller. You will still perceive the world moving at the same speed, since now you perceive 10 times more detail.

      Delete
    7. Let's say half the screen is covered by a half-height wall, and the other half of the screen is far-away scenery. If you were to have an extremely fine level of generation on the wall, that detail would be very noticeable. However, the moment the camera starts moving along the wall and new content needs to be generated, the viewer will find it very difficult to focus on all of this generated data, so continuing to procedurally generate at this level of detail would be a waste. It would be much more efficient to work in a velocity multiplier that prevents your algorithms from adding extra detail to geometry that will only be on screen for a very short period of time.

      Delete
    8. This would allow crazy levels of detail without having to shrink the character or restrict their movement speed. (A toy sketch of such a velocity bias follows after this reply.)

      Delete
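
A minimal sketch of what such a velocity bias could look like; none of this comes from the engine, and the thresholds and formula are invented for illustration:

```cpp
#include <algorithm>
#include <cstdio>

// Hypothetical velocity bias: the faster the camera moves, the coarser the
// finest LOD the generator is asked to produce. All numbers are invented.
static int finestAllowedLod(float cameraSpeed) {
    const int finestLodAtRest = 2;   // full detail when standing still
    const int coarsestCap     = 8;   // never go coarser than this
    // One extra level of coarsening for every 5 m/s of camera speed.
    const int bias = static_cast<int>(cameraSpeed / 5.0f);
    return std::min(coarsestCap, finestLodAtRest + bias);
}

int main() {
    const float speeds[] = { 0.0f, 4.0f, 12.0f, 40.0f }; // meters per second
    for (float speed : speeds)
        std::printf("camera speed %5.1f m/s -> finest generated LOD %d\n",
                    speed, finestAllowedLod(speed));
    return 0;
}
```
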
  3. Why stop there? IIRC there are already some platforms advertising eye tracking, and having it for everything should only be that many years off. The way human eyes work, you are only actually looking at a tiny fraction of the screen, and the rest is blurry and colorless peripheral vision filled in by the brain. It's even possible to measure attention this way as well... Sorry, getting lost in sci-fi dreams.

    So yeah, not actionable quite yet, but if this becomes reality, this software will be in the perfect position to take advantage of it and skip a decade or two of Moore's law overnight.

    ReplyDelete
    Replies
    1. It is a nice dream. I remember reading somewhere that the human eye can move one degree per millisecond. One degree is enough to shift the focus to a whole different area, which means the sensing and then the display update must have lower latency.

      Delete
    2. No... The eye can move at that rate, but your brain can't comprehend what it is seeing at that rate.

      Delete
    3. Comprehend, maybe not, but if the focus region updates a bit late, it is likely to be picked up by our brains. We are wired to notice change, even in the periphery. The wiggling wherever you look will likely convince you to scratch your eyes out. But this is wild speculation on my side.

      Delete
    4. From my understanding, the periphery is far more susceptible to noticing change. It's there to attract our attention, and it is also why fluorescent lights appear to flicker when you are not staring directly at them. Seeing as most VR headsets are gunning for 90 fps, I assume a 1/90 s delay (roughly 0.01 s, rather than the 0.001 s mentioned above) shouldn't be perceptible.

      Delete
    5. Also, hopefully the eye movements are not entirely unpredictable, meaning you could do some machine-learning magic and try to stay a step or two ahead, rendering in advance a few path-shaped regions where the eye seems most likely to look next.

      Delete
  4. Not sure this is the right place to ask this question, but it may be an extension to this system.

    Would it be possible through the meta material method to add glue/hardness properties as well? You have physics, so if you destroy material, anything that is no longer supported will fall. This is great! But there is a single hardness; a block wall, for example, may fracture once it hits something else.

    In addition, tunneling behavior based on the material you are tunneling through. If you are tunneling through sand, it should be difficult (or impossible), as it would collapse around you unless something with substantial surface tension holds it up.

    This behavior could be applied to erosion from environmental influence as well. (A toy sketch of what per-material properties could look like follows after this comment.)

    ReplyDelete
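
Purely as a thought experiment on the commenter's suggestion, per-material physical properties could be stored alongside the meta material; everything below, fields and values alike, is hypothetical:

```cpp
#include <cstdio>

// Hypothetical per-material physical properties that a destruction or
// tunneling system could consult. Fields and values are invented.
struct MaterialPhysics {
    const char* name;
    float hardness;  // resistance to fracturing (arbitrary units)
    float cohesion;  // how well unsupported material holds together, 0..1
};

int main() {
    const MaterialPhysics materials[] = {
        { "dry_sand",   0.1f, 0.05f },  // tunnels collapse as you dig
        { "block_wall", 3.0f, 0.60f },  // stands, but fractures on impact
        { "granite",    9.0f, 0.95f },  // stable tunnels, hard to break
    };
    for (const MaterialPhysics& m : materials)
        std::printf("%-10s hardness %.1f, cohesion %.2f -> %s\n",
                    m.name, m.hardness, m.cohesion,
                    m.cohesion < 0.2f ? "tunneling collapses" : "tunnels can hold");
    return 0;
}
```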