Database-driven lookdev. I just came up with that term right now, and it almost sounds legit, doesn't it? The reality is that I'm sure something like it already exists and is used in production, to some extent, somewhere. Through a couple of simple ideas and procedures, surfacing something like a massive full-CG environment could be handled with almost no manual artist work beyond one initial, simple step: defining the characteristics of any given object.
Most major VFX houses have already put "object tagging" into practice. Object tagging, for those who aren't aware, is the labeling of a part of a model at the object level with a reference code, which you can define in any number of ways. That reference code can state what type of material should be assigned, what UV tile the object is associated with, what matte ID you'd like to give it, and so on. All of which can be read by post-render submission scripts, or even inside the 3D software itself.
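For the curious, here's a minimal sketch of what a tagging script could look like, written in Python against Maya's cmds module. The attribute names (mtlType, uvTile, matteID) are invented for illustration; every studio would have its own convention:

```python
# Hypothetical tagging helper -- a minimal sketch using Maya's cmds module.
# Attribute names (mtlType, uvTile, matteID) are made up for illustration.
import maya.cmds as cmds

def tag_object(node, material_type, uv_tile=None, matte_id=None):
    """Stamp a model node with lookdev reference codes as string attributes."""
    for attr, value in (('mtlType', material_type),
                        ('uvTile', uv_tile),
                        ('matteID', matte_id)):
        if value is None:
            continue
        # Add the attribute once, then set it, so re-tagging doesn't error.
        if not cmds.attributeQuery(attr, node=node, exists=True):
            cmds.addAttr(node, longName=attr, dataType='string')
        cmds.setAttr(node + '.' + attr, str(value), type='string')

# e.g. tag_object('building01_wall_geo', 'concrete', uv_tile='1001', matte_id='red')
```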
What I would like to see with object tagging is something more specific to the tedious task of surfacing thousands of objects. Tagging assignments happen at the modeling stage; there's just no way around that...at the moment. So if someone is going to go through that painstaking task, everyone else shouldn't have to struggle the way the modelers do--we might as well benefit from their work.
A model gets tagged on its individual object nodes with a definition of material type. Is this wall concrete? Is this window heavily tinted? Is this floor granite? Once all of those objects have been tagged accordingly, you'd publish the model. After that, you'd run the auto-surfacing process (another term I just came up with), which would match each object's tag against a shader library and hook the two up automatically.
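The matching pass itself would be dead simple. Here's a sketch of the idea, again against Maya's cmds module; the shader library dict and the mtlType attribute are assumptions from the tagging example above, and in practice the library would be queried from the lookdev database:

```python
# A sketch of the auto-surfacing pass: walk the published geometry, read each
# object's tag, and look up a matching shader in a (hypothetical) library.
import maya.cmds as cmds

# Hypothetical library: material tag -> shading group published by lookdev.
SHADER_LIBRARY = {
    'concrete': 'concreteA_SG',
    'tinted_glass': 'tintedGlassA_SG',
    'granite': 'graniteA_SG',
}

def auto_surface(nodes):
    for node in nodes:
        if not cmds.attributeQuery('mtlType', node=node, exists=True):
            continue  # untagged object -- leave it for an artist to handle
        tag = cmds.getAttr(node + '.mtlType')
        shading_group = SHADER_LIBRARY.get(tag)
        if shading_group:
            # forceElement moves the object into that shader's shading group
            cmds.sets(node, edit=True, forceElement=shading_group)

# e.g. auto_surface(cmds.ls(type='mesh', long=True))
```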
You'd have to make sure every object has a set UVW scale--let's say, 3ft by 3ft. All of your shaders would have a base tileable texture that matches real-world scale in compliance with that 3x3 standard. On top of the base shader, you'd do everything else through procedurals: ambient occlusion masks, fractals, normals-driven AO masks (like richDirt), etc. You'd essentially eliminate visible texture tiling and ensure no two objects have the exact same texturing. The auto-surfacing process should also cycle through random variations of any given type of shader. You'd have, let's say, 50 different concrete shaders, and it would automatically randomize the assignments so that there are as few identical assignments as possible.
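One cheap way to get "as few identical assignments as possible" is to shuffle the variant list once and deal it out round-robin, rather than rolling independent random picks. A minimal, DCC-agnostic sketch (the names are hypothetical):

```python
# A sketch of the variation-cycling idea: given, say, 50 concrete shaders,
# shuffle them and deal them out round-robin so repeats are only as common
# as the ratio of objects to variants forces them to be.
import itertools
import random

def assign_variations(nodes, shader_variants, seed=0):
    """Return {node: shader} with variants spread as evenly as possible."""
    rng = random.Random(seed)          # seeded so re-runs are reproducible
    deck = list(shader_variants)
    rng.shuffle(deck)
    dealer = itertools.cycle(deck)     # round-robin through the shuffled deck
    return {node: next(dealer) for node in nodes}

# e.g.:
# variants = ['concrete%02d_SG' % i for i in range(50)]
# assignments = assign_variations(wall_nodes, variants)
```

Seeding the shuffle matters more than it looks: it means the randomization is stable from publish to publish, so a re-run doesn't re-scramble an environment that supervisors have already approved.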
Random assignments would clearly introduce a problem of too much randomness, so some sort of control would be required: a color palette distribution percentage (more reds? more grays?), an architectural look percentage (more brick buildings than anything else? more concrete?), and so on. There would have to be a process somewhere in the mix that drives all of these options. Obviously, you could manually define these during the modeling process, but doing so would limit your flexibility to change things after the fact.
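Those distribution percentages boil down to weighted random selection. Here's a sketch of what that control layer could look like; the weight table is invented for illustration, and in practice it would live in the lookdev database so it can be redialed after the fact:

```python
# A sketch of the distribution controls: instead of uniform randomness,
# draw variants against art-directed weights ("50% brick, 30% concrete...").
import random

# Hypothetical art-direction table; weights just need to sum to 1.0-ish.
PALETTE_WEIGHTS = {'red_brick': 0.5, 'gray_concrete': 0.3, 'tan_stucco': 0.2}

def weighted_assign(nodes, weights, seed=0):
    """Return {node: look} drawn according to the given percentages."""
    rng = random.Random(seed)
    looks = list(weights)
    w = [weights[look] for look in looks]
    # random.choices draws with replacement according to the given weights
    return {node: rng.choices(looks, weights=w, k=1)[0] for node in nodes}

# e.g. weighted_assign(building_nodes, PALETTE_WEIGHTS)
```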
Anyway, after having worked at a couple of different facilities on different shows and all-CG environments, I figured that, as automated as things had been, there was still just way too much manual labor involved, and not enough automated post-processing. This is just one idea for overcoming the time-consuming process of surfacing, which would free up a lot more time to focus on more important things, like coffee-and-cigarette breaks. And it's not outlandish, by any means, because I've seen bits and pieces of these processes scattered here and there, mainly at Digital Domain--the concept just needs to go further, and I think this is one approach to take it there.