Guest YoBeardMaestro Posted May 1, 2009

I'd like to create a digital asset to make it easier for artists to generate point clouds from geometry and bake occlusion data from that geometry into the point cloud for re-use at render time. The purpose is simply to speed up the ambient occlusion pass for static geometry by importing the occlusion data from the geometry's point cloud rather than recomputing the occlusion for each frame. In this case my geometry is sufficiently complex (think 'procedural city') to rule out the shader unwrapping technique.

I reckon this asset should be a SOP node, as the positions of the points in the point cloud will vary according to the geometry itself, and there's no reason why the occlusion shader couldn't be bundled into the SOP asset. Ideally, the asset would let the user specify the file path for the point cloud and the number of points to scatter over the geometry (plus any occlusion-specific parameters as necessary), but most importantly it should expose a button that, when pressed, re-generates the point cloud file, exporting the occlusion channel for each point into that file. My limited knowledge of HDAs suggests I'm going to need to delve into scripts and/or VEX code to accomplish this (which thus far I haven't), so I'd certainly be grateful for any pointers you can give me.

I've played around with the pcpack on odForce (just search for 'pcpack' on the odForce forums if you're not familiar with it), but I can't use those shaders to bake occlusion data for each point in the point cloud from SOPs, and it doesn't seem right to be baking the occlusion data in SHOPs, since I need to iterate over *all* the points in the point cloud, not just points on the geometry that are visible to the camera. At render time, of course, a SHOP shader would access the point cloud as necessary to shade each surface point on the geometry as it is iterated over by the renderer.

Of course, if anyone has had experience baking out occlusion data (or any kind of data, for that matter) into point clouds for re-use at render time and has a different suggestion then, as they say, 'I'm all ears'.

Thanks, YBM.
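For the render-time half of this idea, a minimal sketch of what the point-cloud lookup might look like in a VEX surface shader is below. The file name "occlusion.pc", the "occ" channel name and the parameter defaults are all placeholder assumptions for illustration (not anything from the pcpack); the idea is simply that pcopen()/pcfilter() average the pre-baked occlusion from the nearest stored samples around the shaded position.

```
// Minimal sketch of a render-time lookup shader (VEX surface context).
// Assumptions: the baked point cloud lives in "occlusion.pc", is stored in
// world space, and carries a float channel called "occ". All names are
// placeholders for illustration.
surface pc_occlusion_lookup(string pcfile    = "occlusion.pc";
                            float  searchrad = 0.5;
                            int    maxpts    = 8)
{
    float occ = 1.0;    // fall back to "fully unoccluded" if no points found

    // Shading-space P is typically in camera space, so transform it into
    // the space the cloud was baked in before searching.
    vector pw = ptransform("space:world", P);

    // Look up the nearest baked samples and filter their occlusion values.
    int handle = pcopen(pcfile, "P", pw, searchrad, maxpts);
    if (handle >= 0)
    {
        occ = pcfilter(handle, "occ");
        pcclose(handle);
    }

    // Output the interpolated occlusion directly as the pass.
    Cf = set(occ, occ, occ);
}
```

If the cloud were baked in object space instead, the transform would change accordingly; the important thing is only that the lookup happens in the same space the points were written in.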
Ezz Posted May 1, 2009

Hi. I'm not an expert at all on this subject, but I know it's possible to access the point data from a GI calculation. Digital Tutors covers that in their Introduction to Mantra DVD. Maybe you can use that to control the color on your geometry via a custom SHOP somehow. I know it's not the same as ambient occlusion, but it could lead you in the right direction.

Erik

Edited May 1, 2009 by Ezz
Guest YoBeardMaestro Posted May 1, 2009

Yes, thanks, I tried that. However, because the irradiance cache is generated from the camera's perspective only, it doesn't contain any information about points on the rear-facing surfaces of the geometry. Since my camera will be moving through my procedural city, I need some way of pre-rendering and capturing the occlusion data for *all* points on the geometry.

I thought that perhaps I could use a VEX VOP SOP to pipe the geometry's point positions into the occlusion function, but the occlusion function isn't available in SOPs, presumably because it needs to be run from within a shading context (which only the shader operator types provide).

This is really bugging me - surely someone must have needed to render an occlusion pass on a procedural city or something similar, and I'd love to know whether it's possible to save render time by pre-computing the occlusion data in the geometry context...?
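For what it's worth, the usual way around occlusion() being unavailable in SOPs is to do the baking from a shading context instead: assign a throwaway "baking" shader for a pre-pass render and have it write each shaded point into the cloud. A minimal sketch is below; the file path and the "occ" channel name are placeholders, occlusion() is called with default arguments, and it assumes the function returns the occluded fraction of the hemisphere (invert if your build behaves the other way). It still needs mantra to shade every surface point, not just the visible ones (by disabling hiding or rendering in UV space, which comes up later in this thread).

```
// Sketch of a pre-pass "baking" shader (VEX surface context), assuming
// occlusion() returns the occluded fraction of the hemisphere above P.
// The file path and the "occ" channel name are placeholders.
surface bake_occlusion_to_pc(string pcfile = "occlusion.pc")
{
    vector nn = normalize(N);

    // Fire occlusion rays from this shading point (default sampling).
    vector blocked = occlusion(P, nn);
    float  occ     = 1.0 - blocked.x;   // 1 = open, 0 = fully occluded

    // Store the sample in world space so the cloud stays valid for any
    // camera path through the static geometry.
    vector pw = ptransform("space:world", P);
    pcwrite(pcfile, "P", pw, "occ", occ);

    Cf = set(occ, occ, occ);   // visual feedback in the pre-pass render
}
```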
symek Posted May 1, 2009

(quoting YoBeardMaestro's opening post in full)

Not that I want to discourage you, but... to make a long story short, as far as I can tell there is no need for this. There are a number of issues along the path you're proposing - some of them interesting topics in their own right (like shading points on hidden surfaces, which is possible) - and all of them can be solved once you decide which route to take (baking/computing occlusion in SOPs or in mantra). The point is that you won't get more out of it than you already get from the irradiance cache built into Mantra. It works well even in complicated scenes and does just what you need; occlusion with the irradiance cache is a highly optimized thing.

Anyway, if you see something I don't, sorry for that. There are lots of cases where point clouds rule - we use them all the time - but as of H10, classic occlusion with caching works faster than any optimization I've been able to come up with.

skk.

PS: The irradiance cache can be appended with new samples frame by frame. Also, every object can have its own cache file, which makes it possible to mix animated objects with static ones.

Edited May 1, 2009 by SYmek
Guest YoBeardMaestro Posted May 1, 2009

Thanks, SYmek. You've identified the problem I was having with the irradiance cache not shading hidden points (either because they're facing away from the camera or out of view completely). Secondary to that, mantra doesn't offer a lot of control over the irradiance cache (other than min/max pixel distances), and I'd like to be able to optimise my occlusion point sampling by choosing these points myself. Mantra doesn't know the path that my camera is taking through the scene, and the irradiance cache from the first frame of animation may be totally inappropriate for a later frame.

There *must* be a way to get mantra to compute the occlusion for all points on a specific piece of geometry, regardless of the camera view. I can't think of any practical reason why this shouldn't be possible. I've just got to figure out how to do it... ?
Jason Posted May 1, 2009

"There *must* be a way to get mantra to compute the occlusion for all points on a specific piece of geometry, regardless of the camera view. I can't think of any practical reason why this shouldn't be possible. I've just got to figure out how to do it...?"

I haven't tried this, but perhaps try framing the scene with another camera, add the Rendering Parameter "Enable Hiding (vm_hidden)" and turn it off, and render once to create your irradiance cache. Then read that irradiance cache from your regular camera.
symek Posted May 1, 2009

"...and I'd like to be able to optimise my occlusion point sampling by choosing these points myself."

Mantra does this. The irradiance cache uses adaptive sampling: it computes a gradient of the changes in surface color and samples according to that measure. Do you have any idea how to vary the sampling in a similarly effective way?

"Mantra doesn't know the path that my camera is taking through the scene, and the irradiance cache from the first frame of animation may be totally inappropriate for a later frame."

Did you try it? As far as I remember, samples are saved in world space, so as long as the geometry is static the occlusion cache will stay valid for all frames. The read/write mode of the irradiance cache exists for exactly that - what other purpose would it serve?

"There *must* be a way to get mantra to compute the occlusion for all points on a specific piece of geometry, regardless of the camera view. I can't think of any practical reason why this shouldn't be possible. I've just got to figure out how to do it...?"

...yes, for example by mixing UV-space rendering with occlusion caching. Disabling hiding won't work - the points will be there, but not shaded. I don't remember whether the cache works with UV-space rendering, but it should. I can see quite a few technical reasons why mantra could have trouble shading points regardless of the camera view, but anyway... What I wanted to say is that even if you manage to get your point cloud complete for the whole scene, there will be another challenge waiting: how to make reasonable use of those samples in your algorithm. I used to try a lot of these tricks, baking raytraced occlusion into point clouds for interpolation purposes, and after many attempts it turned out that the irradiance cache works faster anyway, with little or no setup. Of course, I don't know what kind of smart solution you're keeping up your sleeve. Good luck!

cheers, skk.

Edited May 1, 2009 by SYmek
Jason Posted May 2, 2009

"Disabling hiding won't work - the points will be there, but not shaded."

You sure? It should shade all points. I'll have to check...
symek Posted May 2, 2009

"You sure? It should shade all points. I'll have to check..."

No, I'm not, and your question intimidates me. I'll have to check...

EDIT: Well, you're right, Jason - sorry for the confusion. The points will be shaded, as you said. It was some time ago that I played with these things, and for some reason I didn't want to shade them: in a camera-animation scenario one has to deal with joining point cloud files frame by frame anyway, so there was no reason to bother specially with back-facing points. Anyway, for a big scene set and long animations the size of a decent point cloud becomes quite prohibitive, which brings us back to the brickmap/texture3d issue again. That's why I suggested the irradiance cache.

Edited May 3, 2009 by SYmek
Guest YoBeardMaestro Posted May 4, 2009

Okay, thanks for your thoughts everyone, particularly SYmek. I've tried enabling the irradiance cache on my render, including adding the vm_gifile and vm_gifilemode parameters to read/write to and from a cache file on disk. I then set Mantra to render frames 1 to 100 (during which time my camera passes through a substantial amount of my procedural city geometry) in increments of 10 frames (for convenience) and compared* the total render times with the irradiance cache enabled and disabled.

Between the two cases I noticed no improvement in render time whatsoever. In fact, to obtain a picture quality similar to the disabled case (by setting max and min pixel spacing to 1), it actually takes *longer* to render with the irradiance cache, and it also introduces artefacts into the occlusion, presumably because the cache does not contain the right position samples as my camera moves further through the (static) geometry.

Am I using the irradiance cache correctly? It seems to me that it's not helping me at all, and is merely computing the occlusion samples it can see from the first frame. Re-using those samples when the camera is pointing at another part of the geometry will of course introduce artefacts: the occlusion samples just aren't there. So how is the irradiance cache supposed to speed up render times? And, more along the lines of my project requirements, how can I pre-compute and bake occlusion data from the *entirety* of my procedural city geometry and then use this data to interpolate occlusion as my camera passes through (and observes) different parts of the city?

* Incidentally, I discovered that, on my system, with the Render Scheduler open and set NOT to clear completed jobs, Mantra hangs whenever it tries to render the second frame in a sequence. Re-enabling the 'clear completed jobs' toggle fixes the problem, but then I can't compare my render times as precisely. Try it on your system; I've raised this as a bug.
symek Posted May 4, 2009

Hmm, no idea where the problem is. Are you sure Mantra isn't complaining that it can't write to the cache file, or something like that? Attached is my setup. I'm not sure about absolute speed, but the speedup is pretty noticeable. Of course, quality is another issue and depends a lot on the specifics of your scene. Lots of people use an Environment Light instead of occlusion for simplicity and lack of headache, at the price of render time (which is far below what you could have dreamed of just a few years back anyway).

Use the render_me ROP to run the setup: it renders a single frame in write mode, then every 10 frames in read/write mode, and finally renders the beauty pass in read-only mode. It seems that Mantra finds the samples in world space correctly.

Hope this helps, skk.

PS: You can always inspect the point cloud by opening it in SOPs, just like any geometry.

irrad_cache.hipnc.zip

Edited May 4, 2009 by SYmek
Guest YoBeardMaestro Posted May 5, 2009

Thanks for uploading your setup, SYmek. I think I now have a better understanding of how the irradiance cache works, although I was still unable to see any improvement in rendering speed with the cache in read mode (!).

Judging by your setup, the first render node (pre_generate_cache_in_frame_one) computes and writes out the irradiance cache for the first frame, so it takes a while to render, obviously. Your third render node (render_me) renders every frame and uses the data in the irradiance cache to help it, so it should render fast(er). Then you've got the second render node (generate_cache_every_10_frames), which reads from the irradiance cache as well as writing out any additional points that are needed from the new camera position. Correct?

Your setup seemed to produce two frames in mplay for every frame of animation (ignoring the frame from the first render node): one from the 'generate_cache_every_10_frames' node, followed by another from the 'render_me' node, and so on in this fashion. I noticed that the former was a relatively fast render whilst the latter was relatively slow - slow enough to lead me to believe that it was re-generating the irradiance cache from scratch, or doing something else similarly computationally expensive. What's going on here?

Incidentally, I re-imported the irradiance cache into your scene as geometry after the renders had finished, and it only contained enough points for the first frame. It looked as though no additional points had been added to the file to account for the different camera views at each 10-frame interval. This seems strange to me.

Anyway, if I understand it correctly, when the irradiance cache is in read/write mode, mantra attempts to use the cache data to interpolate the irradiance at each shading point, which (should) speed up the rendering of those points. However, if the cache does not contain sufficient data near enough to the shading point (which is presumably controlled by the irradiance error parameter?), mantra calculates the irradiance for those shading points and appends them to the cache file for later use. When the irradiance cache is in read-only mode, by contrast, mantra does not calculate any new irradiance points and is forced to use the existing data in the cache. Right? If so, it begs a few questions:

1. If the second render node is set to read/write the irradiance cache every 10 frames, is the first render node even necessary? Is mantra unable to create the irradiance cache from scratch in read/write mode, and does it have to be explicitly told to write (only) to the cache initially?

2. Do you know what the irradiance parameters (irradiance error, max/min pixel spacing, default samples) actually mean to mantra? I've taken some guesses, but I'm not entirely sure, and the documentation isn't particularly clear on this topic.

And, of course, the big one:

3. Why am I not seeing any rendering speed improvements when I'm using the irradiance cache in read-only mode? Why doesn't mantra seem to be adding new irradiance points to the cache at the 10-frame intervals? Is my understanding of the irradiance cache incorrect?

If Jason (Iverson) and/or Mark (Elendt) are reading this, I'd be most grateful for your input as well.

Kind regards, YBM.
symek Posted May 5, 2009

That all seems to be correct. The first ROP is needed because Mantra won't create the cache file from scratch in read/write mode (at least that's what I see here), which is somewhat buggy behavior, but not a big deal. The size of the irradiance cache does increase frame by frame in that mode, so my understanding of what's going on seems to be correct. And, as I already said, you can open the irradiance cache in SOPs for inspection - you'll see all your samples, read the attribute values, and so on. I do see a noticeable speedup even in the simplest of scenes. Anyway, I'll leave the floor now to Mr. Iverson and Mr. Elendt for some explanations.

cheers, skk.

Edited May 5, 2009 by SYmek