mmontoya Posted June 29, 2007

Hello all, I'm extremely grateful for the existence of this forum. As a recent convert from Maya, this site has proven immeasurably valuable in my quest to become a Hodnik. I don't know how many of you remember the eighties TV show "The Greatest American Hero." The process of learning Houdini is quite similar: I am keenly aware of possessing incredible powers, now if only I could find that darned instruction manual...

What started it all for me was this post by ptakun: ecosystem simulation using l-systems (thanks for the wonderful inspiration). Since then I've been avidly reading Prusinkiewicz's "The Algorithmic Beauty of Plants" and tripping out over every plant, pine cone, and blossom I see. I've got a long way to go, but getting this far has been incredibly satisfying. Some day I hope to completely understand all the details that yielded this great image. I love that Houdini is primarily a tool for creative discovery, and I am continually astounded at how following a set of simple rules or a premise can yield such wonderfully rich results.

My next endeavor will be to learn how to apply UV coordinates, apply an SSS shader, and cover the cactus with spines as in ptakun's post here. Any advice as to how to proceed (rather than through trial-and-error tinkering, which, by the way, is also great fun) is welcome. How did you veterans here learn the fundamentals?

Here's my crack at a cactus:
Andz Posted June 29, 2007

Looking great so far! Keep up the good work, and welcome to the forum!!
edward Posted June 29, 2007

Did you notice the link in this post? http://forums.odforce.net/index.php?s=&...ost&p=29425

It points to this thread, which should help: http://odforce.net/forum/index.php?showtopic=2135

Some other links:
http://www.sidefx.com/exchange/info.php?fi...p;versionid=144
http://www.sidefx.com/exchange/info.php?fi...p;versionid=209
http://www.sidefx.com/exchange/info.php?fi...p;versionid=208
http://www.sidefx.com/exchange/info.php?fi...p;versionid=230
http://www.sidefx.com/index.php?option=com...opic&t=6420
mmontoya Posted June 30, 2007

Thanks edward, those are some very helpful links. I can't wait to delve deeper - I hope it won't hurt my head too much...

In PRMan, SSS lookups can be optimized by creating a brick map. Does Mantra support a similar paradigm, or are point clouds currently the only supported solution for caching irradiance? (I know Houdini plays nicely with RenderMan, and I could do things the way I'm already familiar with by rendering there, but I'd like to see how far I can take Houdini without leaving my comfy sandbox for now.)
ptakun Posted June 30, 2007

Hey mmontoya,

> Does Mantra support a similar paradigm, or are point clouds currently the only supported solution for caching irradiance?

You can create a composite that includes the SSS picture and project it onto the object from camera NDC space.

Ptakun
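In shader terms the idea is roughly this (just a sketch, not an exact network; the shader name, parameter name, and image path are placeholders):

```
// Rough sketch: sample a pre-rendered SSS/occlusion pass by projecting the
// shaded point through the rendering camera's NDC space.
// "passmap" is a placeholder parameter; point it at the rendered pass image.
surface ndc_project(string passmap = "sss.pic")
{
    vector ndc  = toNDC(P);                        // shaded position in the camera's NDC space (0..1 across the image)
    vector pass = colormap(passmap, ndc.x, ndc.y); // look up the cached pass at that screen position

    Cf = pass * diffuse(normalize(N));             // combine with simple diffuse shading
}
```

This only holds as long as the camera that rendered the pass matches the camera you project from.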
mmontoya Posted June 30, 2007

Thanks Andz! It's a pleasure to be part of this forum.

ptakun - While I can understand the process of utilizing the camera's coordinate space and then projecting this onto the object, don't you still need an initial SSS shader on the object to calculate the back and front scattering in order to get the initial SSS image? But maybe your recommendation was not a substitute for this first step. I must confess that I don't really understand the advantage of using the camera to map back onto the object - is the saving that you can avoid having to assign UV coordinates to the object? Also, wouldn't you then be limited to a locked camera? (The moment you move off the camera's frustum the illusion would be broken and you'd have to re-render the image from the new angle.) The point cloud, on the other hand, would store the values so that the data is rendered only once in the first frame and (provided the object does not move) these values can be re-used in subsequent frames. Am I missing something obvious here?

The hairs on that cactus - are those RiCurves that you rendered in PRMan?

By the way, I loved the Katamari challenge you posted! This forum has some great gems nestled within its seemingly modest exterior.
ptakun Posted July 2, 2007

Hey!

> The hairs on that cactus - are those RiCurves that you rendered in PRMan?

Those are Mantra curves. Check this file (occlusion projected from camera NDC): occ_ndc.hip

Cheers,
Ptakun
mmontoya Posted July 5, 2007

ptakun: Thank you so much for the file, it was very kind of you to take the time to provide me with a concrete example - it was very illuminating. I hope to post a new version making use of this approach soon.
zoki Posted July 5, 2007

Great thread, and your renders are amazing, ptakun.

So for animation you render these occ and SSS passes and then reproject them using NDC? Why reproject them at all, when you could just composite them as separate passes into the final image? Is there something I'm missing?

Thanks,
z
mmontoya Posted July 8, 2007

I'm united with zoki in my confusion. While I understand the VOP network for the example you provided, I don't understand the advantage of projecting the rendered occlusion image back onto the object using the camera's NDC. How is this different from applying, say, a VEX Clay shader and rendering as usual with a GI light set to "ambient occlusion"?

Also, I tried to recreate the VOP network from scratch and couldn't, for the life of me, figure out how to get the shader to use the currently rendered image within the same job (it seems to use the previously rendered image in $HIP/occ.pic), which, if you've moved the camera, ruins the entire effect. What am I missing?
rdg Posted July 9, 2007

> ...which, if you've moved the camera, ruins the entire effect. What am I missing?

This is an inherent issue with view-dependent solutions. Some renderers build large point-cloud caches to store the information for changing angles, but this doesn't take moving objects into account.

I am not sure if this is really the answer, but it helped me understand the network a little bit better.

Georg
ptakun Posted July 9, 2007

Hi, I wanted to show you an easy solution that gives you a way to cache occ, SSS, etc. This solution has a very obvious defect: you can't move the object, the camera, etc.
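For animation, the camera-independent route is the one mmontoya mentioned: bake the values into a point cloud once, then read them back in the shader at render time. A lookup looks roughly like this (just a sketch; the shader name, file name, channel name, and parameters are assumptions, not taken from occ_ndc.hip; the "occ" channel is assumed to hold the unoccluded fraction, 1 = fully open):

```
// Rough sketch: read cached occlusion values from a point cloud in a
// VEX surface shader, averaging the points found near the shaded position.
surface pc_occlusion(string pcfile = "occ.pc"; float radius = 0.5; int maxpts = 8)
{
    float occ_sum = 0;
    int   npts    = 0;

    // find cached points near the shaded position
    int handle = pcopen(pcfile, "P", P, radius, maxpts);
    while (pciterate(handle))
    {
        float occ;
        pcimport(handle, "occ", occ);  // pull the stored value from each point
        occ_sum += occ;
        npts++;
    }
    pcclose(handle);

    // fall back to fully unoccluded if no points were found
    float occ_avg = (npts > 0) ? occ_sum / npts : 1.0;

    Cf = occ_avg * diffuse(normalize(N));  // shade with the cached occlusion
}
```

Because the values are stored with the points themselves, the camera is free to move; only the geometry has to stay put, or the cloud has to be re-baked per frame.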