

Popular Content

Showing most liked content on 05/22/2017 in all areas

  1. 7 points
    I want to share a little tool I made for grooming feathers. It's a set of 6 nodes: one base node and 5 modifiers. Super easy to use. Just connect them and... there you go, you've got yourself a pretty little feather. You can layer as many modifiers as you want. Any feedback is super appreciated. https://www.dropbox.com/sh/8v05sgdlo5erh0b/AADSfadqkxgPOBVeaGr2O49Oa?dl=0
  2. 2 points
    The Dot node can survive Y scissoring. The dot becomes persistent if you Alt-click it again. EDIT: btw, you can also color the dot.
  3. 1 point
    Jumping on the GIF train, all created with Houdini: https://boaringgifs.tumblr.com/ Cheers, Nick Deboar www.nickdeboar.com
  4. 1 point
    Uhm, why? It makes perfect sense and is widely accepted. It's called the object context, /obj/ for short, and "object level" is just easy to say. If you called it "scene level", another novice might look for a /scene/ that doesn't exist. If someone is referring to SOPs as object level, they are the ones being inconsistent and doing it wrong. But they're probably not in the majority; feel free to slap them on the wrist when they do.
  5. 1 point
    Are you maybe able to do simple copy stamping on a Switch node? (Use the Copy [Stamp] SOP, and check out the stamp() HScript function.)
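    A minimal sketch of the idea: put the expression below in the Switch SOP's Select Input parameter inside the geometry being copied. The node path "../copy1" and the stamp variable name "variant" are placeholders; the variable itself is defined on the Copy Stamp SOP's Stamp tab.

```
stamp("../copy1", "variant", 0)
```

    The third argument is the default value returned when the SOP cooks outside the copy loop.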
  6. 1 point
    You set this up absolutely right. You can also use multiple bends, multiple clumps, multiple turbulences; you can layer things easily. First of all, strands and quill are different outputs. That was an intentional choice, and if you want to render them together, yes, you have to merge them first. Secondly, to see strands in the viewport you need to change some parameters at scene level.
  7. 1 point
    If you are on H16 you can use these two scripts. One adds a Visibility node with blast (as I didn't want to render hidden prims), and the second finds all Visibility nodes with specific names and deletes them. kill isolate.txt isolate after.txt
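    The "find and delete" half could be sketched in Python with the hou module (this runs only inside Houdini; the "isolate_" naming prefix is an assumption about how the nodes were named when created):

```python
import hou  # available inside a Houdini session

# Find all Visibility SOPs under /obj whose name follows the
# assumed convention, and delete them.
for node in hou.node("/obj").allSubChildren():
    if node.type().name() == "visibility" and node.name().startswith("isolate_"):
        node.destroy()
```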
  8. 1 point
    This isn't necessarily true, and it depends on which type of dual you are using. In the case of Houdini's barycentric-based dual, for instance, it isn't true. If you want to get Voronoi cells you have to compute the circumcentric (Voronoi) dual instead. In this case the dual is orthogonal to its primal triangulation, which is one of the key properties of a Voronoi diagram. dual1.hipnc
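    To make the distinction concrete: the circumcentric dual places each dual vertex at the triangle's circumcenter (the point equidistant from all three corners) rather than at the barycenter. A small standalone Python sketch of that computation, independent of Houdini:

```python
def circumcenter(a, b, c):
    """Circumcenter of a 2D triangle: the point equidistant from all
    three vertices, i.e. where a circumcentric (Voronoi) dual vertex goes.
    The barycenter, by contrast, is just the average of the three points."""
    ax, ay = a
    bx, by = b
    cx, cy = c
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy)
          + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx)
          + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy)

# For a right triangle the circumcenter is the hypotenuse midpoint:
# circumcenter((0, 0), (2, 0), (0, 2)) -> (1.0, 1.0)
```

    For degenerate (collinear) triangles `d` goes to zero and there is no circumcenter, which is one practical reason the circumcentric dual is fussier than the barycentric one.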
  9. 1 point
    If you open Display Properties you can find the Visible Objects field there (Optimize tab). The script above just puts the selected object names into this field with the help of an HScript function. If you want to hide primitives you have to play with the Visibility SOP; there is no way to do that other than in the SOP context.
  10. 1 point
    Also for a bit of a shorthand: http://www.sidefx.com/docs/houdini/vex/functions/ow_space http://www.sidefx.com/docs/houdini/vex/functions/wo_space
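    For context, those two shorthand functions convert positions between the shaded object's space and world space inside a shading context. A hedged sketch (the variable names here are mine, not from the docs):

```vex
// Shading-context sketch: ow_space() goes object -> world,
// wo_space() goes world -> object.
vector pos_world  = ow_space(pos_object);
vector back_again = wo_space(pos_world);
```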
  11. 1 point
    hou.hscript('oppane -t parmeditor path_to_your_node')
  12. 1 point
    I'm not sure why you would see the interface for tabs like this, although you will still see network controls... I am on kde/kunix H16. To make what for me looks and acts exactly like the floating parameter pane you get via RMB -> Parameters and Channels -> Parameters, try this...

    node = hou.node('/obj/some/valid/oppath/to/look/at')

    # get desktop
    dt = hou.ui.curDesktop()

    # floating pane (pane not available via floating pane tab)
    pane = dt.createFloatingPane(hou.paneTabType.Parm)

    # what parameters are in view
    pane.setCurrentNode(node)
    pane.setPin(True)

    # hide network interface and pane bar
    hou.hscript("pane -a 1 -A 0 " + pane.name())
  13. 1 point
    Hello everyone, This is a simple introduction to using Vex.
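    For anyone who wants to follow along, the usual first experiment is a one-liner in a Point Wrangle; "amp" and "freq" below are made-up channel names, not part of any standard setup:

```vex
// Displace each point along Y with a sine wave over X.
float amp  = chf("amp");   // create these sliders with the
float freq = chf("freq");  // channel-creation button
@P.y += amp * sin(@P.x * freq);
```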
  14. 1 point
    Open-source project initiated by Oliver Hotz to copy and paste points, polygons, and their attributes across multiple apps. This is not an asset-transferring mechanism but a quick, on-the-project way to transfer geometry without file-management concerns (you don't file-manage your clipboard, right?). Try it out and contribute! https://heimlich1024.github.io/OD_CopyPasteExternal/ Cheers
  15. 1 point
    The Wire Solver could totally work, but it doesn't look efficient; it wouldn't be easy if you have a lot of snot to do. Something like a pin-to-animation constraint would be just perfect, but I couldn't make it work. Maybe I'm doing something wrong, though. @anthonymcgrath "Warrington UK... the VFX limbo of nowhere" lol, you should see here, Porto União, Brazil. I'm sure it can't get more nowhere than here.
  16. 1 point
    I think Cristin mentioned it in the launch event, but in case anyone missed it: nearly all the areas presented in H16 will continue on into the next dev cycle. That's not to say other projects won't be started, just that we'll be building on the cool stuff presented in H16.
  17. 1 point
    Shots shouldn't have constantly changing requirements. Challenge your supervisors to be clearer and "lock down" what you are building. Let them know that vapid, willy-nilly direction just costs more money.

    Cite the fall of major studios, like the one that made Life of Pi, which based its business model on what was called the "open shot". The concept of the open shot is ridiculous, and we should not repeat the mistakes of the past. With the open shot, directors pay one fee and then change their minds as often as the wind blows. In one of those shots the FX team struggled with adding rain and complex fluid simulations, only to have the director come in two weeks later and say "why the hell is there rain in that shot?". We know that is money down the drain, yet the studio ate that cost, not the production.

    Try to avoid scope creep, and let everyone on the team know that if it creeps too far, it becomes a new shot and requires a new budget. Cost is your best defense against unreasonable demands. Always say yes to your client or supervisor, yet present them with realistic costs and hours. Nothing takes 5 minutes. Print that out and post it on the wall.

    If you lose a bid to a smaller shop that wants to undersell you, then move on. I have created first drafts that were better than final deliverables provided to me by external vendors the team wanted to go with. When the results come back and they suck, no one says a word, because they gambled and lost and just have to live with it or pay for it to be done right again.
  18. 1 point
    HACK ALERT!! In reply to comments on the BSDF bonanza thread. You can use a very, very simple hack to trick the path tracer into using an irradiance cache, of sorts...

    In the past I've done a lot of point-cloud baking for this purpose, in combination with a lot of other Indirect Light VOP hackery, in order to do stuff like the GI Light distance threshold... which works, but the Indirect Light VOP (deprecated stuff) hacks made rendering much slower when more than one bounce was needed and the irradiance cache was not in use... and sometimes the pcwriting would randomly skip objects for reasons I never got to the bottom of. In this case I'm using ptex baking (H15), but I suppose it could be anything...

    Since the GGX thread post was made I had what currently seems to be a much better/simpler idea than how I did it before, without any real modification to the path tracer. Basically the hack is: plug the ptex bake into Ce and zero F on indirect rays (not the pbrlighting input)... despite zeroing F on indirect rays, Ce is magically looked upon by, erm, indirect rays... But of course there are lots of practical implications and associated further hackery:
    - Need to wedge-bake the entire scene (though it could maybe be selective, like just caching the walls), auto-filling in the shaders' cache file path.
    - Just discovered baking doesn't work on packed geo!!
    - Don't want to be killing indirect reflection beyond the first bounce. This leads to needing pre-multiplied separate F components until they arrive in Compute Lighting, which in turn means making your own shader struct and all the layer-blending tools that go with that. OR (I really, really hope someone knows how to do this and shares it here), make an is_diffuse/reflection/refraction/etc ray VOP.

    I have a hunch that the best way to do irradiance caching in general might be to voxelize the scene... not only because it would provide a cache for shading as we are doing here, but also because we (meaning SideFX or other brainy people) could then start looking at things like cone tracing (like NVIDIA is doing for realtime GI). But the best thing would be (really dreaming here) that it would remove geometry complexity from the problem of raytracing... Basically the voxels would become a geo LOD, so if the cone angle is big and the distance is bigger than, say, 2 or 3 voxels, then it would do all that expensive ray-intersection stuff against the level set instead... forests, displacements, bazillion polys, reduced down to voxels.

    I think this might work because I've been doing it in essence for years, but limited to hair... by hiding hair from all ray scopes and using a volume representation to attenuate/shadow direct/indirect light reaching the hair (nice soft SSS-looking results). But! I hear some say it will be blurry etc., because the volume lacks definition, so there is also a simple illuminance loop to trace near shadows and so add some texture/definition back in... fast... even compared to today's much faster hair rendering in Mantra, and arguably better looking (the SSS/attenuation effect, especially if the volume shader casts colour-attenuated shadows), but there is the hassle of generating the volumes, even if automated as much as it can be without SESI intervention.

    1m11s for cached, 2m26s for regular. This is using the Principled Shader. Btw, a test on this simple scene looks like the GI Light works (here!), but it is way, way brighter than brute-force PBR, and yeah, I also had grief with the GI Light sometimes not writing the actual file... irrcache_v003.hip
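    Translated into rough surface-shader VEX, the Ce/F hack described above might look something like this. This is a sketch only: "cache_map" is a made-up parameter holding the baked texture path, and the actual hip wires this up with VOPs rather than hand-written code.

```vex
// Route the baked irradiance into Ce and zero F for indirect rays.
vector baked = texture(chs("cache_map"), s, t);
if (getraylevel() > 0) {
    Ce = baked;          // indirect rays pick up the cache as emission
    F  = diffuse() * 0;  // stop further indirect bounces
}
```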
  19. 1 point
    Done. I rendered with Sample Lock on so we can clearly see how the caching methods differ. It's surprising how the photon-cache strobe is not as visible in the beauty as the direct photon render would suggest. It looks like the photons that hit the wall directly stay stable, but of course the ones that hit the statue swim all over. Wonder if it's still forgiving if the light moves... The irradiance-cached render is rock solid though, and now at 1 minute per view-dependent cache frame (with RHF filter) it's starting to look really compelling. Solved the weird reflection on the right wall by setting the minimum reflect ratio to 0.5, but it's pretty clear that the hack confuses the path tracer; it looks as though the indirect rays are playing Russian roulette with diffuse rays that aren't there, and this is still visible in the floor reflections. Mantra is on the same settings as the photon-cache render except for the min reflect ratio, but it's still clear to see that the noise levels are lower, because the irradiance cache is far more coherent than a photon cache. The funny thing with photons is that as the photon count increases, blotching becomes less of a problem but variance gets worse, because they are getting smaller in radius and so harder to hit coherently ray after ray. I think there is a very good case for irradiance caching in Houdini... but like I said before, it needs SideFX to make it work properly and hassle-free. irrcacheVSphotons_001.mov irrcache_v008.hip
  20. 1 point
    Cheers! One more test, with happy Buddhas, yay! The only thing that got cached was the walls. Goes to show you don't have to cache everything... the Buddhas still get the benefit. Indirect contribution from them shows up in the regular AOVs, while light from the room ends up in emission.

    I also set up a hybrid of a global GI cache + a view-dependent cache. The view-dependent cache is just a half-res render from the camera where every object but the room is phantom and specular rays are disabled. Fresnel is view-dependent, so there is less diffuse energy in the room than there should be (there are ways around it, like a baking mode in the shader where Fresnel is off)... Everything inside the frame that is not occluded from the camera (a self-shadow test) uses the view-dependent cache; everything outside the frame or occluded uses the global cache. The view-dependent cache has the potential to provide vast coverage with a cache detail level automatically appropriate for the distance. The point of it is mainly to deal with animated stuff in frame efficiently, with the option of a view-dependent cache per frame and a static frame for global.

    Gain and gamma were adjusted until they roughly match the brightness of the photon-cache render. I turned up the settings until quality >= patience limit; pretty good for 8m14s + 4m for both caches (cache time could be much less with blurry reflections). This was 6x6, 2 min, 9 max, 0.0025 noise level (variance AA still struggles with dark transitions). I think PBR would still be chewing the first 16 buckets after this finished, and nowhere near these noise levels. Something weird with the reflection on the right wall though.

    The photon cache does really well: 5m53s with the same Mantra settings. 10 million photons were fired; there were some odd glows/blotches with 1 million, but not that big a difference. Quality-wise it's pretty similar for the time spent. I call it about even; the irradiance cache took longer but it's cleaner over big areas (less variance in the irradiance cache). More photon-caching glitches... I think turning off Prefilter Photons and then re-rendering photons breaks it (black), and it seems to stay stuck black; after a while of trying to get it back (on/offs, different file paths, etc.) it somehow comes back. irrcache_v006.hip
  21. 1 point
    More craziness. This time I'm keeping the F components separate, so I'm not terminating reflection rays (you'll see what I mean if you look at the hip). I also set the reflect limit to 10 (previously 1) and roughness to zero (to see reflect bounces clearly), and I turned adaptive sampling off because it makes this scene more noisy rather than less. Interesting results... in that the beauty render-time difference is bigger than before, at 6.4x faster than brute-force PBR, with better noise quality than even photon caching. Photons take a hell of a lot less time to cache than any other way I can think of to bake lighting. Ptex baking looks not to be viable at all for high-polygon-count objects... it takes an age on something like the Happy Buddha scan (640k polys)... pcwrite from an offset camera would be good, but it's currently saying no to baking anything coming out of pbrlighting. The photon cache is a very practical solution, but it looks much brighter. It would seem either PBR or the photon cache is wrong, but there's probably more to it... With the GI Light in point-cloud mode the brightness is also too bright, the blotchy photons are clearly visible in reflections, and light leaks at corners. Photon caching is glitchy though: one minute it's working, the next it isn't, and then it works again... irrcache_v005.hip
  22. 1 point
    After actually trying this out myself, some more thoughts: The Sweep SOP already does what I describe when told to automatically sweep. So instead of doing it manually like I described with quaternions, you can actually just sweep some lines through your curve to get the reference frames I described. The Follow Curve IK with the Path object is really what you need, except it's not in an easily usable form for this problem because it requires actual bone objects instead of just outputting reference frames. The real problem is that you need to specify constraints as to where the roller coaster should be upright, i.e. at the beginning and end of loops. You can then use these constraints to evenly distribute the twist over the loop. Given this, the problem might not be too bad: rotate each successive frame around the Z axis by 360 degrees divided by the number of curve segments in between. For helices, I think it's similar, except that it might be n*360 degrees distributed evenly, where n is the number of loops you have. For a more automatic way, perhaps what you do is start the sweeping algorithm from both ends of the loop (i.e. do the same thing with the curve reversed), then blend the two results with some feathering at the ends. FWIW, I've attached the manual way that I ended up trying out before realizing I had just recreated the Sweep SOP's algorithm. refFramesQuat.hipnc
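    The "rotate each successive frame by 360/N" step can be sketched outside Houdini in plain Python. This assumes each frame's local Z axis is the curve tangent, so distributing the twist is just rotating each frame's up vector about Z by a growing angle:

```python
import math

def rotate_about_z(v, degrees):
    """Rotate a 3-vector about the local Z axis (assumed curve tangent)."""
    a = math.radians(degrees)
    x, y, z = v
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a),
            z)

def distribute_twist(frames_up, total_degrees=360.0):
    """Spread a total twist evenly across successive reference frames,
    e.g. 360 degrees over the curve segments of one roller-coaster loop."""
    segments = len(frames_up) - 1  # segments between consecutive frames
    step = total_degrees / segments
    return [rotate_about_z(up, i * step) for i, up in enumerate(frames_up)]
```

    With five frames over one loop, each frame advances 90 degrees, so the last frame comes back around to the starting orientation, which is exactly the "upright at both ends of the loop" constraint described above.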