Neon Junkyard

Neon Junkyard last won the day on September 16 2017

Community Reputation

20 Excellent


About Neon Junkyard

Personal Information

  • Name
    Drew Parks

  1. So I know all about deep compositing in theory, but in actual implementation I can find almost no info on this topic: what is the current workflow within Houdini to output deep compositing images to comp in Nuke? I don't need to do anything super fancy or crazy, I just need deep data to output a super accurate depth pass for highly photoreal depth of field in comp with pgBokeh, since I need all the lens artifacting and chromatic aberration that you can't get out of in-camera DOF in mantra. It involves super fine semi-transparent strands and hair and stuff like that, so are there any caveats I should be aware of? Does Houdini's current implementation of EXR support deep data, or do I need to use a .RAT file or something? Also, I really just need it for depth of field, so is it possible to render a separate deep utility pass with just the depth info and not a million other AOVs embedded in it to save on disk space, or does the beauty have to be rendered with deep data in order to work correctly? Any help would be greatly appreciated!
  2. That is great, exactly what I have been after for some time now. However, is that a 16.5 file? It doesn't work in 16.0.557, the point wrangle is pointing to nonexistent parameters
  3. Orienting copied objects

    The Copy SOP has a hierarchy of attributes it reads in: orient is the foremost, then N and up if orient is not present, etc. N alone is not enough to define orientation in most cases; usually you pair it with an up vector attribute as well http://www.sidefx.com/docs/houdini16.0/copy/instanceattrs You can use a PolyFrame SOP to generate N and up attribs from curves pillar_orientation_roof_FIX.hip
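    As a minimal sketch of pairing N with up, here is a point wrangle run over the template points (it assumes N and up already exist, e.g. from a PolyFrame SOP, and that your Houdini build uses the standard orient convention):

    ```vex
    // Point wrangle (sketch): build a full orientation quaternion
    // from N (z axis) and up (y axis) so the Copy SOP no longer
    // has to guess the twist around N.
    matrix3 m = maketransform(@N, v@up);
    p@orient = quaternion(m);
    ```

    Since orient sits at the top of the Copy SOP's attribute hierarchy, it overrides N/up once present, which makes the copies' twist fully deterministic.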
  4. The new workflow is to use the newer for-each loop network. If you aren't comfortable with those and need to follow a tut exactly, you can unhide the old for-each subnetwork using the opunhide command in the textport: opunhide Sop For-Each ^or something to that extent http://www.sidefx.com/docs/houdini//commands/opunhide.html
  5. This might be better answered in the Redshift forums, but off the top of my head you could maybe try reading in a packed prim intrinsic attribute and promoting it to a point attribute http://www.sidefx.com/docs/houdini16.0/vex/functions/primintrinsic
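    A rough sketch of what that read could look like, as a primitive wrangle over the packed prims (the intrinsic name here is the standard "packedfulltransform"; whether Redshift then picks the attribute up is an assumption to verify on their side):

    ```vex
    // Primitive wrangle (sketch): copy the packed transform
    // intrinsic into an ordinary attribute. Each packed prim has
    // a single point, so an Attribute Promote SOP (primitive ->
    // point) afterwards gives you the point-level version.
    matrix full = primintrinsic(0, "packedfulltransform", @primnum);
    3@xform = matrix3(full);   // rotation/scale part as a prim attrib
    ```

    From there the renderer only needs to read a regular point attribute instead of querying intrinsics itself.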
  6. I have a pretty heavy scene with lots of mechanical robot models and such, static geo with a camera flythrough, and I need to render it with a toon outline. It renders fine, but I am getting massive flickering between frames during animation. I suspect the problem is that there are polygons that are too close together; I fixed what I could, but it's a massive scene and a rebuild is out of the question at this point. Toon is a displacement effect, so I tried cranking the dicing quality, increased samples, etc., but nothing really helps. Anything I can do to help fix this? Thanks
  7. Foreach, multiple outputs

    Did anyone figure this out? I am looking for a similar solution; I need to have 2 operations done, but they have to be on the exact same copy index...
  8. H16 auto-place nodes option missing?

    Not exactly the same as H15 and prior, but shift+enter when creating a node will auto-place it
  9. menu pan/scroll with wacom pen?

    ^ I was looking for some sort of workaround just like this. I wonder if you can mess with something in the houdini.env file to fix the mmb viewport breaking
  10. In most other DCC apps you can mmb click and drag anywhere in a menu with a Wacom pen to scroll up and down the menu; in After Effects you can hold the space bar to get the hand tool, etc. Is there any way to get that functionality in Houdini? It's getting maddening having to click and drag on the scroll bar every time I want to scroll through a long parameter list. Yes, the scroll wheel on the mouse works, but I use a tablet 99% of the time so that doesn't do much good. Thanks for any help
  11. I have some relatively big scenes, ~50mb HIP files, and Houdini is starting to take forever to load and save. I'm wondering if there are any ways to optimize both scene size and general Houdini startup speed, besides obviously just deleting nodes (Windows 7, SSD install btw). For example, do some nodes like complex uncompiled shaders, COP networks, etc. take up more space on disk than other nodes and might be worth some housecleaning, or do all nodes have more or less the same disk footprint? Regarding app load time, this seems obvious, but the more HDAs I install the slower Houdini loads (qLib, Aelib, etc.). Is that also true of things like node presets, gallery items, shelf tools, etc.? Or is that all read from disk on the fly and shouldn't matter? In Maya you can do things like disable 95% of the plugins it ships with, disable modules like Bifrost, legacy dynamics, mental ray, etc., and the program loads infinitely faster, and if you need them later you can just load them on the fly. Is there anything you can do like that in Houdini, perhaps in the .env file? For instance, disable RenderMan, which I never use and which clutters up the MAT context? I appreciate any suggestions
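    For what it's worth, a sketch of the kind of houdini.env trimming people try for startup speed (the performance effect of each variable is an assumption to verify against the environment-variable docs, not a guarantee):

    ```
    # houdini.env (sketch) -- paths here are placeholders.
    # Keep HOUDINI_PATH short: every entry is scanned at launch.
    # The "&" keeps Houdini's default path at the end.
    HOUDINI_PATH = "C:/myTools;&"

    # Limit where Houdini scans for HDAs/OTLs; "@" expands to the
    # directories already on HOUDINI_PATH.
    HOUDINI_OTLSCAN_PATH = "@/otls"
    ```

    Moving rarely-used HDA libraries out of the scanned directories (and reinstalling them per-hip with File > Import > Houdini Digital Asset when needed) is the closest analogue to Maya's disable-plugin workflow that I know of.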
  12. Distort noise to follow geometry flow?

    I'm trying to figure out how to distort noise so it follows the curvature and flow of the geometry. I attached an example of what I am trying to do, from the Nike Flyknit spot. It is pretty well documented how to do this using volumes and advecting points to create curl noise streamers, and I want to achieve a similar effect but using procedural textural noise. I am especially interested in the elongated striping shown in the example, where it is not regularly broken up like turbulent noise but is stretched along the flow direction. The basic setup for doing this with volumes would be to take the double cross product of the volume gradient and an up vector, add noise, create a velocity field, advect points through it, and create lines. I have the same setup using the normals instead of the volume gradient, output to velocity, and the vectors flow around the geo correctly. However, from there I am not sure how to apply that to noise to have it distort the way I would like. If I feed that into the input of the noise it sort of works, but really just looks like adding noise to the normals. The part that I think I am missing is advecting the noise through a velocity field at a given time step to create the streaking. My guess is you would do this like gridless advection, where you multiply the velocity by a noise in a for loop and add it back to the noise coordinates, but so far I haven't been able to figure it out. I also thought about doing a volume sample from the velocity field, or doing something with modulus or ramps, but everything ended up streaky and full of artifacts. I attached a simple file I was messing with but didn't get too far. Any help on this would be very appreciated Distort_noise.hip
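    One way to sketch that gridless-advection idea as a point wrangle (attribute name v@v and the channel names are assumptions; v@v is the flow vector from the normal/up cross-product setup described above):

    ```vex
    // Point wrangle (sketch): instead of advecting points, advect
    // the noise *lookup position* backward along the flow vector
    // before sampling. The noise then smears into streaks along
    // the flow direction instead of staying isotropic.
    vector p  = @P;
    float  dt = chf("step");      // advection step size
    int    n  = chi("steps");     // more steps = longer streaks
    for (int i = 0; i < n; i++)
        p -= v@v * dt;            // backward advection of the lookup
    f@streak = noise(p * chf("freq"));
    v@Cd = f@streak;              // visualize as color
    ```

    Using v@v as constant along the path is a simplification; re-sampling the flow at each step (e.g. with volumesample from a velocity VDB) would get closer to true gridless advection.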
  13. UV Pass in Houdini

    Not 100% sure, but I doubt it, at least out of the box. You would do it in COPs if anywhere, but SideFX hasn't given that area a lot of love lately, so I doubt there is a built-in way. What you are looking for, at least in Nuke terms, is an st-map; you might be able to find the math behind it on Google. But honestly you are probably better off just re-rendering a flat color pass with a constant shader using the new texture than going through the hassle of building a retexturing tool in Houdini
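    If you do want to render the st-map out of Houdini, the bake itself is tiny; a sketch as a point wrangle (assumes the vertex uv attribute has been promoted to points first):

    ```vex
    // Point wrangle (sketch): bake UVs into color, then render
    // with a constant shader. Red = u, green = v, which is the
    // layout Nuke's STMap expects. Render to a float (32-bit)
    // EXR with filtering/sampling kept minimal, or the
    // remapping will smear at UV seams.
    vector uv = v@uv;
    v@Cd = set(uv.x, uv.y, 0);
    ```

    In Nuke you would then feed this pass and the new texture into an STMap-style remap; but as said above, a straight re-render is usually less hassle.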
  14. I came here literally with this exact issue, thank you
  15. Create UVs on instance object node?

    Thanks for the reply, I have a thread going with Juanjo now regarding this issue (it originally started out as a request to access UV coordinate inputs on the RS_Texture node). My workaround is similar to what you posted here; however, there is an RS_camera_map node that will allow you to do a camera projection within the Redshift shader itself, so you can manipulate the texture directly in the shader and pipe it into different material channels. I can think of a few ways to do it in mantra... the biggest difference is mantra's ability to manipulate UV data directly in the shader. So there is the triplanar project VOP, the UV project VOP, unpack in SOPs > timeshift to F0 > UV project > attrib copy (so you only have to unpack 1 frame), a renderstate VOP (or bind really) to read UV data into the shader, pcopen to read proxy mesh UV data from disk... anything like that implemented in Redshift will work. You can't currently use procedural noise either, as it gets automatically applied in object space when applied to an instance