Neon Junkyard

Members
  • Content count

    64
  • Donations

    0.00 CAD 
  • Joined

  • Last visited

  • Days Won

    2

Neon Junkyard last won the day on September 16 2017

Neon Junkyard had the most liked content!

Community Reputation

20 Excellent

3 Followers

About Neon Junkyard

  • Rank
    Peon

Contact Methods

  • Website URL
    www.neonjunkyard.com

Personal Information

  • Name
    Drew Parks

Recent Profile Visitors

2,266 profile views
  1. soft transform crash

    Is it just me, or is the Soft Transform SOP completely broken? As soon as I create it: instant crash, both in the network view and from the viewport. This has been the case for as long as I can remember, since H14 even, across multiple machines, Linux and Windows, different builds, same deal. I'm currently running 16.5.439, but the problem seems to persist across versions.
  2. This seems like a bug: when I am viewing my scene through a camera I created and select any object-level node in the network view with the visibility flag on, it switches to the default perspective camera (No cam), similar to what happens if I didn't have the lock-viewport button on and tried to move the camera in the viewport. Anyone else having this issue? Running 16.5.439
  3. Orient Along Organic Surface?

    Has anyone been able to implement this using packed prims? Making physical copies of geometry like this is way too taxing for what I'm trying to do (city creation). I've been bashing my head against the wall trying to figure this out. The biggest issue is that I have some square and some non-square base geometry; currently I'm trying to calculate @scale (going into a Copy to Points) based on the length of each side of the polygon, but I don't really know the math and am stuck. Another approach I tried and failed at was implementing what you guys did here with reading the uv intrinsic, and then applying that with setprimintrinsic(), but I couldn't figure it out. Any help here would be amazing
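A minimal sketch of the per-side scale math described above, in plain Python rather than VEX (the function names here are illustrative, not Houdini API): for each quad, average the two opposite edge lengths in each direction to get an x/y scale for a unit-sized copy, which is the kind of value you would stuff into @scale before the Copy to Points.

```python
import math

def edge_length(a, b):
    """Euclidean distance between two 3D points."""
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def quad_scale(p0, p1, p2, p3):
    """Per-axis scale for a unit copy placed on a (possibly non-square) quad.

    Points are assumed to be in ring order p0 -> p1 -> p2 -> p3; opposite
    edges are averaged so slightly irregular quads still get a sensible scale.
    """
    sx = 0.5 * (edge_length(p0, p1) + edge_length(p3, p2))
    sy = 0.5 * (edge_length(p1, p2) + edge_length(p0, p3))
    return (sx, sy)

# A 2x1 rectangle in the XZ plane scales a unit copy by (2, 1):
print(quad_scale((0, 0, 0), (2, 0, 0), (2, 0, 1), (0, 0, 1)))
```

The same loop over each primitive's points would run per-prim in a wrangle; the averaging is just there so non-rectangular quads don't pick one arbitrary edge.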
  4. So I know all about deep compositing in theory, but for the actual implementation I can find almost no info on this topic. What is the current workflow within Houdini to output deep compositing images to comp in Nuke? I don't need to do anything super fancy or crazy; I just need deep data to output a super accurate depth pass for highly photoreal depth of field in comp with pgBokeh, as I need all the lens artifacting and chromatic aberration that you can't get out of in-camera DOF in Mantra. It involves super fine semi-transparent strands and hair and stuff like that, so are there any caveats I should be aware of? Does Houdini's current implementation of EXR support deep data, or do I need to use a .RAT file or something? Also, since I really just need it for depth of field, is it possible to render a separate deep utility pass with just the depth info, and not a million other AOVs embedded in it, to save on disk space? Or does the beauty have to be rendered with deep data in order to work correctly? Any help would be greatly appreciated!
  5. That is great, exactly what I have been after for some time now. However, is that a 16.5 file? It doesn't work in 16.0.557; the Point Wrangle is pointing to nonexistent parameters.
  6. Orienting copied objects

    The Copy SOP has a hierarchy of attributes it reads in, orient being the foremost, then N and up if orient is not present, etc. N alone is not enough to define orientation in most cases; usually you pair it with an up vector attribute as well http://www.sidefx.com/docs/houdini16.0/copy/instanceattrs You can use a PolyFrame SOP to generate N and up attribs from curves pillar_orientation_roof_FIX.hip
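For reference, the math behind pairing N with an up attribute can be sketched in plain Python (illustrative only, not Houdini API): build an orthonormal frame where z follows N and the up hint resolves the remaining twist, which is exactly the ambiguity the Copy SOP faces when it only has N.

```python
import math

def normalize(v):
    """Scale a 3-vector to unit length."""
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def frame_from_n_up(N, up):
    """Orthonormal frame from a normal and an up hint.

    z points along N; x is made perpendicular to both; y completes the
    right-handed frame, keeping the copy's "up" as close to `up` as possible.
    Without `up`, any rotation of x/y around N would be equally valid.
    """
    z = normalize(N)
    x = normalize(cross(up, z))
    y = cross(z, x)
    return x, y, z

# N along +Y with a world-up hint along +Z:
x, y, z = frame_from_n_up((0, 1, 0), (0, 0, 1))
```

This is the same Gram-Schmidt-style construction a PolyFrame SOP effectively gives you when it writes N (tangent) and up attributes along a curve.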
  7. The new workflow is to use the newer for-each loop networks. If you aren't comfortable with those and need to follow a tut exactly, you can unhide the old For-Each subnetwork using the "opunhide" command in the textport: opunhide Sop For-Each ^or something to that extent http://www.sidefx.com/docs/houdini//commands/opunhide.html
  8. This might be better answered in the Redshift forums, but off the top of my head you could try reading in a packed-prim intrinsic attribute and promoting it to a point attribute http://www.sidefx.com/docs/houdini16.0/vex/functions/primintrinsic
  9. I have a pretty heavy scene with lots of mechanical robot models and such, static geo with a camera flythrough, and I need to render it with a toon outline. It renders fine, but I am getting massive flickering between frames during animation. I suspect the problem is that there are polygons that are too close together; I fixed what I could, but it's a massive scene and a rebuild is out of the question at this point. Toon is a displacement effect, so I tried cranking the dicing quality, increasing samples, etc., but nothing really helps. Is there anything I can do to fix this? Thanks
  10. Foreach, multiple outputs

    Did anyone figure this out? I am looking for a similar solution; I need to have two operations done, but they have to be on the exact same copy index...
  11. H16 auto-place nodes option missing?

    Not exactly the same as H15 and prior, but Shift+Enter when creating a node will auto-place it
  12. menu pan/scroll with wacom pen?

    ^ I was looking for some sort of workaround just like this. I wonder if you can mess with something in the houdini.env file to fix the MMB viewport breaking
  13. In most other DCC apps you can MMB-click and drag anywhere in a menu with a Wacom pen to scroll up and down the menu; in After Effects you can hold the space bar to get the hand tool, etc. Is there any way to get that functionality in Houdini? It's getting maddening having to click and drag on the scroll bar every time I want to scroll through a long parameter list. Yes, the scroll wheel on the mouse works, but I use a tablet 99% of the time, so that doesn't do much good. Thanks for any help
  14. I have some relatively big scenes, ~50 MB HIP files, and Houdini is starting to take forever to load and save. I'm wondering if there are any ways to optimize both scene size and general Houdini startup speed, besides obviously just deleting nodes (Windows 7, SSD install, btw). For example, do some nodes, like complex uncompiled shaders or COP networks, take up more space on disk than others and might be worth some housecleaning, or do all nodes have more or less the same disk footprint?

    Regarding app load time, and this seems obvious, but the more HDAs I install, the slower Houdini loads (qLib, Aelib, etc.). Is that also true of things like node presets, gallery items, shelf tools, etc.? Or is all of that read from disk on the fly, so it shouldn't matter?

    In Maya you can do things like disable 95% of the plugins it ships with, disable modules like Bifrost, legacy dynamics, mental ray, etc., and the program loads infinitely faster; if you need them later you can just load them on the fly. Is there anything you can do like that in Houdini, perhaps in the .env file? For instance, disable RenderMan, which I never use and which clutters up the MAT context? I appreciate any suggestions
  15. Distort noise to follow geometry flow?

    I'm trying to figure out how to distort noise so it follows the curvature and flow of the geometry. I attached an example of what I am trying to do, from the Nike Flyknit spot. It is pretty well documented how to do this using volumes and advecting points to create curl-noise streamers, and I want to achieve a similar effect but using procedural textural noise. I am especially interested in the elongated striping shown in the example, where the noise is not regularly broken up the way turbulent noise is, but is stretched along the flow direction.

    The basic setup for doing this with volumes would be to take the double cross product of the volume gradient and an up vector, add noise, create a velocity field, advect points through it, and create lines. I have the same setup using the normals instead of the volume gradient, output to velocity, and the vectors flow around the geo correctly. However, from there I am not sure how to apply that to noise to have it distort the way I would like. If I feed that into the input of the noise, it sort of works, but really just looks like adding noise to the normals.

    The part I think I am missing is advecting the noise through a velocity field at a given time step to create the streaking. My guess is you would do this like gridless advection, where you multiply the velocity by a noise in a for loop and add it back to the noise coordinates, but so far I haven't been able to figure it out. I also thought about doing a volume sample from the velocity field, or doing something with modulus or ramps, but everything ended up streaky and full of artifacts.

    I attached a simple file I was messing with but didn't get too far. Any help on this would be very appreciated Distort_noise.hip
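A toy-scale sketch of the gridless-advection idea described above, in plain Python (the noise and velocity functions are simple stand-ins, not Houdini API): instead of advecting the noise values themselves, walk each sample position backwards through the velocity field for a few steps, then sample the noise at the displaced position; it is that accumulated displacement along the flow that stretches the pattern into streaks.

```python
import math

def hash_noise(x, y):
    """Cheap deterministic value-noise stand-in (not Perlin; sketch only)."""
    h = math.sin(x * 12.9898 + y * 78.233) * 43758.5453
    return h - math.floor(h)  # value in [0, 1)

def velocity(x, y):
    """Toy flow field circling the origin; stands in for the
    normals-derived velocity field described in the post."""
    return (-y, x)

def advected_noise(x, y, steps=8, dt=0.05):
    """Gridless-advection-style lookup: march the sample position
    backwards along the velocity field, then sample noise there.
    More steps or a larger dt stretch the noise further along the flow."""
    px, py = x, y
    for _ in range(steps):
        vx, vy = velocity(px, py)
        px -= vx * dt
        py -= vy * dt
    return hash_noise(px, py)

v = advected_noise(0.3, 0.7)  # deterministic value in [0, 1)
```

In a wrangle the same loop would sample your velocity field (e.g. via volumesample) instead of the toy `velocity()` above, and feed the displaced position into the noise call.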