
Neon Junkyard

Members
  • Content count: 83
  • Donations: 0.00 CAD
  • Joined
  • Last visited
  • Days Won: 3

Neon Junkyard last won the day on August 28

Neon Junkyard had the most liked content!

Community Reputation

27 Excellent

3 Followers

About Neon Junkyard

  • Rank
    Peon

Contact Methods

  • Website URL
    www.neonjunkyard.com

Personal Information

  • Name
    Drew Parks

Recent Profile Visitors

2,854 profile views
  1. Handling array in Vex

    List comprehension? Not really. foreach is itself shorthand for for (int i = 0; i < len(a1); i++). You could create a custom VEX function library for stuff like this; that's as close as you'll get to Python classes.
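    As a sketch of the function-library idea (the file name and function are hypothetical), you could save something like this as myutils.h on your VEX include path and #include it in any wrangle:

```vex
// myutils.h -- hypothetical custom VEX function library.
// Return every element of `arr` greater than `thresh`,
// roughly what a list comprehension would do in Python.
function float[] filter_above(float arr[]; float thresh)
{
    float out[];
    foreach (float v; arr)
    {
        if (v > thresh)
            append(out, v);
    }
    return out;
}
```

    Then in a point wrangle: #include <myutils.h> followed by something like f[]@big = filter_above(f[]@values, 0.5);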
  2. This is how I deal with this for freelancing: if I am physically at a studio using their equipment, I tend not to use any personal HDAs, for exactly this reason. Instead I kind of unbox them with limited functionality for the purposes of the job and leave them as subnets. Outside HDAs can also be extremely problematic for certain pipelines and for farm/cloud rendering, and most studios discourage or outright ban them anyway. If you really want to use a personal HDA at a studio, you can black-box it (Assets > Create Black Boxed Asset); this makes it usable and distributable but impossible to inspect.

    If I am working remotely (most of my work), with random clients or studios, I include in my contract that I do not deliver any Houdini source files, assets, setups, etc., only renders/caches/Nuke scripts and the like, since setups and tools that you have been developing for years are worth far more than any individual job. Much like any non-redistributable software EULA.

    Finally, if you really want to be a pain in the ass, you can make your tools so complicated, or bury so much Python code so deep in them, that no one will be able to figure them out or use them anyway. Job security.
  3. I have an FBX subnet with a bunch of different models that I'm object-merging into a separate geo network, using wildcards to grab all the file nodes in the FBX subnet, with Create Path Attribute enabled on the Object Merge. I need the shop_materialpath attribute to correctly point to the corresponding FBX material that was created. While most objects are read in with shop_materialpath set correctly, the Houdini FBX importer will sometimes interpret the material assignment at the object level, depending on how the model was prepped, so the resulting shop_materialpath from the object merge is missing. The textures and materials are there at the object level (in the FBX subnet) but the shop path is wrong, and thus textures break in the object merge. So I need a Python SOP that runs a loop over each prim, evals the object-level shop_materialpath parameter from the prim's path attribute (i.e. its source geo network in the FBX subnet), and writes that string into shop_materialpath for the missing prims. Thanks
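    A minimal Python SOP sketch of that loop (assuming the Object Merge wrote the source object path into a "path" string prim attribute; the attribute and parameter names may need adjusting for your scene):

```python
# Python SOP: fill in missing shop_materialpath values from the
# source object's material parameter. Assumes a "path" string prim
# attribute created by the Object Merge (Create Path Attribute on).
node = hou.pwd()
geo = node.geometry()

# Make sure the attribute exists so we can write to it.
mat_attr = geo.findPrimAttrib("shop_materialpath")
if mat_attr is None:
    mat_attr = geo.addAttrib(hou.attribType.Prim, "shop_materialpath", "")

for prim in geo.prims():
    if prim.stringAttribValue("shop_materialpath"):
        continue  # this prim already has a material assignment
    src_obj = hou.node(prim.stringAttribValue("path"))
    if src_obj is None:
        continue  # path attribute doesn't resolve to a node
    parm = src_obj.parm("shop_materialpath")
    if parm is not None:
        prim.setAttribValue(mat_attr, parm.eval())
```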
  4. Convenient Material location

    Personally, I never use /shop; it's annoying to navigate to, and on complex projects I like to group my materials in multiple mat networks at the object level. Arnold/Redshift/pipelines in general don't care where you put your materials in terms of breaking the render; you can put them in literally any context and it will work, as long as the shop_materialpath attribute is maintained. Generally speaking, though, it's probably best to only use SOP-level material networks in HDAs, standalone assets, etc. Otherwise it makes it extremely difficult to navigate your scene should you hand it off to someone, or come back to it months or years later. Ultimately it's personal preference, though.
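    To keep that attribute maintained when you relocate materials, you can repoint it in a primitive wrangle (the material path here is hypothetical):

```vex
// Primitive wrangle: repoint the material assignment at a mat
// network living at the object level instead of /shop.
s@shop_materialpath = "/obj/materials/my_shader";
```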
  5. for each loop question

    In a Python SOP you can use pressButton() on your File Cache node's Save to Disk parameter (you might have to use evalParm() as well); that will in essence physically press the Save to Disk button on every cook. Combine that with the detail-attribute filepath variables and that should be all you need.

    Also, you can set a global variable holding your detail expression (under Edit menu > Aliases and Variables > Variables), say $CACHE = `detail(blahblah)`, then use $CACHE in your filepath instead of that expression. That lets you read everything back in as packed prims in a single node using the File Merge SOP with that variable (similar to how you can load back in multiple wedge passes).
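    A minimal sketch of the pressButton() idea, assuming a File Cache SOP sitting next to the Python SOP (the node name is hypothetical; "execute" is the internal name of the Save to Disk button):

```python
# Python SOP: press the File Cache's "Save to Disk" button each cook.
cache = hou.node("../filecache1")
if cache is not None:
    cache.parm("execute").pressButton()
```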
  6. Transfer all attributes using VEX

    Ah, thanks Tomas, those are great solutions! Yeah, I'm realizing the VEX route is more trouble than it's worth. Currently, for most cases that don't require interpolated values, I'm using the Ray SOP (which is now compilable) set to Minimum Distance, with Transform Points unchecked and Import Attributes from Hits enabled. That doesn't account for your second solution, though, so thank you for that. I know that as soon as I spend any sort of time making a custom attribute-transfer SOP that handles all the edge cases, SideFX will release a new version of it; happens every time.
  7. Updating nested HDAs

    I updated to H17.5 and all my custom HDAs broke. SideFX updated the Bound SOP and now I'm getting this error message on all my HDAs: /obj/path/to/my/HDA/bound1: "Too many elements found for parameter "blahblah/bound1 Bounding Type". This is across about 20 different HDAs in this scene, with probably 100 total instances throughout the scene. Is there any way to fix this sort of issue without manually updating every single HDA by hand?

    However, this is just one case; I'm looking for general solutions and good practices for handling and updating nested HDAs so things don't break like this.

    Example case: you have a tree HDA, and inside it you have various leaf modules that are each their own HDA. Each leaf's parameters are relative-reference linked to the top-level tree HDA. Each leaf is a different HDA (leaf_A, leaf_B, etc.), but they all share a common HDA that controls, for instance, the leaf shader. The problem comes when you try to update that leaf-shader HDA: if you version up your asset and make changes, the other leaf assets will not update to the latest version, so you have to go and update them all by hand.

    The bigger problem, though, is if you do something like change the name of a parameter that is expression-linked to the top-level HDA, e.g. from leafcolor to leafColor: everything will break and you will get this error message. I'm not 100% sure what these options really mean, but from what I can tell, the old leafcolor parameter becomes a spare parameter unique to that HDA instance, and deleting all spare parameters will get rid of it; I think that's what "destroy all spare parameters" is supposed to do for each instance. HOWEVER, since all the other instances of this HDA are nested inside LOCKED HDAs, that button has no effect and everything remains broken.

    Other than undoing these changes, the only way I've been able to fix this is to manually delete the spare parameters on EACH HDA INSTANCE, not just the master HDA. Sigh. That is just one example of a parameter change, but it can happen in a million different ways. Anyone have any insight on how to handle or avoid situations like this, best practices, etc.? Any help is much appreciated.
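    One scripted way to attack the locked-parent problem (a sketch, untested against your scene; the asset type name is hypothetical) is to walk every instance of the asset from a Python shell, unlock its ancestor chain, strip the spare parameters, and re-sync it with the definition:

```python
# Hypothetical batch fix: remove stale spare parameters from every
# instance of a given HDA type, even when nested inside locked HDAs.
type_name = "mystudio::leaf_shader::1.0"  # assumed asset type name

for node in hou.node("/obj").allSubChildren():
    if node.type().name() != type_name:
        continue
    # Unlock every locked ancestor so this instance becomes editable.
    parent = node.parent()
    while parent is not None:
        if parent.isLockedHDA():
            parent.allowEditingOfContents()
        parent = parent.parent()
    node.removeSpareParms()
    node.matchCurrentDefinition()  # re-sync with the asset definition
```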
  8. Couple things... First, if you are using packed primitives, make sure you add the Redshift spare parameters to your objects and enable "Instance SOP level packed primitives" under the Redshift OBJ > Settings > Instancing tab of your geometry network.

    Second, are you using the SOP-level instancer or the OBJ-level instancer? The SOP-level Instance doesn't actually instance anything; it's just a Copy SOP wrapped in an HDA. You want to be using the OBJ-level Instance.

    Third, if you have nested OBJ networks inside a subnet, you need to enable "Render the OBJ nodes inside OBJ/SOP subnets" on the Redshift OBJ spare parameters tab (it should be enabled by default). You should enable "Force update OBJ nodes inside of subnets" in the IPR tab as well.

    Finally, Redshift only allows point attributes on geometry* (for the most part: prim attributes can be used for strands/hair, you can promote color to a vertex attribute to eliminate point-color bleeding across shared edges, etc.), but it's a safe bet to promote all your custom attributes to point level BEFORE you pack anything. Also, if you have mismatched attributes when using packed primitives, you are going to have a bad day; i.e. if one piece has color on its points and another has color elsewhere on its source geo, and you merge them together, that attribute will break in your shaders.
  9. Transfer all attributes using VEX

    Awesome, I will check that out. Starting to use intrinsics more and more.
  10. Is there a way to transfer ALL point and prim attributes in VEX, like you can with Attribute Transfer (or Ray), which is old, slow, and non-compilable? I'm looking for a modern VEX solution. Normally I would just use intersect() or pcopen() or whatever for single attributes, but I want to be able to transfer ALL incoming attributes (or have a typed list that can use wildcards) without having to manually specify them, so it can account for any incoming attributes upstream. Thanks
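    One VEX sketch of this (point attributes only, copied from the nearest source point; size-4 tuples, array attributes, and interpolation are deliberately left out) is to enumerate attribute names through the pointattributes detail intrinsic and move values with getattrib()/setattrib():

```vex
// Point wrangle, second input = source geometry.
// Copy every point attribute from the nearest source point.
int npt = nearpoint(1, v@P);
string names[] = detailintrinsic(1, "pointattributes");
foreach (string name; names)
{
    if (name == "P")
        continue;  // keep our own positions
    int success;
    int type = attribtype(1, "point", name);  // 0=int, 1=float, 2=string
    int size = attribsize(1, "point", name);  // tuple size
    if (type == 1 && size == 3)
    {
        vector v = getattrib(1, "point", name, npt, success);
        setattrib(0, "point", name, @ptnum, 0, v, "set");
    }
    else if (type == 1)
    {
        float f = getattrib(1, "point", name, npt, success);
        setattrib(0, "point", name, @ptnum, 0, f, "set");
    }
    else if (type == 0)
    {
        int i = getattrib(1, "point", name, npt, success);
        setattrib(0, "point", name, @ptnum, 0, i, "set");
    }
    else if (type == 2)
    {
        string s = getattrib(1, "point", name, npt, success);
        setattrib(0, "point", name, @ptnum, 0, s, "set");
    }
}
```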
  11. Convolution curve

    I saw this HDA demo from Kim Goosens, circa 2012, where it takes any smooth input curve and outputs a straight-edged or angled curve with angle-shaping controls. Anyone have any insights as to how you might go about recreating this? I'm especially interested in the parts towards the end where he switches between angled curves and straight edges, with control over the angle of the curve. The closest I can get is doing a linear resample and quantising/snapping to a grid, which gives an okay result, but it is limited to the nearest snap point or rounded integer; there isn't any actual control over the overall edge angle. For instance, I would like to be able to set a global curve angle, so curves can only run at 90 degrees along an axis, or say 60 degrees, or increments of 15 degrees, etc. https://www.youtube.com/watch?v=fPy4U0eGQ0Q&t=1s
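    One way to sketch the angle-quantisation idea (my assumption of the approach, not necessarily how the original HDA works; assumes a resampled curve lying in the XZ plane and a snap_angle parameter in degrees) is a detail wrangle that rebuilds the curve segment by segment with snapped directions:

```vex
// Detail wrangle: rebuild the curve so every segment direction snaps
// to the nearest multiple of `snap_angle` degrees (XZ plane assumed).
float step = radians(chf("snap_angle"));  // e.g. 90, 60, 15...
int npts = npoints(0);
vector prev = point(0, "P", 0);
for (int i = 1; i < npts; i++)
{
    vector p = point(0, "P", i);
    vector dir = p - prev;
    float len = length(dir);
    // Snap the segment's heading to the nearest angle increment.
    float ang = rint(atan2(dir.z, dir.x) / step) * step;
    vector snapped = prev + set(cos(ang), 0.0, sin(ang)) * len;
    setpointattrib(0, "P", i, snapped, "set");
    prev = snapped;  // chain from the snapped position
}
```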
  12. I was looking for something like this as well... Is there a way to extract a node from the SOP chain without breaking the wiring, like Alt+Ctrl+X in Nuke?
  13. Creating netboxes with python

    I'm trying to create network boxes with padding in Python. However, I'm having problems getting the padding to be even and constant on all sides. I can recreate the built-in functionality with fitAroundContents(), but I can't figure out how to correctly derive the bounding rect from that and then expand it evenly on all sides. I think this is partially because I'm not correctly calculating the center point from sel.position. Here is my code so far (my Python knowledge is very basic). Any help is much appreciated!

        # create a network box around the selected nodes
        nodes = hou.selectedNodes()
        parent = nodes[0].parent()
        box = parent.createNetworkBox()
        for node in nodes:
            box.addItem(node)
        # set color
        box.setColor(hou.Color(0.3, 0.3, 0.3))
        # fit box tightly around its contents first
        box.fitAroundContents()
        # box.position() is the box's lower-left corner, not its center,
        # so the padded rect runs from (pos - pad) to (pos + size + pad)
        pos = box.position()
        size = box.size()
        pad = hou.Vector2(2, 2)
        bbox = hou.BoundingRect(pos[0] - pad[0], pos[1] - pad[1],
                                pos[0] + size[0] + pad[0],
                                pos[1] + size[1] + pad[1])
        box.setBounds(bbox)
  14. I'm running Houdini on Windows and have been using \ in my file paths without issue; then out of nowhere Houdini stopped recognizing backslashes as a valid file path, so it broke all my file paths, even in older files. This is irrespective of the $JOB or $HIP variable; even setting that, it will not recognize the path until I change all my \ slashes to / slashes, even though it was working fine earlier in the same scene. Is this a preference that got corrupted or something? Any help fixing this, or any help with a Python script to recursively change all backslashes to forward slashes, would be very much appreciated.
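    A small sketch of the conversion (the helper name is mine; the hou part in the comments assumes you run it from a Houdini Python shell):

```python
def to_forward_slashes(path):
    """Convert Windows-style backslashes to forward slashes,
    which Houdini accepts on every platform."""
    return path.replace("\\", "/")

# Inside Houdini you could then fix every file reference in the
# scene; hou.fileReferences() yields (parm, filepath) pairs:
#
#   for parm, path in hou.fileReferences():
#       if parm is not None and "\\" in path:
#           parm.set(to_forward_slashes(path))
```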
  15. customizing netbox creation and other UI commands

    This is a great starting point, but I can't figure out how to get the network box to resize to the selected nodes the way the default behavior does. Currently the bounding box is hard-coded; how would I code it to resize to fit all the selected nodes on creation? Thanks!