Neon Junkyard

Members
  • Content count: 51
  • Joined
  • Last visited
  • Days Won: 1

Community Reputation
16 Good

1 Follower

About Neon Junkyard
  • Rank: Peon

Contact Methods
  • Website URL: www.neonjunkyard.com

Personal Information
  • Name: Drew Parks

Recent Profile Visitors

1,738 profile views
  1. I have some relatively big scenes (~50 GB HIP files), and Houdini is starting to take forever to load and save. I'm wondering if there are any ways to optimize both scene size and general Houdini startup speed, besides obviously just deleting nodes (Windows 7, SSD install, btw).

     For example, do some nodes, like complex uncompiled shaders, COP networks, etc., take up more space on disk than others and might be worth some housecleaning, or do all nodes have more or less the same disk footprint?

     Regarding app load time, and this seems obvious, but the more HDAs I install the slower Houdini loads (qLib, Aelib, etc.). Is that also true of things like node presets, gallery items, shelf tools, etc.? Or is that all read from disk on the fly and shouldn't matter?

     In Maya you can do things like disable 95% of the plugins it ships with, disable modules like Bifrost, legacy dynamics, mental ray, etc., and the program loads infinitely faster; if you need them later you can just load them on the fly. Is there anything you can do like that in Houdini, perhaps in the .env file? For instance, disabling RenderMan, which I never use and which clutters up the MAT context? I appreciate any suggestions.
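     Something along these lines in houdini.env is the sort of thing I am imagining; the variable names below are real Houdini environment variables, but the paths and values are placeholders for my setup, and this only covers trimming HDA scanning and save behaviour rather than switching off a renderer:

        # houdini.env sketch -- paths/values are placeholders
        # only scan the HDA folders you actually want loaded at startup,
        # instead of every otls directory that qLib/Aelib etc. add to the search path
        HOUDINI_OTLSCAN_PATH = C:/houdini/hda_core;&

        # keep optional toolsets out of HOUDINI_PATH until you need them
        # HOUDINI_PATH = C:/houdini/tools/qLib;&

        # buffer .hip saves in memory before writing to disk; can help with very large files
        HOUDINI_BUFFEREDSAVE = 1

        # skip the splash screen at startup
        HOUDINI_NO_SPLASH = 1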
  2. I'm trying to figure out how to distort noise so it follows the curvature and flow of the geometry. I attached an example of what I am trying to do, from the Nike Flyknit spot.

     It is pretty well documented how to do this using volumes and advecting points to create curl noise streamers, and I want to achieve a similar effect but using procedural texture noise. I am especially interested in the elongated striping shown in the example, where the pattern is not regularly broken up the way turbulent noise is, but is stretched along the flow direction.

     The basic setup for doing this in volumes would be to take the double cross product of the volume gradient and an up vector, add noise, create a velocity field, advect points through it, and create lines. I have the same setup using the normals instead of the volume gradient, output to velocity, and the vectors flow around the geo correctly. However, from there I am not sure how to apply that to noise to have it distort the way I would like. If I output that to the input of the noise, it sort of works, but it really just looks like adding noise to the normals.

     The part I think I am missing is advecting the noise through a velocity field at a given time step to create the streaking. My guess is you would do this like gridless advection, where you multiply the velocity by a noise in a for loop and add it back to the noise coordinates, but so far I haven't been able to figure it out. I also thought about doing a volume sample from the velocity field, or doing something with modulus or ramps, but everything ended up streaky and full of artifacts.

     I attached a simple file I was messing with but didn't get too far. Any help on this would be very appreciated. Distort_noise.hip
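     Written out, my guess looks roughly like the point wrangle below. This is only a sketch of the idea, not a working setup: it assumes a v@vel attribute already built upstream from the normal/up-vector cross products, and the parameter names are placeholders.

        // Point Wrangle, run over points; assumes v@vel exists on the points.
        vector p    = @P * chf("freq");     // noise lookup position
        vector dir  = normalize(v@vel);     // local flow direction
        int   steps = chi("steps");         // pseudo-advection steps
        float dt    = chf("stepsize");      // push distance per step

        for (int i = 0; i < steps; i++)
        {
            // offset the lookup position along the flow, modulated by the noise
            // itself, so the pattern smears into streaks along the flow direction
            float n = noise(p);
            p -= dir * n * dt;
        }

        f@stripe = noise(p);
        v@Cd = set(f@stripe, f@stripe, f@stripe);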
  3. Not 100% sure, but I doubt it, at least out of the box. You would do it in COPs if anywhere, but SideFX hasn't given that area a lot of love lately, so I doubt there is a built-in way. What you are looking for, at least in Nuke terms, is an ST map; you might be able to find the math behind it on Google. But honestly you are probably better off just re-rendering a flat color pass with a constant shader using the new texture than going through the hassle of building a retexturing tool in Houdini.
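     The math behind it is basically a double texture lookup: render a pass whose red/green channels store the surface UVs, then use those values as the coordinates to sample the new texture. A rough VEX sketch of the idea, where the file names are placeholders and a @uv attribute is assumed to exist:

        // ST-map remap idea (file paths are placeholders)
        vector st = colormap("uv_pass.exr", @uv.x, @uv.y);     // rendered UV/ST image
        v@Cd = colormap("new_texture.exr", st.x, st.y);        // new texture sampled at those UVs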
  4. I came here literally with this exact issue, thank you
  5. Thanks for the reply. I have a thread going with Juanjo now regarding this issue (it originally started out as a request to access UV coordinate inputs on the RS_Texture node). My workaround is similar to what you posted here; however, there is an RS_camera_map node that will let you do a camera projection within the Redshift shader itself, so you can manipulate the texture directly in the shader and pipe it into different material channels.

     I can think of a few ways to do it in Mantra... the biggest difference is Mantra's ability to manipulate UV data directly in the shader. So there is the triplanar project VOP, the UV project VOP, unpack in SOPs > timeshift to F0 > UV project > attrib copy (so you only have to unpack one frame), the renderstate VOP (or bind, really) to read UV data into the shader, pcopen to read proxy mesh UV data from disk... anything like that implemented in Redshift would work. You also can't currently use procedural noise, as it gets automatically applied in object space when applied to an instance.
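     The pcopen variant, for instance, would look roughly like this inside a Mantra shader; the file path, search radius and channel names are placeholders, and it assumes the proxy mesh on disk carries a uv point attribute:

        // point-cloud lookup of uv from a proxy mesh on disk
        // (P here is the shading position global)
        vector uv = {0, 0, 0};
        int pc = pcopen("proxy_with_uvs.bgeo", "P", P, 0.1, 1);
        if (pciterate(pc))
            pcimport(pc, "uv", uv);
        pcclose(pc);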
  6. I have a lot of geo instanced onto points using the OBJ-level instance node, and I want to create my UVs after the geometry is instanced, so I can apply a texture across all instances at once rather than having it repeat on every instance.

     I know this is easily done in many ways using the copy SOP, packed prims, etc., but I am rendering with Redshift and it supports none of that; it relies on the old SOP-level instancing workflow, so I am bound to that for the moment. I can do things like camera projections at the shader level, but for now the Redshift SHOP nodes are very limited and do not support any native Houdini nodes, so I have my hands tied here, and if I skip instancing entirely the render becomes almost completely unresponsive.

     Any ideas if this is possible? I attached an example HIP file of what I am looking for. Many thanks. instance_uvs_ODFORCE.hip
  7. Ah, that is perfect, exactly what I was looking for, thank you! I'm not 100% sure what is happening with that wrangle node (I just started learning CHOPs for this project), but trying to wrap my head around it:

     So you have x amount of channels coming in through the geo node, stored in an array. The wrangle processes each incoming data stream in parallel, for each index (channel) in the geo array, at each time step (or sample, in CHOPs terms). When the current channel to process is the incoming channel from the count node, it sets the value of the current sample to the value from the trigger channel, which is 0-1, applies it to the currently processed geometry, spits it out, and moves on to the next sample.

     From here I am sort of fuzzy, though... the count value increases by 1 each time the trigger changes, so each time the count increments, does the geo index increase as well? Or does the count channel multiply its value against the geo index and divide by the trigger value or something? Or are they all just offset in time somehow, which is really what it looks like in the motion graph?

     Either way, thank you for the help! I was definitely stuck on this one.
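     For what it's worth, my rough mental model of the node, written out as a channel-wrangle-style sketch; this is my own reconstruction rather than the actual wrangle, and the input order and bindings are assumptions (in the CHOP VEX context, C is the current channel index, I the current sample, and V the value being written):

        // gate each geo channel behind the running trigger count:
        // only the channel whose index matches the count follows the 0-1 trigger
        float count   = chinput(1, 0, I);   // counted triggers so far (second input)
        float trigger = chinput(2, 0, I);   // 0-1 audio trigger (third input)
        V = (int(count) == C) ? trigger : 0.0;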
  8. I have a simple audio-reactive animation that is connected to a CHOP network: just a few polygons with a simple extrude based on an audio trigger. However, I want the objects to extrude sequentially; on the first audio trigger primitive 0 extrudes, on the second trigger prim 1 extrudes, and so on, but it has to stay sequential and driven by the audio trigger from CHOPs. I attached a simple hip file; any help would be greatly appreciated. Chops_sequential_ODFORCE.hip
  9. Thanks for the tip. I did get that to work and was able to read my COP network into the shader; however... I guess by UV space I meant that the texture data is not resolution-dependent on the point density of the mesh, but rather is stored elsewhere, i.e. in UV maps, ptex, camera projections, etc. COPs do solve the resolution issue, and I can bring that color data into the shader at high resolution without changing the point count; however, COPs (as far as I can tell) are completely UV-dependent, so you lose the ability to use world/object-space procedural maps that are UV-independent.

     So ideally I am looking for a way to avoid UVs completely (unless I need them, obviously) and somehow promote point-level data to something else that reads in the full resolution of the procedural texture. However, I am guessing that is not really possible without something like a ptex solution built into Houdini.
  10. For example, if I have a noise map in SOPs, its detail is limited by the number of points/primitives on my model. Take that same noise map into a shader, however, and it renders with effectively infinite detail, as it now lives in UV/shader space. Is there any way to replicate that behavior in SOPs without increasing the point count on my model? Basically writing directly to a UV map, like you would in a 3D painting app, but in SOPs.

     The end goal here: I have been getting into Redshift and it's great, but its procedural textures are almost nonexistent, while Mantra/VOPs have tons that are incompatible with Redshift. So ideally I would like to get as much procedural texturing done in SOPs as possible and read that data into Redshift shaders. I also know Redshift can read COP data, so I wonder if there is some intermediate solution there.
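     One half-formed idea along those lines: lay out a dense grid in 0-1 UV space (one point per texel), use uvsample() to find where each texel sits on the model, and evaluate the noise at that surface position, so the bake resolution comes from the grid rather than from the model's point count. A wrangle sketch, where the attribute names, parameters, and second-input wiring are assumptions:

        // Point Wrangle on a dense unit grid laid out in UV space
        // (grid points carry a 0-1 uv attribute); the model is wired into input 2
        vector surf_P = uvsample(1, "P", "uv", v@uv);   // surface position at this texel's uv
        f@noiseval = noise(surf_P * chf("freq"));       // evaluate the noise in object space
        v@Cd = set(f@noiseval, f@noiseval, f@noiseval);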
  11. I was also looking at those chips; suspiciously cheap for what you get... I would like to know more about this as well. I'm currently running a (somewhat) outdated dual Xeon build; it runs great, but the core clock speed is slow, so I'm thinking of converting it to a dedicated render/cache slave, building a new, faster box, and also getting into the GPU rendering game.

     My current mobo (Supermicro MBD-X10DRL-I) only has one PCIe x16 slot, and I would want to run a minimum of 2x GTX 1080 Ti cards, maybe more, but still have a dual Xeon system with tons of RAM. Traditionally I think that is a somewhat contradictory ask because it is a server build, and admittedly I really don't know that much about GPU rendering or how to plan a build for it, but I would like the best of both worlds: a dual Xeon 16-24 core machine with 2+ high-end cards for GPU rendering as well. Anyone have any advice on that type of build? Windows, btw.
  12. I can't find the auto-place nodes on creation option in H16... really annoying. Is it gone, or am I missing something?
  13. Yeah, I understand that... obviously hotkeys are gone. The major things I am looking to salvage: galleries, node presets & defaults, Orbolt-downloaded OTLs/HDAs, and shelves.
  14. Hey guys, I want to upgrade to H16 but keep all my preferences: shelves, galleries, node defaults, node presets, custom desktops, etc. In the past I have had problems with hip files getting corrupted (likely due to corrupt preferences), bizarre shader errors in old scenes, and so on, just from copying the %HOME% folder into the new version. Maya has the same issues; it just makes me nervous.

     Does anyone have a clean way to upgrade without copying the whole preferences folder over, i.e. specifically which files you need and which files are likely to create conflicts? The same applies to daily builds. It also seems like a lot of the preferences are stored in the install directory and not the %HOME% directory; I would like to figure out a cleaner way to upgrade without having to redo a lot of saved prefs.
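     For reference, the per-user pieces I am talking about live in subfolders of the $HOME/houdiniX.Y prefs directory; as far as I can tell the mapping is roughly the following (folder names from a typical install, treat this as a starting point rather than gospel):

        toolbar/        shelves and shelf tools
        gallery/        gallery items
        presets/        node presets
        otls/           user and Orbolt HDAs
        desktop/        custom desktops
        scripts/        startup and event scripts
        houdini.env     environment settings (review rather than blind-copy between versions)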
  15. That is a great start, thanks!