Welcome to od|forum


Neon Junkyard

Drew Parks, www.neonjunkyard.com
  1. I came here literally with this exact issue, thank you
  2. Thanks for the reply, I have a thread going with Juanjo now regarding this issue (it originally started out as a request to access UV coordinate inputs on the RS_Texture node). My workaround is similar to what you posted here; however, there is an RS_camera_map node that will let you do a camera projection within the Redshift shader itself, so you can manipulate the texture directly in the shader and pipe it into different material channels. I can think of a few ways to do it in Mantra... the biggest difference is Mantra's ability to manipulate UV data directly in the shader. So there is the triplanar project VOP, the UV project VOP, unpack in SOPs > timeshift to F0 > UV project > attrib copy (so you only have to unpack one frame), the renderstate VOP (or really a bind) to read UV data into the shader, pcopen to read proxy-mesh UV data from disk... anything like that implemented in Redshift would work. You also can't currently use procedural noise, as it gets applied in object space automatically when applied to an instance.
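The triplanar projection mentioned above blends three planar projections by how closely the surface normal aligns with each axis. A minimal pure-Python sketch of that idea (illustrative only, not the actual VOP or Redshift API; `tex` is any 2D texture function you supply):

```python
def triplanar_weights(normal, blend=1.0):
    """Blend weights for a triplanar projection, derived from a surface
    normal. A larger |component| means that axis's planar projection
    dominates; 'blend' sharpens or softens the transition."""
    ax = [abs(c) ** blend for c in normal]
    total = sum(ax) or 1.0
    return [c / total for c in ax]

def triplanar_sample(tex, pos, normal):
    """Sample a 2D texture function tex(u, v) three times (YZ, XZ and XY
    planes) and blend the results by the normal-derived weights."""
    wx, wy, wz = triplanar_weights(normal)
    x, y, z = pos
    return wx * tex(y, z) + wy * tex(x, z) + wz * tex(x, y)
```

For a face pointing straight up (normal `(0, 1, 0)`), only the XZ-plane projection contributes, which is why triplanar setups avoid stretching without needing stored UVs.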
  3. I have a lot of geo instanced onto points using the OBJ-level instance node, and I basically want to create my UVs after the geometry is instanced, so I can apply a texture across all instances at once rather than having it duplicate repetitively across every instance. I know this is easily done in many ways using the copy SOP, packed prims, etc., but I am rendering with Redshift and it supports none of that; it relies on the old SOP-level instancing workflow, so I am bound to that for the moment. I can do things like camera project at the shader level, but for now the Redshift SHOP nodes are very limited and do not support any native Houdini nodes, so my hands are somewhat tied here, and if I skip instancing everything the render becomes almost completely unresponsive. Any ideas if this is possible? I attached an example HIP file of what I am looking for. Many thanks. instance_uvs_ODFORCE.hip
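One way to think about "UVs after instancing" is to derive UVs from world position across the whole instanced set, e.g. normalizing against the combined bounding box so one texture stretches over every instance. A minimal pure-Python sketch of that mapping (illustrative, not the Houdini API; here the ground-plane XZ axes are assumed to carry the texture):

```python
def world_space_uvs(points):
    """Project the world XZ positions of all instance points into a single
    0-1 UV space spanning their combined bounding box, so one texture maps
    across every instance instead of repeating per instance."""
    xs = [p[0] for p in points]
    zs = [p[2] for p in points]
    min_x, max_x = min(xs), max(xs)
    min_z, max_z = min(zs), max(zs)
    span_x = (max_x - min_x) or 1.0   # guard against a degenerate bbox
    span_z = (max_z - min_z) or 1.0
    return [((p[0] - min_x) / span_x, (p[2] - min_z) / span_z)
            for p in points]
```

The same normalization could be done per instance point and read in the shader as a texture lookup coordinate.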
  4. Ah, that is perfect, exactly what I was looking for, thank you! Not 100% sure what is happening with that wrangle node; I just started learning CHOPs for this project. But trying to wrap my head around it: so you have some number of channels coming in through the geo node, stored in an array. The wrangle processes each incoming data stream in parallel, for each index (channel) in the geo array, at each time step (or sample, in CHOPs terms). When the current channel to process matches the incoming channel from the count node, it sets the value of the current sample to the value from the trigger channel (which is 0-1), applies it to the currently processed geometry, spits it out, and moves on to the next sample. From here I am sort of fuzzy though... the count value increases by 1 each time the trigger changes, so each time the count increments, does the geo index increase as well? Or does the count channel multiply its value against the geo index and divide by the trigger value or something? Or are they all just offset in time somehow, which is really what it looks like in the motion graph? Either way, thank you for the help! I was definitely stuck on this one.
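The behaviour being asked about (does the geo index advance with the count?) can be sketched outside CHOPs. Assuming the setup works like a typical Count CHOP chain, each rising edge of the trigger advances a counter, and the counter selects which channel receives the trigger value, so channel N is driven by trigger hit N. A pure-Python sketch (names are illustrative, not the actual wrangle code):

```python
def sequential_triggers(trigger, num_channels):
    """Route a single 0/1 trigger channel to one output channel at a time.
    Each rising edge advances a counter (like a Count CHOP), so trigger
    hit N drives output channel N while the others stay at zero."""
    out = [[0.0] * len(trigger) for _ in range(num_channels)]
    count = -1
    prev = 0.0
    for i, v in enumerate(trigger):
        if v > 0.5 and prev <= 0.5:    # rising edge: advance the count
            count += 1
        if 0 <= count < num_channels:
            out[count][i] = v          # only the current channel gets it
        prev = v
    return out

trigger = [0, 1, 1, 0, 0, 1, 0, 1, 0]
chans = sequential_triggers(trigger, 3)
```

With that input, the first trigger lands entirely in channel 0, the second in channel 1, the third in channel 2, which matches the "offset in time" look in the motion graph.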
  5. I have a simple audio-reactive animation that is connected to a CHOP network: just a few polygons with a simple extrude based on an audio trigger. However, I want the objects to extrude sequentially; on the first audio trigger primitive 0 will extrude, then on the second prim 1 will extrude, and so on, but have it be sequential and based on the audio trigger from CHOPs. I attached a simple hip file; any help would be greatly appreciated. Chops_sequential_ODFORCE.hip
  6. Thanks for the tip, I did get that to work and was able to read my COP network into the shader. However... I guess by UV space I meant that the texture data is not resolution-dependent on the point density of the mesh, but rather is stored elsewhere, like UV maps, Ptex, camera projections, etc. COPs do solve the resolution issue, and I can bring that color data into the shader at high resolution without changing the point count; however, COPs (as far as I can tell) are completely UV-dependent, so you lose the ability to use world/object-space procedural maps that are UV-independent. So ideally I am looking for a way to avoid UVs completely (unless I need them, obviously) and somehow promote point-level data to something else that would read in the full resolution of the procedural texture. However, I am guessing that is not really possible without something like a Ptex solution built into Houdini.
  7. For example, if I have a noise map in SOPs, its detail will be limited to the number of points/primitives on my model. Take that same noise map into a shader, however, and it will render with effectively infinite detail, as it is now evaluated in UV/shader space. Is there any way to replicate that behavior in SOPs without increasing the point count on my model? Basically writing directly to a UV map, like you would in a 3D painting app, but in SOPs. The end goal here: I have been getting into Redshift and it's great, but the procedural textures are almost nonexistent; Mantra/VOPs have tons, but they are incompatible with Redshift. So ideally I would like to get as much procedural texturing done in SOPs as possible and read that data into Redshift shaders. Also, I know Redshift can read in COP data, so I wonder if there is some intermediate solution there.
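The resolution point above can be shown numerically: sampling a procedural at point resolution misses detail that a dense bake (or shader-time evaluation) captures. A toy 1D sketch with a sine standing in for a noise function (purely illustrative):

```python
import math

def procedural(u):
    """A toy 1D 'noise' with detail well beyond a coarse point sampling."""
    return math.sin(40.0 * u)

def bake(samples):
    """Evaluate the procedural at a fixed sample count, independent of the
    mesh point count: more samples capture more of the signal."""
    return [procedural(i / (samples - 1)) for i in range(samples)]

coarse = bake(8)      # like storing the pattern on 8 mesh points
fine = bake(1024)     # like evaluating it in the shader / a baked UV map
```

The 8-sample version never even reaches the signal's real peaks, while the 1024-sample bake does; that is exactly why baking to a texture map (COPs, or a hypothetical Ptex-style store) preserves procedural detail that point attributes cannot.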
  8. I was also looking at those chips, suspiciously cheap for what you get... I would like to know more about this as well. I'm currently running a (somewhat) outdated dual-Xeon build; it runs great but has a slow core clock speed. I'm thinking of converting it to a dedicated render/cache slave, building a new, faster box, and also getting into the GPU-rendering game. My current mobo (Supermicro MBD-X10DRL-I) only has one PCIe x16 slot, and I would want to run a minimum of 2x GTX 1080 Ti cards, maybe more, but still have a dual-Xeon system with tons of RAM. Traditionally I think that is a somewhat contradictory ask because it is a server build, and admittedly I really don't know that much about GPU rendering or how to plan a build for it, but I would like to have the best of both worlds: a dual-Xeon 16-24-core machine with 2+ high-end cards for GPU rendering as well. Anyone have any advice on that type of build? Windows, btw.
  9. I can't find the auto-place nodes on creation option in H16... really annoying. Is it gone, or am I missing something?
  10. Yea, I understand that... obviously hotkeys are gone. The major things I am looking to salvage:
  • Galleries
  • Node presets & defaults
  • Orbolt downloaded OTLs/HDAs
  • Shelves
  11. Hey guys, I want to upgrade to H16 but keep all my preferences, shelves, galleries, node defaults, node presets, custom desktops, etc. I have had some problems in the past with hip files getting corrupted (likely due to corrupt preferences), bizarre shader errors from old scenes, etc., from just copying the %HOME% folder into the new version. Maya has the same issues; it just makes me nervous. Does anyone have the cleanest way to upgrade without copying the whole preferences folder over, i.e. specifically which files you need and which files are likely to create conflicts? The same applies to daily builds; it seems like a lot of the preferences are stored in the install directory and not the %HOME% directory. I would like to figure out a cleaner way to upgrade without having to redo a lot of saved prefs.
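For a selective migration instead of cloning the whole preferences folder, the items listed above usually live in their own subdirectories of the Houdini user preferences directory. A hedged Python sketch (the directory names below follow the common default conventions, but verify them against your own install before trusting this):

```python
import shutil
from pathlib import Path

# Subdirectories that commonly hold the salvageable items; these names are
# the usual defaults (toolbar = shelves, desktop = custom desktops,
# gallery = node galleries/presets, otls = user and Orbolt assets).
PREF_ITEMS = ("toolbar", "desktop", "gallery", "presets", "otls")

def migrate_prefs(old_pref_dir, new_pref_dir, items=PREF_ITEMS):
    """Copy only the chosen preference subdirectories into the new
    version's folder, leaving the config files (the usual corruption
    suspects) behind. Returns the list of directories actually copied."""
    old_dir, new_dir = Path(old_pref_dir), Path(new_pref_dir)
    copied = []
    for name in items:
        src = old_dir / name
        if src.is_dir():
            shutil.copytree(src, new_dir / name, dirs_exist_ok=True)
            copied.append(name)
    return copied
```

Usage would be e.g. `migrate_prefs("houdini15.5", "houdini16.0")` run against the two preference folders, then launching H16 once to let it regenerate fresh config files.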
  12. that is a great start, thanks!
  13. As the title says, I would like to (somewhat) recreate the functionality of Nuke's Grade node in SOPs, as an OTL. I find the Color Correct node pretty unintuitive and would like the controls Nuke offers, which I find very easy to use: obvious controls for black and white point, gain, exposure, gamma, etc. Also, the ability to tint any parameter is a big bonus. However, I am having a hard time finding info on the math that goes into even basic color corrections, i.e. if you wanted to tint the gamma (midtones), how would you even go about doing that? I haven't found any good resources on this topic. Thanks!
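For reference, Nuke's documentation describes the Grade node as a linear remap followed by a gamma: A = multiply * (gain - lift) / (whitepoint - blackpoint), B = offset + lift - A * blackpoint, output = (A * input + B) ** (1 / gamma). A sketch of that per-channel (run it once per R/G/B with different parameter values to get the "tint" behaviour):

```python
def grade(x, blackpoint=0.0, whitepoint=1.0, lift=0.0, gain=1.0,
          multiply=1.0, offset=0.0, gamma=1.0):
    """Nuke-style grade on one channel value x:
    map blackpoint->lift and whitepoint->gain linearly, apply
    multiply/offset, then a gamma on the result."""
    a = multiply * (gain - lift) / (whitepoint - blackpoint)
    b = offset + lift - a * blackpoint
    y = a * x + b
    # gamma only applies to positive values; pass negatives through
    return y ** (1.0 / gamma) if y > 0.0 else y
```

Note how gamma answers the midtone question: with gamma = 2, grade(0.0) and grade(1.0) are unchanged while grade(0.5) rises to about 0.707, so tinting the midtones is just using a different gamma per color channel.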
  14. Similar to the H14 workflow of hitting X on the currently selected node in any VOP context to create a visualizer node, I want to do the same but connect the currently selected node to the compute lighting Ce output (or a bind export node set to Ce, though that can throw errors) to quickly visualize the color data of the map I am working on without having to do expensive shader calculations. The visualize node is great when working with Cd in SOPs but does not offer the same functionality in shaders. This is a default hotkey in Arnold for C4D; it's something like Alt+W+V. I'm sure there is a way to do this with a Python script attached to a hotkey. Any ideas?
  15. So this might be a question of Mantra settings in general, but I am having problems getting rid of indirect reflection noise. It seems like no matter how high my settings, this pass comes out extremely noisy, and it shows up in the final render. I have read through all the docs regarding noise and read through the forums, but since Mantra changes with every release I don't know what information is still relevant. For instance, I read that the practical upper limit for pixel samples is 6x6 unless you have heavy DOF or motion blur. I have no idea what extreme settings would be for anything else, though, like sampling quality on lights or min/max ray samples.
      The docs say that in some cases you have to increase min ray samples for indirect reflection/refraction noise, but I read elsewhere not to increase that. What is the workflow there, and when should it be increased? And how does that affect reflection/refraction quality, which seems to be a global multiplier? Also, the noise-removal docs give 100 max indirect ray samples as an example for achieving a clean image; is that referring to the "enable indirect sample limits" rendering parameter on a shader? Is there a more global way to set that, or is that what sampling quality on each light is doing? Increasing that per light doesn't seem to make any difference, however. I should mention I come from a V-Ray background, so that is the workflow I am used to, if anyone can draw any parallels. Also, this actual scene doesn't really matter; these are more general questions about newer Mantra settings and when to tweak what.
      Quick info about my scene: still image, no DOF or motion blur, no environment light, just 3 area lights with a sampling quality of 9. I do have some fine displacement, which might be part of the issue; I set shading quality to 3 on that object. Here are my render settings (refl and refr limit at 5, no diffuse bounces, color limit 5); everything else is default. Indirect reflection pass: here is the comp zoomed in a bit. You can see there is visible noise on the black spheres and the displacement, and there is also some square artifacting happening. My guess is that the displacement is just too fine for Mantra to dice properly, resulting in a moiré-type pattern, but that wouldn't explain the square artifacting (it's in the raw EXR, not JPG artifacting). Any help on this would be greatly appreciated.
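For intuition on where these settings spend rays: Monte Carlo noise falls roughly with the square root of the sample count, so halving noise costs about 4x the samples, and per-pixel ray counts multiply up quickly (pixel samples times secondary ray samples). A rough back-of-envelope helper (a simplified model, not Mantra's actual scheduler):

```python
def rays_per_pixel(pixel_samples_xy, ray_samples):
    """Rough ray budget: primary rays per pixel (pixel samples grid)
    multiplied by the secondary samples fired per primary ray."""
    px, py = pixel_samples_xy
    return px * py * ray_samples

def samples_to_halve_noise(current_samples):
    """Noise scales roughly as 1/sqrt(N), so halving the visible noise
    needs about 4x the current sample count."""
    return 4 * current_samples

budget_6x6 = rays_per_pixel((6, 6), 1)   # 36 primary rays per pixel
```

This is why the advice caps pixel samples at 6x6: pushing noise down there multiplies every secondary effect's cost, whereas raising per-light or per-shader sampling quality targets only the noisy component.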