Showing results for tags 'aov'.



Found 14 results

  1. I've run into this issue a few times now. I have a shader that generates several output variables for extra image planes, and those variables would be really useful in SOPs. The only thing I can think of that would let me run the same calculation on each point is to collapse the SHOP network into an OTL that could also exist as a VOP. But the parameters are driven on the shader, and it seems a bit messy to channel-reference all the ramps and other parameters. Is there some way to read the shop_materialpath and then compute and export a variable on a per-point basis? Thanks!
  2. Motion vector

    Hi there, I'm having a hard time exporting motion vectors out of Mantra. I used a method described on this forum: two Get Blur nodes, one set to 0 and the other to 1, subtracted and piped into a custom direction-vector attribute, which I render out as an image plane in Mantra. Unfortunately I'm not getting any motion vectors in the EXR file. The image plane is black, and when I try to use it with a VectorBlur in Nuke or a velocity blur in Houdini's comp area, nothing happens.
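A plain-Python sketch of the method described above (not VEX; `move_point` and the explicit velocity input are hypothetical stand-ins for what the two Get Blur nodes sample at shutter 0 and 1):

```python
def move_point(p, velocity, shutter):
    """Position of a point at a given shutter time (0 = open, 1 = close)."""
    return tuple(pi + vi * shutter for pi, vi in zip(p, velocity))

def motion_vector(p, velocity):
    p_open = move_point(p, velocity, 0.0)   # Get Blur at shutter 0
    p_close = move_point(p, velocity, 1.0)  # Get Blur at shutter 1
    # the subtraction that feeds the custom vector export
    return tuple(c - o for o, c in zip(p_open, p_close))

mv = motion_vector((1.0, 2.0, 3.0), (0.5, -0.25, 0.0))
```

If the plane comes out black, the usual suspects are that the parameter VOP isn't flagged as an export, or the image plane's VEX variable name on the ROP doesn't match the export name exactly.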
  3. Arnold objectID

    Hi, I'm trying to render an objectID AOV in Arnold. I haven't used AOVs yet, so I'm a bit confused, and the Arnold docs are pretty vague about it. Do I have to make shaders for the AOVs? How do I set an ID on an object, or a group? I'm still experimenting, but any info would be great.
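Not Arnold-specific, but to show what an ID AOV buys you in comp: once each object writes an integer ID into the pass, extracting a matte is a per-pixel equality test. A minimal sketch (the image is a nested list standing in for a rendered ID pass):

```python
def id_matte(id_pass, target_id):
    """Per-pixel black/white matte for one object ID."""
    return [[1.0 if px == target_id else 0.0 for px in row] for row in id_pass]

# tiny 2x3 "render" where each pixel stores an object ID
ids = [[1, 1, 2],
       [3, 2, 2]]

matte_2 = id_matte(ids, 2)  # isolates object 2
```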
  4. Hi guys, I have an explosion piece I've been working up to final render, and now I'm looking for an efficient way of separating out each element (base explosion, dust blast, shockwave, trails). I would use deep, but I know deep files get huge for volumes quickly and I'm not sure how the Houdini deep pipeline works. I really don't want to add extra render time, so any handy or efficient tips? I thought about rendering each element separately rather than all together, but then each would need the holdouts/influence of the other elements; I tested this using force matte/phantom but couldn't get what I was after. I would provide a hip file, but I feel there's no need since this is more of a general "how would you go about it?". It will give me more control in comp and mean I can do breakdowns of the elements without rendering twice. Thanks Chris
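For the holdout route the post mentions: if each element is rendered with the others held out, the occlusion is baked into each element's own pass, so a straight plus in comp reproduces the beauty. A one-pixel sketch of that identity (values made up):

```python
def over(fg_c, fg_a, bg_c):
    """Premultiplied 'over' of one pixel channel."""
    return fg_c + bg_c * (1.0 - fg_a)

# one pixel: dust (front) over shockwave (back)
dust_c, dust_a = 0.4, 0.5
shock_c = 0.3

beauty = over(dust_c, dust_a, shock_c)

# separate passes with holdouts: each element keeps the
# occlusion of the others baked into its own pass
dust_pass = dust_c                      # nothing in front of it
shock_pass = shock_c * (1.0 - dust_a)   # dust held out

recombined = dust_pass + shock_pass     # straight plus in comp
```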
  5. Object-based AOVs?

    Is there any way to create object-based Mantra AOVs (for object ID mattes) instead of having to go through the shader? V-Ray for Maya handles this by letting you create an object property with an override object ID; you can throw multiple objects in it and they will all inherit the same ID. I'm referring to what other software calls RGB object mattes.

    I know you can export custom AOVs inside a shader, but for my workflow I have three issues with that: 1) if you are doing quick look-dev, iterating through multiple shaders, and you export that variable in one but quickly shift gears to a different look and forget to include it in the new shader, it won't render; 2) if you want multiple objects to share the same object ID matte, rather than annoyingly shuffle-copying everything together in Nuke, you have to duplicate the same variables in each shader; 3) and most importantly, and I don't really see a way around this, you need to unlock every shader HDA, which is, as I have read, extremely inefficient in H15 since load time and memory footprint are greatly optimized in the locked VEX HDA definition state. Ideally this would be something you could implement at the SOP level in a wrangle or something.

    Also, is there any way to quickly enable/disable all extra image planes, like you can in Maya/Max/Cinema/every other software, without manually unchecking every box? Then finally, when you export a float AOV and bring it into Nuke, it basically reads it as an alpha channel; it won't be visible to a Shuffle node as most AOVs would, and you need a Copy node to extract it. That isn't a huge deal except that I use a lot of Python scripts to automate this, and it breaks my workflow a bit. Any way around this? Any insight is greatly appreciated!
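A hypothetical sketch of the V-Ray-style workflow the post asks for: group objects into named sets, each set sharing one matte ID, instead of duplicating an export in every shader. In Houdini this lookup could feed a per-object property or a SOP wrangle attribute; here it's just the plain-Python logic (all names invented):

```python
# matte ID -> objects that inherit it (a "property set")
matte_sets = {
    1: ["wall_geo", "floor_geo"],   # both get ID 1
    2: ["hero_geo"],                # ID 2
}

def object_id(obj_name, matte_sets, default=0):
    """Look up the shared matte ID for an object; 0 means 'no matte'."""
    for mid, members in matte_sets.items():
        if obj_name in members:
            return mid
    return default
```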
  6. Destruction UV Pass

    Hello everyone, I'm working on a destruction project where I need to isolate the front faces of the bricks in a Bullet sim, so I can export a UV pass of only the front face for use in compositing. I know how to set up Mantra to export an overall UV pass, but I'm not sure how to isolate only the front faces of my brick geo in the UV pass.
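One common approach is to flag the front faces before the sim with a dot-product test against the wall's facing axis (in a wrangle, `v@N` against a chosen direction), store that as a group or attribute, and use it to gate the UV export in the shader. A plain-Python sketch of the test; the front axis and tolerance are assumptions you'd tune for your wall:

```python
def is_front_face(normal, front_dir=(0.0, 0.0, 1.0), tolerance=0.5):
    """Flag a face whose normal points roughly along front_dir."""
    d = sum(n * f for n, f in zip(normal, front_dir))
    return d > tolerance

# keep UVs only on flagged faces; zero them elsewhere so the
# UV pass isolates the wall front
front = is_front_face((0.0, 0.1, 0.995))   # nearly +Z: front of the wall
side = is_front_face((1.0, 0.0, 0.0))      # +X: a side face
```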
  7. Hello guys, I'm working on a scene with some atmospheric fog. The problem is that I can't find which render layer the fog goes into! It appears in the beauty, but I can't find it anywhere else. My theory is that the z-depth pass is used to create the fog in the beauty, and I've tried to replicate the fog in comp using this theory, with some success: in Nuke, if I shuffle out the z-depth, grade it, and plus it with (for example) the diffuse (just quick and dirty), the same effect shows. The problem is that I can't recreate the exact same image (as if I would shuffle everything out and back in again). I would love to create my own fog shader and output it to a render layer, but I don't have the know-how, and Houdini doesn't let me dive into the Z-Depth Fog node for examples. Does anyone know which render layer the fog ends up in? If not, can someone give me a quick example of how to create a simple fog shader? Thank you in advance, Russle P.S. Included a simple scene for review. Z-Depth.hip
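Depth fog is typically some exponential falloff in camera depth blended toward a fog color; the exact curve Houdini's Z-Depth Fog node uses isn't visible, so treat this as a sketch of the general idea rather than its actual formula:

```python
import math

def depth_fog(color, z, fog_color=1.0, density=0.1):
    """Blend a pixel toward fog_color by an exponential falloff in depth."""
    f = 1.0 - math.exp(-density * z)   # 0 near camera, tends to 1 far away
    return color * (1.0 - f) + fog_color * f

near = depth_fog(0.5, 1.0)    # barely fogged
far = depth_fog(0.5, 50.0)    # almost fully fog-colored
```

This also explains why a graded z-depth plussed over the diffuse gets close but not exact: the blend above is a lerp toward the fog color, not a pure add.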
  8. Hey guys! I'm playing with a large ocean render using Mantra PBR. In my scene I have a simple HDR environment light and an area light. The area light is creating all the specular highlights in this render, and the shader is the default from Ocean Waves. I'm not sure what passes to render for the composite. Right now I'm rendering a depth pass, a normal pass, and a direct reflection pass. I would really like to have the specular in its own pass, and maybe some kind of motion vector pass; how do I achieve this? Since I'm using the default shader, I'm guessing I just have to use the correct VEX variables in the extra image planes? If that makes any sense. And can you Houdini experts recommend some other passes to render for oceans and FLIP fluids? Any help is greatly appreciated, thanks!
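As far as I know, Mantra's per-component exports are designed so that the beauty is (roughly) the additive sum of the component passes, which is what makes them useful for relighting in comp. A one-pixel sketch with made-up values; the pass names below are illustrative, not an exhaustive or exact list of Mantra's export variables:

```python
def plus(*passes):
    """Additive recombine of per-component passes (one pixel channel)."""
    return sum(passes)

direct_diffuse = 0.20
direct_reflect = 0.35    # the area light's highlights land somewhere in reflection
indirect_reflect = 0.10  # HDR environment contribution
refract = 0.05

beauty = plus(direct_diffuse, direct_reflect, indirect_reflect, refract)
```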
  9. Hello everyone! I need a temperature render pass from a Pyro simulation for post-compositing purposes. When I render it as a "temperature" float 32 pass, I get negative values near the borders of the volume. Why is this happening? Picture attached. I can normalize this somehow after the render, but I want to know why it is that way. There is a dead-end "Fit Range (Unclamped)" node (picture attached) in the default shader (Fireball in this case). I can pass the "temperature" parameter through some Fit Range or even Reshape nodes, but that doesn't answer the question of why, and I don't think it's the right way. I should say that remapping from the shader tab hasn't worked properly for me: even with clamping and the fit set to 0-1, there were negative values and values bigger than 1. Basically, I need a normalized (0 to 1) temperature render pass for my simulation. Maybe someone can help me with this? It would be much appreciated. Attaching 2 screenshots. I'm not attaching a .hip file because it's just a simple sphere with the Pyro Explosion shelf preset on it. P.S. I don't want to make another post right now, so I should ask: does anybody know a proper way to make a vector-field texture from a Pyro sim in Houdini? Right now I'm using a Volume Slice, rendering a remapped, normalized (0 to 1) motion vector pass from a top orthographic view, and compositing it into a kind of flipbook, which is then converted into a DDS volume texture. Maybe there is another way to do it?
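The likely mechanics: volume filtering/interpolation near the borders can produce samples below the field's nominal minimum, and an *unclamped* fit-range remap passes those straight through as negative values. A clamped fit before the export pins them to the new range. A plain-Python sketch of the difference (the 0-2 input range is just an example):

```python
def fit(value, omin, omax, nmin=0.0, nmax=1.0, clamp=True):
    """Remap value from [omin, omax] to [nmin, nmax].
    With clamp=False, out-of-range inputs produce out-of-range outputs,
    which is where the negative border values come from."""
    t = (value - omin) / (omax - omin)
    if clamp:
        t = min(1.0, max(0.0, t))
    return nmin + t * (nmax - nmin)

# a filtered border sample slightly below the old minimum:
unclamped = fit(-0.2, 0.0, 2.0, clamp=False)  # goes negative
clamped = fit(-0.2, 0.0, 2.0)                 # pinned to 0
```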
  10. Hello, I have been trying some volumetrics with Arnold and they seem to render really fast. But I have an issue recreating the beauty pass in compositing. Arnold can output 4 AOVs for volumes: volume (beauty), volume_direct, volume_indirect and volume_opacity. In my scene I am not using indirect lighting for volumes, so volume_indirect is empty and volume (beauty) = volume_direct. But I don't know how to composite the volume and volume_opacity AOVs, as they both have RGB data. If volume_opacity had only one channel, I could maybe set it as the alpha for the volume AOV. All AOVs in Arnold should be composited with an additive operation (plus or screen), but that doesn't seem to work for volumes. Any ideas? Thanks
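One plausible reading (an assumption about Arnold's convention, not confirmed by its docs here): volume_opacity is RGB because volumes attenuate each channel differently, so it should be treated as a per-channel alpha in an "over" against the background, not plussed. A one-pixel sketch with made-up values:

```python
def over_volume(vol_rgb, vol_opacity_rgb, bg_rgb):
    """Composite a volume pass over a background using its RGB opacity:
    out = volume + background * (1 - opacity), per channel."""
    return tuple(v + b * (1.0 - o)
                 for v, o, b in zip(vol_rgb, vol_opacity_rgb, bg_rgb))

vol = (0.2, 0.25, 0.3)
opa = (0.6, 0.7, 0.8)    # RGB opacity, not a single alpha
bg = (1.0, 1.0, 1.0)     # white background

comp = over_volume(vol, opa, bg)
```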
  11. Extra Image Planes

    Hi all, in my ROP node I have set up an extra image plane with "Different File" turned on, gave it a channel name like "rgbOut", and saved it as OpenEXR (Houdini 14). Viewing the extra image plane inside Nuke, there is no information in my rgb channel: Mantra puts it into a separate channel. I can't seem to find a way to render extra image planes (AOVs with their own file directory and name) into their own rgb channel, like any other 3D package would do. If I don't give it a channel name, it simply uses the parameter name as the channel name. I can't see the point of this; what is the workaround? Cheers Gordon
  12. Hi everyone! I've got an issue with an illuminance loop (H14 Apprentice). I'm trying to export the Cl (light color) parameter inside the illuminance loop, to obtain a light-color render pass for each light. I've seen in some videos that it works for other people, but not for me: when I toggle the "For Each Light" checkbox on, the passes I receive are black and empty. Scene attached. Could you please help me? I think it's basic and simple, but I can't understand what is going wrong. It would be much appreciated. Thanks for your attention. P.S. I'm using the visual builder, not a coder for now. loop_issue.hipnc
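Black per-light passes usually mean the export happens outside the loop body, or the corresponding image planes aren't enabled on the output driver. For reference, a plain-Python sketch of what the loop export should compute: each light's contribution accumulates into its own pass, and the beauty is the sum over all lights (light values and the flat diffuse factor are made up):

```python
lights = {"key": 0.6, "fill": 0.25, "rim": 0.1}  # Cl per light (one channel)

def per_light_passes(lights, diffuse=0.8):
    """Accumulate a per-light pass and the combined beauty."""
    passes = {}
    beauty = 0.0
    for name, cl in lights.items():   # the illuminance loop
        contrib = cl * diffuse
        passes[name] = contrib        # the "For Each Light" export
        beauty += contrib
    return passes, beauty

passes, beauty = per_light_passes(lights)
```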
  13. Hi, I want to output a pass with the shadows that are cast on an object that is a matte in my beauty render. Since I can't think of a way to force matte and phantom objects per image plane, I created a second Mantra node that outputs just the shadow pass, which works fine. But is there a way to disable the primary beauty render on that second node, so I don't render unnecessary images? I just want the extra AOVs from that render node. Thank you Georgios
  14. deep primid aov?

    (Apologies for the repost; I originally sent this to the SideFX list, but thought folk here might have info too.) Short version: outputting primid as a deep AOV appears to be premultiplied by the alpha; is that expected? Unpremultiplying helps, but doesn't give clean primids that could be used as selection mattes in comp.

    Long version: say we had a complex, spindly object like this motorcycle sculpture created from wires. Comp would like control over grading each sub-object of the bike, but outputting each part (wheel, engine, seat, etc.) as a separate pass is too much work; even RGB mattes would mean at least 7 or 8 AOVs. Add to that the problem of the wires being thinner than a pixel, so standard RGB mattes get filtered away by opacity. Not ideal. Each part is a single curve, so in theory we'd output the primitive id as a deep AOV. Hmmm...

    I tested this: created a few poly grids, created a shader that passes getprimid -> parameter, wrote that out as an AOV, and enabled deep camera map output as an EXR. In Nuke I can get the deep AOV and use a DeepSample node to query the values. To my surprise the primid isn't clean: in its default state there are multiple samples, the topmost sample is correct (e.g. 5), but the values behind are nonsense fractions (3.2, 1.2, 0.7, 0.1, etc.). If I change the main sample filter on the ROP to "closest surface", I get a single sample per pixel, which makes more sense, and sampling in the middle of the grids I get correct values. But if I look at anti-aliased edges, the values are still fractional. What am I missing? My naive understanding of deep is that it stores the samples prior to filtering; as such, the values the DeepSample picker returns should be correct primids, not filtered down by opacity or antialiasing.

    As a further experiment I tried making the AOV an RGBA value, passed Of to the alpha, then unpremultiplied by the alpha in a DeepExpression node. It helps on edges, but if I then add another DeepExpression node and try to isolate by an id (e.g. 'primid == 6 ? 1 : 0'), I get single pixels appearing elsewhere, usually on unrelated edges of other polys. Has anyone tried this? I read an article recently where Weta talked about using deep IDs to isolate bits of chimps; it seems like a useful thing that we should be able to do in Mantra.
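A sketch of why unpremultiplying rescues interior and simple edge samples but not all edges: if the export was multiplied by alpha, dividing it back out recovers the ID only when a sample covers a *single* primitive. Once a filtered sample blends two primitives, no division can separate them, and the recovered "ID" is a meaningless fraction (made-up coverage values below):

```python
def unpremultiply_id(stored, alpha):
    """Recover an ID from a sample whose export was multiplied by alpha."""
    return stored / alpha if alpha > 0.0 else 0.0

# a clean interior sample vs. a filtered edge sample of primid 5
interior = unpremultiply_id(5.0, 1.0)
edge = unpremultiply_id(5.0 * 0.3, 0.3)

# but if prims 5 and 6 were blended into one edge sample
# (coverages 0.2 and 0.1), division can't separate them:
mixed = unpremultiply_id(5.0 * 0.2 + 6.0 * 0.1, 0.3)  # a fraction between 5 and 6
```

That fractional result is consistent with the nonsense values (3.2, 1.2, 0.7...) the DeepSample node reported, and with an `id == N` expression misfiring on unrelated antialiased edges.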