
Rendering pyro Zdepth problem



Hi all,

I searched here and on Google first but didn't find anything, so sorry if I missed something...

(And I'm a real newbie in Houdini and in rendering, so that doesn't help!)

 

I'm trying to render the Zdepth of a smoke sim I did, with Mantra PBR.

It sort of works when I uncheck Stochastic Transparency; with it checked, the pass is full of white dots and artifacts.
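In case it helps, this is roughly how I'm setting it up via Python (the node path is from my scene, and I'm assuming vm_stochastic and the standard extra-image-plane parm names on the Mantra ROP; please correct me if those are wrong):

```python
import hou

# My Mantra ROP -- path is specific to my scene.
rop = hou.node("/out/mantra1")

# "Stochastic Transparency" toggle (assuming the parm is vm_stochastic).
# With it checked, the Zdepth pass comes out full of white dots for me.
rop.parm("vm_stochastic").set(0)

# One extra image plane set to Pz (depth from camera); again assuming
# the standard extra-image-plane parm names here.
rop.parm("vm_numaux").set(1)
rop.parm("vm_variable_plane1").set("Pz")
rop.parm("vm_vextype_plane1").set("float")
```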

 

I saw in another post that I should use DCM, but I'm trying to see if there's a way to render it like this, since it seems to nearly work...

But there's one thing annoying me: my container is being rendered (as shown in the attached file).

If someone can enlighten me, I'd highly appreciate it, because I'm really lost!

 

One other thing I noticed: in the Zdepth pass the voxels seem to render flat (or solid-colored?), so there's a voxel pattern I don't see in the color pass.

Is that normal?

 

Thanks for your time, and thanks a lot if someone can answer me.

 

 

[attached image]


  • 1 year later...

Deep works seamlessly with Nuke. Where are you getting stuck?

 

I'm coming from a rigging/Maya TD background with minimal compositing and rendering experience. The generalists/compers I'm working with have said they'd like deep RGB, depth, alpha, and opacity passes for my pyro FX.

 

I've tried using tutorials from past versions of Houdini for rendering depth passes, but Houdini has changed a lot in the past couple of years, so I'm having difficulty finding relevant information on the subject. I'll do some searches for deep renders using H14, but if you wouldn't mind explaining the deep rendering workflow in case I get stuck, that would be much appreciated. Thank you!


Cool. Here's a post with a zip file that contains everything you need for Deep in Houdini and Nuke, IIRC:

http://forums.odforce.net/topic/22236-help-me-about-smoke-obscurancematting/?hl=deep

 

In H14, Deep output is now located under Mantra/Images/Deep Output -> Deep Camera Map.

 

Set that DCM filename to .exr and read that into Nuke with DeepRead.
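If you'd rather set it from Python, something like this should do it (the node path is hypothetical, and the parm names are what I remember from the Deep Output tab, so double-check them on your build):

```python
import hou

rop = hou.node("/out/mantra1")  # your Mantra ROP path here

# Images > Deep Output -> Deep Camera Map (parm names from memory;
# verify them on your H14 build).
rop.parm("vm_deepresolver").set("camera")
rop.parm("vm_dcmfilename").set("$HIP/render/deep.$F4.exr")
```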

 

Let us know how it goes.


Thanks a ton, Marty!  The renders came out great and the compers have the point cloud working in Nuke.  

 

The only issue we have now is that the point cloud is a bit skewed. The compers think it has to do with the Alembic camera inside of Nuke. My scene has a ratio of 1 and the Nuke camera has a ratio of .5. I'll correct the camera tomorrow and let you know if that fixed the problem. Again, thank you!


  • 2 weeks later...

Hi Marty, sorry for the delay. The deep pass was coming in translated (offset) in Nuke, and we didn't have time to figure out what the issue was, so we left the smoke out of the shot. Here's an edited file with the Alembic camera if you'd like to do a postmortem on it. I'd really like to find out what caused the issue so we can use deep images in the future. Thanks!

deepRender_odForce.zip


I took a quick look at your file. Never ever, ever scale your camera. That is most probably the cause of your problems.

 

Ahh, thank you for the tip!  Would you mind giving me some pointers on working with scale in Houdini?

 

This is my current workflow: scale the Alembic scene (geo and camera from Maya) down to 0.01, create and cache out the sim, then scale the Alembic scene back up to normal and scale the sim by 100. Is this a correct workflow, or is there a better way? I know you can change the hip unit length from meters to cm, but I found that to be ineffective. Thank you!
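In Python terms, my pre/post-sim scaling looks roughly like this (node paths and names are just from my scene; I'm using the Transform SOP's uniform scale parm):

```python
import hou

sopnet = hou.node("/obj/maya_import")  # my Alembic import object (hypothetical path)

# Scale everything down before simming (wire the Alembic SOP into this)...
pre = sopnet.createNode("xform", "scale_down")
pre.parm("scale").set(0.01)

# ...then, after caching, scale the sim result back up (wire the cache in here).
post = sopnet.createNode("xform", "scale_up")
post.parm("scale").set(100)
```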


Scaling geometry and such is usually fine and painless; the big issue is directly scaling the camera inside Houdini. I can't quite remember the exact factor, but you usually scale things coming from Maya down by 0.1 or 0.01.

For things to render properly, you want to scale the camera's transform, but not the camera itself.

Like this (it's a gif, click it!):

[animated gif attachment]

So we bring in the camera transform with an Alembic Xform, which we scale with a null. Then we apply that transformation, but not the scale, to a fresh new camera. I've used this a lot in production and it seems to be the only way that works every time.
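If you'd rather do the same thing as a one-off Python snippet instead of node wiring, the idea is identical: copy translate/rotate, drop scale. Node paths below are hypothetical, and this handles a single frame (you'd run it over your frame range for an animated cam):

```python
import hou

xform = hou.node("/obj/alembic_cam_xform")  # Alembic Xform from the Maya cam
cam = hou.node("/obj/render_cam")           # fresh Houdini camera

# Split the full world transform into its components and copy over
# everything except the scale.
parts = xform.worldTransform().explode()    # translate / rotate / scale / shear
cam.parmTuple("t").set(parts["translate"])
cam.parmTuple("r").set(parts["rotate"])
cam.parmTuple("s").set((1, 1, 1))           # camera scale stays at 1
```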


Hi Mike,

 

Especially with volumes, you never want to render with a scaled camera. I'm no render master, but the renderer uses the camera's scale to determine how to correctly march through your volumes. If you increase or decrease the camera scale, your volume's density will render incorrectly and the renderer won't sample the volume as it should. You'll also see the problems in any masks you try to export; they'll become dotty and unusable.
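A toy illustration of why (pure Python, nothing Houdini-specific, and just my own sketch of the idea rather than what Mantra actually does internally): Beer-Lambert absorption stepped along a ray. If the camera's scale effectively shortens or lengthens the marched distance, the accumulated density comes out wrong.

```python
import math

def marched_opacity(density, length, step):
    """Accumulate opacity through a constant-density slab, step by step."""
    transmittance = 1.0
    for _ in range(int(round(length / step))):
        transmittance *= math.exp(-density * step)  # absorption per step
    return 1.0 - transmittance

# Correct world-space march through 10 units of smoke:
print(marched_opacity(0.5, 10.0, 0.1))        # ~0.993

# If a scaled camera effectively halves the marched distance, the same
# volume renders thinner than it should:
print(marched_opacity(0.5, 10.0 * 0.5, 0.1))  # ~0.918
```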

