
Suppress small artefacts in PBR renders



I'm noticing excellent startup times in Vray for Maya and Arnold for Houdini - a few seconds - whereas Mantra is currently taking many more seconds. Are you all getting the same on Windows/Linux? I'm thinking it's an OS X 'feature'.

 

Unfortunately Mantra takes a very long time to start up. This is a result of the way it parses the scene with Python using SOHO. The time is almost linearly proportional to the number of objects and shaders in your scene. Merging everything into one single geo node using delayed load procedurals will speed up the startup time considerably. We still regularly hit the wall with production scenes where generating the IFD takes longer than the render itself (a 15-minute IFD vs. a 10-minute render).

We are working on solutions for this, but none has proven to be the silver bullet yet. I am open to any suggestions ;)
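For anyone who wants to measure this on their own scenes, here is a minimal sketch that isolates SOHO/IFD generation from the actual render. It assumes a Mantra ROP at /out/mantra1; "soho_outputmode" and "soho_diskfile" should be the Disk File parameters on the Mantra ROP in recent builds, but check your version if they differ.

```python
# Time IFD generation only, without launching mantra.
import time
import hou

rop = hou.node("/out/mantra1")        # assumed ROP path
rop.parm("soho_outputmode").set(1)    # write a disk file (IFD) instead of rendering
rop.parm("soho_diskfile").set("$HIP/ifds/test.$F4.ifd")

start = time.time()
rop.render()                          # SOHO walks every object and shader here
print("IFD generation: %.1f s" % (time.time() - start))
```

If that number grows roughly linearly with the object count, merging into a single geo node with delayed load procedurals (as described above) is the usual workaround.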


Mantra does take a while on some shots to generate the geometry to render - usually more than other renderers, maybe - but I have never had it take that long to generate the IFDs. Are you using Windows or Linux? Did you try rendering with packed objects to see if there is a difference?


From Houdini help:

 

"If you notice some artifacts in corners, increase the Prefilter Samples or try increasing the Photon Distance Threshold. Increasing the number of samples may require you to go back and increase the Photon Count to sharpen the photon map result."

 

I hope that helps.


I found that a Photon Distance Threshold of 0 helps performance, but you can get corner artefacts. I also found that increasing the photon count and the Prefilter Samples fixes a lot of the artefacts, or even all of them. A combination of the two can get you the best of both worlds.


It's pretty well explained here:

http://www.sidefx.com/index.php?option=com_content&task=view&id=1412&

 

Basically:

- at 0, Mantra reads the photon map only. It's the fastest, but also the most prone to corner artefacts.

- the higher you go, the less Mantra relies on the photon map and the more it relies on true PBR calculation => slower.

 

For the 18-minute render my Photon Distance Threshold was at 0, hence the faster render.
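If you want to experiment with this trade-off from Python rather than the UI, a sketch along these lines works; note that the parameter names below are assumptions - check the photon settings on your Mantra ROP for the exact names in your build.

```python
# Trade speed against corner artefacts via the photon settings.
import hou

rop = hou.node("/out/mantra1")       # assumed Mantra ROP path

settings = {
    "vm_photondistance": 0.0,        # assumed name: 0 = read the photon map only
                                     # (fastest, most prone to corner artefacts)
    "vm_photoncount": 2000000,       # assumed name: more photons = sharper map
    "vm_photonprefilter": 16,        # assumed name: more prefilter samples = smoother result
}

for name, value in settings.items():
    parm = rop.parm(name)
    if parm is not None:             # skip any name that doesn't exist in this build
        parm.set(value)
```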


@marty => Great tricks for the pmap loading

 

@Zoran => what do you mean by packing geometry? Creating an IFD?

 

I have done a small camera animation (mov + sequence):

https://www.dropbox.com/s/l4fjw6ae298oyen/seq_test.mov?dl=0

https://www.dropbox.com/s/ygairux8r91o8l8/test_seq.rar?dl=0       

you have to download them to avoid Dropbox's shoddy preview

 

I activated the sample lock option:

- I have no flickering

- but my noise pattern stays constant, so you get the weird impression that the noise slides along the walls

- there are also some corner artefacts in certain areas, but nothing tragic

I am not sure sample lock is a good idea in the end; I will try without it.


Hi,

this is a bit off topic, but I guess it makes sense to elaborate on the problems with the current state of Houdini's Alembic workflow a bit. It is closely related to the artist's shading/lighting workflow anyway.

 

There are currently two ways you can load your Alembics into Houdini: as a hierarchy of geo nodes at obj level, or in SOPs as packed geometry (SideFX seems to favor the latter, as that's where development is headed).

 

Both ways have their benefits but also their problems. When working with a hierarchy, you get fast viewport feedback when assigning shaders, and IPR updates are quick once the initial translation is completed. The ability to use bundles is also a big plus for that workflow. Unfortunately, when you have big assets (think 5000 geo nodes), scene translation takes ages. Delayed loads are also not really an option for this as far as I can see (correct me if I'm wrong).

 

Working with packed Alembics in SOPs is a nightmare with big assets (5000+ packed primitives with a total of 20+ million unpacked primitives). Assigning shaders with packed edits takes up to 5 minutes to update, because it seems Houdini is sending the whole geometry to the graphics card every time instead of only the changes. It also eats your memory for breakfast: just selecting nodes that should already be in memory seems to load them again, and the previously allocated memory is not released. Scene translation time is not that bad, but in contrast to the hierarchy workflow it has to be redone every time a new shader gets assigned, because shading is stored as an attribute on the geometry. Delayed loads don't help here, as Houdini still has to grab the shading information from the SOP network.

 

We are experimenting with material stylesheets right now, and we'll see if that helps with the shading workflow. I would be happy for any suggestions regarding the Alembic workflow in Houdini, as for us - in its current state - it seems utterly broken for any real production scenario.
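For anyone curious what that experiment looks like in practice, here is a minimal stylesheet sketch, meant to run inside a Python SOP after the Alembic SOP. It assumes stylesheets are picked up from a detail (global) string attribute called "material_stylesheet", that a /mat/brick material exists, and that "@path=*wall*" matches your Alembic path attribute - verify all of that against the stylesheet docs for your build.

```python
# Attach a rule-based material stylesheet to packed Alembic geometry.
import json
import hou

node = hou.pwd()            # this runs inside a Python SOP
geo = node.geometry()

sheet = {
    "styles": [
        {
            "label": "walls",
            # target packed prims whose path attribute matches the pattern
            "target": {"subTarget": {"group": "@path=*wall*"}},
            "overrides": {"material": {"name": "/mat/brick"}},
        }
    ]
}

geo.addAttrib(hou.attribType.Global, "material_stylesheet", "")
geo.setGlobalAttribValue("material_stylesheet", json.dumps(sheet))
```

The appeal over packed edits is that reassigning a shader only changes the JSON string, not the geometry itself.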

 

cheers,

Dennis

 

P.S.: I don't know why the first question (when it comes to performance) always seems to be whether you are using Windows or Linux. It may be true that Linux is more efficient with memory management, and that might be a big plus for simulations, but in general I have not found any relevant performance benefits in Linux - especially not in terms of rendering speed (I tested this exhaustively). I am using CentOS at work and Windows at home.


From the problems you describe, Dennis, this is exactly the area where I haven't been able to experience the power of Clarisse myself. But in this regard Clarisse - while not having a nodal tree and linear layering - looks to be very powerful for the scenario you describe.

 

Loading Alembic is pretty clean and easy.

Shader assignment by rules looks extremely good.

Throwing billions of polygons at it also looks to be no problem.

 

I was kind of disappointed by the tool, but I think it has very strong potential, and for the scenario you describe:

- huge numbers of .abc geos

- need for high interactivity

- fast shader assignment

- assignment by bundles/groups

- efficient memory management

 

I would consider testing Clarisse in depth, because while it is still a young product, I think it will answer most of the issues you are describing.

I don't say Clarisse is better than Mantra, but your area of use looks to be the one where Clarisse shines.

 

I am still waiting for a 2.0 trial, by the way... Clarisse support is not Chaos Group support...

 

Very interesting read, thanks for your feedback!


I hope this was not answered before, but is photon mapping used in production? I am asking because I've worked with Arnold for the last five years (in Softimage) and don't even bother anymore with caching or baking. But it seems to be something worth looking into.

 

Comparing Mantra to Arnold, I must say that Mantra has a huge benefit with its variance-based ray sampling. Arnold samples every pixel the same (more or less). It does so pretty quickly, but it feels wasteful at times.

On the other side, Arnold chews through geometry without consuming that much memory. That is something that annoys me with Mantra, as I need at least 32 GB of RAM in my render clients to account for heavier scenes.


Deciding how and what to load from an Alembic file is something I've been amazed by since I started using Houdini. But you're right, Dennis, somehow it still is not enough. I'm really looking forward to the KL (Fabric Engine) implementation in Houdini, which I am pretty sure will happen soon. Imagine a toolset in which you could decide how to load content from abc's, dependent on rules and logic - multithreaded traversing and more. Some good times ahead of us.
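You can already fake a small piece of that today. Here is a rough sketch of rule-driven loading with the standard Alembic SOP and its "fileName"/"objectPath" parameters; the rules dict and file path are made up for illustration.

```python
# Build one Alembic SOP per rule so each branch of the archive loads separately.
import hou

RULES = {
    "environment": "/root/env",
    "characters":  "/root/chars",
}

container = hou.node("/obj").createNode("geo", "abc_loader")
for label, branch in RULES.items():
    loader = container.createNode("alembic", label)
    loader.parm("fileName").set("$HIP/abc/shot.abc")
    loader.parm("objectPath").set(branch)   # load only this branch of the archive
container.layoutChildren()
```

Multithreaded traversal and real rule logic would still need something like the KL integration, of course.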



Sebastian, I think what you describe is already a reality in Clarisse! :)

 

As for not using photon maps and going brute force without optimisation: I honestly think it's the most efficient way to do things when you have the money to buy an army of blades.

Maxwell / Arnold give you:

- very good predictability in your renders

- a no-flicker warranty

- a 30s to 10min setup time

- very good output

 

Optimisation is a big loss of time. If you have used prman with baking shadow maps / baking ptc / generating bkm / exporting RIBs / reading archives etc., you know what I mean.

I think those optimisations are justified when you have a small shop and can't afford 2-5h render times per frame, but more like 5min-30min per frame.

 

But if I were rich my choice would be easy: 300 Maxwell render nodes, some nice Xeon blades, and voila... :)

