
Leaderboard


Popular Content

Showing most liked content on 04/22/2019 in all areas

  1. 2 points
    I see a couple of things happening. You have based your grain generation on the proxy mesh for the ground plane; it is better to dedicate a unique object to this. I also noticed that the points coming into the Point Deform have a count mismatch. This is because when OpenCL is enabled, it generates a different number of particles than the CPU rest version does. Turn off OpenCL to re-establish the match. With these modifications you don't really need the point deform version anymore, but it could be kept as a secondary material element. ap_LetterSetup_001_collProbs.hiplc
  2. 1 point
    If you want to render the files sequentially, you can do the same thing you did in that .cmd, but in Python. Something like:
    hou.hipFile.load(file1)
    rop = hou.node(path_to_rop)
    rop.render(args)
    hou.hipFile.load(file2)
    rop = hou.node(path_to_rop)
    rop.render(args)
    ...
    If you want to spawn new processes to render your files, you can call hrender using subprocess:
    subprocess.Popen([path_to_hython, path_to_hrender, hrender_args])
    And, of course, you can write a custom Python script to suit any other needs.
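A minimal sketch of both approaches described above. All paths, the ROP name, and the `-d`/`-e`/`-f` hrender flags reflect hrender's documented usage but should be verified against the `hrender` in your own `$HFS/bin`; `hou` is only importable inside Houdini's `hython`, so the command-building part is kept as a plain helper you can test anywhere:

```python
import subprocess

def build_hrender_cmd(hython, hrender, hip_file, rop_path, frame_range=None):
    """Assemble the argument list for launching hrender in a child process.

    All paths here are placeholders; -d names the output driver (ROP), and
    -e -f start end requests a frame range, per hrender's usage message.
    """
    cmd = [hython, hrender, "-d", rop_path]
    if frame_range is not None:
        start, end = frame_range
        cmd += ["-e", "-f", str(start), str(end)]
    cmd.append(hip_file)
    return cmd

def render_sequentially(hip_files, rop_path):
    """Sequential variant: load each .hip and fire its ROP in turn.

    Runs only inside hython, where the hou module is available.
    """
    import hou  # only importable inside Houdini's Python
    for hip in hip_files:
        hou.hipFile.load(hip)
        hou.node(rop_path).render()

if __name__ == "__main__":
    cmd = build_hrender_cmd("/opt/hfs/bin/hython", "/opt/hfs/bin/hrender.py",
                            "shot_010.hip", "/out/mantra1", frame_range=(1, 24))
    print(cmd)
    # To actually launch the render as a separate process:
    # subprocess.Popen(cmd)
```

The subprocess route is the safer one for batch farms, since each hip file gets a fresh Houdini session and a crash in one render cannot take down the rest of the queue.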
  3. 1 point
    Is there a reason you're not rendering using python directly? You can check inside $HFS/bin/hrender.py or render.py for some examples. If your usage is simple, those scripts would be enough for you to use directly
  4. 1 point
    I was already using that method. Still, your advice was helpful, because I was not sure of the role of the Point Deform node. However, I solved the issue by making more specific groups: before, I had put two separate points into one group. After making more specific groups, it's not happening again. Anyway, thank you very much.
  5. 1 point
    Nature's first post... play with that file... strange that you didn't discover it. You can make everything from nature with that file; about 80% of this gallery has been done with it. Play with VOPs and the leaf... noise is the answer.
  6. 1 point
    I also moved all the Tumblr example files I've been sharing onto my new website, which you can find here: https://www.richlord.com/tools
  7. 1 point
    Here's a music video I made using a bunch of the techniques from this thread.
  8. 1 point
    If you are using the defaults for volume rendering, then yes, you can certainly get a handle on the render times by tuning the volume rendering, with a bit of a sacrifice in quality (if at all).

Volume Step Size

The main way to tune your volume rendering is the Volume Step Size on the Mantra output ROP (Properties > Sampling folder). Step size is in Houdini units in camera space, and it has a great impact on quality; it is a trade-off, and that is what tuning is for. What you are looking for here: if your volume object is very large (bigger than, say, 1 Houdini unit), you can safely increase the Volume Step Size and see little to no quality difference. I increase the step size while I am tweaking colors and shaping, then decrease it when tuning the actual fine detail on the smoke.

Pixel Samples

This is the second major way to get speed back into your volume rendering. It defaults to 3x3, which is decent quality. Decreasing this to 1x1 will certainly speed up your renders but will introduce visible noise into the final image. If you have a great deal of motion blur and/or DOF, you can get away with turning the samples down a bit. At 2x2, the difference in quality from the 3x3 render is noticeable but not as bad as 1x1, where sampling noise is clearly evident.

Shading Quality

Since we are dicing the volume primitive into micropolygons (non-PBR) before we even shade, shading quality comes into play. You can control the shading quality on a per-object level, and that is exactly where you find this parameter: Object > Render > Dicing > Shading Quality. Increasing it creates more micropolygons and therefore more shader calls, and because we are marching through the volumes, the number of shader calls can get out of hand. Decreasing this value will reduce the memory consumed as well as render times, but not nearly as drastically as throttling back the step size.

If you are unfamiliar with what shading quality does in a micropolygon renderer, you have some homework to do: seek out the OdForce wiki and the RenderMan documentation, plus the Advanced RenderMan book. FYI, it controls the size of the micropolygons, which inherently controls the number of times the shaders are called.

Motion Factor, DOF and Motion Blur

If you are using Depth of Field (DOF) and/or motion blur, you should try out Motion Factor. It controls the Shading Quality dynamically: depending on the level of DOF or motion blur, it can decrease the shading quality, which means less memory and faster renders.

Control the Number of Lights

Since we are marching into a volume, each shader call will loop through all the lights illuminating the current micropolygon in the volume. By limiting the number of lights we need to look up, we get a proportional decrease in render times. Following this, the fewer lights we have generating and using deep shadow maps, the quicker the render times. Remember, it is the deep shadows that give you depth; you may only want one or two lights with this option on, and the rest of the lights not using shadows, just adding bits of illumination.

So, if you are using ambient occlusion (the Environment Light from the shelf) with your volume objects, don't. It really slows down your volume rendering, because each shaded micropolygon causes an awful lot of rays to be generated by the raytraced ambient occlusion. Use standard CG lighting techniques to get the look you are after: at the default setting of 16 rays, you could ostensibly add up to 16 carefully placed lights, get a similar impact, and have a great deal more control. If you really need occlusion, limit the number of rays.

Opacity Limit

When you are rendering geometry with a certain amount of transparency, you can use the opacity limit to terminate the current ray once the opacity threshold is met. If used too aggressively, you will see flickering and strange banding in your volume over a sequence of renders as the camera moves or the smoke evolves; this is the varying opacity thresholds in the volume, most evident if your volumes are wispy and transparent. Here you are chasing a 1-5% decrease in render times, not the large decreases above. The default value is already pretty aggressive, so be careful to avoid adding nasty artefacts.

Use Standard Displacement and Surface Shader Optimizations

It's really nice to have volume rendering where we can use standard surface and displacement shader optimization tricks. There should be quite a few tips and ideas on the two main forums. At this point your optimizations are in the 1-5% decrease in render times.

Note: Both the depth maps (deep shadows with opacity) and the final rendered image benefit from the tuning above.
Note: If you make a change such as decreasing the step size, you need to regenerate your depth maps.
Note: Make sure to turn on Mantra profiling to see the actual render times and the real impact on rendering speeds.
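The two biggest levers above (step size and pixel samples) can be sketched as a small helper. The scaling heuristic is my own rule of thumb derived from the "bigger than 1 Houdini unit" advice, not from the post, and the parameter names `vm_volumestepsize` / `vm_samplesx` / `vm_samplesy` are my best guess at the Mantra ROP parm names — verify them on the actual ROP in your Houdini build:

```python
def suggest_volume_step(volume_size_units, base_step=0.1):
    """Heuristic (an assumption, not from the post): for volumes larger
    than ~1 Houdini unit, scale the step size up roughly with the
    object's size, since the quality loss stays small."""
    scale = max(1.0, volume_size_units)
    return base_step * scale

def mantra_tuning_parms(volume_size_units, draft=True):
    """Map the advice to (assumed) Mantra ROP parameter names.

    Drafts use 2x2 pixel samples per the post's observation that 2x2 is
    a reasonable middle ground; finals go back to the 3x3 default.
    """
    samples = 2 if draft else 3
    return {
        "vm_volumestepsize": suggest_volume_step(volume_size_units),
        "vm_samplesx": samples,
        "vm_samplesy": samples,
    }

if __name__ == "__main__":
    parms = mantra_tuning_parms(volume_size_units=5.0, draft=True)
    print(parms)
    # Inside Houdini you would then apply these (hypothetical node path):
    # import hou
    # rop = hou.node("/out/mantra1")
    # for name, value in parms.items():
    #     rop.parm(name).set(value)
```

Keeping the draft/final switch in one place makes it easy to flip a whole farm submission between fast look-dev renders and full-quality finals without touching the ROP by hand.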