Activate last won the day on August 10 2020

Activate had the most liked content!

Community Reputation

56 Excellent


About Activate


Personal Information

  • Name
    Niels Provos
  • Location
    Mountain View, CA

  1. Thank you for that suggestion and the example. I will try that. As for motion blur, I have not figured that out yet. I was going to try to add it in Nuke. As far as I can tell, Redshift does not produce motion vectors for volumes but can certainly produce motion blur directly.
  2. I am rendering a small animation in which a camera moves through clouds. Here is a first previz: I am wondering if I could get advice on my approach. I have a few hundred clouds overall, but only five are directly seen by the camera over the 240 frames of animation.
     - I wrote a small solver to identify which clouds enter the camera's view, e.g. by mapping points to NDC coordinates.
     - Non-visible clouds are created with the Cloud node at a small uniform sampling rate of 50.
     - Visible clouds use 1200 uniform sampling (video resolution will be 1920x1080).
     Each cloud takes between 1 and 2 GB of memory. At first I created an RS Proxy for each cloud, but that used too much memory. To deal with this, I took a box and mapped its normal coordinates to NDC to give me the camera frustum, which I turned into a VDB. For each frame of the animation, I intersected the frustum VDB with the VDBs of my five clouds and saved the result as a Redshift proxy; each proxy is ~250 MB per frame. The renders are also quite noisy, but OptiX is doing a reasonable job at denoising them: bottom half is denoised, top half with noise. The clouds still look somewhat unnatural, which may require some more tweaking. My question here really is whether this is the right overall approach for dealing with the memory limitations of GPUs. Thank you.
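The visibility test described above can be sketched in plain Python. This is a hypothetical, simplified stand-in for the actual solver: camera-space points, a symmetric frustum built from a single FOV, no near/far clipping, and none of the hou API — all names and defaults here are illustrative.

```python
import math

def project_to_ndc(point, fov_deg=90.0, aspect=16 / 9):
    """Map a camera-space point (camera at origin, looking down -z)
    to normalized coordinates in [-1, 1]; returns None behind the camera."""
    x, y, z = point
    if z >= 0.0:  # at or behind the camera plane
        return None
    f = 1.0 / math.tan(math.radians(fov_deg) / 2.0)
    return (f * x / (aspect * -z), f * y / -z)

def cloud_visible(points, pad=0.1, **camera):
    """True if any sample point of a cloud's bounds lands inside the
    (slightly padded) view frustum."""
    for p in points:
        ndc = project_to_ndc(p, **camera)
        if ndc and abs(ndc[0]) <= 1 + pad and abs(ndc[1]) <= 1 + pad:
            return True
    return False
```

In a real setup the inputs would be the corners of each cloud's bounding box per frame, transformed into camera space, with enough frustum padding that clouds entering the frame mid-shot are not culled early.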
  3. Genetic Algorithms in Houdini

    Here is the second part - still with music, and at a higher screen resolution. If I end up doing another one of these, I will increase the UI size in Houdini. The application was to find simulation parameters for a tree simulation. The parameters subtly depend on each other, which makes genetic algorithms reasonably suitable. The real challenge is expressing the target function. In my case, I just used the volume of the convex hull, but the total length of all wires may be another interesting one to consider.
  4. Genetic Algorithms in Houdini

    I am not sure if this is the appropriate forum. I recently wondered how hard it would be to implement genetic algorithms in SOPs. The particular example I was curious about was whether I could turn a box into a sphere by optimizing for a target function that maintains volume but minimizes surface area. Overall, this was easier than I thought and did not involve all that much code. I ended up recording a video that explains this: The underlying motivation was to get this working in PDG, but I figured I had to start with SOPs first. Let me know what you think.
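The same optimization loop can be sketched outside of SOPs. Below is a hypothetical, minimal genetic algorithm in plain Python on a toy analogue of the problem: fix a target volume for a cuboid and minimize its surface area, so selection, crossover, and mutation drive the dimensions toward a cube (the isoperimetric optimum among cuboids), just as the box relaxes toward a sphere in the video. All parameter values here are invented for illustration, not taken from the hip file.

```python
import random

def fitness(dims):
    """Surface area plus a stiff penalty for drifting off the target volume."""
    a, b, c = dims
    volume = a * b * c
    area = 2 * (a * b + b * c + a * c)
    return area + 100.0 * abs(volume - 1.0)

def evolve(pop_size=40, generations=200, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(0.2, 3.0) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 4]          # selection: keep the best quarter
        children = []
        while len(elite) + len(children) < pop_size:
            p1, p2 = rng.sample(elite, 2)
            child = [rng.choice(pair) for pair in zip(p1, p2)]  # uniform crossover
            if rng.random() < 0.3:                              # point mutation
                i = rng.randrange(3)
                child[i] = max(0.05, child[i] + rng.gauss(0.0, 0.1))
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

best = evolve()
```

Elitism (carrying the best quarter over unchanged) guarantees the best fitness never regresses between generations, which matters when the mutation step is noisy.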
  5. Melting Letters Intro Sequence

    Here is the final version including the complete video - not that most of you would care. This was Houdini/Redshift, Substance, Nuke, and DaVinci Resolve for grading in HDR10 - yay for EXR having so much information!
  6. Here is the intro sequence for a new video I am working on: a simple FLIP simulation with some smoke and whitewater sparks, rendered in Redshift. Lens flares, DoF, and glow added in Nuke. The heat distortion from the smoke is the part I am least happy with: I took the volume tint AOV and created the difference from one frame to the next to drive STMap distortion. Would love to hear a better way of doing that. Let me know what you think.
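For what it's worth, the frame-difference idea can be prototyped outside Nuke. This is a hypothetical sketch in plain Python (scalar tint values, and the function name and `strength` parameter are invented for illustration) of building an STMap-style UV grid where each pixel's lookup is offset by the change in the AOV between frames:

```python
def distortion_uvs(prev_aov, cur_aov, strength=0.05):
    """Per-pixel UV lookup coordinates nudged by the frame-to-frame AOV
    difference. prev_aov/cur_aov are 2D lists of scalar tint values;
    returns a 2D grid of (u, v) tuples."""
    h, w = len(cur_aov), len(cur_aov[0])
    out = []
    for j in range(h):
        row = []
        for i in range(w):
            diff = cur_aov[j][i] - prev_aov[j][i]
            u = i / max(w - 1, 1) + strength * diff  # shift lookup by the change
            v = j / max(h - 1, 1) + strength * diff
            row.append((u, v))
        out.append(row)
    return out
```

Pixels where the tint AOV changed between frames get their lookup UVs nudged, which STMap reads as local distortion; driving the offset with the gradient of the difference, rather than the raw difference, might give a more directional warp.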
  7. Would anyone here mind posting an example file that produces custom AOVs for Redshift 2.6.38 and Houdini 17.5? Neither the default shader nor Store Color to AOV (nor anything else) is working for me. A sample file is attached. broken_aovs.hiplc
  8. Tileable Noise

    That makes a lot of sense. Here is the same thing but converting one dimension to polar coordinates. It makes for a much better result. The tiled result looks nicer as well. noise_tile_v2.hiplc
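The polar-coordinate trick generalizes: to make 1D noise periodic, walk a circle through a 2D noise field so the sampling path closes on itself. Here is a self-contained sketch, with a simple hash-based value noise standing in for Houdini's noise (the hash constants and the default radius are arbitrary choices for illustration):

```python
import math

def _lattice(ix, iy):
    """Deterministic pseudo-random value in [0, 1] from integer lattice coords."""
    h = (ix * 374761393 + iy * 668265263) & 0xFFFFFFFF
    h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
    return (h ^ (h >> 16)) / 0xFFFFFFFF

def _smooth(t):
    return t * t * (3.0 - 2.0 * t)  # smoothstep interpolation weight

def value_noise(x, y):
    """Bilinear smoothstep interpolation of hashed lattice values."""
    ix, iy = math.floor(x), math.floor(y)
    sx, sy = _smooth(x - ix), _smooth(y - iy)
    n00, n10 = _lattice(ix, iy), _lattice(ix + 1, iy)
    n01, n11 = _lattice(ix, iy + 1), _lattice(ix + 1, iy + 1)
    top = n00 + sx * (n10 - n00)
    bottom = n01 + sx * (n11 - n01)
    return top + sy * (bottom - top)

def tileable_noise(t, period=1.0, radius=1.5):
    """1D noise that repeats every `period`: walk a circle in 2D noise space."""
    angle = 2.0 * math.pi * t / period
    return value_noise(radius * math.cos(angle), radius * math.sin(angle))
```

By construction tileable_noise(t) equals tileable_noise(t + period), and the circle's radius controls the effective frequency. Making a 2D tile periodic in both axes needs the same trick twice, i.e. sampling a 4D noise field on a torus.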
  9. Tileable Noise

    @rohandalvi was asking on Twitter about tiling noise in Houdini. It seemed like this should be easy to do. That should have been a warning sign: doing it right is much harder. But here is a super quick blur-based noise tiler: since blurring is feature-agnostic, it does not make for the best tiles, but it may be good enough for some cases. Hip file attached. noise_tile.hiplc
  10. Help with Leaves on a Tree

    Thanks for the feedback. The trees are not based on L-systems; the growing algorithm is based on papers published by Runions and Prusinkiewicz. The orientation and position of the leaves is based on phyllotaxis, which is really hard to see in the image. Here is a close-up: It's possible that I am anchoring too much on where the tree simulation places the leaves and should instead just go with some scatter-based approach to placing them. Part of determining in which direction the buds will grow into branches is based on space colonization and a volume that circumscribes the outer extent of the branches. The volume gets filled with marker points, and space colonization selects the buds nearest to each marker. Houdini is still pretty much a learning experience for me. Graphics is not really part of my background, but it's a good creative outlet. Thank you.
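For reference, the phyllotaxis mentioned above reduces to rotating each successive leaf by the golden angle (~137.5°) around the branch axis. A minimal sketch, assuming a straight vertical stem with an invented spacing rather than the actual tree attributes:

```python
import math

GOLDEN_ANGLE = math.pi * (3.0 - math.sqrt(5.0))  # ~2.4 rad, ~137.5 degrees

def leaf_frames(count, spacing=0.1):
    """Height and outward direction for `count` leaves along a vertical stem,
    each rotated from the previous one by the golden angle."""
    frames = []
    for i in range(count):
        angle = i * GOLDEN_ANGLE
        frames.append((i * spacing,        # height along the branch
                       math.cos(angle),    # outward direction, x component
                       math.sin(angle)))   # outward direction, z component
    return frames
```

In the actual tree setup, the angle would accumulate along each branch's point sequence and combine with the branch's local frame instead of a fixed vertical axis.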
  11. Help with Leaves on a Tree

    Still trying to figure out leaves; some progress.
  12. Using python for convert file sequence path

    Do you have examples where this does not work? The regular expression replaces occurrences of $F4, etc. with *. The * is a wildcard for globbing, i.e. matching files in the filesystem. expandString does all the Houdini magic to turn the expression into a complete path. If your path just contains file.$F.png, the regular expression above will not match, and expandString will then substitute the frame number, i.e. you only get the first frame. Try r'\$F\d?' instead.
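The approach described above, as a small self-contained sketch (the function name is invented, and this variant uses \d* rather than \d? so multi-digit padding like $F10 is also caught):

```python
import re

def sequence_glob(path):
    """Turn a Houdini-style sequence path into a glob pattern,
    e.g. 'render.$F4.exr' -> 'render.*.exr' (plain $F works too)."""
    return re.sub(r'\$F\d*', '*', path)
```

The resulting pattern can be handed to glob.glob to list every frame on disk, instead of letting expandString bake the current frame number into the path.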
  13. Help with Leaves on a Tree

    I have not made much progress on the leaves but have started shading the trees. Not really what I asked about originally, but here is the current look - most of this will not be visible once the leaves are attached. The tree bark is from a photo processed with Bitmap2Material to get the normal and displacement maps. The main challenge was manipulating the UVs from PolyWire, e.g. scaling the uv.y dimension by the length of the branches. If anyone here has feedback, I would appreciate it.
  14. Help with Leaves on a Tree

    Thank you. That is providing some good food for thought. I had previously only attached leaves where buds had not grown into branches. I like your attribute wrangle to bias the orientation of the leaves. I will experiment some more.
  15. Help with Leaves on a Tree

    Hi everyone, I am wondering if somebody here would be gracious enough to help me with putting leaves on a tree. Here is the context: I grew myself some trees by implementing some standard algorithms, and am now looking to put leaves on them. I modeled a super simple leaf and instanced it: I don't like the look of the leaves, and I am not happy with their orientation or really the overall appearance. The process of growing trees gives me a lot of attributes to play with, e.g. the direction of the buds, the direction of tropism, the growth direction, the step in the iteration, the degree of the branch, how much light a bud sees, etc. If somebody here with experience would be willing to put me on the right path, I would appreciate it. Hip + geo attached. Thank you, Niels. tree_example.zip