Fran

Personal Information

  • Name
    Francois Duchesneau
  • Location
    New York
  1. HDA or Gallery?

    Better late than never. If my memory is right, the gallery was added after OTLs. They serve different purposes from my point of view, although I would probably use the gallery a lot more if we didn't have OTLs (HDAs).

    HDA

    1- I definitely use HDAs for creating nodes that are multipurpose and low level. These are similar to the kinds of nodes you generally find in Houdini, used as building blocks for different kinds of effects.

    2- I also use HDAs wherever I need to duplicate the same logic on multiple nodes within the same hip. This saves me from having to change a parameter in multiple locations for similar parts of a setup. Very often this HDA is embedded in the hip if I don't plan to use it in other shots, or I let it mature on my look-dev shot and make it external when I'm ready to apply the setup to other shots (a HOM sketch of that workflow is below).

    3- I make HDAs that serve as parts of an FX setup, for a specific sequence for example. I may have several of those little task assets that together, along with other standard Houdini nodes, make up my whole setup. One of those assets might be an emitter, or the whole portion that runs the simulation. The reason I fragment it this way is that when you build one huge FX asset, everybody ends up unlocking it to fine-tune it for their own shot. Designing this way also takes less time, because you don't have to think about every variation of the effect the setup has to cover. What you want is a bunch of smaller assets that no one will need to unlock.

    Gallery

    So far I've used galleries to keep a library of example FX setups. We can then use those as starting points when we need a similar effect later. They also persist better over time in a studio, since we put them all in the same folder and they're all visible at once in the gallery window. I'd be happy to hear about more uses for galleries, because I feel they're underestimated. Thanks Francois
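    A rough HOM sketch of that embed-then-externalize workflow. The node path, asset name, and file locations are made up, and save_as_embedded is the flag recent builds use to write the definition into the hip file rather than an external library:

        import hou

        # Hypothetical subnet holding the shared logic.
        subnet = hou.node("/obj/my_setup")

        # Turn it into an HDA whose definition lives inside the hip file.
        asset = subnet.createDigitalAsset(
            name="fx_emitter",
            description="Shared emitter logic",
            min_num_inputs=1,
            save_as_embedded=True,
        )

        # Once the setup is mature, copy the definition out to a shared
        # library so other shots can pick it up.
        path = hou.expandString("$HIP/otls/fx_emitter.hda")
        asset.type().definition().copyToHDAFile(path)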
  2. Geometry saving error

    Sorry for the late reply, but for the record, I got the same error. Even though I wasn't missing geometry on the frame I was rendering, I had a Trail SOP somewhere that referred to one frame before the start of my file sequence in order to calculate velocity. I fixed it by adding a Time Shift SOP to clamp the File SOP's frame (a sketch is below). Setting "Missing Frame" to "No Geometry" didn't help, probably because the Trail SOP tried to calculate velocity between a bunch of points and no points at all. You were right about issues reading a file though, Carlos. Good call. Francois
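    A minimal HOM sketch of that fix. The node paths and the frame range (1001-1100) are hypothetical; the idea is just to pin the Time Shift's evaluated frame inside the range the sequence covers:

        import hou

        # File SOP that reads the sequence (path is made up).
        file_sop = hou.node("/obj/geo1/file1")

        # Insert a Time Shift after it and clamp the frame it evaluates,
        # so looking back one frame before 1001 still returns frame
        # 1001's geometry instead of an empty read.
        timeshift = file_sop.createOutputNode("timeshift", "clamp_range")
        timeshift.parm("frame").setExpression("clamp($FF, 1001, 1100)")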
  3. Soap bubble generation

    I had a look inside the Thin Film VOP, because varying the input P didn't change the environment map the way I expected. It turns out this P input only affects the oily pattern. If you don't provide an environment map and you apply an offset to P, you'll see a difference between with and without the offset. If you want to do the same to the environment map, you can dive inside, learn from what they're doing, and hack it. What you want to do is go inside the "do_reflections_if_level_zero" node, rotate the global N, normalize it, and plug it into nN; you'll see the map rotating. (The rotation step is just the axis-angle math sketched below.)
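    For reference, a plain-Python sketch of what that rotate-then-normalize step computes (Rodrigues' rotation formula; the axis and angle are arbitrary examples, and in the VOP you would wire this up with rotate and normalize nodes instead):

        import math

        def rotate(v, axis, angle):
            # Rodrigues' formula: rotate v around a unit-length axis.
            c, s = math.cos(angle), math.sin(angle)
            d = sum(a * b for a, b in zip(axis, v))
            cr = (axis[1] * v[2] - axis[2] * v[1],
                  axis[2] * v[0] - axis[0] * v[2],
                  axis[0] * v[1] - axis[1] * v[0])
            return tuple(v[i] * c + cr[i] * s + axis[i] * d * (1 - c)
                         for i in range(3))

        def normalize(v):
            l = math.sqrt(sum(c * c for c in v)) or 1.0
            return tuple(c / l for c in v)

        # A quarter turn of N around Y spins the environment lookup with it.
        N = (0.0, 0.0, 1.0)
        print(normalize(rotate(N, (0.0, 1.0, 0.0), math.pi / 2)))  # ~(1, 0, 0)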
  4. Soap bubble generation

    I didn't know about the Thin Film VOP. Thanks for the info, I might need that soon, although I usually build that kind of thing with a dot product and a few other nodes for better control (the math is sketched below). PBR uses a special data type represented as a yellow "f"; take a look at a Lighting Model VOP for example. A lot of the nodes were rewritten to support that relatively new data type when the PBR render engine was introduced, but not all of them. You'll have to stick to ray tracing in this case. Francois
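    For what it's worth, the dot-product approach boils down to a facing ratio plus a remap; a tiny Python illustration of the math (the exponent stands in for the "few other nodes" of control, and the sample vectors are made up):

        import math

        def normalize(v):
            l = math.sqrt(sum(c * c for c in v)) or 1.0
            return tuple(c / l for c in v)

        def facing_ratio(N, I):
            # 1 where the surface faces the eye, 0 at grazing angles.
            # I points from the eye toward the surface, so negate the dot.
            n, i = normalize(N), normalize(I)
            return max(0.0, -sum(a * b for a, b in zip(n, i)))

        ratio = facing_ratio((0.0, 0.0, 1.0), (0.3, 0.0, -1.0))
        rim = (1.0 - ratio) ** 4  # remap for a thin-film-ish rim effect
        print(ratio, rim)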
  5. Of course that should be part of a standard Houdini distribution, but I think clusterThis isn't missing much for geometry instancing. With the following I think it would work great:

    - Use of an up vector combined with N, or an orient attribute, or rot.
    - A geometry file name per point.
    - More control over motion blur, i.e. centered, backward, and forward motion blur.

    I haven't tested whether instanced objects cast motion-blurred shadows either, but I think that's about it. François
  6. I'm trying motion blur too, and I've got something wrong. If you look at the attached file and render with the switch inside geo1 set to 0 and then to 1, you'll see what I mean. It's as if the velocity attribute gets rotated by the N attribute. Also, maybe I missed this somewhere, but is it possible to instance a different object per point, i.e. to set the geo_file per instance point? Thanks Francois clusterThis_motionBlurTest_v01.hip
  7. I've started making some tests with clusterThis. I want to use it especially for instancing geometry. I see I can use the N vector to orient the geometry, but it doesn't seem to make use of the up attribute, nor rot, for full control over orientation. Is there any other way to do it? (The N-plus-up frame I have in mind is sketched below.) Francois
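    For anyone reading later, this is the frame construction I mean; a Python sketch of the standard N-plus-up convention (not clusterThis code, and the sample vectors are arbitrary):

        import math

        def normalize(v):
            l = math.sqrt(sum(c * c for c in v)) or 1.0
            return tuple(c / l for c in v)

        def cross(a, b):
            return (a[1] * b[2] - a[2] * b[1],
                    a[2] * b[0] - a[0] * b[2],
                    a[0] * b[1] - a[1] * b[0])

        def frame_from_n_up(N, up):
            # +Z along N, +Y pulled as close to `up` as possible -- the
            # usual Houdini instancing convention. Degenerates if up || N.
            z = normalize(N)
            x = normalize(cross(up, z))
            y = cross(z, x)
            return x, y, z  # rows of the instance's 3x3 rotation matrix

        print(frame_from_n_up((0.0, 1.0, 0.0), (0.0, 0.0, -1.0)))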
  8. I've finally managed to compile it for Windows. Here's the DLL for 11.0.581 and the Makefile I used to compile it. For some reason the forum doesn't allow me to upload files with certain extensions, or with no extension at all, so you'll have to rename Makefile.txt to Makefile and VRAY_clusterThis.txt to VRAY_clusterThis.dll. The hardest part is still getting hold of the no-longer-available Visual Studio 2005 compiler and adding the two service packs that let it compile 64-bit. Thanks again, Mark. Francois Makefile.txt VRAY_clusterThis.txt
  9. My Windows compilation nightmares are back. Even though I was able to compile under H10, I can't do it under H11. Has anyone managed it? If so, is there a particular procedure? I tried with hcustom under the Houdini shell and with Cygwin, and neither works. I haven't tried with a Makefile, because in my previous compilation tests on H10 I wasn't able to use one, so I figured hcustom is still the best approach. Mark told me the following variables were not set properly, but I set them and it still doesn't work. When I use Cygwin and source houdini_setup_bash, they are set automatically.

        setenv HOUDINI_MAJOR_RELEASE 11
        setenv HOUDINI_MINOR_RELEASE 0

    Thanks guys! François
  10. Tx Toolkit

    For those who haven't seen it on the SideFX forum: I've created around 30 Houdini nodes that I'm giving away there. http://www.sidefx.com/index.php?option=com_forum&Itemid=172&page=viewtopic&t=21794 François
  11. Any ideas how to achieve this?

    All the blur and the bright light clamping to white should be done as a 2D post-process in COPs. However, you can generate a simple layer in 3D to be used in comp: just create a small sphere attached to your ball and apply a shader with an animated noise pattern to it. The lighting can be constant. You can also emit some particles rendered as points, with color intensity varying over time, to make it more alive. Those particles are only needed if you want a trail. Regarding the smoke, you can use sprites. François
  12. Any ideas how to achieve this?

    Do you have a reference to give a better idea of what you're talking about? Is it like a smoke trail? François
  13. Writing .Sim Cache Benefits

    A couple of months ago I changed my pipeline to work with a local cache instead of accessing it over the network, and I've noticed a great improvement in my workflow, even for particle simulations.
  14. I wanted to create a volume with an animated displacement and noticed some density appearing from nowhere. I dug further, and here's what I found: volume displacement doesn't actually displace the volume; instead, it offsets the position at which the density is sampled. Because density stored at x is looked up from render points at x minus the offset, the feature appears shifted the opposite way (a one-dimensional toy of this is below). This scene demonstrates what I mean. If you render one frame of the Mantra node "const_disp", you'll see the volume displaced -0.5 in X even though I told the shader to displace +0.5 in X. If you render the sequence "noise_disp" (or look at the attached video), you'll see how the displacement evolves over time as the noise amplitude increases. It's not displacing. Is this normal? François volume_disp_v001.hip volume_disp_noise_v001.mov
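    A one-dimensional Python toy of that sign flip (the spike position, offset, and sampling grid are all made up):

        # "Stored" density: a spike at x = 2.0.
        def density(x):
            return 1.0 if abs(x - 2.0) < 0.25 else 0.0

        offset = 0.5  # the shader says "displace +0.5 in X"

        # The renderer samples density at (x + offset), so the spike is
        # seen where x + offset == 2.0, i.e. at x == 1.5: shifted -0.5.
        for i in range(9):
            x = i * 0.5
            print(x, density(x + offset))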
  15. Have a look at the test file I created. Output the "instance" and "points" Mantra nodes and compare the IFDs. Each instance has to write its own transformation matrix and every attribute, shader assignment, etc.; that part alone takes 192 lines. Now look at the points version and you'll see it fits in only 30 lines. The reason is that each attribute name is defined once at the beginning of the block, and the values are then written one after the other. There even seems to be some sort of clever string handling: every distinct string is defined once, and each point then refers to it by index (a toy illustration is below). That's my explanation of why instancing is slow, though I think SOHO adds another level of overhead just because it's Python. That's why I'm looking forward to testing the clusterThis tool. François instance_soho_v001.hip
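    A toy Python version of that string-table idea (the per-point data is made up; this is just the indexing trick, not the actual IFD format):

        # Per-point string attribute with lots of repetition.
        shaders = ["fire_shader", "smoke_shader", "fire_shader",
                   "fire_shader", "smoke_shader"]

        table = []   # each distinct string, stored once
        index = {}   # string -> its position in the table
        refs = []    # one small integer per point instead of a string
        for s in shaders:
            if s not in index:
                index[s] = len(table)
                table.append(s)
            refs.append(index[s])

        print(table)  # ['fire_shader', 'smoke_shader']
        print(refs)   # [0, 1, 0, 0, 1]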