
Everything posted by pencha

  1. Lil' trick, just in case it can help with manual ungrouping: keep in mind that wildcards in an Object Merge can do wonders for organizing multiple subnets (object merging obj/yourSubnet*/*/* will automagically bring all of those geos into one node).
  2. accessing voxel neighbours

    Just dropping a +1 for this method in case you haven't tried it yet! I used something similar a while ago with good results.
  3. merging multiple subnet geometries

    Or you could just use an Object Merge with wildcards (reference "subnet1/*/*").
  4. I've saved a pose to a Channel CHOP from a channel group (left click, Motion Effects/Create Pose), animated the character, and now I can't find a way to create a keyframe from that saved pose at another frame. Setting the export flag on the Channel CHOP works fine, showing the desired pose while the flag is on, but I can't find a way to copy the values as keyframes so that the pose becomes keyed and stays once I turn the export flag off. The Copy to Export button seems meant to do exactly that, but I can't get it to work on channels that are already animated, even if I set a key on all the scoped channels at the desired frame before clicking Copy to Export. Any help appreciated. Thanks!
  5. create keys from a saved pose

    So THAT was messing up my channel groups, haha. It still works as you said. Thanks!
  6. create keys from a saved pose

    The "moved channels" are the channels created on the Channel CHOP? If so, I´d lose the ability to modify them using handles? Thank you
  7. create keys from a saved pose

    Thanks edward! Seems like I'll have to delve into pythonland for this. May I ask why it's not implemented by default? Ain't it nice being able to have your keys "baked" back so you can keep keying by hand? (e.g., "baking" a walk cycle from CHOPs, to be able to make adjustments with the pose tool.) Am I missing something? As you may see, I'm totally new to character animation workflows in H. Thanks again
  8. I want to render lines with a width of, say, 2 px, no matter how close or far they are from the camera, using Mantra (not Wren) and a non-orthographic camera. I assume the way to go would be an expression that modifies the width attribute of every point (in between the points I don't really care about "correctness") based on the distance to the camera and its focal length and aperture. Has anyone out there already got something like this working, or a better idea on how to achieve it? Thanks!
  9. How to morph particle fluid into geometry??

    In the same direction, you could try applying a custom "point blend" type VOP, blending based on position or life of the particles. Edit: maybe you'll find this useful (you'll have to hack it, though): http://www.vfxtoolbox.com/users/Guillaume+Fradin/operators/Point+Blend
  10. Thanks again rdg, thanks Jacob and gracias Mario! I just needed the clarification that the relationship in this case is linear; problem solved. I was studying light attenuation in real-world photography and then switched to this, overcomplicating things (1/distance squared is the attenuation factor you need to compensate for when using real-world lights). PS: Never thought about solving this without VOPs! Nice file. Edit: In case it's useful for someone, if zooming is needed, dividing Mario's expression result by the focal length seems to work OK.
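    In case a concrete sketch helps, the linear relationship could be written roughly like this in wrangle-style VEX; the camera path, pixel-width and resolution parameters below are illustrative, not from the thread:

      string cam      = chs("camera");         // e.g. /obj/cam1
      float  pixels   = chf("pixel_width");    // desired on-screen width in pixels
      float  resx     = chf("resx");           // horizontal render resolution
      float  focal    = ch(cam + "/focal");    // camera focal length
      float  aperture = ch(cam + "/aperture"); // camera aperture
      // distance from this point to the camera position
      vector campos = set(0, 0, 0) * optransform(cam);
      float  z = distance(@P, campos);
      // the world-space width that projects to a constant pixel width scales
      // linearly with distance, and dividing by focal length handles zooming
      f@width = pixels * (aperture / resx) * (z / focal);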
  11. Thanks rdg, that's what I'm doing right now, but I don't know how to get the correct factor. It would be hard to cheat as I have lots of lines with VERY different z values (and so the perspective wildly affects the resulting width in the render).
  12. Smoke effect

    I've just posted an example of how to achieve volume blending: http://forums.odforc...ume-blendshape/ Good luck!
  13. LED Display

    I did that a year ago! Sadly I can't find the file now. I built a matrix of spheres, and with VOP SOPs I transferred the R, G and B values of a first proxy video separately to alternating sphere primitives. A second proxy video drove the effects on the LEDs (R values would trigger particle effects, G values would trigger DOPs, and B values would extrude the screen, or something like that). To make bigger chunks fall I'd just use an expand filter on the G channel of the second proxy video (the one controlling the effects). Of course, the LEDs would start to fall in a bigger chunk, but they wouldn't fall together till the end...
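    A rough wrangle-style sketch of that per-sphere channel transfer, purely for illustration (the video path parameter and the uv attribute on the spheres are assumptions, not the original setup):

      // run over primitives: each sphere picks up one channel of the proxy
      // video frame, cycling R, G, B across alternating primitives
      string frame = chs("proxy_video");            // e.g. one frame of the 1st proxy video
      vector rgb   = colormap(frame, @uv.x, @uv.y); // sample the video at the sphere's uv
      int    chan  = @primnum % 3;                  // 0 = R, 1 = G, 2 = B
      @Cd = set(0, 0, 0);
      setcomp(@Cd, getcomp(rgb, chan), chan);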
  14. Volume Blendshape

    I'm trying to implement volume blending without DOPs, but I could use some help. Here's what I've got so far; it's the core idea. It would also need some mixing and compensations so the outside of the volume stays at 0 density, but I'm actually having problems with the lookup of a point cloud from a Volume VOP. It's not working as intended and not even refreshing properly. I've already tried caching the point cloud but it didn't help. Any suggestions? Thanks! EDIT: Now working, see next post and file.
  15. Volume Blendshape

    Try this version instead! I changed pcimport to pcfilter; I'm not sure it's working as intended (changing the number of points to filter doesn't seem to have an effect), but it's way less jittery and not buggy on start-up. volblend2.hipnc
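    For reference, the point-cloud part of the idea in wrangle-style VEX (a sketch only; the input, attribute and parameter names below are placeholders):

      // open a point cloud on the second input and take a filtered lookup
      // of a "density" point attribute around the current voxel position
      int   handle = pcopen(1, "P", @P, chf("radius"), chi("maxpoints"));
      float other  = pcfilter(handle, "density");
      @density = lerp(@density, other, chf("blend"));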
  16. Folding (or unfolding) character?

    Some random ideas to avoid simulations: you could do some "space-blending" in a VOP SOP, something like a couple of clever ramps used to modify the position of the points according to the distance to a line/point/point cloud. Another one, VOP-SOPsy too, would be a custom blendshape to drive the folds, with a folded and a non-folded model. Last one: record the fold trajectories in CHOPs and then drive the folds with attributes. Of course, combine these three techniques and you've got a lot to play with.
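    A minimal sketch of the first idea in wrangle-style VEX, assuming the folded model is wired into a second input with matching point numbers (the center, radius and ramp parameters are made up):

      vector folded = point(1, "P", @ptnum);             // matching point on the folded model
      float  d      = distance(@P, chv("fold_center"));  // distance to the fold driver
      float  amount = chramp("fold_ramp", fit(d, 0, chf("radius"), 0, 1));
      @P = lerp(@P, folded, amount);                      // blend toward the folded pose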
  17. Volume Blendshape

    This tool had been sitting in the back of my mind; I just re-opened it and got it to work. It was just a "syntax error" in the VOP network. It's a blendshape, but instead of blending geometry it "blends voxels" (fog volumes or SDFs). Next steps are packing it as an OTL, with point filtering, documentation and some polish, but I don't know when I'll have the time; in the meanwhile you can try it as-is, just play the timeline to see the blend. There seems to be a bug and sometimes it doesn't update some voxels (sometimes all of them) on the first start; if you are not seeing the voxel teapot, play the other null and then go back to the OUT. C&C welcome, let me know if you improve it. volblend.hipnc
  18. storing points to file

    Try this one I made. Keep in mind that creating the point groups in VEX can be multithreaded, but creating them with expressions can't. To export the points you should look at the Geometry node in the ROP context. Hope it helps. Pencha
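    As a tiny example of the VEX group creation (the threshold parameter and group name are made up):

      // put every point above a height threshold into a group; a later node
      // (or the Geometry ROP setup) can then work with just that group
      if (@P.y > chf("threshold"))
          setpointgroup(0, "export_me", @ptnum, 1);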
  19. Create edgy smoke

    If using the Pyro shader, you could create a density modifier and manipulate the curve (maybe I'm just saying the same as Overload; not sure if the curve was called contour). You could also do it with a Volume VOP and a ramp or a function, depending on what you need.
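    A minimal volume-wrangle version of that ramp idea (the max_density parameter and the ramp itself are illustrative):

      // remap density through a steep ramp to push the smoke toward a
      // harder, edgier falloff
      float maxd = chf("max_density");
      float d    = fit(@density, 0, maxd, 0, 1);
      @density   = chramp("density_ramp", d) * maxd;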
  20. VFXtoolbox

    Finally got the time to try to implement the volume blend, but one of the main ideas isn't working yet; maybe it's just a mistake in the point cloud lookup. Any suggestions are welcome!
  21. Volumetric Clouds Popping

    Have you tried overriding the bounds? Maybe the popping is caused by the change in resolution of the volume.
  22. VFXtoolbox

    Funny you ask me this! I was fascinated by your British Airways shots when I was developing volume tools last year. I ended up using fog volume / SDF copying to get the best of both worlds, a couple of VOPs and the famous Sandman level-set mixing equation on top of it, but I couldn't get actual blending or deforming (you can see the tools on my reel). A year later, as usual with Houdini (whenever you think you've seen it all already...), I discovered a couple of magical nodes I hadn't seen before: the Volume Sample VOP, Volume Index To Pos VOP and Volume Pos To Index VOP. With these it's easy to accomplish deformations of a volume (by modifying the lookup position of the sample) and more elaborate mixing, based on any attribute you like per voxel (it's like blendshapes vs. the point blend). I think with these nodes and point clouds you could get a volume blendshape that's usable, but as I haven't got the time to try it I'd prefer not to talk much nonsense here; I'll be glad to PM you the main ideas and submit the tool if I get it to work.
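    In snippet form, the core of that lookup trick could look something like this (a sketch only: the second fog volume is assumed wired into input 1, and the offset/blend parameters are placeholders):

      // deform the lookup by sampling the second volume at a shifted
      // position, then mix the two densities per voxel
      float other = volumesample(1, "density", @P + chv("offset"));
      @density = lerp(@density, other, chf("blend"));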
  23. VFXtoolbox

    Very nice! Instant favourite. I have been toying around with per-point custom blending myself; it's a nice and useful trick. Next goal: volume blendshapes. I had to tackle a shot similar to the one you did for British Airways but never got to finish the tool; I'll submit it there when it's done.
  24. why doesn't my VEX SOP node work?

    It's working fine: it returns the Y position based on the sin function. But I think you want to combine both sin waves; to do that you'd have to modify the VEX so that it adds the result of the function to the actual position. To get this behaviour, just add an Add node, pipe into it P from the global variables and the result of floattovec1, and wire the output of the Add to P on the output.
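    The same fix expressed as a VEX snippet (the amplitude/frequency parameters are placeholders, not the original network):

      // add the wave on top of the existing position instead of replacing it,
      // so both contributions combine
      float wave = chf("amp") * sin(@P.x * chf("freq") + @Time * chf("speed"));
      @P.y += wave;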