
About pencha

  • Community Reputation: 1 Neutral
  • Location: Buenos Aires
  1. Lil' trick, just in case it helps with manual ungrouping: bear in mind that wildcards in an Object Merge can do wonders for organizing multiple subnets (object-merging obj/yourSubnet*/*/* will automagically bring all of those geos into one node).
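The kind of wildcard pattern above can be previewed outside Houdini with Python's fnmatch module. This is only a rough stand-in (Houdini's own matcher treats / as a path separator, while fnmatch's * happily crosses slashes), and the node paths are made up for illustration:

```python
from fnmatch import fnmatch

# Hypothetical node paths inside some subnets (illustrative, not a real scene)
paths = [
    "obj/yourSubnet1/geo_a/out",
    "obj/yourSubnet1/geo_b/out",
    "obj/yourSubnet2/geo_c/out",
    "obj/other/geo_d/out",
]

# The same pattern style an Object Merge's "Object 1" parameter accepts
pattern = "obj/yourSubnet*/*/*"
matched = [p for p in paths if fnmatch(p, pattern)]
print(matched)  # everything under the yourSubnet* subnets, nothing else
```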
  2. accessing voxel neighbours

    Just dropping a +1 for this method in case you haven't tried it yet! I used something similar a while ago, with good results.
  3. merging multiple subnet geometries

    Or you could just use an Object Merge with wildcards (reference "subnet1/*/*").
  4. create keys from a saved pose

    So THAT was messing up my channel groups, haha. It still works as you said. Thanks!
  5. create keys from a saved pose

    The "moved channels" are the channels created on the Channel CHOP? If so, would I lose the ability to modify them using handles? Thank you!
  6. create keys from a saved pose

    Thanks edward! Seems like I'll have to delve into Python-land for this. May I ask why it's not implemented by default? Ain't it nice being able to have your keys "baked" back so you can keep keying by hand? (e.g., "baking" a walk cycle from CHOPs to be able to make adjustments with the pose tool.) Am I missing something? As you may see, I'm totally new to character animation workflows in H. Thanks again!
  7. I've saved a pose to a Channel CHOP from a channel group (left click → Motion Effects → Create Pose), animated the character, and now I can't find a way to create a keyframe from that saved pose at another frame. Setting the export flag on the Channel CHOP works fine, showing the desired pose while the flag is on, but I can't find a way to copy the values back as keyframes so that the pose becomes keyed and stays once I turn the export flag off. The Copy to Export button seems meant to do exactly that, but I can't get it to work on channels that are already animated, even if I set a key on all the scoped channels at the desired frame before clicking Copy to Export. Any help appreciated. Thanks!
  8. How to morph particle fluid into geometry??

    In the same direction, you could try applying a custom "point blend" type VOP, blending based on position or life of the particles. Edit: maybe you'll find this useful (you'll have to hack it, though): http://www.vfxtoolbox.com/users/Guillaume+Fradin/operators/Point+Blend
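Not Guillaume's asset, just a hedged sketch of the "point blend" idea in plain Python: move each particle toward a target position using its normalized age as the blend factor (a VOP would compute the same thing per point):

```python
def point_blend(p_src, p_tgt, life, lifespan):
    """Blend a particle position toward a target based on normalized age.

    t = life / lifespan, clamped to [0, 1]; a newborn particle stays at its
    fluid position, a dying one lands on the target geometry.
    """
    t = max(0.0, min(1.0, life / lifespan))
    return tuple(a + (b - a) * t for a, b in zip(p_src, p_tgt))

# Halfway through its life, a particle sits halfway to its target:
print(point_blend((0.0, 0.0, 0.0), (2.0, 4.0, 6.0), 0.5, 1.0))  # (1.0, 2.0, 3.0)
```

Blending on position instead of life just means computing t from, say, distance along an axis rather than from age.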
  9. Thanks again rdg, thanks Jacob, and thanks Mario! I just needed the clarification that the relationship in this case was linear; problem solved. I was studying light attenuation in real-world photography and then switched to this, overcomplicating things (1/distance² is the attenuation factor you need to compensate for when using real-world lights). PS: never thought about solving this without VOPs! Nice file. Edit: in case it's useful for someone, if zooming is needed, dividing Mario's expression result by the focal length seems to work OK.
  10. Thanks rdg, that's what I'm doing right now, but I don't know how to get the correct factor. It would be hard to cheat, as I have lots of lines with VERY different z values (so the perspective wildly affects the resulting width in the render).
  11. I want to render lines with a width of, say, 2 px, no matter how close or far they are from the camera, using Mantra (not Wren) and a non-orthographic camera. I assume the way to go would be an expression that modifies the width attribute of every point (in between the points I don't really care about "correctness") based on the distance, lens, and aperture of the camera. Has anyone out there already got something like this working, or a better idea on how to achieve it? Thanks!
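For reference, here is a guess at the geometry behind the linear relationship this thread settles on (not Mario's actual expression). With a pinhole camera, the footprint of one pixel at distance z scales as z * aperture / (focal * resx), so a constant on-screen width needs the world-space width to grow linearly with distance and shrink with focal length. All names are illustrative, not Houdini parameter names:

```python
def world_width_for_pixels(n_px, z_dist, focal, aperture, resx):
    """World-space line width that covers n_px pixels at camera distance z_dist.

    Assumes aperture is the horizontal film aperture in the same units as
    focal, and resx is the horizontal render resolution.
    """
    return n_px * z_dist * aperture / (focal * resx)

# Twice as far away needs twice the world width for the same 2 px on screen:
near = world_width_for_pixels(2, 5.0, 50.0, 41.4214, 1920)
far = world_width_for_pixels(2, 10.0, 50.0, 41.4214, 1920)
# ...and doubling the focal length (zooming in) halves the required width:
zoomed = world_width_for_pixels(2, 5.0, 100.0, 41.4214, 1920)
```

The division by focal matches the thread's note that dividing by focal length handles zooming.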
  12. Smoke effect

    I've just posted an example of how to achieve volume blending: http://forums.odforc...ume-blendshape/ Good luck!
  13. LED Display

    I did that a year ago! Sadly I can't find the file now. I built a matrix of spheres, and with VOP SOPs I transferred the R, G and B values of a 1st proxy video separately onto alternating sphere primitives. A 2nd proxy video drove the effects on the LEDs (R values would trigger particle effects, G values would trigger DOPs, and B values would extrude the screen, or something like that). To make bigger chunks fall, I'd just use an expand filter on the G channel of the 2nd proxy video (the one controlling the effects). Of course, the LEDs would start to fall in a bigger chunk, but they wouldn't fall together till the end...
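The expand-filter trick can be sketched as a simple 1-D dilate: each sample takes the maximum over its neighborhood, so bright regions of the G channel grow and more LEDs get triggered as one chunk. This is a toy stand-in, not the actual COP node:

```python
def expand(channel, radius=1):
    """Grow bright regions: each sample becomes the max of its neighborhood
    (a 1-D version of an expand/dilate filter)."""
    n = len(channel)
    return [max(channel[max(0, i - radius):min(n, i + radius + 1)])
            for i in range(n)]

# One lit sample spreads one step in each direction:
print(expand([0, 0, 1, 0, 0]))  # [0, 1, 1, 1, 0]
```

A bigger radius grows the triggered region further, which is exactly why the chunks of falling LEDs get bigger.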
  14. Volume Blendshape

    Try this version instead! I swapped pcimport for pcfilter; not sure it's working as it's supposed to (changing the number of points to filter doesn't seem to have an effect), but it's way less jittery and not buggy on start-up. volblend2.hipnc
  15. Folding (or unfolding) character?

    Some random ideas to avoid simulations: you could do some "space-blending" in a VOP SOP, something like a couple of clever ramps used to modify the position of the points according to their distance to a line/point/point cloud. Another one, VOP-SOPsy too, would be a custom blendshape to drive the folds, with a folded and a non-folded model. Last one: record the fold trajectories in CHOPs and then drive the folds with attributes. Of course, combine these three techniques and you've got a lot to play with!
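The first idea (space-blending by distance) might look like this in plain Python: a smoothstep ramp of each point's distance to an anchor drives a lerp between the flat and folded positions. Names and ramp choice are my own, not from a specific HIP file:

```python
import math

def smoothstep(e0, e1, x):
    """Hermite ramp: 0 below e0, 1 above e1, smooth in between."""
    t = max(0.0, min(1.0, (x - e0) / (e1 - e0)))
    return t * t * (3.0 - 2.0 * t)

def fold_blend(p_flat, p_folded, anchor, r_start, r_end):
    """Blend a point toward the folded pose as it gets farther from anchor.

    Points within r_start of the anchor stay flat; points beyond r_end are
    fully folded; in between, the ramp gives a smooth transition.
    """
    d = math.dist(p_flat, anchor)
    w = smoothstep(r_start, r_end, d)
    return tuple(a + (b - a) * w for a, b in zip(p_flat, p_folded))
```

Animating r_start/r_end over time then sweeps the fold across the model without any simulation, which is the whole point of the trick.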