StepbyStepVFX last won the day on September 6

StepbyStepVFX had the most liked content!

Community Reputation

69 Excellent


About StepbyStepVFX

  • Interests
    VFX, Computer Graphics, AI, Deep Learning, Cinema and many more...
  1. Baking grass to plane

    What do you mean by baking? You want to output a displacement map? If you want to bake a disp map, you need a Bake Texture ROP (OUT context): indicate your high-rez object and your low-rez object - also called the UV render object - (which gets proper UVs), and then check the Displacement or Vector Displacement output in the Images / Main part of the node. You need to tell it which resolution you want for your map, and you can select various "look up" options between the high-rez and low-rez objects (called the Unwrap method). I guess yours would be Trace Closest Surface, but you can use a UV-matching option if you work on objects that were created from a subdivided low-rez object and whose UVs didn't change (like getting a disp map from sculpting details on a subdivided object in ZBrush). Then you hit the render button. Hope this helps you a bit, but for grass I would go with point instancing if that's for rendering. If that's for a real-time render engine and you want to bake a map, of course that's different :-)
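As a rough illustration of what the bake computes (plain Python, not Houdini code - the function name and inputs are made up for the example): for each texel of the low-rez object, the offset to the matching high-rez surface point gives the vector displacement, and its projection onto the low-rez normal gives the scalar height.

```python
def bake_displacement(low_pos, low_normal, high_pos):
    """Sketch of a per-texel bake: given matching low-rez and high-rez
    surface positions (found via the Unwrap method), return the scalar
    displacement (height along the low-rez normal, assumed unit length)
    and the vector displacement (the full offset)."""
    offset = [h - l for h, l in zip(high_pos, low_pos)]
    scalar = sum(o * n for o, n in zip(offset, low_normal))
    return scalar, offset

s, v = bake_displacement((0.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.1, 0.5, 0.0))
print(s)  # 0.5 -> height along the normal
print(v)  # [0.1, 0.5, 0.0] -> vector displacement
```

The real ROP does this per texel of the UV render object, of course; the sketch is only the per-sample math.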
  2. packed geo confusion

    By the way, I also read that intrinsics are attributes that are computed on the fly, not stored on the primitive. I'm not sure that's true for all intrinsics of packed prims: it is clear for position, which can be computed from the unique point of the packed prim, but I don't know how you would calculate orientation from just a point without other point attributes. That's still obscure to me :-) But it doesn't prevent you from manipulating them and doing cool stuff ;-)
  3. packed geo confusion

    I think that makes sense when you consider how Houdini structures geometry: primitives are made of vertices that refer to points. The "position" of a primitive is the barycentre of its components (and is calculated like that under the hood, in a Prim Wrangle for example, when you use @P on a prim). A packed primitive is a "fictitious" primitive that contains only one point, so it makes sense that moving the point position moves the prim. And if I remember correctly, the orientation of the primitive is an "intrinsic" attribute of the prim, not the point - that explains why you need to be in Primitive mode for the select tool to orient it, move it, etc. But that is just my guess; I don't really have info about the philosophy the devs used when coding that :-) I hope it helps you a bit...
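The barycentre idea can be sketched in a few lines of plain Python (illustrative only, not how Houdini is implemented internally): the "position" of a primitive is just the average of its points, and for a packed prim with a single point that average is the point itself.

```python
def prim_position(points):
    """Barycentre of a primitive's points - what @P evaluates to
    for a primitive in a Prim Wrangle."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

# A regular quad: @P is the centre of its four points.
quad = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 0.0, 1.0), (0.0, 0.0, 1.0)]
print(prim_position(quad))  # (0.5, 0.0, 0.5)

# A packed prim has one point, so its "position" is that point.
print(prim_position([(2.0, 3.0, 4.0)]))  # (2.0, 3.0, 4.0)
```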
  4. Nuke vs COPs

    By the way, if you want to do color grading and editing, DaVinci Resolve is a really nice tool. Nuke is really made for compositing, matchmoving, keying... very versatile, but DaVinci is really oriented toward grading and editing (though I never tried Nuke Studio... I only used NukeX).
  5. Sprite Pops - Image Sequence

    Hi, I modified your file a bit and it seems to work. I just created a string attribute, spritemap, with a POP Wrangle (using your fit01(age ...)), and in the spritefog material I used a Bind node to feed the base texture map (see in the file). Just save the file in a place where your sprites are in $HIP/Sprite_v001/***.png (you'll see an error when Mantra tries to load the texture if you did not save it there :-) Hope this helps SpriteTest.hip
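A plain-Python sketch of that spritemap logic (fit01 is reimplemented here to mirror the VEX function; the exact file-naming pattern is hypothetical - adapt it to your actual sequence):

```python
import math

def fit01(x, newmin, newmax):
    """Mirror of VEX fit01: map x in [0,1] to [newmin, newmax], clamped."""
    x = max(0.0, min(1.0, x))
    return newmin + x * (newmax - newmin)

def sprite_filename(age, nframes):
    """Pick a sprite frame from normalized particle age (VEX @age/@life).
    The naming pattern below is a made-up example."""
    idx = int(math.floor(fit01(age, 0, nframes - 1) + 0.5))
    return "$HIP/Sprite_v001/sprite_%03d.png" % idx

print(sprite_filename(0.0, 10))  # $HIP/Sprite_v001/sprite_000.png
print(sprite_filename(1.0, 10))  # $HIP/Sprite_v001/sprite_009.png
```

In the actual setup this string lands in the spritemap attribute, which the Bind node then feeds to the base texture map of the material.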
  6. Jeff, you are definitely the most powerful spotlight there is in the dark realm of Houdini knowledge! I would definitely like to see more masterclasses from you!
  7. Sprite Pops - Image Sequence

    Can you post a light version of your file so that we can see what's wrong with what you tested and better understand what you want to achieve? As a last resort, if you're lost with the POP Sprite, you can copy-stamp or point-instance grids onto your particles after the simulation, and simply put your sequence on the color map of your shader. You'll just need to make sure there is an orient attribute on your particles (use a POP Look At node, for example), so that they face the camera.
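A minimal Python sketch of the look-at idea (illustrative math only, not the actual POP Look At implementation): build an orthonormal frame whose +Z axis points from the particle at the camera, which is the frame you would then convert to @orient.

```python
import math

def normalize(v):
    l = math.sqrt(sum(c * c for c in v))
    return tuple(c / l for c in v)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def lookat_frame(p, cam, up=(0.0, 1.0, 0.0)):
    """Orthonormal (x, y, z) frame with +Z aimed from particle p at the
    camera, so a grid instanced with this frame faces the camera."""
    z = normalize(tuple(c - q for c, q in zip(cam, p)))
    x = normalize(cross(up, z))
    y = cross(z, x)
    return x, y, z

# Particle at the origin, camera straight down +Z: identity frame.
print(lookat_frame((0.0, 0.0, 0.0), (0.0, 0.0, 5.0)))
```

(As written it degenerates when the view direction is parallel to `up`; a real setup would pick a fallback up vector in that case.)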
  8. Moving points based on displacement

    If you use point instancing, the only other way I see to move your instances at render time, prior to geometry generation, is to use a CVEX SHOP that outputs a transform matrix. To build it, you need to sample your displacement inside that CVEX shader... which leads back to the previous answer: you need to recreate your displacement inside it, because the disp data is created on the fly once the shader is given a shading point. So either your disp data is baked into the geo prior to rendering, or it is calculated on the fly at render time (see file attached), but you need to calculate it at some point, and that implies duplicating your network, because it is part of the terrain shader, not a separate one. A third option, maybe, is to bake-render your displacement, import it in COPs, and use that image to sample the disp to apply to the instancing points. dispInstancesRenderTime.hip
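To make the xform idea concrete, here is a plain-Python sketch (not CVEX) of the kind of matrix such a shader could output: a translation moving the instance point along its normal by the sampled displacement height.

```python
def instance_xform(normal, height):
    """4x4 row-major transform that offsets an instance point along its
    (unit) normal by a displacement height - a sketch of the matrix a
    CVEX instance shader would output after sampling the disp map."""
    tx, ty, tz = (n * height for n in normal)
    return [
        [1.0, 0.0, 0.0, 0.0],
        [0.0, 1.0, 0.0, 0.0],
        [0.0, 0.0, 1.0, 0.0],
        [tx,  ty,  tz,  1.0],
    ]

m = instance_xform((0.0, 1.0, 0.0), 2.0)
print(m[3])  # [0.0, 2.0, 0.0, 1.0] -> lifted 2 units along the normal
```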
  9. Drive Particles from Point A to Point B

    Made you a file to test the method. I don't know if that's what you're looking for. morphParticles.hip
  10. copy/stamp, alternative ways?

    Maybe you could try point instancing. You transfer all your colors, height info, etc. to drive your shader's color and size attributes, which will modify your cubes at render time...
  11. Moving points based on displacement

    I think you can solve this in various ways, one of them being to move the scattered points using the displacement info prior to rendering. For that, I would scatter the points on the non-displaced surface (which is what you do), then with a Point Wrangle transfer the UVs of the "ground" to these points as well as the normals (or let the Scatter node inherit these in the first place, which is more efficient :-), and then in a Point VOP use a Texture VOP to move these points along their normals, using the same displacement information (the same texture map as the one used for your ground). Then your points will be displaced prior to rendering, and when you instance grass blades on them, they will stick to your displaced "ground". Hope I understood the problem correctly :-)
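The displacement step boils down to P + N * height(uv). A small Python sketch of the per-point math (with a toy height function standing in for the Texture VOP lookup):

```python
def displace_point(P, N, uv, sample_height):
    """Move a scattered point along its (unit) normal by the same height
    the ground shader will apply, so instances stick to the displaced
    ground. `sample_height` stands in for the texture map lookup."""
    h = sample_height(uv)
    return tuple(p + n * h for p, n in zip(P, N))

# Toy height field in place of the real displacement map:
height = lambda uv: 0.25 * uv[0]
print(displace_point((1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.5, 0.5), height))
# -> (1.0, 0.125, 0.0)
```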
  12. Drive Particles from Point A to Point B

    Hi, I didn't watch the video until the end, but I guess you can use a POP Steer Seek inside a POP network, using one goal for each particle by matching an "id" attribute. You can use two such nodes (an initial state with one set of points as goals, and a final state with another set of points - in SOPs - as final goals). Then you increase the force of one while decreasing the force of the other, by keyframing these parameters. And maybe use a third node like POP Wind to add the swirls in between, always adjusting the force of each node so that only one is "active" at a given time. I haven't tried it, but you can give it a look. Additionally, you can of course use a multisolver and a SOP solver to adjust the size of your particles during each phase of your morph.
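The force cross-fade can be sketched in plain Python (illustrative only, not POP code): at t=0 only goal A pulls, at t=1 only goal B, mimicking the keyframed Steer Seek forces.

```python
def blended_seek(p, goal_a, goal_b, t):
    """Blend two simple seek forces (goal - position) with a cross-fade
    weight t in [0,1], like keyframing two POP Steer Seek force params
    in opposite directions."""
    w = max(0.0, min(1.0, t))
    fa = tuple((g - c) * (1.0 - w) for g, c in zip(goal_a, p))
    fb = tuple((g - c) * w for g, c in zip(goal_b, p))
    return tuple(a + b for a, b in zip(fa, fb))

p = (0.0, 0.0, 0.0)
print(blended_seek(p, (1.0, 0.0, 0.0), (0.0, 0.0, 2.0), 0.0))  # (1.0, 0.0, 0.0)
print(blended_seek(p, (1.0, 0.0, 0.0), (0.0, 0.0, 2.0), 1.0))  # (0.0, 0.0, 2.0)
```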
  13. Hi, trying to do a « World War Z » kind of shot, with human/zombie pyramids. I have only one clip for the moment (I wanted to animate my agents manually as a training exercise), so no variations, no ragdolls yet. I'm just trying to layer my crowd into pyramids using a base terrain + local deformations, on which I scatter points that are used as goals for my agents (with a priority attribute so that the agents at the bottom of the pyramid arrive first... so lots of POP Wrangles and custom systems, but for a poor result so far :-) The plate comes from an urban exploration group called Black Raven, who kindly accepted that I use it. Shot with a DJI Phantom 3; I matchmoved it with 3DEqualizer. Advice is welcome if you have ideas on how to achieve these human pyramids :-)
  14. Earthquake in Paris

    Some new finished work! I've been working on this one for a long time. It's not totally finished yet; there may be some cleaning left to work on... Any feedback is welcome.