toadstorm

Members
  • Content count

    347
  • Donations

    0.00 CAD 
  • Joined

  • Last visited

  • Days Won

    21

toadstorm last won the day on January 2

toadstorm had the most liked content!

Community Reputation

244 Excellent

About toadstorm

  • Rank
    Illusionist

Contact Methods

  • Website URL
    http://www.toadstorm.com

Personal Information

  • Name
    Henry
  • Location
    San Jose, CA

Recent Profile Visitors

4,831 profile views
  1. mantra start time

    There are a few ways you can decrease time to first pixel. The first thing I'd do is switch over as much geometry as possible to packed disk primitives... just cache everything to disk as .bgeo and then load it back in via a File SOP in Packed Disk Primitives mode. This means your IFDs don't need to actually contain any geometry; they'll just point to the cached geometry on disk instead. Second, avoid using lots of unlocked materials, or packing your materials in such a way that you need to set "Declare Materials" to "Save All Materials" on the Mantra ROP; in bigger scenes with lots of materials, that can seriously bloat your IFD size. Third, disable displacement if you can. Displacement means you have to generate all that geometry at render time instead of streaming it from disk. It can honestly be faster in some cases to just subdivide your geometry in SOPs and then cache, rather than running a displacement shader.
  2. Best way for Air buble in bottle

    For a few large bubbles in an otherwise full bottle, Vellum really does seem like the best choice. What's unsuccessful about your current tests with Vellum? Some more details might be helpful.
  3. I documented a process like this on a short animation I made a couple of years ago. When I did this I was stuck with FEM because Vellum didn't yet exist, but you could do this waaaaay faster using Vellum and some Vellum-friendly force attributes... you'd just have to translate a little bit from the FEM solver attributes I was using to generate force. The short answer is that I randomly applied forces to areas of the leaves/petals until the rhythm of the impacts looked believable, then isolated those impacted positions in the simulation and used them to generate raindrops after the fact. The raindrop particles were POPs that I slid down the petals using a "gradient descent" algorithm (described in the blog post linked below, and sketched after this post), then randomly released after a time. It was easier to do it this way and get the timing right than to actually try to collide the leaves with falling raindrop particles. It's a long post; scroll about halfway down and you'll start seeing how the impacts were done: https://www.toadstorm.com/blog/?p=557 There's a HIP file you can download linked at the beginning of the first post in the series: https://www.toadstorm.com/blog/?p=534
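    A minimal sketch of that gradient-descent slide in a POP Wrangle might look like the following. This isn't the setup from the blog post; it assumes the petal geometry (with point normals) is wired into the wrangle's second input, and the "speed" channel is a made-up parameter:

        // Find the closest position and normal on the petal surface (input 1).
        int prim;
        vector uv;
        xyzdist(1, @P, prim, uv);
        vector surf_pos = primuv(1, "P", prim, uv);
        vector surf_n = normalize(primuv(1, "N", prim, uv));
        // Project gravity onto the tangent plane to get the downhill direction.
        vector down = {0, -1, 0};
        vector slide = normalize(down - dot(down, surf_n) * surf_n);
        // Stick the particle to the surface and step it downhill.
        @P = surf_pos + slide * ch("speed") * @TimeInc;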
  4. Normals not normal

    That looks right for point normals for that particular shape. Primitive normals don't really exist... you can't directly modify them with attributes like you'd think. They're more an indication of the vertex winding order per-primitive than anything else. If a primitive normal is facing the wrong way, you need to use a Reverse SOP, and then possibly invert your point or vertex normals. If you want to see primitive normals, it's a different button on the side toolbar (the one with the primitive with the normal sticking out of it). Generally speaking, you want vertex normals if you have polygon geometry. Open polylines and points generally use point normals. It really depends on what you want to do with them (for example, using N as a force vector would generally be best done with point normals), but for any typical polygon model, vertex normals is the expectation... this is how you get control over the apparent "hardness" of edges. Otherwise your corners are all going to appear mushy because the renderer is going to interpret your point normals as perfectly averaged vertex normals. If you drop down a Normal SOP in vertex mode and play with the cusp angle, you'll get an idea of why you want to be able to control normals at the vertex level.
  5. MOPs: Motion Graphics Operators for Houdini

    Hello again! It's been a long time. Today, with the release of Houdini 18, comes the first "official" release of MOPs: v1.00. This includes a ton of changes since the previous Stable release, and is now feature complete, barring any future bugfixes. Development of new features will now be focused on the upcoming commercial version of MOPs. The list is way too long to post here, so I'll just link to the GitHub release page: https://github.com/toadstorm/MOPS/releases/tag/v1.00 Please continue to post bug reports, feature requests, or any other feedback, either here, on GitHub, or in the MOPs forums! Thanks as always!
  6. Karma benchmark

    Yeah, that's not really something that can be discussed here. If you want to talk about beta stuff, do it in SideFX's beta forum.
  7. copy/stamp, alternative ways?

    Something like this? I'm attaching an example file. Also, if you're coming from a C4D background... you might want to try MOPs; it uses a similar flow to MoGraph. In this particular case you could use Falloff from Texture and then the Transform Modifier to scale objects in Y (a rough wrangle equivalent is sketched below). height_from_luma.hip
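    Here's a rough sketch of the same idea in a Point Wrangle, in case it helps to see it outside of MOPs (this isn't from the attached file; the "texture" channel, the UV attribute, and the fit range are all just example names and values):

        // Point Wrangle before a Copy to Points SOP.
        // Assumes the template points already have a v@uv attribute.
        string tex = chs("texture"); // e.g. a path to your luma map
        vector clr = colormap(tex, @uv.x, @uv.y);
        float luma = luminance(clr);
        // Copy to Points reads v@scale per point; scale in Y only.
        v@scale = set(1.0, fit01(luma, 0.05, 1.0), 1.0);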
  8. you're right, this is a terrible place and you should probably leave
  9. Assuming you're using Alembic to get your primitives out of Maya and back into Houdini, you need to get the "packedfulltransform" intrinsic of each packed Alembic. In VEX you can use the "primintrinsic" function to get this matrix. However, points don't have primitive intrinsics, so when you delete your primitives you're going to lose all your transform information except for @P. You'll want to keep the primitives loaded, unless you're manually converting the primitive intrinsic transforms to template point attributes. To manually get an orient attribute from a packed Alembic:

        matrix m = primintrinsic(0, "packedfulltransform", @primnum);
        matrix3 m3 = matrix3(m);
        p@orient = quaternion(m3);

     To get the pivot:

        vector pivot = primintrinsic(0, "pivot", @primnum);
        v@pivot = pivot;

     If you want to try the easy route, MOPs has built-in tools to handle this. Use MOPs Extract Attributes on your Alembic primitives, and it'll automatically pull an `orient` and `pivot` attribute out of your primitives (make sure to enable Extract Full Transform). You can then use Transform Pieces, or use MOPs Apply Attributes if your source and destination primitives have either the same sorting order or matching i@id attributes.
  10. VOPs don't work in a vacuum; they modify geometry. If you want to combine the results of multiple VOPs, you'd have to either daisy-chain multiple VOPs that each bind their noise function to a geometry attribute, or import some kind of attribute from both VOPs so you know what value you're trying to read in. I'm attaching both approaches (the same layering is sketched in VEX below). layering_noise.hip
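     For comparison (this isn't part of the attached file), the same layering is a few lines in a Point Wrangle; the frequencies and amplitudes here are arbitrary:

        // Add two noise layers to a single attribute in one pass.
        float n1 = noise(@P * 4.0) * 0.5;   // broad, low-frequency layer
        float n2 = noise(@P * 20.0) * 0.1;  // fine detail layer
        @P.y += n1 + n2;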
  11. Infection system with growing geometry

    You need to define an "age" equal to the current frame minus the frame at which the current point was first activated by the infection system. You could store that frame in your solver, then compute the difference afterwards and use that value to map to @pscale (see the sketch below).
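    A minimal sketch of that, assuming the infection solver sets an i@infected flag; f@birthframe and the fit range are made-up names and values:

        // Inside the solver (Point Wrangle): record the activation frame once.
        if (i@infected == 1 && f@birthframe <= 0) {
            f@birthframe = @Frame;
        }

        // After the solver (Point Wrangle): map age since activation to pscale.
        float age = @Frame - f@birthframe;
        @pscale = i@infected ? fit(age, 0, 24, 0.01, 1.0) : 0;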
  12. I don't understand why this must be done in a single VOP network...? Like Tomas said, VOPs run in parallel over all points, so you can't really smooth your geo between noises. You either need to add up all your noises and then blur once at the end, or apply blurs in between noise stages (a wrangle-based blur is sketched below). If you really need to keep the appearance of a single work environment and you're good with Python, you could of course wrap this whole process into a single HDA and use a multiparm block to create new noise functions and associated blurs on demand.
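     If you'd rather not drop an Attribute Blur SOP between stages, a point-cloud filter in a wrangle gives you a comparable smoothing pass; the radius and point count here are arbitrary:

        // Point Wrangle between two noise stages: point-cloud blur of P.
        int handle = pcopen(0, "P", @P, 0.25, 16);
        @P = pcfilter(handle, "P");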
  13. Cinema4D is just not built to handle that amount of data. If you don't need to modify the geometry at all, then depending on your render engine you could load a viewport proxy that streams in the real geometry at render time... this would be something like an Arnold .ass file, a VRay .vrmesh, or a Redshift3D .rs proxy. You'll see a bounding box or some other down-rezzed representation in the viewport, but it'll still render normally.
  14. Straighten Curve Segments?

    I used minimal VEX for this... just to create the v@sourceprimuv and i@sourceprim attributes (see the sketch below). Attribute Interpolate automatically reads these attributes when computing the final values for any attributes that are run through it. interpolate_curve.hip
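    Not the exact wrangle from the attached file, but creating those two attributes typically boils down to something like this (assumes the reference geometry is wired into the wrangle's second input):

        // Point Wrangle: record the closest prim on input 1 and the UV there.
        int prim;
        vector uv;
        xyzdist(1, @P, prim, uv);
        i@sourceprim = prim;
        v@sourceprimuv = uv;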
  15. I don't think this is possible without modifying the Principled Shader, since it's using GGX as its specular model and there's no parameter to change that. I'm not sure whether this is an inherent limitation of the GGX microfacet model or of Houdini's implementation of it, but GGX will not resolve to a mirror surface. If you want to force the Principled Shader to allow non-GGX specular models, right-click > Allow Editing of Contents, dive inside, find the Principled Shader Core node, Allow Editing of that node too, then look for the "metallic_reflection" node and change its Specular Model to Phong or Blinn. Then jump back out and set your material's Base Color to pure white and Metallic to 1.0. That should create a pure mirror material. I hope in future builds SideFX can add a switch in here to override the specular model when roughness is zero. Might be worth an RFE?