adrianr

Members
  • Content count

    103
  • Joined

  • Last visited

  • Days Won

    3

adrianr last won the day on July 15

adrianr had the most liked content!

Community Reputation

34 Excellent

About adrianr

  • Rank
    Initiate

Personal Information

  • Name
    Adrian
  • Location
    London

  1. Rebelway workshops

    I doubt all you'll take away from that course is how to make good rocks. I bet there is a ton of useful and transferable info, real-time stuff especially, that will be worth the cost. Provided, of course, you currently know very little about what the course intends to cover.
  2. London user group

    Serves me right for not turning on thread notifications! We've set up a Slack at the link below to try and organise meet-ups, but whatever we decide, we'll try and remember to post back here and in the SideFX thread. https://join.slack.com/t/londonhug/shared_invite/MjE0MjM5Mjg4OTI5LTE1MDA0Njc3ODctN2RjYmNlMTY0ZA
  3. Faceted Normals After Displacement

    Sooo my solution worked? Just using a quaternion dihedral as you had set up, instead of a 3x3 matrix. Good to know about the raytracing bias. Regarding your latest scene, I'm pretty sure the source is the shading normal vop inside the displace vop. It creates new normals from the displaced position rather than rotating the existing normals along with the position. You can go into the displace vop and hijack the matrix coming out of the 'get_space' subnetwork to rotate the normals and bypass the shading normal vop - just be sure to turn off your transform after the displace normal. Edit: I haven't checked the full subnet to see what it's making, or whether the normals are now technically rotated the same as the points, but I think this gives you enough to go on for further poking around.
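To make the "rotate the existing normals along with the position" idea concrete, here's a minimal plain-Python sketch (not VEX) of the dihedral approach: build the quaternion that rotates the rest direction onto the displaced direction, then apply that same rotation to the normal instead of recomputing it from displaced positions. The function names mirror Houdini's `dihedral()` and `qrotate()` VEX functions but are my own stand-ins.

```python
import math

def dihedral(a, b):
    """Quaternion (w, x, y, z) rotating unit vector a onto unit vector b.
    Stand-in for VEX dihedral(); assumes a, b are normalized and not antiparallel."""
    # axis = a x b, angle = acos(a . b); build the quaternion from the half-angle
    ax = (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])
    d = max(-1.0, min(1.0, sum(x*y for x, y in zip(a, b))))
    half = math.acos(d) / 2.0
    s = math.sin(half)
    n = math.sqrt(sum(x*x for x in ax)) or 1.0  # avoid /0 for parallel vectors
    return (math.cos(half), ax[0]/n*s, ax[1]/n*s, ax[2]/n*s)

def qrotate(q, v):
    """Rotate vector v by quaternion q (v' = v + w*t + qvec x t, t = 2*qvec x v)."""
    w, x, y, z = q
    t = (2*(y*v[2] - z*v[1]), 2*(z*v[0] - x*v[2]), 2*(x*v[1] - y*v[0]))
    return (v[0] + w*t[0] + (y*t[2] - z*t[1]),
            v[1] + w*t[1] + (z*t[0] - x*t[2]),
            v[2] + w*t[2] + (x*t[1] - y*t[0]))

# Same rotation for position and normal keeps the faceting consistent:
q = dihedral((1.0, 0.0, 0.0), (0.0, 1.0, 0.0))  # rest dir -> displaced dir
rotated_normal = qrotate(q, (0.0, 0.0, 1.0))    # apply identical q to N
```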
  4. Faceted Normals After Displacement

    Damn, thought I had it; still a bit broken. Only render-time subdiv fixes it, but I know that wasn't the point. Curious to know the proper solution, as this would technically work in SOPs as far as rotating the position and normals goes. Displacement_Normals_02.hipnc
  5. Constraining wires to Cached Geometry

    See my last post in this thread
  6. I'm not 100% sure, but I don't think you can use Transform Pieces in this situation. You're getting more points out of DOPs than you're putting in. Transform Pieces expects a matching point count (well, technically it can work without one, as you'll see in Anthony's thread, but it's not ideal) and a matching name attribute. If you're trying to move 20 initial chunks based on 40 points coming from DOPs, I'm not sure how you expect that to work? The geometry being made in DOPs doesn't exist pre-sim in SOPs.
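A tiny plain-Python sketch of the matching that Transform Pieces relies on (this is an illustration of the name-matching requirement, not Houdini code): every piece's `name` must find a corresponding template point, and a mismatch like 20 chunks vs. 40 sim points simply has no well-defined answer.

```python
def transform_pieces(pieces, template_points):
    """pieces: {name: list of point positions}; template_points: {name: offset vector}.
    Moves each piece by its matching template offset; raises if a piece's
    name has no matching template point (the mismatch described above).
    Real Transform Pieces applies a full transform, not just a translate."""
    out = {}
    for name, pts in pieces.items():
        if name not in template_points:
            raise KeyError(f"piece {name!r} has no matching template point")
        off = template_points[name]
        out[name] = [tuple(c + o for c, o in zip(pt, off)) for pt in pts]
    return out
```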
  7. No worries chief. Shame about the Maya export, but I guess that's most often the case. And re: clustering, I don't mean the clustering on the Voronoi Fracture - I mean clustering your glue constraints with different strengths. If a glue primitive has a strength of -1, the solver treats it as unbreakable, so you can mix not just bonds of varying strengths but also bonds of unbreakable strength. You can control these with layered noise just as you would anything else you need variation from. If you just want straight-up different-sized pieces based on some seed points, you could look into the 'Voronoi fracture points' sop, hip attached. It can be pretty finicky though - I think the glue constraints method is better, albeit more work. Voronoifracturepoints_01.hip
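A minimal sketch of the glue-clustering idea in plain Python (not Houdini code): bonds inside the same cluster get strength -1 so their pieces act as one unbreakable chunk, while cross-cluster bonds get a varied breakable strength. The random stand-in here plays the role of the layered noise mentioned above; the strength range is arbitrary.

```python
import random

UNBREAKABLE = -1.0  # a glue strength of -1 is treated as unbreakable by the solver

def glue_strengths(constraints, cluster_of, seed=0):
    """constraints: list of (piece_a, piece_b); cluster_of: piece name -> cluster id.
    Same-cluster bonds become unbreakable; cross-cluster bonds get a noisy
    breakable strength (illustrative stand-in for layered-noise control)."""
    rng = random.Random(seed)
    strengths = []
    for a, b in constraints:
        if cluster_of[a] == cluster_of[b]:
            strengths.append(UNBREAKABLE)                  # piece stays in its big chunk
        else:
            strengths.append(50.0 + 150.0 * rng.random())  # breakable seam, varies per bond
    return strengths
```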
  8. That expression should work fine, but bear in mind it will place the camera's focus at the centroid of the follow object, which isn't how we usually focus on things, so you might want to subtract a small value from the result to shift the focus distance more in line with the nearest face of the object. You can check where the focus distance sits by hitting Z when your camera transforms are active in the viewport - this brings up the focus handle. You can also right-click and select the focus handle there. Oddly, when you have a camera lookat constraint it doesn't orient the focus handle with the look direction of the camera, which makes visualizing it a bit tricky - not sure if there is a fix for this or if you'll just have to do spot renders and check focus. Follow_focus_01.hip
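The arithmetic behind that expression is just camera-to-centroid distance minus a pull-forward offset. A plain-Python sketch (the function and parameter names are my own, for illustration):

```python
import math

def focus_distance(cam_pos, target_centroid, near_offset=0.0):
    """Focus = distance from camera to the target's centroid, optionally pulled
    forward by near_offset so focus sits nearer the object's front face."""
    d = math.dist(cam_pos, target_centroid)
    return max(0.0, d - near_offset)  # never focus behind the camera

# Camera 10 units back from the object, focus pulled 0.5 units forward:
focus_distance((0.0, 0.0, 10.0), (0.0, 0.0, 0.0), near_offset=0.5)  # 9.5
```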
  9. Just thinking aloud, but I'm guessing you want to convert the head/neck to a FEM solid object and use pintoanimation to constrain the bottom of it to the original animation (there are definitely threads on that around here if you search, done in H16). So the bottom is locked to your dancing, but the rest of the neck up from that is free to wobble around. Then do a point deform from your FEM mesh back to the original head at the end.
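The final point-deform step boils down to: for each render point, find its capture point on the rest FEM mesh and follow that point's offset once the mesh deforms. A deliberately simplified plain-Python sketch (real Point Deform blends several weighted captures; this uses just the single nearest one):

```python
import math

def point_deform(rest_capture, deformed_capture, rest_points):
    """rest_capture/deformed_capture: parallel lists of FEM mesh point positions
    (rest vs. simulated frame). Each target point in rest_points follows the
    offset of its single nearest capture point - a crude stand-in for the
    weighted-blend capture a real Point Deform does."""
    out = []
    for p in rest_points:
        i = min(range(len(rest_capture)),
                key=lambda j: math.dist(p, rest_capture[j]))
        off = tuple(d - r for d, r in zip(deformed_capture[i], rest_capture[i]))
        out.append(tuple(c + o for c, o in zip(p, off)))
    return out
```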
  10. It's all covered here - Really worth a watch.
  11. About modeling a bottle

    Hah, that is bloody impressive. Tube to bottle in 1 vop!
  12. Yeah, you're really creating a headache for yourself with two Voronois. Even feeding them the same fracture points doesn't guarantee the same output, because you're feeding them a different mesh. Regarding this specific scene, I have two quick solutions.

    One is doing an attribute transfer after packing the geometry, then unpacking it in the high-res stream to do your inner noise stuff. The reason I pack is that you're then transferring the names by proximity from the center of each chunk, which is way safer than trying to do each chunk where it rests (check the add sop). This does highlight the problem with the two Voronois though, as even with the same fracture points the eventual chunk counts don't match. Fortunately it just seems to ignore that, but it doesn't feel very clean.

    Second, if you were happy with the edge wobble you're getting from that level of divide, it's not *too* bad keeping it on your sim geo and chucking it in. Yes, your collision geo will be a bit less accurate and it's a few more points, but maybe that's ok for you. If it sims stable then it's no worse than the non-collisions you would get transferring the low-res sim to the high-res chunks anyway. v004 and v005 scenes attached.

    In my opinion, though, I'd look at another route. If you wanted big chunks but with more detail along the edges, I'd fracture into smaller pieces and then use clustering with glue constraints to hold those big chunks together. This has the added benefit of allowing secondary breakups when things hit the floor etc. You can then still do the high-res stream and add detail to those chunks, you can sub-fracture even smaller chunks, or you can create some displacement vectors to use in a shader for crazy pixel-level detail. You get good collisions back too, because even with all the edge jagginess, to Bullet it's still all convex hulls. building_collapse_v004.hip building_collapse_v005.hip
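The "transfer names by proximity from the center of each chunk" step is just a nearest-centroid lookup once the low-res geometry is packed. A plain-Python sketch of that matching (names and data here are illustrative, not from the hip files):

```python
import math

def transfer_names(low_centroids, high_points):
    """low_centroids: {chunk name: packed-piece centroid position};
    high_points: list of high-res point positions.
    Gives each high-res point the name of the nearest low-res chunk centroid,
    mimicking an attribute transfer done on packed (centroid) geometry."""
    names = []
    for p in high_points:
        names.append(min(low_centroids,
                         key=lambda n: math.dist(p, low_centroids[n])))
    return names
```

Transferring from centroids rather than the unpacked surfaces is what makes this robust: a point deep inside a thin high-res chunk can't accidentally grab the name of a neighbouring chunk's nearby surface.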
  13. Ahh bugger, sorry chap, I thought I'd covered that already. Rushed it out on a Friday between usual tasks - lemme have another look.