About meldrew
  1. Hi all, I recently found this toolset on GitHub allowing realtime input & recording of input from a Leap Motion controller: https://github.com/arqtiq/HouLEAP

    Unfortunately I'm having a little trouble getting it to run properly. The readme says to 'simply copy the content of the **/houdini16.x** folder to your houdini home/hsite folder.' So my question is: where would be the correct place to put the Python scripts the tool provides? (I'm not entirely sure what the 'hsite' folder refers to.) Houdini sees the OTLs, however I get the attached error in what I assume is the Python scripting. Or perhaps I need to define LEAP in the .env file? Any tips much appreciated.

    EDIT: The error is reported from the example .hip contained in the GitHub repository linked in my post. This was run in H 17.5.258.
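For context on the 'hsite' wording: HSITE is a Houdini environment variable pointing at a shared, site-wide config directory, while your "houdini home" is the version-numbered folder in your user directory (e.g. ~/houdini17.5, or Documents\houdini17.5 on Windows). A hedged sketch of what the houdini.env might look like - all paths below are placeholders, not taken from the repo, so adjust to your machine:

```shell
# Example houdini.env (lives in your Houdini home, e.g. ~/houdini17.5/).
# Paths are placeholders -- adjust to your install.

# Point HSITE at a shared tools directory; Houdini then also scans
# $HSITE/houdini17.5 for otls/, scripts/python/, etc.
HSITE = "$HOME/hsite"

# If the toolset's Python scripts import the Leap SDK directly, it may also
# need to be on the Python path (an assumption -- check the repo's readme;
# use ';' or ':' as the separator depending on your platform):
PYTHONPATH = "/path/to/LeapSDK/lib:$PYTHONPATH"
```

Alternatively, skipping HSITE entirely and copying the repo's otls/ and scripts/ folders straight into the Houdini home folder should also get them picked up.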
  2. Multiple cameras, one cook?

    Hi Luke, thanks for the quick response. So if I understand you correctly, Mantra isn't considering anything outside the frustum in any case? If so, that's great (and what I assumed/hoped anyway!)

    I assume you mean saving geo out as .bgeo, caching sims etc., then pointing the IFDs directly at these caches? I'll need to look more into this & packed geo, as I'm rendering on a cloud service... but that is another thread, I suppose. ha. I always endeavour to cache things out as efficiently as I can, so I'll revisit this and see if there are any improvements I can make.

    It is a cube map of sorts, yes. However it's not for VR, so any spherical mapping/fish-eye lens type solutions will lead to distortion that isn't wanted in this case. I'm rendering content for 3 walls & the floor of a box, all angles 90deg, with the POV at eye height, roughly centred within said box. Would there be a specific camera you would recommend?

    Thanks in advance for any additional pointers, appreciate your time. I know these are fairly rudimentary questions
  3. Hi all, OK, so I have a scene/set of scenes where I need to render the same sequence from numerous cameras. Is there a way to render, say, 4 cameras at the same position, facing different directions, but only calculate the lighting/reflections/shading etc. once? As far as I understand, .ifd files just include the instructions to render, not an actual cache of the scene itself. Time is a little tight, so any optimisation would be a benefit. (Cams are also various resolutions, not uniform, in case that has an impact.) Thanks!
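For what it's worth, with a default camera looking down -Z, the four orientations for a 3-walls-plus-floor box are just 90° yaws and a 90° pitch, which can be sanity-checked numerically. A small Python sketch (the wall/floor layout here is my assumption, not from the thread):

```python
import math

def view_dir(rx, ry):
    """Direction a default -Z-looking camera faces after pitching by rx
    then yawing by ry (degrees); tiny float noise rounded away."""
    rx, ry = math.radians(rx), math.radians(ry)
    x, y, z = 0.0, 0.0, -1.0
    # rotate about X (pitch)...
    y, z = y * math.cos(rx) - z * math.sin(rx), y * math.sin(rx) + z * math.cos(rx)
    # ...then about Y (yaw)
    x, z = x * math.cos(ry) + z * math.sin(ry), -x * math.sin(ry) + z * math.cos(ry)
    return (round(x, 9), round(y, 9), round(z, 9))

# assumed layout: pov at centre, three walls at 90 degrees plus the floor
cams = {
    "front": view_dir(0, 0),     # faces the -Z wall
    "left":  view_dir(0, 90),    # faces the -X wall
    "right": view_dir(0, -90),   # faces the +X wall
    "floor": view_dir(-90, 0),   # faces straight down
}
```

Each resulting direction is axis-aligned and perpendicular to its neighbours, so the four renders tile without fisheye distortion.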
  4. Calculating / matching orientation

    Hey all, left this thread behind because the workflow got pretty intense! As an update: @moneitor, @petz & @galagast's solutions all worked well - thank you so much! - but the issue I kept running into, which was eventually revealed, was that the tracking data was bad. So eventually I got it working with all 3 approaches once we re-tracked and got it done correctly. Thanks again everyone for your kind help - I learned a heck of a lot solving this in the end!
  5. Calculating / matching orientation

    Thanks @moneitor - that is exactly what I was trying to achieve, done in a way I would not even have thought of. I'll spend some time this evening/over the weekend going through it & trying to better understand the math that's going on in there, plus some VOP stuff that's pretty new to me. Thanks for the annotations as well, they're always very helpful!
  6. Calculating / matching orientation

    Essentially I am trying to access the results of whatever calculation(s) the Extract Transform SOP does at object level... if anyone has any idea how to do that?
  7. Calculating / matching orientation

    Hey @jkunz07, that's great, and super lightweight. Thanks! However I'm still trying to work out how to extract XYZ translate/rotate parameters from this process. Any ideas on that? i.e. what XYZ translate/rotate is required to get from geo A > geo B. Being able to morph one to the other is v useful, but my pointclouds are *very* heavy, so being able to just apply a transform to the .fbx would be a lighter fix in this case, if I can get that data from a sample of 3 or 4 points. Thanks again!
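Once you have the 3x3 rotation part of a transform matrix (e.g. from an Extract Transform SOP), the XYZ rotate values can be read back out of it. A hedged Python sketch, assuming the common R = Rz·Ry·Rx composition - double-check against the rotate order your Houdini transform actually uses:

```python
import math

def euler_xyz_degrees(m):
    """Extract (rx, ry, rz) in degrees from a 3x3 rotation matrix m,
    assuming m = Rz(rz) @ Ry(ry) @ Rx(rx). m is a list of rows."""
    rx = math.atan2(m[2][1], m[2][2])
    ry = math.atan2(-m[2][0], math.hypot(m[2][1], m[2][2]))
    rz = math.atan2(m[1][0], m[0][0])
    return tuple(math.degrees(a) for a in (rx, ry, rz))
```

Note the usual caveat: near ry = ±90° (gimbal lock) the rx/rz split becomes ambiguous, so this works best away from that configuration.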
  8. Calculating / matching orientation

    I've worked up a quick annotated example file, with a proxy version of my problem (and a few variants of my initial explorations of a solution included/bypassed). I am calculating the centre (average) of each stream as a detail attrib, and my idea was to use that as a pivot point to base the orientation on. Unfortunately I have no real math background, so I'm struggling to wrap my head around the concept of reverse-engineering the rotation/translation values. It would also be very helpful to get some tips on correctly using the centre points I've generated in the pivot parameters of the Transform node. (I am currently exploring some other threads on this specific topic.) Thanks.

    EDIT: One approach I hadn't considered was pclookup/pcfilter etc. in VOPs - again, something I've never used before, so I'm beginning to explore that as well. pc_match_001.hip
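For the centre-as-pivot part: the per-stream centroid is just the point average, and a rough uniform scale between two matched sets can be read off the ratio of their RMS spreads about those centroids. A tiny illustrative Python sketch of that math (outside Houdini):

```python
import math

def centroid(pts):
    """Average position of a list of (x, y, z) tuples."""
    n = len(pts)
    return tuple(sum(p[i] for p in pts) / n for i in range(3))

def rms_spread(pts):
    """RMS distance of the points from their centroid."""
    c = centroid(pts)
    return math.sqrt(sum(sum((p[i] - c[i]) ** 2 for i in range(3)) for p in pts) / len(pts))

def uniform_scale(src, dst):
    """Scale factor taking src's spread to dst's spread (translation-invariant)."""
    return rms_spread(dst) / rms_spread(src)
```

With the centroid as pivot, scaling about it is p' = s * (p - pivot) + pivot, which matches how the Transform SOP's pivot parameters behave.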
  9. Calculating / matching orientation

    Yes, and I also want to extract the XYZ rot/scale transforms needed to place them on top, so I can then apply it to the .fbx upon import. Apologies if I wasn't clear, or if this is a v simple issue that I'm just not sure how to approach - one of the pitfalls of being self-taught, I suppose. :/ Thanks again!
  10. Calculating / matching orientation

    Hi Jesper, thanks for the response. Yep, I have access to both in H. The camera track (which is the thing I'd like to re-orient) was done in C4D and has been given to me as an .fbx; the pointcloud is directly from photogrammetry. So far my approach has been finding 3 points in the track that I can pinpoint in the PC, then creating a bounding box of each and attempting to align. I would share a .hip but unfortunately can't at the moment due to NDAs etc. Thanks again!
  11. Hello all, I have been racking my brains/this forum on something seemingly quite simple, but not getting anywhere... Basically, I would like to match the scale/orientation of one set of points based on another, then be able to extract those 'transform' parameters. For example, if I have a pointcloud A, with 10 points, which has been re-scaled/oriented to create pointcloud B (in a separate process - I don't have access to those transform params), how would I then re-orient A to match?

    I have a photogrammetry pointcloud, which I am trying to match a tracked camera to. The issue is that the camera track is coming in at the origin, so I need to transform it to match the original pointcloud. I have isolated 10 identically positioned (not identical ptnum) points from the pointcloud & tracking data to use as 'calibrators', but can't figure out the best/most efficient next step for doing the re-orient/scale. Any help, tips, or pointers in the right direction for threads would be very much appreciated as always.
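A standard way to recover such a transform from matched point pairs is the Kabsch/Umeyama method: subtract the centroids, estimate scale from the spreads, and take the rotation from an SVD of the cross-covariance matrix. A minimal sketch in Python/numpy (a stand-in for doing this in a Python SOP; it assumes the two clouds are in corresponding point order):

```python
import numpy as np

def match_transform(src, dst):
    """Estimate uniform scale s, rotation r and translation t such that
    dst_i ~= s * (r @ src_i) + t, via Kabsch/Umeyama. Points are rows."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    a, b = src - c_src, dst - c_dst
    s = np.sqrt((b * b).sum() / (a * a).sum())   # ratio of RMS spreads
    u, _, vt = np.linalg.svd(a.T @ b)            # SVD of cross-covariance
    d = np.sign(np.linalg.det(vt.T @ u.T))       # guard against reflection
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = c_dst - s * (r @ c_src)
    return s, r, t
```

Applied to the 10 calibrator pairs this gives the transform in one shot; with row-vector points you apply it as `s * (pts @ r.T) + t`, and the rotation matrix/translation can then be fed to a Transform SOP for the full camera track.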
  12. Trail SOP help

    Hi all, I'm having what I expect is a very simple issue with the Trail SOP. As my points are dying/ptnum is resetting, I'm getting glitches at the ends of my trails. Obviously if it was a POP sim, I could just trail > calculate velocity, then append an Add SOP to add a primitive based on id. However, I can't figure out how to do it using the @ptnum variable in the attribwrangle. Is @ptnum actually just the wrangle variable, and not actually a point attrib? Been scratching my head on this for a while. .hipnc attached, any help very much appreciated. broken_trails.hipnc
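The underlying fix is to connect trail points by a stable id attribute rather than by ptnum, since ptnum is just the point's current index and gets reused when points die. The regrouping step, sketched outside Houdini in Python (illustrative only; in the wrangle you'd read @id, not @ptnum):

```python
from collections import defaultdict

def build_trails(frames):
    """Group (id, position) samples from successive frames into one
    polyline per id; ids seen only once are dropped (no trail to draw).
    frames: list of frames, each a list of (id, position) pairs."""
    trails = defaultdict(list)
    for frame in frames:
        for pid, pos in frame:
            trails[pid].append(pos)
    return {pid: pts for pid, pts in trails.items() if len(pts) > 1}
```

Because the grouping key is the id, a point dying (and its ptnum being recycled by a new point) no longer splices two unrelated trails together, which is exactly the glitch at the trail ends.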
  13. Thread PointVop Guidance

    Thanks a lot Jiri, I will take a look at these threads & see if I can get where I need to be - the helix along a curve seems ideal. (I need to start learning VEX properly so I can *finally* start to move away from VOPs; they have such heavy overheads in comparison.)
  14. Hi all, I've started putting an asset together which allows me to create multiple 'threads' from a single line, then affect them as if they are fraying/weaving. I'm quite happy with it; however my approach doesn't lend itself well to anything other than straight lines, and I'd like to apply it to more/multiple complex curves - so that it follows their contours exactly. At the moment it does 'work', however it distorts the original curve quite a lot, which I'd like to avoid.

    Could anyone suggest a way for me to adapt my current VOP setup to calculate the trigonometry per curve? Or an alternative to using the 'wireU' attrib? Or, alternatively, is there a different approach I should be taking altogether? Any pointers in the right direction would be much appreciated. .hip attached - thanks in advance! (N.B. This setup is loosely based on a thread I originally found here on odforce some time ago, but I cannot for the life of me find it now, so a hat tip goes to the OP if reading.) thread_tool_asset_003.hip
  15. fill mocap volume

    Hi all, I'm working with some mocap meshes that I'm trailing particles over using the 'minpos' technique in POP VOPs. I'm getting a little stuck though: when I make trails of the particles, these obviously trail behind the mocap, as they keep their birth XYZ coords. What I'd like to achieve is that the trails adhere to the surface of the body/geo in the same way that the particles do, so I end up with something like 0:24 > 0:26 of the attached video. Maybe I need to do something within a SOP Solver? Perhaps POPs in DOPs is the completely wrong way to go? Any hints or tips greatly appreciated - I've attached my .hip here should anyone have the time to take a look! Thanks! (Hints on filling the volume as in the video will also be met with rapturous thanks... I'm not even sure where to start with that one.) surface_particles_trail_test.hip
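The core of keeping the trails stuck to the body is re-projecting every stored trail point onto the closest position of the current frame's geometry each timestep, which is the same lookup minpos does for the live particles. A brute-force stand-in for that lookup in plain Python (illustrative; in Houdini you'd do this with minpos/xyzdist inside a SOP Solver so it runs every frame):

```python
def closest_point(p, cloud):
    """Return the point in cloud nearest to p (brute force, O(n))."""
    best, best_d2 = None, float("inf")
    for q in cloud:
        d2 = sum((p[i] - q[i]) ** 2 for i in range(3))
        if d2 < best_d2:
            best, best_d2 = q, d2
    return best

def snap_trail(trail, cloud):
    """Re-project every stored trail position onto the current-frame cloud,
    so the whole trail rides along with the deforming geometry."""
    return [closest_point(p, cloud) for p in trail]
```

Run per frame, the trail history moves with the mesh instead of being left behind at each point's birth position; minpos in a wrangle/VOP does the same thing far faster via an acceleration structure.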