
substep

Members
  • Content count

    85
  • Joined

  • Last visited

  • Days Won

    2

substep last won the day on February 18

substep had the most liked content!

Community Reputation

15 Good

About substep

  • Rank
    Peon

Personal Information

  • Name
    kevin

Recent Profile Visitors

2,015 profile views
  1. Cool, although the point of the slate theme was to remove the orange accent. That is literally the best part of it. Did you think it was a mistake?
  2. I'm blown away that this is becoming a popular theme, especially 4 years after its creation. Glad to know it's working well for H16. Maybe SE will adopt it and make it standard...? Cheers!
  3. You still need to tell Houdini to use the theme. It's called 'slate', and will be next to 'Dark' and 'Light'.
  4. Wow, this is really old, haha. Files go here: C:\Users\username\Documents\houdini15.0\config
  5. Is sesinetd running in the Activity Monitor, under 'All Processes'? Did you run the installer as root?
  6. Everything everyone has said is spot on. Animated static objects basically pop into place each frame; there's no interpolation. It may help to append a Time Warp SOP to your cache, with 'Integer Frames' unticked and $FF instead of $F in the frame expression, then raise the substeps on the DOP network itself. That way you're at least feeding interpolated data into the static object, so the substepping will be more accurate. Also, scene scale is very important, and obviously animated static objects have no mass, so a really big dynamic chunk hitting a really small animated static object is going to look really weird in regards to mass. Also try not using the Static Solver: just use an RBD Object or Fracture Object, turn off 'Create Active Object', turn on 'Use Deforming Geometry', and feed that through the Bullet Solver with all the active objects.
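The $FF trick can be pictured in plain Python (a toy sketch, not Houdini code; the cache dict and function name are made up): with integer-frame caching alone, the object jumps between whole frames, while interpolating at fractional frames gives the substeps smooth in-between positions.

```python
import math

# Toy stand-in for a per-frame cache: frame number -> point position.
# (Made-up data; a real .bgeo cache holds full geometry, of course.)
cache = {1: (0.0, 0.0, 0.0), 2: (1.0, 0.0, 0.0), 3: (3.0, 0.0, 0.0)}

def sample_cache(cache, frame):
    """Linearly interpolate a position at a (possibly fractional) frame.

    This is roughly what $FF through a Time Warp SOP buys you: a substep
    at frame 1.5 sees an in-between position instead of the frame-1
    position popping straight to frame 2.
    """
    lo = math.floor(frame)
    hi = min(lo + 1, max(cache))   # clamp at the last cached frame
    t = frame - lo
    a, b = cache[lo], cache[hi]
    return tuple(x + (y - x) * t for x, y in zip(a, b))

print(sample_cache(cache, 1.5))  # (0.5, 0.0, 0.0)
```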
  7. I almost always have an attribute node (removing unnecessary attributes) right before any ROP outputs, and in some cases a group node, removing excess groups. I usually use .bgeo.gz to save on disk space. File sizes can get out of hand fast without compression IMO.
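To see why compressed caches pay off, here's a toy illustration in plain Python (the "geometry" is fake ASCII point data, purely to show the effect; real .bgeo files are binary, and Houdini handles the compression for you when you use the .gz extension):

```python
import gzip

# Fake, repetitive "point data" standing in for a geometry cache.
raw = b"".join(b"P %.6f 0.0 0.0\n" % (i * 0.001,) for i in range(10000))

compressed = gzip.compress(raw)
# Structured, repetitive data like this compresses very well, which is
# why a .bgeo.gz sequence is so much smaller than raw .bgeo on disk.
print(len(raw), len(compressed))
```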
  8. Some new tech coming from NewTek. I know some of you guys are LightWave artists also. This is some rather interesting software for resculpting/reshaping a geometry cache. What, what, what? Check out the videos. It seems pretty cool. Anyone have any experience with it yet? https://www.lightwav...m/chronosculpt/
  9. Maybe there's a better way, but since Voronoi creates inside and outside primitive groups, you could separate the two groups into their own geometry: simply Object Merge it into a new geometry object, delete the outside group, and then assign the object light to the inside-face geometry.
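The inside/outside split above boils down to filtering primitives by group membership. A toy plain-Python sketch (the dicts are simplified stand-ins for real primitives, though the group names match what Voronoi Fracture creates):

```python
# Simplified "primitives", each tagged with the group Voronoi put it in.
prims = [
    {"id": 0, "group": "inside"},
    {"id": 1, "group": "outside"},
    {"id": 2, "group": "inside"},
]

# Keep the inside faces for the light; the outside group gets deleted.
inside = [p["id"] for p in prims if p["group"] == "inside"]
print(inside)  # [0, 2]
```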
  10. Wow! Yes, yes, and more yes.
  11. Okay, so I've managed to clean up/get rid of a few nodes, but I'm starting to rethink the camera thing. Without admitting defeat, maybe it is best to have a separate camera for the shaking and just pass all the camera parms to it. That way it's still really easy to animate the original camera and see the shake side by side. Anyone else have any thoughts? Here's the camShake asset I'm working on, slightly updated. I'm still having issues with the parameter interface and CHOPs, though. It seems that when I add/modify the parameter interface on my subnet, the Fetch nodes within CHOPs stop updating. CameraShake_v02.hip
  12. Hey all, I'm trying to get more into both CHOPs and creating HDAs in general. I thought creating a camera shake asset would be a good place to start, as it can begin very simple and gradually get more complex. I have things off to a good start, I think (with my limited experience), but I'm hitting some walls trying to make everything as procedural as possible. Right now, I take the input camera's translations and rotations, apply the CHOPs, then export the result to a different camera. I'd like to not have to use a different camera, but rather export/override the original camera animation with the updated shaky animation, and this is where I'm stuck. I know I can easily right-click on a channel and choose 'Create Clip', which creates the CHOP net with a Channel CHOP and the corresponding channel(s). I can't really figure out how it reads the original camera's animation into the Channel CHOP's 'values'; it seems like it just copies the keyframes into it. So I guess I'm just looking for the best way to make this more procedural. Also, sometimes when I change or modify the parm interface, it makes my CHOP net no longer function correctly, and I have to click through each node, one by one, for them to update. I must be doing something wrong, or is this normal behavior? If I change the display name for a parm, everything stops working; then I click through each CHOP, and it works again. I don't think I'm bringing things into CHOPs correctly. A little explanation to clear things up would be super helpful and much appreciated! Thanks! CameraShake_v01.hip
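Conceptually, the shake network samples the camera channels, adds noise, and exports the result back out. A rough plain-Python sketch of that idea (the function name, amplitude, and seed are all made up; in Houdini this would be Fetch/Noise/Export CHOPs, not Python):

```python
import random

def add_shake(samples, amplitude=0.1, seed=7):
    """Add bounded, repeatable random jitter to a channel's samples."""
    rng = random.Random(seed)  # fixed seed keeps the shake deterministic
    return [s + rng.uniform(-amplitude, amplitude) for s in samples]

# Stand-in for the camera's tx channel, sampled once per frame.
tx = [f * 0.5 for f in range(5)]
shaken = add_shake(tx)
print(shaken)
```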
  13. Thanks Szymon, that makes a lot of sense. I think I need to research Alembic a bit more. I wonder why they decided to keep Alembic self-contained, as opposed to a file sequence. This gets me pointed in the right direction, though. Best!
  14. sop solver works great for this stuff. simple_gluetest_sopsolver.hip
  15. Hey all, I've been doing a bit of caching lately, both bgeos and Alembic, and had a few questions about where other people were using them in the pipeline. Alembic obviously works great for getting data in/out of other apps, but for just Houdini, is it better to use bgeos? Also, is it better to render from bgeos instead of Alembic files? If I have a couple of 80 GB Alembic files in a scene and I shoot that over to a render farm, is each render node loading the entire Alembic file per frame? I'm fairly certain that a render node will keep the scene open in between frames, so does it only load them once? It seems a bgeo sequence would be better, since it only loads what it needs per frame, per render node. Thanks for any info!
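For what it's worth, the per-frame difference is easy to picture: a bgeo sequence is one file per frame, so each render node only opens the frame it needs, whereas a single Alembic archive packs the whole frame range into one file. A toy stand-in for Houdini's $F4 expansion (plain Python; the pattern and file names are illustrative only):

```python
def frame_path(pattern, frame):
    """Expand a $F4-style token into a zero-padded frame number."""
    return pattern.replace("$F4", "%04d" % frame)

# One small file per frame, instead of one 80 GB archive for the shot.
print(frame_path("cache.$F4.bgeo.gz", 12))  # cache.0012.bgeo.gz
```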