ch3

Members
  • Posts

    104
  • Joined

  • Last visited

  • Days Won

    2

ch3 last won the day on January 27 2016

ch3 had the most liked content!

About ch3

  • Birthday 03/18/1981

Contact Methods

  • Website URL
    http://www.ch3.gr

Personal Information

  • Name
    Georgios
  • Location
    NYC

Recent Profile Visitors

3,624 profile views

ch3's Achievements

Newbie (1/14)

33 Reputation

  1. In short: how can I calculate the transformation matrix, or just the rotation, similar to how a geometry constraint/rivet works, where you define a few reference points to glue one geo to an animated one? Long version: I want to procedurally animate the preroll of some packed RBDs by comparing frames 1 and 2 of the simulation and projecting the objects backwards in time using that offset. It's a pre-broken sphere, so for translation I find the average position of all points in both frames to calculate the per-frame offset, which works great. [@P -= offset * backwardFrame;] The rotation part isn't as accurate though. I believe multiplying a rotation matrix by a scalar [@P *= rotMatrix * time] doesn't produce the desired results, so I am trying to calculate an axis and a rotation value that describe the difference between these two frames. I currently pick the position of one random but corresponding point in both geometries, relative to their center, and cross the two vectors to find the axis. Then I use acos(dot()) to get the angle between them. It almost works, but not 100%, and picking a different point changes the result a bit. I guess that's happening because I need to take more than one point of the two geometries into consideration. thank you
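The single-point axis/angle approach in the post can be made less sensitive to which point you pick by working with displacements instead: for a pure rotation about a fixed axis, every displacement p' - p is perpendicular to that axis, so crossing two displacements gives the axis directly. A minimal Python sketch of that idea (plain lists instead of VEX vectors; it assumes positions are already centroid-relative and the motion between the two frames is a pure rotation):

```python
import math

def sub(a, b):   return [a[i] - b[i] for i in range(3)]
def dot(a, b):   return sum(a[i] * b[i] for i in range(3))
def cross(a, b): return [a[1]*b[2] - a[2]*b[1],
                         a[2]*b[0] - a[0]*b[2],
                         a[0]*b[1] - a[1]*b[0]]
def norm(a):
    l = math.sqrt(dot(a, a))
    return [x / l for x in a]

def axis_angle(p_before, p_after):
    """Recover the rotation axis and signed angle between two frames from
    two corresponding centroid-relative points. Assumes a pure rotation;
    the two displacements must not be parallel (pick well-separated points)."""
    d0 = sub(p_after[0], p_before[0])
    d1 = sub(p_after[1], p_before[1])
    # each displacement is perpendicular to the rotation axis,
    # so their cross product points along the axis
    axis = norm(cross(d0, d1))
    # project one point onto the plane perpendicular to the axis...
    p, q = p_before[0], p_after[0]
    pp = sub(p, [dot(p, axis) * a for a in axis])
    qq = sub(q, [dot(q, axis) * a for a in axis])
    # ...and measure the signed angle between the projections
    angle = math.atan2(dot(cross(pp, qq), axis), dot(pp, qq))
    return axis, angle
```

The cross product's sign ambiguity flips axis and angle together, so the pair still describes the same rotation; from there a quaternion or rotation matrix can rebuild the per-frame offset.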
  2. Thank you @Librarian for your in-depth response, I really appreciate it. After some more reading I found out about the slip on collision parameter on the solver, which helped a lot!
  3. I am simulating a beer overflow and somehow I can't get all the particles to slide down the collision geometry, which creates artifacts when meshing the simulation. Friction is set to 0 for both the flipObject and the staticObject (not animated), stickOnCollision on the flipSolver/volumeMotion is off, and I have also set collision detection to none under flipSolver/particleMotion. Any ideas of what to try? Thank you. (All the blue particles on the side of the pint don't move much towards the end of the sim.)
  4. To use a sequence of .rs proxies, you just have to manually set the right frame on each instance in the s@instancefile attribute. I use the code below to randomly pick a proxy out of a sequence of variations.

        // ../file_<v>.<f>.bgeo.sc
        string version = sprintf("v%03d", chi("version"));
        string frame = sprintf("%04d", int(fit01(rand(i@ptnum), chi("fMin"), chi("fmax")+1)));
        s@instancefile = chs("path");
        s@instancefile = re_replace("<v>", version, s@instancefile, 0);
        s@instancefile = re_replace("<f>", frame, s@instancefile, 0);

     I guess if you want to cycle an animation, you can modify the VEX to something like this:

        // ../file_<v>.<f>.bgeo.sc
        string version = sprintf("v%03d", chi("version"));
        int f = int(@Frame + (rand(i@ptnum) * 100));
        f = ((f - chi("fMin")) % (chi("fmax") - chi("fMin"))) + chi("fMin");
        string frame = sprintf("%04d", f);
        s@instancefile = chs("path");
        s@instancefile = re_replace("<v>", version, s@instancefile, 0);
        s@instancefile = re_replace("<f>", frame, s@instancefile, 0);
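The wrap-around arithmetic in the cycling version can be checked in isolation. A small Python sketch of the same modulo logic (fMin/fMax values are hypothetical; the upper bound is exclusive, as in the VEX above):

```python
def cycle_frame(frame, f_min, f_max, offset=0):
    """Wrap a global frame number into the [f_min, f_max) range of a
    proxy sequence; an optional per-point offset de-syncs the copies.
    Mirrors: f = ((f - fMin) % (fMax - fMin)) + fMin."""
    f = frame + offset
    return ((f - f_min) % (f_max - f_min)) + f_min
```

So a 24-frame sequence starting at frame 1 loops back to 1 on frame 25, and each point's random offset shifts where in the loop it starts.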
  5. Alembic packed geometry has a pivot intrinsic which sits on the centroid of each primitive. Is there a way to transfer an object from Maya and maintain the pivots that were set there? thank you
  6. In an RBD simulation I want to use both glue constraints and the i@active attribute to time my destruction, but it seems the two can't work together. Has anyone come across this before and found a solution? I have an example file that illustrates the problem. thanks georgios rbdContraintsActive.hip
  7. I think they have since removed the installer, which made things easier. But just for the record, and for my future self who will probably forget again: 1. Install Python (https://www.python.org/). 2. From cmd go to C:\Python27\Scripts\. 3. pip install reportlab (or any other package you may need). 4. Copy C:\Python27\Lib\site-packages\reportlab to <userDocuments>\houdini16.0\python2.7libs (or link via an env variable).
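An alternative to copying the package, which I believe also works, is appending the install location to sys.path from a Python SOP or a startup script, so the interpreter inside Houdini can find it where pip left it. A sketch (the path here is a hypothetical example; adjust the version and documents folder for your setup):

```python
import os
import sys

# Hypothetical location of the user python libs folder Houdini scans.
user_libs = os.path.join(os.path.expanduser("~"), "Documents",
                         "houdini16.0", "python2.7libs")
if user_libs not in sys.path:
    sys.path.append(user_libs)  # after this, `import reportlab` can resolve from here
```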
  8. I want to use an external python library to create and write out a pdf file from houdini. Maybe it's a simple thing to many, but I've been struggling for the past couple of hours, even if I've done this before!! About a year ago I managed to use reportlab, but since I installed the latest houdini (or something else changed on my computer) my python SOP fails to import the library, and I am trying to figure out how to do it again. What's the typical process for installing and importing any external library into houdini? There is also the cairo library which I would like to try. Any help will be highly appreciated. thanks
  9. So even if the shader pulls the image from the /img context, it doesn't seem to update it over time, whether it's an animated noise pattern or a changing heightfield, which is what I am trying to use it for. The frame the scene is on when I kick off the sequence render is used across all frames. Any ideas for that? thanks again
  10. Ah great, that makes total sense now. I guess it's somewhat similar to the way GLSL/OpenCL shader kernels expect all parameters to be imported a certain way. thanks a lot for the in-depth explanation.
  11. Is there a general limitation to expressions and connections within a material builder compared to promoted parameters? It seems the op: expression, or even a chs() reference to a path, doesn't work within the material builder and has to be promoted outside it. Is that normal?
  12. I have a small compositing network which I want to reference as a texture in the shader using the op: expression, i.e. op:/img/trail. Even though I've managed to make it work several times, I always find it a bit flaky and many times mantra doesn't manage to load the image, even though it may be visible in the viewport when referencing the same image operator in a uvquickshade node, or just by loading the shader. It works with a Principled Shader out of the box, but if I put the same shader inside a material builder it breaks. Is there a render attribute or something else I need to add to the shader? I understand it's better to use pre-rendered images the normal way, but I want to use dynamic heightfield SOPs as textures and ideally avoid having to write out thousands of frames in advance.
  13. I may be wrong about the rest volume, but can't you just manually make three volumes, one for each axis, and use a volume wrangle to populate the values like this? @restX = @P.x; @restY = @P.y; @restZ = @P.z; I believe this makes sense when you advect it together with density, so you have a reference to a "distorted" coordinate to drive noises with. Otherwise, using the above rest fields will be the same as using world-space P in the shader (P transformed from screen space to world space).
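To illustrate why the rest coordinate only matters once the field is advected: the pattern is looked up at the stored rest position, which stays put, while P keeps moving. A toy 1D Python sketch of that idea (noise1d is a cheap stand-in for a real noise function, the advection step is made up):

```python
import math

def noise1d(x):
    # deterministic stand-in for a real noise() lookup
    return math.sin(12.9898 * x) * 0.5 + 0.5

P = 2.0      # live position, advected every frame
rest = 2.0   # rest coordinate captured once, never advected
for frame in range(10):
    P += 0.3                     # the volume moves...
value_from_rest = noise1d(rest)  # ...but the pattern sampled at rest is stable
value_from_P = noise1d(P)        # sampling at P would make the pattern swim
```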
  14. There are many ways you can project onto a volume. The rest field is one of them, and as you mentioned it can be used as UVs. I tend to skip that step and directly use a P which has been fitted to the bounding box of the desired projection. It's easier to try this in a volumeVOP to begin with. Let's say you want to project along the Y axis between x and z values of -10 to 10. All you need to do is fit the x and z values within that range so you get a 0 to 1 value, and feed that to the UVs (st) of the texture node. You can even have a second object as input and automatically get its bounds to calculate your fit range. Now if you want the projection to be on an arbitrary axis, you will have to do some extra math to rotate the P, project, and rotate back within VOPs, or if it's easier, you can do it at the SOP level. What is important to keep in mind is that a volumeVOP operates on the voxel level, and you will never get any sharper detail than the voxel size. But once you do this, you can easily transfer the same nodes/logic onto a volume shader, which operates on rendered samples, which means you can go as sharp as your texture. Of course, if you move your camera away from your projection axis, the texture representation will get blurred along that axis. But then again, that's just one approach, and maybe there are other ways that may give you more control and better results.
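The fit-to-UV step for an axis-aligned projection is just a linear remap of two of the three position components. A Python sketch of the Y-axis example (bounds of -10 to 10 as in the post; fit mirrors the idea of the VOP of the same name):

```python
def fit(v, lo, hi):
    """Linearly remap v from [lo, hi] to [0, 1]."""
    return (v - lo) / (hi - lo)

def project_uv(p, bmin=(-10.0, -10.0), bmax=(10.0, 10.0)):
    """Project a point along the Y axis: the x and z components, fitted
    into the projection bounds, become the texture's (u, v).
    bmin/bmax are the (x, z) bounds of the projection region."""
    x, y, z = p
    return fit(x, bmin[0], bmax[0]), fit(z, bmin[1], bmax[1])
```

A point at the center of the region lands at (0.5, 0.5); swapping in another object's bounding box for bmin/bmax gives the automatic fit range mentioned above.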
  15. 2D dynamics

    What's the best way to simulate dynamics with just 2D shapes? Is it possible to use any of the existing solvers to simulate rigid bodies, but also flexible curve/polygonal shapes that respect line-to-line collisions? I've tried using the wire and the grain solver (with rest lengths on a network of lines that connect the points), but the collisions only happen at the point level, resulting in penetrations between shapes. Is there anything else I should look into, or a working example I can take a look at? thanks a lot georgios