Cake Kid

Members
  • Content count

    21
  • Donations

    0.00 CAD 
  • Joined

  • Last visited

Community Reputation

4 Neutral

About Cake Kid

  • Rank
    Peon

Personal Information

  • Name
    Cake

Recent Profile Visitors

1,632 profile views
  1. Realistic paper crumpling

    Also, I'm very sorry for such a super slow reply; I didn't get an email notification about the reply on this topic. I'll make sure to turn email notifications on from now on.
  2. Realistic paper crumpling

    Hello everyone, I started working on dynamic remeshing, but the geometry often freaks out because the number of faces and edges changes dynamically. My paper crumpling results were mainly influenced by topology (there is a difference when simulating quads, triangles, and high- or low-res geometry). I simulated low-res paper first and then subdivided it to make the paper high-res. This helped me achieve more natural deformation and crumpling. Regarding the cloth parameters, it's important to have high stretch stiffness and strong bend. It is crucial to fully understand the importance of cloth parameters such as stretch, shear, weak/strong bend, damping, etc. (there's good info on the Houdini documentation website).
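    A minimal Python sketch of the subdivide-after-simulation step described above. The node paths, the cache file name, and the Subdivide parameter name are assumptions for illustration, not the exact setup from the post:

        # Read the cached low-res cloth sim back in and subdivide it for rendering.
        import hou

        geo = hou.node("/obj").createNode("geo", "paper_upres")      # assumed container
        cache = geo.createNode("file", "lowres_cloth_cache")
        cache.parm("file").set("$HIP/geo/paper_lowres.$F4.bgeo.sc")  # assumed cache path

        upres = geo.createNode("subdivide", "upres_paper")
        upres.setFirstInput(cache)
        upres.parm("iterations").set(2)   # subdivision depth; parameter name assumed
        upres.setDisplayFlag(True)
        upres.setRenderFlag(True)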
  3. I played with the rendering settings and figured out that anti-aliasing gave grey values around the edges, which the zdepth of course reads as something very close sitting right next to something very far, and that's why I got the edge artifact. Blurring the zdepth makes it worse, so the solution is to render the zDepth pass (Pz) with the following settings:
    1. VEX type: float
    2. Quantize: 32-bit float
    3. Sample filter: closest surface
    4. Pixel filter: minmax idcover (object with most pixel coverage, no filter)
    Everything else I set to default. This solved my problem.
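    A minimal Python sketch of these settings applied to a Mantra ROP's extra image plane. The ROP path is assumed, and the vm_* parameter and token names are assumptions that may differ between Houdini versions:

        # Add a Pz (depth) extra image plane with the settings described above.
        import hou

        mantra = hou.node("/out/mantra1")          # assumed Mantra ROP path
        mantra.parm("vm_numaux").set(1)            # one extra image plane

        mantra.parm("vm_variable_plane1").set("Pz")              # depth export variable
        mantra.parm("vm_vextype_plane1").set("float")            # VEX type: float
        mantra.parm("vm_quantize_plane1").set("float")           # quantize: 32-bit float
        mantra.parm("vm_sfilter_plane1").set("closest")          # sample filter: closest surface
        mantra.parm("vm_pfilter_plane1").set("minmax idcover")   # pixel filter: most coverage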
  4. Hi everyone, I can't really find any useful information about this problem. I rendered my 3D model from Houdini with these settings: VEX type: float, Quantize: 32-bit float, Sample filter: Closest Surface, no pixel filter, and everything else on default. When I try using ZDefocus in Nuke with this depth pass, I get a weird artifact around the edges. Am I rendering the depth pass the wrong way, or is there something wrong with my settings in Nuke? I'm attaching a picture of the artifact, in RGB and alpha versions. Thanks
  5. Oooooh amazing! I was overcomplicating things with those uv coordinates. Thank you so much!!!!
  6. Hi guys, I'm experiencing a problem with a noise in the mantra surface shader. As the noise is created from the object's coordinates, it looks like it's animated when the object is moving. What I need is static noise on a sphere, so that when the sphere moves, the noise looks the same. I tried plugging the UV coordinates of the object into the position input of the turbulent noise in the shader, but that resulted in artifacts at the UV seams. How should I approach the problem? I'm attaching an example file showing the animation of the texture when rendered: movingTextureNoise.hipnc
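    One common way to lock the noise to the surface is to sample it in rest space rather than with the animated P. A minimal Python sketch, with the object and node paths assumed:

        # Add a Rest SOP so the shader can read a "rest" attribute that does not
        # move with the animation, and use it as the noise position input.
        import hou

        obj = hou.node("/obj/moving_sphere")       # assumed object path
        static_geo = obj.node("sphere1")           # geometry before the animation
        rest = obj.createNode("rest", "lock_noise_space")
        rest.setFirstInput(static_geo)
        # Wire the animation/transform after this node; in the shader, bind the
        # rest attribute and plug it into the turbulent noise position instead of P.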
  7. Voronoi on deforming object

    Oh I see, the voronoi chunks still change their color when I visualise it, but they don't change shape, which is sweet! Thanks for the help!
  8. Voronoi on deforming object

    I see, thank you very much. I tried it on a sphere with an animated Mountain SOP and it worked perfectly. But the thing is, the sphere before the deformation doesn't change position; its x, y, z coordinates stay the same. With my animation, the whole model is moving and deforming at the same time, so when I scatter points on the first, still frame, they remain still until the end of the animation... I tried sticking points onto the model with UVs, which works very well: the scattered points stick to the geometry, but the distance between the points changes, so the voronoi is calculated again, causing the voronoi shapes to change. I'm attaching a .hip with my basic animation and the problem I explained: voronoi animated.zip
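    One way to keep the cells fixed is to fracture a rest pose once and then carry the static pieces along with the deforming animation using a Point Deform. A minimal Python sketch, with all node paths assumed:

        # Fracture the rest pose only, then deform the static pieces with the animation,
        # so the voronoi cells are never recomputed per frame.
        import hou

        geo = hou.node("/obj/deforming_model")     # assumed object path
        anim = geo.node("animated_geo")            # the deforming input

        # Freeze a rest pose at frame 1.
        rest = geo.createNode("timeshift", "rest_pose")
        rest.setFirstInput(anim)
        rest.parm("frame").deleteAllKeyframes()
        rest.parm("frame").set(1)

        # Scatter and fracture on the rest pose.
        scatter = geo.createNode("scatter", "fracture_points")
        scatter.setFirstInput(rest)
        fracture = geo.createNode("voronoifracture", "rest_fracture")
        fracture.setFirstInput(rest)
        fracture.setInput(1, scatter)

        # Carry the static pieces along with the animation.
        deform = geo.createNode("pointdeform", "carry_pieces")
        deform.setFirstInput(fracture)   # pieces to deform
        deform.setInput(1, rest)         # rest point lattice
        deform.setInput(2, anim)         # animated point lattice
        deform.setDisplayFlag(True)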
  9. Hi guys, I'm facing a problem in Houdini. I've been given an animation of a man, and I need to create skin fractures on him and blow parts of his skin away. As it's a deforming object, when I voronoi a piece of his skin, the voronoi pieces don't stay the same through the whole animation. I've been wondering and researching how else I could make pieces of his skin fly away, but I am a bit stuck. I have the animation in both low res (6,000 polys) and high res (100,000). We aim to render the high-res model, meaning I need to fracture the high-poly one somehow without having to wait a week for the simulation to finish. Do you guys have any tips on how to tackle this problem? I'm attaching a .hip with an animated sphere, showing that after I voronoi it, the voronoi freaks out on some frames: voronoi_on_animation.hipnc
  10. Realistic paper crumpling

    Well, everything's possible. This was part of my short innovation project at uni, and in the end I managed to make the cloth act like paper through loads of experiments with the cloth parameters. I figured out that the topology is the most important factor when achieving an effect like this, so I used topology with triangles only, simulated the super low-res version first, and after the simulation subdivided it to get higher-res geometry. It's not dynamic remeshing, but for some purposes it's enough (it worked very well for me; I'm attaching the result of my project). I am still researching and developing the dynamic remeshing, but it's a challenge, because there isn't much information about how to achieve it. At least none that I know of.
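    A minimal Python sketch of the triangles-only preparation step mentioned above, done with a Divide SOP before the cloth sim. The object and node paths are assumptions:

        # Triangulate the low-res sheet before simulating; the Divide SOP's default
        # Convex Polygons setting (maximum of 3 sides) turns the quads into triangles.
        import hou

        obj = hou.node("/obj/paper")            # assumed object path
        grid = obj.node("grid1")                # assumed low-res quad grid
        tris = obj.createNode("divide", "triangulate_for_sim")
        tris.setFirstInput(grid)
        tris.setDisplayFlag(True)
        # Feed this triangulated geometry into the cloth simulation, then subdivide
        # the cached result afterwards, as described above.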
  11. I see, thank you very much
  12. Hello, I've created a cloth simulation in Houdini and I would like to compare the time needed to simulate the cloth with different parameters. I know it is possible to check the last cook time when I middle-click on a node, but that only shows the time for a certain frame. Is there a way to see the total time needed to simulate the cloth? Also, when I am caching these simulations I usually cache with a ROP output and then read the files back in. Is it possible to check the total caching time somewhere? Thank you very much for your time. Have a nice day
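    One way to get a total number is to drive the cache from Python and time the whole ROP render call. A minimal sketch, with the ROP path and frame range assumed:

        # Time a full cache write: render the ROP over the whole frame range and
        # report the elapsed wall-clock time.
        import time
        import hou

        rop = hou.node("/obj/cloth_sim/rop_geometry1")   # assumed ROP Output Driver path
        start_frame, end_frame = 1, 240                  # assumed frame range

        t0 = time.time()
        rop.render(frame_range=(start_frame, end_frame))
        print("Total caching time: %.1f seconds" % (time.time() - t0))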
  13. The solution is to just turn on the "Use Deforming Geometry" option on the RBD Fractured Object (box_object1 in the AutoDopNetwork): http://www.sidefx.com/index.php?option=com_forum&Itemid=172&page=viewtopic&p=141895#141895
  14. Hey guys, I've been trying to solve a problem I have with RGB applying to voronoi pieces which are driven by particles. In the beginning I create a box and voronoi it, create a centroid point for each piece, and move the points to drive the movement of the voronoi pieces with a Copy node. Now, the effect I'm trying to achieve is that when a particle turns red, the particle stops moving and the voronoi piece that the particle is driving stops as well (its velocity becomes 0). The problem I have is that if the particles are not moving (it's just a static voronoi cube), the switch to RGB works perfectly. But as soon as I add curl noise or any other movement to the particles, the RGB simulation messes up. You can see that in my .hip file when you go to box_object1 - popnet1 and visualise the "BROKEN_SIM" node (which makes the particles stop when they turn red). If you visualise CORRECT_SIM, everything works just fine. Could it be that the rest position takes the frame 1 position of the voronoi and then messes up because the voronoi has moved? Does anyone have an idea what I could be missing? dropFreakingOut.hipnc
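    A minimal Python sketch of the stop-when-red behaviour described above, done with a POP Wrangle. The network path, the snippet parameter name, and the red threshold are all assumptions:

        # Create a POP Wrangle that zeroes a particle's velocity once its colour
        # reads as red, so the copied voronoi piece it drives stops with it.
        import hou

        popnet = hou.node("/obj/box_object1/popnet1")    # assumed POP network path
        stopper = popnet.createNode("popwrangle", "stop_when_red")
        stopper.parm("snippet").set("""
        // assumed rule: treat a mostly-red Cd as the stop trigger
        if (@Cd.r > 0.9 && @Cd.g < 0.1)
            v@v = {0, 0, 0};
        """)
        # Wire this node into the particle stream before the solver's output.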
  15. Hey guys, I'm trying to create an effect where spheres move out from the centre and, after a certain frame, fall to the ground one after another. I've created a popnet which drives the movement of the spheres. I've tried to achieve the falling of the spheres in two ways:
    1. Force and collision in the popnet - works nicely, but the rotation looks funny when colliding with the ground, and I can't come up with a way to make the spheres fall one after another (so they don't all hit the ground at the same time).
    2. Rigid bodies for the spheres - I can't find a way to mix the velocity from the pop with the rigid bodies. What I did was put down a Switch node, and after a given frame the pop-driven spheres become invisible and the RGB spheres become visible, but I suppose that is a very nooby way of doing it.
    There is a visible difference between these two approaches, because the popnet-driven spheres fall down nicely and the fall looks more natural, while with the RGB spheres you can see when the movement is switched. Is there a way to take the velocity from the popnetwork for each particle driving a sphere and blend it with the RGB simulation somehow? Or in other words, could I make the rigid body simulation of the spheres look like the one I created with the popnet? Sorry if there already is a topic explaining this problem, but I wasn't able to find it yet. Thank you RGB_POP.hipnc
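    A minimal Python sketch of the frame-based switch described in point 2 above. The node paths and the switch frame are assumptions:

        # A Switch SOP that shows the pop-driven spheres until a given frame,
        # then flips to the rigid-body branch.
        import hou

        geo = hou.node("/obj/spheres")                   # assumed object path
        switch = geo.createNode("switch", "pop_to_rbd")
        switch.setInput(0, geo.node("pop_spheres"))      # pop-driven spheres
        switch.setInput(1, geo.node("rbd_spheres"))      # rigid body spheres
        switch.parm("input").setExpression("$F < 50 ? 0 : 1")   # assumed switch frame
        switch.setDisplayFlag(True)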