
Ankit Pruthi

  1. Constraint woes - pinning two different packed objects together

    Thanks for the example, Pavel! This does exactly what I was looking for. Houdini really does need a more straightforward way of doing this, though.
  2. Alright, so I have been through plenty of posts on using constraint networks with packed RBD objects, but I still can't quite find a way to achieve this simple task. I have two packed RBD objects (pre-fractured): one has animation and is set as a deforming static object, and the other is an active object. I need to pin a few of the pieces of the active object to the deforming one. Glue and hard constraints work well when the static object isn't deforming, but on deforming geometry the constrained objects stay at their original locations and do not follow the animated points. I've also tried a cone constraint, which does make the active object follow the animation, but it is somewhat unstable: the pieces try to stick but wobble all the way through. I'm not sure what the correct way to do this is. Any insights from those of you who have tried this would be of great help!! GluePinTest.hipnc
  3. Scene transformations and rendering volumes

    Yes, that makes complete sense. It also explains why a volume that is about 1,000 metres in size needs a density on the order of 0.001 before it starts to look different. So, if I understand correctly, it depends on how many units the ray travels in camera-space coordinates. Now, what would be the appropriate way to convert a camera imported via Alembic from centimetres into metres? Something that works only on the x, y, and z translations and does not affect the scale? I'm not even sure that makes sense to do. For now, the workaround is scaling the density up or down by the same factor used on the camera.
  4. Now, I've noticed this on a couple of projects. If we bring a camera exported from other software, such as Maya, into Houdini and scale it down by a factor of 100 to convert centimetres to metres, volumes render very, very dense, almost as if the density itself had been multiplied by 100. What is the reason for this behaviour? I've noticed the same thing happen when there are no transforms anywhere but a transform is applied to the volume inside a SOP. The attached images below show the difference.
  5. Using biased techniques for GI (the older way)

    I've attached a simple file here. It contains a skylight, portal geometry, and a box with cutouts. The GI light in the scene generates the photon maps (the distance threshold is set to 0 to force it to use only photons). I hope this is the kind of example you were looking for. lightCacheTest.hip
  6. Hi all, I've been trying to reduce render times in Mantra 12.5 as much as possible without worrying about physical correctness. PBR gives really good results at around 20-30 minutes per frame, but I was hoping to find a solution with render times of 4-5 minutes at most. I noticed that the GI light works only when BSDFs are connected to the shading networks. I wanted to know whether it is possible to get GI working with shaders that implement illuminance loops, etc. (the old style), without having to use physically correct shading. What would the Mantra workflow be for this kind of rendering? Does it require coding custom light shaders that write out or look up a point cloud and read the illuminance from it? If so, what would be the procedure? I'm completely clueless on this one and shooting arrows in the dark at the moment. Any direction would be really helpful.
  7. Whenever I try to use a GI light in this particular scene, Mantra crashes a few seconds after it starts generating the photon map. The file simply contains a lot of geometry nodes pointing to different OBJ files, with shaders assigned to them. Since this happens only with this file, and the OBJs are absolutely fine, I'm guessing something is going wrong in the shaders, but for the most part I'm clueless. UPDATE: GI works correctly once all the shaders are disconnected, so it probably is an issue with a shader somewhere. fruit_v2.hip
  8. Hi everyone, just today I was working on a FLIP fluid simulation and tried caching it. These are the steps I followed: add a File node after the Gravity node, set it to write the simulation to disk, and turn file compression on. Then I pressed play and let it cache the sim to the specified directory. After that, I set the node to read mode. The issue, I believe, is that the whole DOP network is still evaluated and calculated even though I am loading the sim from disk. So what's the correct way to load it? I went through the help file, and it seemed like the right thing to do, but this is the first time I've tried caching, so I'm probably wrong. It would be really helpful if someone could give a basic overview of the process. Thanks for your time!!
  9. Hey guys, recently I started learning Houdini because I need to make some forests for a personal project (it may sound noobish, but it's not, and it will probably take a long time to make). I've already made a few different plants and trees in Blender for it. On to the question: I saw the "paint by culling" tutorial on the Old School blog on sidefx.com. That method works by deleting the surface after the Scatter SOP scatters points, so it appears as if you are painting points onto the surface and they are not randomly regenerated each time a stroke is made. It works really well in conjunction with point instancing to paint things like grass onto low-poly surfaces. However, I want to use a similar technique to paint instanced trees onto detailed geometry. The issue is that with heavy geometry the method is slow: my computer starts to crawl, since deleting high-polygon meshes is expensive. Is there any other way of placing points on geometry so that they are not randomly regenerated every time someone paints? What I'm trying to achieve is this: if someone paints points, they must stay where they are; painting over them again should add more points; and, if needed, they can be moved around or deleted. Any suggestions or hints on what direction I should head in? I really wish the Scatter SOP didn't redistribute points randomly each time. I believe there are other ways of generating and distributing points that I'm completely unaware of at the moment. I did try looking through the forums and asking friends but couldn't find much on this; most methods I came across still placed points randomly. Thanks for the help!!
  10. Hey, sorry for not replying earlier, but thanks for the quick response. This worked like a charm, and using the shell was twice as fast at generating all the surfaces!
  11. Here goes my first post on this forum. I've started learning Houdini recently and I am in love with it so far. Before I get to the actual problem, I want to say that I have searched the help file and the forum but couldn't find a solution. So here is the question: is it possible to use the command line to simulate dynamics in Houdini 11 Apprentice? I was trying out FLIP fluids the other day, and while experimenting I ended up generating very heavy geometry while surfacing. The issue I'm facing is that, since it's a very heavy mesh, I run out of RAM midway (I have only 4 GB), and then the system starts using swap and crawls. I realized that Houdini takes around 2.5 GB of RAM just to load the scene. So I was wondering whether there is a way to generate the surface from the command line so I can finish it without having to open the scene? In fact, not just surfacing, but any kind of DOPs and POPs, too. If not, I guess I'll have to make do with fewer particles and a much coarser mesh, or wait for hours while my system crawls, begging for mercy. Thanks for your time!!
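
The volume-density behaviour discussed in posts 3 and 4 follows directly from the Beer-Lambert law: the light a ray loses depends on the product of density and distance travelled, so scaling a scene up by 100 (centimetres instead of metres) makes every ray 100 times longer and requires the density to shrink by the same factor to render identically. A minimal sketch of that relationship (plain Python, not the Houdini API; the numbers are illustrative, and the volume is assumed homogeneous):

```python
import math

def transmittance(density, distance):
    """Beer-Lambert law: fraction of light surviving a straight
    ray of the given length through a homogeneous volume."""
    return math.exp(-density * distance)

# A 10-unit-deep volume in a scene authored in metres.
t_metres = transmittance(0.1, 10.0)

# The same scene authored in centimetres: every ray is 100x longer,
# so the density must be divided by 100 to get the same look.
t_centimetres = transmittance(0.1 / 100.0, 10.0 * 100.0)

print(t_metres, t_centimetres)  # identical transmittance
```

This is why scaling the density by the inverse of the scene/camera scale factor, the workaround described in post 3, is a legitimate fix rather than a hack.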
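The caching workflow asked about in post 8 boils down to a write-once, read-many pattern: cook each frame at most once, and afterwards serve it from disk without evaluating the network. A minimal sketch of the pattern itself (plain Python, not the Houdini API; `simulate` and the file naming are made-up stand-ins for illustration):

```python
import os
import pickle

def simulate(frame):
    # Stand-in for an expensive DOP cook (hypothetical).
    return {"frame": frame, "particles": frame * 1000}

def cached_frame(frame, cache_dir):
    """Write-once / read-many: cook a frame only if no cache file
    for it exists yet; otherwise load the result from disk."""
    path = os.path.join(cache_dir, "sim.%04d.pkl" % frame)
    if os.path.exists(path):            # "read" mode: no cook at all
        with open(path, "rb") as f:
            return pickle.load(f)
    data = simulate(frame)              # "write" mode: cook once
    with open(path, "wb") as f:
        pickle.dump(data, f)
    return data
```

The symptom described in the post (the network still cooking in read mode) means the read step is not short-circuiting like the `os.path.exists` branch above; in that situation the thing to check is whether the cache is actually being loaded upstream of the expensive nodes.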
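The point-painting problem in post 9 is really a determinism problem: if each stroke scatters its points from a fixed seed and the results are accumulated rather than regenerated, painting over an area adds points without moving the existing ones. A minimal sketch of that idea (plain Python, not Houdini; `PaintedPoints` and the 2D unit-square "surface" are hypothetical stand-ins):

```python
import random

def scatter(seed, count):
    """Deterministic scatter: the same seed always yields the
    same points on a 2D unit square."""
    rng = random.Random(seed)
    return [(rng.random(), rng.random()) for _ in range(count)]

class PaintedPoints:
    """Accumulates points stroke by stroke; earlier strokes are
    never regenerated, so painted points stay where they are."""
    def __init__(self):
        self.points = []

    def stroke(self, seed, count):
        # Each stroke appends deterministically scattered points.
        self.points.extend(scatter(seed, count))
```

The design choice is to treat each stroke as an independent, seeded scatter and only ever append, which is exactly the behaviour the post asks for: old points stay put, new strokes add more.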