StopTheRain

Members
  • Content count

    9
  • Donations

    0.00 CAD 
  • Joined

  • Last visited

Community Reputation

0 Neutral

About StopTheRain

  • Rank
    Peon

Personal Information

  • Name
    Konstantin
  • Location
    USA
  1. Clear cache for a ROP render

    This seems to be an ancient topic by now. I personally do not see any solution other than scripting a sequence of 'load scene, GL render frame, unload scene' commands. I need to render my water simulation mesh with the OpenGL ROP just to daily it; in my case that would be faster than software-rendering the whole frame range. Everything works, but RAM usage grows from frame to frame until Houdini crashes. And I have nothing fancy: just a Cache SOP that reads the fluid meshes, and some geometry processing after that, before the final Null SOP that is supposed to be rendered.

    Here is what I have tried to clear the cache:

    1. Using 'sopcache -c' as a post-frame script in the OpenGL ROP, or other combinations of flags on the 'sopcache' command to restrict the amount of RAM used for the cache.
    2. Setting the 'unload' flag on every SOP (including the final Null SOP) in the objects to be rendered. This can be done manually or with a Python pre-render or pre-frame script.
    3. Un-bypassing / bypassing SOPs with pre-frame and post-frame Python scripts.

    Nothing works so far. Is there a reliable non-GUI way to clear Houdini's memory cache? Any other ideas?
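The 'load scene, GL render frame, unload scene' sequence described above can be sketched as one short-lived hython process per frame, so each frame's cached geometry dies with its process. This is a minimal sketch, not a tested production setup: the scene path and ROP path are placeholders, and it assumes the hython binary is on the PATH.

```python
import subprocess

def frame_commands(hip_path, rop_path, start, end):
    """Build one hython invocation per frame, so every frame renders in
    a fresh process and all cached geometry is released when it exits.
    hip_path and rop_path are placeholders for the real scene and ROP."""
    cmds = []
    for frame in range(start, end + 1):
        code = (
            "import hou; "
            f"hou.hipFile.load({hip_path!r}); "
            f"hou.node({rop_path!r}).render(frame_range=({frame}, {frame}))"
        )
        cmds.append(["hython", "-c", code])
    return cmds

def render_all(hip_path, rop_path, start, end):
    # Run the frames one after another; each subprocess exits (and frees
    # its RAM) before the next one starts.
    for cmd in frame_commands(hip_path, rop_path, start, end):
        subprocess.run(cmd, check=True)
```

This trades a scene load per frame for a flat memory profile, which is usually acceptable for a GL daily.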
  2. Nobody responded, so I guess I will just put here what I finally came up with. Here is the final piece of the puzzle: how to use a point array attribute of 16 floats representing a transform to get correct translate, orient and scale attributes on those points, ready for instancing with the Copy SOP.

     First, a small correction to the code that creates the points and sets their attributes. The Python dictionary coming from the other application holds lists of 16 floats representing transforms, which matters for initializing the point attribute that stores them correctly:

         node = hou.pwd()
         geo = node.geometry()

         # For simplicity the dictionary consists of only 2 items.
         # This is just sample data.
         my_dict = {'foo': [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0,
                            9.0, 10.0, 11.0, 12.0, 13.0, 14.0, 15.0, 16.0],
                    'bar': [5.0, 6.0, 7.0, 8.0, 9.0, 8.0, 7.0, 6.0,
                            5.0, 4.0, 3.0, 2.0, 1.0, 1.0, 2.0, 3.0]}

         name_attr = geo.addAttrib(hou.attribType.Point, 'name', '')
         m4_attr = geo.addAttrib(hou.attribType.Point, 'm4', [0.0] * 16)

         for key in my_dict.keys():
             point = geo.createPoint()
             point.setAttribValue(name_attr, key)
             point.setAttribValue(m4_attr, tuple(my_dict[key]))

     Now we have points with a 'name' attribute and an 'm4' attribute representing the 4x4 transform matrix. Here is how I get the translate, orient and scale attributes. First, import the 16-float 'm4' attribute with its type set to '16 Floats (matrix)'. Simply multiplying the point position by this matrix gives me the correct translate. To get the quaternion 'orient' attribute, I convert the transform matrix to a rotation matrix3 and convert the result to a quaternion. And finally, to get the 'scale' attribute, I use the Extract Transform VOP, which does just what its name says: it extracts translation, Euler rotation, scale and shear vectors from the transform. Now my points have all the attributes needed for instancing with the Copy SOP. Thanks for observing my humble efforts!
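For anyone who would rather compute those attributes directly in a Python SOP instead of in VOPs, the same decomposition (translate from the matrix, quaternion from the rotation part, scale from the row lengths) can be sketched in plain Python. This is my own sketch, not the poster's VOP network: it assumes Houdini's row-vector convention (translation in the last row of the 4x4) and no shear or negative scale.

```python
import math

def decompose(m16):
    """Split a row-major 4x4 transform (row-vector convention, i.e.
    translation in the last row) into translate, scale and a quaternion
    orient (x, y, z, w). Assumes no shear and no negative scale."""
    rows = [m16[0:3], m16[4:7], m16[8:11]]
    translate = list(m16[12:15])
    scale = [math.sqrt(sum(c * c for c in row)) for row in rows]
    # Normalise the rows to get a pure rotation matrix.
    r = [[c / s for c in row] for row, s in zip(rows, scale)]
    # Standard matrix-to-quaternion conversion, with the off-diagonal
    # differences transposed to match the row-vector convention.
    trace = r[0][0] + r[1][1] + r[2][2]
    w = math.sqrt(max(0.0, 1.0 + trace)) / 2.0
    x = math.copysign(
        math.sqrt(max(0.0, 1.0 + r[0][0] - r[1][1] - r[2][2])) / 2.0,
        r[1][2] - r[2][1])
    y = math.copysign(
        math.sqrt(max(0.0, 1.0 - r[0][0] + r[1][1] - r[2][2])) / 2.0,
        r[2][0] - r[0][2])
    z = math.copysign(
        math.sqrt(max(0.0, 1.0 - r[0][0] - r[1][1] + r[2][2])) / 2.0,
        r[0][1] - r[1][0])
    return translate, scale, [x, y, z, w]
```

The three results could then be written back with point.setAttribValue to 'P', 'scale' and 'orient' point attributes (Houdini's 'orient' is also ordered x, y, z, w).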
  3. OK, I am a bit closer now to what I am trying to achieve. Here is a code snippet that, when used in a Python SOP, adds points with attributes coming from my dictionary. Again, the dictionary consists of pairs of an object name and a list of 16 floats for the world transform of that object. Let's say the dictionary already exists and was imported from some other application.

         node = hou.pwd()
         geo = node.geometry()

         # For the test case the dictionary consists of only 2 items, and
         # the transforms are represented by lists of integers instead of
         # lists of floats.
         my_dict = {'foo': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16],
                    'bar': [5, 6, 7, 8, 9, 8, 7, 6, 5, 4, 3, 2, 1, 1, 2, 3]}

         name_attr = geo.addAttrib(hou.attribType.Point, 'name', '')
         m4_attr = geo.addAttrib(hou.attribType.Point, 'm4', [0] * 16)

         for key in my_dict.keys():
             point = geo.createPoint()
             point.setAttribValue(name_attr, key)
             point.setAttribValue(m4_attr, tuple(my_dict[key]))

     The next step for me is to find out how to create translate, orient and scale attributes from that 'm4' attribute and prepare the points for instancing with the Copy SOP.
  4. Hi, I am looking for ways to replicate in Houdini point instancing done in some other application. I will skip the data-importing part here, because in my situation it is clear how to do that with Python. Let's say I already have a Python dictionary with elements like 'name': transform, where 'name' is the name of the object to be instanced and transform is a list of 16 floats representing the world transform matrix of that instance. So far I have figured out how to do it at the object level. Here is my Python code for that:

         # My existing dictionary containing pairs like 'name': transform,
         # where each dictionary value is a list of 16 floats.
         my_dict = {'foo': '....', 'bar': '......', ......}

         for key in my_dict.keys():
             node = hou.node('/obj').createNode('null')
             node.setName(key)
             m4 = hou.Matrix4(1)
             m4.setTo(my_dict[key])
             node.setParmTransform(m4)

     This gives me a bunch of named nulls with the correct transforms, and I can parent my objects under those nulls. But I need the same at the SOP level: a bunch of points with 'name', a vector 'scale' and preferably a quaternion 'orient' attribute, which I can pipe into the right input of the Copy SOP for instancing. Any help on that would be much appreciated.
  5. Lacy texture from point cloud

    The animated source point cloud might be in the range of millions of points. So far I have abandoned the idea of doing the whole transformation from a set of points to a texture in the shader. Manipulating the source points in SOPs and exporting the resulting point cloud for use in the shader works better so far. Here is a static example of my source points and the texture created from them. Possible flickering, due to the fact that the source set of points is the result of a particle simulation, is a big concern now. I have not tested on a sequence yet, but some sort of accumulation using a SOP Solver would probably be necessary.
  6. Lacy texture from point cloud

    kleer001, thanks a lot for your reply. I certainly need to investigate possible use of the findshortestpath SOP for what I am trying to accomplish, though I am more after some sort of Delaunay triangulation of a randomly distributed set of points, not creating random shapes based on that set. One thing I guess I agree with you on is that this could be easier (though slower) done in the geometry context than in the shader, and then passed into the shader afterwards, particularly after seeing the tutorial on cmivfx.com called "Houdini vein work". There is lots of inline VEX code in that tutorial instead of VOP networks, but the results look somewhat closer to what I need to achieve.
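If the triangulation does move into the geometry context, one option outside Houdini's own nodes is SciPy's ready-made 2D Delaunay triangulation. This is only a sketch under the assumption that SciPy is importable from the Python build Houdini uses (not guaranteed); the function name is mine, and the returned index triplets would still have to be turned into polygons on the geometry afterwards.

```python
import numpy as np
from scipy.spatial import Delaunay

def triangulate(points_2d):
    """Delaunay-triangulate a scattered 2D point set and return the
    triangles as point-index triplets. points_2d is a sequence of
    (x, y) pairs, e.g. scattered point positions projected onto a grid."""
    tri = Delaunay(np.asarray(points_2d, dtype=float))
    return [tuple(int(i) for i in simplex) for simplex in tri.simplices]
```

Each triplet indexes into the input point list, so the result maps directly back onto the original scattered points.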
  7. Lacy texture from point cloud

    Still no luck. Here is another example, with the same data, of trying to connect points from a point cloud file in the shader. This time, instead of using the distancePointToLine VOP, the directions of two vectors are used: the first between 2 points of the point cloud, the second between one of those point cloud points and the shaded point. The final goal of these exercises is creating some cool-looking dynamic textures based on relatively sparse particle simulations. So far I am stuck at the base level of "connecting the dots". lines2.hipnc
    Hello, I am trying to create a lace-looking texture on a grid using the positions of the points scattered on that grid, instead of some UV-based noise VOP. I use the point cloud VOPs and the distancePointToLine VOP to create a texture where each point is connected to its neighbours with one line, and by changing the search radius in the pointCloudOpen VOP I can make it so that only close neighbours are connected. It works fine for 3 points, or for more than 3 points as long as they are equidistant. But I get lines overlapping sharply when I use a point cloud with a somewhat random point distribution. Could someone please suggest how to fix that, or maybe another way of connecting points other than with the distancePointToLine VOP, so that there are no sharp gaps where lines intersect? lines.hipnc
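For reference, the measurement a distance-to-line shader test boils down to can be written out in plain Python, together with a smooth minimum: blending the per-line distances with a smooth min instead of a hard min() is one common way to remove exactly the kind of sharp creases described above (this is Inigo Quilez's polynomial smooth-minimum, not anything from the .hip files; the function names and the k value are illustrative).

```python
import math

def dist_point_to_segment(p, a, b):
    """Distance from point p to the segment a-b in 3D: project p onto
    the line through a and b, clamp the parameter to [0, 1], and
    measure to the clamped closest point."""
    ab = [b[i] - a[i] for i in range(3)]
    ap = [p[i] - a[i] for i in range(3)]
    denom = sum(c * c for c in ab)
    t = 0.0 if denom == 0.0 else max(
        0.0, min(1.0, sum(x * y for x, y in zip(ap, ab)) / denom))
    closest = [a[i] + t * ab[i] for i in range(3)]
    return math.dist(p, closest)

def smooth_min(d1, d2, k=0.1):
    """Polynomial smooth minimum: behaves like min(d1, d2) when the
    values differ by more than k, but rounds off the crease where they
    are close, which hides the sharp intersections between lines."""
    h = max(0.0, min(1.0, 0.5 + 0.5 * (d2 - d1) / k))
    return d2 + (d1 - d2) * h - k * h * (1.0 - h)
```

Accumulating line distances with smooth_min instead of min, then thresholding the result, gives rounded joins where neighbour lines meet.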
  7. Whitewater Foam Mod

    Just wanted to add something to this year-old thread. The topic starter mentioned the problem where large groups of foam particles disappear suddenly and re-appear again. I have encountered the same problem with my simulation, and no recipe described here (short of sticking the particles to the surface with the GasParticleMoveToIso DOP) helps. And I have seen the same problem in whitewater simulations other people are working on. Particles in the Whitewater Solver DOP get classified into foam, spray and bubble classes based on their proximity to the surface. That surface volume is generated in the Whitewater Source SOP and gets pulled into DOPs in whiteWaterSolver DOP | fetch_surface DOP | object_merge_1 SOP. Sometimes, at least in my case, that Object Merge fails to read the "surface" volume, and as a result particles at the current time step get classified based on their proximity to the surface generated for one of the previous time steps. My fix is to generate a separate output from the Whitewater Source object consisting only of the "surface" volume, and set that as the input for the above-mentioned Object Merge SOP inside the Whitewater Solver. I am not sure I will have luck re-creating this situation in a simple example .hip file to send a bug report to Side Effects. Anyway, the fix might help somebody.