djiki

Everything posted by djiki

  1. setPointGroup behavior

    Most of this information is not Houdini specific; it is simply the concept of programming for multi-threaded execution. The example is just meant to show how following human logic can lead to wrong results. The Detail context is single threaded, since it runs only one pass of your code and there is no mechanism to start multiple threads from code. In a newer version of Houdini you will also see a "Run Over: Numbers" method, which allows multi-threaded execution. Whether that helps depends on how Houdini's developers implemented thread synchronization in that hidden part of the code. At least, if your overall number of points is less than the thread pool block size, it should work; once the first synchronization among finished threads has happened, the remaining (still unfinished) threads can produce different results. But as I said, it depends on an implementation which is hidden from the user. Things like the thread pool block size that runs in parallel, possible synchronization barriers, and possible different algorithms for handling divergent execution paths are not known, so you cannot guarantee that such code will work in all cases. That is the reason why you should use the point number returned by the addpoint() function. Even if you know all of those internals, that doesn't mean they cannot change with every new version of Houdini, so such "dirty" code might stop working in newer versions.

    These days almost every processor has several cores, and a modern GPU has thousands of them. Splitting an execution across many of them and running in parallel usually means faster execution; how much faster depends on the nature of the problem and on code optimization for hardware-specific advantages/limitations. If you want to learn more on that topic, google CUDA, OpenCL, PTX, parallel programming algorithms, etc.
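    To illustrate the rule about using addpoint()'s return value, here is a minimal Point Wrangle sketch (the second input and the group name are just placeholders):

        // Point Wrangle, Run over Points: always address a freshly added point
        // through the index addpoint() returns, never through @ptnum/@numpt math
        vector npos = point(1, "P", @ptnum);        // position from the second input
        int newpt = addpoint(0, npos);              // index of the point we just added
        setpointgroup(0, "basePoints", newpt, 1);   // safe on any number of threads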
  2. Invalid Binary Token

    It "smells" like some disk issue, possibly lost fragments or even bad blocks but it could be also some bug from older version. Don't waste time searching details, try re-simulate scene on some other drive (or repaired) with latest version of Houdini. If that doesn't resolve the issue, then upload minimal scene which can reproduce error, here on forum. cheers
  3. Well, yes, in the new Houdini 18 GroupExpand is implemented (finally).
  4. Hehe... try this. The description is in the scene. The sensitivity is keyed for each input geometry because they are very distinct, but it works properly on all the geos you provided. keep_outer_side_examples_djiki.hipnc cheers
  5. setPointGroup behavior

    Modifying geometry inside a wrangle/VOP node has nothing to do with the node's internal local variables, which refer to the input geometry. So @numpt is the number of points of the geometry connected to the first input, and it is used to form an internal, hidden FOR loop with iterator @ptnum. Think of your code in a Point Wrangle node as code inside that hidden FOR loop. Adding or removing points will not change the values of @ptnum or @numpt inside that loop (your whole wrangle code). The function for adding a point returns the index of the newly added point, and that has nothing to do with the @ptnum or @numpt values; however, such an index can be forwarded to almost all functions which require a point number. Adding a point occurs immediately (not in a visual manner, but in the internal data structure it exists as a regular point), while removing a point does not: the remove function only marks points for deletion, which happens internally after the whole loop is finished.

    That applies to all wrangle node types with one exception. A Detail Wrangle doesn't form an internal loop of any kind, so iterator values like @ptnum or @primnum don't exist. Values of @numpt or @numprim are still valid because they reference the input geometry. Those hidden FOR loops can easily be made "visible" by comparing the code inside a Point Wrangle node:

        setpointattrib(0, "myattr", @ptnum, 1, "set");

    with code which has identical behavior but is written in a Detail Wrangle node:

        for (i@ptnum = 0; @ptnum < @numpt; @ptnum++) {
            setpointattrib(0, "myattr", @ptnum, 1, "set");
        }

    This is valid code: @ptnum doesn't exist in a Detail Wrangle as a reserved variable, so you can use it like any other variable. I chose it just for illustrative purposes, because those two snippets read identically that way. Any context-insensitive code written in a Point Wrangle node can be copy/pasted into such a loop in a Detail Wrangle node and it should work fine. For example, context-sensitive code is:

        i@myattrib = 5;

    and that will not work, because that line, if executed in point context, creates/assigns a point attribute "myattrib"; if it is executed in primitive context it creates/assigns a primitive attribute "myattrib", etc. But if it is written as a context-insensitive function call,

        setpointattrib(0, "myattrib", ptnumber, 5, "set");

    it will work no matter in which context it is executed, as long as you can provide a handle (the very first argument in most context-insensitive functions) and an index (not required in all functions). Now, consider your original code inside the Point Wrangle:

        @group_basePoints = 1;
        vector npos = point(1, "P", @ptnum);
        int newpt = addpoint(0, npos);
        setpointgroup(0, "basePoints", @ptnum + @numpt, 1);

    If you change it to be context insensitive it would look like this:

        setpointgroup(0, "basePoints", @ptnum, 1);
        vector npos = point(1, "P", @ptnum);
        int newpt = addpoint(0, npos);
        setpointgroup(0, "basePoints", @ptnum + @numpt, 1);

    Copy such code into the Detail Wrangle loop from the previous example and voilaaa... it works. Now you are probably more confused: the same code works one way and not the other. To keep track of what actually happened, modify the code like this:

        setpointgroup(0, "basePoints", @ptnum, 1);
        vector npos = point(1, "P", @ptnum);
        int newpt = addpoint(0, npos);
        setpointattrib(0, "np", @ptnum, newpt, "set");   // this is the new line
        setpointgroup(0, "basePoints", @ptnum + @numpt, 1);

    Now watch the spreadsheets of both the Detail Wrangle and the Point Wrangle node with this same code. You can see the result of the addpoint() function in that np attribute on each point.

    In the case of the Detail Wrangle, all execution is done on a single thread, the result of addpoint() is incremented each time, as expected, and that's why that last line of code works. In the case of the Point Wrangle node, the different values are a consequence of multi-threaded execution. You should always keep in mind that a Point Wrangle starts executing on multiple threads in parallel. So from the perspective of a single point on some execution thread, a single addpoint() call will always return @numpt. If you modify your code so that a single point calls addpoint() more than once, the results will be @numpt+1, @numpt+2, and so on. At the moment of execution, the result of the addpoint() function is not synchronized among the other threads; that's why each execution thread gets @numpt as the result of the first such call, @numpt+1 for the second call, etc. The first synchronizing barrier for them is at the end of thread execution, and that is the end of your code. That part is hidden from the user. So, as long as you use the returned value in the rest of your code, you are sure you are addressing the just-added point(s), no matter whether those values are the same across different threads. So the line:

        setpointgroup(0, "basePoints", @ptnum + @numpt, 1);

    will fail in each particular thread, because each thread added only one point, each thread gets @numpt as the result of addpoint(), and your code is trying to address a point with a larger index. You can change that line (only in the Point Wrangle node, just to see the result) to:

        setpointgroup(0, "basePoints", @numpt, 1);

    and it will work. BUT all of this is written only for testing the behavior, so you can see the differences in execution logic. In your real code you would NEVER use such "tricks", especially because such things depend on internal design which can vary with every new version of Houdini. As long as you stick to the rule of using the point index returned by the addpoint() function, you can be sure it will work regardless of "what is behind". So the proper line would be:

        setpointgroup(0, "basePoints", newpt, 1);

    as Skybar already pointed out. cheers
  6. I made another approach for a general solution. You can test its speed and compare it to the other methods. The algorithm used is very simple:

     1. If P1 and P2 are not neighbouring points, cancel everything.
     2. polyarr1 = polygons that share P1, and polyarr2 = polygons that share P2.
     3. Walking in the direction of P1 means finding the values present in both arrays and discarding them from polyarr1. The remaining values are the 2 polygons containing the next edge. Find the points shared between them and discard P1; the remaining point is the new one. Replace P1 with NewPoint and repeat until P2 is reached (fully closed loop) or until a termination point is reached.
     4. If full closure was already found, skip this step; otherwise walk in the direction of P2 until the second termination point is reached.

     The code generates detail attributes for the TerminationPoints (if they exist) and a detail attribute LoopClosed. Houdini's function pointprims() returns a sorted array; that fact is used to optimize finding the differing values in the two arrays. The points are marked in the group LOOP, which is promoted to edges; a sketch of the walking step is below. cheers Test scene: edges3.hipnc
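     Just as a rough sketch of step 3 (quad topology assumed; the spare parameter names p1/p2, the helper name and the TerminationPoint naming are mine, not taken from edges3.hipnc), the walk in one direction could look like this in a Detail Wrangle:

        // returns the next point of the edge loop after 'curr', coming from 'prev',
        // or -1 when a border (termination point) is reached
        int next_loop_point(int geo; int prev; int curr)
        {
            int around_curr[] = pointprims(geo, curr);   // polygons around the current point
            int around_prev[] = pointprims(geo, prev);   // polygons around the previous point

            // discard polygons shared with 'prev' (those contain the edge prev-curr);
            // what remains are the (up to) two polygons containing the next edge
            int candidates[];
            foreach (int pr; around_curr) {
                if (find(around_prev, pr) < 0)
                    append(candidates, pr);
            }
            if (len(candidates) < 2)
                return -1;

            // the point those two polygons share, other than 'curr', is the next one
            int ptsA[] = primpoints(geo, candidates[0]);
            int ptsB[] = primpoints(geo, candidates[1]);
            foreach (int pt; ptsA) {
                if (pt != curr && find(ptsB, pt) >= 0)
                    return pt;
            }
            return -1;
        }

        int P1 = chi("p1");                        // assumed spare parameters
        int P2 = chi("p2");

        int prev = P2, curr = P1;
        setpointgroup(0, "LOOP", prev, 1);
        setpointgroup(0, "LOOP", curr, 1);

        for (int step = 0; step < npoints(0); step++)   // hard limit against malformed input
        {
            int next = next_loop_point(0, prev, curr);
            if (next < 0)   { setdetailattrib(0, "TerminationPoint1", curr, "set"); break; }
            if (next == P2) { setdetailattrib(0, "LoopClosed", 1, "set"); break; }
            setpointgroup(0, "LOOP", next, 1);
            prev = curr;
            curr = next;
        }

     The second direction (outward from P2) is walked the same way when the loop did not close, and the sorted output of pointprims() can be exploited to replace find() with a merge-style comparison, as mentioned above.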
  7. Yeah. All your examples are grid based, so I posted a "grid specific" solution only. Even if the point indices are not the default ones, it is trivial to regenerate them by converting the quads into rows and columns. You can even create an id attribute if you have to keep the point order, and use that @id in the math instead of @ptnum. For a generalized solution on mixed quad-triangle topology: how does an edge sequence which separates, let's say, several quads and then reaches some triangle continue? Does it stop at that vertex of the triangle? Does it continue in one direction or the other? What is the criterion?
  8. Am I missing something? If your input geometry is always "grid like" and the grid is formed of ROW x COL points, then for any point on that grid you "know" to which row/column that point belongs:

        Pcol = @ptnum % COL;
        Prow = trunc(@ptnum / COL);

     So, for both of your input points you know their columns and rows; you only have to detect whether they share the same row or the same column. If they have the same column, select all points from that column, or if they share the same row, select the points from that row, and then promote that point group to an edge group. To optimize that, do not iterate through all points on the grid. Use a Detail Wrangle instead and iterate through only the ones you need:

     1. In the case of the same row, your loop will be:

        int startindex = commonRow * COL;
        for (int n = startindex; n < startindex + COL; n++) {
            // move to group, mark an attribute, or whatever
        }

     2. In the case of the same column:

        int startindex = commonCol;   // the column index is the same as the start point index
        for (int n = startindex; n < ROW * COL; n += COL) {
            // move to group, mark an attribute, or whatever
        }

     So all of this is done in a single wrangle node; a complete sketch is below. edges2.hipnc
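     As a compact sketch of the whole thing in one Detail Wrangle (cols/rows and the two picked points are assumed spare parameters; edges2.hipnc may organize this differently):

        int COL = chi("cols");      // points per row
        int ROW = chi("rows");      // number of rows
        int p1  = chi("pt1");       // the two picked points
        int p2  = chi("pt2");

        int col1 = p1 % COL, row1 = p1 / COL;
        int col2 = p2 % COL, row2 = p2 / COL;

        if (row1 == row2)
        {
            // same row: COL consecutive point numbers
            int start = row1 * COL;
            for (int n = start; n < start + COL; n++)
                setpointgroup(0, "loop", n, 1);
        }
        else if (col1 == col2)
        {
            // same column: every COL-th point, starting at the column index
            for (int n = col1; n < ROW * COL; n += COL)
                setpointgroup(0, "loop", n, 1);
        }
        // the point group "loop" can then be promoted to an edge group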
  9. Delete faces occluded by shadow?

    Houdini allows you to sort primitives by many different criteria and there are a lot of tools for filtering different types of data and isolating a specific group of primitives. However, the methods and techniques you will use depend very much on your geometry. If you found the "illumination" method to be the one that solves your problem, well, yes, you can do that. Forget about lights: you have the Ray SOP. Connect, let's say, an XY grid to the first input and the torus to the second. Make sure the normals (or your custom attribute) of the grid point toward the torus and make sure the grid has enough divisions. That grid will be your "light". DeleteByIllumination.hipnc
  10. Hold Attribute Value with Solver

    This is a modified, working version. You have to initialize some value before the solver node, and then inside the solver test only the unaffected (still initialized) values, so the ones that were already affected stay intact; the general pattern is sketched below. AIL_FX_cubeAnimatedVertexColor_v02.hiplc
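    A rough sketch of that pattern, independent of the posted scene (the attribute name held and the trigger group are placeholders):

        // Point Wrangle BEFORE the solver: mark every point as "not affected yet"
        f@held = -1;

        // Point Wrangle INSIDE the solver (wired to Prev_Frame):
        // only points still carrying the init value are allowed to change,
        // so once a point gets a value it keeps it for the rest of the sim
        int triggered = inpointgroup(0, "affected", @ptnum);   // placeholder condition
        if (f@held < 0 && triggered)
            f@held = @Cd.r;                                    // freeze whatever value you need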
  11. I also do not have Redshift, but you can check whether Houdini's BIND node works inside a Redshift surface shader. If it works, you should be able to bind a primitive string attribute for the texture name. If binding works in RS only for non-string attributes, you can instead pass an integer attribute representing, let's say, a texture index, build the full path + texturename + index + ext string inside the shader, and connect it to your texture name input. Also check whether RS has its own node for binding attributes, like Arnold does.
  12. I am testing some 3D point cloud data captured by Kinect sensors in Houdini. My question is: is there any efficient way of computing a velocity field (VF) such that a known SDF (A) advected by that VF results in the known SDF B? For example, Houdini has the node "VDB Advect SDF" which calculates SDF B if the source SDF (A) and the velocity field (VF) are known. I need the opposite calculation: if A and B are known, the goal is to calculate VF. The scene in the attachment contains a biped animation used to represent an actor and a "Kinect emulator" (two of them) built from several Houdini nodes, which generates a point cloud structure similar to what a real Kinect produces; that way, sending large point cloud structures from a real depth camera as an attachment is avoided. The processing node contains volumes A and B from two successive frames. The human eye (brain) instantly sees how volume shape A is transformed into shape B, but the math behind that is not trivial. Does anyone have an idea? KinectEmulation_ProcessingTest1.hipnc
  13. Yes. Houdini, or to be more precise Mantra, allows you to do any kind of projection you can imagine using a custom lens shader. I wouldn't even call that a projection, because it is much more: you can create your own rays (their origins and directions) for each rendered pixel and do a manual ray trace, not only in a lens shader but in any kind of shader. Of course, you first have to figure out what the "Robinson" table data represents, and then we can help.
  14. Connect your final SOP node to all the material nodes you have, then use a Switch node and connect the outputs of all those material nodes to its inputs. In the expression of the Switch node you can use the pointinstance() function to tell the switch which input (material) to use.
  15. Sometimes the UndoHistory list can help you distinguish some crucial operation from the bunch of totally non-descriptive "parameter changed" and other "selection changed" events.
  16. Consider using the EXR file format and writing your custom channels not into the RGBA plane but as their own channels, like Metallic, AO, Emission, Smooth, etc. EXR can handle any number of custom channels, and every software which can open EXR allows you to pick any of your custom channels and work with it like you would with any grayscale image. If you really want PNG or TIF, you have to render the files without premultiplication (premultiplication multiplies each of R, G and B by Alpha, and that's not what you want) and store all 4 channels in RGBA. In general, the color plane is not a good way to export custom data (masks are OK), because the color plane has a gamma curve applied to the data and it also clamps the negative range, etc.
  17. What is "cl.cfg" for

    No. That option is for inter-exchange operations between OpenGL and DirectX. To force your NVIDIA GPU to run OpenCL you have to set up the following environment variables:

        HOUDINI_OCL_VENDOR = NVIDIA Corporation
        HOUDINI_OCL_DEVICENUMBER = 0
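    One common place to set them (assuming a single NVIDIA GPU, hence device number 0) is the houdini.env file in your user preferences folder; quoting the vendor string may or may not be required depending on how you set the variable:

        HOUDINI_OCL_VENDOR = "NVIDIA Corporation"
        HOUDINI_OCL_DEVICENUMBER = 0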
  18. Matching Curves

    Yes, the primuv function is for that purpose. I didn't open your scene... but the usual technique would be like this. Suppose you have the low-res curves. First generate a U coordinate attribute on them (a Resample node can do that, or you can process each curve separately, where on each curve

        f@u = float(@ptnum) / float(@numpt - 1);

    assuming the point order is sorted the same way as on the high-res curve). That puts U in the range 0 to 1. Later, at the place where you need some attribute from the low-res curve on the high-res one (assuming the curve count is the same), you can use a Point Wrangle node, connect the high-res curves to the first input and the low-res curves to the second:

        vector tempUV = set(@u, 0, 0);
        @attribute = primuv(1, "SomeAttributeFromLowRes", @primnum, tempUV);

    This way you can fetch any attribute from the low-res curve onto the high-res curve (and vice versa, if you swap the wrangle inputs) according to the U coordinate, which should exist on both geometries. So if you write:

        @P = primuv(1, "P", @primnum, tempUV);

    this will put your high-res curve onto the low-res curve.
  19. flip reseed group new particles

    For a post-simulation task you can use @id at the current frame to distinguish new (reseeded or emitted) particles by simply grouping all @ids greater than the maximum id of the previous frame (a sketch is below). Generally the same applies for detection during the sim, but you have to take care at which exact place inside the solver that comparison is possible, and if your solver runs over substeps you have to decide whether that should be taken into account. Also, if you turn on the @age attribute, just-emitted particles will have a zero age; I'm not sure whether that applies to reseeded particles too, or whether they inherit the age from the particles they were reborn from.
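    A minimal post-sim sketch (the group and attribute names are mine; it assumes the previous frame is wired into the second input, e.g. through a Time Shift set to $F-1):

        // 1) Detail Wrangle: find the largest id on the previous frame (second input)
        int maxid = -1;
        for (int i = 0; i < npoints(1); i++)
        {
            int id = point(1, "id", i);
            maxid = max(maxid, id);
        }
        setdetailattrib(0, "prev_max_id", maxid, "set");

        // 2) Point Wrangle (downstream): anything with a larger id appeared since then
        int prevmax = detail(0, "prev_max_id");
        if (i@id > prevmax)
            setpointgroup(0, "new_particles", @ptnum, 1);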
  20. Wheel speed (rpm)

    Also, you can use built-in mechanisms for simple integration. Your example can be solved by an area integral (an integral of first order) over the RPM (rotations per minute) curve. Here is an example using CHOPs for the integration. The example is without the scaling factor 2*r*PI and without the minutes-to-frames conversion (both are written out below). It is fast enough and allows you to change the RPM curve during playback. Integration.hip
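    For reference, with the omitted factors put back in (r = wheel radius, FPS = playback rate), the accumulated quantities up to frame F are roughly:

        rotations(F) = ( RPM(1) + RPM(2) + ... + RPM(F) ) / (60 * FPS)
        angle(F)     = rotations(F) * 360           (degrees)
        distance(F)  = rotations(F) * 2 * PI * r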
  21. ocean displacement with deformed geometry

    In that case it's much easier... Make sure your deformed grid has proper UVs and, instead of exporting bgeo, export only the 3D displacement into a texture (the Ocean Spectrum node can do that), then apply that texture in a displacement shader.
  22. Wheel speed (rpm)

  22. Yes, you can skip the solver (the easiest method for any kind of integration) and instead do your manual integration inside a loop, using ch("../rpm", NNN) to fetch the value of the rpm curve at any given frame NNN. This way your loop has to integrate from scratch at every frame (not so efficient), but it can finish the job; roughly like the sketch below.
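     A rough sketch of that brute-force version in a Detail Wrangle (the rpm channel path, radius and fps parameters are assumptions, and the whole sum is redone every frame, exactly as described):

        float radius = chf("radius");      // wheel radius, assumed spare parameter
        float fps    = chf("fps");         // playback rate, assumed spare parameter

        // re-integrate the rpm curve from frame 1 up to the current frame
        float total = 0;
        for (int f = 1; f <= int(@Frame); f++)
            total += ch("../rpm", f);      // second argument used as the frame, following the post
                                           // (check whether your build expects a time in seconds here)

        float rotations = total / (60.0 * fps);          // rpm samples -> rotations
        f@angle    = rotations * 360.0;                  // accumulated wheel angle in degrees
        f@distance = rotations * 2.0 * PI * radius;      // distance rolled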
  23. Copy Points With Rotation Along Curve

    I modified your scene. Is this what you need? epe_copyWithRotationAlongCurve_modified.hipnc
  24. Pyro Viewport Lighting Problem

    Put a ROP_OUTPUT_DRIVER node in your network1 and connect the OUTPUT node into it. (On some Houdini versions it doesn't work, and on some it works only sporadically; in 16.0.731 it looks like a bug.) If it doesn't work on your version, leave that node connected, step outside your network1 and go back inside again. That should do the trick.
  25. P world in mantra

    @protozoan: you are right. @marty: CryptoMatte uses the ObjID or MaterialID to generate different layers, but in this specific case both butterflies are generated at geometry level, so it is only one object and both butterflies use the same SHOP material. OK, you can separate them into two objects, or two materials, but then there is a new problem which CryptoMatte cannot handle. (OK, it can, but the fix you would have to apply for CryptoMatte to work properly is based on the same regular fix which, applied to the P pass, solves the problem without CryptoMatte.) The problem is hidden in the fact that those butterflies are not "shaped" by geometry but by the alpha channel of an image projected onto the plane(s). So in the case of the P pass (the same applies to N, Pz, Pworld, etc.) the pixels around the alpha edges really exist in the scene; only their opacity differs. In general, you can deal with that in two different ways. One, and probably the most common solution, is to use the already prepared presets for pixel filtering, as protozoan suggests; the second is to handle those specific things yourself in the shader. The second approach gives you endless possibilities and you are not limited only to filtering. This is the modified scene which works using "closest surface" pixel filtering: p_world_modified.hip And this is the custom shader solution which solves your problem in the shader. It simply does manual in-shader pixel compositing for all semi-transparent pixels, but for fully transparent or fully opaque pixels it works like your basic shader: p_world_modified2.hip cheers