
Personal Information

  • Name
    Srdjan Crnjanski

  1. Most of this information is not Houdini specific; it is the general concept of programming for multi-threaded execution. The primer is just an example of how following single-threaded human logic can lead to wrong results. The Detail context is single threaded: it runs only one pass of your code, and there is no mechanism to start multiple threads from code. In a new version of Houdini you will see one more "Run over: NUMBERS" method which allows multi-threaded execution. Whether your code works then depends on how Houdini's developers implemented thread synchronization in that hidden part of the code. At the very least, if your overall number of points is less than the thread pool block size, it should work; but once the first synchronization among finished threads happens, the remaining (still unfinished) threads could produce different results. As I said, it depends on implementation details hidden from the user. Things like the thread pool block size that runs in parallel, possible synchronization barriers, and possible different algorithms for handling divergent execution paths are not known, so you cannot guarantee that such code will work in all cases. That is the reason you should use the point number returned by the addpoint() function. Even if you knew all those internals, that doesn't mean they can't change with every new version of Houdini, so such "dirty" code might stop working in newer versions. These days almost every processor has several cores, and a modern GPU has thousands of them. Splitting execution across many of them in parallel usually means faster execution; how much faster depends on the nature of the problem and how well the code is optimized for hardware-specific advantages and limitations. If you want to learn more on the topic, google CUDA, OpenCL, PTX, parallel programming algorithms, etc.
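As a minimal sketch of the safe pattern described above, in a Point Wrangle (the group name "newPoints" is just an example, not from the original thread):

    // Point Wrangle: add one point per source point, then address the
    // new point only through the index returned by addpoint().
    vector npos = point(1, "P", @ptnum);      // position from the second input
    int newpt = addpoint(0, npos);            // index of the newly added point
    setpointgroup(0, "newPoints", newpt, 1);  // safe regardless of threading

Because only the returned index is used, this works no matter how the threads are scheduled or how the thread pool is sized.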
  2. It "smells" like a disk issue, possibly lost fragments or even bad blocks, but it could also be a bug from an older version. Don't waste time hunting for details: try re-simulating the scene on another (or repaired) drive with the latest version of Houdini. If that doesn't resolve the issue, then upload a minimal scene which can reproduce the error here on the forum. cheers
  3. well, yes, in the new Houdini v18 GroupExpand is implemented (finally)
  4. Hehe... try this. The description is in the scene. Sensitivity is keyed for every input geometry because they are very distinctive, but it works properly on all the geos you provided. keep_outer_side_examples_djiki.hipnc cheers
  5. Modifying geometry inside a wrangle/VOP node has nothing to do with the node's internal local variables, which refer to the input geometry. So @numpt is the number of points of the geometry connected to the first input, and it is used to form a hidden internal FOR loop with the iterator @ptnum. Think of your code in a Point Wrangle node as the body of that hidden FOR loop. Adding or removing points will not change the values of @ptnum or @numpt inside that loop (your whole wrangle code). The function for adding a point returns the index of the newly added point, and that index has nothing to do with the @ptnum or @numpt values; however, it can be passed to almost any function which expects a point number. Adding a point takes effect immediately (not visually, but in the internal data structure it exists as a regular point), while removing a point does not: removing a point only marks it for deletion, which happens internally after the whole loop has finished. That applies to all wrangle node types with one exception: a Detail Wrangle node doesn't form an internal loop of any kind, so iterator values like @ptnum or @primnum don't exist there. Values like @numpt or @numprim are still valid because they reference the input geometry. Those hidden FOR loops are easy to "see" by comparing the code inside a Point Wrangle node:

    setpointattrib(0, "myattr", @ptnum, 1, "set");

with code which has identical behavior but is written in a Detail Wrangle node:

    for (i@ptnum = 0; @ptnum < @numpt; @ptnum++) {
        setpointattrib(0, "myattr", @ptnum, 1, "set");
    }

This is valid code: @ptnum doesn't exist in a Detail Wrangle as a reserved variable, so you can use it like you would any other variable. I chose it for illustrative purposes only, because the two snippets are identical that way. Any context-insensitive code written in a Point Wrangle node can be copy-pasted into such a loop in a Detail Wrangle node and it should work fine.
For example, context-sensitive code is:

    i@myattrib = 5;

and that will not behave the same everywhere, because if that line is executed in the point context it creates/assigns the point attribute "myattrib", if it is executed in the primitive context it creates/assigns a primitive attribute "myattrib", and so on. But written as a context-insensitive function call,

    setpointattrib(0, "myattrib", ptnumber, 5, "set");

it works no matter which context it is executed in, as long as you can provide a handle (the very first argument of most context-insensitive functions) and an index (not required by all functions). Now consider your original code inside a Point Wrangle:

    @group_basePoints = 1;
    vector npos = point(1, "P", @ptnum);
    int newpt = addpoint(0, npos);
    setpointgroup(0, "basePoints", @ptnum + @numpt, 1);

If you change it to be context insensitive, it looks like this:

    setpointgroup(0, "basePoints", @ptnum, 1);
    vector npos = point(1, "P", @ptnum);
    int newpt = addpoint(0, npos);
    setpointgroup(0, "basePoints", @ptnum + @numpt, 1);

Copy such code into the Detail Wrangle loop from the previous example and voila, it works. Now you are probably more confused: the same code works one way and not the other. To keep track of what actually happened, modify the code like this:

    setpointgroup(0, "basePoints", @ptnum, 1);
    vector npos = point(1, "P", @ptnum);
    int newpt = addpoint(0, npos);
    setpointattrib(0, "np", @ptnum, newpt, "set");   // this is the new line
    setpointgroup(0, "basePoints", @ptnum + @numpt, 1);

Now watch the geometry spreadsheet for both the Detail Wrangle and the Point Wrangle running this same code. You can see the result of the addpoint() call in that np attribute on each point. In the case of the Detail Wrangle, everything executes on a single thread, the result of addpoint() is incremented each time, as expected, and that is why the last line of code works. In the case of the Point Wrangle, the differing values are a consequence of multi-threaded execution.
You should always keep in mind that a Point Wrangle starts executing on multiple threads in parallel. So from the perspective of a single point on some execution thread, a single call to addpoint() will always return @numpt. If you modify your code so that a single point calls addpoint() more than once, the results will be @numpt+1, @numpt+2 and so on. At the moment of execution, the result of addpoint() is not synchronized among the other threads; that is why each execution thread gets @numpt as the result of the first such call, @numpt+1 for the second call, etc. The first synchronization barrier for them is at the end of thread execution, which is the end of your code, and that part is hidden from the user. So, as long as you use the returned value in the rest of your code, you can be sure you are addressing the just-added point(s), no matter that those values are the same across different threads. The line:

    setpointgroup(0, "basePoints", @ptnum + @numpt, 1);

will fail in each particular thread because each thread added only one point, each thread gets @numpt as the result of addpoint(), and your code is trying to address a point with a larger index. You can change that line, in the Point Wrangle node only, just to see the result:

    setpointgroup(0, "basePoints", @numpt, 1);

and it will work. BUT all of this is written only for testing the behavior, so you can see the differences in execution logic. In real code you would NEVER use such "tricks", especially because such things depend on internal design which can vary with every new version of Houdini. As long as you stick to the rule of using the point index returned by addpoint(), you can be sure it will work regardless of "what is behind". So the proper line is:

    setpointgroup(0, "basePoints", newpt, 1);

as Skybar already pointed out. cheers
  6. I made another approach for a general solution. You can test its speed and compare it to the other methods. The algorithm used is very simple:
     1. If P1 and P2 are not neighboring points, cancel everything.
     2. polyarr1 = polygons that share P1, and polyarr2 = polygons that share P2.
     3. Parsing in the direction of P1 means finding the values present in both arrays and discarding them from polyarr1. The remaining values are the 2 polygons containing the next edge. Find the points shared between them and discard P1; the remaining point is the new one. Replace P1 with NewPoint and repeat until P2 is reached (fully closed loop) or until a termination point is reached.
     4. If full closure was already found, skip this step; otherwise parse in the direction of P2 until the second termination point is reached.
     The code generates detail attributes for TerminationPoints (if they exist) and a detail attribute LoopClosed. Houdini's function pointprims() returns a sorted array; that fact is used to optimize finding the differing values in the two arrays. Points are marked in the group LOOP, which is promoted to edges. cheers Test scene: edges3.hipnc
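A minimal sketch of one walking step of step 3 above, as a VEX function for a Detail Wrangle (the function and variable names are mine, not from the posted scene, and this plain version skips the sorted-array optimization mentioned above):

    // Given the previous loop point and the current one, return the next
    // point of the edge loop, or -1 on a boundary / non-quad termination.
    int step_loop(int geo; int prev; int cur)
    {
        int curprims[]  = pointprims(geo, cur);
        int prevprims[] = pointprims(geo, prev);

        // keep only the polygons around cur that do NOT touch prev:
        // on quad topology those two contain the next edge of the loop
        int rest[];
        foreach (int pr; curprims) {
            if (find(prevprims, pr) < 0)
                append(rest, pr);
        }
        if (len(rest) != 2)
            return -1;                 // termination point reached

        // the point shared by both remaining polygons, other than cur,
        // is the next point of the loop
        int a[] = primpoints(geo, rest[0]);
        int b[] = primpoints(geo, rest[1]);
        foreach (int pt; a) {
            if (pt != cur && find(b, pt) >= 0)
                return pt;
        }
        return -1;
    }

Calling it repeatedly with (prev, cur) replaced by (cur, next) walks the loop until it returns -1 or arrives back at the start point, which matches steps 3 and 4.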
  7. Yeah. All your examples are grid based, so I posted a "grid specific" solution only. Even if the point indices are not the defaults, it is trivial to regenerate them by converting the quads into rows and columns. You can even create an id attribute if you have to keep the point order, and use that @id in the math instead of @ptnum. For a generalized solution on mixed quad-triangle topology: when an edge sequence which separates, let's say, several quads then reaches some triangle, how does it continue? Does it stop at that vertex of the triangle? Does it continue in one direction or the other? What is the criterion?
  8. Am I missing something? If your input geometry is always "grid like" and the grid is formed of ROW x COL points, then for any point on that grid you "know" which row/column that point belongs to:

    int Pcol = @ptnum % COL;
    int Prow = @ptnum / COL;   // integer division truncates

So for both of your input points you know their columns and rows; you only have to detect whether they share the same row or the same column. If they have the same column, select all points from that column; if they share the same row, select the points from that row. Then promote that point group to an edge group. To optimize, do not iterate through all points on the grid. Use a Detail Wrangle instead and iterate through only the ones you need:
1. In the case of the same row, your loop will be:

    int startindex = commonRow * COL;
    for (int n = startindex; n < startindex + COL; n++) {
        // move to group, mark an attribute, or whatever
    }

2. In the case of the same column:

    int startindex = commonCol;   // the column index equals the start point index
    for (int n = startindex; n < ROW * COL; n += COL) {
        // move to group, mark an attribute, or whatever
    }

So all of this is done in a single wrangle node. edges2.hipnc
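Put together, a minimal Detail Wrangle sketch of the idea above (the channel names for the grid size and the two selected points are my assumptions, not part of the original post):

    // Detail Wrangle: group the shared row or column of two grid points.
    int COL = chi("cols");    // grid columns  (assumed channel)
    int ROW = chi("rows");    // grid rows     (assumed channel)
    int p1  = chi("pt1");     // first selected point  (assumed channel)
    int p2  = chi("pt2");     // second selected point (assumed channel)

    int col1 = p1 % COL, row1 = p1 / COL;
    int col2 = p2 % COL, row2 = p2 / COL;

    if (row1 == row2) {                 // same row: walk along the row
        for (int n = row1 * COL; n < row1 * COL + COL; n++)
            setpointgroup(0, "loop", n, 1);
    }
    else if (col1 == col2) {            // same column: step by COL
        for (int n = col1; n < ROW * COL; n += COL)
            setpointgroup(0, "loop", n, 1);
    }

A Group Promote SOP after the wrangle converts the "loop" point group to an edge group, as described above.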
  9. Houdini allows you to sort primitives by many different criteria, and there are a lot of tools for filtering different types of data and isolating a specific group of primitives. The methods and techniques you will use, however, depend very much on your geometry. If you found an "illuminating" method to be the one that solves your problem, then yes, you can do that. But forget about lights: you have the Ray SOP. Connect, let's say, an XY grid to its first input and the torus to the second. Make sure the normals (or your custom attribute) on the grid point toward the torus, and make sure the grid has enough divisions. That grid will be your "light". DeleteByIllumination.hipnc
  10. This is a modified, working version. You have to initialize some value before the solver node, and then inside the solver run the tests only on unaffected (still initialized) values, so the already affected ones stay intact. AIL_FX_cubeAnimatedVertexColor_v02.hiplc
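The pattern from the post above, as a minimal sketch in VEX (the attribute name "affected" and the trigger condition are placeholders, not the actual logic of the attached scene):

    // Point Wrangle BEFORE the Solver SOP: initialize the flag once.
    i@affected = 0;

    // Point Wrangle INSIDE the Solver SOP (downstream of Prev_Frame):
    // test only points that have not been affected yet.
    if (i@affected == 0 && @P.y < ch("threshold")) {   // hypothetical trigger
        @Cd = {1, 0, 0};
        i@affected = 1;       // marked, so it stays intact on later frames
    }

Because the solver feeds each frame's result back in, the flag persists, and already affected points are skipped on every subsequent frame.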
  11. I don't have Redshift either, but you can check whether Houdini's Bind node works inside a Redshift surface shader. If it works, you should be able to bind the primitive's string attribute as the texture name. If binding works in RS only for non-string attributes, you can instead pass an integer attribute representing, let's say, a texture index, build the full path + texture name + index + extension string inside the shader, and connect it to the texture name input. Also check whether RS has its own node for binding attributes, like Arnold does.
  12. Yes. Houdini, or to be more precise Mantra, allows you to do any kind of projection you can imagine using a custom lens shader. I wouldn't even call it projection, because it is much more: you can create your own rays (their origins and directions) for each rendered pixel, and you can do manual ray tracing not only in a lens shader but in any kind of shader. Of course, you first have to figure out what the "Robinson" table data represents, and then we can help.
  13. Connect your final SOP node to all the material nodes you have, then use a Switch node and connect all the outputs of those material nodes to its inputs. In the Switch node's input expression you can use the pointinstance() function to tell it which input (material) to use.
  14. Sometimes the Undo History list can help you distinguish a crucial operation from the bunch of totally non-descriptive "parameter changed" and "selection changed" events.
  15. Consider using the EXR file format and writing your custom data not into the RGBA plane but into dedicated custom channels like Metallic, AO, Emission, Smoothness, etc. EXR can handle any number of custom channels, and every piece of software that can open EXR lets you pick any custom channel and work with it as you would with any grayscale image. If you really want PNG or TIF, you have to render the files without premultiplication (premultiplication multiplies each of R, G and B by alpha, and that's not what you want) and store all 4 channels in RGBA. In general, the color plane is not a good way to export custom data (masks are OK), because a gamma curve is applied to color-plane data, and it also clamps the negative range, etc.