# djiki

2. ## Invalid Binary Token

It "smells" like a disk issue, possibly lost fragments or even bad blocks, but it could also be a bug from an older version. Don't waste time hunting for details: try re-simulating the scene on another (or repaired) drive with the latest version of Houdini. If that doesn't resolve the issue, then upload a minimal scene which reproduces the error here on the forum. cheers
3. ## Automatically remove interior from double-sided geometry

Well, yes, in the new Houdini v18 GroupExpand is implemented (finally).
4. ## Automatically remove interior from double-sided geometry

Hehe .... try this .... the description is in the scene ..... the sensitivity is keyed for every input geometry because they are very distinctive, but it works properly on all the geos you provided. keep_outer_side_examples_djiki.hipnc cheers

6. ## Is it possible to walk the edges in Houdini?

I made another approach for a general solution. You can test its speed and compare it to the other methods. The algorithm is very simple:

1. If P1 and P2 are not neighboring points, cancel everything.
2. polyarr1 = polygons that share P1, and polyarr2 = polygons that share P2.
3. Parsing in the direction of P1 means finding the values present in both arrays and discarding them from polyarr1. The remaining values are the 2 polygons containing the next edge. Find the shared points between those two polygons and discard P1; the remaining point is the new one. Replace P1 with NewPoint and repeat until P2 is reached (fully closed loop) or until a termination point is reached.
4. If full closure was already found, skip this step; otherwise parse in the direction of P2 until the second termination point is reached.

The code generates detail attributes for the TerminationPoints (if they exist) and a detail attribute LoopClosed. Houdini's function pointprims() returns a sorted array; that fact is used to optimize finding the differing values in the two arrays. Points are marked in group LOOP, which is promoted to edges. cheers Test scene: edges3.hipnc
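The steps above can be sketched language-neutrally. Here is a minimal Python version of the same neighbor-array walk; `polys_sharing`, `step` and `walk_loop` are illustrative names (a real Houdini implementation would use pointprims()/primpoints() in VEX), and the mesh is just a list of polygons given as point-number tuples:

```python
def polys_sharing(mesh, pt):
    """Indices of the polygons that contain point pt."""
    return [i for i, poly in enumerate(mesh) if pt in poly]

def step(mesh, p_from, p_to):
    """Walk one edge past p_to, away from p_from.
    Returns the next point, or None at a termination point."""
    # polygons sharing p_to, minus those also sharing p_from:
    # on a clean quad mesh these are the 2 polygons holding the next edge
    candidates = set(polys_sharing(mesh, p_to)) - set(polys_sharing(mesh, p_from))
    if len(candidates) != 2:
        return None                      # boundary, pole or non-quad junction
    poly_a, poly_b = (set(mesh[i]) for i in candidates)
    shared = (poly_a & poly_b) - {p_to}  # shared points minus p_to -> new point
    if len(shared) != 1:
        return None
    return shared.pop()

def walk_loop(mesh, p1, p2):
    """Collect the edge loop through edge (p1, p2): first walk past p1,
    then, if the loop did not close, walk past p2 the other way."""
    loop = [p2, p1]
    prev, cur = p2, p1
    while True:
        nxt = step(mesh, prev, cur)
        if nxt is None:                  # first termination point reached
            break
        if nxt == p2:                    # fully closed loop
            return loop, True
        loop.append(nxt)
        prev, cur = cur, nxt
    prev, cur = p1, p2                   # parse in the other direction
    while True:
        nxt = step(mesh, prev, cur)
        if nxt is None:                  # second termination point reached
            break
        loop.insert(0, nxt)
        prev, cur = cur, nxt
    return loop, False
```

On a 4x4-point grid, walking from the edge (5, 6) marks the whole middle row and reports the loop as open, since it terminates on both boundaries.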
7. ## Is it possible to walk the edges in Houdini?

Yeah. All your examples are grid based, so I posted a "grid specific" solution only. Even if the point indices are not the defaults, it is trivial to regenerate them by converting the quads into rows and columns. You can even create an id attribute if you have to keep the point order, and use that @id in the math instead of @ptnum. For a generalized solution on mixed quad-triangle topology: how does an edge sequence which separates, let's say, several quads and then reaches some triangle continue? Does it stop on that vertex of the triangle? Does it continue in one direction or the other? What is the criterion?
8. ## Is it possible to walk the edges in Houdini?

Am I missing something? If your input geometry is always "grid like" and the grid is formed of ROW x COL points, then for any point on that grid you "know" which row/column that point belongs to:

```
Pcol = @ptnum % COL;
Prow = trunc(@ptnum / COL);
```

So, for both of your input points, you know their columns and rows; you only have to detect whether they share the same row or the same column. If they share the same column, select all points from that column; if they share the same row, select the points from that row. Then promote that point group to an edge group. To optimize this, do not iterate through all points on the grid. Use a Detail Wrangle instead and iterate only through the relevant ones:

1. In the case of a shared row, your loop will be:

```
int startindex = commonRow * COL;
for (int n = startindex; n < startindex + COL; n++) {
    // move to group or mark attribute or whatever
}
```

2. In the case of a shared column:

```
int startindex = commonCol;  // the column index equals the start point's index
for (int n = startindex; n < ROW * COL; n += COL) {
    // move to group or mark attribute or whatever
}
```

So all of this is done in a single wrangle node. edges2.hipnc
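The row/column arithmetic is easy to verify outside Houdini. A small Python sketch of the same selection (the function name is mine, not Houdini's), returning the point numbers of the grid line both points lie on:

```python
def shared_line(pt_a, pt_b, rows, cols):
    """Return the point numbers of the grid row or column shared by
    pt_a and pt_b on a rows x cols point grid, or None if they share
    neither. Point numbering is row-major, like @ptnum on a Grid SOP."""
    row_a, col_a = divmod(pt_a, cols)    # Prow = ptnum / COL, Pcol = ptnum % COL
    row_b, col_b = divmod(pt_b, cols)
    if row_a == row_b:                   # same row: COL consecutive points
        start = row_a * cols
        return list(range(start, start + cols))
    if col_a == col_b:                   # same column: stride by COL
        return list(range(col_a, rows * cols, cols))
    return None
```

On a 4x4 grid, points 5 and 7 share row 1, points 2 and 14 share column 2, and points 0 and 5 share neither.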
9. ## Delete faces occluded by shadow?

Houdini allows you to sort primitives on many different criteria, and there are a lot of tools for filtering different types of data and isolating a specific group of primitives. However, the methods and techniques you will use depend a lot on your geometry. If you found an "illumination" method to be the one that will solve your problem, well, yes, you can do that. Forget about lights: you have the Ray SOP node. Connect, let's say, an XY grid to the first input and the torus to the second. Make sure the normals (or your custom attribute) on the grid point toward the torus and that the grid has enough divisions. That grid will be your "light". DeleteByIllumination.hipnc
10. ## Hold Attribute Value with Solver

This is a modified, working version. You have to initialize some value before the solver node, and then inside the solver run your tests only on unaffected (still initialized) values, so the values which were already affected stay intact. AIL_FX_cubeAnimatedVertexColor_v02.hiplc
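The pattern is: seed a sentinel value before the solver, then on each solver iteration write only into slots that still hold the sentinel. A tiny Python sketch of that hold logic (the names and the sentinel value are assumptions, not anything from the scene file):

```python
UNSET = -1.0  # sentinel written before the solver loop (assumed value)

def solver_step(values, affected_indices, frame_value):
    """One solver iteration: write frame_value only into slots still
    holding the sentinel, so slots set on an earlier frame stay intact."""
    for i in affected_indices:
        if values[i] == UNSET:
            values[i] = frame_value
    return values
```

Running two iterations shows the hold: a slot touched on frame 1 keeps its frame-1 value even when frame 2 touches it again.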
11. ## Same material - Different Textures

I also do not have Redshift, but you can check whether Houdini's Bind node works inside a Redshift surface shader. If it works, then you should be able to bind a primitive's string attribute for the texture name. In case binding works in RS only for non-string attributes, you can pass an integer attribute representing, let's say, a texture index, build the full path + texture name + index + extension string inside the shader, and connect it to your texture name input. Also check whether RS has its own node for binding attributes, like Arnold does.
12. ## Does someone know an effective algorithm for computing this

I am testing some 3D point cloud data captured by Kinect sensors in Houdini. My question is: is there an efficient way of computing a velocity field (VF) such that a known SDF (A) advected by that VF results in a known SDF (B)? For example, Houdini has the node "VDB Advect SDF", which calculates SDF B if the source SDF (A) and the velocity field (VF) are known. I need the opposite calculation: if A and B are known, the goal is to calculate VF. The scene in the attachment contains a biped animation used to represent an actor, and a "Kinect emulator" (two of them) built from several Houdini nodes, which generates a point cloud structure similar to what a real Kinect produces. That way, sending large point cloud structures from a real depth camera as an attachment is avoided. The processing node contains volumes A and B from two successive frames. The human eye (brain) instantly sees how volume shape A is transformed into shape B, but the math behind that is not trivial. Anyone have an idea? KinectEmulation_ProcessingTest1.hipnc
13. ## Custom camera projections

Yes. Houdini, or to be more precise Mantra, allows you to do any kind of projection you can imagine using a custom lens shader. I wouldn't even call it a projection, because it is much more: you can create your own rays (their origins and directions) for each rendered pixel and do manual ray tracing, not only in a lens shader but in any kind of shader. Of course, you first have to figure out what the "Robinson" table data represents, and then we can help.
14. ## Varying *one* shader on multipart object being instanced?

Connect your final SOP node to all the material nodes you have, then use a Switch node and connect all the outputs from those material nodes to its inputs. In the expression of the Switch node you can use the pointinstance() function to tell it which input (material) to use.
15. ## How to make Houdini more verbose about the last operation

Sometimes the Undo History list can help you distinguish some crucial operation from the bunch of totally non-descriptive "parameter changed" and "selection changed" events.
16. ## ROP File Output: Render png or tiff with alpha channel (Instead of transparency)

Consider using the EXR file format and applying your custom channels, but not in the RGBA plane: make custom channels like Metallic, AO, Emission, Smooth, etc. EXR can handle any number of custom channels, and every piece of software that can open EXR lets you pick any custom channel and operate on it as you would on any grayscale image. If you really want PNG or TIFF, you have to render the files without premultiplication (premultiplication multiplies each of R, G and B by Alpha, and that's not what you want) and store all 4 channels in RGBA. In general, the color plane is not a good way to export custom data (masks are OK), because the color plane has a gamma curve applied to the data, it clamps the negative range, etc.
17. ## For what is "cl.cfg"

No. That is an option for interchange operations between OpenGL and DirectX. To force an NVIDIA GPU to run OpenCL you have to set the following environment variables:

```
HOUDINI_OCL_VENDOR = NVIDIA Corporation
HOUDINI_OCL_DEVICENUMBER = 0
```
18. ## Matching Curves

Yes, the primuv function is for exactly that purpose. I didn't open your scene, but the usual technique would be like this. Suppose you have low-res curves. First generate a U coordinate attribute on them (a Resample node can do that, or you can process each curve separately, where U = @ptnum / float(@numpt - 1) on each curve, assuming the point order is sorted the same way as on the high-res curve). That puts U in the range 0 to 1. Later, wherever you need some attribute from the low-res curve on the high-res one (assuming the curve count is the same), you can use a Point Wrangle node, connect the high-res curve to the first input and the low-res curve to the second:

```
vector tempUV = set(@u, 0, 0);
@attribute = primuv(1, "SomeAttributeFromLowRes", @primnum, tempUV);
```

This way you can fetch any attribute from the low-res curve onto the high-res curve (and vice versa, if you swap the inputs to the wrangle node) according to the U coordinate, which should exist on both geometries. So if you write:

```
@P = primuv(1, "P", @primnum, tempUV);
```

this will snap your high-res curve onto the low-res curve.
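Along a single curve, what primuv() does amounts to linear interpolation of a per-point attribute at a normalized parameter. A small Python sketch of the same transfer (`sample_curve` and `transfer` are illustrative names, not Houdini API):

```python
def sample_curve(values, u):
    """Linearly interpolate a per-point attribute along one curve at
    parameter u in [0, 1] (a stand-in for primuv() on a single prim)."""
    t = u * (len(values) - 1)
    i = min(int(t), len(values) - 2)   # segment index, clamped at the end
    f = t - i                          # fraction within the segment
    return values[i] * (1.0 - f) + values[i + 1] * f

def transfer(low_values, num_hi_points):
    """Fetch an attribute from a low-res curve onto a high-res one,
    using normalized U = ptnum / (numpt - 1) on the high-res curve."""
    return [sample_curve(low_values, n / (num_hi_points - 1))
            for n in range(num_hi_points)]
```

For example, transferring the 3-point attribute [0, 1, 2] onto a 5-point curve yields evenly interpolated values at each high-res point.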
19. ## flip reseed group new particles

For a post-simulation task you can use @id at the current frame to distinguish new (reseeded or emitted) particles by simply grouping all @ids greater than the max id from the previous frame. Generally the same applies to detection during the sim, but you have to take care of the exact place inside the solver where that comparison is possible, and if your solver runs over substeps you have to decide whether to take that into account. Also, if you turn on the @age attribute, just-emitted particles will have an age of zero; I'm not sure whether that applies to reseeded particles too, or whether they inherit the age of the particles they were reborn from.
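The post-sim grouping amounts to a one-liner. A Python sketch, assuming ids are ever-increasing as particles are emitted or reseeded (the function name is mine):

```python
def new_particle_ids(prev_ids, cur_ids):
    """Ids present at the current frame that exceed the max id of the
    previous frame: these are the newly emitted/reseeded particles."""
    max_prev = max(prev_ids) if prev_ids else -1
    return [i for i in cur_ids if i > max_prev]
```

Note this relies on ids never being reused; particles that merely died between the frames do not affect the result.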
20. ## Wheel speed (rpm)

You can also use built-in mechanisms for simple integration. Your example can be solved by an area integral (integral of first order) over the RPM (rotations per minute) curve. Here is an example using CHOPs for the integration. The example is without the scaling factor 2*r*PI and without the minutes-to-frames conversion. It is fast enough and allows you to change the RPM curve during playback. Integration.hip
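The area integral the CHOP performs can be sketched as a running sum; this Python version also includes the minutes-to-frames conversion the example scene leaves out (the function name and fps default are mine):

```python
def wheel_rotations(rpm_samples, fps=24.0):
    """Accumulate total wheel rotations frame by frame from an RPM curve:
    each frame contributes rpm * (1 / (60 * fps)) minutes' worth of turns.
    Multiply by 2 * PI * r afterwards for distance traveled."""
    dt_minutes = 1.0 / (60.0 * fps)   # duration of one frame, in minutes
    total, out = 0.0, []
    for rpm in rpm_samples:           # rectangle-rule area integral
        total += rpm * dt_minutes
        out.append(total)
    return out
```

At an (artificial) 1 fps, a constant 60 RPM curve accumulates exactly one rotation per frame, which makes the running sum easy to check.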
21. ## ocean displacement with deformed geometry

In that case it's much easier. Make sure your deformed grid has proper UVs and, instead of exporting bgeo, export only the 3D displacement as a texture (the Ocean Spectrum node can do that), then apply that texture in a displacement shader.
22. ## Wheel speed (rpm)

Yes, you can skip the solver (the easiest method for any kind of integration) and do your manual integration inside a loop, using ch("../rpm", NNN) to fetch the value of the rpm curve at any given frame NNN. This way your loop has to integrate from scratch at every frame (not so efficient), but it gets the job done.
23. ## Copy Points With Rotation Along Curve

I modified your scene. Is this what you need? epe_copyWithRotationAlongCurve_modified.hipnc
24. ## Pyro Viewport Lighting Problem

Put a ROP Output Driver node in your network1 and connect the OUTPUT node to it. (On some Houdini versions it doesn't work, and on some it works sporadically; in 16.0.731 it looks like a bug.) If it doesn't work on your version, leave that node connected, step outside network1 and then go back inside again. That should do the trick.
25. ## P world in mantra

@protozoan: you are right. @marty: CryptoMatte uses the object ID or material ID to generate different layers, but in this specific case both butterflies are generated at the geometry level, so there is only one object, and both butterflies use the same SHOP material. OK, you can separate those into two objects, or two materials, but then there is a new problem which CryptoMatte cannot handle. (OK, it can, but the solution you have to apply for CryptoMatte to work properly is based on the regular solution, which, applied to the P pass, solves the problem without CryptoMatte anyway.)

The problem is hidden in the fact that those butterflies are not "shaped" by geometry but by the alpha channel of an image projected onto the plane(s). So in the case of the P pass (the same applies to N, Pz, Pworld, etc.) the pixels around the alpha edges really exist in the scene; only their opacity differs. In general, you can deal with that in two different ways. The first, and probably most common, is to use the already prepared presets for pixel filtering, as protozoan suggests; the second is to handle those specifics in the shader yourself. The second approach gives you endless possibilities, since you are not limited only to filtering.

This is the modified scene, which works using closest-surface filtering: p_world_modified.hip And this is a custom shader solution which solves your problem in the shader: it simply does manual in-shader pixel compositing for all semi-transparent pixels, but for fully transparent or fully opaque pixels it works like your basic shader. p_world_modified2.hip cheers