Welcome to od|forum





Everything posted by ikoon

  1. I made a few edits to your file; I hope you can learn from them. I used Copy to Points and drove the circle scale directly with pscale; the Copy SOP reads the @pscale attribute (see the attached text). For the second example I used an Attribute Wrangle, as I am not that good with Attribute VOPs, but you could also use an If VOP and compare random values in VOPs:

```c
if (@pscale * rand(@pscale) > 0.5) {
    @pscale = 1.0;
} else {
    @pscale = 0.0;
}
```

     I have also attached the For-Each approach; it is a simple example and may not be a universal solution (it doesn't handle normals, for example). Grid circle - v2.hipnc
  2. Each Scene View could have a "SOP Source" setting, to show SOPs with either the display flag or the render flag. Then you could have two Scene Views:
     - "final": set to the /obj level, viewing the results through the camera, using render flags
     - "working": classic as it is now, viewing the display flag while you work inside the network
  3. I had a similar effect in my last project; attached is my solution. I just added a point inside the camera and read it in the VOP: subtract P.y (camera minus point) and ramp the resulting float to a color. camera.hiplc
  4. Please, I have been searching but cannot find how to do these:
     1) If multiple objects are selected and they have different values on a channel (e.g. different resolutions on two cameras), display the value in bold red. Now it is grey and hard to spot. I have been editing the .hcs but did not succeed.
     2) Default "Ignore case" ON in the find-node dialog (when Houdini starts).
     3) A default filter string in the operator tree (when Houdini starts).
     4) Ladder values: a bigger drag distance per value change. I am using a Wacom, and with 1.0, for example, I can change values from almost -170 to 170 across my screen. -50 to 50 would be enough, so triple the distance per step. The steps are too dense for my work, and sometimes I miss the desired value.
  5. I have been reading the docs. The SOP Solver DOP help says: "By using an expression like stamps("../OUT", "DATAPATH", "../.:objname/Geometry") in an Object Merge SOP, the output of the previous timestep can be used as the starting point for the next timestep within the SOP Network."
     - Shouldn't the doc say "the output of the previous timestep, as altered by the solvers that have already been evaluated in the current timestep"?
     - The mentioned path "../OUT" confuses me, because the global parameter DATAPATH is stored on the sopsolver itself.
     When I saw the path "../OUT", I thought there really was a storage of "the previous frame value", untouched even if the current timestep had already altered that value. But there isn't such a storage? During the timestep, all the values are "live", am I right?
  6. This is a newbie DOP question. I have been investigating the functionality of pre-solve and post-solve, trying the POP Solver. If I connect a SOP Solver to the pre-solve input and alter @P, then there is a difference between @pprevious and the "real" positions: @pprevious gets values which never actually "were". As I understand it, the POP Solver works like this:
     - Just-born particles: post-solve stores to the spreadsheet (call it "the previous frame value").
     - All other particles: pre-solve stores @pprevious to the spreadsheet (so not "the previous frame value"), then post-solve stores to the spreadsheet.
     I am trying to understand it, because I would like to dive into these relationships:
     Point number: dopoption($DOPNET, $OBJID, "CopyInfo", "copynum")
     Position: dopoption($DOPNET, $OBJID, "PointPosition", "tx")
     Now I am not sure when the values are written into the spreadsheet. E.g. when referring to "tx", is it always "the previous frame value"? Probably not; probably it works like this (simplified):
     End of frame 20: "tx" was 1.0
     Computing frame 21:
     - read "tx" as 1.0 (because something is "solving")
     - alter "tx" to 2.1 (because something is "pre-solving")
     - read "tx" as 2.1 (because something else is "post-solving")
     - read "tx" as 2.1 ("pre-solve"...)
     - alter "tx" to 5.0 ("solve"...)
     - read "tx" as 5.0 ("pre-solve"...)
     - write "tx" as 5.0 into the spreadsheet and render it
     Is it one single thread like this, so the value goes through one chain of solve/pre/post calls and is read or altered along the way? How does Houdini order the influences? How do you debug these? Can I see the code, or filter the influences on $OBJID and "tx"? Probably @pprevious is just a specific exception of a "not previous frame value", because it is a side product of some in-between collision solvers? I couldn't even find where @pprevious is initially created, or where a particle is decided to be pre-solved or not. Is it black-boxed?
  7. I did it with HScript and everything works fine (attached), so you may find it useful. But I would be glad if anybody could help me rewrite the script in Python, because HScript doesn't work with arrays, and Python's split() is much more convenient than HScript's substr(). Or, please, how do you automate rendering combinations of takes/cameras/frame ranges? In my current project I had more than 20 takes, multiple cameras and different frame ranges, so I used the Wedge ROP (By Take, all takes) and named my takes like this: name_startframe_endframe, e.g. "magnetic_400_2200", "terrain_600_1800". The name is descriptive, for further comping; the first number is the start of the frame range, the second number is the end. For example, this expression extracts the start (or end) frame from the name of the take:

```c
{
    string take = chsop("take");
    float first = index(take, "_");
    float last = rindex(take, "_");

    // start frame: the field between the first and the last underscore
    float start = first + 1;
    float length = last - first - 1;
    string startFrameString = substr(take, start, length);
    float startFrame = atof(startFrameString);

    // end frame: the field after the last underscore
    start = last + 1;
    length = strlen(take) - last - 1;
    string endFrameString = substr(take, start, length);
    float endFrame = atof(endFrameString);

    return startFrame;
}
```

     The expression to generate the folder and file name (Output Image), e.g. $HIP/render/magnetic_camL/magnetic_camL_$F4.png, is in the attached file. wedge.hiplc
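A sketch of the Python rewrite asked for above (the function name frame_range_from_take is my own invention; inside Houdini you would call something like it from a Python expression on the ROP). Python's split() makes the parsing much shorter than the HScript substr() version:

```python
def frame_range_from_take(take_name):
    """Parse a take named <name>_<start>_<end>, e.g. "magnetic_400_2200".

    The descriptive name may itself contain underscores, so the last two
    underscore-separated fields are taken as the frame range.
    """
    parts = take_name.split("_")
    if len(parts) < 3:
        raise ValueError("take name must look like name_start_end: %r" % take_name)
    return int(parts[-2]), int(parts[-1])
```

For "magnetic_400_2200" this returns (400, 2200), and unlike the fixed index/rindex arithmetic it also copes with descriptive names that contain extra underscores, such as "my_terrain_600_1800".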
  8. Please, how do I make an object follow a path and auto-bank with the new H16 constraints? In H14 we had the Auto-Bank factor, which controlled the automatic banking of an object as it turned corners. For now I have just revealed the old interface in H16 (it is only set invisible). https://www.sidefx.com/tutorials/motion-path/
  9. Nice! Thank you! Probably we are still forced to wrangle the up vector "manually"? Is there any CHOP trick, any Slope CHOP analysis in 3D? The Curvature Analysis SOP is not enough. I have attached a file which solves the up vector with Javier's method; it may be helpful for somebody in the future. Thanks go to Javier. follow path - v0.hiplc
  10. This is a DOP newbie question. I have noticed that the automatically created SOP Solver DOP is the legacy version: it has a gray input for "Objects to be processed". The current version of the SOP Solver DOP is sopsolver::2.0, and it has only "Data to attach" inputs. I have tried to reconstruct the setup with sopsolver::2.0 and managed to make it work with a Multisolver DOP, as attached. Please, am I doing it right? What is the purpose of this ::2.0 change of inputs?
  11. @mrcoolmen you are running over Detail, so append(points, @id) looks for @id as a detail attribute. You need to read the attribute from each point first, and then append it:

```c
for (int i = 0; i < @numpt; i++) {
    int id = point(0, "id", i);
    append(points, id);
}
```

     @Hartman- please, why is this line necessary, and why is it outside the loop?

```c
addpointattrib(0, "allpoints", points);
```
  12. I would like the Animation Editor to be able to show multiple separate channels, as the Motion FX View can, each with its own vertical scale homing:
  13. I had it reversed too when trying it inside your file. Those numbers (handle/channel) are linked, and the handle may generate a vector instead of radians or degrees. Unfortunately I have no time to experiment right now.
  14. Thank you very much! I will explore.
  15. I think they are quite easy to use. For me the biggest problem was finding the right type of handle; there are so many types of handles. Then you just type the channels you want to link with the handle (as in my previous post). You may explore my file here; I have also used guide geometry in that file. Please take my file just as a small inspiration ... I am really not an expert and this was my first HDA. Now I know there are better ways.
  16. Great! I have seen that, unfortunately, the Clip SOP is not compilable; maybe it will be in the future? The Boolean version is almost compilable, except for the Sort node, but that reverse sort might be done in VEX? Btw, have you tried linking the handles in Type Properties? I am no expert, these are just tips.
  17. Afaik, you have to change the type of the ramp in the Edit Parameter Interface manually, like this:
     The VEX code will then be, for example:

```c
@Cd = chramp('myramp', @P.x);
```
  18. This is great. I am just thinking (with my very limited knowledge): could you find and exaggerate patterns in the curvature with this microsolver? Something like this?
  19. The left side looks like a golden-ratio pattern; you will find tutorials and equations for it. You may also need the Divide SOP with Compute Dual enabled.
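For reference, the classic golden-ratio point pattern is Vogel's phyllotaxis spiral. This is a plain-Python sketch of the equations (the function name is mine); the same two lines of math port directly to a point wrangle:

```python
import math

# Vogel's phyllotaxis: point n sits at angle n * golden_angle and
# radius c * sqrt(n), giving the familiar sunflower-seed packing.
GOLDEN_ANGLE = math.pi * (3.0 - math.sqrt(5.0))  # about 137.5 degrees

def phyllotaxis_points(count, c=1.0):
    pts = []
    for n in range(count):
        r = c * math.sqrt(n)
        theta = n * GOLDEN_ANGLE
        pts.append((r * math.cos(theta), r * math.sin(theta)))
    return pts
```

The sqrt in the radius keeps the point density roughly constant across the disc, which is what makes the pattern look evenly packed.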
  20. Well ... but for that crown, I think you need some liquid on the ground/leaf too, at least a bit. Then the drop has a "cup"-shaped collider made of the repelled liquid, and that makes the crown. The crown is also made of that previously static liquid being shot up ... imho. But I understand ... then some other guys will have to help; I am not that good. EDIT: both of your reference images show a drop fallen into liquid; do you have another reference without the liquid on the ground?
  21. https://vimeo.com/209763376 See time 29:15 in that tutorial. There is also a crown-splash scene file available for download; the link is in the description.
  22. As posted here: https://www.sidefx.com/forum/topic/48739/?page=1#post-220104
     Quote jlait: "In the expression language, there are two named functions to get detail attributes, `detail` and `details`. The first returns a float and takes an index into the attribute (so you can get components of a vector). The second returns a string and has no index."
     Copy-paste this into your Geometry File parameter and change myname to fit your attribute name:

```c
details(0, "myname")
```

     Middle-click on the Geometry File parameter to see how it evaluates.
  23. Hi, I may try. Can you attach your concept?
  24. Maybe there is something here for you? http://www.sidefx.com/docs/houdini/hom/locations
     "Before Houdini saves the scene file, it will run HOUDINIPATH/scripts/beforescenesave.py if it exists. After Houdini attempts to save the scene file, it will run HOUDINIPATH/scripts/afterscenesave.py if it exists."
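As a sketch of what such a hook could do (my own example, not from the docs): a beforescenesave.py that backs up the current .hip file before it is overwritten. The hou module only exists inside a Houdini session, so it is imported defensively, and the name-building helper backup_name stays testable on its own:

```python
# beforescenesave.py -- hypothetical example; place it in $HOUDINI_PATH/scripts/
import os
import shutil
import time

def backup_name(hip_path, stamp):
    # "/job/shot.hip" + "20170726-1200" -> "/job/shot_20170726-1200.hip"
    base, ext = os.path.splitext(hip_path)
    return "%s_%s%s" % (base, stamp, ext)

try:
    import hou  # only importable inside Houdini
except ImportError:
    hou = None  # running outside Houdini (e.g. when testing backup_name)

if hou is not None:
    path = hou.hipFile.path()
    if os.path.isfile(path):
        # copy the last saved version aside before Houdini overwrites it
        shutil.copy2(path, backup_name(path, time.strftime("%Y%m%d-%H%M%S")))
```

hou.hipFile.path() is the standard HOM call for the current scene path; everything else here is just stdlib file handling.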
  25. Please, could anybody give me a hint to solve the issue from my previous post? Quote: "When I unset the Display or Render flag (in SOPs), it doesn't jump back to its last position." Should I use hou.getenv() and hou.putenv()? Btw: I have put a list of my hotkeys here. It is really convenient to browse nodes with PgUp/PgDn and hit Insert to view them, or Ctrl-Del to un/bypass multiple nodes. http://lab.ikoon.cz/index.php/2017/07/26/shelf-tools-and-hotkeys/